The following procedure shows the commands needed to install the Ceph storage cluster manually.
Generate the key secrets for the Ceph services you intend to run. You can use the following command to generate them:
python3 -c "import os, struct, time, base64 ; \
key = os.urandom(16) ; \
header = struct.pack('<hiih', 1, int(time.time()), 0, len(key)) ; \
print(base64.b64encode(header + key).decode())"
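The resulting secret is a base64 string consisting of a 12-byte header packed as '<hiih' (key type, creation time, nanoseconds, key length) followed by the 16 random key bytes. As a sanity check, a freshly generated secret can be decoded and its header unpacked; a minimal sketch, assuming python3 is available:

```shell
# Generate a secret, then decode it and unpack the 12-byte header
# ('<hiih': key type, creation time, nanoseconds, key length).
secret=$(python3 -c "import os, struct, time, base64; \
header = struct.pack('<hiih', 1, int(time.time()), 0, 16); \
print(base64.b64encode(header + os.urandom(16)).decode())")
python3 - "$secret" <<'EOF'
import base64, struct, sys
raw = base64.b64decode(sys.argv[1])
ktype, ctime, ns, klen = struct.unpack('<hiih', raw[:12])
assert ktype == 1 and klen == 16 and len(raw) == 12 + klen
print("secret OK, key length", klen)
EOF
```

The assertion fails if the blob was truncated or the header fields do not match what the services expect.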
Add the keys to the related keyrings: first for client.admin, then for the monitors, and then for the other related services, such as the OSD, Object Gateway, or MDS:
ceph-authtool -n client.admin \
 --create-keyring /etc/ceph/ceph.client.admin.keyring \
 --cap mds 'allow *' --cap mon 'allow *' --cap osd 'allow *'
ceph-authtool -n mon. \
 --create-keyring /var/lib/ceph/bootstrap-mon/ceph-osceph-03.keyring \
 --set-uid=0 --cap mon 'allow *'
ceph-authtool -n client.bootstrap-osd \
 --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
 --cap mon 'allow profile bootstrap-osd'
ceph-authtool -n client.bootstrap-rgw \
 --create-keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring \
 --cap mon 'allow profile bootstrap-rgw'
ceph-authtool -n client.bootstrap-mds \
 --create-keyring /var/lib/ceph/bootstrap-mds/ceph.keyring \
 --cap mon 'allow profile bootstrap-mds'
Create a monmap, a database of all the monitors in a cluster:
monmaptool --create --fsid eaac9695-4265-4ca8-ac2a-f3a479c559b1 \
 /tmp/tmpuuhxm3/monmap
monmaptool --add osceph-02 192.168.43.60 /tmp/tmpuuhxm3/monmap
monmaptool --add osceph-03 192.168.43.96 /tmp/tmpuuhxm3/monmap
monmaptool --add osceph-04 192.168.43.80 /tmp/tmpuuhxm3/monmap
Create a new keyring and import the keys from the admin and monitor keyrings into it. Then use them to start the monitors:
ceph-authtool --create-keyring /tmp/tmpuuhxm3/keyring \
 --import-keyring /var/lib/ceph/bootstrap-mon/ceph-osceph-03.keyring
ceph-authtool /tmp/tmpuuhxm3/keyring \
 --import-keyring /etc/ceph/ceph.client.admin.keyring
sudo -u ceph ceph-mon --mkfs -i osceph-03 \
 --monmap /tmp/tmpuuhxm3/monmap --keyring /tmp/tmpuuhxm3/keyring
systemctl restart ceph-mon@osceph-03
Check the state of the monitors in systemd:
systemctl show --property ActiveState ceph-mon@osceph-03
Check that Ceph is running and reports the monitor status:
ceph --cluster=ceph \
 --admin-daemon /var/run/ceph/ceph-mon.osceph-03.asok mon_status
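The mon_status command replies with JSON. When scripting health checks, individual fields such as the monitor's state can be extracted with a short python3 filter; the document below is a trimmed, hypothetical stand-in for real mon_status output:

```shell
# Trimmed, hypothetical mon_status sample; in practice, pipe the
# ceph --admin-daemon ... mon_status command from above into the filter.
sample='{"name": "osceph-03", "rank": 1, "state": "peon", "quorum": [0, 1, 2]}'
echo "$sample" | python3 -c 'import json, sys
status = json.load(sys.stdin)
print("%s is a %s, quorum size %d" % (status["name"], status["state"], len(status["quorum"])))'
```

A healthy monitor reports a state of "leader" or "peon" and a quorum covering the expected monitor ranks.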
Check the status of specific services using the existing keys:
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin -f json-pretty status
[...]
ceph --connect-timeout 5 \
 --keyring /var/lib/ceph/bootstrap-mon/ceph-osceph-03.keyring \
 --name mon. -f json-pretty status
Import the keyrings of the existing Ceph services and check the status:
ceph auth import -i /var/lib/ceph/bootstrap-osd/ceph.keyring
ceph auth import -i /var/lib/ceph/bootstrap-rgw/ceph.keyring
ceph auth import -i /var/lib/ceph/bootstrap-mds/ceph.keyring
ceph --cluster=ceph \
 --admin-daemon /var/run/ceph/ceph-mon.osceph-03.asok mon_status
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin -f json-pretty status
Prepare the disks/partitions for the OSDs, using the XFS file system:
ceph-disk -v prepare --fs-type xfs --data-dev --cluster ceph \
 --cluster-uuid eaac9695-4265-4ca8-ac2a-f3a479c559b1 /dev/vdb
ceph-disk -v prepare --fs-type xfs --data-dev --cluster ceph \
 --cluster-uuid eaac9695-4265-4ca8-ac2a-f3a479c559b1 /dev/vdc
[...]
Activate the partitions:
ceph-disk -v activate --mark-init systemd --mount /dev/vdb1
ceph-disk -v activate --mark-init systemd --mount /dev/vdc1
On SUSE Enterprise Storage version 2.1 and earlier, create the default pools:
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin osd pool create .users.swift 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin osd pool create .intent-log 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin osd pool create .rgw.gc 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin osd pool create .users.uid 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin osd pool create .rgw.control 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin osd pool create .users 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin osd pool create .usage 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin osd pool create .log 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin osd pool create .rgw 16 16
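The nine pool-creation commands above differ only in the pool name, so they can be driven by a loop. A sketch, written as a dry run (the echo prints each command instead of executing it; remove it once the list looks right):

```shell
# Default pool names required by older SUSE Enterprise Storage releases.
pools=".users.swift .intent-log .rgw.gc .users.uid .rgw.control .users .usage .log .rgw"
# Dry run: echo prints each command; drop it to actually create the pools.
for pool in $pools; do
  echo ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
    --name client.admin osd pool create "$pool" 16 16
done
```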
Create the Object Gateway instance key from the bootstrap key:
ceph --connect-timeout 5 --cluster ceph --name client.bootstrap-rgw \
 --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create \
 client.rgw.0dc1e13033d2467eace46270f0048b39 osd 'allow rwx' mon 'allow rw' \
 -o /var/lib/ceph/radosgw/ceph-rgw.rgw_name/keyring
Enable and start the Object Gateway:
systemctl enable ceph-radosgw@rgw.rgw_name
systemctl start ceph-radosgw@rgw.rgw_name
Optionally, create the MDS instance key from the bootstrap key, then enable and start it:
ceph --connect-timeout 5 --cluster ceph --name client.bootstrap-mds \
 --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create \
 mds.mds.rgw_name osd 'allow rwx' mds 'allow' mon 'allow profile mds' \
 -o /var/lib/ceph/mds/ceph-mds.rgw_name/keyring
systemctl enable ceph-mds@mds.rgw_name
systemctl start ceph-mds@mds.rgw_name