The following procedure shows the commands needed to install the Ceph storage cluster manually.
Generate the secret keys for the Ceph services you intend to run. You can use the following command to generate them:
python -c "import os ; import struct ; import time ; import base64 ; \
 key = os.urandom(16) ; \
 header = struct.pack('<hiih', 1, int(time.time()), 0, len(key)) ; \
 print(base64.b64encode(header + key))"
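The one-liner packs a 12-byte binary header in front of 16 random key bytes and base64-encodes the result. A Python 3 sketch of the same layout, decoding the secret again to show the header fields (the field names below are my reading of the format, not taken from this document):

```python
import base64
import os
import struct
import time

# Same layout as the one-liner: header + 16 random key bytes, base64-encoded.
key = os.urandom(16)
header = struct.pack('<hiih', 1, int(time.time()), 0, len(key))
secret = base64.b64encode(header + key).decode('ascii')

# Decode it again to illustrate the header layout (assumed meaning):
# type (int16), created seconds (int32), nanoseconds (int32), key length (int16).
raw = base64.b64decode(secret)
version, created, nsec, length = struct.unpack('<hiih', raw[:12])
print(secret)
print(version, length)  # 1 16
```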
Add the keys to the related keyrings: first for client.admin, then for the monitors and the other related services, such as OSD, Object Gateway, or MDS:
ceph-authtool -n client.admin \
 --create-keyring /etc/ceph/ceph.client.admin.keyring \
 --cap mds 'allow *' --cap mon 'allow *' --cap osd 'allow *'
ceph-authtool -n mon. \
 --create-keyring /var/lib/ceph/bootstrap-mon/ceph-osceph-03.keyring \
 --set-uid=0 --cap mon 'allow *'
ceph-authtool -n client.bootstrap-osd \
 --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
 --cap mon 'allow profile bootstrap-osd'
ceph-authtool -n client.bootstrap-rgw \
 --create-keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring \
 --cap mon 'allow profile bootstrap-rgw'
ceph-authtool -n client.bootstrap-mds \
 --create-keyring /var/lib/ceph/bootstrap-mds/ceph.keyring \
 --cap mon 'allow profile bootstrap-mds'
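ceph-authtool writes INI-style keyring files, one section per entity with its secret and capabilities. A minimal Python sketch of what an entry such as the client.admin one ends up looking like (the exact layout is an assumption based on the standard keyring file format, and the secret below is a made-up example):

```python
def keyring_entry(name, secret, caps):
    """Render one INI-style keyring section, roughly as ceph-authtool does.
    `caps` maps a daemon type (mon/osd/mds) to its capability string."""
    lines = ['[%s]' % name, '\tkey = %s' % secret]
    for daemon, cap in sorted(caps.items()):
        lines.append('\tcaps %s = "%s"' % (daemon, cap))
    return '\n'.join(lines) + '\n'

entry = keyring_entry(
    'client.admin',
    'AQBSdSVVINFmFxAA0yQkEVTdYffCSAnUAzxWyg==',  # example secret, not real
    {'mds': 'allow *', 'mon': 'allow *', 'osd': 'allow *'},
)
print(entry)
```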
Create a monmap, a database of all the monitors in the cluster:
monmaptool --create --fsid eaac9695-4265-4ca8-ac2a-f3a479c559b1 \
 /tmp/tmpuuhxm3/monmap
monmaptool --add osceph-02 192.168.43.60 /tmp/tmpuuhxm3/monmap
monmaptool --add osceph-03 192.168.43.96 /tmp/tmpuuhxm3/monmap
monmaptool --add osceph-04 192.168.43.80 /tmp/tmpuuhxm3/monmap
Create a new keyring and import the keys from the admin and monitor keyrings into it. Then use them to start the monitors:
ceph-authtool --create-keyring /tmp/tmpuuhxm3/keyring \
 --import-keyring /var/lib/ceph/bootstrap-mon/ceph-osceph-03.keyring
ceph-authtool /tmp/tmpuuhxm3/keyring \
 --import-keyring /etc/ceph/ceph.client.admin.keyring
sudo -u ceph ceph-mon --mkfs -i osceph-03 \
 --monmap /tmp/tmpuuhxm3/monmap --keyring /tmp/tmpuuhxm3/keyring
systemctl restart ceph-mon@osceph-03
Verify the state of the monitors in systemd:
systemctl show --property ActiveState ceph-mon@osceph-03
Verify that Ceph is running and reports the monitor status:
ceph --cluster=ceph \
 --admin-daemon /var/run/ceph/ceph-mon.osceph-03.asok mon_status
Verify the status of specific services using the existing keys:
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin -f json-pretty status
[...]
ceph --connect-timeout 5 \
 --keyring /var/lib/ceph/bootstrap-mon/ceph-osceph-03.keyring \
 --name mon. -f json-pretty status
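The -f json-pretty flag makes the status output machine-readable, which is convenient for scripted health checks. A sketch of reading overall health from such output; the sample document below is abbreviated and hypothetical, though health and quorum_names are standard fields of the ceph status JSON:

```python
import json

# Abbreviated, hypothetical `ceph -f json-pretty status` output.
sample = '''
{
  "health": {"status": "HEALTH_OK"},
  "quorum_names": ["osceph-02", "osceph-03", "osceph-04"]
}
'''
status = json.loads(sample)
healthy = status['health']['status'] == 'HEALTH_OK'
print(healthy, status['quorum_names'])
```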
Import the keyrings from the existing Ceph services and verify the status:
ceph auth import -i /var/lib/ceph/bootstrap-osd/ceph.keyring
ceph auth import -i /var/lib/ceph/bootstrap-rgw/ceph.keyring
ceph auth import -i /var/lib/ceph/bootstrap-mds/ceph.keyring
ceph --cluster=ceph \
 --admin-daemon /var/run/ceph/ceph-mon.osceph-03.asok mon_status
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin -f json-pretty status
Prepare the disks/partitions for the OSDs, using the XFS file system:
ceph-disk -v prepare --fs-type xfs --data-dev --cluster ceph \
 --cluster-uuid eaac9695-4265-4ca8-ac2a-f3a479c559b1 /dev/vdb
ceph-disk -v prepare --fs-type xfs --data-dev --cluster ceph \
 --cluster-uuid eaac9695-4265-4ca8-ac2a-f3a479c559b1 /dev/vdc
[...]
Activate the partitions:
ceph-disk -v activate --mark-init systemd --mount /dev/vdb1
ceph-disk -v activate --mark-init systemd --mount /dev/vdc1
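The prepare/activate pair above is repeated once per OSD device. A small sketch that generates both command lists from a single device table, so extra disks only need one entry (device list taken from the two examples shown; the rest of the disks are site-specific):

```python
CLUSTER_UUID = 'eaac9695-4265-4ca8-ac2a-f3a479c559b1'
DEVICES = ['/dev/vdb', '/dev/vdc']  # extend with the remaining OSD disks

# One prepare command per raw device.
prepare = ['ceph-disk -v prepare --fs-type xfs --data-dev --cluster ceph '
           '--cluster-uuid %s %s' % (CLUSTER_UUID, dev) for dev in DEVICES]
# Each prepared device then has its first partition activated.
activate = ['ceph-disk -v activate --mark-init systemd --mount %s1' % dev
            for dev in DEVICES]
print(prepare[0])
print(activate[0])
```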
For SUSE Enterprise Storage version 2.1 and earlier, create the default pools:
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin osd pool create .users.swift 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin osd pool create .intent-log 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin osd pool create .rgw.gc 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin osd pool create .users.uid 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin osd pool create .rgw.control 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin osd pool create .users 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin osd pool create .usage 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin osd pool create .log 16 16
ceph --connect-timeout 5 --keyring /etc/ceph/ceph.client.admin.keyring \
 --name client.admin osd pool create .rgw 16 16
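The nine commands above differ only in the pool name; a small sketch that generates them from a list, with the pool names and PG counts taken from the listing above:

```python
# Common prefix shared by every pool-creation command above.
ADMIN_ARGS = ('ceph --connect-timeout 5 '
              '--keyring /etc/ceph/ceph.client.admin.keyring '
              '--name client.admin')
POOLS = ['.users.swift', '.intent-log', '.rgw.gc', '.users.uid',
         '.rgw.control', '.users', '.usage', '.log', '.rgw']

# 16 placement groups / 16 placement groups for placement, per the listing.
commands = ['%s osd pool create %s 16 16' % (ADMIN_ARGS, pool)
            for pool in POOLS]
for cmd in commands:
    print(cmd)
```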
Create the Object Gateway instance key from the bootstrap key:
ceph --connect-timeout 5 --cluster ceph --name client.bootstrap-rgw \
 --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create \
 client.rgw.0dc1e13033d2467eace46270f0048b39 osd 'allow rwx' mon 'allow rw' \
 -o /var/lib/ceph/radosgw/ceph-rgw.rgw_name/keyring
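The instance identifier in client.rgw.0dc1e13033d2467eace46270f0048b39 looks like a UUID in 32-character hex form; a sketch of generating such a unique instance name (this naming scheme is an assumption based on the example above, not something the document specifies):

```python
import uuid

# Hypothetical: build a unique Object Gateway entity name from a random UUID.
instance = 'client.rgw.%s' % uuid.uuid4().hex
print(instance)
```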
Enable and start the Object Gateway:
systemctl enable ceph-radosgw@rgw.rgw_name
systemctl start ceph-radosgw@rgw.rgw_name
Optionally, create the MDS instance key from the bootstrap key, then enable and start it:
ceph --connect-timeout 5 --cluster ceph --name client.bootstrap-mds \
 --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create \
 mds.mds.rgw_name osd 'allow rwx' mds 'allow' \
 mon 'allow profile mds' \
 -o /var/lib/ceph/mds/ceph-mds.rgw_name/keyring
systemctl enable ceph-mds@mds.rgw_name
systemctl start ceph-mds@mds.rgw_name