Applies to SUSE Enterprise Storage 6

9 Ceph Object Gateway

Ceph Object Gateway is an object storage interface built on top of librados to provide applications with a RESTful gateway to Ceph clusters. It supports two interfaces:

  • S3-compatible: Provides object storage functionality with an interface that is compatible with a large subset of the Amazon S3 RESTful API.

  • Swift-compatible: Provides object storage functionality with an interface that is compatible with a large subset of the OpenStack Swift API.

The Object Gateway daemon uses the 'Beast' HTTP front end by default. It uses the Boost.Beast library for HTTP parsing and the Boost.Asio library for asynchronous network I/O operations.

Because Object Gateway provides interfaces compatible with OpenStack Swift and Amazon S3, the Object Gateway has its own user management. Object Gateway can store data in the same cluster that is used to store data from CephFS clients or RADOS Block Device clients. The S3 and Swift APIs share a common name space, so you may write data with one API and retrieve it with the other.

Important: Object Gateway Deployed by DeepSea

Object Gateway is installed as a DeepSea role, therefore you do not need to install it manually.

To install the Object Gateway during the cluster deployment, see Section 5.3, “Cluster Deployment”.

To add a new node with Object Gateway to the cluster, see Section 2.2, “Adding New Roles to Nodes”.

9.1 Object Gateway Manual Installation

  1. Install Object Gateway on a node where port 80 is not already in use. The following command installs all required components:

    cephadm@ogw > sudo zypper ref && sudo zypper in ceph-radosgw
  2. If the Apache server from the previous Object Gateway instance is running, stop it and disable the relevant service:

    cephadm@ogw > sudo systemctl stop apache2.service
    cephadm@ogw > sudo systemctl disable apache2.service
  3. Edit /etc/ceph/ceph.conf and add the following lines:

    [client.rgw.gateway_host]
     rgw frontends = "beast port=80"
    Tip

    If you want to configure Object Gateway/Beast for use with SSL encryption, modify the line accordingly:

    rgw frontends = beast ssl_port=7480 ssl_certificate=PATH_TO_CERTIFICATE.PEM
  4. Restart the Object Gateway service.

    cephadm@ogw > sudo systemctl restart ceph-radosgw@rgw.gateway_host
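After the restart, you can confirm that the daemon came up cleanly by checking its service status and recent log messages (the instance name rgw.gateway_host matches the configuration example above):

    cephadm@ogw > sudo systemctl status ceph-radosgw@rgw.gateway_host
    cephadm@ogw > sudo journalctl -u ceph-radosgw@rgw.gateway_host --since "5 minutes ago"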

9.1.1 Object Gateway Configuration

Several steps are required to configure an Object Gateway.

9.1.1.1 Basic Configuration

Configuring a Ceph Object Gateway requires a running Ceph Storage Cluster. The Ceph Object Gateway is a client of the Ceph Storage Cluster. As a Ceph Storage Cluster client, it requires:

  • A host name for the gateway instance, for example gateway.

  • A storage cluster user name with appropriate permissions and a keyring.

  • Pools to store its data.

  • A data directory for the gateway instance.

  • An instance entry in the Ceph configuration file.

Each instance must have a user name and key to communicate with a Ceph storage cluster. In the following steps, we use a monitor node to create a bootstrap keyring, then create the Object Gateway instance user keyring based on the bootstrap one. Then, we create a client user name and key. Next, we add the key to the Ceph Storage Cluster. Finally, we distribute the keyring to the node containing the gateway instance.

  1. Create a keyring for the gateway:

    cephadm@adm > sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.rgw.keyring
    cephadm@adm > sudo chmod +r /etc/ceph/ceph.client.rgw.keyring
  2. Generate a Ceph Object Gateway user name and key for each instance. As an example, we will use the name gateway after client.rgw:

    cephadm@adm > sudo ceph-authtool /etc/ceph/ceph.client.rgw.keyring \
      -n client.rgw.gateway --gen-key
  3. Add capabilities to the key:

    cephadm@adm > sudo ceph-authtool -n client.rgw.gateway --cap osd 'allow rwx' \
      --cap mon 'allow rwx' /etc/ceph/ceph.client.rgw.keyring
  4. Once you have created a keyring and key to enable the Ceph Object Gateway with access to the Ceph Storage Cluster, add the key to your Ceph Storage Cluster. For example:

    cephadm@adm > ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.rgw.gateway \
      -i /etc/ceph/ceph.client.rgw.keyring
  5. Distribute the keyring to the node with the gateway instance:

    cephadm@adm > scp /etc/ceph/ceph.client.rgw.keyring  ceph@HOST_NAME:/home/ceph
    cephadm@adm > ssh ceph@HOST_NAME
    cephadm@ogw > sudo mv ceph.client.rgw.keyring /etc/ceph/ceph.client.rgw.keyring
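To verify that the key was imported correctly, you can query it back on the admin node; the output should list the key together with the osd 'allow rwx' and mon 'allow rwx' capabilities assigned above:

    cephadm@adm > ceph auth get client.rgw.gateway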
Tip: Use Bootstrap Keyring

An alternative way is to create the Object Gateway bootstrap keyring, and then create the Object Gateway keyring from it:

  1. Create an Object Gateway bootstrap keyring on one of the monitor nodes:

    cephadm@mon > ceph \
     auth get-or-create client.bootstrap-rgw mon 'allow profile bootstrap-rgw' \
     --connect-timeout=25 \
     --cluster=ceph \
     --name mon. \
     --keyring=/var/lib/ceph/mon/ceph-NODE_HOST/keyring \
     -o /var/lib/ceph/bootstrap-rgw/keyring
  2. Create the /var/lib/ceph/radosgw/ceph-RGW_NAME directory for storing the Object Gateway keyring:

    cephadm@mon > mkdir \
    /var/lib/ceph/radosgw/ceph-RGW_NAME
  3. Create an Object Gateway keyring from the newly created bootstrap keyring:

    cephadm@mon > ceph \
     auth get-or-create client.rgw.RGW_NAME osd 'allow rwx' mon 'allow rw' \
     --connect-timeout=25 \
     --cluster=ceph \
     --name client.bootstrap-rgw \
     --keyring=/var/lib/ceph/bootstrap-rgw/keyring \
     -o /var/lib/ceph/radosgw/ceph-RGW_NAME/keyring
  4. Copy the Object Gateway keyring to the Object Gateway host:

    cephadm@mon > scp \
    /var/lib/ceph/radosgw/ceph-RGW_NAME/keyring \
    RGW_HOST:/var/lib/ceph/radosgw/ceph-RGW_NAME/keyring

9.1.1.2 Create Pools (Optional)

Ceph Object Gateways require Ceph Storage Cluster pools to store specific gateway data. If the user you created has proper permissions, the gateway will create the pools automatically. However, ensure that you have set an appropriate default number of placement groups per pool in the Ceph configuration file.

The pool names follow the ZONE_NAME.POOL_NAME syntax. When configuring a gateway with the default region and zone, the default zone name is 'default' as in our example:

.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
default.rgw.buckets.index
default.rgw.buckets.data

To create the pools manually, see Section 11.2.2, “Create a Pool”.

Important: Object Gateway and Erasure-Coded Pools

Only the default.rgw.buckets.data pool can be erasure coded. All other pools need to be replicated, otherwise the gateway is not accessible.
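As a sketch, the pools listed above could be created manually as follows. The placement group count of 32 is only an example and must be chosen to suit your cluster; the erasure-coded data pool assumes that your default erasure code profile is acceptable:

    cephadm@adm > ceph osd pool create .rgw.root 32 32 replicated
    cephadm@adm > ceph osd pool create default.rgw.control 32 32 replicated
    cephadm@adm > ceph osd pool create default.rgw.meta 32 32 replicated
    cephadm@adm > ceph osd pool create default.rgw.log 32 32 replicated
    cephadm@adm > ceph osd pool create default.rgw.buckets.index 32 32 replicated
    cephadm@adm > ceph osd pool create default.rgw.buckets.data 32 32 erasure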

9.1.1.3 Adding Gateway Configuration to Ceph

Add the Ceph Object Gateway configuration to the Ceph Configuration file. The Ceph Object Gateway configuration requires you to identify the Ceph Object Gateway instance. Then, specify the host name where you installed the Ceph Object Gateway daemon, a keyring (for use with cephx), and optionally a log file. For example:

[client.rgw.INSTANCE_NAME]
host = HOST_NAME
keyring = /etc/ceph/ceph.client.rgw.keyring
Tip: Object Gateway Log File

To override the default Object Gateway log file, include the following:

log file = /var/log/radosgw/client.rgw.INSTANCE_NAME.log

The [client.rgw.INSTANCE_NAME] section heading identifies this portion of the Ceph configuration file as configuring a Ceph Storage Cluster client whose type is Ceph Object Gateway (radosgw). The instance name follows the client.rgw. prefix. For example:

[client.rgw.gateway]
host = ceph-gateway
keyring = /etc/ceph/ceph.client.rgw.keyring
Note

The HOST_NAME must be your machine host name, excluding the domain name.

Then turn off the rgw print continue option. If it is set to true, you may encounter problems with PUT operations:

rgw print continue = false

To use a Ceph Object Gateway with subdomain S3 calls (for example http://bucketname.hostname), you must add the Ceph Object Gateway DNS name under the [client.rgw.gateway] section of the Ceph configuration file:

[client.rgw.gateway]
...
rgw dns name = HOST_NAME

You should also consider installing a DNS server such as Dnsmasq on your client machine(s) when using the http://BUCKET_NAME.HOST_NAME syntax. The dnsmasq.conf file should include the following settings:

address=/HOST_NAME/HOST_IP_ADDRESS
listen-address=CLIENT_LOOPBACK_IP

Then, add the CLIENT_LOOPBACK_IP IP address as the first DNS server on the client machine(s).
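To check that the gateway itself accepts virtual-hosted (subdomain) requests before the DNS setup is in place, you can supply the Host header manually; BUCKET_NAME, HOST_NAME, and HOST_IP_ADDRESS are placeholders for your environment:

    cephadm@ogw > curl -H "Host: BUCKET_NAME.HOST_NAME" http://HOST_IP_ADDRESS/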

9.1.1.4 Create Data Directory

Deployment scripts may not create the default Ceph Object Gateway data directory. Create data directories for each instance of a radosgw daemon if not already done. The host variables in the Ceph configuration file determine which host runs each instance of a radosgw daemon. The typical form specifies the radosgw daemon, the cluster name, and the daemon ID.

root # mkdir -p /var/lib/ceph/radosgw/CLUSTER_ID

Using the example ceph.conf settings above, you would execute the following:

root # mkdir -p /var/lib/ceph/radosgw/ceph-rgw.gateway
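The radosgw daemon typically runs as the ceph user, so the data directory must be accessible to that user; adjust the owner if your deployment runs the daemon under a different account:

root # chown ceph:ceph /var/lib/ceph/radosgw/CLUSTER_ID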

9.1.1.5 Restart Services and Start the Gateway

To ensure that all components have reloaded their configurations, we recommend restarting your Ceph Storage Cluster service. Then, start up the radosgw service. For more information, see Chapter 4, Introduction and Section 15.3, “Operating the Object Gateway Service”.

When the service is up and running, you can make an anonymous GET request to see if the gateway returns a response. A simple HTTP request to the domain name should return the following:

<ListAllMyBucketsResult>
      <Owner>
              <ID>anonymous</ID>
              <DisplayName/>
      </Owner>
      <Buckets/>
</ListAllMyBucketsResult>
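For example, assuming the gateway host is named gateway_host and listens on port 80, the anonymous request can be sent with curl:

cephadm@ogw > curl http://gateway_host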