24 Export Ceph data via Samba

This chapter describes how to export data stored in a Ceph cluster via a Samba/CIFS share so that you can easily access it from Windows* client machines. It also explains how to configure a Ceph Samba gateway to join Active Directory in a Windows* domain to authenticate and authorize users.

Note
Note: Samba gateway performance

Because of increased protocol overhead and additional latency caused by extra network hops between the client and the storage, accessing CephFS via a Samba Gateway may significantly reduce application performance when compared to native Ceph clients.

24.1 Export CephFS via Samba share

Warning
Warning: Cross protocol access

Native CephFS and NFS clients are not restricted by file locks obtained via Samba, and vice versa. Applications that rely on cross protocol file locking may experience data corruption if CephFS backed Samba share paths are accessed via other means.

24.1.1 Installing Samba packages

To configure and export a Samba share, the samba-ceph and samba-winbind packages need to be installed. If they are not already installed, install them:

cephuser@smb > sudo zypper in samba-ceph samba-winbind

24.1.2 Single gateway example

In preparation for exporting a Samba share, choose an appropriate node to act as a Samba Gateway. The node needs to have access to the Ceph client network, as well as sufficient CPU, memory, and networking resources.

Failover functionality can be provided with CTDB and the SUSE Linux Enterprise High Availability Extension. Refer to Section 24.1.3, “Configuring high availability” for more information on HA setup.

  1. Make sure that a working CephFS already exists in your cluster.

  2. Create a Samba Gateway specific keyring on the Ceph admin node and copy it to the Samba Gateway node:

    cephuser@adm > ceph auth get-or-create client.samba.gw mon 'allow r' \
     osd 'allow *' mds 'allow *' -o ceph.client.samba.gw.keyring
    cephuser@adm > scp ceph.client.samba.gw.keyring SAMBA_NODE:/etc/ceph/

    Replace SAMBA_NODE with the name of the Samba gateway node.

  3. The following steps are executed on the Samba Gateway node. Install Samba together with the Ceph integration package:

    cephuser@smb > sudo zypper in samba samba-ceph
  4. Replace the default contents of the /etc/samba/smb.conf file with the following:

    [global]
      netbios name = SAMBA-GW
      clustering = no
      idmap config * : backend = tdb2
      passdb backend = tdbsam
      # disable print server
      load printers = no
      smbd: backgroundqueue = no
    
    [SHARE_NAME]
      path = CEPHFS_MOUNT
      read only = no
      oplocks = no
      kernel share modes = no

    With a kernel CephFS share configuration, the CEPHFS_MOUNT path above must be mounted before Samba is started. See Section 23.3, “Mounting CephFS in /etc/fstab”.

    The above share configuration uses the Linux kernel CephFS client, which is recommended for performance reasons. As an alternative, the Samba vfs_ceph module can also be used to communicate with the Ceph cluster. Its configuration is shown below for legacy purposes and is not recommended for new Samba deployments:

    [SHARE_NAME]
      path = /
      vfs objects = ceph
      ceph: config_file = /etc/ceph/ceph.conf
      ceph: user_id = samba.gw
      read only = no
      oplocks = no
      kernel share modes = no
    Tip
    Tip: Oplocks and share modes

    oplocks (also known as SMB2+ leases) allow for improved performance through aggressive client caching, but are currently unsafe when Samba is deployed together with other CephFS clients, such as kernel mount.ceph, FUSE, or NFS Ganesha.

    If all CephFS file system path access is exclusively handled by Samba, then the oplocks parameter can be safely enabled.

    Currently, the kernel share modes parameter needs to be disabled in a share running with the CephFS vfs module for file serving to work properly.

    Important
    Important: Permitting access

    Samba maps SMB users and groups to local accounts. Local users can be assigned a password for Samba share access via:

    # smbpasswd -a USERNAME

    For successful I/O, the share path's access control list (ACL) needs to permit access to the user connected via Samba. You can modify the ACL by temporarily mounting via the CephFS kernel client and using the chmod, chown, or setfacl utilities against the share path. For example, to permit access for all users, run:

    # chmod 777 MOUNTED_SHARE_PATH
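
    As a less permissive alternative to the chmod 777 example above, the following sketch grants access to a single Samba user. The mount point /mnt/cephfs, MON_HOST, and SHARE_PATH are placeholders, and the mount.ceph helper is assumed to find the samba.gw keyring under /etc/ceph/:

    # mount -t ceph MON_HOST:/ /mnt/cephfs -o name=samba.gw
    # setfacl -m u:USERNAME:rwx /mnt/cephfs/SHARE_PATH
    # umount /mnt/cephfs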

24.1.2.1 Starting Samba services

Start or restart stand-alone Samba services using the following commands:

# systemctl restart smb.service
# systemctl restart nmb.service
# systemctl restart winbind.service

To ensure that Samba services start on boot, enable them via:

# systemctl enable smb.service
# systemctl enable nmb.service
# systemctl enable winbind.service
Tip
Tip: Optional nmb and winbind services

If you do not require network share browsing, you do not need to enable and start the nmb service.

The winbind service is only needed when configured as an Active Directory domain member. See Section 24.2, “Joining Samba Gateway and Active Directory”.
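
To verify that the share is reachable, you can, for example, list its contents with smbclient, authenticating as a user previously enabled with smbpasswd (SAMBA-GW and SHARE_NAME follow the example configuration above):

# smbclient -U USERNAME //SAMBA-GW/SHARE_NAME -c 'ls'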

24.1.3 Configuring high availability

Important
Important: Transparent failover not supported

Although a multi-node Samba + CTDB deployment is more highly available compared to the single-node deployment (see Section 24.1.2, “Single gateway example”), client-side transparent failover is not supported. Applications will likely experience a short outage on Samba Gateway node failure.

This section provides an example of how to set up a two-node high availability configuration of Samba servers. The setup requires the SUSE Linux Enterprise High Availability Extension. The two nodes are called earth (192.168.1.1) and mars (192.168.1.2).

For details about SUSE Linux Enterprise High Availability Extension, see https://documentation.suse.com/sle-ha/15-SP2/.

Additionally, two floating virtual IP addresses allow clients to connect to the service no matter which physical node it is running on. 192.168.1.10 is used for cluster administration with Hawk2 and 192.168.2.1 is used exclusively for the CIFS exports. This makes it easier to apply security restrictions later.

The following procedure describes the example installation. More details can be found at https://documentation.suse.com/sle-ha/15-SP2/html/SLE-HA-all/art-sleha-install-quick.html.

  1. Create a Samba Gateway specific keyring on the Admin Node and copy it to both nodes:

    cephuser@adm > ceph auth get-or-create client.samba.gw mon 'allow r' \
        osd 'allow *' mds 'allow *' -o ceph.client.samba.gw.keyring
    cephuser@adm > scp ceph.client.samba.gw.keyring earth:/etc/ceph/
    cephuser@adm > scp ceph.client.samba.gw.keyring mars:/etc/ceph/
  2. SLE-HA setup requires a fencing device to avoid a split-brain situation when active cluster nodes become unsynchronized. For this purpose, you can use a Ceph RBD image with STONITH Block Device (SBD). Refer to https://documentation.suse.com/sle-ha/15-SP2/html/SLE-HA-all/cha-ha-storage-protect.html#sec-ha-storage-protect-fencing-setup for more details.

    If it does not yet exist, create an RBD pool called rbd (see Section 18.1, “Creating a pool”) and associate it with the rbd application (see Section 18.5.1, “Associating pools with an application”). Then create a related RBD image called sbd01:

    cephuser@adm > ceph osd pool create rbd
    cephuser@adm > ceph osd pool application enable rbd rbd
    cephuser@adm > rbd -p rbd create sbd01 --size 64M --image-shared
  3. Prepare earth and mars to host the Samba service:

    1. Make sure the following packages are installed before you proceed: ctdb, tdb-tools, samba, and samba-ceph.

      # zypper in ctdb tdb-tools samba samba-ceph
    2. Make sure the Samba and CTDB services are stopped and disabled:

      # systemctl disable ctdb
      # systemctl disable smb
      # systemctl disable nmb
      # systemctl disable winbind
      # systemctl stop ctdb
      # systemctl stop smb
      # systemctl stop nmb
      # systemctl stop winbind
    3. Open port 4379 in your firewall on all nodes. This is needed for CTDB to communicate with the other cluster nodes.

  4. On earth, create the configuration files for Samba. They will later automatically synchronize to mars.

    1. Insert a list of private IP addresses of Samba Gateway nodes in the /etc/ctdb/nodes file. Find more details in the ctdb manual page (man 7 ctdb).

      192.168.1.1
      192.168.1.2
    2. Configure Samba. Add the following lines to the [global] section of /etc/samba/smb.conf. Use the host name of your choice in place of SAMBA-HA-GW (all nodes in the cluster will appear as one big node with this name). Add a share definition as well; SHARE_NAME below serves as an example:

      [global]
        netbios name = SAMBA-HA-GW
        clustering = yes
        idmap config * : backend = tdb2
        passdb backend = tdbsam
        ctdbd socket = /var/lib/ctdb/ctdb.socket
        # disable print server
        load printers = no
        smbd: backgroundqueue = no
      
      [SHARE_NAME]
        path = /
        vfs objects = ceph
        ceph: config_file = /etc/ceph/ceph.conf
        ceph: user_id = samba.gw
        read only = no
        oplocks = no
        kernel share modes = no

      Note that the /etc/ctdb/nodes and /etc/samba/smb.conf files need to match on all Samba Gateway nodes.

  5. Install and bootstrap the SUSE Linux Enterprise High Availability cluster.

    1. Register the SUSE Linux Enterprise High Availability Extension on earth and mars:

      root@earth # SUSEConnect -r ACTIVATION_CODE -e E_MAIL
      root@mars # SUSEConnect -r ACTIVATION_CODE -e E_MAIL
    2. Install ha-cluster-bootstrap on both nodes:

      root@earth # zypper in ha-cluster-bootstrap
      root@mars # zypper in ha-cluster-bootstrap
    3. Map the RBD image sbd01 on both Samba Gateways via rbdmap.service.

      Edit /etc/ceph/rbdmap and add an entry for the SBD image:

      rbd/sbd01 id=samba.gw,keyring=/etc/ceph/ceph.client.samba.gw.keyring

      Enable and start rbdmap.service:

      root@earth # systemctl enable rbdmap.service && systemctl start rbdmap.service
      root@mars # systemctl enable rbdmap.service && systemctl start rbdmap.service

      The /dev/rbd/rbd/sbd01 device should be available on both Samba Gateways.

    4. Initialize the cluster on earth and let mars join it.

      root@earth # ha-cluster-init
      root@mars # ha-cluster-join -c earth
      Important
      Important

      During the process of initialization and joining the cluster, you will be interactively asked whether to use SBD. Confirm with y and then specify /dev/rbd/rbd/sbd01 as a path to the storage device.

  6. Check the status of the cluster. You should see two nodes added in the cluster:

    root@earth # crm status
    2 nodes configured
    1 resource configured
    
    Online: [ earth mars ]
    
    Full list of resources:
    
     admin-ip       (ocf::heartbeat:IPaddr2):       Started earth
  7. Execute the following commands on earth to configure the CTDB resource:

    root@earth # crm configure
    crm(live)configure# primitive ctdb ocf:heartbeat:CTDB params \
        ctdb_manages_winbind="false" \
        ctdb_manages_samba="false" \
        ctdb_recovery_lock="!/usr/lib64/ctdb/ctdb_mutex_ceph_rados_helper
            ceph client.samba.gw cephfs_metadata ctdb-mutex"
        ctdb_socket="/var/lib/ctdb/ctdb.socket" \
            op monitor interval="10" timeout="20" \
            op start interval="0" timeout="200" \
            op stop interval="0" timeout="100"
    crm(live)configure# primitive smb systemd:smb \
        op start timeout="100" interval="0" \
        op stop timeout="100" interval="0" \
        op monitor interval="60" timeout="100"
    crm(live)configure# primitive nmb systemd:nmb \
        op start timeout="100" interval="0" \
        op stop timeout="100" interval="0" \
        op monitor interval="60" timeout="100"
    crm(live)configure# primitive winbind systemd:winbind \
        op start timeout="100" interval="0" \
        op stop timeout="100" interval="0" \
        op monitor interval="60" timeout="100"
    crm(live)configure# group g-ctdb ctdb winbind nmb smb
    crm(live)configure# clone cl-ctdb g-ctdb meta interleave="true"
    crm(live)configure# commit
    Tip
    Tip: Optional nmb and winbind primitives

    If you do not require network share browsing, you do not need to add the nmb primitive.

    The winbind primitive is only needed when configured as an Active Directory domain member. See Section 24.2, “Joining Samba Gateway and Active Directory”.

    The binary /usr/lib64/ctdb/ctdb_mutex_ceph_rados_helper in the configuration option ctdb_recovery_lock has the parameters CLUSTER_NAME, CEPHX_USER, RADOS_POOL, and RADOS_OBJECT, in this order.

    An extra lock-timeout parameter can be appended to override the default value used (10 seconds); see the example following this procedure. A higher value will increase the CTDB recovery master failover time, whereas a lower value may result in the recovery master being incorrectly detected as down, triggering flapping failovers.

  8. Add a clustered IP address:

    crm(live)configure# primitive ip ocf:heartbeat:IPaddr2 \
        params ip=192.168.2.1 \
        unique_clone_address="true" \
        op monitor interval="60" \
        meta resource-stickiness="0"
    crm(live)configure# clone cl-ip ip \
        meta interleave="true" clone-node-max="2" globally-unique="true"
    crm(live)configure# colocation col-with-ctdb 0: cl-ip cl-ctdb
    crm(live)configure# order o-with-ctdb 0: cl-ip cl-ctdb
    crm(live)configure# commit

    If unique_clone_address is set to true, the IPaddr2 resource agent adds a clone ID to the specified address, leading to two different IP addresses, as shown in the crm status output below. These are usually not needed, but help with load balancing. For further information about this topic, see https://documentation.suse.com/sle-ha/15-SP2/html/SLE-HA-all/cha-ha-lb.html.

  9. Check the result:

    root@earth # crm status
    Online: [ earth mars ]

    Full list of resources:

     admin-ip       (ocf::heartbeat:IPaddr2):       Started earth
     Clone Set: cl-ctdb [g-ctdb]
         Started: [ earth mars ]
     Clone Set: cl-ip [ip] (unique)
         ip:0       (ocf::heartbeat:IPaddr2):       Started earth
         ip:1       (ocf::heartbeat:IPaddr2):       Started mars
  10. Test from a client machine. On a Linux client, run the following command to see if you can copy files from and to the system:

    # smbclient -U USERNAME //192.168.2.1/SHARE_NAME
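
For example, to raise the lock-timeout mentioned in the tip in step 7 from the default 10 seconds to 30 seconds, append it as an extra argument inside the ctdb_recovery_lock string (a sketch; all other values match the crm configure step above):

ctdb_recovery_lock="!/usr/lib64/ctdb/ctdb_mutex_ceph_rados_helper ceph client.samba.gw cephfs_metadata ctdb-mutex 30"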

24.1.3.1 Restarting HA Samba resources

Following any Samba or CTDB configuration changes, HA resources may need to be restarted for the changes to take effect. This can be done via:

# crm resource restart cl-ctdb
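
Afterward, verify that the resources have started again, for example with:

# crm status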

24.2 Joining Samba Gateway and Active Directory

You can configure the Ceph Samba gateway to become a member of a Samba domain with Active Directory (AD) support. As a Samba domain member, you can use domain users and groups in local access control lists (ACLs) on files and directories from the exported CephFS.

24.2.1 Preparing Samba installation

This section introduces preparatory steps that you need to take care of before configuring Samba itself. Starting with a clean environment helps prevent confusion and ensures that no files from a previous Samba installation are mixed with the new domain member installation.

Tip
Tip: Synchronizing clocks

All Samba Gateway nodes' clocks need to be synchronized with the Active Directory Domain controller. Clock skew may result in authentication failures.

Verify that no Samba or name caching processes are running:

cephuser@smb > ps ax | egrep "samba|smbd|nmbd|winbindd|nscd"

If the output lists any samba, smbd, nmbd, winbindd, or nscd processes, stop them.

If you have previously run a Samba installation on this host, remove the /etc/samba/smb.conf file. Also remove all Samba database files, such as *.tdb and *.ldb files. To list directories containing Samba databases, run:

cephuser@smb > smbd -b | egrep "LOCKDIR|STATEDIR|CACHEDIR|PRIVATE_DIR"
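
For example, if the command reports the default locations, the old configuration and database files could be removed as follows. This is only a sketch; check the paths reported on your system before deleting anything, as they vary between builds:

cephuser@smb > sudo rm /etc/samba/smb.conf
cephuser@smb > sudo rm /var/lib/samba/*.tdb
cephuser@smb > sudo rm /var/lib/samba/private/*.tdb /var/lib/samba/private/*.ldb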

24.2.2 Verifying DNS

Active Directory (AD) uses DNS to locate other domain controllers (DCs) and services, such as Kerberos. Therefore AD domain members and servers need to be able to resolve the AD DNS zones.

Verify that DNS is correctly configured and that both forward and reverse lookup resolve correctly, for example:

cephuser@adm > nslookup DC1.domain.example.com
Server:         10.99.0.1
Address:        10.99.0.1#53

Name:   DC1.domain.example.com
Address: 10.99.0.1
cephuser@adm > nslookup 10.99.0.1
Server:         10.99.0.1
Address:        10.99.0.1#53

1.0.99.10.in-addr.arpa	name = DC1.domain.example.com.

24.2.3 Resolving SRV records

AD uses SRV records to locate services, such as Kerberos and LDAP. To verify that SRV records are resolved correctly, use the nslookup interactive shell, for example:

cephuser@adm > nslookup
Default Server:  10.99.0.1
Address:  10.99.0.1

> set type=SRV
> _ldap._tcp.domain.example.com.
Server:  UnKnown
Address:  10.99.0.1

_ldap._tcp.domain.example.com   SRV service location:
          priority       = 0
          weight         = 100
          port           = 389
          svr hostname   = dc1.domain.example.com
domain.example.com      nameserver = dc1.domain.example.com
dc1.domain.example.com  internet address = 10.99.0.1

24.2.4 Configuring Kerberos

Samba supports Heimdal and MIT Kerberos back-ends. To configure Kerberos on the domain member, set the following in your /etc/krb5.conf file:

[libdefaults]
	default_realm = DOMAIN.EXAMPLE.COM
	dns_lookup_realm = false
	dns_lookup_kdc = true

The previous example configures Kerberos for the DOMAIN.EXAMPLE.COM realm. We do not recommend setting any further parameters in the /etc/krb5.conf file. If your /etc/krb5.conf contains an include line, it will not work; you must remove this line.
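
You can verify the Kerberos configuration by requesting a ticket for a domain account, for example (the Administrator account is assumed to exist in the example domain):

cephuser@smb > kinit Administrator@DOMAIN.EXAMPLE.COM
cephuser@smb > klist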

24.2.5 Resolving localhost name

When you join a host to the domain, Samba tries to register the host name in the AD DNS zone. For this, the net utility needs to be able to resolve the host name using DNS or using a correct entry in the /etc/hosts file.

To verify that your host name resolves correctly, use the getent hosts command:

cephuser@adm > getent hosts example-host
10.99.0.5      example-host.domain.example.com    example-host

The host name and FQDN must not resolve to the 127.0.0.1 IP address or any IP address other than the one used on the LAN interface of the domain member. If no output is displayed or the host is resolved to the wrong IP address and you are not using DHCP, set the correct entry in the /etc/hosts file:

127.0.0.1      localhost
10.99.0.5      example-host.domain.example.com    example-host
Tip
Tip: DHCP and /etc/hosts

If you are using DHCP, check that /etc/hosts only contains the '127.0.0.1' line. If you continue to have problems, contact the administrator of your DHCP server.

If you need to add aliases to the machine host name, add them to the end of the line that starts with the machine's IP address, not to the '127.0.0.1' line.

24.2.6 Configuring Samba

This section introduces information about specific configuration options that you need to include in the Samba configuration.

Active Directory domain membership is primarily configured by setting security = ADS alongside appropriate Kerberos realm and ID mapping parameters in the [global] section of /etc/samba/smb.conf.

[global]
  security = ADS
  workgroup = DOMAIN
  realm = DOMAIN.EXAMPLE.COM
  ...

24.2.6.1 Choosing the back-end for ID mapping in winbindd

If you need your users to have different login shells and/or Unix home directory paths, or you want them to have the same ID everywhere, you will need to use the winbind 'ad' back-end and add RFC2307 attributes to AD.

Important
Important: RFC2307 Attributes and ID Numbers

The RFC2307 attributes are not added automatically when users or groups are created.

The ID numbers found on a DC (numbers in the 3000000 range) are not RFC2307 attributes and will not be used on Unix Domain Members. If you need to have the same ID numbers everywhere, add uidNumber and gidNumber attributes to AD and use the winbind 'ad' back-end on Unix Domain Members. If you do decide to add uidNumber and gidNumber attributes to AD, do not use numbers in the 3000000 range.

If your users will only use the Samba AD DC for authentication and will not store data on it or log in to it, you can use the winbind 'rid' back-end. This calculates the user and group IDs from the Windows* RID. If you use the same [global] section of the smb.conf on every Unix domain member, you will get the same IDs. If you use the 'rid' back-end, you do not need to add anything to AD and RFC2307 attributes will be ignored. When using the 'rid' back-end, set the template shell and template homedir parameters in smb.conf. These settings are global and everyone gets the same login shell and Unix home directory path (unlike the RFC2307 attributes where you can set individual Unix home directory paths and shells).

There is another way of setting up Samba—when you require your users and groups to have the same ID everywhere, but only need your users to have the same login shell and use the same Unix home directory path. You can do this by using the winbind 'ad' back-end and using the template lines in smb.conf. This way you only need to add uidNumber and gidNumber attributes to AD.

Tip
Tip: More Information about Back-ends for ID Mapping

Find more detailed information about available ID mapping back-ends in the related manual pages: man 8 idmap_ad, man 8 idmap_rid, and man 8 idmap_autorid.

24.2.6.2 Setting user and group ID ranges

After you decide which winbind back-end to use, you need to specify the ranges to use with the idmap config option in smb.conf. By default, there are multiple blocks of user and group IDs reserved on a Unix domain member:

Table 24.1: Default Users and Group ID Blocks

Range               Users and groups
0-999               Local system users and groups
Starting at 1000    Local Unix users and groups
Starting at 10000   DOMAIN users and groups

As you can see from the above table, you should not set either the '*' or 'DOMAIN' ranges to start at 999 or less, as they would interfere with the local system users and groups. You should also leave room for any local Unix users and groups, so starting the idmap config ranges at 3000 seems to be a good compromise.

You need to decide how large your 'DOMAIN' is likely to grow and if you plan to have any trusted domains. Then you can set the idmap config ranges as follows:

Table 24.2: ID Ranges

Domain      Range
*           3000-7999
DOMAIN      10000-999999
TRUSTED     1000000-9999999
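
As an illustration rather than a drop-in configuration, the ranges from Table 24.2 could be expressed with the 'rid' back-end in the [global] section of smb.conf as follows; the template parameters set the uniform login shell and home directory path described above:

[global]
  ...
  idmap config * : backend = tdb
  idmap config * : range = 3000-7999
  idmap config DOMAIN : backend = rid
  idmap config DOMAIN : range = 10000-999999
  template shell = /bin/bash
  template homedir = /home/%D/%U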

24.2.6.3 Mapping the domain administrator account to the local root user

Samba enables you to map domain accounts to a local account. Use this feature to execute file operations on the domain member's file system as a different user than the account that requested the operation on the client.

Tip
Tip: Mapping the Domain Administrator (Optional)

Mapping the domain administrator to the local root account is optional. Only configure the mapping if the domain administrator needs to be able to execute file operations on the domain member using root permissions. Be aware that mapping Administrator to the root account does not allow you to log in to Unix domain members as 'Administrator'.

To map the domain administrator to the local root account, follow these steps:

  1. Add the following parameter to the [global] section of your smb.conf file:

    username map = /etc/samba/user.map
  2. Create the /etc/samba/user.map file with the following content:

    !root = DOMAIN\Administrator
Important
Important

When using the 'ad' ID mapping back-end, do not set the uidNumber attribute for the domain administrator account. If the account has the attribute set, the value overrides the local UID '0' of the root user, and therefore the mapping fails.

For more details, see the username map parameter in the smb.conf manual page (man 5 smb.conf).

24.2.7 Joining the Active Directory domain

To join the host to an Active Directory domain, run:

cephuser@smb > net ads join -U administrator
Enter administrator's password: PASSWORD
Using short domain name -- DOMAIN
Joined 'EXAMPLE-HOST' to dns domain 'domain.example.com'
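
You can subsequently confirm that the machine account is valid, for example with:

cephuser@smb > net ads testjoin
Join is OK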

24.2.8 Configuring the name service switch

To make domain users and groups available to the local system, you need to enable the name service switch (NSS) library. Append the winbind entry to the following databases in the /etc/nsswitch.conf file:

passwd: files winbind
group:  files winbind
Important
Important: Points to Consider
  • Keep the files entry as the first source for both databases. This enables NSS to look up domain users and groups from the /etc/passwd and /etc/group files before querying the winbind service.

  • Do not add the winbind entry to the NSS shadow database. This can cause the wbinfo utility to fail.

  • Do not use the same user names in the local /etc/passwd file as in the domain.

24.2.9 Starting the services

Following configuration changes, restart Samba services as per Section 24.1.2.1, “Starting Samba services” or Section 24.1.3.1, “Restarting HA Samba resources”.

24.2.10 Testing winbindd connectivity

24.2.10.1 Sending a winbindd ping

To verify if the winbindd service is able to connect to AD Domain Controllers (DC) or a primary domain controller (PDC), enter:

cephuser@smb > wbinfo --ping-dc
checking the NETLOGON for domain[DOMAIN] dc connection to "DC.DOMAIN.EXAMPLE.COM" succeeded

If the previous command fails, verify that the winbindd service is running and that the smb.conf file is set up correctly.

24.2.10.2 Looking up domain users and groups

The libnss_winbind library enables you to look up domain users and groups. For example, to look up the domain user 'DOMAIN\demo01':

cephuser@smb > getent passwd DOMAIN\\demo01
DOMAIN\demo01:*:10000:10000:demo01:/home/demo01:/bin/bash

To look up the domain group 'Domain Users':

cephuser@smb > getent group "DOMAIN\\Domain Users"
DOMAIN\domain users:x:10000:
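
The wbinfo utility can query the same information directly from the winbindd service, bypassing NSS. For example, to list all domain users and groups:

cephuser@smb > wbinfo -u
cephuser@smb > wbinfo -g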

24.2.10.3 Assigning file permissions to domain users and groups

The name service switch (NSS) library enables you to use domain user accounts and groups in commands. For example, to set the owner of a file to the 'demo01' domain user and the group to the 'Domain Users' domain group, enter:

cephuser@smb > chown "DOMAIN\\demo01:DOMAIN\\domain users" file.txt