Applies to SUSE Linux Enterprise High Availability Extension 11 SP4

18 Samba Clustering

A clustered Samba server provides a High Availability solution in your heterogeneous networks. This chapter explains some background information and how to set up a clustered Samba server.

18.1 Conceptual Overview

Trivial Database (TDB) has been used by Samba for many years. It allows multiple applications to write simultaneously. To make sure all write operations are successfully performed and do not collide with each other, TDB uses an internal locking mechanism.

Cluster Trivial Database (CTDB) is a small extension of the existing TDB. CTDB is described by the project as a cluster implementation of the TDB database used by Samba and other projects to store temporary data.

Each cluster node runs a local CTDB daemon. Samba communicates with its local CTDB daemon instead of writing directly to its TDB. The daemons exchange metadata over the network, but actual write and read operations are done on a local copy with fast storage. The concept of CTDB is displayed in Figure 18.1, “Structure of a CTDB Cluster”.

Note
Note: CTDB For Samba Only

The current implementation of the CTDB Resource Agent configures CTDB to only manage Samba. Everything else, including IP failover, should be configured with Pacemaker.

CTDB is only supported for completely homogeneous clusters. For example, all nodes in the cluster need to have the same architecture. You cannot mix i586 with x86_64.

Structure of a CTDB Cluster
Figure 18.1: Structure of a CTDB Cluster

A clustered Samba server must share certain data:

  • A mapping table that associates Unix user and group IDs with Windows users and groups must be available on all nodes.

  • The user database must be synchronized between all nodes.

  • Join information for a member server in a Windows domain must be available on all nodes.

  • Metadata, such as active SMB sessions, share connections, and various locks, must be available on all nodes.
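The CTDB resource agent takes care of the corresponding Samba settings, but conceptually the shared state boils down to a few smb.conf directives. The following fragment is only an illustration of what a clustered configuration amounts to; do not add it by hand, as the resource agent writes the real values:

```
[global]
    # Route all TDB access through the local CTDB daemon
    clustering = yes
    # Keep UID/GID mappings identical on all nodes
    idmap backend = tdb2
```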

The goal is that a clustered Samba server with N+1 nodes is faster than with only N nodes, and that a single node is not slower than an unclustered Samba server.

18.2 Basic Configuration

Note
Note: Changed Configuration Files

The CTDB Resource Agent automatically changes /etc/sysconfig/ctdb and /etc/samba/smb.conf. Use crm ra info CTDB to list all parameters that can be specified for the CTDB resource.

To set up a clustered Samba server, proceed as follows:

Procedure 18.1: Setting Up a Basic Clustered Samba Server
  1. Prepare your cluster:

    1. Configure your cluster (OpenAIS, Pacemaker, OCFS2) as described in this guide in Part II, “Configuration and Administration”.

    2. Configure a shared file system, like OCFS2, and mount it, for example, on /shared.

    3. If you want to turn on POSIX ACLs, enable it:

      • For a new OCFS2 file system use:

        root # mkfs.ocfs2 --fs-features=xattr ...
      • For an existing OCFS2 file system use:

        root # tunefs.ocfs2 --fs-features=xattr DEVICE

        Make sure the acl option is specified in the file system resource. Use the crm shell as follows:

        crm(live)configure# primitive ocfs2-3 ocf:heartbeat:Filesystem params options="acl" ...
    4. Make sure the services ctdb, smb, nmb, and winbind are disabled:

      root # chkconfig ctdb off
      chkconfig smb off
      chkconfig nmb off
      chkconfig winbind off
  2. Create a directory for the CTDB lock on the shared file system:

    root # mkdir -p /shared/samba/
  3. In /etc/ctdb/nodes, insert the private IP address of each node in the cluster:

    192.168.1.10
    192.168.1.11
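The file must contain one private IP address per line and must be identical on all nodes. A quick sanity check of the format can be sketched in shell (a hypothetical helper, not shipped with CTDB; it uses a temporary copy so the example is self-contained, point nodes_file at /etc/ctdb/nodes on a real node):

```shell
#!/bin/sh
# Sketch: verify that every line of a CTDB nodes file is a plain IPv4 address.
nodes_file=$(mktemp)
printf '192.168.1.10\n192.168.1.11\n' > "$nodes_file"

# Count lines that do NOT look like dotted-quad addresses
bad_lines=$(grep -c -E -v '^([0-9]{1,3}\.){3}[0-9]{1,3}$' "$nodes_file")
if [ "$bad_lines" -eq 0 ]; then
  result=ok
else
  result=bad
fi
echo "nodes file check: $result"
rm -f "$nodes_file"
```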
  4. Copy the configuration file to all of your nodes by using csync2:

    root # csync2 -xv

    For more information, see Procedure 3.10, “Synchronizing the Configuration Files with Csync2”.

  5. Add a CTDB resource to the cluster:

    crm configure
    crm(live)configure# primitive ctdb ocf:heartbeat:CTDB params \
        ctdb_manages_winbind="false" \ 
        ctdb_manages_samba="true" \
        ctdb_recovery_lock="/shared/samba/ctdb.lock" \
          op monitor interval="10" timeout="20" \
          op start interval="0" timeout="90" \
          op stop interval="0" timeout="100"
    crm(live)configure# clone ctdb-clone ctdb \
        meta globally-unique="false" interleave="true"
    crm(live)configure# colocation ctdb-with-fs inf: ctdb-clone fs-clone
    crm(live)configure# order start-ctdb-after-fs inf: fs-clone ctdb-clone
    crm(live)configure# commit
  6. Add a clustered IP address:

    crm(live)configure# primitive ip ocf:heartbeat:IPaddr2 params ip=192.168.2.222 \
      clusterip_hash="sourceip-sourceport" op monitor interval=60s
    crm(live)configure# clone ip-clone ip meta globally-unique="true"
    crm(live)configure# colocation ip-with-ctdb inf: ip-clone ctdb-clone
    crm(live)configure# order start-ip-after-ctdb inf: ctdb-clone ip-clone
    crm(live)configure# commit
  7. Check the result:

    root # crm status
    Clone Set: dlm-clone
         Started: [ hex-14 hex-13 ]
     Clone Set: o2cb-clone
         Started: [ hex-14 hex-13 ]
     Clone Set: c-ocfs2-3
         Started: [ hex-14 hex-13 ]
     Clone Set: ctdb-clone
         Started: [ hex-14 hex-13 ]
     Clone Set: ip-clone (unique)
         ip:0       (ocf::heartbeat:IPaddr2):       Started hex-13
         ip:1       (ocf::heartbeat:IPaddr2):       Started hex-14
  8. Test from a client machine. On a Linux client, add a user for Samba access:

    root # smbpasswd -a USERNAME
  9. Test if you can reach the new user's home directory:

    root # smbclient -U USERNAME //192.168.2.222/USERNAME

18.3 Joining an Active Directory Domain

Active Directory (AD) is a directory service for Windows server systems.

The following instructions outline how to join a CTDB cluster to an Active Directory domain:

  1. Consult your Windows Server documentation for instructions on how to set up an Active Directory domain. In this example, we use the following parameters:

    AD and DNS server

    win2k3.2k3test.example.com

    AD domain

    2k3test.example.com

    Cluster AD member NetBIOS name

    CTDB-SERVER
  2. Procedure 18.2, “Configuring CTDB”

  3. Procedure 18.3, “Joining Active Directory”

The next step is to configure CTDB:

Procedure 18.2: Configuring CTDB
  1. Make sure you have configured your cluster as shown in Section 18.2, “Basic Configuration”.

  2. Stop the CTDB resource on one node:

    root #  crm resource stop ctdb-clone
  3. Open the /etc/samba/smb.conf configuration file, add your NetBIOS name, and close the file:

    [global]
        netbios name = CTDB-SERVER

    Other settings, such as security or workgroup, are added by the YaST wizard.

  4. Update the file /etc/samba/smb.conf on all nodes:

    root # csync2 -xv
  5. Restart the CTDB resource:

    root # crm resource start ctdb-clone

Finally, join your cluster to the Active Directory server:

Procedure 18.3: Joining Active Directory
  1. Make sure the following files are included in Csync2's configuration so that they are synchronized to all cluster hosts:

    /etc/samba/smb.conf
    /etc/security/pam_winbind.conf
    /etc/krb5.conf
    /etc/nsswitch.conf
    /etc/security/pam_mount.conf.xml
    /etc/pam.d/common-session

    You can also use YaST's Configure Csync2 module for this task, see Section 3.5.4, “Transferring the Configuration to All Nodes”.
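In /etc/csync2/csync2.cfg, these files appear as include lines in the synchronization group. A rough sketch of such a group follows; the group name, host names, and key path are placeholders for your own setup:

```
group ctdb-cluster
{
    host node1 node2;
    key /etc/csync2/key_hagroup.key;
    include /etc/samba/smb.conf;
    include /etc/security/pam_winbind.conf;
    include /etc/krb5.conf;
    include /etc/nsswitch.conf;
    include /etc/security/pam_mount.conf.xml;
    include /etc/pam.d/common-session;
}
```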

  2. Create a CTDB resource as described in Procedure 18.1, “Setting Up a Basic Clustered Samba Server”.

  3. Run YaST and open the Windows Domain Membership module from the Network Services entry.

  4. Enter your domain or workgroup settings and finish with Ok.

18.4 Debugging and Testing Clustered Samba

To debug your clustered Samba server, the following tools, which operate at different levels, are available:

ctdb_diagnostics

Run this tool to diagnose your clustered Samba server. Detailed debug messages should help you track down any problems you might have.

The ctdb_diagnostics command searches for the following files, which must be available on all nodes:

/etc/krb5.conf
/etc/hosts
/etc/ctdb/nodes
/etc/sysconfig/ctdb
/etc/resolv.conf
/etc/nsswitch.conf
/etc/sysctl.conf
/etc/samba/smb.conf
/etc/fstab
/etc/multipath.conf
/etc/pam.d/system-auth
/etc/sysconfig/nfs
/etc/exports
/etc/vsftpd/vsftpd.conf

If the files /etc/ctdb/public_addresses and /etc/ctdb/static-routes exist, they will be checked as well.
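To see in advance which of these files are missing on a node, a simple presence check can be sketched as follows (a hypothetical helper, not part of ctdb_diagnostics; run it on every node and compare the output):

```shell
#!/bin/sh
# Sketch: count which of the files ctdb_diagnostics expects are missing
# on the local node. Missing files are reported individually.
missing=0
for f in /etc/krb5.conf /etc/hosts /etc/ctdb/nodes /etc/sysconfig/ctdb \
         /etc/resolv.conf /etc/nsswitch.conf /etc/sysctl.conf \
         /etc/samba/smb.conf /etc/fstab /etc/multipath.conf \
         /etc/pam.d/system-auth /etc/sysconfig/nfs /etc/exports \
         /etc/vsftpd/vsftpd.conf; do
  [ -e "$f" ] || { echo "missing: $f"; missing=$((missing + 1)); }
done
echo "$missing file(s) missing"
```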

ping_pong

Check whether your file system is suitable for CTDB with ping_pong. It performs certain tests of your cluster file system like coherence and performance (see http://wiki.samba.org/index.php/Ping_pong) and gives some indication how your cluster may behave under high load.
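The metric ping_pong reports, how many locks per second a file sustains, can be approximated on any local file with flock(1). The following sketch is not a substitute for ping_pong (which exercises byte-range locks across nodes); it only illustrates what a locking rate is:

```shell
#!/bin/bash
# Sketch: count exclusive flock(1) acquisitions on a file within ~1 second.
# On a cluster file system, lock contention from other nodes would lower
# this rate, which is the effect ping_pong measures across nodes.
lockfile=$(mktemp)
count=0
end=$((SECONDS + 1))
while [ "$SECONDS" -lt "$end" ]; do
  flock "$lockfile" true    # take and release an exclusive lock
  count=$((count + 1))
done
echo "$count lock cycles in ~1 second"
rm -f "$lockfile"
```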

send_arp Tool and SendArp Resource Agent

The SendArp resource agent is located in /usr/lib/heartbeat/send_arp (or /usr/lib64/heartbeat/send_arp). The send_arp tool sends out a gratuitous ARP (Address Resolution Protocol) packet and can be used to update other machines' ARP tables. It can help to identify communication problems after a failover process. If you cannot connect to a node or ping it although it holds the clustered IP address for Samba, use the send_arp command to test whether the nodes only need an ARP table update.

For more information, refer to http://wiki.wireshark.org/Gratuitous_ARP.

To test certain aspects of your cluster file system proceed as follows:

Procedure 18.4: Test Coherence and Performance of Your Cluster File System
  1. Start the command ping_pong on one node and replace the placeholder N with the number of nodes plus one. The file ABSPATH/data.txt is available on your shared storage and is therefore accessible on all nodes (ABSPATH indicates an absolute path):

    ping_pong ABSPATH/data.txt N

    Expect a very high locking rate, as you are running only one instance. If the program does not print a locking rate, replace your cluster file system.

  2. Start a second copy of ping_pong on another node with the same parameters.

    Expect to see a dramatic drop in the locking rate. If any of the following applies to your cluster file system, replace it:

    • ping_pong does not print a locking rate per second,

    • the locking rates in the two instances are not almost equal,

    • the locking rate did not drop after you started the second instance.

  3. Start a third copy of ping_pong. Add another node and note how the locking rates change.

  4. Kill the ping_pong commands one after the other. You should observe an increase of the locking rate until you get back to the single node case. If you did not get the expected behavior, find more information in Chapter 14, OCFS2.
