documentation.suse.com / SUSE Enterprise Storage 7 Documentation / SUSE Enterprise Storage for Windows guide / Ceph for Microsoft Windows
Applies to SUSE Enterprise Storage 7

1 Ceph for Microsoft Windows

1.1 Introduction

Ceph is a highly resilient software-defined storage offering, which until now has been available to Microsoft Windows environments only through iSCSI or CIFS gateways. This gateway architecture introduces a single point of contact and limits fault tolerance and bandwidth compared to the native I/O paths of Ceph with RADOS.

In order to bring the benefits of native Ceph to Microsoft Windows environments, SUSE partnered with Cloudbase Solutions to port Ceph to the Microsoft Windows platform. This work is nearing completion, and provides the following functionality:

  • RADOS Block Device (RBD)

  • CephFS

You can find additional information on the background of this effort through the following SUSECON Digital session:

  • Ceph in a Windows World (TUT-1121), presented by Mike Latimer (SUSE) and Alessandro Pilotti (Cloudbase Solutions)

1.2 Technology preview

SUSE Enterprise Storage Driver for Windows is currently being offered as a technology preview. This is a necessary step toward full support as we continue work to ensure this driver performs well in all environments and workloads. You can contribute to this effort by reporting any issues you may encounter to SUSE Support.

CephFS functionality requires a third-party FUSE wrapper provided through the Dokany project. This functionality should be considered experimental and is not recommended for production use.

1.3 Supported platforms

Microsoft Windows Server 2016 and 2019 are supported. Earlier Microsoft Windows Server versions, as well as Microsoft Windows client versions such as Microsoft Windows 10, may work but have not been thoroughly tested for the purposes of this document.

Note

Early builds of Microsoft Windows Server 2016 do not provide UNIX sockets, in which case the Ceph admin socket feature is unavailable.

1.4 Compatibility

RADOS Block Device images can be exposed to the OS and host Microsoft Windows partitions or they can be attached to Hyper-V VMs in the same way as iSCSI disks.

Note

At the moment, Microsoft Failover Cluster refuses to use Windows Block Device (WNBD) driver disks as the underlying storage for Cluster Shared Volumes (CSVs).

OpenStack integration has been proposed and may be included in the next OpenStack release. This will allow RBD images managed by OpenStack Cinder to be attached to Hyper-V VMs managed by OpenStack Nova.

1.5 Installing and configuring

Ceph for Microsoft Windows can be easily installed through the SES4Win.msi setup wizard. You can download this from SES4Win. This wizard performs the following functions:

  • Installs Ceph-related code to the C:\Program Files\Ceph directory.

  • Adds C:\Program Files\Ceph\bin to the %PATH% environment variable.

  • Creates a Ceph RBD Mapping Service to automatically map RBD devices upon machine restart (using rbd-wnbd.exe).

After installing Ceph for Microsoft Windows, manual modifications are required to provide access to a Ceph cluster. The files which must be created or modified are as follows:

C:\ProgramData\ceph\ceph.conf
C:\ProgramData\ceph\keyring

These files can be copied directly from an existing OSD node in the cluster. Sample configuration files are provided in Appendix A, Sample configuration files.
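For orientation, a minimal ceph.conf might look like the following sketch. The monitor addresses are placeholders and must be replaced with those of your cluster; refer to Appendix A, Sample configuration files, for complete samples.

```ini
[global]
    ; Placeholder monitor addresses -- replace with your cluster's monitors
    mon_host = 192.168.1.10, 192.168.1.11, 192.168.1.12
    ; Windows-style paths for logs and runtime data
    log_file = C:/ProgramData/ceph/out/$name.$pid.log
    run_dir = C:/ProgramData/ceph/out

[client]
    keyring = C:/ProgramData/ceph/keyring
```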

1.6 RADOS Block Device (RBD)

Support for RBD devices is provided through a combination of Ceph tools and the Microsoft Windows WNBD driver. This driver is in the process of being certified by the Windows Hardware Quality Labs (WHQL).

Once installed, the WNBD SCSI Virtual Adapter driver can be seen in the Device Manager as a storage controller. Multiple adapters may be seen, in order to handle multiple RBD connections.

The rbd command is used to create, remove, import, export, map, or unmap images, exactly as on Linux.

1.6.1 Mapping images

The behavior of the rbd command is similar to its Linux counterpart, with a few notable differences:

  • Device paths cannot be requested. The disk number and path are picked by Microsoft Windows. If a device path is provided by the user when mapping an image, it is used as an identifier, which can also be used when unmapping the image.

  • The show command was added, which describes a specific mapping. This can be used for retrieving the disk path.

  • The service command was added, allowing rbd-wnbd to run as a Microsoft Windows service. All mappings are currently persistent and are recreated when the service starts, unless they are explicitly unmapped. The service disconnects all mappings when it is stopped.

  • The list command also includes a status column.

The mapped images can either be consumed by the host directly or exposed to Hyper-V VMs.
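A minimal mapping session, using a hypothetical image name test_image, might look like the following sketch:

```powershell
# Create a 1 GB image and map it; the disk number is assigned by Windows
rbd create test_image --size=1G
rbd-wnbd map test_image

# Inspect the mapping; show reports details such as the disk number,
# and list includes a status column
rbd-wnbd show test_image
rbd-wnbd list

# Disconnect the mapping when done
rbd-wnbd unmap test_image
```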

1.6.2 Hyper-V VM disks

The following sample imports an RBD image and boots a Hyper-V VM using it.

      # Feel free to use any other image. This one is convenient to use for
      # testing purposes because it's very small (~15MB) and the login prompt
      # prints the pre-configured password.
      wget http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img `
           -OutFile cirros-0.5.1-x86_64-disk.img

      # We'll need to make sure that the imported images are raw (so no qcow2 or vhdx).
      # You may get qemu-img from https://cloudbase.it/qemu-img-windows/
      # You can add the extracted location to $env:Path or update the path accordingly.
      qemu-img convert -O raw cirros-0.5.1-x86_64-disk.img cirros-0.5.1-x86_64-disk.raw

      rbd import cirros-0.5.1-x86_64-disk.raw
      # Let's give it a hefty 100MB size.
      rbd resize cirros-0.5.1-x86_64-disk.raw --size=100MB

      rbd-wnbd map cirros-0.5.1-x86_64-disk.raw

      # Let's have a look at the mappings.
      rbd-wnbd list
      Get-Disk

      $mappingJson = rbd-wnbd show cirros-0.5.1-x86_64-disk.raw --format=json
      $mappingJson = $mappingJson | ConvertFrom-Json

      $diskNumber = $mappingJson.disk_number

      New-VM -VMName BootFromRBD -MemoryStartupBytes 512MB
      # The disk must be turned offline before it can be passed to Hyper-V VMs
      Set-Disk -Number $diskNumber -IsOffline $true
      Add-VMHardDiskDrive -VMName BootFromRBD -DiskNumber $diskNumber
      Start-VM -VMName BootFromRBD

1.6.3 Configuring Microsoft Windows partitions

The following sample creates an empty RBD image, attaches it to the host and initializes a partition:

  rbd create blank_image --size=1G
  rbd-wnbd map blank_image

  $mappingJson = rbd-wnbd show blank_image --format=json
  $mappingJson = $mappingJson | ConvertFrom-Json

  $diskNumber = $mappingJson.disk_number

  # The disk must be online before creating or accessing partitions.
  Set-Disk -Number $diskNumber -IsOffline $false

  # Initialize the disk, partition it and create a filesystem.
  Get-Disk -Number $diskNumber | `
      Initialize-Disk -PassThru | `
      New-Partition -AssignDriveLetter -UseMaximumSize | `
      Format-Volume -Force -Confirm:$false
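
When the partition is no longer needed, the disk can be taken offline again and the image unmapped. The following sketch continues the example above:

```powershell
# Take the disk offline before disconnecting the mapping
Set-Disk -Number $diskNumber -IsOffline $true

# Disconnect the RBD mapping and delete the image
rbd-wnbd unmap blank_image
rbd remove blank_image
```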

1.7 RBD Microsoft Windows service

In order to ensure that rbd-wnbd mappings survive host reboots, a new Microsoft Windows service, called the Ceph RBD Mapping Service, has been created. This service automatically maintains mappings as they are added using the Ceph tools. All mappings are currently persistent and are recreated when the service starts, unless they are explicitly unmapped. The service disconnects all mappings when it is stopped.

This service also adjusts the Microsoft Windows service start order so that RBD images can be mapped before starting any services that may depend on them, such as Hyper-V VMs.

RBD maps are stored in the Microsoft Windows registry at the following location:

SYSTEM\CurrentControlSet\Services\rbd-wnbd
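
To inspect the persisted mappings, the registry key can be queried directly, for example with PowerShell. Note that the exact layout of this key is an implementation detail and may change:

```powershell
# List the mapping entries stored by the Ceph RBD Mapping Service
Get-ChildItem -Path HKLM:\SYSTEM\CurrentControlSet\Services\rbd-wnbd
```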

1.8 Configuring CephFS

Note

The following feature is experimental, and is not intended for use in production environments.

Ceph for Microsoft Windows provides CephFS support through the Dokany FUSE wrapper. In order to use CephFS, install Dokany v1.4.1 or newer using the installers available here: https://github.com/dokan-dev/dokany/releases

With Dokany installed, and ceph.conf and ceph.client.admin.keyring configuration files in place, CephFS can be mounted using the ceph-dokan.exe command. For example:

ceph-dokan.exe -l x

This command mounts the default Ceph file system using the drive letter X. If ceph.conf is not placed at the default location (C:\ProgramData\ceph\ceph.conf), a -c parameter can be used to specify the location of ceph.conf.

The -l argument also allows using an empty folder as a mountpoint instead of a drive letter.
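For example, assuming C:\CephFS is an existing empty folder (a hypothetical path), the file system can be mounted there instead of on a drive letter:

```powershell
ceph-dokan.exe -l C:\CephFS
```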

The UID and GID used for mounting the file system default to 0 and may be changed using the following ceph.conf options:

[client]
# client_permissions = true
client_mount_uid = 1000
client_mount_gid = 1000
Important

Microsoft Windows Access Control Lists (ACLs) are ignored. Portable Operating System Interface (POSIX) ACLs are supported but cannot be modified using the current CLI.

Important

CephFS does not support mandatory file locks, which Microsoft Windows heavily relies upon. At the moment, we are letting Dokan handle file locks, which are only enforced locally.

For debugging purposes, -d and -s may be used. The former enables debug output and the latter enables stderr logging. By default, debug messages are sent to a connected debugger.
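For example, to mount the file system on drive letter X with debug output written to stderr:

```powershell
ceph-dokan.exe -l x -d -s
```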

You may use --help to get the full list of available options. Additional information on this experimental feature may be found in the upstream Ceph documentation: https://docs.ceph.com/en/latest/cephfs/ceph-dokan