16 NVMe-oF #
This chapter describes how to set up an NVMe-oF host and target.
16.1 Overview #
NVM Express (NVMe) is an interface standard for accessing non-volatile storage, commonly SSDs. NVMe supports much higher speeds and lower latency than SATA.
NVMe-oF is an architecture for accessing NVMe storage over different networking fabrics, for example RDMA or NVMe over Fibre Channel (FC-NVMe). The role of NVMe-oF is similar to that of iSCSI. To increase fault tolerance, NVMe-oF has built-in support for multipathing.
The NVMe host is the machine that connects to an NVMe target. The NVMe target is the machine that shares its NVMe block devices.
NVMe is supported on SUSE Linux Enterprise Server 12 SP5. Kernel modules are available for NVMe block storage as well as for the NVMe-oF target and host.
To see if your hardware requires any special consideration, refer to Section 16.4, “Special Hardware Configuration”.
16.2 Setting Up an NVMe-oF Host #
To use NVMe-oF, a target must be available with one of the supported networking methods. Supported are NVMe over Fibre Channel and RDMA. The following sections describe how to connect a host to an NVMe target.
16.2.1 Installing Command Line Client #
To use NVMe-oF, you need the nvme command line tool. Install it with zypper:
tux > sudo zypper in nvme-cli
Use nvme --help to list all available subcommands. Man pages are available for the nvme subcommands. Consult them by executing man nvme-SUBCOMMAND. For example, to view the man page for the discover subcommand, execute man nvme-discover.
16.2.2 Discovering NVMe-oF Targets #
To list available NVMe subsystems on the NVMe-oF target, you need the discovery controller address and service ID.
tux > sudo nvme discover -t TRANSPORT -a DISCOVERY_CONTROLLER_ADDRESS -s SERVICE_ID
Replace TRANSPORT with the underlying transport medium: loop, rdma, or fc. Replace DISCOVERY_CONTROLLER_ADDRESS with the address of the discovery controller. For RDMA, this should be an IPv4 address. Replace SERVICE_ID with the transport service ID. If the service is IP based, as with RDMA, the service ID specifies the port number. For Fibre Channel, the service ID is not required.
The NVMe hosts only see the subsystems they are allowed to connect to.
Example:
tux > sudo nvme discover -t rdma -a 10.0.0.1 -s 4420
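The command prints one discovery log entry per subsystem that the host is allowed to see. The following abridged output is an illustrative sketch matching the example above, not taken from a real setup:
Discovery Log Number of Records 1, Generation counter 2
=====Discovery Log Entry 0======
trtype:  rdma
adrfam:  ipv4
subtype: nvme subsystem
portid:  1
trsvcid: 4420
subnqn:  nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432
traddr:  10.0.0.1
The subnqn field contains the subsystem NQN needed for the connect step in the next section.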
For more details, see man nvme-discover.
16.2.3 Connecting to NVMe-oF Targets #
After you have identified the NVMe subsystem, you can connect to it with the nvme connect command.
tux > sudo nvme connect -t TRANSPORT -a DISCOVERY_CONTROLLER_ADDRESS -s SERVICE_ID -n SUBSYSTEM_NQN
Replace TRANSPORT with the underlying transport medium: loop, rdma, or fc. Replace DISCOVERY_CONTROLLER_ADDRESS with the address of the discovery controller. For RDMA, this should be an IPv4 address. Replace SERVICE_ID with the transport service ID. If the service is IP based, as with RDMA, this specifies the port number. Replace SUBSYSTEM_NQN with the NVMe Qualified Name (NQN) of the desired subsystem as found by the discovery command. The NQN must be unique.
Example:
tux > sudo nvme connect -t rdma -a 10.0.0.1 -s 4420 -n nqn.2014-08.com.example:nvme:nvm-subsystem-sn-d78432
Alternatively, use nvme connect-all to connect to all discovered namespaces. For advanced usage, see man nvme-connect and man nvme-connect-all.
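After a successful connect, the namespaces of the remote subsystem show up as additional local NVMe block devices. You can verify this with nvme list:
tux > sudo nvme list
The output contains one line per namespace, for example a device such as /dev/nvme1n1 (the exact name depends on how many NVMe devices are already present), which can then be partitioned, formatted, and mounted like a local disk.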
16.2.4 Multipathing #
NVMe native multipathing is disabled by default. To print the layout of the multipath devices, use the command nvme list-subsys. To enable NVMe native multipathing, add nvme-core.multipath=on as a boot parameter.
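One way to set the parameter permanently is to append it to the GRUB_CMDLINE_LINUX_DEFAULT variable in /etc/default/grub and regenerate the boot loader configuration, assuming the default GRUB 2 setup of SUSE Linux Enterprise Server:
tux > sudo grub2-mkconfig -o /boot/grub2/grub.cfg
The parameter takes effect after the next reboot.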
16.3 Setting Up an NVMe-oF Target #
16.3.1 Installing Command Line Client #
To configure an NVMe-oF target, you need the nvmetcli command line tool. Install it with zypper:
tux > sudo zypper in nvmetcli
The current documentation for nvmetcli is available at http://git.infradead.org/users/hch/nvmetcli.git/blob_plain/HEAD:/Documentation/nvmetcli.txt.
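nvmetcli configures the target through the kernel's configfs interface. Depending on your setup, the corresponding kernel target modules may need to be loaded before you start configuring; a sketch, assuming an RDMA transport:
tux > sudo modprobe nvmet nvmet-rdma
For Fibre Channel the analogous module is nvmet-fc, and nvme-loop provides a local loopback target for testing.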
16.3.2 Configuration Steps #
The following procedure provides an example of how to set up an NVMe-oF target.
The configuration is stored in a tree structure. Use the command cd to navigate. Use ls to list objects. You can create new objects with create.
Start the nvmetcli interactive shell:
tux > sudo nvmetcli
Create a new port:
(nvmetcli)> cd ports
(nvmetcli)> create 1
(nvmetcli)> ls 1/
o- 1
  o- referrals
  o- subsystems
Create an NVMe subsystem:
(nvmetcli)> cd /subsystems
(nvmetcli)> create nqn.2014-08.org.nvmexpress:NVMf:uuid:c36f2c23-354d-416c-95de-f2b8ec353a82
(nvmetcli)> cd nqn.2014-08.org.nvmexpress:NVMf:uuid:c36f2c23-354d-416c-95de-f2b8ec353a82/
(nvmetcli)> ls
o- nqn.2014-08.org.nvmexpress:NVMf:uuid:c36f2c23-354d-416c-95de-f2b8ec353a82
  o- allowed_hosts
  o- namespaces
Create a new namespace and assign an NVMe device to it:
(nvmetcli)> cd namespaces
(nvmetcli)> create 1
(nvmetcli)> cd 1
(nvmetcli)> set device path=/dev/nvme0n1
Parameter path is now '/dev/nvme0n1'.
Enable the previously created namespace:
(nvmetcli)> cd ..
(nvmetcli)> enable
The Namespace has been enabled.
Display the created namespace:
(nvmetcli)> cd ..
(nvmetcli)> ls
o- nqn.2014-08.org.nvmexpress:NVMf:uuid:c36f2c23-354d-416c-95de-f2b8ec353a82
  o- allowed_hosts
  o- namespaces
    o- 1
Allow all hosts to use the subsystem. Only do this in secure environments.
(nvmetcli)> set attr allow_any_host=1
Parameter allow_any_host is now '1'.
Alternatively, you can allow only specific hosts to connect:
(nvmetcli)> cd nqn.2014-08.org.nvmexpress:NVMf:uuid:c36f2c23-354d-416c-95de-f2b8ec353a82/allowed_hosts/
(nvmetcli)> create hostnqn
List all created objects:
(nvmetcli)> cd /
(nvmetcli)> ls
o- /
  o- hosts
  o- ports
  | o- 1
  |   o- referrals
  |   o- subsystems
  o- subsystems
    o- nqn.2014-08.org.nvmexpress:NVMf:uuid:c36f2c23-354d-416c-95de-f2b8ec353a82
      o- allowed_hosts
      o- namespaces
        o- 1
Make the target available via RDMA:
(nvmetcli)> cd ports/1/
(nvmetcli)> set addr adrfam=ipv4 trtype=rdma traddr=10.0.0.1 trsvcid=4420
Parameter trtype is now 'rdma'.
Parameter adrfam is now 'ipv4'.
Parameter trsvcid is now '4420'.
Parameter traddr is now '10.0.0.1'.
Alternatively, you can make it available with Fibre Channel:
(nvmetcli)> cd ports/1/
(nvmetcli)> set addr adrfam=fc trtype=fc traddr=nn-0x1000000044001123:pn-0x2000000055001123 trsvcid=none
16.3.3 Back Up and Restore Target Configuration #
You can save the target configuration in a JSON file with the following commands:
tux > sudo nvmetcli
(nvmetcli)> saveconfig nvme-target-backup.json
To restore the configuration, use:
(nvmetcli)> restore nvme-target-backup.json
You can also wipe the current configuration:
(nvmetcli)> clear
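nvmetcli can also run these commands non-interactively, which is useful in scripts or for restoring the configuration at boot time:
tux > sudo nvmetcli restore nvme-target-backup.json
If no file name is given, restore and saveconfig operate on /etc/nvmet/config.json by default; see the nvmetcli documentation linked above for details.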
16.4 Special Hardware Configuration #
16.4.1 Overview #
Some hardware needs special configuration to work correctly. Skim the titles of the following sections to see if you are using any of the mentioned devices or vendors.
16.4.2 Broadcom #
If you are using the Broadcom Emulex LightPulse Fibre Channel SCSI driver, add a Kernel configuration parameter on the target and host for the lpfc module:
tux > echo "options lpfc lpfc_enable_fc4_type=3" | sudo tee /etc/modprobe.d/lpfc.conf
Make sure that the Broadcom adapter firmware has at least version 11.2.156.27. Also make sure that you have the current versions of nvme-cli, nvmetcli, and the Kernel installed.
To enable a Fibre Channel port as an NVMe target, set the module parameter lpfc_enable_nvmet=COMMA_SEPARATED_WWPNS. Only listed WWPNs will be configured for target mode. A Fibre Channel port can either be configured as a target or as an initiator.
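For example, to put a single port into target mode, the parameter can be added to the modprobe configuration file created above, followed by rebuilding the initrd so the setting is applied at boot. The WWPN below is a placeholder; substitute the WWPNs of your own adapter:
tux > echo "options lpfc lpfc_enable_fc4_type=3 lpfc_enable_nvmet=0x10000090fa932441" | sudo tee /etc/modprobe.d/lpfc.conf
tux > sudo dracut --force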
16.4.3 Marvell #
FC-NVMe is supported on QLE269x and QLE27xx adapters. FC-NVMe support is enabled by default in the Marvell® QLogic® QLA2xxx Fibre Channel driver.
To confirm NVMe is enabled, run the following command:
tux > cat /sys/module/qla2xxx/parameters/ql2xnvmeenable
A resulting 1 suggests NVMe is enabled; a 0 indicates it is disabled.
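If the parameter reads 0, it can be enabled explicitly via a modprobe option; a sketch (the configuration file name is an assumption):
tux > echo "options qla2xxx ql2xnvmeenable=1" | sudo tee /etc/modprobe.d/qla2xxx.conf
tux > sudo dracut --force
The setting takes effect after reloading the qla2xxx module or rebooting.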
Next, ensure that the Marvell adapter firmware has at least version 8.08.204 by checking the output of the following command:
tux > cat /sys/class/scsi_host/host0/fw_version
Last, ensure that the latest versions available for SUSE Linux Enterprise Server of nvme-cli, QConvergeConsoleCLI, and the Kernel are installed. You can, for example, run
root # zypper lu && zypper pchk
to check for updates and patches.
For more details on installation, please refer to the FC-NVMe sections of the following Marvell user guides:
16.5 More Information #
For more details about the abilities of the nvme command, refer to nvme --help and the nvme man pages.
The following links provide a basic introduction to NVMe and NVMe-oF: