Applies to SUSE Enterprise Storage 5.5 (SES 5 & SES 5.5)

10 Installation of iSCSI Gateway

iSCSI is a storage area network (SAN) protocol that allows clients (called initiators) to send SCSI commands to SCSI storage devices (targets) on remote servers. SUSE Enterprise Storage 5.5 includes a facility that opens Ceph storage management to heterogeneous clients, such as Microsoft Windows* and VMware* vSphere, through the iSCSI protocol. Multipath iSCSI access enables availability and scalability for these clients, and the standardized iSCSI protocol also provides an additional layer of security isolation between clients and the SUSE Enterprise Storage 5.5 cluster. The configuration facility is named lrbd. Using lrbd, Ceph storage administrators can define thin-provisioned, replicated, highly-available volumes supporting read-only snapshots, read-write clones, and automatic resizing with Ceph RADOS Block Device (RBD). Administrators can then export volumes either via a single lrbd gateway host, or via multiple gateway hosts supporting multipath failover. Linux, Microsoft Windows, and VMware hosts can connect to volumes using the iSCSI protocol, which makes them available like any other SCSI block device. This means SUSE Enterprise Storage 5.5 customers can effectively run a complete block-storage infrastructure subsystem on Ceph that provides all the features and benefits of a conventional SAN, enabling future growth.

This chapter presents detailed information on how to set up a Ceph cluster infrastructure together with an iSCSI gateway, so that client hosts can use remotely stored data as if it were a local storage device, using the iSCSI protocol.

10.1 iSCSI Block Storage

iSCSI is an implementation of the Small Computer System Interface (SCSI) command set using the Internet Protocol (IP), specified in RFC 3720. iSCSI is implemented as a service where a client (the initiator) talks to a server (the target) via a session on TCP port 3260. An iSCSI target's IP address and port are called an iSCSI portal, where a target can be exposed through one or more portals. The combination of a target and one or more portals is called the target portal group (TPG).

The underlying data link layer protocol for iSCSI is commonly Ethernet. More specifically, modern iSCSI infrastructures use 10 Gigabit Ethernet or faster networks for optimal throughput. 10 Gigabit Ethernet connectivity between the iSCSI gateway and the back-end Ceph cluster is strongly recommended.

10.1.1 The Linux Kernel iSCSI Target

The Linux kernel iSCSI target was originally named LIO for linux-iscsi.org, the project's original domain and Web site. For some time, no fewer than four competing iSCSI target implementations were available for the Linux platform, but LIO ultimately prevailed as the single iSCSI reference target. The mainline kernel code for LIO uses the simple, but somewhat ambiguous name "target", distinguishing between "target core" and a variety of front-end and back-end target modules.

The most commonly used front-end module is arguably iSCSI. However, LIO also supports Fibre Channel (FC), Fibre Channel over Ethernet (FCoE) and several other front-end protocols. At this time, only the iSCSI protocol is supported by SUSE Enterprise Storage.

The most frequently used target back-end module is one that is capable of simply re-exporting any available block device on the target host. This module is named iblock. However, LIO also has an RBD-specific back-end module supporting parallelized multipath I/O access to RBD images.

10.1.2 iSCSI Initiators

This section briefly introduces the iSCSI initiators used on Linux, Microsoft Windows, and VMware platforms.

10.1.2.1 Linux

The standard initiator for the Linux platform is open-iscsi. open-iscsi launches a daemon, iscsid, which the user can then use to discover iSCSI targets on any given portal, log in to targets, and map iSCSI volumes. iscsid communicates with the SCSI mid layer to create in-kernel block devices that the kernel can then treat like any other SCSI block device on the system. The open-iscsi initiator can be deployed in conjunction with the Device Mapper Multipath (dm-multipath) facility to provide a highly available iSCSI block device.
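
For example, a typical discovery and login sequence looks like this (a sketch; the portal address and target IQN are the example values used later in this chapter):

root # iscsiadm -m discovery -t sendtargets -p 192.168.124.104
root # iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol --login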

10.1.2.2 Microsoft Windows and Hyper-V

The default iSCSI initiator for the Microsoft Windows operating system is the Microsoft iSCSI initiator. The iSCSI service can be configured via a graphical user interface (GUI), and supports multipath I/O for high availability.

10.1.2.3 VMware

The default iSCSI initiator for VMware vSphere and ESX is the VMware ESX software iSCSI initiator, vmkiscsi. When enabled, it can be configured either from the vSphere client, or using the vmkiscsi-tool command. You can then format storage volumes connected through the vSphere iSCSI storage adapter with VMFS, and use them like any other VM storage device. The VMware initiator also supports multipath I/O for high availability.

10.2 General Information about lrbd

lrbd combines the benefits of RADOS Block Devices with the ubiquitous versatility of iSCSI. By employing lrbd on an iSCSI target host (known as the lrbd gateway), any application that needs to make use of block storage can benefit from Ceph, even if it does not speak any Ceph client protocol. Instead, users can use iSCSI or any other target front-end protocol to connect to an LIO target, which translates all target I/O to RBD storage operations.

Figure 10.1: Ceph Cluster with a Single iSCSI Gateway

lrbd is inherently highly-available and supports multipath operations. Thus, downstream initiator hosts can use multiple iSCSI gateways for both high availability and scalability. When communicating with an iSCSI configuration with more than one gateway, initiators may load-balance iSCSI requests across multiple gateways. In the event of a gateway failing, being temporarily unreachable, or being disabled for maintenance, I/O will transparently continue via another gateway.

Figure 10.2: Ceph Cluster with Multiple iSCSI Gateways

10.3 Deployment Considerations

A minimum configuration of SUSE Enterprise Storage 5.5 with lrbd consists of the following components:

  • A Ceph storage cluster. The Ceph cluster consists of a minimum of four physical servers hosting at least eight object storage daemons (OSDs) each. In such a configuration, three OSD nodes also double as a monitor (MON) host.

  • An iSCSI target server running the LIO iSCSI target, configured via lrbd.

  • An iSCSI initiator host, running open-iscsi (Linux), the Microsoft iSCSI Initiator (Microsoft Windows), or any other compatible iSCSI initiator implementation.

A recommended production configuration of SUSE Enterprise Storage 5.5 with lrbd consists of:

  • A Ceph storage cluster. A production Ceph cluster consists of any number of (typically more than 10) OSD nodes, each typically running 10-12 object storage daemons (OSDs), with no fewer than three dedicated MON hosts.

  • Several iSCSI target servers running the LIO iSCSI target, configured via lrbd. For iSCSI fail-over and load-balancing, these servers must run a kernel supporting the target_core_rbd module. Update packages are available from the SUSE Linux Enterprise Server maintenance channel.

  • Any number of iSCSI initiator hosts, running open-iscsi (Linux), the Microsoft iSCSI Initiator (Microsoft Windows), or any other compatible iSCSI initiator implementation.

10.4 Installation and Configuration

This section describes steps to install and configure an iSCSI Gateway on top of SUSE Enterprise Storage.

10.4.1 Deploy the iSCSI Gateway to a Ceph Cluster

You can deploy the iSCSI Gateway either during the Ceph cluster deployment process, or add it to an existing cluster using DeepSea.

To include the iSCSI Gateway during the cluster deployment process, refer to Section 4.5.1.2, “Role Assignment”.

To add the iSCSI Gateway to an existing cluster, refer to Section 1.2, “Adding New Roles to Nodes”.
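
For illustration, adding the gateway with DeepSea amounts to assigning the igw role in /srv/pillar/ceph/proposals/policy.cfg and re-running the relevant stages. A hypothetical excerpt and stage sequence follows (the minion name pattern is a placeholder; see the referenced sections for the authoritative procedure):

## policy.cfg: assign the iSCSI Gateway role to the chosen minion
role-igw/cluster/igw*.example.com.sls

root@master # salt-run state.orch ceph.stage.2
root@master # salt-run state.orch ceph.stage.3
root@master # salt-run state.orch ceph.stage.4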

10.4.2 Create RBD Images

RBD images are created in the Ceph store and subsequently exported to iSCSI. We recommend that you use a dedicated RADOS pool for this purpose. You can create a volume from any host that is able to connect to your storage cluster using the Ceph rbd command line utility. This requires the client to have at least a minimal ceph.conf configuration file, and appropriate CephX authentication credentials.

To create a new volume for subsequent export via iSCSI, use the rbd create command, specifying the volume size in megabytes. For example, in order to create a 100 GB volume named testvol in the pool named iscsi, run:

cephadm > rbd --pool iscsi create --size=102400 testvol

The above command creates an RBD volume in the default format 2.

Note

Since SUSE Enterprise Storage 3, the default volume format is 2, and format 1 is deprecated. However, you can still create the deprecated format 1 volumes with the --image-format 1 option.
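
For example, to create such a format 1 volume (here named testvol1 purely for illustration), run:

cephadm > rbd --pool iscsi create --image-format 1 --size=102400 testvol1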

10.4.3 Export RBD Images via iSCSI

To export RBD images via iSCSI, use the lrbd utility. lrbd allows you to create, review, and modify the iSCSI target configuration, which uses a JSON format.

Tip: Import Changes into openATTIC

Any changes made to the iSCSI Gateway configuration using the lrbd command are not visible to DeepSea and openATTIC. To import your manual changes, you need to export the iSCSI Gateway configuration to a file:

root@minion > lrbd -o /tmp/lrbd.conf

Then copy it to the Salt master so that DeepSea and openATTIC can see it:

root@minion > scp /tmp/lrbd.conf ses5master:/srv/salt/ceph/igw/cache/lrbd.conf

Finally, edit /srv/pillar/ceph/stack/global.yml and set:

igw_config: default-ui

In order to edit the configuration, use lrbd -e or lrbd --edit. This command invokes the default editor, as defined by the EDITOR environment variable. You can override this behavior by specifying the -E option in addition to -e.
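
For example, to edit the configuration with a specific editor regardless of your default, you can override the environment variable for a single invocation (vim is used here only as an example):

root # EDITOR=vim lrbd -e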

Below is an example configuration for

  • two iSCSI gateway hosts named iscsi1.example.com and iscsi2.example.com,

  • defining a single iSCSI target with an iSCSI Qualified Name (IQN) of iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol,

  • with a single iSCSI Logical Unit (LU),

  • backed by an RBD image named testvol in the RADOS pool rbd,

  • and exporting the target via two portals named "east" and "west":

{
    "auth": [
        {
            "target": "iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol",
            "authentication": "none"
        }
    ],
    "targets": [
        {
            "target": "iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol",
            "hosts": [
                {
                    "host": "iscsi1.example.com",
                    "portal": "east"
                },
                {
                    "host": "iscsi2.example.com",
                    "portal": "west"
                }
            ]
        }
    ],
    "portals": [
        {
            "name": "east",
            "addresses": [
                "192.168.124.104"
            ]
        },
        {
            "name": "west",
            "addresses": [
                "192.168.124.105"
            ]
        }
    ],
    "pools": [
        {
            "pool": "rbd",
            "gateways": [
                {
                    "target": "iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol",
                    "tpg": [
                        {
                            "image": "testvol"
                        }
                    ]
                }
            ]
        }
    ]
}

Note that whenever you refer to a host name in the configuration, this host name must match the iSCSI gateway's uname -n command output.
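
For example, on the first gateway of the configuration above, the output should read (illustrative output):

root # uname -n
iscsi1.example.com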

The edited JSON is stored in the extended attributes (xattrs) of a single RADOS object per pool. This object is available to the gateway hosts where the JSON is edited, as well as to all gateway hosts connected to the same Ceph cluster. No configuration information is stored locally on the lrbd gateway.

To activate the configuration, store it in the Ceph cluster, and do one of the following (as root):

  • Run the lrbd command (without additional options) from the command line,

or

  • Restart the lrbd service with service lrbd restart.

The lrbd "service" does not operate any background daemon. Instead, it simply invokes the lrbd command. This type of service is known as a "one-shot" service.

You should also enable lrbd to auto-configure on system start-up. To do so, run the systemctl enable lrbd command.
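
For example, to apply the stored configuration now and have it re-applied automatically on every boot, you would run:

root # lrbd
root # systemctl enable lrbd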

The configuration above reflects a simple setup. lrbd configuration can be much more complex and powerful. The lrbd RPM package comes with an extensive set of configuration examples, which you can find in the /usr/share/doc/packages/lrbd/samples directory after installation. The samples are also available from https://github.com/SUSE/lrbd/tree/master/samples.

10.4.4 Authentication and Access Control

iSCSI authentication is flexible and covers many possibilities. The five possible top-level settings are none, tpg, acls, tpg+identified, and identified.

10.4.4.1 No Authentication

'No authentication' means that no initiator will require a user name and password to access any LUNs for a specified host or target. 'No authentication' can be set explicitly or implicitly. To set it explicitly, specify a value of 'none' for the authentication setting:

{
    "host": "igw1",
    "authentication": "none"
}

Removing the entire auth section from the configuration selects no authentication implicitly.

10.4.4.2 TPG Authentication

For common credentials or a shared user name/password, set authentication to tpg. This setting will apply to all initiators for the associated host or target. In the following example, the same user name and password are used for the redundant target and a target local to igw1:

{
    "target": "iqn.2003-01.org.linux-iscsi.igw.x86:sn.redundant",
    "authentication": "tpg",
    "tpg": {
        "userid": "common1",
        "password": "pass1"
    }
},
{
    "host": "igw1",
    "authentication": "tpg",
    "tpg": {
        "userid": "common1",
        "password": "pass1"
    }
}

A redundant configuration uses the same credentials across its gateways, but is independent of other configurations. In other words, LUNs configured specifically for a host and each redundant configuration can have their own unique user name and password.

One caveat is that any initiator setting will be ignored when using tpg authentication. Using common credentials does not restrict which initiators may connect. This configuration may be suitable in isolated network environments.

10.4.4.3 ACLs Authentication

For unique credentials for each initiator, set authentication to acls. Additionally, only defined initiators are allowed to connect.

{
    "host": "igw1",
    "authentication": "acls",
    "acls": [
        {
            "initiator": "iqn.1996-04.de.suse:01:e6ca28cc9f20",
            "userid": "initiator1",
            "password": "pass1"
        }
    ]
},

10.4.4.4 TPG+identified Authentication

The previous two authentication settings pair two independent features: TPG pairs common credentials with unidentified initiators, while ACLs pair unique credentials with identified initiators.

Setting authentication to tpg+identified pairs common credentials with identified initiators. Although you could imitate the same behavior by choosing acls and repeating the same credentials for each initiator, the configuration would grow large and become harder to maintain.

The following configuration is identical to the tpg example, with only the authentication keyword changed.

{
    "target": "iqn.2003-01.org.linux-iscsi.igw.x86:sn.redundant",
    "authentication": "tpg+identified",
    "tpg": {
        "userid": "common1",
        "password": "pass1"
    }
},
{
    "host": "igw1",
    "authentication": "tpg+identified",
    "tpg": {
        "userid": "common1",
        "password": "pass1"
    }
}

The list of initiators is gathered from those defined in the pools for the given hosts and targets in the authentication section.
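
As an illustration, an image can be assigned to a particular initiator in the pools section along these lines (a hypothetical excerpt; the initiator keyword follows the layout of the lrbd sample files referenced in Section 10.4.4.6):

"pools": [
    {
        "pool": "rbd",
        "gateways": [
            {
                "host": "igw1",
                "tpg": [
                    {
                        "image": "testvol",
                        "initiator": "iqn.1996-04.de.suse:01:e6ca28cc9f20"
                    }
                ]
            }
        ]
    }
]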

10.4.4.5 Identified Authentication

This type of authentication does not use any credentials. In secure environments where only assignment of initiators is needed, set the authentication to identified. All initiators will connect but only have access to the images listed in the pools section.

{
    "target": "iqn.2003-01.org.linux-iscsi.igw.x86:sn.redundant",
    "authentication": "identified"
},
{
    "host": "igw1",
    "authentication": "identified"
}

10.4.4.6 Discovery and Mutual Authentication

Discovery authentication is independent of the previous authentication methods. It requires credentials for browsing (discovering) the targets available on a portal.

The tpg, tpg+identified, acls, and discovery authentication types support mutual authentication. When the mutual settings are specified, the target must also authenticate itself to the initiator.

Discovery and mutual authentication are optional. These options can be present but disabled, which allows experimenting with a particular configuration. After you decide on a configuration, you can remove the disabled entries without breaking it.
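
For instance, credentials for mutual and discovery authentication can be kept in the configuration but switched off; the following is a sketch assuming the key names used in the lrbd sample files (verify against the samples referenced below):

"auth": [
    {
        "host": "igw1",
        "authentication": "tpg",
        "tpg": {
            "userid": "common1",
            "password": "pass1",
            "mutual": "disable",
            "userid_mutual": "common1m",
            "password_mutual": "pass1m"
        },
        "discovery": {
            "auth": "disable",
            "userid": "du1",
            "password": "dp1"
        }
    }
]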

Refer to the examples in /usr/share/doc/packages/lrbd/samples. You can combine excerpts from one file with others to create unique configurations.

10.4.5 Optional Settings

The following settings may be useful for some environments. For images, there are uuid, lun, retries, sleep, and retry_errors attributes. The first two, uuid and lun, allow hard-coding the 'uuid' or 'lun' for a specific image. You can specify either of them for an image. The retries, sleep, and retry_errors settings affect attempts to map an RBD image.

Tip

If a site needs statically assigned LUNs, then assign numbers to each LUN.

"pools": [
    {
        "pool": "rbd",
        "gateways": [
            {
                "host": "igw1",
                "tpg": [
                    {
                        "image": "archive",
                        "uuid": "12345678-abcd-9012-efab-345678901234",
                        "lun": "2",
                        "retries": "3",
                        "sleep": "4",
                        "retry_errors": [ 95 ],
                        [...]
                    }
                ]
            }
        ]
    }
]

10.4.6 Advanced Settings

lrbd can be configured with advanced parameters which are subsequently passed on to the LIO I/O target. The parameters are divided up into iSCSI and backing store components, which can then be specified in the "targets" and "tpg" sections, respectively, of the lrbd configuration.

Warning

Unless otherwise noted, changing these parameters from the default setting is not recommended.

"targets": [
    {
        [...]
        "tpg_default_cmdsn_depth": "64",
        "tpg_default_erl": "0",
        "tpg_login_timeout": "10",
        "tpg_netif_timeout": "2",
        "tpg_prod_mode_write_protect": "0",
    }
]

A description of the options follows:

tpg_default_cmdsn_depth

Default CmdSN (Command Sequence Number) depth. Limits the number of requests that an iSCSI initiator can have outstanding at any moment.

tpg_default_erl

Default error recovery level.

tpg_login_timeout

Login timeout value in seconds.

tpg_netif_timeout

NIC failure timeout in seconds.

tpg_prod_mode_write_protect

If set to 1, prevents writes to LUNs.

"pools": [
    {
        "pool": "rbd",
        "gateways": [
            {
                "host": "igw1",
                "tpg": [
                    {
                        "image": "archive",
                        "backstore_block_size": "512",
                        "backstore_emulate_3pc": "1",
                        "backstore_emulate_caw": "1",
                        "backstore_emulate_dpo": "0",
                        "backstore_emulate_fua_read": "0",
                        "backstore_emulate_fua_write": "1",
                        "backstore_emulate_model_alias": "0",
                        "backstore_emulate_pr": "1",
                        "backstore_emulate_rest_reord": "0",
                        "backstore_emulate_tas": "1",
                        "backstore_emulate_tpu": "0",
                        "backstore_emulate_tpws": "0",
                        "backstore_emulate_ua_intlck_ctrl": "0",
                        "backstore_emulate_write_cache": "0",
                        "backstore_enforce_pr_isids": "1",
                        "backstore_fabric_max_sectors": "8192",
                        "backstore_hw_block_size": "512",
                        "backstore_hw_max_sectors": "8192",
                        "backstore_hw_pi_prot_type": "0",
                        "backstore_hw_queue_depth": "128",
                        "backstore_is_nonrot": "1",
                        "backstore_max_unmap_block_desc_count": "1",
                        "backstore_max_unmap_lba_count": "8192",
                        "backstore_max_write_same_len": "65535",
                        "backstore_optimal_sectors": "8192",
                        "backstore_pi_prot_format": "0",
                        "backstore_pi_prot_type": "0",
                        "backstore_queue_depth": "128",
                        "backstore_unmap_granularity": "8192",
                        "backstore_unmap_granularity_alignment": "4194304"
                    }
                ]
            }
        ]
    }
]

A description of the options follows:

backstore_block_size

Block size of the underlying device.

backstore_emulate_3pc

If set to 1, enables Third Party Copy.

backstore_emulate_caw

If set to 1, enables Compare and Write.

backstore_emulate_dpo

If set to 1, turns on Disable Page Out.

backstore_emulate_fua_read

If set to 1, enables Force Unit Access read.

backstore_emulate_fua_write

If set to 1, enables Force Unit Access write.

backstore_emulate_model_alias

If set to 1, uses the back-end device name for the model alias.

backstore_emulate_pr

If set to 0, support for SCSI Reservations, including Persistent Group Reservations, is disabled. While disabled, the SES iSCSI Gateway can ignore reservation state, resulting in improved request latency.

Tip

Setting backstore_emulate_pr to 0 is recommended if iSCSI initiators do not require SCSI Reservation support.
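
A minimal excerpt, following the image-settings layout shown above, that disables reservation support for a single image:

"tpg": [
    {
        "image": "archive",
        "backstore_emulate_pr": "0"
    }
]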

backstore_emulate_rest_reord

If set to 0, the Queue Algorithm Modifier has Restricted Reordering.

backstore_emulate_tas

If set to 1, enables Task Aborted Status.

backstore_emulate_tpu

If set to 1, enables Thin Provisioning Unmap.

backstore_emulate_tpws

If set to 1, enables Thin Provisioning Write Same.

backstore_emulate_ua_intlck_ctrl

If set to 1, enables Unit Attention Interlock.

backstore_emulate_write_cache

If set to 1, turns on Write Cache Enable.

backstore_enforce_pr_isids

If set to 1, enforces persistent reservation ISIDs.

backstore_fabric_max_sectors

Maximum number of sectors the fabric can transfer at once.

backstore_hw_block_size

Hardware block size in bytes.

backstore_hw_max_sectors

Maximum number of sectors the hardware can transfer at once.

backstore_hw_pi_prot_type

If non-zero, DIF protection is enabled on the underlying hardware.

backstore_hw_queue_depth

Hardware queue depth.

backstore_is_nonrot

If set to 1, the backstore is a non-rotational device.

backstore_max_unmap_block_desc_count

Maximum number of block descriptors for UNMAP.

backstore_max_unmap_lba_count

Maximum number of LBAs for UNMAP.

backstore_max_write_same_len

Maximum length for WRITE_SAME.

backstore_optimal_sectors

Optimal request size in sectors.

backstore_pi_prot_format

DIF protection format.

backstore_pi_prot_type

DIF protection type.

backstore_queue_depth

Queue depth.

backstore_unmap_granularity

UNMAP granularity.

backstore_unmap_granularity_alignment

UNMAP granularity alignment.

For targets, the tpg attributes allow tuning of kernel parameters. Use with caution.

"targets": [
{
    "host": "igw1",
    "target": "iqn.2003-01.org.linux-iscsi.generic.x86:sn.abcdefghijk",
    "tpg_default_cmdsn_depth": "64",
    "tpg_default_erl": "0",
    "tpg_login_timeout": "10",
    "tpg_netif_timeout": "2",
    "tpg_prod_mode_write_protect": "0",
    "tpg_t10_pi": "0"
}

For initiators, the attrib and param settings allow the tuning of kernel parameters. Use with caution. These are set in the authentication section. If the authentication is tpg+identified or identified, then the subsection is identified.

"auth": [
  {
      "authentication": "tpg+identified",
      "identified": [
        {
          "initiator": "iqn.1996-04.de.suse:01:e6ca28cc9f20",
          "attrib_dataout_timeout": "3",
          "attrib_dataout_timeout_retries": "5",
          "attrib_default_erl": "0",
          "attrib_nopin_response_timeout": "30",
          "attrib_nopin_timeout": "15",
          "attrib_random_datain_pdu_offsets": "0",
          "attrib_random_datain_seq_offsets": "0",
          "attrib_random_r2t_offsets": "0",
          "param_DataPDUInOrder": "1",
          "param_DataSequenceInOrder": "1",
          "param_DefaultTime2Retain": "0",
          "param_DefaultTime2Wait": "2",
          "param_ErrorRecoveryLevel": "0",
          "param_FirstBurstLength": "65536",
          "param_ImmediateData": "1",
          "param_InitialR2T": "1",
          "param_MaxBurstLength": "262144",
          "param_MaxConnections": "1",
          "param_MaxOutstandingR2T": "1"
        }
      ]
  }
]

If the authentication is acls, then the settings are included in the acls subsection. One caveat is that settings are only applied for active initiators. If an initiator is absent from the pools section, the acl entry is not created and settings cannot be applied.

"auth": [
    {
        "host": "igw1",
        "authentication": "acls",
        "acls": [
            {
                "initiator": "iqn.1996-04.de.suse:01:e6ca28cc9f20",
                "userid": "initiator1",
                "password": "pass1",
                "attrib_dataout_timeout": "3",
                "attrib_dataout_timeout_retries": "5",
                "attrib_default_erl": "0",
                "attrib_nopin_response_timeout": "30",
                "attrib_nopin_timeout": "15",
                "attrib_random_datain_pdu_offsets": "0",
                "attrib_random_datain_seq_offsets": "0",
                "attrib_random_r2t_offsets": "0",
                "param_DataPDUInOrder": "1",
                "param_DataSequenceInOrder": "1",
                "param_DefaultTime2Retain": "0",
                "param_DefaultTime2Wait": "2",
                "param_ErrorRecoveryLevel": "0",
                "param_FirstBurstLength": "65536",
                "param_ImmediateData": "1",
                "param_InitialR2T": "1",
                "param_MaxBurstLength": "262144",
                "param_MaxConnections": "1",
                "param_MaxOutstandingR2T": "1"
            }
        ]
    }
]

10.5 Exporting RADOS Block Device Images using tcmu-runner

Since version 5, SUSE Enterprise Storage ships a user space RBD back-end for tcmu-runner (see man 8 tcmu-runner for details).

Warning: Technology Preview

tcmu-runner based iSCSI Gateway deployments are currently a technology preview. See Chapter 10, Installation of iSCSI Gateway for instructions on kernel-based iSCSI Gateway deployment with lrbd.

Unlike kernel-based lrbd iSCSI Gateway deployments, tcmu-runner based iSCSI Gateways do not offer support for multipath I/O or SCSI Persistent Reservations.

As DeepSea and openATTIC do not currently support tcmu-runner deployments, you need to manage the installation, deployment, and monitoring manually.

10.5.1 Installation

On your iSCSI Gateway node, install the tcmu-runner-handler-rbd package from the SUSE Enterprise Storage 5 media, together with the libtcmu1 and tcmu-runner package dependencies. Install the targetcli-fb package for configuration purposes. Note that the targetcli-fb package is incompatible with the 'non-fb' version of the targetcli package.
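
For example, the packages can be installed with zypper (dependencies such as libtcmu1 and tcmu-runner are then pulled in automatically):

root # zypper in tcmu-runner-handler-rbd targetcli-fb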

Enable the tcmu-runner systemd service and confirm that it is running:

root # systemctl enable tcmu-runner
root # systemctl status tcmu-runner
● tcmu-runner.service - LIO Userspace-passthrough daemon
  Loaded: loaded (/usr/lib/systemd/system/tcmu-runner.service; static; vendor
  preset: disabled)
    Active: active (running) since ...

10.5.2 Configuration and Deployment

Create a RADOS Block Device image on your existing Ceph cluster. In the following example, we will use a 10G image called 'tcmu-lu' located in the 'rbd' pool.
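
Such an image can be created with the rbd command, following the same pattern as in Section 10.4.2 (the size is given in megabytes):

cephadm > rbd --pool rbd create --size=10240 tcmu-lu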

Following RADOS Block Device image creation, run targetcli, and ensure that the tcmu-runner RBD handler (plug-in) is available:

root # targetcli
targetcli shell version 2.1.fb46
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> ls
o- / ................................... [...]
  o- backstores ........................ [...]
...
  | o- user:rbd ......... [Storage Objects: 0]

Create a backstore configuration entry for the RBD image:

/> cd backstores/user:rbd
/backstores/user:rbd> create tcmu-lu 10G /rbd/tcmu-lu
Created user-backed storage object tcmu-lu size 10737418240.

Create an iSCSI transport configuration entry. In the following example, the target IQN "iqn.2003-01.org.linux-iscsi.tcmu-gw.x8664:sn.cb3d2a3a" is automatically generated by targetcli for use as a unique iSCSI target identifier:

/backstores/user:rbd> cd /iscsi
/iscsi> create
Created target iqn.2003-01.org.linux-iscsi.tcmu-gw.x8664:sn.cb3d2a3a.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.

Create an ACL entry for the iSCSI initiator(s) that you want to connect to the target. In the following example, an initiator IQN of "iqn.1998-01.com.vmware:esxi-872c4888" is used:

/iscsi> cd iqn.2003-01.org.linux-iscsi.tcmu-gw.x8664:sn.cb3d2a3a/tpg1/acls/
/iscsi/iqn.20...a3a/tpg1/acls> create iqn.1998-01.com.vmware:esxi-872c4888

Finally, link the previously created RBD backstore configuration to the iSCSI target:

/iscsi/iqn.20...a3a/tpg1/acls> cd ../luns
/iscsi/iqn.20...a3a/tpg1/luns> create /backstores/user:rbd/tcmu-lu
Created LUN 0.
Created LUN 0->0 mapping in node ACL iqn.1998-01.com.vmware:esxi-872c4888

Exit the shell to save the existing configuration:

/iscsi/iqn.20...a3a/tpg1/luns> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json

10.5.3 Usage

From your iSCSI initiator (client) node, connect to your newly provisioned iSCSI target using the IQN and host name configured above.
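
On a Linux initiator running open-iscsi, this might look as follows (a sketch; the gateway host name is a placeholder for your iSCSI Gateway node):

root # iscsiadm -m discovery -t sendtargets -p tcmu-gw.example.com
root # iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.tcmu-gw.x8664:sn.cb3d2a3a --login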

"auth": [
    {
        "host": "igw1",
        "authentication": "acls",
        "acls": [
            {
                "initiator": "iqn.1996-04.de.suse:01:e6ca28cc9f20",
                "userid": "initiator1",
                "password": "pass1",
                "attrib_dataout_timeout": "3",
                "attrib_dataout_timeout_retries": "5",
                "attrib_default_erl": "0",
                "attrib_nopin_response_timeout": "30",
                "attrib_nopin_timeout": "15",
                "attrib_random_datain_pdu_offsets": "0",
                "attrib_random_datain_seq_offsets": "0",
                "attrib_random_r2t_offsets": "0",
                "param_DataPDUInOrder": "1",
                "param_DataSequenceInOrder": "1",
                "param_DefaultTime2Retain": "0",
                "param_DefaultTime2Wait": "2",
                "param_ErrorRecoveryLevel": "0",
                "param_FirstBurstLength": "65536",
                "param_ImmediateData": "1",
                "param_InitialR2T": "1",
                "param_MaxBurstLength": "262144",
                "param_MaxConnections": "1",
                "param_MaxOutstandingR2T": "1"
            }
        ]
    },
]