2.5. Managing Node Disks

2.5.1. vinfra node disk list

List node disks:

usage: vinfra node disk list [-a | --node <node>]
-a, --all
List disks on all nodes
--node <node>
Node ID or hostname to list disks on (default: node001.vstoragedomain)

Example:

# vinfra node disk list --node 94d58604-6f30-4339-8578-adb7903b7277 \
-c id -c node_id -c device -c used -c size -c role
+----------------+----------------+--------+--------+-----------+------------+
| id             | node_id        | device | used   | size      | role       |
+----------------+----------------+--------+--------+-----------+------------+
| E0B7CE6F-<...> | 94d58604-<...> | sda    | 5.5GiB | 239.1GiB  | mds-system |
| EAC7DF5D-<...> | 94d58604-<...> | sdb    | 2.1GiB | 1007.8GiB | cs         |
| 49D792CA-<...> | 94d58604-<...> | sdc    | 2.1GiB | 1007.8GiB | cs         |
+----------------+----------------+--------+--------+-----------+------------+

This command lists disks on the node with the ID 94d58604-6f30-4339-8578-adb7903b7277. (The output is abridged to fit on the page.)
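
To list disks on all cluster nodes at once, replace --node with -a (or --all); the same -c column options apply. An illustrative invocation (output omitted):

# vinfra node disk list --all -c node_id -c device -c role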

2.5.2. vinfra node disk show

Show details of a disk:

usage: vinfra node disk show [--node <node>] <disk>
--node <node>
Node ID or hostname (default: node001.vstoragedomain)
<disk>
Disk ID or device name

Example:

# vinfra node disk show EAC7DF5D-9E60-4444-85F7-5CA5738399CC \
--node 94d58604-6f30-4339-8578-adb7903b7277
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| being_released     | False                                |
| device             | sdb                                  |
| disk_status        | ok                                   |
| encryption         |                                      |
| id                 | EAC7DF5D-9E60-4444-85F7-5CA5738399CC |
| is_blink_available | False                                |
| is_blinking        | False                                |
| latency            |                                      |
| lun_id             |                                      |
| model              | Vz_HARDDISK2                         |
| mountpoint         | /vstorage/33aac2d5                   |
| node_id            | 94d58604-6f30-4339-8578-adb7903b7277 |
| role               | cs                                   |
| rpm                |                                      |
| serial_number      | 45589b5823ce4c188b55                 |
| service_id         | 1026                                 |
| service_params     | journal_type: inner_cache            |
|                    | tier: 0                              |
| service_status     | ok                                   |
| slot               |                                      |
| smart_status       | not_supported                        |
| space              | full_size: 1099511627776             |
|                    | size: 1082101518336                  |
|                    | used: 2246164480                     |
| tasks              |                                      |
| temperature        | 0.0                                  |
| transport          |                                      |
| type               | hdd                                  |
+--------------------+--------------------------------------+

This command shows the details of the disk with the ID EAC7DF5D-9E60-4444-85F7-5CA5738399CC attached to the node with the ID 94d58604-6f30-4339-8578-adb7903b7277.
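
Because the <disk> argument accepts either a disk ID or a device name, the same disk can also be queried by its device name. An illustrative invocation (output omitted):

# vinfra node disk show sdb --node 94d58604-6f30-4339-8578-adb7903b7277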

2.5.3. vinfra node disk assign

Add multiple disks to the storage cluster:

usage: vinfra node disk assign --disk <disk>:<role>[:<key=value,...>]
                               [--node <node>]
--disk <disk>:<role>[:<key=value,...>]

Disk configuration in the format:

  • <disk>: disk device ID or name
  • <role>: disk role (cs, mds, journal, mds-journal, mds-system, cs-system, system)
  • comma-separated key=value pairs with keys (optional):
    • tier: disk tier (0, 1, 2 or 3)
    • journal-tier: journal (cache) disk tier (0, 1, 2 or 3)
    • journal-type: journal (cache) disk type (no_cache, inner_cache or external_cache)
    • journal-disk: journal (cache) disk ID or device name
    • journal-size: journal (cache) disk size, in bytes
    • bind-address: bind IP address for the metadata service

For example, sda:cs:tier=0,journal-type=inner_cache. This option can be used multiple times; a fuller invocation is sketched after the task outcome below.

--node <node>
Node ID or hostname (default: node001.vstoragedomain)

Example:

# vinfra node disk assign --disk sdc:cs --node f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4
+---------+--------------------------------------+
| Field   | Value                                |
+---------+--------------------------------------+
| task_id | 080337ba-0508-44a0-9363-eddcd9df9f0d |
+---------+--------------------------------------+

This command creates a task to assign the role cs to the disk sdc on the node with the ID f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4.

Task outcome:

# vinfra task show 080337ba-0508-44a0-9363-eddcd9df9f0d
+---------+------------------------------------------------------------------------------+
| Field   | Value                                                                        |
+---------+------------------------------------------------------------------------------+
| args    | []                                                                           |
| kwargs  | cluster_id: 1                                                                |
|         | disks:                                                                       |
|         | - id: D3BEF4BB-AA3B-4DB6-9376-BC7CDA636700                                   |
|         |   role: cs                                                                   |
|         |   service_params: {}                                                         |
|         | logger:                                                                      |
|         |   __classname: backend.logger.tracer.TracingLogger                           |
|         |   __dict:                                                                    |
|         |     prefix: POST /api/v2/1/nodes/f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4/disks/ |
|         |     token: '3215629651314950'                                                |
|         | node_id: f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4                                |
| name    | backend.tasks.disks.BulkAssignDiskTask                                       |
| result  | {}                                                                           |
| state   | success                                                                      |
| task_id | 080337ba-0508-44a0-9363-eddcd9df9f0d                                         |
+---------+------------------------------------------------------------------------------+
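
The --disk option can also carry the optional key=value pairs and be repeated to add several disks in one call. The sketch below assigns two disks to tier 1 and attaches an external journal to one of them; the device names sdd, sde, and sdf and the parameter values are illustrative (output omitted):

# vinfra node disk assign \
--disk sdd:cs:tier=1,journal-disk=sdf,journal-type=external_cache \
--disk sde:cs:tier=1 \
--node f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4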

2.5.4. vinfra node disk release

Release a disk from the storage cluster. Start data migration from the disk as well as cluster replication and rebalancing to meet the configured redundancy level:

usage: vinfra node disk release [--force] [--node <node>] <disk>
--force
Release without data migration
--node <node>
Node ID or hostname (default: node001.vstoragedomain)
<disk>
Disk ID or device name

Example:

# vinfra node disk release sdc --node f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4
+---------+--------------------------------------+
| Field   | Value                                |
+---------+--------------------------------------+
| task_id | 587a936d-3953-481c-a2cd-b1223b890bec |
+---------+--------------------------------------+

This command creates a task to release the disk sdc, which has the role cs, from the storage cluster on the node with the ID f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4.

Task outcome:

# vinfra task show 587a936d-3953-481c-a2cd-b1223b890bec
+---------+---------------------------------------------------------------------------------+
| Field   | Value                                                                           |
+---------+---------------------------------------------------------------------------------+
| args    | []                                                                              |
| kwargs  | cluster_id: 1                                                                   |
|         | disk_id: 43EF3400-EA95-43DE-B624-3D7ED0F9DDDD                                   |
|         | force: false                                                                    |
|         | logger:                                                                         |
|         |   __classname: backend.logger.tracer.TracingLogger                              |
|         |   __dict:                                                                       |
|         |     prefix: POST /api/v2/1/nodes/f59dabdb-                                      |
|         | bd1c-4944-8af2-26b8fe9ff8d4/disks/43EF3400-EA95-43DE-B624-3D7ED0F9DDDD/release/ |
|         |     token: '3217122839314940'                                                   |
|         | node_id: f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4                                   |
| name    | backend.tasks.disks.ReleaseDiskTask                                             |
| state   | success                                                                         |
| task_id | 587a936d-3953-481c-a2cd-b1223b890bec                                            |
+---------+---------------------------------------------------------------------------------+
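
If the data on the disk does not need to be migrated first, the release can be forced with --force. An illustrative invocation (output omitted):

# vinfra node disk release sdc --force --node f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4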

2.5.7. vinfra node iscsi target add

Add an iSCSI target as a disk to a node:

usage: vinfra node iscsi target add [--auth-username <auth-username>]
                                    [--auth-password <auth-password>]
                                    --portal <portal> --node <node> <target-name>
--auth-username <auth-username>
User name
--auth-password <auth-password>
User password
--portal <portal>
Portal IP address in the format IP:port (this option can be specified multiple times)
--node <node>
Node ID or hostname
<target-name>
Target name

Example:

# vinfra node iscsi target add iqn.2014-06.com.vstorage:target1 \
--portal 172.16.24.244:3260 --node f1931be7-0a01-4977-bfef-51a392adcd94
+---------+--------------------------------------+
| Field   | Value                                |
+---------+--------------------------------------+
| task_id | c42bfbe5-7292-41c2-91cb-446795535ab9 |
+---------+--------------------------------------+

This command creates a task to connect the remote iSCSI target iqn.2014-06.com.vstorage:target1, available at the portal 172.16.24.244:3260, to the node with the ID f1931be7-0a01-4977-bfef-51a392adcd94.

Task outcome:

# vinfra task show c42bfbe5-7292-41c2-91cb-446795535ab9
+---------+---------------------------------------------------------------+
| Field   | Value                                                         |
+---------+---------------------------------------------------------------+
| args    | - f1931be7-0a01-4977-bfef-51a392adcd94                        |
| kwargs  | portals:                                                      |
|         | - address: 172.16.24.244                                      |
|         |   port: 3260                                                  |
|         | target_name: iqn.2014-06.com.vstorage:target1                 |
| name    | backend.presentation.nodes.iscsi_initiators.tasks.ConnectTask |
| result  | connected: true                                               |
|         | portals:                                                      |
|         | - address: 172.16.24.244                                      |
|         |   port: 3260                                                  |
|         | state: connected                                              |
|         | target_name: iqn.2014-06.com.vstorage:target1                 |
| state   | success                                                       |
| task_id | c42bfbe5-7292-41c2-91cb-446795535ab9                          |
+---------+---------------------------------------------------------------+
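
If the target requires authentication, pass the credentials with --auth-username and --auth-password; more than one --portal option can be given if the target is available at several portals. The user name and password below are placeholders (output omitted):

# vinfra node iscsi target add iqn.2014-06.com.vstorage:target1 \
--auth-username user1 --auth-password secret1 \
--portal 172.16.24.244:3260 --node f1931be7-0a01-4977-bfef-51a392adcd94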

2.5.8. vinfra node iscsi target delete

Delete an iSCSI target from a node:

usage: vinfra node iscsi target delete --node <node> <target-name>
--node <node>
Node ID or hostname
<target-name>
Target name

Example:

# vinfra node iscsi target delete iqn.2014-06.com.vstorage:target1 \
--node f1931be7-0a01-4977-bfef-51a392adcd94
+---------+--------------------------------------+
| Field   | Value                                |
+---------+--------------------------------------+
| task_id | c8dc74ee-86d6-4b89-8b6f-153ff1e78cb7 |
+---------+--------------------------------------+

This command creates a task to disconnect the remote iSCSI target iqn.2014-06.com.vstorage:target1 from the node with the ID f1931be7-0a01-4977-bfef-51a392adcd94.

Task outcome:

# vinfra task show c8dc74ee-86d6-4b89-8b6f-153ff1e78cb7
+---------+------------------------------------------------------------------+
| Field   | Value                                                            |
+---------+------------------------------------------------------------------+
| args    | - f1931be7-0a01-4977-bfef-51a392adcd94                           |
| kwargs  | target_name: iqn.2014-06.com.vstorage:target1                    |
| name    | backend.presentation.nodes.iscsi_initiators.tasks.DisconnectTask |
| state   | success                                                          |
| task_id | c8dc74ee-86d6-4b89-8b6f-153ff1e78cb7                             |
+---------+------------------------------------------------------------------+