3.1. Creating and deleting the compute cluster

3.1.1. vinfra service compute create

Create a compute cluster:

usage: vinfra service compute create [--public-network <network>]
                                     [--subnet cidr=CIDR[,key=value,...]]
                                     [--cpu-model <cpu-model>] [--force]
                                     [--enable-k8saas] [--enable-lbaas]
                                     [--enable-metering] --nodes <nodes>
                                     [--notification-forwarding <transport-url>]
                                     [--disable-notification-forwarding]
                                     [--endpoint-hostname <hostname>]
                                     [--vlan-id <vlan-id>] [--custom-param <service_name>
                                     <config_file> <section> <property> <value>]
                                     [--nova-scheduler-ram-weight-multiplier <value>]
                                     [--neutron-openvswitch-vxlan-port <value>]
                                     [--nova-scheduler-host-subset-size <value>]
                                     [--nova-compute-cpu-allocation-ratio <value>]
--public-network <network>
An infrastructure network to connect the compute physical network to. It must include the 'VM public' traffic type.
--subnet cidr=CIDR[,key=value,...]

Subnet for IP address management in the compute physical network (the --public-network option is required):

  • cidr: subnet range in CIDR notation.
  • Comma-separated key=value pairs with the following optional keys:
    • gateway: gateway IP address.
    • dhcp: enable or disable the virtual DHCP server.
    • allocation-pool: allocation pool of IP addresses from CIDR in the format ip1-ip2, where ip1 and ip2 are the starting and ending IP addresses. Specify this key multiple times to create multiple IP pools.
    • dns-server: DNS server IP address. Specify this key multiple times to set multiple DNS servers.

Example: --subnet cidr=192.168.5.0/24,dhcp=enable.

--cpu-model <cpu-model>
CPU model for virtual machines. View the list of available CPU models using vinfra service compute cluster show.
--force
Skip checks for minimal hardware requirements.
--enable-k8saas
Enable Kubernetes-as-a-Service services.
--enable-lbaas
Enable Load-Balancing-as-a-Service services.
--enable-metering
Enable metering services.
--notification-forwarding <transport-url>

Enable notification forwarding through the specified transport URL in the format driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]?query, where:

  • driver is the supported transport driver (kafka)
  • user:pass are the username and password used for authentication with the messaging broker
  • host:port specifies the hostname or IP address and port number of the messaging broker
  • query lists parameters that override those from the broker configuration file:
    • topic specifies the topic name
    • driver is the messaging driver: messaging, messagingv2, routing, log, test, or noop

Example: kafka://10.10.10.10:9092?topic=notifications

--disable-notification-forwarding
Disable notification forwarding.
--endpoint-hostname <hostname>
Use the given hostname for a public endpoint. Specify an empty value to use the raw IP.
--vlan-id <vlan-id>
Create a VLAN-based physical network with the given VLAN ID.
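As an illustration, this option can be combined with --public-network to tag the compute physical network (the network name Public and VLAN ID 125 below are placeholder values, and the node names are hypothetical):

```shell
# Sketch only; requires a deployed vinfra cluster with the listed nodes.
# Attaches the compute physical network to the infrastructure network
# "Public" and tags it with VLAN 125.
# vinfra service compute create --nodes node1,node2,node3 \
# --public-network Public --vlan-id 125
```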
--custom-param <service_name> <config_file> <section> <property> <value>

Set custom parameters for OpenStack configuration files:

  • service_name is the service name: nova-scheduler, nova-compute, or neutron-openvswitch-agent
  • config_file specifies the service configuration file: nova.conf for nova-scheduler and nova-compute, or ml2_conf.ini for neutron-openvswitch-agent
  • section specifies the section in the service configuration file where the parameter is defined: DEFAULT in nova.conf or agent in ml2_conf.ini
  • property is the parameter to be changed: ram_weight_multiplier, scheduler_host_subset_size, and cpu_allocation_ratio in nova.conf, or vxlan_udp_port in ml2_conf.ini
  • value is the new parameter value
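For instance, the CPU overcommit ratio could be raised at cluster creation time as follows (the value 4.0 is an arbitrary illustration, not a recommendation, and the node names are hypothetical):

```shell
# Sketch only; requires a deployed vinfra cluster with the listed nodes.
# Sets cpu_allocation_ratio in the DEFAULT section of nova.conf for the
# nova-compute service.
# vinfra service compute create --nodes node1,node2,node3 \
# --custom-param nova-compute nova.conf DEFAULT cpu_allocation_ratio 4.0
```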
--nova-scheduler-ram-weight-multiplier <value>
Shortcut for --custom-param nova-scheduler nova.conf DEFAULT ram_weight_multiplier <value>
--neutron-openvswitch-vxlan-port <value>
Shortcut for --custom-param neutron-openvswitch-agent ml2_conf.ini agent vxlan_udp_port <value>
--nova-scheduler-host-subset-size <value>
Shortcut for --custom-param nova-scheduler nova.conf DEFAULT scheduler_host_subset_size <value>
--nova-compute-cpu-allocation-ratio <value>
Shortcut for --custom-param nova-compute nova.conf DEFAULT cpu_allocation_ratio <value>
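Each shortcut simply expands to the corresponding --custom-param form, so the following two invocations are equivalent (the port value 4790 is an arbitrary illustration and the node names are hypothetical):

```shell
# Sketch only; both commands set the VXLAN UDP port the same way.
# vinfra service compute create --nodes node1,node2 \
# --neutron-openvswitch-vxlan-port 4790
# vinfra service compute create --nodes node1,node2 \
# --custom-param neutron-openvswitch-agent ml2_conf.ini agent vxlan_udp_port 4790
```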
--nodes <nodes>
A comma-separated list of node IDs or hostnames (this option is required).

Example:

# vinfra service compute create --nodes 7ffa9540-5a20-41d1-b203-e3f349d62565,\
02ff64ae-5800-4090-b958-18b1fe8f5060,6e8afc28-7f71-4848-bdbe-7c5de64c5013,\
37c70bfb-c289-4794-8be4-b7a40c2b6d95,827a1f4e-56e5-404f-9113-88748c18f0c2 \
--public-network Public --subnet cidr=10.94.0.0/16,dhcp=enable,\
gateway=10.94.0.1,allocation-pool=10.94.129.64-10.94.129.79,\
dns-server=10.30.0.27,dns-server=10.30.0.28
+---------+--------------------------------------+
| Field   | Value                                |
+---------+--------------------------------------+
| task_id | be517afa-fae0-457e-819c-f4d6399f3ae2 |
+---------+--------------------------------------+

This command creates a task to deploy the compute cluster on the five nodes specified by ID. It also specifies the physical network for VMs, the gateway, the allocation pool of IP addresses to assign to VMs, and the DNS servers to use.

Task outcome:

# vinfra task show be517afa-fae0-457e-819c-f4d6399f3ae2
+----------+-------------------------------------------------------------+
| Field    | Value                                                       |
+----------+-------------------------------------------------------------+
| details  |                                                             |
| name     | backend.presentation.compute.tasks.DeployComputeClusterTask |
| progress | 100                                                         |
| result   |                                                             |
| state    | success                                                     |
| task_id  | be517afa-fae0-457e-819c-f4d6399f3ae2                        |
+----------+-------------------------------------------------------------+

3.1.2. vinfra service compute delete

Delete all nodes from the compute cluster:

usage: vinfra service compute delete

Example:

# vinfra service compute delete
+---------+--------------------------------------+
| Field   | Value                                |
+---------+--------------------------------------+
| task_id | 063e8a15-fcfe-4629-865f-b5e5fa44b38f |
+---------+--------------------------------------+

This command creates a task to release nodes from the compute cluster.

Task outcome:

# vinfra task show 063e8a15-fcfe-4629-865f-b5e5fa44b38f
+---------+--------------------------------------------------------------+
| Field   | Value                                                        |
+---------+--------------------------------------------------------------+
| details |                                                              |
| name    | backend.presentation.compute.tasks.DestroyComputeClusterTask |
| result  |                                                              |
| state   | success                                                      |
| task_id | 063e8a15-fcfe-4629-865f-b5e5fa44b38f                         |
+---------+--------------------------------------------------------------+