.. _Managing Virtual Machines:

Managing Virtual Machines
-------------------------

.. include:: /includes/managing-virtual-machines-part3.inc

- A storage policy for volumes (see :ref:`Managing Storage Policies`)
- A flavor (see :ref:`Managing Flavors`)

.. include:: /includes/managing-virtual-machines-part4.inc

- **Migrate** moves a VM to another node in the compute cluster (for more information, see
  :ref:`Migrating Virtual Machines`).

.. To rebuild a VM means to redeploy it from the chosen image keeping the same ID, flavor,
   network interfaces (IP and MAC addresses), and all the volumes except the boot one.

.. include:: /includes/managing-virtual-machines-part5.inc

.. _Migrating Virtual Machines:

Migrating Virtual Machines
~~~~~~~~~~~~~~~~~~~~~~~~~~

VM migration facilitates cluster upgrades and workload balancing between compute nodes.
|product_name| allows you to perform two types of migration:

- **Cold migration** for stopped and suspended virtual machines
- **Hot migration** for running virtual machines (allows you to avoid VM downtime)

For both migration types, a virtual machine is migrated between compute nodes using shared
storage, so no block device migration takes place.

Hot migration consists of the following steps:

#. All VM memory is copied to the destination node while the virtual machine keeps running on
   the source node. If a VM memory page changes, it is copied again.
#. When only a few memory pages are left to copy, the VM is stopped on the source node, the
   remaining pages are transferred, and the VM is restarted on the destination node.

Large virtual machines with write-intensive workloads write to memory faster than the changed
memory can be transferred to the destination node, which prevents migration from converging.
For such VMs, the auto-converge mechanism is used. When a lack of convergence is detected
during live migration, the VM's vCPU execution speed is throttled down, which also slows down
writing to VM memory. Initially, the virtual machine's vCPU is throttled by 20%, and then by an
additional 10% during each iteration. This process continues until writing to VM memory slows
down enough for the migration to complete, or until the VM's vCPU is throttled by 99%.

.. include:: /includes/managing-virtual-machines-part2.inc

To migrate a VM, do the following:

#. On the **COMPUTE** > **Virtual machines** > **VIRTUAL MACHINES** tab, click the VM you want
   to migrate, click the ellipsis button, and choose **Migrate**.

   .. only:: ac

      .. image:: /images/stor_image104_ac.png
         :align: center
         :class: align-center

   .. only:: vz

      .. image:: /images/stor_image104_vz.png
         :align: center
         :class: align-center

#. In the new window, specify the destination node:

   - **Auto**. Automatically select the optimal destination among the cluster nodes, based on
     the available CPU and RAM resources.
   - Select the destination node manually from the drop-down list.

   .. only:: ac

      .. image:: /images/stor_image105_ac.png
         :align: center
         :class: align-center

   .. only:: vz

      .. image:: /images/stor_image105_vz.png
         :align: center
         :class: align-center

#. By default, running VMs are migrated live. You can change the migration mode to offline by
   ticking the **Cold migration** checkbox. In this case, the VM will be stopped and then
   restarted on the destination node after migration.

#. Click **Migrate** to reserve resources on the destination node and start the migration. The
   admin panel will show the migration progress.

.. include:: /includes/managing-virtual-machines-part6.inc

- Change the flavor for running and shelved VMs
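The auto-converge throttling schedule described in :ref:`Migrating Virtual Machines` can be
illustrated with a short calculation. The following Python sketch is not product code; it only
reproduces the figures quoted above (an initial 20% step, an extra 10% per iteration, and a 99%
cap) to show how throttling ramps up when a migration does not converge on its own.

.. code-block:: python

   # Illustrative sketch only, based on the figures quoted above: the vCPU is
   # throttled by 20% on the first iteration, by an additional 10% on each
   # following iteration, and never by more than 99%.

   def throttle_schedule():
       """Yield the cumulative vCPU throttling applied on each iteration."""
       throttle = 0
       while throttle < 99:
           step = 20 if throttle == 0 else 10
           throttle = min(throttle + step, 99)
           yield throttle

   for iteration, throttle in enumerate(throttle_schedule(), start=1):
       print(f"iteration {iteration}: vCPU throttled by {throttle}%")
   # iteration 1: vCPU throttled by 20%
   # iteration 2: vCPU throttled by 30%
   # ...
   # iteration 9: vCPU throttled by 99%

The loop shows the worst case, in which throttling keeps growing until the 99% cap; in a real
migration, the process stops as soon as memory writes slow down enough for the migration to
complete.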
.. _Configuring Virtual Machine High Availability:

Configuring Virtual Machine High Availability
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

High availability keeps virtual machines operational if the node they are located on fails due
to a kernel crash, a power outage, or a similar event, or becomes unreachable over the network.
A graceful shutdown is not considered a failure event.

.. important:: The compute cluster can survive the failure of only one node.

In the event of a failure, the system will attempt to evacuate the affected VMs automatically,
that is, migrate them offline with auto-scheduling to other healthy compute nodes, in the
following order:

- VMs with the "Active" status are evacuated first and automatically started.
- VMs with the "Shut down" status are evacuated next and remain stopped.
- All other VMs are ignored and left on the failed node.

If something blocks the evacuation (for example, the destination compute nodes lack the
resources to host the affected VMs), these VMs remain on the failed node and receive the
"Error" status. You can evacuate them manually after resolving the issue (for example, by
providing sufficient resources or joining new nodes to the cluster). To do this, click the
ellipsis button next to such a VM or open its panel, and then click **Evacuate**.

.. only:: ac

   .. image:: /images/stor_image155_ac.png
      :align: center
      :class: align-center

.. only:: vz

   .. image:: /images/stor_image155_vz.png
      :align: center
      :class: align-center

When the failed node becomes available again, it is fenced off from scheduling new VMs and can
be returned to operation manually. To do this, click the ellipsis button next to the fenced
node or open its panel, and then click **Return to operation**.

.. only:: ac

   .. image:: /images/stor_image154_ac.png
      :align: center
      :class: align-center

.. only:: vz

   .. image:: /images/stor_image154_vz.png
      :align: center
      :class: align-center

By default, high availability for virtual machines is enabled automatically after the compute
cluster is created. If required, you can disable it manually as follows:

#. Click the VM for which you want to disable HA.
#. On the VM panel, click the pencil icon next to the **High availability** parameter.
#. In the **High availability** window, disable HA for the VM and click **Save**.

   .. only:: ac

      .. image:: /images/stor_image156_ac.png
         :align: center
         :class: align-center

   .. only:: vz

      .. image:: /images/stor_image156_vz.png
         :align: center
         :class: align-center

Virtual machines with disabled HA will not be evacuated to healthy nodes in case of a node
failure.

.. |vm_menu_nav| replace:: **COMPUTE** > **Virtual machines** > **VIRTUAL MACHINES** tab
.. |ref_to_man_networks| replace:: :ref:`Managing Virtual Networks`
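The evacuation order applied during automatic failover, described above, boils down to a simple
priority rule. The following Python sketch is purely illustrative: the ``plan_evacuation``
helper and the VM records are hypothetical names, not part of the product. It only restates the
rule that "Active" VMs are evacuated first and started, "Shut down" VMs follow and stay
stopped, and all other VMs are left on the failed node.

.. code-block:: python

   # Hypothetical illustration of the evacuation order described above; not
   # product code. "Active" VMs go first and are restarted, "Shut down" VMs
   # follow and remain stopped, everything else stays on the failed node.

   EVACUATION_PRIORITY = {"Active": 0, "Shut down": 1}

   def plan_evacuation(vms):
       """Return (vm_name, start_after_evacuation) pairs in evacuation order."""
       eligible = [vm for vm in vms if vm["status"] in EVACUATION_PRIORITY]
       eligible.sort(key=lambda vm: EVACUATION_PRIORITY[vm["status"]])
       return [(vm["name"], vm["status"] == "Active") for vm in eligible]

   vms_on_failed_node = [
       {"name": "db01", "status": "Shut down"},
       {"name": "web01", "status": "Active"},
       {"name": "test01", "status": "Paused"},  # ignored, stays on the failed node
   ]
   print(plan_evacuation(vms_on_failed_node))
   # [('web01', True), ('db01', False)]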