2.3. Planning Node Hardware Configurations

Acronis Cyber Infrastructure works on top of commodity hardware, so you can create a cluster from regular servers, disks, and network cards. Still, to achieve optimal performance, a number of requirements must be met and a number of recommendations should be followed.

Note

If you are unsure of what hardware to choose, consult your sales representative. You can also use the online hardware calculator. If you want to avoid the hassle of testing, installing, and configuring hardware and/or software, consider using Acronis Appliance. Out of the box, you will get an enterprise-grade, fault-tolerant five-node infrastructure solution with great storage performance in a 3U form factor.

2.3.1. Hardware Limits

The following table lists the current hardware limits for Acronis Cyber Infrastructure servers:

Table 2.3.1.1 Server hardware limits

Hardware    Theoretical          Certified
RAM         64 TB                1 TB
CPU         5120 logical CPUs    384 logical CPUs

A logical CPU is a core (thread) in a multicore (multithreading) processor.
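
To see how a node compares against these limits, you can query the logical CPU count and installed RAM with standard Linux tools (a generic check, not an Acronis-specific command):

  # Number of logical CPUs visible to the operating system
  nproc
  # CPU topology: sockets, cores per socket, threads per core
  lscpu | grep -E '^(CPU\(s\)|Socket|Core|Thread)'
  # Total installed RAM
  free -h | grep -i '^mem'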

2.3.2. Hardware Requirements

The following table lists the minimum and recommended disk requirements according to the disk roles (refer to Storage Architecture Overview):

Table 2.3.2.1 Disk requirements

System
  Quantity: one disk per node
  Minimum: 100 GB SATA/SAS HDD
  Recommended: 250 GB SATA/SAS SSD

Metadata
  Quantity: one disk per node; five disks recommended for one cluster
  Minimum and recommended: 100 GB enterprise-grade SSD with power loss protection, 1 DWPD endurance minimum

Cache
  Quantity: optional; one SSD disk per 4-12 HDDs
  Minimum and recommended: 100+ GB enterprise-grade SSD with power loss protection and 75 MB/s sequential write performance per serviced HDD; 1 DWPD endurance minimum, 10 DWPD recommended

Storage
  Quantity: optional; at least one per cluster
  Size: 100 GB minimum, 16 TB maximum recommended
  Type: SATA/SAS HDD, or SATA/SAS/NVMe SSD (enterprise-grade with power loss protection, 1 DWPD endurance minimum)

The following table lists the recommended amount of RAM and CPU cores for one node according to the services you will use:

Table 2.3.2.2 CPU and RAM requirements

Service                                                                    RAM*     CPU cores**
System                                                                     6 GB     2 cores
Storage services: each disk with Storage role or Cache role (any size)*** 1 GB     0.2 core
Compute                                                                    8 GB     3 cores
Load Balancer Service                                                      1 GB     1 core
Each load balancer                                                         1 GB     1 core
Kubernetes                                                                 2 GB     2 cores
S3                                                                         4.5 GB   3 cores
Backup Gateway****                                                         1 GB     2 cores
NFS Service                                                                4 GB     2 cores
Each share                                                                 0.5 GB   0.5 core
iSCSI Service                                                              1 GB     1 core
Each volume                                                                0.1 GB   0.5 core

* Use only error-correcting code (ECC) memory to avoid data corruption.

** 64-bit x86 AMD-V or Intel VT processors with hardware virtualization extensions enabled. For Intel processors, enable “unrestricted guest” and VT-x with Extended Page Tables in BIOS. It is recommended to have the same CPU models on each node to avoid VM live migration issues. A CPU core here is a physical core in a multicore processor (hyperthreading is not taken into account).

*** For clusters larger than 1 PB of physical space, add an additional 0.5 GB of RAM per Metadata service.

**** When working with public clouds and NFS, Backup Gateway consumes as much RAM and CPU as it does with local storage.
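
To verify that a node meets the memory and CPU requirements marked above, you can use standard Linux tools; the commands below are a generic sketch (exact output varies by vendor, and the KVM checks assume the kvm_intel module is loaded):

  # Confirm the installed DIMMs report ECC support (run as root)
  dmidecode -t memory | grep -i 'error correction'
  # Confirm the CPU exposes hardware virtualization extensions
  # (vmx = Intel VT-x, svm = AMD-V); no output means they are disabled in the BIOS
  grep -o -m1 -E 'vmx|svm' /proc/cpuinfo
  # On Intel CPUs, check that EPT and unrestricted guest mode are enabled in KVM
  cat /sys/module/kvm_intel/parameters/ept
  cat /sys/module/kvm_intel/parameters/unrestricted_guest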

As for networking, at least 2 x 10 GbE interfaces are recommended; 25 GbE, 40 GbE, and 100 GbE are even better. Bonding is recommended. You can start with 1 GbE links, but they may limit cluster throughput under modern workloads.

Let us consider some examples and calculate the requirements for particular cases; a script that automates this arithmetic is sketched after the list.

  • If you have 1 node (1 system disk and 4 storage disks) and want to use it for Backup Gateway, the node should meet the following requirements: the system requirements (6 GB, 2 cores) + storage services for 4 disks (4 GB, 0.8 cores) + Backup Gateway (1 GB, 2 cores). All in all, that is 11 GB of RAM and 5 cores for the node.
  • If you have 3 nodes (each with 1 system disk and 4 storage disks) and want to use them for the Compute service, each cluster node should meet the following requirements: the system requirements (6 GB, 2 cores) + storage services for 4 disks (4 GB, 0.8 cores) + Compute (8 GB, 3 cores). All in all, that is 18 GB of RAM and 6 cores for each node. If you want to enable, for example, one Kubernetes VM and one load balancer, add the following requirements to the management node: Load Balancer (2 GB, 2 cores) + Kubernetes (2 GB, 2 cores). All in all, that is 22 GB of RAM and 10 cores for the management node.
  • If you have 5 nodes (each with 2 system disks and 10 storage disks) and want to use them for Backup Gateway, each cluster node should meet the following requirements: the system requirements (6 GB, 2 cores) + storage services for 10 disks (10 GB, 2 cores) + Backup Gateway (1 GB, 2 cores). All in all, that is 17 GB of RAM and 6 cores for each node.
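
A script like the following can automate this arithmetic (a rough sketch based on Table 2.3.2.2; the disk count and service figures are placeholders for a node that runs storage services and Backup Gateway):

  #!/bin/bash
  # Rough per-node sizing from Table 2.3.2.2: system + storage services + Backup Gateway.
  # RAM is in GB, CPU in cores; round the resulting core count up.
  STORAGE_DISKS=4   # disks with the Storage or Cache role on this node

  awk -v disks="$STORAGE_DISKS" 'BEGIN {
      ram   = 6 + disks * 1   + 1     # system + per-disk storage services + Backup Gateway
      cores = 2 + disks * 0.2 + 2
      printf "RAM: %g GB, CPU cores: %g (round up)\n", ram, cores
  }'

For the first example above (one node with 4 storage disks running Backup Gateway), this prints 11 GB of RAM and 4.8 cores, which rounds up to 5 cores.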

In general, the more resources you provide for your cluster, the better it performs. All extra RAM is used to cache disk reads, and extra CPU cores improve performance and reduce latency.

2.3.3. Hardware Recommendations

In general, Acronis Cyber Infrastructure works on the same hardware (servers and components) that is recommended for Red Hat Enterprise Linux 7, including AMD EPYC processors.

The following recommendations further explain the benefits added by specific hardware in the hardware requirements table. Use them to configure your cluster in an optimal way.

2.3.3.1. Storage Cluster Composition Recommendations

Designing an efficient storage cluster means finding a compromise between performance and cost that suits your purposes. When planning, keep in mind that a cluster with many nodes and few disks per node offers higher performance, while a cluster with the minimum number of nodes (3) and many disks per node is cheaper. See the following table for more details.

Table 2.3.3.1.1 Cluster composition recommendations

Optimization
  Minimum nodes (3), many disks per node: lower cost.
  Many nodes, few disks per node (all-flash configuration): higher performance.

Free disk space to reserve
  Minimum nodes, many disks: more space to reserve for cluster rebuilding, as fewer healthy nodes will have to store the data from a failed node.
  Many nodes, few disks: less space to reserve for cluster rebuilding, as more healthy nodes will have to store the data from a failed node.

Redundancy
  Minimum nodes, many disks: fewer erasure coding choices.
  Many nodes, few disks: more erasure coding choices.

Cluster balance and rebuilding performance
  Minimum nodes, many disks: worse balance and slower rebuilding.
  Many nodes, few disks: better balance and faster rebuilding.

Network capacity
  Minimum nodes, many disks: more network bandwidth required to maintain cluster performance during rebuilding.
  Many nodes, few disks: less network bandwidth required to maintain cluster performance during rebuilding.

Favorable data type
  Minimum nodes, many disks: cold data (e.g., backups).
  Many nodes, few disks: hot data (e.g., virtual environments).

Sample server configuration
  Minimum nodes, many disks: Supermicro SSG-6047R-E1R36L (Intel Xeon E5-2620 v1/v2 CPU, 32 GB RAM, 36 x 12 TB HDDs, a 500 GB system disk).
  Many nodes, few disks: Supermicro SYS-2028TP-HC0R-SIOM (4 x Intel E5-2620 v4 CPUs, 4 x 16 GB RAM, 24 x 1.9 TB Samsung PM1643 SSDs).

Take note of the following:

  1. These considerations only apply if the failure domain is host.
  2. The speed of rebuilding in the replication mode does not depend on the number of nodes in the cluster.
  3. Acronis Cyber Infrastructure supports hundreds of disks per node. If you plan to use more than 36 disks per node, contact our sales engineers, who will help you design a more efficient cluster.

2.3.3.2. General Hardware Recommendations

  • At least five nodes are required for a production environment. This is to ensure that the cluster can survive failure of two nodes without data loss.
  • One of the strongest features of Acronis Cyber Infrastructure is scalability: the bigger the cluster, the better it performs. It is recommended to create production clusters from at least ten nodes for improved resilience, performance, and fault tolerance.
  • Even though a cluster can be created on top of varied hardware, using nodes with similar hardware will yield better cluster performance, capacity, and overall balance.
  • Any cluster infrastructure must be tested extensively before it is deployed to production. Such common points of failure as SSD drives and network adapter bonds must always be thoroughly verified.
  • It is not recommended for production to run Acronis Cyber Infrastructure on top of SAN/NAS hardware that has its own redundancy mechanisms. Doing so may negatively affect performance and data availability.
  • To achieve the best performance, keep at least 20% of the cluster capacity free.
  • During disaster recovery, Acronis Cyber Infrastructure may need additional disk space for replication. Make sure to reserve at least as much space as any single storage node has (a sizing sketch follows this list).
  • It is recommended to have the same CPU models on each node to avoid VM live migration issues. For more details, see the Administrator’s Command Line Guide.
  • If you plan to use Backup Gateway to store backups in the cloud, make sure the local storage cluster has plenty of logical space for staging (keeping backups locally before sending them to the cloud). For example, if you perform backups daily, provide enough space for at least 1.5 days’ worth of backups. For more details, see the Administrator’s Guide.
  • It is recommended to use UEFI instead of BIOS if supported by the hardware, in particular if you use NVMe drives.
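
The two free-space recommendations above can be combined into a simple rule of thumb, sketched below with placeholder capacities (substitute your own values in TB):

  #!/bin/bash
  # Keep free the larger of: 20% of the total cluster capacity (performance),
  # or the capacity of the largest storage node (space needed to rebuild it).
  TOTAL_TB=200          # total physical capacity of the cluster
  LARGEST_NODE_TB=48    # capacity of the biggest single storage node

  awk -v total="$TOTAL_TB" -v node="$LARGEST_NODE_TB" 'BEGIN {
      perf = total * 0.20
      keep = (perf > node) ? perf : node
      printf "Keep at least %.1f TB free (20%% rule: %.1f TB, rebuild reserve: %.1f TB)\n", keep, perf, node
  }'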

2.3.3.3. Storage Hardware Recommendations

  • It is possible to use disks of different sizes in the same cluster. However, keep in mind that, given the same IOPS, smaller disks will offer higher performance per terabyte of data compared to bigger disks. It is recommended to group disks with the same IOPS per terabyte in the same tier.
  • Using the recommended SSD models may help you avoid loss of data. Not all SSD drives can withstand enterprise workloads and may break down in the first months of operation, resulting in TCO spikes.
    • SSD memory cells can withstand a limited number of rewrites. An SSD drive should be viewed as a consumable that you will need to replace after a certain time. Consumer-grade SSD drives can withstand a very low number of rewrites (so low, in fact, that these numbers are not shown in their technical specifications). SSD drives intended for storage clusters must offer at least 1 DWPD endurance (10 DWPD is recommended). The higher the endurance, the less often SSDs will need to be replaced, improving TCO (a way to check consumed endurance is sketched after this list).
    • Many consumer-grade SSD drives can ignore disk flushes and falsely report to operating systems that data was written while it, in fact, was not. Examples of such drives include OCZ Vertex 3, Intel 520, Intel X25-E, and Intel X-25-M G2. These drives are known to be unsafe in terms of data commits, they should not be used with databases, and they may easily corrupt the file system in case of a power failure. For these reasons, use enterprise-grade SSD drives that obey the flush rules (for more information, see http://www.postgresql.org/docs/current/static/wal-reliability.html). Enterprise-grade SSD drives that operate correctly usually have the power loss protection property in their technical specification. Some of the market names for this technology are Enhanced Power Loss Data Protection (Intel), Cache Power Protection (Samsung), Power-Failure Support (Kingston), Complete Power Fail Protection (OCZ).
    • It is highly recommended to check the data flushing capabilities of your disks as explained in Checking Disk Data Flushing Capabilities.
    • Consumer-grade SSD drives usually have unstable performance and are not suited to withstand sustained enterprise workloads. For this reason, pay attention to sustained load tests when choosing SSDs.
    • Performance of SSD disks may depend on their size. Lower-capacity drives (100 to 400 GB) may perform much slower (sometimes up to ten times slower) than higher-capacity ones (1.9 to 3.8 TB). Consult drive performance and endurance specifications before purchasing hardware.
  • Using NVMe or SAS SSDs for write caching improves random I/O performance and is highly recommended for all workloads with heavy random access (e.g., iSCSI volumes). SATA SSDs, in turn, are best suited for SSD-only configurations, but not for write caching.
  • Using shingled magnetic recording (SMR) HDDs is strongly not recommended, even for backup scenarios. Such disks have unpredictable latency that may lead to unexpected temporary service outages and sudden performance degradations.
  • Running metadata services on SSDs improves cluster performance. To also minimize CAPEX, the same SSDs can be used for write caching.
  • If capacity is the main goal and you need to store non-frequently accessed data, choose SATA disks over SAS ones. If performance is the main goal, choose NVMe or SAS disks over SATA ones.
  • The more disks per node, the lower the CAPEX. As an example, a cluster created from ten nodes with two disks in each will be less expensive than a cluster created from twenty nodes with one disk in each.
  • Using SATA HDDs with one SSD for caching is more cost effective than using only SAS HDDs without such an SSD.
  • Create hardware or software RAID1 volumes for system disks, using RAID or HBA controllers respectively, to ensure their high performance and availability.
  • Use HBA controllers as they are less expensive and easier to manage than RAID controllers.
  • Disable all RAID controller caches for SSD drives. Modern SSDs have good performance that can be reduced by a RAID controller’s write and read cache. It is recommended to disable caching for SSD drives and leave it enabled only for HDD drives.
  • If you use RAID controllers, do not create RAID volumes from HDDs intended for storage. Each storage HDD needs to be recognized by Acronis Cyber Infrastructure as a separate device.
  • If you use RAID controllers with caching, equip them with backup battery units (BBUs) to protect against cache loss during power outages.
  • Disk block size (e.g., 512 B or 4 KB) is not important and has no effect on performance.
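
To check how much of an SSD's rated endurance has already been consumed, you can query its SMART data with smartmontools (a generic example; device names are placeholders, and attribute names vary by vendor and interface):

  # Drive identity: model, firmware, capacity (replace /dev/sdX with your device)
  smartctl -i /dev/sdX
  # SATA/SAS SSDs: look for wear-leveling or "percentage used" attributes
  smartctl -A /dev/sdX | grep -i -E 'wear|percent|written'
  # NVMe SSDs: "Percentage Used" in the SMART/Health log tracks consumed endurance
  smartctl -a /dev/nvme0n1 | grep -i 'percentage used'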

2.3.3.4. Network Hardware Recommendations

  • Use separate networks (and, ideally, separate network adapters) for internal and public traffic. Doing so will prevent public traffic from affecting cluster I/O performance and also prevent possible denial-of-service attacks from the outside.
  • Network latency dramatically reduces cluster performance. Use quality network equipment with low latency links. Do not use consumer-grade network switches.
  • Do not use desktop network adapters like Intel EXPI9301CTBLK or Realtek 8129 as they are not designed for heavy load and may not support full-duplex links. Also use non-blocking Ethernet switches.
  • To avoid intrusions, Acronis Cyber Infrastructure should be on a dedicated internal network inaccessible from outside.
  • Use one 1 Gbit/s link per each two HDDs on the node (rounded up). For one or two HDDs on a node, two bonded network interfaces are still recommended for high network availability. The reason for this recommendation is that 1 Gbit/s Ethernet networks can deliver 110-120 MB/s of throughput, which is close to sequential I/O performance of a single disk. Since several disks on a server can deliver higher throughput than a single 1 Gbit/s Ethernet link, networking may become a bottleneck.
  • For maximum sequential I/O performance, use one 1 Gbit/s link per each hard drive, or one 10 Gbit/s link per node. Even though I/O operations are most often random in real-life scenarios, sequential I/O is important in backup scenarios.
  • For maximum overall performance, use one 10 Gbit/s link per node (or two bonded for high network availability).
  • It is not recommended to configure 1 Gbit/s network adapters to use non-default MTUs (e.g., 9000-byte jumbo frames). Such settings require additional configuration of switches and often lead to human error. 10+ Gbit/s network adapters, on the other hand, need to be configured to use jumbo frames to achieve full performance (see the sketch after this list).
  • The currently supported Fibre Channel host bus adapters (HBAs) are QLogic QLE2562-CK and QLogic ISP2532.
  • It is recommended to use Mellanox ConnectX-4 and ConnectX-5 InfiniBand adapters. Mellanox ConnectX-2 and ConnectX-3 cards are not supported.
  • Adapters using the BNX2X driver, such as Broadcom Limited BCM57840 NetXtreme II 10/20-Gigabit Ethernet / HPE FlexFabric 10Gb 2-port 536FLB Adapter, are not recommended. They limit MTU to 3616, which affects the cluster performance.
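
To verify that a node's network configuration follows these recommendations, you can inspect link speed, MTU, and bond state with standard Linux tools (interface names below are placeholders):

  # Negotiated link speed and duplex of an interface (requires ethtool)
  ethtool eth0 | grep -E 'Speed|Duplex'
  # Current MTU; set jumbo frames only on 10+ Gbit/s links,
  # and only if the switch ports are configured for the same MTU
  ip link show eth0 | grep mtu
  ip link set dev eth0 mtu 9000
  # State of a bonded interface, if bonding is configured
  cat /proc/net/bonding/bond0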

2.3.4. Hardware and Software Limitations

Hardware limitations:

  • Each management node must have at least two disks (one system+metadata, one storage).
  • Each compute or storage node must have at least three disks (one system, one metadata, one storage).
  • Three servers are required to test all the features of the product.
  • The system disk must have at least 100 GB of space.
  • The admin panel requires a Full HD monitor to be displayed correctly.
  • The maximum supported physical partition size is 254 TiB.

Software limitations:

  • One node can be a part of only one cluster.
  • Only one S3 cluster can be created on top of a storage cluster.
  • Only predefined redundancy modes are available in the admin panel.
  • Thin provisioning is always enabled for all data and cannot be configured otherwise.
  • The admin panel has been tested to work at resolutions of 1280x720 and higher in the latest versions of the following web browsers: Firefox, Chrome, and Safari.

For network limitations, see Network Limitations.

2.3.5. Minimum Storage Configuration

The minimum configuration described in the table will let you evaluate the features of the storage cluster. It is not meant for production.

Table 2.3.5.1 Minimum cluster configuration

Node #             1st disk role   2nd disk role     3rd+ disk roles   Access points
1                  System          Metadata          Storage           iSCSI, S3 private, S3 public, NFS, Backup Gateway
2                  System          Metadata          Storage           iSCSI, S3 private, S3 public, NFS, Backup Gateway
3                  System          Metadata          Storage           iSCSI, S3 private, S3 public, NFS, Backup Gateway
3 nodes in total                   3 MDSs in total   3+ CSs in total   Access point services run on three nodes in total.

Note

SSD disks can be assigned System, Metadata, and Cache roles at the same time, freeing up more disks for the storage role.

Even though three nodes are recommended for the minimum configuration, you can start evaluating Acronis Cyber Infrastructure with just one node and add more nodes later. At the very least, a storage cluster must have one metadata service and one chunk service running. A single-node installation will let you evaluate services such as iSCSI, Backup Gateway, etc. However, such a configuration will have two key limitations:

  1. A single MDS will be a single point of failure. If it fails, the entire cluster will stop working.
  2. A single CS will be able to store only one chunk replica. If it fails, the data will be lost.

Important

If you deploy Acronis Cyber Infrastructure on a single node, you must take care of making its storage persistent and redundant to avoid data loss. If the node is physical, it must have multiple disks so you can replicate the data among them. If the node is a virtual machine, make sure that this VM is made highly available by the solution it runs on.

Note

Backup Gateway works with the local object storage in the staging mode. It means that the data to be replicated, migrated, or uploaded to a public cloud is first stored locally and only then sent to the destination. It is vital that the local object storage is persistent and redundant so the local data does not get lost. There are multiple ways to ensure the persistence and redundancy of the local storage. You can deploy your Backup Gateway on multiple nodes and select a good redundancy mode. If your gateway is deployed on a single node in Acronis Cyber Infrastructure, you can make its storage redundant by replicating it among multiple local disks. If your entire Acronis Cyber Infrastructure installation is deployed in a single virtual machine with the sole purpose of creating a gateway, make sure this VM is made highly available by the solution it runs on.

2.3.7. Raw Disk Space Considerations

When planning the infrastructure, keep in mind the following to avoid confusion:

  • The capacity of HDDs and SSDs is measured and specified with decimal, not binary prefixes, so “TB” in disk specifications usually means “terabyte”. The operating system, however, displays drive capacity using binary prefixes, meaning that “TB” is actually “tebibyte”, a noticeably larger unit. As a result, disks may show less capacity than the one marketed by the vendor. For example, a disk with 6 TB in its specifications may be shown to have 5.45 TB of actual disk space in Acronis Cyber Infrastructure.
  • 5% of disk space is reserved for emergency needs.

Therefore, if you add a 6 TB disk to a cluster, the available physical space should increase by about 5.2 TB.
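
The arithmetic behind this estimate is simple, as the sketch below shows (it converts a marketed decimal capacity to binary units and subtracts the 5% emergency reserve; the disk size is a placeholder):

  #!/bin/bash
  # Marketed decimal terabytes -> tebibytes shown by the OS -> usable space after the 5% reserve
  DISK_TB=6

  awk -v tb="$DISK_TB" 'BEGIN {
      tib    = tb * 1000^4 / 1024^4    # ~5.46 for a 6 TB disk
      usable = tib * 0.95              # ~5.19 after the 5% reserve
      printf "Marketed: %g TB, shown by the OS: %.2f TiB, usable: about %.2f TiB\n", tb, tib, usable
  }'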

2.3.8. Checking Disk Data Flushing Capabilities

It is highly recommended to make sure that all storage devices you plan to include in your cluster can flush data from cache to disk if power goes out unexpectedly. This way, you will identify devices that may lose data in case of a power failure.

Acronis Cyber Infrastructure ships with the vstorage-hwflush-check tool that checks how a storage device flushes data to disk in emergencies. The tool is implemented as a client/server utility:

  • The client continuously writes blocks of data to the storage device. When a data block is written, the client increases a special counter and sends it to the server that keeps it.
  • The server keeps track of counters incoming from the client and always knows the next counter number. If the server receives a counter smaller than the one it has (e.g., because the power has failed and the storage device has not flushed the cached data to disk), the server reports an error.

To check that a storage device can successfully flush data to disk when power fails, follow the procedure below:

  1. On one node, run the server:

    # vstorage-hwflush-check -l
    
  2. On a different node that hosts the storage device you want to test, run the client, for example:

    # vstorage-hwflush-check -s vstorage1.example.com -d /vstorage/stor1-ssd/test -t 50
    

    where

    • vstorage1.example.com is the host name of the server.
    • /vstorage/stor1-ssd/test is the directory to use for data flushing tests. During execution, the client creates a file in this directory and writes data blocks to it.
    • 50 is the number of threads for the client to write data to disk. Each thread has its own file and counter. You can increase the number of threads (max. 200) to test your system in more stressful conditions. You can also specify other options when running the client. For more information on available options, see the vstorage-hwflush-check man page.
  3. Wait for at least 10-15 seconds, cut power from the client node (either press the Power button or pull the power cord out) and then power it on again.

  4. Restart the client:

    # vstorage-hwflush-check -s vstorage1.example.com -d /vstorage/stor1-ssd/test -t 50
    

Once launched, the client will read all previously written data, determine the version of data on the disk, and restart the test from the last valid counter. It will then send this valid counter to the server, and the server will compare it to the latest counter it has. You may see output like:

id<N>:<counter_on_disk> -> <counter_on_server>

which means one of the following:

  • If the counter on the disk is lower than the counter on the server, the storage device has failed to flush the data to the disk. Avoid using this storage device in production, especially for CS or journals, as you risk losing data.
  • If the counter on the disk is higher than the counter on the server, the storage device has flushed the data to the disk, but the client has failed to report it to the server. The network may be too slow, or the storage device may be too fast for the set number of load threads, so consider increasing it. This storage device can be used in production.
  • If both counters are equal, the storage device has flushed the data to the disk and the client has reported it to the server. This storage device can be used in production.

To be on the safe side, repeat the procedure several times. Once you have checked your first storage device, continue with all the remaining devices you plan to use in the cluster: SSD disks used for CS journaling, disks used for MDS journals, and disks used by chunk servers.