2.1. Understanding Acronis Storage Architecture

The fundamental component of Acronis Storage is a cluster: a group of physical servers interconnected over a network. Each server in a cluster is assigned one or more roles and typically runs the services that correspond to those roles:

  • storage role: chunk service or CS
  • metadata role: metadata service or MDS
  • network roles:
    • iSCSI access point service (iSCSI)
    • Acronis Backup Gateway access point service (ABGW)
    • S3 gateway (access point) service (GW)
    • S3 name service (NS)
    • S3 object service (OS)
    • Admin panel
    • SSH
  • supplementary roles:
    • management
    • SSD cache
    • system

Any server in the cluster can be assigned a combination of storage, metadata, and network roles. For example, a single server can be an S3 access point, an iSCSI access point, and a storage node at once.

Each cluster also requires that a web-based admin panel be installed on one (and only one) of the nodes. The panel enables administrators to manage the cluster.

2.1.1. Storage Role

Storage nodes run chunk services, store all the data in the form of fixed-size chunks, and provide access to these chunks. All data chunks are replicated, and the replicas are kept on different storage nodes to achieve high availability of data. If one of the storage nodes fails, the remaining healthy storage nodes continue providing the data chunks that were stored on the failed node.

Only a server with disks of sufficient capacity can be assigned the storage role.
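
The replication principle can be illustrated with a minimal, purely conceptual sketch in Python. It is not taken from the product: the chunk size, replication factor, and round-robin placement below are assumptions made for illustration. The point is only that every chunk gets several copies and those copies always land on distinct storage nodes, so the loss of any single node leaves other replicas intact.

    # Conceptual sketch only: not Acronis Storage code. Chunk size, replication
    # factor, and the round-robin placement are illustrative assumptions.
    CHUNK_SIZE = 4 * 1024 * 1024   # assumed chunk size for illustration
    REPLICAS = 3                   # assumed number of copies per chunk

    def split_into_chunks(data: bytes):
        """Split a byte string into fixed-size chunks."""
        return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

    def place_replicas(chunk_index: int, storage_nodes: list):
        """Pick REPLICAS distinct storage nodes for one chunk (round-robin)."""
        if len(storage_nodes) < REPLICAS:
            raise ValueError("not enough storage nodes for the replication factor")
        start = chunk_index % len(storage_nodes)
        return [storage_nodes[(start + r) % len(storage_nodes)] for r in range(REPLICAS)]

    nodes = ["cs1", "cs2", "cs3", "cs4", "cs5"]
    for i, chunk in enumerate(split_into_chunks(b"x" * (3 * CHUNK_SIZE))):
        print("chunk", i, "->", place_replicas(i, nodes))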

2.1.2. Metadata Role

Metadata nodes run metadata services, store cluster metadata, and control how user files are split into chunks and where these chunks are located. Metadata nodes also ensure that chunks have the required number of replicas and log all important events that happen in the cluster.

To provide system reliability, Acronis Storage uses the Paxos consensus algorithm. It guarantees fault tolerance as long as a majority of the nodes running metadata services are healthy.

To ensure high availability of metadata in a production environment, at least five nodes in a cluster must run metadata services. In this case, if up to two metadata services fail, the remaining ones can still control the cluster.
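
These figures follow from the majority (quorum) requirement of Paxos: the metadata cluster remains manageable as long as more than half of its metadata services are healthy. The minimal sketch below only reproduces this arithmetic for a few hypothetical cluster sizes; the function name is made up for illustration.

    def tolerated_mds_failures(total_mds):
        """Maximum number of failed metadata services that still leaves a majority."""
        quorum = total_mds // 2 + 1   # smallest majority of MDS instances
        return total_mds - quorum     # same as (total_mds - 1) // 2

    for n in (1, 3, 5, 7):
        print(n, "MDS: quorum =", n // 2 + 1, "-> tolerates", tolerated_mds_failures(n), "failure(s)")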

2.1.3. Network Roles (Storage Access Points)

Storage access points enable you to access data stored in storage clusters via the standard iSCSI and S3 protocols and use the clusters as backend storage for Acronis Backup Cloud.

To benefit from high availability, access points should be set up on multiple nodes.

The following access points are currently supported:

iSCSI
Allows you to use Acronis Storage as highly available block storage for virtualization, databases, office applications, and other needs.
S3
A combination of scalable and highly available services that allows you to use Acronis Storage as a modern backend for solutions like OpenXchange AppSuite, Dovecot, and Acronis Access. In addition, developers of custom applications can benefit from an Amazon S3-compatible API and compatibility with the S3 libraries for various programming languages, S3 browsers, and web browsers (a minimal client sketch follows this list).
ABGW
Acronis Backup Gateway allows you to connect Acronis Storage to Acronis Backup Cloud via the Acronis FES API.
NFS
Allows you to create redundant NFS exports in Acronis Storage that can be mounted like regular directories shared over NFS.
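
As a rough illustration of the S3 compatibility mentioned above, the following sketch connects a standard S3 client library (boto3 for Python) to an S3 access point. The endpoint URL, bucket name, and credentials are placeholders, not values from this guide; an actual deployment would use the DNS name of your S3 cluster and the access keys generated for an S3 user.

    import boto3

    # All connection parameters below are placeholders for illustration.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.example.com",   # S3 access point of your cluster (placeholder)
        aws_access_key_id="YOUR_ACCESS_KEY",
        aws_secret_access_key="YOUR_SECRET_KEY",
    )

    # Create a bucket and upload a small object through the S3-compatible API.
    s3.create_bucket(Bucket="backups")
    s3.put_object(Bucket="backups", Key="hello.txt", Body=b"hello from an S3 client")

    # List what is stored in the bucket.
    for obj in s3.list_objects_v2(Bucket="backups").get("Contents", []):
        print(obj["Key"], obj["Size"])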

The following remote management roles are supported:

Admin panel
Allows you to access the web-based user interface from an external network.
SSH
Allows you to connect to Acronis Storage nodes via SSH.

2.1.4. Supplementary Roles

Internal management
Provides a web-based admin panel that enables administrators to configure, manage, and monitor storage clusters. Only one admin panel is needed to create and manage multiple clusters (and only one is allowed per cluster).
SSD cache
Boosts chunk read/write performance by creating write caches on selected solid-state drives (SSDs). It is recommended to also use these SSDs for metadata (see Metadata Role). The use of write journals may speed up write operations in the cluster by a factor of two or more.
System
One disk per node is reserved for the operating system and is unavailable for data storage.