Performance Improvements

Many organizations deploy large numbers of AFS file servers, each attached to many small file partitions. Increasing the number of partitions reduces the time necessary to restart a server hosting large numbers of volumes. Large file server pools reduce the risk that resource contention on one set of volumes, directories, files, and callbacks will exhaust a file server's limited thread pool and impair its ability to service requests. When this practice is combined with off-the-shelf server configurations such as the Dell R720, a significant number of processor cores and a great deal of I/O bandwidth sit unused. The strategy became best practice out of necessity.

AuriStor File System servers are designed to utilize the full capabilities of the server hardware. Best practice is to deploy fewer physical servers with more processor cores, more network bandwidth, and fewer, larger partitions. The number of file servers within a cell should be determined by the required geographic distribution of the servers, the number of required file server security policies, and the volume replication strategy for disaster recovery. Most organizations with six or more file servers per cell can expect to significantly reduce the number of file servers they deploy by moving to AuriStorFS.

Rx Network Transport Improvements

The AuriStor Rx implementation is designed to minimize lock contention between the listener thread, which processes all inbound network packets, and the application threads that consume them. Lockless data structures are used wherever possible and are aligned to avoid cross-core cache line invalidation. The packet loss recovery algorithms have been significantly improved to permit the use of larger window sizes without performance degradation.
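
To illustrate the general technique (a sketch only, not AuriStor source code): a single-producer/single-consumer ring lets the listener thread hand packets to an application thread without taking locks, and placing the producer and consumer indices on separate cache lines avoids cross-core cache line invalidation. The packet type and sizes below are placeholders.

    /* Illustrative sketch only -- not AuriStor source code. A lock-free
     * single-producer/single-consumer ring: the listener thread enqueues
     * packet pointers and one application thread dequeues them. The head
     * and tail indices live on separate cache lines so the producer and
     * consumer cores do not invalidate each other's caches. */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define RING_SIZE  1024u               /* must be a power of two */
    #define CACHE_LINE 64

    struct pkt;                            /* opaque packet type (placeholder) */

    struct spsc_ring {
        _Alignas(CACHE_LINE) _Atomic size_t head;  /* advanced by the consumer */
        _Alignas(CACHE_LINE) _Atomic size_t tail;  /* advanced by the producer */
        _Alignas(CACHE_LINE) struct pkt *slots[RING_SIZE];
    };

    /* Called only by the listener (producer) thread. */
    static bool ring_push(struct spsc_ring *r, struct pkt *p)
    {
        size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
        size_t head = atomic_load_explicit(&r->head, memory_order_acquire);

        if (tail - head == RING_SIZE)
            return false;                  /* full; caller chooses a policy */

        r->slots[tail & (RING_SIZE - 1)] = p;
        atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
        return true;
    }

    /* Called only by one application (consumer) thread. */
    static struct pkt *ring_pop(struct spsc_ring *r)
    {
        size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
        size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);

        if (head == tail)
            return NULL;                   /* empty */

        struct pkt *p = r->slots[head & (RING_SIZE - 1)];
        atomic_store_explicit(&r->head, head + 1, memory_order_release);
        return p;
    }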

The AuriStor Rx implementation transmits half as many acknowledgement packets as OpenAFS Rx. This reduces the workload on the peer's Rx listener thread and the time spent contending for locks, leaving more time for processing data packets.
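
As a rough illustration of the idea, the sketch below shows a generic delayed-acknowledgement scheme in which the receiver acknowledges every second data packet rather than every packet; the policy and helper are invented for this example and are not the actual Rx logic, which must also acknowledge promptly on loss, reordering, and timer expiry.

    /* Generic acknowledgement-coalescing illustration -- not Rx source code. */
    #include <stdio.h>

    struct recv_state {
        unsigned int unacked;              /* data packets since the last ack */
    };

    static void send_ack(void)             /* stub standing in for a real ack */
    {
        printf("ACK sent\n");
    }

    static void on_data_packet(struct recv_state *s)
    {
        if (++s->unacked >= 2) {           /* one ack per two data packets */
            send_ack();
            s->unacked = 0;
        }
        /* otherwise a short delayed-ack timer (omitted here) ensures a
         * trailing odd packet is still acknowledged promptly */
    }

    int main(void)
    {
        struct recv_state s = { 0 };
        for (int i = 0; i < 6; i++)        /* six data packets -> three acks */
            on_data_packet(&s);
        return 0;
    }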

AuriStor Rx is fully compatible with IBM AFS and OpenAFS. When AuriStor Rx is used by both endpoints, sustained rates exceeding 8.2 Gbits/second per listener thread can be achieved. Significant improvements can be obtained simply by deploying AuriStor Rx on the endpoint transmitting bulk data.

AuriStor File Server Performance Improvements

The AuriStorFS file server has been designed to avoid lock contention and to improve horizontal scale through the use of dynamic thread pools. AuriStorFS servers are capable of processing thousands of RPCs at a time, limited only by OS and hardware capabilities. A single, properly configured AuriStorFS file server can service the load of more than sixty OpenAFS 1.6 file servers. AuriStorFS server threads contend for resources less frequently and hold them exclusively for shorter periods. The result is a significantly reduced risk of cascading call waits leading to meltdowns. AuriStorFS file servers are also resistant to denial-of-service conditions caused by unresponsive cache manager callback services, whether intentional or not.
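
The following is a highly simplified sketch of the dynamic-thread-pool pattern, not the AuriStorFS implementation: worker threads are created on demand when requests back up and exit after a period of idleness, so server capacity tracks the offered load instead of being fixed at start-up. The limits and request type are placeholders.

    /* Illustrative dynamic thread pool -- not AuriStorFS source code. */
    #include <pthread.h>
    #include <stddef.h>
    #include <time.h>

    #define MAX_THREADS  512
    #define IDLE_TIMEOUT 30                /* seconds before an idle worker exits */

    struct request {
        struct request *next;
        void (*handler)(struct request *); /* placeholder RPC handler */
    };

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static struct request *queue_head, *queue_tail;
    static int nthreads, nidle;

    static void *worker(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        for (;;) {
            while (!queue_head) {
                struct timespec ts;
                clock_gettime(CLOCK_REALTIME, &ts);
                ts.tv_sec += IDLE_TIMEOUT;
                nidle++;
                int rc = pthread_cond_timedwait(&cond, &lock, &ts);
                nidle--;
                if (rc != 0 && !queue_head) {   /* idle too long: shrink pool */
                    nthreads--;
                    pthread_mutex_unlock(&lock);
                    return NULL;
                }
            }
            struct request *req = queue_head;   /* dequeue one request */
            queue_head = req->next;
            if (!queue_head)
                queue_tail = NULL;
            pthread_mutex_unlock(&lock);

            req->handler(req);                  /* service the call */

            pthread_mutex_lock(&lock);
        }
    }

    /* Called on the arrival path for each incoming call (illustrative). */
    void submit(struct request *req)
    {
        pthread_mutex_lock(&lock);
        req->next = NULL;
        if (queue_tail)
            queue_tail->next = req;
        else
            queue_head = req;
        queue_tail = req;

        if (nidle == 0 && nthreads < MAX_THREADS) {  /* grow pool under load */
            pthread_t tid;
            if (pthread_create(&tid, NULL, worker, NULL) == 0) {
                pthread_detach(tid);
                nthreads++;
            }
        }
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }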

AuriStor server deployments can consist of fewer physical servers and thereby save organizations money that would otherwise be spent on electricity, cooling, server acquisition, and maintenance.

Most importantly, workflows involving multiple producers and consumers sharing a common set of directories or files are now safe to execute in the /afs name space.

AuriStorFS Database Server Performance Improvements

AuriStorFS and AFS rely upon a number of distributed databases to store Location information about file servers and volumes and Protection information about users and groups. The distributed transaction processing protocol is UBIK. The AuriStor UBIK implementation is built upon the same Rx RPC stack and thread pools as the file servers. Each database server instance can process thousands of requests in parallel with lower request/response latency.

Improvements in UBIK configuration management and the quorum establishment algorithms permit the use of multiple IP addresses per server and database quorums that span private and public networks.
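
As a minimal illustration of the principle (not the UBIK implementation), quorum can be counted per database server identity rather than per network address, so a server that answers on any one of its registered endpoints, private or public, contributes exactly one vote toward the majority. The types below are invented for this sketch.

    /* Illustrative quorum check -- not the UBIK implementation. Each database
     * server is a single voting identity that may be reachable on several
     * endpoints; it contributes at most one vote no matter which endpoint
     * responded. */
    #include <stdbool.h>
    #include <stddef.h>

    struct db_server {
        const char *uuid;            /* server identity, not an address */
        const char *endpoints[4];    /* registered addresses, private or public */
        size_t      n_endpoints;
        bool        responded;       /* did any endpoint answer the vote? */
    };

    /* Quorum requires a strict majority of server identities. */
    static bool have_quorum(const struct db_server *servers, size_t n)
    {
        size_t votes = 0;
        for (size_t i = 0; i < n; i++)
            if (servers[i].responded)
                votes++;
        return votes > n / 2;
    }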

Functional Improvements

The AuriStor File System implements many small but important functional improvements:

  • Mandatory locking. Windows applications expect mandatory locks to be enforced by the file server; AuriStorFS file servers enforce them.
  • Per file ACLs, ACL inheritance and cross directory hard links.
  • Volume status information is updated with each file server RPC, which avoids significant numbers of status RPCs from clients and permits further efficiencies in client caching.

Configuration Improvements

  • A new CellServDB format supports a broad range of endpoint types, including IPv6, and permits arbitrary port numbers, priorities, and weights for load balancing (illustrated in the sketch after this list).
  • Other configuration files such as ThisCell, TheseCells, CellAlias, NetRestrict, NetInfo, krb.conf, krb.excl, etc., and all command-line parameters are consolidated into a flexible configuration profile.
  • Log rotation is compatible with “logrotate”.
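
To make the load-balancing semantics concrete, the sketch below models an endpoint record as carrying a priority and a weight, with DNS SRV-like selection: prefer the lowest priority, and share traffic within a priority tier in proportion to weight. The field names and values are invented for illustration and are not the actual CellServDB syntax.

    /* Hypothetical endpoint record -- field names invented for this sketch;
     * this is not the actual CellServDB format. Lower priority values are
     * preferred; among endpoints of equal priority, weight biases the share
     * of traffic each receives. */
    #include <stdint.h>

    enum ep_family { EP_IPV4, EP_IPV6, EP_DNS_NAME };

    struct cell_endpoint {
        enum ep_family family;
        const char    *address;    /* "198.51.100.10", "2001:db8::10", or a name */
        uint16_t       port;       /* arbitrary port, not fixed at 7003 */
        uint16_t       priority;   /* prefer the lowest priority first */
        uint16_t       weight;     /* load-balance within a priority tier */
    };

    /* Example entries for a hypothetical cell "example.org". */
    static const struct cell_endpoint example_org[] = {
        { EP_IPV6, "2001:db8::10",  7003, 10,  60 },
        { EP_IPV4, "198.51.100.10", 7003, 10,  40 },
        { EP_IPV4, "203.0.113.7",   7703, 20, 100 },  /* backup, non-standard port */
    };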

Client Improvements

The AuriStor File System distribution includes simplified installation packages for:

  • Red Hat Enterprise Linux (6, 7, 8, 9)
  • AlmaLinux (8, 9)
  • Rocky Linux (8, 9)
  • CentOS (7, 8)
  • Amazon Linux 2
  • Oracle Linux
  • Debian Jessie (8), Stretch (9), Buster (10), and Bullseye (11)
  • Fedora 35, 36, 37
  • Ubuntu Xenial (16.04), Bionic (18.04), Focal Fossa (20.04), Jammy Jellyfish (22.04)
  • macOS (Sierra, High Sierra, Mojave, Catalina, Big Sur, Monterey, Ventura) on Intel and Apple Silicon
  • Microsoft Windows (Windows 7 through Windows 11)

Not only are the installers digitally signed, but so are the executable binaries, shared libraries, and kernel modules. On Apple's macOS and Microsoft Windows, digital signatures are necessary for integration with built-in firewall services. The Windows installer is an all-in-one package providing all necessary 64-bit and 32-bit components, including Heimdal Kerberos/GSS assemblies.


Ready to migrate from OpenAFS? Contact us to learn more.