Comparison of AuriStor File System and AFS
The AuriStor File System inherits the best features and capabilities of the /afs model and addresses its biggest weaknesses and limitations. Upgrading from AFS to AuriStorFS can be done incrementally. AuriStorFS clients and servers integrate two file system protocols: AuriStorFS and AFS. The AFS protocol stack is backward compatible with IBM AFS 3.6 and all OpenAFS releases. The AuriStorFS protocol stack provides enhanced functionality, performance optimizations, and future extensibility. AuriStorFS caching algorithms are more efficient and reduce overall network traffic. The following table provides a baseline feature comparison.
Feature | AuriStorFS | OpenAFS 1.8
--- | --- | ---
Year 2038 Safe | Yes | No |
Timestamp Granularity | 100ns (UNIX Epoch) | 1s (UNIX Epoch) |
Rx Listener Single Thread Throughput | <= 8.2 gbits/second [A] | <= 2.4 gbits/second
Rx Listener Threads per Service [B] | Up to 48 | 1
Rx Window Size (default) | 128 packets / 180.5 KB | 32 packets / 44 KB
Rx Window Size (maximum) | 65,535 packets / 90.2 MB | 32 packets / 44 KB
Rx Congestion Window Validation | Yes | No |
Volume IDs per Cell | 2^64 | 2^31
Object IDs per Volume | 2^95 directories and 2^95 files | 2^30 directories and 2^30 files
Objects per Volume | 2^90 directories or files | 2^26 directories or files
Objects per Directory [C] | Up to 2,029,072 | Up to 64,447
Maximum Distributed DB Size | 16 exabytes (2^64 bytes) | 2 gigabytes (2^31 bytes)
Access Control Lists (ACL) | Per Object | Per Directory |
Access Control Entries (ACE) per ACL | More than 400 [D] | 20
Directory ACL Inheritance | Yes | N/A |
Volume Maximum Access Control Policies | Yes | No |
Mandatory Locking | Yes | No |
GSS Authentication (YFS-RxGK) | Yes | No |
AES-256/SHA-1 Wire Privacy | Yes | No |
AES-256/SHA-2 Wire Privacy (RFC 8009) | Yes | No |
Wire Integrity Protection | YFS-RxGK: Yes; RxKad: No | No |
Mandatory Security Levels [E] | Yes | No
Cache Poisoning Attack Protection [F] | YFS-RxGK: Yes; RxKad: No | No
Combined Identity Authentication | YFS-RxGK: Yes; RxKad: No | No |
Perfect Forward Secrecy | YFS-RxGK: Yes; RxKad: No | No |
Secure Callbacks Connections | YFS-RxGK: Yes; RxKad: No | No |
Default Volume Quota | 20 GB | 5000 KB |
Maximum Assignable Quota | 16 zettabytes (2^74 bytes) | 2 terabytes (2^41 bytes)
Maximum Reported Volume Size | 16 zettabytes (2^74 bytes) | 2 terabytes (2^41 bytes)
Maximum Volume Size | 16 zettabytes (2^74 bytes) | 16 zettabytes (2^74 bytes)
Maximum Transferable Volume Size | 16 zettabytes (2^74 bytes) | 5.639 terabytes [G]
Maximum Partition Size | 16 zettabytes (2^74 bytes) | 16 zettabytes (2^74 bytes)
Servers run unprivileged | Yes | No, must be run as “root” |
UBIK quorum initial establishment | 23 to 40 seconds | 75 to 120 seconds |
UBIK quorum establishment after coordinator restart | 1 second | 75 to 165 seconds |
UBIK quorum establishment after coordinator shutdown | 23 to 56 seconds | 75 to 165 seconds |
UBIK servers per cell (usable) | Up to 80 | Up to 5 |
UBIK clone servers | Voting: Yes; Non-voting: Yes | Voting: Yes; Non-voting: No |
UBIK server arbitrary ranking | Yes | No |
POSIX O_DIRECT support | Yes | No |
IBM AFS 3.6 client support | Yes (AFS protocol only) | Yes |
OpenAFS 1.x client support | Yes (AFS protocol only) | Yes |
AuriStorFS client support | Yes | Yes (AFS protocol only) |
IBM AFS 3.6 DB server support | Yes (AFS protocol only) | Yes |
OpenAFS 1.x server support | Yes (AFS protocol only) | Yes |
AuriStorFS server support | Yes | Yes (AFS protocol only) |
DB Servers support AuriStorFS file servers | Yes | Yes (AFS protocol only) |
DB Servers support AFS file servers | Yes | Yes |
Thread safe libraries | Yes | No |
File Lock State Callback Notification | Yes | No |
Atomic Create File in Locked State | Yes | No |
Valid AFS3 Volume Status Info Replies | Yes | No |
Server Thread Limits [H] | Up to OS capability | 256
Dynamic Server Thread Pools | Yes | No |
Linux reflink capable [I] | Yes | No
File Server Meltdowns [J][K] | No | Yes
IPv6 Ready | Yes | No |
Microsoft DirectAccess compatible | Yes | No
Kerberos Profile based configuration | Yes | No |
[A] Per Rx Listener thread: on RHEL 7 (or similar), AuriStorFS can support one Rx Listener thread per CPU core, permitting saturation of multiple bonded 10gbit network interfaces.
[B] Multiple listener threads are supported on Linux kernels that provide the necessary functionality: RHEL 6.7 and later, RHEL 7 and later, and all Linux distributions based upon the Linux 4.x kernel.
[C] File names longer than 12 bytes require an additional directory entry for every 32 characters. Large AuriStorFS directories are incompatible with IBM AFS and OpenAFS clients and with backup tools that parse volume dumps.
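To make the arithmetic in note [C] concrete, here is a minimal sketch; the rule is taken from the note above, while the helper name and the ceiling-division formulation are ours:

```c
#include <stddef.h>

/* Entries consumed by one file name under the rule in note [C]:
 * names of 12 bytes or fewer fit in a single directory entry, and
 * each further 32 characters (or part thereof) costs one more entry.
 * Illustrative only; this is not AFS or AuriStorFS source code. */
static unsigned int dir_entries_for_name(size_t name_len)
{
    if (name_len <= 12)
        return 1;                               /* fits in the base entry */
    return 1 + (unsigned int)((name_len - 12 + 31) / 32);  /* base + spill */
}
```

For example, a 100-byte name costs 1 + ceil(88/32) = 4 entries, so long names reduce the effective per-directory object counts shown in the table.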
[D] AuriStorFS ACEs are variable length. The maximum number that can be assigned to an object depends upon the number of IDs.
[E] A Security Level is defined by an Rx security class (rxkad or rxgk) combined with cryptographic requirements for data privacy and integrity protection. Security Levels are enforced by the File Server.
[F] AFS clients are susceptible to cache poisoning attacks when the AFS token session key shared with the file server is visible to the end user, who can then impersonate the file server to the cache manager without detection.
[G] The maximum transferable volume size is limited by the capabilities of the OpenAFS Rx implementation. The maximum number of bytes that can be transferred by a single Rx call in OpenAFS is approximately 5.639 terabytes.
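A hedged back-of-envelope reading of note [G] (the 32-bit packet sequence space is our assumption; only the 5.639 terabyte figure comes from the note itself): if a single Rx call can number at most 2^32 packets, and each packet carries roughly 1313 bytes of usable payload, then

2^32 packets × ~1313 bytes/packet ≈ 5.639 × 10^12 bytes ≈ 5.639 terabytes,

which matches the ceiling quoted above.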
[H] 64-bit Linux supports up to ~16,000 threads.
[I] Linux BTRFS, XFS and OCFS2 reflinks are used to implement atomic copy-on-write operations that ensure uniform access time to RW volumes after “vos release” and “vos backup” operations.
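For readers unfamiliar with reflinks, the sketch below shows the kind of Linux primitive note [I] refers to: the FICLONE ioctl, which atomically shares a source file's extents with a destination file, so data is duplicated only when one side is later written. This illustrates the kernel API on BTRFS, XFS and OCFS2; it is not AuriStorFS code.

```c
#include <fcntl.h>
#include <linux/fs.h>      /* FICLONE */
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <source> <clone>\n", argv[0]);
        return 1;
    }
    int src = open(argv[1], O_RDONLY);
    int dst = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (src < 0 || dst < 0) {
        perror("open");
        return 1;
    }
    /* Share src's extents with dst atomically; no file data is copied
     * until either file is modified (copy-on-write). */
    if (ioctl(dst, FICLONE, src) < 0) {
        perror("ioctl(FICLONE)");
        return 1;
    }
    close(src);
    close(dst);
    return 0;
}
```

Because the clone is effectively a metadata-only operation, it completes quickly regardless of file size, which is what makes uniform access times after “vos release” and “vos backup” achievable.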
[J] OpenAFS file servers are limited to processing slightly more than one remote procedure call at a time, regardless of the number of configured worker threads. A simple test using two UNIX client machines demonstrates the limitation. On client A, copy a file large enough to require several minutes to transfer into a directory. On client B, run “ls -l” (a directory listing with status information) on the directory receiving the copy. The “ls -l” will not complete until client A’s copy completes. When the number of clients accessing the target directory exceeds the number of worker threads, the file server is unable to respond to any client request until the copy completes.
[K] Another meltdown scenario is triggered by IBM AFS and OpenAFS clients that experience soft deadlocks during normal AFS3 callback processing. When a soft deadlock occurs, file server worker threads can remain blocked for several minutes, and client failover to other file servers can tie up those servers’ worker threads as well.