Comparison of AuriStor File System and AFS

The AuriStor File System inherits the best features and capabilities of the /afs model and addresses its biggest weaknesses and limitations. Upgrading from AFS to AuriStorFS can be done incrementally. AuriStorFS clients and servers integrate two file system protocols: AuriStorFS and AFS. The AFS protocol stack is backward compatible with IBM AFS 3.6 and all OpenAFS releases, while the AuriStorFS protocol stack provides enhanced functionality, performance optimizations, and future extensibility. AuriStorFS caching algorithms are also more efficient and reduce overall network traffic. The following table provides a baseline feature comparison.

| Feature | OpenAFS 1.6 | AuriStor 1.0 |
| --- | --- | --- |
| Year 2038 Safe | No | Yes |
| Timestamp Granularity | 1 s (UNIX epoch) | 100 ns (UNIX epoch) |
| Rx Listener Thread Throughput | <2.4 Gbit/s | >8.2 Gbit/s |
| Rx Listener Threads per Service (RHEL 7) | 1 | >1 per processor thread |
| Rx Window Size | 32 packets / 44 KB | 60 packets / 84 KB plus 1 MB buffering |
| Rx Addressing | IPv4 | IPv4 / IPv6 |
| Volume IDs per Cell | 2^31 | 2^64 |
| Object IDs per Volume | 2^30 directories and 2^30 files | 2^95 directories and 2^95 files |
| Objects per Volume | 2^26 directories or files | 2^90 directories or files |
| Maximum Distributed DB Size | 2 gigabytes (2^31 bytes) | 16 exabytes (2^64 bytes) |
| Access Control Lists (ACL) | Per Directory | Per Object |
| Access Control Entries (ACE) per ACL | 20 | Unlimited |
| Directory ACL Inheritance | No | Yes |
| Volume Access Control Policies | No | Yes |
| Mandatory Locking | No | Yes |
| GSS Authentication (RxGK) | No | Yes |
| AES-256/SHA-1 Wire Privacy | No | Yes |
| AES-256/SHA-2 Wire Privacy (RFC 8009) | No | Yes |
| Mandatory Security Levels* | No | Yes |
| Cache Poisoning Attack Protection** | No | yfs-rxgk: Yes; rxkad: No |
| Combined Identity Authentication | No | yfs-rxgk: Yes; rxkad: No |
| Perfect Forward Secrecy | No | yfs-rxgk: Yes; rxkad: No |
| Default Volume Quota | 5000 KB | 20 GB |
| Maximum Assignable Quota | 2 terabytes (2^41 bytes) | 16 zettabytes (2^74 bytes) |
| Maximum Reported Volume Size | 2 terabytes (2^41 bytes) | 16 zettabytes (2^74 bytes) |
| Maximum Volume Size | 16 zettabytes (2^74 bytes) | 16 zettabytes (2^74 bytes) |
| Maximum Partition Size | 16 zettabytes (2^74 bytes) | 16 zettabytes (2^74 bytes) |
| Servers run as “root” | Yes | No |
| UBIK quorum initial establishment | 75 to 120 seconds | 23 to 40 seconds |
| UBIK quorum re-establishment | 75 to 165 seconds | 23 to 56 seconds |
| UBIK servers per cell (usable) | Up to 5 | Up to 80 |
| UBIK clone servers | Voting: Yes; Non-voting: No | Voting: Yes; Non-voting: Yes |
| UBIK server ranking | IPv4 address (low to high) | Arbitrary assignment |
| POSIX O_DIRECT support | No | Yes |
| iOS support | No | Yes |
| IBM AFS 3.6 client support | Yes | Yes (AFS3 protocol only) |
| OpenAFS 1.x client support | Yes | Yes (AFS3 protocol only) |
| AuriStor 1.0 client support | Yes (AFS3 protocol only) | Yes |
| IBM AFS 3.6 DB server support | Yes | Yes (AFS3 protocol only) |
| OpenAFS 1.x server support | Yes | Yes (AFS3 protocol only) |
| AuriStor 1.0 server support | Yes (AFS3 protocol only) | Yes |
| DB servers support AuriStor file servers | Yes (AFS3 protocol only) | Yes |
| DB servers support AFS file servers | Yes | Yes |
| Thread-safe libraries | No | Yes |
| File Lock State Callback Notification | No | Yes |
| Atomic Create File in Locked State | No | Yes |
| Valid AFS3 Volume Status Info Replies | No | Yes |
| Server Thread Limits | 256 | Up to OS capability |
| Dynamic Thread Pools | No | Yes |
| Vice Partition reflink support*** | No | Yes |
| File Server Meltdowns**** | Yes | No |
| IPv6 Ready | No | Yes |
| Microsoft DirectAccess compatible | No | Yes |
| Kerberos Profile-based configuration | No | Yes |
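
The size and quota limits above are powers of two, so the parenthetical figures are binary quantities: 2^41 bytes is 2 TiB, 2^64 bytes is 16 EiB, and 2^74 bytes is 16 ZiB. The short Python check below simply confirms those conversions; it is included only to make the exponents in the table concrete.

```python
# Sanity-check the power-of-two limits quoted in the table against
# their human-readable binary-prefix equivalents.
GIB = 2**30   # gibibyte
TIB = 2**40   # tebibyte
EIB = 2**60   # exbibyte
ZIB = 2**70   # zebibyte

assert 2**31 == 2 * GIB    # maximum distributed DB size (OpenAFS): 2 GB
assert 2**64 == 16 * EIB   # maximum distributed DB size (AuriStorFS): 16 EB
assert 2**41 == 2 * TIB    # maximum assignable quota / reported size (OpenAFS): 2 TB
assert 2**74 == 16 * ZIB   # maximum volume and partition size: 16 ZB

print("All size conversions match the table.")
```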

*A Security Level is defined as an Rx security class (rxkad or rxgk) combined with cryptographic requirements for data privacy and integrity protection. Security Levels are enforced at the File Server.
**AFS clients are susceptible to cache poisoning attacks because the session key in the AFS token, which the cache manager uses to authenticate the file server, is visible to the end user. The end user can therefore spoof the file server to the AFS cache manager without detection.
***Linux Btrfs and XFS reflinks are used to implement atomic copy-on-write operations that ensure uniform access times to RW volumes after “vos release” and “vos backup” operations; a sketch of the underlying reflink clone primitive follows.
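
For illustration only, here is a minimal sketch of the reflink clone primitive that Btrfs and XFS expose through the Linux FICLONE ioctl; the new file shares the source's data blocks and diverges only when one of the files is later written. This is generic Linux code, not AuriStorFS source, and the vice-partition paths are hypothetical.

```python
import fcntl

FICLONE = 0x40049409  # Linux ioctl request: clone (reflink) an entire file

def reflink_copy(src_path: str, dst_path: str) -> None:
    """Create dst_path as an atomic copy-on-write clone of src_path.

    Both paths must reside on the same Btrfs or XFS (reflink-enabled)
    filesystem; the clone shares data blocks with the source until either
    file is modified.
    """
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())

if __name__ == "__main__":
    # Hypothetical files on a reflink-capable vice partition.
    reflink_copy("/vicepa/volume.data", "/vicepa/volume.clone")
```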
****OpenAFS File Servers are effectively limited to processing slightly more than one remote procedure call at a time, regardless of the number of configured worker threads. A simple test using two UNIX client machines demonstrates the adverse side effects (a sketch of the test appears after these notes). On client one, copy a file that is large enough to require several minutes to transfer into a directory. On client two, run “ls -l” (a directory listing with stat information) on the directory to which the file is being copied. The second client will be unable to complete the directory listing until the first client's copy completes. When the number of clients accessing the target directory exceeds the number of worker threads, the file server becomes unable to respond to any client request until the copy completes. This is one example of a file server meltdown scenario.
Another meltdown scenario is triggered by IBM and OpenAFS clients that experience soft deadlocks during normal AFS3 CallBack processing. When a soft deadlock occurs, file server worker threads can remain blocked for several minutes, and client failover to other file servers can tie up worker threads on those servers as well.
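
Below is a minimal sketch of the second client's side of the meltdown test described above, assuming a hypothetical /afs/example.com/incoming directory into which another client is copying a large file: it repeatedly performs the equivalent of “ls -l” and reports how long each pass takes. Against an OpenAFS 1.6 file server the pass time grows to roughly the duration of the competing copy; per the table above, it should stay flat against an AuriStorFS file server.

```python
import os
import time

# Hypothetical AFS directory into which another client is copying a large file.
TARGET_DIR = "/afs/example.com/incoming"

def list_with_stat(path: str) -> None:
    """Equivalent of `ls -l`: enumerate entries and stat each one.

    Each stat may require an RPC to the file server once the competing
    write has broken the cache manager's callbacks.
    """
    for entry in os.scandir(path):
        entry.stat()

while True:
    start = time.monotonic()
    list_with_stat(TARGET_DIR)
    print(f"directory listing took {time.monotonic() - start:.1f} s")
    time.sleep(5)
```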


Ready to migrate from OpenAFS? Contact us to learn more.