auristor_migration - Migrating to the AuriStor File System from OpenAFS
The AuriStor File System is an AFS3-compatible distributed file system. AFS3 is deployed in administrative domain units called cells. An existing installation of OpenAFS can be migrated in place without flag days to a fully functional AuriStorFS cell.
AuriStorFS Cache Managers are fully compatible with OpenAFS services and can be deployed at any time.
AuriStorFS services are backward compatible with OpenAFS and IBM AFS 3.6 cache managers and administration command-line tools.
AuriStorFS File Servers can be deployed in a cell whose database services are provided by OpenAFS. In this configuration, the AuriStorFS File Servers' advanced security features are unavailable.
AuriStorFS Database Services can be deployed in a cell whose coordinator (or synchronization site) is provided by OpenAFS. In this mode all of the AuriStorFS protocol enhancements are disabled.
AuriStorFS Database Services support OpenAFS 1.x compatible formats for the protection database and location database. As a result, OpenAFS database servers can be upgraded in-place.
AuriStorFS File Servers store volume data in an incompatible proprietary vice partition format. OpenAFS file servers therefore cannot be upgraded in-place. Instead, existing volume data must be migrated onto the AuriStorFS File Server using the vos(1) command suite.
The AuriStor File System uses different terminology to describe its services than OpenAFS system administrators are used to.
The OpenAFS documentation discusses various databases such as the Volume Location Database and the Protection Database. In AuriStorFS the architecture is similar but they are referred to as the Location Service and the Protection Service. That a particular service's data is stored in a database is an implementation detail.
In the AuriStor File System the Location Service manages a broader set of location and key data than simply volumes. As such it has been renamed.
In the AuriStor File System the Ubik distributed database instance that OpenAFS documentation referred to as a synchronization site is now called the coordinator.
Start by gathering a list of servers to be updated. The list of Location Servers for a cell can be collected using one of the following methods:
- examine a server CellServDB file on any of the known Location Servers or File Servers.
- execute the bos_listhosts(8) command against any of the known Location Servers or File Servers.
- execute the fs_listcells(1) command on any client in the cell.
- execute the udebug(1) command with the -long switch against any of the known Location Servers. This is a good method to confirm the results, as the Location Service might be installed on more servers than are published to Cache Managers or File Servers.
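As a sketch of the udebug(1) method (the server name is illustrative; 7003 is the AFS3 Volume Location service port):

```shell
# Query the Ubik state of the Location Service on one known server;
# repeat for each candidate host to confirm the full membership list.
udebug db1.your-cell-name.com 7003 -long
```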
A list of File Servers can be collected with the OpenAFS vos listaddrs -printuuid command, the AuriStorFS vos_listfs(1) command, or the AuriStorFS vos_eachfs(1) command.
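For example (the cell name is illustrative, and the commands assume valid tokens or -localauth):

```shell
# OpenAFS: registered file server addresses and UUIDs
vos listaddrs -printuuid -cell your-cell-name.com

# AuriStorFS: file servers known to the Location Service
vos listfs -cell your-cell-name.com
```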
Any File Servers that are also Location Servers need to be vacated before conversion.
Any RW volumes can be migrated to another machine with the vos_move(1) command.
For RO volumes, the vos_remove(1) command can be used to remove the replication site after replacement replica sites are configured with vos_addsite(1) and populated with vos_release(1).
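A sketch of vacating a combined server (volume, server, and partition names are placeholders):

```shell
# Move an RW volume off the server being converted
vos move -id sw.gcc -fromserver db3.your-cell-name.com -frompartition a \
    -toserver fs1.your-cell-name.com -topartition a

# For an RO volume: add a replacement replica site, populate it,
# then remove the old replication site
vos addsite -server fs1.your-cell-name.com -partition a -id sw.gcc
vos release -id sw.gcc
vos remove -server db3.your-cell-name.com -partition a -id sw.gcc.readonly
```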
Organizations are encouraged to begin deploying AuriStorFS by first adding an AuriStorFS File Server to the existing cell and then populating it with volume data. This will provide the organization with a taste of the performance and scaling improvements that AuriStorFS offers above and beyond the capabilities of OpenAFS.
It is possible for AuriStorFS Location Servers and AuriStorFS Protection Servers to be added to an existing OpenAFS cell in an AFS3-compatible UBIK mode. This permits evaluating organizations to measure the performance benefits of the AuriStorFS services versus their OpenAFS counterparts without risking the introduction of AuriStorFS extended data into the cell's Location Service, Protection Service, and Backup Service.
The client configuration file, /etc/yfs/yfs-client.conf must be configured for the cell. If the AuriStor distributed cellservdb.conf included in the installation packages does not include configuration for the cell to be upgraded, it will be necessary to provide it in the /etc/yfs/yfs-client.conf file. The administration tools use the client configuration when establishing connections to the Location Service, the Protection Service, and the Backup Service.
The file /etc/yfs/server/yfs-server.conf must be configured for the cell. The Location Service and Protection Service servers use the [defaults] databases section to specify the UBIK database configuration. The [cells] section, if present, is used by the File Server, the Volume Server, and the Salvage Server to locate the Location Service and Protection Service. DNS SRV records or DNS AFSDB records will be used if the [cells] definition for the cell is not present.
Additionally, the /etc/yfs/server/BosConfig file must reflect the paths to the newly-installed server binaries. Note that in AuriStorFS, options to all servers can be configured via the /etc/yfs/server/yfs-server.conf file. The use of command line switches is discouraged.
Once the organization is prepared to commit to AuriStorFS and take advantage of the enhanced security capabilities and extended name spaces, it is time to upgrade the database services to AuriStorFS.
The safe method of migrating database servers from OpenAFS to AuriStorFS without an outage is:
One at a time, convert all but one of the OpenAFS database servers to AuriStorFS database servers running in the AFS3-compatible UBIK mode. When a single OpenAFS database server remains, it will be the coordinator. For each server:
1. Shut down the OpenAFS services.
2. Uninstall the OpenAFS binaries.
3. Install the AuriStorFS binaries and configuration files.
4. Enable the afscompat option in the [defaults] databases stanza of the /etc/yfs/server/yfs-server.conf file.
5. Copy or move the *.DB0 and *.DBSYS1 files from /usr/afs/db or /Library/Auristor/Tools/var/openafs/db to the /etc/yfs directory.
6. Start the AuriStorFS services.
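On a Linux host with systemd, the per-server conversion might be sketched as follows; the service unit names are assumptions and will vary by platform and packaging:

```shell
# Stop OpenAFS, then uninstall its packages and install AuriStorFS
systemctl stop openafs-server

# After installing AuriStorFS and enabling afscompat in
# /etc/yfs/server/yfs-server.conf, move the Ubik database files into place
cp -p /usr/afs/db/*.DB0 /usr/afs/db/*.DBSYS1 /etc/yfs/
chown yfsserver:yfsserver /etc/yfs/*.DB0 /etc/yfs/*.DBSYS1

# Start the AuriStorFS services (unit name illustrative)
systemctl start auristorfs-server
```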
Convert the coordinator from OpenAFS to AuriStorFS without the afscompat option.
Note: After the last OpenAFS server shuts down, it will take (2 * number_of_servers + 1) * 75 seconds before an AuriStorFS database server running in AFS compatibility mode can be elected coordinator. For a three database server cell, that is 525 seconds.
If a write transaction is in progress when the shutdown occurs, all read and write operations will be blocked until a new coordinator is elected. In other words, in a cell with three database servers, if the final OpenAFS database server fails, there will be a potential cell outage of 525 seconds before read and write transactions can resume.
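The election delay formula can be checked directly; for example:

```shell
# (2 * number_of_servers + 1) * 75 seconds
n=3
echo $(( (2 * n + 1) * 75 ))   # prints 525
```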
1. Shut down the OpenAFS services.
2. Uninstall the OpenAFS binaries.
3. Install the AuriStorFS binaries and configuration files without the afscompat option in the [defaults] databases stanza of the /etc/yfs/server/yfs-server.conf file.
4. Copy or move the *.DB0 and *.DBSYS1 files from /usr/afs/db or /Library/Auristor/Tools/var/openafs/db to the /etc/yfs directory.
5. Start the AuriStorFS services. This server will be elected coordinator within 56 seconds.
One at a time, restart each of the AuriStorFS database servers after disabling AFS3 compatibility mode. Each restarted server will resume voting for the coordinator within two seconds.
1. Edit the /etc/yfs/server/yfs-server.conf file to remove the afscompat option from the [defaults] databases block.
2. Restart the AuriStorFS services.
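A sketch of the per-server restart; the sed edit assumes the option was written exactly as "afscompat = yes", so verify the file rather than relying on this:

```shell
# Disable AFS3 compatibility mode, then restart all services
sed -i 's/afscompat = yes/afscompat = no/' /etc/yfs/server/yfs-server.conf
bos restart -server db1.your-cell-name.com -all -localauth
```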
The sections below describe how to convert an OpenAFS cell named your-cell-name.com with three database servers to AuriStorFS.
AuriStorFS Servers such as the fileserver, volserver, and salvageserver and the administration command-line tools including vos, pts, and backup initiate outbound connections to cell services using the same configuration as the afsd cache manager.
The contents of the /etc/yfs/yfs-client.conf required by servers can be obtained from the OpenAFS CellServDB and ThisCell files.
The contents of ThisCell should be specified in the [defaults] stanza.
    [defaults]
        thiscell = your-cell-name.com
The CellServDB data for the cell should be specified in the [cells] section.
    [cells]
        your-cell-name.com = {
            description = "My test cell"
            servers = {
                db1.your-cell-name.com = { address = 10.0.0.1 }
                db2.your-cell-name.com = { address = 172.16.150.1 }
                db3.your-cell-name.com = { address = 192.168.5.2 }
                db4.your-cell-name.com = { address = 192.168.5.1 }
            }
        }
The description value can include any text string. Tools might use this string to describe the cell to end users when more detail than the cell name is desired. In the OpenAFS CellServDB record the description follows the cellname.
The servers block can include zero or more server entries consisting of:
A valid DNS host name. The name can be resolved using A, AAAA or CNAME records.
The address can be an IPv4 or IPv6 address. Zero or more addresses can be specified for a server. When transitioning from OpenAFS specify a single IPv4 address.
If the [cells] your-cell-name.com block is not provided the fileserver, volserver and salvageserver will use DNS SRV record lookups when other services within the cell must be contacted.
OpenAFS Servers use the CellServDB, ThisCell, NetRestrict, NetInfo, krb.excl and krb.conf files to configure parameters which are set in /etc/yfs/server/yfs-server.conf on AuriStorFS Servers. Additionally, all command line parameters to AuriStorFS daemons should be set in the configuration file instead of the BosConfig file.
The [defaults] section of the server configuration file holds all of the information previously obtained from these files:
The contents of NetInfo should be specified as a list of addresses in the netinfo stanza.
    [defaults]
        netinfo = <address-list>
The contents of NetRestrict should be specified as a list of addresses in the netrestrict stanza.
    [defaults]
        netrestrict = <address-list>
The contents of ThisCell should be specified in the thiscell stanza.
    [defaults]
        thiscell = your-cell-name.com
The CellServDB data for the cell each database server is a member of must be specified in the databases stanza. The databases stanza should contain a servers stanza with one entry per server, giving its address and, optionally, its clone status.
    databases = {
        afscompat = <boolean-value>
        servers = {
            db1.your-cell-name.com = { address = 192.0.2.1 }
            db2.your-cell-name.com = { address = 198.51.100.42 }
            db3.your-cell-name.com = { address = 203.0.113.252 }
            db4.your-cell-name.com = {
                address = 195.51.100.99
                clone = <boolean-value>, "voting" or "non-voting"
            }
        }
    }
The afscompat option when set to a true boolean-value instructs the configured server to operate in AFS3-compatibility mode. This mode is intended for use during trials and cell migrations only. Any server that is running in AFS3-compatibility mode cannot be elected as the coordinator. Such servers are always voting clones.
For example, the following AuriStorFS servers definition on db4.your-cell-name.com identifies it as a voting clone because it is operating in AFS3-compatibility mode:
    [defaults]
        thiscell = your-cell-name.com
        databases = {
            afscompat = yes
            servers = {
                db1.your-cell-name.com = { address = 192.0.2.37 }
                db2.your-cell-name.com = { address = 198.51.100.47 }
                db3.your-cell-name.com = { address = 203.0.113.252 }
                db4.your-cell-name.com = { address = 198.51.100.99 }
            }
        }
The matching OpenAFS CellServDB definition on the OpenAFS servers is:
    >your-cell-name.com      # Local Cell Configuration
    192.0.2.37               # db1.your-cell-name.com
    198.51.100.47            # db2.your-cell-name.com
    203.0.113.252            # db3.your-cell-name.com
    198.51.100.99            # db4.your-cell-name.com
The databases/servers block differs from the [cells] your-cell-name.com block as follows:
The databases/servers block must include only the addresses which are used by Ubik for database management operations. The server name is used for descriptive purposes only and is never used to query DNS.
The [cells] your-cell-name.com block of server names must be valid DNS host names that can be resolved using DNS A, AAAA, or CNAME records. Each server block can include zero or more reachable addresses. The contents are identical to the [cells] your-cell-name.com block added to /etc/yfs/yfs-client.conf.
If no [cells] your-cell-name.com block is specified, the server will instead use DNS SRV records [RFC 5864] and DNS AFSDB records [RFC 1183].
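For reference, the DNS SRV records defined by RFC 5864 for the Location and Protection Services take the following form (hostnames, TTL, priority, and weight are illustrative; 7003 and 7002 are the standard AFS3 VL and PR ports):

    _afs3-vlserver._udp.your-cell-name.com. 3600 IN SRV 0 10 7003 db1.your-cell-name.com.
    _afs3-prserver._udp.your-cell-name.com. 3600 IN SRV 0 10 7002 db1.your-cell-name.com.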
The [kerberos] section of the server configuration file holds all of the information previously obtained from these files:
The contents of krb.conf should be specified as a list of realms in the local_realms stanza.
    [kerberos]
        local_realms = <list of Kerberos v5 realm names>
The contents of krb.excl should be specified as a list of principals in the foreign_principals stanza.
    [kerberos]
        foreign_principals = <list of Kerberos v5 principal names>
Each service can also have options specified in an appropriately-named section; for example, the bosserver can have options specified in the [bosserver] section. One example is running in nofork mode, where the line nofork = yes would be included.
    [bosserver]
        nofork = yes
The databases stanza may also be configured per-service.
    [vlserver]
        databases = { ... }
It is important that the /etc/yfs/server/yfs-server.conf file and the directory containing it be owned by the user the AuriStorFS server processes will run as. Typically this user is named yfsserver and will be created when the AuriStorFS binary packages are installed if your platform supports running AuriStorFS daemons as non-root users.
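On platforms where the packages do not already set this up, the ownership might be applied as follows:

```shell
# Give the server configuration directory to the service account
chown -R yfsserver:yfsserver /etc/yfs/server
chmod 700 /etc/yfs/server
```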
The existing afs key must be installed on the AuriStorFS servers. AuriStorFS servers store the afs key data in two files:
The KeyFile is compatible with IBM AFS and OpenAFS cells. Prior to OpenAFS 1.6.5 the afs key was always a 56-bit parity checked key compatible with the deprecated U.S. Data Encryption Standard (DES). Keys compatible with the DES encryption types, des-cbc-crc, des-cbc-md4 and des-cbc-md5, are referred to as rxkad keys. The KeyFile file stores rxkad keys.
Beginning with the OpenAFS 1.6.5 release the afs key can be any encryption type accepted by the Kerberos v5 standard including aes256-cts-hmac-sha384-192, aes128-cts-hmac-sha256-128, aes256-cts-hmac-sha1-96, aes128-cts-hmac-sha1-96, des3-cbc-sha1, and arcfour-hmac. All of the non-DES afs keys are referred to as rxkad_krb5 keys.
The AuriStor File System also adds a new key type known as yfs-rxgk. The KeyFileExt file stores both rxkad_krb5 and yfs-rxgk keys.
If the cell uses an rxkad key it is strongly recommended (but not required) that the cell be upgraded to use one or more rxkad_krb5 keys before converting the cell to the AuriStor File System. The rxkad key can be broken by brute force in under a day using publicly available cloud services. The How To Rekey guide provides instructions to convert an OpenAFS cell to use the more secure rxkad_krb5 keys. This should be done before continuing to generate the KeyFileExt.
If the cell cannot be upgraded to use rxkad_krb5 keys, then the OpenAFS KeyFile should be copied to /etc/yfs/server/KeyFile on each AuriStorFS server.
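For example, assuming the conventional OpenAFS server configuration directory:

```shell
# Preserve permissions, then assign ownership to the AuriStorFS service user
cp -p /usr/afs/etc/KeyFile /etc/yfs/server/KeyFile
chown yfsserver:yfsserver /etc/yfs/server/KeyFile
```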
If rxkad_krb5 keys are used by the cell, they must be installed into the /etc/yfs/server/KeyFileExt file using the AuriStorFS version of asetkey. Most OpenAFS cells running version 1.6.5 or later will be using a krb5 keytab file named rxkad.keytab. Some installations might have an OpenAFS KeyFileExt file.
If neither an OpenAFS KeyFileExt file nor a krb5 keytab with the existing key is available, a krb5 keytab must be generated. How this is done depends on which Kerberos implementation is in use. Skip this section if a Kerberos v5 keytab, rxkad.keytab, is available.
To avoid changing the key when using MIT Kerberos, you will need to reconstruct the keytab from the existing key. Use OpenAFS asetkey list or AuriStorFS asetkey listkeys to get the key for the newest kvno:
    # asetkey list
    kvno    4: key is: d34dbeefd34dbeef
    kvno    3: key is: baddc4febaddc4fe
    All done.
Then, use ktutil to create a keytab:
    # ktutil
    ktutil:  add_entry -key -p afs/your-cell-name.com@EXAMPLE.ORG \
             -k 4 -e des-cbc-crc
    Key for afs/your-cell-name.com@EXAMPLE.ORG (hex): d34dbeefd34dbeef
    ktutil:  write_kt /etc/yfs/server/rxkad.keytab
    ktutil:  exit
This can be done via the Heimdal ext_keytab command.
    # ext_keytab -k /etc/yfs/server/rxkad.keytab \
      afs/your-cell-name.com@EXAMPLE.ORG
As the desired result is a KeyFileExt file, you will need to convert your keytab. First, generate a list of all the key types it contains via ktutil.
    # ktutil
    ktutil:  rkt /etc/yfs/server/rxkad.keytab
    ktutil:  list -e
    slot KVNO Principal
    ---- ---- ---------------------------------------------------------------------
       1    4 afs/your-cell-name.com@EXAMPLE.ORG (aes256-cts-hmac-sha1-96)
       2    4 afs/your-cell-name.com@EXAMPLE.ORG (aes128-cts-hmac-sha1-96)
       3    4 afs/your-cell-name.com@EXAMPLE.ORG (des3-cbc-sha1)
       4    4 afs/your-cell-name.com@EXAMPLE.ORG (arcfour-hmac)
    # ktutil -k /etc/yfs/server/rxkad.keytab list
    /etc/yfs/server/rxkad.keytab:

    Vno  Type                     Principal                           Aliases
      4  aes256-cts-hmac-sha1-96  afs/your-cell-name.com@EXAMPLE.ORG
      4  aes128-cts-hmac-sha1-96  afs/your-cell-name.com@EXAMPLE.ORG
      4  des3-cbc-sha1            afs/your-cell-name.com@EXAMPLE.ORG
      4  arcfour-hmac-md5         afs/your-cell-name.com@EXAMPLE.ORG
All keys will need to be inserted with asetkey. The all key type will iterate over all supported keys and add them. For example, the above keys have a key version of 4, so a kvno parameter of 4 would need to be specified to the asetkey add command.
    # asetkey add rxkad_krb5 4 all \
      /etc/yfs/server/rxkad.keytab afs/your-cell-name.com@EXAMPLE.ORG
In a cell running an AuriStorFS Location Service, a cell-wide yfs-rxgk key is required in order to support AES256 encryption, protection of cache manager callback connections, and many other enhanced security features. This additional key is created with asetkey add. The yfs-rxgk key should be key version 1, and future keys can increment from there.
Note: Do not create the yfs-rxgk key when adding an AuriStorFS Server to a cell running an OpenAFS Location Service. When converting an OpenAFS cell to AuriStorFS the yfs-rxgk key must be added after all of the database servers are running AuriStorFS and AFS3 compatibility mode has been disabled.
To generate a new yfs-rxgk key, execute the following command:
    # asetkey add yfs-rxgk 1 aes256-cts-hmac-sha1-96 random
The KeyFileExt from the machine the asetkey command was executed on can then be distributed to all AuriStorFS servers.
It is important that the KeyFileExt file and the directory containing it be owned by the user the AuriStorFS server processes will run as. Typically this user is named yfsserver and will be created when the AuriStorFS binary packages are installed if your platform supports running AuriStorFS as non-root users.
The OpenAFS bosserver shares the same cell-wide keys as the AFS cell. Each AuriStorFS bosserver is an independent service that is used to manage a single machine. As such, each AuriStorFS bosserver requires its own Kerberos service principal and keytab. This keytab should be installed in /etc/yfs/server/bos.keytab.
This example creates a bosserver key for server server.your-cell-name.com using MIT Kerberos kadmin:
    # kadmin
    kadmin:  add_principal -randkey afs3-bos/server.your-cell-name.com@EXAMPLE.ORG
    kadmin:  ktadd -k /etc/yfs/server/bos.keytab \
             afs3-bos/server.your-cell-name.com@EXAMPLE.ORG
This example creates a bosserver key for server server.your-cell-name.com using Heimdal kadmin:
    # kadmin
    kadmin> add -r afs3-bos/server.your-cell-name.com@EXAMPLE.ORG
    kadmin> ext_keytab -k /etc/yfs/server/bos.keytab \
            afs3-bos/server.your-cell-name.com@EXAMPLE.ORG
It is important that the /etc/yfs/server/bos.keytab file and the directory containing it be owned by the user the AuriStorFS server processes will run as. Typically this user is named yfsserver and will be created when the AuriStorFS binary packages are installed if your platform supports running AuriStorFS as non-root users.
After populating the KeyFileExt and setting up the AuriStorFS bosserver authentication, each AuriStorFS Location server must be configured with an updated BosConfig and UserListExt file. In addition, after conversion the AuriStorFS Location service can be configured to use GSS-API Kerberos v5 authentication.
Because the bosserver has a key installed, it is possible to use local superuser authentication to configure the bosserver with the bos create command. This is the safest way to make sure the BosConfig file has the proper format and ownership. Any options should already be configured in the yfs-server.conf file.
This would create the location server and protection server processes on server server.your-cell-name.com:
    # bos create -server server.your-cell-name.com -instance vlserver \
      -type simple -cmd "/usr/local/libexec/yfs/vlserver" \
      -localauth
    # bos create -server server.your-cell-name.com -instance ptserver \
      -type simple -cmd "/usr/local/libexec/yfs/ptserver" \
      -localauth
The location server and protection server processes should all be running. The file server should be upgraded as explained in the "FILE SERVERS" section before any data is moved to the server.
The bosserver, vlserver, and volserver do not rely upon the AuriStorFS Protection service for group membership information. Instead, super user status is determined by examining the UserListExt file which replaces the OpenAFS UserList file. The UserList file must be replaced with a UserListExt file, which is installed in /etc/yfs/server/UserListExt.
AuriStorFS servers support two rx security classes:
The rxkad security class is compatible with OpenAFS and authenticates Kerberos 4 principal names even when Kerberos 5 service tickets are used.
The yfs-rxgk security class is incompatible with OpenAFS and authenticates exported names from the negotiated GSS mechanism. These names are represented in the UserListExt as opaque blobs. When used with the GSS Kerberos 5 mechanism, the full Kerberos 5 principal is encoded in the opaque blob.
When migrating from OpenAFS, the UserListExt should contain the list of rxkad authenticated users found in the OpenAFS UserList, with the Kerberos realm appended to the end of each name after an @ sign. For example, if the realm EXAMPLE.ORG is accepted as a local realm, then the super users
    alice.admin bob.admin bob
reported by bos listusers are added to the AuriStorFS server using bos adduser as the rxkad names
    alice.admin@EXAMPLE.ORG bob.admin@EXAMPLE.ORG bob@EXAMPLE.ORG
and yfs-rxgk Kerberos 5 names
    alice/admin@EXAMPLE.ORG bob/admin@EXAMPLE.ORG bob@EXAMPLE.ORG
If two realms EXAMPLE.ORG and AD.EXAMPLE.ORG are accepted as local realms, then the super users
    alice.admin bob.admin bob
might be added as rxkad names
    alice.admin@EXAMPLE.ORG bob.admin@EXAMPLE.ORG bob@EXAMPLE.ORG
    alice.admin@AD.EXAMPLE.ORG bob.admin@AD.EXAMPLE.ORG
and yfs-rxgk Kerberos 5 names
    alice/admin@EXAMPLE.ORG bob/admin@EXAMPLE.ORG bob@EXAMPLE.ORG
    alice/admin@AD.EXAMPLE.ORG bob/admin@AD.EXAMPLE.ORG
This assumes that bob@AD.EXAMPLE.ORG is not trusted as a super user but bob@EXAMPLE.ORG is.
Because the bosserver has a key installed, it is possible to use local superuser authentication to configure the superuser list with the AuriStorFS bos adduser command. To add super users alice.admin and bob.admin on server server.your-cell-name.com using both rxkad and yfs-rxgk security classes:
    # bos adduser -server server.your-cell-name.com -user alice.admin@EXAMPLE.ORG \
      -user bob.admin@EXAMPLE.ORG -type superuser -rxkad -localauth
    # bos adduser -server server.your-cell-name.com -user alice/admin@EXAMPLE.ORG \
      -user bob/admin@EXAMPLE.ORG -type superuser -localauth
This procedure must be performed on location servers as well as on file servers.
Unlike OpenAFS, AuriStorFS restricts access to cell metadata that is not necessary for proper operation of clients. This includes pts memberships, volume statistics, and bosserver process information. By default the restricted_query option, supported by each AuriStorFS service, is set to admin, which prevents users who are not on either the UserListExt or ReaderList from viewing metadata. Members of the ReaderList are granted read-only access to restricted metadata. This file can be configured using bos adduser by replacing -type superuser with -type reader. The following commands permit bob@AD.EXAMPLE.ORG to query restricted metadata using either rxkad or yfs-rxgk security classes:
    # bos adduser -server server.your-cell-name.com \
      -user bob@AD.EXAMPLE.ORG -type reader -rxkad -localauth
    # bos adduser -server server.your-cell-name.com \
      -user bob@AD.EXAMPLE.ORG -type reader -localauth
When restricted_query is set to admin, automated processes such as monitoring tools will need valid authentication tokens. Alternatively, adding
    [defaults]
        restricted_query = anyuser
or
    [vlserver]
        restricted_query = anyuser
will restore the OpenAFS behavior.
In addition to authenticating with the rxkad_krb5 key and the afs/your-cell-name.com@EXAMPLE.ORG Kerberos v5 principal, AuriStorFS location servers can authenticate client entities, both users and machines, using GSS-API Kerberos v5 authentication.
Once authenticated via GSS-API, yfs-rxgk tokens will be issued to permit subsequent authentication to the Key Management Service which is co-located with the AuriStorFS location service. Unlike rxkad_krb5 tokens which are valid for use with every server in the cell, the yfs-rxgk tokens are only valid for a single server. Clients will contact the Key Management Service to acquire a separate token with a unique session key for every connection.
A key for yfs-rxgk needs to be inserted into the Kerberos KDC. So that services can be fully configured before the principal is put into use, the principal should be created disabled.
Install the resulting keytab as /etc/yfs/server/vl.keytab on all of the location servers.
It is important that the /etc/yfs/server/vl.keytab file and the directory containing it be owned by the user the AuriStorFS server processes will run as. Typically this user is named yfsserver and will be created when the AuriStorFS binary packages are installed if your platform supports running AuriStorFS as non-root users.
    # kadmin
    kadmin:  add_principal -randkey -allow_tix yfs-rxgk/_afs.your-cell-name.com@EXAMPLE.ORG
    kadmin:  ktadd -k /etc/yfs/server/vl.keytab yfs-rxgk/_afs.your-cell-name.com
    # kadmin
    kadmin> add --attributes=+disallow-svr -r yfs-rxgk/_afs.your-cell-name.com@EXAMPLE.ORG
    [...]
    kadmin> ext_keytab --keytab=/etc/yfs/server/vl.keytab \
            yfs-rxgk/_afs.your-cell-name.com
For GSS-API Kerberos v5 clients that can leverage DNS TXT records to resolve the Kerberos domain for a service it is beneficial to publish a DNS TXT record:
    _kerberos._afs.your-cell-name.com IN TXT "EXAMPLE.ORG"
The existence of this DNS TXT record can reduce the time to evaluate the Kerberos realm associated with the service. It is of particular use when the AuriStorFS cell name does not match the Kerberos realm name.
After all of the database servers have been upgraded to AuriStorFS and the yfs-rxgk/_afs.your-cell-name.com@EXAMPLE.ORG key has been installed, enable the yfs-rxgk/_afs.your-cell-name.com@EXAMPLE.ORG service principal in the Kerberos KDC.
    # kadmin
    kadmin:  modprinc +allow_tix yfs-rxgk/_afs.your-cell-name.com@EXAMPLE.ORG
    # kadmin
    kadmin> modify --attributes=-disallow-svr yfs-rxgk/_afs.your-cell-name.com@EXAMPLE.ORG
A list of file servers can be discovered with the OpenAFS command vos listaddrs -printuuid, the AuriStorFS command vos listfs, or the AuriStorFS command vos eachfs.
Each server will need a /etc/yfs/server/bos.keytab as described in the "BOSSERVER SETUP" section, as well as a copy of the KeyFileExt file. Each server can also be configured with a unique super user configuration as described in the "CREATING THE SERVER SUPERUSER LISTS" section.
Servers can be upgraded one at a time, but all volume data must be removed from a file server before the upgrade is performed. See vos_move, vos_addsite, and vos_remove.
Existing vice partitions can be reused, but:

- they must support POSIX extended attributes. On some Linux systems, enabling POSIX extended attributes requires mounting the partitions with the user_xattr option. See fstab and mount for additional details. The mount command will display the options a filesystem was mounted with.

- the ownership must be changed from root to yfsserver:

      chown -R yfsserver:yfsserver /vicep*

- all files except for an optional AlwaysAttach must be removed.
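A sketch of reusing an existing ext4 vice partition; the mount point, filesystem, and the assumption that volume data has already been migrated off are all illustrative:

```shell
# Enable POSIX extended attributes (ext4 shown; see fstab(5) and mount(8))
mount -o remount,user_xattr /vicepa

# Remove all prior contents except an optional AlwaysAttach flag file
find /vicepa -mindepth 1 ! -name AlwaysAttach -delete

# Hand the partitions to the AuriStorFS service user
chown -R yfsserver:yfsserver /vicep*
```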
Any file system that supports POSIX extended attributes can be used for vice partitions including ext4, xfs, btrfs, zfs, nfs mounts, cifs mounts, etc. The current recommendation is to use xfs and where available to enable support for xfs reflinks.
See fileserver for more details.
For the simplest and most common case, the cell name and Kerberos realm name will match, and no configuration will need to be done.
In cases where the cell and realm names do not match or when more than one realm is accepted, the local_realms configuration stanza can be used to map all principals in one (or more) realms to the same username in the cell.
[kerberos] local_realms = EXAMPLE.ORG AD.EXAMPLE.ORG
To disallow some principals, the foreign_principals stanza may be used. For instance, you may not want a particular admin principal to be able to authenticate to AuriStorFS.
[kerberos] foreign_principals = admin@AD.EXAMPLE.ORG
Because the bosserver has a key installed, it is possible to use local superuser authentication to configure the bosserver with the bos create command. This is the safest way to make sure the BosConfig file has the proper format and ownership. Any options should already be configured in the yfs-server.conf file.
This would create the fileserver process on server fs1.your-cell-name.com:
    # bos create -server fs1.your-cell-name.com -instance dafs -type dafs \
      -cmd "/usr/local/libexec/yfs/fileserver" \
      /usr/local/libexec/yfs/volserver \
      /usr/local/libexec/yfs/salvageserver \
      /usr/local/libexec/yfs/salvager -localauth
The fileserver processes should all be started. Data can now be migrated onto the fileserver.
During a migration from OpenAFS to AuriStorFS, be aware of the following details to avoid surprises.
Many organizations monitor the health of the UBIK database services by examining the output of udebug. Many monitoring scripts only consider Recovery state 1f to be the healthy state. However, Recovery state 17 is also a healthy state. The difference between the states is that Recovery state 17 means that all replica sites have a copy of the best database but there have been no database modifications since the UBIK coordinator declared a new UBIK epoch.

Few OpenAFS monitoring tools check for Recovery state 17 because OpenAFS always modifies the database after a UBIK coordinator is elected. AuriStorFS UBIK does not modify the database until a write transaction is successfully completed.
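A monitoring check that accepts both healthy states might therefore look like this sketch (server name is illustrative; 7003 is the AFS3 VL service port):

```shell
# Accept Recovery state 1f or 17 as healthy
udebug db1.your-cell-name.com 7003 \
    | grep -qE 'Recovery state (1f|17)' && echo healthy || echo unhealthy
```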
Typically, the local superuser root is required to install new packages. Unlike OpenAFS, AuriStorFS can be configured to run server daemons as non-superusers.
AlwaysAttach(5), asetkey(8), BosConfig(5), bos_adduser(8), bos_create(8), bos_listhosts(8), bos.keytab(5), bosserver(8), fileserver(8), fstab(5), KeyFileExt(5), mount(8), ptserver(8), ReaderList(5), salvageserver(8), udebug(1), UserListExt(5), vl.keytab(5), vos(1), vos_addsite(1), vos_listfs(1), vos_release(1), vos_remove(1), vos_move(1), yfs-client.conf(5), yfs-server.conf(5), How To Rekey
Copyright AuriStor, Inc. 2014-2021. https://www.auristor.com/ All Rights Reserved.
"AFS" is a registered mark of International Business Machines Corporation, used under license. (USPTO Registration 1598389)
"OpenAFS" is a registered mark of International Business Machines Corporation. (USPTO Registration 4577045)
The "AuriStor" name, logo 'S' brand mark, and icon are registered marks of AuriStor, Inc. (USPTO Registrations 4849419, 4849421, and 4928460) (EUIPO Registration 015539653).
"Your File System" is a registered mark of AuriStor, Inc. (USPTO Registrations 4801402 and 4849418).
"YFS" and "AuriStor File System" are trademarks of AuriStor, Inc.