NAME

auristor_migration - Migrating to the AuriStor File System from OpenAFS

DESCRIPTION

The AuriStor File System is an AFS3-compatible distributed file system. AFS3 is deployed in administrative domain units called cells. An existing installation of OpenAFS can be migrated in place without flag days to a fully functional AuriStorFS cell.

AuriStorFS Database Services support OpenAFS 1.x compatible formats for the protection database and location database. As a result, OpenAFS database servers can be upgraded in-place.

AuriStorFS File Servers store volume data in an incompatible, proprietary vice partition format. Thus, OpenAFS file servers cannot be upgraded in-place. Instead, existing volume data must be migrated onto the AuriStorFS File Server using the vos command suite.

TERMINOLOGY

The AuriStor File System uses different terminology to describe its services than OpenAFS system administrators are used to.

Services not Databases

The OpenAFS documentation discusses various databases such as the Volume Location Database and the Protection Database. In AuriStorFS the architecture is similar but they are referred to as the Location Service and the Protection Service. That a particular service's data is stored in a database is an implementation detail.

Location Service not Volume Location Service

In the AuriStor File System the Location Service manages a broader set of location and key data than simply volumes. As such it has been renamed.

Coordinators not Synchronization Sites

In the AuriStor File System the Ubik distributed database instance that OpenAFS documentation referred to as a synchronization site is now called the coordinator.

UPGRADE PROCEDURE

Start by gathering a list of servers to be updated. The list of Location Servers for a cell can be collected using one of the following methods:

  1. examine a server CellServDB file on any of the known Location Servers or File Servers.

  2. execute the bos listhosts command against any of the known Location Servers or File Servers.

  3. execute the fs listcells command on any client in the cell.

  4. execute the udebug command with the -long switch against any of the known Location Servers. This is a good method to confirm the results as the Location Service might be installed on more servers than are published to Cache Managers or File Servers.

A list of File Servers can be collected with the OpenAFS vos listaddrs -printuuid command, the AuriStorFS vos listfs command, or the AuriStorFS vos eachfs command. Any File Servers that are also Location Servers need to be vacated before conversion. Any RW volumes can be migrated to another machine with the vos move command. For RO volumes the vos remove command can be used to remove the replication site after other sites for replicas are configured with vos addsite and populated with vos release.
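The vacating steps lend themselves to scripting. The sketch below only prints the vos move commands it would run rather than executing them; the server names, partition letters, and volume list are illustrative placeholders (in practice the volume list would come from vos listvol):

```shell
# Sketch: print (rather than execute) the vos move commands needed
# to vacate a server. Hostnames, partition letters, and the volume
# list below are placeholders, not real cell data.
src=old-fs.your-cell-name.com
dst=new-fs.your-cell-name.com
for vol in root.cell home.alice proj.web; do
    echo vos move -id "$vol" -fromserver "$src" -frompartition a \
         -toserver "$dst" -topartition a -localauth
done
```

Reviewing the generated commands before running them guards against moving a volume to the wrong partition.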

Organizations are encouraged to begin deploying AuriStorFS by first adding an AuriStorFS File Server to the existing cell and then populating it with volume data. This will provide the organization with a taste of the performance and scaling improvements that AuriStorFS offers above and beyond the capabilities of OpenAFS.

It is possible for AuriStorFS Location Servers and AuriStorFS Protection Servers to be added to an existing OpenAFS cell in an AFS3-compatible mode. This permits evaluating organizations to measure the performance benefits of the AuriStorFS services versus their OpenAFS counterparts without risking the introduction of AuriStorFS extended data into the cell's Location Service and Protection Service.

The client configuration file, /etc/yfs/yfs-client.conf, must be configured for the cell. If the AuriStor distributed cellservdb.conf included in the installation packages does not include configuration for the cell to be upgraded, it will be necessary to provide it in the /etc/yfs/yfs-client.conf file. The File Servers, the Volume Servers, and the administration tools use the client configuration when establishing connections to the Location Service and the Protection Service.

The file /etc/yfs/server/yfs-server.conf must be configured for the cell. The Location Service and Protection Service servers use the [defaults] databases section to specify the ubik database configuration.

Additionally, the /etc/yfs/server/BosConfig file must reflect the paths to the newly-installed server binaries. Note that in AuriStorFS, options to all servers can be configured via the /etc/yfs/server/yfs-server.conf file. The use of command line switches is discouraged.

Once the organization is prepared to commit to AuriStorFS and take advantage of the enhanced security capabilities and extended name spaces, it is time to upgrade the database services to AuriStorFS.

The safe method of migrating database servers from OpenAFS to AuriStorFS without an outage is:

  1. One at a time, convert all but one of the OpenAFS database servers to AuriStorFS database servers running in the AFS3-compatible mode. When a single OpenAFS database server remains, it will be the coordinator. For each server:

    1. Shutdown the OpenAFS services

    2. Uninstall the OpenAFS binaries

    3. Install the AuriStorFS binaries and configuration files

    4. Enable the afscompat option in the [defaults] databases stanza of the /etc/yfs/server/yfs-server.conf file.

    5. Start the AuriStorFS services.

  2. Convert the coordinator from OpenAFS to AuriStorFS without the afscompat option.

    Note: After the last OpenAFS server shuts down it will take (2 * number_of_servers + 1) * 75 seconds before an AuriStorFS database server running in AFS compatibility mode can be elected coordinator. For a three database server cell, that is 525 seconds. If a write transaction is in progress when the shutdown occurs, all read and write operations will be blocked until a new coordinator is elected. In other words, in a cell with three database servers, if the final OpenAFS database server fails, there will be a potential cell outage of 525 seconds before read and write transactions can resume.

    1. Shutdown the OpenAFS services.

    2. Uninstall the OpenAFS binaries.

    3. Install the AuriStorFS binaries and configuration files without the afscompat option in the [defaults] databases stanza of the /etc/yfs/server/yfs-server.conf file.

    4. Start the AuriStorFS services. This server will be elected coordinator within 56 seconds.

  3. One at a time, restart each of the AuriStorFS database servers after disabling AFS3 compatibility mode.

    1. Edit the /etc/yfs/server/yfs-server.conf file to remove the afscompat option from the [defaults] databases block.

    2. Restart the AuriStorFS services
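
The coordinator election delay described in the note under step 2 is simple arithmetic; a quick check in shell:

```shell
# (2 * number_of_servers + 1) * 75 seconds must elapse before an
# AuriStorFS database server in AFS compatibility mode can be
# elected coordinator after the last OpenAFS server shuts down.
servers=3
timeout=$(( (2 * servers + 1) * 75 ))
echo "$timeout"   # 525 seconds for a three database server cell
```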

The sections below describe how to convert an OpenAFS cell named your-cell-name.com with three database servers to AuriStorFS.

CREATING THE CLIENT CONFIG FILE

AuriStorFS Servers such as the fileserver, volserver, and salvageserver and the administration command-line tools including vos, pts, and backup initiate outbound connections to cell services using the same configuration as the afsd cache manager.

The contents of the /etc/yfs/yfs-client.conf required by servers can be obtained from the OpenAFS CellServDB and ThisCell files.

ThisCell

The contents of ThisCell should be specified in the [defaults] stanza.

    [defaults]
        thiscell = your-cell-name.com

CellServDB

The CellServDB data for the cell should be specified in the [cells] section.

    [cells]
        your-cell-name.com = {
            description = "My test cell"
            servers = {
                db1.your-cell-name.com = {
                    address = 10.0.0.1
                }
                db2.your-cell-name.com = {
                    address = 172.16.150.1
                }
                db3.your-cell-name.com = {
                    address = 192.168.5.2
                }
                db4.your-cell-name.com = {
                    address = 192.168.5.1
                }
            }
        }

description

The description value can include any text string. Tools might use this string to describe the cell to end users when more detail than the cell name is desired. In the OpenAFS CellServDB record the description follows the cellname.

servers

The servers block can include zero or more server entries consisting of:

DNS-host-name

A valid DNS host name. The name can be resolved using A, AAAA or CNAME records.

address

The address can be an IPv4 or IPv6 address. Zero or more addresses can be specified for a server. When transitioning from OpenAFS specify a single IPv4 address.

If the [cells] your-cell-name.com block is not provided the fileserver, volserver and salvageserver will use DNS SRV record lookups when other services within the cell must be contacted.
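
Translating an OpenAFS CellServDB entry into the [cells] format shown above is mechanical enough to script. The sketch below is illustrative only; it assumes the classic single-cell CellServDB layout with one address per line, and real files should still be reviewed by hand:

```shell
# Sketch: convert a classic OpenAFS CellServDB entry into a
# yfs-client.conf [cells] stanza. Assumed layout:
#   >cellname      # description
#   ip.address     # host.name
cat > CellServDB.sample <<'EOF'
>your-cell-name.com # My test cell
10.0.0.1        # db1.your-cell-name.com
172.16.150.1    # db2.your-cell-name.com
EOF

awk '
/^>/     { cell = substr($1, 2)
           printf "[cells]\n    %s = {\n        servers = {\n", cell; next }
/^[0-9]/ { printf "            %s = {\n                address = %s\n            }\n", $3, $1 }
END      { printf "        }\n    }\n" }
' CellServDB.sample
```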

CREATING THE SERVER CONFIG FILE

OpenAFS Servers use the CellServDB, ThisCell, NetRestrict, NetInfo, krb.excl and krb.conf files to configure parameters which are set in /etc/yfs/server/yfs-server.conf on AuriStorFS Servers. Additionally, all command line parameters to AuriStorFS daemons should be set in the configuration file instead of the BosConfig file.

Server [defaults] Configuration

The [defaults] section of the server configuration file holds all of the information previously obtained from these files:

NetInfo

The contents of NetInfo should be specified as a list of addresses in the netinfo stanza.

    [defaults]
        netinfo = <address-list>

NetRestrict

The contents of NetRestrict should be specified as a list of addresses in the netrestrict stanza.

    [defaults]
        netrestrict = <address-list>

ThisCell

The contents of ThisCell should be specified in the thiscell stanza.

    [defaults]
        thiscell = your-cell-name.com

CellServDB

The CellServDB data for the cell that each database server is a member of must be specified in the databases stanza. The databases stanza should contain a servers stanza listing, for each server, its address and, optionally, its clone status.

    databases = {
        afscompat = <boolean-value>
        servers = {
            db1.your-cell-name.com = {
                address = 192.0.2.1
            }
            db2.your-cell-name.com = {
                address = 198.51.100.42
            }
            db3.your-cell-name.com = {
                address = 203.0.113.252
            }
            db4.your-cell-name.com = {
                address = 198.51.100.99
                clone = <boolean-value>, "voting" or "non-voting"
            }
        }
    }

The afscompat option when set to a true boolean-value instructs the configured server to operate in AFS3-compatibility mode. This mode is intended for use during trials and cell migrations only. Any server that is running in AFS3-compatibility mode cannot be elected as the coordinator. Such servers are always voting clones.

For example, the following AuriStorFS servers definition on db4.your-cell-name.com makes it a voting clone because it is operating in AFS3-compatibility mode:

    [defaults]
        thiscell = your-cell-name.com
        databases = {
            afscompat = yes
            servers = {
                db1.your-cell-name.com = {
                    address = 192.0.2.37
                }
                db2.your-cell-name.com = {
                    address = 198.51.100.47
                }
                db3.your-cell-name.com = {
                    address = 203.0.113.252
                }
                db4.your-cell-name.com = {
                    address = 198.51.100.99
                }
            }
        }

The matching OpenAFS CellServDB definition on the OpenAFS servers is:

    >your-cell-name.com # Local Cell Configuration
    192.0.2.37      # db1.your-cell-name.com
    198.51.100.47   # db2.your-cell-name.com
    203.0.113.252   # db3.your-cell-name.com
    198.51.100.99   # db4.your-cell-name.com

The databases/servers block differs from the [cells] your-cell-name.com block in that it can include the afscompat option and, for each server, an optional clone designation.

Server [kerberos] Configuration

The [kerberos] section of the server configuration file holds all of the information previously obtained from these files:

krb.conf

The contents of krb.conf should be specified as a list of realms in the local_realms stanza.

    [kerberos]
        local_realms = <list of Kerberos v5 realm names>

krb.excl

The contents of krb.excl should be specified as a list of principals in the foreign_principals stanza.

    [kerberos]
        foreign_principals = <list of Kerberos v5 principal names>

Server Configuration Notes

Each service also can have options specified in an appropriately-named section, e.g. the bosserver can have options specified in the [bosserver] section. Examples might include running in nofork mode, where the line nofork = yes would be included.

    [bosserver]
        nofork = yes

The databases stanza may also be configured per-service.

    [vlserver]
        databases = {
            ...
        }

It is important that the /etc/yfs/server/yfs-server.conf file and the directory containing it be owned by the user the AuriStorFS server processes will run as. Typically this user is named yfsserver and will be created when the AuriStorFS binary packages are installed if your platform supports running AuriStorFS daemons as non-root users.

CREATING THE CELL KEYFILE

The existing afs key must be installed on the AuriStorFS servers. AuriStorFS servers store the afs key data in two files:

KeyFile

The KeyFile is compatible with IBM AFS and OpenAFS cells. Prior to OpenAFS 1.6.5 the afs key was always a 56-bit parity checked key compatible with the deprecated U.S. Data Encryption Standard (DES). Keys compatible with the DES encryption types, des-cbc-crc, des-cbc-md4 and des-cbc-md5, are referred to as rxkad keys. The KeyFile file stores rxkad keys.

KeyFileExt

Beginning with the OpenAFS 1.6.5 release the afs key can be any encryption type accepted by the Kerberos v5 standard including aes256-cts-hmac-sha384-192, aes128-cts-hmac-sha256-128, aes256-cts-hmac-sha1-96, aes128-cts-hmac-sha1-96, des3-cbc-sha1, and arcfour-hmac. All of the non-DES afs keys are referred to as rxkad_krb5 keys.

The AuriStor File System also adds a new key type known as yfs-rxgk. The KeyFileExt file stores both rxkad_krb5 and yfs-rxgk keys.

If the cell uses an rxkad key it is strongly recommended (but not required) that the cell be upgraded to use one or more rxkad_krb5 keys before converting the cell to the AuriStor File System. The rxkad key can be broken by brute force in under a day using publicly available cloud services. The How To Rekey guide provides instructions to convert an OpenAFS cell to use the more secure rxkad_krb5 keys. This should be done before continuing to generate the KeyFileExt.

If the cell cannot be upgraded to use rxkad_krb5 keys, then the OpenAFS KeyFile should be copied to /etc/yfs/server/KeyFile on each AuriStorFS server.

If rxkad_krb5 keys are used by the cell, they must be installed into the /etc/yfs/server/KeyFileExt file using the AuriStorFS version of asetkey. Most OpenAFS cells running version 1.6.5 or later will be using a krb5 keytab file named rxkad.keytab. Some installations might have an OpenAFS KeyFileExt file.

Generating a Kerberos v5 keytab from KeyFile

If neither an OpenAFS KeyFileExt file nor a krb5 keytab with the existing key is available, a krb5 keytab must be generated. How this is done depends on which Kerberos implementation is in use. Skip this section if a Kerberos v5 keytab, rxkad.keytab, is available.

MIT Kerberos

To avoid changing the key when using MIT Kerberos, you will need to reconstruct the keytab from the existing key. Use asetkey list to get the key for the newest kvno:

    # asetkey list
    kvno    4: key is: d34dbeefd34dbeef
    kvno    3: key is: baddc4febaddc4fe
    All done.

Then, use ktutil to create a keytab:

    # ktutil
    ktutil:  add_entry -key -p afs/your-cell-name.com@EXAMPLE.ORG \
             -k 4 -e des-cbc-crc
    Key for afs/your-cell-name.com@EXAMPLE.ORG (hex): d34dbeefd34dbeef
    ktutil: write_kt /etc/yfs/server/rxkad.keytab
    ktutil: exit

Heimdal

This can be done via the Heimdal ext_keytab command.

    # ext_keytab -k /etc/yfs/server/rxkad.keytab \
            afs/your-cell-name.com@EXAMPLE.ORG

Converting a Kerberos v5 keytab to KeyFileExt

As the desired result is a KeyFileExt file, you will need to convert your keytab. First, generate a list of all the key types it contains via ktutil.

MIT Kerberos

    # ktutil
    ktutil:  rkt /etc/yfs/server/rxkad.keytab
    ktutil:  list -e
    slot KVNO Principal
    ---- ---- ---------------------------------------------------------------------
       1    4 afs/your-cell-name.com@EXAMPLE.ORG (aes256-cts-hmac-sha1-96)
       2    4 afs/your-cell-name.com@EXAMPLE.ORG (aes128-cts-hmac-sha1-96)
       3    4 afs/your-cell-name.com@EXAMPLE.ORG (des3-cbc-sha1)
       4    4 afs/your-cell-name.com@EXAMPLE.ORG (arcfour-hmac)

Heimdal

    # ktutil -k /etc/yfs/server/rxkad.keytab  list
    /etc/yfs/server/rxkad.keytab:

    Vno  Type                     Principal                                      Aliases
      4  aes256-cts-hmac-sha1-96  afs/your-cell-name.com@EXAMPLE.ORG
      4  aes128-cts-hmac-sha1-96  afs/your-cell-name.com@EXAMPLE.ORG
      4  des3-cbc-sha1            afs/your-cell-name.com@EXAMPLE.ORG
      4  arcfour-hmac-md5         afs/your-cell-name.com@EXAMPLE.ORG

All keys will need to be inserted with asetkey. Specifying all as the key type instructs asetkey to iterate over all supported key types in the keytab and add each of them. For example, the keys above have key version 4, so a kvno parameter of 4 must be specified to the asetkey add command.

    # asetkey add rxkad_krb5 4 all \
      /etc/yfs/server/rxkad.keytab afs/your-cell-name.com@EXAMPLE.ORG

Adding the yfs-rxgk cell key

In a cell running an AuriStorFS Location Service, a cell-wide yfs-rxgk key is required in order to support AES256 encryption, protection of cache manager callback connections, and many other enhanced security features. This additional key is created with asetkey add. The yfs-rxgk key should be key version 1, and future keys can increment from there.

Note: Do not create the yfs-rxgk key when adding an AuriStorFS Server to a cell running an OpenAFS Location Service. When converting an OpenAFS cell to AuriStorFS the yfs-rxgk key must be added after all of the database servers are running AuriStorFS and AFS3 compatibility mode has been disabled.

To generate a new yfs-rxgk key, execute the following command:

    # asetkey add yfs-rxgk 1 aes256-cts-hmac-sha1-96 random

The KeyFileExt from the machine the asetkey command was executed on can then be distributed to all AuriStorFS servers.
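
Distribution can be as simple as a copy loop. The sketch below only prints the scp commands it would run; the host list is an illustrative placeholder, and any secure copy mechanism that preserves file permissions will do:

```shell
# Sketch: print scp commands that would distribute the KeyFileExt
# to each server. Hostnames are placeholders; -p preserves the
# file's mode and timestamps.
for host in db1 db2 db3 db4; do
    echo scp -p /etc/yfs/server/KeyFileExt \
         "root@$host.your-cell-name.com:/etc/yfs/server/KeyFileExt"
done
```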

It is important that the KeyFileExt file and the directory containing it be owned by the user the AuriStorFS server processes will run as. Typically this user is named yfsserver and will be created when the AuriStorFS binary packages are installed if your platform supports running AuriStorFS as non-root users.

BOSSERVER SETUP

The OpenAFS bosserver shares the cell-wide keys used by the rest of the AFS cell. Each AuriStorFS bosserver, by contrast, is an independent service that is used to manage a single machine. As such, each AuriStorFS bosserver requires its own Kerberos service principal and keytab. This keytab should be installed in /etc/yfs/server/bos.keytab.

MIT Kerberos

This example creates a bos key for server server.your-cell-name.com:

    # kadmin
    kadmin: add_principal -randkey afs3-bos/server.your-cell-name.com@EXAMPLE.ORG
    kadmin: ktadd -k /etc/yfs/server/bos.keytab \
            afs3-bos/server.your-cell-name.com@EXAMPLE.ORG

Heimdal

This example creates a bos key for server server.your-cell-name.com:

    # kadmin
    kadmin> add -r afs3-bos/server.your-cell-name.com@EXAMPLE.ORG
    kadmin> ext_keytab -k /etc/yfs/server/bos.keytab \
            afs3-bos/server.your-cell-name.com@EXAMPLE.ORG

It is important that the /etc/yfs/server/bos.keytab file and the directory containing it be owned by the user the AuriStorFS server processes will run as. Typically this user is named yfsserver and will be created when the AuriStorFS binary packages are installed if your platform supports running AuriStorFS as non-root users.

LOCATION SERVER CONFIGURATION

After populating the KeyFileExt and setting up the AuriStorFS bosserver authentication, each AuriStorFS Location Server must be configured with updated BosConfig and UserListExt files. In addition, after conversion the AuriStorFS Location Service can be configured to use GSS-API Kerberos v5 authentication.

CREATING THE BOS CONFIGURATION

Because the bosserver has a key installed, it is possible to use local superuser authentication to configure the bosserver with the bos create command. This is the safest way to make sure the BosConfig file has the proper format and ownership. Any options should already be configured in the yfs-server.conf file.

This would create the location server and protection server processes on server server.your-cell-name.com:

   # bos create -server server.your-cell-name.com -instance vlserver \
                -type simple -cmd "/usr/libexec/yfs/vlserver" \
                -localauth
   # bos create -server server.your-cell-name.com -instance ptserver \
                -type simple -cmd "/usr/libexec/yfs/ptserver" \
                -localauth

The location server and protection server processes should all be running. The file server should be upgraded as explained in the "FILE SERVERS" section before any data is moved to the server.

CREATING THE SERVER SUPERUSER LISTS

The bosserver, vlserver, and volserver do not rely upon the AuriStorFS Protection Service for group membership information. Instead, superuser status is determined by examining the UserListExt file, installed as /etc/yfs/server/UserListExt, which replaces the OpenAFS UserList file. The UserListExt file should contain the same list of users as the OpenAFS UserList, with the Kerberos realm appended to each entry after an @ sign. For example, if the realm EXAMPLE.ORG is accepted as a local realm, then

    alice.admin
    bob.admin
    bob

would become

    alice.admin@EXAMPLE.ORG
    bob.admin@EXAMPLE.ORG
    bob@EXAMPLE.ORG

If two realms EXAMPLE.ORG and AD.EXAMPLE.ORG are accepted as local realms, then

    alice.admin
    bob.admin
    bob

might become

    alice.admin@EXAMPLE.ORG
    bob.admin@EXAMPLE.ORG
    bob@EXAMPLE.ORG
    alice.admin@AD.EXAMPLE.ORG
    bob.admin@AD.EXAMPLE.ORG

This assumes that bob@AD.EXAMPLE.ORG is not trusted as an administrator but bob@EXAMPLE.ORG is.
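
The realm-appending step itself is easy to script; a sketch using sed, with EXAMPLE.ORG and the sample entries as placeholders. Pruning entries that should not be trusted, such as bob@AD.EXAMPLE.ORG above, remains a manual step:

```shell
# Sketch: append a realm to every OpenAFS UserList entry to produce
# UserListExt candidates. The realm and usernames are illustrative.
realm=EXAMPLE.ORG
cat > UserList.sample <<'EOF'
alice.admin
bob.admin
bob
EOF
sed "s/\$/@$realm/" UserList.sample > UserListExt.sample
cat UserListExt.sample
```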

Because the bosserver has a key installed, it is possible to use local superuser authentication to configure the superuser list with the AuriStorFS bos adduser command. To add users alice.admin and bob.admin on server server.your-cell-name.com:

    # bos adduser -server server.your-cell-name.com -user alice.admin@EXAMPLE.ORG \
      -user bob.admin@EXAMPLE.ORG -localauth

Whichever method is chosen, this procedure will need to be performed on Location Servers as well as on File Servers.

Unlike OpenAFS, AuriStorFS restricts access to cell metadata that is not necessary for the proper operation of clients. This includes pts memberships, volume statistics, and bosserver process information. By default the restricted_query option, supported by each AuriStorFS service, is set to admin, which prevents users listed in neither the UserListExt nor the ReaderList file from viewing metadata. Members of the ReaderList are granted read-only access to restricted metadata. The ReaderList file can be managed with bos adduser by adding the -type reader option.

    # bos adduser -server server.your-cell-name.com \
      -user bob@AD.EXAMPLE.ORG -localauth -type reader

When restricted_query is set to admin, automated processes such as monitoring tools will need access to valid authentication tokens. Alternatively, adding

    [defaults]
        restricted_query = anyuser

or

    [vlserver]
        restricted_query = anyuser

will restore the OpenAFS behavior.

CONFIGURING GSS-API KERBEROS V5 AUTHENTICATION

In addition to authenticating with the rxkad_krb5 key and afs/your-cell-name.com@EXAMPLE.ORG Kerberos v5 principal, AuriStorFS location servers can authenticate client entities, both users and machines, using GSS-API Kerberos v5 authentication.

Once authenticated via GSS-API, yfs-rxgk tokens will be issued to permit subsequent authentication to the Key Management Service which is co-located with the AuriStorFS location service. Unlike rxkad_krb5 tokens which are valid for use with every server in the cell, the yfs-rxgk tokens are only valid for a single service. Clients will contact the Key Management Service to acquire a separate token with a unique session key for every connection.

A key for yfs-rxgk needs to be created in the Kerberos KDC. So that the services can be fully configured before clients attempt to use the principal, it should initially be created with ticket issuance disabled.

Install the resulting keytab as /etc/yfs/server/vl.keytab on all of the Location Servers.

MIT Kerberos

    # kadmin
    kadmin: add_principal -randkey -allow_tix
              yfs-rxgk/_afs.your-cell-name.com@EXAMPLE.ORG
    kadmin: ktadd -k /etc/yfs/server/vl.keytab yfs-rxgk/_afs.your-cell-name.com

Heimdal

    # kadmin
    kadmin> add --attributes=+disallow-all-tix -r
              yfs-rxgk/_afs.your-cell-name.com@EXAMPLE.ORG
    [...]
    kadmin> ext_keytab --keytab=/etc/yfs/server/vl.keytab \
                         yfs-rxgk/_afs.your-cell-name.com

It is important that the /etc/yfs/server/vl.keytab file and the directory containing it be owned by the user the AuriStorFS server processes will run as. Typically this user is named yfsserver and will be created when the AuriStorFS binary packages are installed if your platform supports running AuriStorFS as non-root users.

ENABLING RXGK

After all of the database servers have been upgraded to AuriStorFS and the yfs-rxgk/_afs.your-cell-name.com@EXAMPLE.ORG key has been installed, enable the yfs-rxgk/_afs.your-cell-name.com@EXAMPLE.ORG service principal in the Kerberos KDC.

MIT Kerberos

    # kadmin
    kadmin: modprinc +allow_tix
              yfs-rxgk/_afs.your-cell-name.com@EXAMPLE.ORG

Heimdal

    # kadmin
    kadmin> modify --attributes=-disallow-all-tix
              yfs-rxgk/_afs.your-cell-name.com@EXAMPLE.ORG

FILE SERVERS

A list of file servers can be discovered with the OpenAFS command vos listaddrs -printuuid, the AuriStorFS command vos listfs, or the AuriStorFS command vos eachfs.

Each server will need a /etc/yfs/server/bos.keytab as described in the "BOSSERVER SETUP" section, as well as a copy of the KeyFileExt file. Each server can also be configured with a unique super user configuration as described in the "CREATING THE SERVER SUPERUSER LISTS" section.

Servers can be upgraded one at a time, but all volume data must be removed from a file server before the upgrade is performed. See vos_move(1), vos_addsite(1), and vos_remove(1).

Existing vice partitions can be reused, but only after all OpenAFS volume data has been removed from them.

Any file system that supports POSIX extended attributes can be used for vice partitions including ext4, xfs, btrfs, zfs, nfs mounts, cifs mounts, etc. The current recommendation is to use xfs and where available to enable support for xfs reflinks.

See fileserver(8) for more details.

MAPPING THE LOCAL REALM TO THE CELL

For the simplest and most common case, the cell name and Kerberos realm name will match, and no configuration will need to be done.

In cases where the cell and realm names do not match or when more than one realm is accepted, the local_realms configuration stanza can be used to map all principals in one (or more) realms to the same username in the cell.

    [kerberos]
        local_realms = EXAMPLE.ORG AD.EXAMPLE.ORG

To disallow some principals, the foreign_principals stanza may be used. For instance, you may not want a particular admin principal to be able to authenticate to AuriStorFS.

    [kerberos]
        foreign_principals = admin@AD.EXAMPLE.ORG

CREATING THE BOS CONFIGURATION

Because the bosserver has a key installed, it is possible to use local superuser authentication to configure the bosserver with the bos create command. This is the safest way to make sure the BosConfig file has the proper format and ownership. Any options should already be configured in the yfs-server.conf file.

This would create the fileserver process on server fs1.your-cell-name.com:

   # bos create -server fs1.your-cell-name.com -instance dafs -type dafs \
                -cmd "/usr/libexec/yfs/fileserver" \
                /usr/libexec/yfs/volserver \
                /usr/libexec/yfs/salvageserver \
                /usr/libexec/yfs/salvager -localauth

The fileserver processes should all be started. Data can now be migrated onto the fileserver.

PRIVILEGE REQUIRED

Typically, local superuser root is required to install new packages. Unlike OpenAFS, AuriStorFS can be configured to run server daemons as non-superusers.

SEE ALSO

AlwaysAttach(5), asetkey(8), BosConfig(5), bos_adduser(8), bos_create(8), bos_listhosts(8), bos.keytab(5), bosserver(8), fileserver(8), fstab(5), KeyFileExt(5), mount(8), ReaderList(5), salvageserver(8), UserListExt(5), vl.keytab(5), vos(1), vos_addsite(1), vos_listfs(1), vos_release(1), vos_remove(1), vos_move(1), yfs-client.conf(5), yfs-server.conf(5), How To Rekey

COPYRIGHT

Copyright AuriStor, Inc. 2014-2018. https://www.auristor.com/ All Rights Reserved.

ACKNOWLEDGEMENTS

"AFS" is a registered mark of International Business Machines Corporation, used under license. (USPTO Registration 1598389)

"OpenAFS" is a registered mark of International Business Machines Corporation. (USPTO Registration 4577045)

The "AuriStor" name, log 'S' brand mark, and icon are registered marks of AuriStor, Inc. (USPTO Registrations 4849419, 4849421, and 4928460) (EUIPO Registration 015539653).

"Your File System" is a registered mark of AuriStor, Inc. (USPTO Registrations 4801402 and 4849418).

"YFS" and "AuriStor File System" are trademarks of AuriStor, Inc.