Copyright © 2008-2019 Davor Ocelic
Last update: Oct 9, 2019.
This documentation is free; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.
It is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
Abstract
The purpose of this article is to give you a straightforward, Debian/Ubuntu/Devuan-friendly way of installing and configuring OpenAFS version 1.8, 1.6, or 1.4.
By the end of this guide, you will have a functional OpenAFS installation that will complete our solution for secure, centralized network logins with shared home directories.
This article is part of Spinlock Solutions's practical 3-piece introductory series to infrastructure-based Unix networks, containing MIT Kerberos 5 Guide, OpenLDAP Guide and OpenAFS Guide.
The AFS distributed filesystem is a service that has traditionally captivated the interest of system administrators and advanced users, but its high entry barrier and infrastructure requirements have prevented many from using it.
AFS has already been the topic of numerous publications. Here, we will present only the necessary summary; enough information to establish the context and achieve practical results.
You do not need to follow any external links; however, the links have been provided both throughout the article and listed all together at the end, to serve as pointers to more precise technical treatment of individual topics.
AFS was started at Carnegie Mellon University in the early 1980s, in order to easily share file data between people and departments. The system became known as the Andrew File System, or AFS, in recognition of Andrew Carnegie and Andrew Mellon, the primary benefactors of CMU. Later, AFS was supported and developed as a product by Transarc Corporation (now IBM Pittsburgh Labs). IBM branched the source of the AFS product, and made a copy of the source available for community development and maintenance. They called the release OpenAFS, which is practically the only "variant" of AFS used today for new installations.
The amount of important information related to AFS is orders of magnitude larger than that of, say, Kerberos or LDAP. It isn't possible to write a practical OpenAFS Guide without skipping some of the major AFS concepts and taking shortcuts in reaching the final objective. However, this injustice will be compensated by introducing you to the whole OpenAFS idea, helping you achieve practical results quickly, and setting you on your way to further expanding your OpenAFS knowledge using other qualified resources.
AFS relies on Kerberos for authentication. A working Kerberos environment is the necessary prerequisite, and the instructions on setting it up are found in another article from the series, the MIT Kerberos 5 Guide.
Furthermore, in a centralized network login solution, user metadata (Unix user and group IDs, GECOS information, home directories, preferred shells, etc.) needs to be shared in a network-aware way as well. This metadata can be served using LDAP or libnss-afs. In general, LDAP is standalone and flexible, and is covered in another article from the series, the OpenLDAP Guide. libnss-afs is simple, depends on the use of AFS, and is covered in this Guide.
AFS' primary purpose is to serve files over the network in a robust, efficient, reliable, and fault-tolerant way. Its secondary purpose may be to serve user meta information through libnss-afs, unless you choose OpenLDAP for the purpose as explained in another article from the series, the OpenLDAP Guide.
While the idea of a distributed file system is not unique, let's quickly identify some of the AFS specifics:
AFS offers a client-server architecture for transparent file access in a common namespace (/afs/) anywhere on the network. This principle has been nicely illustrated by one of the early AFS slogans: "Wherever you go, there you are!"
AFS uses Kerberos 5 as an authentication mechanism, while authorization is handled by OpenAFS itself. In AFS, apart from specific memberships/permissions, unauthenticated users implicitly get the privileges of the "system:anyuser" group, and authenticated users implicitly get the privileges of the "system:authuser" group.
A user's AFS identity is not in any way related to traditional system usernames or other data; the AFS Protection Database (PTS) is a standalone database of AFS usernames and groups. However, since Kerberos 5 is used as the authentication mechanism, provision is made to automatically "map" Kerberos principal names onto PTS entries.
AFS does not "export" existing data partitions to the network in a way that NFS does. Instead, AFS requires partitions to be dedicated to AFS. These partitions are formatted like standard Unix partitions (e.g. using Ext4) and individual files on them correspond to individual files later visible in the AFS space, but file and directory names are unrecognizable and are managed by AFS. These files are therefore not too useful when examined "raw" on the AFS fileservers' partitions -- they must be viewed through AFS clients to make sense.
On AFS partitions, one creates "volumes" which represent basic client-accessible units to hold files and directories. These volumes and volume quotas are "virtual" entities existing inside AFS -- they do not affect physical disk partitions like system partitions do.
AFS volumes reside on AFS server partitions. Each AFS server can have up to 256 partitions of arbitrary size, holding an unlimited number of volumes. A volume cannot span multiple partitions; the size of the partition implies the maximum data size (and/or single file size) contained in any of its volumes. (If AFS partitions composed of multiple physical partitions are a requirement, Logical Volume Manager or other OS-level functionality can be used to construct such partitions.)
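For example, here is a minimal sketch of constructing one large AFS server partition out of two physical disks with LVM; the device names /dev/sdb1 and /dev/sdc1 are hypothetical, so adjust them to your system:

sudo pvcreate /dev/sdb1 /dev/sdc1
sudo vgcreate vg_afs /dev/sdb1 /dev/sdc1
# One logical volume spanning both disks, later formatted and mounted
# as a single AFS partition (e.g. /vicepa):
sudo lvcreate -l 100%FREE -n vicepa vg_afs
sudo mkfs.ext4 /dev/vg_afs/vicepa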
To become conveniently accessible, AFS volumes are usually "mounted"
somewhere under
the AFS namespace (/afs/
).
These "mounts" are again handled internally in AFS — they do not
correspond to Unix mount points and they are not affected
by client or server reboots.
The only Unix mount point defined in AFS is one for the
/afs/
directory itself.
AFS supports a far more elaborate and convenient permissions system (AFS ACLs) than the traditional Unix "rwx" modes. These ACLs are set on directories (instead of individual files) and apply to all contained files, although an option for file-based permissions can be enabled if explicitly desired. Each directory can hold up to 20 ACL entries. The ACLs may refer to users and groups, and even "supergroups" (groups within groups) to a maximum depth of 5. IP-based ACLs are available as well, for the rare cases where you might have no other option; they work on the principle of adding IPs to groups, and then using group names in ACL rules. Newly-created directories automatically inherit the ACL of the parent directory.
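To make this concrete, here is an illustrative ACL session you could run once the cell set up later in this guide is operational; the group name and directory are hypothetical:

pts creategroup staff
pts adduser jirky staff
fs setacl /afs/.example.com/common staff write   # "write" expands to rlidwk
fs listacl /afs/.example.com/common
# IP-based ACLs follow the same pattern: create a PTS entry named after
# the IP address, add it to a group, and use the group in fs setacl.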
AFS is, or should we say "was", available for a broad range of architectures and software platforms. There were up-to-date AFS releases for all Linux platforms, Microsoft Windows, IBM AIX, HP/UX, SGI Irix, MacOS X and Sun Solaris. Today, the main remaining platforms are GNU/Linux, MS Windows, and OS X.
OpenAFS comes in two main releases, 1.7.x and 1.8.x. The 1.7 "maintenance" release is the recommended production version for MS Windows. The 1.8 "maintenance" release is the recommended production version for Linux, UNIX, and OS X. The client versions are completely interoperable and you can freely mix clients of all versions and generations.
The OpenAFS website and documentation may seem out of date at first, but they do contain all the information you need.
AFS is enterprise-grade and mature. Books about AFS written 10 or 15 years ago are still authoritative today, as long as you keep in mind the significant architectural improvements made since.
You can find the complete AFS documentation at the OpenAFS website.
After grasping the basic concepts, your most helpful resources will be the quick help options supported in all commands, such as fs help, vos help, pts help or bos help, and the UNIX manpages.
On all GNU/Linux-based platforms, Linux-PAM is available for service-specific authentication configuration. Linux-PAM is an implementation of PAM ("Pluggable Authentication Modules") from Sun Microsystems.
Network services, instead of having hard-coded authentication interfaces and decision methods, invoke PAM through a standard, pre-defined interface. It is then up to PAM to perform any and all authentication-related work, and report the result back to the application.
Exactly how PAM reaches the decision is none of the service's business. In traditional set-ups, that is most often done by asking and verifying usernames and passwords. In advanced networks, that could be Kerberos tickets and AFS tokens.
PAM will allow for inclusion of OpenAFS into the authentication path of all services. After typing in your password, it will be possible to verify the password against the Kerberos database and automatically obtain the Kerberos ticket and AFS token, without having to run kinit and aklog manually.
You can find the proper introduction (and complete documentation) on the Linux-PAM website. Pay special attention to the PAM Configuration File Syntax page. Also take a look at the Linux-PAM(7) and pam(7) manual pages.
Let's agree on a couple points before going down to work:
Our platform of choice, where we will demonstrate a practical setup, will be Debian GNU/Linux. The setup will also work on Ubuntu and Devuan GNU+Linux; any notable differences will be noted.
Please run dpkg -l sudo to verify you have the package sudo installed.
Sudo is a program that will allow you to carry out system administrator tasks from your normal user account. All the examples in this article requiring root privileges use sudo, so you will be able to copy-paste them to your shell.
To install sudo if missing, run:
su -c 'apt install sudo'
If asked for a password, type in the root user's password.
To configure sudo, run the following, replacing USERNAME with your login name:

su -c 'echo "USERNAME ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers'
Debian packages installed during the procedure will ask us a series of questions through the so-called debconf interface. To configure debconf to a known state, run:
sudo dpkg-reconfigure debconf
When asked, answer interface=Dialog and priority=low.
Monitoring log files is crucial in detecting problems. The straightforward, catch-all approach is to open a terminal and run:
cd /var/log
sudo tail -F daemon.log sulog user.log auth.log debug kern.log syslog dmesg messages \
  kerberos/{krb5kdc,kadmin,krb5lib}.log openafs/{Bos,File,Pt,Salvage,VL,Volser}Log
The command will keep printing log messages to the screen as they arrive.
For maximum convenience, the installation and configuration procedure we will show sets everything up on a single machine. This means the Kerberos server, LDAP server, and AFS server will all be on the same machine, with an IP address of 192.168.7.12. You should use your own machine's network address in its place.
To differentiate between client and server roles, the connecting client will be named monarch.example.com, and the servers will be named krb1.example.com, ldap1.example.com, and afs1.example.com. You can reuse these names or, even better, replace them with your appropriate/existing hostnames.
The following addition will be made to /etc/hosts to completely support this single-host installation scheme:

192.168.7.12 monarch.example.com monarch krb1.example.com krb1 ldap1.example.com ldap1 afs1.example.com afs1
Finally, test that the network setup works as expected. Pinging the hostnames should report proper FQDNs and IPs as shown:

ping -c1 localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
....
ping -c1 monarch
PING monarch.example.com (192.168.7.12) 56(84) bytes of data.
....
ping -c1 krb1
PING krb1.example.com (192.168.7.12) 56(84) bytes of data.
....
ping -c1 afs1
PING afs1.example.com (192.168.7.12) 56(84) bytes of data.
....
The only meaningful way to access data in AFS is through an AFS client. That means you will need the OpenAFS client installed on at least all AFS client systems, and possibly AFS servers too. There are two clients available: the one from OpenAFS with a separate kernel module, and kAFS, which already exists in the Linux kernel. We will show the OpenAFS native client.
Great, so let's roll.
Building the OpenAFS kernel module today is very simple. There are basically two methods available: the module-assistant (older) and DKMS (newer). Both of them offer an extremely simple and elegant way to not have to deal with any of the complexities behind the scenes.
DKMS is a framework for generating Linux kernel modules. It can rebuild modules automatically when a new kernel version is installed. The mechanism was invented for Linux by the Dell Linux Engineering Team back in 2003, and has since seen widespread use.
To get the OpenAFS module going with DKMS, here's what you do:
sudo apt-get install build-essential dkms linux-headers-`uname -r`
sudo apt-get install openafs-modules-dkms
It's that easy, and you're done. Best of all, this method is maintenance-free: provided that there are no compile-time errors, the OpenAFS module will be automatically compiled when needed for all kernel versions you happen to be running, be it upgrades or downgrades.
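To check that DKMS has actually built and installed the module for your running kernel, you can run the following; the exact output format varies between DKMS versions:

dkms status openafs
# e.g.: openafs, 1.8.2, 4.19.0-6-amd64, x86_64: installed
modinfo openafs | head -n 2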
To get it going with module-assistant, here's what you do:
sudo apt-get install module-assistant
sudo m-a prepare openafs
sudo m-a a-i openafs
As with the DKMS approach, you're already done, with just one difference: module-assistant does not support rebuilding the kernel module automatically on kernel change, so you'll need to manually run sudo m-a a-i openafs every time you boot into a new kernel version.
After the kernel module is installed, we can proceed with installing the OpenAFS client:
sudo apt-get install openafs-{client,krb5}
Debconf answers for reference:
AFS cell this workstation belongs to: example.com
# (Your domain name in lowercase, matching the Kerberos realm in uppercase)

Size of AFS cache in kB? 4000000
# (Default value is 50000 for 50 MB, but you can greatly increase the
# size on modern systems to a few gigabytes, with 20000000 (20 GB) being
# the upper reasonable limit. The example above uses 4 GB)

Run Openafs client now and at boot? No
# (It is important to say NO at this point, or the client will try to
# start without the servers in place for the cell it belongs to!)

Look up AFS cells in DNS? Yes

Encrypt authenticated traffic with AFS fileserver? No
# (OpenAFS client can encrypt the communication with the fileserver. The
# performance hit is not too great to refrain from using encryption, but
# generally, disable it on local and trusted-connection clients, and enable
# it on clients using remote/insecure channels)

Dynamically generate the contents of /afs? Yes

Use fakestat to avoid hangs when listing /afs? Yes

DB server host names for your home cell: afs1
# (Before continuing, make sure you've edited your DNS configuration or
# /etc/hosts file as mentioned above in the section "Conventions", and that
# the command 'ping afs1' really does successfully ping your server)
The OpenAFS cache directory on AFS clients is /var/cache/openafs/. As we've said, this includes your AFS servers too, as they will all have the AFS client software installed. The cache directory must be on an Ext partition. In addition, make sure the OpenAFS cache never runs out of space; OpenAFS doesn't handle the situation gracefully when its cache unexpectedly runs out of space it thought it had. The best way to satisfy both requirements is to mount a dedicated partition onto /var/cache/openafs/, and make its size match the size of the AFS cache that was specified above.
If you have a physical partition available, create an Ext filesystem on it and add it to /etc/fstab as usual:

sudo mkfs.ext4 /dev/my-cache-partition
sudo sh -c "echo '/dev/my-cache-partition /var/cache/openafs ext4 defaults 0 2' >> /etc/fstab"
If you do not have a physical partition available, you can create the partition in a file; here's an example for the size of 4 GB we've already used above for the "AFS cache size" value:
cd /var/cache
sudo dd if=/dev/zero of=openafs.img bs=10M count=410   # (~4.1 GB partition)
sudo mkfs.ext4 openafs.img
sudo sh -c "echo '/var/cache/openafs.img /var/cache/openafs ext4 defaults,loop 0 2' >> /etc/fstab"
sudo tune2fs -c 0 -i 0 -m 0 openafs.img
To verify that the Ext cache partition has been created successfully and can be mounted, run:
sudo mount /var/cache/openafs
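Once mounted, a quick check that the partition size roughly matches the cache size configured earlier (the third field in /etc/openafs/cacheinfo) can save you trouble later:

df -h /var/cache/openafs
cat /etc/openafs/cacheinfo
# e.g.: /afs:/var/cache/openafs:4000000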
Now that the kernel module and the AFS client are ready, we can proceed with the last step — installing the OpenAFS server.
sudo apt-get install openafs-{fileserver,dbserver}
Debconf answers for reference:
Cell this server serves files for: example.com
That one was easy, wasn't it? Let's follow with the configuration part to get it running:
As Kerberos introduces mutual authentication of users and services, we need to create a Kerberos principal for our AFS service.
Strictly speaking, in Kerberos you would typically create one key per host per service, but since OpenAFS uses a single key for the entire cell, we create just one key, and that key will be shared by all OpenAFS cell servers.
(The transcript below assumes you've set up Kerberos and the Kerberos policy named "service" as explained in the MIT Kerberos 5 Guide; if you did not, do so right now as Kerberos is the necessary prerequisite.)

sudo rm -f /tmp/afs.keytab
sudo kadmin.local
Authenticating as principal root/admin@EXAMPLE.COM with password.
kadmin.local: addprinc -policy service -randkey afs/example.com
Principal "afs/example.com@EXAMPLE.COM" created.
kadmin.local: ktadd -k /tmp/afs.keytab -norandkey afs/example.com
Entry for principal afs/example.com with kvno 1, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/tmp/afs.keytab.
Entry for principal afs/example.com with kvno 1, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/tmp/afs.keytab.
kadmin.local: quit
Once the key's been created and exported to the file /tmp/afs.keytab as shown, we need to load it into the AFS KeyFile. Note that the number "1" in the following commands is the key version number (kvno), which has to match the KVNO reported in the ktadd step above; the numbers 18 and 17 are the Kerberos encryption type numbers for aes256-cts-hmac-sha1-96 and aes128-cts-hmac-sha1-96, respectively.
sudo asetkey add rxkad_krb5 1 18 /tmp/afs.keytab afs/example.com
sudo asetkey add rxkad_krb5 1 17 /tmp/afs.keytab afs/example.com
If the above fails, the most probable cause is that your Kerberos is too new, and you need to use OpenAFS 1.8 with it.
It is relatively simple to use the new packages from the "testing" branch. Edit your /etc/apt/sources.list, copy the first entry you find there, and in the newly added line replace the distribution name with "testing". Then run apt update. When the update is done, bring all existing OpenAFS packages to the new version by running e.g. dpkg -l | grep afs | awk '{ print $2 }' | xargs sudo apt install -y.
To verify that the key has been loaded and that there is only one key in the AFS KeyFile, run bos listkeys as shown below. Please note that this may return an empty list and report "All done", or even throw an error, when using rxkad_krb5; "bos listkeys" only supports basic rxkad (old 56-bit DES-compatible) keys, so you can ignore the error if it happens:
sudo bos listkeys afs1 -localauth
key 1 has cksum 2035850286
Keys last changed on Tue Jun 24 14:04:02 2008.
All done.
Now that's nice!
(On a side note, you can also remove a key from the KeyFile. In case there's something wrong and you want to do that, run bos help for a list of available commands and bos help removekey for the specific command you'll want to use.)
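On OpenAFS 1.8, where the rxkad_krb5 keys live in the KeyFileExt, the asetkey utility itself can list what has been loaded; the entries shown should match the kvno (1) and encryption types (18 and 17) used above:

sudo asetkey list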
As we've hinted in the introduction, AFS works by using its own dedicated partitions. Each server can have up to 256 partitions, which should be mounted to directories named /vicepXX/, where "XX" is the partition "number" going from 'a' to 'z' and from 'aa' to 'iv'.
In a simple scenario, we will have only one partition, /vicepa/. While different underlying filesystems are supported, we will assume /vicepa/ has been formatted as some version of the Ext filesystem (4, 3 or 2).
The same notes as for the OpenAFS client cache directory apply — it is advisable to have /vicepa mounted as a partition, although you can get away without it.
Here's the list of the three possible setups:
1) If you have a physical partition available, create an Ext filesystem on it and add it to /etc/fstab as usual:

sudo mkfs.ext4 /dev/my-vice-partition
sudo sh -c "echo '/dev/my-vice-partition /vicepa ext4 defaults 0 2' >> /etc/fstab"
2) If you do not have a physical partition available, you can create the partition in a file; here's an example for the size of 10 GB:
cd /home
sudo dd if=/dev/zero of=vicepa.img bs=100M count=100   # (10 GB partition)
sudo mkfs.ext4 vicepa.img
sudo sh -c "echo '/home/vicepa.img /vicepa ext4 defaults,loop 0 2' >> /etc/fstab"
sudo tune2fs -c 0 -i 0 -m 0 vicepa.img
To verify that the Ext vice partition has been created successfully and can be mounted, run:
sudo mkdir -p /vicepa
sudo mount /vicepa
3) If you insist on not using any partition mounted on /vicepa, that'll work too because AFS does not use its own low-level format for the partitions — it saves data to vice partitions on the file level. (As said in the introduction, that data is structured in a way meaningful only to AFS, but it is there in the filesystem, and you are able to browse around it using cd and ls).
To make OpenAFS honor and use such a vice directory that is not mounted as a separate partition, create the file AlwaysAttach in it:

sudo mkdir -p /vicepa
sudo touch /vicepa/AlwaysAttach
Now that we've installed the software components that make up the OpenAFS server and that we've taken care of the pre-configuration steps, we can create an actual AFS cell.
Edit /etc/openafs/CellServDB and add to the top:

>example.com
192.168.7.12        #afs1
Then verify that the cache size you've entered in the previous steps is less than 95% of the total partition size. If you specified a value too large, edit /etc/openafs/cacheinfo
and reduce it.
Then run afs-newcell:

sudo afs-newcell
Prerequisites

In order to set up a new AFS cell, you must meet the following:

1) You need a working Kerberos realm with Kerberos4 support. You should
   install Heimdal with KTH Kerberos compatibility or MIT Kerberos 5.
2) You need to create the single-DES AFS key and load it into
   /etc/openafs/server/KeyFile. If your cell's name is the same as your
   Kerberos realm then create a principal called afs. Otherwise, create a
   principal called afs/cellname in your realm. The cell name should be all
   lower case, unlike Kerberos realms which are all upper case. You can use
   asetkey from the openafs-krb5 package, or if you used AFS3 salt to create
   the key, the bos addkey command.
3) This machine should have a filesystem mounted on /vicepa. If you do not
   have a free partition, then create a large file by using dd to extract
   bytes from /dev/zero. Create a filesystem on this file and mount it using
   -oloop.
4) You will need an administrative principal created in a Kerberos realm.
   This principal will be added to susers and system:administrators and thus
   will be able to run administrative commands. Generally the user is a root
   or admin instance of some administrative user. For example if jruser is
   an administrator then it would be reasonable to create jruser/admin (or
   jruser/root) and specify that as the user to be added in this script.
5) The AFS client must not be running on this workstation. It will be at
   the end of this script.

Do you meet these requirements? [y/n] y
If the fileserver is not running, this may hang for 30 seconds.
/etc/init.d/openafs-fileserver stop
What administrative principal should be used? root/admin

/etc/openafs/server/CellServDB already exists, renaming to .old
/etc/init.d/openafs-fileserver start
Starting OpenAFS BOS server: bosserver.
bos adduser afs1.example.com root -localauth
Creating initial protection database. This will print some errors about
an id already existing and a bad ubik magic. These errors can be safely
ignored.
pt_util: /var/lib/openafs/db/prdb.DB0: Bad UBIK_MAGIC. Is 0 should be 354545
Ubik Version is: 2.0
bos create afs1.example.com ptserver simple /usr/lib/openafs/ptserver -localauth
bos create afs1.example.com vlserver simple /usr/lib/openafs/vlserver -localauth
bos create afs1.example.com dafs dafs -cmd '/usr/lib/openafs/dafileserver -p 23 -busyat 600 \
  -rxpck 400 -s 1200 -l 1200 -cb 65535 -b 240 -vc 1200' -cmd /usr/lib/openafs/davolserver \
  -cmd /usr/lib/openafs/salvageserver -cmd /usr/lib/openafs/dasalvager -localauth
bos setrestart afs1.example.com -time never -general -localauth
Waiting for database elections: done.
vos create afs1.example.com a root.afs -localauth
Volume 536870915 created on partition /vicepa of afs1.example.com
/etc/init.d/openafs-client force-start
Starting AFS services: afsd.
afsd: All AFS daemons started.
Now, get tokens as root/admin in the example.com cell.
Then, run afs-rootvol to create the root volume.
Now that our AFS cell is created, remember we've said volumes are the basic units accessible by AFS clients? By convention, each AFS cell's first volume is called root.afs.
Well, strictly speaking, files in a volume can legitimately be accessed without mounting a volume. It's not as convenient, so you will almost always want to mount the volume first, but keep in mind that unmounting a volume does not equal making the files inaccessible — a volume becomes really inaccessible only if you clear its toplevel ACL.
According to the advice printed at the end of afs-newcell run, we need to first obtain the AFS administrator token:
sudo su
# (We want to switch to the root user)
kinit root/admin
Password for root/admin@EXAMPLE.COM: PASSWORD
aklog
To verify that you hold the Kerberos ticket and AFS token, you may run the following:
klist -5f
Ticket cache: FILE:/tmp/krb5cc_1116
Default principal: root/admin@EXAMPLE.COM

Valid starting     Expires            Service principal
02/09/10 17:18:18  02/10/10 03:18:18  krbtgt/EXAMPLE.COM@EXAMPLE.COM
        renew until 02/10/10 17:18:16, Flags: FPRIA
02/09/10 17:18:18  02/10/10 03:18:18  afs/example.com@EXAMPLE.COM
        renew until 02/10/10 17:18:16, Flags: FPRAT

tokens
Tokens held by the Cache Manager:
User's (AFS ID 1) rxkad tokens for example.com [Expires Feb 10 03:18]
--End of list--
Now, with a successful kinit and aklog in place, we can run afs-rootvol:
afs-rootvol
Prerequisites

In order to set up the root.afs volume, you must meet the following
pre-conditions:

1) The cell must be configured, running a database server with a volume
   location and protection server. The afs-newcell script will set up these
   services.
2) You must be logged into the cell with tokens for a user in
   system:administrators and with a principal that is in the UserList file
   of the servers in the cell.
3) You need a fileserver in the cell with partitions mounted and a root.afs
   volume created. Presumably, it has no volumes on it, although the script
   will work so long as nothing besides root.afs exists. The afs-newcell
   script will set up the file server.
4) The AFS client must be running pointed at the new cell.

Do you meet these conditions? (y/n) y

You will need to select a server (hostname) and AFS partition on which to
create the root volumes.
What AFS Server should volumes be placed on? afs1
What partition? [a] a

vos create afs1 a root.cell -localauth
Volume 536870918 created on partition /vicepa of afs1
fs sa /afs system:anyuser rl
fs mkm /afs/example.com root.cell -cell example.com -fast || true
fs mkm /afs/grand.central.org root.cell -cell grand.central.org -fast || true
..... ..... ..... ..... .....
fs sa /afs/example.com system:anyuser rl
fs mkm /afs/.example.com root.cell -cell example.com -rw
fs mkm /afs/.root.afs root.afs -rw
vos create afs1 a user -localauth
Volume 536870921 created on partition /vicepa of afs1
fs mkm /afs/example.com/user user
fs sa /afs/example.com/user system:anyuser rl
vos create afs1 a service -localauth
Volume 536870924 created on partition /vicepa of afs1
fs mkm /afs/example.com/service service
fs sa /afs/example.com/service system:anyuser rl
ln -s example.com /afs/spinlock
ln -s .example.com /afs/.spinlock
vos addsite afs1 a root.afs -localauth
Added replication site afs1 /vicepa for volume root.afs
vos addsite afs1 a root.cell -localauth
Added replication site afs1 /vicepa for volume root.cell
vos release root.afs -localauth
Released volume root.afs successfully
vos release root.cell -localauth
Released volume root.cell successfully
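At this point you can verify that everything was created as advertised; vos listvol shows the volumes now present on the server, and root.afs and root.cell should each have a .readonly clone after the releases above:

sudo vos listvol afs1 -localauth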
If you remember, during the AFS installation phase we answered "No" to the question "Run OpenAFS client now and at boot?". The AFS init script is such that it just won't run the client as long as the client startup is disabled, even if you invoke sudo invoke-rc.d openafs-client start manually (you'd have to invoke sudo invoke-rc.d openafs-client force-start, but this is not what happens during regular boot).
So we have to enable the client. For older OpenAFS versions, you can do this in /etc/openafs/afs.conf.client by replacing the line AFS_CLIENT=false with AFS_CLIENT=true:

sudo perl -pi -e's/^AFS_CLIENT=false/AFS_CLIENT=true/' /etc/openafs/afs.conf.client
sudo invoke-rc.d openafs-client restart
For newer versions, you do this by running sudo dpkg-reconfigure openafs-client.
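With the client running, a couple of quick sanity checks confirm it is pointed at the right cell and can see the servers:

fs wscell
# This workstation belongs to cell 'example.com'
fs checkservers
# All servers are running.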
Now let's drop any tokens or tickets that we may have initialized, to continue with a clean slate:
unlog; kdestroy
And at this point, you've got yourself one helluva OpenAFS cell up and running!
The following sections provide some more information on OpenAFS and its usage, but your installation and configuration phases have been completed.
While the whole point of AFS is in accessing files from remote workstations, remember that all AFS servers are also regular AFS clients, and you can use them to browse the files just as well. So let's explain the AFS directory structure a bit, and then use our just-installed machine to look at the actual contents of the /afs/ directory.
As we've hinted in the section called "Introduction", AFS uses a global namespace. That means all AFS sites are instantly accessible from /afs/ as if they were local directories, and all files have a unique AFS path. For example, the file /afs/example.com/service/test will always be /afs/example.com/service/test, no matter the client, operating system, local policy, connection type or geographical location.
In order to avoid clashes in this global AFS namespace, by convention, each cell's "AFS root" starts with /afs/domain.name/. Beneath it, AFS automatically creates two directories, common/ and user/. The latter is where the users' home directories should go, usually hashed to two levels, such as /afs/example.com/user/r/ro/root/.
Let's list the /afs/ directory contents to verify what we've just said about AFS cells and their mount points:

cd /afs
ls | head
1ts.org
acm-csuf.org
acm.uiuc.edu
ams.cern.ch
andrew.cmu.edu
anl.gov
asu.edu
athena.mit.edu
atlass01.physik.uni-bonn.de
atlas.umich.edu
ls | wc -l
189
The 189 directories were automatically created by the afs-rootvol script, but you can create additional AFS mount points and remove existing ones at will.
With the above said, we can predict that AFS has created our own directory in /afs/example.com/. This directory is only visible automatically within the local cell and is not seen by the world in the ls /afs listing (because you have not asked for its inclusion in the global "CellServDB" file). Its default invisibility, however, does not make it inaccessible; supposing that you have a functioning network link, and that your cell name and server hostnames are known, your cell is reachable from the Internet.
Now that we're in AFS land, we can quickly get some more AFS-specific information on /afs/example.com/:

fs lsm /afs/example.com
'/afs/example.com' is a mount point for volume '#example.com:root.cell'
fs lv /afs/example.com
File /afs/example.com (536870919.1.1) contained in volume 536870919
Volume status for vid = 536870919 named root.cell.readonly
Current disk quota is 5000
Current blocks used are 4
The partition has 763818364 blocks available out of 912596444
The output above is showing a cell setup with 1 TB of AFS storage — the block size in OpenAFS is 1 KB.
Most of the AFS fs subcommands operate only on directory names that do not end with a dot (".") or a slash ("/"). For example, the above fs lsm /afs/example.com would not work if it was called with /afs/example.com/. Likewise, it is not possible to call fs lsm . or fs lsm ./; use fs lsm $PWD for the equivalent.
Each time you mount a volume, you can mount it read-write or read-only. Read-write mounts are simple: reads and writes are done through the same filesystem path, such as /afs/.example.com/common/testfile, and are always served by the AFS server on which the volume resides.
Read-only mounts make things interesting: volumes may have up to 8 read-only replicas, and clients will retrieve files from the "best" source. However, that brings two specifics:

First, as the read-only mount is read-only by definition, a different file path (prefixed with a dot), such as /afs/.example.com/common/testfile, must be used when access in a read-write fashion is required.

Second, any change of data in the read-write tree won't show up in the read-only tree until you "release" the volume contents with the vos release command.
As said, read-write mounts are by convention prefixed by a leading dot. Let's verify this:

fs lsm /afs/example.com
'/afs/example.com' is a mount point for volume '#example.com:root.cell'
fs lsm /afs/.example.com
'/afs/.example.com' is a mount point for volume '%example.com:root.cell'
Equipped with the above information, let's visit /afs/example.com/, look around, and then try to read and write files.

cd /afs/example.com
ls -al
total 14
drwxrwxrwx 2 root root 2048 2008-06-25 02:05 .
drwxrwxrwx 2 root root 8192 2008-06-25 02:05 ..
drwxrwxrwx 2 root root 2048 2008-06-25 02:05 service
drwxrwxrwx 2 root root 2048 2008-06-25 02:05 user
echo TEST > testfile
-bash: testfile: Read-only file system
cd /afs/.example.com
echo TEST > testfile
-bash: testfile: Permission denied
Good. The first command has been denied since we were in the read-only AFS mount point. The second command has been denied since we did not obtain the Kerberos/AFS identity yet to acquire the necessary write privilege.
Now let's list access permissions (AFS ACL) for the directory, and then obtain AFS admin privileges that will allow us to write files. Note that we first establish our Kerberos identity using kinit, and then obtain the matching AFS token using aklog. Aklog obtains a token automatically and without further prompts, on the basis of the existing Kerberos ticket.
cd /afs/.example.com
fs la .
Access list for . is
Normal rights:
  system:administrators rlidwka
  system:anyuser rl
kinit root/admin; aklog
Password for root/admin@EXAMPLE.COM: PASSWORD
klist -5
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: root/admin@EXAMPLE.COM

Valid starting     Expires            Service principal
06/29/08 19:38:05  06/30/08 05:38:05  krbtgt/EXAMPLE.COM@EXAMPLE.COM
        renew until 06/30/08 19:38:05
06/29/08 19:38:12  06/30/08 05:38:05  afs@EXAMPLE.COM
        renew until 06/30/08 19:38:05

tokens
Tokens held by the Cache Manager:
User's (AFS ID 1) tokens for afs@example.com [Expires Jun 30 05:38]
--End of list--
At this point, writing the file succeeds:
echo TEST > testfile
# This is to make the test file visible in the read-only trees:
vos release root.cell
cat testfile
TEST
rm testfile
As we've seen in previous chapters, to obtain read or write privilege in AFS, you authenticate to Kerberos using kinit and then to AFS using aklog.
We're dealing with two separate authentication databases here — the Kerberos database, and the AFS "Protection Database" or PTS.
That means all users have to exist in both Kerberos and AFS if they want to access AFS data space in an authenticated fashion. The only reason we did not have to add the root/admin user to AFS PTS is because this was done automatically for the admin user by virtue of afs-newcell.
So let's add a regular AFS user. We're going to add user "jirky", which should already exist in Kerberos if you've followed the MIT Kerberos 5 Guide, section "Creating first unprivileged principal". Make sure you hold the administrator Kerberos ticket and AFS token, and then execute:

pts createuser jirky 20000
User jirky has id 20000
You will notice that Kerberos and AFS do not require any use of sudo. (Actually, we do use sudo to invoke Kerberos' sudo kadmin.local, but that's only because we want to access the local Kerberos database directly by opening the on-disk Kerberos database file.) Kerberos and AFS privileges are determined solely by the tickets and tokens one has obtained; they have nothing to do with traditional Unix privileges, nor are they tied to particular usernames or IDs.
Now that we have a regular user "jirky" created in both Kerberos and AFS, we want to create an AFS data volume that will correspond to this user and be "mounted" in the location of the user's home directory in AFS.
This is an established AFS practice — every user gets a separate volume, mounted in the AFS space as their home directory. Depending on specific uses, further volumes might also be created for the user and mounted somewhere under their toplevel home directory, or even somewhere else inside the cell file structure.
Make sure you still hold the administrator Kerberos ticket and AFS token, and then execute:
vos create afs1 a user.jirky -maxquota 200000
Volume 536997357 created on partition /vicepa of afs1
vos examine user.jirky
user.jirky                        536997357 RW          2 K  On-line
    afs1.example.com /vicepa
    RWrite  536997357 ROnly          0 Backup          0
    MaxQuota     200000 K
    Creation    Sun Jun 29 18:06:43 2008
    Copy        Sun Jun 29 18:06:43 2008
    Backup      Never
    Last Update Never

    RWrite: 536997357
    number of sites -> 1
       server afs1.example.com partition /vicepa RW Site
Having the volume, let's mount it to a proper location. We will use a "hashed" directory structure with two sublevels, so that the person's home directory will be in /afs/example.com/user/p/pe/person (instead of directly in /afs/example.com/user/person). Follow this AFS convention, and you will be able to use libnss-afs and 3rd party management scripts without modification.
cd /afs/.example.com/user
mkdir -p j/ji
fs mkm j/ji/jirky user.jirky -rw
Let's view the volume and directory information:
fs lsm j/ji/jirky
'j/ji/jirky' is a mount point for volume '%user.jirky'
fs lv j/ji/jirky
File j/ji/jirky (536997357.1.1) contained in volume 536997357
Volume status for vid = 536997357 named user.jirky
Current disk quota is 200000
Current blocks used are 2
The partition has 85448567 blocks available out of 140861236
Let's view the permissions on the new directory and allow user full access:
fs la j/ji/jirky
Access list for j/ji/jirky is
Normal rights:
  system:administrators rlidwka
fs sa j/ji/jirky jirky all
fs la !:2
Access list for j/ji/jirky is
Normal rights:
  system:administrators rlidwka
  jirky rlidwka
(On a side note, the "!:2" above is a Bash history construct that inserts the 3rd word from the previous command line. Expanded, that line becomes, and executes, "fs la j/ji/jirky".)
Now switch to user jirky and verify you've got access to the designated home directory:

unlog; kdestroy
kinit jirky; aklog
Password for jirky@EXAMPLE.COM: PASSWORD
cd /afs/.example.com/user/j/ji/jirky
echo IT WORKS > test
cat test
IT WORKS
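Since creating a user involves the same few commands every time, the whole procedure from this chapter can be wrapped into a small script. The following is only a sketch following this guide's conventions (cell example.com, server afs1, partition a, two-level hashing); the quota value is illustrative, and it assumes you already hold the administrator ticket and token:

#!/bin/sh
# Usage: ./mkafsuser USERNAME PTS_ID   (script name is illustrative)
user="$1"; id="$2"
quota=200000                                  # volume quota in KB
l1=$(printf '%.1s' "$user")                   # first letter, e.g. "j"
l2=$(printf '%.2s' "$user")                   # first two letters, e.g. "ji"
base="/afs/.example.com/user"

pts createuser "$user" "$id"                  # register the user in PTS
vos create afs1 a "user.$user" -maxquota "$quota"
mkdir -p "$base/$l1/$l2"                      # hashed parent directories
fs mkm "$base/$l1/$l2/$user" "user.$user" -rw
fs sa "$base/$l1/$l2/$user" "$user" all       # grant the user full rights
# If the parent volume were replicated, you would also run: vos release user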
AFS volumes have a concept of "volume quota", or the maximum amount of data a volume can hold before denying further writes with the appropriate "Quota exceeded" error. It's important to know that AFS volumes do not take a predefined amount of disk space like physical disk partitions do; you can create thousands of volumes, and they only take as much space as there is actual data on them. Likewise, AFS volume quotas are just limits that do not affect volume size except "capping" the maximum size of data a volume can store.
Let's list volume data size quota and increase it from the default 5 MB to 100 MB:
cd /afs/.example.com
fs lq
Volume Name                    Quota       Used %Used   Partition
root.cell                       5000         28    1%         38%
fs sq . 100000
fs lq
Volume Name                    Quota       Used %Used   Partition
root.cell.readonly            100000         28    1%         38%
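Quotas cap individual volumes, but the real limit remains the vice partition itself; vos partinfo shows how much of it is actually in use:

vos partinfo afs1
# Free space on partition /vicepa: 85448567 K blocks out of total 140861236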
The material covered so far in the MIT Kerberos 5 Guide and this OpenAFS Guide has gotten us to a point where we can create users in Kerberos and AFS, create and mount users' data volumes, authenticate using kinit and aklog, and read and write files in the users' volumes with full permissions.
In other words, it seems as if we're a step away from our goal — a true networked and secure solution for centralized logins with exported home directories.
There's one final thing missing, and it's the support for serving user "metadata". As explained in the section called "Introduction", metadata will come from either LDAP or libnss-afs.
If you've followed and implemented the setup described in the OpenLDAP Guide, you already have the metadata taken care of. However, let's say a few words about it anyway to broaden our horizons.
Collectively, metadata is the information traditionally found in the system files /etc/passwd, /etc/group, and /etc/shadow.
The metadata necessary for a successful user login includes four elements: Unix user ID, Unix group ID, home directory and the desired shell.
But let's take a look at the complete list of common user metadata information. The software components which can store them are listed in parentheses:
Username (all)
Password (Kerberos or LDAP; storing passwords in LDAP is out of our scope)
User ID (LDAP or libnss-afs)
Group ID (LDAP)
GECOS information (LDAP)
Home directory (LDAP or libnss-afs)
Preferred shell (LDAP or libnss-afs)
Group membership (LDAP)
Password aging (Kerberos)
You may notice LDAP seems like a "superset" of libnss-afs. And it really is, which can be an advantage or a disadvantage, depending on the situation. Here's why:
LDAP is a standalone solution that can be used to create network infrastructures based on the "magic trio" — Kerberos, LDAP, and AFS. It is flexible and can serve arbitrary user and system information besides the necessary metadata. Can you think of a few examples how this would be useful? For example, on a lower level, you could use LDAP to store extra group membership information or per-user host access information; on a higher level, you could use LDAP to store a person's image, birth date, or a shared calendar available to all user applications. However, this flexibility comes at a cost of administering yet another separate database (Kerberos, AFS, and LDAP all have their own databases, and you have to keep them in sync. Without proper tools, this could become a burden).
libnss-afs, on the other hand, is an AFS-dependent module that serves the metadata out of the AFS PTS database. It is simple, and limited. The structure of the PTS is such that you can only save certain information in there, and nothing else. For fields that cannot be represented in PTS, libnss-afs outputs a "one size fits all" default value. For example, as there is no space for GECOS information in the PTS, everyone's GECOS is set to their username; as there is no group ID, everyone's group ID is set to group 65534 (nogroup); and as there is no home directory, everyone's homedir is set to /afs/cell.name/user/u/us/user/. libnss-afs may suit those who prefer simplified administration over flexibility.
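To see what that means in practice, here is roughly what a lookup for our user jirky (PTS id 20000) would return under libnss-afs; the exact shell, password field, and other defaults depend on the libnss-afs build, so treat this output as illustrative:

getent passwd jirky
# jirky:x:20000:65534:jirky:/afs/example.com/user/j/ji/jirky:/bin/bash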
In this Guide, both the LDAP and libnss-afs approaches will be explained. Moving from libnss-afs to LDAP is easy; if in doubt, pick libnss-afs as the simpler start-up solution.
As said, libnss-afs is an AFS-dependent approach to serving metadata, so it makes sense to describe it in the context of the OpenAFS Guide.
Adam Megacz created libnss-afs on the basis of Frank Burkhardt's libnss-ptdb, which in turn was created on the basis of Todd M. Lewis' nss_pts. The primary motivation for libnss-afs has been its use at HCoop, the first non-profit corporation offering public AFS hosting and accounts. Indeed, HCoop's libnss-afs repository is nowadays the right place to get libnss-afs from, and it has also been updated to work with OpenAFS 1.8.
Good. Let's move onto the technical setup:
libnss-afs must run in combination with nscd, to cache the replies from the AFS ptserver, so let's install nscd:
sudo apt-get install nscd
Once nscd is installed, edit its config file, /etc/nscd.conf, to include the following:

enable-cache    hosts   no
persistent      passwd  no
persistent      group   no
persistent      hosts   no
(Note that all of the above lines already exist in /etc/nscd.conf, although the formatting of the file is a bit strange and finding them is an exercise for your eyes. So you should not be adding the above lines; just locate them in the config file and turn the appropriate "yes" into "no".)
Then, restart nscd as usual with sudo invoke-rc.d nscd restart.
libnss-afs can be cloned from the Git repository at git://git.hcoop.net/git/hcoop/debian/libnss-afs.git. The Debian package can be built and installed by cd-ing into the libnss-afs/ directory and running dpkg-buildpackage. The whole session transcript might look like this:
sudo apt-get install libopenafs-dev debhelper libkrb5-dev heimdal-multidev
git clone git://git.hcoop.net/git/hcoop/debian/libnss-afs.git
cd libnss-afs
dpkg-buildpackage
sudo dpkg -i ../libnss-afs*deb
After libnss-afs is installed, let's modify the existing lines in /etc/nsswitch.conf to look like the following:

passwd:         afs files
group:          afs files
shadow:         files
If you have completed this setup and are not interested in the LDAP procedure for serving metadata, you can skip to the section called "Metadata test".
A complete LDAP setup is explained in another article from the series, the OpenLDAP Guide. If you have followed and implemented the procedure, especially the part about modifying /etc/nsswitch.conf, then there's only one thing that should be done here: you should modify users' entries in LDAP to make their home directories point to AFS instead of to /home/.

Actually, you can symlink /home/ to AFS, and then no change in LDAP will be necessary. One benefit of this approach is that /home/ looks familiar to everyone. One drawback is that you need to symlink that directory to AFS on all machines where users will be logging in.
To create the symlinks, use:
sudo mv /home /home,old
sudo ln -sf /afs/.example.com/user /home
sudo ln -sf /afs/example.com/user /rhome
To literally change users' home directories in LDAP (to point to /afs/example.com/user/u/us/user/), construct an LDIF file and use ldapmodify to apply it.
Here's an example for user jirky (which should already exist in your LDAP directory if you've followed the OpenLDAP Guide). Save the following as /tmp/homechange.ldif:
dn: uid=jirky,ou=people,dc=spinlock,dc=hr
changetype: modify
replace: homeDirectory
homeDirectory: /afs/example.com/user/j/ji/jirky
And apply using:
ldapmodify -c -x -D cn=admin,dc=spinlock,dc=hr -W -f /tmp/homechange.ldif
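To confirm the change was applied, you can read the attribute back with ldapsearch:

ldapsearch -x -LLL -b uid=jirky,ou=people,dc=spinlock,dc=hr homeDirectory
# dn: uid=jirky,ou=people,dc=spinlock,dc=hr
# homeDirectory: /afs/example.com/user/j/ji/jirky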
The final step in this article pertains to integrating OpenAFS into the system authentication procedure. We want Kerberos ticket and OpenAFS token to be issued for users as they log in, without the need to run kinit and aklog manually after login.
Let's install the necessary OpenAFS PAM module:
sudo apt-get install libpam-afs-session
To minimize the chance of locking yourself out of the system during PAM configuration phase, ensure right now that you have at least one root terminal window open and a copy of the files available before starting on PAM configuration changes. To do so, execute the following in a cleanly started shell and leave the terminal open:
sudo su -
cd /etc
cp -a pam.d pam.d,orig
If you break logins with an invalid PAM configuration, the above will allow you to simply revert to a known-good state by using the open root terminal and executing:
cp -a pam.d,orig/* pam.d/
After you've edited your PAM configuration as shown below (on Debian systems, the auth stack is typically in /etc/pam.d/common-auth), restart the services you will be connecting to. This isn't strictly necessary, but it ensures that the services will re-read the PAM configuration and not use any cached information.

auth    sufficient      pam_unix.so nullok_secure
auth    sufficient      pam_krb5.so use_first_pass
auth    optional        pam_afs_session.so program=/usr/bin/aklog
auth    required        pam_deny.so
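The lines above cover only the auth stack. pam_afs_session is normally also invoked from the session stack, so that a PAG is set up and a token obtained on login and discarded on logout; a sketch of the corresponding /etc/pam.d/common-session entries, to be adapted to your existing configuration, might look like:

session required        pam_unix.so
session optional        pam_krb5.so
session optional        pam_afs_session.so program=/usr/bin/aklog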
At this point, you have a functional AFS site. Users, once created in the system, can log in and access their files anywhere on the network.
You can rely on either system login or on manually running kinit; aklog to obtain the Kerberos ticket and AFS token.
Once the token is obtained, you can access the protected AFS data space.
With the good foundation we've built, please refer to other available resources for further information on AFS:
Official documentation: http://docs.openafs.org/
Mailing lists: https://lists.openafs.org/mailman/listinfo (OpenAFS-info in particular)
IRC: channel #OpenAFS at the Libera.Chat network (irc.libera.chat)
For commercial consultation and infrastructure-based networks containing AFS, contact Spinlock Solutions or organizations listed on the OpenAFS support page.
The newest version of this article can always be found at http://techpubs.spinlocksolutions.com/dklar/afs.html.
Platforms:
GNU
Debian GNU
AFS:
OpenAFS
http://docs.openafs.org/
https://lists.openafs.org/mailman/listinfo
GRAND.CENTRAL.ORG - community resource for users of the AFS distributed file system
Google Summer of Code AFS projects
Arla - alternative AFS client
Public access AFS accounts:
HCoop - Internet Hosting Cooperative
Glue layer:
Linux-PAM
PAM Configuration File Syntax
NSS
libnss-afs
Related infrastructural technologies:
MIT Kerberos
OpenLDAP
FreeRADIUS
Adduser-ng suite:
adduser-ng
Commercial support:
Spinlock Solutions
OpenAFS support organizations
Misc:
DocBook