Backups and Removable Media

Data backups in Linux were traditionally done by running commands to archive and compress the files to back up, then writing that backup archive to tape. Choices for archive tools, compression techniques, and backup media have grown tremendously in recent years. Tape archiving has, for many, been replaced with techniques
for backing up data over the network, to other hard disks, or to CDs, DVDs, or other low-cost removable media.
This chapter covers some useful tools for backing up and restoring your critical data. The first part of the chapter details how to use basic tools such as tar, gzip, and rsync for backups.

Backing Up Data to Compressed Archives

If you are coming from a Windows background, you may be used to tools such as WinZip and PKZIP, which both archive and compress groups of files in one application. Linux offers separate tools for gathering groups of files into a single archive (such as tar) and compressing that archive for efficient storage (gzip, bzip2, and lzop).

However, you can also do the two steps together by using additional options to the tar command.

Creating Backup Archives with tar

The tar command, which stands for tape archiver, dates back to early Unix systems. Although magnetic tape was the common medium that tar wrote to originally, today tar is most often used to create an archive file that can be distributed to a variety of media.

The fact that the tar command is rich in features is reflected in the dozens of options available with tar. The basic operations of tar, however, are used to create a backup archive (-c), extract files from an archive (-x), compare differences between archives (-d), and update files in an archive (-u). You can also append files to (-r or -A) or delete files from (-d) an existing archive, or list the contents of an archive (-t).
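
For instance, assuming an archive named myfiles.tar already exists in the current directory, the list, compare, and append operations might look like this:

List the contents of an archive
$ tar tvf myfiles.tar

Compare files in an archive against files on disk
$ tar dvf myfiles.tar

Append another file to an existing archive
$ tar rvf myfiles.tar newfile.txt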

NOTE: Although the tar command is available on nearly all Unix and Linux systems, it behaves differently on many systems. For example, Solaris does not support -z to manage tar archives compressed in gzip format. The Star (ess-tar) command supports access control lists (ACLs) and file flags (for extended permissions used by Samba).

As part of the process of creating a tar archive, you can add options that compress the resulting archive. For example, add -j to compress the archive in bzip2 format or -z to compress in gzip format. By convention, regular tar files end in .tar, while compressed tar files end in .tar.bz2 (compressed with bzip2) or .tar.gz (compressed with gzip). If you compress a file manually with lzop (see http://www.lzop.org), the compressed tar file should end in .tar.lzo.
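
For example, to create a bzip2-compressed archive (myfiles.tar.bz2 is just an illustrative name, parallel to the gzip examples later in this section), you could type:

Create bzip2-compressed tar file of .txt files
$ tar cjvf myfiles.tar.bz2 *.txt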

Besides being used for backups, tar files are popular ways to distribute source code and binaries from software projects. That’s because you can expect every Linux and Unix-like system to contain the tools you need to work with tar files.

NOTE: One quirk of working with the tar command comes from the fact that tar was created before there were standards regarding how options are entered. Although you can prefix tar options with a dash, it isn’t always necessary. So you might see a command that begins tar xvf with no dashes to indicate the options.

A classic example for using the tar command might combine old-style options and pipes for compressing the output; for example:

Make archive, zip it and output
$ tar c *.txt | gzip -c > myfiles.tar.gz

The example just shown illustrates a two-step process you might find in documentation for old Unix systems. The tar command creates (c) an archive from all .txt files in the current directory. The output is piped to the gzip command, which writes to stdout (-c), and is then redirected to the myfiles.tar.gz file. Note that tar is one of the few commands that don’t require options to be preceded by a dash (-).

New tar versions, on modern Linux systems, can create the archive and compress the output in one step:

Create gzipped tar file of .txt files
$ tar czf myfiles.tar.gz *.txt

Be more verbose creating archive
$ tar czvf myfiles.tar.gz *.txt
textfile1.txt
textfile2.txt

In the examples just shown, note that the new archive name (myfiles.tar.gz) must immediately follow the f option to tar (which indicates the name of the archive); otherwise the output from tar will be directed to stdout (in other words, your screen). The z option says to do gzip compression, and v produces verbose descriptions of processing.

When you want to return the files to a file system (unzipping and untarring), you can also do that as either a one-step or two-step process, using the tar command and optionally the gunzip command:

Unzips and untars archive
$ gunzip -c myfiles.tar.gz | tar x
Or try the following command line instead:

Unzips then untars archive
$ gunzip myfiles.tar.gz ; tar xf myfiles.tar

To do that same procedure in one step, you could use the following command:

$ tar xzvf myfiles.tar.gz
textfile1.txt
textfile2.txt

The result of the previous commands is that the archived .txt files are copied from the archive to the current directory. The x option extracts the files, z uncompresses (unzips) the files, v makes the output verbose, and f indicates that the next option is the name of the archive file (myfiles.tar.gz).


Backing Up with unison

Although the rsync command (covered later in this chapter) is good for backing up one machine to another, it assumes that the machine being backed up is the only one where the data is being modified. What if you have two machines that both modify the same file and you want to keep those files in sync? Unison is a tool that lets you do that. It’s common for people to want to work with the same documents on their laptop and desktop systems. Those machines might even run different operating systems. Because unison is a cross-platform application, it can sync files between Linux and Windows systems. To use unison in Linux, you must install the unison package (type the sudo apt-get install unison command).

With unison, you can define two roots representing the two paths to synchronize. Those roots can be local or remote over ssh. For example:

$ unison /home/marvin ssh://marvin@server1//home/fcaen
$ unison /home/marvin /mnt/backups/marvin-homedir

NOTE: Make sure you run the same version of unison on both machines.

Unison provides both graphical and command-line interfaces for doing backups. It will try to run the graphical version by default. This may fail if you don’t have a desktop running or if you’re launching unison from within screen. To force unison to run in command-line mode, add the -ui text option as follows:

$ unison /home/marvin ssh://marvin@server1//home/fcaen -ui text
Contacting server…
marvin@server1’s password:
Looking for changes
Waiting for changes from server
Reconciling changes
local server1
newfile ----> memo.txt [f] y
Propagating updates

The unison utility will then compare the two roots and for each change that occurred since last time, ask you what you want to do. In the example above, there’s a new file called memo.txt on the local system. You are asked if you want to proceed with the update (in this case, copy memo.txt from the local machine to server1). Type y to do the updates. If you trust unison, add -auto to make it take default actions without prompting you:

$ unison /home/marvin ssh://marvin@server1//home/fcaen -auto

For more information, see the man page for unison. In addition, you can view unison options using the -help option. You can also display and page through the unison manual using the -doc all option as shown here:

See unison options
$ unison -help

Display unison manual
$ unison -doc all | less

If you find yourself synchronizing two roots frequently, you can create a profile, which is a series of presets. In graphical mode, the opening screen prompts you to create profiles. Profiles are stored in .prf text files in the ~/.unison/ directory.

They can be as simple as the following:

root = /home/marvin
root = ssh://marvin@server1//home/fcaen

If this is stored in a profile called fc-home.prf, you can invoke it simply with the following command line:

$ unison fc-home
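
Profiles can hold other unison preferences as well. As a rough sketch (the ignore pattern shown is only an example), the same profile could be extended to skip temporary files and to run without prompting:

root = /home/marvin
root = ssh://marvin@server1//home/fcaen
batch = true
ignore = Name *.tmp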

Backing Up to Removable Media

The capacity of CDs and DVDs, and the low cost of those media, have made them attractive options for computer backups. Using tools that commonly come with Linux systems, you can gather files to back up into CD or DVD images and burn those images to the appropriate media.

Command line tools such as mkisofs (for creating CD images) and cdrecord (for burning images to CD or DVD) once provided the most popular interfaces for making backups to CD or DVD. Now there are many graphical front-ends to those tools you could also consider using. For example, GUI tools for mastering and burning CDs/DVDs include K3b (the KDE CD and DVD Kreator) and Nautilus (GNOME’s file manager that offers a CD-burning feature). Other GUI tools for burning CDs include gcombust, X-CD-Roast, and graveman. The commands for creating file system images to back up to CD or DVD, as well as to burn those images, are described in this document.

Creating Backup Images with mkisofs

Most data CDs and DVDs can be accessed on both Windows and Linux systems because they are created using the ISO9660 standard for formatting the information on those discs. Because most modern operating systems need to save more information about files and directories than the basic ISO9660 standard includes, extensions
to that standard were added to contain that information.

Using the mkisofs command, you can back up the file and directory structure from any point in your Linux file system and produce an ISO9660 image. That image can include the following kinds of extensions:

❑ System Use Sharing Protocol (SUSP) records are defined in the Rock Ridge Interchange Protocol. SUSP records can include Unix-style attributes, such as ownership, long file names, and special files (such as character devices and symbolic links).

❑ Joliet directory records store longer file names in a form that makes them usable to Windows systems.

❑ Hierarchical File System (HFS) extensions allow the ISO image to appear as an HFS file system, which is the native file system for Macintosh computers. Likewise, Data and Resource forks can be added in different ways to be read by Macs.

When you set out to create your ISO image, consider where you will ultimately need to access the files you back up using mkisofs (Linux, Windows, or Macs). Once the image is created, it can be used in different ways, the most obvious of which is to burn the image to a CD or DVD.

Besides being useful in producing all or portions of a Linux file system to use on a portable medium, mkisofs is also useful for creating live CDs/DVDs. It does this by adding boot information to the image that can launch a Linux kernel or other operating system, bypassing the computer’s hard drive.
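
As a rough sketch of how that works (the directory layout and file names are only illustrative, and assume an isolinux boot loader has already been copied into the tree being imaged), a bootable image might be created with options such as the following:

Create a bootable ISO using an isolinux boot image
$ mkisofs -o bootable.iso -J -R \
-b isolinux/isolinux.bin -c isolinux/boot.cat \
-no-emul-boot -boot-load-size 4 -boot-info-table \
/tmp/live_tree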

NOTE: Although you can still use the mkisofs command in Ubuntu, mkisofs is now a pointer to genisoimage. The genisoimage command was derived from mkisofs, which was part of the cdrtools package (see http://cdrecord.berlios.de). Development of genisoimage is part of the cdrkit project (www.cdrkit.org).

Because most Linux users store their personal files in their home directories, a common way to use mkisofs to back up files is to back up everything under the /home directory. Here are some examples of using mkisofs to create an ISO image from all files and directories under the /home directory:

$ cd /tmp

Create basic ISO9660 image
$ sudo mkisofs -o home.iso /home

Add Joliet and Rock Ridge extensions
$ sudo mkisofs -o home2.iso -J -R /home

Also add HFS extensions
$ sudo mkisofs -o home3.iso -J -R -hfs /home

With the last command, you will see a warning message like the following:

genisoimage: Warning: no Apple/Unix files will be decoded/mapped

In each of the three examples above, all files and directories beneath the /home directory are added to the ISO image (home.iso, home2.iso, and home3.iso, respectively). The first example has no extensions, so all file names are converted to DOS-style naming (8.3 characters). The second example uses Joliet and Rock Ridge extensions, so file names and permissions should appear as they did on the original Linux system when you open the ISO on a Linux or Windows system. The last example also makes the files on the image readable from a Mac file system.

NOTE: You can also read Rock Ridge and Joliet extensions on Mac OS X.

You can have multiple sources added to the image. Here are some examples:

Multiple directories/files
$ mkisofs -o home.iso -R -J music/ docs/ \
giovanni.pdf /var/spool/mail

Graft files on to the image
$ mkisofs -o home.iso -J -R \
-graft-points Pictures/=/usr/share/pixmaps/ \
/home/Giovanni

The first example above shows various files and directories being combined and placed at the root of the ISO image. The second example grafts the contents of the /usr/share/pixmaps directory into a Pictures directory on the image, alongside the contents of /home/Giovanni. As a result, on the CD image the /Pictures directory will contain all content from the /usr/share/pixmaps directory.

Adding information into the header of the ISO image can help you identify the contents of that image later. This is especially useful if the image is being saved or distributed online, without a physical disc you can write on. Here are some examples:

Add header info to ISO
$ mkisofs -o /tmp/home.iso -R -J \
-p http://www.handsonhistory.com \
-publisher "Swan Bay Folk Art Center" \
-V "WebBackup" \
-A "mkisofs" \
-volset "1 of 4 backups, July 30, 2007" \
/home/giovanni

In the example above, -p indicates the preparer ID, which could include a phone number, mailing address, or web site for contacting the preparer of the ISO image. With the option -publisher, you can indicate a 128-character description of the preparer (possibly the company or organization name). The -V indicates the volume ID. Volume ID is important because in many Linux systems this volume ID is used to mount the CD when it is inserted. For example, in the command line shown above, the CD would be mounted on /media/WebBackup in Ubuntu and other Linux systems. The -A option can be used to indicate the application used to create the ISO image. The -volset option can contain a string of information about a set of ISO images.

When you have created your ISO image, and before you burn it to disc, you can check the image and make sure you can access the files it contains. Here are ways to check it out:

Display volume name
$ volname home.iso
WebBackup

Display header information
$ isoinfo -d -i home.iso
CD-ROM is in ISO 9660 format
System id: LINUX
Volume id: WebBackup
Volume set id: All Website material on November 2, 2007
Publisher id: Swan Bay Folk Art Center
Data preparer id: http://www.handsonhistory.com
Application id: mkisofs
Copyright File id:
Abstract File id:
Bibliographic File id:
Volume set size is: 1
Volume set sequence number is: 1
Logical block size is: 2048
Volume size is: 23805
Joliet with UCS level 3 found
Rock Ridge signatures version 1 found

You can see a lot of the information entered on the mkisofs command line when the image was created. If this had been an image that was going to be published, we might also have indicated the locations on the CD of a copyright file (-copyright), abstract file (-abstract), and bibliographic file (-biblio). Provided that the header is okay, you can next try accessing files on the ISO image by mounting it:

Create a mount point
$ sudo mkdir /mnt/myimage

Mount the ISO in loopback
$ sudo mount -o loop home.iso /mnt/myimage

Check the ISO contents
$ ls -l /mnt/myimage

Unmount the image when done
$ sudo umount /mnt/myimage

Besides checking that you can access the files and directories on the ISO, make sure that the date/time stamps, ownership, and permissions are set as you would like. That information might be useful if you need to restore the information at a later date.

Burning Backup Images with cdrecord

The cdrecord command is the most popular Linux command line tool for burning CD and DVD images. After you have created an ISO image (as described earlier) or obtained one otherwise (such as downloading an install CD or live CD from the Internet), cdrecord makes it easy to put that image on a disc.

NOTE: In Linux, cdrecord has been replaced with the wodim command. The wodim command was created from the cdrecord code base and still supports most of the same options. If you run cdrecord, you will actually be running wodim in this Ubuntu release. If you have problems with that utility, contact the CDRkit project (http://cdrkit.org).

There is no difference in making a CD or DVD ISO image, aside from the fact that a DVD image can obviously be bigger than a CD image. Check the media you have for their capacities. A CD can typically hold 650MB, 700MB, or 800MB, whereas mini CDs can hold 50MB, 180MB, 185MB, or 193MB. Single-layer DVDs hold 4.7GB, while double-layer DVDs can hold 8.4GB. Keep in mind, however, that CD/DVD manufacturers list their capacities based on 1000KB per 1MB, instead of 1024KB. Type du --si home.iso to list the size of your ISO, instead of du -sh as you would normally, to check whether your ISO will fit on the media you have.
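
For example, using the home.iso image created earlier, you could compare the two size calculations:

Size using powers of 1024 (the usual du reporting)
$ du -sh home.iso

Size using powers of 1000 (matches how disc capacities are advertised)
$ du --si home.iso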

Before you begin burning your image to CD or DVD, check that your drive supports CD/DVD burning and determine the address of the drive. Use the -scanbus option to cdrecord to do that:
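
$ cdrecord -scanbus

Each detected drive is listed by its SCSI bus,target,lun address (such as 0,0,0), along with vendor and model strings and a description of the drive type.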

In the -scanbus output, a drive that reports only DVD-ROM or CD-ROM capabilities can read but not burn discs, whereas a drive that reports DVDRW (or CD-RW) can burn those media. Insert the medium you want to record on. Assuming your drive can burn the media you have, here are some simple cdrecord commands for burning CD or DVD images:

Test burn without actually burning
$ cdrecord -dummy home.iso

Burn CD (default settings) in verbose
$ cdrecord -v home.iso

Set specific speed
$ cdrecord -v speed=24 home.iso

Can’t read track so add 15 zeroed sectors
$ cdrecord -pad home.iso

Eject CD/DVD when burn is done
$ cdrecord -eject home.iso

Identify drive by device name (may differ)
$ cdrecord /dev/cdrw home.iso

Identify drive by SCSI name
$ cdrecord dev=0,2,0 home.iso

The cdrecord command can also burn multi-session CDs/DVDs. Here is an example:

Start a multi-burn session
$ cdrecord -multi home.iso

Check the session offset for next burn
$ cdrecord -msinfo
Using /dev/cdrom of unknown capabilities
0,93041

Create a second ISO to burn (-C indicates the start point and new data for the ISO)
$ mkisofs -J -R -o new.iso \
-C 0,93041 /home/marvin/more

Burn new data to existing CD
$ cdrecord new.iso

You can use multiple -multi burns until the CD is filled up. For the final burn, don’t use -multi, so that the CD will be closed.

Making and Burning DVDs with growisofs

Using the growisofs command, you can combine the two steps of gathering files into an ISO image (mkisofs) and burning that image to DVD (cdrecord). Besides saving a step, the growisofs command also offers the advantage of keeping a session open by default until you close it, so you don’t need to do anything special for multi-burn sessions.

Here is an example of some growisofs commands for a multi-burn session:

Master and burn to DVD
$ growisofs -Z /dev/dvd -R -J /home/marvin

Add to burn
$ growisofs -Z /dev/dvd -R -J /home/giovanni

Close burn
$ growisofs -M /dev/dvd=/dev/zero

If you want to add options when creating the ISO image, you can simply add mkisofs options to the command line. (For example, see how the -R and -J options are added in the above examples.)

If you want to burn a DVD image using growisofs, you can use the -dvd-compat option. Here’s an example:

Burn an ISO image to DVD
$ growisofs -dvd-compat -Z /dev/dvd=image.iso

The -dvd-compat option can improve compatibility with different DVD drives over some multi-session DVD burning procedures.

Backing Up tar Archives Over ssh

OpenSSH (www.openssh.org) provides tools to securely do remote login, remote execution, and remote file copy over network interfaces. By setting up two machines to share encryption keys, you can transfer files between those machines without entering passwords for each transmission. That fact lets you create scripts to back up your data from an SSH client to an SSH server, without any manual intervention.
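
Setting up those shared keys is a one-time step. As a minimal sketch (run as the user who will perform the backups, and accept the defaults when prompted), it might look like this:

Generate an SSH key pair for the local user
$ ssh-keygen -t rsa

Copy the public key to the account on the backup server
$ ssh-copy-id marvin@server1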

From a central Linux system, you can gather backups from multiple client machines using OpenSSH commands. The following example runs the tar command on a remote site (to archive and compress the files), pipes the tar stream to standard output, and uses the ssh command to catch the backup locally (over ssh) with tar:

$ mkdir mybackup ; cd mybackup
$ ssh marvin@server1 'tar cf - myfile*' | tar xvf -
marvin@server1's password: ******
myfile1
myfile2

In the example just shown, all files beginning with myfile are copied from the home directory of marvin on server1 and placed in the current directory. Note that the left side of the pipe creates the archive and the right side expands the files from the archive to the current directory. (Keep in mind that ssh will overwrite local files if they exist, which is why we created an empty directory in the example.)

To reverse the process and copy files from the local system to the remote system, we run a local tar command first. This time, however, we add a cd command to put the files in the directory of our choice on the remote machine:

$ tar cf - myfile* | ssh marvin@server1 \
'cd /home/marvin/myfolder; tar xvf -'
marvin@server1's password: ******
myfile1
myfile2

In this next example, we’re not going to untar the files on the receiving end, but instead write the results to tgz files:

$ ssh marvin@server1 'tar czf - myfile*' | cat > myfiles.tgz
$ tar cvzf - myfile* | ssh marvin@server1 'cat > myfiles.tgz'

The first example takes all files beginning with myfile from the marvin user’s home directory on server1, tars and compresses those files, and directs those compressed files to the myfiles.tgz file on the local system. The second example does the reverse by taking all files beginning with myfile in the local directory and sending them to a myfiles.tgz file on the remote system.

The examples just shown are good for copying files over the network. Besides providing compression, they also enable you to use any tar features you choose, such as its incremental backup features.
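
For example, GNU tar’s incremental backup support (the -g, or --listed-incremental, option) can be combined with the same pipe; the snapshot file and archive names below are only placeholders:

Send an incremental backup over ssh, tracking changes in a local snapshot file
$ tar czf - -g $HOME/myfiles.snar myfile* | ssh marvin@server1 'cat > myfiles-incr.tgz'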

Backing Up Over Networks

After you have backed up your files and gathered them into a tar archive, what do you do with that archive? The primary reason for having a backup is in case something happens (such as a hard disk crash) where you need to restore files from that backup. Methods you can employ to keep those backups safe include:

❑ Copying backups to removable media such as tape, CD, or DVD
❑ Copying them to another machine over a network

Fast and reliable networks, inexpensive high-capacity hard disks, and the security that comes with moving your data off-site have all made network backups a popular practice.

For an individual backing up personal data or a small office, combining a few simple commands may be all you need to create efficient and secure backups. This approach represents a direct application of the UNIX philosophy: joining together simple programs that do one thing to get a more complex job done.

Although just about any command that can copy files over a network can be used to move your backup data to a remote machine, some utilities are especially good for the job. Using OpenSSH tools such as ssh and scp, you can set up secure passwordless transfers of backup archives and encrypted transmissions of those archives.

Tools such as the rsync command can save resources by backing up only files (or parts of files) that have changed since the previous backup. With tools such as unison, you can back up files over a network from Windows as well as Linux systems.

The following sections describe some of these techniques for backing up your data to other machines over a network.

NOTE: A similar tool that might interest you is the rsnapshot command. The rsnapshot command (www.rsnapshot.org/) can work with rsync to make configurable hourly, daily, weekly, or monthly snapshots of a file system. It uses hard links to keep a snapshot of a file system, which it can then sync with changed files.

Install this tool with the following commands:

$ sudo apt-get install rsnapshot
$ sudo apt-get install sshfs

Backing Up Files with rsync

A more feature-rich command for doing backups is rsync. What makes rsync so unique is the rsync algorithm, which compares the local and remote files one small block at a time using checksums, and only transfers the blocks that are different. This algorithm is so efficient that it has been reused in many backup products. The rsync command can work either on top of a remote shell (ssh), or by running an rsyncd daemon on the server end. The following example uses rsync over ssh to mirror a directory:

$ rsync -avz --delete giovanni@server1:/home/giovanni/pics/ giovanni/pics/

The command just shown is intended to mirror the remote directory structure (/home/giovanni/pics/) on the local system. The -a says to run in archive mode (recursively copying all files from the remote directory), the -z option compresses the files, and -v makes the output verbose. The --delete tells rsync to delete any files on the local system that no longer exist on the remote system. For ongoing backups, you can have rsync do seven-day incremental backups. Here’s an example:

# mkdir /var/backups
# rsync --delete --backup \
--backup-dir=/var/backups/backup-`date +%A` \
-avz giovanni@server1:/home/giovanni/Personal/ \
/var/backups/current-backup/

When the command just shown runs, all the files from /home/giovanni/Personal on the remote system server1 are copied to the local directory /var/backups/current-backup. All files modified today are copied to a directory named after today’s day of the week, such as /var/backups/backup-Monday.

Over a week, seven directories will be created that reflect changes over each of the past seven days. Another trick for rotated backups is to use hard links instead of multiple copies of the files. This two-step process consists of rotating the files, then running rsync:

# rm -rf /var/backups/backup-old/
# mv /var/backups/backup-current/ /var/backups/backup-old/
# rsync --delete --link-dest=/var/backups/backup-old -avz \
giovanni@server1:/home/giovanni/Personal/ /var/backups/backup-current/

In the previous procedure, the oldest backup is removed and the most recent full backup (backup-current) is renamed backup-old. When the new full backup is run with rsync using the --link-dest option, any file being backed up from the remote Personal directory on server1 that is unchanged since the previous backup (now in backup-old) is stored as a hard link between the backup-current and backup-old directories rather than as a second copy.

You can save a lot of space by having hard links between files in your backup-old and backup-current directory. For example, if you had a file named file1.txt in both directories, you could check that both were the same physical file by listing the files’ inodes as follows:

$ ls -i /var/backups/backup*/file1.txt
260761 /var/backups/backup-current/file1.txt
260761 /var/backups/backup-old/file1.txt

Administration – Working with System Logs

Most Linux systems are configured to log many of the activities that occur on those systems. Those activities are then written to log files located in the /var/log directory or its subdirectories. This logging is done by the Syslog facility.

Linux uses the syslogd (system log daemon) and klogd (kernel log daemon) from the sysklogd and klogd packages to manage system logging. Those daemons are started automatically from the syslog init script (/etc/init.d/sysklogd). Information about system activities is then directed to files in the /var/log directory such as messages, secure, cron, and boot.log, based on settings in the /etc/syslog.conf file.

Automatic log rotation is handled by logrotate, based on settings in the /etc/logrotate.conf file and /etc/logrotate.d directory. The /etc/cron.daily/logrotate cronjob causes this daily log rotating to take place.
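
As a minimal sketch of what those settings look like (the log file name and values are only illustrative), an entry dropped into the /etc/logrotate.d directory might read:

/var/log/myapp.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}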

You can check any of the log files manually (using vi or another favorite text editor). However, if you install the logwatch package, highlights of your log files will automatically be mailed to your root user’s mailbox every day. You can change both the recipient and the sender address of that mail by editing the /etc/cron.daily/0logwatch file. To prevent e-mail loops, you should change the sender address to a real e-mail address when the recipient is not on the local machine. Another way to change the recipient is to forward root’s e-mail to another address by editing /etc/aliases and running newaliases to enact the changes. Otherwise, just log in as root and use a mail client, as described in Chapter 12, to read the logwatch e-mail messages.
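
For example, to forward root’s mail to another mailbox (the address shown is just a placeholder), you could add a line such as the following to /etc/aliases and then rebuild the aliases database:

root: admin@example.com

Apply changes made to /etc/aliases
$ sudo newaliases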

You can send your own messages to the syslogd logging facility using the logger command. Here are a couple of examples:

Message added to messages file
$ logger Added new video card

Priority, tag, message file
$ logger -p info -t CARD -f /tmp/my.txt

In the first example, the words Added new video card are sent to the messages file. In the second example, the priority of the message is set to info, a tag of CARD is added to each line, and the message text is read from the /tmp/my.txt file.

Administration – Using Advanced Security Features

A dozen or so pages covering security-related commands are not nearly enough to address the depth of security tools available to you as a Linux system administrator. Beyond the commands covered in this chapter, here are descriptions of some features you may want to look into to further secure your Linux system:

❑ Security Enhanced Linux (SELinux) — The SELinux feature provides a means of securing the files, directories, and applications in your Linux system in such a way that exploitation of one of those areas of your system cannot be used to breach other areas. For example, if intruders were to compromise your web daemon, they wouldn’t necessarily be able to compromise the rest of the system.

SELinux was developed by the U.S. National Security Agency (NSA), which hosts a related FAQ at http://www.nsa.gov/selinux/info/faq.cfm. You need to install SELinux as separate packages. See https://wiki.ubuntu.com/SELinux for details.

❑ Central logging—If you’re managing more than a couple of Linux servers, it becomes preferable to have all your systems log to a central syslog server. When you implement your syslog server, you may want to explore using syslog-ng. Also, if you outgrow logwatch, you should consider using a log parser such as Splunk.

❑ Tripwire — Using the tripwire package, you can take a snapshot of all the files on your system, then later use that snapshot to find out if any of those files have been changed. This is particularly useful for finding out whether applications have been modified that should not have been. First, you take a baseline of your system files. Then at regular intervals, you run a tripwire integrity check to see if any of your applications or configuration files have been modified (see the sketch following this list).

❑ APT database — Another way to check if any of your applications have been modified is by using the APT commands to validate the applications and configuration files you have installed on your system.

❑ chkrootkit—If you suspect your system has been compromised, download and build chkrootkit from http://www.chkrootkit.org. This will help you detect rootkits that may have been used to take over your machine. We recommend you run chkrootkit from a LiveCD or after mounting the suspected drive on a clean system.
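
Following up on the Tripwire item above, the basic workflow with the open source tripwire package looks roughly like this (this sketch assumes the tripwire package is installed and its site and local keys have already been generated):

Take the initial baseline snapshot of monitored files
$ sudo tripwire --init

Check the current system against that baseline
$ sudo tripwire --check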

Administration – Modifying or Deleting User Accounts

After a user account is created, you can change values for that account with the usermod command. Most options are the same ones you would use with useradd. For example:

Change user’s name in comment field
$ sudo usermod -c "Marvin Soto" msoto

Change default shell to sh
$ sudo usermod -s /bin/sh msoto

Lock the user account named marvin
$ sudo usermod -L marvin

Unlock user account named marvin
$ sudo usermod -U marvin

Note that the last two examples lock and unlock a user account, respectively. Locking a user account does not remove the user’s account from the system or delete any of the user’s files and directories. However, it does keep the user from logging in. Locking an account can be useful if an employee is leaving the company, but the work in that employee’s files needs to be passed to another person. Under those circumstances, locking the user instead of deleting it prevents the files owned by that user from appearing as belonging to an unassigned UID.

Because a regular user can’t use the useradd or usermod command, there are special commands for changing personal account information. Here are examples:

Change current user’s shell to /bin/sh
$ chsh -s /bin/sh

Change a user’s shell to /bin/sh
$ sudo chsh -s /bin/sh marvin

Change marvin’s office number, home phone, and office phone
$ sudo chfn \
-o "B-205" \
-h "212-555-1212" \
-w "212-555-1957" marvin

$ finger marvin
Login: marvin Name: Marvin G. Soto
Directory: /home/marvin Shell: /bin/bash
Office: B-205, 212-555-1212 Home Phone: 212-555-1957
On since Sat Aug 4 13:39 (CDT) on tty1 4 seconds idle
No mail.
No Plan.

The information changed above with the chfn command and displayed with finger is stored in the fifth field of the /etc/passwd file for the selected user. (The /etc/passwd file can only be edited directly by the root user, and should only be edited using the vipw command and extreme caution.)
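
For reference, the resulting entry for marvin in /etc/passwd might then look something like the following (the UID, GID, home directory, and shell are placeholders); the comma-separated values in the fifth field are the full name, office, office phone, and home phone:

marvin:x:1001:1001:Marvin G. Soto,B-205,212-555-1212,212-555-1957:/home/marvin:/bin/bash
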
On some Linux systems, you can use the -f option to chfn to change your real (full) name. On Ubuntu, permission for regular users to do this is turned off by default. You can change this by editing /etc/login.defs. Look for the following line:

CHFN_RESTRICT rwh

and change this to:

CHFN_RESTRICT frwh

Deleting User Accounts

With the userdel command, you can remove user accounts from the system, as well as other files (home directories, mail spool files, and so on) if you choose. Here are examples:

Delete user, not user’s home directory
# userdel marvin

Delete user, home directory, and mail spool
# userdel -r marvin

Administration – Managing Passwords

Adding or changing a password is usually done quite simply with the passwd command. However, there are additional options available with passwd that let an administrator manage such things as user account locking, password expiration, and warnings to change passwords. Besides passwd, there are commands such as chage, chfn, and vipw, for working with user passwords. Regular users can change only their own passwords, whereas the root user can change the password for any user. For example:

Change a regular user’s own password
$ passwd
Changing password for user marvin.
Changing password for marvin.
(current) UNIX password: ********
New UNIX password: *
BAD PASSWORD: it’s WAY too short
New UNIX password: *********
Retype new UNIX password: *********
passwd: password updated successfully

Root can change any user’s password
$ sudo passwd marvin
Changing password for user marvin.
New UNIX password: *
Retype new UNIX password: *
passwd: password updated successfully

In the first example, a regular user (marvin) changes his own password. Even while logged in, the user must type the current password before entering a new one. Also, passwd keeps a regular user from setting a password that is too short, is based on a dictionary word, doesn’t have enough different characters, or is otherwise easy to guess.
The root user, in the second example, can change any user password without the old password.

Passwords should be at least eight characters, be a combination of letters and other characters (numbers, punctuation, and so on), and not include real words. Make passwords easy to remember but hard to guess. A system administrator can use passwd to lock and unlock user accounts. For example:

Lock the user account (marvin)
$ sudo passwd -l marvin
Locking password for user marvin.
passwd: Success

Unlock a locked user account (marvin)
$ sudo passwd -u marvin
Unlocking password for user marvin.
passwd: Success

Fails to unlock account with blank password
$ sudo passwd -u marvin
Unlocking password for user marvin.
passwd: Warning: unlocked password would be empty.
passwd: Unsafe operation (use -f to force)

Locking a user account with passwd causes an exclamation mark (!) to be placed at the front of the password field in the /etc/shadow file (where user passwords are stored). When a user account is unlocked, the exclamation mark is removed and the user’s previous password is restored.

An administrator can use the passwd command to require users to change passwords regularly, as well as warn users when passwords are about to expire. To use the password expiration feature, the user account needs to have had password expiration enabled. The following examples use passwd to modify password expiration:

Set minimum password life to 2 days
$ sudo passwd -n 2 marvin

Set maximum password life to 300 days
$ sudo passwd -x 300 marvin

Warn of password expiration 10 days in advance
$ sudo passwd -w 10 marvin

Days after expiration account is disabled
$ sudo passwd -i 14 marvin

In the first example, the user must wait at least two days (-n 2) before changing to a new password. In the second, the user must change the password within 300 days (-x 300). In the next example, the user is warned 10 days before the password expires (-w 10). In the last example, the user account is disabled 14 days after the password expires (-i 14). To view password expiration, you can use the chage command as follows:

View password expiration information
$ sudo chage -l marvin
Last password change : Aug 04, 2007
Password expires : May 31, 2008
Password inactive : Jun 14, 2008
Account expires : never
Minimum number of days between password change : 2
Maximum number of days between password change : 300
Number of days of warning before password expires : 10

As system administrator, you can also use the chage command to manage password expiration. Besides being able to set minimum (-m), maximum (-M), and warning (-W) days for password expiration, chage can set the number of days of inactivity after a password expires before the account is locked (-I), as well as the date of the last password change (-d):

Lock the account 40 days after the password expires
$ sudo chage -I 40 marvin

Set the last password change date (in days since January 1, 1970)
$ sudo chage -d 5 marvin

In particular, setting that last option to 0 (-d 0) causes the user to have to set a new password the next time he or she logs in. For example, the next time the user marvin logged in, if -d 0 had been set, marvin would be prompted for a new password as follows:

login: marvin
Password: ********
You are required to change your password immediately (root enforced)
Changing password for marvin.
(current) UNIX password:
New UNIX password: *********
Retype new UNIX password: *********