UNIX One-Liners: Some Useful Shell Programs

My Favorite One-Liners: Some Useful Shell Programs. But first things first: this isn't originally my list of favorite shell programs (though over the years it has become exactly that). What I mean is that this isn't an article I wrote and posted here; it's a piece from the now-defunct Sysadmin Magazine, and you'll be reading a lot of reprints from it on this site.

“Keep It Simple, SysAdmin,” or perhaps just “Keep It Shell Script,” is a motto every sysadmin knows: they automate, they script, they relax.

Too many system administrators, especially former or frustrated programmers, get involved and distracted creating ad hoc programs that grow until they have a life and a following of their own. These programs generally are written in C, awk, or C shell, have no documentation, and are incomprehensible, even to the author, less than two weeks after being finished.

While it’s no cure-all for bad programming practices, I prefer to write simple shell scripts for system administration work. How simple are they? Simple enough that their inner workings should be almost obvious; tricky programs are fine for impressing your friends but hard to understand when the system has to be restored at 3:00 A.M. And of course, the C shell is so much more elegant, but not every system has one, which means that C shell programs aren’t portable. What price elegance?

My own pride is in writing programs that can be easily understood, even if they aren't perfect. In fact, I'm sure I will get at least five pieces of email for each of these programs, showing how they can be improved. If you like being tricky, find ingenious things to do with your programs after you write them. If these programs are beyond you, don't despair; just don't fail to read the manual pages for the shell and every command you don't understand. Meanwhile, they don't get much simpler than Tail:

for i in $*
do
    echo "=========" $i "=========="
    tail $i
done

Tail emulates the behavior of the head command when presented with multiple files, instead of the binary tail’s default behavior of just showing you the tail of one file. The simple for-do-done loop executes the echo and tail commands repeatedly on whatever files you specify on the command line.
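
A quick illustration, with hypothetical log files:

$ Tail /var/adm/messages /var/adm/syslog
========= /var/adm/messages ==========
(last ten lines of messages appear here)
========= /var/adm/syslog ==========
(last ten lines of syslog appear here)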

Take Four, They’re Small

While we’re on incredibly simple programs, how many times have you had to double-space a file and forgotten how? It happened to me every time, until I wrote double:

pr -dt $*
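
For example, to send a double-spaced copy of a file to the printer (filename hypothetical):

$ double notes.txt | lpr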

See? Most of the work is just picking a good name for the program. In the same vein, here’s another self-documenting program called crtolf:

tr "\015" "\012"

Here, I had to surround the arguments to the tr program with double quotes to prevent the shell from hiccupping on the backslash, which serves to indicate to tr that the argument is an octal constant (in this case, the octal representations of the carriage return and linefeed characters).
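
Since tr reads standard input, crtolf works as a filter; for example, to convert a carriage-return-delimited file (filenames hypothetical):

$ crtolf < mac-file.txt > unix-file.txt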

I’m sure you’d sometimes like to know which files are taking up the most space in a particular overstuffed directory, so here’s largest:

ls -l $* | sort -r -n +4 -5

Since the ls -l command shows the file size in the fifth whitespace-delimited column, the arguments +4 -5 tell sort that’s the column you want to look at. The -n flag tells sort to consider the full numerical value of the numbers in that column. Without this precaution, the single digit 9 would be sorted after 1111111, since 1 is lower in the ASCII collating sequence. Finally, the -r flag reminds sort to present the lines in reverse sequence, since it’s the largest numbers we’re most interested in here.
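
Incidentally, the +4 -5 key notation belongs to older versions of sort; on a modern POSIX system the same program would be spelled with the -k flag instead (a sketch, not from the original article):

ls -l $* | sort -r -n -k 5,5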

And how about a program to help you create these little one-line programs?

if test -f $1
then echo $0: cannot overwrite existing file $1
     exit 1
fi
cat > $1 ; chmod 755 $1
exit 0

To use mkcmd, above, simply type mkcmd filename, then the shell program you want to enter, followed by a single Control-D. The program does the rest.
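
A hypothetical session, recreating the double program from earlier:

$ mkcmd double
pr -dt $*
(type Control-D here)
$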

Testing, Testing

Mkcmd introduces rudimentary error checking to these short programs. In cases like these, some error checking is necessary, but more might be overkill for what’s essentially a one-line program. Why didn’t I check to make sure that at least the filename argument is typed on the command line? Leave out the filename, and the test program itself will complain:

mkcmd: test: argument expected

Clearly, this is minimalist thinking. But it alerts you to the problem, so why bother getting much fancier?
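
For the curious, the fancier check would only cost a few more lines anyway; a sketch of such a usage test (not part of the original program):

if [ $# -lt 1 ]
then echo "usage: $0 filename" >&2
     exit 1
fi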

Testing can be done for other reasons than error checking. Sometimes you want to perform a function depending on what tty line a user is logging in on, or what system they’re on. The program fragment in Listing 1 from the end of a Korn shell .profile is a good illustration. The first section grabs the name of the system the .profile is running on (by running the uname command), and stuffs it into the SYS variable. That string is then used, together with the value of PWD (kept up-to-date by the Korn shell), to make the shell prompt reflect the system, current directory, and command number that the user is working with. When you set a shell variable, you simply name it; when you use its actual value, you have to prefix it with a $ character.
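
Listing 1 itself didn't survive this reposting, but based on that description, its first section might have looked roughly like this sketch (assuming the Korn shell's usual expansion of PS1, where ! becomes the command number):

SYS=`uname -n`
PS1='$SYS:$PWD [!] '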

But wait, there’s more! A similar technique extracts the actual name of the tty line, using sed to remove its /dev/ prefix (note the use of semicolons as delimiters, since the normal slashes would prove confusing here). The case statement is then used to select a specific series of commands as the final step in the login procedure.

Note the special handling of tty1A, to which a modem is connected for dialins from a laptop computer. This section cuts the normal 25-line display down to 24 (for compatibility with a larger number of terminal programs), and starts a program that will automatically log out the user (thus regaining the port) after five minutes of inactivity. A subsequent section handles a tty name beginning with the string ttyp (a pseudo tty which signifies rlogin or telnet from another machine on the network) by displaying an appropriate message as a reminder. This is also an example of recycling the use of the SYS variable already created. And not a single line of this has to be changed to use it on another computer (except perhaps the hardware-specific tty numbers)!
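
Again, this is a reconstruction rather than the real Listing 1, but the tty section just described might have been shaped something like this (the screen-resize and idle-logout commands are placeholders):

TTY=`tty | sed 's;/dev/;;'`
case $TTY in
tty1A)  stty rows 24      # placeholder: trim the display to 24 lines
        idleout 5 &       # placeholder: auto-logout after 5 idle minutes
        ;;
ttyp*)  echo "Remember, you are logged in to $SYS over the network"
        ;;
esac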

I’m a Back-Quote Man

If you’re not conversant with the back quotes (accent grave) used in Listing 1, read the manual page for your favorite shell carefully. You can usually find the details in a section titled “Command Substitution.” Surrounding a command by back quotes effectively puts the results of running that command on the command line, as if you had typed it yourself.
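
A trivial example (the date shown is just an illustration):

$ echo Today is `date`
Today is Sat Mar 2 09:14:07 EST 1996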

This little one-liner almost breaks all my own rules, because it’s not immediately apparent what it’s good for, but it’s neat and a good illustration of how useful back quotes can be. It’s called kind:

file * | grep $1 | sed 's/:.*$//'

Kind looks for a specific type of file (that you supply on the command line) using the file command, then selects the type you want with grep and removes file’s extraneous comments with sed. You generally use kind, along with back quotes, when you want a command to operate only on, say, text files:

$ cd /usr/local/bin
$ more `kind text`

These two lines let you look at all your locally produced shell programs (like this one), while not cluttering up your screen with garbage from binary executable files.

While we’re on back quotes, let’s try another one-liner. I call this one lsbin:

/bin/ls -1 `echo $PATH | tr ":" " "`

Lsbin dynamically reads your current PATH environment variable, translates the colons into spaces, then feeds the result to ls as a simple list of directories. The final result is a list of all the programs in your PATH, which you can then run through sort, grep, or whatever you wish.
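
For instance, to get a rough count of every command you can run, or to see whether anything zip-related lives in your PATH (the pattern is hypothetical):

$ lsbin | wc -l
$ lsbin | grep -i zip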

The More the Merrier

Now let’s combine back quotes, test, and if-then-else. The program /usr/local/colors, referred to in the .profile in Listing 1, provides an easy-to-follow example:

SYSTEM=`uname -n`
if [ $SYSTEM = "tegu" ]
then
    setcolor yellow brown
    setcolor -r yellow red
    setcolor -g yellow lt_magenta
else
    setcolor yellow blue
    setcolor -r red lt_magenta
    setcolor -g cyan lt_blue
fi

Here I'm simply picking one color scheme or another, so I can instantly tell which system I'm working with on a crowded X Window display (note: neither the author nor the publisher is responsible for eye or brain damage resulting from your use of these colors). I thought the “bracket” version of the test command looked better here; it works exactly the same as using the word test.
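
In other words, these two lines are interchangeable; the only catch with the bracket form is that it requires the closing ] as its final argument:

test -f /etc/passwd
[ -f /etc/passwd ]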

The following program, which I call arcmail, is a little bit more complicated, but illustrates more of the ways you can combine these simple programming techniques, yet still come up with a readable program. The program moves old email correspondence into a “holding directory,” then, if the mail is not examined within 10 days, puts it into a compressed archive file. Arcmail is run every night by cron:

OLDMAIL="/ips/david/Mail/old-mail"
cd $OLDMAIL
FILES=`find . -mtime +190 -print`
if [ "$FILES" ]
then /usr/lbin/arc m Oldmail.arc $FILES
fi
cd ..
FILES=`find . -mtime +180 -print`
if [ "$FILES" ]
then mv $FILES $OLDMAIL
fi

The first thing I do is define the location of the holding directory. This makes it much easier to edit the program later (without causing inadvertent errors) if the particular directory you’re using happens to change.

Lines 2 and 3 establish the holding directory as a starting point, by cding to it, then running a find in back quotes to determine which, if any, files there meet the age criteria (6 months old, plus 10 days). The (bracketed) test on line 4 is neat: if no filenames were found, the test fails and we skip to the next routine. Otherwise, those same names are used as a command line parameter (on line 5) to the arc command, where the files are added into the archive.

In either case, line 7 brings us up one directory, to the place where current mail is stored. Using back quotes, we again store filenames matching our criteria into a variable called FILES. Notice, though, that the number of days is now exactly 180. If any files 180 days old (or more) are found, they’re moved to the holding directory OLDMAIL.

The program had to be written “backwards” this way because the find command checks the files in the current directory — and also in all subdirectories. Since the holding directory was a subdirectory of the main mail directory, those older files had to be removed first (which was effectively accomplished by archiving them). Finishing the program by moving the 180-day files to the holding directory ensures that that’s where they’ll be in 10 more days, when it will be their turn to be archived.
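
As an aside (this isn't in the original article), many versions of find can exclude the holding directory explicitly with -prune, which avoids the ordering problem altogether; a sketch:

find . -name old-mail -prune -o -mtime +180 -print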

Don’t Stay Lost

Every system administrator runs fsck, and every once in a while you discover, to your horror, that you are wasting a couple of precious megabytes of disk space on files that you didn't even know you had. To clean out your lost+found directories on a regular basis, simply run checklost as often as you deem advisable, via cron, and mail the results to yourself:

for i in / /usr/spool /ips
do
    ls -lta $i/lost+found
    echo That was the $i file system
    echo "---------------------------"
done

Naturally, replace my file system names with yours in the first line. Don’t wipe all the files out by reflex, either; take a look at them and see if they look useful. I once found a header file — needed to rebuild the kernel — in a lost+found directory after a crash!
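
A hypothetical crontab entry (path and schedule assumed) that runs checklost early every Monday and mails the results to root:

0 5 * * 1 /usr/local/bin/checklost 2>&1 | mail root

Strictly speaking, cron usually mails any output to the job's owner on its own, so even the explicit pipe to mail is optional.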

About the Author

David Fiedler is a writer and consultant specializing in UNIX and networking topics. He was the co-author of the first book on UNIX system administration, and can be reached at david@infopro.com.
