More UNIX History

June 8, 2009

In my last post, I forgot to mention one useful resource for those who are interested in the history of UNIX and its offspring, and their relationship to Linux.  Dr. Peter Salus has written a history of the development of free/open source software, starting with its roots in UNIX and tracing development through the various projects that were its offshoots.  He describes it this way:

My aim is to show how the advent of the computer and the Internet have given rise to the expansion of the academic/scholarly notions of sharing, and how this in turn has brought us free and open software, which will bring about a major change in the way we do business.

This effort is more than a history of Linux, of the Free Software Foundation (FSF), the Internet, software licensing, and myriad other topics. It will contain a number of histories within it, which (I hope) will serve as an antidote to the cloud of FUD stirred up by those who fear that change will mean that their businesses will fail (certainly more a sign of lack of imagination and flexibility than of anything else).

The text of the book, The Daemon, the GNU, and the Penguin, is available online at the Groklaw site.   (Groklaw, incidentally, is a good source for material related to the legal status of open-source software and licensing.)  The book is available in softcover, published by Reed Media Services, ISBN 097903423X.

There is also a UNIX “family tree” graphic (mentioned in the Computerworld article) that shows the relationships between the various offshoots of UNIX:

Credit: Eraserhead1 (licensed under Creative Commons Attribution ShareAlike 3.0 and GNU Free Documentation License)

(Click here for a larger version of the graphic from Wikimedia.)

Update at 23:45 Monday

If anyone is interested in the history of the MULTICS system, which was an important influence (in both positive and negative senses) on the early work on UNIX, there is a wealth of information at the Multicians site.


UNIX Turns Forty

June 8, 2009

This August will mark forty years since Ken Thompson, of Bell Labs, wrote the first version of what would become the UNIX® operating system.  He wrote that first version in assembly language for a Digital Equipment PDP-7 computer, and it took him about a month to write the system kernel, an assembler, a text editor, and a command processor (the shell).  Thompson and his colleague Dennis Ritchie continued to develop the system, with assistance from other members of the Bell Labs staff.  It turned out to be one of the more important software development projects in the still-brief history of computing.  The UNIX system, which by then had been rewritten in the C language, was described in a landmark paper by Thompson and Ritchie, presented at a symposium in October 1973 and subsequently published in the Communications of the Association for Computing Machinery [ACM], 17:7, in July 1974.  (A somewhat later version of the paper, “The UNIX Time-Sharing System,” is available here.)   The work led to Thompson and Ritchie receiving the Turing Award, the top award given by the ACM, in 1983, and the National Medal of Technology from President Clinton in 1999.

Computerworld has an article by Gary Anthes in its June 4 edition, “Unix turns 40: The past, present and future of a revolutionary OS,” which relates much of the rather complicated history of UNIX development.  I don’t intend to recite that history here; but I do want, in this and perhaps a couple of subsequent posts, to say a little bit about why, in my view, UNIX has had such a significant impact on the development of IT.

One of the key things that differentiated UNIX from other operating systems was its simplicity.  Bell Labs had been a partner, along with MIT and the (then) computer division of General Electric, in Project MULTICS.  The project, which began in 1964, was an attempt to design and build a time-sharing operating system from the ground up, incorporating new ideas about multi-processing, modularity, high availability, and security.  Although it was in part a research system, the ultimate aim was to have a commercial product that would be sold by GE; in fact, MULTICS was offered as a product by Honeywell, who took over GE’s computer business in 1970.  It was never a great commercial success, although it was used by the National Security Agency and other influential sites.

MULTICS, which was an acronym for Multiplexed Information and Computing Service, pioneered some ideas (such as a hierarchical file system, and file access controls) that later became commonplace, but its initial implementations were large and slow, at least by the standards of the day.  (Some suggested that MULTICS really stood for Many Unnecessarily Large Tables In Core Simultaneously.)*   Bell Labs withdrew from the project early in 1969.  UNIX (originally spelled ‘Unics’, as a sort of pun on MULTICS) was the Bell Labs staffers’ reaction to what they perceived as the excessive complexity of MULTICS.  They were interested in creating an environment that was congenial for software development.  At the beginning, there was no thought given to making UNIX a commercial product; in fact, by the terms of a 1956 consent decree, AT&T (the parent of Bell Labs) was prohibited from entering the software business.  Nonetheless, UNIX gained popularity among academic and scientific users.  In The UNIX Programming Environment (Prentice Hall: 1984), Brian Kernighan and Rob Pike put it this way:

Why did it become popular in the first place?  The central factor is that it was designed and built by a small number (two) of exceptionally talented people, whose sole purpose was to create an environment that would be convenient for program development, and who had the freedom to pursue that ideal.

As anyone who has used UNIX will attest, there is very definitely a “UNIX Way” of doing things, a central philosophy whose consistency probably came from the fact that the system was built by a very small group.  It was, at the beginning, definitely not a “me too” approach; I can remember my first encounter with UNIX in the mid-1980s, when I had been developing software in an IBM mainframe environment (System/360 and System/370) for more than ten years.  Some features of the system seemed very strange; yet I found, as I got more accustomed to the system, that they often made a great deal of sense.  For example, in the mainframe operating systems, there were careful distinctions drawn between different types of files: fixed- vs. variable-length records, for example, or text vs. binary files.  In UNIX, any file is just a stream of bytes.  At first, this seemed like a grotesque over-simplification.  The “AHA!” moment came later, when I realized that this approach is what makes it so easy to use the output of one program as the input to another.
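To make that concrete, here is a minimal sketch, written for illustration rather than taken from any actual UNIX or shell source, of roughly what a shell does when you type something like ls | wc -l.  It creates a pipe and wires one program’s byte-stream output to the other program’s byte-stream input; neither program has to know anything about the other.

/* Sketch: connect "ls" to "wc -l" with a pipe, the way a shell might.
 * This is my own simplified illustration, not code from UNIX itself. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                        /* fd[0]: read end, fd[1]: write end */

    if (pipe(fd) == -1) {
        perror("pipe");
        exit(EXIT_FAILURE);
    }

    if (fork() == 0) {                /* first child: the producer ("ls")  */
        dup2(fd[1], STDOUT_FILENO);   /* its stdout now goes into the pipe */
        close(fd[0]);
        close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        perror("execlp ls");
        exit(EXIT_FAILURE);
    }

    if (fork() == 0) {                /* second child: the consumer ("wc") */
        dup2(fd[0], STDIN_FILENO);    /* its stdin now comes from the pipe */
        close(fd[0]);
        close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        perror("execlp wc");
        exit(EXIT_FAILURE);
    }

    close(fd[0]);                     /* parent: close both ends and wait  */
    close(fd[1]);
    wait(NULL);
    wait(NULL);
    return 0;
}

The shell’s job here is mostly plumbing.  Because a pipe, like a file, is just a stream of bytes, any program that reads standard input and writes standard output can be composed with any other, which is exactly the property that made the byte-stream model seem less like an over-simplification and more like a stroke of genius.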

Today, UNIX has many descendants and other relations: the commercial varieties, such as Sun’s Solaris or IBM’s AIX; the BSD UNIX derivatives, such as FreeBSD, OpenBSD, and Mac OS X; and work-alike systems, notably Linux.  Many features that originated in UNIX (such as pipes and I/O redirection) have been adopted by many other systems.  And the philosophy has gained a foothold in a significant segment of the development world; Eric Raymond’s book, The Art of UNIX Programming, is a good example.  I’ll talk about some more specific features of that philosophy in a later post.

*For younger readers: “core” was a term used back in the days of stone knives and bearskins to refer to what we now usually call RAM.  It was so called because, at one time, it was actually made of small, doughnut-shaped pieces of magnetic material (the “cores”) strung on a three-dimensional wire lattice.

