How Unix differs from Linux

master of syllable March 19, 2011 at 11:16 pm

How does Linux differ from UNIX, and what is a UNIX-like OS?

UNIX
UNIX (not to be confused with the broader notion of a "UNIX-like operating system") is a family of operating systems from which descendants such as Mac OS X and GNU/Linux grew.
The first UNIX system was developed in 1969 at Bell Laboratories, a division of the American corporation AT&T.

Distinctive features of UNIX:

  1. Simple system configuration using plain, usually text, files.
  2. Extensive use of the command line.
  3. Extensive use of pipelines (pipes).
Today UNIX is used mainly on servers and as an embedded system for hardware.
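As a toy illustration of the first point above, UNIX-style configuration usually lives in simple text files that any program can parse in a few lines. The file contents and key names here are invented for the example:

```python
# Hypothetical example: parse a simple key=value configuration file,
# the style many UNIX programs favor (names invented for illustration)
config_text = """# lines starting with '#' are comments
hostname=example
port=22
"""

config = {}
for line in config_text.splitlines():
    line = line.strip()
    if not line or line.startswith("#"):
        continue  # skip blank lines and comments
    key, _, value = line.partition("=")
    config[key.strip()] = value.strip()

print(config)  # {'hostname': 'example', 'port': '22'}
```

The point is that such files can be read and edited with nothing more than a text editor, which is exactly why this style of configuration has survived for decades.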
The historical importance of UNIX systems is hard to overstate: they are recognized as among the most influential operating systems ever made, and the C language was created in the course of their development.

UNIX variants by year

UNIX-like OS
A UNIX-like OS (sometimes abbreviated *nix) is a system formed under the influence of UNIX.

The word UNIX is used as a mark of conformity and as a trademark.

The Open Group consortium owns the "UNIX" trademark and is best known as the certifying authority for it: it publishes the Single UNIX Specification, the set of standards an operating system must meet in order to be proudly called UNIX.

You can take a look at the family tree of UNIX-like operating systems.

Linux
Linux is the general name for UNIX-like operating systems developed within the framework of the GNU project (an open-source software development project). Linux runs on a huge variety of processor architectures, from ARM to Intel x86.

The most famous and widespread distributions are Arch Linux, CentOS, and Debian. There are also many "domestic" Russian distributions, such as ALT Linux and ASPLinux.

There is quite a bit of controversy about the naming of GNU/Linux.
Supporters of "open source" use the term "Linux", while supporters of "free software" use "GNU/Linux". I prefer the first option. Sometimes, for convenience, the term GNU/Linux is written as "GNU+Linux", "GNU-Linux", or "GNU Linux".

Unlike commercial systems (MS Windows, Mac OS X), Linux has no geographical development center and no single organization that owns the system. The system itself and the programs for it are the result of the work of huge communities and thousands of projects. Anyone can join a project or create their own!

Conclusion
Thus, we have traced the chain: UNIX -> UNIX-like OS -> Linux.

To summarize, the difference between Linux and UNIX is clear: UNIX is the much broader concept, the foundation on which all UNIX-like systems are built and certified, while Linux is a particular case of a UNIX-like system.

Tags: unix, linux, nix

The UNIX operating system, the progenitor of many modern operating systems such as Linux, Android, and Mac OS X, was created within the walls of the Bell Labs research center, a division of AT&T. Generally speaking, Bell Labs was a breeding ground for scientists whose discoveries literally changed technology. For example, it was at Bell Labs that William Shockley, John Bardeen, and Walter Brattain created the first bipolar transistor in 1947. One could say the laser was invented at Bell Labs too, although masers had already been created by that time. Claude Shannon, the founder of information theory, also worked there. So did the creators of the C language, Ken Thompson and Dennis Ritchie (we will meet them again shortly), and the author of C++, Bjarne Stroustrup, works there as well.

On the way to UNIX

Before we talk about UNIX itself, let's recall the operating systems that were created before it, which largely defined what UNIX became, and through it, many other modern operating systems.

The development of UNIX was not Bell Labs' first work in the field of operating systems. In 1957, the laboratory began developing an operating system called BESYS (short for Bell Operating System). The project manager was Victor Vyssotsky, the son of a Russian astronomer who had emigrated to America. BESYS was an internal project that was never released as a commercial product, although it was distributed on magnetic tape to anyone who asked. The system was designed to run on IBM 704 and 709x-series machines (IBM 7090, 7094) - machines that today seem positively antediluvian, but which we will keep calling computers.

IBM 704

First of all, BESYS was intended for batch execution of large numbers of programs: a list of programs is submitted, and their execution is scheduled to occupy as many resources as possible so that the computer never stands idle. At the same time, BESYS already had the beginnings of time sharing - in essence, what is now called multitasking. When complete time-sharing systems later appeared, this capability let several people work with one computer at the same time, each from their own terminal.

In 1964, Bell Labs upgraded its computers, and BESYS could no longer run on the new IBM machines; portability was out of the question at that time, and IBM then supplied its computers without operating systems. The Bell Labs developers could have started writing a new operating system from scratch, but they went another way: they joined the development of the Multics operating system.

The Multics project (short for Multiplexed Information and Computing Service) was proposed by MIT professor Jack Dennis. In 1963, he and his students developed a specification for a new operating system and managed to interest representatives of General Electric in the project. As a result, Bell Labs joined MIT and General Electric in developing a new operating system.

And the ideas behind the project were very ambitious. First, it was to be an operating system with full time sharing. Second, Multics was written not in assembly language but in PL/1, one of the first high-level languages, developed in 1964. Third, Multics could run on multiprocessor computers. The system also had a hierarchical file system, file names could contain any characters and be quite long, and the file system supported symbolic links to directories.

Unfortunately, work on Multics dragged on for a long time; the Bell Labs programmers never saw the product released and left the project in April 1969. The release took place in October of that same year, but the first version is said to have been terribly buggy, and for another year the remaining developers fixed the bugs that users reported, although a year later Multics was already a fairly reliable system.

Development of Multics nevertheless continued for a long time; the last release, version 12.5, came out in 1992. But that is a different story - what matters here is that Multics had a huge influence on the future UNIX.

Birth of UNIX

UNIX appeared almost by accident, and the computer game "Space Travel," a space-flight game written by Ken Thompson, was to blame. Back in 1969, "Space Travel" was first written for that same Multics operating system, and after Bell Labs lost access to new versions of Multics, Ken rewrote the game in Fortran and ported it to the GECOS operating system that shipped with the GE-635 computer. But two problems crept in: first, this computer did not have a very good display system, and second, playing on it was expensive - something like $50-75 per hour.

But one day Ken Thompson came across a rarely used DEC PDP-7 computer that was well suited to running Space Travel, and it also had a better display processor.

Ken Thompson

This time the developers did not (yet) experiment with high-level languages: the first version of Unics was written in assembly language. Thompson himself, Dennis Ritchie, and later Douglas McIlroy, Joe Ossanna, and Rudd Canaday took part in developing Unics. Kernighan, who proposed the name of the OS, at first provided only moral support.

A little later, in 1970, when multitasking was implemented, the operating system was renamed UNIX, and the name was no longer treated as an abbreviation. This year is considered the official year of UNIX's birth, and system time is counted from January 1, 1970 (as the number of seconds since that date). The same date is known, more grandly, as the beginning of the UNIX Epoch. Remember how we were all scared by the Y2K problem? A similar problem awaits us in 2038, when the 32-bit signed integers often used to store the date will no longer suffice to represent the time, and dates will become negative. One would like to believe that by then all vital software will use 64-bit variables for this purpose, pushing this dreaded date back by roughly another 292 billion years - and after that we'll think of something.
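The arithmetic behind the 2038 problem is easy to check. Here is a minimal sketch in Python (used purely as a calculator; real systems store this counter in a C time_t):

```python
from datetime import datetime, timezone

# The largest second count a signed 32-bit integer can hold
max32 = 2**31 - 1
print(datetime.fromtimestamp(max32, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00 - the last representable moment

# One second later the 32-bit value wraps around to the most
# negative number, i.e. a date long before 1970
wrapped = (max32 + 1) - 2**32
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))
# 1901-12-13 20:45:52+00:00

# A signed 64-bit counter, by contrast, lasts for hundreds of
# billions of years
years = (2**63 - 1) // (86400 * 365)
print(years // 10**9, "billion years")  # 292 billion years
```

So the famous overflow moment is 03:14:07 UTC on January 19, 2038, one second after which a 32-bit clock jumps back to December 1901.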

By 1971, UNIX was already a full-fledged operating system, and Bell Labs even filed a claim for the UNIX trademark. In the same year, UNIX was rewritten for the more powerful PDP-11 computer, and the first official version of UNIX (also called First Edition) was released.

In parallel with the development of Unics/UNIX, starting in 1969, Ken Thompson and Dennis Ritchie developed a new language, B, based on the BCPL language, which in turn can be considered a descendant of Algol-60. Ritchie proposed rewriting UNIX in B - the language was interpreted, but portable - and he kept modifying it to suit new needs. In 1972, the second version of UNIX, Second Edition, was released, written almost entirely in B; only a fairly small module of about 1,000 lines remained in assembler, so porting UNIX to other computers became relatively easy. This is how UNIX became portable.

Ken Thompson and Dennis Ritchie

The B language then developed alongside UNIX until it gave birth to C, one of the most famous programming languages, which nowadays is either run down or exalted as an ideal. In 1973, the third edition of UNIX was released with a built-in C compiler, and starting with the fifth version, released in 1974, UNIX is considered to have been completely rewritten in C. Incidentally, it was also in UNIX, in 1973, that the concept of pipes appeared.
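A pipe connects the standard output of one program to the standard input of the next. A minimal sketch of the idea using Python's subprocess module (the chained commands, echo and wc, are ordinary UNIX utilities; this mirrors the shell pipeline `echo "one two three" | wc -w`):

```python
import subprocess

# Equivalent of the shell pipeline: echo "one two three" | wc -w
p1 = subprocess.Popen(["echo", "one two three"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["wc", "-w"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits early
out, _ = p2.communicate()
print(out.decode().strip())  # 3
```

The beauty of the mechanism is that neither program knows about the other: each just reads stdin and writes stdout, and the kernel moves the bytes between them.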

Beginning in 1974-1975, UNIX began to spread beyond Bell Labs. Thompson and Ritchie published a paper on UNIX in Communications of the ACM, and AT&T provided UNIX to educational institutions as a teaching tool. In 1976, what can be called the first port of UNIX to another system took place - to the Interdata 8/32 computer. In addition, in 1975 the sixth version of UNIX was released, from which various independent implementations of this operating system began to appear.

The UNIX operating system turned out to be so successful that, starting in the late 70s, other developers began to make similar systems. Let's now switch from the original UNIX to its clones and see what other operating systems have appeared thanks to it.

The emergence of BSD

The proliferation of this operating system was, oddly enough, largely facilitated by American officials, who had imposed restrictions on AT&T, the owner of Bell Labs, back in 1956, well before the birth of UNIX. At the time, the Department of Justice forced AT&T to sign an agreement prohibiting the company from engaging in activities unrelated to telephone and telegraph networks and equipment. By the 1970s, however, AT&T had realized what a successful project UNIX had turned out to be and wanted to commercialize it, and in order for officials to allow this, AT&T transferred the UNIX source code to certain American universities.

One of the universities with access to the source code was the University of California at Berkeley, and when you have someone else's source code, the urge to fix something in the program for yourself arises involuntarily - especially since the license did not prohibit it. Thus, a few years later (in 1978), the first UNIX-compatible system created outside the walls of AT&T appeared: BSD UNIX.

University of California at Berkeley

BSD stands for Berkeley Software Distribution, a system for distributing programs in source code under a very permissive license. The BSD license was created precisely for distributing the new UNIX-compatible system. It allows the reuse of source code distributed under it and, unlike the GPL (which did not yet exist), imposes no restrictions on derivative programs. It is also very short and does not pile up tedious legal terms.

The first version of BSD (1BSD) was more an add-on to the original UNIX version 6 than a standalone system: 1BSD added a Pascal compiler and the ex text editor. The second version of BSD, released in 1979, included such famous programs as vi and the C shell.

After BSD UNIX appeared, the number of UNIX-compatible systems began to grow incredibly fast. Separate branches of operating systems split off from BSD UNIX itself, different systems exchanged code with one another, and the intertwining became quite confusing, so from here on we will not dwell on every version of every UNIX system but will look at how the most famous of them appeared.

Perhaps the most famous direct descendants of BSD UNIX are the operating systems FreeBSD, OpenBSD and, to a slightly lesser extent, NetBSD. All of them descend from 386BSD, released in 1992. 386BSD, as the name suggests, was a port of BSD UNIX to the Intel 80386 processor, also created by graduates of the University of California at Berkeley. The authors believed the UNIX source code received from AT&T had been modified enough that the AT&T license no longer applied; AT&T itself thought otherwise, so lawsuits sprang up around the operating system. Judging by the fact that 386BSD went on to become the parent of many other operating systems, everything ended well for it.

The FreeBSD project (at first it had no name of its own) appeared as a set of patches for 386BSD; however, for whatever reason those patches were not accepted, and when it became clear that 386BSD would no longer be developed, in 1993 the project turned toward creating a full-fledged operating system of its own, called FreeBSD.

Beastie. FreeBSD Mascot

At the same time, the 386BSD developers themselves started a new project, NetBSD, from which OpenBSD in turn branched off. As you can see, it makes for a rather sprawling tree of operating systems. The goal of the NetBSD project was to create a UNIX system that could run on as many architectures as possible, that is, to achieve maximum portability; even NetBSD drivers must be cross-platform.

NetBSD logo

Solaris

However, the first system to branch off from BSD was SunOS, the brainchild - as the name suggests - of Sun Microsystems, a company which sadly no longer exists. This happened in 1983. SunOS was the operating system shipped with the computers Sun itself built. Strictly speaking, a year earlier, in 1982, Sun had released the Sun UNIX operating system, based on the Unisoft UNIX v7 code base (Unisoft, founded in 1981, was a company that ported UNIX to various hardware), but SunOS 1.0 was based on the 4.1BSD code. SunOS was updated regularly until 1994, when version 4.1.4 was released, and was then renamed Solaris 2. Where did the two come from? The story is a little confusing, because the name Solaris was first applied to SunOS versions 4.1.1 - 4.1.4, developed from 1990 to 1994. Think of it as a rebranding that only took hold starting with Solaris 2. Then, until 1997, Solaris 2.1, 2.2 and so on up to 2.6 were released, and in 1998, instead of Solaris 2.7, simply Solaris 7 came out; from then on only that number grew. At the moment, the latest version of Solaris is 11, released on November 9, 2011.

OpenSolaris logo

The history of Solaris itself is also rather complicated: until 2005, Solaris was an entirely commercial operating system, but in 2005 Sun decided to open part of the Solaris 10 source code and create the OpenSolaris project. In addition, while Sun was alive, Solaris 10 could be used for free, with official technical support available for purchase. Then, in early 2010, after Oracle acquired Sun, it made Solaris 10 a paid system. Fortunately, Oracle has not yet managed to ruin OpenSolaris.

Linux. Where would we be without it?

And now it is time to talk about the most famous of the UNIX-like implementations - Linux. The history of Linux is remarkable in that three interesting projects came together in it. But before we talk about the creator of Linux, Linus Torvalds, we need to mention two other programmers: Andrew Tanenbaum, who unknowingly pushed Linus to create Linux, and Richard Stallman, whose tools Linus used when creating his operating system.

Andrew Tanenbaum is a professor at the Vrije Universiteit Amsterdam, primarily involved in operating systems research. Together with Albert Woodhull, he wrote the well-known book "Operating Systems: Design and Implementation," which inspired Torvalds to write Linux. The book discusses a UNIX-like system called Minix. Unfortunately, for a long time Tanenbaum viewed Minix only as a vehicle for teaching operating system design, not as a full-fledged working OS. The Minix source code carried a rather restrictive license: you could study the code, but you could not distribute modified versions of Minix, and for a long time the author himself was unwilling to apply the patches people sent him.

Andrew Tanenbaum

The first version of Minix was released along with the first edition of the book in 1987; the second and third versions of Minix were published along with the corresponding editions of the book. The third version of Minix, released in 2005, can already be used both as a stand-alone operating system for a computer (there are LiveCD versions of Minix that require no hard-drive installation) and as an embedded operating system for microcontrollers. The latest version, Minix 3.2.0, was released in July 2011.

Now let's turn to Richard Stallman. Lately he has come to be perceived merely as an advocate for free software, although many now-famous programs owe their existence to him, and at one point his project made Torvalds' life much easier. The most interesting thing is that Linus and Richard approached the creation of an operating system from different ends, and in the end the projects merged into GNU/Linux. Here we need to explain what GNU is and where it came from.

Richard Stallman

One could talk about Stallman at length - for example, he graduated with honors in physics from Harvard University. Stallman then worked at MIT, where in the 1970s he began writing his famous EMACS editor. The editor's source code was available to everyone, which was nothing unusual at MIT, where for a long time there reigned a kind of friendly anarchy, or, as Steven Levy, author of the wonderful book "Hackers: Heroes of the Computer Revolution," called it, the "hacker ethic." But somewhat later MIT began to tighten computer security: users were given passwords, and unauthorized users could not access the machines. Stallman was strongly opposed to this practice; he wrote a program that let anyone find out any user's password, and he advocated leaving passwords blank. For example, he sent users messages like this: "I see that you have chosen the password [such and such]. I suggest you switch to the 'carriage return' password. It's much easier to type, and it is consistent with the principle that there should be no passwords here." But his efforts came to nothing. Moreover, new people arriving at MIT had already begun to worry about rights to their programs, about copyright, and about other such abominations.

Stallman later said (quoting from the same book by Levy): "I cannot believe that software should have owners. What happened sabotaged humanity as a whole. It prevented people from getting the most out of the programs." And another quote from him: "The machines began to break down, and there was no one to fix them. Nobody made the necessary changes to the software. Non-hackers reacted to this simply: they began to use purchased commercial systems, bringing fascism and licensing agreements along with them."

As a result, Richard Stallman left MIT and decided to create his own free implementation of a UNIX-compatible operating system. Thus, on September 27, 1983, the GNU project appeared, its name a recursive acronym for "GNU's Not UNIX." The first GNU program was EMACS. Within the GNU project, its own license, the GNU GPL (GNU General Public License), was developed in 1988; it obliges authors of programs based on GPL-licensed source code to release their own source code under the GPL as well.

Up to 1990, various software for the future operating system was written within GNU (not only by Stallman), but the OS still lacked a kernel of its own. Work on a kernel began only in 1990, in a project called GNU Hurd, but it never took off; its most recent version was released in 2009. Linux, however, did take off - and we have finally come to it.

And here the young Finn Linus Torvalds enters the stage. While studying at the University of Helsinki, Linus took a course on the C language and the UNIX system, and in preparation for it he bought that very book by Tanenbaum describing Minix. Minix itself had to be purchased separately on 16 floppy disks, and it cost $169 at the time (alas, there was no Gorbushka market in Finland back then, but what can you do). In addition, Torvalds had to buy, on credit, a $3,500 computer with an 80386 processor, because before that he only had an old machine with a 68008 processor on which Minix could not run (fortunately, once he had released the first version of Linux, grateful users chipped in and paid off his computer loan).

Linus Torvalds

Although Torvalds generally liked Minix, he gradually came to see its limitations and shortcomings. He was especially irritated by the terminal-emulation program that came with the operating system. So he decided to write his own terminal emulator, and along the way to figure out how the 386 processor worked. Torvalds wrote the emulator at a low level, starting from boot via the BIOS; gradually the emulator acquired new capabilities, then, in order to download files, Linus had to write a driver for the disk drive and a file system - and off it went. This is how the Linux operating system appeared (at the time it did not yet have a name).

When the operating system began to take shape, the first program Linus ran on it was bash. It would be more accurate to say that he tweaked his operating system until bash could finally run. After that, he gradually got other programs working under it. And the operating system was not supposed to be called Linux at all. Here is a quote from Torvalds' autobiography, published under the title "Just for Fun": "In my mind I called it Linux. Honestly, I never intended to release it under the name Linux, because it seemed too immodest to me. What name did I have in mind for the final version? Freax. (Get it? Freaks are fans - plus the x at the end, from Unix.)"

On August 25, 1991, the following historic message appeared in the comp.os.minix newsgroup: "Hello to all minix users! I'm writing a (free) operating system (just a hobby - it won't be as big and professional as gnu) for 386 and 486 AT machines. I've been working on this since April, and it looks like it will be ready soon. Write to me about what you like or dislike in minix, since my OS resembles it (among other things, it has, for practical reasons, the same physical layout of the file system). So far I have ported bash (1.08) and gcc (1.40) to it, and everything seems to work. This means that in the coming months I will have something working, and I would like to know what features most people need. All suggestions are accepted, but implementation is not guaranteed."

Note that GNU and the gcc program are already mentioned here (at the time the abbreviation stood for GNU C Compiler). And remember Stallman and his GNU project, which had begun building an operating system from the other end. The merger finally happened. This is why Stallman takes offense when the operating system is called simply Linux rather than GNU/Linux: Linux is, after all, the kernel, while many of the surrounding pieces were taken from the GNU project.

On September 17, 1991, Linus Torvalds first posted his operating system, then at version 0.01, on a public FTP server. Since then, all progressive humanity has celebrated this day as the birthday of Linux, while the particularly impatient start celebrating on August 25, when Linus admitted in the newsgroup that he was writing an operating system. From there Linux development took off, and the name Linux itself stuck, because the address where the operating system was posted was ftp.funet.fi/pub/OS/Linux. The fact is that Ari Lemke, the instructor who allocated space on the server for Linus, thought Freax did not look very presentable, so he named the directory "Linux" - a blend of the author's name and the x at the end of UNIX.

Tux. Linux logo

It is also worth noting that although Torvalds wrote Linux under the influence of Minix, there is a fundamental difference between Linux and Minix from an engineering point of view. Tanenbaum is a proponent of microkernel operating systems, in which the kernel is small and provides only a minimal set of functions, while all drivers and operating system services run as separate, independent modules. Linux, by contrast, has a monolithic kernel that includes many features of the operating system, so under Linux, if you need some special feature, you may have to recompile the kernel with the corresponding changes. The microkernel architecture has the advantages of reliability and simplicity; at the same time, all else being equal, a monolithic kernel will run faster, since it does not need to exchange large amounts of data with external modules. After Linux appeared, in 1992, a heated debate broke out in comp.os.minix between Torvalds and Tanenbaum, and their supporters, over which architecture was better: microkernel or monolithic. Tanenbaum argued that microkernels were the future and that Linux was obsolete before it was even released. Almost 20 years have passed since then... Incidentally, GNU Hurd, which was supposed to become the kernel of the GNU operating system, was also designed as a microkernel.

Mobile Linux

So, since 1991 Linux has been steadily developing, and although Linux's share on ordinary users' computers is still small, it has long been popular on servers and supercomputers, where it is Windows that is trying to carve out a share. In addition, Linux has now taken a strong position on phones and tablets, because Android is also Linux.

Android logo

The history of Android began with the company Android Inc, which appeared in 2003 and was apparently developing mobile applications (what exactly the company worked on in its first years is still not widely publicized). Less than two years later, Android Inc was acquired by Google. No official details could be found about what exactly the Android Inc developers were doing before the takeover, although already in 2005, after the purchase by Google, there were rumors that they were developing a new operating system for phones. In any case, the first release of Android took place on October 22, 2008, after which new versions began to appear regularly. One feature of Android's development has been the attacks on the system over allegedly infringed patents, and the legal status of its Java implementation is unclear, but let's not go into these non-technical squabbles.

But Android is not the only mobile representative of Linux; besides it there is the MeeGo operating system. Where Android is backed by a corporation as powerful as Google, MeeGo has no single strong patron; it is developed by a community under the auspices of The Linux Foundation, with support from companies such as Intel, Nokia, AMD, Novell, ASUS, Acer, and MSI. At the moment the main help comes from Intel, which is not surprising, since the MeeGo project grew out of the Moblin project initiated by Intel. Moblin was a Linux distribution intended for portable devices built around the Intel Atom processor. Let's also mention another mobile Linux, Openmoko. Linux is trying to gain a foothold on phones and tablets fairly quickly, and Google with its Android has taken the matter seriously, but the prospects for the other mobile versions of Linux are still vague.

As you can see, Linux now runs on many systems with a wide range of processors; in the early 1990s, however, Torvalds did not believe Linux could be ported to anything beyond the 386.

Mac OS X

Now let's turn to another UNIX-compatible operating system: Mac OS X. The first versions of Mac OS, up to Mac OS 9, were not based on UNIX, so we will not dwell on them. The interesting part began after Steve Jobs was forced out of Apple in 1985 and founded NeXT, a company that developed computers and software for them. NeXT hired the programmer Avadis Tevanian, who had previously worked on the Mach microkernel for a UNIX-compatible operating system developed at Carnegie Mellon University. The Mach kernel was intended to replace the BSD UNIX kernel.

NeXT company logo

Avadis Tevanian led the team developing a new UNIX-compatible operating system called NeXTSTEP. To avoid reinventing the wheel, NeXTSTEP was based on the same Mach kernel. From a programming standpoint, NeXTSTEP, unlike many other operating systems, was object-oriented, and a huge role in it was played by the Objective-C language, now widely used in Mac OS X. The first version of NeXTSTEP was released in 1989. Although NeXTSTEP was originally designed for Motorola 68000 processors, in the early 1990s it was ported to 80386 and 80486 processors. Things were not going well for NeXT, and in 1996 Apple offered to buy the company in order to use NeXTSTEP in place of Mac OS. One could also tell of the rivalry between NeXTSTEP and BeOS, which ended in NeXTSTEP's victory, but we won't stretch an already long story; besides, BeOS has no relation to UNIX, so it is of no interest to us here, although the operating system itself was very interesting, and it is a pity its development was cut short.

A year later, when Jobs returned to Apple, the work of adapting NeXTSTEP for Apple computers continued, and a few years later the operating system had been ported to PowerPC and Intel processors. The server version, Mac OS X Server 1.0, was released in 1999, and in 2001 the end-user operating system, Mac OS X (10.0), followed.

Later, Mac OS X served as the basis for an operating system for the iPhone, which was named iOS. The first version of iOS was released in 2007; the iPad runs the same operating system.

Conclusion

After all of the above, you may be wondering what kind of operating system can be considered UNIX. There is no single answer. From a formal point of view there is the Single UNIX Specification, the standard an operating system must satisfy in order to be called UNIX. Do not confuse it with the POSIX standard, which a non-UNIX-like operating system can also follow. (Incidentally, the name POSIX was proposed by the same Richard Stallman, and formally the POSIX standard is ISO/IEC 9945.) Certification against the Single UNIX Specification is expensive and time-consuming, so few operating systems have gone through it. Those that have include Mac OS X, Solaris, SCO, and several lesser-known systems. Neither Linux nor the *BSDs are on the list, yet no one doubts their "Unix-ness". The programmer and writer Eric Raymond therefore proposed two further criteria for deciding whether a particular operating system is UNIX-like. The first is genetic: inheritance of source code from the original UNIX developed at AT&T's Bell Labs; the BSD systems fall into this category. The second is "UNIX in functionality": systems that behave much as the UNIX specifications describe but have not received a formal certificate and share no code with the original UNIX. Linux, Minix, and QNX fall here.

We'll probably stop here, otherwise this will run even longer. This review has mainly covered the history of the best-known operating systems - the BSD variants, Linux, Mac OS X, Solaris - while some UNIXes, such as QNX, Plan 9, Plan B and a few others, were left out. Who knows, maybe we'll get to them in the future.

Links

  • Hackers, heroes of the computer revolution
  • FreeBSD Manual

All pictures are taken from Wikipedia

What is Unix (for beginners)


Dmitry Y. Karpov


What am I talking about?


This opus does not pretend to be a complete description. Moreover, for the sake of simplicity some details have been deliberately omitted. The series was first conceived as a FAQ (a list of frequently asked questions), but it will apparently turn out closer to a "young soldier's course" or a "sergeant school".

I tried to give a comparative description of different operating systems - this is what, in my opinion, is lacking in most textbooks and technical manuals.

Without waiting to be exposed by experienced Unixoids, I will make a voluntary confession: I cannot claim to be a great Unix expert, and my knowledge is mainly of FreeBSD. I hope this does not get in the way.

This file will remain in the "under construction" state for a long time. :-)

What is Unix?


Unix is a full-fledged, natively multi-user, multi-tasking and multi-terminal operating system. More precisely, it is a whole family of systems that are almost completely compatible with each other at the source-code level.

What types of Unixes are there and on what machines do they run?


This list does not pretend to be complete, because in addition to those listed, there are many less common Unixes and Unix-like systems, not to mention ancient Unixes for outdated machines.

Conventionally, we can distinguish the System V and Berkeley families. System V (read "System Five") has several variants, the latest of which, as far as I know, is System V Release 4. The University of California at Berkeley is famous not only for developing BSD but also for most of the Internet protocols. However, many Unixes combine the properties of both families.

Where can I get free Unix?


  • BSD family: FreeBSD, NetBSD, OpenBSD.
  • Linux family: RedHat, SlackWare, Debian, Caldera.
  • SCO and Solaris are available free of charge for non-commercial use (mainly for educational institutions).

    What are the main differences between Unix and other OSes?


    Unix consists of a kernel with built-in drivers, plus utilities (programs external to the kernel). If you need to change the configuration (add a device, change a port or an interrupt), the kernel is rebuilt (re-linked) from object modules or (as in FreeBSD, for example) from source. /* This is not entirely true: some parameters can be changed without rebuilding, and there are also loadable kernel modules. */

    In contrast to Unix, Windows (when it is not specified which one, 3.11, 95 and NT are meant) and OS/2 actually link their drivers on the fly at boot time. At the same time, the compactness of the assembled kernel and the reuse of common code are an order of magnitude lower than in Unix. In addition, with the system configuration unchanged, the Unix kernel can be written into ROM without modification (only the starting part of the BIOS needs to be changed) and executed without being loaded into RAM. Compactness of code is especially important, because the kernel and drivers never leave physical RAM and are never swapped out to disk.

    Unix is the most multi-platform OS. Windows NT tries to imitate it, but so far without much success: after MIPS and PowerPC were abandoned, NT remained on only two platforms, the traditional i*86 and DEC Alpha. Of course, the portability of programs from one version of Unix to another is limited. A carelessly written program that ignores the differences between Unix implementations and makes unwarranted assumptions such as "an integer variable must occupy four bytes" may require serious rework. But it is still many times easier to port than, for example, to move from OS/2 to NT.

    Why Unix?


    Unix is used both as a server and as a workstation. In the server category it competes with MS Windows NT, Novell NetWare, IBM OS/2 Warp Connect, DEC VMS and mainframe operating systems. Each system has its own area of application in which it is better than the others.

  • WindowsNT is for administrators who prefer a familiar interface to economical use of resources and high performance.
  • Netware - for networks where high performance file and printer services are needed and other services are not so important. The main disadvantage is that it is difficult to run applications on a Netware server.
  • OS/2 is good where you need a "lightweight" application server. It requires fewer resources than NT, is more flexible in management (although it can be more difficult to configure), and multitasking is very good. Authorization and differentiation of access rights are not implemented at the OS level, which is more than compensated for by implementation at the server application level. (However, other OSes often do the same). Many FIDOnet stations and BBSs are based on OS/2.
  • VMS is a powerful application server, in no way inferior to Unix (and in many ways superior to it), but only for DEC's VAX and Alpha platforms.
  • Mainframes - for serving a very large number of users (on the order of several thousand). But the work of these users is usually organized not in the form of client-server interaction, but in the form of a host-terminal one. The terminal in this pair is more likely not a client, but a server (Internet World, N3 for 1996). The advantages of mainframes include higher security and resistance to failures, and the disadvantages are the price corresponding to these qualities.

    Unix is good for a qualified administrator (or one willing to become qualified), because it requires knowledge of the principles by which the processes inside it operate. Real multitasking and strict memory protection ensure high reliability of the system, although Unix's file and print services remain inferior to NetWare's in performance.

    The lack of flexibility in granting users access rights to files, compared to Windows NT, makes it harder to organize group access to data (more precisely, to files) _at_the_file_system_ level, but this, in my opinion, is compensated by the simplicity of implementation and hence lower hardware requirements. Moreover, applications such as an SQL server solve the problem of group data access on their own, so the ability, missing in Unix, to deny a specific user access to a particular _file_ seems to me clearly redundant.

    Almost all the protocols on which the Internet is based were developed under Unix; in particular, the canonical implementation of the TCP/IP protocol stack was created at the University of California at Berkeley.

    Unix's security when properly administered (and when is it not?) is in no way inferior to either Novell or WindowsNT.

    An important property of Unix, which brings it closer to mainframes, is its multi-terminal nature; many users can simultaneously run programs on one Unix machine. If you do not need to use graphics, you can get by with cheap text terminals (specialized or based on cheap PCs) connected over slow lines. In this, only VMS competes with it. You can also use graphical X terminals when the same screen contains windows of processes running on different machines.

    In the workstation category, MS Windows*, IBM OS/2, Macintosh and Acorn RISC-OS compete with Unix.

  • Windows - for those who value compatibility over efficiency; for those ready to buy large amounts of memory, disk space and megahertz; for those who like to click buttons in a window without delving into the essence. True, sooner or later you will have to study the principles of the system and its protocols anyway, but by then it will be too late: the choice has been made. An important advantage of Windows, admittedly, is the possibility of stealing a pile of software.
  • OS/2 - for OS/2 lovers. :-) Although, according to some information, OS/2 interacts better with IBM mainframes and networks than others.
  • Macintosh - for graphic, publishing and music work, as well as for those who love a clear, beautiful interface and do not want (cannot) understand the details of the system's functioning.
  • RISC-OS, flashed into ROM, allows you to avoid wasting time installing the operating system and restoring it after failures. In addition, almost all programs under it use resources very economically, due to which they do not require swapping and work very quickly.

    Unix runs both on PCs and on powerful workstations with RISC processors; truly powerful CAD and geographic information systems run under Unix. Its scalability, thanks to its multi-platform nature, is an order of magnitude greater than that of any other operating system I know.

    Unix Concepts


    Unix is based on two basic concepts: the "process" and the "file". Processes are the dynamic side of the system, the subjects; files are the static side, the objects of processes' actions. Almost the entire interface between processes and the kernel, and between processes themselves, looks like reading and writing files. /* Although one must add things like signals, shared memory and semaphores. */

    A process should not be confused with a program: one program (usually with different data) can be executed by several processes. Processes can be very roughly divided into two types: tasks and daemons. A task is a process that does its work and tries to finish as quickly as possible. A daemon waits for events, processes those that occur, and waits again; it usually terminates at the order of another process, most often killed by the user with the command "kill process_number". /* In this sense, an interactive task that processes user input is actually more like a daemon than a task. :-) */
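This lifecycle is easy to watch from the shell: start a long-running process in the background (a stand-in for a daemon) and terminate it with `kill`. A minimal sketch, assuming a POSIX shell:

```shell
# Launch a long-running process in the background, as a stand-in for a daemon
sleep 300 &
pid=$!                           # the shell reports the new process number

kill -0 "$pid" && echo running   # signal 0 only checks that the process exists

kill "$pid"                      # "kill process_number" - ask it to terminate
wait "$pid" 2>/dev/null || true  # collect its exit status
kill -0 "$pid" 2>/dev/null || echo terminated
```

`kill` by default sends SIGTERM, a polite request; a stuck process can be removed unconditionally with `kill -9`.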

    File system


    In old Unixes file names were limited to 14 characters; in new ones this restriction has been removed. Besides the file name, a directory entry contains the file's inode identifier, an integer giving the number of the block in which the file's attributes are stored. Among these attributes are: the number of the user who owns the file; the group number; the number of links to the file (see below); the dates and times of creation, last modification and last access; the file type (see below); the attributes for changing rights at startup (see below); and the access rights - read, write and execute - for the owner, the group, and everyone else. The right to erase a file is determined by the right to write to the directory that contains it.
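These attributes can be inspected from the shell. A quick sketch, assuming GNU coreutils (`stat -c`; BSD `stat` uses different flags):

```shell
cd "$(mktemp -d)"
echo data > file.txt
ls -i file.txt     # prints the inode number next to the name
# Owner, group, link count and access mode all live in the inode:
stat -c 'uid=%u gid=%g links=%h mode=%a' file.txt
```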

    Each file (but not a directory) can be known under several names, though they must all reside on the same partition. All links to a file are equal; the file is erased only when the last link to it is deleted. If the file is open (for reading and/or writing), then in effect one more reference to it exists; many programs that open a temporary file therefore delete it immediately, so that if the program crashes, the file vanishes as soon as the operating system closes the files the process had open.
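Hard links and their effect on the link count can be demonstrated directly (GNU `stat` assumed):

```shell
cd "$(mktemp -d)"
echo hello > a.txt
ln a.txt b.txt     # a second, fully equal name for the same file
stat -c %h a.txt   # link count is now 2
rm a.txt           # removes one name, not the data
cat b.txt          # the file is still there under its other name
```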

    The file system has another interesting feature: if, after a file is created, writing to it is done not contiguously but at large offsets, no disk space is allocated for the gaps. Thus the total size of the files in a partition can exceed the size of the partition, and deleting such a file frees less space than its nominal size.
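Such "holes" are easy to create with `dd` by seeking past the end of a file; the apparent size and the allocated size then diverge. A sketch assuming GNU tools and a file system that supports sparse files:

```shell
cd "$(mktemp -d)"
# Write a single byte at offset 1 MiB; the gap before it gets no disk blocks
dd if=/dev/zero of=sparse.bin bs=1 count=1 seek=1048576 2>/dev/null
stat -c %s sparse.bin        # apparent size: 1048577 bytes
du -k sparse.bin | cut -f1   # blocks actually allocated: a few KiB at most
```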

    Files are of the following types:

    • regular direct access file;
    • directory (a file containing the names and identifiers of other files);
    • symbolic link (a string with the name of another file);
    • block device (disk or magnetic tape);
    • serial device (terminals, serial and parallel ports; disks and magnetic tapes also have a serial-device interface);
    • named channel.
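The type of each entry shows up as the first character of the `ls -l` mode string. A quick sketch:

```shell
cd "$(mktemp -d)"
touch regular       # '-' regular file
mkdir dir           # 'd' directory
ln -s regular link  # 'l' symbolic link
mkfifo pipe         # 'p' named pipe (channel)
for f in regular dir link pipe; do
    ls -ld "$f" | cut -c1   # first column of the mode string is the type
done
```

Block and character devices show up the same way, as `b` and `c` respectively.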

    Special files for working with devices are usually located in the /dev directory. Here are some of them (in FreeBSD naming):

    • tty* - terminals, including:
      • ttyv - virtual console;
      • ttyd - DialIn terminal (usually a serial port);
      • cuaa - DialOut line
      • ttyp - network pseudo-terminal;
      • tty - terminal with which the task is associated;
    • wd* - hard drives and their subpartitions, including:
      • wd - hard drive;
      • wds - partition of this disk (referred to here as "slice");
      • wds - partition section;
    • fd - floppy disk;
    • rwd*, rfd* - the same as wd* and fd*, but with sequential access;
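The names above are FreeBSD's, but the principle is universal. `/dev/null`, present on every Unix, is an ordinary character device file:

```shell
ls -l /dev/null             # the leading 'c' marks a character device
ls -l /dev/null | cut -c1
echo discarded > /dev/null  # writing to a device file: the classic bit bucket
```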

    Sometimes a program launched by a user needs to run not with the rights of the user who launched it, but with some others. In this case the attribute for changing rights is set, and the program runs with the rights of its owner. (As an example, consider a program that reads a file of questions and answers and, based on it, quizzes the student who launched it: the program must be able to read the answers file, while the student who launched it must not.) This is how, for example, the passwd program works, with which a user changes his password: the user can run passwd, and passwd can modify the system database - but the user directly cannot.
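This attribute is the set-user-ID bit, just one more mode bit set with `chmod`; in `ls -l` it appears as an `s` in the owner-execute position. A sketch (the file here is an empty placeholder, not a real setuid program; GNU `stat` assumed for the check):

```shell
cd "$(mktemp -d)"
touch prog
chmod 4755 prog   # leading 4 = set-user-ID bit; 755 = rwxr-xr-x
ls -l prog        # the mode shows as -rwsr-xr-x
# A real-world example on most systems: ls -l /usr/bin/passwd
```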

    Unlike DOS, in which a full file name looks like "drive:\path\name", and RISC-OS, in which it looks like "-filesystem-drive:$.path.name" (which, generally speaking, has its advantages), Unix uses the transparent notation "/path/name". The root of the tree is the partition from which the Unix kernel was booted. To use another partition (the boot partition usually holds only what is essential for booting), the command `mount /dev/partition_file directory` is used. Files and subdirectories that were previously in that directory become inaccessible until the partition is unmounted (naturally, all normal people use empty directories for mounting). Only the superuser has the right to mount and unmount.
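Mounting itself requires superuser rights, but inspecting what is already mounted does not. A sketch assuming Linux, where the current mount table is visible in /proc/mounts:

```shell
# As the superuser one would attach and detach a partition like this:
#   mount /dev/ada0s1e /mnt      (FreeBSD device naming)
#   umount /mnt
# Without privileges we can still see what is mounted where:
awk '{print $2, $3}' /proc/mounts | head -5   # mount point and fs type
```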

    When it starts, each process can expect to have three files already open for it: standard input stdin on descriptor 0, standard output stdout on descriptor 1, and standard error stderr on descriptor 2. At login, after the user enters a name and password and the shell is launched for him, all three point to /dev/tty; later any of them can be redirected to any file.
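The three descriptors can be redirected independently, which is easy to verify:

```shell
cd "$(mktemp -d)"
# Send descriptor 1 and descriptor 2 to different files:
{ echo "to stdout"; echo "to stderr" >&2; } >out.txt 2>err.txt
cat out.txt
cat err.txt
```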

    Command interpreter


    Unix almost always includes two command interpreters: sh (the shell) and csh (a C-like shell). Besides these there are bash (the Bourne-again shell), ksh (the Korn shell), and others. Without going into details, here are the general principles:

    All commands, except for changing the current directory, setting environment variables, and the structured-programming operators, are external programs. These programs are usually located in the /bin and /usr/bin directories; system administration programs are in /sbin and /usr/sbin.

    A command consists of the name of the program to run and its arguments. Arguments are separated from the command name and from each other by spaces and tabs. Some special characters are interpreted by the shell itself; among them are " " ` \ ! $ ^ * ? | & ; (what else?).
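The difference between a character interpreted by the shell and the same character passed through literally is easiest to see with `*`. A quick sketch:

```shell
cd "$(mktemp -d)"
touch a.txt b.txt
echo *.txt     # unquoted: the shell expands the wildcard -> a.txt b.txt
echo '*.txt'   # quoted: the character reaches the program untouched
```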

    You can issue several commands on one command line. Commands can be separated by ; (sequential execution), & (asynchronous execution in the background), or | (a pipeline: the stdout of the first command is fed to the stdin of the second).
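All three separators in one sketch:

```shell
echo first; echo second           # ';' - strictly one after another
printf 'b\na\n' | sort            # '|' - sort reads what printf writes
sleep 1 &                         # '&' - runs in the background
echo "started background PID $!"  # the shell goes on while sleep runs
wait                              # wait for background jobs to finish
```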

    You can also take standard input from a file by writing "<file" (without the quotes); you can direct standard output to a file using ">file" (the file is truncated) or ">>file" (output is appended to the end of the file). The program itself does not receive this argument; to find out that its input or output has been redirected, the program would have to perform some very non-trivial gestures.
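A short sketch of all three redirections:

```shell
cd "$(mktemp -d)"
printf 'one\ntwo\n' > data.txt   # '>'  creates/truncates the file
wc -l < data.txt                 # '<'  the program reads the file as stdin
echo three >> data.txt           # '>>' appends to the end
wc -l < data.txt                 # now 3 lines
```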

    Manuals - man


    If you need information on a command, give the command "man command_name". The output is displayed on the screen through the "more" pager; see how to control it on your Unix with the `man more` command.

    Additional Documentation

    UNIX originated at AT&T's Bell Labs more than 20 years ago.

    UNIX is a multi-user, multi-tasking OS that includes quite powerful means of protecting the programs and files of different users. It is written in C and is largely machine-independent, which ensures its high mobility and the easy portability of application programs to computers of various architectures. An important feature of the UNIX family is its modularity and an extensive set of utility programs, which make it possible to create a favorable operating environment for programmers.

    It supports a hierarchical file structure, virtual memory, a multi-window interface, multiprocessor systems, multi-user database management systems, and heterogeneous computer networks.

    UNIX has the following main characteristics:

    - portability;

    - preemptive multitasking based on processes running in isolated address spaces in virtual memory;

    - support for the simultaneous work of many users;

    - support for asynchronous processes;

    - a hierarchical file system;

    - device-independent I/O operations (via special device files);

    - a standard interface for programs (pipes, IPC) and for users (a command interpreter not included in the OS kernel);

    - built-in system usage accounting tools.

    The UNIX architecture is multi-layered. At the lowest level runs the kernel of the operating system. Kernel functions (process management, memory management, interrupt handling, and so on) are accessible through the system call interface, which forms the second level. System calls provide the programming interface for accessing kernel procedures. At the next level run the command interpreters, the commands and utilities for system administration, and the communication drivers and protocols - everything usually classed as system software. The outermost level is formed by the user's application programs, network and other communication services, DBMSes and utilities.

    The operating system performs two main tasks: manipulating data and storing it. Most programs mainly manipulate data, but ultimately the data is stored somewhere. On a UNIX system that storage place is the file system. Moreover, in UNIX all the devices the operating system works with are also represented as special files in the file system.

    The logical file system in UNIX (or simply the file system) is the hierarchically organized structure of all the directories and files in the system, starting from the root directory. The UNIX file system provides a unified interface for access to data located on different media and to peripheral devices. A logical file system may consist of one or more physical file (sub)systems, which are partitions of physical media (disks, CD-ROMs or floppy disks).


    The file system controls file permissions, performs file creation and deletion, and reads and writes file data. It also redirects requests addressed to peripheral devices to the corresponding modules of the I/O subsystem.

    The hierarchical structure of the UNIX file system makes it easy to navigate. Each directory, starting from the root (/), can in turn contain files and subdirectories.
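Navigating the tree is just a handful of commands:

```shell
cd /            # go to the root directory
pwd             # print where we are: /
ls /            # the usual top-level directories: bin, dev, etc, usr, ...
cd /usr && pwd  # descend one level down the tree
```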

    UNIX has no theoretical limit on the number of nested subdirectories, but there is a limit on the maximum length of a file name specified in commands: 1024 characters.

    In UNIX there are several types of files that differ in function:

    A regular file is the most common type and contains data in some format. To the operating system such files are simply sequences of bytes. They include text files, binary data, and executable programs.

    A directory is a file containing the names of the files in it, together with pointers to additional information that allows the operating system to operate on those files. Directories are used to form the logical tree of the file system.

    A device special file provides access to a physical device. Devices are accessed by opening, reading and writing the special file.

    A FIFO is a named pipe. This file is used for communication between processes on a queue basis.

    A socket allows a network connection to be represented as a file.
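A named pipe in action; the writer blocks until a reader opens the other end (a POSIX shell sketch):

```shell
cd "$(mktemp -d)"
mkfifo queue              # create the named pipe in the file system
echo "message" > queue &  # writer: blocks until someone reads
read line < queue         # reader: gets the data first-in, first-out
wait                      # collect the background writer
echo "$line"
```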

    Each file in UNIX carries a set of permissions that determine how a user may interact with the file.
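Permissions are changed with `chmod`, either in octal or symbolic form (GNU `stat` assumed for the final check):

```shell
cd "$(mktemp -d)"
touch notes.txt
chmod 640 notes.txt            # owner rw-, group r--, others ---
ls -l notes.txt | cut -c1-10   # -rw-r-----
chmod o+r notes.txt            # symbolic form: add read for others
ls -l notes.txt | cut -c1-10   # -rw-r--r--
```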

    Each hard drive consists of one or more logical parts called partitions. The location and size of a partition are set when the disk is partitioned. In UNIX, partitions act as independent devices that are accessed as separate storage media. A partition can contain only one physical file system.

    There are many types of physical file systems with different structures, such as FAT16 and NTFS; for UNIX alone there are many (ufs, s5fs, ext2, vxfs, jfs, ffs, etc.).


    Introduction

    Writing about the Unix OS is extremely difficult. Firstly, because a lot has been written about this system. Secondly, because Unix ideas and solutions have had and are having a huge impact on the development of all modern operating systems, and many of these ideas are already described in this book. Thirdly, because Unix is not one OS but a whole family of systems, and it is not always possible to "trace" their relationship to one another, and it is simply impossible to describe all the OSes included in this family. Nevertheless, without in any way claiming to be complete, we will try to give a quick overview of the "Unix world" in those areas that seem interesting for the purposes of our training course.

    The birth of the Unix OS dates back to the late 1960s, and the story has become so overgrown with "legends" that the details of the event are sometimes told differently. Unix was born at Bell Telephone Laboratories (Bell Labs), the research center of AT&T Corporation. Initially this initiative project for the PDP-7 computer (later for the PDP-11) was variously a file system, a computer game, a text-preparation system, or all of these together. It is important, however, that from the very beginning the project that eventually turned into an OS was conceived as a software environment for collective use. The author of the first version of Unix was Ken Thompson, but a large team of colleagues (D. Ritchie, B. Kernighan, R. Pike and others) took part in discussing the project and subsequently in its implementation. In our opinion, several fortunate circumstances at the birth of Unix determined the success of this system for many years to come.

    For most of the team in which the Unix OS was born, this OS was the "third system". There is an opinion (see, for example) that a systems programmer achieves high qualification only on completing his third project: the first project is still a "student" effort; in the second the developer tries to include everything that did not fit into the first, and it ends up too cumbersome; and only in the third is the necessary balance of desires and possibilities achieved. It is known that before the birth of Unix the Bell Labs team participated (together with a number of other companies) in the development of the MULTICS OS. The final product, MULTICS (Bell Labs was not involved in the final stages of development), bears all the hallmarks of a "second system" and was not widely adopted. It should be noted, however, that many fundamentally important ideas and solutions were born in that project, and some concepts that many consider native to Unix actually originate in MULTICS.

    The Unix OS was a system that was made "for yourself and for your friends." Unix was not tasked with capturing the market and competing with any products. The developers of the Unix OS themselves were also its users, and they themselves assessed the suitability of the system to their needs. Without the pressure of market conditions, such an assessment could be extremely objective.

    The Unix OS was a system made by programmers and for programmers. This determined the elegance and conceptual harmony of the system, on the one hand, and on the other, the need for an understanding of the system for the Unix user and a sense of professional responsibility for the programmer developing software for Unix. And no subsequent attempts to make “Unix for dummies” could rid the Unix OS of this advantage.

    In 1972-73 Ken Thompson and Dennis Ritchie wrote a new version of Unix. Especially for this purpose, Ritchie created the C programming language, which needs no introduction. More than 90% of the Unix code is written in this language, and the language has become an integral part of the OS. The fact that the main part of the OS is written in a high-level language makes it possible to recompile it for any hardware platform, a circumstance that determined the widespread adoption of Unix.

    During the creation of Unix, US antitrust laws did not give AT&T the opportunity to enter the software market. Therefore, the Unix OS was non-commercial and freely distributed, primarily in universities. There its development continued, and it was most actively carried out at the University of California at Berkeley. At this university, the Berkeley Software Distribution group was created, which was engaged in the development of a separate branch of the OS - BSD Unix. Throughout subsequent history, the main branch of Unix and BSD Unix developed in parallel, repeatedly enriching each other.

    As the Unix OS spread, commercial firms grew increasingly interested in it and began to release their own commercial versions. In time, AT&T's "mainstream" branch of Unix became commercial, and a subsidiary, Unix System Laboratories, was created to promote it. The BSD branch of Unix in turn split into commercial BSD and FreeBSD. Various commercial and freely available Unix-like systems were built on the AT&T Unix kernel, incorporating features borrowed from BSD Unix as well as original ones. Despite the common origin, differences between members of the Unix family accumulated and eventually made porting applications from one Unix-like OS to another extremely difficult. On the initiative of Unix users a movement arose to standardize the Unix API. It was supported by the International Organization for Standardization (ISO) and led to the POSIX (Portable Operating System Interface) standard, which is still being developed and remains the most authoritative standard for operating systems. However, establishing POSIX specifications as official standards is a slow process that cannot keep up with the needs of software manufacturers, which led to the emergence of alternative industry standards.

    With the transfer of AT&T Unix to Novell, the name of this operating system changed to UnixWare, and the rights to the Unix trademark passed to the X/Open consortium. This consortium (now the Open Group) developed its own system specifications, broader than POSIX, known as the Single UNIX Specification. The second edition of this standard, much better aligned with POSIX, was released recently.

    Finally, a number of companies producing their own versions of Unix formed a consortium Open Software Foundation (OSF), which released its own version of Unix - OSF/1, based on the Mach microkernel. OSF also released the OSF/1 system specifications, which led OSF member firms to produce their own Unix systems. Among such systems: SunOS from Sun Microsystems, AIX from IBM, HP/UX from Hewlett-Packard, DIGITAL UNIX from Compaq and others.

    At first, the Unix systems of these companies were largely based on BSD Unix, but now most modern industrial Unix systems are built using (under license) the AT&T Unix System V Release 4 (S5R4) kernel, although they also inherit some properties of BSD Unix. We do not take responsibility for comparing commercial Unix systems, since comparisons of this kind that appear periodically in the press often present completely opposite results.

    Novell then sold Unix to the Santa Cruz Operation (SCO), which produced its own Unix product, SCO OpenServer. SCO OpenServer was based on an earlier version of the kernel (System V Release 3), but was superbly debugged and highly stable. SCO integrated its product with AT&T Unix and released Open Unix 8, but then sold Unix to Caldera, which owns the "classic" Unix OS today (late 2001).

    Sun Microsystems began its presence in the Unix world with the SunOS system, built on the BSD kernel. It was subsequently replaced by the S5R4-based Solaris. Currently version 8 of this OS is shipping (a v.9 beta also exists). Solaris runs on the SPARC platform (RISC processors manufactured to Sun's specifications) and on Intel-Pentium.

    Hewlett-Packard offers HP-UX v.11 on the PA-RISC platform. HP-UX is based on S5R4 but contains many features betraying its BSD Unix origins. Of course, HP-UX will also become available on the Intel-Itanium platform.

    IBM offers the AIX OS; the latest version to date is 5L (more on it later). IBM has not declared AIX's "pedigree"; it is mainly an original development, but the first versions bore signs of descent from FreeBSD Unix. Now, however, AIX is closer to S5R4. AIX was initially available on the Intel-Pentium platform, but later (in line with IBM's general policy) support for that platform was dropped. AIX currently runs on IBM RS/6000 servers and other PowerPC-based computing platforms (including IBM supercomputers).

    DEC's DIGITAL UNIX OS was the only commercial implementation of the OSF/1 system. DIGITAL UNIX OS ran on DEC Alpha RISC servers. When DEC was acquired by Compaq in 1998, both Alpha and DIGITAL UNIX servers went to Compaq. Compaq intends to restore its presence in the Alpha server market and, in connection with this, is intensively developing the OS for them. The current name of this OS is Tru64 Unix (current version is 5.1A), it continues to be based on the OSF/1 kernel and carries many of the features of BSD Unix.

    Although most commercial Unix systems are based on a single kernel and comply with POSIX requirements, each has its own API dialect, and the differences between dialects accumulate. As a result, porting industrial applications from one Unix system to another is difficult, requiring at minimum recompilation and often corrections to the source code. An attempt to overcome this "confusion" and make a single Unix OS for everyone was made in 1998 by an alliance of SCO, IBM and Sequent. These firms united in the Monterey project with the goal of creating a single OS based on UnixWare (then owned by SCO), IBM's AIX, and Sequent's DYNIX OS. (Sequent held a leading position in the production of computers with the NUMA, non-uniform memory access, architecture, and DYNIX was its Unix for such machines.) Monterey OS was to run on the 32-bit Intel-Pentium platform, the 64-bit PowerPC platform, and the new 64-bit Intel-Itanium platform. Almost all the leaders of the hardware and middleware industry declared support for the project; even companies with Unix clones of their own (except Sun Microsystems) announced that on Intel platforms they would support only Monterey. Work on the project seemed to be going well: Monterey OS was among the first to prove itself on Intel-Itanium (along with Windows NT and Linux), and the only one of them that did not emulate the 32-bit Intel-Pentium architecture. At the final stage of the project, however, a fatal event occurred: SCO sold its Unix division, and Sequent had earlier been absorbed into IBM. The "heir" to all the properties of the Monterey OS became IBM's AIX v.5L - though not quite all of them. The Intel-Pentium platform is not a strategic focus for IBM, and AIX is not available on it. And since the other leaders of the computer industry did not share (or did not entirely share) IBM's position, the idea of a universal Unix OS never came to fruition.