Operating systems, environments and shells. Architecture, purpose and functions of operating systems


Lecture 1

Lecture 1. Introduction to OS: definition, purpose and history of development

What is an operating system

Computer system structure

What is OS

Operating system as a virtual machine

Operating system as resource manager

The operating system as a protector of users and programs

Operating system as a constantly running kernel

A brief history of the evolution of computing systems

First period (1945-1955). Vacuum-tube machines. No operating systems

Second period (1955 - early 60s). Transistor-based computers. Batch operating systems

Third period (early 60s - 1980). Computers based on integrated circuits. The first multitasking OS

The fourth period (from 1980 to the present). Personal computers. Classic, network and distributed systems

Basic concepts of the OS

System calls

Interrupts

Exceptional situations

Files

Processes, threads

OS architectural features

Monolithic kernel

Layered systems

Virtual machines

Microkernel architecture

Mixed systems

OS classification

Implementation of multitasking

Multi-user support

Multiprocessing

Real-time systems

Conclusion

Appendix 1.

Some information about computer architecture

Interaction with peripheral devices

Lecture 1. Introduction to operating systems: definition, purpose and history of development

What is an operating system

Computer system structure

What does any computing system consist of? First, of what is commonly called hardware in English-speaking countries, or technical support: the CPU, memory, monitor, disk devices, etc., connected by a backbone link called a bus.

Second, a computing system consists of software. All software is usually divided into two parts: application and system software. Application software usually includes a variety of banking and other business programs, games, word processors, and so on. System software usually refers to programs that facilitate the operation and development of application programs. It must be said that the division into application and system software is partly arbitrary and depends on who makes the division. Thus, an ordinary user inexperienced in programming may consider Microsoft Word a system program, while from a programmer's point of view it is an application. To an ordinary programmer the C compiler is a system program, while to a systems programmer it is an application program. Despite this blurred boundary, the situation can be represented as a sequence of layers (see Fig. 1.1), with the most common part of the system software, the operating system, singled out separately:

Fig. 1.1. Software layers of a computer system

What is an OS

Most users have experience of using operating systems, but they will nevertheless find it difficult to give this concept an exact definition. Let us briefly consider the main points of view.

The operating system as a virtual machine

Abstraction is widely used in OS design; it is an important method of simplification that allows one to concentrate on the interaction of the system's high-level components while ignoring the details of their implementation. In this sense the OS is the interface between the user and the computer.

The architecture of most computers at the machine-instruction level is very inconvenient for application programs to use. For example, working with a disk requires knowledge of the internal structure of its electronic component, the controller, in order to issue commands to spin the disk, seek and format tracks, read and write sectors, and so on.

It is clear that the average programmer is not able to take into account all the features of the hardware (in modern terms, to develop device drivers), but needs a simple high-level abstraction, say, one representing the information space of the disk as a set of files. A file can be opened for reading or writing, used to retrieve or store information, and then closed. This is conceptually simpler than worrying about the details of moving disk heads or controlling the motor. Similarly, simple and clear abstractions hide from the programmer all unnecessary details of interrupt handling, timer operation, memory management, and so on. Moreover, on modern computing systems one can create the illusion of unlimited main memory and an unlimited number of processors. All of this is done by the operating system. Thus, the operating system appears to the user as a virtual machine, which is easier to deal with than the computer hardware directly.
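As a user-level illustration of this abstraction (a hypothetical sketch; the file name is arbitrary), a program can create, write, read, and delete a file in a few lines, never touching heads, tracks, or sectors:

```python
import os
import tempfile

# The OS presents the disk as a set of named files. The program below
# never deals with heads, tracks or sectors -- only with the abstraction.
path = os.path.join(tempfile.gettempdir(), "vm_demo.txt")

with open(path, "w") as f:           # open the file for writing
    f.write("hello, abstraction\n")  # store information in the file

with open(path) as f:                # open the file for reading
    data = f.read()                  # retrieve the information

print(data.strip())                  # -> hello, abstraction
os.remove(path)                      # the named portion of disk space is gone
```

Everything the snippet does ultimately turns into controller commands, but the programmer sees only named files and simple operations on them.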

The operating system as a resource manager

The operating system is designed to control all parts of a highly complex computer architecture. Consider, for example, what would happen if several programs running on the same computer tried to print on a printer at the same time. We would end up with a jumble of lines and pages produced by different programs. The operating system prevents this kind of chaos by buffering the information to be printed on disk and queuing it for printing. For multi-user computers, the need to manage and protect resources is even more obvious. Thus the operating system, as a resource manager, carries out an orderly and controlled distribution of processors, memory, and other resources among the various programs.
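The print-spooling idea can be sketched as a toy model (an illustration only, not how a real OS spooler is implemented): programs put complete jobs into a queue, and a single consumer drains it, so output from different programs is never interleaved.

```python
from queue import Queue

# Toy model of print spooling: instead of letting programs write to the
# printer directly (which would interleave their pages), each program
# puts its buffered output into a spool queue as one complete job,
# and a single "printer" loop drains the queue job by job.
spool = Queue()

def submit_job(program, pages):
    # A program's whole output is submitted as a single job.
    spool.put((program, pages))

submit_job("prog_a", ["A-page1", "A-page2"])
submit_job("prog_b", ["B-page1"])

printed = []
while not spool.empty():
    program, pages = spool.get()  # jobs leave the queue one at a time,
    printed.extend(pages)         # so one job's pages are never mixed
                                  # with another's

print(printed)  # -> ['A-page1', 'A-page2', 'B-page1']
```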

The operating system as a protector of users and programs

If the computing system allows several users to work together, the problem of organizing their safe activity arises. It is necessary to ensure the safety of information on disk so that no one can delete or damage other users' files. Programs of one user must not be allowed to interfere arbitrarily with the operation of other users' programs. Attempts at unauthorized use of the computing system must be stopped. The operating system carries out all these activities as the organizer of the safe operation of users and their programs. From this point of view the operating system appears to be a state security system entrusted with police and counterintelligence functions.

The operating system as a constantly running kernel

Finally, we can give the following definition: an operating system is a program that runs on the computer constantly and interacts with all application programs. This would seem to be a perfectly correct definition but, as we will see later, in many modern operating systems only a part of the operating system runs on the computer constantly, and this part is usually called its kernel.

As we can see, there are many points of view on what an operating system is, and it is impossible to give it an adequate strict definition. It is easier to say not what an operating system is, but why it is needed and what it does. To clarify this question, let us consider the history of the development of computing systems.

A brief history of the evolution of computing systems

We will consider the history of the development of computing systems rather than of operating systems, because hardware and software have evolved together, influencing each other. The emergence of new technical capabilities led to breakthroughs in the creation of convenient, efficient, and safe programs, while fresh ideas in software stimulated the search for new technical solutions. It was precisely these criteria (convenience, efficiency, and security) that played the role of natural-selection factors in the evolution of computing systems.

First period (1945-1955). Vacuum-tube machines. No operating systems

We will begin our study of the development of computer systems with the advent of electronic computing systems (omitting the history of mechanical and electromechanical devices).

The first steps in the development of electronic computers were taken at the end of the Second World War. In the mid-1940s the first vacuum-tube computing devices were created, and the principle of a program stored in the machine's memory appeared (John von Neumann, June 1945). At that time the same group of people participated in the design, operation, and programming of a computer. It was more research work in the field of computing than the regular use of computers as a tool for solving practical problems from other applied areas. Programming was done exclusively in machine language. There was no question of operating systems; all tasks of organizing the computing process were solved manually by each programmer from the control console. Only one user could be at the console at a time. The program was loaded into the machine's memory at best from a deck of punched cards, and usually via a panel of switches.

The computing system performed only one operation at a time (input-output or the actual computation). Programs were debugged from the control console by examining the state of the machine's memory and registers. At the end of this period the first system software appeared: in 1951-1952 prototypes of the first compilers from symbolic languages (Fortran and others) emerged, and in 1954 Nat Rochester developed an assembler for the IBM-701.

A significant portion of the time was spent preparing to run a program, and the programs themselves were executed strictly sequentially. This mode of operation is called sequential data processing. In general, the first period is characterized by the extremely high cost of computing systems, their small number, and the low efficiency of their use.

Second period (1955 - early 60s). Transistor-based computers. Batch operating systems

In the mid-50s the next period in the evolution of computing technology began, associated with the emergence of a new technical base: semiconductor elements. The use of transistors instead of frequently burning-out vacuum tubes increased computer reliability. Machines could now operate continuously long enough to be entrusted with practically important tasks. Computers' electricity consumption decreased, and cooling systems improved. Computer sizes shrank. The cost of operating and maintaining computing equipment fell. Commercial firms began to use computers. At the same time, algorithmic languages developed rapidly (LISP, COBOL, ALGOL-60, PL-1, etc.). The first real compilers, link editors, and libraries of mathematical and utility routines appeared. The programming process became simpler. There was no longer any need to burden the same people with the entire process of developing and using computers. It was during this period that personnel split into programmers and operators, and into operation specialists and computer developers.

The process of running programs itself changes. Now the user brings the program with input data in the form of a deck of punched cards and indicates the necessary resources. Such a deck is called a task. The operator loads the task into the machine's memory and starts it for execution. The resulting output data is printed on the printer, and the user receives it back after some (quite long) time.

A change in the requested resources caused program execution to be suspended, so the CPU was often idle. To improve the efficiency of computer use, jobs with similar resource requirements began to be collected together into a batch of jobs.

The first batch processing systems appeared, which simply automated the launch of one program from the batch after another and thereby increased the processor load factor. In implementing batch processing systems, a formalized job control language was developed, with which the programmer told the system and the operator what work he wanted performed on the computer. Batch processing systems became the prototype of modern operating systems; they were the first system programs intended to control the computing process.

Third period (early 60s - 1980). Computers based on integrated circuits. The first multitasking OS

The next important period in the development of computers spans the early 60s to 1980. At this time the technical base shifted from individual semiconductor elements such as transistors to integrated circuits. Computing technology became more reliable and cheaper. The complexity and number of problems solved by computers grew. Processor performance increased.

The low operating speed of mechanical input/output devices (a fast punched-card reader could process 1200 cards per minute; printers printed up to 600 lines per minute) hindered more efficient use of processor time. Instead of reading a batch of jobs from punched cards directly into memory, it began to be recorded beforehand, first on magnetic tape and later on disk. When input data are required during a job's execution, they are read from disk. Similarly, output information is first copied into a system buffer and written to tape or disk, and is printed only after the job completes. At first the actual I/O operations were carried out off-line, that is, on other, simpler, stand-alone computers. Later they began to be performed on the same computer that carried out the calculations, that is, on-line. This technique is called spooling (short for Simultaneous Peripheral Operation On-Line). The introduction of spooling into batch systems made it possible to overlap the real input-output operations of one job with the execution of another, but it required developing an interrupt mechanism to notify the processor of the completion of these operations.

Magnetic tapes were sequential-access devices: information was read from them in the order in which it was written. The advent of the magnetic disk, for which the order in which information is read is unimportant, that is, a direct-access device, led to the further development of computing systems. When a batch of jobs was processed on magnetic tape, the order in which jobs were launched was determined by the order in which they were entered. When a batch of jobs was processed on a magnetic disk, it became possible to choose the next job to execute. Batch systems began to schedule jobs: depending on the availability of the requested resources, the urgency of the computations, and so on, one job or another was selected to run.

Further increases in processor utilization were achieved through multiprogramming. The idea of multiprogramming is that while one program performs an I/O operation, the CPU does not stand idle, as it did in single-program mode, but executes another program. When the I/O operation ends, the CPU returns to executing the first program. This idea is reminiscent of the behavior of a teacher and students during an exam. While one student (a program) is thinking over the answer to a question (an I/O operation), the teacher (the CPU) listens to the answer of another student (a computation). Naturally, this situation requires several students to be in the room. Likewise, multiprogramming requires having several programs in memory at the same time. Each program is then loaded into its own section of main memory, called a partition, and must not affect the execution of another program.

The emergence of multiprogramming required a real revolution in the structure of the computing system. Hardware support plays a special role here (many hardware innovations had already appeared at the previous stage of evolution); its most significant features are listed below:

  • Implementation of protection mechanisms. Programs must not have independent access to resource allocation, which leads to the appearance of privileged and unprivileged instructions. Privileged instructions, such as I/O instructions, can be executed only by the operating system; it is said to run in privileged mode. The transfer of control from an application program to the OS is accompanied by a controlled mode change. In addition, there is memory protection, which isolates competing user programs from one another and the OS from user programs.
  • Presence of interrupts. External interrupts notify the OS that an asynchronous event has occurred, for example the completion of an I/O operation. Internal interrupts (nowadays called exceptions) occur when program execution leads to a situation requiring OS intervention, such as division by zero or an attempted protection violation.
  • Development of parallelism in the architecture. Direct memory access and the organization of I/O channels made it possible to free the central processor from routine operations.

The operating system plays no less important a role in organizing multiprogramming. It is responsible for the following operations:

  • Organizing the interface between the application program and the OS by means of system calls.
  • Queuing jobs in memory and allocating the processor to one of them required scheduling of processor use.
  • Switching from one job to another requires saving the contents of the registers and the data structures needed to continue the job, in other words its context, so that the computation continues correctly.
  • Since memory is a limited resource, memory management strategies are needed; that is, the processes of placing information in memory, replacing it, and fetching it must be organized.
  • Organizing the storage of information on external media in the form of files and granting access to a specific file only to certain categories of users.
  • Since programs may need to perform authorized data exchange, they must be provided with means of communication.
  • For correct data exchange it is necessary to resolve conflict situations that arise when working with various resources, and to provide for coordination of the programs' actions, i.e. to equip the system with synchronization facilities.

Multiprogramming systems made it possible to use system resources (for example, the processor, memory, and peripheral devices) more effectively, but they long remained batch systems. The user could not interact directly with a job and had to foresee all possible situations with control cards. Debugging programs was still time-consuming and required examining multi-page printouts of memory and register contents, or using debug printing.

The advent of cathode-ray-tube displays and a rethinking of the use of keyboards brought a solution to this problem. The logical extension of multiprogramming systems were time-sharing systems. In them the processor switches between tasks not only during I/O operations but also simply after a certain time has elapsed. These switches occur so frequently that users can interact with their programs while they run, that is, interactively. As a result, several users can work simultaneously on one computer system. For this, each user must have at least one program in memory. To reduce the limits on the number of working users, the idea was introduced of not keeping the executable program entirely resident in main memory. The main part of the program resides on disk; the fragment that must be executed at the moment is loaded into main memory, while an unneeded one is written back to disk. This is implemented by the virtual memory mechanism, whose main advantage is the creation of the illusion of unlimited computer RAM.

In time-sharing systems the user was able to debug a program efficiently in interactive mode and to write information to disk directly from the keyboard, without using punched cards. The emergence of on-line files led to the need to develop advanced file systems.

In parallel with the internal evolution of computing systems, their external evolution also took place. Before the beginning of this period, computing systems were, as a rule, incompatible: each had its own operating system, its own instruction set, and so on. As a result, a program that ran successfully on one type of machine had to be completely rewritten and re-debugged to run on another type of computer. At the beginning of the third period, the idea arose of creating families of software-compatible machines running under the same operating system. The first family of software-compatible computers built on integrated circuits was the IBM/360 series. Developed in the early 60s, this family was significantly superior to second-generation machines in price/performance. It was followed by the PDP line of computers, incompatible with the IBM line, the top model of which was the PDP-11.

The strength of "one family" was also its weakness. The wide range of possibilities of this concept (the presence of all models, from minicomputers to giant machines; an abundance of various peripherals; different environments; different users) gave rise to a complex and cumbersome operating system. Millions of lines of assembly code written by thousands of programmers contained many errors, which caused a constant stream of publications about them and attempts to fix them. The OS/360 operating system alone contained over 1000 known bugs. Nevertheless, the idea of standardizing operating systems took firm root in users' minds and subsequently received active development.

Fourth period (from 1980 to the present). Personal computers. Classic, network and distributed systems

The next period in the evolution of computing systems is associated with the appearance of large-scale integrated circuits (LSI). These years saw a sharp increase in the degree of integration and a decrease in the cost of chips. A computer that did not differ in architecture from the PDP-11 became, in price and ease of use, accessible to an individual rather than to a department of an enterprise or university. The era of personal computers had arrived. Initially, personal computers were intended to be used by one user in single-program mode, which led to a degradation of the architecture of these computers and of their operating systems (in particular, the need to protect files and memory, to schedule jobs, and so on disappeared).

Computers began to be used not only by specialists, which required the development of “friendly” software.

However, the growing complexity and diversity of the problems solved on personal computers, and the need to increase the reliability of their operation, led to the revival of almost all the features characteristic of the architecture of large computing systems.

In the mid-80s, networks of computers, including personal computers, running under network or distributed operating systems began to develop rapidly.

In network operating systems, users can access the resources of another network computer, but they must be aware of their availability and be able to do so. Each machine on the network runs under its own local operating system, which differs from the operating system of a stand-alone computer by the presence of additional tools (software support for network interface devices and for access to remote resources); these additions do not change the structure of the operating system.

A distributed system, on the contrary, looks like an ordinary autonomous system. The user does not know, and need not know, whether his files are stored on a local or a remote machine and where his programs are executed. He may not even know whether his computer is connected to a network. The internal structure of a distributed operating system differs significantly from that of autonomous systems.

In what follows, we will call autonomous operating systems classic operating systems.

Having examined the stages of development of computing systems, we can identify six main functions performed by classic operating systems in the course of their evolution:

  • Scheduling jobs and processor use.
  • Providing programs with means of communication and synchronization.
  • Memory management.
  • File system management.
  • I/O management.
  • Security.

Each of the above functions is usually implemented as a subsystem, a structural component of the OS. In each operating system these functions were, of course, implemented in their own way and to varying degrees. They were not originally designed as components of operating systems but appeared in the course of development, as computing systems became more convenient, efficient, and secure. The evolution of man-made computing systems has followed this path, but no one has yet proved that it is the only possible path of their development. Operating systems exist because, at the moment, their existence is a reasonable way to use computing systems. Consideration of the general principles and algorithms for implementing their functions constitutes the content of most of our course, in which the listed subsystems will be described in turn.

Basic concepts of the OS

In the process of evolution, several important concepts arose that became an integral part of OS theory and practice. The concepts covered in this section will be encountered and explained throughout the course. Below is a brief description of them.

System calls

Every operating system supports a mechanism that allows user programs to access the services of the OS kernel. In the operating systems of the most famous Soviet computer, the BESM-6, the corresponding means of "communication" with the kernel were called extracodes; in IBM operating systems they were called system macros; and so on. In the Unix OS such facilities are called system calls.

System calls are the interface between the operating system and a user program. They create, delete, and use various objects, chief among which are processes and files.

A user program requests a service from the operating system by making a system call. There are libraries of procedures that load machine registers with certain parameters and cause a processor interrupt, after which control is transferred to the handler of the call, which is part of the operating system kernel. The purpose of such libraries is to make a system call look like an ordinary subroutine call.

The main difference is that on a system call the task switches into privileged mode, or kernel mode. That is why system calls are sometimes also called software interrupts, as opposed to hardware interrupts, which are more often called simply interrupts.

In this mode the operating system kernel code runs, and it executes in the address space and context of the task that invoked it. Thus the operating system kernel has full access to the memory of the user program, and on a system call it is enough to pass the addresses of one or more memory areas containing the call's parameters and the addresses of one or more memory areas for its results.

In most operating systems a system call is performed by a software interrupt instruction (INT). A software interrupt is a synchronous event that can be repeated when the same program code is executed.
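On Unix-like systems this layering is easy to observe from a high-level language. The sketch below (assuming a POSIX-like system; the file name is arbitrary) uses Python's os module, whose functions are thin library wrappers over the open, write, read, close, and unlink system calls: to the caller each looks like an ordinary subroutine call, while inside it the kernel is entered.

```python
import os
import tempfile

# os.open, os.write, os.read, os.close, os.unlink are thin wrappers
# over the corresponding system calls. The wrapper loads the call's
# parameters, traps into the kernel, and returns the kernel's result
# -- to the program it all looks like a normal subroutine call.
path = os.path.join(tempfile.gettempdir(), "syscall_demo.bin")

fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)  # open(2)
os.write(fd, b"kernel, hello")                             # write(2)
os.close(fd)                                               # close(2)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 64)                                     # read(2)
os.close(fd)
os.unlink(path)                                            # unlink(2)

print(data)  # -> b'kernel, hello'
```

Note that fd here is the raw file descriptor returned by the kernel, not a high-level file object.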

Interrupts

A hardware interrupt is an event generated by a device external to the processor. Through hardware interrupts, the hardware either informs the central processor that an event requiring an immediate response has occurred (for example, the user pressed a key) or reports the completion of an asynchronous I/O operation (for example, data have been read from disk into main memory). An important type of hardware interrupt is the timer interrupt, generated periodically after a fixed interval of time. Timer interrupts are used by the operating system when scheduling processes. Each type of hardware interrupt has its own number, which uniquely identifies the source of the interrupt. A hardware interrupt is an asynchronous event: it occurs regardless of what code the processor is executing at the moment. Handling a hardware interrupt should not take into account which process is current.

Exceptional situations

An exceptional situation (exception) is an event that occurs as a result of a program's attempt to execute an instruction that, for some reason, cannot be carried out to completion. Examples of such instructions are an attempt to access a resource without sufficient privileges or an access to a missing memory page.

Exceptional situations, like system calls, are synchronous events arising in the context of the current task. They can be divided into correctable and uncorrectable. Correctable ones include such exceptional situations as the absence of needed information in main memory. After the cause has been eliminated, the program can continue running after a correctable exceptional situation. The occurrence of correctable exceptional situations during the operation of an operating system is considered normal. Uncorrectable exceptional situations most often arise from errors in programs (for example, division by zero). Usually in such cases the operating system reacts by terminating the program that caused the exceptional situation.
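The distinction can be mimicked at user level with language exceptions (an analogy only: real correctable exceptions such as page faults are handled by the OS, not by the program): a missing cache entry is "corrected" and execution continues, while division by zero ends the offending computation.

```python
# User-level analogy of correctable vs uncorrectable exceptional
# situations. A missing cache entry is "correctable": we eliminate the
# cause (compute and store the value) and continue. Division by zero
# is treated as "uncorrectable": the computation is abandoned.
cache = {}

def lookup(key):
    try:
        return cache[key]
    except KeyError:            # correctable: fix the cause and retry
        cache[key] = len(key)   # (here: recompute the missing value)
        return cache[key]

def divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:   # uncorrectable: abandon the computation
        return None

print(lookup("page"))   # -> 4 (the "fault" is handled, execution goes on)
print(divide(1, 0))     # -> None (the offending computation is terminated)
```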

Files

Files are intended for storing information on external media; that is, it is accepted that information recorded, for example, on a disk must reside inside a file. A file is usually understood as a named portion of space on a storage medium.

The main task of the file system is to hide the peculiarities of input/output and give the programmer a simple abstract model of device-independent files.

There is also an extensive category of system calls for manipulating files: creating, deleting, opening, closing, reading, writing, and so on. Users are familiar with such concepts of file system organization as the directory, current directory, root directory, and path. System calls are also available in the operating system for manipulating these objects.

Processes, threads

The concept of a process is one of the most fundamental in an OS. Closely related to it are the concepts of threads, or lightweight processes.
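A brief sketch of the difference (an illustration only; precise definitions come later in the course): threads of one process share its address space, so they can communicate through ordinary shared variables, which is what makes them "lightweight" compared to separate processes.

```python
import threading

# A process owns resources (address space, open files); threads are
# streams of execution inside it. The threads below run in one process
# and therefore all see the same 'results' list -- no copying and no
# inter-process communication is needed.
results = []

def worker(name):
    results.append(name)   # shared address space: plain shared data

threads = [threading.Thread(target=worker, args=(f"t{i}",))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # -> ['t0', 't1', 't2']
```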

OS architectural features

So far we have spoken about what operating systems do. Now we need to see how they do it: what they look like from the inside and what approaches to their construction exist.

Monolithic kernel

In fact, an operating system is an ordinary program, so it would be logical to organize it the same way as most programs, that is, to compose it of procedures and functions. In this case the components of the operating system are not independent modules but parts of one big program. This structure of the operating system is called a monolithic kernel. A monolithic kernel is a set of procedures, each of which can call any other. All procedures run in privileged mode.

Thus, a monolithic kernel is a scheme of the operating system in which all of its components are parts of one program, use common data structures, and interact with each other by direct procedure calls. For a monolithic operating system, the kernel coincides with the entire system.

In many operating systems with a monolithic kernel, the kernel is assembled, that is, compiled, separately for each computer on which the operating system is installed. In this case one can select the list of hardware and software protocols whose support will be included in the kernel. Since the kernel is a single program, recompilation is the only way to add new components to it or to remove unused ones. The presence of unnecessary components in the kernel is highly undesirable, since the kernel always resides entirely in main memory. In addition, eliminating unnecessary components improves the reliability of the operating system as a whole.

The monolithic kernel is the oldest way of organizing operating systems. An example of systems with a monolithic kernel is most Unix systems.

Even in monolithic systems some structure can be discerned. Just as inclusions of crushed stone can be discerned in a block of concrete, service procedures corresponding to system calls are interspersed in a monolithic kernel. Service procedures are executed in privileged mode, while user programs are executed in unprivileged mode. To move from one privilege level to another, a main utility program may be used that determines which system call was made, checks the correctness of the input data for the call, and transfers control to the corresponding service procedure with a transition to privileged mode. Sometimes there is also a set of software utilities that help perform the service procedures.

  2. Layered systems

Continuing the structuring, the entire computing system can be broken into a number of smaller levels with well-defined connections between them, so that objects of level N may call only objects of level N-1. The lowest level in such systems is usually the hardware; the top level is the user interface. The lower the level, the more privileged commands and actions a module located at that level can perform. This approach was first applied in the THE (Technische Hogeschool Eindhoven) system, created by Dijkstra and his students in 1968. That system had the following levels:

Fig. 1.2. The layered structure of the THE system

Layered systems are easy to implement: when using the operations of a lower layer, you do not need to know how they are implemented, only what they do. Layered systems are easy to test: debugging starts from the bottom layer and proceeds layer by layer, and when an error occurs you can be sure it is in the layer under test. Layered systems are easy to modify: if necessary, you can replace a single layer without touching the others. But layered systems are difficult to design: it is hard to determine the correct order of the layers and what belongs to which layer. Layered systems are also less efficient than monolithic ones: to perform an I/O operation, for example, a user program has to pass sequentially through all the layers from top to bottom.
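
The rule that level N may call only level N-1 can be sketched directly. The layer names and operations below are illustrative, not taken from THE itself:

```python
# Each layer exposes operations only to the layer directly above it.
class Hardware:                      # level 0 (illustrative)
    def read_sector(self, n):
        return f"raw sector {n}"

class DeviceDriver:                  # level 1: may call only Hardware
    def __init__(self, hw):
        self._hw = hw
    def read_block(self, n):
        return self._hw.read_sector(n)

class FileSystem:                    # level 2: may call only DeviceDriver
    def __init__(self, drv):
        self._drv = drv
    def read_file(self, name):
        return f"{name}: {self._drv.read_block(0)}"

fs = FileSystem(DeviceDriver(Hardware()))
print(fs.read_file("a.txt"))         # the request passes down through every layer
```

Replacing `DeviceDriver` with a different implementation would not require touching `FileSystem`, which is exactly the modifiability argument made above; the cost is that every request traverses all layers.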

  3. Virtual machines

At the beginning of the lecture we spoke of viewing the operating system as a virtual machine, when the user does not need to know the details of the internal structure of the computer: the user works with files rather than with magnetic heads and a motor, works with a huge virtual rather than a limited real RAM, and does not much care whether he is the only user on the machine or not. Let us consider a slightly different approach. Let the operating system implement a virtual machine for each user, but instead of making his life easier, make it harder. Each such virtual machine appears to the user as bare hardware: a copy of all the hardware in the computing system, including the processor, privileged and unprivileged instructions, input/output devices, interrupts, etc. And the user is left alone with this hardware. When he tries to access the virtual hardware at the level of privileged instructions, what actually happens is a system call to the real operating system, which performs all the necessary actions. This approach allows each user to load his own operating system onto the virtual machine and do with it whatever he wishes.

Fig. 1.3. A virtual machine variant

The first real system of this kind was CP/CMS, or VM/370 as it is now called, for the IBM/370 family of machines.

The disadvantage of such operating systems is the reduced efficiency of the virtual machines compared to a real computer, and they tend to be very bulky. The advantage is the ability to run, on one computing system, programs written for different operating systems.
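
The idea that a guest's privileged instruction becomes a call into the real operating system can be sketched with a toy monitor. All names here are illustrative; real virtual machine monitors intercept traps in hardware.

```python
# A toy VM monitor: guests run unprivileged; every "privileged"
# operation traps into the monitor, which performs it on the real
# hardware on the guest's behalf.
class Monitor:
    def __init__(self):
        self.log = []                       # privileged actions performed

    def trap(self, guest, op):
        # the real OS/monitor carries out the privileged action itself
        self.log.append((guest, op))
        return f"{op} done for {guest}"

class GuestOS:
    def __init__(self, name, monitor):
        self.name, self.monitor = name, monitor

    def privileged(self, op):
        # executing a privileged instruction on the virtual hardware
        # actually becomes a call into the monitor
        return self.monitor.trap(self.name, op)

m = Monitor()
g1, g2 = GuestOS("CMS", m), GuestOS("Unix", m)
g1.privileged("disk I/O")
g2.privileged("set timer")
print(m.log)   # each guest's privileged action was handled by the monitor
```

Each guest believes it owns the hardware; the monitor's log shows that every privileged action actually passed through the single real system underneath.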

  4. Microkernel architecture

The current trend in the development of operating systems is to move a significant part of the system code to the user level while minimizing the kernel. This is an approach to kernel construction called the microkernel architecture, in which most of the operating system's components are independent programs. Interaction between them is provided by a special kernel module called the microkernel. The microkernel runs in privileged mode and provides interaction between programs, scheduling of processor use, primary interrupt handling, I/O operations, and basic memory management.

Fig. 1.4. Microkernel operating system architecture

The remaining components of the system communicate with each other by passing messages through the microkernel.

The main advantage of the microkernel architecture is the high degree of modularity of the operating system kernel. This makes it much easier to add new components. In a microkernel operating system you can load and unload new drivers, file systems, etc. without interrupting its operation. Debugging kernel components is greatly simplified, since a new version of a driver can be loaded without restarting the entire operating system. Since the kernel components of the operating system are fundamentally no different from user programs, ordinary tools can be used to debug them. The microkernel architecture also increases system reliability, because a failure at the level of an unprivileged program is less dangerous than a failure in kernel mode.

At the same time, the microkernel architecture introduces additional overhead associated with message passing, which significantly affects performance. For a microkernel operating system not to be inferior in speed to operating systems based on a monolithic kernel, the division of the system into components must be designed very carefully, minimizing the interaction between them. Thus the main difficulty in creating microkernel operating systems is the need for very careful design.
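
The message-passing interaction described above can be sketched as follows. The component name is illustrative; a real microkernel would also handle scheduling and address-space isolation.

```python
from queue import Queue

# A toy microkernel: system components are independent and interact
# only by messages routed through the kernel's mailboxes.
class Microkernel:
    def __init__(self):
        self.mailboxes = {}

    def register(self, name):
        self.mailboxes[name] = Queue()      # one mailbox per component

    def send(self, dst, msg):
        self.mailboxes[dst].put(msg)        # the message-passing overhead
                                            # discussed above lives here
    def receive(self, name):
        return self.mailboxes[name].get()

k = Microkernel()
k.register("fs_server")                     # a file-system server component
k.send("fs_server", ("open", "/etc/passwd"))
print(k.receive("fs_server"))               # the server picks up the request
```

Every request costs a send and a receive through the kernel, which is exactly the overhead that careful component design tries to minimize.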

  5. Mixed systems

All the approaches to building operating systems considered above have their advantages and disadvantages. In most cases, modern operating systems use various combinations of these approaches. For example, the Linux kernel is a monolithic system with elements of a microkernel architecture. When compiling the kernel you can enable dynamic loading and unloading of many kernel components, called modules. When a module is loaded, its code is loaded at the system level and linked to the rest of the kernel. Any functions exported by the kernel can be used inside a module.

Another example of the mixed approach is the ability to run an operating system with a monolithic kernel under the control of a microkernel. This is how 4.4BSD and MkLinux, based on the Mach microkernel, are designed. The microkernel provides virtual memory management and low-level drivers. All other functions, including interaction with application programs, are carried out by the monolithic kernel. This approach arose from attempts to take advantage of the microkernel architecture while keeping the well-debugged code of the monolithic kernel as intact as possible.

Elements of the microkernel architecture and of the monolithic kernel are most closely intertwined in the Windows NT kernel. Although Windows NT is often called a microkernel operating system, this is not entirely true. The NT microkernel is too large (more than 1 MB) to deserve the "micro" prefix. The components of the Windows NT kernel reside in pageable memory and interact with each other by passing messages, as expected in microkernel operating systems. At the same time, all kernel components operate in the same address space and actively use common data structures, which is typical of operating systems with a monolithic kernel. According to Microsoft experts, the reason is simple: a purely microkernel design is commercially unprofitable because it is inefficient.

Thus, Windows NT can rightfully be called a hybrid operating system.

  1. OS classification

There are several schemes for classifying operating systems. Below is a classification based on several characteristics from the user's point of view.

  1. Implementation of multitasking

By the number of simultaneously performed tasks, operating systems can be divided into two classes:

  • multitasking (Unix, OS/2, Windows);
  • single-tasking (for example, MS-DOS).

A multitasking OS, solving the problems of resource allocation and contention, fully implements multiprogramming in accordance with the requirements of the section "Basic concepts of the OS".

Multitasking that embodies the idea of time sharing is called preemptive. Each program is allocated a quantum of processor time, after which control is transferred to another program. The first program is then said to be preempted. User programs in most commercial operating systems run in preemptive mode.

In some operating systems (Windows 3.11, for example) a user program can monopolize the processor, that is, run in non-preemptive mode. As a rule, in most systems the code of the OS itself is not subject to preemption. Critical programs, in particular real-time tasks, are also not preempted. This is discussed in more detail in the lecture on processor scheduling.

The examples given show the approximate nature of this classification. Thus, in MS-DOS it is possible to launch a child task and to have two or more tasks in memory at the same time. Nevertheless this OS is traditionally considered single-tasking, mainly because of the absence of protection mechanisms and communication capabilities.
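
The quantum-based preemption described above can be sketched as a round-robin queue. The task names and durations are illustrative:

```python
from collections import deque

# Round-robin preemption: each program runs for one quantum of
# processor time, then is preempted and moved to the back of the
# ready queue until it has used up all the time it needs.
def run_preemptive(tasks, quantum):
    order = []                           # which task held the CPU each quantum
    ready = deque(tasks.items())         # (name, remaining time) pairs
    while ready:
        name, left = ready.popleft()     # next program gets the processor
        order.append(name)
        left -= quantum                  # the quantum expires...
        if left > 0:
            ready.append((name, left))   # ...and the program is preempted
    return order

print(run_preemptive({"A": 2, "B": 1}, 1))   # ['A', 'B', 'A']
```

Task A needs two quanta but does not get them back to back: after its first quantum it is preempted in favor of B, which is the time-sharing behavior the text describes.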

  2. Multi-user support

By the number of concurrent users, operating systems can be divided into:

  • single-user (MS-DOS, Windows 3.x);
  • multi-user (Windows NT, Unix).

The most significant difference between these OSes lies in the presence, in multi-user systems, of mechanisms for protecting the personal data of each user.

  3. Multiprocessing

Until recently, computing systems had a single central processor. As a result of demands for increased performance, multiprocessor systems appeared, consisting of two or more general-purpose processors executing instructions in parallel. Multiprocessing support is an important feature of an OS and complicates all resource management algorithms. Multiprocessing is implemented in OSes such as Linux, Solaris, Windows NT, and several others.

Multiprocessor OSes are divided into symmetric and asymmetric. In a symmetric OS, an identical kernel runs on each processor, and a task can execute on any processor; that is, processing is completely decentralized. At the same time, all of memory is available to each processor.

In an asymmetric OS the processors are not equal. There is usually a master processor and subordinate (slave) processors, whose workload and its nature are determined by the master processor.

  4. Real-time systems

The class of multitasking OSes includes, along with batch systems and time-sharing systems, real-time systems, which have not been mentioned so far.

They are used to control various technical objects or technological processes. Such systems are characterized by a maximum permissible response time to an external event within which the program controlling the object must complete. The system must process incoming data faster than it can arrive, possibly from several sources simultaneously.

Such strict constraints affect the architecture of real-time systems: for example, they may lack virtual memory, since supporting it causes unpredictable delays in program execution. (See also the sections on process scheduling and virtual memory implementation.)

The given classification of OSes is not exhaustive. The features of the use of modern OSes are reviewed in more detail in [Olifer, 2001].

An operating environment is a set of system programs whose main purpose is to provide the user with a user interface (UI) and a programming interface (API) significantly superior in their capabilities to the similar interfaces provided by the operating system. A distinctive feature of an operating environment is that it is built on top of an existing OS, i.e. it cannot operate without that OS.

An operating shell is a set of system programs providing a user interface (UI) to an operating system that is superior in one way or another (usually in its level of non-procedurality and its proximity to the language of the user's professional domain) to the similar user interface tools provided by the OS itself.

  1. Conclusion

We looked at different views on what an operating system is, studied the history of the development of operating systems, found out what functions an OS usually performs, and finally examined the existing approaches to building operating systems. The next lecture will be devoted to clarifying the concept of a "process" and to issues of process scheduling.

  1. Annex 1.
    1. Some information about computer architecture

The main hardware components of a computer are the main memory, the central processor (CPU), and peripheral devices. To exchange data with each other, these components are connected by a group of wires called a bus (see Fig. 1.5).

Fig. 1.5. Some computer components

Main memory is used to store programs and data in binary form and is organized as an ordered array of cells, each with a unique numeric address. Typically the cell size is one byte. The typical operations on main memory are reading and writing the contents of a cell with a specific address.
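
This organization can be sketched directly: a byte array whose indices play the role of addresses, with the two typical operations on it. The memory size chosen here is arbitrary.

```python
# Main memory as an ordered array of one-byte cells with numeric
# addresses; the two typical operations are read and write by address.
memory = bytearray(256)          # 256 one-byte cells, addresses 0..255

def write_cell(addr, value):
    memory[addr] = value         # write the contents of cell `addr`

def read_cell(addr):
    return memory[addr]          # read the contents of cell `addr`

write_cell(0x10, 0x2A)           # store the value 42 at address 0x10
print(read_cell(0x10))           # 42
```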

Various operations on data are performed by an isolated part of the computer called the central processing unit (CPU). The CPU also has its own storage locations called registers, which are divided into general-purpose registers and specialized registers. In modern computers the register size is usually 4 to 8 bytes. General-purpose registers are used for temporary storage of data and the results of operations. To process information, data is usually transferred from memory cells to general-purpose registers, the operation is performed by the central processor, and the results are transferred back to main memory.

Specialized registers are used to control the operation of the processor. The most important of these are the program counter, the instruction register, and the register containing program status information.

Programs are stored as sequences of machine instructions that the central processor must execute. Each instruction consists of an operation field and operand fields, that is, the data on which the operation is performed. The entire set of machine instructions is called the machine language.

A program is executed as follows. The machine instruction pointed to by the program counter is read from memory and copied into the instruction register, where it is decoded and then executed. After the instruction is executed, the program counter points to the next instruction. These actions, called a machine cycle, are then repeated.
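
The machine cycle can be sketched as a fetch-decode-execute loop over a toy instruction set. The two instructions (`LOAD`, `ADD`) and the register names are illustrative, not any real machine language:

```python
# A toy machine cycle: fetch the instruction at the program counter,
# decode it, execute it, advance the counter, repeat.
def run(program, registers):
    pc = 0                                   # program counter
    while pc < len(program):
        instr = program[pc]                  # fetch into the instruction register
        op, *operands = instr                # decode: operation + operand fields
        if op == "LOAD":                     # execute the operation
            reg, value = operands
            registers[reg] = value
        elif op == "ADD":
            dst, src = operands
            registers[dst] += registers[src]
        pc += 1                              # counter now points to the next command
    return registers

regs = run([("LOAD", "R0", 5), ("LOAD", "R1", 7), ("ADD", "R0", "R1")],
           {"R0": 0, "R1": 0})
print(regs)   # {'R0': 12, 'R1': 7}
```

Note how the `ADD` step matches the data-path description given earlier: two general-purpose registers are selected, an operation is performed, and the result lands back in one of them.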

    2. Interaction with peripheral devices

Peripheral devices are designed for the input and output of information. Each device usually includes a specialized computer called a controller or adapter. When a controller is inserted into a connector on the motherboard, it connects to the bus and receives a unique number (address). The controller then monitors the signals on the bus and responds to signals addressed to it.

Any I/O operation involves a dialogue between the CPU and the device controller. When the processor encounters an I/O-related command in a program, it executes it by sending signals to the device controller. This is so-called programmed I/O.

In turn, any change in an external device results in a signal being transmitted from the device to the CPU. From the CPU's point of view this is an asynchronous event that requires its reaction. To detect such an event, between machine cycles the processor queries a special register containing information about the type of device that generated the signal. If a signal is present, the CPU executes a program specific to that device, whose task is to react appropriately to the event (for example, to place a character entered from the keyboard into a special buffer). Such a program is called an interrupt handler, and the event itself an interrupt, because it disrupts the planned work of the processor. After the handling is complete, the processor returns to the execution of the interrupted program. These actions of the computer are called interrupt-driven input/output.
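
The check-between-cycles behavior can be sketched as follows. The device names, the handlers, and the shared buffer are all illustrative:

```python
# Between machine cycles the CPU checks pending device signals; if a
# device has raised one, the matching interrupt handler runs, and then
# the processor returns to the interrupted program.
handlers = {"keyboard": lambda: buffer.append("k"),     # place the typed
            "timer":    lambda: buffer.append("tick")}  # character in a buffer
buffer = []
pending = []                         # signals raised by devices

def machine_cycle():
    buffer.append("work")            # one step of the running program

def cpu_loop(cycles):
    for _ in range(cycles):
        machine_cycle()
        while pending:               # poll the special register between cycles
            device = pending.pop(0)
            handlers[device]()       # run that device's interrupt handler;
                                     # afterwards execution resumes normally

pending.append("keyboard")           # the keyboard raises a signal
cpu_loop(2)
print(buffer)                        # ['work', 'k', 'work']
```

The handler interrupts the program's planned work exactly once, after which the ordinary machine cycles continue, which is the behavior the paragraph above describes.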

Modern computers also have the ability to transfer data directly between a controller and main memory, bypassing the CPU: the so-called direct memory access (DMA) mechanism.

Questions

  1. The purpose of an OS in computers;
  2. A brief history of the development of computing systems and operating systems;
  3. Definitions from OS theory and practice: system calls, interrupts, exceptions, files;
  4. Classification of OSes.


A modern computer is a complex hardware and software system. Writing computer programs, debugging them, and then executing them is a complex, time-consuming task. The main reason for this is the huge difference between what is convenient for people and what is convenient for computers. A computer understands only its own machine language (call it L0), but for a person the most convenient language is a spoken language, or at least a language for describing algorithms, an algorithmic language. The problem can be solved in two ways. Both involve developing commands that are more convenient for people than the machine commands built into the computer. Together these new commands form a certain language, which we will call L1.

The two methods of solving the problem differ in how the computer executes programs written in L1. The first method is to replace each command in L1 with an equivalent set of commands in L0. In this case the computer executes a new program written in L0 instead of the program written in L1. This technique is called translation.

The second way is to write a program in L0 that takes programs written in L1 as input, considers each command in turn, and immediately executes an equivalent set of L0 commands. This technique does not require compiling a new program in L0. It is called interpretation, and the program that carries out the interpretation is called an interpreter.
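
The two techniques can be contrasted in a few lines. The languages below are invented for illustration: L1 has two "human-friendly" commands, L0 has two "machine" commands, and the mapping between them stands in for a real translator's rules.

```python
# L1: a tiny "human-friendly" language; L0: the machine's own language.
L1_TO_L0 = {"INC": ["ADD 1"], "DOUBLE": ["MUL 2"]}   # illustrative mapping

def translate(l1_program):
    """Translation: build a whole new L0 program once, then run that."""
    l0_program = []
    for cmd in l1_program:           # replace each L1 command with an
        l0_program += L1_TO_L0[cmd]  # equivalent set of L0 commands
    return l0_program

def execute_l0(cmd, acc):
    """The 'hardware': it can run only L0 commands, on an accumulator."""
    op, n = cmd.split()
    return acc + int(n) if op == "ADD" else acc * int(n)

def interpret(l1_program, acc=0):
    """Interpretation: no new L0 program is built; each L1 command is
    examined in turn and its L0 equivalent is executed immediately."""
    for cmd in l1_program:
        for l0 in L1_TO_L0[cmd]:
            acc = execute_l0(l0, acc)
    return acc

prog = ["INC", "DOUBLE", "INC"]
print(translate(prog))               # ['ADD 1', 'MUL 2', 'ADD 1']
print(interpret(prog))               # (0 + 1) * 2 + 1 = 3
```

Translation pays its cost once and produces an artifact that can be run repeatedly; interpretation pays the decoding cost on every command but needs no separate output program, exactly the trade-off described above.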

In such a situation it is easier to imagine the existence of a hypothetical computer, or virtual machine, whose machine language is L1, than to think about translation and interpretation. Let us call this virtual machine M1, and the virtual machine with language L0, M0. It then becomes possible to write programs for the virtual machines as if they really existed.

Obviously, we can go further and create another set of commands that is even more human-oriented and less computer-oriented than L1. This set forms the language L2 and, accordingly, the virtual machine M2. We can continue in this way until we reach a language of level n that suits us.

Most modern computers consist of two or more levels. Level 0 is the machine's hardware. The electronic circuits at this level execute programs written in the level 1 language. The next level is the microarchitectural level.

At this level one can see collections of 8 or 32 (sometimes more) registers forming a local memory, and an ALU (arithmetic logic unit). The registers together with the ALU form the data path along which data flows. The basic operation of this path is as follows: one or two registers are selected, the ALU performs some operation on them, and the result is placed in one of these registers. On some machines the operation of the data path is controlled by a special program called firmware (microcode); on other machines this control is performed by hardware.

The next (second) level is the instruction set architecture level. Its instructions use registers and other hardware facilities. The instructions form the ISA (Instruction Set Architecture) layer, called the machine language. Typically a machine language contains from 50 to 300 instructions, used primarily to move data around the computer, perform arithmetic operations, and compare quantities.

The next (third) level is usually hybrid. Most of the commands in its language also belong to the instruction set architecture level. This level has some additional features: a set of new commands, a different memory organization, the ability to execute two or more programs simultaneously, and some others. Over time, the range of such commands has expanded significantly. It includes the so-called operating system macros, or supervisor calls, now called system calls.

The new features introduced at the third level are executed by an interpreter running at the second level. This interpreter was once called the operating system. The third-level commands identical to second-level commands are executed by firmware or hardware, not by the operating system. In other words, one part of the third-level commands is interpreted by the operating system, while the other part is interpreted by firmware. This is why this level is considered hybrid.

The operating system was created to automate the operator's work and to hide from the user the difficulties of dealing with the hardware, providing him with a more convenient system of commands. The lower three levels (zero through second) are not designed for the ordinary programmer to work with. They were originally intended for the interpreters and translators that support the higher levels. These translators and interpreters are written by system programmers who specialize in designing and building new virtual machines.

Above the operating system (OS) are the rest of the system programs: the command interpreter (shell), compilers, editors, etc. Such programs are not part of the OS (although users sometimes consider the shell to be the operating system). An operating system usually means the software that runs in kernel mode or, as it is also called, supervisor mode, and is protected from user interference by special hardware.

The fourth level is a symbolic form of one of the low-level languages (usually assembly language). At this level programs can be written in a human-readable form. These programs are first translated into the language of level 1, 2 or 3 and then interpreted by the corresponding virtual or actual (physical) machine.

Levels five and above are intended for application programmers who solve specific tasks in high-level languages (C, C++, C#, VBA, etc.). Compilers and editors at these levels run in user mode. At even higher levels are the users' application programs.

Most computer users have at least enough experience with an operating system to perform their everyday tasks effectively. However, they have difficulty when asked to define an operating system. To some extent the problem is that operating systems perform two main, but practically unrelated, functions: extending the capabilities of the computer and managing its resources.

From the user's point of view, the OS functions as an extended machine, or virtual machine, that is easier to program and operate than the actual hardware of the real computer. The operating system not only eliminates the need to work directly with disks by providing a simple, file-oriented interface, but also hides much of the unpleasant work with interrupts, timers, memory organization, and other low-level components.

However, the concept of the operating system as primarily a user-friendly interface is a top-down view. An alternative, bottom-up view presents the operating system as a mechanism present in the computer to manage all the parts of this highly complex system. According to this approach, the job of the operating system is to provide an organized and controlled allocation of processors, memory, disks, printers, input/output devices, timers, etc. among the various programs competing for the right to use them.

1.2. Operating system, environment and operating shell

Operating systems (OS) in their modern understanding (their purpose and essence) appeared much later than the first computers (and, most likely, they will disappear in this form in the computers of the future). Why and when did OSes appear? It is believed that the first digital computer, ENIAC (Electronic Numerical Integrator and Computer), was created in 1946 under the US Department of Defense's Project PX. (According to other sources, the first computer was created in England in 1943 to decipher the codes of German submarines.) 500 thousand dollars were spent on the project. The computer contained 18,000 vacuum tubes and a great deal of other electronics; it included 12 ten-digit adders, and to speed up some arithmetic operations it had a multiplier and a divider/square-root extractor. Programming came down to connecting various blocks with wires. Naturally, no software, let alone operating systems, existed at that time.

The intensive creation of various computer models dates back to the early 1950s. In those years the same groups of people took part in the design, creation, programming, and operation of computers. Programming was done exclusively in machine language (and later in assembly language); there was no system software other than libraries of mathematical and utility routines. Operating systems had not yet appeared, and all the tasks of organizing the computing process were solved manually by each programmer from a primitive control panel.

With the advent of semiconductor elements, the computing capabilities of computers increased significantly. Along with this came noticeable progress in the automation of programming and the organization of computational work. Algorithmic languages (Algol, Fortran, Cobol) and system software (translators, linkage editors, loaders, etc.) appeared. The execution of programs became more complex and included the following main actions:

  • loading the required translator (mounting the necessary magnetic tapes, etc.);
  • launching the translator and obtaining the program in machine code;
  • linking the program with library routines;
  • loading the program into RAM;
  • launching the program;
  • outputting the results of the program to a printing device or other peripheral device.

To organize the efficient use of all computer resources, positions of specially trained operators were introduced at computing centers; they professionally performed the work of organizing the computing process for all users of the center. However, no matter how well prepared an operator is, he cannot compete in speed with the computer's devices. As a result, the expensive processor stood idle most of the time, and the use of computers was not efficient.

To eliminate this idle time, attempts were made to develop special programs, monitors, the prototypes of the first operating systems, which performed the automatic transition from task to task. It is believed that the first operating system was created in 1952 by the General Motors research laboratory for its IBM-701 computers. In 1955, this company and North American Aviation jointly developed an operating system for the IBM-704 computer.

At the end of the 50s of the last century, leading manufacturers supplied operating systems with the following characteristics:

  • batch processing of a single job stream;
  • availability of standard input/output programs;
  • the possibility of automatic transition from program to program;
  • error recovery tools that automatically cleaned up the machine after a task failed and allowed the next task to be launched with minimal operator intervention;
  • job control languages that allowed users to describe their jobs and the resources required to complete them.

A package is a specially organized set (deck) of punched cards (job description, programs, data). To speed up work, it could be transferred to magnetic tape or disk, which reduced the downtime of expensive equipment. It should be said that today, thanks to progress in microelectronic technology and programming methodology, the cost of computer hardware and software has fallen significantly. The focus is therefore now on making the work of users and programmers more efficient, since skilled labor accounts for a much larger share of the total cost of computing systems than computer hardware and software do.

The place of the operating system in the hierarchical structure of computer software and hardware can be represented as shown in Fig. 1.1.


Fig. 1.1. Hierarchical structure of computer hardware and software

The lowest level contains the various computer devices, consisting of microcircuits, conductors, power supplies, cathode ray tubes, etc. This level can be divided into sublevels, for example device controllers and then the devices themselves; finer divisions are also possible. Above it is the microarchitectural level, where physical devices are treated as separate functional units.

The microarchitectural level contains the internal registers of the central processor (there may be several processors) and the arithmetic logic units together with their control facilities. At this level the execution of machine instructions is implemented; in the course of executing instructions, the registers of the processor and devices, as well as other hardware capabilities, are used. The instructions visible to an assembly language programmer form the ISA (Instruction Set Architecture) level, often called machine language.

The operating system is designed to hide all this complexity. The end user is usually not interested in the details of the hardware and sees the computer as a set of applications. An application can be written by a programmer in any programming language. To simplify this work, the programmer uses a set of system programs, some of which are called utilities; they implement frequently used functions that help with working with files, managing I/O devices, and so on. A programmer uses these tools when developing programs, and applications call the utilities at run time to perform specific functions. The most important of the system programs is the operating system, which frees the programmer from needing in-depth knowledge of the computer's structure and provides a convenient interface for using it. The operating system acts as an intermediary, making it easier for programmers, users, and applications to access the computer's services and capabilities.

Thus, an operating system is a set of programs that controls the operation of application programs and system applications and acts as an interface between users, programmers, application programs, system applications, and the computer hardware.

Figuratively speaking, the computer hardware provides "raw" computing power, and the task of the operating system is to make this computing power accessible and, as far as possible, convenient for the user. A programmer need not know the details of managing specific computer resources (such as a disk); instead, he makes the appropriate calls to the operating system to obtain the necessary services and functions from it. This set of services and functions constitutes the operating environment in which application programs execute.

Thus, an operating environment is the software environment created by an operating system, which defines an application programming interface (API) as the set of system functions and services (system calls) provided to application programs. An operating environment may include several application programming interfaces. In addition to the main operating environment, called the native environment, additional software environments can be organized through emulation (simulation) to allow the execution of applications designed for other operating systems and even other computers.
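To make the idea of an API built on system calls concrete, here is a minimal Python sketch (an illustration, not an example from the text). In CPython, os.pipe, os.write, os.read, and os.close are thin wrappers over the corresponding kernel system calls, so the application obtains kernel services without touching the hardware directly:

```python
import os

def copy_through_kernel(data: bytes) -> bytes:
    """Send bytes through a kernel pipe using raw system-call wrappers."""
    read_fd, write_fd = os.pipe()    # system call: ask the kernel for a pipe
    try:
        os.write(write_fd, data)     # system call: hand the data to the kernel
    finally:
        os.close(write_fd)           # system call: release the descriptor
    try:
        result = os.read(read_fd, len(data))  # system call: read it back
    finally:
        os.close(read_fd)
    return result

print(copy_through_kernel(b"hello, kernel"))  # b'hello, kernel'
```

The application never programs the disk or memory controllers itself; it only issues requests to the operating environment, which is exactly the intermediary role described above.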

Another important concept related to the operating system concerns the implementation of user interfaces. As a rule, an operating system provides a convenient user experience through user interface tools. These tools can be an integral part of the operating environment (for example, the Windows graphical interface or the MS DOS text command line), or they can be implemented by a separate system program, an operating system shell (for example, Norton Commander for MS DOS). In general, an operating system shell is the part of the operating environment that determines the user interface and its implementation (text, graphical, etc.), as well as the command and service facilities available to the user for controlling application programs and the computer.

Let's move on to consider the evolution of operating systems.

Topic 1. Computing system. Composition of the computing system

One of the main tasks of technical disciplines is the selection of means and methods of mechanization and automation of work. Automation of work with data has its own characteristics and special devices are used for its implementation.

A set of devices designed for automatic or automated data processing is called computer technology.

A specific set of interacting devices and programs designed to serve one area of work is called a computing system. The central device of most computing systems is the computer. It is designed to automate the creation, storage, processing, and transmission of data.

The composition of a computing system is called its configuration.

We will consider the hardware configuration of computing systems and their software configuration separately. The criteria for choosing a hardware or software solution are performance and efficiency.

Fig. 1. Composition of the computing system

Hardware

The hardware of a computing system includes the devices and instruments that form its hardware configuration. Modern computers and computing systems have a block-modular design: the hardware configuration needed for a specific kind of work can be assembled from ready-made units and blocks.

Based on how devices are located relative to the central processor, they are divided into internal and external devices. External devices, as a rule, include most input/output devices (also called peripheral devices) and some devices designed for long-term data storage.

Coordination between individual units and blocks is performed by transitional hardware-logic devices called hardware interfaces. The standards for hardware interfaces are called protocols. Thus, a protocol is a set of technical specifications that device developers must satisfy for their devices to work correctly with other devices.

The numerous interfaces present in the architecture of any computing system can be divided into two large groups: serial and parallel.

1. A serial interface transfers data sequentially, bit by bit; its performance is measured in bits per second (bps, Kbps, Mbps).

2. A parallel interface transmits data simultaneously in groups of bits. The number of bits involved in one transfer is determined by the bit width of the interface; for example, an eight-bit parallel interface transfers one byte (8 bits) per cycle. Parallel interfaces are usually more complex than serial ones but provide higher performance. They are used where data transfer speed matters: for connecting printing devices, graphics input devices, devices that record data on external media, etc. The performance of parallel interfaces is measured in bytes per second (B/s, KB/s, MB/s).
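The bandwidth arithmetic behind these two interface types can be sketched in a few lines of Python (the rates and payload size are arbitrary illustrative numbers, not figures from the text):

```python
def serial_transfer_time(payload_bytes: int, bits_per_second: float) -> float:
    """A serial interface moves one bit at a time: time = total bits / rate."""
    return payload_bytes * 8 / bits_per_second

def parallel_transfer_time(payload_bytes: int, bus_width_bits: int,
                           cycles_per_second: float) -> float:
    """A parallel interface moves bus_width_bits per cycle."""
    cycles = (payload_bytes * 8 + bus_width_bits - 1) // bus_width_bits
    return cycles / cycles_per_second

# At the same clock rate, an 8-bit parallel bus is 8x faster than a serial line:
payload = 1_000_000                                            # 1 MB
t_serial = serial_transfer_time(payload, 1_000_000)            # 1 Mbit/s line
t_parallel = parallel_transfer_time(payload, 8, 1_000_000)     # 8 bits/cycle at 1 MHz
print(t_serial / t_parallel)  # 8.0
```

This also shows why serial rates are quoted in bits per second and parallel rates in bytes per second: the natural unit of one transfer differs.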

Initially, serial interfaces were used to connect "slow" devices (simple low-quality printing devices, devices for input and output of character and signal information, control sensors, low-performance communication devices, etc.), as well as in cases where there were no significant restrictions on the duration of data exchange.

However, with the development of technology, new, high-speed serial interfaces have appeared that are not inferior to parallel ones, and often surpass them in throughput. Today, serial interfaces are used to connect any type of device to a computer.

Software

Programs are ordered sequences of commands. The ultimate goal of any computer program is to control hardware.

The composition of a computer system's software is called its software configuration. There is a relationship between programs, just as between physical units and blocks: many programs rely on other, lower-level programs, so we can speak of an inter-program interface. The possibility of such an interface likewise rests on technical specifications and interaction protocols, and in practice it is ensured by dividing software into several interacting levels.

The software levels form a pyramidal structure. Each next level relies on the software of the previous levels, and each higher level increases the functionality of the whole system. For example, a computing system with only base-level software cannot perform most functions, but it allows system software to be installed.

Fig. 2. Software structure

1. Base level. The lowest level of software is the base software, which is responsible for interacting with the underlying hardware. As a rule, base software is built directly into the hardware and is stored in special chips called read-only memory devices (ROM, Read-Only Memory). Programs and data are written ("flashed") into ROM chips at the production stage and cannot be changed during operation.

2. System level. The system level is transitional. Programs operating at this level ensure the interaction of other programs in the computing system with base-level programs and directly with the hardware; that is, they perform "intermediary" functions. System-level software includes:

User interface tools: thanks to them, the user can enter data into the computing system, control its operation, and obtain results in a convenient form.

Drivers: they extend the capabilities of the OS, allowing it to work with a particular connected device by teaching it a new data exchange protocol, etc.

The collection of system-level software forms the core of the computer's operating system. We will consider the full concept of an operating system a little later; here we only note that if a computer is equipped with system-level software, it is already prepared for installing higher-level programs, for the interaction of software with hardware, and, most importantly, for interaction with the user. In other words, the presence of an operating system kernel is an indispensable condition for a person to be able to work with a computing system in practice.

3. Service level. Software at this level interacts with both base-level and system-level programs. The main purpose of service programs (also called utilities) is to automate checking, setting up, and configuring the computer system. In many cases they are used to extend or improve the functions of system programs. Some utilities (as a rule, maintenance programs) are included in the operating system from the start, but most are external to the operating system and serve to extend its functions.

There are two alternative directions in the development and operation of utility programs:

a) integration with the operating system - utility programs can change the consumer properties of system programs, making them more convenient for practical work.

b) autonomous operation - utilities are loosely coupled to the system software, but provide the user with more opportunities to personalize their interaction with the hardware and software.

4. Application level. Application-level software is a set of application programs with whose help specific tasks are performed at a given workplace.

Application Software Examples

1. Text editors. Their main functions are:

  • entering and editing text data;
  • automating the input and editing processes.

For input, output, and storage operations, text editors call on system software (this is typical of all other kinds of application programs as well).

2. Word processors. The main difference between word processors and text editors is that they allow you not only to enter and edit text but also to format it, that is, to design its appearance. Accordingly, the main tools of word processors ensure the interaction of the text, graphics, tables, and other objects that make up the final document, while additional tools automate the formatting process.

The modern style of working with documents involves two alternative approaches: working with paper documents and working with electronic documents (paperless technology). Therefore, word processors support two kinds of formatting: formatting documents intended for printing, and formatting electronic documents intended for display on screen. The techniques and methods differ significantly between the two cases, and word processors differ accordingly, although many successfully combine both approaches.

Ministry of Education and Science of the Russian Federation

"St. Petersburg National Research


LECTURE NOTES

Module No. 3. Operating systems theory

Lecture topic: Operating systems theory


In asymmetric operating systems, the processors are not equal. Usually there is a master processor and slave processors, whose workload and the nature of whose tasks are determined by the master.

Real-time systems

Alongside batch systems and time-sharing systems, the category of multitasking operating systems also includes real-time systems, which have not been mentioned so far.

They are used to control various technical objects or technological processes. Such systems are characterized by a maximum permissible response time to an external event, within which the program controlling the object must complete. The system must process incoming data at least as fast as it arrives, possibly from several sources simultaneously.

Such strict constraints affect the architecture of real-time systems: for example, they may lack virtual memory, since supporting it causes unpredictable delays in program execution.

The OS classification given here is not exhaustive.

Lecture 3

Process Definition

In general, a process is an activity associated with the execution of a program on a processor.

When programs execute on a central processor, the following characteristic states are most often distinguished:

1. generation - conditions are being prepared for the program's first execution on the processor;

2. active state, or "running" - the program is executing on the processor;

3. waiting - the program is not executing because some resource it requires is busy;

4. ready - the program is not executing, but all the resources it currently needs have been granted, except the central processor;

5. termination - normal or abnormal completion of program execution, after which the processor and other resources are no longer granted to it.

A process stays in each of its admissible states for some time and then transitions to another admissible state. The set of admissible states, and of admissible transitions between them, is usually specified in the form of a process existence graph.
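The existence graph described above can be sketched as a small state machine. The five state names follow the list in the text; the exact set of allowed transitions is an illustrative assumption:

```python
# Allowed transitions of the process existence graph (an assumed, typical set).
ALLOWED = {
    "generation": {"ready"},
    "ready":      {"running"},
    "running":    {"waiting", "ready", "terminated"},
    "waiting":    {"ready"},
    "terminated": set(),
}

class Process:
    """A process as the OS sees it: a current state plus a trace of visited states."""
    def __init__(self):
        self.state = "generation"
        self.trace = ["generation"]

    def move(self, new_state: str) -> None:
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.trace.append(new_state)

p = Process()
for s in ("ready", "running", "waiting", "ready", "running", "terminated"):
    p.move(s)
print(p.trace)  # the trace records the order of states visited
```

Attempting a transition not present in the graph (say, generation directly to running) raises an error, which is exactly the OS's job: permitting only admissible states and transitions.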

For the OS, a process in this interpretation is an object for which it must implement each of the admissible states, as well as the admissible transitions between states in response to the events that can cause them. Such events can also be initiated by the processes themselves, which can request the processor or some other resource needed for program execution.

Properties and classification

Processes are characterized by a number of time parameters. At some moment a process is generated (created), and after some time it terminates. The interval between these moments is called the process existence interval.

At the moment of generation, the sequence and duration of the process's stays in each of its states (the process trace) are in general unpredictable, and so, consequently, is the length of the existence interval. For certain kinds of processes, however, execution must be planned so that the process is guaranteed to complete before a specific moment in time. Processes of this class are called real-time processes. Another class consists of processes whose existence interval must not exceed the acceptable computer response time to user requests; these are called interactive processes. Processes belonging to neither class are called batch processes.

In any OS, processes are spawned at the request of an existing (or previously existing) process. The process that issues the request is called the generating process, and the one created on request is the generated process. If a generated process, during its existence interval, in turn issues a request to generate another process, it simultaneously becomes a generating process.

When managing processes, it is important to ensure that the results of each process are reproducible and to track and control the situation that unfolds as the process develops. Often, therefore, not only the result of a computation matters but also how that result was achieved. From this standpoint, the OS compares processes by their dynamic properties using the concept of a trace: the order and duration of the process's stays in its admissible states during its existence interval.

Two processes that produce the same end result when processing the same input data, whether by the same or even by different programs, on the same or on different processors, are called equivalent. In general, the traces of equivalent processes do not coincide. If in each of the equivalent processes the data are processed by the same program, but the traces in general do not coincide, such processes are called identical. When the traces of identical processes coincide, the processes are called equal. In all other cases the processes are different.
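These comparison rules can be written as a small classifier. The dictionary representation of a finished process (result, program, trace) is an illustrative assumption; the function returns the most specific of the labels defined in the text:

```python
def classify_processes(p: dict, q: dict) -> str:
    """Classify two finished processes by result, program, and trace."""
    if p["result"] != q["result"]:
        return "different"       # not even equivalent
    if p["program"] != q["program"]:
        return "equivalent"      # same result, different programs
    if p["trace"] != q["trace"]:
        return "identical"       # same result and program, traces differ
    return "equal"               # traces coincide as well

a = {"result": 42, "program": "sum", "trace": ["generation", "running", "terminated"]}
b = {"result": 42, "program": "sum", "trace": ["generation", "ready", "running", "terminated"]}
print(classify_processes(a, b))  # identical
```

Note that identical and equal processes are still equivalent; the function simply reports the narrowest applicable category.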

The difficulty of process management is that at the moment processes are spawned their traces are unknown. In addition, it is necessary to take into account how the existence intervals of processes relate in time. If the intervals of two processes do not intersect in time, the two processes are called sequential with respect to each other. If the two processes exist simultaneously throughout the time interval under consideration, they are parallel with respect to each other. If on the interval under consideration there is at least one point at which one process exists but the other does not, and at least one point at which both exist simultaneously, the two processes are called combined.
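One simple reading of these definitions, for processes whose existence intervals are already known, can be sketched as follows (treating "parallel" as coexistence at every instant of the considered span, i.e. identical intervals, which is an illustrative simplification):

```python
def interval_relation(a: tuple, b: tuple) -> str:
    """Classify two existence intervals (start, end) per the text's definitions."""
    (a0, a1), (b0, b1) = a, b
    if a1 < b0 or b1 < a0:       # no common instant
        return "sequential"
    if (a0, a1) == (b0, b1):     # both exist at every instant considered
        return "parallel"
    return "combined"            # some instants shared, some not

print(interval_relation((0, 5), (6, 9)))   # sequential
print(interval_relation((0, 5), (0, 5)))   # parallel
print(interval_relation((0, 5), (3, 9)))   # combined
```

Real schedulers face the harder version of this problem: the intervals are not known in advance, as the paragraph above points out.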

In the operating system, processes are customarily distinguished not only by time but also by the place of their development, that is, by the processor on which the process's program executes. The reference point is the central processor (or processors), on which so-called program, or internal, processes develop. This name implies that the system can also contain external processes: processes that develop under the supervision or control of the OS on processors other than the central one. They can be, for example, input/output processes developing in a channel. The activity of any computer user who, in one form or another, enters through the OS the information required to execute one or more programs can also be regarded as an external process.

Program processes are usually divided into system and user processes. When a system process develops, a program belonging to the operating system executes; when a user process develops, a user (application) program executes.

Processes, regardless of their type, can be interconnected or isolated from each other. Two processes are interconnected if some kind of connection is maintained between them by the process control system: functional, spatio-temporal, control, informational, etc. Otherwise they are isolated (more precisely, weakly connected, since even in the absence of obvious connections they may be connected indirectly and influence each other's development in certain ways).

If there is a control connection between processes, the "generating-generated" relationship discussed above is established. If two interconnected processes share some resources during their development but are not informationally connected, i.e., do not exchange information, such processes are called information-independent. The connection between such processes can be functional or spatio-temporal. When there are information links between two processes, they are called interacting, and the schemes, and hence the mechanisms, for establishing such links can differ.

The specifics are determined, first, by the dynamics of the processes (whether the interacting processes are sequential, parallel, or combined) and, second, by the chosen method of communication (explicit, via explicit message exchange between processes, or implicit, via shared data structures). When it is necessary to emphasize the resource connection between interrelated processes, they are called competing.

Operating systems (OS) in their modern sense, their purpose and essence, appeared much later than the first computers (and will most likely disappear in that essence in the computers of the future). Why and when did OSs appear? It is believed that the first digital computer, ENIAC (Electronic Numerical Integrator and Computer), was created in 1946 under Project PX of the US Department of Defense. 500 thousand dollars were spent on the project. The computer contained 18,000 vacuum tubes and a great deal of other electronics; it included 12 ten-digit adders and, to speed up some arithmetic operations, a multiplier and a divider/square-root unit. Programming came down to connecting the various blocks with wires. Naturally, no software, much less operating systems, existed at that time.

The intensive creation of various computer models dates to the early 1950s. In those years the same groups of people participated in the design, creation, programming, and operation of computers. Programming was done entirely in machine language (and later in assembly language); there was no system software other than libraries of mathematical and utility routines. Operating systems had not yet appeared, and every programmer solved all the tasks of organizing the computing process manually, from a primitive computer control panel.
