IBM PC-compatible computers. Brief information about IBM PC-compatible computers

The computer configurator with compatibility check allows you to quickly assemble a system unit with the technical characteristics required by the user. Using our online designer, you can easily assemble a reliable office machine, a home multimedia system unit, or a powerful gaming setup.

Computer assembly online

Today, as for many years past, assembling a computer from individually chosen components remains popular. It is a good way to get exactly what you want: nothing limits you, and there are hundreds of component options available, among which you are sure to find one you like.

Our online store lets you assemble a computer online through the configurator. The process is organized into categories of components, from the processor to the power supply. Each category contains a wide range of models, with descriptions of their characteristics for ease of selection.

To simplify the selection of components, the configurator has a compatibility filter for the main components of the build. For example, once you select a specific processor, the remaining components are automatically filtered for compatibility. You will also be offered a choice of operating system to install. After completing the assembly process, you receive the final result in terms of three parameters: the price, the technical data, and a rendered image. After you place an order and confirm it by phone, our specialists assemble the kit and check that it works.
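As an illustration, such a compatibility filter can be sketched as a simple predicate over component attributes; the part names, sockets, and data model below are invented for this example and are not our store's actual catalog:

```python
# Sketch of a configurator compatibility filter.
# Part names, sockets, and the data model are invented for this example.

PARTS = {
    "cpu": [
        {"name": "CPU A", "socket": "AM4"},
        {"name": "CPU B", "socket": "LGA1700"},
    ],
    "motherboard": [
        {"name": "Board X", "socket": "AM4"},
        {"name": "Board Y", "socket": "LGA1700"},
        {"name": "Board Z", "socket": "AM4"},
    ],
}

def compatible_boards(cpu):
    """Return only motherboards whose socket matches the chosen CPU."""
    return [b for b in PARTS["motherboard"] if b["socket"] == cpu["socket"]]

chosen_cpu = PARTS["cpu"][0]            # the user picks "CPU A" (socket AM4)
options = compatible_boards(chosen_cpu)
print([b["name"] for b in options])     # -> ['Board X', 'Board Z']
```

A real configurator would apply the same idea to every pair of dependent categories (CPU/motherboard, motherboard/RAM, case/board form factor, and so on).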

The advantage of this method of purchasing a system unit is that you not only choose the components you want, but also have the opportunity to choose the brand or manufacturer of the part.

Once you have assembled a configuration and completed it by clicking the assemble/buy buttons, the build is assigned a specific serial number. By typing that number into the product search bar, you can find this PC and send a link to it to friends or acquaintances, whether to ask their advice or to recommend that they buy it.

An important feature of our configurator is the “get an expert’s opinion” function. By sending your request through this form, you will receive a detailed response with a recommendation to the email you specified.

Try it and see for yourself: assembling a computer online is quick and easy! If you run into difficulties, you can always get advice from our specialists on any questions that interest you.


IBM-PC-compatible desktops are characterized by expandability - a variety of devices can be connected via expansion buses (ISA, PCI, AGP, etc.). The processor and RAM are almost always replaceable.

Evolution

The original IBM PC had 20-bit memory addressing. With the advent of later processors (the 80286, and then the 80386), addressing was expanded, which made it possible to use larger amounts of RAM.

The long-term collaboration between Microsoft and Intel, which led to their market dominance, gave rise to the word "Wintel", denoting a personal computer that uses an Intel processor and a Microsoft Windows operating system. However, this is not the only possible use of the architecture: an IBM PC-compatible computer can run on processors from other manufacturers (primarily AMD) and can be used with operating systems other than Windows (for example, Linux).

The architecture of IBM PC computers is based on the principle of bus organization of the connections between the processor and the other computer components. Although the types of buses used and their structure have changed several times since then, this basic principle of the computer's internal organization has remained unchanged. The computer structure is shown in the diagram below.

The central processing unit (CPU) is the core of a computer system. Communication with other components is carried out via the external processor bus. Inside the processor there are buses for interaction between the ALU, control devices and memory registers. The processor's external bus consists of lines that carry data, addresses (indicating where the data comes from and where it is sent), and control commands. Therefore, the common bus is divided into a data bus, an address bus and a control bus. Each line can carry one bit of data, address or control command.

The number of lines on the bus is called the bus width. The bus width determines the maximum number of simultaneously transmitted bits, which in turn determines the overall performance of the computer. That is, the larger the bus width, the more data can be transmitted simultaneously, the higher the performance. The second parameter that affects performance is the bus data transfer speed, which is determined by the bus clock speed.
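The relationship just described (peak throughput is bus width times clock rate) can be sketched numerically; the idealized model and the figures below are illustrative and not tied to any particular bus standard:

```python
def peak_bus_throughput(width_bits, clock_hz):
    """Idealized peak throughput in bytes per second: one transfer per clock."""
    return width_bits // 8 * clock_hz

# A hypothetical 64-bit data bus clocked at 100 MHz:
print(peak_bus_throughput(64, 100_000_000))   # -> 800000000, i.e. 800 MB/s
```

Real buses fall short of this figure because of arbitration, wait states, and protocol overhead, but the proportionality to width and clock rate holds.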

The bus frequency is a fairly important characteristic, but still does not determine the performance of the computer. The most important parameters for the overall performance of a computer are the clock speed and bit depth of the central processor. And this is natural for many reasons. It is the processor that performs the main data processing tasks and often initiates and manages data exchange. The clock frequency determines the speed of operations, and the bit depth determines the amount of data processed during one operation.

Question 20: The system of structural elements of a personal computer. Form factors. Computer terminology

A computer (from the English "computer", i.e. "calculator") is a device or system capable of performing a given, clearly defined, variable sequence of operations. These are most often operations of numerical calculation and data manipulation, but they also include input and output operations. A description of a sequence of operations is called a program.

An electronic computer is a set of technical means whose main functional elements (logical, storage, display, etc.) are built on electronic components, intended for the automatic processing of information in the course of solving computational and information problems. A personal computer (PC) is a desktop microcomputer that has the operational characteristics of a household appliance and universal functionality.

Form factor (from the English "form factor") is a standard that specifies the overall dimensions of a technical product and describes additional sets of its technical parameters, for example its shape, the types of additional elements placed in or on the device, and their position and orientation.

The form factor (like any other standards) is advisory in nature.

The form-factor specification defines required and optional components. Nevertheless, the vast majority of manufacturers prefer to comply with the specification, because the reward for following existing standards is future compatibility of the motherboard with standardized equipment (peripherals, expansion cards) from other manufacturers.

An electronic computer implies the use of electronic components as its functional units, but a computer can also be built on other principles: it can be mechanical, biological, optical, quantum, etc. (see: classes of computers by type of working medium), operating through the movement of mechanical parts, the movement of electrons or photons, or the effects of other physical phenomena.

In addition, by type of operation a computer can be digital or analog.

On the other hand, the term "computer" implies the possibility of changing the program being executed (reprogramming). Many electronic devices can perform a strictly defined sequence of operations, contain input and output devices, or consist of structural elements similar to those used in electronic computers (for example, registers), but are not reprogrammable.

Design features

Modern computers use the entire range of design solutions developed over the entire period of development of computer technology. These solutions, as a rule, do not depend on the physical implementation of computers, but are themselves the basis on which developers rely. Below are the most important issues faced by computer creators:

A fundamental decision when designing a computer is the choice of whether it will be a digital or analog system. If digital computers work with discrete numerical or symbolic variables, then analog ones are designed to process continuous streams of incoming data.

Today, digital computers have a much wider range of applications, although their analogue counterparts are still used for some special purposes. It should also be mentioned that other approaches are possible here, used, for example, in pulsed and quantum computing, but for now they are either highly specialized or experimental solutions.

Examples of analog computers, from simple to complex, are: nomogram, slide rule, astrolabe, oscilloscope, television, analog sound processor, autopilot, brain.

Among the simplest discrete calculators, the abacus, or ordinary counting frame, is well known;

The most complex of these types of systems is a supercomputer.

Number system

An example of a computer based on the decimal number system is the first American computer, the Mark I.

The most important step in the development of computer technology was the transition to the internal representation of numbers in binary form. This has greatly simplified the design of computing devices and peripheral equipment. Taking the binary number system as a basis made it possible to more simply implement arithmetic functions and logical operations.
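As an illustration of how binary representation reduces arithmetic to logical operations, here is the textbook half/full adder built only from XOR, AND, and OR; this is a generic illustration, not a description of any specific machine:

```python
def half_adder(a, b):
    """One-bit addition: the sum bit is XOR, the carry bit is AND."""
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    """Chain two half adders to include an incoming carry."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

# 1 + 1 with no incoming carry gives sum 0, carry 1 (binary 10 = decimal 2)
print(full_adder(1, 1, 0))   # -> (0, 1)
```

Chaining such full adders bit by bit yields a complete binary adder, which is exactly why binary made arithmetic circuits so much simpler than decimal ones.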

However, the transition to binary logic was not an instantaneous and unconditional process. Many designers tried to develop computers based on the decimal number system, which is more familiar to humans. Other design solutions were also used. Thus, one of the early Soviet machines worked on the basis of the ternary number system, the use of which is in many respects more profitable and convenient compared to the binary system (the Setun ternary computer project was developed and implemented by the talented Soviet engineer N.P. Brusentsov).

Under the leadership of Academician Ya. A. Khetagurov, a “highly reliable and secure microprocessor of a non-binary coding system for real-time devices” was developed, using a 1 of 4 coding system with an active zero.

While performing calculations, it is often necessary to save intermediate data for later use.

The performance of many computers is largely determined by the speed with which they can read and write values to and from memory, and by the memory's total capacity. Initially, computer memory was used only to store intermediate values, but it was soon proposed that program code be stored in the same memory as the data (the von Neumann, or "Princeton", architecture). This solution is used in most computer systems today. However, for control microcontrollers and signal processors, a scheme in which data and programs are stored in different memory sections (the Harvard architecture) turned out to be more convenient.
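The von Neumann idea of keeping program code in the same memory as data can be sketched with a toy machine; the three-instruction set below is invented purely for illustration:

```python
# Toy von Neumann machine: program and data share one memory.
# The three-instruction set (LOAD/ADD/HALT) is invented for illustration.
memory = [
    ("LOAD", 4),   # address 0: acc = memory[4]
    ("ADD", 5),    # address 1: acc += memory[5]
    ("HALT", 0),   # address 2: stop
    0,             # address 3: unused
    10,            # address 4: data
    32,            # address 5: data
]

acc, pc = 0, 0
while True:
    op, arg = memory[pc]   # instructions are fetched from the same memory as data
    pc += 1
    if op == "LOAD":
        acc = memory[arg]
    elif op == "ADD":
        acc += memory[arg]
    elif op == "HALT":
        break

print(acc)   # -> 42
```

In a Harvard machine, by contrast, the instruction tuples and the data values would live in two separate arrays with independent access paths.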

The system unit is the main part of the PC. It includes:

    electronic devices that control the operation of the PC (the central processor, coprocessor, RAM, controllers (adapters), and buses);

    a power supply that converts alternating mains voltage into the low direct voltage required by the electronic circuits and other PC components;

    external memory devices, designed for writing and reading programs and data, consisting of a hard disk drive (HDD) and one or two floppy disk drives (FDD).

Structurally, the PC system unit consists of a case, several electronic boards (primarily the system board, or motherboard), standardized connectors (slots), flexible multi-core connecting cables, a power switch, and a small number of switches (buttons) for controlling the PC's operating modes.

The PC system unit case is available in the following variants:

    horizontal (desktop), including reduced (mini-footprint, slimline) and small-sized (ultra-slimline) versions;

    vertical ("tower"), including an enlarged version suitable for installation on the floor (big tower), a small-sized version (small tower), and a medium version (medium tower);

    "all in one": a desktop with the system unit and monitor combined in one case;

    portable, including a number of different options such as the "laptop" and the "notebook" (see below). In these cases the system unit case also houses the monitor, keyboard, trackball, and, in some models, a CD-ROM drive.


Today, there are almost no processors with sequential execution of instructions; they have been replaced by processors with parallel execution of instructions, which, all other things being equal, provide higher performance.

The simplest processor with parallel execution of instructions is a processor with an instruction pipeline. An instruction pipeline processor can be derived from a sequential processor by making each stage of the instruction cycle independent of previous and subsequent stages.

To do this, the results of each stage, except the last one, are stored in auxiliary memory elements (registers) located between the stages:

The result of the fetch (the encoded instruction) is stored in a register located between the fetch and decode stages.

The result of decoding (the type of operation, the values of the operands, the address of the result) is stored in registers between the decode and execute stages.

The results of execution (the new value of the program counter for a conditional jump, the result of an arithmetic operation calculated in the ALU, and so on) are stored in registers between the execute and write-back stages.

At the last stage, the results are already written to registers and/or memory, so no auxiliary registers are needed.
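The inter-stage registers described above can be modeled in a few lines; the three-stage split and the instruction names below are simplifying assumptions for illustration:

```python
# Simplified three-stage pipeline: fetch -> decode -> execute.
# The latches between stages play the role of the auxiliary registers above.
program = ["i1", "i2", "i3"]

fetch_latch = None    # register between the fetch and decode stages
decode_latch = None   # register between the decode and execute stages
pc = 0
executed = []

for cycle in range(len(program) + 2):      # two extra cycles to drain the pipe
    # Each stage consumes the latch written on the previous cycle, so all
    # three stages effectively work on different instructions at once.
    if decode_latch is not None:
        executed.append(decode_latch)      # execute stage
    decode_latch = fetch_latch             # decode stage
    if pc < len(program):
        fetch_latch = program[pc]          # fetch stage
        pc += 1
    else:
        fetch_latch = None

print(executed)   # -> ['i1', 'i2', 'i3']
```

Note that each instruction still takes three cycles to complete, but once the pipe is full, one instruction finishes every cycle.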

Vector interrupt

With this organization of the interrupt system, the device that has requested service identifies itself using an interrupt vector: the address of a cell in the microcomputer's main memory that stores either the first instruction of the interrupt service routine for this device or the address of the beginning of such a routine.
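The dispatch mechanism described here can be modeled as a direct table lookup; the vector numbers and handler names below are invented for illustration:

```python
# Model of a vectored interrupt: the vector indexes a table of handler
# entry points, so dispatch is a direct lookup rather than polling.
# The vector numbers and handlers are invented for illustration.

def keyboard_handler():
    return "handled keyboard"

def disk_handler():
    return "handled disk"

interrupt_vector_table = {
    0x09: keyboard_handler,   # hypothetical keyboard vector
    0x0E: disk_handler,       # hypothetical disk vector
}

def on_interrupt(vector):
    """The processor receives the vector and jumps straight to the routine."""
    return interrupt_vector_table[vector]()

print(on_interrupt(0x09))   # -> handled keyboard
```

The advantage over a non-vectored scheme is visible in the lookup itself: the processor does not have to poll every device to discover who raised the interrupt.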

Thus, the processor, having received the interrupt vector, immediately switches to executing the required interrupt service routine. In a microcomputer with a vectored interrupt system, each device must have its own interrupt service routine.

Computer compatibility

Classification of computers.

1st generation (mid-1940s to mid-1950s).

The idea of dividing machines into generations arose because, over the short history of its development, computer technology has undergone a great evolution, both in terms of its element base (vacuum tubes, transistors, integrated circuits, etc.) and in the sense of changes in structure, the emergence of new capabilities, and the expansion of its areas and modes of use.

The development of computers has passed through several stages, associated with generations of computers. Each generation differs in its element base, architecture, scope of application, interfaces, and software tools for solving problems.

Element base: vacuum tubes, resistors, capacitors; simple architecture; applications: scientific calculations; methods of communication: direct manual control of computer devices, programming in machine language.

1945-1950. The outstanding scientist J. von Neumann (USA) developed the concepts and design of the EDVAC computer. The basic principles of von Neumann's concept are still in use today.

1946. American engineers J. P. Eckert and J. Mauchly at the University of Pennsylvania built the first operational electronic computer, ENIAC.

1947-1950. A group of engineers led by Academician S. A. Lebedev developed and put into operation the first small electronic calculating machine (MESM) in the USSR.

1948. A group of American physicists created the transistor, the basic element of 2nd-generation computers.

1949. In England, under the leadership of M. Wilkes, the first stored-program computer, EDSAC, was created.

Early 1950s. In several countries, serial production of 1st-generation computers began; their main element base was the vacuum tube. RAM was built on mercury delay lines and CRTs, and later on ferrite cores.

In the USSR, after MESM, the following were produced: in Moscow, the large electronic calculating machines BESM-1 and BESM-2 (S. A. Lebedev) and the fastest computer in Europe at that time, M-20 (S. A. Lebedev and Yu. A. Bazilevsky); in Penza, Ural (B. I. Rameev); in Minsk, Minsk-1 and Minsk-14 (V. V. Przhiyalkovsky); in Kyiv, Kiev (V. M. Glushkov); in Yerevan, Razdan (F. T. Sargsyan).

The introduction of the first computers could not have taken place without the rapid development of numerical methods for solving problems and of the fundamentals of programming. In the USSR this work was led by academicians A. A. Markov, A. N. Kolmogorov, I. V. Kurchatov, M. A. Lavrentiev, A. A. Dorodnitsyn, and M. V. Keldysh.

1952-1953. Soviet scientists A. A. Lyapunov and M. R. Shura-Bura proposed the operator method of programming.

1954-1957. A group of mathematicians led by J. Backus (USA) developed the algorithmic language Fortran.

2nd generation (mid 50s to mid 60s): semiconductor transistors and diodes, resistors, capacitors; more complex architecture; solving scientific, technical and national economic problems; use of operating systems; creation of computer systems; collective use; development of algorithmic languages.

1954-1957. The first transistor-based computer, the NCR 304, was created in the USA.

Late 1950s. At the Massachusetts Institute of Technology, the algorithmic language LISP was developed for work on problems of artificial intelligence (in applied terms, for expert systems).

Early 1960s. Serial production began in the USSR of 2nd-generation computers on transistors: M-220, BESM-3, BESM-4, Ural-11, Ural-14, Ural-16, Minsk-22, Minsk-32, Razdan-2, Razdan-3, Dnepr-1, Dnepr-3, etc.

1961. Fairchild Semiconductor (USA) released the first commercial integrated circuits (ICs).

1966. The world's fastest large computer of the time, BESM-6 (S. A. Lebedev), was put into operation in the USSR. The high performance of BESM-6 was due to the first use of a multiprogram operating mode and a pipelined data-processing procedure, which are used in almost all modern computers.

3rd generation (mid-60s to mid-70s): integrated circuits; architecture involving multiprocessor, multi-machine, and multichannel systems; solving a wide range of problems in the automation of management, design, and planning; efficient operating systems, application programs, and programming languages; the emergence of the first computer networks.

1965. In the USA, production began of 3rd-generation computers of the IBM 360 series, based on integrated circuits.

1966. The algorithmic language COBOL (USA) was developed for processing commercial information.

1968. DEC (USA) developed minicomputers of the PDP family with a wide range of applications: scientific research, process control, real-time processing of experimental data, automation of engineering, economic, and managerial work, etc.

Early 1970s. In the USSR, together with specialists from Bulgaria, Hungary, Czechoslovakia, and the GDR, 3rd-generation computers of the Unified System (ES EVM) were developed and produced in the required quantities. These computers, compatible with the IBM 360, served as the basis for organizing shared computing centers and automated control systems in large organizations and enterprises.

1971. Intel (USA) released the first microprocessor, the Intel 4004, based on IC technology.

1971. The US Department of Defense Advanced Research Projects Agency announced the launch of the first part of the global information and computing network ARPANET. In 1982, ARPANET was merged with other networks, and this community of networks came to be called the Internet.

1970s to early 1980s. In the USA, England, and the USSR, supercomputers came into operation: ILLIAC-IV, STARAN-100, Cray-1 (and later Cray-2, Cray-3, X-MP), Cyber-205, DAP, Phoenix, Connection Machine, and Elbrus.

1973-1976. Specialists from the USSR, Bulgaria, Hungary, Poland, Czechoslovakia, East Germany, Mongolia, and Cuba developed a series of minicomputers compatible with the PDP family (USA).

4th generation (mid-70s to 2000): large-scale integrated circuits; complex architecture; solving various problems in all areas of human activity; multitasking and multi-user operating systems; personal computers; "mouse"-type pointing devices; speech input and output devices; multimedia tools; effective application programs and languages supporting artificial intelligence; development of computer network infrastructure.

1977. In the USA, the young entrepreneurs S. Jobs and S. Wozniak founded a company producing inexpensive PCs intended for a wide range of users. These PCs, called Apple, laid the basis for the widespread use of PCs throughout the world.

1979-1980. Japanese specialists developed and launched the first electronic translating dictionaries.

1981. A group of leading specialists from several Japanese electronics companies announced the creation of a 5th-generation computer by the 1990s (the "Japanese challenge to the world").

1981. IBM (USA), which held a leading position in the production of large computers, began production of the IBM PC. Many companies around the world began producing IBM-compatible PCs.

Mid-1980s. Groups of scientists led by C. Sagan (USA) and V. V. Aleksandrov (USSR) developed mathematical models of the consequences of "nuclear winter" and "nuclear night". These conclusions played a huge role in shaping the policies of countries possessing atomic weapons.

1988. The USSR began mass production of school PCs (Korvet, UKNTs, Nemiga, etc.) and household PCs (BK-0010, Partner, Vector, Byte, etc.).

Today, a large number of electronics companies around the world produce various classes of computers, from household machines to supercomputers, in stationary and portable versions. The current world fleet of computers is approximately: PCs, 2.5 × 10⁸ units; minicomputers, 10⁶ units; mainframes, 2 × 10⁴ units; supercomputers, about 100 units.

5th generation (from the early 21st century). It is difficult to predict what 6th-generation computers will look like, but we can indicate the general trends in the development of computer technologies and their impact on society.

Development is also moving toward the "intellectualization" of computers, eliminating the barrier between human and computer. Computers will be able to perceive information from handwritten or printed text, from forms, and from the human voice, recognize the user by voice, and translate from one language to another.

In 6th-generation computers there will be a qualitative transition from processing data to processing knowledge.

Creation of a family of computers with fundamentally new capabilities that will provide:

efficient use of all available resources of the country: material, energy, human, and information;

improving performance in areas of low productivity;

inclusion of the country in international cooperation;

improving the use of the intellectual potential of society;

increasing the competitiveness of goods on the international market;

increasing the productivity of the population;

promoting a high level of education.

The development of the computer element base assumes:

achieving the maximum packing density of elements in silicon-based VLSI circuits;

production of VLSI based on gallium arsenide;

use of cryogenic technology based on the Josephson effect.

Computer architectures are being improved in the following areas:

· creation of a computer system of varying power, balanced in architecture, which will allow the user to quickly, simply and effectively use the huge potential of such a system;

· development of single-processor PCs with command control, based on a new high-speed element base; These areas are being developed by those companies that want to maintain software compatibility of new PCs with existing ones;

· development of computers on several fast processors with command control, some of which are universal, and the other part are pipeline or parallel with a small number of processor elements;

· development of high-performance multiprocessor computers with pipeline, parallel or matrix information processing.

In addition to well-known methods of information processing, computers are focused on pattern recognition and processing of structured knowledge and making intelligent decisions.

Improvement of intelligent interfaces:

technical and software means of input/output of various types of information;

communication in problem-oriented natural spoken language;

use of text documents, both printed and handwritten, and images;

full development of known and new algorithmic programming languages;

use of artificial-intelligence languages: Lisp, Prolog, PS, FRL, VALID, OCCAM, etc.

The implementation of programs to create 5th generation computers will make it possible to build the so-called information society in a number of countries.

There are various classifications of computer technology:

by stage of development (by generation);

by architecture;

by performance;

by operating conditions;

by number of processors;

by consumer properties, etc.

There are no clear boundaries between computer classes. As structures and production technologies improve, new classes of computers appear, and the boundaries of existing classes change significantly.

According to operating conditions, computers are divided into two types:

office (universal);

special.

Office computers are designed to solve a wide class of problems under normal operating conditions.

Special computers are used to solve a narrower class of problems or even one task that requires multiple solutions, and operate under special operating conditions.

The machine resources of special computers are often limited. However, their narrow specialization makes it possible to implement the given class of tasks most effectively.

Special computers control technological installations, work in operating rooms or ambulances, on rockets, airplanes and helicopters, near high-voltage transmission lines or in the range of radars, radio transmitters, in unheated rooms, under water at depth, in conditions of dust, dirt, vibrations, explosive gases, etc. There are many models of such computers. Let's get acquainted with one of them.

Computer Ergotouch

The Ergotouch computer is housed in a cast aluminum, fully sealed case that is easy to open for maintenance.

The walls of the computer absorb almost all electromagnetic radiation, both from the inside and outside. The machine is equipped with a touch-sensitive screen.

The computer can be washed with a hose, disinfected, decontaminated, and degreased without turning it off.

The highest reliability allows it to be used as a means of managing and monitoring technological processes in real time. The computer is easily included in the local network of the enterprise.

An important direction in the creation of industrial computers is the development of the "operator interface": control panels, displays, keyboards, and pointing devices in all possible designs. The comfort and productivity of operators depend directly on these products.

Based on performance and nature of use, computers can be divided into:

microcomputers, including personal computers;

minicomputers;

mainframes (general purpose computers);

supercomputers.

Microcomputers are computers that have a central processing unit in the form of a microprocessor.

Advanced models of microcomputers have several microprocessors. Computer performance is determined not only by the characteristics of the microprocessor used, but also by the capacity of RAM, types of peripheral devices, quality of design solutions, etc.

Microcomputers provide tools for solving a variety of complex problems. Their microprocessors are increasing in power every year, and their peripherals are increasing in efficiency. Performance is about 1 - 10 million operations per second.

A type of microcomputer is a microcontroller.
This is a microprocessor-based specialized device that is built into a control system or process line.

Modern computer technology can be classified as follows:

· Personal computers;

· Corporate computers;

· Supercomputers.

Personal computers (PCs) are general-purpose microcomputers designed for one user and controlled by one person.

The class of personal computers includes various machines - from cheap home and gaming machines with small RAM, with program memory on a cassette tape and an ordinary TV as a display, to highly complex machines with a powerful processor, a hard drive with a capacity of tens of gigabytes, with high-definition color graphics, multimedia and other additional devices.

Personal computers are computer systems, all the resources of which are completely aimed at supporting the activities of one employee.

The best-known are the IBM PC and Macintosh families of computers. These are two different directions of PC development, incompatible with each other in hardware and software. Macintosh computers are very easy to use, have extensive graphics capabilities, and are widely used by professional artists and designers, in publishing, and in education.

In the family of IBM-compatible PCs, one can also distinguish several types of computers, which differ significantly from each other in their characteristics and appearance, and, nevertheless, they are all personal computers. These are, first of all, desktop and portable PCs, which, despite significant external differences, have approximately the same characteristics and capabilities.

Laptop PCs are expensive products, but they are compact and transportable. Significantly different from desktop and portable PCs are PDAs, the so-called organizers, or "portable secretaries". These notepad PCs have neither peripheral devices nor a keyboard; commands are selected directly on the miniature screen using a stylus.

Laptop computers are usually needed by business leaders, managers, scientists, and journalists who have to work outside the office: at home, at presentations, or during business trips.

Main types of laptop computers:

Laptop (from the English lap, "knees", and top, "on top"). It is close in size to a regular briefcase. In its basic characteristics (speed, memory) it is approximately the same as a desktop PC. Computers of this type are now giving way to even smaller ones.

Notebook (notepad). It is closer in size to a large-format book, weighs about 3 kg, and fits in a briefcase. For communication with the office it is usually equipped with a modem. Notebooks often include CD-ROM drives.

Many modern laptops include interchangeable blocks with standard connectors. Such modules are designed for very different functions. You can insert a CD drive, a magnetic disk drive, a spare battery, or a removable hard drive into the same slot as needed.
A laptop is resistant to power failures: even when it receives energy from the ordinary electrical network, in the event of any failure it instantly switches to battery power.

Personal digital assistant

Palmtop (handheld) is the smallest modern personal computer; it fits in the palm of your hand. Magnetic disks are replaced by non-volatile electronic memory. There are no disk drives either; information is exchanged with ordinary computers over communication lines. If a palmtop is supplemented with a set of business programs recorded in its permanent memory, the result is a personal digital assistant (PDA).

Corporate computers (sometimes called minicomputers or mainframes) are computing systems that support the joint activities of many workers within one organization, one project, or one area of information activity, using shared information and computing resources. These are multi-user systems with a central unit of large computing power and significant information resources, to which a large number of workstations with minimal equipment are attached (a video terminal, a keyboard, a pointing device such as a mouse, and possibly a printing device). In principle, personal computers can also serve as workstations connected to the central unit of a corporate computer. Corporate computers are used to implement information technologies supporting management activities in large financial and industrial organizations and government agencies, and to create information systems serving a large number of users within one function (exchange and banking systems, ticket booking and sales, etc.).

Features of corporate computers:

Exceptional reliability;

High performance;

High I/O throughput.

The cost of such computers is millions of dollars. Demand is high.

Their advantage is that centralized data storage and processing is cheaper than maintaining distributed data-processing systems consisting of hundreds or thousands of PCs.

Supercomputers are computing systems with extreme computing power and information resources. They are used in the military and space fields, fundamental scientific research, global weather forecasting, geology, and so on - for example, forecasting the weather or modeling a nuclear explosion.

Supercomputer architecture is based on the ideas of parallelism and pipelining of calculations.

In these machines, many similar operations are performed in parallel, that is, simultaneously (this is usually called multiprocessing). Thus, ultra-high performance is ensured not for all tasks, but only for tasks amenable to parallelization.

A distinctive feature of supercomputers is vector processors, equipped with hardware for parallel execution of operations on multidimensional numeric objects - vectors and matrices. They have built-in vector registers and a parallel pipelined processing mechanism. Where on a conventional processor the programmer operates on each vector component in turn, on a vector processor they issue vector commands that process a whole vector at once.

Vector hardware is very expensive, in particular because it requires a lot of ultra-high-speed memory for vector registers.

The most common supercomputers are massively parallel computer systems. They have tens of thousands of processors interacting through a complex, hierarchically organized memory system.

As an example, consider the characteristics of a multi-purpose massively parallel mid-class supercomputer built from Intel Pentium Pro 200 processors. This computer contains 9,200 Pentium Pro processors at 200 MHz, for a total theoretical performance of 1.34 teraflops (1 teraflop equals 10^12 floating-point operations per second), 537 GB of memory, and disks with a capacity of 2.25 TB. The system weighs 44 tons (its air conditioners weigh another 300 tons) and consumes 850 kW of power.

Supercomputers are used to solve complex and large scientific problems (meteorology, hydrodynamics, etc.), in management, intelligence, as centralized information repositories, etc.

The element base is microcircuits with an ultra-high degree of integration.

The cost is tens of millions of dollars.

Purpose – solving those tasks for which PC performance is not enough;

Providing centralized storage and processing of data.

Features: the ability to connect tens or hundreds of terminals or PCs for user work, and the presence of special hardware for three-dimensional modeling and animation, which is why a large number of films are created on them.

Mainframes are designed to solve a wide class of scientific and technical problems and are complex and expensive machines. It is advisable to use them in large systems with at least 200 - 300 workstations.

Centralized data processing on a mainframe is approximately 5-6 times cheaper than distributed processing using a client-server approach.

IBM's well-known S/390 mainframe is usually equipped with at least three processors. The maximum amount of operational storage reaches 342 terabytes.

The performance of its processors, channel throughput, and the amount of RAM storage allow you to increase the number of workstations in the range from 20 to 200,000 by simply adding processor boards, RAM modules and disk drives.

Dozens of mainframes can work together running a single operating system to perform a single task.

This classification is quite arbitrary, since the intensive development of technologies for the production of electronic components, significant progress in improving computers and their most important components lead to a blurring of the boundaries between these classes of computer equipment.

At the same time, the above classification takes into account only the autonomous use of computer technology. Today, the prevailing trend is to combine them into computer networks, which makes it possible to integrate information and computing resources for the most effective implementation of information technologies.

IBM PC-compatible computers account for about 90% of all modern computers.

Compatibility is:

Software compatibility - all IBM PC programs will run on all IBM PC compatible computers.

Hardware compatibility - most devices (except those five or ten years old) for the IBM PC and its successors, the IBM PC XT, IBM PC AT and others, are suitable for IBM PC-compatible computers.

Advantages of IBM PC-compatible computers:

1) full compatibility has led to the emergence of hundreds of thousands of programs for all areas of human activity;

2) the openness of the market for IBM PC-compatible computers has caused intense competition among manufacturers of computers and their components, which has ensured high reliability, relatively low prices and the fastest possible introduction of technical innovations;

3) the modular design of IBM PC-compatible computers, which provides compactness, high reliability, ease of repair, and easy upgrading to increase the computer's power (a more powerful processor or a more capacious hard drive).

The wide capabilities of IBM PC-compatible computers allow them to be used in various industries and to solve various problems.

Questions for self-control

1. By what criteria can computers be divided into classes and types?

7. How has the elemental base of computers evolved from generation to generation?

8. When did microcomputers become available for widespread home use?

9. Can you connect the concepts “apple”, “garage” and “computer”?

10. On the basis of what technical elements were the first generation computers created?

11. What is the main problem faced by developers and users from the experience of operating first-generation computers?

12. What element base is typical for the second generation of computers?

13. What function does the operating system perform during computer operation?

14. On what element base are third generation machines constructed?

15. Which generations of computers are characterized by widespread use of integrated circuits?

16. What speed is typical for fourth generation machines?

17. What is meant by the “intelligence” of computers?

18. What problem should the “intelligent interface” solve in fifth-generation machines?

19. What features should industrial computers have?

20. What is an operator computer interface?

21. By what main features can mainframes be distinguished from other modern computers?

22. How many users are mainframes designed for?

23. What ideas underlie the architecture of supercomputers?

24. On what types of tasks are the capabilities of supercomputers realized to the maximum?

Topic 5. PC AS THE BASIS OF INFORMATION TECHNOLOGY

1. PC architecture

2. PC structure

3. PC functional characteristics


Adding SD card support immediately raised two big questions: hardware support for the SPI bus, and the protocol for talking to the card itself.

In principle, SPI can be implemented entirely in software, but I wanted to have fun with the hardware too, so I heroically set about drawing a byte transceiver in the circuit design. To my surprise, there was nothing complicated about it, and pretty soon I was already watching briskly running 8-bit packets on the oscilloscope screen, containing exactly what I wanted. By the way, here I first appreciated the ability of the new oscilloscope not only to show a bunch of signals, but also to combine them logically into the appropriate bus. It is much more pleasant to see that the oscilloscope understands that it is byte A5 that is being transmitted, rather than manually checking whether the transitions from 0 to 1 and vice versa are in the right places.

To simplify the task, I did not try to adapt to all types and varieties of cards, but limited myself to an original SD card (not SDHC or any other variant). A little programming, and the contents of sector 0 of the card appeared on the screen. Immediately after that, I wrapped these functions into a rough semblance of INT 13h, added INT 19h (the bootstrap loader) in rudimentary form, and saw the following on the screen:

Since at that moment every read always returned sector 0, the bootloader (located in exactly that sector) could not find an OS to boot, which it duly reported. But these are trifles - the main thing was that my circuit was slowly turning into a real computer and was even trying to boot!
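Reading that sector goes through the SD command protocol mentioned above. As a minimal sketch (not the author's actual code), this is how a READ_SINGLE_BLOCK command is framed for an original, byte-addressed SD card in SPI mode:

```c
#include <stdint.h>
#include <assert.h>

/* Sketch: frame CMD17 (READ_SINGLE_BLOCK) for an original SD card in
 * SPI mode. Every command is 6 bytes: a start/command byte, a 32-bit
 * argument sent MSB first, and a CRC byte. In SPI mode the CRC is
 * ignored for everything past CMD0/CMD8, so a dummy value with the
 * mandatory end bit set is enough. */
void sd_cmd17_frame(uint32_t sector, uint8_t frame[6])
{
    uint32_t addr = sector * 512u;   /* original SD takes byte addresses */
    frame[0] = 0x40u | 17u;          /* "01" start bits + command index  */
    frame[1] = (uint8_t)(addr >> 24);
    frame[2] = (uint8_t)(addr >> 16);
    frame[3] = (uint8_t)(addr >> 8);
    frame[4] = (uint8_t)(addr >> 0);
    frame[5] = 0x01u;                /* dummy CRC7 + end bit             */
}
```

SDHC cards use sector addressing instead of byte addressing, which is one reason limiting the project to original SD simplified things.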

Next came the struggle with converting physical sectors into logical blocks. Here I cut corners too: instead of determining the parameters of the disk (image), I simply hard-coded the numbers for one specific image. I had to tinker with this part - for some reason the calculations produced completely unexpected results (in general, I never liked arithmetic in assembly language). After some torment, though, physical sectors/cylinders/heads began to be reliably translated into logical blocks, and it was time to try to boot in earnest.
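The conversion itself is the standard BIOS formula. A sketch in C (the geometry constants here are illustrative stand-ins for the numbers hard-coded for one image, not values from the article):

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical geometry hard-coded for one disk image. */
#define HEADS            4u
#define SECTORS_PER_TRK 17u

/* Standard INT 13h CHS-to-LBA conversion: sectors are 1-based,
 * cylinders and heads are 0-based. */
uint32_t chs_to_lba(uint32_t cyl, uint32_t head, uint32_t sector)
{
    return (cyl * HEADS + head) * SECTORS_PER_TRK + (sector - 1u);
}
```

Done in 16-bit x86 assembly with MUL/ADD chains, it is easy to see how an off-by-one in the sector term produces the "completely unexpected results" described.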

Naturally, the boot did not go through on the first try, and I did not expect it to. Knowing in advance that many functions were not implemented in my BIOS, I put stubs on all interrupts, so that on any call to an unimplemented function all the necessary information appeared on the screen: which interrupt was being called and with what arguments. Then came the cycle of writing a handler for the corresponding function (or, more often, just a temporary stub), and the process continued. Suddenly everything stopped at a function completely absent in the original PC - one of the INT 2Fh functions related to event processing. I saw that DOS determines the PC type, and it seemed that it should not invoke interrupts absent on that type; nevertheless, it did, and the boot stalled. A simple stub did not help, and on principle I did not want to implement the entire function.

I no longer remember the whole train of thought (at that moment I was digging through a lot of the DOS source code and the boot process), but at one of these freezes I decided to fire a bunch of interrupts by hand (the timer on INT 08h was disabled at that point) and pressed the Shift key. Suddenly a miracle happened:

To be honest, quite a lot of emotions came over me - to go from a breadboard with a couple of microcircuits to loading DOS in a month, and even in short bursts (due to chronic lack of time) seems pretty cool (sorry for bragging)!

By the way, with this message I still have an unsolved mystery. The fact is that after finishing the timer interrupt, DOS began to load without freezing in this place, but for some reason the Microsoft copyright message is not displayed. It seems that it also does not appear on a real computer (unfortunately, there is nothing to try on). What is the root cause here is a mystery shrouded in darkness. I tried to understand the logic from the DOS source codes, but I didn’t see it right away, and I didn’t want to spend a lot of time. However, the question is still nagging...

After DOS started, it was the turn to start other programs. You can probably guess whose turn it was first - naturally, as they say, good old Norton Commander. Oddly enough, there was noticeably more fuss with it than with DOS. NC called up a wild number of functions when launched, and in some cases it was not possible to get by with simple stubs; it was necessary to write at least a minimum of functionality.

However, the problems were more quantitative than qualitative, and soon it was possible to bring the NC loading process to its logical conclusion:

This “interesting” appearance is due to several reasons:
- the video adapter did not support attributes at that time
- I didn’t have the second part of the character generator, which contains pseudo-graphics, so the characters from the bottom of the code table ended up in the appropriate places
- some functions of INT 10h were not implemented.

In general, I was periodically surprised by exactly how certain functions were implemented in various programs (and even in DOS). For example, the CLS (clear screen) command called the INT 10h function, which caused the window to move up. In this case, the entire available screen area was specified as a window, and it was shifted by a number of lines equal to the number of lines on the screen. Since I didn’t expect that anyone would use the window functions at all, I was in no hurry to implement them. The result was obvious (or rather, on the screen). However, we’ll return to the oddities of some programs a little further...

After launching NC, I naturally wanted to bring it into presentable shape. Besides, this part of the work is sometimes even more pleasant than trying to revive a completely dead device. There were no particular problems with the pseudographics - just a lot of time spent manually drawing characters (my character generator existed directly as VHDL code). But the attributes took a little more effort.

Even earlier in the process, I began to use some elements of VHDL. At first, almost by force - there was still a desire to try to master this language again, and then because in certain cases it turned out to be more convenient than using circuit design. Even in the video adapter itself, I had to delve into the code - initially 43 (or something like that) lines were supported, but I needed to change it to 25 lines. And at first I tried to support attributes using schematic design, but suddenly I began to realize that it might be easier to use VHDL for this. Naturally, everything moved with great difficulty and the use of the simplest language constructs, but I suddenly began to understand the essence of VHDL - still just a little, but already enough to start consciously creating something with it, and not just modifying what already exists.

My tinkering with VHDL was not in vain, and after a while I was able to see something long ago and well known:

Yes, you could still notice some shortcomings (such as an attribute shifted by one character), but in general the 80x25 color text mode worked as it should.

Next in line was the 8259 interrupt controller. At first the idea arose to try to use an existing one from some project, but I didn’t like any of them for various reasons (either they were too primitive, or, on the contrary, I didn’t understand how they work, but there was no documentation). There was even an attempt to buy a commercial IP (in this case, the IP is not Internet Protocol, but Intellectual Property), but the manufacturers did not want to bother with selling one whole thing...

Ultimately, I had to take a piece of paper and sketch out something like a controller (block) diagram, which I then began to implement in VHDL. I didn’t pursue full compatibility - I needed (at this stage) support for one main priority interrupt mode, the ability to mask interrupts (also read the interrupt mask) and execute the EOI (End Of Interrupt) command. In my opinion, this should be enough for the vast majority of programs to work fine with it. Looking ahead, I will say that to this day I have not found a single program that would try to do something with the interrupt controller beyond the functionality I had designed.
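The behavior described above can be modeled compactly. A sketch of the implemented 8259 subset, using the datasheet register names (IRR = requests, ISR = in-service, IMR = mask); these C functions are illustrative, not the author's VHDL:

```c
#include <stdint.h>
#include <assert.h>

/* Highest-priority requested, unmasked line (0..7) with IR0 highest,
 * or -1 if nothing should be serviced. */
int pic_highest_pending(uint8_t irr, uint8_t imr)
{
    uint8_t pending = irr & (uint8_t)~imr;
    for (int i = 0; i < 8; i++)
        if (pending & (1u << i))
            return i;
    return -1;
}

/* Non-specific EOI: clear the highest-priority in-service bit and
 * return the new ISR value. */
uint8_t pic_eoi(uint8_t isr)
{
    for (int i = 0; i < 8; i++)
        if (isr & (1u << i))
            return (uint8_t)(isr & ~(1u << i));
    return isr;
}
```

Fixed priority plus mask plus non-specific EOI is indeed what DOS-era software exercises; rotating priorities and specific EOI are the features most PC programs never touch.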

Probably the interrupt controller was my first real (albeit small) VHDL project, from start to finish. I wrote it carefully; for the first time in my life I was not even too lazy to make a test bench (a sequence of stimulus signals that checks the device functions correctly). Simulation in ModelSim showed that the controller appeared fully operational, after which another graphic symbol was generated from it and added to my device.

I didn’t have a normal 8254 timer yet; to generate 18.2 Hz interrupts, I used a regular counter, which I connected to the interrupt controller. The behavior of the computer showed that everything seemed to be working - DOS loaded without the need to press a key, and the clock finally started running in NC. It seemed that another stage had been passed, and we could safely move on.

As it turned out, I was happy early - at that moment, perhaps the biggest problem in the entire project was discovered. If anyone remembers, NC has a built-in screen saver - “starry sky”. Having left my computer for a while, after returning to it, I discovered that the stars on the screensaver had somehow frozen, in other words, the computer had frozen. Although I understand that such accidents do not happen, I still wanted to believe in a miracle - that this was an isolated incident. Unfortunately, as always, no miracle happened - after a full reset and reboot, the computer froze again after an hour or so of operation. It became unequivocally clear that there was a problem somewhere, and a very difficult one to find.

To narrow the search as much as possible, I wrote a simple memory test that ran immediately after the processor was reset, without initializing all unnecessary devices such as a timer, etc. In principle, I received the memory error indication with relief - at least the problem was clearly in the hardware. The only thing left to do is to understand exactly where. And this turned out to be not at all simple.
The fact is that in general the circuit involved in the memory testing process is inherently quite primitive. A minimum of logic is involved; apart from the processor, there are no other complex programmable elements. As a result, after some time spent analyzing the circuitry, I became more or less confident that the issue was not a fundamental error in the circuit, but something more random - for example, interference.

In general, everything was bad with this side of circuit design. I knew that I needed to install more blocking capacitors, and that long wires were kind of bad. This is where my knowledge ended. Therefore, I again turned to one of the professional forums for advice. I was given a lot of advice, sometimes it was difficult to separate really sensible advice from those who advised according to the principle “I’ll tell you everything I know at least a little on this topic.” I won’t describe all this here - too much has been discussed, so this could be the topic of a separate article. As a result of the discussions, my board was overgrown with almost two dozen blocking capacitors and completely lost its original more or less glamorous appearance.

Unfortunately, the next test run showed that the problem had not gone away. Perhaps it began to appear a little less frequently, but it’s hard to say - and before, a failure could occur either after 20-30 minutes, or after a few hours. Now, at a minimum, a board left overnight was guaranteed to fail in the morning. In desperation, I again returned to the analysis of the circuit design and an even more careful study of the processor bus diagrams. At one point I had a certain thought, and I went to the same forum again. During the discussion of my idea, I once again received a portion of useful (and sometimes not so useful) advice, I tried to implement some things (primarily related to a slight delay in some control signals), but this did not affect the presence of failures at all.

At the end of the road a concrete dead end clearly loomed, so I began testing outright crazy ideas - in particular, whether the memory chip itself was failing. To test this, I generated a RAM module right inside the FPGA and used it instead of external memory. Honestly, I did not expect any result - I was just trying everything that came to mind. Imagine my surprise when the crashes suddenly disappeared! I was not even ready for this and did not quite know what to do with the knowledge. It was hard to believe the memory chip was faulty, and I was also almost completely confident I was driving it correctly - judging by the control signals, everything was dead simple. But the fact remained: with the external chip a failure was guaranteed within a few hours of testing; with internal memory everything ran without failure for several days, until I got tired of it.

To clear my conscience, I still decided to test the memory with a completely different circuit, without using my processor board. In the process of thinking about how best to do this, a thought suddenly occurred to me - I realized the only significant difference between using internal and external memory. The fact is that the external memory was asynchronous, and the internal memory was partially synchronous, and it additionally required a signal that would latch the address of the cell being accessed in the internal buffer.
I didn’t understand at all how this could relate to the problem of random failures - from all the diagrams it was absolutely clear that my address was holding much more than the minimum required for memory, so, theoretically, this could not be the reason. However, I immediately drew another register in Quartus, gave it an address and latched it with the same signal that was used for the internal memory. The output of the register, naturally, was fed to the address lines of external memory. Realizing that I was doing complete nonsense, I ran the test. And the test ran successfully until I turned it off the next day. Then a couple more times with and without a register - it was absolutely clear that the presence of a register eliminated the failures completely.

This was completely inexplicable - even on the oscilloscope I could see the address signals holding far longer than necessary, but the fact remained a fact. After a whole weekend of investigation I gave up and decided to accept it as a given...

So, DOS loaded, many programs that did not require graphical mode started, and we could move on. Naturally, there was a desire to launch some kind of toy. But a toy usually requires graphics, and I didn’t have any yet. And if for the text video adapter it was possible to get by with little expense by reworking the existing one, then for graphics it was not so easy.

It wasn't even a matter of the lack of ready-made solutions. The problem was that I needed almost complete compatibility with a standard video adapter at the hardware level - after all, all games work with graphics directly from the hardware, without using the BIOS. I realized that it is easier to make a video adapter from scratch than to try to remake a ready-made one. And, naturally, it was much more interesting to do it yourself.

So, we are writing our own CGA adapter - even EGA is a couple of orders of magnitude more complicated, so we won't attempt it for now. To begin with I still looked around a little and found, in effect, sketches of a VGA sync-generation module. But it was a dozen and a half lines that did not even fully work, so in reality I used it only as a template to start writing from - it was psychologically easier.

Naturally, I don’t have a CGA monitor and didn’t plan to, so the idea was to use the VGA 640x400 mode, in which the CGA 320x200 mode fit perfectly by simply duplicating the points both horizontally and vertically.
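The address arithmetic behind this trick can be sketched in a few lines. CGA 320x200 graphics memory is interleaved (even scanlines at offset 0, odd at 0x2000, 80 bytes per line, 4 two-bit pixels per byte), and doubling each dot both ways means VGA pixel (x, y) shows CGA pixel (x/2, y/2). A model of the mapping, assuming the standard CGA layout rather than anything specific from the article:

```c
#include <stdint.h>
#include <assert.h>

/* Byte offset of a CGA 320x200 pixel within the 16 KB video buffer:
 * odd scanlines live in the 0x2000 bank, 80 bytes per line,
 * 4 pixels per byte. */
uint32_t cga_byte_offset(uint32_t x, uint32_t y)
{
    return (y & 1u) * 0x2000u + (y >> 1) * 80u + (x >> 2);
}

/* A VGA 640x400 pixel maps to the CGA pixel at (x/2, y/2), since each
 * CGA dot is duplicated horizontally and vertically. */
uint32_t vga_to_cga_offset(uint32_t vga_x, uint32_t vga_y)
{
    return cga_byte_offset(vga_x / 2u, vga_y / 2u);
}
```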
In general, the graphics adapter turned out to be unexpectedly easy for me - by this time my brain suddenly learned to think in VHDL categories, plus I had a little understanding of what can be required from VHDL and what is not worth it. In general, most of my debugging time was spent searching for a completely stupid error related to the bit depth of numbers (two such problems overlapped each other and gave a very funny variant). Otherwise, I began to enjoy how the lines in the editor turn into almost real hardware inside the FPGA and do exactly what I want.

At the very beginning, of course, the adapter turned out to be far from perfect and compatible, but Checkit was able to recognize it and even display the first test image:

By the way, Checkit turned out to be quite a useful program - it determined many things in rather cunning ways, which forced the whole structure to become more and more PC-compatible. And since Checkit could check all nodes and components, compatibility was also tested for all parts of the system.

After correcting the most obvious mistakes (such as the duplication of a dot from the previous byte visible in the previous photo), we managed, with some difficulty, to find a game that seemed to even work:

The colors in this picture do not match the original ones - at this point the palette switching had not yet been done, and the colors themselves were not adjusted at all.

Attempts to find working games have shown that game programs, which in most cases work directly with hardware, are much more demanding in terms of compatibility than some NC or even QuickBasic. Fortunately, the FPGA provided almost unlimited possibilities for identifying whether a program was accessing ports of interest, memory addresses, etc. Especially since I could also change the BIOS at my own discretion, this provided an excellent debugging mechanism. By the way, at some point (I don’t remember exactly when), Turbo Debugger also started working, which also expanded the arsenal of debugging tools.

It immediately became clear that at least a minimal 8253 timer was necessary. Programs used the timer not only for sound (channel 2) but also actively reprogrammed channel 0, thereby changing the timer interrupt frequency, and used that channel for timing measurements as well.
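The relationship being reprogrammed is simple: the 8253 counts down from a reload value at the PC's fixed PIT clock of 1,193,182 Hz (the 14.318 MHz crystal divided by 12), and a reload of 0 means 65536, giving the classic ~18.2 Hz tick. A small sketch of that arithmetic:

```c
#include <stdint.h>
#include <assert.h>

#define PIT_CLOCK_HZ 1193182u  /* 14.318 MHz crystal / 12 */

/* Output frequency of an 8253 channel in rate-generator mode for a
 * given reload value; 0 is treated as 65536, as the hardware does.
 * Result is truncated to whole Hz. */
uint32_t pit_output_hz(uint16_t reload)
{
    uint32_t divisor = (reload == 0) ? 65536u : reload;
    return PIT_CLOCK_HZ / divisor;
}
```

This is why a game that reprograms channel 0 for, say, a ~1000 Hz tick also changes the rate of INT 08h, and why the BIOS must restore the divisor afterwards.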

After reading the documentation for 8253, I felt a little sad. There was a lot to do and it wasn’t very interesting. Having decided to do this sometime later, at that moment I simply climbed onto the same opencores and stole a couple of timer modules. One is in Verilog, and very simplified, the second is extremely sophisticated in appearance, and even in VHDL. Unfortunately, the VHDL timer was connected via the Wishbone bus - this is an open standard for FPGA development. I had never encountered Wishbone before, so I decided to start using a Verilog module, which looked simpler in interface.

After connecting the timer to my system quite painlessly, I ran a few simple tests and made sure that the module seemed to work. Moreover, after another small modification of the system in terms of the interface with the speaker, the first, but quite correct sounds were heard from a working toy. For now we could finish with the timer and move on.

Then I had to make a fundamental decision. Up to this point I had written INT 10h myself. In text mode I could live with that, but the need to support these functions in graphics modes depressed me. Considering that by this time my appetite for assembly-language programming was practically sated (after all, at one time I had done it on an industrial scale), I acted on the principle of "if the mountain will not come to Muhammad, Muhammad tells the mountain to get lost." Namely, I decided to make my CGA adapter so hardware-compatible that the original BIOS could work with it.

In principle, there was no particular difficulty - there are not very many registers, and their functionality is extremely simple. Among the less obvious things, I had to emulate the status register, which contains flags for the horizontal and vertical retrace of the scanning beam. Logically enough, it turned out that many programs (including the BIOS) actively use this register to avoid "snow" when the processor and the adapter access video memory simultaneously.
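The register in question is the CGA status port at 0x3DA: bit 0 is set during any (horizontal or vertical) retrace, bit 3 during vertical retrace, and programs poll these bits before touching video memory. A sketch of the predicate such code relies on, modeled on a plain status byte instead of real port I/O so it stands alone:

```c
#include <stdint.h>
#include <assert.h>

#define CGA_STATUS_RETRACE  0x01u  /* display not actively fetching */
#define CGA_STATUS_VRETRACE 0x08u  /* vertical retrace in progress  */

/* Snow-free CGA code writes to video memory only while one of the
 * retrace bits is set in the status register (port 0x3DA). */
int cga_safe_to_write(uint8_t status)
{
    return (status & (CGA_STATUS_RETRACE | CGA_STATUS_VRETRACE)) != 0;
}
```

An emulated adapter that never sets these bits leaves such programs spinning forever, which is why the status register had to be modeled even though the FPGA implementation has no actual snow problem.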

For some reason, the process of putting the video adapter in order seemed very exciting to me, and in the end this unit turned out to be the most sophisticated in terms of compatibility with the original device. Along the way, missing things like switchable palettes, the 640x200 mode, etc. were added. Incidentally, it turned out to be quite difficult to find a program to test the 640x200 mode with; the only thing I managed to unearth was chess:

In my opinion, it looks quite beautiful...

The original INT 10h handler was very friendly toward such an adapter, and I breathed a sigh of relief at not having to write things like recognizing a character printed at a given position on the screen in graphics mode.

The last obstacle to acceptable PC compatibility was, oddly enough, the keyboard. Although it was almost the first thing I had wired into the project, from the compatibility standpoint nothing had really been done yet. The main problem was that all normal programs work with the first set of scan codes, used back in the IBM PC. But every keyboard since the PC AT produces at least the second set of scan codes, very different from the first. The keyboard controller inside the computer converts these codes into the original first set, and all ordinary programs work with that (even programs that seem to access the keyboard directly, bypassing the BIOS). Naturally, I had no such controller (incidentally, in the PC AT and even in later PC XTs a separate 8051-based microcontroller was used for this). The INT 09h/16h functions were implemented in the most minimal form, and programs working with the keyboard directly were out of the question - they simply would not understand a single scan code.
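The translation itself is a lookup table plus one rule for key releases: set 2 prefixes a break code with 0xF0, while set 1 marks release by setting bit 7 of the make code. A sketch covering a handful of keys (the table values are the standard ones for Esc, '1', 'A' and Enter; a real table covers every code):

```c
#include <stdint.h>
#include <assert.h>

/* Set-2 to set-1 scan-code lookup for a few keys; 0 = not in this
 * sketch. Values are the standard ones from the PC keyboard spec. */
static uint8_t set2_to_set1(uint8_t code)
{
    switch (code) {
    case 0x76: return 0x01;  /* Esc   */
    case 0x16: return 0x02;  /* '1'   */
    case 0x1C: return 0x1E;  /* 'A'   */
    case 0x5A: return 0x1C;  /* Enter */
    default:   return 0x00;
    }
}

/* 'brk' is 1 if a 0xF0 prefix byte preceded this code (key release).
 * Set 1 encodes release as make-code | 0x80. */
uint8_t translate(uint8_t code, int brk)
{
    uint8_t s1 = set2_to_set1(code);
    return brk ? (uint8_t)(s1 | 0x80) : s1;
}
```

In the FPGA this becomes a small ROM plus a one-bit "saw 0xF0" flag, which is essentially what the transcoding module described below amounts to.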

By this moment, I suddenly felt euphoria from owning VHDL - it seemed to me that I had already comprehended the truth, and that I could do anything. Therefore, without delay, an elegant (as it seemed to me) module was written in VHDL, which performed the transcoding of scan codes. Everything in this module was very beautiful and good, except for one small detail - it did not work. Moreover, I could not understand the reason for the inability to work, which was frustrating and perplexing - there were only a dozen lines.

Once again, turning to experts on the forum, I received a fair amount of really sensible advice. Moreover, my understanding of the VHDL concept itself has once again almost radically changed (including some disappointment). The main thing is that there are no miracles. VHDL (and all other HDLs) will not do anything that cannot be done in a conventional way from the available hardware resources. If I write a line that seems to be correct from the point of view of the syntax of the language, but I have no idea how it can be implemented in hardware, then most likely it will not be implemented during compilation. At a minimum, it will not do what is required of it. And one more thing - it is very important to use templates. It turns out that many language constructs turn into correct hardware nodes only when the compiler recognizes the corresponding pattern. There is, of course, a certain flexibility, but you still need to always remember the recommended styles for describing certain nodes.

I think it was after these showdowns that I really, at least a little, but truly began to understand the essence of VHDL (and by this time Verilog had also ceased to be completely incomprehensible). Magically, textbooks on these languages ​​suddenly made sense, and the essence of the things described became clear behind the words.

In short, having made the converter module a little less beautiful, but much more correct, I received codes in the first set at its output. Next, all that remains is to feed these codes to the original INT 09h handler, and check with the same Checkit that keystrokes are recognized correctly. So, the keyboard was also almost 100% compatible at the hardware level.

By this point, I was increasingly uncomfortable with the fact that the top level of my project was still a schematic. The final push toward a complete transition to VHDL came with the change of our home computer: an iMac Retina with Windows installed appeared on my desk. Unfortunately, Quartus turned out to be among the programs completely unprepared for that screen resolution. The schematic became utterly unreadable, and none of my attempts to tweak anything produced any real improvement. There was nowhere to go; I gritted my teeth and took up the text editor.

Oddly enough, everything went more than smoothly. I don't even remember now whether anything needed debugging or whether everything worked immediately after the conversion. In any case, there were definitely no serious problems, and the work immediately became much more convenient and efficient. I was reminded of the advice of a number of knowledgeable people who had strongly recommended from the very beginning that I forget about schematic entry and start straight away with VHDL/Verilog. By the way, regarding VHDL vs. Verilog: please do not argue with me about which is better or worse, or about why I chose VHDL. Let's assume I simply wanted it that way, which is practically the truth. I won't discuss this topic any further...

When switching to VHDL, the last module remaining in schematic form was also completely redesigned: the SPI interface. If you remember, it provided hardware reception/transmission of only a single byte, and a whole series of preparatory steps had to be performed around each one. Combined with a slow processor (and a lazily written INT 13h), this gave only about 35% of the performance of the original PC XT hard drive (according to Checkit). Since by now I practically felt like a guru of VHDL and digital electronics in general, I decided right away to write not a copy of the existing interface, but a module providing block transfers.

True, I decided not to bother with DMA (or PDP, as it is abbreviated in Russian) just yet: there was no DMA controller, and I did not want to take on two new modules at once, since then I would never figure out where exactly a problem was. Debugging the module did not go entirely smoothly; I had to tinker a little, including actively using the digital channels of the oscilloscope as a protocol analyzer. For some reason, during the whole process I almost forgot that Quartus includes a built-in logic analyzer, SignalTap, which would probably have been even more convenient. Perhaps in the future I will get around to using it (I haven't yet), but for now I really like using a separate piece of hardware for this.
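The idea of the block-mode module, roughly: instead of the CPU handshaking every byte, the hardware keeps the SPI shift register running back-to-back until a whole block has been moved, pushing each finished byte into a buffer. A very simplified sketch of the receive direction (timing is approximate, all names mine):

```vhdl
-- Hypothetical block-mode SPI receiver: clocks in byte_total bytes
-- without per-byte CPU involvement.
process (clk)
begin
    if rising_edge(clk) then
        fifo_wr <= '0';
        if start = '1' and busy = '0' then
            busy     <= '1';
            byte_cnt <= byte_total;          -- e.g. 512 for one sector
            bit_cnt  <= 0;
            sck      <= '0';
        elsif busy = '1' then
            sck <= not sck;                  -- half SPI clock per sysclk
            if sck = '0' then                -- about to raise sck: sample
                shreg <= shreg(6 downto 0) & miso;
                if bit_cnt = 7 then
                    bit_cnt <= 0;
                    rx_byte <= shreg(6 downto 0) & miso;  -- finished byte
                    fifo_wr <= '1';          -- push it into the buffer
                    if byte_cnt = 1 then
                        busy <= '0';         -- whole block received
                    else
                        byte_cnt <= byte_cnt - 1;
                    end if;
                else
                    bit_cnt <= bit_cnt + 1;
                end if;
            end if;
        end if;
    end if;
end process;
```

The CPU then only starts the transfer once and reads the buffered bytes, which is where most of the speedup comes from.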

Probably, with the new module in place, it would have made sense to rewrite INT 13h more seriously, but I was lazy and got away with only the minimum necessary modification. The result was a not very beautiful and far from optimal pile-up, but the speed with the new module still increased almost fivefold:

Next came the partly tedious, partly fascinating process of launching various programs (primarily games) to find out why they did not work (or rather, what was not compatible enough in my computer). The search for the causes could fill a separate large article; I will just give a few examples:
- I had no DMA. It turns out that DMA channel 0 (used for memory refresh on the original PC) is also used by some programs as a counter for measuring short time intervals. I had to emulate the corresponding part of the DMA controller's counters.
- Usually (but not always), reading from a non-existent memory area or I/O port returns the byte FF. Mine returned the opposite, 00. This was not to the liking of one program that checked for the presence of a joystick in exactly this way (and no other), after which it decided the joystick was present and all its buttons were pressed.
- The most original way of detecting a CGA adapter belonged to a program that wrote a certain value to the cursor location register, read it back, and compared it with what it had written (then restored the original value). According to the documentation I have, this register should be write-only, but I changed it to read/write, after which the program calmed down.
- One issue was not related to my computer at all: I spent a lot of time trying to figure out why the simplest old game, Paratrooper, was freezing. It turned out that although the game was old, the file I had was compressed with a self-extracting COM/EXE packer, and the part responsible for unpacking the program at startup contained an instruction that first appeared on the 286. The trouble was that this instruction did not greatly affect the unpacking process and only corrupted occasional bytes (fewer than one in a thousand). These showdowns probably took me the most time.
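The open-bus fix from the second example boils down to changing the default term of the CPU read multiplexer. A hypothetical fragment (the port-decode signal names are mine):

```vhdl
-- Reads from unmapped I/O ports should "float high" and return FF,
-- as on a real PC; my mux originally defaulted to 00.
data_to_cpu <= kbd_data   when kbd_sel   = '1' else
               timer_data when timer_sel = '1' else
               x"FF";  -- default for non-existent ports (was x"00")
```

One line of difference, but it is exactly the kind of detail that crude compatibility probes in old software depend on.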

So, little by little, almost all the games I had began to launch and run without problems; I even tried playing some of them:

While running numerous games, it turned out that my timer module was far from ideal: in most cases the sounds were not quite right. Having decided that I still wanted to try out the Wishbone bus, I chose to hook up the VHDL timer module I mentioned earlier. To begin with, I read the Wishbone description and built a kind of adapter between the Wishbone interface and the 8088 bus; nothing complicated. Unfortunately, the timer did not work. I had to take out the oscilloscope again and see what was going on (first of all, whether the Wishbone signals were being formed correctly).
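The adapter essentially translates one 8088 bus cycle into one Wishbone classic cycle: raise cyc/stb with the address and data, then hold them until the slave answers with ack. A sketch of the idea (the CPU-side signal names are mine):

```vhdl
-- Hypothetical 8088-to-Wishbone bridge for single read/write cycles.
process (clk)
begin
    if rising_edge(clk) then
        if wb_ack_i = '1' then                -- slave replied: end cycle
            wb_cyc_o <= '0';
            wb_stb_o <= '0';
            if wb_we_o = '0' then
                cpu_data_out <= wb_dat_i;     -- latch read data for CPU
            end if;
        elsif cpu_io_strobe = '1' and wb_cyc_o = '0' then
            wb_cyc_o <= '1';                  -- start a Wishbone cycle
            wb_stb_o <= '1';
            wb_we_o  <= cpu_wr;
            wb_adr_o <= cpu_addr(1 downto 0); -- the timer has 4 registers
            wb_dat_o <= cpu_data_in;
        end if;
    end if;
end process;
```

The only subtlety is holding cyc/stb stable until ack arrives; everything else is straight wiring.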

Who would have thought that a great discovery awaited me at this moment... Remember how I suffered from memory failures and was forced to introduce an intermediate register whose necessity I fundamentally could not see? Well, on the oscilloscope screen I got the following picture:

Naturally, the first thing that caught my eye was the terrible ringing on signal 2. Moreover, this ringing had gone from a quantitative problem to a qualitative one. Signal 6 is generated by a one-bit counter whose input is signal 2; in effect, signal 6 is inverted on each rising edge of signal 2. But the oscillogram shows that signal 6 toggled not only on the normal edge of signal 2, but also on the edge of the strongest ringing! That is, on some lines of my circuit the ringing had such amplitude that it could cause false switching of the logic. To say that I was stunned would be an understatement. I could hardly believe that, with all this going on, I had managed to achieve stable operation of the circuit...

After a short analysis of the circuit in light of the new data, it became completely clear to me exactly where the old failures had come from and why that register cured them. Still, something had to be done, since it was precisely signal 2 that I needed for the new timer module. Once again I turned to the experts, and from several tips on the forum I chose the option of cutting the trace and soldering in a resistor. The result was far from ideal, but during several hours of testing I recorded no more false switching caused by ringing:

Unfortunately, this had no effect on the VHDL timer module: it remained silent. After fiddling around for a while, I discovered the cause in a rather unexpected place, the module itself. Moreover, the bug was quite prosaic (and common in programming): the module mishandled one of the edge cases. With a divisor of 0, instead of dividing by the maximum value (65536), it did nothing. All along I had been testing the initialization of channel 0, which is programmed with the maximum divisor in order to obtain a frequency of 18.2 Hz. When I used a divisor of FFFF for the experiment, everything worked.
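For reference: on the 8253, a programmed divisor of 0 means 65536, which is exactly why the BIOS loads channel 0 with 0, giving 1,193,182 Hz / 65536, or roughly 18.2 ticks per second. If the counter is a plain 16-bit down-counter that reloads from the divisor, the edge case handles itself, because unsigned arithmetic wraps. A sketch (names mine):

```vhdl
-- 16-bit down-counter; a divisor of 0 naturally yields a period of
-- 65536 because the count wraps from 0 to xFFFF on the next decrement.
process (clk_pit)                  -- the 1.193182 MHz timer clock
begin
    if rising_edge(clk_pit) then
        tick <= '0';
        if count = 1 then
            tick  <= '1';          -- one pulse every "divisor" clocks
            count <= unsigned(divisor);  -- reload; 0 acts as 65536
        else
            count <= count - 1;    -- wraps through zero
        end if;
    end if;
end process;
```

The buggy module apparently special-cased 0 instead of letting the wrap-around do the work, so channel 0 never ticked at all.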

I even contacted the author of the module, who by then had forgotten he had ever written it. Nevertheless, he helped me find the specific place where the mistake had been made, and I even managed to correct it after a fashion. That particular problem was solved, but others surfaced, so for now I have settled on the first version of the module, in Verilog.

At this point my design was ready enough that I was ripe for the main experiment. The fact is that back in 1986 I read an article in the magazine "In the World of Science" (the Russian edition of Scientific American) about the latest product from Microsoft: the game MS Flight Simulator. Considering that even then I was a computer enthusiast, while at the same time firmly planning to become a pilot, you can imagine what emotions were seething in my head (and in other parts of my body) at the time.

And now, almost 30 years later, I had an irresistible desire to run that historic Flight Simulator on my computer. My interest was further fueled by the fact that in those days two programs were, it seems, used almost officially for compatibility testing: that same Flight Simulator, and Lotus 1-2-3. It was said that they exercised the computer's hardware so intimately that if these programs worked, everything else would work all the more.

In general, I had some doubts (I still knew about certain pitfalls in my design), but I decided to take the risk, especially considering that, of course, I was not actually risking anything. The result is on the screen:

By the way, the mysterious graininess of the picture initially aroused my suspicion; I immediately began to suspect some very tricky way of working with the video adapter that I did not support. In fact, as it turned out, Microsoft was trying to obtain additional colors by dithering, combining dots of the existing colors. I must note that, at a resolution of 320x200, the result was questionable, to put it mildly.

There were no problems launching Lotus 1-2-3 either, so this experiment could be considered complete. Still, I made a number of small improvements and tweaks, after which all the programs I currently have began to launch and run absolutely normally. The only new feature I added after this was EMS. I was haunted by the fact that more than a megabyte of available memory was going to waste (to be honest, I simply wanted to do something else), so I found a description of an EMS board together with its driver and wrote a module that emulates the operation of that board. The driver successfully recognized the memory:
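The essence of EMS emulation, in brief: a 64 KB page frame in the upper memory area is divided into four 16 KB windows, and a page register per window (written by the driver through I/O ports) selects which 16 KB page of the expanded memory appears there. A hypothetical sketch of the address translation (segment, port numbers, and memory size all depend on the specific board being emulated):

```vhdl
-- Hypothetical EMS address translation. cpu_addr is the 16-bit offset
-- within the page-frame segment; ems_addr indexes the expanded memory.
type pagereg_t is array (0 to 3) of unsigned(6 downto 0); -- 128 x 16 KB = 2 MB
signal page_reg : pagereg_t;  -- loaded by the EMS driver via I/O writes

-- Window number = top two offset bits; the page register supplies
-- the upper bits of the physical expanded-memory address.
ems_addr <= page_reg(to_integer(cpu_addr(15 downto 14)))
            & cpu_addr(13 downto 0);
```

Everything else (allocation, page bookkeeping) lives in the driver, which is why a hardware EMS board is so simple.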

The final touch was a redesign of the processor board itself. I did not like the nightmare happening with the waveforms at all, and I also wanted more practice with Eagle. As a result, I laid out a 4-layer printed circuit board, with one internal layer allocated to ground and the other to both supply voltages. In addition, and most significantly, the cables were eliminated: the connectors are placed so that my board plugs directly into the FPGA development board (to be precise, into the GPIO expansion board of the FPGA development board; a sort of matryoshka):

There were also some circuit changes: the 8284 clock generator was removed entirely (I decided it could easily be moved inside the FPGA without the slightest damage to bus-signal compatibility), as was the latch register on the address/data bus (also moved inside the FPGA). A quick check of the waveforms on the new board showed that the signals were now almost perfect:

So, the path from a blinking LED on a solderless breadboard to a perfectly usable computer took a couple of months and brought a huge amount of pleasure, as well as knowledge in a number of areas. The result is a computer with fairly good IBM PC compatibility, which runs all the programs I was not too lazy to obtain, including those considered extremely demanding of hardware compatibility. The computer uses BIOS version 3 from the IBM PC almost unchanged (with the exception of the INT 13h handler).

It is almost impossible to say anything definite about the project's budget. To begin with, what should be counted: only the few ICs (assuming the wiring is done with MGTF wire, and the FPGA board and configuration devices are already on hand), or everything, from the rush production of the boards and the purchase of an FPGA development board specifically for this project, down to a not-so-cheap oscilloscope?

I believe I have indicated the specific IC types and everything else in the article, so anyone can work out what all this would cost in their own variant. Naturally, it is not necessary to use the DE2-115; for reference, here are the FPGA resources required:

It should be noted that a bunch of debugging artifacts still remain in the design, and the code itself has hardly been optimized.

What to do with all this (or whether to do anything at all), I am not entirely sure. In the process it once again became clear that, although a lot can be achieved with enthusiasm and erudition, formal knowledge of the basics would speed everything up, help avoid many pitfalls and, most importantly, let me concentrate on creativity rather than on inventing a bicycle with square wheels. So for now I have a strong desire to fill in the gaps (or rather, gaping holes) in my knowledge of the basics of electronics and circuit design in general, and of VHDL in particular, by some express method. We will see how well that works out; motivation and free time are the perennial problems.