State budgetary educational institution

higher professional education

"Kursk State Medical University"

Ministry of Health of the Russian Federation

(GBOU VPO KSMU Ministry of Health of Russia)

INDEPENDENT WORK

BY DISCIPLINE

"COMPUTER SCIENCE"

"History of the emergence and development of information technologies"

Completed:

1st-year student, group 1

Faculty of Clinical Psychology

Blagov I. A.

Checked by: Sazonov S.Yu.

Kursk - 2015

Introduction

The history of information technology begins long before the emergence of the modern discipline of computer science in the 20th century. Information technology is concerned with methods and means of collecting, processing and transmitting data in order to obtain information of a new quality about the state of an object, process or phenomenon. At different periods of human development, information technology has mattered in its own way and to varying degrees.

In the history of mankind, several stages can be distinguished through which human society has successively passed in its development. The stages differ in the main way society ensures its existence and in the type of resources that play the leading role in that way of life: the gathering-and-hunting, agrarian and industrial stages. Today the most developed countries are at the final phase of the industrial stage and are making the transition to the next stage, called the "information" stage. In such a society information plays the decisive role: the infrastructure of society is formed by the methods and means of collecting, processing, storing and distributing information, and information becomes a strategic resource.

Accordingly, from the second half of the twentieth century the main determining factor in the socio-economic development of the civilized world has been the transition from an "economy of things" to an "economy of knowledge", with a marked increase in the importance and role of information in solving almost all problems of the world community. This is convincing evidence that the scientific and technological revolution is gradually turning into an intellectual and informational one: information is becoming not only a subject of communication but also a profitable commodity and an effective modern means of organizing and managing social production, science, culture, education and the socio-economic development of society as a whole.

Modern achievements in information science, computer technology, operational printing and telecommunications have given rise to a new type of high technology, namely information technology.

The results of scientific and applied research in information science, computer technology and communications have created a solid basis for the emergence of a new branch of knowledge and production: the information industry. The information services industry, computer manufacturing and computerization as a technology of automated information processing are developing successfully around the world. The telecommunications industry has made a qualitative leap of unprecedented scale, from the simplest means of communication and information transfer to complex networks covering millions of consumers and offering a wide range of possibilities for transporting information and interconnecting its users.

This entire complex (the consumer with his tasks, computer science, all technical means of information support, information technology and the information services industry, etc.) constitutes the infrastructure and information space for the implementation of informatization of society.

Information technologies activate and make effective use of society's information resources (scientific knowledge, discoveries, inventions, technologies, best practices), which permits significant savings in other kinds of resources: raw materials, energy, minerals, materials and equipment, human resources and social time. The succession of evolutionary stages in the development of information technology is determined mainly by scientific and technological progress and the appearance of new technical means of information processing. The principal technical means of information-processing technology is the personal computer, which has significantly influenced both the concept of building and using technological processes and the quality of the information obtained after processing.

1. Early history of information technology

The earliest known use of computing devices dates to 2700-2300 BC, when the abacus was widespread in ancient Sumer. It consisted of a board with drawn lines that separated the orders of the number system. Originally the Sumerian abacus was used by drawing lines in sand and placing pebbles on them; modified abaci came to be used much like modern calculators.

Also of interest is the Antikythera Mechanism, considered the earliest known mechanical analogue of a computer. It was intended to calculate astronomical positions. The mechanism was discovered in 1901 in a shipwreck off the Greek island of Antikythera, between Kythira and Crete, and has been dated to about 100 BC. Technological artifacts of similar complexity did not appear again until the 14th century, when mechanical astronomical clocks were invented in Europe.

It is generally accepted that the creation of "calculating machines" began in the 17th century, yet the Antikythera Mechanism was created around 100-80 BC. The device has been called the "ancient Greek computer": what else can one call a machine that, given a date entered with a lever, calculates the positions of the Sun, the Moon and the planets of the solar system?

In a simplified form, a computer can be represented as a data input device, a data processing device (processor), and a data output device. These are exactly the actions that the Antikythera Mechanism performs.

The device uses a differential gear train (not reinvented until the 16th century) and is remarkable for the miniaturization and complexity of its parts. The mechanism consists of more than 30 gears, with teeth forming equilateral triangles. The differential gearing allowed the mechanism to add and subtract angular velocities and thus to compute the synodic lunar cycle by subtracting the Sun's apparent motion from that of the Moon.
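
In round figures the arithmetic such a gear train embodies is simple (a sketch only; the mechanism's actual gear ratios are more involved): the synodic rate is the difference between the sidereal lunar and solar angular rates,

\[ \frac{1}{T_{\text{syn}}} = \frac{1}{T_{\text{Moon}}} - \frac{1}{T_{\text{Sun}}} = \frac{1}{27.32\ \text{d}} - \frac{1}{365.25\ \text{d}} \approx \frac{1}{29.53\ \text{d}}, \]

which yields the familiar lunar-phase month of about 29.5 days.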

Perhaps the Antikythera mechanism was not unique. Cicero, who lived in the 1st century BC, mentions an instrument "recently constructed by our friend Posidonius, which accurately reproduces the movements of the Sun, Moon and five planets." Similar devices are mentioned in other ancient sources.

In the early 9th century the Kitab al-Hiyal (Book of Ingenious Devices), commissioned by the Caliph of Baghdad, described hundreds of mechanical devices drawing on Greek texts that had been preserved in monasteries. This knowledge was later combined with that of European clockmakers.

Mechanical analog computing devices appeared hundreds of years later in the medieval Islamic world. Examples from this period are the equatorium of the inventor Al-Zarqali, the mechanically geared astrolabe of Abu Rayhan al-Biruni and the torquetum of Jabir ibn Aflah. Muslim engineers built a number of automata, including musical machines that could be "programmed" to play different compositions; such devices were developed by the Banu Musa brothers and by Al-Jazari. Muslim mathematicians also made important advances in cryptography and cryptanalysis, such as Al-Kindi's frequency analysis.

New generations brought many improvements to information technology. After John Napier introduced logarithms for computational purposes in the early 17th century, a period of significant progress followed among inventors and scientists creating calculating tools. In 1623 Wilhelm Schickard designed a calculating machine but abandoned the project when the prototype he had begun building was destroyed by fire in 1624. Around 1642 Blaise Pascal, the leading French mathematician, built the first mechanical adding device; its description drew on ideas of the Greek mathematician Heron.

From the adding machines of Pascal and Leibniz until Charles Babbage's small difference engine of 1822, nothing fundamentally new was created in computing. New models of "calculating machines" were built by dozens, if not hundreds, of mechanics in different countries, but these devices qualify as ancestors only of modern calculators. The merit of these inventors lies in popularizing mechanical computation and creating competition, which spurred improvements in design.

2. Development of information technologies in the period from the 14th to the 18th centuries

In the notebooks of the brilliant Italian Leonardo da Vinci (1452-1519), a number of drawings were discovered in modern times that turned out to be a sketch of a summing machine on gear wheels capable of adding 13-digit decimal numbers. Specialists of the famous American company IBM reproduced the machine in metal and confirmed the complete validity of the idea. This adding machine can be considered a seminal milestone in the history of digital computing: it was the first digital adder, a kind of embryo of the future electronic adder that is the most important element of modern computers, though still mechanical, very primitive and manually operated. In those distant years the scientist was probably the only person on Earth who understood the need for devices to ease the labor of calculation.

However, the need was so small that only more than a hundred years after Leonardo's death did another European propose a solution to the problem: the German scientist Wilhelm Schickard (1592-1636), who naturally had not read the great Italian's notebooks. The impulse to develop a machine for summing and multiplying six-digit decimal numbers came from Schickard's acquaintance with the German astronomer Johannes Kepler. Familiar with the great astronomer's work, which was largely computational, Schickard was inspired to help him in his difficult labor. In a letter to Kepler sent in 1623, he gave a drawing of the machine and explained how it worked. Unfortunately, history has not preserved information about the machine's further fate; apparently an early death in the plague that swept Europe prevented the scientist from carrying out his plan.

The inventions of Leonardo da Vinci and Wilhelm Schickard became known only in our time; they were unknown to their contemporaries.

In the 17th century the situation changed. In 1641-1642 the nineteen-year-old Blaise Pascal (1623-1662), then a little-known French scientist, built a working summing machine, the "Pascaline". He built the first one with a single purpose: to help his father with the calculations performed when collecting taxes. Over the next four years he created more advanced models, six- and eight-digit, built on gear wheels and able to add and subtract decimal numbers. About 50 machines were made, and Pascal received a royal privilege for their production, but the Pascalines found no practical use, although much was said and written about them (mainly in France).

Gottfried Leibniz holds a special place in the history of information technology. Gottfried Wilhelm von Leibniz (1646-1716) was a German mathematician, physicist and inventor. He described the binary number system with the digits 0 and 1, created combinatorics as a science, laid the foundations of mathematical logic and developed differential and integral calculus.

Leibniz invented his own design of arithmometer, much better than Pascal's: it could perform multiplication, division, extraction of square and cube roots, and exponentiation.

Leibniz demonstrated his adding machine in 1673 in London at a meeting of the Royal Society. The stepped drum and movable carriage he proposed formed the basis of all subsequent adding machines up to the 20th century. "With the help of Leibniz's machine any boy can perform the most difficult calculations," one French scientist said of the invention.

Later Leibniz described the design of another computing machine, operating in the binary system, that used a prototype of the punched card. Ones and zeros in this imaginary machine were represented by open or closed holes in a moving box, through which balls were to pass and fall into grooves below.

Leibniz's merits, however, are not limited to the creation of an "arithmetic instrument". From his student years to the end of his life he studied the properties of the binary number system, which would later become fundamental to the creation of computers. He attached a certain mystical meaning to it and believed that on its basis one could create a universal language for explaining the phenomena of the world and for use in all sciences, including philosophy. An image of a medal sketched by Leibniz in 1697, explaining the relationship between the binary and decimal number systems, has been preserved.
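
The relationship the medal celebrated fits in one line: a binary numeral encodes a sum of powers of two, so that, for example,

\[ 1101_2 = 1\cdot 2^3 + 1\cdot 2^2 + 0\cdot 2^1 + 1\cdot 2^0 = 13_{10}, \]

and it is precisely this positional notation with only the digits 0 and 1 that later proved so convenient for relay and electronic machines.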

In 1799 in France, Joseph Marie Jacquard (1752-1834) invented a loom in which punched cards were used to set the pattern on the fabric. The initial data were recorded as holes punched in the appropriate places of the card. Thus appeared the first primitive device for storing and entering program information (here controlling the weaving process).

In 1795, also in France, the mathematician Gaspard de Prony (1755-1839), whom the French government had entrusted with the work of converting to the metric system of measures, was the first in the world to develop a technological scheme of computation that divided the labor of mathematicians into three parts. A first group of a few highly qualified mathematicians determined (or developed) the numerical methods needed to solve the problem, reducing the computations to arithmetic operations: addition, subtraction, multiplication, division. A second, somewhat larger group set the sequence of arithmetic operations and the initial data needed to perform them (the "programming"). Carrying out the compiled "program", a sequence of arithmetic operations, required no highly qualified specialists; this most labor-intensive part of the work was entrusted to the third and largest group, the human computers. This division of labor significantly sped up the production of results and increased their reliability. Most importantly, it gave impetus to the further automation of the most labor-intensive (but also simplest) third part of the calculation: the move toward digital computing devices with program control of the sequence of arithmetic operations.

These ideas found their fullest expression in the projects of the English mathematician Charles Babbage: the Difference Engine and the program-controlled Analytical Engine. The mechanical principle of construction and the use of the decimal number system, which complicated the element base, did not allow Babbage to realize his far-reaching plan in full; he had to limit himself to modest models. Otherwise the machine would have been the size of a locomotive, and a steam engine would have been needed to set its mechanisms in motion.

The programs for computing on Babbage's machine compiled by Byron's daughter, Ada Augusta Lovelace (1815-1852), are strikingly similar to the programs later written for the first computers. It is no coincidence that this remarkable woman is called the world's first programmer.

Even more amazing are her statements about the machine’s capabilities:

"There is no end to the line of demarcation limiting the capabilities of the analytical engine. In fact, the analytical engine can be considered as the material and mechanical expression of analysis."

Another outstanding Englishman went unappreciated in his time: George Boole (1815-1864). The algebra of logic he developed, Boolean algebra, found application only in the next century, when a mathematical apparatus was needed for designing computer circuits in the binary number system. The American scientist Claude Shannon "connected" mathematical logic with the binary number system and electrical circuits in his famous thesis (1937).
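
The heart of that connection can be stated in one line. If a closed contact is read as 1 and an open one as 0, then contacts in series realize conjunction, contacts in parallel realize disjunction, and identities such as De Morgan's law allow whole relay circuits to be simplified algebraically:

\[ f_{\text{series}}(x,y) = x \land y, \qquad f_{\text{parallel}}(x,y) = x \lor y, \qquad \overline{x \land y} = \overline{x} \lor \overline{y}. \]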

3. History of the development of information technology from the 18th to the 20th centuries

Sixty-three years after the death of Charles Babbage, someone was found who took upon himself the task of building a machine similar in principle to the one to which Babbage had given his life. It was a German student, Konrad Zuse (1910-1985). He began work on the machine in 1934, a year before receiving his engineering diploma.

He proved a worthy successor to Leibniz and Boole: he brought back to life the already forgotten binary number system and used something similar to Boolean algebra in designing his circuits. In 1937 the Z1 (for "Zuse 1") was ready and working.

Like Babbage's machine, it was purely mechanical. The use of the binary system worked a miracle: the machine occupied only two square meters on a table in the inventor's apartment. The word length was 22 binary digits, and operations were performed in floating point: 15 bits for the mantissa and its sign, 7 for the exponent. The memory (also on mechanical elements) held 64 words (versus 1000 in Babbage's design, which also reduced the machine's size). Numbers and the program were entered manually. A year later the machine gained a device for entering data and programs on punched film strip, and the mechanical arithmetic unit was replaced by one built on telephone relays; the Austrian engineer Helmut Schreyer, a specialist in electronics, helped Zuse with this. The improved machine was named Z2. In 1941 Zuse, with Schreyer's participation, built a program-controlled relay computer, the Z3, containing 2000 relays and repeating the main characteristics of the Z1 and Z2. It became the world's first fully relay-based digital computer with program control and was operated successfully; its dimensions were only slightly larger than those of its predecessors.
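
In modern notation the Z1's floating-point word can be sketched as follows (the split of the 15 bits into 1 sign bit and 14 mantissa bits is an assumption; the source gives only the totals):

\[ x = \pm\, m \cdot 2^{e}, \qquad \underbrace{1 + 14}_{\text{sign} + \text{mantissa}} + \underbrace{7}_{\text{exponent}} = 22\ \text{bits}. \]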

As early as 1938 Schreyer had proposed building the Z2 on vacuum tubes instead of telephone relays; Zuse did not approve. During the Second World War, however, Zuse himself concluded that a tube version of the machine was possible. When the two presented the idea to a circle of scientists, they were ridiculed and condemned: the figure they named, 2000 vacuum tubes needed to build the machine, was enough to cool the hottest heads. Only one listener supported the idea. Undeterred, they presented their proposals to the military department, pointing out that the new machine could be used to decrypt Allied radio messages.

But the chance to create in Germany not only the first relay computer but also the world's first electronic computer was missed.

By this time Zuse had organized a small company, and through its efforts two specialized relay machines, S1 and S2, were built: the first for calculating the wings of "flying torpedoes", the projectile aircraft fired at London, and the second for controlling them. The S2 turned out to be the world's first control computer.

By the end of the war Zuse built one more relay computer, the Z4, the only one of his machines to survive; the rest were destroyed in the bombing of Berlin and of the factories where they were built.

Thus Konrad Zuse set several milestones in the history of computing: he was the first in the world to use the binary number system in building a computer (1937), and he created the world's first program-controlled relay computer (1941) and a specialized digital control computer (1943).

Events in the United States developed differently. In 1944 the Harvard University scientist Howard Aiken (1900-1973) created the first relay-mechanical digital computer in the USA, the MARK-1, at the time considered the first in the world. In its characteristics (performance, memory capacity) it was close to the Z3 but differed significantly in size: 17 m long, 2.5 m high, 5 tons in weight, 500 thousand mechanical parts.

The machine used the decimal number system. As in Babbage's machine, gears were used in the counters and memory registers, while control and communication between them were carried out by relays, of which there were more than 3000. Aiken did not hide how much he had borrowed from Charles Babbage's design: "If Babbage were alive, I would have nothing to do," he said. A remarkable quality of the machine was its reliability: installed at Harvard University, it worked there for 16 years.

After MARK-1 Aiken built three more machines (MARK-2, MARK-3 and MARK-4), also on relays rather than vacuum tubes, explaining the choice by the unreliability of the latter.

In 1941 the staff of the ballistic research laboratory at the Aberdeen Proving Ground turned to the Moore School of Electrical Engineering at the nearby University of Pennsylvania for help in compiling firing tables for artillery, counting on the school's Bush differential analyzer, a bulky mechanical analog computing device. A physicist at the school, John Mauchly (1907-1986), who was interested in meteorology and had built several simple digital devices on vacuum tubes for problems in that field, proposed something different: in August 1942 he drew up and sent to the US military department a proposal to create a powerful (for that time) computer on vacuum tubes. These truly historic five pages were shelved by military officials, and Mauchly's proposal would probably have come to nothing had the proving-ground staff not taken an interest in it. They secured funding for the project, and in April 1943 a contract was signed between the proving ground and the University of Pennsylvania to build a computer named the Electronic Numerical Integrator and Computer (ENIAC). 400 thousand dollars were allocated for it; about 200 people took part in the work, including several dozen mathematicians and engineers.

The work was led by Mauchly and the talented electronics engineer Presper Eckert (1919-1995). It was Eckert who suggested using vacuum tubes that had been rejected by military inspectors (they could be obtained free of charge); given that nearly 20 thousand tubes were needed and the funds allocated were very limited, this was a wise decision. He also proposed lowering the filament voltage of the tubes, which significantly increased the reliability of their operation. The hard work ended at the end of 1945; ENIAC was submitted for testing and passed successfully, and at the beginning of 1946 the machine began computing real problems. In size it was more impressive than MARK-1: 26 m long, 6 m high, 35 tons in weight. But it was not the size that was striking, it was the performance: 1000 times higher than that of MARK-1. Such was the result of using vacuum tubes!

In 1942-1943, at the height of the Second World War, the world's first specialized digital vacuum-tube computer, Colossus, was built in the strictest secrecy at Bletchley Park near London, with the participation of Alan Turing, to decrypt secret German radio messages. It completed its task successfully. One participant in its creation assessed Turing's contribution thus: "I do not want to say that we won the war thanks to Turing, but I take the liberty of saying that without him we might have lost it." After the war the scientist took part in the creation of a universal tube computer; his sudden death at the age of 41 prevented him from realizing his outstanding creative potential in full. In Turing's memory a prize bearing his name was established for outstanding work in mathematics and computer science. The Colossus computer has been restored and is kept in the museum at Bletchley Park, where it was created.

In practical terms, however, Mauchly and Eckert really were the first who, having grasped the advantage of storing the program in the machine's operating memory (independently of Alan Turing), embodied it in a real machine, their second machine, EDVAC. Its development was delayed, and it entered service only in 1951, by which time a computer with a program stored in memory had already been running in England for two years. In 1946, at the height of the work on EDVAC, Mauchly gave a course of lectures on the principles of computer construction at the University of Pennsylvania. Among the listeners was the young scientist Maurice Wilkes from the University of Cambridge, the same university where a hundred years earlier Charles Babbage had proposed his project of a program-controlled digital computer. Returning to England, Wilkes managed in a remarkably short time to build the EDSAC (Electronic Delay Storage Automatic Calculator), a serial-operation machine with memory on mercury delay lines, using the binary number system and a program stored in memory. The machine began working in 1949; thus Wilkes was the first in the world to create a computer with a stored program. In 1951 he also proposed microprogram control of operations. EDSAC became the prototype of the world's first serial commercial computer, LEO (1953). Maurice Wilkes (1913-2010) was the last surviving computer pioneer of the older generation, those who created the first computers. Mauchly and Eckert tried to organize their own company, but it had to be sold because of financial difficulties; their new machine, UNIVAC, designed for commercial calculations, became the property of Remington Rand and contributed greatly to its success.

Although the ENIAC patent of Mauchly and Eckert was later invalidated, the machine's creation was certainly a golden milestone in the development of digital computing, marking the transition from mechanical and electromechanical to electronic digital computers.

In 1996, at the initiative of the University of Pennsylvania, many countries celebrated the 50th anniversary of computer science, linking the event to the 50th anniversary of ENIAC. There were many reasons for this: no computer before or since has caused such resonance in the world or had such influence on the development of digital computing as the remarkable brainchild of Mauchly and Eckert.

In the second half of the twentieth century technical means developed far faster still, and the fields of software, new methods of numerical calculation and the theory of artificial intelligence developed more rapidly yet.

In 1995 the American computer science professor John Lee of the University of Virginia published the book Computer Pioneers, including among the pioneers all who made a significant contribution to the development of hardware, software, computing methods and the theory of artificial intelligence, from the appearance of the first primitive means of information processing to the present day.

Conclusion

Summarizing all of the above, we can outline some stages in the development of information technology:

· The initial stage of IT development (1950-1960s) is characterized by the fact that the interaction between humans and computers is based on machine languages. The computer is available only to professionals.

· The next stage (1960-1970s) is characterized by the creation of operating systems. Several tasks formulated by different users are processed together; the main goal is maximum utilization of machine resources.

· The third stage (1970-1980s) is characterized by a change in the criterion of data-processing efficiency: the human resources needed to develop and maintain software became the main cost. This stage saw the spread of minicomputers and an interactive mode of operation for several users.

· The fourth stage (1980-1990s) brought a new qualitative leap in software development technology. The center of gravity of technological solutions shifts to creating means of interaction between users and computers in producing a software product. The key element of the new information technology is the representation and processing of knowledge: knowledge bases and expert systems appear, and personal computers spread everywhere.

There are different ways of dividing computer history into periods, but essentially there are only two: before and after the use of transistors in computers. The middle of the 20th century can be called the vacuum-tube period: all "advanced" computers of the time were built on vacuum tubes, succeeding their mechanical and electromechanical predecessors.

In December 1947 the Bell Labs employees John Bardeen, Walter Brattain and William Shockley created the first working point-contact transistor; in 1956 the three received the Nobel Prize in Physics for the discovery. Yet it was not until 1956 that the first transistor computer was built.

The creation of computer networks began in the late 1950s, but the Internet as we know it appeared only in the early 1990s.

The history of information technology goes back to ancient times. Its first stage can be considered the invention of the simplest digital device, the abacus. The abacus was invented independently and almost simultaneously in Ancient Greece, Ancient Rome, China, Japan and Rus'.

In Ancient Greece the abacus, also called the "Salamis board" (after the island of Salamis in the Aegean Sea), was a board strewn with sand, with grooves in which numbers were marked by pebbles. The first groove stood for units, the second for tens, and so on. If during counting a groove accumulated ten pebbles, they were removed and one pebble was added to the next groove. In Rome the abacus took a different form: the wooden boards were replaced with marble, and the counters were marble balls as well.

In China the abacus, the suan-pan, differed slightly from the Greek and Roman ones in using a five-based representation of the digits: the lower part held rows of five unit beads, the upper part rows of two "five" beads. To show, say, the number eight, one five-bead and three unit beads were moved. In Japan there was a similar device, the soroban.

In Rus' the abacus was much simpler: a heap of units and a heap of tens made of bones or stones. But in the 15th century the "board abacus" became widespread: a wooden frame with horizontal cords on which the counters were strung.

Ordinary abaci were the ancestors of modern digital devices. However, while some objects of the surrounding material world lent themselves to direct, piece-by-piece counting, others required the preliminary measurement of numerical quantities. Accordingly, two directions arose historically in the development of computing and computer technology: digital and analog.

The analog direction, based on calculating an unknown physical object (process) by analogy with the model of a known one, saw its greatest development from the late 19th to the mid-20th century. Its founder can be considered the author of logarithmic calculation, the Scottish baron John Napier, who in 1614 published the treatise "Description of the Wonderful Table of Logarithms" (Mirifici Logarithmorum Canonis Descriptio). Napier not only gave the theoretical justification but also compiled a practical table of logarithms.

The essence of Napier's invention is that to every number there corresponds its logarithm, the exponent to which a fixed base must be raised to obtain that number. The invention simplified multiplication and division, since to multiply numbers it is enough to add their logarithms.
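
The labor saving rests on a single identity: the logarithm of a product equals the sum of the logarithms of the factors. For example, to multiply 2 by 3 with base-10 tables, one adds the tabulated values and looks up the antilogarithm:

\[ \log(ab) = \log a + \log b; \qquad \log_{10}2 \approx 0.3010, \quad \log_{10}3 \approx 0.4771, \quad 0.3010 + 0.4771 = 0.7781 \approx \log_{10}6. \]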

In 1617 Napier invented a way to multiply numbers using rods, known as Napier's bones. The device consisted of rods divided into segments, which could be arranged so that, by adding the numbers in horizontally adjacent segments, one obtained the result of multiplying those numbers.

Somewhat later the Englishman Henry Briggs compiled the first table of decimal logarithms. On the basis of the theory and tables of logarithms the first slide rules were created. In 1620 the Englishman Edmund Gunter plotted logarithms of numbers and of trigonometric quantities parallel to one another on a special plate used with the then-popular proportional compass (the "Gunter scales"). In 1623 William Oughtred invented the rectangular slide rule, and in 1630 Richard Delamain the circular one. In 1775 the librarian John Robertson added a "runner" to the rule, making it easier to read numbers from different scales. Finally, in 1851-1854 the Frenchman Amédée Mannheim radically changed the design, giving the rule an almost modern look. The complete dominance of the slide rule lasted until the 1920s-1930s, when electric adding machines appeared that allowed simple arithmetic to be done with far greater accuracy. The slide rule gradually lost ground, yet it proved indispensable for complex trigonometric calculations and has therefore survived and remains in use.

Most users of a slide rule could carry out basic calculations successfully, but complex operations, calculating integrals and derivatives, moments of functions and the like, which are performed in several stages by special algorithms and require good mathematical preparation, caused significant difficulty. This gave rise in its time to a whole class of analog devices intended to compute specific mathematical quantities for users not deeply versed in higher mathematics. From the early to the mid-19th century there appeared the planimeter (for calculating the area of plane figures), the curvimeter (for determining the length of curves), the differentiator, the integrator, the integraph (for graphical results of integration), the integrimeter (for integrating graphs) and other devices. The author of the first planimeter (1814) was the inventor Hermann; in 1854 the Amsler polar planimeter appeared. The Coradi integrator was used to calculate the first and second moments of a function. There were also universal block sets, such as the combined integrator KI-3, from which users could assemble the device they required.

The digital direction of development proved the more promising and today forms the basis of computer hardware and technology. As early as the beginning of the 16th century Leonardo da Vinci made a sketch of a 13-digit adding device with ten-toothed wheels. Although a working device from these drawings was built only in the 20th century, the feasibility of Leonardo's design was confirmed.

In 1623 Professor Wilhelm Schickard, in letters to Johannes Kepler, described the design of a calculating machine, the so-called "counting clock". This machine, too, did not survive, but a working model has now been created from the description.

The first mechanical digital machine actually built that could sum numbers with carrying between digits was created by the French philosopher and mechanic Blaise Pascal in 1642. Its purpose was to ease the work of Pascal's father, a tax official. The machine looked like a box with numerous gears, among them the main counting gear, which was connected by a ratchet mechanism to a lever; deflecting the lever entered single-digit numbers into the counter and summed them. Calculations with multi-digit numbers on such a machine were rather difficult.

In 1657 two Englishmen, R. Bissaker and S. Partridge, independently of one another developed the rectangular slide rule, whose design has remained essentially unchanged to this day.

In 1673 the famous German philosopher and mathematician Gottfried Wilhelm Leibniz invented a mechanical calculator, a more advanced calculating machine capable of the basic arithmetic operations: it could add, subtract, multiply, divide and extract square roots.

In 1700 Charles Perrault published his brother's book "A Collection of a Large Number of Machines of Claude Perrault's Own Invention", which describes a summing machine with racks instead of gears, the "rhabdological abacus". The name combines the ancient "abacus" with "rhabdology", the medieval art of performing arithmetic with numbered rods.

In 1703 Gottfried Wilhelm Leibniz, continuing his series of works, wrote the treatise "Explication de l'Arithmétique Binaire" on the use of the binary number system in computing devices. Later, in 1727, Jacob Leupold's calculating machine was created on the basis of Leibniz's work.

In 1723 the German mathematician and astronomer Christian Ludwig Gersten created an arithmetic machine that calculated the quotient and the number of successive addition operations needed in multiplying numbers; it also made it possible to check the correctness of data entry.

In 1751 the Frenchman Pereire, building on the ideas of Pascal and Perrault, invented an arithmetic machine that was more compact than other devices, since its counting wheels sat not on parallel axes but on a single axis running through the whole machine.

In 1820 industrial production of digital adding machines began, pioneered by the Frenchman Charles Xavier Thomas de Colmar. In Russia the first adding machines of this type were Bunyakovsky's self-calculators (1867). In 1874 the St. Petersburg engineer Willgodt Odhner substantially improved the design by using wheels with retractable teeth ("Odhner wheels") to enter numbers; his adding machine made it possible to perform up to 250 operations on four-digit numbers per hour.

It is quite possible that the development of digital computing would have remained at the level of small machines had it not been for the discovery of the Frenchman Joseph Marie Jacquard, who at the beginning of the 19th century used a card with punched holes, the punched card, to control a weaving loom. Jacquard's machine was programmed with a whole deck of punched cards, each controlling one stroke of the shuttle, so that to move to a new pattern the operator simply replaced one deck with another. Scientists sought to use this discovery to create a fundamentally new calculating machine that would perform operations without human intervention.

In 1822 the English mathematician Charles Babbage built a program-controlled calculating machine, a prototype of today's input and printing peripheral devices, consisting of manually rotated gears and rollers.

At the end of the 1880s Herman Hollerith, an employee of the US Census Bureau, developed a statistical tabulator capable of automatically processing punched cards. The tabulator's creation opened the production of a new class of digital punched-card (calculating-analytical) machines, distinguished from the class of small machines by their original system of entering data from punched cards. By the middle of the 20th century punched-card machines were produced by IBM and Remington Rand as quite complex punched-card installations, including key punches (for punching the cards), verifiers (for re-punching and checking for mismatched holes), sorters (for arranging cards into groups by given characteristics), collators (for finer arrangement of cards and compiling function tables), tabulators (for reading cards and calculating and printing results) and multipliers (for multiplying numbers recorded on cards). The best installations processed up to 650 cards per minute, and the multiplier could multiply 870 eight-digit numbers in an hour. The most advanced model, the IBM 604 electronic calculating punch released in 1948, had a programmable panel of data-processing commands and could carry out up to 60 operations on each punched card.

At the beginning of the 20th century adding machines with keys for entering numbers appeared. The increased automation of their operation made possible automatic counting machines, the so-called small calculating machines, with an electric drive and automatic execution of up to 3 thousand operations on three- and four-digit numbers per hour. In the first half of the 20th century small calculating machines were produced on an industrial scale by Friden, Burroughs, Monroe and others. A variety of small machines were the bookkeeping-and-writing machines produced in Europe by Olivetti and in the USA by National Cash Register (NCR). In Russia during this period the "Mercedes" were widespread: bookkeeping machines designed for entering data and calculating final balances on synthetic accounting accounts.

On the basis of the ideas and inventions of Babbage and Hollerith, the Harvard University professor Howard Aiken was able in 1937-1943 to create a higher-level calculating-punching machine called the Mark-1, which ran on electromagnetic relays. In 1947 a machine of this series, the Mark-2, appeared, containing 13 thousand relays.

At about the same time the theoretical prerequisites and the technical possibility arose for building a more advanced machine on vacuum tubes. In 1943 staff of the University of Pennsylvania (USA) under the leadership of John Mauchly and Presper Eckert, with the participation of the famous mathematician John von Neumann, began developing such a machine. The result of their joint efforts was the vacuum-tube computer ENIAC (1946), containing 18 thousand tubes and consuming 150 kW of electric power. While the tube machine was being built, John von Neumann published a report (1945) that is one of the most important scientific documents in the theory of computer development: it substantiated the principles of design and operation of universal computers of the new generation, absorbing the best of what had been created by many generations of scientists, theorists and practitioners.

This led to the first generation of computers, characterized by vacuum-tube technology and memory systems on mercury delay lines, magnetic drums and Williams cathode-ray tubes. Data and stored programs were entered from punched tapes, punched cards and magnetic tapes, and printing devices served for output. The performance of first-generation computers did not exceed 20 thousand operations per second.

The further development of digital computing proceeded rapidly. In 1949 the English researcher Maurice Wilkes built the first computer embodying von Neumann's principles. Tube machines were produced on an industrial scale until the mid-1950s, but research in electronics opened new prospects, with the United States in the leading position. In December 1947 Walter Brattain and John Bardeen of Bell Labs invented the transistor, and in 1954 Gordon Teal of Texas Instruments made a transistor from silicon. From 1955 computers based on transistors began to be produced: smaller, faster and less power-hungry than tube machines. They were still assembled by hand, under a microscope.

The use of transistors marked the transition to the second generation of computers. Transistors replaced vacuum tubes, and computers became more reliable and faster (up to 500 thousand operations per second). Functional devices also improved: magnetic-tape units and memory on magnetic disks appeared.

In 1958 the first integrated microcircuit was invented (Jack Kilby, Texas Instruments), and the first industrial integrated circuit (chip) was created by Robert Noyce, who subsequently (1968) founded the world-famous company Intel (INTegrated ELectronics). Computers based on integrated circuits, in production from 1960, were faster and smaller still.

In 1969 Datapoint researchers reached the important conclusion that a computer needed a single central arithmetic-logic unit that could control computations, programs and devices: a microprocessor. Datapoint developed fundamental technical solutions for such a device and, together with Intel, began its industrial development around 1970. The first results were not entirely successful: the Intel microprocessor was much slower than expected, and the collaboration between Datapoint and Intel ended.

In 1964 third-generation computers appeared, using electronic circuits of small and medium integration (up to 1000 components per chip). From that time on, designers began planning not a single computer but a whole software-compatible family of computers. Examples of the third generation are the American IBM System/360 and the Soviet ES-1030 and ES-1060. In the late 1960s minicomputers appeared, and in 1971 the first microprocessor. A year later Intel released the first widely known microprocessor, the Intel 8008, and in April 1974 the second-generation microprocessor Intel 8080.

Since the mid-70s. fourth generation computers were developed. They are characterized by the use of large and ultra-large integrated circuits (up to a million components per chip). The first fourth-generation computers were produced by Amdahl Corp. These computers used high-speed integrated circuit memory systems with a capacity of several megabytes. When turned off, the RAM data was transferred to disk. When turned on, it booted up automatically. The performance of fourth generation computers is hundreds of millions of operations per second.

Also in the mid-70s, the first personal computers appeared. The further history of computers is closely connected with the development of microprocessor technology. In 1975, the first mass-produced personal computer Altair was created based on the Intel 8080 processor. By the end of the 70s, thanks to the efforts of Intel, which developed the latest microprocessors Intel 8086 and Intel 8088, the prerequisites arose for improving the computing and ergonomic characteristics of computers. During this period, the largest electrical engineering corporation, IBM, entered into competition in the market and tried to create a personal computer based on the Intel 8088 processor. In August 1981, the IBM PC computer appeared, which quickly gained enormous popularity. The successful design of the IBM PC predetermined its use as the standard for personal computers at the end of the 20th century.

Since 1982, development of fifth generation computers has been underway. Their basis is an orientation towards knowledge processing. Scientists are confident that knowledge processing, which is unique to humans, can also be carried out by a computer in order to solve problems and make adequate decisions.

In 1984, Microsoft introduced the first samples of the Windows operating system. Americans still consider this invention one of the outstanding discoveries of the 20th century.

The proposal made in March 1989 by Tim Berners-Lee, an employee of the International European Research Center (CERN), turned out to be important. The essence of the idea was to create a new distributed information system called the World Wide Web. A hypertext-based information system could combine CERN's information resources (databases of reports, documentation, postal addresses, etc.). The project was adopted in 1990.

Lecture 1. The concept of information technology.

In the early stages of history, people needed coded communication signals to coordinate joint actions. The human brain solved this problem without artificially created tools: human speech developed. Speech was also the first carrier of knowledge, which was accumulated and passed from generation to generation in the form of oral stories. Humanity's natural capacity for accumulating and transmitting knowledge received its first technological support with the creation of writing. The improvement of information media continues to this day: stone - bone - clay - papyrus - silk - paper, magnetic and optical media - silicon - ...

Writing became the first historical stage of information technology. The second stage was the emergence of printing, which stimulated the development of science and accelerated the accumulation of professional knowledge. The cycle knowledge - science - social production - knowledge closed, and the spiral of technological civilization began to unwind at breakneck speed. Printing created the informational prerequisites for the growth of productive forces. The information revolution proper, however, is associated with the creation of computers in the late 1940s, which opened the era of information technology in the modern sense.

A very important property of information technology is that information serves it not only as a product but also as a raw material: electronic modeling of the real world on a computer requires processing a far greater volume of information than the final result contains. The development of information technology can be divided into stages, each characterized by a particular feature.

1. At the initial stage of the development of information technologies (1950s-1960s), interaction between humans and computers was based on machine languages. Computers were accessible only to professionals.

2. At the next stage (1960s-1970s), operating systems were created. Several tasks formulated by different users were processed together; the main goal was to maximize the utilization of machine resources.

3. The third stage (1970s-1980s) is characterized by a change in the criterion of data-processing efficiency: the main cost became the human labor spent on developing and maintaining software. This stage saw the proliferation of minicomputers and an interactive mode of operation by several users at once.

4. The fourth stage (1980s-1990s) brought a new qualitative leap in software development technology. Its center of gravity shifted to creating the means of interaction between users and computers in the making of a software product, and the key element of the new information technology became the representation and processing of knowledge. Personal computers spread universally. Note that the generations of computers have evolved at a steady pace of roughly ten years per generation, and forecasts of the time suggested that this pace would hold until the beginning of the 21st century. Each change of generations demands retraining and a radical restructuring of the thinking of specialists and users, a change of equipment, and the creation of ever more mass-produced computer technology. Information technology, as an advanced field of science and technology, sets the rhythm of technical development for society as a whole. Investments in Internet infrastructure and services caused rapid growth of the IT industry in the late 1990s.

The history of information technology goes back to ancient times. Its first stage can be considered the invention of the simplest digital device, the abacus. The abacus was invented independently and almost simultaneously in Ancient Greece, Ancient Rome, China, Japan, and Rus'.

In Ancient Greece the abacus was known as the abax, or the "Salamis board" (after the island of Salamis in the Aegean Sea). It was a sand-strewn board with grooves in which numbers were marked out with pebbles. The first groove held units, the second tens, and so on. When ten pebbles accumulated in any groove during counting, they were removed and one pebble was added to the next groove. In Rome the abacus took a different form: the wooden boards were replaced with marble ones, and the pebbles with marble balls.

The Chinese abacus, the suan-pan, differed somewhat from the Greek and Roman ones: it was based not on the number ten but on the number five. In the upper part of the suan-pan were rows of two beads, each worth five, and in the lower part rows of five beads, each worth one. To show, say, the number eight, one bead was moved in the upper part and three in the lower. In Japan there was a similar device, called the soroban.

In Rus' the abacus was much simpler: a pile of units and a pile of tens counted out with bones or stones. In the 15th century, however, the "board abacus" became widespread: a wooden frame with horizontal cords on which bones were strung.

Ordinary abacuses were the ancestors of modern digital devices. However, while some objects of the surrounding material world lend themselves to direct, piece-by-piece counting, others require the preliminary measurement of numerical values. Accordingly, two directions of development emerged historically in computing and computer technology: digital and analog.

The analog direction, based on determining an unknown physical quantity (process) by analogy with the model of a known one, received its greatest development from the late 19th to the mid-20th century. The founder of the analog direction was the inventor of logarithms, the Scottish baron John Napier, who in 1614 published the treatise "Mirifici Logarithmorum Canonis Descriptio" ("A Description of the Wonderful Table of Logarithms"). Napier not only substantiated the theory of logarithms but also compiled practical tables of them.

The principle of Napier's invention is that to every number there corresponds its logarithm: the exponent to which a fixed base must be raised to obtain that number. The invention greatly simplified multiplication and division, since to multiply two numbers it is enough to add their logarithms.
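In modern notation (a standard restatement rather than Napier's own formulation), the identity behind the method, together with a worked instance, reads:

```latex
\log_b(xy) = \log_b x + \log_b y,
\qquad\text{e.g.}\quad
\log_{10}(20 \cdot 50) = \log_{10}20 + \log_{10}50 \approx 1.301 + 1.699 = 3,
\quad\text{so } 20 \cdot 50 = 10^{3} = 1000.
```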

In 1617 Napier also invented a method of multiplying numbers with the help of rods ("Napier's bones"). The device consisted of rods divided into segments, which could be arranged so that, by adding the figures in horizontally adjacent segments, one obtained the result of multiplying the corresponding numbers.
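As an illustration only (the function name and layout are invented for this sketch, not a historical reconstruction), the following Python fragment mimics how a row of Napier's rods is read: each rod supplies a two-digit partial product, and diagonally adjacent figures are added with carries.

```python
# Sketch of reading Napier's rods: multiply a number by a single digit.
# Each rod contributes the two-digit product of one digit of n with `digit`;
# the answer is read off by adding diagonally adjacent figures with carries.

def napier_multiply(n: int, digit: int) -> int:
    rods = [d * digit for d in map(int, str(n))]  # row `digit` of each rod
    figures, carry = [], 0
    for p in reversed(rods):                      # read from the rightmost rod
        units = p % 10 + carry                    # diagonal addition
        figures.append(units % 10)
        carry = units // 10 + p // 10             # tens figure joins next diagonal
    if carry:
        figures.append(carry)
    return int("".join(map(str, reversed(figures))))

assert napier_multiply(4896, 7) == 4896 * 7       # 34272, as read off the rods
```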

Somewhat later the Englishman Henry Briggs compiled the first table of decimal logarithms. On the basis of the theory and tables of logarithms the first slide rules were created. In 1620 the Englishman Edmund Gunter plotted logarithms of numbers and of trigonometric quantities parallel to one another on a special plate for use with the then-popular proportional compass (the so-called "Gunter scale"). In 1623 William Oughtred invented the rectangular slide rule, and in 1630 Richard Delamain the circular one. In 1775 the librarian John Robertson added a "slider" to the rule, making it easier to read numbers off different scales. Finally, in 1851-1854 the Frenchman Amédée Mannheim radically changed the design, giving the slide rule an almost modern appearance. The complete dominance of the slide rule lasted until the 1920s-1930s, when electric adding machines appeared that allowed simple arithmetic to be carried out with far greater accuracy. The slide rule gradually lost ground, yet it proved indispensable for complex trigonometric calculations and therefore remained in use for a long time afterwards.
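A slide rule multiplies by physically adding lengths proportional to logarithms. The snippet below (an illustrative sketch, not part of the source text; the function name is invented) imitates this, rounding the reading to roughly the three significant figures a real rule delivered:

```python
import math

def slide_rule_multiply(a: float, b: float) -> float:
    """Multiply by adding logarithmic scale lengths, as a slide rule does."""
    length = math.log10(a) + math.log10(b)   # slide one log scale along the other
    reading = 10 ** length                   # read the product off the scale
    return float(f"{reading:.3g}")           # ~3 significant figures of accuracy

print(slide_rule_multiply(3.14, 2.72))       # 8.54, versus the exact 8.5408
```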

Most people who use a slide rule can perform basic calculations successfully. However, complex operations, such as calculating integrals, derivatives, and moments of functions, are carried out in several stages by special algorithms and require good mathematical preparation, and so cause significant difficulty. This led in its time to the appearance of a whole class of analog devices intended to compute specific mathematical quantities for users not versed in higher mathematics. In the early-to-mid 19th century there appeared the planimeter (for calculating the area of plane figures), the curvimeter (for determining the length of curves), the differentiator, the integrator, the integraph (for graphical integration), the integrimeter (for integrating graphs), and other such devices. The first planimeter (1814) was built by the inventor Hermann; in 1854 the Amsler polar planimeter appeared. The Coradi integrator was used to compute the first and second moments of a function. There were also universal sets of blocks, for example the combined integrator KI-3, from which users could assemble a device to suit their own needs.

The digital direction of development proved more promising and today forms the basis of computer hardware and technology. As early as the beginning of the 16th century Leonardo da Vinci sketched a 13-digit adding device with ten-toothed wheels. Although a working device based on these drawings was built only in the 20th century, the feasibility of Leonardo da Vinci's design was thereby confirmed.

In 1623 Professor Wilhelm Schickard, in letters to J. Kepler, described the design of a calculating machine, the so-called "counting clock." This machine, too, was never completed, but a working model has since been built from the description.

The first mechanical digital machine actually built, capable of summing numbers with carries between digits, was created by the French philosopher and mechanic Blaise Pascal in 1642. Its purpose was to ease the work of Pascal's father, a tax collector. The machine looked like a box with numerous gears, chief among them the counting gear, which was connected through a ratchet mechanism to a lever; deflecting the lever entered a single-digit number into the counter and summed it. Carrying out calculations with multi-digit numbers on such a machine was rather difficult.
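The carry between digit wheels is the heart of Pascal's design. A toy model of decimal wheels with ripple carry (a minimal sketch under modern assumptions, not a description of Pascal's actual mechanism) might look like this:

```python
# Toy model of Pascaline-style counting wheels: wheels[0] is the units wheel,
# each wheel holds 0-9, and an overflow advances the next wheel (ripple carry).

def add_digit(wheels: list[int], digit: int) -> None:
    carry = digit
    for i in range(len(wheels)):
        total = wheels[i] + carry
        wheels[i] = total % 10      # this wheel's new position
        carry = total // 10         # overflow pushed to the next wheel
        if carry == 0:
            break

wheels = [0, 0, 0]                  # a three-digit counter set to 000
for d in (7, 8, 9):                 # enter 7, then 8, then 9 with the lever
    add_digit(wheels, d)
print(wheels)                       # [4, 2, 0] -> the counter reads 024
```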

In 1657 two Englishmen, R. Bissaker and S. Partridge, developed a rectangular slide rule completely independently of each other. Its basic design has survived to this day.

In 1673 the famous German philosopher and mathematician Gottfried Wilhelm Leibniz built a mechanical calculator, a more advanced calculating machine capable of performing the basic arithmetic operations: it could add, subtract, multiply, divide, and assist in extracting square roots.

In 1700 Charles Perrault published his brother's book, "A Collection of a Large Number of Machines of Claude Perrault's Own Invention." Among them is a summing machine with racks instead of gears, called the "rhabdological abacus." The name combines two words: the ancient "abacus" and "rhabdology," the medieval science of performing arithmetic operations with the help of small sticks bearing numbers.

In 1703 Gottfried Wilhelm Leibniz, continuing this line of work, wrote the treatise "Explication de l'Arithmétique Binaire" on the use of the binary number system in calculation. Later, in 1727, Jacob Leupold's calculating machine was built on the basis of Leibniz's work.
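The essence of the treatise can be restated in a few lines of Python (a modern illustration, not Leibniz's notation): any number is a sum of powers of two, so the two symbols 0 and 1 suffice to write it.

```python
# Binary positional notation: 1703 (the year of the treatise) in base 2.
n = 1703
bits = bin(n)[2:]                                    # '11010100111'
assert n == sum(int(b) << i for i, b in enumerate(reversed(bits)))
print(f"{n} = {bits} (base 2)")
```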

In 1723 the German mathematician and astronomer Christian Ludwig Gersten created an arithmetic machine that calculated the quotient and the number of successive additions required when multiplying numbers; it also made it possible to check the correctness of data entry.

In 1751 the Frenchman Perera, building on the ideas of Pascal and Perrault, invented an arithmetic machine. Unlike other devices it was more compact, since its counting wheels sat not on parallel axes but on a single axis running through the whole machine.

In 1820 the industrial production of digital adding machines began; the priority here belongs to the Frenchman Charles Xavier Thomas de Colmar. In Russia the first adding machines of this type were Bunyakovsky's self-calculators (1867). In 1874 the St. Petersburg engineer Vilgodt Odhner significantly improved the design of the adding machine by using wheels with retractable teeth ("Odhner wheels") to enter numbers. Odhner's adding machine made it possible to perform up to 250 operations on four-digit numbers in one hour.

It is quite possible that the development of digital computing would have remained at the level of small machines were it not for the invention of the Frenchman Joseph Marie Jacquard, who at the beginning of the 19th century used a card with punched holes, the punched card, to control a weaving loom. Jacquard's loom was programmed with a whole deck of punched cards, each of which controlled one stroke of the shuttle, so that to change to a new pattern the operator simply replaced one deck with another. Scientists sought to apply this invention to the creation of a fundamentally new calculating machine that would perform operations without human intervention.

In 1822 the English mathematician Charles Babbage built a model of a program-controlled calculating machine, which can be seen as a prototype of today's input and printing peripherals; it consisted of manually rotated gears and rollers.

At the end of the 1880s Herman Hollerith, an employee of the US Census Bureau, developed a statistical tabulator capable of automatically processing punched cards. The creation of the tabulator marked the beginning of a new class of digital counting-and-punching (punched-card calculating) machines, which differed from the class of small machines by their distinctive system of data entry from punched cards. By the middle of the 20th century, punched-card machines were produced by IBM and Remington Rand in the form of rather complex punched-card installations. These included punches (which punched the cards), verifying punches (which re-punched cards and checked for misplaced holes), sorters (which laid punched cards out into groups by given characteristics), collators (which arranged punched cards in order and compiled tables of functions), tabulators (which read punched cards, calculated, and printed the results), and multipliers (which multiplied numbers punched on the cards). The best punched-card installations processed up to 650 cards per minute, and a multiplier could multiply 870 eight-digit numbers in an hour. The most advanced model, IBM's 604 electronic calculating punch released in 1948, had a programmable plugboard for data-processing commands and could perform up to 60 operations on each punched card.

At the beginning of the 20th century adding machines with keys for entering numbers appeared. The increased degree of automation made it possible to create so-called small calculating machines with an electric drive, capable of automatically performing up to 3 thousand operations on three- and four-digit numbers per hour. In the first half of the 20th century small calculating machines were produced on an industrial scale by Friden, Burroughs, Monroe, and others. A variety of small machine was the bookkeeping-and-writing machine, produced in Europe by Olivetti and in the USA by National Cash Register (NCR). In Russia during this period the "Mercedes" machines were widespread: bookkeeping machines designed for entering data and calculating final balances in synthetic accounting accounts.

Drawing on the ideas and inventions of Babbage and Hollerith, Harvard University professor Howard Aiken managed in 1937-1943 to create an automatic calculating machine of a far higher level, the "Mark-1", which worked on electromagnetic relays. In 1947 a machine of this series, the "Mark-2", appeared, containing 13 thousand relays.

Around the same period the theoretical prerequisites and the technical possibility arose for building a more advanced machine based on vacuum tubes. In 1943 a team at the University of Pennsylvania (USA) led by John Mauchly and J. Presper Eckert, with the participation of the famous mathematician John von Neumann, began developing such a machine. The result of their joint efforts was the vacuum-tube computer ENIAC (1946), which contained 18 thousand tubes and consumed 150 kW of electric power. While working on the machine, John von Neumann published a report (1945) that remains one of the most important scientific documents in the theory of the development of computer technology. The report substantiated the design and operating principles of universal computers of a new generation, absorbing the best of what had been created by many generations of scientists, theorists, and practitioners.

This led to the creation of the so-called first generation of computers. They are characterized by vacuum-tube technology and memory systems based on mercury delay lines, magnetic drums, and Williams cathode-ray tubes. Data was entered from punched tapes and punched cards, while magnetic tapes stored programs and data; printing devices served for output. The performance of first-generation computers did not exceed 20 thousand operations per second.

Further development of digital computing proceeded at a rapid pace. In 1949 the English researcher Maurice Wilkes built the first computer to embody von Neumann's principles, the EDSAC. Tube machines were produced on an industrial scale until the mid-1950s, but research in electronics was opening new prospects, and the United States occupied the leading position in this field. In 1947 Walter Brattain and John Bardeen of AT&T's Bell Laboratories invented the transistor, and in 1954 Gordon Teal of Texas Instruments made the first silicon transistor. From 1955 transistor-based computers began to be produced: smaller, faster, and far less power-hungry than tube machines. They were assembled by hand, under a microscope.

The use of transistors marked the transition to second-generation computers. Transistors replaced vacuum tubes, and computers became more reliable and faster (up to 500 thousand operations per second). Peripheral devices also improved: magnetic-tape units and memory on magnetic disks appeared.

In 1958 the first integrated circuit was created by Jack Kilby of Texas Instruments, followed by the first integrated circuit suitable for industrial production (the chip), whose author, Robert Noyce, later founded (1968) the world-famous company Intel (INTegrated ELectronics). Computers based on integrated circuits, whose production began in 1960, were even faster and smaller.

In 1969 researchers at Datapoint (Computer Terminal Corporation) came to the important conclusion that a computer needed a central arithmetic-logic unit on a single chip that could control calculations, programs, and devices: in other words, a microprocessor. Datapoint's engineers developed the fundamental technical solutions for such a microprocessor and, together with Intel, began its industrial development around 1970. The first results were not entirely successful: Intel's microprocessor ran much more slowly than expected, and the collaboration between Datapoint and Intel ended.

In 1964 third-generation computers appeared, built on electronic circuits of small and medium-scale integration (up to 1000 components per chip). From that time on, designers began to plan not a single computer but a whole family of software-compatible computers. Examples of third-generation machines are the American IBM System/360 created at that time, as well as the Soviet ES-1030 and ES-1060. In the late 1960s minicomputers appeared, and in 1971 the first microprocessor, the Intel 4004. A year later Intel released the first widely known eight-bit microprocessor, the Intel 8008, and in April 1974 the second-generation microprocessor, the Intel 8080.

From the mid-1970s, fourth-generation computers were developed, characterized by the use of large-scale and very-large-scale integrated circuits (up to a million components per chip). The first fourth-generation computers were produced by Amdahl Corp. They used high-speed memory built from integrated circuits with a capacity of several megabytes; when such a machine was switched off, the contents of RAM were transferred to disk, and on switching on it booted automatically. The performance of fourth-generation computers reaches hundreds of millions of operations per second.

The first personal computers also appeared in the mid-1970s, and the further history of computers is closely tied to the development of microprocessor technology. In 1975 the first mass-produced personal computer, the Altair 8800, was created on the basis of the Intel 8080 processor. By the end of the 1970s, thanks to the efforts of Intel, which had developed the new Intel 8086 and Intel 8088 microprocessors, the prerequisites arose for improving the computing and ergonomic characteristics of computers. During this period the computer-industry giant IBM entered the competition and set out to build a personal computer based on the Intel 8088 processor. In August 1981 the IBM PC appeared and quickly gained enormous popularity. The successful design of the IBM PC predetermined its role as the de facto standard for personal computers at the end of the 20th century.

Since 1982 fifth-generation computers have been under development. Their basis is an orientation toward knowledge processing: scientists were confident that the processing of knowledge, hitherto unique to humans, could also be carried out by a computer in order to solve problems and make adequate decisions.

In 1983 Microsoft announced its Windows operating environment; the first version went on sale in 1985. It is still regarded as one of the landmark software products of the 20th century.

An important milestone was the proposal made in March 1989 by Tim Berners-Lee, an employee of the European Organization for Nuclear Research (CERN): to create a new distributed information system, the World Wide Web. A hypertext-based information system could unite CERN's information resources (databases of reports, documentation, mail addresses, and so on). The project was approved in 1990.