Theoretical Foundations of Computer Science: Introduction to Computer Science

Section I. Fundamentals of general computer science.

Topic 1. Basic concepts and definitions of computer science.

Computer science is a field of scientific and technical activity that studies the processes of receiving, transmitting, processing, storing and presenting information, and that solves the problems of creating, implementing and using information hardware and technologies in all spheres of public life.

The main task of computer science is to identify the general laws according to which scientific information is created, transformed, transmitted and used in various spheres of human activity.

Modern civilization is characterized by an unprecedented pace of development of science, engineering and new technologies. Since the 17th century, the volume of accumulated scientific information has doubled roughly every 10-15 years. One of the most important problems facing humanity is therefore the avalanche-like growth of information in every sector of life.

Within the structure of computer science as a science, algorithmic, software and hardware areas are distinguished. Computer science forms part of cybernetics, which studies the general theory of control and of the transmission of information. Cybernetics is the science of the general laws of receiving, storing, transmitting and processing information in complex systems.

Information processes include:

Collection of information is the activity of a subject in the course of which the subject obtains information about an object of interest.

Information exchange is a process during which the source of information transmits it and the recipient receives it.

Data storage is the process of maintaining source data in a form that ensures delivery of the data to end users on request within the required time.

Data processing is an ordered process of transforming data according to an algorithm for solving a given problem.

After the processing problem has been solved, the result must be presented to the end users in the required form. Information is usually presented by means of external computer devices in the form of texts, tables, graphs, and so on.

Computing hardware constitutes the material basis of information technology; it is with its help that the collection, storage, transmission and processing of information are carried out.

Information technology is a process that uses a set of means and methods of collecting, processing and transmitting data to obtain new-quality information about the state of an object, process or phenomenon. New information technology is information technology with a “friendly” user interface that relies on personal computers and telecommunications.

Of all types of technology, information technology in the sphere of management places the highest demands on management activity: it has a fundamental impact on an employee's qualifications, the content of his work, his physical and mental workload, his professional prospects and the level of social relations.

Economic information is a set of data reflecting socio-economic processes and serving to manage these processes and groups of people in the production and non-production spheres.

An automated economic information system (AEIS) is a human-machine system in which economic-mathematical methods and modern means of collecting, transmitting and processing economic information are used to solve problems of managing production processes.

Topic 2. Theoretical foundations of computer science.

A number system (notation) is a set of rules and techniques for writing numbers using a set of digit characters. The number of distinct digits used to write numbers in a system is called the base of the number system.

Computers use the binary number system, whose base is the number 2. Only two digits, 0 and 1, are used to write numbers in this system. The choice of the binary system for computer technology is explained by the fact that its electronic elements (flip-flops, or triggers, which make up computer chips) have only two stable working states.

Binary coding can capture any data and knowledge. The binary system is convenient for a computer but inconvenient for a person: binary numbers are long and hard to write and remember. For this reason number systems related to binary are also used: octal and hexadecimal, which require 8 and 16 digits respectively. In hexadecimal the first ten digits are the usual ones, after which capital Latin letters are used: hexadecimal A corresponds to decimal 10, hexadecimal B to decimal 11, and so on.
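
As a brief illustration, the same value can be written in all four systems; this is a Python sketch (the text itself is not tied to any language):

    # One value written in binary, octal, decimal and hexadecimal
    n = 202
    print(bin(n))   # 0b11001010
    print(oct(n))   # 0o312
    print(n)        # 202
    print(hex(n))   # 0xca  (c = 12, a = 10)

    # Parsing a string back into a number with an explicit base
    assert int("11001010", 2) == int("CA", 16) == 202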

For the representation of signed numbers when performing arithmetic operations, a computer uses direct (sign-magnitude), inverse (ones' complement) and complementary (two's complement) codes. A code is a way of writing a number that differs from the natural, generally accepted one. One of the most important characteristics of any computer is its word length, determined by the number of binary digits in a word. Therefore, regardless of the size of a number, its code in a computer always has a fixed number of binary digits.
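
A minimal sketch (in Python, assuming an 8-bit word) of how the complementary code represents signed numbers in a fixed number of binary digits:

    WORD = 8                      # word length in binary digits (assumed)

    def complement_code(x, bits=WORD):
        # The fixed-width two's complement code of x, as a bit string.
        return format(x & ((1 << bits) - 1), f"0{bits}b")

    print(complement_code(5))     # 00000101
    print(complement_code(-5))    # 11111011
    # Adding the codes modulo 2**bits gives the correct signed result:
    print(format((0b11111011 + 0b00000101) % 2**WORD, "08b"))  # 00000000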

Algebra of logic is a system of algebraic methods for solving logical problems, together with the set of such problems; in the narrow sense, it is the tabular, matrix construction of the logic of statements, defining logical operations on them.
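
For illustration, a truth table for the statement (A and B) or (not A) can be generated as follows (a Python sketch; the expression is chosen arbitrarily):

    from itertools import product

    print("A B | (A and B) or not A")
    for a, b in product([False, True], repeat=2):
        value = (a and b) or not a
        print(int(a), int(b), "|", int(value))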

The axiomatic method is a method of constructing a scientific theory in the form of a system of axioms (postulates) and rules of inference (axiomatics), which allow statements (theorems) of the given theory to be obtained by logical deduction.

Converting numbers from one number system to another is an important part of machine arithmetic.
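
A sketch of the classical conversion algorithm (repeated division by the target base, with the digits read off in reverse), written in Python for illustration:

    DIGITS = "0123456789ABCDEF"

    def to_base(n, base):
        # Convert a non-negative integer to its notation in the given base.
        if n == 0:
            return "0"
        digits = []
        while n > 0:
            n, remainder = divmod(n, base)
            digits.append(DIGITS[remainder])
        return "".join(reversed(digits))

    assert to_base(202, 2) == "11001010"
    assert to_base(202, 8) == "312"
    assert to_base(202, 16) == "CA"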

Amount of information is a numerical characteristic of a signal reflecting the degree of uncertainty (incompleteness of knowledge) that disappears once the message carried by the signal has been received. In information theory this measure of uncertainty is called entropy. If, as a result of receiving a message, complete clarity is achieved on some question, we say that complete or exhaustive information has been received and no additional information is needed. Conversely, if after receiving the message the uncertainty remains the same, no information has been received (zero information).
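
As a worked illustration, Shannon's formula for this measure can be computed directly (a Python sketch):

    from math import log2

    def entropy(probabilities):
        # Shannon entropy H = -sum(p * log2(p)), in bits.
        return -sum(p * log2(p) for p in probabilities if p > 0)

    # A fair yes/no question removes exactly one bit of uncertainty:
    print(entropy([0.5, 0.5]))     # 1.0
    # Four equally likely outcomes carry two bits of uncertainty:
    print(entropy([0.25] * 4))     # 2.0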

The amount of information obtained in the answer to a yes/no question is called a bit. The bit is the minimum unit of information, because it is impossible to obtain less than 1 bit of information. A group of 8 bits is called a byte; if the bit is the minimum unit of information, the byte is its basic unit. There are also derived units: the kilobyte (KB), the megabyte (MB) and the gigabyte (GB).

Data transfer rate is the speed of digital transmission, expressed in bytes (or bits) per unit of time.

There is a maximum possible (limiting) transmission speed, which is called the channel capacity.
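
A small worked example (Python; the figures are assumed) relating data volume, transfer rate and transfer time:

    file_size_bytes = 3 * 1024 * 1024      # a 3 MB file (assumed)
    rate_bits_per_s = 10_000_000           # a 10 Mbit/s channel (assumed)

    transfer_time = file_size_bytes * 8 / rate_bits_per_s
    print(f"{transfer_time:.2f} s")        # about 2.52 s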

A code is customarily a set of symbols corresponding to elements of information or to its characteristics. The process of compiling a code, in the form of a set of symbols or a list of abbreviations for the corresponding elements and characteristics, is called coding.

Decoding is a process analogous to encoding but having the opposite direction.

Topic 3. Architecture and principles of computer operation

Computer architecture is a general description of the structure and functions of a computer at a level sufficient for understanding the principles of its operation and its instruction set; it does not include details of the technical and physical structure of the computer.

The computational process must first be presented to the computer in the form of a program: a sequence of instructions (commands) written in the order of their execution.

The general structural scheme of a computer covers:

1. computer memory structure;

2. methods of accessing memory and external devices;

3. ability to change configuration;

4. command system;

5. data formats;

6. interface organization.

The main device of a computer is the central processor, or microprocessor. It is designed to perform computations according to a program stored in a storage device and to provide overall control of the computer. The performance of a computer is largely determined by the speed of its processor.

The data being processed and the program being executed must reside in a storage device: the computer memory, into which they are entered through an input device. Functionally, memory is divided into two parts, internal and external.

Internal, or main, memory is a storage device directly connected to the processor and designed to store the executable programs and the data directly involved in computations.

Internal memory, in turn, is divided into random access memory (RAM) and read-only memory (ROM). RAM, which by volume makes up most of the internal memory, is used for receiving, storing and issuing information. ROM provides the storage and issuing of information.

External memory (external storage devices) is intended for storing large volumes of information and exchanging it with RAM. External storage devices are structurally separate from the central devices of the computer (the processor and internal memory), have their own controls, and service processor requests without the processor's direct intervention.

Information exchange between computer devices is carried out through the system bus (also called the system highway). The system bus is characterized by its clock frequency and its bit width. The number of bits transmitted on the bus simultaneously is called the bus width. The clock frequency characterizes the number of elementary data transfer operations per second. Bus width is measured in bits, clock frequency in megahertz.
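
A rough worked example (Python; the figures are assumed) of how width and clock frequency together determine the peak throughput of a bus:

    bus_width_bits = 32          # bits transferred per clock cycle (assumed)
    clock_hz = 100_000_000       # 100 MHz clock (assumed)

    peak_bytes_per_s = bus_width_bits / 8 * clock_hz
    print(f"{peak_bytes_per_s / 2**20:.0f} MB/s")   # about 381 MB/s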

The system interface is a structural part of a computer designed for the interaction of its devices and the exchange of information between them. Mainframes, medium-sized computers and supercomputers use as the system interface complex devices with built-in input/output processors, called channels. Such devices provide a high speed of data exchange between the components of the computer.

I/O devices serve, respectively, to enter information into the computer and to output it, as well as to provide communication between the user and the machine. I/O processes take place using the internal memory of the computer. Input devices include the keyboard, mouse, trackball, joystick, digitizer and scanner. Output information may be displayed in graphical form; monitors, printers or plotters are used for this.

The main memory of a computer is usually addressable. This means that each unit of information stored in memory (a word or a byte) is assigned a special number, an address, which determines its storage location in memory. The basic addressing methods are: implied address, immediate addressing, direct addressing, indirect addressing, short addressing, and others. With direct addressing, the effective address coincides with the address part of the instruction.

Direct memory access is a method of accessing memory directly, bypassing the processor.

A memory device is a device for recording, storing and issuing data. There are devices for long-term and for operational data storage; devices for reading data only; and devices for both reading and writing.

Virtual memory divides physical memory into blocks and distributes them among various tasks. It automatically manages two levels of the memory hierarchy: main memory and external (disk) memory. Virtual memory systems can be divided into two classes: systems with fixed-size blocks, called pages, and systems with variable-size blocks, called segments.
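
A minimal sketch (Python, assuming 4 KB pages) of how a virtual address splits into a page number and an offset within the page:

    PAGE_SIZE = 4096                     # bytes per page (assumed)

    def split_address(virtual_address):
        # Returns (page number, offset within the page).
        return divmod(virtual_address, PAGE_SIZE)

    page, offset = split_address(10_000)
    print(page, offset)                  # 2 1808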

A bus (highway) is a cable consisting of many conductors. One group of conductors, the data bus, transmits the information being processed; another, the address bus, transmits the addresses of the memory locations or external devices accessed by the processor. The third part of the highway, the control bus, carries control signals (for example, a signal that a device is ready for operation, or a signal for a device to start operation).

Section II. Personal computers and their software

Topic 4. Operating systems for personal computers

The purpose of a computer is to execute programs. A program contains the commands that determine the order of the computer's actions. The set of a computer's programs forms its software.

Programs running on a computer can be divided into three categories:

1. application programs that directly support the performance of work required by users: editing texts, drawing pictures, processing information arrays;

2. system programs that perform various auxiliary functions;

3. instrumental systems (programming systems) that ensure the creation of new programs for the computer.

System (basic) software is understood to include operating systems, network software, service programs, and program development tools (translators, link editors, debuggers, etc.).

An operating system (OS) is a collection of programs performing two main functions: providing the user with the convenience of a virtual machine, and increasing the efficiency of computer use by managing its resources rationally.

Memory management programs provide more flexible use of the computer's RAM.

User interface tools (service programs) are software add-ons to the operating system (shells and environments) designed to simplify the user's communication with the operating system.

The MS-DOS operating system is a single-user, single-tasking, non-network 16-bit operating system designed for use on a PC with an Intel 8088 (80286) microprocessor.

The main characteristics of this OS are:

The maximum amount of addressable physical memory is 640 KB;

Provision of all the resources of the personal computer to the single currently active program;

Advanced file system and command language processor;

Weak support for interactive user interaction tools;

The occupied disk space, depending on the version, is from 1 MB to 6 MB.

Norton Commander allows MS-DOS commands to be performed quickly and conveniently. The screen displays two windows showing the contents of two directories. Using the function keys, you can copy or move files between these directories, create, edit and delete text files, search for files on disk, and so on. The purpose of each function key is indicated in the hint on the bottom line of the screen, in English.

OS Windows is a family of operating systems that includes: Windows 3.1, Windows for Workgroups 3.11, Windows 9X, Windows NT, Windows 2000, Windows ME.

After the Windows 98/2000 operating system has loaded in normal mode, the graphical interface appears. The main components of the graphical interface are the desktop, the taskbar, icons and shortcuts. In addition, Windows 98/2000 allows the use, besides the standard interface, of a Web interface with an active desktop.

In Windows 98/2000, most commands can be executed with the mouse. Icons (pictograms) and shortcuts are placed on the desktop; they can be used to access the corresponding applications or documents. Any icon (or shortcut) on the desktop can be removed from it. The only exceptions are the icons created by the operating system itself, such as My Computer, Network Neighborhood and the Recycle Bin. The taskbar is located at the bottom of the desktop. Clicking the Start button on the taskbar opens the start menu.

The standard applications of Windows 98/2000 include:

Notepad, a simple text editor that can also be used as a convenient means of viewing text files.

The Paint graphic editor, designed for creating and editing images (drawings).

The WordPad text processor, used to create, edit, view and format text documents.

The Calculator program.

Topic 5. Basics of a high-level programming system

An algorithm is an exact prescription that defines a process leading from the initial data to the required final result. A computer program is a description of an algorithm and data in a certain programming language, intended for subsequent automatic execution.
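
As an illustration of both definitions, Euclid's algorithm written as a program (a Python sketch; any algorithm would serve):

    def gcd(a, b):
        # An exact prescription leading from the initial data (a, b)
        # to the required final result: their greatest common divisor.
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd(48, 18))    # 6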

One of the most important classification criteria for programming languages is their belonging to one of the main styles: procedural, functional, logical and object-oriented.

A programming language partially bridges the gap between the ways humans and computers solve various types of problems. The more human-oriented a language is, the higher its level.

Binary language is machine language proper. At present such languages are practically never used by programmers.

Assembly language is a language designed to represent programs written in machine language in a human-readable symbolic form.

Macro assembly language is an extension of assembly language with macro facilities.

The C programming language was originally developed to implement the UNIX operating system. The C language has a syntax that makes programs concise, and compilers can generate efficient object code.

Basic (Beginners All-purpose Symbolic Instruction Code) is a simple programming language developed in 1964 for use by beginners.

Pascal is one of the most popular procedural programming languages among application programmers, especially for PCs. PC versions of the language such as Borland Pascal and Turbo Pascal are in wide use.

The vocabulary of a programming language comprises the rules for the “spelling of words” of a program, such as identifiers, constants, function words and comments. A feature of any vocabulary is that its elements are regular linear sequences of symbols.

A software product is a large system, so it is developed in parts called software modules. The customary modular structure of a program is a tree, including trees with merged branches. Two methods are used: bottom-up development and top-down development.

Programming technology is a set of rules, techniques, and programming tools. The core issue of any technology is the programming language.

A module is a separately compiled program unit that includes various components of the declaration section (types, constants, variables, procedures and functions) and, possibly, some sequence of statements.

At the core of the object-oriented programming style lies the concept of an object, whose essence is expressed by the formula “object = data + procedures”. Each object integrates a certain data structure with the procedures for processing these data, called methods, which are available only to it. Classes are used to describe objects: a class defines the properties and methods of the objects belonging to it.
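
A minimal sketch (Python; the names are invented for illustration) of the “object = data + procedures” formula:

    class Account:
        # A class defines the data and the methods of its objects.

        def __init__(self, owner, balance=0):
            self.owner = owner        # data integrated into the object
            self.balance = balance

        def deposit(self, amount):    # a method: a procedure bound to the data
            self.balance += amount

    acc = Account("Ivanov")           # an object of the class Account
    acc.deposit(100)
    print(acc.owner, acc.balance)     # Ivanov 100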

The most modern object-oriented programming languages include C++ and Java. A fundamental difference between Java and C++ is that the former is interpreted while the latter is compiled; the syntax of the two languages is almost completely the same. Owing to their constructive nature, the ideas of object-oriented programming are used in many universal procedural languages.

Recently, many programs, especially object-oriented ones, have been implemented in visual programming systems. A distinctive feature of such systems is a powerful environment for developing programs from ready-made “building blocks”, which allows the interface part of a software product to be created interactively, with virtually no coding of program operations. Object-oriented visual programming systems include Visual Basic, Delphi, C++ Builder and Visual C++.

Topic 6. Database Basics

Databases are one of the main components of modern information systems. An information system is an interconnected set of means, methods and personnel used to store, process and issue information. Databases are information structures containing interrelated data about real objects. The creation of a database, its maintenance and the provision of user access to it are carried out by a special software tool, a database management system (DBMS).

Databases are divided into centralized and distributed. The centralized database is stored in the memory of one machine. A distributed database consists of several parts stored on several machines on a computer network.

File-server architecture. Principle of organization: one machine is dedicated as the central one (the file server), and the centralized database is stored on it. The remaining machines on the network act as workstations.

Client-server architecture. Principle of organization: the central machine (the database server) stores the centralized database and the processing procedures. The client sends a request, the server processes it, and the data produced by the request is passed back to the client.
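
A toy sketch (Python; the data and names are invented) of the division of labour in the client-server architecture:

    # The "server" holds both the data and the processing procedures.
    EMPLOYEES = [
        {"name": "Ivanov", "dept": "sales"},
        {"name": "Petrova", "dept": "accounting"},
    ]

    def server_handle(request):
        # The query is processed on the server; only the result travels back.
        return [r for r in EMPLOYEES if r["dept"] == request["dept"]]

    # The "client" sends a request and receives only the selected records.
    reply = server_handle({"dept": "sales"})
    print(reply)    # [{'name': 'Ivanov', 'dept': 'sales'}]

Under the file-server architecture, by contrast, the whole table would travel to the workstation and be filtered there.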

At the heart of any database application are data sets: groups of records passed from the database to the application for viewing and editing. Data sets are made up of lower-order categories, groups formed according to their degree of similarity. Each group, in turn, consists of one or more data series. Each row, or profile, is assigned a so-called key: a set of values corresponding to each cluster of concepts, also called dimensions.

The main part of a database application consists of dialog boxes, or simply forms. Typically, each form has its own data source: a table or a query. An application can contain any number of forms and use any interface.

A field is a set of cells with data of a specific type, located in the same place in every record of a data set; put simply, it is a column of a table. Using fields, one can solve complex tasks and create effective and flexible database applications.

Visual data-display components are modifications of standard controls adapted to work with a data set. Most such components are designed to work with a single field: when moving through the records of a data set, they show the current value of only one field.

The navigational access method consists in processing each individual record of a data set. With this method, each data set has an invisible pointer to the current record. The pointer determines the record on which operations such as editing or deletion can be performed.

Sorting is the compilation of a list of records that meets specified conditions. Sorting by means of data views allows sort conditions to be set at design time and provides an object that can be used for data binding. When sorting directly in a data table, the order of the table's contents does not change.
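
A small sketch (Python; the records are invented) of sorting a set of records by a field without changing the underlying table:

    records = [
        {"name": "Petrova", "salary": 1200},
        {"name": "Ivanov", "salary": 900},
        {"name": "Sidorov", "salary": 1500},
    ]

    # sorted() returns a new ordered list; `records` itself is unchanged.
    by_salary = sorted(records, key=lambda r: r["salary"])
    print([r["name"] for r in by_salary])   # ['Ivanov', 'Petrova', 'Sidorov']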

During operation, various operations can be performed on a data set: moving through records, searching for data, editing data, deleting a record, and so on. These operations change the state of the data set.

Modification of a data set involves editing, adding and deleting its records.

Another way for an application to use data is to link to the tables of another database. Linked tables can be used alongside local database tables when creating queries, forms and reports with the usual interactive tools. Linked tables can also be viewed in Design view, but no changes can be made to their structure.

The relational access method is based on processing a group of records. If only one record needs to be processed, a group consisting of one record is still processed. The relational access method is oriented toward work with remote databases and is preferable for them.

The report server is responsible for working with reports: it processes reports and scheduled events and is responsible for delivering reports and presenting their results. If changes must be made to the database for which a report is built, the developer or database administrator can rename fields, which frees them from having to add new fields to the report or modify formulas that refer to the changed fields.

Topic 7. Application packages and their use in solving economic problems

The Microsoft Office panel provides accelerated launching of applications. Microsoft Office 2000 offers ease of use and support: a user-friendly interface and help system, an expanded set of wizards and templates, improved capabilities for collective document processing, and an expanded set of intelligent tools.

Microsoft Word for Windows is a feature-rich text processing package. The program is designed for creating documents that include various elements (drawings, graphs, formulas, regular tables or spreadsheets, database fragments, etc.) and that have a hierarchical organization (chapters, parts, paragraphs, etc.), with support for work at the level of individual components and of the document as a whole, and for combining information from several files in the form of a master document.

The central concept of a word processor is the document: the object created and adjusted by the processor. Typing text in Word is carried out in automated page-layout mode. The line size depends on the paragraph and character format settings (Format menu, Paragraph and Font commands). The page size is determined by the parameters of the Page Setup command (File menu).

To create and edit documents, user-friendly interface elements are used: various windows, menus, toolbars, help system, etc.

Microsoft Word supports multi-windowing - simultaneous work with several documents open in different windows. A Word window can contain one or more document windows.

The window of the Excel spreadsheet processor is designed for entering spreadsheets.

The workbook is located in the working area of the window. A workbook is a file designed to store a spreadsheet; it has the extension .xls. A workbook consists of worksheets. A worksheet is a grid of rows and columns. The maximum size of a worksheet is 256 columns by 65536 rows. The columns are named with Latin letters from A to Z and then from AA to IV. The rows are numbered from 1 to 65536.

At the intersections of the rows and columns of a worksheet lie the cells. Data entry and editing are done in the active cell, which is highlighted with a bold frame; its name is shown in the name field. Two kinds of data can be entered into worksheet cells: constants and formulas.

A number in Excel can consist only of the following characters: the digits 0 to 9, +, -, (, ), /, $, %, the decimal point, and E or e. A comma in a number is interpreted as a decimal separator. The separator character can be changed in the regional settings of the Windows Control Panel.

Typically, numbers are entered in a common numeric format. Entering text is similar to entering numeric values.

Modern computer technologies use mathematical documents consisting of formulas and explanatory text, in which the comments remain unchanged while reading, and the formulas are updated as the input variables change.

PowerPoint is a system for preparing electronic presentations; it is designed for preparing and conducting presentations.

Microsoft Office PowerPoint 2003 is designed for creating and editing slides: using design templates; inserting pictures and placing them on a slide; creating slides consisting of several sheets; applying graphic effects to pictures; setting the parameters for displaying a slide on the screen; editing drawings with the program's built-in tools; and so on.

Microsoft Office Access 2003 is designed for creating databases on any topic: selecting the required table type; specifying the type of information in fields; creating queries against the required database data; making changes to an existing database; using various external interfaces; and so on. Microsoft Office Access offers enormous capabilities for managing large amounts of information and allows almost any activity to be automated, from household chores to the financial reporting of large companies.

Microsoft Office Excel 2003 is designed to create spreadsheets, which help store, analyze and present digital information.

MathML is a specification of the W3C, built on the XML language, for processing mathematical material on Web pages. MathML surpasses the mathematical markup language TeX, in which the equals sign in an equation is treated as just a symbol; in MathML an equals sign denotes an equation, both sides of which can interact with other elements of the HTML page. Mathcad Professional includes the techexplorer software module from IBM, which plugs into Internet Explorer or Netscape Navigator. This module renders MathML documents and some TeX files in a browser window. A Mathcad document in MathML format can be published on the Web, and anyone with techexplorer can read it; such a document can then be imported back into the Mathcad environment as a “live” file. All these actions are possible only with documents generated by the Mathcad program.

Section III. Information and computing networks.

Topic 8. General principles of building information and computer networks.

In the mid-1940s the first vacuum-tube computing devices were created. From the mid-1950s a new period in the development of computer technology began, associated with a new technical base: semiconductor elements. The next important period in the development of computers covers 1965-1980, when the technical base moved from individual semiconductor elements such as transistors to integrated circuits, which gave far greater capabilities to the new, third generation of computers. The following period in the evolution of operating systems is associated with the appearance of large-scale integrated circuits (LSI).

A computer network is a collection of computers and terminals connected via communication channels into a single system that meets the requirements of distributed data processing. A computer network is a complex set of interconnected and coordinated software and hardware components. Studying a network as a whole presupposes knowledge of the operating principles of its individual elements: computers; communication equipment; operating systems; network applications.

In the simplest case, computer-to-computer interaction can be implemented with the same means that are used for interaction between a computer and its peripherals, for example through a serial interface. Computers are joined by connecting a communication cable to the COM ports that implement this interface.

The geometric connection scheme (the physical configuration of connections) of network nodes is called the network topology. There are many variants of network topology, the basic ones being the bus, the ring and the star.

Bus. The communication channel connecting the nodes into a network forms a broken line: a bus. Any node can receive information at any time, but can transmit only when the bus is free.

Ring. The nodes are connected into a network forming a closed curve. Data is transferred in one direction only. Each node, among other things, implements the functions of a repeater: it receives and retransmits messages, but accepts only those addressed to it.

Star. The network nodes are connected to a center by rays. All information is transmitted through the center, which makes it relatively easy to troubleshoot and to add new nodes without interrupting the network.

Without the physical transmission of signals, no form of communication is possible. But even the simplest network, consisting of just two machines, faces the problems inherent in any computer network.

The representation of data in the form of electrical or optical signals is called encoding. In computing, binary code is used to represent data. Computer networks use both potential and pulse coding of discrete data (digital coding), as well as a method of representing data that is never used inside a computer: modulation.

To combat the errors encountered during file transfers, most modern protocols include error-correction capabilities. Each protocol has its own specific methods, but the general scheme of error correction is the same: the transmitted file is divided into small blocks, called packets, and each received packet is then checked against the one sent to confirm that they match.
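
A toy sketch (Python; real protocols carry checksums such as CRC inside each packet) of this divide-and-verify scheme:

    import zlib

    PACKET_SIZE = 4                      # bytes per packet (kept tiny for the example)

    def packetize(data):
        # Split data into packets, each accompanied by its CRC-32 checksum.
        return [(data[i:i + PACKET_SIZE], zlib.crc32(data[i:i + PACKET_SIZE]))
                for i in range(0, len(data), PACKET_SIZE)]

    def verify(packets):
        # Report which packets arrived undamaged.
        return [zlib.crc32(chunk) == checksum for chunk, checksum in packets]

    packets = packetize(b"hello, network!")
    packets[1] = (b"XXXX", packets[1][1])     # simulate corruption in transit
    print(verify(packets))                    # [True, False, True, True]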

The essence of a network is the connection of diverse equipment, so the problem of compatibility is one of the most acute. The entire development of the computer industry is therefore ultimately reflected in standards: any new technology acquires “legal” status only when its content is enshrined in an appropriate standard.

An open system is any system (a computer, a computer network, an OS, a software package, or other hardware and software products) built according to open specifications. An open information system assumes that, in transmitting messages, the participants in network exchange must accept many agreements at all levels, from the lowest, the bit-transfer level, to the highest, which implements services for network users. Formalized rules that determine the sequence and format of messages exchanged between network components located at the same level but in different nodes are called a protocol. Modules that implement the protocols of adjacent layers and are located in the same node also interact with each other according to clearly defined rules and using standardized message formats; these rules are usually called an interface.

The model of open systems interaction consists of the following layers:

Physical layer - deals with the transmission of bits over physical channels.

Data link layer - checks the availability of the transmission medium and implements mechanisms for detecting and correcting errors.

Network layer - serves to form a unified transport system uniting several networks that may use different principles of transferring information between end nodes.

Transport layer - provides the applications, or the upper layers of the stack (application and session), with data transmission of the degree of reliability they require.

Session layer - provides dialogue management, recording which party is currently active, and also provides synchronization facilities.

Presentation layer - ensures that information passed on by the application layer will be understood by the application layer in another system.

Application layer - a set of various protocols with whose help network users gain access to shared resources and organize their joint work.

Data transfer can take place at the physical and data link levels. Circuit-switched networks are used in corporate networks primarily for the remote access of numerous home users, and much less often for connecting local networks. Circuit-switched networks are divided into analog and digital. Analog networks may use analog (FDM) or digital (TDM) switching, but in them the subscriber is always connected via an analog two-wire termination.

Topic 9. Network operating systems.

The first computers of the 1950s, large, bulky and expensive, were intended for a very small number of select users. As processors became cheaper in the early 1960s, new ways of organizing the computing process appeared that took the interests of users into account. The first network operating systems were a combination of an existing local OS and a network shell built on top of it.

The network operating system forms the basis of any computer network. In the narrow sense, a network OS is the operating system of an individual computer that gives it the ability to work in a network; among its main functions is data input/output. Depending on the functions assigned to a particular computer, its operating system may lack either the client or the server part.

The functional components of a network OS include:

Tools for managing the computer's local resources.

Means of providing the computer's own resources and services for common use: the server part of the OS (the server).

Means of requesting access to remote resources and services and of using them: the client part of the OS (the redirector).

Communication means of the OS, with whose help messages are exchanged on the network.

The modular organization of process control in the network is implemented according to a multi-level scheme. The classic seven-layer scheme is: physical, data link, network, transport, session, presentation, application. This architecture is fixed as a reference model.

Hardware dependence and OS portability: a typical set of hardware support tools includes support for privileged mode, process switching, an interrupt system, a timer, and memory protection.

Compatibility is the ability of an operating system to run applications written for other operating systems. A distinction is made between binary compatibility and source-level compatibility.

To support multiprogramming, the OS must define internal units of work between which the processor and the other computer resources are divided. The larger unit of work is called a process, or task; completing it requires several smaller units of work, denoted by the terms “flow” or “thread”.

Multiprogramming is a way of organizing the computing process in which several programs are executed alternately on one processor. Multiprogramming is designed to improve the efficiency of using the computing system.

Interrupts are the main driving force of any operating system. Periodic timer interrupts cause changes in processes in a multi-program OS, and interrupts from I/O devices control the flow of data that the computer system exchanges with the outside world.

Any interaction between processes or threads involves their synchronization, which consists in coordinating their speeds: suspending a thread until a certain event occurs, and then activating it when that event occurs.

Memory is a critical resource requiring careful management by a multiprogramming operating system. The memory-management functions of the OS are: tracking free and used memory; allocating memory to processes and freeing it when processes terminate; evicting processes from RAM to disk and returning them to RAM; and binding program addresses to a specific area of physical memory.

Memory allocation can be based on one of two methods: the first moves processes between RAM and disk; the second divides memory into several partitions of fixed or variable size.

Caching is a process that helps reduce the consumption of resources on the server and increase the speed of their provision.

One of the main functions of the OS is the control of all the computer's I/O devices. The OS must send commands to the devices, intercept interrupts and handle errors; it must also provide an interface between the devices and the rest of the system.

The file system is the part of the operating system whose purpose is to provide the user with a convenient interface for working with data stored on disk, and to enable files to be shared among several users and processes. Files come in different types: regular files, special files, and directory files.

The OS tasks for managing files and devices are:

Organization of the parallel operation of I/O devices and the processor;

Coordination of exchange rates and data caching;

Sharing of devices and data between processes;

Provision of a convenient logical interface between the devices and the rest of the system;

Support for a wide variety of devices, with the ability to add new ones easily;

Support for multiple file systems;

Support for synchronous and asynchronous I/O operations.

The OS should be designed at two levels: physical and logical. Logical design determines where resources and applications are located and how users access resources. Physical design determines the exact specification of device types (make and model), cable installation locations, and the types of global services (protocol, type of transmission medium, modem types, etc.).

File operations include commands from the File menu: save; save as...; close; create; open; find.

To control access to files, systems for protecting information from unauthorized access are typically used, certified by the State Technical Commission under the President of the Russian Federation.

System fault tolerance is a data protection facility that provides the ability to recover automatically from hardware failures.

The concept of distributed data processing is the collective use of common information resources to manage an object as a whole. Connecting computers into a network makes it possible for programs running on individual computers to interact promptly and to solve user problems jointly.

Topic 10. Local networks.

A local network is a computer network uniting subscribers located within a small area. The class of local networks includes the networks of individual enterprises, firms, offices, and so on.

A protocol is a set of rules and procedures governing how communication is carried out. The protocols of the lowest levels (physical and data link), which relate to equipment, include encoding and decoding methods and methods of controlling exchange on the network.

There are several standard sets (or, as they are also called, stacks) of protocols that are now most widely used:

ISO/OSI protocol suite;

IBM System Network Architecture (SNA);

Digital DECnet;

Novell NetWare;

Apple AppleTalk;

A set of protocols for the global Internet network, TCP/IP.

The protocols of the listed sets are divided into three main types:

Application protocols (performing the functions of the application, presentation and session layers of the OSI model);

Transport protocols (performing the functions of the OSI transport and session layers);

Network protocols (performing the functions of the three lower OSI layers).

The most widespread of the standard networks is Ethernet. It has become an international standard and has been adopted by the largest international standards organizations: IEEE Committee 802 (Institute of Electrical and Electronics Engineers) and ECMA (European Computer Manufacturers Association).

The Token Ring network was proposed by IBM and is now the international standard IEEE 802.5.

The FDDI network is one of the most recent developments in local network standards. The FDDI standard is based on the token access method provided for in the international standard IEEE 802.5. The FDDI token access method, unlike CSMA/CD, provides guaranteed access time and freedom from collisions at any load level. The FDDI standard also provides for reconfiguring the network in order to keep it operational in the event of cable failure.

The network topology determines not only the physical location of the computers but, far more importantly, the nature of the connections between them and the way signals propagate through the network. It is the nature of the connections that determines the network's degree of fault tolerance, the required complexity of the network equipment, the most appropriate method of managing exchange, the possible types of transmission media (communication channels), the permissible network size (the length of communication lines and the number of subscribers), the need for electrical coordination, and much more.

The following devices are used to interconnect local computer networks.

1. Repeater - a device that provides amplification and filtering of a signal without changing its information content.

2. Bridge - a device that performs the functions of a repeater for those signals (messages) whose addresses satisfy predefined restrictions. Bridges can be local or remote.

3. Router - a device that connects networks of different types that use the same operating system.

4. Gateway - a special hardware and software complex designed to ensure compatibility between networks using different communication protocols.

The task of virtual local networks (VLANs) is to minimize multicast and broadcast traffic and to simplify moves, additions and changes. VLANs provide additional flexibility in making additions, moves and changes: they allow network administrators to install servers in one place, which simplifies their management, while users located in different places can access the servers via the VLAN.

The whole variety of tools used for monitoring and analyzing computer networks can be divided into several large classes:

Network management systems - centralized software systems that collect data about the status of network nodes and communication devices, as well as data about the traffic circulating in the network.

System management tools - perform functions similar to those of network management systems, but in relation to other objects.

Built-in diagnostic and control systems - perform diagnostic and control functions for a single device only, which is their main difference from centralized management systems.

Protocol analyzers - software or hardware-software systems that, unlike management systems, are limited to the functions of monitoring and analyzing traffic in networks.

Equipment for diagnosing and certifying cable systems. Conventionally, this equipment can be divided into four main groups: network monitors, devices for certifying cable systems, cable scanners, and testers (multimeters).

Expert systems - accumulate human knowledge about identifying the causes of abnormal network operation and the possible ways of bringing the network back to an operational state.

Multifunctional analysis and diagnostic devices.

Multifunctional analysis and diagnostic devices.

Topic 11. Global networks.

Global networks serve to provide their services to a large number of end subscribers scattered over a large territory.

The creation of network management systems is impossible without a focus on definite standards, since management software and network hardware are developed by hundreds of companies. The most common network management protocol is SNMP. Its main advantages are simplicity, accessibility and vendor independence. SNMP is a protocol used to obtain from network devices information about their status, performance and characteristics, which is stored in a special database on the network device called the MIB.

The most important task of the network layer is routing: the transmission of packets between two end nodes in a composite network. In complex composite networks there are almost always several alternative routes for transmitting packets between two end nodes. A route is the sequence of routers that a packet must pass through from the source to the destination.

The routing problem is solved by analyzing the routing tables held in all routers and end nodes of the network. The main work of building routing tables is performed automatically, but manual adjustment or supplementation of a table is also possible. To build routing tables automatically, routers exchange information about the topology of the composite network in accordance with a special service protocol. Protocols of this type are called routing protocols. Routing protocols use network protocols as transport. With the help of routing protocols, routers map the network's connections with varying degrees of detail. Routing protocols can be built on different algorithms, which differ in the way routing tables are constructed, the way the best route is selected, and other features of their operation.
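
A toy sketch (Python; the addresses and next hops are invented, and real routers use longest-prefix matching rather than exact lookup) of the table lookup a router performs for each packet:

    # A routing table maps destination networks to (next hop, interface).
    ROUTING_TABLE = {
        "192.168.1.0/24": ("direct", "eth0"),
        "192.168.2.0/24": ("10.0.0.2", "eth1"),
        "0.0.0.0/0":      ("10.0.0.1", "eth1"),   # default route
    }

    def next_hop(destination_network):
        # Pick the route for the packet, falling back to the default route.
        return ROUTING_TABLE.get(destination_network, ROUTING_TABLE["0.0.0.0/0"])

    print(next_hop("192.168.2.0/24"))   # ('10.0.0.2', 'eth1')
    print(next_hop("203.0.113.0/24"))   # ('10.0.0.1', 'eth1')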

Implementation of internetworking using TCP/IP: the TCP/IP stack is currently the most popular means of organizing composite networks. The core of the whole architecture is the internetworking layer, which implements the concept of transmitting packets in connectionless mode, that is, in a datagram manner. It is this layer that makes it possible to move packets across the network along whatever route is currently the most rational. This layer is also called the internet layer, which points to its main function: data transmission through a composite network. The name of the standards that define the operation of the Internet, Request For Comments (RFC), which can be translated as “request for comments”, reflects the transparent and open nature of the adopted standards.

The structure of a global network comprises individual computers, local networks, routers, and multiplexers, which are used to transmit data and voice (or video) simultaneously over the network.

Global networks can be digital or analog. In digital networks, multiplexing and switching are always performed by the TDM method, and subscribers are always connected via a Digital Subscriber Line (DSL). Circuit-switched digital networks are represented by two technologies: Switched 56 and ISDN.

A primary network is a set of standard physical circuits, standard transmission channels and network paths of a telecommunication system. The modern digital primary network is built on three main technologies: the plesiochronous digital hierarchy (PDH), the synchronous digital hierarchy (SDH) and asynchronous transfer mode (ATM). Of these, only the first two can currently be considered a basis for building a digital primary network. ATM, as a technology for building a primary network, is still young and not fully proven. It differs from PDH and SDH in that it covers not only the primary network level but also the technology of secondary networks, in particular data networks and broadband ISDN (B-ISDN).

Integrated Services Digital Networks (ISDN) are designed to combine various transport and application services in one network. ISDN provides its subscribers with the services of dedicated channels and switched channels, as well as packet and frame switching (frame relay). Type D channels form a packet-switched network that performs a dual role in ISDN: first, transmitting requests to establish a type B switched channel with another network subscriber; second, exchanging X.25 packets with subscribers of the ISDN network or of an external X.25 network connected to it.

Remote access is a technology for the interaction of subscriber systems with local networks through territorial communication networks. Remote access is provided via a remote access server. The models used for remote access are remote control and the remote node (remote system).

Section IV. Fundamentals of information security.

Topic 12. Basic network security technologies.

Information security is the protection of information and of its supporting infrastructure from accidental or deliberate impacts of a natural or artificial nature that may cause damage to the owners or users of the information. Information protection refers to the set of measures aimed at ensuring information security.

Confidentiality is the property of information of not being discoverable and accessible, without permission, to individuals, modules or processes.

Information integrity is a property of information, when processed by technical means, that ensures the prevention of its unauthorized modification or unauthorized destruction.

Data availability is the state of data in which it is in the form the user needs, in the place the user needs, and at the time the user needs it.

Availability of information is the property of information, when processed by technical means, of providing unhindered access to it for carrying out authorized operations of familiarization, documentation, modification and destruction.

Threats to confidential information are customarily understood as potential or actually possible actions with respect to information resources that lead to the unlawful acquisition of protected information.

Threats can be classified as follows:

By the amount of damage caused: extreme, after which the company may become bankrupt; significant, but not leading to bankruptcy; insignificant, which the company can compensate for within some time; etc.;

By probability of occurrence: very probable threat; probable threat; unlikely threat;

By causes of occurrence: natural disasters; intentional actions;

By the nature of the damage caused: material; moral;

By nature of impact: active; passive;

In relation to the object: internal; external.

Possible information leakage channels can be divided into four groups:

1st group - channels associated with access to elements of the data processing system, but not requiring changes to system components.

2nd group - channels associated with access to system elements and changes in the structure of its components.

3rd group - which includes: illegal connection of special recording equipment to system devices or communication lines; malicious modification of the program, malicious disabling of security mechanisms.

4th group - which includes: unauthorized receipt of information by bribery or blackmail of officials of the relevant services; obtaining information by bribing and blackmailing employees, acquaintances, service personnel or relatives who know about the type of activity.

Intrusion into computer systems can be considered in the following forms:

Hacking - one of the types of computer crime; it is unauthorized entry into a computer system. Hackers use many different methods of learning secret passwords or bypassing a system's password protection.

A software virus is a computer program designed to disrupt the normal functioning of a computer. Many viruses damage basic computer functions or data. A virus can also erase important computer files, or corrupt and even destroy the data on the hard drive.

Methods of functional standardization in the field of information security are set out in the international standard ISO/IEC 15408-99, “Criteria for assessing the security of information technology”. Core standards should be tailored and made specific for particular classes of projects, functions, processes and information system components, and should offer a set of established, industry-proven approaches to security.

Building and maintaining a secure system requires a systematic approach, i.e. the combined use of moral and ethical, legislative, administrative and psychological measures together with the protective capabilities of network software and hardware.

The importance and complexity of the security problem require the development of an information security policy, which should take into account several basic principles:

Using an integrated approach to ensuring security;

Ensuring a balance of reliability of protection at all levels;

The use of means that, upon failure, go into a state of maximum protection;

The principle of a single checkpoint;

The principle of balancing possible damage from the implementation of a threat and the costs of preventing it;

The basic security technologies are authentication, authorization, auditing and secure channel technology.

Encryption is a procedure that transforms information from its usual “understandable” form into an “unreadable” encrypted form. It must be supplemented by a decryption procedure. The pair of procedures, encryption and decryption, is called a cryptosystem.

There are two classes of cryptosystems: symmetric and asymmetric. In symmetric encryption schemes (classical cryptography), the secret encryption key is the same as the secret decryption key. In asymmetric encryption schemes (public-key cryptography), the public encryption key is not the same as the private decryption key.
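
A toy sketch (Python; a XOR stream cipher, shown for illustration only and far weaker than real ciphers) of the defining property of a symmetric scheme, namely that the same secret key both encrypts and decrypts:

    def xor_cipher(data: bytes, key: bytes) -> bytes:
        # Applying the same key a second time restores the original data.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    secret_key = b"K3Y"                        # the shared secret (assumed)
    ciphertext = xor_cipher(b"top secret", secret_key)
    print(ciphertext)                          # scrambled bytes
    print(xor_cipher(ciphertext, secret_key))  # b'top secret'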

A peculiarity of public-key encryption is the simultaneous generation of a unique pair of keys, such that text encrypted with one key can be decrypted only with the second key, and vice versa.

Such encryption algorithms are not suitable for encrypting large amounts of data. There is therefore a technology combining the two kinds of algorithm: the whole body of data is encrypted with a secret key, which in turn is encrypted with a public key and sent to the correspondent together with the encrypted data.

The encrypting file system has as its purpose the protection of data stored on disk from unauthorized access, by encrypting it.

Operations of copying, moving, renaming and deleting encrypted files and folders are performed in exactly the same way as with unencrypted objects. Be aware, however, that the destination of the encrypted information must support encryption; otherwise, the data will be decrypted during copying and the copy will contain clear information.

There are quite a lot of antivirus programs. Installing several of them increases the likelihood of detecting modifications of old viruses as well as new, previously unknown ones. The main programs include:

1. The polyphage program AIDSTEST - for scanning disks and disinfecting infected files.

2. The ADINF auditor program - detects the appearance of any of the existing viruses, including stealth viruses and mutant viruses, as well as viruses currently unknown.

3. IBM ANTIVIRUS/DOS - prevents viruses from entering a computer system, and also detects and removes those already present.

4. VIRUSCAN/CLEAN-UP - an antivirus software package from McAfee Associates. VIRUSCAN detects viruses and passes detailed information to the CLEAN-UP program, which performs the disinfection.

Methods of detecting and removing computer viruses include:

1. Scanning. If a virus is known and has already been analyzed, a program can be developed that identifies all files and boot records infected by that virus.

2. Detection of changes. To infect programs or boot records, viruses must change them. There are programs that specialize in catching such changes; a program that records changes to files and boot records can even be used to identify previously unknown viruses (a minimal sketch of this approach appears after this list).

3. Heuristic analysis. This is, in effect, the antivirus program's informed suspicion that something is wrong. When identifying viruses heuristically, the scanner searches for external manifestations or actions characteristic of certain classes of known viruses.

4. Verification. The methods discussed above may indicate that a program or boot record is infected, but they cannot confidently identify which virus caused the infection so that it can be destroyed. Programs that identify a specific virus are called verifiers.

5. Neutralization. After a virus has been identified, it may be possible to remove it and restore infected files and boot records to the state they were in before the "illness". This process is called neutralization (disinfection, treatment).
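A minimal Python sketch of the change-detection approach mentioned in item 2: a baseline of cryptographic file digests is recorded on a known-clean system, and a later audit reports every file whose contents no longer match it (the directory name programs and the file baseline.json are hypothetical).

import hashlib
import json
import os

def digest(path: str) -> str:
    # SHA-256 digest of a file's contents, read in chunks.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(root: str) -> dict:
    # Baseline: a digest for every file under root.
    return {os.path.join(d, name): digest(os.path.join(d, name))
            for d, _, files in os.walk(root) for name in files}

# Taken once, on a system known to be clean.
with open("baseline.json", "w") as f:
    json.dump(snapshot("programs"), f)

# Later audit: any file whose digest has changed may have been infected.
with open("baseline.json") as f:
    baseline = json.load(f)
changed = [p for p, h in snapshot("programs").items()
           if p in baseline and baseline[p] != h]
print("suspicious modifications:", changed)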

Topic 13. Methods and means of protecting information in computer networks.

Authentication is a procedure that checks whether the user presenting a given identifier actually has the right to access the resource.

The most common authentication method for accessing network resources is the password. If the password is correct, the user gains access to domain resources; if not, an error message is displayed. The disadvantage of password authentication is its low level of security: a password can be spied on, guessed, brute-forced, disclosed to unauthorized persons, and so on. Its advantage is the absence of additional costs, since password authentication is an integral part of all modern operating systems.
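The sketch below illustrates, using Python's standard library, how a system can verify a password without ever storing it: only a random salt and a slow key-derivation hash are kept (an illustration of the general idea, not of any specific operating system).

import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 with many iterations makes guessing stolen hashes expensive.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Registration: the system stores the salt and the hash, never the password.
salt = os.urandom(16)
stored_hash = hash_password("s3cret!", salt)

def authenticate(presented: str) -> bool:
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(stored_hash, hash_password(presented, salt))

print(authenticate("s3cret!"))  # True:  access to resources is granted
print(authenticate("guess"))    # False: an error message is displayed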

Certificate-based authentication is an alternative to passwords and seems a natural solution when the number of network users (even if only potential ones) is measured in millions. A certificate is an electronic analogue of an identity document; it is issued on request by certifying organizations when certain conditions are met. It is an electronic form containing fields such as the owner's name, the name of the organization that issued the certificate, and the owner's public key.

When working on the Internet, Internet Explorer uses two types of certificates: personal certificates and Web site certificates. A personal certificate verifies the user's identity; its information is used when personal data is transmitted over the Internet to a Web site that requires users to be verified by certificate. A Web site certificate verifies that the site is authentic and safe.

An electronic digital signature is an attribute of an electronic document intended to protect that document from forgery. It is produced by a cryptographic transformation of the information using the private key of the digital signature, and it makes it possible to identify the owner of the signature key certificate and to establish that the information in the electronic document has not been distorted.

Authentication of program code: an organization that wants to confirm its authorship of a program embeds a so-called signing block into the distributed code. This block consists of two parts. The first part is the organization's certificate, obtained in the usual way from a certification authority. The second part is an encrypted digest, obtained by applying a one-way (hash) function to the distributed code and encrypting the result with the organization's private key.
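A minimal sketch of signing and verifying a code digest with the third-party Python cryptography package (illustrative only; real code signing wraps the signature and the certificate into a signing block in a format such as Authenticode, which is not modeled here):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# The organization signs the distributed code with its private key;
# internally this hashes the code and transforms the digest.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
code = b"... distributed program code ..."
signature = private_key.sign(code, pss, hashes.SHA256())

# Anyone holding the certificate (and thus the public key) can check that
# the code is unchanged and really comes from the certificate's owner.
try:
    private_key.public_key().verify(signature, code, pss, hashes.SHA256())
    print("signature valid: the code is authentic and intact")
except InvalidSignature:
    print("signature invalid: the code was altered or authorship is unconfirmed")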

Authorization means controlling the access of legitimate users to system resources, granting each of them exactly the rights assigned by the administrator. In addition to granting users access rights to directories, files and printers, the authorization system can control their ability to perform various system functions, such as logging on to the server locally, setting the system time, backing up data, shutting down the server, and so on.

Audit is the recording in the system log of events related to access to protected system resources. The audit subsystem of a modern OS lets the administrator selectively specify, through a convenient graphical interface, the list of events of interest. Accounting and monitoring tools make it possible to detect and record significant security events and any attempts to create, access or delete system resources. Auditing can reveal even unsuccessful attempts to "break into" the system.

To ensure the security of data transmitted over public networks, various secure channel technologies are used. A secure channel is designed to protect data transmitted over an open transport network, such as the Internet, and involves performing three main functions:

Mutual authentication of subscribers when establishing a connection, which can be performed, for example, by exchanging passwords;

Protection of messages transmitted over the channel from unauthorized access, for example, by encryption;

Confirmation of the integrity of messages arriving over the channel, for example by transmitting the message's digest along with the message itself (a minimal sketch of this follows the list).
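A minimal sketch of the third function, message integrity through a keyed digest (HMAC), using Python's standard library; in practice the shared key would be negotiated when the secure channel is established, and the key value below is hypothetical.

import hashlib
import hmac

shared_key = b"negotiated during connection setup"  # hypothetical session key

def send(message: bytes):
    # The sender transmits the message together with its keyed digest.
    return message, hmac.new(shared_key, message, hashlib.sha256).digest()

def receive(message: bytes, tag: bytes) -> bool:
    # The receiver recomputes the digest; a mismatch means tampering in transit.
    expected = hmac.new(shared_key, message, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

msg, tag = send(b"payment order: 100 units")
assert receive(msg, tag)               # integrity confirmed
assert not receive(msg + b"0", tag)    # a modified message is rejected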

Kerberos is a network service designed to solve authentication and authorization tasks centrally in large networks. In networks using Kerberos security, all authentication procedures between clients and servers are performed through an intermediary trusted by both parties, the Kerberos system itself acting as this authoritative arbiter.

Primary authentication: the user is authenticated once, during logical login to the network, and then goes through authentication and authorization procedures whenever access to a new resource server is needed. When a user logs on to the network, the Kerberos client installed on his computer sends the user identifier to the authentication server. The authentication server checks its database for an entry with that identifier and, if one exists, retrieves the user's password from it. This password is used to encrypt all the information that the authentication server sends back to the Kerberos client. When the response arrives at the client machine, the Kerberos client program asks the user to enter the password. If the password is correct, the client can decrypt the response, which contains a ticket for access to the ticket-granting server (in encrypted form) and a session key (in open form). Successful decryption of the message means successful authentication. The user's next step is to obtain permission to access a particular resource server; for this the client contacts the server that issues such permissions (tickets), presenting the ticket it has already received from the authentication server.
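The greatly simplified Python model below (illustrative only; it uses the third-party cryptography package and invented names, and omits the timestamps, authenticators and realms that real Kerberos requires) shows the core trick: the reply is encrypted under a key derived from the user's password, so successful decryption is itself proof of identity, while the ticket stays opaque to the client because it is encrypted under the ticket-granting server's own key.

from cryptography.fernet import Fernet

# Long-term keys: the authentication server shares one key with the user
# (in real Kerberos it is derived from the password) and one with the
# ticket-granting server (TGS).
user_key = Fernet.generate_key()   # stands in for the password-derived key
tgs_key = Fernet.generate_key()

# Authentication server: issue a session key for the user and a ticket
# that only the TGS can read.
session_key = Fernet.generate_key()
reply_to_user = Fernet(user_key).encrypt(session_key)
ticket_for_tgs = Fernet(tgs_key).encrypt(b"user=alice;key=" + session_key)

# Client: only someone who knows the user's password (and hence user_key)
# can decrypt the reply; successful decryption is the authentication.
client_session_key = Fernet(user_key).decrypt(reply_to_user)

# TGS: decrypts the ticket with its own key and trusts its contents,
# because only the authentication server could have produced it.
ticket_contents = Fernet(tgs_key).decrypt(ticket_for_tgs)
assert client_session_key in ticket_contents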

Many distributed services in Windows 2000 use Kerberos authentication. Examples of its use in Windows 2000 include:

Active Directory authentication using LDAP for directory queries or management;

CIFS/SMB remote file access protocol;

DFS distributed file system management;

Secure DNS address update;

Printing Services;

Intranet authentication in Internet Information Services;

Authentication of public key certificate requests coming from users and computers; etc.

Data archiving is the compression of files and their placement in external memory for storage. Archiving also reduces the costs associated with storing and transmitting data. Rarely used data and programs are the usual candidates for archiving. Compression is performed by a program called an archiver, which can process both text and graphics files.
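As a small illustration of the effect, the sketch below compresses a repetitive byte string with Python's standard zlib module (compression only; real archivers such as ZIP also package many files and their metadata together). Highly repetitive text shrinks dramatically, while already-compressed graphics formats gain little.

import zlib

text = b"Rarely used data and programs are subject to archiving. " * 100

packed = zlib.compress(text, level=9)          # the form placed in storage
print(len(text), "->", len(packed), "bytes")   # repetitive text shrinks a lot

restored = zlib.decompress(packed)             # the archiver restores the data
assert restored == text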


"Informatics is one of the fastest growing branches of science and technology; many of its terms and concepts are constantly changing." (Computer Science: A Dictionary)

Computer science is both a new information industry and a scientific discipline: the industry deals with, and the discipline studies, the ways, methods and means of automating the creation, collection, storage, processing and transmission of information. The term "informatics" arose in the early 1960s in French to denote the automated processing of information in society; it is a kind of hybrid of two words, information and automation. Computer science can be divided into three components: theoretical computer science; information means; and information systems and technologies. Each of these parts is structured into several subsections, some of which are treated as independent academic disciplines. Theoretical computer science includes the following sections:

    Theory of algorithms and automata.

    Information theory.

    Coding theory.

    Mathematical logic.

    Theory of formal languages and grammars.

    Operations research, etc.

Information means (technical and software) is the section that examines the general principles of constructing computing devices and data processing systems, as well as issues of developing software systems; here one can separately single out programming, which is concerned with the development of software systems. Information systems and technologies is the section that studies the analysis of information flows, the possibilities of their optimization and structuring, and the principles of implementing information processes. Currently there is one more section, artificial intelligence: a field of computer science that solves complex problems at the intersection of psychology, physiology, linguistics and other sciences. The main directions of development in this area are the modeling of reasoning, computational linguistics, machine translation, the creation of expert systems, pattern recognition, and others. As a complex scientific discipline, computer science is related to other scientific disciplines:

    philosophy and psychology - through the doctrine of information and theory of knowledge;

    mathematics – through the theory of mathematical modeling, discrete mathematics, mathematical logic and the theory of algorithms;

    linguistics – through the study of formal languages and sign systems;

    cybernetics – through information theory and control theory;

    physics and chemistry, electronics and radio engineering - through the “material” part of the computer and information systems.

The concept of information

Currently, information is one of the most expensive types of resources. The creation of new means of information processing, computers, gave impetus to the intensive development of the information industry, and an increasing number of economic entities are being drawn into the process of information processing. Using the term "information", we, as a rule, do not think about what information actually is, yet this question is quite complex: science has still not developed a strict definition of the concept. For example, the axiological approach strives to proceed from the value and practical significance of information, i.e. from the qualitative characteristics that are significant in a social system. The semantic approach considers information from the point of view of both form and content, and associates information with a thesaurus, i.e. with the completeness of a systematic body of data about the subject of the information.

The concept of information cannot be considered merely a technical, interdisciplinary or even supradisciplinary term: information is a fundamental philosophical category. There are three points of view on the phenomenon of information. The first identifies the concept of "information" with knowledge; although this approach is widely criticized in the domestic literature, it appears in many scientific works. The second limits the subject area of the concept to social and biological processes, rejecting the existence of information processes in inorganic nature. The third, widely held at present, is associated with the attributive concept of information, first formulated by N. Wiener, who believed that all phenomena in nature are covered by three basic concepts: matter, energy and information. Many modern authors, unlike N. Wiener, who did not consider the relationship of these components, link them closely and treat them as a single system.


The textbook presents all the sections of computer science that determine the modern level of training of specialists in the higher education system. The content of the book fully complies with the requirements of the state educational standards. The manual is intended for students of all specialties and areas of training, except those who specialize in the field of computer science.

MAIN TASKS OF COMPUTER SCIENCE.
The term computer science comes from the French word informatique (a combination of the words information, "information", and automatique, "automation"). In English-speaking countries its counterpart is the term Computer Science.

Computer science is a science that studies all aspects of obtaining, storing, transforming, transmitting and using information. Let's look at the basic concepts of computer science.
Information resources (IR) are information and the media carrying it in information systems and networks.

Information system (IS) is a system designed to store, search for, process and deliver information in response to user requests.
Information technology (IT) is a process that includes a set of methods for collecting, storing, processing and transmitting information based on the use of computer technology.

The rapid development of computer science is associated with the advent of electronic computers, or computers. That is why computer science is the science of the methods of creating, storing, reproducing, processing and transmitting data by means of computer technology, as well as the science of the principles of operation of these means and methods of managing them.

TABLE OF CONTENTS
Preface
Chapter 1. The concept of information. General characteristics of the processes of collecting, transmitting, processing and storing information
§ 1.1. Basic tasks of computer science
§ 1.2. Information, quality and quantity of information. Information processes
§ 1.3. General presentation of data and the concept of number systems
§ 1.4. Presentation of numerical data
1.4.1. Conversion of numbers between positional number systems
1.4.2. Arithmetic operations in positional number systems
1.4.3. Representation of numbers in a computer
§ 1.5. Logical data representation
§ 1.6. Representation of text data
§ 1.7. Presentation of graphical data
§ 1.8. Data structures
§ 1.9. Units of information storage
Chapter 2. Technical means for implementing information processes
§ 2.1. Main stages in the development of informatics and computer technology
§ 2.2. Architecture, composition and purpose of the main elements of a personal computer
§ 2.3. Storage devices
§ 2.4. I/O devices
Chapter 3. Software implementation of information processes
§ 3.1. Software classification
§ 3.2. The Windows operating system
§ 3.3. Utility software
3.3.1. File managers
3.3.2. Information compression
3.3.3. Data backup programs
3.3.4. CD burning programs
3.3.5. Viewers and converters
§ 3.4. Word processing software
3.4.1. The Notepad text editor
3.4.2. The WordPad text editor
3.4.3. The Word word processor
§ 3.5. Creating a presentation using PowerPoint
3.5.1. Creating presentation slides
3.5.2. Creating effects for displaying slides on the screen
3.5.3. Showing a presentation
Chapter 4. Models for solving functional and computational tasks
§ 4.1. Modeling as a method of cognition
4.1.1. The concepts of object and system
4.1.2. Modeling methods and types of models
4.1.3. Classification of mathematical models
§ 4.2. Simulation technology
§ 4.3. Classification of problems solved using models
§ 4.4. Intelligent systems
4.4.1. Propositional and predicate calculus
4.4.2. The logical knowledge model
4.4.3. The production knowledge model
4.4.4. Semantic networks
4.4.5. Frames
4.4.6. Fuzzy knowledge representation
4.4.7. Expert systems
4.4.8. Artificial neural networks
4.4.9. Genetic algorithms
§ 4.5. Problem solving strategies
Chapter 5. Algorithmization and programming
§ 5.1. Algorithmization
§ 5.2. The evolution of programming languages
§ 5.3. Programming in BASIC
5.3.1. Variables and constants
5.3.2. Operators and operations
5.3.3. Conditional statements
5.3.4. Loops
5.3.5. Operations with string variables
§ 5.4. Basic concepts of object-oriented visual programming
5.4.1. Object classes, class instances, and object families
5.4.2. Objects: properties, methods, events
Chapter 6. Software and programming technologies
§ 6.1. Programming systems
§ 6.2. Structured programming
§ 6.3. Stages of preparing and solving problems on a computer
Chapter 7. Spreadsheets
§ 7.1. Basic concepts and elements of spreadsheets
§ 7.2. Using formulas and functions
§ 7.3. Sorting and filtering data
§ 7.4. Summarizing data
7.4.1. Using the Totals function
7.4.2. Data consolidation
7.4.3. Pivot tables
Chapter 8. Databases
§ 8.1. Basic database concepts
§ 8.2. The relational data model
§ 8.3. Building database tables
§ 8.4. Sorting, searching and filtering data
§ 8.5. Creating queries
8.5.1. Query tools
8.5.2. Select queries
8.5.3. Totals queries
8.5.4. Multi-table queries
8.5.5. Creating SQL queries
§ 8.6. Generating reports
Chapter 9. Computer networks
§ 9.1. Basic concepts and definitions
§ 9.2. Hardware and software components of computer networks
§ 9.3. Principles of building the Internet
9.3.1. Internet access
9.3.2. Data transfer protocols
9.3.3. Internet addressing
§ 9.4. Internet services
§ 9.5. Tools for using network services
Chapter 10. Fundamentals of information security
§ 10.1. Information security and its components
§ 10.2. Computer viruses and antivirus protection
10.2.1. Definition and classification of viruses
10.2.2. Methods and means of protection against viruses
§ 10.3. Protection against unauthorized intervention
10.3.1. Identification, authentication and encryption systems
10.3.2. Cryptographic methods of information protection
Literature


R. S. Gilyarevsky

COMPUTER SCIENCE FUNDAMENTALS

Lecture course

The lectures are intended for students of higher educational institutions. The purpose of the course is to give students an idea of modern computer science as a scientific discipline and as the theoretical basis of information technology, to introduce them to the basic concepts and patterns of intellectual communication and information activities, and to teach them methods of information retrieval.

The lectures reveal the essence of information and information technology. Students become familiar with the possibilities of electronic text processing, from checking spelling and grammar to machine translation from one language to another, and with electronic forms of communication. The modern understanding of the information infrastructure is outlined, together with information about network technology and the first steps of working on the Internet.

The course "Fundamentals of Computer Science" was taught at the university-wide Department of Scientific Information of the Moscow state university them. M. V. Lomonosov in 1964–1985 for students of biology, soil science, physics, chemistry, law, the Institute of Asian Countries and students of the faculty of advanced training. Since 1986, it has been read to students of the Faculty of Journalism in a revised form, taking into account new information technology and the special specifics of humanitarian work.

Introductory lecture
Computer science as a scientific discipline
The beginning of computer science
Subject and objects of research
Computer science and other sciences and scientific disciplines
Information – knowledge – science
Information and data
Properties of information
Information structure
Information features
Scientific and technical information
Science as a social phenomenon
Prospects for the development of science
Intellectual communication
Basic concepts
The system of scientific communication
Library and information activities
Scientific information activities
Stages and tasks of communication
Information service
Development prospects
Man in the process of communication
Information consumers
Egalitarian information services
Information needs for development
Literature as a source of information
Basic concepts, evolution and typology
Patterns of growth and aging
The law of scattering
Development prospects
Information publications and services
Main types
Abstracting and bibliography
The VINITI abstract journal
Foreign abstract journals
Electronic information and databases
Transmission networks and data storage and processing facilities
Information services
Information structures and infrastructure
Information search
Background and essence
Procedures and concepts
Coordinate indexing
Citation, bibliographic coupling, co-citation
Hierarchical and facet classifications
Different types of information retrieval languages
Databases and data banks
Information systems
Information retrieval systems
Intelligent information systems
Hypertext systems
Machine translation systems
Information technology
About the concept of information technology
Trends in information technology development
The influence of information technology on the development of science
Social consequences of new technology
Computer communications
Electronic computers
New generations of computers
Personal computer and personal computing
Working with text on a computer
About programming languages
Text preparation applications
"Understanding" text in natural language
E-book
A new concept for the book
Essence, features and types of e-books
Electronic journal: problems of distribution and storage
Organizational and legal problems
Electronic libraries, real and virtual
Is an e-book better than a traditional printed book?
Internet information
The Internet as a global computer network
Organizing access to primary sources
Final lecture
Ideas and methods of computer science
The search for a fundamental law
Definition and subject area of computer science
Perspectives on computer science
Glossary of terms

Introductory lecture

Computer Science as a Scientific Discipline

In modern society, the various types of activity involved in transmitting and disseminating the results of mental activity occupy an increasingly prominent place. Journalists, editors, referents, documentalists, librarians and bibliographers, information and archival workers believe only by tradition that their professions belong to different types of activity. Many now understand that these types are stages of a single process of intellectual communication. Communication (connection, message, the process and path of communication) can occur directly, on the physical level, but intellectual communication, i.e. communication relating to a person's mental abilities, is always ideal and is carried out informationally. For this reason it is often also called information communication.

In recent decades, a computer, or rather electronic, revolution has occurred in information technology, that is, in the methods and technical means of transmitting information. The emergence of personal computers, their networking with each other and with large computer centers, and the introduction into the usual processes of information exchange of desktop publishing complexes, copying and duplicating machines, and relatively cheap but very capacious means of storing information (compact optical disks) have led to significant changes in the field of communication. Many professions that have existed for thousands of years are merging, and new ones are appearing for which we do not yet have good Russian names, for example, public relations specialist or database administrator. Of course, it will take a long time for specialists in all these professions to become fully aware of their unity. But new generations of specialists must develop their ethical principles and professional worldview together, within the framework of a common information direction.

New information technology (computer training texts, information retrieval, hypertext and intelligent systems, the transmission of machine-readable files over computer networks and telephone channels, desktop publishing systems, copying and duplicating equipment) allows a new interpretation of this entire field of activity, from journalism to library and archival science. Specialists in these professions must first of all understand that all information processes rest on general patterns rooted in the structure and properties of information itself. But all knowledge workers and graduates of higher educational institutions also need the skills to use information technologies and technical means independently.

Students who come to university from high school are convinced that computer science comes down to mastering these skills, or simply to the ability to work on a computer, since this is what is taught at school under the subject called "computer science". But this is a one-sided and therefore incorrect interpretation of computer science, which by its very name is a discipline that studies information. The computer and the electronic devices associated with it do indeed serve today as the most powerful means of processing, storing and transmitting information, or more precisely, of the data in which this information can be contained. But these processes occurred in society before the advent of the computer and will continue to occur when other means are created for carrying them out.

Computer science is a scientific discipline that studies the structure and general properties of semantic information and the patterns of its functioning in society; it is the theoretical basis of information technology, with which it is often identified.

Later the meaning of this definition and its individual terms will be explained in detail; you will understand why we speak of semantic information and of information technology, and you will learn that the functioning of information in society takes place in processes of intellectual (or informational) communication.

The formation of computer science

Computer science did not appear overnight. It has been developing since the beginning of this century, and its general contours were clearly outlined by its creator, Paul Otlet (1868–1944). Among the scientists and specialists who have made a significant contribution to the development of computer science, mention should be made of such names as S. Bradford, W. Bush, K. Muers, M. Taube, A. Kent, B. Vickery, H. P. Luhn, S. Cleverdon, O. Weinberg, D. Price, Y. Garfield, A. I. Mikhailov, V. A. Uspensky, G. E. Vleduts, Yu. A. Shreider, A. I. Cherny, V. K. Finn. The works and achievements of these authors, which are listed in Table 1, can serve as milestones in the development of computer science.

Of interest are the assessments of some American scientists who viewed the development of computer science from different positions. Prof. T. Saracevic emphasized the importance of research in the field of scientific communication for its development: "Despite the fact that information science is in no way limited to scientific communication alone (most theoretical and practical works in this area are of a universal nature), the problems of scientific communication, as they were understood in the 1930s and 1940s, served as the main impetus for the emergence of this science, the reason for its special, philosophical character and the structure of the decisions made in it. Thus, understanding the nature of the problems of scientific communication and their interpretation helps to understand the essence of information science and its relationship with other sciences"1.

Throughout the 1950s and 1960s, computer science was driven by the social need to streamline the exchange of information, mainly within science itself. The external factors that determined this need were the rapid growth of scientific literature; the difficulty of selecting it by subject, owing to its dispersion and the limitations of search tools; and the processes of specialization and integration in science, which blur the traditional boundaries between scientific disciplines. During this period our ideas about the ways and methods of solving the problem of information retrieval changed significantly. Initially, hopes were placed on technical means alone.

Table 1. Major achievements in the development of computer science (author: publications; achievement)

P. Otlet (1868–1944): Universal bibliographic classification; Treatise on Documentation. Brussels, 1905, 1934. Achievement: the creation of a classification for the bibliographic repertoire; the first work on computer science and a forecast of its development.

S. Bradford: Sources of information on a particular subject. London, 1934. Achievement: the discovery of the law of scattering of scientific publications.

W. Bush: A possible way of our thinking ("As We May Think"). New York, 1945. Achievement: a forecast of the development of computer science for half a century.

K. Muers: Coding for mechanical knowledge organization. Cambridge (MA). Achievement: the formulation of the basic concepts of information retrieval.

M. Taube: Research on coordinate indexing; Computing machines and common sense. Washington, 1953–1959. Achievement: the creation of the theory of descriptor-type information retrieval systems.

A. Kent: Machine literature search; Encyclopedia of Library and Information Science. New York, 1956, 1970–1990. Achievement: the application of technical means to information retrieval; the creation of a multi-volume encyclopedia on computer science.

B. Vickery: Classification and indexing in science. London, 1957. Achievement: the development of information retrieval languages.

S. Cleverdon: The Cranfield project. Cranfield, 1957. Achievement: the study of the effectiveness of information retrieval systems.

H. P. Luhn: Selective dissemination of information. New York, 1958. Achievement: the application of computers in information activities.

O. Weinberg: Science, government and information. Washington, 1963. Achievement: the report of a commission of scientists to the US President on the importance of information for science.

D. Price: Little Science, Big Science. New Haven, 1963. Achievement: the analysis of informal communications between scientists and of the distribution of their publications.

Y. Garfield: Index of cited literature (Science Citation Index); Atlases of science; Current Contents of journals. Philadelphia, 1964. Achievement: a new principle of searching and analyzing literature and information.

J. Salton: Automatic processing, storage and retrieval of information; Dynamic library information systems. New Jersey, 1975–1979. Achievement: the modern theory of information systems; the creation of Salton's advanced text retrieval system (SMART).

Foundations of information science. London, 1960. Achievement: the creation of a mathematical apparatus for Bradford's law and an attempt to discover general laws of computer science.

A. I. Mikhailov: Fundamentals of scientific information; Fundamentals of computer science; Scientific communications and computer science. Moscow, 1965–1976. Achievement: the creation of general monographs on computer science and the scientific foundations of information activities in the USSR.

V. A. Uspensky: On the problem of constructing a machine language; On the problems of the theory of scientific information. Moscow, 1959–1963. Achievement: the determination of principles for formalizing intellectual processes.

G. E. Vleduts: Automated information systems for chemistry. Moscow, 1974. Achievement: the formulation of the foundations of chemical informatics.

Yu. A. Shreider: Equality, similarity, order; Systematicity and evolution. Moscow, 1980–1983. Achievement: the creation of the theory of binary relations (similarity), the theory of rank distributions, and a general concept of the system.

A. I. Cherny: Introduction to the theory of information retrieval. Moscow, 1975; co-author of monographs by A. I. Mikhailov. Achievement: the generalization of the theory of information retrieval; the creation of the large ASSISTENT information system (VINITI).

V. K. Finn: Some logical problems of information retrieval; Intelligent systems and society. Moscow, 1975, 2001. Achievement: the creation of the theory of intelligent information systems of the JSM ("John Stuart Mill") type.

D. G. Lahuti: Questions of the theory of search systems; Automated documentary-factographic information retrieval systems. Moscow, 1963, 1988. Achievement: the development of the theory of information retrieval; the creation of the information systems Empty–Non-Empty, Brackets, and Abstract.

Quite soon came the understanding that semantic means of analyzing and synthesizing scientific information had to be developed. It is no coincidence that the theory of information retrieval, the methods of coordinate indexing, and the concepts of relevance (correspondence to a request) and pertinence (correspondence to an information need) became core concepts of computer science.

In the 1970s the scope of application of computer science expanded further. Scientific information began to play a significant role in managing the national economy, in political decision-making, and in the global modeling of social development. Scientific and technical information systems came to be regarded as elements of automated control systems, which was legitimate only to a certain extent. At the same time, it turned out that, however broadly scientific information is interpreted, it alone is not enough for information systems that are to solve such general social problems. On the other hand, it turned out that in their most significant aspects the methods of constructing information retrieval systems for scientific information are applicable to many other types of information (economic, statistical, political, industrial, etc.). Along with scientific information, factual information acquired particular importance, and information services, while maintaining a disciplinary orientation, began to develop along problem-oriented lines as well.
