What is the World Wide Web? What is the Internet, who created the World Wide Web, and how does the global network work?

The World Wide Web (abbreviated WWW, or simply "the Web") is a collection of information resources scattered throughout the world, interconnected by means of telecommunications and based on a hypertext representation of data.

The year of birth of the World Wide Web is considered to be 1989. It was this year that Tim Berners-Lee proposed a common hypertext project, which later became known as the World Wide Web.

The creator of the Web, Tim Berners-Lee, worked in the elementary-particle-physics laboratory of the European Organization for Nuclear Research (CERN) in Geneva, Switzerland. Together with his colleague Robert Cailliau, he worked on applying hypertext ideas to build an information environment that would simplify the exchange of information between physicists.

The result of this work was a document that examined concepts that are fundamental to the “web” in its modern form, and proposed URIs, the HTTP protocol, and the HTML language. Without these technologies it is no longer possible to imagine the modern Internet.

Berners-Lee created the world's first web server and the world's first hypertext web browser. On the world's first website he described what the World Wide Web was, how to set up a web server, how to use a browser, and so on. This site was also the world's first Internet catalogue.

Since 1994, the most important work on the development of the World Wide Web has been taken over by the World Wide Web Consortium (W3C), which was founded and is still headed by Tim Berners-Lee. The consortium develops and implements technology standards for the Internet and the World Wide Web. The W3C's mission is to "unleash the full potential of the World Wide Web by creating protocols and principles that guarantee the long-term development of the Web." The W3C develops "Recommendations" to achieve compatibility between the software products and equipment of different companies, which makes the World Wide Web more advanced, universal and convenient.

Search engines: composition, functions, operating principles.

A search engine is a software and hardware complex designed to search the Internet and to respond to a user request, given as a text phrase (a search query), with a list of links to sources of information, ordered by relevance to the request. The largest international search engines are Google, Yahoo and MSN. On the Russian Internet the leaders are Yandex, Rambler and Aport.

Let us describe the main characteristics of search engines:

    Completeness

Completeness is one of the main characteristics of a search engine: the ratio of the number of documents found for a query to the total number of documents on the Internet that satisfy that query. For example, if there are 100 pages on the Internet containing the phrase "how to choose a car" and only 60 of them are found for the corresponding query, then the completeness of the search is 0.6. Obviously, the more complete the search, the less likely it is that the user will fail to find the document he needs, provided it exists on the Internet at all.

    Accuracy

Accuracy is another main characteristic of a search engine, determined by the degree to which the found documents match the user's query. For example, if 100 documents are returned for the query "how to choose a car", 50 of them contain that exact phrase and the rest merely contain the individual words ("how to choose the right radio and install it in a car"), then the search accuracy is 50/100 = 0.5. The more accurate the search, the faster the user finds the documents he needs, the less "garbage" appears among them, and the less often the found documents fail to match the request.
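
Both ratios can be illustrated with a minimal sketch; the numbers below are simply the ones used in the examples above:

    # Toy illustration of completeness (recall) and accuracy (precision).
    def completeness(found_relevant: int, total_relevant: int) -> float:
        """Share of all truly relevant pages that the engine managed to find."""
        return found_relevant / total_relevant

    def accuracy(found_relevant: int, total_found: int) -> float:
        """Share of the returned documents that actually match the query."""
        return found_relevant / total_found

    print(completeness(60, 100))   # 0.6 -- 60 of the 100 relevant pages were found
    print(accuracy(50, 100))       # 0.5 -- 50 of the 100 returned documents are on topic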

    Relevance

Relevance (freshness of the index) is an equally important component of search; it is characterized by the time that passes between the moment documents are published on the Internet and the moment they are entered into the search engine's index database. For example, the day after an interesting piece of news appears, a large number of users turn to search engines with the corresponding queries. Objectively, less than a day has passed since the news was published, but the main documents have already been indexed and are available for search, thanks to the so-called "fast database" of large search engines, which is updated several times a day.

    Search speed

Search speed is closely related to load tolerance. For example, according to Rambler Internet Holding LLC, the Rambler search engine today receives about 60 queries per second during business hours. Such a workload requires reducing the processing time of each individual query. Here the interests of the user and of the search engine coincide: the visitor wants results as quickly as possible, and the search engine must process the query as quickly as possible so as not to delay the processing of subsequent queries.

    Visibility

Visual presentation of results is an important component of convenient search. For most queries, the search engine finds hundreds, or even thousands, of documents. Due to unclear queries or inaccurate searches, even the first pages of search results do not always contain only the necessary information. This means that the user often has to perform his own search within the found list. Various elements of the search engine results page help you navigate the search results. Detailed explanations of the search results page, for example for Yandex, can be found at the link http://help.yandex.ru/search/?id=481937.

A Brief History of the Development of Search Engines

In the initial period of Internet development, the number of its users was small, and the amount of available information was relatively small. For the most part, only research staff had access to the Internet. At this time, the task of searching for information on the Internet was not as urgent as it is now.

One of the first ways to organize access to network information resources was the creation of open directories of sites, in which links to resources were grouped by topic. The first such project was the Yahoo.com website, which opened in the spring of 1994. After the number of sites in the Yahoo directory had grown significantly, the ability to search the directory for the necessary information was added. In the full sense it was not yet a search engine, since the search was limited to the resources listed in the catalogue rather than to all Internet resources.

Link directories were widely used in the past, but have almost completely lost their popularity today, since even the largest modern catalogues cover only a negligible part of the Internet. The largest directory on the network, DMOZ (also called the Open Directory Project), contains information about 5 million resources, while the Google search database consists of more than 8 billion documents.

The first full-fledged search engine was the WebCrawler project, published in 1994.

In 1995, search engines Lycos and AltaVista appeared. The latter has been a leader in the field of information search on the Internet for many years.

In 1997, Sergey Brin and Larry Page created the Google search engine as part of a research project at Stanford University. Google is currently the most popular search engine in the world!

In September 1997, the Yandex search engine, which is the most popular on the Russian-language Internet, was officially announced.

Currently, there are three main international search engines - Google, Yahoo and MSN, which have their own databases and search algorithms. Most other search engines (of which there are a large number) use in one form or another the results of the three listed. For example, AOL search (search.aol.com) uses the Google database, while AltaVista, Lycos and AllTheWeb use the Yahoo database.

Composition and principles of operation of the search system

In Russia, the main search engine is Yandex, followed by Rambler.ru, Google.ru, Aport.ru, Mail.ru. Moreover, at the moment, Mail.ru uses the Yandex search engine and database.

Almost every major search engine has its own structure that differs from the others. Nevertheless, it is possible to identify the main components common to all search engines; the differences lie only in how the interaction of these components is implemented.

Indexing module

The indexing module consists of three auxiliary programs (robots):

Spider – a program that downloads web pages. The spider downloads a page and extracts all the links from it; the HTML code of every page is downloaded. Pages are downloaded over the HTTP protocol: the spider sends the server a request such as "GET /path/document" together with some other HTTP headers, and in response receives a text stream containing service information and the document itself. For every downloaded page the spider records:

    the page URL

    the date the page was downloaded

    the HTTP headers of the server response

    the page body (the HTML code)
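
A minimal sketch of such a download step, using Python's standard library (the URL is a placeholder; a real spider would also respect robots.txt, timeouts, redirects, and so on):

    # Fetch one page over HTTP and keep the fields listed above:
    # URL, download date, response headers and the HTML body.
    from datetime import datetime, timezone
    from urllib.request import urlopen

    url = "http://example.com/"          # placeholder address
    with urlopen(url, timeout=10) as response:
        record = {
            "url": url,
            "downloaded_at": datetime.now(timezone.utc).isoformat(),
            "http_headers": dict(response.headers.items()),
            "body": response.read().decode("utf-8", errors="replace"),
        }

    print(record["http_headers"].get("Content-Type"))
    print(len(record["body"]), "characters of HTML")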

Crawler (a "travelling" spider) – a program that automatically follows all the links found on a page. It selects every link present on the page and decides where the spider should go next, based on those links or on a predetermined list of addresses. By following the links it finds, the crawler discovers new documents that are still unknown to the search engine.

Indexer (indexing robot) – a program that analyzes the web pages downloaded by the spiders. The indexer parses a page into its component parts and analyzes them using its own lexical and morphological algorithms. Various page elements are analyzed: text, headings, links, structural and style features, special service HTML tags, and so on.

Thus, the indexing module crawls a given set of resources by following links, downloads the pages it encounters, extracts links to new pages from the documents it receives, and performs a complete analysis of those documents.
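
A minimal sketch of this crawling loop, under the same assumptions as the previous snippet (the seed URL is a placeholder, and link extraction is reduced to collecting href attributes):

    # Starting from a seed URL, repeatedly download a page, extract its links,
    # and queue links that have not been seen yet (breadth-first traversal).
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(seed, limit=10):
        seen, queue = {seed}, deque([seed])
        while queue and len(seen) <= limit:
            url = queue.popleft()
            try:
                html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
            except OSError:
                continue                       # unreachable page: skip it
            parser = LinkExtractor()
            parser.feed(html)
            for link in parser.links:
                absolute = urljoin(url, link)  # resolve relative links against the page URL
                if absolute.startswith("http") and absolute not in seen:
                    seen.add(absolute)
                    queue.append(absolute)
            yield url, parser.links

    for page, links in crawl("http://example.com/"):
        print(page, "->", len(links), "links")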

Database

A database, or search engine index, is a data storage system, an information array in which specially converted parameters of all documents downloaded and processed by the indexing module are stored.
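
As a rough illustration of what such an index stores, here is a toy inverted index that maps every word to the documents containing it (the documents are invented for the example):

    # Minimal inverted index: word -> set of document ids in which it occurs.
    from collections import defaultdict

    documents = {
        1: "how to choose a car",
        2: "how to choose the right radio and install it in a car",
        3: "train tickets from Moscow to St Petersburg",
    }

    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index[word].add(doc_id)

    print(sorted(index["car"]))      # -> [1, 2]
    print(sorted(index["moscow"]))   # -> [3]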

Search server

The search server is the most important element of the entire system, since the quality and speed of the search directly depend on the algorithms that underlie its functioning.

The search server works as follows:

    The request received from the user is subjected to morphological analysis. Then the information environment of every document contained in the database is generated (it will later be displayed as a snippet, i.e. the text fragment corresponding to the query on the search results page).

    The received data is passed as input parameters to a special ranking module.

    The data is processed for all documents; as a result, each document receives its own rating characterizing how well it matches the query entered by the user, based on the various components of the document stored in the search engine index.

    Next, a snippet is generated, that is, for each document found, the title, a short abstract that best matches the query, and a link to the document itself are extracted from the document table, and the words found are highlighted.

    The resulting search results are transmitted to the user in the form of a SERP (Search Engine Result Page) – a search results page.

As you can see, all these components are closely related to each other and work in interaction, forming a clear, rather complex mechanism for the operation of the search system, which requires huge amounts of resources.
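
The query-processing steps listed above can be illustrated with a deliberately simplified sketch: the "morphological analysis" is reduced to lower-casing, the ranking to word counting, and the snippet to a text fragment around the first match; real engines are, of course, far more elaborate, and all documents here are invented.

    documents = {
        1: "How to choose a car: a short guide for beginners",
        2: "How to choose the right radio and install it in a car",
        3: "Train tickets from Moscow to St Petersburg",
    }

    def score(query_words, text):
        words = text.lower().split()
        return sum(words.count(w) for w in query_words)   # crude relevance rating

    def snippet(query_words, text, width=40):
        low = text.lower()
        pos = min((low.find(w) for w in query_words if w in low), default=0)
        return text[max(0, pos - 10): pos + width]         # fragment around the first hit

    def search(query):
        query_words = query.lower().split()                # stand-in for morphological analysis
        ranked = sorted(documents.items(),
                        key=lambda item: score(query_words, item[1]),
                        reverse=True)
        return [(doc_id, snippet(query_words, text))       # the toy results page (SERP)
                for doc_id, text in ranked
                if score(query_words, text) > 0]

    for doc_id, fragment in search("choose a car"):
        print(doc_id, "...", fragment)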

No search engine covers all Internet resources.

Each search engine collects information about Internet resources using its own unique methods and forms its own periodically updated database. Access to this database is granted to the user.

Search engines implement two ways to search for a resource:

    Search by topic catalogs - information is presented in the form of a hierarchical structure. At the top level there are general categories (“Internet”, “Business”, “Art”, “Education”, etc.), at the next level the categories are divided into sections, etc. The lowest level is links to specific web pages or other information resources.

    Keyword search (index search, or detailed search) – the user sends the search engine a request consisting of keywords, and the system returns a list of resources found for that request.

Most search engines combine both search methods.

Search engines can be local, global, regional and specialized.

In the Russian part of the Internet (Runet), the most popular general purpose search engines are Rambler (www.rambler.ru), Yandex (www.yandex.ru), Aport (www.aport.ru), Google (www.google.ru).

Most search engines are implemented in the form of portals.

A portal (from the English portal – main entrance, gate) is a website that integrates various Internet services: search tools, mail, news, dictionaries, etc.

Portals can be specialized (e.g., www.museum.ru) or general-purpose (e.g., www.km.ru).

Search by keywords

The set of keywords used to search is also called the search criterion or search topic.

A query can consist of a single word or of a combination of words joined by operators – symbols by which the system determines what action it must perform. For example, the query "Moscow St. Petersburg" contains the AND operator (this is how a space is interpreted), which indicates that the system should look for documents containing both words – Moscow and St. Petersburg.
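
A minimal sketch of this behaviour, with a made-up document: a space between query terms is treated as AND, so a document matches only if it contains every term.

    # A space in the query acts as AND: every query word must be present.
    def matches(query: str, document: str) -> bool:
        words = document.lower().split()
        return all(term.lower() in words for term in query.split())

    doc = "cheap train tickets moscow st petersburg timetable"
    print(matches("moscow petersburg", doc))   # True  - both words are present
    print(matches("moscow kiev", doc))         # False - "kiev" is missing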

In order for the search to be relevant (from the English relevant – pertinent, to the point), several general rules should be taken into account:

    Regardless of the form in which the word is used in the query, the search takes into account all its word forms according to the rules of the Russian language.

    For example, the query "ticket" will also match other grammatical forms of the word, such as "tickets".

    Capital letters should be used only in proper names, to avoid retrieving unnecessary references. For the lowercase query "blacksmiths", for example, documents will be found that talk both about blacksmiths and about people with the surname Kuznetsov (in the original Russian example the two words share the same stem).

    It is advisable to narrow your search using a few keywords.

If the required address is not among the first twenty addresses found, you should change the request.

Each search engine uses its own query language. To get acquainted with it, use the built-in help of the search engine.

Large sites may have built-in information retrieval systems within their web pages.

Queries in such search systems, as a rule, are built according to the same rules as in global search engines, however, familiarity with the help here will not be superfluous.

Advanced search

Search engines can provide a mechanism for the user to build a complex query. Following an "advanced search" link makes it possible to edit the search parameters, specify additional conditions and select the most convenient form for displaying the results. The parameters that can be specified during an advanced search in the Yandex and Rambler systems are listed below (the wording of the corresponding interface labels is given in quotation marks):

    Where to look for keywords (document title, body text, etc.) — "Dictionary filter", "Search by text..."

    Which words must or must not be present in the document, and how exact the match should be — "Search for query words...", "Exclude documents containing the following words..."

    How far apart the keywords may be located — "Distance between query words..."

    Restriction on the document date — "Document date..."

    Limiting the search to one or more sites — "Site/Top"

    Limiting the search by document language — "Document language..."

    Search for documents containing an image with a specific name or caption — "Image"

    Search for pages containing special objects — "Special objects"

    Form in which search results are presented — "Issue format", "Displaying search results"

Some search engines (for example, Yandex) allow queries in natural language: you simply write what you need to find (for example, "ordering train tickets from Moscow to St. Petersburg"), the system analyzes the request and produces a result. If the result does not satisfy you, switch to the query language.

Scientific and technological progress does not stand still, but is in constant development, search, and improvement. Perhaps the most useful invention of human genius, the Internet, was invented relatively recently, by the standards of the development of civilization. At its core, it is a unique data exchange tool.

The Internet (the Net) is a virtual environment that provides access to information resources. Its elements are personal computers, combined into a single network, given unique addresses and connected to host computers by high-speed communication lines.

The Internet is a huge network connecting countless devices. It serves to exchange information that exists on this network in various forms. Nowadays, not only computers can connect to the Internet. Mobile phones, tablets, game consoles, other gadgets and even TVs can easily access the network at any time.

The significance of this information space is undeniable due to the amazing communication capabilities between users of all devices connected to the Network.

In technical terms, the online space is formed by countless computer devices connected to each other. Billions of PC users living in different countries communicate with each other every day, transmit and receive useful information, download arrays of digital data in the form of applications, programs, utilities; watch videos, listen to music.

The online environment has another important property - unlimited possibilities for storing information. Personal experience is transmitted through the Internet; in addition, it is a unique platform for informing the masses for modern media and a colossal repository of world knowledge.

What is the Internet?

In order for PC owners living on different continents to be able to use network resources freely, trunk cables are laid on the ocean floor, through which information is pumped around the clock.

A personal computer's communication is governed by special protocols – a kind of instruction that sets the rules by which devices talk to each other. The basic element of this addressing scheme is the IP address: each participant in the network receives its own numeric address, by which it is found and identified.

For example, after the name "novichkam.info" is typed into the browser's address bar, in a matter of moments the user lands on a web platform offering help to beginners. In technical terms, the software simply finds the IP address that is assigned to that particular site.

The machine algorithm includes the following operations:

  1. the request is sent to a name server, which stores the correspondence between site names and their addresses;
  2. the name of the resource is looked up in its records, i.e. the required IP address is determined;
  3. the client's browser connects to the website at that address.
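
The name-to-address step can be reproduced with a one-line lookup through the operating system's resolver; this is only a sketch, using the domain mentioned above:

    # Ask the system's DNS resolver which IP address corresponds to a domain name;
    # the browser would then open an HTTP connection to that address.
    import socket

    domain = "novichkam.info"                  # the name typed into the browser
    ip_address = socket.gethostbyname(domain)  # raises an error if the name cannot be resolved
    print(domain, "->", ip_address)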

There are other protocols as well, for example HTTP; requests made over it are marked by the prefix http:// in the address.

What is the World Wide Web (WWW)

For most readers, the Internet service known as the World Wide Web (WWW, or simply the Web) is of the greatest interest. It is understood as a set of interconnected web pages hosted on computers connected to the Internet.

A set of text files marked up in HTML and connected by links, placed on a web server, is called a website. You can get acquainted with the content of a particular website by entering its address into the browser.

Today the Web is the most popular and most in-demand service of the online space, i.e. the Internet. An important element of the Web is the hypertext link: by clicking on a link in a document, or by entering the desired URL (the unique address, or path, of a resource) into the browser, a person can view the desired text.

Addressing system

If you enter an incorrect address into the address bar or follow a broken link, the browser will promptly signal an error (confirming that the requested page does not exist). Sometimes such a request instead takes the user to an advertising (or fraudulent) site.

In this situation you should correct the mistake in the address field without trying to explore the advertising website, for security reasons: such sites may be infected with a virus. If the resource was created for the purpose of fraud, it is useful to get acquainted with our section in which the most common methods of deception on the Internet are described.

The main part of any website address is the domain name, which exists to make the address easier to remember. The domain usually points to the site's home page. At the same time, it should be understood that in order to actually download a page, the computer uses an IP address of the form 12.123.45.5. Agree, such a combination is much harder to remember than the domain name of our site.

It is important to know that typing the "http://" or "www" prefix in the address bar is not necessary. It is also convenient to use a search engine, which will immediately correct a typo, so the domain can be entered even without the zone that causes confusion.

What does the Internet give us?

  • unlimited communication and interaction

Many people are looking for like-minded people here, communicating on popular social projects and forums. Others like the unique service of personal communication using ICQ or Skype. Visitors to a dating website expect to find their other half here;

  • unlimited possibilities for entertainment and personal leisure

Here you can listen to popular music tracks for free, enjoy the latest films from film studios, play various games, including gambling, get acquainted with the works of modern authors and classics of the literary genre, take surveys, tests, etc.

  • self-education

In the environment of mass communication, you can not only read useful articles, but also participate in trainings, master classes, watch video lessons;

  • creative personality development

Here you can meet rare specialists and visit their professional projects for creative and personal development;

  • purchase of goods and services

Clients of virtual supermarkets can buy goods without leaving home. Online you can purchase shares of industrial companies, order tickets, book a hotel room, etc.;

  • new ways to earn money

There are more and more ways to earn money on the Internet. For example, you can open an online store or create your own blog (website). For those who are just trying their hand in this field, it is easier to start with freelancing: writing articles to order, selling photos, offering services for creating and promoting various projects, doing web design and programming.

  • and much more. The information on our website will help you learn not only about all the possibilities of this global network, but also how to gain real experience while using it.

The World Wide Web is made up of hundreds of millions of web servers. Most of the resources on the World Wide Web are based on hypertext technology. Hypertext documents posted on the World Wide Web are called web pages. Several web pages united by a common theme and design, interconnected by links and usually located on the same web server, are called a website. To download and view web pages, special programs – browsers – are used.

The World Wide Web has caused a real revolution in information technology and an explosion in the development of the Internet. Often, when talking about the Internet, they mean the World Wide Web, but it is important to understand that they are not the same thing.

Structure and principles of the World Wide Web

The World Wide Web is made up of millions of Internet web servers located around the world. A web server is a computer program that runs on a computer connected to a network and uses the HTTP protocol to transfer data. In its simplest form, such a program receives an HTTP request for a specific resource over the network, finds the corresponding file on the local hard drive and sends it over the network to the requesting computer. More complex web servers can dynamically generate documents in response to an HTTP request using templates and scripts.
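
In this simplest form, a web server can be sketched with a few lines using Python's standard library: the snippet below serves files from the current directory in response to HTTP GET requests (the port number is arbitrary).

    # A minimal web server: listen on a TCP port, accept HTTP requests and send
    # back files from the local directory. The standard library already contains
    # a simple request handler that does exactly this.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
    print("Serving the current directory at http://localhost:8000/")
    server.serve_forever()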

To view information received from the web server, a special program is used on the client computer - web browser. The main function of a web browser is to display hypertext. The World Wide Web is inextricably linked with the concepts of hypertext and hyperlinks. Most of the information on the Internet is hypertext.

To facilitate the creation, storage and display of hypertext on the World Wide Web, HTML (HyperText Markup Language) is traditionally used. The work of creating (marking up) hypertext documents is called layout; it is done by a webmaster or by a separate markup specialist, a layout designer. After HTML markup, the resulting document is saved to a file, and such HTML files are the main type of resource on the World Wide Web. Once an HTML file is made available to a web server, it is called a "web page." A set of web pages forms a website.

The hypertext of web pages contains hyperlinks. Hyperlinks help users of the World Wide Web easily navigate between resources (files), regardless of whether the resources are located on the local computer or on a remote server. Uniform resource locators, URLs (Uniform Resource Locator), are used to locate resources on the World Wide Web. For example, the full URL of the main page of the Russian section of Wikipedia looks like this: http://ru.wikipedia.org/wiki/Main_page. URL locators combine the URI (Uniform Resource Identifier) identification technology and the Domain Name System (DNS). The domain name (in this case ru.wikipedia.org), as part of the URL, designates the computer (more precisely, one of its network interfaces) that runs the code of the desired web server. The URL of the current page can usually be seen in the browser's address bar, although many modern browsers prefer to show only the domain name of the current site by default.
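
The structure of such an address can be shown by splitting the URL mentioned above into its parts with the standard library:

    # Split a URL into its components: the scheme names the protocol, the network
    # location is the domain resolved through DNS, the path identifies the document.
    from urllib.parse import urlparse

    parts = urlparse("http://ru.wikipedia.org/wiki/Main_page")
    print(parts.scheme)   # http             -> protocol to use
    print(parts.netloc)   # ru.wikipedia.org -> domain name, resolved via DNS
    print(parts.path)     # /wiki/Main_page  -> resource on the web server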

World Wide Web Technologies

To improve the visual presentation of the web, CSS technology has become widely used; it allows uniform design styles to be set for many web pages at once. Another innovation worth noting is the URN (Uniform Resource Name) resource naming system.

A popular concept for the development of the World Wide Web is the creation of the Semantic Web. The Semantic Web is an add-on to the existing World Wide Web designed to make information posted on the network more understandable to computers. It is a concept of a network in which every resource in human language would be provided with a description that a computer can understand. The Semantic Web opens access to clearly structured information for any application, regardless of platform and programming language. Programs would be able to find the necessary resources themselves, process information, classify data, identify logical connections, draw conclusions and even make decisions based on those conclusions. If widely adopted and implemented wisely, the Semantic Web has the potential to spark a revolution on the Internet. To create a computer-readable description of a resource, the Semantic Web uses the RDF (Resource Description Framework) format, which is based on XML syntax and uses URIs to identify resources. Newer developments in this area are RDFS (RDF Schema) and SPARQL (SPARQL Protocol and RDF Query Language, pronounced "sparkle"), a query language for fast access to RDF data.
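
The core idea can be illustrated without the real RDF/SPARQL stack: facts are stored as subject-predicate-object triples, and a query is a triple pattern with wildcards. The sketch below is a deliberately simplified, pure-Python imitation; the URIs in it are invented.

    # Facts as (subject, predicate, object) triples; None acts as a wildcard,
    # very loosely imitating what a SPARQL triple pattern does over RDF data.
    triples = [
        ("http://example.org/TimBL", "created", "http://example.org/WorldWideWeb"),
        ("http://example.org/WorldWideWeb", "uses", "http://example.org/HTTP"),
        ("http://example.org/WorldWideWeb", "uses", "http://example.org/HTML"),
    ]

    def query(subject=None, predicate=None, obj=None):
        return [t for t in triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

    # "What does the World Wide Web use?"
    for s, p, o in query(subject="http://example.org/WorldWideWeb", predicate="uses"):
        print(o)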

History of the World Wide Web

Tim Berners-Lee and, to a lesser extent, Robert Cailliau are considered the inventors of the World Wide Web. Tim Berners-Lee is the originator of the HTTP, URI/URL and HTML technologies. In 1980 he worked at the European Council for Nuclear Research (French: Conseil Européen pour la Recherche Nucléaire, CERN) as a software consultant. It was there, in Geneva, Switzerland, that he wrote for his own needs the Enquire program (its name can be loosely translated as "Interrogator"), which used random associations to store data and laid the conceptual foundation for the World Wide Web.

In 1989, while working at CERN on the organization's intranet, Tim Berners-Lee proposed the global hypertext project now known as the World Wide Web. The project involved the publication of hypertext documents linked by hyperlinks, which would facilitate the search for and consolidation of information for CERN scientists. To implement the project, Tim Berners-Lee (together with his assistants) invented URIs, the HTTP protocol, and the HTML language – technologies without which it is no longer possible to imagine the modern Internet. Between 1991 and 1993, Berners-Lee refined the technical specifications of these standards and published them. Nevertheless, 1989 should be considered the official year of birth of the World Wide Web.

As part of the project, Berners-Lee wrote the world's first web server, httpd, and the world's first hypertext web browser, called WorldWideWeb. This browser was also a WYSIWYG editor (short for What You See Is What You Get); its development began in October 1990 and was completed in December of the same year. The program ran in the NeXTStep environment and began to spread across the Internet in the summer of 1991.

Mike Sendall buys a NeXT cube computer at this time in order to understand the features of its architecture, and then gives it to Tim [Berners-Lee]. Thanks to the sophistication of the NeXT software system, Tim wrote a prototype illustrating the basic concepts of the project in a few months. It was an impressive result: the prototype offered users, among other things, such advanced capabilities as WYSIWYG browsing/authoring!... During one of the joint discussions of the project in the CERN cafeteria, Tim and I tried to find a "catchy" name for the system being created. The only thing I insisted on was that the name should not, yet again, be taken from Greek mythology. Tim suggested "World Wide Web". I immediately liked everything about this name; it is only hard to pronounce in French.

The world's first website was put online by Berners-Lee on August 6, 1991, on the first web server, available at http://info.cern.ch/ (an archived copy has been preserved). The resource defined the concept of the World Wide Web and contained instructions for setting up a web server, using a browser, and so on. This site was also the world's first Internet directory, because Tim Berners-Lee later posted and maintained a list of links to other sites there.

The first photograph on the World Wide Web was an image of the parody band Les Horribles Cernettes. Tim Berners-Lee asked the group's leader for scanned photos of them after the CERN Hardronic Festival.

And yet the theoretical foundations of the Web were laid much earlier than Berners-Lee's work. Back in 1945, Vannevar Bush developed the concept of the Memex – a mechanical aid for "expanding human memory." The Memex is a device in which a person stores all his books and records (and, ideally, all of his knowledge that can be formally described) and which supplies the needed information with sufficient speed and flexibility; it is an extension and supplement to human memory. Bush also predicted comprehensive indexing of text and multimedia resources with the ability to quickly find the necessary information. The next significant step towards the World Wide Web was the creation of hypertext (a term coined by Ted Nelson in 1965).

Since 1994, the main work on the development of the World Wide Web has been taken over by the World Wide Web Consortium (W3C), founded and still led by Tim Berners-Lee. This consortium is an organization that develops and implements technology standards for the Internet and the World Wide Web. The W3C's mission is: "Unleash the full potential of the World Wide Web by establishing protocols and principles to ensure the long-term development of the Web." Two other major goals of the consortium are to ensure full "internationalization of the Web" and to make the Web accessible to people with disabilities.

The W3C develops common principles and standards, called "Recommendations" (W3C Recommendations), which are then implemented by software and hardware manufacturers. In this way, compatibility is achieved between the software products and equipment of different companies, which makes the World Wide Web more advanced, universal and convenient. All Recommendations of the World Wide Web Consortium are open: they are not protected by patents and can be implemented by anyone without any financial contributions to the consortium.

Prospects for the development of the World Wide Web

Currently, there are two trends in the development of the World Wide Web: the semantic web and the social web.

  • The Semantic Web involves improving the coherence and relevance of information on the World Wide Web through the introduction of new metadata formats.
  • The Social Web relies on work to organize the information available on the Web, carried out by the Web users themselves. In this second direction, developments that form part of the Semantic Web are actively used as tools (RSS and other web feed formats, OPML, XHTML microformats). Partially semanticized sections of the Wikipedia category tree help users navigate the information space consciously; however, the very loose requirements on subcategories give no reason to expect such sections to grow. In this regard, attempts to compile atlases of knowledge may be of interest.

There is also the popular concept of Web 2.0, which summarizes several directions of development of the World Wide Web.

Methods for actively displaying information on the World Wide Web

Information on the Web can be displayed either passively (that is, the user can only read it) or actively – in which case the user can add to and edit the information. Methods for actively displaying information on the World Wide Web include guest books, forums, chats, blogs, wiki projects and content management systems.

It should be noted that this division is very arbitrary. So, say, a blog or guest book can be considered a special case of a forum, which, in turn, is a special case of a content management system. Usually the difference is manifested in the purpose, approach and positioning of a particular product.

Some information from websites can also be accessed through speech. India has already begun testing a system that makes the text content of pages accessible even to people who cannot read and write.

The World Wide Web is sometimes ironically called the Wild Wild Web, in reference to the title of the film Wild Wild West.

Security

For cybercriminals, the World Wide Web has become a key means of distributing malware. In addition, the concept of online crime includes identity theft, fraud, espionage and the illegal collection of information about particular subjects or objects. Web vulnerabilities, according to some data, now outnumber any traditional kind of computer security problem; Google estimates that approximately one in ten pages on the World Wide Web may contain malicious code. According to Sophos, a British maker of antivirus solutions, the majority of cyberattacks on the web are carried out through legitimate websites, mainly located in the USA, China and Russia. The most common type of such attack, according to the same company, is SQL injection – the malicious entry of direct database queries into text fields on resource pages, which, if security is insufficient, can lead to the disclosure of the database's contents. Another common threat, which exploits HTML and URIs, is cross-site scripting (XSS); it became possible with the introduction of JavaScript and gained momentum with the development of Web 2.0 and Ajax – new standards that encouraged the use of interactive scripts. In 2008, it was estimated that up to 70% of all websites in the world were vulnerable to XSS attacks against their users.
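
SQL injection, mentioned above, is easy to demonstrate on a toy example. The sketch below uses an in-memory SQLite database with an invented table; the second query shows the standard defence, a parameterized query.

    # The SQL injection problem in miniature, using the standard sqlite3 module.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'top-secret')")

    user_input = "nobody' OR '1'='1"   # a malicious value typed into a text field

    # Dangerous: the input is pasted straight into the SQL text, so the OR '1'='1'
    # clause makes the condition always true and the whole table is disclosed.
    unsafe = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()

    # Safe: a parameterized query treats the input as plain data, not as SQL.
    safe = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()

    print("unsafe query returned:", unsafe)   # leaks ('alice', 'top-secret')
    print("safe query returned:  ", safe)     # returns nothing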

Proposed solutions to these problems vary significantly, sometimes to the point of contradicting one another completely. Large security vendors such as McAfee develop products that evaluate information systems for compliance with certain requirements; other market players (for example, Finjan) recommend active, real-time inspection of program code and of all content in general, regardless of its source. There is also the view that business should treat security as a business opportunity rather than as a cost; to this end, the hundreds of companies providing information security today would have to be replaced by a small group of organizations enforcing an infrastructure policy of continuous and pervasive digital rights management.

Confidentiality

Each time a user's computer requests a web page from a server, the server determines and typically logs the IP address from which the request came. Likewise, most Internet browsers record information about the pages you visit, which can then be viewed in your browser history, and also cache downloaded content for possible reuse. If an encrypted HTTPS connection is not used when interacting with the server, requests and responses to them are transmitted over the Internet in clear text and can be read, recorded and viewed on intermediate network nodes.

When a web page requests and a user provides a certain amount of personal information, such as a first and last name or a real or email address, the data stream can be de-anonymized and associated with a specific person. If a website uses cookies, supports user authentication or other technologies for tracking visitor activity, then a relationship may also be established between previous and subsequent visits. Thus, an organization operating on the World Wide Web has the opportunity to create and update the profile of a specific client using its site (or sites). Such a profile may include, for example, information about leisure and entertainment preferences, consumer interests, occupation and other demographic indicators. Such profiles are of significant interest to marketers, advertising agency employees and other similar professionals. Depending on the terms of service of specific services and local laws, such profiles may be sold or transferred to third parties without the user's knowledge.

Disclosure of information is also facilitated by social networks, which invite participants to disclose a certain amount of personal data about themselves. Careless handling of the capabilities of such resources may result in information that the user would prefer to hide becoming publicly available; among other things, such information may attract the attention of hooligans or, worse, cybercriminals. Modern social networks provide their members with a fairly wide range of profile privacy settings, but these settings can be unnecessarily complex, especially for inexperienced users.

Prevalence

Between 2005 and 2010, the number of web users doubled, reaching the two-billion mark. According to early studies in 1998 and 1999, most existing websites were not correctly indexed by search engines, and the web itself was larger than expected. As of 2001, more than 550 million web documents had already been created, most of which were located within the invisible web. As of 2002, more than 2 billion web pages had been created; 56.4% of all Internet content was in English, followed by German (7.7%), French (5.6%) and Japanese (4.9%). According to research conducted at the end of January 2005, more than 11.5 billion web pages in 75 different languages had been identified and indexed on the open web, and by March 2009 the number had increased to 25.21 billion. On July 25, 2008, Google software engineers Jesse Alpert and Nissan Hajaj announced that Google Search had discovered more than a trillion unique URLs.

  • In 2011, there were plans to erect a monument to the World Wide Web in St. Petersburg. The composition was to be a street bench in the form of the abbreviation WWW, with free Internet access.

see also

  • Wide Area Network
  • World Digital Library
  • Global Internet Use


Today the number of Internet users has reached 3.5 billion people, which is almost half of the world's population. And, of course, everyone knows that the World Wide Web has enveloped our entire planet. Yet not everyone can say whether there is a difference between the concepts of the Internet and the World Wide Web. Oddly enough, many are absolutely sure that they are synonyms, but savvy readers can give arguments that will shake this confidence.

What is the Internet?

Without going into complex technical details, we can say that the Internet is a system that connects computer networks around the world. The computers in it are divided into two groups – clients and servers.

Clients are ordinary user devices, including personal computers, laptops, tablets and, of course, smartphones. They send requests, then receive and display the information.

All information is stored on servers, which can be classified by purpose:

  • web servers,
  • mail servers,
  • chat servers,
  • radio and television broadcasting systems,
  • file-sharing servers.

Servers are powerful computers that run continuously. In addition to storing information, they receive requests from clients and send back the necessary responses, processing hundreds of such requests at the same time.

In our brief educational program it is also worth mentioning Internet providers, which supply the link between client and server. A provider is an organization with its own Internet server to which all its clients are connected. Providers supply connectivity via telephone cable, a dedicated channel or a wireless network.

This is how you get on the Internet

Is it possible to do without a provider and connect directly to the Internet? Theoretically, yes! But you would have to become your own provider and spend a huge amount of money to reach the central servers. So do not blame your Internet provider too much for high tariffs – these companies also have to pay for many things and spend money on maintaining their equipment.

The World Wide Web has entangled the whole world

The World Wide Web, or simply the Web. It is essentially a huge number of pages that are interconnected. This connection is provided by links, through which you can move from one page to another, even if that page is located on another computer connected to the Internet.

The World Wide Web is the most popular and largest Internet service.

The World Wide Web relies on special web servers, which store web pages (you are looking at one of them right now). Pages connected by links, sharing a common theme and appearance and usually located on the same server, are called a website.

To view web pages and documents, special programs are used - browsers.

The World Wide Web includes forums, blogs and social networks. But its work and existence is directly ensured by the Internet...

Is there a big difference?

In fact, the difference between the Internet and the World Wide Web is quite large. If the Internet is a huge network connecting millions of computers around the planet for the purpose of sharing information, then the World Wide Web is just one way of exchanging this information. Besides supporting the World Wide Web, the Internet also makes it possible to use e-mail and various instant messengers, as well as to transfer files via the FTP protocol.

The Internet is what connects numerous computer networks.

The World Wide Web is all pages that are stored on special Internet servers.

Conclusion

Now you know that the Internet and the World Wide Web are different things. And most importantly, you will be able to show off your knowledge and explain to your friends what this difference is.

The Internet is a communication system and at the same time an information system – a medium for people to communicate. Currently there are many definitions of this concept. In our opinion, one definition of the Internet that most fully characterizes the information interaction of the planet's population is the following: "The Internet is a complex transport and information system of mushroom-shaped (dipole) structures, the cap of each of which (the dipole itself) is the brain of a person sitting at a computer together with the computer itself, which is, as it were, an artificial extension of the brain, while the legs are, for example, the telephone network connecting the computers, or the ether through which radio waves are transmitted."

The advent of the Internet gave impetus to the development of new information technologies, leading not only to changes in the consciousness of people, but also the world as a whole. However, the worldwide computer network was not the first discovery of its kind. Today, the Internet is developing in the same way as its predecessors - telegraph, telephone and radio. However, unlike them, it combined their advantages - it became not only useful for communication between people, but also a publicly accessible means for receiving and exchanging information. It should be added that the capabilities of not only stationary, but also mobile television have already begun to be fully used on the Internet.

The history of the Internet begins around the 1960s.

The first documented description of the social interaction that networking would make possible was a series of notes written by J. C. R. Licklider, in which he discussed the concept of a "Galactic Network". The author envisioned the creation of a global network of interconnected computers through which everyone could quickly access data and programs located on any computer. In spirit, this concept is very close to the current state of the Internet.

Leonard Kleinrock published the first paper on packet switching theory in July 1961, in which he presented the advantages of his theory over the existing principle of data transmission – circuit switching. What is the difference between these concepts? With packet switching, no dedicated physical connection is established between the two end devices (computers); the data to be transmitted is divided into parts, and a header is appended to each part containing the full information needed to deliver the packet to its destination. With circuit switching, the two computers are physically connected "each to each" for the duration of the transfer, and the entire volume of information is transferred over that connection, which is maintained until the transfer ends – just as in analog systems built around connection switching. In that case the utilization of the information channel is minimal.
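
A toy sketch of the packet-switching idea: the message is cut into packets, each packet carries a header with addressing and sequencing information, the packets may arrive in any order, and the receiver reassembles the original (all values are invented):

    # Cut a message into packets with headers, deliver them out of order,
    # and reassemble the original text on the receiving side.
    import random

    def to_packets(message, destination, size=8):
        return [{"to": destination, "seq": i, "data": message[i:i + size]}
                for i in range(0, len(message), size)]

    packets = to_packets("Packets travel independently across the network.", "10.0.0.7")
    random.shuffle(packets)              # packets may take different routes

    reassembled = "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))
    print(reassembled)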

To test the concept of packet switching, Lawrence Roberts and Thomas Merrill connected a TX-2 computer in Massachusetts to a Q-32 computer in California over a low-speed dial-up telephone line in 1965, thereby creating the first (albeit small) non-local computer network in history. The result of the experiment was the understanding that time-sharing computers could successfully work together, executing programs and retrieving data on a remote machine. It also became clear that the circuit-switched telephone system was completely unsuitable for building a computer network.

In 1969, the American agency ARPA (Advanced Research Projects Agency) began research on creating an experimental packet-switching network. The network was built and named ARPANET, i.e. the network of the Advanced Research Projects Agency. A sketch of the ARPANET, consisting of four nodes – the embryo of the Internet – is shown in Fig. 6.1.

At this early stage, research was conducted on both network infrastructure and network applications. At the same time, work was underway to create a functionally complete protocol for computer-to-computer interaction and other network software.

In December 1970, the Network Working Group (NWG), led by S. Crocker, completed work on the first version of the protocol, called the Network Control Protocol (NCP). After work was completed to implement NCP on ARPANET nodes in 1971–1972, network users were finally able to begin developing applications.

In 1972, the first application appeared - email.

In March 1972, Ray Tomlinson wrote basic programs for sending and reading electronic messages. In July of the same year, Roberts added to these programs the ability to display a list of messages, selective reading, saving to a file, forwarding, and preparing a response.

Since then, email has become the largest network application. For its time, e-mail became what the World Wide Web is today - an extremely powerful catalyst for the growth of the exchange of all types of interpersonal data flows.

In 1974, the Internet Network Working Group (INWG) introduced a universal protocol for data transmission and network interconnection - TCP/IP. This is the protocol that is used on the modern Internet.

However, the ARPANET switched from NCP to TCP/IP only on January 1, 1983. This was a "flag day" style transition, requiring simultaneous changes on all computers; it had been carefully planned by all parties involved over the previous several years and went surprisingly smoothly (it did, however, lead to the proliferation of "I Survived the TCP/IP Transition" badges). The 1983 transition of the ARPANET from NCP to TCP/IP allowed the network to be divided into MILNET, the military network proper, and ARPANET, which was used for research purposes.

In the same year another important event occurred: Paul Mockapetris developed the Domain Name System (DNS). This system provided a scalable, distributed mechanism for mapping hierarchical computer names (e.g., www.acm.org) to Internet addresses.

Also in 1983, a domain name server (DNS server) was created at the University of Wisconsin. Such a server automatically, and invisibly to the user, translates the textual name of a site into an IP address.

With the spread of the Internet outside the United States, national top-level domains such as .ru, .uk and .ua appeared.

In 1985, the National Science Foundation (NSF) took part in creating its own network, NSFNet, which was soon connected to the Internet. Initially NSFNet linked 5 supercomputer centers – fewer nodes than ARPANET had – and the data transmission speed in its communication channels did not exceed 56 kbit/s. At the same time, the creation of NSFNet was a significant contribution to the development of the Internet, as it offered a new view of how the Internet could be used. The Foundation set the goal that every scientist and every engineer in the United States would be "connected" to a single network, and therefore began to create a network with faster channels that would unite numerous regional and local networks.

Based on ARPANET technology, the NSFNET network (National Science Foundation NETwork) was created in 1986 with the direct involvement of NASA and the Department of Energy. It connected six large research centers, equipped with the latest supercomputers and located in different regions of the United States. The main purpose of the network was to provide US research centers with access to supercomputers over an interregional backbone. The network operated at a base speed of 56 kbit/s. When creating the network, it became obvious that it was not even worth trying to connect all universities and research organizations directly to the centers, since laying such an amount of cable was not only very expensive but practically impossible. It was therefore decided to build the network on a regional basis: in every part of the country the institutions concerned connected to their nearest neighbors, the resulting chains were connected to the supercomputer centers through one of their nodes, and the supercomputer centers were linked to one another. With this design, any computer could communicate with any other by passing messages through its neighbors.

One of the problems at the time was that the early networks (including the ARPANET) had been built deliberately for the benefit of a narrow circle of interested organizations. They were to be used by a closed community of specialists, and as a rule the work of the networks was limited to this; there was no particular need for the networks to be compatible, and accordingly there was no compatibility. At the same time, alternative technologies began to appear in the commercial sector, such as XNS from Xerox, DECNet, and SNA from IBM. Therefore, under the auspices of DARPA and NSFNET, together with specialists from the subordinate task forces on Internet technology and architecture (the Internet Engineering and Architecture Task Forces) and members of the NSF Network Technical Advisory Group, "Requirements for Internet Gateways" were developed. These requirements formally guaranteed interoperability between the parts of the Internet administered by DARPA and by NSF. In addition to choosing TCP/IP as the basis for NSFNet, US federal agencies adopted and implemented a number of additional principles and rules that shaped the modern face of the Internet. Most importantly, NSFNET had a policy of "universal and equal access to the Internet": in order for an American university to receive NSF funding for an Internet connection, it, as the NSFNet program stated, "must make that connection available to all qualified users on campus."

NSFNET worked quite successfully at first, but the time came when it could no longer cope with the growing demands. The network created for the use of supercomputers allowed the connected organizations to exchange a great deal of information not related to supercomputers. Internet users in research centers, universities, schools and elsewhere realized that they now had access to a wealth of information and direct access to their colleagues. The flow of messages on the network grew faster and faster until, in the end, it overloaded the computers that controlled the network and the telephone lines connecting them.

In 1987, NSF awarded Merit Network Inc. a contract under which Merit, with the participation of IBM and MCI, was to manage the NSFNET core network, move it to higher-speed T-1 channels and continue its development. The growing core network already united more than 10 nodes.

In 1990, the concepts of ARPANET, NFSNET, MILNET, etc. finally left the stage, giving way to the concept of the Internet.

The scope of the NSFNET network, combined with the quality of its protocols, led to a situation in which, by 1990, when the ARPANET was finally dismantled, the TCP/IP family had supplanted or significantly displaced most other wide-area network protocols around the world, and IP was confidently becoming the dominant data transport service of the global information infrastructure.

In 1990, the European Organization for Nuclear Research established the largest Internet site in Europe and provided Internet access to the Old World. To help promote and facilitate the concept of distributed computing over the Internet, Tim Berners-Lee at CERN (Geneva, Switzerland) developed the hypertext document technology known as the World Wide Web (WWW), allowing users to access any information located on the Internet, on computers around the world.

WWW technology is based on the URL specification (Uniform Resource Locator), the HTTP protocol (HyperText Transfer Protocol) and the HTML language (HyperText Markup Language). Text can be marked up in HTML with any text editor. A page marked up in HTML is usually called a Web page, and a client application, the Web browser, is used to view it.
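The interplay of these three elements can be shown with a minimal sketch using only Python's standard library: a hand-written HTML page is saved to a file and served by a tiny web server, so a browser can fetch it at a URL such as http://localhost:8000/index.html. The file name and port number here are illustrative assumptions, not part of any standard.

    # A minimal sketch of the HTML/HTTP/URL triad (Python standard library only).
    from http.server import HTTPServer, SimpleHTTPRequestHandler
    from pathlib import Path

    # An HTML page can be produced in any text editor; this is ordinary markup.
    PAGE = """<!DOCTYPE html>
    <html>
      <head><title>Hello, WWW</title></head>
      <body>
        <h1>Hello, World Wide Web</h1>
        <p>This page is plain text marked up with HTML tags.</p>
      </body>
    </html>"""

    if __name__ == "__main__":
        Path("index.html").write_text(PAGE, encoding="utf-8")
        # SimpleHTTPRequestHandler answers HTTP GET requests by returning
        # files from the current directory -- the simplest form of a web server.
        HTTPServer(("localhost", 8000), SimpleHTTPRequestHandler).serve_forever()

With the script running, entering http://localhost:8000/index.html in a browser retrieves the page over HTTP and renders the markup.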

In 1994, the W3 Consortium was formed, bringing together scientists from different universities and companies (including Netscape and Microsoft). From that time on, the consortium has dealt with the standards of the Web. Its first step was the HTML 2.0 specification, which added the ability to send information from the user's computer to the server by means of forms. The next step was the HTML 3 project, on which work began in 1995; it introduced CSS (Cascading Style Sheets) for the first time. CSS makes it possible to format text without disrupting its logical and structural markup. The HTML 3 standard was never approved; instead, HTML 3.2 was created and adopted in January 1997. As early as December 1997 the W3C adopted the HTML 4.0 standard, which distinguishes logical tags from presentational ones.

By 1995, the growth of the Internet showed that regulation of connectivity and funding issues could not be in the hands of NSF alone. In 1995, payments for connecting numerous private networks to the national backbone were transferred to regional networks.

The Internet has grown far beyond what it was originally envisioned and designed to be; it has outgrown the agencies and organizations that created it, and they can no longer play a dominant role in its growth. Today it is a powerful worldwide communication network built on distributed switching nodes and communication channels. Since 1983 the Internet has grown exponentially, and hardly a single detail of the original infrastructure has survived from those days, yet the Internet still runs on the TCP/IP protocol suite.

If the term “Internet” was originally used to describe a network built on the Internet Protocol (IP), the word has since acquired a much broader, everyday meaning and is only sometimes used in its narrow sense, as a name for a set of interconnected networks. Strictly speaking, the Internet is any collection of physically separate networks joined by the common IP protocol, which allows them to be treated as one logical network. The rapid growth of the Internet generated great interest in the TCP/IP protocols, and specialists and companies appeared who found a number of other uses for them. TCP/IP began to be used to build local area networks (LAN, Local Area Network) even when they were not connected to the Internet at all. It was also used to build corporate networks that adopted Internet technologies, including the WWW (World Wide Web), in order to organize an effective exchange of internal corporate information. Such corporate networks are called “intranets” and may or may not be connected to the Internet.

Tim Berners-Lee, the author of the HTTP, URI/URL and HTML technologies, is considered the inventor of the World Wide Web. In 1980, for his own use, he wrote the Enquire program, which stored data using random associations and laid the conceptual foundation for the World Wide Web. In 1989, Tim Berners-Lee proposed the global hypertext project now known as the World Wide Web. The project envisaged the publication of hypertext documents interconnected by hyperlinks, which would make it easier for scientists to find and consolidate information. To implement the project he invented URIs, the HTTP protocol and the HTML language, technologies without which the modern Internet can no longer be imagined. Between 1991 and 1993, Berners-Lee refined the technical specifications of these standards and published them. He wrote the world's first web server, “httpd,” and the world's first hypertext web browser, called “WorldWideWeb,” which was also a WYSIWYG editor (short for What You See Is What You Get). Its development began in October 1990 and was completed in December of the same year; the program ran in the NeXTStep environment and began to spread across the Internet in the summer of 1991. Berners-Lee created the world's first Web site at http://info.cern.ch/ (the site is now archived); it went online on August 6, 1991. The site described what the World Wide Web is, how to set up a Web server, how to use a browser, and so on. It was also the world's first Internet directory, because Tim Berners-Lee later posted and maintained a list of links to other sites there.

Since 1994, the main work on the development of the World Wide Web has been taken over by the World Wide Web Consortium (W3C), founded by Tim Berners-Lee. This Consortium is an organization that develops and implements technology standards for the Internet and the World Wide Web. The W3C's mission is to "Unleash the full potential of the World Wide Web by establishing protocols and principles to ensure the long-term development of the Web." Two other major goals of the Consortium are to ensure complete “internationalization of the Network” and to make the Network accessible to people with disabilities.

The W3C develops uniform principles and standards for the Internet (called “Recommendations”, English W3C Recommendations), which are then implemented by software and hardware manufacturers. In this way, compatibility is achieved between software products and equipment of different companies, which makes the World Wide Web more advanced, universal and convenient. All World Wide Web Consortium Recommendations are open, that is, not protected by patents and can be implemented by anyone without any financial contributions to the consortium.

Currently, the World Wide Web is formed by millions of Web servers located around the world on the Internet. A web server is a program that runs on a computer connected to the network and uses the HTTP protocol to transfer data. In its simplest form, such a program receives an HTTP request for a specific resource over the network, finds the corresponding file on the local disk and sends it over the network to the requesting computer. More complex Web servers can generate resources dynamically in response to an HTTP request. Uniform Resource Identifiers (URIs) are used to identify resources (often files or parts of files) on the World Wide Web, and Uniform Resource Locators (URLs) are used to locate resources on the network. A URL combines the URI identification scheme with the DNS (Domain Name System): a domain name (or an IP address written in numeric form) is the part of the URL that designates the computer (more precisely, one of its network interfaces) running the desired Web server.
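As a rough sketch of this combination, the following Python fragment splits a URL into its parts and resolves the host name through DNS. It needs a working network connection, and the example URL is used purely for illustration.

    # A sketch of how a URL combines URI identification with DNS (stdlib only).
    import socket
    from urllib.parse import urlsplit

    url = "http://info.cern.ch/hypertext/WWW/TheProject.html"
    parts = urlsplit(url)

    print("scheme:", parts.scheme)    # protocol to use (HTTP)
    print("host:  ", parts.hostname)  # domain name, resolved via DNS
    print("path:  ", parts.path)      # which resource the server should return

    # DNS resolution: domain name -> IP address of one of the host's interfaces.
    ip_address = socket.gethostbyname(parts.hostname)
    print("resolved address:", ip_address)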

To view information received from a Web server, a special program, the Web browser, is used on the client computer. The main function of a Web browser is to display hypertext. The World Wide Web is inextricably linked with the concepts of hypertext and hyperlinks: most of the information on the Web is hypertext. To make it easier to create, store and display hypertext, HTML (HyperText Markup Language) is traditionally used on the World Wide Web. The work of marking up hypertext is called layout, and markup specialists are called webmasters. After HTML markup, the resulting hypertext is stored in a file; such HTML files are the most common resource on the World Wide Web. Once an HTML file is made available to a Web server, it is called a “Web page,” and a collection of Web pages makes up a Web site. Hyperlinks are embedded in the hypertext of Web pages; they let users of the World Wide Web move easily between resources (files), regardless of whether a resource is located on the local computer or on a remote server. Web hyperlinks are based on URL technology.
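A minimal sketch, again using only Python's standard library, of what a browser does with hyperlinks: it parses the hypertext of a page and collects the URLs found in the href attributes of <a> tags. The sample markup in the fragment is invented for illustration.

    # Collect the hyperlink targets from a fragment of hypertext.
    from html.parser import HTMLParser

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            # Hyperlinks are written as <a href="URL">link text</a>.
            if tag == "a":
                for name, value in attrs:
                    if name == "href":
                        self.links.append(value)

    page = """<p>See the <a href="http://info.cern.ch/">first web site</a>
    and the <a href="page2.html">next page</a> of this site.</p>"""

    collector = LinkCollector()
    collector.feed(page)
    print(collector.links)  # ['http://info.cern.ch/', 'page2.html']

Note that one link is absolute (it names a remote server) and the other is relative (it points to a file alongside the page), which is exactly the local-or-remote flexibility described above.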

In general, one can conclude that the World Wide Web rests on “three pillars”: HTTP, HTML and URL. Recently, however, HTML has begun to lose some ground to more modern markup technologies, XHTML and XML. XML (eXtensible Markup Language) is positioned as a foundation on which other markup languages can be built. To improve the visual presentation of the Web, CSS technology has come into wide use, allowing uniform design styles to be set for many Web pages at once. Another innovation worth noting is the URN (Uniform Resource Name) naming system for resources.
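The point about XML being a generic foundation can be illustrated with a short sketch: any vocabulary of tags can be defined and then read with a standard XML parser. The tag names below are invented for the example.

    # Parse a tiny, made-up XML vocabulary with the standard library.
    import xml.etree.ElementTree as ET

    document = """<catalogue>
      <book lang="en">
        <title>Weaving the Web</title>
        <author>Tim Berners-Lee</author>
      </book>
    </catalogue>"""

    root = ET.fromstring(document)
    for book in root.findall("book"):
        print(book.get("lang"), book.findtext("title"), "-", book.findtext("author"))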

A popular concept for the development of the World Wide Web is the creation of a semantic web. The Semantic Web is an add-on to the existing World Wide Web, which is designed to make information posted on the network more understandable to computers. The Semantic Web is a concept of a network in which every resource in human language would be provided with a description that a computer can understand. The Semantic Web opens up access to clearly structured information for any application, regardless of platform and regardless of programming languages. Programs will be able to find the necessary resources themselves, process information, classify data, identify logical connections, draw conclusions and even make decisions based on these conclusions. If widely adopted and implemented wisely, the Semantic Web has the potential to spark a revolution on the Internet. To create a machine-readable description of a resource on the Semantic Web, the RDF (Resource Description Framework) format is used, which is based on XML syntax and uses URIs to identify resources. New in this area are RDFS (RDF Schema) and SPARQL (Protocol And RDF Query Language), a new query language for quickly accessing RDF data.
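As a rough illustration of the idea (a toy model, not a real RDF toolkit or SPARQL engine), the sketch below represents machine-readable descriptions as (subject, predicate, object) triples in which resources and properties are identified by URIs, and then runs a simple pattern query over them, the kind of question SPARQL answers over real RDF data. The URIs and data are assumptions made up for the example.

    # A toy model of the RDF triple data model and a pattern query over it.
    triples = [
        ("http://example.org/page1", "http://purl.org/dc/terms/creator", "Tim Berners-Lee"),
        ("http://example.org/page1", "http://purl.org/dc/terms/title",   "The first web page"),
        ("http://example.org/page2", "http://purl.org/dc/terms/creator", "Robert Cailliau"),
    ]

    def query(subject=None, predicate=None, obj=None):
        """Return all triples matching the given pattern (None acts as a wildcard)."""
        return [t for t in triples
                if (subject   is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj       is None or t[2] == obj)]

    # "Which resources were created by Tim Berners-Lee?"
    for s, p, o in query(predicate="http://purl.org/dc/terms/creator", obj="Tim Berners-Lee"):
        print(s)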

Currently, there are two trends in the development of the World Wide Web: the semantic web and the social web. The Semantic Web involves improving the coherence and relevance of information on the World Wide Web through the introduction of new metadata formats. The Social Web relies on the work of organizing the information available on the Web, carried out by the Web users themselves. In the second direction, developments that are part of the semantic web are actively used as tools (RSS and other web channel formats, OPML, XHTML microformats).

Internet telephony has become one of the most modern and economical forms of communication. Its birthday can be considered February 15, 1995, when VocalTec released its first soft-phone, a program for exchanging voice over an IP network. Microsoft then released the first version of NetMeeting in October 1996. And by 1997, Internet connections between two ordinary telephone subscribers located in completely different parts of the planet had become quite common.

Why is ordinary long-distance and international telephony so expensive? The reason is that during a call the subscriber occupies an entire communication channel, not only while speaking or listening to the other party, but also while silent or distracted from the conversation. This is what happens when voice is transmitted over the telephone network in the usual analog way.

With the digital method, information can be transmitted not continuously but in separate “packets,” so information from many subscribers can be sent over one communication channel at the same time. This principle of packet transmission is similar to carrying many letters with different addresses in a single mail car: after all, a separate mail car is not dispatched for each individual letter! Such time-division “packing” makes it possible to use existing communication channels far more efficiently. At one end of the channel the information is divided into packets, each of which, like a letter, carries its own individual address. Packets from many subscribers travel over the channel interspersed with one another. At the other end, packets with the same address are reassembled and delivered to their destination. This packet principle is widely used on the Internet.
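The following toy sketch in Python mimics the principle just described: messages from two subscribers are cut into addressed packets, interleaved on one shared “channel,” and reassembled by address at the far end. All names, messages and packet sizes are illustrative assumptions.

    # A toy model of packet-switched sharing of one channel.
    from collections import defaultdict
    from itertools import zip_longest

    def to_packets(address, message, size=4):
        """Split a message into fixed-size packets, each carrying its address and order."""
        return [(address, i, message[i:i + size]) for i in range(0, len(message), size)]

    # Two subscribers share the same channel at the same time.
    channel = []
    for pair in zip_longest(to_packets("A", "hello from subscriber one"),
                            to_packets("B", "greetings from subscriber two")):
        channel.extend(p for p in pair if p is not None)  # packets travel interspersed

    # At the far end, packets with the same address are reassembled in order.
    inbox = defaultdict(list)
    for address, index, payload in channel:
        inbox[address].append((index, payload))

    for address, parts in inbox.items():
        print(address, "".join(payload for _, payload in sorted(parts)))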

With a personal computer, a sound card, a microphone and headphones (or speakers), a subscriber can use Internet telephony to call anyone who has an ordinary landline telephone, and for such a call he pays only for his Internet access. Before using Internet telephony, the subscriber must install a special program on his computer.

A personal computer is not required to use Internet telephony services; an ordinary telephone with tone dialing is enough. In tone mode, each dialed digit goes into the line not as a varying number of electrical pulses, as with a rotary dial, but as a combination of alternating currents of different frequencies; most modern telephones support this mode. To use Internet telephony from such a telephone, you buy a prepaid card and call a central computer server at the number indicated on the card. A voice menu on the server (in Russian or English, as you choose) then prompts you to enter the card's serial number and key with the telephone keys, followed by the country code and the number of your future interlocutor. The server converts the analog signal into a digital one and sends it to a server in the other city, which converts the digital signal back into analog form and passes it to the called subscriber. The parties talk as if on an ordinary telephone, although at times there is a slight (a fraction of a second) delay in the reply. Recall that, to save communication channels, voice is transmitted in “packets” of digital data: the voice signal is divided into segments (packets) that are carried using the Internet Protocol (IP).

In 2003, the Skype program was created (www.skype.com); it is completely free and requires virtually no special knowledge from the user either to install or to use. It allows video conversations with interlocutors sitting at their computers in different parts of the world; for the parties to see each other, each computer must be equipped with a web camera.

Humanity has come a long way in the development of communications: from signal fires and drums to the mobile phone, which allows two people located anywhere on our planet to communicate almost instantly. And despite the distances involved, the subscribers retain a feeling of personal communication.