The Internet and the World Wide Web: why is the Internet called the World Wide Web?

Initially, the Internet was a computer network for transmitting information, developed at the initiative of the US Department of Defense. The impetus came from the first artificial Earth satellite, launched by the Soviet Union in 1957: the US military decided that it needed an ultra-reliable communication system. The resulting network, ARPANET, did not remain a secret for long and was soon actively used by various branches of science.

The first successful remote communication session was conducted in 1969, from Los Angeles to Stanford. In 1971, a program for sending email over the network was developed and instantly became popular. The first foreign organizations to connect to the network were in the UK and Norway; with the installation of a transatlantic telephone cable to these countries, ARPANET became an international network.

ARPANET was perhaps the most advanced communication system of its time, but it was not the only one. Only by 1983, when the American network had filled with the first newsgroups and bulletin boards and had switched to the TCP/IP protocol, which made it possible to interconnect with other computer networks, did ARPANET become the Internet. Literally a year later this title began to pass gradually to NSFNet, an inter-university network with large capacity that accumulated 10 thousand connected computers within a year. The first Internet chat appeared in 1988, and in 1989 Tim Berners-Lee proposed the concept of the World Wide Web.

World Wide Web

In 1990, ARPANET finally lost out to NSFNet. It is worth noting that both were developed by the same scientific organizations, only the first was commissioned by the US defense services and the second was built on their own initiative. This competitive pairing nevertheless led to scientific developments and discoveries that made the World Wide Web a reality; it became publicly available in 1991. Berners-Lee, who proposed the concept, spent the next two years developing the HTTP hypertext transfer protocol, the HTML language, and URL identifiers, which are more familiar to ordinary users as the addresses of Internet sites and pages.

The World Wide Web is a system that provides access to files on server computers connected to the Internet. This is partly why today the concepts of the Web and the Internet are often used interchangeably. In fact, the Internet is a communication technology, a kind of information space, and the World Wide Web fills it. This web consists of many millions of web servers: computers and systems of computers that are responsible for the operation of websites and pages. To access web resources (download or view them) from an ordinary computer, a browser program is used. "Web" and "WWW" are synonyms for the World Wide Web. WWW users number in the billions.

The Internet is a communication system and at the same time an information system, a medium for people to communicate. Currently, there are many definitions of this concept. In our opinion, one of the definitions of the Internet that most fully characterizes the information interaction of the planet's population is: "The Internet is a complex transport and information system of mushroom-shaped (dipole) structures, the cap of each of which (the dipoles themselves) represents the brain of a person sitting at a computer, together with the computer itself, which is, as it were, an artificial extension of the brain, and the legs are, for example, the telephone network connecting computers, or the ether through which radio waves are transmitted."

The advent of the Internet gave impetus to the development of new information technologies, leading to changes not only in people's consciousness but in the world as a whole. The worldwide computer network was not, however, the first discovery of its kind. Today the Internet is developing along the same lines as its predecessors: the telegraph, the telephone and radio. Unlike them, however, it combined their advantages and became not only a useful means of communication between people but also a publicly accessible means of receiving and exchanging information. It should be added that the capabilities of not only fixed but also mobile television have already begun to be used in full on the Internet.

The history of the Internet begins in the 1960s.

The first documentation of the social interaction made possible by the Internet was a series of notes written by J. Licklider. These notes discussed the concept of the "Galactic Network". The author envisioned the creation of a global network of interconnected computers, through which everyone could quickly access data and programs located on any computer. In spirit, this concept is very close to the current state of the Internet.

Leonard Kleinrock published the first paper on packet switching theory in July 1961. In it, he set out the advantages of his theory over the existing principle of data transmission, circuit switching. What is the difference between these concepts? With packet switching, no dedicated physical connection is established between the two end devices (computers). The data to be transmitted is divided into parts, and a header is appended to each part containing complete information about delivering the packet to its destination. With circuit switching, the two computers are physically connected "end to end" for the duration of the transmission, and the entire volume of information is transferred over this connection. The connection is maintained until the transfer is complete, just as in analog systems that use connection switching, and the utilization of the information channel is minimal.
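
To make the difference concrete, the following minimal Python sketch illustrates the packet principle described above: a message is cut into numbered packets, each carrying its own small delivery header, and the receiving side restores the original text even if the packets arrive out of order. The field names and packet size here are invented purely for illustration; real networks use compact binary headers.

def packetize(message, destination, size=8):
    # Split the message into packets of `size` characters, each with a header
    # holding the destination address and a sequence number.
    return [
        {"dst": destination, "seq": i, "data": message[i * size:(i + 1) * size]}
        for i in range((len(message) + size - 1) // size)
    ]

def reassemble(packets):
    # Restore the original message regardless of the order of arrival.
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize("Packets from many senders can share one channel.", "host-B")
packets.reverse()              # simulate out-of-order delivery
print(reassemble(packets))     # the original text is restored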

To test the concept of packet switching, Lawrence Roberts and Thomas Merrill connected a TX-2 computer in Massachusetts to a Q-32 computer in California in 1965, using low-speed dial-up telephone lines. Thus, the first ever (albeit small) non-local computer network was created. The result of the experiment was the understanding that time-shared computers could successfully work together, executing programs and retrieving data on a remote machine. It also became clear that the circuit-switched telephone system was completely unsuitable for building a computer network.

In 1969, the American agency ARPA (Advanced Research Projects Agency) began research into creating an experimental packet-switching network. This network was created and named ARPANET (Advanced Research Projects Agency Network). A sketch of the ARPANET network, consisting of four nodes, the embryo of the Internet, is shown in Fig. 6.1.

At this early stage, research was conducted on both network infrastructure and network applications. At the same time, work was underway to create a functionally complete protocol for computer-to-computer interaction and other network software.

In December 1970, the Network Working Group (NWG), led by S. Crocker, completed work on the first version of the protocol, called the Network Control Protocol (NCP). After work was completed to implement NCP on ARPANET nodes in 1971–1972, network users were finally able to begin developing applications.

In 1972, the first application appeared - email.

In March 1972, Ray Tomlinson wrote basic programs for sending and reading electronic messages. In July of the same year, Roberts added to these programs the ability to display a list of messages, selective reading, saving to a file, forwarding, and preparing a response.

Since then, email has become the largest network application. For its time, e-mail became what the World Wide Web is today - an extremely powerful catalyst for the growth of the exchange of all types of interpersonal data flows.

In 1974, the Internet Network Working Group (INWG) introduced a universal protocol for data transmission and network interconnection, TCP/IP. This is the protocol suite used on the modern Internet.

However, the ARPANET switched from NCP to TCP/IP only on January 1, 1983. This was a "flag day" transition, requiring simultaneous changes on all computers. The transition had been carefully planned by all parties involved over the previous several years and went surprisingly smoothly (it did, however, lead to the proliferation of the "I Survived the TCP/IP Migration" badge). The transition of the ARPANET from NCP to TCP/IP in 1983 allowed the network to be divided into MILNET, the military network proper, and ARPANET, which was used for research purposes.

In the same year, another important event occurred: Paul Mockapetris developed the Domain Name System (DNS). This system provided a scalable, distributed mechanism for mapping hierarchical computer names (e.g., www.acm.org) to Internet addresses.

Also in 1983, a domain name server was created at the University of Wisconsin. Such a DNS server automatically, and transparently to the user, translates the mnemonic name of a site into an IP address.
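
The same name-to-address translation that DNS performs can be observed from any networked machine with Python's standard library. A small sketch, assuming Internet access; the host name is just an example and the printed addresses will vary:

import socket

hostname = "example.com"                      # an illustrative name to resolve
print(socket.gethostbyname(hostname))         # one IPv4 address for the name

# getaddrinfo returns every address (IPv4 and IPv6) that DNS knows for the name
for family, _, _, _, sockaddr in socket.getaddrinfo(hostname, 80):
    print(family, sockaddr)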

With the spread of the Internet beyond the United States, national top-level domains such as ru, uk and ua appeared.

In 1985, the National Science Foundation (NSF) took part in the creation of its own network, NSFNet, which was soon connected to the Internet. Initially, NSFNet linked 5 supercomputer centers, fewer nodes than ARPANET had, and the data transmission speed in its communication channels did not exceed 56 kbit/s. Nevertheless, the creation of NSFNet was a significant contribution to the development of the Internet, because it offered a new view of how the Internet could be used. The Foundation set the goal that every scientist and every engineer in the United States would be "connected" to a single network, and it therefore began to create a network with faster channels that would unite numerous regional and local networks.

Based on ARPANET technology, the NSFNET network (National Science Foundation NETwork) was created in 1986, with the direct involvement of NASA and the Department of Energy. Six large research centers equipped with the latest supercomputers, located in different regions of the United States, were connected. The main purpose of the network was to provide US research centers with access to supercomputers over an interregional backbone. The network operated at a base speed of 56 kbit/s. When it was being built, it became obvious that there was no point in even trying to connect every university and research organization directly to the centers: laying that amount of cable would have been not only very expensive but practically impossible. It was therefore decided to build networks on a regional basis. In every part of the country the institutions concerned connected to their nearest neighbors. The resulting chains were connected to the supercomputer centers through one of their nodes, and the supercomputer centers were in turn connected to each other. With this design, any computer could communicate with any other by passing messages through its neighbors.

One of the problems at the time was that the early networks (including the ARPANET) had been built deliberately for the benefit of a narrow circle of interested organizations. They were to be used by a closed community of specialists, and as a rule the operation of the networks was limited to that community. There was no particular need for network compatibility, and accordingly there was no compatibility. At the same time, alternative technologies began to appear in the commercial sector, such as XNS from Xerox, DECNet, and SNA from IBM. Therefore, under the auspices of DARPA, specialists from its Internet Engineering and Architecture Task Forces, together with members of the NSF Network Technical Advisory Group, developed the "Requirements for Internet Gateways" for NSFNET. These requirements formally guaranteed interoperability between the parts of the Internet administered by DARPA and by NSF. In addition to choosing TCP/IP as the basis for NSFNet, US federal agencies adopted and implemented a number of additional principles and rules that shaped the modern face of the Internet. Most importantly, NSFNET had a policy of "universal and equal access to the Internet." Indeed, for an American university to receive NSF funding for an Internet connection, it, as the NSFNet program states, "must make that connection available to all qualified users on campus."

NSFNET worked quite successfully at first, but the time came when it could no longer cope with the growing demands. The network created for the use of supercomputers allowed the connected organizations to exchange a great deal of data unrelated to supercomputers. Network users in research centers, universities, schools and elsewhere realized that they now had access to a wealth of information and direct access to their colleagues. The flow of messages on the network grew faster and faster until, in the end, it overloaded the computers that controlled the network and the telephone lines connecting them.

In 1987, NSF awarded Merit Network Inc. a contract under which Merit, with the participation of IBM and MCI, was to manage the NSFNET core network, move it to higher-speed T-1 channels and continue its development. The growing core network already united more than 10 nodes.

In 1990, the concepts of ARPANET, NSFNET, MILNET, etc. finally left the scene, giving way to the concept of the Internet.

The scope of the NSFNET network, combined with the quality of its protocols, meant that by 1990, when the ARPANET was finally dismantled, the TCP/IP family had supplanted or significantly displaced most other wide-area computer network protocols around the world, and IP was confidently becoming the dominant data transport service of the global information infrastructure.

In 1990, the European Organization for Nuclear Research (CERN, Geneva, Switzerland) established the largest Internet site in Europe and provided Internet access to the Old World. To help promote and facilitate the concept of distributed computing over the Internet, Tim Berners-Lee at CERN developed the technology of hypertext documents, the World Wide Web (WWW), allowing users to access any information located on computers connected to the Internet around the world.

WWW technology is based on the definition of URL specifications (Uniform Resource Locator), the HTTP protocol (HyperText Transfer Protocol) and the HTML language itself (HyperText Markup Language). Text can be marked up in HTML using any text editor. A page marked up in HTML is often called a web page. To view a web page, a client application, a web browser, is used.
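
As an illustration of such markup, the short Python script below writes out a minimal HTML page of the kind described; the file name and page content are invented for the example, and the same text could just as well be typed into any text editor.

page = """<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>A minimal web page</title>
    <style>body { font-family: sans-serif; }</style>
  </head>
  <body>
    <h1>Hello, Web</h1>
    <p>This paragraph contains a
       <a href="https://example.com/">hyperlink</a> to another resource.</p>
  </body>
</html>
"""

# Save the markup; a web server can now offer index.html as a web page.
with open("index.html", "w", encoding="utf-8") as f:
    f.write(page)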

In 1994, the W3 Consortium (W3C) was formed, bringing together scientists from different universities and companies (including Netscape and Microsoft). Since then, the consortium has dealt with standards for the web. Its first step was the development of the HTML 2.0 specification, which added the ability to transfer information from the user's computer to the server using forms. The next step was the HTML 3 project, work on which began in 1995. It introduced, for the first time, the CSS system (Cascading Style Sheets), which allows text to be formatted without disrupting the logical and structural markup. The HTML 3 standard was never approved; instead, HTML 3.2 was created and adopted in January 1997. Already in December 1997, the W3C adopted the HTML 4.0 standard, which distinguishes between logical and visual tags.

By 1995, the growth of the Internet showed that regulation of connectivity and funding issues could not be in the hands of NSF alone. In 1995, payments for connecting numerous private networks to the national backbone were transferred to regional networks.

The Internet has grown far beyond what it was originally envisioned and designed to be; it has outgrown the agencies and organizations that created it, and they can no longer play a dominant role in its growth. Today it is a powerful worldwide communication network based on distributed switching elements: nodes and communication channels. Since 1983 the Internet has grown exponentially, and hardly a single detail has survived unchanged from those times, except that it still operates on the TCP/IP protocol suite.

If the term “Internet” was originally used to describe a network built on the Internet protocol (IP), now this word has acquired a global meaning and is only sometimes used as a name for a set of interconnected networks. Strictly speaking, the Internet is any set of physically separate networks that are interconnected by a single IP protocol, which allows us to talk about them as one logical network. The rapid growth of the Internet has caused increased interest in the TCP/IP protocols, and as a result, specialists and companies have appeared who have found a number of other applications for it. This protocol began to be used to build local area networks (LAN - Local Area Network) even when their connection to the Internet was not provided. In addition, TCP/IP began to be used in the creation of corporate networks that adopted Internet technologies, including WWW (World Wide Web) - the World Wide Web, in order to establish an effective exchange of intra-corporate information. These corporate networks are called "Intranets" and may or may not be connected to the Internet.

Tim Berners-Lee, the author of the HTTP, URI/URL and HTML technologies, is considered the inventor of the World Wide Web. In 1980, for his own use, he wrote the Enquire program, which used random associations to store data and laid the conceptual basis for the World Wide Web. In 1989, Tim Berners-Lee proposed the global hypertext project now known as the World Wide Web. The project called for the publication of hypertext documents interconnected by hyperlinks, which would make it easier for scientists to find and consolidate information. To implement the project, he invented URIs, the HTTP protocol, and the HTML language, technologies without which the modern Internet can no longer be imagined. Between 1991 and 1993, Berners-Lee refined the technical specifications of these standards and published them. He wrote the world's first web server, "httpd", and the world's first hypertext web browser, called "WorldWideWeb". This browser was also a WYSIWYG editor (short for What You See Is What You Get). Its development began in October 1990 and was completed in December of the same year. The program worked in the NeXTStep environment and began to spread across the Internet in the summer of 1991. Berners-Lee created the world's first website at http://info.cern.ch/; the site is now archived. It went online on August 6, 1991 and described what the World Wide Web is, how to install a web server, how to use a browser, and so on. This site was also the world's first Internet directory, because Tim Berners-Lee later posted and maintained a list of links to other sites there.

Since 1994, the main work on the development of the World Wide Web has been taken over by the World Wide Web Consortium (W3C), founded by Tim Berners-Lee. This Consortium is an organization that develops and implements technology standards for the Internet and the World Wide Web. The W3C's mission is to "Unleash the full potential of the World Wide Web by establishing protocols and principles to ensure the long-term development of the Web." Two other major goals of the Consortium are to ensure complete “internationalization of the Network” and to make the Network accessible to people with disabilities.

The W3C develops uniform principles and standards for the web (called "Recommendations"), which are then implemented by software and hardware manufacturers. In this way, compatibility is achieved between the software products and equipment of different companies, which makes the World Wide Web more advanced, universal and convenient. All World Wide Web Consortium Recommendations are open, that is, not protected by patents, and can be implemented by anyone without any financial contribution to the consortium.

Currently, the World Wide Web is formed by millions of web servers located around the world. A web server is a program that runs on a computer connected to a network and uses the HTTP protocol to transfer data. In its simplest form, such a program receives an HTTP request for a specific resource over the network, finds the corresponding file on the local hard drive and sends it over the network to the requesting computer. More complex web servers are capable of generating resources dynamically in response to an HTTP request. To identify resources (often files or parts of them) on the World Wide Web, Uniform Resource Identifiers (URIs) are used. To determine the location of resources on the network, Uniform Resource Locators (URLs) are used. Such URL locators combine URI identification technology and the Domain Name System (DNS): a domain name (or an IP address in numeric notation) is part of the URL and designates the computer (more precisely, one of its network interfaces) that executes the code of the desired web server.

To view information received from the Web server, a special program, a Web browser, is used on the client computer. The main function of a Web browser is to display hypertext. The World Wide Web is inextricably linked with the concepts of hypertext and hyperlinks. Most of the information on the Web is hypertext. To facilitate the creation, storage and display of hypertext on the World Wide Web, HTML (HyperText Markup Language), a hypertext markup language, is traditionally used. The work of marking up hypertext is called layout; markup masters are called webmasters. After HTML markup, the resulting hypertext is placed into a file; such an HTML file is the most common resource on the World Wide Web. Once an HTML file is made available to a web server, it is called a “web page.” A collection of web pages makes up a website. Hyperlinks are added to the hypertext of web pages. Hyperlinks help World Wide Web users easily navigate between resources (files), regardless of whether the resources are located on the local computer or on a remote server. "Web" hyperlinks are based on URL technology.

In general, we can conclude that the World Wide Web is based on “three pillars”: HTTP, HTML and URL. Although recently HTML has begun to lose its position somewhat and give way to more modern markup technologies: XHTML and XML. XML (eXtensible Markup Language) is positioned as the foundation for other markup languages. To improve the visual perception of the web, CSS technology has become widely used, which allows you to set uniform design styles for many web pages. Another innovation worth paying attention to is the URN (Uniform Resource Name) resource naming system.

A popular concept for the development of the World Wide Web is the creation of a semantic web. The Semantic Web is an add-on to the existing World Wide Web, which is designed to make information posted on the network more understandable to computers. The Semantic Web is a concept of a network in which every resource in human language would be provided with a description that a computer can understand. The Semantic Web opens up access to clearly structured information for any application, regardless of platform and regardless of programming languages. Programs will be able to find the necessary resources themselves, process information, classify data, identify logical connections, draw conclusions and even make decisions based on these conclusions. If widely adopted and implemented wisely, the Semantic Web has the potential to spark a revolution on the Internet. To create a machine-readable description of a resource on the Semantic Web, the RDF (Resource Description Framework) format is used, which is based on XML syntax and uses URIs to identify resources. New in this area are RDFS (RDF Schema) and SPARQL (Protocol And RDF Query Language), a new query language for quickly accessing RDF data.
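
A small illustration of these ideas, assuming the third-party rdflib package for Python is installed: a machine-readable description is written in RDF (Turtle syntax) and then queried with SPARQL. The URIs and property names below are invented for the example.

from rdflib import Graph   # third-party package: pip install rdflib

turtle = """
@prefix ex: <http://example.org/terms/> .

<http://example.org/people/tbl>
    ex:name     "Tim Berners-Lee" ;
    ex:invented <http://example.org/things/world-wide-web> .
"""

g = Graph()
g.parse(data=turtle, format="turtle")   # load the RDF description

# SPARQL query: find the names of the resources described in the graph.
query = """
SELECT ?name WHERE {
    ?person <http://example.org/terms/name> ?name .
}
"""
for row in g.query(query):
    print(row.name)                     # -> Tim Berners-Lee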

Currently, there are two trends in the development of the World Wide Web: the semantic web and the social web. The Semantic Web involves improving the coherence and relevance of information on the World Wide Web through the introduction of new metadata formats. The Social Web relies on the work of organizing the information available on the Web, carried out by the Web users themselves. In the second direction, developments that are part of the semantic web are actively used as tools (RSS and other web channel formats, OPML, XHTML microformats).

Internet telephony has become one of the most modern and economical types of communication. Its birthday can be considered February 15, 1995, when VocalTec released its first soft phone, a program used for voice exchange over an IP network. Microsoft then released the first version of NetMeeting in October 1996. By 1997, connections over the Internet between two ordinary telephone subscribers located in completely different places on the planet had become quite common.

Why is regular long-distance and international telephone communication so expensive? This is explained by the fact that during a conversation the subscriber occupies an entire communication channel, not only when speaking or listening to the interlocutor, but also when he is silent or distracted from the conversation. This happens when voice is transmitted over the telephone using the usual analog method.

With the digital method, information can be transmitted not continuously, but in separate “packets”. Then, information can be sent simultaneously from many subscribers via one communication channel. This principle of packet transmission of information is similar to transporting many letters with different addresses in one mail car. After all, they don’t “drive” one mail car to transport each letter separately! This temporary “packet compaction” makes it possible to use existing communication channels much more efficiently and “compress” them. At one end of the communication channel, information is divided into packets, each of which, like a letter, is equipped with its own individual address. Over a communication channel, packets from many subscribers are transmitted “interspersed”. At the other end of the communication channel, packets with the same address are again combined and sent to their destination. This packet principle is widely used on the Internet.

Having a personal computer, a sound card, a compatible microphone and headphones (or speakers), a subscriber can use Internet telephony to call any subscriber who has a regular landline telephone. During this conversation, he will also only pay for using the Internet. Before using Internet telephony, the subscriber who owns a personal computer must install a special program on it.

To use Internet telephony services, it is not necessary to have a personal computer. It is enough to have an ordinary telephone with tone dialing. In this mode, each dialed digit goes into the line not as a series of electrical impulses, as with rotary (pulse) dialing, but as combinations of tones of different frequencies. Tone dialing is found in most modern telephones. To use Internet telephony from such a telephone, you buy a prepaid card and call a central server at the number indicated on the card. A voice menu on the server (in Russian or English, as chosen) then prompts you to enter the card's serial number and key using the telephone buttons, followed by the country code and the number of your future interlocutor. The server converts the analog signal into a digital one and sends it to a server in the other city, which converts the digital signal back into an analog one and passes it to the desired subscriber. The interlocutors talk as if on a regular telephone, although sometimes there is a slight (a fraction of a second) delay in the response. Recall that, to save communication channels, voice information is transmitted in "packets" of digital data: the voice stream is divided into segments, packets, which are carried across the network using the Internet Protocol (IP).
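
As a toy illustration of this packet principle applied to voice, the Python sketch below cuts a stand-in for digitized speech into small numbered chunks and sends each chunk as a separate UDP datagram. Real IP telephony uses dedicated protocols (such as RTP) on top of UDP; the address, port, chunk size and the "audio" bytes here are purely illustrative.

import socket
import struct

audio = bytes(range(256)) * 40          # stand-in for digitized voice samples
CHUNK = 160                             # bytes of audio carried by one packet

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq, offset in enumerate(range(0, len(audio), CHUNK)):
    payload = audio[offset:offset + CHUNK]
    packet = struct.pack("!I", seq) + payload      # 4-byte sequence-number header
    sock.sendto(packet, ("127.0.0.1", 5004))       # the receiver reorders by seq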

In 2003, the Skype program was created (www.skype.com), which is completely free and does not require virtually any knowledge from the user either to install it or to use it. It allows you to talk in video mode with interlocutors located at their computers in different parts of the world. In order for the interlocutors to see each other, the computer of each of them must be equipped with a web camera.

Humanity has come a long way in the development of communications: from signal fires and drums to the cellular mobile phone, which allows two people located anywhere on our planet to communicate almost instantly. Despite the distances involved, subscribers retain a feeling of personal communication.

The World Wide Web is made up of hundreds of millions of web servers. Most of the resources on the World Wide Web are based on hypertext technology. Hypertext documents posted on the World Wide Web are called web pages. Several web pages united by a common theme and design, interconnected by links and usually located on the same web server, are called a website. To download and view web pages, special programs, browsers, are used.

The World Wide Web has caused a real revolution in information technology and an explosion in the development of the Internet. Often, when talking about the Internet, they mean the World Wide Web, but it is important to understand that they are not the same thing.

Structure and principles of the World Wide Web

The World Wide Web is made up of millions of Internet web servers located around the world. A web server is a computer program that runs on a computer connected to a network and uses the HTTP protocol to transfer data. In its simplest form, such a program receives an HTTP request for a specific resource over the network, finds the corresponding file on the local hard drive and sends it over the network to the requesting computer. More complex web servers can dynamically generate documents in response to an HTTP request using templates and scripts.
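
The simplest form of such a server is available out of the box in Python's standard library; the sketch below serves files from the current directory over HTTP. The port number is arbitrary, and a production web server would of course do far more.

from http.server import HTTPServer, SimpleHTTPRequestHandler

# Listen on localhost:8000 and answer HTTP requests with files from the
# current directory (e.g. GET /index.html returns ./index.html).
server = HTTPServer(("localhost", 8000), SimpleHTTPRequestHandler)
print("Serving http://localhost:8000/ ...")
server.serve_forever()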

To view information received from the web server, a special program is used on the client computer - web browser. The main function of a web browser is to display hypertext. The World Wide Web is inextricably linked with the concepts of hypertext and hyperlinks. Most of the information on the Internet is hypertext.

To facilitate the creation, storage and display of hypertext on the World Wide Web, HTML (HyperText Markup Language) is traditionally used. The work of creating (marking up) hypertext documents is called layout; it is done by a webmaster or a separate markup specialist, a layout designer. After HTML markup, the resulting document is saved to a file, and such HTML files are the main type of resource on the World Wide Web. Once an HTML file is made available to a web server, it is called a "web page." A set of web pages forms a website.

The hypertext of web pages contains hyperlinks. Hyperlinks help World Wide Web users easily navigate between resources (files), regardless of whether the resources are located on the local computer or on a remote server. To determine the location of resources on the World Wide Web, Uniform Resource Locators (URLs) are used. For example, the full URL of the main page of the Russian section of Wikipedia looks like this: http://ru.wikipedia.org/wiki/Main_page. Such URL locators combine URI (Uniform Resource Identifier) identification technology and the Domain Name System (DNS). The domain name (in this case ru.wikipedia.org) as part of the URL designates the computer (more precisely, one of its network interfaces) that executes the code of the desired web server. The URL of the current page can usually be seen in the browser's address bar, although many modern browsers prefer to show only the domain name of the current site by default.
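
The parts of such a URL can be taken apart with Python's standard library; a small sketch using the Wikipedia address quoted above:

from urllib.parse import urlsplit

parts = urlsplit("http://ru.wikipedia.org/wiki/Main_page")
print(parts.scheme)   # 'http'              - the protocol used to fetch the resource
print(parts.netloc)   # 'ru.wikipedia.org'  - the domain name resolved via DNS
print(parts.path)     # '/wiki/Main_page'   - the resource on that web server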

World Wide Web Technologies

To improve the visual presentation of the web, CSS technology has become widely used; it allows uniform design styles to be set for many web pages. Another innovation worth noting is the URN (Uniform Resource Name) resource naming system.

A popular concept for the development of the World Wide Web is the creation of the Semantic Web. The Semantic Web is an add-on to the existing World Wide Web that is designed to make information posted on the network more understandable to computers. It is a concept of a network in which every resource in human language would be provided with a description that a computer can understand. The Semantic Web opens up access to clearly structured information for any application, regardless of platform and programming language. Programs will be able to find the necessary resources themselves, process information, classify data, identify logical connections, draw conclusions and even make decisions based on those conclusions. If widely adopted and implemented wisely, the Semantic Web has the potential to spark a revolution on the Internet. To create a computer-readable description of a resource, the Semantic Web uses the RDF (Resource Description Framework) format, which is based on XML syntax and uses URIs to identify resources. New developments in this area are RDFS (RDF Schema) and SPARQL (SPARQL Protocol and RDF Query Language, pronounced "sparkle"), a query language for fast access to RDF data.

History of the World Wide Web

Tim Berners-Lee and, to a lesser extent, Robert Cailliau are considered the inventors of the World Wide Web. Tim Berners-Lee is the originator of the HTTP, URI/URL and HTML technologies. In 1980 he worked as a software consultant at the European Council for Nuclear Research (French: Conseil Européen pour la Recherche Nucléaire, CERN). It was there, in Geneva, Switzerland, that he wrote for his own needs the Enquire program (the name can be loosely translated as "Interrogator"), which used random associations to store data and laid the conceptual foundation for the World Wide Web.

In 1989, while working at CERN on the organization's intranet, Tim Berners-Lee proposed the global hypertext project now known as the World Wide Web. The project involved the publication of hypertext documents linked by hyperlinks, which would make it easier for CERN scientists to find and consolidate information. To implement the project, Tim Berners-Lee (together with his assistants) invented URIs, the HTTP protocol, and the HTML language, technologies without which the modern Internet can no longer be imagined. Between 1991 and 1993, Berners-Lee refined the technical specifications of these standards and published them. Nevertheless, 1989 should be considered the official year of birth of the World Wide Web.

As part of the project, Berners-Lee wrote the world's first web server, httpd, and the world's first hypertext web browser, called WorldWideWeb. This browser was also a WYSIWYG editor (short for What You See Is What You Get). Its development began in October 1990 and was completed in December of the same year. The program ran in the NeXTStep environment and began to spread across the Internet in the summer of 1991.

Mike Sendall buys a NeXT cube computer at this time in order to understand the features of its architecture, and then gives it to Tim [Berners-Lee]. Thanks to the sophistication of the NeXT cube's software system, Tim wrote a prototype illustrating the basic concepts of the project in a few months. This was an impressive result: the prototype offered users, among other things, such advanced capabilities as WYSIWYG browsing and authoring! During one of the joint discussions of the project in the CERN cafeteria, Tim and I tried to find a "catchy" name for the system being created. The only thing I insisted on was that the name should not, yet again, be taken from Greek mythology. Tim suggested "World Wide Web". I immediately liked everything about this name; it is just hard to pronounce in French.

The world's first website was hosted by Berners-Lee on August 6, 1991, on the first web server, available at http://info.cern.ch/ (the site is now archived). The resource defined the concept of the World Wide Web and contained instructions for setting up a web server, using a browser, and so on. This site was also the world's first Internet directory, because Tim Berners-Lee later posted and maintained a list of links to other sites there.

The first photograph on the World Wide Web was of the parody band Les Horribles Cernettes. Tim Berners-Lee asked the band's leader for scanned photographs of them after the CERN Hardronic Festival.

And yet the theoretical foundations of the web were laid much earlier than Berners-Lee's work. Back in 1945, Vannevar Bush developed the concept of Memex, a mechanical aid for "expanding human memory." Memex is a device in which a person stores all his books and records (and, ideally, all of his knowledge that can be formally described) and which supplies the necessary information with sufficient speed and flexibility. It is an extension and supplement to human memory. Bush also predicted comprehensive indexing of text and multimedia resources with the ability to quickly find the necessary information. The next significant step towards the World Wide Web was the creation of hypertext (a term coined by Ted Nelson in 1965).

Since 1994, the main work on the development of the World Wide Web has been taken over by the World Wide Web Consortium (W3C), founded and still led by Tim Berners-Lee. This consortium is an organization that develops and implements technology standards for the Internet and the World Wide Web. The W3C's mission is to "unleash the full potential of the World Wide Web by establishing protocols and principles to ensure the long-term development of the Web." Two other major goals of the consortium are to ensure full "internationalization of the Web" and to make the Web accessible to people with disabilities.

The W3C develops common principles and standards (called "recommendations") for the Internet. W3C Recommendations), which are then implemented by software and hardware manufacturers. In this way, compatibility is achieved between software products and equipment of different companies, which makes the World Wide Web more advanced, universal and convenient. All recommendations of the World Wide Web consortium are open, that is, they are not protected by patents and can be implemented by anyone without any financial contributions to the consortium.

Prospects for the development of the World Wide Web

Currently, there are two trends in the development of the World Wide Web: the semantic web and the social web.

  • The Semantic Web involves improving the coherence and relevance of information on the World Wide Web through the introduction of new metadata formats.
  • The Social Web relies on the work of organizing the information available on the Web, carried out by Web users themselves. In the second direction, developments that are part of the Semantic Web are actively used as tools (RSS and other web feed formats, OPML, XHTML microformats). Partially semanticized sections of the Wikipedia category tree help users navigate the information space consciously; however, the very loose requirements for subcategories give little reason to hope that such sections will expand. In this regard, attempts to compile atlases of knowledge may be of interest.

There is also the popular concept of Web 2.0, which summarizes several directions of development of the World Wide Web.

Methods for actively displaying information on the World Wide Web

Information on the web can be displayed either passively (that is, the user can only read it) or actively, in which case the user can add to and edit it. Methods for actively displaying information on the World Wide Web include guest books, forums, blogs and content management systems.

It should be noted that this division is quite arbitrary. A blog or a guest book, say, can be considered a special case of a forum, which in turn is a special case of a content management system. Usually the difference lies in the purpose, approach and positioning of a particular product.

Some information from websites can also be accessed through speech. India has already begun testing a system that makes the text content of pages accessible even to people who cannot read and write.

The World Wide Web is sometimes ironically called the Wild Wild Web, in reference to the title of the film Wild Wild West.

Security

For cybercriminals, the World Wide Web has become a key means of distributing malware. The concept of online crime also covers identity theft, fraud, espionage and the illegal collection of information about particular subjects or objects. Web vulnerabilities, according to some data, now outnumber any traditional manifestation of computer security problems; Google estimates that approximately one in ten pages on the World Wide Web may contain malicious code. According to Sophos, a British maker of antivirus solutions, the majority of cyber attacks on the web are carried out through legitimate websites, mainly located in the USA, China and Russia. The most common type of such attack, according to the same company, is SQL injection: malicious insertion of direct database queries into text fields on a site's pages, which, if security is insufficient, can lead to disclosure of the contents of the database. Another common threat, which exploits the capabilities of HTML and unique resource identifiers, is cross-site scripting (XSS), which became possible with the introduction of JavaScript and gained momentum with the development of Web 2.0 and Ajax, new standards that encouraged the use of interactive scripts. In 2008, it was estimated that up to 70% of all websites in the world were vulnerable to XSS attacks against their users.
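
To make the two attack classes concrete, the sketch below uses Python's built-in sqlite3 and html modules to show a classic SQL injection against a naively built query, the parameterized query that defeats it, and the escaping of user input that blocks the simplest form of cross-site scripting. The table, column and sample values are invented for the example.

import sqlite3
import html

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

user_input = "' OR '1'='1"   # a classic injection payload typed into a form field

# Vulnerable: the input is pasted straight into the SQL text, so the WHERE
# clause becomes always true and every row is returned.
rows = db.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
print(len(rows))             # 1 - the injected condition matched everything

# Safe: a parameterized query treats the input purely as data.
rows = db.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(len(rows))             # 0 - no user is literally named "' OR '1'='1"

# Against cross-site scripting, user-supplied text is escaped before being
# inserted into an HTML page.
comment = "<script>alert('xss')</script>"
print(html.escape(comment))  # &lt;script&gt;alert(...)&lt;/script&gt;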

Proposed solutions to relevant problems vary significantly, even to the point of completely contradicting each other. Large security solution providers like McAfee develop products to evaluate information systems for compliance with certain requirements; other market players (for example, Finjan) recommend conducting active research of program code and all content in general in real time, regardless of the data source. There are also views that businesses should view security as a business opportunity rather than as a cost; To do this, the hundreds of companies that provide information security today must be replaced by a small group of organizations that would enforce the infrastructure policy of ongoing and ubiquitous digital rights management.

Privacy

Every time a user's computer requests a web page from a server, the server determines and typically logs the IP address from which the request came. Likewise, most Internet browsers record information about the pages you visit, which you can then view in your browser history, and also cache downloaded content for possible reuse. If an encrypted HTTPS connection is not used when interacting with the server, requests and responses to them are transmitted over the Internet in clear text and can be read, recorded and viewed on intermediate network nodes.

When a web page requests, and the user provides, a certain amount of personal information, such as a first and last name or a postal or email address, the data stream can be de-anonymized and associated with a specific person. If a website uses cookies, user authentication or other visitor-tracking technologies, then a relationship can also be established between previous and subsequent visits. Thus, an organization operating on the World Wide Web is able to create and update the profile of a specific client who uses its site (or sites). Such a profile may include, for example, information about leisure and entertainment preferences, consumer interests, occupation and other demographic indicators. Such profiles are of considerable interest to marketers, advertising agency employees and other similar professionals. Depending on the terms of service of specific services and on local laws, such profiles may be sold or transferred to third parties without the user's knowledge.

Disclosure of information is also facilitated by social networks, which invite participants to reveal a certain amount of personal data about themselves. Careless handling of the capabilities of such resources can result in information that the user would prefer to hide becoming publicly available; among other things, such information may attract the attention of hooligans or, worse, cybercriminals. Modern social networks provide their members with a fairly wide range of profile privacy settings, but these settings can be unnecessarily complex, especially for inexperienced users.

Prevalence

Between 2005 and 2010, the number of web users doubled, reaching two billion. According to early studies in 1998 and 1999, most existing websites were not indexed correctly by search engines, and the web itself was larger than expected. As of 2001, more than 550 million web documents had already been created, most of them within the invisible (deep) web. As of 2002, more than 2 billion web pages had been created; 56.4% of all Internet content was in English, followed by German (7.7%), French (5.6%) and Japanese (4.9%). According to research conducted at the end of January 2005, more than 11.5 billion web pages in 75 different languages had been identified and indexed on the open web. As of March 2009, the number of pages had increased to 25.21 billion. On July 25, 2008, Google software engineers Jesse Alpert and Nissan Hajaj announced that Google Search had detected more than a trillion unique URLs.

  • In 2011, there were plans to erect a monument to the World Wide Web in St. Petersburg. The composition was to be a street bench in the form of the abbreviation WWW, with free Internet access.

See also

  • Wide Area Network
  • World Digital Library
  • Global Internet Use


When talking about the Internet, they often mean the World Wide Web. However, it is important to understand that these are not the same thing.

Structure and principles

The World Wide Web is made up of millions of Internet web servers located around the world. A web server is a computer program that runs on a computer connected to a network and uses the HTTP protocol to transfer data. In its simplest form, such a program receives an HTTP request for a specific resource over the network, finds the corresponding file on the local hard drive and sends it over the network to the requesting computer. More sophisticated web servers are capable of dynamically generating documents in response to an HTTP request using templates and scripts.

To view information received from the web server, a special program is used on the client computer - web browser. The main function of a web browser is to display hypertext. The World Wide Web is inextricably linked with the concepts of hypertext and hyperlinks. Most of the information on the Internet is hypertext.

HTML (HyperText Markup Language) is traditionally used to create, store and display hypertext on the World Wide Web. The work of creating (marking up) hypertext documents is called layout, it is done by a webmaster or a separate markup specialist - a layout designer. After HTML markup, the resulting document is saved into a file, and such HTML files are the main type of resources on the World Wide Web. Once an HTML file is made available to a web server, it is called a “web page.” A collection of web pages makes up a website.

The hypertext of web pages contains hyperlinks. Hyperlinks help World Wide Web users easily navigate between resources (files), regardless of whether the resources are located on the local computer or on a remote server. To determine the location of resources on the World Wide Web, uniform resource locators URL (English Uniform Resource Locator) are used. For example, the full URL of the main page of the Russian section of Wikipedia looks like this: http://ru.wikipedia.org/wiki/Main_page. Such URL locators combine URI identification technology (English Uniform Resource Identifier) ​​and the DNS domain name system (English Domain Name System). The domain name (in this case ru.wikipedia.org) as part of the URL designates the computer (more precisely, one of its network interfaces) that executes the code of the desired web server. The URL of the current page can usually be seen in the browser's address bar, although many modern browsers prefer to show only the domain name of the current site by default.

Technologies

To improve the visual perception of the web, CSS technology has become widely used, which allows you to set uniform design styles for many web pages. Another innovation worth paying attention to is the resource naming system URN (Uniform Resource Name).

A popular concept for the development of the World Wide Web is the creation of the Semantic Web. The Semantic Web is an add-on to the existing World Wide Web, which is designed to make information posted on the network more understandable to computers. The Semantic Web is a concept of a network in which every resource in human language would be provided with a description that a computer can understand. The Semantic Web opens up access to clearly structured information for any application, regardless of platform and regardless of programming languages. Programs will be able to find the necessary resources themselves, process information, classify data, identify logical connections, draw conclusions and even make decisions based on these conclusions. If widely adopted and implemented wisely, the Semantic Web has the potential to spark a revolution on the Internet. To create a computer-readable description of a resource, the Semantic Web uses the RDF (English) format. Resource Description Framework), which is based on XML syntax and uses URIs to identify resources. New products in this area are RDFS (eng. RDF Schema) and SPARQL (eng. Protocol And RDF Query Language) (pronounced "sparkle"), a new query language for fast access to RDF data.

Story

Main article: History of the World Wide Web

Tim Berners-Lee and, to a lesser extent, Robert Caillot are considered the inventors of the World Wide Web. Tim Berners-Lee is the originator of HTTP, URI/URL, and HTML technologies. In 1980 he worked at the European Council for Nuclear Research (French). conseil européen pour la recherche nucléaire, CERN) software consultant. It was there, in Geneva (Switzerland), that for his own needs he wrote the Enquire program, which used random associations to store data and laid the conceptual basis for the World Wide Web.

As part of the project, Berners-Lee wrote the world's first web server, called "httpd", and the world's first hypertext web browser, called "WorldWideWeb". This browser was also a WYSIWYG editor (short for what you see is what you get - what you see is what you get), its development began in October 1990, and was completed in December of the same year. The program ran in the NeXTStep environment and began to spread across the Internet in the summer of 1991.

Mike Sendall buys a NeXT cube computer at this time in order to understand what the features of its architecture are, and then gives it to Tim [Berners-Lee]. Thanks to the sophistication of the NeXT cube software system, Tim wrote a prototype illustrating the main points of the project in a few months. This was an impressive result: the prototype offered users, among other things, such advanced capabilities as WYSIWYG browsing/authoring!... During one of the sessions of joint discussions of the project in the CERN cafeteria, Tim and I tried to find a “catching” name for the system being created . The only thing I insisted on was that the name should not once again be taken from the same Greek mythology. Tim suggested "world wide web". I immediately really liked everything about this name, but it’s hard to pronounce in French.

The world's first website was hosted by Berners-Lee on August 6, 1991, on the first web server, available at http://info.cern.ch/, (). The resource defined the concept “ World Wide Web", contained instructions for installing a web server, using a browser, etc. This site was also the world's first Internet directory, because Tim Berners-Lee later posted and maintained a list of links to other sites there.

The first photograph to appear on the World Wide Web was of the parody filk band Les Horribles Cernettes. Tim Berners-Lee asked the band leader for scanned photographs after the CERN hardronic festival.

And yet, the theoretical foundations of the web were laid much earlier than Berners-Lee. Back in 1945, Vannaver Bush developed the concept of Memex - mechanical aids for “extending human memory”. Memex is a device in which a person stores all his books and records (and, ideally, all his knowledge that can be formally described) and which provides the necessary information with sufficient speed and flexibility. It is an extension and addition to human memory. Bush also predicted comprehensive indexing of text and multimedia resources with the ability to quickly find the necessary information. The next significant step towards the World Wide Web was the creation of hypertext (a term coined by Ted Nelson in 1965).

Since 1994, the main work on the development of the World Wide Web has been undertaken by the World Wide Web Consortium (English: world wide web consortium, abbreviated as W3C), founded and still led by Tim Berners-Lee. This consortium is an organization that develops and implements technology standards for the Internet and the World Wide Web. W3C Mission: “Unleash the full potential of the World Wide Web by establishing protocols and principles to ensure the long-term development of the Web.” Two other major goals of the consortium are to ensure full “internationalization of the Web” and to make the Web accessible to people with disabilities.

The W3C develops uniform principles and standards for the Internet (called “recommendations”, English W3C recommendations), which are then implemented by software and hardware manufacturers. In this way, compatibility is achieved between software products and equipment of different companies, which makes the World Wide Web more advanced, universal and convenient. All recommendations of the World Wide Web consortium are open, that is, they are not protected by patents and can be implemented by anyone without any financial contributions to the consortium.

Development prospects

Currently, there are two directions in the development of the World Wide Web: the semantic web and the social web.

  • The Semantic Web involves improving the coherence and relevance of information on the World Wide Web through the introduction of new metadata formats.
  • The social web relies on users to organize the information available on the network.

In the second direction, developments that are part of the semantic web are actively used as tools (RSS and other formats, web channels, OPML, XHTML microformats). Partially semanticized sections of the Wikipedia category tree help users consciously navigate the information space, however, very soft requirements for subcategories do not give reason to hope for the expansion of such sections. In this regard, attempts to compile atlases of Knowledge may be of interest.

There is also a popular concept Web 2.0, which summarizes several directions of development of the World Wide Web.

Ways to actively display information

Information presented online may be available:

  • read-only (“passive”);
  • for reading and adding/modifying (“active”).

Methods for actively displaying information on the World Wide Web include, among others, guest books, forums, blogs and content management systems.

This division is very arbitrary. For example, a blog or a guest book can be considered a special case of a forum, which in turn is a special case of a content management system. Usually the difference lies in the purpose, approach and positioning of a particular product.

Some information on websites can also be accessed by voice. India has already begun testing a system that makes the text content of pages accessible even to people who cannot read or write.

Prevalence

Between 2005 and 2010, the number of web users doubled, reaching two billion. According to early research carried out in 1999, most existing websites were not correctly indexed by search engines, and the web itself turned out to be larger than expected. As of 2001, more than 550 million web documents had been created, most of which, however, lay within the invisible web. As of 2002, more than 2 billion web pages had been created; 56.4% of all Internet content was in English, followed by German (7.7%), French (5.6%) and Japanese (4.9%). According to research conducted at the end of January 2005, more than 11.5 billion web pages in 75 different languages were identified and indexed on the open web, and by March 2009 the number of pages had grown to 25.21 billion. On July 25, 2008, Google software engineers Jesse Alpert and Nissan Hajaj announced that Google's search engine had detected more than a trillion unique URLs.

Scientific and technological progress does not stand still: it is in constant development, search and improvement. The Internet, perhaps the most useful invention of human genius, appeared quite recently by the standards of civilization. At its core, it is a unique tool for exchanging data.

The Internet (the Network, the Net) is a virtual environment that provides access to information resources; its elements are individual computers. They are combined into a single network, each with its own unique address, and connected to host computers over high-speed communication lines.

The Internet is a huge network connecting countless devices. It serves to exchange information that exists on this network in various forms. Nowadays, not only computers can connect to the Internet. Mobile phones, tablets, game consoles, other gadgets and even TVs can easily access the network at any time.

The significance of this information space is undeniable due to the amazing communication capabilities between users of all devices connected to the Network.

In technical terms, the online space is formed by countless computing devices connected to one another. Billions of users in different countries communicate with each other every day, send and receive useful information, download applications, programs and utilities, watch videos and listen to music.

The online environment has another important property - practically unlimited possibilities for storing information. Personal experience is shared over the Internet; in addition, it serves modern media as a unique platform for informing the public and acts as a colossal repository of the world's knowledge.

What is the Internet?

So that computer owners on different continents can freely search for and use network resources, trunk cables are laid on the ocean floor, and information flows through them around the clock.

Communication between computers is governed by special protocols - sets of rules that define how devices exchange data. A key element of this scheme is the IP address: each participant in the network receives its own numeric address, which is used to find and identify it.

For example, after typing the name "novichkam.info" into the browser's address bar, the visitor lands within moments on a website offering help to beginners. In technical terms, the software simply looks up the IP address assigned to that specific site.

The machine algorithm involves the following operations (a minimal code sketch follows the list):

  1. the request is received by a name server, which stores the names of resources and their addresses;
  2. the name of the requested resource is found there, i.e. the required IP address is determined;
  3. the client lands on the website.
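
To make the name-to-address step concrete, here is a minimal sketch in Python using only the standard library. It asks the system resolver for the IP address behind a domain name; "example.com" is purely an illustrative domain, not one mentioned elsewhere in this article.

    import socket

    # Ask the system resolver (and, behind it, DNS) for the IP address
    # that corresponds to a domain name.
    def resolve(domain):
        return socket.gethostbyname(domain)

    # "example.com" is only an illustrative domain.
    print(resolve("example.com"))   # prints something like 93.184.216.34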

Other protocols are also used, such as HTTP. Requests of this kind are made by adding the prefix http:// to the address.
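
As an illustration of what such a request looks like in practice, here is a minimal sketch in Python that downloads a page over HTTP with the standard library; the address used is purely illustrative.

    from urllib.request import urlopen

    # Download a page over HTTP; "http://example.com/" is only an example address.
    with urlopen("http://example.com/") as response:
        print(response.status)              # 200 means the page was found
        print(response.read(200).decode())  # the first bytes of the HTML document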

What is the World Wide Web (WWW)

Many users are especially interested in the Internet service known as the World Wide Web (WWW, or simply the Web). It is understood as a set of interconnected web pages, access to which is provided by computers (web servers) connected to the Internet.

A set of text files marked up in HTML and connected by links, hosted on a server, is called a website. You can view the content of a particular website by entering its address into a browser.

The web today is the most sought-after and popular service in the online space, i.e. the Internet. An important element of the web is the hypertext link. By clicking on a link to the desired document, or by requesting its unique URL (address and path) in the browser, a person can view the desired text.
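
To show what a hypertext link and a URL look like from a program's point of view, here is a small sketch in Python: it splits an illustrative URL into its parts (protocol, host, path) and extracts the link targets from an invented fragment of HTML, using only the standard library.

    from urllib.parse import urlparse
    from html.parser import HTMLParser

    # Split an illustrative URL into its components.
    url = urlparse("http://example.com/articles/internet.html")
    print(url.scheme, url.netloc, url.path)   # http example.com /articles/internet.html

    # Collect hyperlink targets (href attributes) from a fragment of HTML.
    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links.extend(value for name, value in attrs if name == "href")

    page = '<p>See the <a href="/history.html">history</a> and <a href="http://example.org/">another site</a>.</p>'
    collector = LinkCollector()
    collector.feed(page)
    print(collector.links)   # ['/history.html', 'http://example.org/']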

Addressing system

If you enter an incorrect address into the address bar or follow a broken link, the browser will promptly report an error (i.e. the absence of the requested page). Sometimes such a request leads instead to an advertising or fraudulent site.

In that case, simply correct the mistake in the address bar; for security reasons, do not explore the advertising site, since such sites may spread viruses. To learn how fraudulent resources operate, see our section describing the most common methods of deception on the Internet.

The main part of any website's address is the domain name, which makes the address easy to remember. The domain usually points to the site's home page. At the same time, keep in mind that to actually download a page the computer uses an IP address, something like "12.123.45.5", a combination that is much harder to remember than the domain name of our site.

It is important to know that typing the "http://" or "www" prefix into the address bar is NOT necessary. If you are unsure of an address, it is often easier to use a search engine: it will immediately correct a typo and will find the site even if the domain is entered without its zone (the part after the dot), which often causes confusion.

What does the Internet give us?

  • unlimited communication and connectivity

Many people look for like-minded people here, communicating on popular social networks and forums. Others prefer personal communication via services such as ICQ or Skype. Visitors to dating websites hope to find their other half here;

  • unlimited possibilities for entertainment and personal leisure

Here you can listen to popular music for free, enjoy the latest films from the studios, play various games (including gambling), get acquainted with the works of modern authors and literary classics, take surveys, tests and so on.

  • self-education

Online, you can not only read useful articles but also take part in trainings and master classes and watch video lessons;

  • creative and personal development

Here you can meet remarkable people and visit their professional projects devoted to creative and personal growth;

  • purchase of goods and services

Clients of virtual supermarkets can buy goods without leaving home. Online you can purchase shares of industrial companies, order tickets, book a hotel room, etc.;

  • new ways to earn money

There are more and more ways to earn money on the Internet. For example, you can open an online store or create your own blog (website). For those who are just trying their hand in this field, it is easier to start with freelancing: writing articles to order, selling photos, offering services for creating and promoting various projects, doing web design and programming.

  • much more. The information on our website will help you not only discover all the possibilities of this global network, but also gain valuable experience using it.