The relationship between the Internet and the World Wide Web: what the Internet is, who created the World Wide Web, and how the global network works

World Wide Web (www)

As the Internet developed, ever more information circulated through it, and navigating it became increasingly difficult. The task then arose of creating a simple, understandable way to organize the information posted on Internet sites. The new WWW (World Wide Web) service fully met this need.

The World Wide Web is a system of documents with text and graphic information, posted on Internet sites and interconnected by hyperlinks. This service is perhaps the most popular one, and for many users it is synonymous with the word Internet itself. Novice users often confuse the two concepts, the Internet and the WWW (or Web). It should be stressed that the WWW is just one of the many services provided to Internet users.

The main idea used in developing the WWW system is that of accessing information through hypertext links. Its essence is to include in the text of a document links to other documents, which can be located either on the same server or on remote information servers.

The history of the WWW begins in 1989, when Tim Berners-Lee, an employee of the famous scientific organization CERN, proposed to his management the creation of a database in the form of an information network consisting of documents that included both the information itself and links to other documents. Such documents are nothing other than hypertext.

Another feature that sets the WWW apart from other services is that through it you can access almost all other types of Internet services, such as FTP, Gopher, and Telnet.

The WWW is a multimedia system. This means that using the WWW you can, for example, watch a video about historical monuments or find information about the World Cup. You can also access library holdings, or recent photographs of the globe taken minutes ago by meteorological satellites.

The idea of organizing information as hypertext is not new. Hypertext existed long before computers. The simplest example of non-computer hypertext is an encyclopedia: some words in its articles are set in italics, meaning you can turn to the corresponding article for more detail. But where non-computer hypertext requires turning pages, on a monitor screen following a hypertext link is instantaneous: you just click on the linked word.

The main merit of the above-mentioned Tim Berners-Lee is that he not only put forward the idea of creating a hypertext-based information system, but also proposed a number of methods that formed the basis of the future WWW service.

In 1991, the ideas that originated at CERN began to be actively developed at the National Center for Supercomputing Applications (NCSA) in the United States. While the HTML document language itself was defined at CERN by Berners-Lee, it was NCSA that created Mosaic, a program for viewing such documents. Mosaic, developed by Marc Andreessen, became the first widely popular graphical browser and opened up a new class of software products.

In 1994, the number of www servers began to grow rapidly and the new Internet service not only received worldwide recognition, but also attracted a huge number of new users to the Internet.

Now let's give the basic definitions.

WWW is a set of web pages located on Internet sites and interconnected by hyperlinks (or simply links).

A web page is a structural unit of the WWW that includes the information itself (text and graphics) and links to other pages.

A website is a set of web pages physically located on one Internet node.

The www hyperlink system is based on the fact that some selected sections of one document (which can be parts of text or illustrations) act as links to other documents that are logically related to them.

The documents being linked to can be located either on a local or on a remote computer. Traditional hypertext links, i.e., links within the same document, are also possible.

Linked documents may, in turn, contain cross-references to each other and to other information resources. Thus, it is possible to collect documents on similar topics into a single information space. (For example, documents containing medical information.)

WWW architecture

The architecture of the WWW, like that of many other Internet services, is built on the client-server principle.

The main task of the server program is to provide access to information stored on the computer on which it runs. After startup, the server program waits for requests from client programs. The clients are typically web browsers, the programs used by ordinary WWW users. When such a program needs information from the server (usually documents stored there), it sends the server a corresponding request. If the client has sufficient access rights, a connection is established between the programs and the server sends a response to the request, after which the connection between them is closed.

To transfer information between programs, the HTTP protocol (Hypertext Transfer Protocol) is used.
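The request/response cycle described above can be sketched in miniature. The following Python fragment is a toy illustration of the textual shape of HTTP/1.0 messages; the host and path are made-up examples, and real exchanges involve many more headers:

```python
# A minimal sketch of the HTTP request/response cycle described above.
# The host and path are invented examples; real servers and clients
# exchange many additional headers.

def build_get_request(host: str, path: str) -> str:
    """Compose the text of a simple HTTP/1.0 GET request."""
    return f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n"

def parse_status_line(response: str) -> tuple:
    """Split the first line of a response into version, code, and reason."""
    status_line = response.split("\r\n", 1)[0]
    version, code, reason = status_line.split(" ", 2)
    return version, int(code), reason

request = build_get_request("example.org", "/index.html")
print(request.splitlines()[0])      # GET /index.html HTTP/1.0

response = "HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n<html>...</html>"
print(parse_status_line(response))  # ('HTTP/1.0', 200, 'OK')
```

The blank line (`\r\n\r\n`) separates the headers from the body, which is what lets the client know where the document itself begins.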

WWW server functions

A WWW server is a program that runs on a host computer and processes requests coming from WWW clients. On receiving a request from a WWW client, this program establishes a connection over the TCP/IP transport protocol and exchanges information using the HTTP protocol. In addition, the server enforces access rights to the documents located on it.
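The server-side logic just described, receiving a request for a document, checking access rights, and answering with a status code, can be sketched as follows. The document store, the set of public paths, and the status codes returned are invented for illustration; a real server does far more:

```python
# A toy sketch of the server-side logic described above: the server
# receives a request for a document, checks access rights, and returns
# a status code with the document body. The document store and the
# permission set are invented for illustration.

DOCUMENTS = {
    "/index.html": "<html>Welcome</html>",
    "/private.html": "<html>Secret</html>",
}
PUBLIC_PATHS = {"/index.html"}

def handle_request(path: str) -> tuple:
    """Return (status_code, body) for a requested document path."""
    if path not in DOCUMENTS:
        return 404, "Not Found"
    if path not in PUBLIC_PATHS:
        return 403, "Forbidden"    # insufficient access rights
    return 200, DOCUMENTS[path]

print(handle_request("/index.html"))    # (200, '<html>Welcome</html>')
print(handle_request("/private.html"))  # (403, 'Forbidden')
```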

To access information that the server cannot process directly, a gateway system is used. Through a special interface for exchanging information with gateways, CGI (Common Gateway Interface), the WWW server can obtain information from sources that would be inaccessible to other types of Internet service. For the end user the operation of the gateways is "transparent": when viewing web resources in a favorite browser, an inexperienced user will not even notice that some of the information was delivered via the gateway system.
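The gateway idea can be illustrated with a small sketch: the server serves stored files itself, but hands certain paths to a gateway program that computes or fetches the data. The path names and the "database" below are invented stand-ins for any source the server cannot read directly:

```python
# An illustrative sketch of the gateway idea described above. The server
# only serves stored files itself, but forwards certain paths to a
# CGI-style gateway. The path layout and "database" are invented.

DATABASE = {"rome": "pop. 2.8M", "paris": "pop. 2.1M"}  # invented sample data

def gateway(query: str) -> str:
    """A CGI-style script: take a query string, return headers plus body."""
    body = DATABASE.get(query.lower(), "no record")
    return "Content-Type: text/plain\r\n\r\n" + body

def serve(path: str) -> str:
    """The server hands /cgi-bin/ requests to the gateway, transparently."""
    if path.startswith("/cgi-bin/lookup?"):
        return gateway(path.split("?", 1)[1])
    return "Content-Type: text/html\r\n\r\n<html>static page</html>"

print(serve("/cgi-bin/lookup?rome"))  # ...pop. 2.8M
```

From the client's point of view both responses look the same, which is exactly the "transparency" the text describes.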



WWW client functions

There are two main types of WWW clients: web browsers and service applications.

Web browsers are used to work directly with the WWW and retrieve information from it.

Service applications communicate with the server either to obtain statistics or to index the information stored there. (This is how information gets into search engine databases.) There are also service clients whose work concerns the technical side of storing information on a given server.

"World Wide Web" (WWW)

The World Wide Web (WWW) is the most popular and interesting Internet service and a convenient means of working with information. The most common hostname for a computer on the Internet today is "www", and more than half of all Internet traffic comes from the WWW. The number of WWW servers cannot be estimated exactly, but by some estimates it exceeds 30 million. The WWW is growing even faster than the Internet itself.

WWW is a worldwide information repository in which information objects are linked by a hypertext structure. Hypertext is primarily a system of cross-referenced documents, a way of presenting information using links between documents. Since the WWW system allows these documents to include not only texts, but also graphics, sound and video, a hypertext document has become a hypermedia document.

A little WWW history. The World Wide Web is one of the most important components of the global Internet, and it has a history of its own.

This is interesting. The European particle physics laboratory CERN is located in Switzerland. In 1980, Tim Berners-Lee, who was then working at CERN, began developing a project for a global computer network that would give physicists around the world access to various information. It took nine years. In 1989, after years of technical experiments, Berners-Lee proposed a concrete design, which became the beginning of the World Wide Web, or WWW for short.

Over time, many realized that such a service could be used by all kinds of people, not just physicists. The WWW began to grow rapidly, and many people helped it: some developed hardware, others created software for the WWW, still others improved communication lines. All this allowed it to become what it is now: the "World Wide Web".

Principles of client and server operation. The WWW works on the client-server principle, or more precisely client-servers: there are many servers which, at a client's request, return a hypermedia document, a document consisting of parts with diverse representations of information (text, sound, graphics, three-dimensional objects, etc.), in which each element can be a link to another document or to part of one. Links in WWW documents are organized so that every information resource on the global Internet is uniquely addressed, and the document you are reading at any moment can link both to other documents on the same server and to documents (and Internet resources in general) on other computers on the Internet. The user does not notice this and works with the entire information space of the Internet as a single whole.

WWW links point not only to documents specific to the WWW itself, but also to other services and information resources on the Internet. Moreover, most WWW client programs (browsers, navigators) not only understand such links, but are also client programs for the corresponding services: FTP, gopher, Usenet network news, email, etc. Thus, WWW software tools are universal for various Internet services, and the WWW information system itself plays an integrating role.

Let's list some terms used on the WWW.

The first term is HTML: a set of control sequences (commands) contained in an HTML document that define the actions the viewing program (browser) should perform when loading the document. This means that each page is an ordinary text file containing the text that everyone sees, together with instructions for the program, invisible to the reader, in the form of links to other pages, images, and servers. It is with such pages that questionnaires and registration cards are filled out and surveys are conducted.
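The split between visible text and invisible instructions can be demonstrated with Python's standard `html.parser` module. This sketch separates the readable text of a small made-up page from the link targets hidden in its tags:

```python
# The paragraph above describes an HTML page as visible text plus
# invisible instructions (tags). This sketch, using the standard
# html.parser module, separates the two for a small invented page.

from html.parser import HTMLParser

class LinkAndTextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []   # invisible instructions: link targets
        self.text = []    # visible text

    def handle_starttag(self, tag, attrs):
        if tag == "a":    # hyperlink tag
            self.links += [value for name, value in attrs if name == "href"]

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

page = '<html><body>See the <a href="/history.html">history</a> page.</body></html>'
parser = LinkAndTextExtractor()
parser.feed(page)
print(parser.links)  # ['/history.html']
print(parser.text)   # ['See the', 'history', 'page.']
```

A browser does essentially this, only it also renders the text and makes the link clickable.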

The second term is URL (Uniform Resource Locator, a universal pointer to a resource). This is the name given to links to information resources on the Internet.

Another term is HTTP (HyperText Transfer Protocol). This is the name of the protocol by which the client and the WWW server interact.

WWW is a direct access service that requires a full Internet connection and, moreover, often requires fast communication lines if the documents you are reading contain a lot of graphics or other non-text information.

The rapid development of the Internet, which began in the early 90s, is largely due to the emergence of new WWW technology. This technology is based on hypertext technology, which has been extended to all computers connected to the Internet.

When using hypertext technology, the text is structured and link words are highlighted in it. When a link is activated (for example, using the mouse), a transition occurs to the text fragment specified in the link or to another document. So, we could convert our text into hypertext by highlighting the words “hypertext technology” in the first paragraph and recording that when this link is activated, a transition will occur to the beginning of the second paragraph.

WWW technology allows you to navigate not only within the source document, but also to any document located on a given computer and, most importantly, to any document on any computer currently connected to the Internet. Documents implemented using WWW technology are called Web pages.

Documents are structured and Web pages created using HTML (HyperText Markup Language). The Microsoft Word text editor can save documents in Web page format. Web pages are viewed using special programs called browsers. Currently the most common browsers are Internet Explorer, Netscape Navigator, and Opera.

If your computer is connected to the Internet, you can download one of the browsers and go on a journey through the World Wide Web. First, you need to download a Web page from one of the Internet servers, then find the link and activate it. As a result, a Web page will be loaded from another Internet server, which may be located in another part of the world. In turn, you can activate the link on this Web page, the next Web page will load, etc.
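The journey described above, load a page, activate a link, load the next page, can be sketched as a walk over a tiny in-memory "web". The page names and links below are invented for illustration; this is also, in essence, how a search-engine crawler discovers pages:

```python
# The browsing process described above -- load a page, follow a link,
# load the next page -- sketched as a walk over a tiny in-memory "web".
# Page names and links are invented for illustration.

WEB = {
    "start.html": ["monuments.html", "satellites.html"],
    "monuments.html": ["start.html"],
    "satellites.html": [],
}

def reachable(start: str) -> set:
    """All pages reachable from `start` by following hyperlinks."""
    seen, frontier = set(), [start]
    while frontier:
        page = frontier.pop()
        if page in seen:
            continue
        seen.add(page)
        frontier.extend(WEB.get(page, []))
    return seen

print(sorted(reachable("start.html")))
# ['monuments.html', 'satellites.html', 'start.html']
```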

The Internet is growing at a very fast pace, and it is becoming increasingly difficult to find the necessary information among tens of millions of documents. To search for information, special search servers are used, which contain accurate and constantly updated information about the content of tens of millions of Web pages.

The World Wide Web is a hyperlinked information system distributed throughout the world, existing on the technical basis of the Internet. The World Wide Web is only 16 years old: the birth date of the World Wide Web (WWW) is considered to be August 6, 1991. On that day Tim Berners-Lee, who worked at CERN, the European particle physics laboratory in Geneva (Switzerland), published a brief description of the WWW project.




A web page has its own address by which it can be accessed, for example http://elhovka.narod.ru/html/urok.htm:
access protocol - http
computer name - elhovka.narod.ru
directory name - html
file name - urok.htm
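The components of such an address can also be pulled apart programmatically. This sketch uses Python's standard `urllib.parse` module on the example address from the text:

```python
# Splitting the example web-page address from the text into the
# components listed above, using the standard urllib.parse module.

from urllib.parse import urlparse

url = "http://elhovka.narod.ru/html/urok.htm"
parts = urlparse(url)

print(parts.scheme)  # http              (access protocol)
print(parts.netloc)  # elhovka.narod.ru  (computer name)
print(parts.path)    # /html/urok.htm    (directory and file name)
```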


WWW hyperstructure. Web pages need not be viewed in sequence, turning them over like the pages of a book. The most important property of the WWW is the hypertext organization of connections between Web pages. These connections operate not only between pages on the same server, but also between different WWW servers. The transition from one page to another occurs through hyperlinks, which form a network resembling a spider's web.

In computer science, considerable attention is paid to computer networks. Their most prominent representatives are the Internet and the World Wide Web. The Internet is a telecommunications network of computers. It is the basis of the World Wide Web (the Web), a system of interconnected documents located on various computers connected to the Internet. When the virtual nature of the documents is to be emphasized, their totality is characterized as hyperspace. It is quite obvious that the Internet, the Web, and hyperspace form an inseparable trinity. Their subjects are not isolated individuals but a networked communication community. Accordingly, the concepts of communication, group discourse, and social community come to the fore. All of these concepts were considered by philosophers long before the World Wide Web appeared in the late 1980s. The results of their analysis can shed light on the nature of the Internet and the Web. Let us present them in the most economical form.

The concept of communication is the result of a long process of understanding the nature of interactions between people. But it is not enough to say that people interact: it is important to understand the conceptual content of that interaction. Acting as social beings, people strive to optimize their values. Communication is an exchange of values, the result of which is agreement (consensus) or disagreement (dissensus). Hermeneutic philosophers (H.-G. Gadamer, J. Habermas) assign greater ethical weight to agreement than to disagreement. Poststructuralists (J. Derrida, J.-F. Lyotard) take the exact opposite view: for them, dissensus is ethically more significant than consensus. Neither side, however, can imagine social reality without discourse, the exchange of value-laden judgments. Discourse always implies the presence of some community of people: participants in discourse are, by definition, not atoms claiming individual privacy.

So, in what follows we must constantly keep in mind the inseparable trinity of concepts: communication, discourse, and community. All of them appear in different guises depending on the kind of knowledge in question. These concepts are most often considered in the context of: 1) computer science; 2) management; 3) economics; 4) political science; 5) sociology; 6) psychology; 7) everyday knowledge.

Researchers do not always distinguish between levels of knowledge. In that case, in pursuit of universal values, they are led astray by superficial reasoning such as "the Web is good" or "the Internet is evil". Such judgments are substantive only at first glance; on closer examination they turn out to need specification, which is impossible without recourse to the conceptual wealth of the sciences. With this in mind, let us consider the Internet and the Web in the context of the various sciences, as well as of non-scientific knowledge.

Network from the perspective of computer science

Of course, the phenomena that interest us have absorbed all the richness of computer science as a science. But five “pillars” were of decisive importance in the formation and development of the Web: hypertext, HTML, URL, HTTP and search engines.

Hypertext is a document that includes references to other texts. The term was coined and introduced into computer science by the American Ted Nelson in 1965. The primary feature of hypertext is its branching rather than linear nature. Knowledge is realized in the form of cross-references; consequently texts intersect, and that, as is well known, is a necessary feature of dialogue. The remarkable achievement of the specialists who developed the concept of hypertext was to create the technological ability to reproduce discourse in the form of intertextuality, whose peculiarity is that the initiative constantly passes from one person to another. Hypertext provides this opportunity. In the early 20th century, the philosophers L. Wittgenstein and M. Heidegger initiated a linguistic turn under the motto "language is more important than mentality". In the course of its realization it was also recognized that dialogue is more important than monologue. Intersecting texts are structurally and semantically far richer than a linear construction.

HTML (HyperText Markup Language) is the standard language for structuring and formatting documents on the Web. Text documents containing HTML code are processed and displayed in formatted form by browsers.

URL (Uniform Resource Locator) is a uniform locator (location identifier) of a resource on the Internet. Every resource is assigned a name by which it is found, and to which it responds, on the Internet.

HTTP (HyperText Transfer Protocol) is the hypertext transfer protocol. The consumer (client) sends a request to the provider (server), which performs the necessary actions and returns a message with the result. In both the request and the response, the resource is specified according to a specific encoding method.

The concepts of HTML, URL, and HTTP were developed by the creator of the World Wide Web, the British scientist Tim Berners-Lee, in 1990-1992. Berners-Lee's genius was manifested above all in his deep understanding of the conceptual structure of the Web.

A search system is a software and hardware complex that makes it possible to search for documents on the Internet. The software part of a search system that provides its functionality is called a search engine. The main criterion of a search engine's quality is relevance, i.e., the degree to which the results found correspond to the query. According to numerous surveys, Google is the most popular search engine today. Of course, there is no universal search engine. Different search strategies lead to new knowledge. It is always important to remember that a search is carried out not at random but in connection with a decision being made. Thus, search triggers a mechanism for synthesizing new knowledge, and this is impossible without communication with other subjects of the Network and, consequently, without the formation of one or another virtual community of people, for example the adherents of the Yandex search engine, so popular on the Runet. As we can see, the concepts of communication, discourse, and community receive a concrete form in computer science.
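The notion of relevance above, the degree to which a result matches the query, can be illustrated with a deliberately crude sketch: rank a few invented documents by how many distinct query words each contains. Real search engines use vastly more elaborate scoring, so this is only a toy model of the idea:

```python
# A toy illustration of "relevance" as described above: rank invented
# documents by the number of distinct query words each contains.
# Real search engines use far more elaborate scoring.

def score(query: str, document: str) -> int:
    """Count how many distinct query words occur in the document."""
    doc_words = set(document.lower().split())
    return sum(1 for word in set(query.lower().split()) if word in doc_words)

DOCS = {
    "a": "the world wide web is a hypertext system",
    "b": "weather report for geneva",
    "c": "hypertext links connect documents on the web",
}

def search(query: str) -> list:
    """Return document ids sorted from most to least relevant."""
    return sorted(DOCS, key=lambda d: score(query, DOCS[d]), reverse=True)

print(search("hypertext web"))  # documents 'a' and 'c' rank above 'b'
```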

The conceptual foundations of the Internet and the Web were discussed above. All of them, of course, have undergone and continue to undergo numerous metamorphoses; HTML, URLs, HTTP, search engines, and browsers all have numerous competitors. To understand their history, one must construct the appropriate problem series and their interpretations. What was important for us was to identify the main conceptual nodes of the Web that belong to computer science itself.

Today, using the Internet has become commonplace. Going online is sometimes easier than getting up from the couch to turn on the TV because the remote control has disappeared somewhere again :). Why, many people don’t even watch TV anymore, because the Internet has everything they need, except that they don’t feed them... yet.

But who came up with what we use every day, even hourly? Do you know? I had no idea until now. The World Wide Web was invented by Sir Timothy John Berners-Lee, who is also the author of many other major developments in this field.

Timothy John Berners-Lee was born on June 8, 1955 in London, into an unusual family. His parents were the mathematicians Conway Berners-Lee and Mary Lee Woods, who did research on the creation of one of the first computers, the Manchester Mark I.

It must be said that the times themselves were conducive to breakthroughs in information technology: a decade earlier, the American scientist Vannevar Bush had proposed the concept underlying hypertext, a remarkable phenomenon offering an alternative to the usual linear structure of narrative and exposition, one that went on to influence many areas of life, from science to art.

And a few years after Tim Berners-Lee's birth, Ted Nelson proposed creating a "documentary universe" in which all the texts ever written by humanity would be linked together by what we would today call cross-references. On the eve of the Web's invention, these and many other developments certainly created fertile ground and prompted the corresponding reflections.

At the age of 12, his parents sent him to Emanuel School in Wandsworth, where he showed an interest in the exact sciences. After finishing school he went up to Oxford, where he and a friend were caught hacking and were banned from using the university's computers. This unfortunate circumstance prompted Tim to assemble his first computer on his own, based on the Motorola M6800 processor, with an ordinary TV in place of a monitor and a broken calculator in place of a keyboard.

Berners-Lee graduated from Oxford in 1976 with a degree in physics, after which he began his career at Plessey Telecommunications Ltd, where he worked on distributed transactions. A couple of years later he moved to another company, DG Nash Ltd, where he developed software for printers. It was there that he first created an early analogue of a future operating system capable of multitasking.

His next place of work was CERN, the European nuclear research laboratory located in Geneva (Switzerland). There, as a software consultant, Berners-Lee wrote the Enquire program, which used the method of random associations. Its principle of operation in many ways anticipated the creation of the World Wide Web.

This was followed by three years as a systems architect and a research stint at CERN, where he developed a number of distributed data-collection systems. There, in 1989, he first proposed a hypertext-based project, the forerunner of the modern Web. The project later became known as the World Wide Web.

In a nutshell, its essence was as follows: the publication of hypertext documents that would be interconnected by hyperlinks. This made it much easier to find information, organize it and store it. It was originally intended that the project would be implemented on the CERN intranet for local research needs, as a modern alternative to libraries and other data repositories. At the same time, downloading and accessing data was possible from any computer connected to the WWW.

Work on the project continued from 1991 to 1993 in the form of gathering user feedback, coordination, and various improvements to the World Wide Web. In particular, the first versions of URL (as a special case of URI), HTTP, and HTML were proposed. The first hypertext browser, WorldWideWeb, which was also a WYSIWYG editor, was introduced as well.

In 1991, the very first website was launched. Its content was introductory and supporting information about the World Wide Web: how to install a web server, how to connect to the Internet, how to use a web browser. It also hosted an online catalog with links to other sites.

Since 1994, Berners-Lee has held the 3Com Founders Chair at the MIT Laboratory for Computer Science (now the Computer Science and Artificial Intelligence Laboratory), where he works as a principal investigator.

In 1994, he founded the World Wide Web Consortium (W3C) at the laboratory; to this day the Consortium develops and implements standards for the Internet. In particular, it works to ensure the stable and continuous development of the World Wide Web, in line with the latest user requirements and the state of technical progress.

In 1999, Berners-Lee's famous book Weaving the Web was published. It describes in detail the work on the key project of the author's life, discusses the prospects for the development of the Internet and Internet technologies, and sets out a number of fundamental principles. Among them:

— the importance of Web 2.0 and the direct participation of users in creating and editing website content (Wikipedia and social networks are striking examples);
— close relationship of all resources with each other through cross-references in combination with equal positions of each of them;
— moral responsibility of scientists introducing certain IT technologies.

Since 2004, Berners-Lee has been a professor at the University of Southampton, where he works on the Semantic Web project. This is a new version of the World Wide Web in which all data is suitable for processing by programs, a kind of "add-on" in which every resource carries not only ordinary text "for people" but also specially encoded content that a computer can understand.
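The "add-on" idea, human-readable text plus machine-readable data in the same page, can be illustrated with JSON-LD, one format used on today's Web for such annotations. The page and its metadata below are invented for this sketch, which simply extracts the machine-readable block:

```python
# An illustration of the Semantic Web idea described above: a page that
# carries ordinary text "for people" alongside specially encoded content
# for machines (here, a JSON-LD block). The page content is invented.

import json
import re

page = """<html><body>
<p>Tim Berners-Lee proposed the World Wide Web in 1989.</p>
<script type="application/ld+json">
{"@type": "Person", "name": "Tim Berners-Lee", "jobTitle": "Inventor of the WWW"}
</script>
</body></html>"""

def extract_metadata(html: str) -> dict:
    """Pull the machine-readable JSON-LD block out of a page."""
    match = re.search(
        r'<script type="application/ld\+json">\s*(.*?)\s*</script>',
        html, re.DOTALL)
    return json.loads(match.group(1)) if match else {}

print(extract_metadata(page)["name"])  # Tim Berners-Lee
```

A person reads the paragraph; a program reads the structured block, and both describe the same resource.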

His second book, Spinning the Semantic Web: Bringing the World Wide Web to Its Full Potential, was published in 2005.

Tim Berners-Lee currently holds the rank of Knight Commander, conferred by Queen Elizabeth II, is a Distinguished Fellow of the British Computer Society, a foreign member of the US National Academy of Sciences, and much else. His work has received numerous awards, including the Order of Merit, a place on Time magazine's list of the "100 Greatest Minds of the Century" (1999), the Quadriga Award in the "Knowledge Network" category (2005), and the M. S. Gorbachev award in the "Perestroika: The Man Who Changed the World" category (2011).

Unlike many of his successful peers, Berners-Lee has never been noted for a particular desire to monetize his projects and inventions or to extract outsized profits from them. His manner of speaking is described as a "rapid stream of thought" accompanied by occasional digressions and self-irony. In a word, all the signs of a genius living in his own "virtual" world, one that has nonetheless had a colossal impact on the world of today.