Dino Esposito: Development of modern web applications. Analysis of subject areas and technologies. What you need to learn to develop modern web applications

In this regard, the question is: what else do you need to know?
In any case, you need a backend. If I understand correctly, Angular, Vue and other frameworks are only frontend.
That’s right. Everywhere you turn, web development is spoken of as front-end development, and it is invariably tied to Node.js (you can’t get anywhere with Angular without it). I don’t understand how the frontend is connected to Node.js, because Node.js is essentially a way to run JS outside the browser.
Most likely you are reading articles about the frontend, which is why there is nothing about the backend in them. As you know, the frontend is written in JS, and many people are captivated by the idea of installing Node.js on the backend and building websites in a single language. But if I want to run an application in a browser, why do I need Node at all? All of this confuses me; I see only contradictions.
Don't be confused. There are technologies the application uses while it runs, and technologies used while the application is being developed. Gulp, Grunt, Babel, Webpack, and the rest are development tools: they speed up, simplify, and improve the quality of the work. jQuery, Angular, and React, on the other hand, are the libraries and frameworks the application itself runs on.

If websites used to be built with a couple of technologies, a modern application can use dozens or even hundreds of them: different programming languages, libraries, frameworks, services, and so on. All of this is often called a technology "zoo".

Here I can only assume that the server, instead of HTML, should exchange data with the application via JSON or something similar.
Yes, JSON is the most common format. You need a backend framework on which you can build a REST API. As far as I know, most modern frameworks in the programming languages used for web development can do this; I can't say for sure, since I work within a single language. In any case, the server is the foundation of any networked application, and the server side is what you should develop first.
Definitely. Modern single-page applications (SPA) consist of two separate parts, frontend and backend. They can be built completely independently by different developers; the main thing is to agree on the data transfer format and all the nuances.

The beauty of a SPA is in the separation of these parts. Either of them can be replaced without serious consequences. One backend can serve websites and mobile applications and provide data access to third-party partner applications, all through a single API.
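As a rough sketch of that contract (assuming Node.js with Express on the server; the route, port, and field names are invented for illustration):

```javascript
// server.js — a minimal JSON API (Node.js + Express; illustrative only)
const express = require('express');
const app = express();

// The same endpoint can serve a website, a mobile app, or a partner service.
app.get('/api/orders', (req, res) => {
  // A real application would read this from a database.
  res.json([{ id: 1, total: 49.9 }, { id: 2, total: 12.0 }]);
});

app.listen(3000);

// client.js — any frontend (Angular, Vue, plain JS) consumes the same API
fetch('http://localhost:3000/api/orders')
  .then((res) => res.json())
  .then((orders) => console.log(orders));
```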

What else needs to be studied? Or is the knowledge listed sufficient?
I don't think that's enough. You will need to determine precisely which tasks your project must solve and select technologies for them. Focus on one thing; you won't manage to learn everything modern, there isn't enough time. Is it possible to do without Node.js (and, accordingly, npm) if JS (TS) is needed only in the browser? Testing is needed as well.
Yes, quite possible. On the client side, for example, JS + Angular; on the backend, for example, PHP + Laravel. There are a lot of languages now and even more frameworks for them. Choose whatever is easier for you.

JavaScript-driven interfaces have been spreading recently, mainly due to UX and performance.

I want to present 7 actionable principles for websites that want to use JavaScript to control the UI. These principles are the result of my work as a web designer, but also as a long-time user of the WWW.

JavaScript has undoubtedly become an indispensable tool for front-end developers, and its scope is now expanding to other areas, such as servers and microcontrollers. Prestigious universities have chosen this programming language to teach students the basics of computer science.

At the same time, there are a number of questions regarding its role and specific use, which many find it difficult to answer, including the authors of frameworks and libraries.

  • Should JavaScript be used as a replacement for browser functions: history, navigation, rendering?
  • Is the backend dying? Is it necessary to render HTML at all?
  • Is it true that Single Page Applications (SPAs) are the future?
  • Should JS be used to generate pages on websites and render pages in web applications?
  • Should I use techniques like PJAX or TurboLinks?
  • What exactly is the difference between a website and a web application? Should there be only one of the two?
What follows are my attempts to answer these questions. I have tried to examine the use of JavaScript from a user experience (UX) perspective. In particular, I paid special attention to the idea of minimizing the time it takes the user to get the data they are interested in, starting from the basics of network technologies and ending with predicting future user behavior.

1. Rendering pages on the server

tl;DR: Server rendering is done not for the sake of SEO but for performance. Account for the additional requests for scripts and styles and for the subsequent API requests. Going forward, consider using HTTP/2 PUSH.
First of all, I have to point out the common mistake of opposing "server-rendered applications" to "single-page applications". If we want the best experience from the user's point of view, we must not confine ourselves to that dichotomy, abandoning one alternative in favor of the other.

The reasons are quite obvious. Pages are transmitted over the Internet, which has physical limitations, as Stuart Cheshire memorably illustrated in the famous essay "It's the Latency, Stupid":

The distance from Stanford to Boston is 4320 km.
The speed of light in a vacuum is 300 × 10^6 m/s.
The speed of light in optical fiber is roughly 66% of the speed of light in a vacuum.
The speed of light in optical fiber is therefore 300 × 10^6 m/s × 0.66 = 200 × 10^6 m/s.
The one-way delay to Boston is 4320 km / 200 × 10^6 m/s = 21.6 ms.
The round-trip latency is 43.2 ms.
A ping from Stanford to Boston on today's Internet takes about 85 ms (...)
So, modern Internet equipment transmits the signal at about 0.5 times the speed of light.
The quoted 85 ms can be improved (and is already slightly better today), but it is important to understand that there is a physical lower bound on the latency of transmitting information over the Internet, no matter how much the bandwidth available to users grows.

This is especially important given the rising popularity of JavaScript applications that typically contain no markup beyond a script tag and an empty container. In these so-called single-page applications (SPA), the server returns one page, and everything else is fetched by code on the client side.

Imagine a scenario where a user directly opens app.com/orders. By the time your application receives and processes the request, it already has important information about what needs to be shown on the page. It could, for example, load the orders from the database and include them in the response. But most SPAs in this situation return an empty page and a <script> tag; then another roundtrip is needed to fetch the script's contents, and yet another to fetch the data.
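The difference is easy to see in a sketch of the two kinds of response (Express-style; db.ordersFor and renderOrdersPage are hypothetical stand-ins, stubbed here so the sketch is self-contained):

```javascript
// Two ways to answer GET /orders (illustrative sketch, not a real app).
const db = { ordersFor: async (user) => [{ id: 1, total: 49.9 }] }; // stub
const renderOrdersPage = (orders) =>
  `<ul>${orders.map((o) => `<li>Order #${o.id}</li>`).join('')}</ul>`; // stub

// (a) Typical SPA shell: the data arrives only after the bundle has been
// fetched, parsed, and executed — at least two extra roundtrips.
function spaShell(req, res) {
  res.send('<div id="app"></div><script src="/bundle.js"></script>');
}

// (b) Server-rendered: the server already has the request context,
// so it can inline the markup it would otherwise serve via the API.
async function serverRendered(req, res) {
  const orders = await db.ordersFor(req.user);
  res.send(renderOrdersPage(orders));
}
```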

Parsing the HTML sent by the server for each SPA page

Many developers deliberately make this sacrifice, trying to ensure that the extra network hops happen only once per user by sending the correct caching headers in the script and CSS responses. The conventional wisdom is that this is a good trade, because once all the files are on the user's machine, most actions (like navigating to other sections) require no additional pages or scripts.

However, even allowing for the cache, there is a performance cost to parsing and executing the scripts. The article "Is jQuery too big for mobile?" describes how jQuery alone can slow some mobile browsers down by hundreds of milliseconds.

To make matters worse, the user usually gets no feedback whatsoever while the scripts are loading. The result is a blank page on the screen that then suddenly turns into a fully loaded page.

Most importantly, we tend to forget that the most common Internet transport (TCP) starts slowly. This all but guarantees that most script bundles will not be transferred in one go, making the situation described above even worse.

A TCP connection begins with an exchange of handshake packets. If you use SSL, which is important for secure script delivery, there are two additional packet exchanges (one if the client resumes a session). Only after that can the server begin sending data, and in practice it does so slowly and in batches.

TCP includes a congestion-control mechanism called Slow Start: data is sent in a gradually increasing number of segments. This has two serious implications for SPAs:

1. Large scripts take much longer to load than it seems. As Ilya Grigorik explains in his book "High Performance Browser Networking", it takes "four roundtrips (...) and hundreds of milliseconds of latency to reach 64 KB of throughput between client and server". For example, even with a fast Internet connection between London and New York, it takes 225 ms before TCP can reach its maximum packet size.

2. Since this rule also applies to the initial page load, what gets loaded and rendered first matters a great deal. As Paul Irish concludes in his presentation "Delivering the Goods", the first 14 KB are critical. This becomes clear from a graph of the transfer volumes between client and server during the first stages of establishing a connection.
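A toy model of Slow Start makes these numbers tangible. The sketch below assumes an initial congestion window of 4 segments of ~1460 bytes that roughly doubles every roundtrip, as in Grigorik's example; real TCP stacks differ (modern ones typically start at 10 segments):

```javascript
// Roughly how many roundtrips does TCP Slow Start need to deliver `bytes`?
function roundtripsFor(bytes, initialSegments = 4, segmentSize = 1460) {
  let congestionWindow = initialSegments; // segments sent per roundtrip
  let delivered = 0;
  let trips = 0;
  while (delivered < bytes) {
    delivered += congestionWindow * segmentSize;
    congestionWindow *= 2; // the window roughly doubles each RTT
    trips += 1;
  }
  return trips;
}

console.log(roundtripsFor(64 * 1024));     // 4 roundtrips for 64 KB
console.log(roundtripsFor(14 * 1024, 10)); // 1 — why the first 14 KB matter
```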


How many KB the server can send at each stage of the connection, by segment

Websites that manage to deliver content (even just the basic markup, without data) within this window feel exceptionally responsive. In fact, many authors of fast server-rendered applications regard JavaScript as unnecessary, or as something to be used with great caution. This attitude is reinforced when the application has a fast backend and database and its servers are located near its users (on a CDN).

The role of the server in accelerating the presentation of content depends directly on the web application. The solution doesn't always boil down to "render entire pages on the server."

In some cases it is better to exclude from the initial response the parts of the page the user does not need right now and deliver them later. Some applications, for example, prefer to render only the "core" of the page to achieve immediate responsiveness, and then request the remaining parts of the page in parallel. This gives better responsiveness even with a slow, legacy backend. For some pages, rendering only the visible (above-the-fold) portion is a good option.

It is extremely important to evaluate scripts and styles against the information the server has about the session, the client, and the URL. The scripts that sort orders are obviously more important on /orders than the settings-page logic. Perhaps less obviously, there is a difference between "structural CSS" and "styling CSS": the former may be required by the JavaScript code, so it must block, while the latter can be loaded asynchronously.
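One common way to express that split in code (a minimal sketch; the file names are made up):

```javascript
// structural.css is critical for layout and for the JS that measures it,
// so it stays a blocking <link> in the document head.
// styling.css is purely cosmetic, so it is attached asynchronously and
// never delays the first render.
function loadStyleAsync(href) {
  const link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = href;
  document.head.appendChild(link); // applied without blocking rendering
}

loadStyleAsync('/css/styling.css');
```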

A good example of an SPA that avoids needless packet exchanges is a 4096-byte proof-of-concept StackOverflow clone, which can theoretically load within the first packet after the handshake of a TCP connection! The author achieved this by forgoing caching and inlining all the resources in the server's response. With SPDY or HTTP/2 server push, it is theoretically possible to deliver all the cacheable client code in a single hop. For now, though, rendering part or all of a page on the server remains the most popular way to eliminate unnecessary roundtrips.


A proof-of-concept SPA that inlines its CSS and JS to get rid of unnecessary roundtrips

A sufficiently flexible system that splits rendering between the browser and the server and provides tools for progressively loading scripts and styles could well blur the line between websites and web applications. Both use URLs and navigation and present data to the user. Even a spreadsheet application, which traditionally relies on client-side functionality, must first show the client the data to be edited, and doing so in the fewest roundtrips is of paramount importance.

From my point of view, the biggest performance flaw of many popular systems today is explained by the progressive accumulation of complexity in the stack. Technologies like JavaScript and CSS were added over time, and their popularity grew gradually; only now can we appreciate how differently they can be used. Improving the protocols helps too (as the current progress of SPDY and QUIC shows), but the greatest benefit comes from optimizing the applications themselves.

It is useful to recall some of the historical discussions around the design of the earlier versions of HTML and the WWW. For example, a 1997 mailing list thread discussing the addition of a new tag to HTML, in which Marc Andreessen reiterates the importance of delivering information quickly:

“If a document needs to be put together on the fly, it can be as complex as we want, and even if we limit the complexity, we will still have major performance problems from structuring documents this way. First of all, this immediately breaks the WWW one-hop principle (well, IMG breaks it too, but for a very specific reason and in a very limited sense) - are we sure we want this?”

2. Immediate response to user actions

tl;DR: JavaScript allows you to hide network latency entirely. Using this as a design principle, we can even remove almost all loading indicators and “loading” messages from the application. PJAX and TurboLinks miss opportunities to increase the perceived speed of the interface.
Our task is to react to user actions as quickly as possible. No matter how much effort we put into reducing the number of hops when talking to the web application, some things are beyond our control: the theoretical limit of the speed of light and the minimum ping between client and server.

Another important factor is the unpredictable quality of the connection between client and server. If the connection is poor, packets will be retransmitted, and where content should load in a couple of roundtrips, it may take many more.

This is JavaScript's main advantage for improving UX. With the interface scripted on the client side, we can hide network latency: we can create the impression of high speed and artificially achieve zero latency.

Consider plain HTML again. Documents are connected by hyperlinks, or <a> tags. When one is clicked, the browser makes a network request that takes an unpredictably long time, receives and processes the response, and only then transitions to the new state.

JavaScript allows you to respond immediately and optimistically to user actions. Clicking a link or button produces an instant response, with no trip across the network. A well-known example is the Gmail (or Google Inbox) interface, where archiving an email message happens immediately while the corresponding request to the server is sent and processed asynchronously.
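A Gmail-style optimistic update might look roughly like this (a sketch; the endpoint and DOM structure are invented for illustration):

```javascript
// Optimistic archiving: update the UI first, talk to the server afterwards.
async function archiveMessage(row, id) {
  row.remove(); // respond to the click instantly — no spinner, no waiting

  try {
    const res = await fetch(`/api/messages/${id}/archive`, { method: 'POST' });
    if (!res.ok) throw new Error(res.statusText);
  } catch (err) {
    // The rare failure case: put the row back and tell the user.
    document.querySelector('#inbox').prepend(row);
    alert('Could not archive the message, please try again.');
  }
}
```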

In the case of a form, instead of waiting for HTML in response to its submission, we can react immediately, as soon as the user presses Enter. Better still, like Google search, we can react even earlier by preparing the markup for the new page in advance.

This behavior is an example of what I call markup adaptation. The basic idea is that the page "knows" its future markup, so it can switch to it before there is any data to populate it with. This behavior is "optimistic", because there is a risk that the data never arrives and an error will have to be shown instead, but that is naturally rare.

Google's home page is a good example because it demonstrates the first two principles from our article very clearly.

In late 2004, Google pioneered the use of JavaScript to provide real-time suggestions as a search query is being typed (interestingly, an employee developed the feature in his 20% time, just like Gmail). It even became the basis for the term Ajax:

Look at Google Suggest. Watch your search terms update as you type, almost instantly... without waiting for the page to reload. Google Suggest and Google Maps are two examples of a new approach to building web applications that we at Adaptive Path have called "Ajax".
And in 2010, they introduced Instant Search, in which JS plays a central role, eliminating manual page refreshes altogether and switching to “search results” markup on the first keystroke, as seen in the illustration above.

Another prominent example of markup adaptation may be sitting in your pocket. From its very first days, iPhone OS required application authors to provide a default.png image that could be displayed on screen immediately while the application itself was loading.


iPhone OS displays default.png before the app has finished launching

Another type of action, besides clicks and form submissions, that improves greatly with JavaScript is the rendering of file uploads.

We can capture a user's intent to upload a file in several ways: drag-and-drop, paste from the clipboard, file selection. Then, thanks to the new HTML5 APIs, we can display the content as if it had already been uploaded. An example of this kind of interface is our work with uploads in Cloudup. Notice how the image thumbnail is generated and rendered instantly:
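The instant thumbnail can be produced before a single byte reaches the server (a sketch; the element ids and endpoint are hypothetical):

```javascript
// Render a local preview immediately, then upload in the background.
document.querySelector('#file-input').addEventListener('change', (e) => {
  const file = e.target.files[0];
  if (!file) return;

  const img = document.createElement('img');
  img.src = URL.createObjectURL(file); // local preview, no network involved
  document.querySelector('#gallery').appendChild(img);

  const body = new FormData();
  body.append('file', file);
  fetch('/api/uploads', { method: 'POST', body }); // the real upload, async
});
```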


The image is rendered and displayed before the upload finishes

In all these cases we improve the perception of speed, and fortunately there is plenty of evidence that the approach works. Consider how increasing the walking distance to the baggage claim at Houston's airport reduced the number of complaints about lost luggage, without any speed-up in baggage handling.

This idea should seriously impact the UI of our applications. I believe that loading indicators should become a rarity, especially as we move to real-time information applications, which are described in the next section.

There are situations where the illusion of instant action is actually detrimental to UX, such as a payment form or ending a session on a site. By taking the optimistic approach there, de facto deceiving the user, we risk irritating them.

But even in those cases we should stop throwing spinners and loading indicators onto the screen immediately. They should appear only once the user would no longer perceive the response as instantaneous. According to an oft-cited Nielsen study:

Basic advice on response times has remained the same for thirty years (Miller 1968; Card et al. 1991):
  • 0.1 seconds is the limit for the user to perceive the response as immediate; no feedback is needed beyond the result of the operation itself.
  • 1.0 second is the limit for the user's train of thought to stay uninterrupted, even though the delay will be noticed. Between 0.1 and 1.0 seconds no extra indication is normally needed, but the user loses the feeling of operating directly on the data.
  • 10 seconds is the limit for keeping the user's attention on the dialogue. With longer delays, users will want to switch to other tasks while waiting for the computer to respond.
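These thresholds translate directly into code: show nothing for fast responses and reveal an indicator only once the reply stops feeling immediate (a minimal sketch; the one-second default follows the numbers above):

```javascript
// Show a spinner only if the operation exceeds the "immediate" threshold.
async function withLateSpinner(promise, spinner, delayMs = 1000) {
  const timer = setTimeout(() => { spinner.hidden = false; }, delayMs);
  try {
    return await promise;
  } finally {
    clearTimeout(timer);
    spinner.hidden = true; // fast responses never flash a spinner at all
  }
}

// Usage:
// withLateSpinner(fetch('/api/orders'), document.querySelector('#spinner'));
```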
Techniques like PJAX or TurboLinks unfortunately miss most of the possibilities described in this section: the client-side code does not "know" the future state of the page until the exchange with the server has taken place.

3. Response to data change

tl;DR: When data changes on the server, the client should be notified without delay. This is a form of performance improvement in which the user is freed from extra actions (pressing F5, refreshing the page). New issues: (re)connection management and state recovery.
The third principle relates to the UI's response to changes in data at the source, typically one or more database servers.

The model of transmitting HTML data that stays static until the user refreshes the page (traditional websites) or interacts with it (Ajax) is becoming a thing of the past.

Your UI should update automatically.

This is critical in a world with an ever-growing flow of information from different sources: watches, phones, tablets, and the wearable devices yet to come.

Imagine the Facebook News Feed right after its introduction, when information was published primarily from users' personal computers. Static rendering wasn't optimal, but it made sense for people who refreshed their feed, say, once a day.

We now live in a world where you upload a photo and almost immediately receive likes and comments from friends and acquaintances. The need for instant response has become a natural necessity in the competitive environment of other applications.

It would be wrong, however, to assume that the benefits of instant UI updates are limited to multi-user applications. That is why I prefer to talk about consistent data points rather than users. Take the typical scenario of synchronizing photos between a phone and a laptop:


A single-user application can also benefit from reactivity.

It is helpful to think of all the information sent to the user as "reactive". Synchronizing session and authorization state is one example of this universal approach: if users of your application have several tabs open at once, ending the session in one of them should immediately invalidate it in all the others. This inevitably improves security and better protects confidential information, especially where several people have access to the same device.
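Cross-tab session reactivity does not even require a server push channel, because the browser's storage event fires in every other open tab (a sketch; the storage key is arbitrary):

```javascript
// In the tab where the user logs out:
function logout() {
  localStorage.setItem('logged-out-at', String(Date.now()));
  // ...and, of course, invalidate the session on the server as well.
}

// In every other open tab (the storage event does not fire in the
// tab that made the change):
window.addEventListener('storage', (e) => {
  if (e.key === 'logged-out-at') {
    window.location.assign('/login'); // drop the privileged UI at once
  }
});
```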


Each page reacts to session state and authorization status

Once you have established the rule that information on the screen updates automatically, it is important to work on a new task: state recovery.

When sending requests and receiving atomic updates, it is easy to forget that your application should also recover correctly after long periods of inactivity. Imagine closing the lid of your laptop and opening it a few days later. How will the application behave?


An example of what happens when reconnection does not recover state correctly

The application's ability to reconnect gracefully interacts with principle #1. If you decide to send data with the very first page load, you must also account for the time that passes before the scripts load. That time is effectively equivalent to a disconnection, so the initial load of your scripts is itself a resumption of the session.
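One simple recovery scheme is to remember the timestamp of the last applied update and ask for everything newer on every (re)connect (a sketch assuming a socket.io-style client; the /api/updates endpoint and the applyUpdate helper are hypothetical):

```javascript
const socket = io(); // socket.io client, assumed to be loaded on the page
const applyUpdate = (u) => { /* apply the change to the UI */ };
let lastUpdateAt = Date.now(); // set when the initial page data was rendered

socket.on('update', (update) => {
  applyUpdate(update);
  lastUpdateAt = update.at;
});

// On every connect — including the very first one after the scripts load —
// treat the elapsed gap as a disconnection and backfill what was missed.
socket.on('connect', async () => {
  const res = await fetch(`/api/updates?since=${lastUpdateAt}`);
  for (const update of await res.json()) {
    applyUpdate(update);
    lastUpdateAt = update.at;
  }
});
```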

4. Control of the data exchange with the server

tl;DR: We can now fine-tune the data exchange with the server: handle errors, retry requests in the client's favor, synchronize data in the background, and keep a cache for offline use.
When the web came into being, communication between client and server was limited in several ways:
  • Clicking a link sends a GET, fetching a new page and rendering it.
  • Submitting a form sends a POST or GET, followed by the rendering of a new page.
  • Embedding an image or object sends an asynchronous GET, followed by rendering.

The simplicity of this model is very attractive, but things have certainly become more complex when it comes to understanding how information is received and sent.

The main restriction concerns the second point: the inability to send data without loading a whole new page was a disadvantage from a performance perspective. Most importantly, though, it completely broke the Back button:


Probably the most annoying artifact of the old web

This is why the web remained incomplete as an application platform without JavaScript. Ajax represented a huge leap forward in the ease with which users could publish data.

We now have many APIs (XMLHttpRequest, WebSocket, EventSource, to name a few) that give complete, precise control over the flow of data. Beyond the ability to publish user data through a form, they open new opportunities for improving UX.

Directly related to the previous principle is showing the connection status. If we expect data to update automatically, we must inform the user about lost connections and reconnection attempts.

When a disconnection is detected, it is useful to keep the data in memory (or, better, in localStorage) so that it can be sent later. This is especially important in light of the coming ServiceWorker, which lets JavaScript applications run in the background: even if your application is not open, it can keep trying to synchronize data with the server.

Allow for timeouts and errors when sending data, and resolve such situations in the client's favor: if the connection is restored, try sending the data again, and only in the case of a permanent error inform the user.
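A minimal client-favoring send queue might look like this (a sketch; the storage key and endpoint are invented):

```javascript
// Queue outgoing data in localStorage and retry whenever we come back online.
function send(payload) {
  const queue = JSON.parse(localStorage.getItem('outbox') || '[]');
  queue.push(payload);
  localStorage.setItem('outbox', JSON.stringify(queue));
  flush();
}

async function flush() {
  const queue = JSON.parse(localStorage.getItem('outbox') || '[]');
  while (queue.length > 0) {
    try {
      await fetch('/api/events', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(queue[0]),
      });
      queue.shift(); // drop an item only after a confirmed send
      localStorage.setItem('outbox', JSON.stringify(queue));
    } catch (err) {
      return; // still offline — the 'online' handler below will retry
    }
  }
}

window.addEventListener('online', flush);
```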

Some errors need especially careful handling. For example, an unexpected 403 may mean that the user's session has been invalidated; in such cases the session can be restored by showing the user a login window.

It is also important to make sure the user does not interrupt the data flow by accident. This can happen in two situations. The first and most obvious is closing the browser or tab, which is what we try to prevent with a beforeunload handler.
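The classic guard looks roughly like this (note that modern browsers show their own generic text and ignore any custom message; the pending-uploads counter is an assumed piece of application state):

```javascript
let pendingUploads = 0; // assumed app state: incremented while data is in flight

// Warn before the tab closes while an upload or unsent data is pending.
window.addEventListener('beforeunload', (e) => {
  if (pendingUploads > 0) {
    e.preventDefault();
    e.returnValue = ''; // required by some browsers for the prompt to appear
  }
});
```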


A beforeunload warning

    Another (and less obvious) case is when you try to navigate to another page, such as clicking on a link. In this case, the application can stop the user using other methods, at the discretion of the developer.

5. Don't break history, improve it

tl;DR: When the browser doesn't manage URLs and history, new problems appear. Make sure you meet the expected scrolling behavior. Keep your own cache for fast feedback.
Setting aside form submissions, using nothing but hyperlinks in a web application gives us fully functional Back/Forward navigation in the browser for free.

For example, a typical "endless" page is usually built with a JavaScript button that requests additional data/HTML and inserts it. Unfortunately, few people remember to call history.pushState or replaceState as a required step of the process.
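Fixing this takes one extra line per loaded chunk (a sketch; the URL scheme and element ids are made up):

```javascript
// After appending page N of an "endless" feed, record it in the URL so that
// Back/Forward and reload put the user back where they actually were.
async function loadMore(page) {
  const res = await fetch(`/feed?page=${page}`);
  const html = await res.text();
  document.querySelector('#feed').insertAdjacentHTML('beforeend', html);
  history.replaceState({ page }, '', `/feed?page=${page}`);
}
```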

That is why I use the word "break". In the simple model of the original web this situation was impossible, because every state change was based on a URL change.

But there is a flip side: the opportunity to improve the browsing history that we now control with JavaScript.

One such opportunity is what Daniel Pipius called Fast Back:

The Back button should be fast; users don't expect the data to have changed much.
In other words, treat the Back button as a button of your web application and apply principle #2 to it: respond to the user immediately. The key is that you get to decide how to cache the previous page and render it instantly; you can then apply principle #3 and inform the user when new data arrives for that page.

There are still situations where you have no control over caching behavior, for example if you render a page, the user leaves for a third-party site, and then clicks Back. Applications that render HTML on the server side and then modify it on the client are especially prone to this little bug:


Incorrect operation of the "Back" button

Another way to break navigation is to ignore scroll position memory. Once again, pages that don't use JS and manual history management usually have no problem here, but dynamic pages do. I tested the two most popular JavaScript-based news feeds on the Internet, Twitter and Facebook, and both suffered from scrolling amnesia.
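Scroll amnesia can be treated by keeping the offset inside the history entry itself (a sketch; a real implementation would also account for history.scrollRestoration):

```javascript
// Persist the scroll offset into the current history entry (debounced,
// since browsers rate-limit replaceState), and restore it on Back/Forward.
let saveTimer;
window.addEventListener('scroll', () => {
  clearTimeout(saveTimer);
  saveTimer = setTimeout(() => {
    history.replaceState({ ...history.state, scrollY: window.scrollY }, '');
  }, 100);
});

window.addEventListener('popstate', (e) => {
  if (e.state && typeof e.state.scrollY === 'number') {
    window.scrollTo(0, e.state.scrollY); // put the user back where they were
  }
});
```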


Endless page turning is usually a sign of scrolling amnesia

Finally, be aware of state changes that matter only while navigating history, for example this case of toggling the state of comment subtrees.


Changes to the comment view should be preserved in history

If the page is re-rendered after a click on a link within the application, the user may expect the comments to still be expanded; the state change should be stored in the history entry.

6. Updating code via push messages

tl;DR: Pushing only data is not enough: code needs to be pushed too. This avoids API errors and improves performance. Use a stateless DOM so the application can be repainted painlessly.
It is extremely important that your application reacts to changes in its own code.

First, this reduces the number of possible errors and increases reliability. If you make a breaking change to the backend API, the client code must be updated too; otherwise clients may fail to understand the new data or may send data in an incompatible format.

An equally important reason is adherence to principle #3: if your interface updates itself, there is little reason for users to resort to reloading the page manually.

Keep in mind that for a typical site a page refresh triggers two things: a data reload and a code reload. A system with push data updates but no push code updates is incomplete, especially in a world where a single tab (session) can stay open for a very long time.

If a server push channel is available, the user can be notified that new code is available. If not, a version number can be added to the headers of outgoing HTTP requests; the server can then compare it with the latest known version, decide whether to process the request, and tell the client to update.
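One hedged way to wire this up from the client side (the header name, status code convention, and reload policy are purely illustrative):

```javascript
// Stamp every API request with the code version the client was built from.
const CLIENT_VERSION = '1.4.2'; // would be injected at build time
const scheduleReload = () => { /* e.g. reload once the tab is hidden/idle */ };

async function api(path, options = {}) {
  const res = await fetch(path, {
    ...options,
    headers: { ...options.headers, 'X-Client-Version': CLIENT_VERSION },
  });
  // Assumed convention: the server answers 426 Upgrade Required when the
  // client's version is too old to be served safely.
  if (res.status === 426) {
    scheduleReload();
  }
  return res;
}
```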

After that, some web applications forcibly reload the page on the user's behalf, for example when the page is outside the visible area of the screen and there are no filled-in form fields.

An even better approach is "hot" code swapping: instead of performing a full page reload, specific modules are replaced on the fly and their code is re-executed.

In many existing applications hot-swapping code is quite difficult. It requires adhering from the start to an architecture that separates behavior (code) from data (state). That separation lets us roll out many different patches quite quickly.

For example, our web application has a module that sets up an event bus (like socket.io). When an event arrives, the state of a particular component changes, and the change is reflected in the DOM. You can then alter the behavior of that component, for example so that it generates different DOM markup for the existing and the new state.
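The behavior/state separation that makes this possible fits in a few lines (a deliberately tiny sketch): state lives in plain data, behavior is a function of it, so swapping the function and re-rendering constitutes a complete "patch":

```javascript
// State is plain data; behavior is a pure render function of that state.
let state = { count: 0 };
let render = (s) => `<button>Clicked ${s.count} times</button>`;

function update() {
  document.querySelector('#root').innerHTML = render(state);
}

// A pushed code update simply swaps the behavior and re-renders;
// the state (and, say, an open socket connection) survives untouched.
function hotSwap(newRender) {
  render = newRender;
  update();
}
```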

Ideally, we should be able to change code modularly: there would be no need to re-establish the socket connection, for example, if only the code of one component needs updating. The ideal architecture for push code updates is therefore modular.

But this immediately raises the problem of how to evaluate modules without unwanted side effects. Here an architecture like the one React offers is best suited: if a component's code is updated, its logic can simply be re-executed and the DOM updated. Read Dan Abramov's explanation of this concept.

Essentially, the idea is that the DOM is re-rendered (repainted), and that is exactly what makes hot code replacement easy. If state is kept in the DOM, or event handlers are attached manually by the application, updating code becomes a much harder task.

7. Behavior prediction

tl;DR: Negative latency.
A modern JavaScript application may have mechanisms to predict user actions.

The most obvious application of this idea is to fetch data from the server before the user asks for it. Preloading a page when the mouse hovers over its link, so that a click displays it instantly, is a simple example.
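A naive version of hover preloading (a sketch; a real implementation would throttle requests, respect data-saving preferences, and only help if the responses are cacheable):

```javascript
// Warm the HTTP cache for same-origin links as soon as the cursor hovers.
const prefetched = new Set();

document.addEventListener('mouseover', (e) => {
  const link = e.target.closest && e.target.closest('a[href^="/"]');
  if (!link || prefetched.has(link.href)) return;
  prefetched.add(link.href);
  fetch(link.href, { credentials: 'same-origin' }); // fills the browser cache
});
```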

A slightly more advanced technique monitors mouse movement and analyzes its trajectory for future "collisions" with interactive elements such as buttons:


A jQuery plugin that predicts the mouse trajectory

Conclusion

The Web remains the most versatile medium for transmitting information. We keep adding dynamism to our pages, and as we implement it we must make sure we preserve the important principles of the web that we inherited.

Pages connected by hyperlinks are good building blocks for any application. Progressive loading of code, styles, and markup as the user interacts with the page ensures excellent performance without sacrificing interactivity.

JavaScript provides new, unique capabilities. If these techniques are widely adopted, they will give the best experience to users of the freest platform in existence: the WWW.


MODERN TOOLS FOR DEVELOPING INTERNET SITES AND WEB APPLICATIONS

Krupina Tatyana Aleksandrovna 1, Shcherbakova Svetlana Mikhailovna 1
1 Moscow Pedagogical State University, master's student


Abstract
This article reviews modern tools for developing Internet sites and Web applications. It also discusses the problems of teaching these technologies to university and school students.


The informatization of modern society involves introducing the means and methods of information and communication technologies (ICT) into various areas of human activity. A special role in this process undoubtedly belongs to the development of network technologies and communications, which manifests itself, among other things, in the creation of corporate automated information systems and e-commerce projects. Indeed, the activities of any modern enterprise are in one way or another related to creating and maintaining a corporate website.

Modern Federal State Educational Standards (FSES) in many fields, not only in engineering but also in the humanities, require graduates to have the skills to develop and administer Internet sites.

Methods and tools for developing Internet sites and Web applications are evolving rapidly: from building simple business-card sites to developing server applications that process and store large amounts of data.

To develop a simple website, such as a business-card site with a description and contact information, several approaches can be used:

  • creating an HTML document manually, i.e., typing the HTML code in an editor such as Notepad, viewing it in a browser on the client workstation, and subsequently publishing it with a provider using its hosting services;
  • creating the same HTML document in the Adobe Dreamweaver editor, taking advantage of its wide range of features and conveniences;
  • using ready-made site shells (templates) for Web sites of various subject areas and designs, likewise publishing the site on the Internet using free or paid hosting services.

Unlike simple, non-interactive sites, Web applications that run and process data on the server require methods and tools beyond those listed above. Developing a Web application involves, in addition to writing HTML, programming in a dedicated language. A common language for developing Web applications is PHP, and one also cannot do without, for example, a local Apache server and a MySQL database.

Let's look at some more Web application programming tools:

For independent Web application development you can use the freely distributed Denwer package.

Denwer (from the Russian abbreviation DNVR, a "gentleman's set of Web developer tools") is a set of software distributions and a shell designed for creating and debugging Web applications and other dynamic Web page content on a PC running Windows.

The Denwer set includes:

  • a local Apache server for running applications on the user's computer, simulating the functionality of the server on which the provider will subsequently host the developed application. Apache is cross-platform, freely distributed software supported by various operating systems;
  • the PHP programming system: a C-like language for writing program code that is embedded in a site's HTML and executed on the server to process data received from the site's users. PHP (Hypertext Preprocessor, originally Personal Home Page Tools) is a general-purpose scripting language used for developing Web applications; it was created by Rasmus Lerdorf in 1994;
  • MySQL: freely distributed database software, used among other things to handle data received from client browsers. MySQL, whose queries are written in SQL (Structured Query Language), was created by Michael Widenius of the Swedish company TcX in 1995.

The Denwer suite, or its individual components, is widely used by both amateurs and professionals to create and debug Web applications and sites. It is also widely used for educational purposes, to teach Web programming to schoolchildren and students.


What is web application development?

Web application development is a general term for the process of creating web pages or websites. Web pages are created using HTML, CSS, and JavaScript. They can contain simple text and graphics, resembling a static document, or they can be interactive and display changing information. Creating interactive pages is somewhat more complex, but it allows for much richer websites. Today most pages are interactive and provide advanced services such as shopping carts, dynamic visualizations, and even sophisticated social networks.

Applications for modern computers are developed using specialized programming languages. These introductory materials will help you become familiar with them.


Video | 15 minutes | Programming languages

This talk explains why programming languages are needed, what they are, and what purposes they serve. Markup languages (HTML), data representation languages (XML), and query languages (SQL) are also briefly mentioned.


Video | 23 minutes | Programming languages

This talk gives a brief overview of the C# programming language, its main features and constructs, and demonstrates the creation of simple console and windowed applications for Windows in Visual Studio 2010.

Explore the rich capabilities of the Windows operating system that can and should be used when developing web applications.

4 development tools


Video | 10 minutes | WebMatrix

A short story about WebMatrix, an environment for developing websites. WebMatrix allows you to create websites of varying complexity, from a home page to a small corporate portal. The environment includes a set of website templates that can be used as the basis for your own site. WebMatrix lets you create and edit site markup and code, manage databases, and publish finished sites to hosting.

Video | 11 minutes | Internet Explorer

This talk gives a brief overview of the Pinned Sites technology introduced in Internet Explorer 9 and demonstrates how to work with Jump Lists, Overlay Icons, and Thumbnail Toolbar Buttons.