Testing mock-ups and prototypes. The Five Most Annoying Aspects of CSS

Taking the working system as the benchmark (since it is the end result of all preceding development), mock-ups, prototypes and simulators can be placed along a scale of fidelity. Judged by similarity to the working system, a mock-up resembles it less than a simulator does, and a prototype is much closer to a simulator than to a mock-up. In terms of the number of units represented, a mock-up usually covers only one device, although it can represent the entire system; a prototype, like a simulator, represents the entire system (in the simulator's case, it is this device that is given to the human operator as the interface to the machine). As for the number of equipment operations or maintenance tasks that personnel can exercise, fewer checks can be carried out with a mock-up, more with a simulator or a prototype of the system (though, of course, not as many as with the real system).

For electronic equipment, mock-ups are much simpler than simulators, but if optional equipment is added (a microcomputer, software, control and display units, and so on), the mock-up ultimately turns into a simulator.

Mock-up testing is carried out only if the human factors engineer decides it is necessary, since checks are usually not planned at this stage of development. Prototype testing is regulated by Ministry of Defense instructions. As with mock-up testing, the procedures for measuring the performance of the simulator and of the operating system depend on special circumstances (such as the requirement to test a modification of the system); sometimes these measurements are performed for research purposes (for example, to confirm that the simulator can be used for personnel training).

A simulator is a physical device that reproduces those features of the real equipment that relate to the human-machine interface. Simulators must be distinguished from computer models, which are essentially conceptual. Besides evaluating the effectiveness of a training simulator, the human factors engineer usually prefers the simulator over the real system in the following cases: 1) measurements cannot be carried out on the running system; 2) system parameters must be changed for an experimental comparison, and the parameters of the operating system cannot be changed for that purpose; 3) emergency conditions must be checked that are too dangerous to reproduce on real equipment.

In addition, measurements with a simulator have certain advantages, namely the ability to 1) start and stop the simulated system at any desired moment, 2) record a particular stage of the system's operation and study it more carefully, 3) change the operating mode, and 4) provide automatic data recording.

On the other hand, there are a number of disadvantages: 1) a simulator cannot reproduce the full cycle of system operation (this applies, of course, to the operational process); 2) the quality of the simulation may be lower than desired; 3) the personnel being tested usually do not have the same skills as operational personnel; 4) only training scenarios can be played on the simulated system, and these do not fully reflect all operating conditions; 5) the time for which the human factors engineer can obtain the simulator solely for measurements may be limited.

The remaining test situations have their own inherent disadvantages. Since the operating system is embedded in an external environment, external influences on it are sometimes possible that reduce the purity of the research (for example, interruptions caused by commands with higher priority). This is the reason for many known cases of loss of control when working under real conditions. It may happen that the test prerequisites for a prototype system are not fully satisfied, for example a stable configuration of hardware and procedures; because of continuing development difficulties, the equipment under test may fail more often than it would under real conditions. A mock-up is so limited in its ability to present stimuli and elicit the corresponding responses from personnel that careful extrapolation is required when analyzing test results.

The main objectives of assessing system effectiveness may be the following: 1) to check whether personnel can solve the tasks facing them without significant errors or overload; 2) to check whether control procedures and other characteristics of the equipment and system pose insurmountable obstacles to efficient work by personnel, and whether those characteristics satisfy the criteria of engineering psychology; 3) to determine the impact on personnel performance of certain sharply differing variables directly related to the purpose of the equipment and the system as a whole (for example, operations performed during the day versus operations performed at night); 4) to determine, from a behavioral point of view, the adequacy of a particular modification to the work, the equipment, the entire system or its purpose, as well as the adequacy of a particular design solution to the problem; 5) by assessing the work of personnel under the system's operating conditions, to determine whether that work is adequate and, if not, to find out what factors cause the decrease in performance.

The more complete the system being tested is and the closer its operating conditions are to real ones, the more fully the testing goals are achieved.

This is why only a subset of testing goals can be achieved using a static mock-up (the most common type of mock-up); the data obtained are partial and do not fully satisfy the testing objectives. For example, suppose we have a static mock-up of a fighter cockpit. With its help it can be established that the controls are reachable and the instrument readings are legible, which is required for the effective functioning of the aircraft. However, this is only a partial check (from an anthropometric point of view only) that personnel can perform their tasks. A simulator is also required, at least to check whether the pilot reacts quickly enough to changes in the environment.

The testing tasks described above apply both to systems in operation and to the system development process. As the system is created, new characteristics are discovered, on the basis of which equipment and procedures are modified; testing is necessary to determine whether these modifications are workable. The system must also be retested during operation, because development testing cannot provide data on how well it serves its intended purpose; military systems, for example, are not exercised under combat conditions until the fighting actually begins. Testing of combat systems can be carried out in war games, for example when two infantry units compete with support from tanks, artillery and aircraft. If interruptions occur in the operation of the system, the parameters of the problem must be examined. Or the human factors engineer may need to determine the impact of potentially important internal variables (for example, shift rotation) on performance. Sometimes operational and simulator tests are conducted simply to collect research data.
System effectiveness assessment has features that have little in common with traditional research conducted as a controlled experiment (for example, in the laboratory): 1) orientation toward real tasks, but in the presence of interference; 2) time and resource limitations; 3) measurements are made in macro units rather than micro units (minutes rather than seconds); 4) both equipment and personnel are evaluated; 5) a systems approach is used; 6) high validity; 7) fewer options for controlling the test; 8) multiple goals and multiple criteria (both intermediate and final); 9) many levels of entry into the test process and/or into the system.

The term "system performance assessment" means that the FSI measures the performance characteristics of personnel. Testing by characteristic features(which consists of receiving subjective assessments characteristics of the system, equipment and operation as opposed to the activities of the personnel interacting with those characteristics) is not an assessment of performance. The most common type of testing based on characteristics is assessing the adequacy of the engineering and psychological characteristics of equipment. Geer distinguished between informal and formal I and E procedures, with the former relating to attribute testing (evaluating the performance of a hardware design, represented either by drawings or a non-functioning mock-up or equipment) and the latter relating to performance measurement. Evaluating equipment performance is sometimes part of the performance testing process and is therefore also covered here. The above also applies to subjective measurement tools such as interviews, questionnaires and rankings, which are used to obtain information about performance, although they cannot be used to directly measure performance.

A user-experience prototype is a hypothesis: a design option that you consider a possible solution to the problem. The easiest way to test the hypothesis is to watch how ordinary users work with it. Prototypes can be classified in different ways:

  • a one-page site, or a multi-page one with enough menus and screens for the user to perform his tasks;
  • a realistic, detailed prototype, or a schematic one that exists as a sketch on paper;
  • interactive (clickable), or static (the computer's actions are imitated by a person).

The choice of one option or another depends on the goals of testing, the completeness of the design, the tools used to create the prototype, and the resources for supporting it before and during usability testing. But no matter which prototype you use, testing it will tell you a lot about the effectiveness of the user experience and the quality of the audience's interaction with the interface, and will allow you to make the necessary adjustments.

Why test a prototype?

Correcting the code of a finished product or website is expensive, whereas changing a prototype is much cheaper, especially if it is drawn on a piece of paper. Nevertheless, the following arguments are often made against testing prototypes:

  • the final version of the design is preferable for testing because it is a working system: users interacting with it will feel much more natural and relaxed, so the test results will be more reliable;
  • some supporters of the Lean Startup concept note that if you skip the prototype altogether, there will be nothing to throw away should it fail its tests, and therefore no additional costs;
  • prototype testing is hard to fit into an agile or waterfall development model when UX work has to be aligned with iterative design.

These arguments seem convincing only at first glance, but postponing testing until the final stage is quite risky. Experienced developers take the time to prototype a product, test it, and refine it to an optimal state. At the same time, testing the product at the final stage is also mandatory: to assess usability and profitability, to conduct a competitive analysis, and to perform the final check of the project.

Interactive and static prototypes

Any prototype must respond to user actions. You can spend a lot of time implementing the interactive elements before the tests begin, or simply imitate the system's responses. Both approaches have advantages and disadvantages.

Interactive (clickable) prototypes

When choosing an interactive prototype, the designer has to define the system's response to every possible user action before testing. Even with all the necessary tools at hand, this takes a lot of time.

Static prototypes

Here the system's response to user actions is simulated by a person familiar with the design. Several simulation methods are useful for static prototypes:

1. "The Wizard of Oz"

The method is named after the popular book by Frank Baum, in which, as you remember, the wizard turned out to be an ordinary person. In this setup, a "wizard" (played by a designer intimately familiar with the prototype) remotely controls the participant's screen from another room. None of the user's clicks actually does anything by itself: when the person takes an action, the "wizard" behind the wall decides what should happen next and makes the change on the user's screen. The source of the changes and of the system's responses is unknown to the subjects, and, as a rule, they complain that the system is constantly slow and takes a long time to respond.

This method is useful when testing systems based on artificial intelligence before the AI is implemented, since the "wizard" controlling the computer imitates the AI's reactions using natural intelligence.

2. Paper computer prototype

The design is created on paper. A person familiar with the layout plays the role of the computer and keeps the pages on the table near the user, but out of his line of sight. As soon as the user touches the paper "screen" lying in front of him with a finger, the "computer" selects the correct page and places it in front of the user.

3. Steal-the-Mouse

A version of the Wizard of Oz in which the "wizard" sits in the same room as the user (the role of the "wizard" can be played by the facilitator). The prototype is shown to the user on a computer screen. As soon as the participant clicks, the facilitator asks him to turn away while the changes are made on the screen. The person then continues working with the prototype.

Whichever simulation method you use, two rules apply.

1. The "computer" must signal the completion of each operation so that the user continues to interact with the system. The signal can be a special gesture or an icon printed on paper (for example, an hourglass) that appears every time the "computer" is choosing the most appropriate response for the situation and disappears when the operation is completed.

2. The facilitator should refrain from commenting on or explaining the design to the user.

To determine which type of prototype suits you best, answer the following questions:

  • do you have the time and the necessary skills to implement the system's responses to all possible user actions?
  • do you have time to conduct pilot tests and fix any bugs found?
  • can you say that your design is complete and you won't have to make changes between sessions?
  • is it impossible for one designer to play the role of the computer in all of the planned tests?
  • is moving from one screen to another an important part of the research?
  • is the user's reaction to dynamic changes an important part of the testing?

If most of the answers are yes, try the clickable option; if not, a static prototype will do.

Prototypes can be high-fidelity or low-fidelity. The fidelity of a prototype is how closely it matches the final design of the system; a prototype can be faithful to the final design in its interactivity, its appearance, and its content and navigation.

A prototype may have high or low fidelity in all or only some of these components. High and low fidelity for each of them are compared below.

Interactivity

  • High-fidelity prototype: most or all interface elements are clickable, and the system responds to user actions automatically.
  • Low-fidelity prototype: interactive elements do not work; there is no automatic response, and the screens are changed by a person playing the role of the computer.

Appearance

  • High-fidelity prototype: realistic visual hierarchy, element priority and screen size; all graphics and layout look like the finished product (even if the prototype is implemented on paper).
  • Low-fidelity prototype: only individual elements match the final version of the system; the placement and priority of elements may or may not be preserved.

Content and navigation hierarchy

  • High-fidelity prototype: includes all the content that will appear in the final design (all articles, product descriptions, images).
  • Low-fidelity prototype: shows only summaries of the content and placeholders instead of images.

The benefits of high-fidelity prototypes

1. Prototypes that are high-fidelity in terms of interactivity are more responsive in tests. The person playing the role of the computer often needs extra time to find the right screen or response to a user's action. Delays that are too long between the user's action and the "computer's" reaction disrupt the smooth interaction with the interface and distract the participant from the last event and the expected system response.

The delay also gives the participant extra time to study the page, so when working with a slow prototype, usability-test participants notice more details and content than when working with the real system. Of course, this introduces a certain amount of distortion into the final results. You can reduce the negative impact of delays by asking the user to look away or by placing a blank white sheet in front of him while he waits for a response from the system.

As soon as the new screen is ready, show the previous screen again for a few seconds so that the user refreshes his memory of his past actions, and only then show the new one. The facilitator can also help by verbally reminding the user of his past actions, for example by prompting: "Here you clicked on the 'About the company' link."

2. If the prototype is close to the final form of the system in its interactivity or its visual side, you can test workflows and specific user-interface components (e.g., mega menus, drop-down menus), graphic elements such as affordances ("obviousness" in design), page hierarchy, font readability, image quality, and even the degree of engagement.

3. High-fidelity prototypes are perceived as working versions of programs and websites, which means the participants in the experiment behave more realistically, as if they were interacting with a real system. A sketchy prototype, on the other hand, can create unclear expectations about the system's capabilities, and as a result user behavior ceases to be completely natural.

4. Prototypes that are high-fidelity in terms of interactivity free you from the need to imitate the operation of the system and allow you to focus on the course of the experiment.

5. There is less chance of error when testing interactive prototypes. Imitating the operation of a system is a demanding task, and it is human nature to make mistakes. Haste, stress, nervous tension, the need to closely monitor the user's clicks: all of this can cause the "computer" to behave illogically and distort the test results.

Benefits of Low Fidelity Prototypes

1. Less time preparing a static prototype, more time working on the design before starting experiments

Creating a clickable prototype takes time, and it is wiser to spend that time creating more pages, menus or content. (You will still have to arrange the pages in the right order so that the "computer" can work normally, but this usually takes much less time than building an interactive prototype.)

2. It’s easier to adjust the design during the experiment

The designer can sketch a quick plan of the changes and make them between test sessions, without spending time adapting new elements and wiring them into the existing system, as would be required with an interactive prototype.

3. Low-fidelity prototypes do not put pressure on users

If a design looks unfinished, users won't wonder whether it took a month or a couple of days to develop. They will be convinced that you are really testing the system and not the participants themselves, will not feel obliged to complete every task, and will be more willing to show negative emotions.

4. Designers are not overly attached to rough prototypes

Making adjustments to a finished, interactive, aesthetically polished system is not always pleasant. Once you have invested time and effort into a design, it is much harder to abandon it, even if it does not work well. A design that exists as a sketch on paper is adjusted without fuss or sentiment.

5. Investors and other interested parties do not bother you

When people see a rough prototype, they do not expect to receive a finished product the next day. Every member of the development team will be prepared for changes to the project before its completion.

User interaction during testing of any prototype

When testing a prototype, the facilitator, as a rule, communicates with the participants much more than during testing of a finished system. This happens mainly for the following reasons:

  • the facilitator needs to explain the nature of the prototype (but not how the design works);
  • sometimes the facilitator needs to explain the current state of the system (for example, "This page is not working yet") or to ask, "What did you expect to happen?";
  • sometimes the facilitator needs to find out why the participant stopped working: because he was waiting for a response, or because he thought he had completed the task.

Even though the facilitator may have to interact with the participant from time to time, his main goal is to quietly observe the person working with the design, not to talk to him.

1. If the user clicks on an element for which a corresponding response has not yet been designed:

  • say: “This element does not work”;
  • ask: “What did you expect to happen?”;
  • ask the user to click the next element.

For example: "You clicked on the 'compact cars' link. Unfortunately, we have not yet prepared the corresponding screen. Try selecting the 'mid-size cars' category." After the user selects this option, try to speak as little as possible and remain neutral.

2. If, after a click, the user is shown the wrong page, the "computer" must remove it from the participant's field of view as quickly as possible and load the previous page instead. The facilitator must immediately say that the wrong page was opened, then verbally recap the participant's actions on the current page, and only then does the "computer" show the correct page.

Computer errors distort data

Note that inaccuracies made by the "computer" can have a significant impact on the test results. As soon as a screen appears, users form a mental model of how the system and the research method work. If the wrong page is displayed, do not assume that users will immediately erase what they saw from their memory.

Even if you roll back and try to explain the error, participants will conclude that the incorrect screen is still somehow related to the task and will draw even more "useful" information from your explanations, which will then influence their choices and behavior. Showing the wrong page also breaks the flow and confuses users. They may decide that the prototype is faulty, and this will affect their expectations, their confidence in the research method, and their ability to form a coherent mental model.

Because "computer" errors negatively affect the study, take the time to conduct a pilot test and correct any inaccuracies before running the main tests.

Conclusion

You cannot skip prototype testing. Your design will be tested one way or another, whether you like it or not: once the system is up and running and people can use it, they will test it. And instead of the information you need about the quality of the user experience and the interface, you will get bad reviews, lost sales, abandoned orders, returns, misunderstood content and products, calls to support, increased training costs, and a deteriorating image.

And you will have to correct these errors, which, of course, will be incredibly expensive. So testing prototypes, whether interactive or drawn on paper, is a mandatory stage on the way to market.

Hello!

CSS seems simple and straightforward as long as you don't have to do anything custom. Here's another look at CSS issues. This is a translation of the article “5 Most Annoying Things with CSS”.

In 1996, major browsers only partially supported CSS, and because of this web designers were forced to come up with a pile of hacks and workarounds to make their styles work the way they wanted. In fact, it wasn't until 1999 that every major browser finally began to fully support CSS1; IE 5.0 for Macintosh, released in March 2000, was the first browser to achieve full support of the CSS1 specification. CSS2 was released in 1998, but designers were hesitant to use it widely because browser support for the standard remained incomplete.

Work on CSS3 began in 1998 and continued until 2009. It brought a ton of welcome additions such as rounded corners, shadows, gradients, transitions and animations, as well as new layout features like multi-column layouts, flexbox and grid.

Fortunately, in addition to the W3C's efforts to improve the specification to meet developer needs, the community itself has developed many solutions that improve and simplify working with CSS in complex environments. Sass, Stylus and LESS introduced loops, mixins and functions. The introduction of CSS custom properties (variables) has made it easier to write complex styles, improved readability, and simplified maintenance.
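As a minimal sketch of what custom properties give you (the selector names and values here are invented for the example), a value can be defined once and reused throughout the stylesheet:

```css
/* Define reusable values once on the root element. */
:root {
  --brand-color: #1a73e8;  /* hypothetical brand color */
  --base-spacing: 16px;
}

/* Reuse them anywhere; changing :root updates every rule that uses the variable. */
.button {
  background-color: var(--brand-color);
  padding: var(--base-spacing);
}

.link {
  /* The second argument is a fallback used if the variable is not defined. */
  color: var(--brand-color, #0000ee);
}
```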

CSS is certainly a great improvement on good old HTML-only styling, but its limitations are sometimes overwhelming, and the lack of industry support held designers back for many years. That is why CSS has still not found its place in developers' hearts.

Even today, calling yourself a "full-stack developer" or "front-end developer" with one or eight years of experience, you will still run into situations where CSS makes you sweat.

I'll list some major CSS problems:

CSS is about markup, not design

Designers should spend more time creating great designs and less time fiddling with markup and browser-compatibility issues. When I say "markup-centric" I mean that every CSS design tool forces you to go into the source to create a good design. Tools for designers should be design-focused. CSS is bad in that it forces designers to think about how to implement their designs technically rather than in design terms.

Browser wars

You've created an amazing layout for your new website, but transforming that beautiful Photoshop mock-up into a pixel-perfect layout is a challenge. The problem isn't that you don't understand how to code it; the problem is mostly that different browsers interpret your markup differently, even if you use fully valid CSS. It is very demotivating to fix a bug in one browser and thereby introduce a new bug in another.

  • Always use Normalize.css. It makes browsers render all elements more predictably and in line with modern standards, and it affects only the elements that actually need normalizing.
  • You can use frameworks such as Bootstrap, Bulma and Materialize. For the most part they are highly compatible with most browsers.
  • Use CSS3 code generators. They help developers write cross-browser code for various CSS3 properties and offer plenty of customization options, including border-radius, text-shadow, RGBa and box-sizing. One such service is CSS3 Generator; a sketch of typical generator output is shown after this list.
  • Validation: the W3C validation service checks different versions of XHTML and HTML, reporting the important errors and messages that help developers build better websites. W3C markup validator: http://validator.w3.org
  • W3C CSS validator: http://jigsaw.w3.org/css-validator
  • Testing: since it is almost impossible to test a site manually in every possible combination of browser and operating system, cross-browser testing tools come to the rescue. You can use Browsershots, BrowserStack, CrossBrowserTesting and similar services.
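For illustration, here is the kind of output such a generator typically produces: the same declaration repeated with vendor prefixes so that older browsers are covered as well (the class name and values are made up for this sketch):

```css
/* Hypothetical card style: prefixed declarations first, the standard property last. */
.card {
  -webkit-border-radius: 8px;   /* older WebKit-based browsers */
  -moz-border-radius: 8px;      /* older Firefox */
  border-radius: 8px;           /* standard property */

  -webkit-box-shadow: 0 2px 6px rgba(0, 0, 0, 0.3);
  -moz-box-shadow: 0 2px 6px rgba(0, 0, 0, 0.3);
  box-shadow: 0 2px 6px rgba(0, 0, 0, 0.3);

  background-color: rgba(26, 115, 232, 0.9);  /* RGBa color with transparency */
}
```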

Responsive Layout

With the advent of many different devices came different resolutions, screen sizes and orientations. New devices with new screen sizes appear every day, and each device varies somewhat in size and functionality. Some are used in portrait orientation, some in landscape, and sometimes they are even square. Moreover, looking at the popularity of the iPhone, the iPad and other modern smartphones, we understand that the user can change the screen orientation whenever he wants. How do you handle such a situation from a design point of view?

Besides orientation, we also need to take into account thousands of different screen sizes. Many users do not expand the browser to full screen, which adds to the variety.

  • Give priority to important content and hide unimportant details on small screens. I believe the key is to show the most important things on small screens. When some content cannot simply be removed on mobile phones, consider hiding it instead (a sketch of this with a media query is shown after this list).
  • Use SVG. Unlike raster formats such as PNG or JPEG, SVG scales without losing quality. SVG files are also often smaller, so they slightly improve page-load speed.
  • When it comes to user interaction, focus on providing large enough controls (inputs, buttons, radio buttons).
  • For smartphones, if possible, make tap targets at least 44px, as recommended in the iOS Human Interface Guidelines.
  • Test the design with at least five users on the devices they normally use.
  • Use frameworks like Bootstrap. With them, building a site that works well on mobile devices becomes much easier, and the ready-made grid classes help you decide on the grid and content placement.
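Here is a minimal sketch of the first and fourth points above, assuming hypothetical .secondary-info and .button classes; the 600px breakpoint is arbitrary:

```css
/* Comfortable tap targets on touch devices (the 44px minimum mentioned above). */
.button {
  min-height: 44px;
  min-width: 44px;
}

/* Hide secondary details on narrow screens. */
@media (max-width: 600px) {
  .secondary-info {
    display: none;
  }
}

/* React to the device being rotated into landscape orientation. */
@media (orientation: landscape) {
  .sidebar {
    width: 30%;
  }
}
```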

Make red more blue

Many clients come with strange requirements, high expectations and desired functionality that is never discussed, and as a result we end up with endless edits and countless iterations. Clients change their minds every second, especially when it comes to design. Being forced to give in to a client's every whim feels like mistreatment or an insult to designers. This is why sites like Clients From Hell are popular.

  • Creating animated prototypes is a good way to show your ideas. Use programs like Adobe XD, Sketch, InVision and so on. Start development only after the design has been agreed upon.
  • It is wise to plan the entire development process from the very beginning. Most likely you will have to allow extra time for various surprises along the way. Remember Murphy's law: "Anything that can go wrong will go wrong."
  • Keep calm and don't let your emotions get the better of you. Remember that the client didn't graduate from art school and may not realize that red text on a green background doesn't improve readability. Explain your decisions regarding visual hierarchy, typography, and anything else that the requested changes might affect.
  • Remember that the site is for your client, and your goal is to make the client happy with it. The best you can do is offer your recommendations for changes; if the client doesn't agree, just do your best to make the site look as good as possible.

CSS is underrated

CSS is frustrating largely because hardly anyone wants to take the time to learn it. Many people, especially programmers, underestimate it and try to get things done without understanding its mechanics even a little, and they keep getting stuck on the same problems.

I've worked with some amazing backend engineers who knew every OOP pattern in the book but were stumped by positioning and floats and thought responsive CSS was some kind of black magic. So I find the comic on this subject very accurate.
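As a small illustration of the kind of thing that trips people up (the class names are invented): vertically centering a box used to require positioning tricks, while flexbox reduces it to a few declarations:

```css
/* Older trick: absolute positioning plus a transform to pull the box back by half its size. */
.old-center {
  position: absolute;
  top: 50%;
  left: 50%;
  transform: translate(-50%, -50%);
}

/* Modern approach: let the flex container do the centering. */
.flex-center {
  display: flex;
  justify-content: center;  /* horizontal centering */
  align-items: center;      /* vertical centering */
  min-height: 100vh;
}
```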

People expect CSS to be simple and straightforward, but, like most things, it takes time to learn properly.

As I said in my previous article, "Troubleshooting CSS", working with CSS and getting good results from CSS are two very different things. CSS is easy to get started with, but mastery takes effort: it takes a minute to learn and, with some exaggeration, a lifetime to master.