Carrying out A/B tests: how and why to test different page variations, and what's important about the data


If as a child you loved taking apart motorized toy cars or mixing every liquid in the house, this article is for you. Today we'll look at A/B website testing and find out why, in the right hands, it becomes a powerful weapon. Dig the experimenter's spirit out of the depths of your consciousness, dust it off, and read on.

What is A/B website testing?

In short, it is a method of comparing the effectiveness of two versions of the same page. Say you have two product-card designs, and both are so good you can't sleep or eat. The logical move is to check which one works better: half of the visitors are shown option No. 1, the other half see option No. 2, and whichever copes better with its task wins.

This is not the only way to use A/B (or split) testing. It can also be used to test bold hypotheses, the usability of a new page structure, or different versions of the copy.

How to conduct A/B testing of a website

Formulation of the problem

First you need to decide on your goal. Understand what you want to achieve: increase conversion or time spent on the site, or reduce the bounce rate. Once the goals and objectives are clear, change the content or design based on them. For example, you can follow the path of all growth hackers and experiment with the location and design of the "Buy" button. Right now it sits at the bottom left, and you want to see what happens if you change its appearance and move the button higher and to the right.

Technical implementation

Everything is simple here: either a separate page is created in which only the test object changes, or a programmer works some magic and implements both variants within a single document.

Preparation of test data

The page has been redesigned and everything is ready for the test. But first we need to record the initial conversion rate and any other metrics we plan to track. We assign the name "A" to the original page and "B" to the new one.

Test

Now you need to randomly split the traffic in half: half of the users are shown page A, the rest see page B. You can use one of many specialized services for this, or have a programmer set it all up manually.
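As a minimal sketch of how that split is often implemented (the function and experiment names here are hypothetical, not tied to any particular service): hashing a stable user identifier, such as a cookie value, makes assignments random across users but sticky for each user, so a returning visitor always sees the same variant.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "buy-button-test") -> str:
    """Deterministically assign a visitor to variant A or B.

    Hashing the user id together with an experiment name gives every
    visitor a stable 50/50 assignment: a returning user always lands on
    the same page, and different experiments split traffic independently.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100   # uniform bucket in 0..99
    return "A" if bucket < 50 else "B"

# A returning visitor keeps the same variant across requests:
assert assign_variant("user-42") == assign_variant("user-42")
```

In production the same idea usually lives on the server or in the testing service's snippet; the point is that the assignment must be deterministic per user, not re-rolled on every page load.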

It is important that the "composition" of the traffic is the same for both pages. The experiment will not be objective if everyone who clicks through from contextual ads sees only the first option, while all visitors from social networks see only the second.

Analysis

Now you need to wait until enough statistics are collected and compare the A/B testing results. Exactly how long you have to wait depends on the site's popularity and a few other parameters. The sample must reach statistical significance: the probability that the result is random should be no higher than 5%. Example: say both pages received the same number of visits, a thousand each, and page A got 5 target actions while page B got 6. The difference is too small to speak of a pattern, so it proves nothing.

Most specialized services calculate the statistical significance threshold for you. If you do everything by hand, you can use a calculator.
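For the curious, here is a sketch of the check such calculators typically perform, a two-proportion z-test, applied to the example above (5 vs 6 target actions out of 1000 visits each). This is a generic statistical formula, not the exact method of any particular service.

```python
from math import erf, sqrt

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value of a two-proportion z-test: the probability of
    seeing a conversion gap at least this large if versions A and B
    actually perform identically."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    # Two-sided tail area of the standard normal distribution.
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# The example above: 5 vs 6 target actions per 1000 visits each.
p = two_proportion_p_value(5, 1000, 6, 1000)
print(round(p, 3))  # about 0.76: far above 0.05, so the gap may well be noise
```

A p-value this high means the 5-vs-6 difference is entirely consistent with chance, which is exactly why the article calls such a result unusable.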

Developing a solution

What you do with the results is up to you. If the new approach worked, you can keep the new version of the page on the site. And you don't have to stop there, especially if you can see there is still room for the metrics to grow: leave option B on the site and prepare the next test.

How to make A/B and split testing objective

Reduce the influence of external factors. We have already touched on this: run the test over the same period of time, with the same traffic sources for both pages. If you don't ensure equal conditions, you will get an unrepresentative sample. People from search behave differently on a page than visitors from a Facebook or VKontakte group. The same goes for traffic volume: it should be approximately equal.

Minimize the influence of internal factors. This matters for the websites of large companies: the company's own employees can noticeably skew the statistics. They visit the site but take no target actions, so they need to be excluded from the data by setting up a filter in your web analytics system.

Plus, there is a rather obvious rule that is sometimes forgotten: test one element at a time. If you changed half the page at once, short of a complete redesign, the results of the experiment will not be valid.

Does A/B testing a website affect SEO?

There is a popular myth that A/B testing can backfire: duplicated pages supposedly get you hit with search engine filters. It isn't true. Google even explains how to do everything correctly and provides special tools for it.

What and how can be improved using A/B testing

  • Conversion. The most popular option. Even a small page change can affect your conversion rate. The target action can be a purchase, a registration, a page view, a newsletter subscription, or a click on a link.
  • Average check. Here, new upsell blocks are often tested: "similar products" and "frequently bought together".
  • Behavioral factors. These include viewing depth, average time on site, and bounces.

Usually they try to change:

  • Design of the "Buy" and "Leave a request" buttons.
  • Page content: headlines, product description, images, calls to action and everything else.
  • Location and appearance of the block with prices.
  • Page structure.
  • The layout, structure and design of the application form.

In principle, anything can work; no clairvoyant, Vanga included, can tell you exactly how to increase conversion or average check. There are plenty of recommendations, but it is simply unrealistic to follow them all, and any of them can backfire. Sometimes completely illogical changes improve performance, such as dropping detailed product descriptions. Try different approaches and options; that's what testing is for.

Tools for A/B website testing

There are a ton of them, so we picked the best. They are all English-language and therefore not cheap, but each has a free trial period. In Russia, only lpgenerator.ru does something similar, but it can only test landing pages created in the service's own builder; you can't load your own page.

Optimizely.com

One of the most popular services. It can test anything, in any combination. Other advantages: multichannel testing, experiments with mobile apps, convenient result filters, targeting, a visual editor, and a bit of web analytics.

Changeagain.me

A fairly convenient service whose main advantage is simple, full integration with Google Analytics: goals can be created right in the service and are then automatically loaded into the system. The rest of the features are more or less standard: a simple visual editor, targeting by device and country; the specific set depends on the pricing plan.

ABtasty.com

What sets this service apart is its trial period: it lasts a full 30 days instead of the standard 14-15. The tool also integrates with WordPress, Google Analytics, and several other services popular with foreign marketers and webmasters. Additional advantages: a user-friendly interface and detailed targeting.

How to conduct A/B testing using Google Analytics

To do this, log into your account, open the report menu, scroll to the "Behavior" tab and click "Experiments". Everything there is extremely simple.

We give the experiment a name, distribute traffic across pages in the required proportion, select goals and move on to the next stage - detailed configuration.

That is where you set the addresses of pages A and B. If you check the box that unifies variants for other content reports, other reports will count the metrics of all variants as metrics of the original page.

After this, Analytics will generate a code snippet that you place on page A, and the experiment begins. Performance reports are available in the same "Experiments" menu.

How to set up Yandex Metrica for A/B testing

The work splits into two parts. First, either create two pages, or configure one page to show users two different versions of an element. How to do that is a topic for a separate, long article, so we'll skip it for now.

After this, you need to tell Metrica which version of the site the user saw. Yandex itself provides brief instructions. To do this, we create an A/B testing parameter and assign it the desired value. In the case of a button, we define the parameter as:

var yaParams = {ab_test: "Button1"};

or

var yaParams = {ab_test: "Button2"};

After this, the parameter is sent to Metrica and can be used to build a "visit parameters" report.

Results

A/B (or split) testing is an important, necessary, almost mandatory tool. If you test new hypotheses regularly, page performance can be taken to a new level. But it would be wrong to say it takes minimal effort: even to change the position or color of a button, you'll need to involve a programmer or designer, however briefly. And any assumption may turn out to be wrong. Then again, those who don't take risks don't get an increased flow of leads and don't run around the office happy.

An A/B test is a useful thing that should simply be a default in internet projects. How do you do it, and what do you need?

Today, testing hypotheses and trying out ideas is part of the compulsory program, and the A/B test is perfect for the task. Let's take a closer look at what it is, what its benefits are, and what tools are available.

AB test: what is it and why

An A/B test (or split test) is a marketing research method in which you compare several versions of a product element that differ by one specific change, and then see which version performed better.

For example, we had an idea to change the color of a button on a certain page. We think this change will bring us more clicks. We run both options, showing half of our users option A, and the other half option B.

After some time has passed (the duration is determined before starting the test), we measure the result. Let's see which option worked better and use it in our work. This way you can test almost any hypothesis and see what works best and what doesn’t.

What can be analyzed using AB test?

  • Conversions. Number of successful targeted actions on your website. This could be clicking on the “Buy” button, visiting a page, or something else.
  • Economy. Average bill or revenue volume.
  • Behavioral factors. Viewing depth, session duration.

Nuances and subtleties

  • It is very important to change only one factor when testing. If this is the color of a button on a landing page, then we only test different button colors and do not change anything else on the pages.
  • Also with external factors. The test runs at the same time under the same conditions. Otherwise, you may end up with data that is biased.


Important about data

Everything would be very simple if not for one “But”. You can conduct an AB test and get results that clearly show that one of the options is much better than the other.

For example, we showed 2 versions of pages with different button colors 1000 times each. The test was carried out for one week. And we got the following results:

With the same number of banner impressions (this is important), option B got three times as many clicks. We conclude that this option is more effective, make it the working version, and delete the old one.

What if, for example, this is the case?

Is it worth taking option B? Or maybe this is just an error? And is it enough to show each option 1000 times to make a decision? Maybe 10,000 users visit our site a day and the sample is too small to draw a conclusion? What if the data we analyze is not just the number of clicks, but the average receipt from transactions?

Statistics help us

To understand how the world of numbers and experiments works, let's dip into the mathematical side. If you don't have the time or energy, feel free to skip this section; further on, I'll give simpler, ready-made solutions.

There is a great temptation, once the results of the experiment are in, to make a decision and call it a day: here it is, the "bright future." But dig a little deeper and you'll see that over the week the clicks were distributed unevenly across the days. Let's write it down.

The table shows that clicks are distributed differently from day to day. That means our values for options A and B change daily; we are dealing with random variables. For such cases, averages are used. But if we ran the experiment again, what is the probability that the result would repeat?

Let's plot the distribution of all data for the week according to options A and B.

If we take the average values for each option (the vertical lines in the middle of the two curves), we will see that the difference is quite small. But there are deviations above and below the average, so the two curves overlap. The larger the overlap, the less significant the experiment; conversely, the smaller the overlap, the higher the statistical significance.

Statistical significance is a measure of how trustworthy the results are: in our example, of the answer to the question "should we take option B?"

Typically, the default significance level is 95%. This means we require a 95% probability that the difference between the options is real and not random. The remaining 5% is the probability of error that we allow, the p-value threshold in statistical terminology.

Interestingly, many people forget to check the significance level in their experiments and can end up with erroneous conclusions; by some estimates, 8 out of 10 A/B tests miss this mark.

I won't go into how the significance indicator is calculated; instead I'll point you to a tool that calculates everything for you.

Tools for calculating significance

To assess the significance of data, I recommend using this tool.

Here A and B are, respectively, our options. By the numbers:

  1. Number of visitors (or number of impressions).
  2. Number of conversions: the visitor clicked the button, registered, or otherwise completed the target action.
  3. P-value: the probability of error we are willing to accept given the data.
  4. The answer to the question of whether the changes observed in our experiment are significant.

Example: we take data on impressions and clicks from the table that we showed above.

We enter them into the service, click on the “Calculate Significance” button and...

We get the answer "No" in the bottom line, and just above it a p-value of 0.283. What does this mean? It means there is a 28.3% (0.283 × 100) probability that the difference we saw is due to chance: far too high to conclude that choosing option "B" would bring any real improvement.

For an experiment to be considered successful, the p-value must be below 0.05 (5%).

There is another service where you likewise enter the data and see the result; it is available at the link.

This is the basic principle on which the measurement of random variables is built. The moment you get your A/B test results, run them through such a tool and see whether the improvement from the new option is significant enough to adopt it.

How to understand how much data you need?

It happens that the data collected is not enough to draw conclusions. To understand how many times pages A and B need to be shown to gather the required amount of data, use this tool.

It is very important, I repeat, to run the experiment under identical conditions. Ideally, pick a week with no holidays or other anomalies and test the options simultaneously. Now, back to the service.

Thanks to this service, you will understand the sample size for each of the options.

More details on points:

  1. Current conversion rate: for example, what percentage of all users currently click the button.
  2. The minimum significant change we are interested in: by how much we would like to move the baseline conversion rate.
  3. The significant deviation entered in the previous step can go either way: the conversion rate may rise or fall.
  4. Choose the type of value: absolute or relative. If your baseline conversion rate is 30% (as in the example in the picture) and you want to raise it by 5% with A/B tests, select "relative". The final result of a successful experiment would then be 30% plus 5% of 30%, i.e. 31.5%.
  5. Sample size for each option: how many times we must show page A and page B separately in order to draw conclusions from the experiment. Very important: to draw conclusions from this experiment, we show A 24,409 times and B 24,409 times!
  6. Statistical significance: how precise we want the experiment to be.
  7. p-value: the probability of error we allow.
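Under the hood, such calculators use a standard sample-size formula for comparing two proportions. Here is a sketch of it, assuming the usual defaults of 5% significance and 80% power; the exact number the article's calculator shows (24,409) depends on its own defaults, notably the power setting, so this sketch will not necessarily match it.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(base_rate, lift, relative=True,
                            alpha=0.05, power=0.80):
    """Visitors needed in EACH variant to detect the given conversion
    lift with a two-proportion test at significance level alpha and
    the chosen power."""
    p1 = base_rate
    p2 = p1 * (1 + lift) if relative else p1 + lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return ceil(n)

# The article's example: 30% baseline conversion, +5% relative lift (30% -> 31.5%).
print(sample_size_per_variant(0.30, 0.05))  # roughly 15 thousand per variant at 80% power
```

The takeaway is the same as in the text: the smaller the lift you want to detect, the more visitors each variant needs, and the growth is quadratic in the inverse of the lift.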

Is it possible to stop the experiment earlier?

You can. There is an option where you don't have to wait for the end of the experiment: at a certain point you can already draw conclusions. To do this, use the tool we already know, on its "Sequential Sampling" tab.

Step by step:

  1. Enter your current conversion rate: for example, 30%, i.e. exactly that share of the people who reach our page click the button.
  2. Enter how much you would like to increase that figure by. I set 10%: it was 30, I want 33.
  3. The number of conversions in one of the variants at which we stop the experiment and make a decision.
  4. The difference in conversions between options A and B at which we stop the experiment and take the one that scored more.
  5. Set the significance level to 95% (as expected; see above).
  6. Set the p-value (again, see above).

There is no trick here, just statistics. Use this approach when experiments consume a lot of resources (development time, ad budgets for testing hypotheses, and so on). Now you have two rules that let you stop an experiment early and draw conclusions.
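The two stopping rules from the steps above can be sketched as a simple check you run after each batch of data. The threshold values here are placeholders for illustration; in practice they come from the sequential-sampling calculator, not from this code.

```python
def check_stop(conv_a, conv_b, conversion_cap, diff_cap):
    """Early-stopping check for a sequential A/B test.

    conversion_cap: conversions in a single variant at which we stop
                    and evaluate (rule 3 above).
    diff_cap:       lead in conversions at which we stop and declare
                    the front-runner the winner (rule 4 above).
    """
    if abs(conv_a - conv_b) >= diff_cap:
        return "stop: declare the leading variant the winner"
    if max(conv_a, conv_b) >= conversion_cap:
        return "stop: cap reached, evaluate the results"
    return "keep collecting data"

# Hypothetical mid-test snapshot; the caps would come from the calculator.
print(check_stop(conv_a=118, conv_b=157, conversion_cap=300, diff_cap=50))
# prints "keep collecting data": B leads by 39, below the 50-conversion gap
```

Peeking at results without pre-agreed rules like these inflates the error rate, which is exactly why the thresholds must be fixed before the experiment starts.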

How to conduct an AB test?

Ready solutions:

  • Optimizely, vwo.com, zarget.com
  • http://alternativeto.net/software/optimizely
  • Google Analytics (link on how to do it)

Own solution:

  • Build support for variants into your own admin panel or backend.
  • Write and configure each experiment by hand.

Here you'll find about 10 services for A/B tests. There are plenty to choose from.

That's all

Now you have a general idea of what an A/B test is, what the nuances are, and what tools to use to run one. In closing, I'd add that this kind of hypothesis testing is one of the most useful practices in developing a digital project. Isn't it great that you can test almost any idea? The main thing is to do it correctly, and now you know how.

Review of services for A/B testing

We try services that help change the site for the better

A/B testing is a small experiment that is carried out on site users. Its essence is to test hypotheses.

If you think that site users will be more likely to click on a photo of a model in a bikini than on a businessman in glasses, this is easy to confirm or deny. Create two pages, place the businessman on one, and the model on the other. And wait. And time will tell whether you are right or wrong. The site’s audience will take action to vote for the option that is more attractive to them. And so, by conducting A/B testing and observing user behavior, you can gradually adjust the site to their tastes and desires.

We wrote more about A/B testing earlier. But something was missing from that article. We turned it over, held it up to the light, and realized: we need a review of testing tools! So let's get started.

Google Analytics Experiments

Google Analytics can do a lot; it's just modest about it. Dig deep enough and you can set up A/B testing (or program Android phones to self-destruct, depending on your luck). It's convenient if you already use Analytics and can handle a little code, or have developer friends who will create the test page.

Pros:
Convenient for users accustomed to Google Analytics. There is a Russian-language interface. And, most importantly, the service is free.

Minuses: No visual editor. If the elements you want to test can't be changed through the site's admin panel and your own programming skills aren't enough, you'll have to call in the developers.

Price: For free.

RealROI.ru

The service is simple and clear. At each stage there are tips on what to do and why. In the visual editor you can change the text, images, and structure of the site. It's all straightforward: change the site in the editor, add the code to the original page, and watch the results. For statistics, the service integrates with Yandex.Metrica.

Pros: A simple visual editor. Russian language is supported.

Minuses: The visual editor is too simple: honestly, it only works with text and images. You can't play with the structure: RealROI offers only to hide or delete an element. Replacing, moving, or reshaping an element is not possible.

And we suspect the "Send code to developer" function doesn't work. We tried three times and no email ever arrived. So we recommend delivering the code yourself, via the good old Ctrl+C, Ctrl+V.

Price: For free.

This tool has more features. The visual editor allows any madness: elements can be changed, moved, added, deleted. The service lets you start a test on a given date or pause the flow of traffic to a page (useful in an experiment with more than two options). You can configure targeting and personalization.

Pros: Convenient visual editor - no programmers are needed to create pages for testing. The service integrates with Google Analytics, WordPress and other analytics and CRM systems.

Minuses: There is nominally a Russian-language interface, but the deeper you go into the site and the more complex the terms become, the less of it there is.

There is no trial version. You can test the visual editor, but you can only learn about other functions from the descriptions.

Price: $39 per month for 5,000 tested users. The fatter plan is $140 per month, which lets you test on 40,000 unique visitors; 200,000 tested users per month costs $390. Paying for a year up front gets you a discount on all plans.

VWO

A service that can run A/B tests for both desktop and mobile. In the VWO visual editor you can immediately mark a click target; everything else is added at the next step.

The service offers to look at the heat map, add pop-ups and send out a call to users who bought something on the site to leave a review.

VWO also has an ideas gallery. It seems like a small thing, but it’s nice. And it’s useful: the site owner doesn’t have to come up with something to test on his own. He can choose from options prepared by professionals. Ideas can be filtered by industry, complexity, and time spent. Very cool.

Pros: Lots of functions, with tips and instructions everywhere. A clear visual editor that leaves programmers nervously smoking on the sidelines. There is a 30-day trial. VWO integrates with Google Analytics, WordPress, and 12 other services.

Minuses: There is no Russian language. And therefore, tips may not help, but infuriate.

Price: If the site has less than 10,000 monthly visitors, the cost of the service is $59 per month. Up to 30,000 visitors - $155, up to 100,000 people on the site - $299, and so on. Traditionally, when paying annually, there is a discount.

Convert.com

Offers A/B, multivariate, and split testing, plus personalization. Click targets can be marked in the visual editor.

There are fewer functions than some of the competitors in the review, but Convert (be careful, this is a very subjective opinion) has the most convenient visual editor in terms of selecting and dragging objects. In other services, the frames of the object tremble, as if the user is attacking them with an ax rather than carefully touching them with the mouse.

Catching a frame, resizing an object, and moving it in the A/B Tasty editor is not for the faint of heart. In Convert everything goes smoothly and pleasantly. The only catch: to edit text you'll have to get your hands into the CSS code.

Pros:
Convenient visual editor, integration with 35 analytics and CRM services, free trial period - 15 days. You can customize tests for mobile devices.

Minuses: There is no Russian language. The visual editor is nice, but you will have to dig into it and figure it out.

Price: Lite tariff (easy, yeah) - $499 per month for 400,000 visitors, without technical support. Do you want service staff to help you? Pay another $200. The more visitors, the higher the price. If you pay for the service a year in advance, there will be a discount.

As you know, there are no static states in business. An enterprise must constantly develop to match the current market situation and the needs of customers and owners. A project that stops developing immediately begins to degrade. For example, you cannot create an online store, add 200 products to the site, and collect 100 thousand rubles of profit a month forever. Just to keep profitability from falling, the entrepreneur needs to constantly expand the range, grow audience reach through advertising and the publication of useful content, and improve the site's behavioral metrics and conversion rates.

One of the tools for developing web projects is A/B testing. This method lets you measure audience preferences and influence key website performance indicators, including conversion, time users spend on the page, average order value, bounce rate, and other metrics. In this article you will learn how to conduct A/B testing properly.

What is A/B testing

A/B testing is a marketing technique used to measure and manage the performance of a web page. This method is also called split testing.

A/B testing allows you to evaluate quantitative indicators of the performance of two versions of a web page, as well as compare them with each other. Split testing can also help you evaluate the effectiveness of page changes, such as adding new design elements or calls to action. The practical point of using this method is to find and implement page components that increase its effectiveness. Please note again that A/B testing is an applied marketing method that can be used to influence conversion, stimulate sales and increase the profitability of a web project.

Split testing begins by evaluating the metrics of an existing web page (A, control page) and looking for ways to improve it. For example, you created an online store. Imagine a landing page for this store with a 2% conversion rate. The marketer wants to increase this figure to 4%, so he plans changes that will help solve this problem.

Let's say a specialist suggests that by changing the color of a conversion button from a neutral blue to an aggressive red, he will make it more noticeable. To test whether this will lead to more sales and conversions, the marketer creates an improved version of the web page (B, new page).

Using split-testing tools, the expert randomly divides the traffic between pages A and B into two approximately equal parts. Roughly speaking, half of the visitors land on page A and the other half on page B. The marketer also keeps traffic sources in mind: to ensure valid, objective testing, visitors from each source (social networks, organic search, contextual advertising, and so on) must be split 50/50 between pages A and B.

Having collected enough information, the marketer evaluates the test results. As stated above, Page A has a 2% conversion rate. If on Page B this indicator was 2.5%, then changing the conversion button from blue to red actually increased the effectiveness of the landing page. However, the conversion rate did not reach the desired 4%. Therefore, the marketer is further looking for ways to improve the page using A/B testing. In this case, the page with the red conversion button will act as a control page.

What to test

As noted above, split testing is an applied method that allows you to influence various website metrics. Therefore, the choice of testing object depends on the goals and objectives that the marketer sets for himself.

For example, if your landing page's bounce rate is 99% and most visitors leave within 2-3 seconds of arriving, you might consider changing the visual components of the page. With an A/B test, a marketer can find the best page layout, choose an attractive color scheme and images, and use a readable font. And if the task is to increase the number of subscriptions, he can try changing the corresponding conversion form. A split test will help choose the optimal button color, the best copy, and the number of fields in the subscription form or its placement.

Most often, marketers test the following web page elements:

  • The text and appearance of conversion buttons, as well as their location.
  • Product title and description.
  • Dimensions, appearance and location of conversion forms.
  • Page layout and design.
  • The price of the product and other elements of the business proposal.
  • Product images and other illustrations.
  • The amount of text on the page.

Which split testing tools to use

To perform A/B testing, a marketer needs one of the specialized services. The most popular is Google Content Experiments, available to users of the Analytics system. Until mid-2012 this tool was called Google Website Optimizer. It can test various page elements, including headings, fonts, conversion buttons and forms, images, etc. Content Experiments remains free, which is one of its main advantages. Its disadvantages include the need to work with HTML code.

You can also use the following Russian and foreign tools for split testing:

  • Optimizely is the most popular paid A/B testing service outside Russia. It costs between $19 and $399 depending on the subscription type. Its advantages include the ability to build experiments in a visual interface, which spares the marketer from working with the HTML code of the pages being tested.
  • RealRoi.ru is another domestic service that allows you to conduct A/B testing. Among the main advantages are that it is free and very easy to use. You can see in detail how it works in the following video:
  • Visual Website Optimizer is a paid service that allows you to test various page elements. To use this tool, a marketer needs to have HTML coding skills. Subscription prices range from $49 to $249.
  • Unbounce is a service designed to create and optimize landing pages. Among other things, it allows you to perform A/B testing. The cost of use ranges from $50 to $500 per month. The domestic analogue is LPGenerator. This service allows you to test only pages created with its help.

How to A/B Test with Content Experiments

The Google Analytics Experiments service allows you to simultaneously test the effectiveness of five variations of a page. Using it, marketers can perform A/B/N testing, which differs from standard A/B experiments by allowing them to monitor the performance of multiple new pages, each of which can have multiple new elements.

The marketer can independently set the share of traffic that takes part in the test. The minimum test duration is two weeks; the maximum is three months. The specialist can receive the test results by email.
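The variant-assignment logic behind such services can be sketched in a few lines. The code below is an illustration of the general technique, not the actual algorithm Content Experiments uses: it hashes a visitor ID so that each visitor keeps the same variant on repeat visits, and it honors a configurable traffic share, with visitors outside that share seeing the control page.

```python
import hashlib

def assign_variant(visitor_id, variants, traffic_share=1.0):
    """Deterministically assign a visitor to a test variant.

    Visitors outside `traffic_share` see the control page and are
    excluded from the experiment. Hashing the visitor ID keeps the
    assignment stable across repeat visits.
    """
    # Map the visitor ID to a stable number in [0, 1).
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    position = int(digest, 16) / 16**32

    if position >= traffic_share:
        return "control"  # not part of the experiment
    # Spread the in-experiment range evenly across the variants.
    index = int(position / traffic_share * len(variants))
    return variants[min(index, len(variants) - 1)]

# The same visitor always lands in the same bucket.
print(assign_variant("visitor-42", ["A", "B"], traffic_share=0.5))
```

Hashing instead of random choice matters: a visitor who reloads the page must not bounce between variants, or the metrics for both versions get contaminated.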

To run split testing using Content Experiments, follow these steps:

  1. Sign in to your Google Analytics account, select the site whose performance you want to check, and open the “Behavior – Experiments” menu.

  1. Enter the URL of the page you want to test in the appropriate form and click the “Start Experiment” button.

  1. Choose a name and an objective for the test. Set the percentage of traffic that will take part in the experiment. Decide whether you want to receive progress notifications by email. After selecting the required options, click “Next”.

  1. Select the page variants involved in testing. Add them to the appropriate forms and click Next.

  1. Create the experiment code. If you don't know how to insert it into the page, select the "Send code to webmaster" option. If the mention of HTML code doesn't make you sweat, select the "Insert Code Manually" option.

Select "Insert code manually" if you know how to handle HTML code

  1. Copy the experiment code and paste it into the source code of the control page, immediately after the opening `<head>` tag. After completing this action, click the “Save Changes” button.

  1. Make sure the testing code is present on the control page and click the “Start Experiment” button. Please note that the code needs to be added only to the control page.

You will be able to evaluate the first test results a few days after the start of the experiment. To monitor test results, select the appropriate experiment in the list and go to the reports page.
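Evaluating the results ultimately comes down to one question: is the difference in conversion rates between the variants statistically significant, or could it be noise? As an illustration of what a reporting tool computes behind the scenes (not tied to any particular service, and the visitor counts here are made up), a two-proportion z-test answers that question:

```python
import math

def z_test_conversions(conv_a, total_a, conv_b, total_b):
    """Two-proportion z-test: is the difference in conversion rates
    between variants A and B statistically significant?"""
    p_a = conv_a / total_a
    p_b = conv_b / total_b
    # Pooled conversion rate under the null hypothesis (no difference).
    pooled = (conv_a + conv_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 200 of 10,000 visitors converted on A; 260 of 10,000 on B.
z, p = z_test_conversions(200, 10_000, 260, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05: the lift is significant
```

A p-value below 0.05 is the conventional threshold for declaring a winner; with fewer visitors the same 30% relative lift could easily fail to reach it, which is why results should not be judged in the first days of a test.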

Ideas whose effectiveness should definitely be tested using split testing

It has been repeatedly noted above that A/B testing helps increase the effectiveness of web pages. For this marketing method to bring results, the marketer must generate ideas that can positively influence certain website metrics. You can’t just pull any changes out of thin air, implement them and test their effectiveness. For example, your site's metrics are unlikely to change if you simply decide to change the page background from blue to light green.

A marketer must see ways to improve pages and understand why they should work. Split testing simply verifies the specialist’s assumptions. However, every marketer sometimes ends up in a situation where all ideas have been tested but the desired result has not been achieved. If you find yourself in this situation, try implementing the following changes and testing their effectiveness:

  • Remove unnecessary fields from the conversion form. Perhaps your potential subscribers do not want to disclose their passport details.
  • Add the word “free” to your conversion page. Of course, the audience knows that subscribing to the newsletter costs nothing. But sometimes this word works real miracles: as the saying goes, even vinegar is sweet when it’s free.
  • Publish a video on your landing page. This typically has a positive impact on a number of metrics, including bounce rate, conversion rate, and time on page.
  • Extend the period during which users can test your product for free. This is a simple and effective way to increase conversions for companies selling software and web services.
  • Experiment with the color of your conversion buttons. In some cases, aggressive red buttons work well. However, sometimes they annoy users. Use an A/B test to find the most effective button color for your site.
  • Promise bonuses to the first 10 or 100 customers (subscribers). Don't rush to delete this promise even after the promotion ends. Many users do not expect to be among the lucky ones, but still subconsciously react to a lucrative offer.

How and why to test different page variations

Split testing allows you to evaluate the effectiveness of changes to web pages. This marketing method has real practical value: it lets you improve pages almost continuously by lifting various metrics.

To test a change, you need to create a new version of the page and save the old one. Both options must have different URLs. After this, you should use one of the services for conducting split tests, for example, Content Experiments. Evaluation of test results can be carried out at least two weeks after the start of the experiment.
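The two-week minimum exists because a test needs enough visitors before a difference can be trusted. A rough sample-size estimate shows why; the sketch below uses the standard formula for comparing two proportions at the conventional 5% significance level and 80% power (the base rate and lift in the example are illustrative, not from the article):

```python
import math

def sample_size_per_variant(base_rate, lift):
    """Rough number of visitors needed per variant to detect a given
    relative conversion lift (alpha = 0.05 two-sided, power = 0.80)."""
    p1 = base_rate
    p2 = base_rate * (1 + lift)
    # Normal quantiles hard-coded for alpha/2 = 0.025 and power = 0.80.
    z_alpha, z_beta = 1.96, 0.84
    pooled = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * pooled * (1 - pooled))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return math.ceil(n)

# Detecting a 20% relative lift over a 2% base conversion rate takes
# on the order of twenty thousand visitors per variant.
print(sample_size_per_variant(0.02, 0.20))
```

The smaller the expected lift and the lower the base conversion rate, the more traffic the test needs, which is why low-traffic sites often cannot reliably test subtle changes at all.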

Do you think it's worth doing A/B tests? When is this marketing method a waste of time?
