The path to a successful service is paved with evaluations and new attempts. To improve performance constantly, you need to test, analyze, and make choices based on the results and the data collected. This optimization process is the key to growing your user base and your sales.
In the first part of this in-depth analysis dedicated to conversion rate optimization, we focused on the first three elements of the process: defining the objectives, formulating hypotheses about UX problems, and designing the changes that can have a positive impact. We now turn our attention to the remaining three actions of our iterative cycle.
4. Test an alternative design and compare conversion results
To understand whether our hypothesis is well-founded and whether our new design can generate a higher number of conversions, we need to test it. To compare the effectiveness of layouts that differ by a single element (for example, the appearance or the label of a button) or by a page component (for example, the footer), we can use an A/B test.
This type of test measures and compares the conversion rate of the original version against that of a variation incorporating the improvements identified by our research hypothesis.
Thanks to the spread of online tools that make it possible to run one even at zero cost and without touching the page code, the A/B test has become an indispensable tool for conversion rate optimization. Services like Optimizely, Google Optimize, and VWO help create variations of the original page of a site or an app by simply selecting the element to be modified and changing the text or the design of the page. Once the test starts, the system randomly redirects incoming traffic, making sure new users are distributed between the two page layouts under test.
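As a rough illustration of how such a tool can split traffic consistently, here is a minimal sketch (all names are hypothetical, not taken from any of the services above) that assigns each user to variant A or B by hashing a stable user identifier, so that a returning visitor always sees the same layout:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "button-color") -> str:
    """Deterministically assign a user to variant A or B.

    Hashing the user id together with the experiment name keeps the
    assignment stable across visits while splitting traffic roughly 50/50.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# A returning user always lands on the same layout:
print(assign_variant("user-42"))  # same result on every call
```

Hashing on the user id rather than picking at random per request matters: if the same person bounced between layouts, their behavior could no longer be attributed to either variant.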
The A/B test has a lot of potential. One of the most interesting possibilities is testing changes only for certain segments of the audience: returning users, those using a particular browser or device, users who have already purchased, and so on. However, the A/B test is not without critical issues. It is often impractical for very small websites because it requires a sufficient volume of traffic to guarantee the statistical significance of the results.
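To get a feel for the traffic volumes involved, here is a sketch of the standard two-proportion sample-size approximation (a textbook formula, not tied to any specific testing tool), estimating how many users each variant needs in order to detect a given lift with roughly 95% confidence and 80% power:

```python
import math

def sample_size_per_variant(p_base: float, p_target: float,
                            z_alpha: float = 1.96,  # two-sided alpha = 0.05
                            z_beta: float = 0.84    # power = 0.80
                            ) -> int:
    """Approximate users needed per variant to detect a lift
    in conversion rate from p_base to p_target."""
    p_bar = (p_base + p_target) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_base * (1 - p_base)
                                      + p_target * (1 - p_target))) ** 2
    return math.ceil(numerator / (p_target - p_base) ** 2)

# Detecting a lift from 2% to 3% conversion takes a few thousand
# users per variant; smaller lifts require far more traffic.
print(sample_size_per_variant(0.02, 0.03))
```

This is why the technique is hard to apply to low-traffic sites: the smaller the improvement you want to detect, the more visitors each variant must receive before the result means anything.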
It also has an impact on SEO that must be considered: while search engines are tolerant of A/B and multivariate tests and do not normally penalize the websites that run them, some precautions must be taken so that Google does not conclude the site is "cloaking", that is, using the technique of showing search engines content different from what is actually on the page in order to obtain a better position in the SERP.
5. Analyze the results and implement the winning choice
Once the number of users required to guarantee the validity of the test is reached, it is time to analyze the results. The key parameter of the A/B test is obviously the comparison between the conversion percentages of the two variants: a better conversion rate from the alternative layout suggests implementing the new design, while a worse one could push us to develop new research hypotheses and test new solutions.
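One common way to check that the difference between the two conversion rates is real and not just noise is a two-proportion z-test, which most testing tools run for you behind the scenes. A minimal sketch, with invented numbers:

```python
import math

def z_test_conversions(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: 200/10,000 conversions for A vs 260/10,000 for B
z = z_test_conversions(200, 10_000, 260, 10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 indicates significance at the 5% level
```

If |z| stays below the threshold, the honest conclusion is "no detectable difference yet", not "B wins by a little".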
But even if the test shows an increase in the conversion rate, it is not necessarily best to implement the change immediately. If, for example, the increase in the conversion rate of a single product page does not translate into an increase in sales, the problem could lie elsewhere in the funnel. In that case, it might be wise to test changes on other pages of the funnel before moving on to implementation.
The potential of the A/B test does not end with the simple comparison between design B's ability to generate conversions and design A's. The systems that run these tests offer other significant data, which can be analyzed to draw further conclusions and generate new hypotheses. Besides the primary metric, the conversion rate, it can be very useful to measure other performance parameters depending on the type of site being tested: for an e-commerce site, it is important to take the average order value into account; for a blog or an information site, the number of pages viewed per session should be analyzed.
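A secondary metric like the average order value is easy to derive per variant from the raw order data; the figures below are invented for illustration:

```python
# Hypothetical order totals (in EUR) recorded during the test, per variant
orders = {
    "A": [35.0, 60.0, 45.0],
    "B": [30.0, 30.0, 40.0, 44.0],
}

def average_order_value(order_totals: list[float]) -> float:
    """Average order value: total revenue divided by number of orders."""
    return sum(order_totals) / len(order_totals)

for variant, totals in orders.items():
    print(variant, round(average_order_value(totals), 2))
# In this invented scenario B collects more orders but each is worth less,
# so a higher conversion rate alone would be misleading.
```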
Using the data segmentation features of A/B testing tools is equally fundamental: by exploiting data about the device, the context of use, or the user's behavior, it is possible to analyze how the conversion rate changes for a single segment of the audience, such as returning visitors, iOS users, or people located in Milan.
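The same per-segment breakdown can be reproduced from raw session logs. A minimal sketch, assuming an invented dataset where each row records the variant shown, the visitor's segment, and whether the session converted:

```python
from collections import defaultdict

# Hypothetical session log: (variant, segment, converted)
sessions = [
    ("A", "iOS", True), ("A", "iOS", False), ("A", "desktop", False),
    ("B", "iOS", True), ("B", "iOS", True), ("B", "desktop", False),
]

def conversion_by_segment(rows):
    """Conversion rate for every (variant, segment) pair."""
    totals = defaultdict(lambda: [0, 0])  # key -> [conversions, sessions]
    for variant, segment, converted in rows:
        totals[(variant, segment)][0] += int(converted)
        totals[(variant, segment)][1] += 1
    return {key: conv / n for key, (conv, n) in totals.items()}

for key, rate in sorted(conversion_by_segment(sessions).items()):
    print(key, f"{rate:.0%}")
```

A breakdown like this can reveal that an overall "win" is actually driven by a single segment, which in turn suggests the next hypothesis to test.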
6. Repeat the process to constantly improve performance
The work of improving the conversion rate is never over. For this reason, every test, whether successful or not, is always the first step toward elaborating new hypotheses and adapting the optimization strategy to the new scenario.
As mentioned in the first part of this in-depth analysis, it is good practice to take note of all the experiments conducted and their results, to ensure the testing process is structured and does not proceed randomly or on the hypothesis of the moment. Planning is especially useful when the sequence of experiments is aimed not at making specific changes but at an overall redesign.
Regardless of the UX redesign approach, however, conversion rate optimization should always proceed through iterative cycles: from analysis to hypothesis, from hypothesis to alternative design, from alternative design to test, and from test to choices based on the data.