Ricardo developed its mobile experience in a user-centric way, with many rounds of testing. Previous user interviews had shown that the web bidding step of the purchasing journey had clear potential for improvement. That is why it received the highest priority (RICE score) among all backlog opportunities. With this in mind, the Purchasing Team at Ricardo started ideating and implementing A/B tests. UX Writer Yuliya Denysenko at Ricardo talked about this project and gave us an insight into how she defined Ricardo’s testing practices and goals.
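As a rough illustration of how RICE prioritisation works (the numbers below are hypothetical and not Ricardo’s actual scores), the score is calculated as Reach × Impact × Confidence ÷ Effort, and the backlog item with the highest score is tackled first:

```python
# Hypothetical RICE scoring sketch; the figures are illustrative, not Ricardo's data.
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort

# Example backlog items with made-up values.
backlog = {
    "Improve web bidding flow": rice_score(reach=8000, impact=2.0, confidence=0.8, effort=4),
    "Redesign seller dashboard": rice_score(reach=3000, impact=1.0, confidence=0.5, effort=6),
}

# The opportunity with the highest score gets picked up first.
for name, score in sorted(backlog.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.0f}")
```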
Goal Setting
To identify starting points for the A/B tests, the team referred to past reports containing user research insights; they had previously researched the purchasing experience of Ricardo users. After reviewing the UX research reports, the team identified two main areas that could be improved:
- User Interface (UI) related issues
- Aspects of the auction feature, which users didn’t seem to understand
The purchasing team put everything on a Miro Board and started ideating how to improve the bidding flow. Their goal was to solve the central problems that had been identified and to make the bidding flow more intuitive. To gain inspiration, the team examined how competitors approached the bidding phase within their platforms’ user journeys. They also compared the UI of Ricardo’s own web version with that of the new mobile app. Based on this ideation process, they created several designs for a new bidding flow that they believed would solve the identified problems.
A/B Testing? A/B/C/+ Testing!
To verify the improvements in these new designs, they conducted A/B tests. This was easier said than done! The solutions they had identified covered multiple areas: the copy, the step sequence, and the bidding flow itself. This meant that testing wasn’t limited to one simple change whose performance could be compared against the current version. The team had to verify a complex mix of ideas, in multiple combinations, to produce meaningful testing results.
The complexity of the project sparked many discussions about what should be tested first and which solutions should be combined in each of the tests.
Yuliya regards this as the biggest challenge of the project. She also didn’t know whether to A/B-test all UX writing changes across all steps at once or to implement the wording changes in small increments. Eventually, the team decided to mix UI and copy changes into six different variations for A/B/C/+ tests. This approach was meant to test every possible combination of the changes applied to the identified problem areas simultaneously. However, it made the testing process intricate, complex to define, and laborious to set up.
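To illustrate how quickly such combined tests grow (a hypothetical sketch, not Ricardo’s actual test matrix), enumerating every combination of even a few copy and UI changes already produces a large number of variants to define and interpret:

```python
from itertools import product

# Hypothetical change sets; the names are illustrative, not Ricardo's real variants.
cta_copy = ["Gebot abgeben", "Bieten"]
field_titles = ["current titles", "reworded titles"]
flow_layout = ["current flow", "simplified flow"]

# Every combination becomes its own test variant, so the matrix grows multiplicatively.
variants = list(product(cta_copy, field_titles, flow_layout))
for i, combo in enumerate(variants, start=1):
    print(f"Variant {chr(64 + i)}: {combo}")

print(f"{len(variants)} variants to define, set up, and interpret")
```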
A/B Testing: First Results and Insights
The first A/B test showed that the simultaneous approach was not a good idea. After running the test for two weeks, there was no conclusive result, and there was no way of telling which variation was or wasn’t working. The first round of tests led to the conclusion that not everything could be tested at the same time: the more specific the A/B tests were, the more conclusive they proved to be. This led to the team’s first takeaway, which changed their testing approach: test in smaller steps. The second learning was that if a test is not fully conclusive, qualitative research can help understand the why.
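As a rough sketch of what “conclusive” means here (with hypothetical numbers, not Ricardo’s data), a two-proportion z-test can indicate whether an observed difference in conversion between two variants is likely real or just noise:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical two-week results: control vs. one of many simultaneous variants.
p_value = two_proportion_z_test(conv_a=410, n_a=5000, conv_b=432, n_b=5000)
print(f"p-value: {p_value:.3f}")  # well above 0.05 here, so the result is inconclusive
```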
New A/B Testing Approach
After adapting their testing approach, the first “small” A/B test focused on the wording of the bidding CTA button. The user research had made it clear that users were not sure whether clicking this button would already confirm their bid. In this round, they tested the status quo CTA “Make a bid” (Gebot abgeben) against a new term, “Bid” (Bieten). This small but very specific test on the CTA came back with conclusive results in favour of the new term.
With this successful example of the new testing approach, the team continued the A/B tests with a focus on other microcopy such as field titles (“Your next bid” and “Your auction limit”). The tests showed that the microcopy was not understandable to the user, which inspired the team to conduct qualitative tests to understand the why. The qualitative tests showed that the problem didn’t lie in the wording itself but in the concept of an “auction limit”, which was not clear to the user.
During the process of iterating and testing countless variations, the team eventually came to a realisation that simplified their decision-making: they needed to keep their focus on the Purchasing team’s KPI, which was increasing the overall bidding conversion rate. Their goal was not to collect more bids per user or to reach a higher winning price. Remembering this helped them conclude that the key priority was to make it as simple as possible for users to place their first bid. Keeping this specific goal in mind enabled the team to refine designs and conduct more focused A/B tests.
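Read roughly, the bidding conversion rate is the share of auction visitors who place at least one bid (a simplified, assumed definition; Ricardo’s exact metric may differ):

```python
# Simplified, assumed definition of the KPI; Ricardo's exact metric may differ.
def bidding_conversion_rate(users_who_bid: int, users_who_viewed_auction: int) -> float:
    """Share of auction visitors who placed at least one bid."""
    return users_who_bid / users_who_viewed_auction

print(f"{bidding_conversion_rate(1200, 15000):.1%}")  # e.g. 8.0%
```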
To Be Continued
Finally, Yuliya and her team defined two different variations of the whole bidding flow: an updated version of the current system based on the previous tests, and a new, simplified design. This test is now ongoing and should be more conclusive. But the work doesn’t end here. After this test, the team already has plans to introduce further improvements to the bidding flow little by little and to continuously improve the quality of the purchasing journey.
Learnings So Far
- Keep it simple! For the analysis, it is easier to focus on one change at a time. The more specific the A/B tests are, the more conclusive the results can be.
- Remember the goal. It is important to keep the aims of your tests in mind. In this case: Which KPI do I want to impact with the solutions I am testing?
- If in doubt, ask a user. Involving qualitative research helps understand the why.
Our Expert
Yuliya Denysenko
UX Writer at General Marketplaces