This story was originally published on Adevinta Tech Blog
As the Covid-19 pandemic forced us all to work from home, user researchers, UX designers and product managers had to look for tools and methods to run user tests, interviews and workshops remotely.
As we have users across three continents, we’ve been trialling remote user testing for several years. But we’ve had to scale and adapt processes and tools to make sure we can continue to test new design decisions and incorporate users’ feedback during 2020.
A standard user-centred design process is not new in our industry — but what is new is that the world has significantly changed over the last ten months. In this article, I’d like to share my experience of remote user testing within Adevinta and how it has become part of our product development process.
First, to give you a bit of context, let me introduce you to Serenity. It’s the solution provided by Adevinta’s Trust & Transactions tribe to help marketplaces moderate user-generated content. It’s used by fourteen marketplaces across Adevinta.
In the Serenity team, we follow a research-driven process. Each user-facing decision is based on research and tested with our users. This way, we put the user first and make sure the decision is understood, doesn’t negatively impact the moderation process, fulfills the business needs and solves the actual user problem.
Why we test remotely
The value of remote user testing is sometimes underestimated by researchers. It’s true that in-person testing is needed for some studies as it allows us to observe participants first hand, meaning we can see their body language and other physical cues. However, there are many benefits of remote user testing.
Serenity adopted remote user testing three years ago as it was important for us to get feedback and learnings from the internal users of all of our marketplaces (e.g. customer service agents, content moderators and fraud analysts). This way, we could ensure that our validated hypotheses and designs work for all of our users. As our marketplaces are spread over several continents, we swiftly realised that not all user tests could be done in-person.
There are three types of remote testing: moderated, unmoderated and quantitative. Let me briefly explain the difference between them:
- Moderated: this method is the most similar to in-person testing. The moderator and participants interact in real time, and the moderator observes the participants trying to complete tasks in the prototype. The benefit of this method is that follow-up questions can be asked while the participants are progressing. It yields valuable qualitative feedback and gives a better understanding of what the participants are thinking.
- Unmoderated: unmoderated tests are completed by participants at their own pace and in their own time, without the involvement of a moderator. The participants follow a predefined script, and the audio and/or video of the test is recorded with specialised software such as UserTesting, Lookback or Playbook UX. The downside of these tests is that users don’t have real-time support if they have a question, need clarification or can’t get the software to work.
- Quantitative: when we want to know more about the ‘what’ and ‘how many’, we use quantitative methods (rather than the qualitative methods listed above, which gather insights on the ‘how’ and ‘why’). Quantitative tests are always unmoderated and focus on specific usability metrics and KPIs such as time spent on task, success or completion rate and satisfaction. They are ideal for measuring usability over time and making data-informed decisions.
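To make the quantitative metrics above concrete, here is a minimal sketch of how completion rate and time on task might be computed from test results. The data and field names are purely illustrative, not from an actual Serenity study:

```python
from statistics import mean

# Hypothetical results from an unmoderated quantitative test:
# each record notes whether the participant completed the task
# and how long they took, in seconds.
results = [
    {"completed": True,  "seconds": 42},
    {"completed": True,  "seconds": 65},
    {"completed": False, "seconds": 120},
    {"completed": True,  "seconds": 51},
]

# Completion rate: share of participants who finished the task.
completion_rate = sum(r["completed"] for r in results) / len(results)

# Time on task, averaged over successful attempts only.
avg_time_on_success = mean(r["seconds"] for r in results if r["completed"])

print(f"Completion rate: {completion_rate:.0%}")                    # 75%
print(f"Avg time on successful tasks: {avg_time_on_success:.1f}s")  # 52.7s
```

Tracking these numbers across test rounds is what makes it possible to measure usability over time rather than relying on one-off impressions.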
As a team, we’re big fans of moderated remote user tests as we’re looking for valuable qualitative feedback. We’re also able to remind participants of the think-aloud protocol and can ask them to talk us through what they’re doing while they’re doing it.
This is particularly important for us as Serenity replaces legacy tools used to fight fraud and moderate content. In the tested design proposals, we make decisions that impact the speed and quality of manual moderation or the efficiency of module configuration. It’s therefore of great importance that new patterns, features and mental models are not only understood by the user, but also considered as useful, usable and valuable.
Our setup for remote moderated testing is fairly basic. We make use of the following tools:
- Google Docs: first, we create a user test plan in a Google Doc. This plan outlines the objective, problem statement, hypothesis and research questions, along with the script template. The plan becomes a living document as the participants’ feedback is directly placed into the Doc. This makes other documents redundant and saves us time when synthesising learnings afterwards.
- Google Meet: for video calls, we use Google Meet. This tool is widely adopted internally and means we avoid hiccups that might occur when using other software such as Lookback. The participants are asked to share their screen and turn on the camera, so we can see both the actions they take and their reactions.
- Invision: the participants are asked to open an Invision link that holds the prototype being tested. In this prototype, we usually place two or three scenarios that we want to test.
- QuickTime: the user test is recorded by the facilitator using QuickTime. This enables us to refine the notes taken during the test and to look back at some key moments.
Pros and cons of remote user tests
There are benefits and drawbacks of moderated remote testing. Let me start with the benefits:
- Time savings: we don’t have to travel to meet the participants and there’s no need to arrange specific facilities.
- Cost savings: testing remotely is a great solution for teams with limited budgets.
- Increased coverage: we can test designs with users who are geographically scattered. We can test a prototype with employees in Mexico, France, Spain and Belarus in the same week!
Of course, there are drawbacks as well:
- It’s not in-person: it’s therefore hard to see participants’ body language and certain expressions.
- It creates a barrier: the physical aspect of being together helps the participants feel comfortable and leads to deeper conversations.
- No control over the setup: as we’re not in the same location, we can’t control the setup of the test. There are a variety of interruptions that can occur, such as barking dogs, bad video call connections, frozen screens and traffic accidents outside (yes, unfortunately we experienced that a few months ago during a test).
Some years ago, the Serenity team embedded remote user testing in its product development process. The biggest benefit we get from running remote user tests each month is that it helps us connect lean principles with Agile development:
- We innovate through customer insights
- We iterate quickly
- We stop guessing and stay research-driven
Conclusion: start remote testing!
The current Covid-19 crisis has forced us to rethink our way of living, working and interacting with each other, but this doesn’t mean user research has to stop. We do however need to think carefully about what we do and how we do it. As researchers, designers and product managers, we have a responsibility towards the participants of our tests while also having to take care of our own health and wellbeing.
Remote user testing has helped us continue to collect and validate insights, feedback and needs in an iterative way. And in these times, the benefits of testing remotely always outweigh the disadvantages.
Would you like to know more?
Feel free to contact me if you have any questions or if you would like to know more about this topic: Chris.email@example.com
Illustrations by Katerina Limpitsouni © 2020