By Melanie Brewer
Have you ever heard that you shouldn’t get stock tips from your shoeshine boy?
This phrase hearkens back to the Great Depression era, when a big tipoff that a crash was coming came in the form of shoeshine boys trading in stock tips. The moral is: when your shoeshine boy is a stock expert, there's something funny going on.
Similarly, when even your test participant is a UX expert, something could be amiss.
When you hear test participants spouting terms and phrases more often heard from UX professionals (phrases like "top-level navigation," "I expected to see…," and "I'll just check my avatar"), or see them blazing through your test like they've done it all a million times, it could be a clue that they are not so average after all. In fact, they might be far from it. The ease and availability of unmoderated, remote testing has increased the UX field's exposure to such "Expert Participants."
It’s tricky to find numbers on how often participants on sites like UserTesting or WhatUsersDo take part in tests, but at least one site states in its participant recruitment section that users will participate in 3-5 tests per month. That’s up to 60 tests in a year. It’s easy to see how some of them become an “Expert Participant” pretty fast. Some even advertise their “expertise” in their online handles, with names like UXExpert or UserDesignExpert.
So is that a problem? Yes and no.
One good thing about Expert Participants is they are often great at remembering to think aloud and express their experience articulately. Where less-experienced individuals might clam up or read verbatim what's on the screen, an Expert Participant knows to tell you what they are thinking and why they're taking certain actions.
A downside of Expert Participants, though, is that their results can skew a site toward appearing more usable than it really is. They have honed their skills and been shaped by prior test sessions; some testing tasks may even have been repeated across multiple tests. In short, these users may bring prior knowledge to your study. Sometimes that matters, sometimes it doesn't.
So what does it mean for the UX professionals who run across such participants? What are your options?
Screen Like a Southerner During Mosquito Season.
The first line of defense, in situations where Expert Participants would be particularly detrimental, is to explicitly screen them out of the site's participant panels. Probe for prior testing experience in your screener, during the test session, or in a questionnaire at the end. Take note of anyone with a handle suggesting they are an Expert Participant, and consider screening them out too. Sorry, UXMaster.
Another option is to recruit and screen your own participants from scratch, avoiding the panels altogether. UserTesting recently released a feature called MyRecruit to help you connect with your own users. You can draw on your existing subscriber or customer base, or use other sites and recruiters that connect you to target users (usually for a fee).
Throw Out the Baby and the Bathwater.
If you aren’t able to screen out an Expert Participant for some reason and you wind up with a session that seems biased, you can, of course, just toss the data after the fact. This is the best policy if, while watching the recording, you get the sense that the user is working through your test more like they’re QA-ing the system than using it naturally. Better to toss it than wonder whether it’s skewing your results.
Use the High Altitude Insights.
Keep in mind that Expert Participants are not always a bad thing. They can be vocal and articulate, and they can provide really helpful insights--especially when it comes to finding errors and problems. This is because testing Expert Participants is a bit like training at “high altitude.” If something is challenging or difficult to navigate for an Expert Participant, you can be fairly confident you have identified something that will be a problem for “normal” users. In fact, it will be nigh unto impossible for your grandma or your neighbor.
Looking to the Future.
As testing platforms add features for bringing your own users into studies, the remote usability testing industry is giving the UX field a good way to test the right people. It would also be great if the industry adopted standards, such as consistently publishing stats on its participants (e.g., the number of tests they’ve completed).
The convenience and speed of remote unmoderated testing sites, and the usefulness of their results, have made user testing that much easier and more affordable. That’s making websites better and raising usability overall -- even if we do run across the occasional SuperNinjaUXMaster.