This article was originally featured on House of Test’s blog at www.houseoftest.se
The short answer: A lot.
In December 2015 I was called to the offices of a project that had gone live with its website the month before. On the website, customers could search for and buy products that could be sent to other people via both physical and virtual shipping channels. The website had been in development for over a year, and it had to be online for Christmas, but the team had a feeling that there were issues in some critical areas. The project did not have any formal test resources attached when I met them; instead it relied on issues found by the developers and designers. In short, they wanted me to test the “user experience” of the website on different platforms and devices.
Testing user experience
“How do you test user experience?”, I hear you ask. That’s a good question, and there’s really no easy answer to it. How you test user experience depends on your tester, your project, and what you want to get out of the test sessions. In this specific project it was important that visitors had the same experience with the website regardless of which browser or mobile device they used. So obviously, I needed to look into whether there were unwanted differences across platforms.
Furthermore, since I didn’t have a lot of time, I needed to lean on my own skills. Luckily I have quite a lot of experience with usability design, so I wanted to look into the usability of the site, since in this project it was closely tied to the “user experience”. There are many ways to approach usability, but I like working from a simple recipe that involves the following five keywords:
- Learnability (How easy is it for users to accomplish basic tasks the first time they encounter the design?)
- Efficiency (Once users have learned the design, how quickly can they perform tasks?)
- Memorability (When users return to the design after a period of not using it, how easily can they reestablish proficiency?)
- Errors (How many errors do users make, how severe are these errors, and how easily can they recover from the errors?)
- Satisfaction (How pleasant is it to use the design?)
When you can answer these questions, you have a pretty good idea of how visitors experience the website.
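To make that a bit more concrete, here is a minimal sketch (my own illustration, not tooling from the project) of how a quick rating against the five keywords could be recorded per test session, so that sessions on different browsers and devices can be compared:

```typescript
// Minimal sketch (hypothetical): record a quick per-session rating for each
// of the five usability keywords, plus free-text notes, so sessions run on
// different browsers and devices can be compared side by side.

type Rating = 1 | 2 | 3 | 4 | 5; // 1 = poor, 5 = excellent

interface UsabilitySession {
  device: string;        // e.g. "Android phone / Chrome"
  task: string;          // the task the participant tried to complete
  learnability: Rating;
  efficiency: Rating;
  memorability: Rating;
  errors: Rating;        // how well the user avoided and recovered from errors
  satisfaction: Rating;
  notes: string[];
}

const session: UsabilitySession = {
  device: "Android phone / Chrome",
  task: "search for a product and complete checkout",
  learnability: 4,
  efficiency: 3,
  memorability: 4,
  errors: 2,
  satisfaction: 3,
  notes: ["Date picker icon looks clickable but is not"],
};

// A simple average gives a rough per-session score for quick comparison.
const score =
  (session.learnability +
    session.efficiency +
    session.memorability +
    session.errors +
    session.satisfaction) /
  5;
console.log(`${session.device}: ${score.toFixed(1)} / 5`);
```

Even a rough sheet like this makes it obvious when one browser or device consistently scores lower than the rest.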
The plan
In order to do as much testing as possible in only 2 days, I knew I had to follow a strict plan.
Test cases were out of the question. I didn’t have time to write lengthy guidelines, and I didn’t know the system, so how would I be able to write guidelines about looking in specific areas for specific issues? Furthermore, the company put their faith in my skills, so I didn’t have to prove what I did down to the last detail; they could read about what I did and didn’t do in the test plan and test report. In short, test cases would have been a waste of time. Instead, I went with testing any page and function I could find, trusting my skills and intuition. I traced my test steps in a mind map tool that would later do double duty as an easy way to go through the found issues with the project group.
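As a side note, the kind of record I ended up with per issue can be sketched roughly like this; the structure below is hypothetical and only for illustration, since the actual notes lived in the mind map:

```typescript
// Hypothetical sketch of an issue record. In practice the notes lived in a
// mind map, but the same fields make it easy to walk through findings with
// the project group afterwards.

type Severity = "critical" | "major" | "minor" | "suggestion";

interface Issue {
  id: number;
  page: string;     // where on the site the issue was seen
  platform: string; // browser or device
  severity: Severity;
  description: string;
}

const issues: Issue[] = [
  {
    id: 1,
    page: "Category menu",
    platform: "Android / Chrome",
    severity: "critical",
    description: "Category menu does not show when the menu button is clicked",
  },
];

// Grouping by severity makes the review session with the project group faster.
const bySeverity = new Map<Severity, Issue[]>();
for (const issue of issues) {
  const bucket = bySeverity.get(issue.severity) ?? [];
  bucket.push(issue);
  bySeverity.set(issue.severity, bucket);
}
console.log(bySeverity);
```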
After some thinking and planning, I ended up with a test plan. It’s quite possibly the shortest test plan I’ve ever seen. It’s actually just an image. And yeah, you can make test plans that the average mortal can read.
Do testing that fits the situation and you
“Can you do all these tests in 2 days?!” you might think to yourself. Yes and no, I say! With just 2 days, it’s obvious that you can’t get to the bottom of every aspect of every method. I did, however, go through with every method in the plan.
On day 1 I contacted a lot of people through my preferred social media and asked for their help with the first-click test. After setting up and starting the first-click test in Google Analytics, I left my home to ride around Copenhagen for the in-the-wild testing on some of the mobile devices chosen for the test. During this trip I visited two focus group test participants, who went through the website with me and pointed out issues that I never would have seen. After that I returned home and tested on every browser I could get my hands on. End of day 1.
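For reference, a first-click test can be instrumented with analytics tooling roughly as in the sketch below. This is a hypothetical example using gtag.js click events, not the project’s actual setup, and the task and parameter names are made up for illustration:

```typescript
// Hypothetical sketch of instrumenting a first-click test with gtag.js.
// The idea: for a given task, record only the FIRST element a visitor
// clicks, so you can later check whether first clicks land where expected.

declare function gtag(
  command: "event",
  eventName: string,
  params: Record<string, unknown>
): void;

let firstClickSent = false;

document.addEventListener(
  "click",
  (e: MouseEvent) => {
    if (firstClickSent) return;
    firstClickSent = true;

    const target = e.target as HTMLElement;
    gtag("event", "first_click", {
      // Custom parameters; the names are illustrative only.
      task: "find the shipping options",
      element: target.tagName,
      element_id: target.id || "(none)",
      element_text: (target.textContent || "").trim().slice(0, 50),
    });
  },
  { capture: true } // capture phase so overlays cannot swallow the click
);
```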
On day 2 I did an expert review of the website, had lunch, and then went out to visit the last of the focus group test participants. After returning home I went through the remaining mobile devices, and after dinner I visited a friend for help getting to the bottom of some technical issues on a few of the devices. End of day 2.
I admit, I had the best circumstances for doing these exact tests (and that’s why I chose them). I was not part of the project team and did not have any preconceived ideas about how the system I tested looked and functioned. That made it possible for me to do an expert review of the website; had I been on the project from the beginning, I probably would have missed a lot of the issues I found in that review. Had I been part of the project, I could not have used my friends and family as test participants, because I would have been invested in the system and could potentially have influenced them while testing. If I had been required to work from an office, I couldn’t have involved the focus group participants, since most of them were busy students studying for their exams. If I hadn’t done a LOT of focus group interviews back in the day, I wouldn’t have been able to do them as quickly as I did. And so on.
And that’s why I chose these methods: they suited the project and my test situation. I could’ve planned this in many ways, but ultimately chose what I knew would work.
The result and the moral
Testing is not about finding as many defects as possible. The number of defects found in a system says nothing about its quality, because the severity of a defect is judged very differently by different people and different parts of the project. I will however reveal that in 2 days, I registered 67 unique issues. These ranged from very critical ones such as “Category menu does not show when the menu button is clicked”, to the maybe not-so-critical ones like “In the box where I choose the date, the symbol for the date looks like something that can be clicked, but it can’t”, and all the way to the more suggestive ones like “I would like a ‘cancel’ button on the video upload”. But what’s more important than the number of issues found: the customer was very happy to get another angle on the website and its current state. Not the angle of a developer or a designer, but that of a professional tester and of potential future customers. And as it turned out, there certainly were critical issues in some areas that ought to work seamlessly.
So in short: sometimes you don’t need a lot of planning and test documents; sometimes an image is all it takes to get an overview of the test process. Testing doesn’t have to be expensive, tedious, or repetitive. Sometimes you shouldn’t overthink testing; just jump into it and trust your tester. Professional testers will probably find issues that the project team doesn’t notice, in areas others would never even think to look.