Testing doesn't have to begin after the code has been written. In this column, Jeff Patton resurrects one of the oldest and most overlooked development techniques, which can be used to test a product before any piece of it materializes.
Bill, the business stakeholder, gets a pained look on his face. "Yes, this software looks great, but it really stinks to use." As the lead developer looks on, Laura, the business analyst, explains, "Well, you reviewed the user interface before we built it. We all agreed that it would work, right?" Starting to sound frustrated, Bill says, "Yeah, I know I did. We all agreed it seemed OK, but now that I use it, there are some things that really need to change before we can go live."
If you're in Laura's shoes, it feels like you did all the right things. You designed the user interface collaboratively with the user. Everyone felt good about it. You built it, and with a little help from a good visual designer, it turned out sexier than you'd expected. Your testing department tested it thoroughly, and you know there are no bugs. So why do you have to go back now and fix it? There's one very important test you likely forgot.
Functional Testing Isn't Usability Testing
In most software projects we're concerned with building functionality as specified, then making sure it works as specified—without bugs. But software that doesn't blow up isn't the same as software that's well suited to the tasks that users need to perform. Usability testing attempts to verify that the product is suitable to task by measuring exactly that.
Given some test subjects and a list of the tasks that users need to perform, we can measure usability by asking each subject to attempt those tasks and recording the following:
- Completion rate: How many users completed the tasks?
- Performance: How quickly were users able to complete the tasks?
- Errors: How many errors or missteps did users make while completing the tasks?
Improving usability means increasing completion rate and performance while decreasing errors.
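These measures are simple enough to tally by hand, but to make them concrete, here's a minimal sketch (in Python, not part of the paper prototyping technique itself) of how observer notes for one task might be rolled up into completion rate, time on task, and error counts. The Observation record and summarize function are hypothetical names invented for illustration.

```python
# Minimal sketch: rolling hypothetical observer notes for one task
# into the three usability measures described above.
from dataclasses import dataclass

@dataclass
class Observation:
    subject: str      # who attempted the task
    completed: bool   # did the subject finish the task?
    seconds: float    # time spent on the task
    errors: int       # missteps the observers counted

def summarize(observations: list[Observation]) -> dict:
    """Compute completion rate, average time to complete, and average errors."""
    finished = [o for o in observations if o.completed]
    return {
        "completion_rate": len(finished) / len(observations),
        "avg_seconds_to_complete": (
            sum(o.seconds for o in finished) / len(finished) if finished else None
        ),
        "avg_errors": sum(o.errors for o in observations) / len(observations),
    }

# Three test subjects attempting the same task (made-up numbers).
notes = [
    Observation("subject 1", completed=True,  seconds=95,  errors=1),
    Observation("subject 2", completed=False, seconds=240, errors=4),
    Observation("subject 3", completed=True,  seconds=120, errors=0),
]
print(summarize(notes))
# prints roughly: completion_rate 0.67, avg_seconds_to_complete 107.5, avg_errors 1.67
```

An improved design should push the first number up and the other two down.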
Usability testing is often neglected in software development because it's considered time consuming or expensive to perform. Under ideal circumstances, software is tested in a usability lab: Finished software or a high-fidelity prototype is developed; appropriate test subjects are rounded up; a lab environment is set up with a camera and recording devices; and a facilitator helps guide users through completing tasks, while observers watch via camera or through one-way glass and take notes. This whole process takes days or weeks on top of the time it takes to build the working software or prototype to test. Even when we know the risks of skipping usability testing, the difficulty of this "perfect" approach tempts us to work hard to avoid it or to justify not needing it.
But wait, there's an easier way. One of the oldest and most overlooked development techniques is paper prototyping.
Building a Paper Prototype Is Simple
Start with a blank sheet of paper to use as your computer screen; 11" x 17" (tabloid size) works best. Get index cards, pencils, scissors, markers, transparency film, and (very necessary) repositionable, double-sided tape. Sketch each component of your user interface on an index card or heavy cardstock, and cut it out. Stick it to your user interface using repositionable tape. For lists, lined cardstock works well. If your list contains data and you can print it from a spreadsheet or some existing piece of software, do that, cut it out, and stick it on. When you're done you'll have a componentized paper prototype, which is better than a plain old drawing on paper. I call it componentized because you can easily pull off each independent component, move it around, modify it, or replace it outright.
Usability Testing a Paper Prototype Is Simple
Now that you have a componentized paper prototype, you're ready to run some simple usability tests.
Start by identifying the tasks you'd like users to perform. Make sure you choose reasonable-size tasks—something that an end-user would likely sit down and try to accomplish in a single sitting. Tiny bits of functionality are difficult to test. Larger sets of functionality are time consuming to test. Make sure you've prototyped enough of the user interface to support the tasks you'd like users to perform.
Identify your test team. Ideally you'll need at least three people: a facilitator, one or more observers, and someone to play the important role of the computer.
Identify your test subjects. If you're very early in your software design, run down the hall and grab a coworker—preferably someone not on your team who isn't familiar with the software. If your design is more mature, you may need more subjects (three to five is good), preferably people who are similar to or who will be the target users of your software.
The facilitator starts the test by inviting the test subject in and explaining that the subject will be testing a very early prototype of software being designed:
"In this type of testing, the user interface is constructed from paper and will be controlled by using your finger as the mouse to select or click things. If you need to enter information, grab a pen and write it directly on the transparency film placed over the user interface. While you're using the software, please think out loud. Tell me what you see. Before you click a button, tell me that you're about to click it and what you expect to happen." The facilitator then explains what the software does and asks the user to start by performing the first task. The facilitator doesn't explain how to perform the—that's the part we're measuring.
As the user clicks paper buttons by touching a finger to the paper prototype, the person playing the computer role kicks in and moves parts of the user interface as needed. The "computer" might add information to the screen, change screens, or place dialogue boxes over the screen—anything an actual computer might do, just a bit slower and with some paper shuffling.
While this is going on, the observers watch quietly and take notes about what they see. They pay careful attention to note any missteps or errors the user makes. When the user has completed his tasks or run aground trying, the facilitator stops the test. At this time the observers can ask questions if they have any. The facilitator then thanks the test subject and sends him on his way.
You've Just Performed Wizard of Oz-Style Usability Testing
It's called Wizard of Oz-style testing because your user interface is controlled by a person playing the computer, just as the wizard in the film was actually controlled by the man behind the curtain. Wizard of Oz-style usability testing doesn't catch all usability issues, but you'll be amazed by how many it does catch. And doing usability testing early will catch many issues long before you write a single line of code. By spending a few hours paper prototyping and testing, I've saved days' or weeks' worth of rework.
All software eventually gets usability tested. It's up to you to decide if it happens before release, while under your control, or after release by your end-users.
For more information, see:
- Carolyn Snyder's book Paper Prototyping
- Marc Rettig's article "Prototyping for Tiny Fingers": www.portal.acm.org/citation.cfm?doid=175276.175288
- Gerard Meszaros and Janice Aston's paper "Adding Usability Testing to an Agile Project"