How I make sure I really understand a feature before building it

by Jason Swett

The very real danger of misinterpreting a feature requirement

When I was a young and inexperienced developer I often failed to build a feature the right way the first time.

My memory is that I would often show what I had built to my stakeholder (meaning my boss or freelance client) and the stakeholder would say that what I built didn’t really resemble what we had talked about.

I would then go away, embarrassed and surprised, and make another attempt. Sometimes this process would be repeated multiple times before we finally got on the same page. It sucked. It was obviously a bad experience for both me and my stakeholder. Yet, unfortunately, this was the way things normally went.

The glory of getting it right the first time

I don’t have this problem anymore. Today, when I build a feature, I typically do get it right the first time.

It’s true that the feature still sometimes has to be tweaked after the first pass, but this is usually not because I misunderstood my stakeholder. It’s because neither I nor my stakeholder could imagine the ideal version of the feature without seeing a rough draft first. Today I almost never have the “this isn’t what we talked about” problem.

So what changed? How did I go from being so terrible at understanding feature requirements to being so good at it? I’ll tell you, but first let me explain the reasons why it’s so hard to understand a feature’s requirements.

Why it’s so hard to understand a feature’s requirements the first time

It’s hard to communicate things

Conveying things in high fidelity from one person’s mind to another is notoriously hard. It’s basically impossible to say something of any complexity to another person and have them understand exactly what you mean.

Imagine you’ve never been to Paris before. Then you schedule a trip to Paris. Some people tell you in detail what kind of stuff is in Paris. You read some books about the place. Then, when you actually get to Paris, much of the experience is nothing like you imagined.

And Paris actually exists! Think of how much harder it gets when the thing being communicated is just an idea in someone’s mind. This brings me to the next reason why it’s hard to understand a feature’s requirements the first time.

Sometimes the stakeholder doesn’t even understand what the stakeholder means

Just because someone describes a feature requirement to you doesn’t mean that that feature idea is complete, self-consistent, or the right thing to build. When I was less experienced I tended to think of the requirements as The Requirements. I didn’t always realize that the “requirements” were often just half-baked ideas that needed a lot of polishing before they were really ready to be implemented.

Even if a particular feature does make sense and could conceivably be built, there’s often an issue of scoping ambiguity. When we say we’re going to build Feature X, where exactly are the contours of Feature X? What’s inside the scope of Feature X and what’s outside it? These questions need to be answered in detail before the feature can be successfully built. Otherwise the development of the feature tends to drag on, getting kicked from sprint to sprint, with both developers and stakeholders getting increasingly frustrated as the feature fails to reach a state of doneness.

How to understand a feature’s requirements the first time

The secret to understanding a feature’s requirements is to do two things:

  1. Perform usability tests to flesh out the feature and verify the feature’s logical consistency and suitability for the task
  2. Make a detailed written/graphical agreement with the stakeholder regarding what the feature is

I’ll explain each of these things individually.

Usability testing

At some point early in my career I got tired of building the wrong thing over and over. So I decided to try to figure out how to stop making that mistake. I started researching “user interface design” and found a book called User Interface Design by Soren Lauesen (which remains one of the best software books I’ve ever read). This was when I first learned about usability testing.

Usability testing is a topic big enough to fill a book (or a lot of books, actually), so I can only scratch the surface here, but here’s the basic idea.

There are two important truths in usability testing. The first is that for all but the simplest features, it’s impossible to come up with a usable design on the first try. (By “usable design”, I mean a design that’s complete enough that the user can carry out their business and that’s sufficiently unconfusing that the user doesn’t throw their hands up in frustration along the way.)

The second truth is that the only way to tell whether a design is sufficiently usable is to test it.

If you buy these two axioms, here’s how to actually perform usability testing, or at least how I do it.

The first step is to talk with the stakeholder to gain an understanding of the feature’s requirements. I don’t have any special methodology for this one, although I’ll give two tips. First, at the risk of stating the obvious, it’s a good idea to write down the things the stakeholder says. Second, it’s a good idea to repeat your understanding of the requirements to the stakeholder and ask if your understanding is consistent with what they’re thinking.

The next step is to translate the requirements into a visual UI design. If I’m working in-person, I usually start really lo-fi and just draw wireframes with pen and paper. If I’m working remotely, I’ll use Balsamiq. The point is that I know my early attempts will probably be off the mark, so I want to use methods that will allow me a quick turnaround time.

Once I have the first draft of my design prototypes ready, I’ll get together with either an actual user of the system I’m working on or someone who’s representative of a user. Sometimes that person and the stakeholder are the same person and sometimes they’re not. Sometimes no such person is available and I have to recruit some outside person to serve in the role, or even use the stakeholder, although that’s not ideal. In any case, it’s best if the test user is someone who’s not involved with the design work. Anyone who’s already too familiar with the feature will fail to stumble over the design’s confusing parts, and the usability testing process will be less fruitful.

Once I’ve found my test user I’ll have a usability testing meeting. Ideally, the meeting involves a facilitator, the test user, and a record keeper. I myself typically serve in the facilitator role. Before the meeting, I as facilitator will have come up with a handful of test scenarios for us to go through. For example, a test scenario might say, “Wanda Smith enters the hotel and wants to book a room for two nights. Find the available options in the system and complete the transaction.” For each test scenario, I’ll give the test user the appropriate starting design (which is a piece of paper with a single web page’s wireframes on it) and instruct the test user to “click” on things with their pen and to fill out forms with the pen just like they would on a web page. When the test user “submits” a form, I take away that piece of paper and give them the appropriate subsequent page.

I want to emphasize that never in this process do I ask the test user what they “think” of the design. When asked what they “think” of a design, people will typically just say that it looks good. This is of course not helpful. Our purpose is to put the design through a series of pass/fail tests that will not allow any serious defects to hide.

It’s also important not to give the test user any hints during this process. If the design is sufficiently unclear that the test user gets stuck, we need to let them get stuck. The whole point of the usability testing process is to uncover the weaknesses in the design that cause the user to get stuck. When this happens, it’s the record keeper’s job to note exactly where the test user got stuck so that the defect can be addressed in the next round of design prototypes. Once the record keeper has noted the defect, the test user can be given a hint and allowed to continue, if possible.

When I started doing usability testing, I found to my surprise that my first attempt at the design was almost always unusable. The test user almost always got hopelessly stuck during the first meeting. It would usually take two or three rounds of prototyping before I arrived at a usable design.

Once the usability testing process is complete, the designs it produced can be translated into written requirements.

Written requirements

If a stakeholder and a developer don’t have a shared vision of exactly what a feature is going to be when it’s complete, then they’re inviting trouble down the road when the software that gets built doesn’t match what’s expected. And it’s really hard to have a shared vision of exactly what a feature is going to be if there aren’t highly detailed written and/or graphical requirements.

The beauty of usability testing is that it kills ambiguity and logical inconsistency. If the ideas for the feature start off vague, they must be made concrete in order to be translated into design prototypes. If the ideas for the feature start off illogical, the logic failures will surface as defects and not survive the usability testing process.

Because usability testing goes so far toward clarifying requirements, often the designs that result from the usability testing process can just be translated into words and used along with the designs themselves as the feature’s specs. It’s not quite that simple though.

I’ve seen a lot of features fail to get completed within anything close to the time expected, including features I’ve worked on myself. Often the reason is that the feature was too big, or too vague, or both.

When translating designs to specs (“user stories” if you’re using agile development), it’s important not to make the user stories too big. I like to keep all my user stories small enough that they can be started, completed, tested, deployed and verified all in the same day. I’ve sometimes seen teams working in one-week sprints make the mistake of filling the sprint with stories that are each expected to take a full week. This is bad because it leaves no margin for error. And of course, stories that are expected to take a week often take two weeks or more. So these stories keep getting pushed from sprint to sprint, for several sprints in a row. When each story is only a day’s worth of work, this is not so much of a risk, even if the story takes several times as long as expected.

It’s also important for each user story to be crisply defined. My rule of thumb is that some outside person should be able to read the story’s description and understand exactly what steps they would need to perform in the application in order to verify that the story is done. The answer to “How do we know when this feature is done?” should be abundantly clear.
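To make that concrete, here’s what such a story might look like. (This is an invented example that reuses the hotel scenario from earlier; the details are hypothetical.)

  “Front desk staff can book a room for a guest. To verify: from the front desk page, search for rooms available for the next two nights, select a room, enter the guest’s name, and submit the form. The new booking should then appear in the bookings list with the correct guest, room and dates.”

An outside person could read that description and know exactly which steps to perform in the application to check whether the story is done.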

In addition to each story having a clear definition of done, it’s also helpful if the team has a definition of done for stories in general. For example, is a story considered done when it’s feature-complete, or only when it’s deployed and verified to be working? (I prefer the latter definition myself.) Having an agreement like this helps prevent ambiguity or disagreement as to whether any particular feature is actually done.
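As an illustration, a written-down team-wide definition of done might look something like this (just one example of such an agreement, based on my preferred definition above, not a universal standard):

  1. The feature is feature-complete
  2. The feature is tested
  3. The feature is deployed to production
  4. The feature is verified to be working in production

With an agreed-upon list like this, “is it done?” stops being a matter of opinion.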

Takeaways

Building the wrong thing is a common problem, but luckily, it’s a very fixable one. The key is to perform usability testing in order to sharply define the feature and then to document the feature’s requirements clearly and precisely so there’s little room for misunderstanding.

Not only will this way of working eliminate huge amounts of waste and frustration in the development process, it’s also a lot more fun.
