Category Archives: Programming

How I make testing a habit

A reader of mine recently asked how I make testing a habit.

The question is an understandable one. There’s a lot of friction in the process of writing tests. Even for someone like me who has been in the habit of writing tests for most of my code for many years, the friction is still there. On many occasions I still have to “make myself” write tests, the same way I “make myself” brush my teeth every few days or so.

I actually had to think long and hard about the question. I definitely DO write tests, but I’m not sure that the force that drives me to write tests is exactly a habit.

Yesterday I realized what drives me to write tests. What drives me to write tests is that I realized, at some point, that coding with tests is easier, faster and more enjoyable than coding without tests.

So for me, writing tests is not a matter of self-discipline or anything like that. On the contrary, the driving force is something more like laziness. I’m taking the easier, faster, more enjoyable path instead of the harder, slower, more miserable one.

Let me explain each one of these three things in a little more detail.

How writing tests makes programming easier

Imagine you have some big, fuzzily defined programming task to carry out. You’re not sure what part to try to tackle first, or even what the parts of the task are.

It’s easy to see this kind of task as a monolithic, intractable mystery. You look around for an obvious toehold and there isn’t one. It’s overwhelming and discouraging.

Tests can serve as an aid in finding that non-obvious toehold. The act of writing a test separates the mental task of figuring out what from the mental task of figuring out how.

How writing tests makes programming faster

Writing tests is often seen as a trade-off: it takes more time to write features with tests, but it saves you time in the long run. I’m not so sure this is the case. I’ve definitely had cases where I’ve wasted 15 minutes writing a feature by testing my code manually, failed to get it properly working, and then had the feature working in 5 minutes with the aid of tests.

The reason it’s faster to code with tests is because you don’t have to perform a full set of manual regression tests every time you make a change. Without tests, it’s so easy to get into a game of whack-a-mole where new changes break old functionality.

How writing tests makes programming more enjoyable

As a person I’m lazy, impatient and easily bored. I also hate wasting time. This means that manually repeating the same 9 clicks through a feature over and over, and then multiplying that chore by however many possible paths there are through my feature, is like a task that was custom-made to torture me.

Automated tests can never fully replace manual testing (I’m still going to pull up every feature in the browser at least once) but they can go quite a long way. I find that coding using tests is quite a different experience from not using tests.

My testing habit

So my testing “habit” is really just a choice to take the path of least resistance. Since I believe that my job will be easier, faster and more fun if I write tests, then I’ll usually take that path instead of the other, worse path. And if I fail to remember that I know this and start coding without tests, the pain I encounter usually reminds me that writing tests is the easier way.

How to write a test when the test implementation isn’t obvious

Why testers often suffer from “writer’s block”

When you write a test, you’re carrying out two jobs:

  1. Deciding what steps the test should carry out
  2. Translating those steps into code

For trivial tests, these two steps are easy enough that both can be done together, and without conscious thought. For non-trivial tests (especially tests that include complex setup steps), doing both these steps at once is too much mental juggling for a feeble human brain, or at least my feeble human brain.

This is when testers experience “writer’s block”.

Luckily, there’s a way out. When the implementation of a test eludes me, I consciously separate the job into the two steps listed above and tackle them one at a time.

How to overcome writer’s block

Let’s say, for example, that I want to write an integration test that tests a checkout process. Maybe I can’t think of all the code and setup steps that are necessary to carry out this test. I probably can, however, think of the steps I would take just as a regular human being manually testing the feature in a web browser. So, in my test file, I might write something like this:
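
Something like the following, with each step written as a plain comment inside an otherwise empty test. (The exact checkout steps will vary by app; these are just for illustration, and the file path and factory names are hypothetical.)

  # spec/system/place_order_spec.rb
  require "rails_helper"

  RSpec.describe "Placing an order", type: :system do
    it "lets a customer check out successfully" do
      # create a product
      # sign in as a customer
      # add the product to the cart
      # visit the cart page
      # click the "Check Out" button
      # fill in payment details
      # click the "Place Order" button
      # expect the page to show an order confirmation
    end
  end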

Now I have all the high-level steps listed out. I can tell my brain that it’s relieved of the duty of trying to hold all the test steps in memory. Now my brain’s capacity is freed up to do other jobs. The next job I’ll give it is translating the steps into code.

I’ll start with the first line.
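
Assuming a hypothetical FactoryBot :product factory, that first comment might become:

  # create a product
  product = create(:product, name: "Deluxe Widget", price: 49.99)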

That was easy enough. Now I’ll move on to the next line.
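
Again as a sketch, with login_as coming from Warden’s test helpers and a made-up :customer factory:

  # sign in as a customer
  login_as(create(:customer))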

And I’ll go down the comments, line by line, translating each line into Ruby code.

Not every step will be trivially easy. But at the same time, it’s unlikely that any individual line is all that hard. What’s hard is trying to solve 26 different problems at once in your head. The way to make progress is to break those problems down into tiny parts, get those parts out of your head and onto the screen, and then tackle the tiny parts one at a time.

How to deal with complex Factory Bot associations in RSpec tests

What’s the best way to handle tests that involve complex setup data? You want to test object A but it needs database records for objects B, C and D first.

I’ll describe how I address this issue but first let me point out why complex setup data is a problem.

Why complex setup data is a problem

Complex setup data can be a problem for two reasons. First, a large amount of setup data can make for an Obscure Test, meaning the noise of the setup data drowns out the meaning of the test.

Second, complex setup data can be a problem if it results in duplication. Then, if you ever need to change the setup steps, you’ll have to make the same change in multiple places.

How to cut down on duplication and noise

Unfortunately, I find that it’s often not really possible to take a complex data setup and somehow make it simple. Often, the reality is that the world is complicated and so the code must be complicated too.

What we can do, though, is push the complexity down to an appropriate level of abstraction. The way I tend to do this in Factory Bot is to use traits and before/after hooks.

Below is an example of some “complex” test setup. It’s not actually all that complex, but unfortunately examples often have to be overly simplified. The concept still applies though.
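
The setup I have in mind looks roughly like this (the account/customer/appointment models are illustrative, and create comes from FactoryBot::Syntax::Methods):

  account = create(:account)
  customer = create(:customer, account: account)
  create(:appointment, customer: customer, starts_at: 2.hours.from_now)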

What if instead of this relatively noisy setup, the setup could be as simple as the following?
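
Something along these lines:

  account = create(:account, :with_appointment_soon)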

This can be achieved by adding a trait called with_appointment_soon to the :account factory definition. Here it is:
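
A sketch of what that trait might look like, using an after(:create) hook (the attributes and associated factories are illustrative):

  FactoryBot.define do
    factory :account do
      name { "Test Account" }

      trait :with_appointment_soon do
        after(:create) do |account|
          customer = create(:customer, account: account)
          create(:appointment, customer: customer, starts_at: 2.hours.from_now)
        end
      end
    end
  end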

That’s all that’s needed. We haven’t made the complexity go away, but we have moved the complexity out of our test so that our test is more easily understandable.

How do you tell which areas of a project’s test suite need attention?

A client of mine recently asked: “How do you determine, when first opening a project, which areas of testing need attention?”

There are three methods I use for determining what areas of a test suite need attention: 1) looking at the test suite code for “test smells”, 2) asking the team “where it hurts”, and 3) actually working directly with the project.

Looking at the code for “test smells”

Just as there are code smells, there are test smells. The book xUnit Test Patterns: Refactoring Test Code (which I highly recommend) has a list of test smells in the back. Some of these smells are detectable by looking at the code. Some of them can only be found experientially. Here are some that can be discovered by looking at the test code without running it.

Developers Not Writing Tests

This one is obvious. If there are very few tests in the test suite (or if the test suite doesn’t exist at all), then this is a clear opportunity for improvement.

Obscure Test

An Obscure Test is one where it’s hard to understand the meaning of a test just by looking at it, usually because the meaning is obscured by the presence of so many low-level details. These are relatively easy to spot.

Test Code Duplication

Duplication is also pretty easy to spot. It’s also often relatively cheap and straightforward to fix.

Asking the team “where it hurts”

Often a team is well aware of where their testing deficiencies lie. Common problems include:

  • Not enough tests
  • Not enough testing skills on the team
  • Slow tests
  • Flaky tests
  • Application code that’s too hard to test

These issues can be easily uncovered simply by asking.

Actually working directly with the project

This is often the most effective way of telling which areas of the test suite need the most attention, although it’s also the slowest.

One challenge in identifying areas of a test suite that need attention is that not all tests have equal value. The test that tests the contact page is not as valuable as the test that tests the checkout page. So effort spent improving the contact page test might result in a much better test, but the exercise would still be a waste of time.

Another challenge is that not all code is equally testable. Let’s say I have an application with zero tests and no testing infrastructure. Logic might seem to dictate that I should write a test for the checkout page before I write a test for the contact page because the checkout page is much more economically valuable than the contact page.

But if my application lacks testing infrastructure, then the task of writing a test for the checkout page might be a prohibitively large undertaking because it would involve not only writing complex tests but also setting up the underlying test infrastructure. Maybe I’d be better off writing a test for the contact page first just to get a beachhead in place, then gradually working my way up to having tests for the checkout page.

These are things that might be hard to tell just by looking at the code. So when I start work on a new project, I’ll often use all three of these methods—looking at the code, talking with the team, working with the code—to determine which areas of the test suite need attention.

How to get a team behind an engineering project

A client of mine recently asked: “What’s the best way to get a team behind a future refactor/architecture goal that hasn’t happened yet?”

This is a great question. Before I answer how to do this successfully, let me point out three certain ways to fail in this endeavor. Then I’ll share the key to giving an idea its best shot at success.

Three ways to fail to get a team behind an engineering project

The effort fails because the project was a genuinely bad idea

I mention this just to get the obvious out of the way: bad ideas often fail and they of course ought to fail. Even though this scenario isn’t necessarily a happy one, there’s nothing unjust about it.

The effort fails because it was “pearls before swine”

Sometimes you have a good idea but, for whatever reason, the team you’re in or the leadership you’re under is made up of the kind of people who are behind the times or (to be blunt) just plain not very smart.

In 2013 I worked somewhere where I tried to help implement continuous integration, but my boss was dead-set against it. There was nothing I could have done to be successful in this case because my boss just wasn’t the kind of person who “gets it”. The only “solution” to this problem was to go work somewhere else.

This scenario is regrettable but it’s futile to try to fight against it.

The effort fails because, although the idea was a good one, the methodology of persuasion was bad

Of the three ways to fail at getting a team to get behind an engineering effort, this one is the most tragic. I had a good idea, I had a receptive audience, but I bungled the pitch and so my idea got killed.

The key to making any project idea maximally enticing

Think about how interested you tend to be in implementing other people’s ideas. Then think about how interested you are in implementing your OWN ideas. If you’re like most people, you’re of course much more interested in your own ideas than other people’s.

So the key to making an idea enticing is: get your teammates (or bosses or subordinates) to believe that the idea is theirs.

This is easier said than done. I’ll describe two ways of accomplishing this.

How to get your teammates to believe that your ideas are their ideas

I know of two ways to get someone to think that an idea of mine is actually an idea of theirs: 1) use some sort of psychological manipulation to trick them, or 2) find an idea of theirs that actually, legitimately matches an idea of mine.

The latter method is of course going to be the more successful.

Here’s an example. Let’s say I want to try to get the team to increase test coverage on our application. One way I could go about this is by saying, “Hey guys! We need to increase our test coverage! We need to start this project ASAP!” This method isn’t likely to be very successful. Any attempt to “sell” an idea tends to be met with equal resistance.

A better way to approach it would be to start by saying, “Hey guys, how do you feel about the quality of our codebase in general?” Then I would listen to my teammates’ responses and make sure they felt heard. (It’s easier to influence someone else if you first let them influence you.) Then I would ask, “How do you feel about our test coverage specifically?” If the whole team unanimously agreed that our test coverage was fine, I’d probably stop right there. It’s impossible to get someone to spend time fixing a problem they don’t believe exists.

But if my teammates expressed some level of dissatisfaction with our current test coverage, I would ask them to elaborate. I would take notes on what they said. I would ask if these problems strike them as worth fixing. Then I’d ask them if they’d be open to coming up with some goals together around test coverage and a plan to achieve those goals. People are much more supportive of goals and plans they were involved in creating than goals and plans someone else came up with.

Key takeaways

Uncover existing desires. If you have a desire to accomplish X, ask probing questions of your teammates (or bosses or subordinates) to see if they also desire X. Position the project not as an endeavor to realize your desires, but to realize their desires.

Involve the team in the roadmapping. The team will be motivated to help accomplish the goal in proportion to how involved they were in developing the goal.

Mystified by RSpec’s DSL? Some parentheses can add clarity

Many developers have a hard time wrapping their heads around RSpec’s DSL syntax. For a long time, RSpec’s syntax was a mystery to me.

Then one day I realized that RSpec’s syntax is just methods and blocks. There’s not some crazy ass metaprogramming going on that’s beyond my abilities to comprehend. Everything in RSpec I’ve ever used can be (as far as I can tell) boiled down to methods and blocks.

I’ll give a concrete example to aid understanding.

Below is an RSpec test I pulled from a project of mine, Mississippi.com, which I’ve live-coded in my Sausage Factory videos.
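
Rather than copy that spec verbatim, here’s a representative stand-in with the same shape, written the way RSpec code usually looks, with the optional parentheses left out:

  require "rails_helper"

  RSpec.describe Customer do
    let(:customer) { Customer.new(first_name: "Jane", last_name: "Smith") }

    describe "#full_name" do
      it "returns the first and last name separated by a space" do
        expect(customer.full_name).to eq "Jane Smith"
      end
    end
  end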

Now here’s the same test with all the optional parentheses left in instead of out. What might not have been obvious to you before, but is made clear by the addition of parentheses, is that describe and it are both just methods.
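
Continuing with the same stand-in example, every optional parenthesis added back in:

  require("rails_helper")

  RSpec.describe(Customer) do
    let(:customer) do
      Customer.new(first_name: "Jane", last_name: "Smith")
    end

    describe("#full_name") do
      it("returns the first and last name separated by a space") do
        expect(customer.full_name).to(eq("Jane Smith"))
      end
    end
  end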

Like any Ruby method, it and describe are able to accept a block, which of course they always do. (If you don’t have a super firm grasp on blocks yet, I might suggest reading up on them and then writing some of your own methods which take blocks. I went through this exercise myself recently and found it illuminating.)

In addition to putting in all optional parentheses, I changed every block from brace syntax to do/end syntax. I think this makes it more clear when we’re dealing with a block versus a hash.

The latter version is runnable just like the former (I checked!) because although I changed the syntax, I haven’t changed any of the functionality.

I hope seeing these two different versions of the same test is as eye-opening for you as it was for me.

Lastly, to be clear, I’m not suggesting that you permanently leave extra parentheses in your RSpec tests. I’m only suggesting that you temporarily add parentheses as a learning exercise.

How to test Ruby methods that involve puts or gets

I recently saw a post on Reddit where the OP asked how to test a method which involved puts and gets. The example the OP posted looked like the following (which I’ve edited very slightly for clarity):
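
Roughly like this (the prompt text and the number range are approximations):

  def ask_for_number
    puts "Please enter a number between 1 and 10:"
    number = gets.chomp.to_i

    if number.between?(1, 10)
      puts "Thanks!"
    else
      puts "That's not between 1 and 10. Try again."
    end
  end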

What makes the ask_for_number method challenging to test is a dependency. Most methods can be tested by saying, “When I pass in argument X, I expect return value Y.” This one isn’t so straightforward though. This is more like “When the user sees output X and then enters value V, expect subsequent output O.”

Instead of accepting arguments, this method gets its value from user input. And instead of necessarily returning a value, this method sometimes simply outputs more text.

How can we give this method the values it needs, and how can we observe the way the method behaves when we give it these values?

A solution using dependency injection (thanks to Myron Marston)

Originally, I had written my own solution to this problem, but then Myron Marston, co-author of Effective Testing with RSpec 3, supplied an answer of his own which was a lot better than mine. Here it is.
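
In outline (the exact details may differ slightly, but this is the shape of it): wrap the behavior in a class and inject the input and output streams.

  class Example
    def initialize(input: $stdin, output: $stdout)
      @input = input
      @output = output
    end

    def ask_for_number
      @output.puts "Please enter a number between 1 and 10:"
      number = @input.gets.chomp.to_i

      if number.between?(1, 10)
        @output.puts "Thanks!"
      else
        @output.puts "That's not between 1 and 10. Try again."
      end
    end
  end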

I comment on this solution some more below, but you can see in the initialize method that the input/output dependencies are being injected into the class. By default, we use $stdin/$stdout, and under test, we use something else for easier testability.

Under test, an instance of StringIO can be used instead of $stdout, thus making the messages sent to @output visible and testable. That’s how we can “see inside” the Example class and test what at first glance appears to be a difficult-to-test piece of code.
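
A sketch of a test along those lines:

  require "stringio"

  RSpec.describe Example do
    it "thanks the user when the number is in range" do
      input = StringIO.new("7\n")
      output = StringIO.new

      Example.new(input: input, output: output).ask_for_number

      expect(output.string).to include("Thanks!")
    end
  end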

Examples of pointless types of RSpec tests

A reader of mine recently shared with me a GitHub gist called rspec_model_testing_template.rb. He also said to me, “I would like your opinion on the value of the different tests that are specified in the gist. Which ones are necessary and which ones aren’t?”

In this post I’d like to point out which types of RSpec tests I think are pointless to write.

Testing the presence of associations

Here are some examples of association tests from the above gist:
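
They look something like this (the have_many and belong_to matchers come from the shoulda-matchers gem; the models are illustrative):

  RSpec.describe Classroom, type: :model do
    let(:classroom) { Classroom.new }

    it { expect(classroom).to have_many(:students) }
    it { expect(classroom).to belong_to(:school) }
  end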

Unless I’m crazy, these sorts of tests don’t actually do anything. A test like it { expect(classroom).to have_many(:students) } verifies that classroom.rb contains the code has_many :students but that’s all the value the test provides.

I’ve heard these tests referred to as “tautological tests”. I had to look up that word when I first heard it, so here’s a definition for your convenience: “Tautology is useless restatement, or saying the same thing twice using different words.” That’s exactly what these tests are: a useless restatement.

What does it mean for a classroom to have many students and what sorts of capabilities does the existence of that association give us? Whatever those capabilities are is what we should be testing. For example, maybe we want to calculate the average student GPA per classroom. The ability to do classroom.average_student_gpa would be a valuable thing to write a test for.

If we test the behaviors that has_many :students enables, then we don’t have to directly test the :students association at all because we’re already indirectly testing the association by testing the behaviors that depend on it. For example, our test for the classroom.average_student_gpa method would break if the line has_many :students were taken away.
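
To make that concrete, here’s the kind of behavior-level test I mean (the factories and the gpa attribute are hypothetical):

  RSpec.describe Classroom, type: :model do
    describe "#average_student_gpa" do
      it "returns the average GPA of the classroom's students" do
        classroom = create(:classroom)
        create(:student, classroom: classroom, gpa: 3.0)
        create(:student, classroom: classroom, gpa: 4.0)

        expect(classroom.average_student_gpa).to eq(3.5)
      end
    end
  end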

How do you know the difference between a tautological test and a genuinely valuable test? Here’s an analogy. What would you do if you wanted to test the brakes on your bike? Would you visually verify that your bike has brake components attached to it, and therefore logically conclude that your bike has working brakes? No, because that conclusion is logically invalid. The way to test your brakes is to actually try to use your brakes to make your bike stop. In other words, you wouldn’t test for the presence of brakes, you would test for the capability that your brakes enable.

Testing that a model responds to certain methods

There’s negligible value in simply testing that a model responds to a method. Better to test that that method does the right thing.

Testing for the presence of callbacks

Don’t verify that the callback got called. Verify that you got the result you expected the callback to produce.

Testing for database columns and indexes

I had actually never seen this before and didn’t know you could do it. I find it pointless for the same exact reasons that testing associations is pointless. Don’t test that the database has an index, test that the page loads sufficiently quickly.

Tips for writing valuable RSpec tests

Here’s how I tend to write model specs: for every method I create on a model, I try to poke at that method from every possible angle and make sure it returns the desired result.

For example, I recently added a feature in an application that made it impossible to schedule an appointment for a patient who has been inactivated. So I wrote three test cases: one where the patient is active (expect success), one where the patient is inactive (expect an error to get added to the object), and one where the patient was missing altogether (expect a different error on the object).
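
Sketched out, those three cases might look something like this (the factory traits and error messages are hypothetical):

  RSpec.describe Appointment, type: :model do
    it "allows scheduling for an active patient" do
      appointment = build(:appointment, patient: create(:patient, :active))
      expect(appointment).to be_valid
    end

    it "adds an error for an inactive patient" do
      appointment = build(:appointment, patient: create(:patient, :inactive))
      appointment.valid?
      expect(appointment.errors[:patient]).to include("is inactive")
    end

    it "adds a different error when the patient is missing" do
      appointment = build(:appointment, patient: nil)
      appointment.valid?
      expect(appointment.errors[:patient]).to include("must exist")
    end
  end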

My methodology for writing feature specs with Capybara is a little more involved. I describe it in detail here.

How I test JavaScript-heavy Rails applications

A common question I get is how to test JavaScript in Rails applications. My approach is almost radically simple and unsophisticated.

My Rails + JavaScript testing approach

I think of the fact that the application uses JavaScript as an inconsequential and irrelevant implementation detail. I test JavaScript-heavy applications using just RSpec + Capybara integration tests, the same exact way I’d test an application that has very little JavaScript or no JavaScript at all.

I don’t really have anything more to say about it since I literally don’t do anything different from my regular RSpec + Capybara tests.

Single-page applications

What about single-page applications? I still use the same approach. When I used to build Angular + Rails SPAs, I would add a before(:all) RSpec hook that would kick off a build of my Angular application before the test suite ran. After that point my RSpec + Capybara tests could interact with my SPA just as easily as if the application were a “traditional” Rails application.
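
As a rough sketch of that idea (I’m using a before(:suite) hook here so the build only runs once per test run; the client directory and build command are placeholders):

  # spec/rails_helper.rb
  RSpec.configure do |config|
    config.before(:suite) do
      # Compile the front-end app once so the Capybara tests exercise the built assets.
      system("cd client && npm run build") || raise("front-end build failed")
    end
  end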

I’ve tried testing single-page applications using tools like Protractor or Cypress and I don’t like it. It’s awkward and cumbersome to try to drive the Rails app from that end. How do you spin up test data? In my experience, it’s very tedious. Much easier to drive testing from the Rails end and treat the client-side JavaScript application as an implementation detail.

Side note: traditional Rails applications are fine. Using Rails with React/Vue/Angular/etc. isn’t “modern” and using Rails without any of those isn’t “outdated”. For most regular old boring business applications, Rails by itself without a front-end framework is not only a sufficient approach but a superior approach to an SPA because the complexity of development with plain Rails and only “JavaScript sprinkles” tends to be far lower than Rails with a JavaScript framework.

Testing JavaScript directly

Despite my typical approach of treating JavaScript as a detail, there are times when I want to have a little tighter control and test my JavaScript directly. In those cases I use Jasmine to test my JavaScript.

But it’s my goal to use so little JavaScript that I never get above that threshold of complexity where I feel the need to test my JavaScript directly with Jasmine. I’ve found that if I really try, I can get away with very little JavaScript in most applications without sacrificing any UI richness.

A repeatable, step-by-step process for writing Rails integration tests with Capybara

Many Rails developers who are new to writing tests struggle with the question of what to write tests for and how.

I’m about to share with you a repeatable formula that you can use to write an integration test for almost any Rails feature. It’s nothing particularly profound, it won’t result in 100% test coverage, and it won’t work in all cases, but it will certainly get you started if you’re stuck.

The three integration test cases I write for any Rails CRUD feature

Most features in most Rails applications are, for better or worse, some slight variation on CRUD operations. Here are the tests I almost always write for every CRUD feature.

  1. Creating a record with valid inputs
  2. Trying to create a record with invalid inputs
  3. Updating a record

Let me go into detail on each one of these. For each test case, let’s imagine we’re working with a resource called Location.

Creating a record with valid inputs

This test case will involve the following steps:

  1. If necessary, sign a user in (using login_as(create(:user)) as I describe in this post)
  2. Visit the “new” route for the resource (e.g. visit new_location_path)
  3. Fill out the form fields using fill_in, select, etc.
  4. Click the “submit” button
  5. Expect that the page has whatever content the resource’s index page has, e.g. expect(page).to have_content('Locations')

Here’s what that test might look like:
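
This sketch assumes a Location resource with a required Name field, FactoryBot factories, and Warden’s login_as helper:

  require "rails_helper"

  RSpec.describe "Creating a location", type: :system do
    it "saves the location with valid inputs" do
      login_as(create(:user))

      visit new_location_path
      fill_in "Name", with: "Downtown Office"
      click_on "Create Location"

      expect(page).to have_content("Locations")
    end
  end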

Trying to create a record with invalid inputs

This test is pretty simple because all I have to do is not fill out the form. So the steps are:

  1. If necessary, sign a user in (using login_as(create(:user)))
  2. Visit the “new” route for the resource (e.g. visit new_location_path)
  3. Click the “submit” button
  4. Expect that the page shows an error (e.g. expect(page).to have_content("Name can't be blank"))

Here’s what a test of this type might look like.
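
Same assumptions as the example above:

  require "rails_helper"

  RSpec.describe "Creating a location", type: :system do
    it "shows an error with invalid inputs" do
      login_as(create(:user))

      visit new_location_path
      click_on "Create Location"

      expect(page).to have_content("Name can't be blank")
    end
  end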

Updating a record

The steps for this one go:

  1. If necessary, sign a user in (using login_as(create(:user)))
  2. Create an instance of the resource I’m testing (e.g. location = create(:location))
  3. Visit the “edit” route for the resource (e.g. visit edit_location_path(location))
  4. Fill in just one field with a different value
  5. Click the “submit” button
  6. Expect that the page has whatever content the resource’s index page has, e.g. expect(page).to have_content('Locations')
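
Putting those steps together, a sketch might look like this (same assumptions as the earlier examples):

  require "rails_helper"

  RSpec.describe "Updating a location", type: :system do
    it "saves the change" do
      login_as(create(:user))
      location = create(:location)

      visit edit_location_path(location)
      fill_in "Name", with: "Uptown Office"
      click_on "Update Location"

      expect(page).to have_content("Locations")
    end
  end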

“But is that enough? Don’t I need more?”

No, this is not “enough”, and yes, you do need more…eventually. What I’m hoping to share with you here is a base that you can start with if you’re totally lost. Writing halfway-good tests is better than writing no tests, and even writing crappy tests is better than writing no tests. Once you get some practice following the formula above, you can expand on that formula to get a better level of test coverage.

Where to go next

If you get some practice with writing tests like the above and you want to go further, you might like my RSpec/Capybara integration test tutorial. You might also like my Sausage Factory videos where I live-code a realistic Rails project, including tests.