Category Archives: Programming

Stuck on a programming problem? These tactics will get you unstuck about 99% of the time

Being stuck is possibly the worst state to be in when programming. And the longer you allow yourself to stay stuck, the harder it is to finally get unstuck. That’s why I find it important to try never to allow myself to get stuck. And in fact, I very rarely do get stuck for any meaningful length of time. Here are my favorite tactics for getting unstuck.

Articulate the problem

I teach corporate training classes. Often, my students get stuck. When they’re stuck, it’s usually because the problem they’re trying to solve is ill-defined. It’s not possible to solve a problem or achieve a goal when you don’t even know what the problem or goal is.

So when you get stuck, a good first question to ask is: do I even know exactly what the problem is that I’m trying to solve? I recommend going so far as to write it down.

If it’s hard to articulate your goal, there’s a good chance your goal is too large to be tractable. Large goals can’t be worked on directly. For example, if you say your goal is “get to the moon and back”, that goal is too big. What you have to articulate instead are things like figure out how to get into space, figure out how to land a spaceship on the moon, figure out how to build a spacesuit that lets people hang out in space for a while, etc.

Just try something

Far too often I see programmers stare at their code and reason about what would happen if they changed it in such-and-such a way. They run intricate thought experiments so they can, presumably, fix all the code in one go and arrive at the complete solution the very next time they try to run the program. Nine times out of ten, they make the change that they think will solve the problem and then discover they’re wrong.

A better way to move forward is to just try something. The cost of trying is very low compared to the cost of reasoning. Also, trying has the added bonus of supplying empirical data instead of untested hypotheses.

“Rubber duck” it

“Rubber duck debugging” is the famous tactic of explaining your problem to a rubber duck on your desk. The phenomenon is that even though the duck is incapable of supplying any advice, the very act of explaining the problem out loud leads you to realize the solution to the problem.

Ironically, most of the “rubber duck” debugging I’ve done in my career has involved explaining my issue to a sentient human being. The result is often the classic scenario where you spend ten minutes explaining the details of a problem to a co-worker, realize the way forward, and then thank your co-worker for his or her help, all without your co-worker uttering a word.

The explanation doesn’t have to be verbal in order for this tactic to work. Typing out the problem can be perfectly effective. This leads me to my next tactic.

Put up a forum question

In my experience many programmers seem to be hesitant to write their own forum questions. I’m not sure why this is, although I have some ideas. Maybe they feel like posting forum questions is for noobs. Maybe they think it will slow them down. Neither of these things are true, though. I’ve been programming for over 20 years and I still post Stack Overflow questions somewhat regularly. And rather than slowing me down, posting a forum question usually speeds me up, not least because the very act of typing out the forum question usually leads me to realize the answer on my own (and usually about 45 seconds after I post the question).

If you don’t know a good way, try a bad way

Theodore Roosevelt is credited with having said, “In any moment of decision, the best thing you can do is the right thing, the next best thing is the wrong thing, and the worst thing you can do is nothing.” I tend to agree.

Often I’m faced with a programming task that I don’t know how to complete in an elegant way. So, in these situations where I can’t think of a good solution, I do what I can, which is that I just move forward with a bad or stupid solution.

What often happens is that after I put my dumb solution in place, I’ll take a step back, look at what I’ve done, and a better solution will make itself fairly obvious to me. Or at least, a nice solution will be easier for me to think of at this stage than it would have been from the outset.

With programming and in general, people are much better at looking at a bad thing and saying how it could be made good than they are at coming up with a good thing out of thin air.

If you go down a bad path and find that you’ve completely painted yourself into a corner, it’s no big deal. If you’re using version control and using atomic commits, you can just revert back to your last good state. And you can start on a second attempt now that you’re a little older and wiser.

Study your buns off

There’s a thought experiment I like to run in my head sometimes.

Let’s say I’m working on a Rails project and I can’t quite figure out how to get a certain form to behave how I want it to behave. I try several things, I check the docs, but I just can’t get it to work how I want.

In these situations I like to ask myself: “If I knew everything there was to know about this topic, would I be stuck?”

And the answer is of course not. The reason why I don’t understand how to get my form to work is exactly that—I don’t understand how to get my form to work. So what I need to do is set to work on gaining that understanding.

In an ideal world, a programmer could instantly pinpoint the exact piece of knowledge he or she is missing and then go find that piece of knowledge. And in fact, we’re very fortunate to live in a world where that’s possible. But it’s not possible 100% of the time.

In those cases I set down my code and say okay, universe, I guess this is how it’s gonna be. I’ll go to the relevant educational resource—in the case of this example perhaps the official Rails documentation on forms—and start reading it top to bottom. Again, if I knew everything about Rails forms then I wouldn’t have a problem, so I’ll set out on a journey to learn everything. Luckily, my level of understanding usually becomes sufficient to squash the problem long before I’ve read everything there is to read about it.

Write some tests

One of the reasons I find tests to be a useful development or debugging tool is that tests help save me some mental juggling. When developing or debugging there are two jobs that need to be done: 1) write the code that makes the program behave as desired and 2) verify that the desired behavior is present.

If I write code without tests, I’m at risk of mentally mixing these two jobs and getting muddled. On the other hand, if I write a test for my desired behavior, I’m now free to completely forget about the verification step and focus fully on coding the solution. The fewer balls I have to mentally juggle, the faster I can work.

Take a break

One of my favorite Pink Floyd songs is “See Emily Play”. In the first chorus they sing “There is no other day / Let’s try it another way.” In a later chorus they sing “There is no other way / Let’s try it another day.” This is great debugging advice.

I often find that if I allow myself to forget about whatever problem I’m working on and go for a walk or something, my subconscious mind will set to work on the problem and surface the solution later. Sometimes this happens 15 minutes later. Sometimes it doesn’t happen until a day or a week later. In any case, it’s often effective.

Work on something else

If all else fails, you can always work on something else. This might feel like a failure, but it’s only a failure if you decide to give up on the original problem permanently. Setting aside one goal in exchange for making progress on a different goal is infinitely better than allowing yourself to stay stuck.

Logging the user in before Capybara feature specs

Logging in

If you’re building an app that has user login functionality, you’ll at some point need to write some tests that log the user in before performing the test steps.

One approach I’ve seen is to have Capybara actually navigate to the sign in page, fill_in the email and password fields, and hit submit. This works but, if you’re using Devise, it’s a little more complicated than necessary. There’s a better way.

The Devise docs have a pretty good solution. Just add this to spec/rails_helper.rb:
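The gist of it is to include Warden’s test helpers (Warden is the library underneath Devise, and it’s where login_as comes from). Something along these lines should do it:

    # spec/rails_helper.rb
    RSpec.configure do |config|
      # Provides login_as (and logout) for use in feature specs
      config.include Warden::Test::Helpers
    end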

Then, any place you need to log in, you can do login_as(FactoryBot.create(:user)).

Log in before each test or log in once before all tests?

One of my readers, Justin K, wrote me with the following question:

If you use Capybara to do system tests, how do you handle authentication? Do you do the login step on each test or do you log in once and then just try and run all of your system level tests?

The answer is that I log in individually for each test.

The reason is that some of my tests require the user to be logged in, some of my tests require that the user is not logged in, and other tests require that some specific type of user (e.g. an admin user) is logged in.

I do often put my login_as call in a before block at the top of a test file to reduce duplication, but that doesn’t mean the login code runs only once for that file. A common misconception is that a before block gives a performance benefit by running its code only once. This is not the case. before is shorthand for before(:each), and any code inside it will get run before each individual it block. I never use before(:all) in my tests because I want each test case to be as isolated as possible.
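Here’s a hypothetical test file illustrating the point (the page names are made up):

    require 'rails_helper'

    RSpec.describe 'Admin pages', type: :feature do
      # Shorthand for before(:each): this runs before EVERY example below,
      # so each test gets its own freshly-logged-in user.
      before do
        login_as(FactoryBot.create(:user))
      end

      it 'shows the dashboard' do
        visit '/admin/dashboard'
        expect(page).to have_content('Dashboard')
      end

      it 'shows the reports page' do
        visit '/admin/reports'
        expect(page).to have_content('Reports')
      end
    end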

All my best programming tips

What follows is a running list of all the programming tips I can think of that are worth sharing.

Many of these tips may seem so obvious they go without saying, but every single one of these tips is here because I’ve seen programmers fail to take advantage of these tips on multiple occasions, even very experienced programmers.

I’ve divided the tips into two sections: development tips and debugging tips. I’ve arranged them in no particular order.

Development tips

Articulate the goal

Before you sit down and start typing, clearly articulate in writing what it is you’re trying to accomplish. This will help you avoid going down random paths which will dilute your productivity. It will also help you work in small, complete units of work.

Break big jobs into small jobs

Some jobs seem so big and nebulous as to be intractable. For example, if I’m tasked with creating a schedule that contains recurring appointments, where do I start with that? It will probably make my job easier if I start with a smaller job, for example, thinking of how exactly to specify and codify a recurrence rule.

Keep everything working all the time

Refactoring is the act of improving the structure of a program without altering the behavior of the program. I’ve seen a lot of occasions where a programmer will attempt to perform a refactoring and fail to preserve the behavior of the code they’re trying to refactor. They end up having to throw away their work and start over fresh.

If in the course of refactoring I break or change some existing functionality, then I somehow have to find my way back to making the code behave as it originally did. This is hard.

Alternatively, if I take care never to let the code stray from its original behavior, I never have to worry about bringing back anything that got altered. This is relatively easy. I just have to work in very small units and regression-test at the end of each small change.

Continuously deliver

Continuous delivery is the practice of always having your program in a deployable state.

Let’s say I start working on a project on January 1st which has a launch date of July 1st. But then, unexpectedly, leadership comes to me on March 1st and says the launch date is now March 10th and we’re going to go live with whatever we have by then.

If I’ve been practicing continuous delivery, launching on March 10th is no big deal. At the end of each week (and the end of each day and perhaps even each hour) I’ve made sure that my application is 100% working and deployable, even if it’s not 100% finished.

Work on one thing at a time

It’s better to be all the way done with half your tasks than to be halfway done with all your tasks.

If I’m 100% done with 50% of 8 features, I can deploy four features. If I’m 50% done with 100% of 8 features, I can deploy zero features.

Also, open work costs mental bandwidth. Even if you believe it’s just as fast to jump between tasks as it is to focus on just one thing at a time, it’s more mentally expensive to have 5 balls in the air than just one.

Use automated tests

All code eventually gets tested, it’s just a question of when and how and by whom.

It’s much cheaper and faster for a bug to get caught by an automated test during development than for a feature to get sent to a QA person and then kicked back due to the bug.

Testing helps prevent the introduction of new bugs, helps prevent regressions, helps improve code design, helps enable refactoring, and helps aid the understandability of the codebase by serving as documentation.

Use clear names for things

When a variable name is unclear due to being a misnomer, or unclear due to being abbreviated to the point of obfuscation, it adds unnecessary mental friction.

Don’t abbreviate variable names, with the exception of universally-understood abbreviations (e.g. SSN or PIN) or hyperlocal temp variables (e.g. records.each { |r| puts r }).

A good rule for naming things: call things what they are.

Don’t prematurely optimize

There’s a saying attributed to Kent Beck: “Make it work, then make it right, then make it fast.”

For the vast majority of the work I do, I never even need to get to the “make it fast” step. Unless you’re working on high-visibility features for a high-profile consumer app, performance bottlenecks probably won’t emerge until weeks or months after the feature is initially deployed.

Only write a little code at a time

I have a saying that I repeat to myself: “Never underestimate your ability to screw stuff up.” I’ve often been amazed by how often a small change that will “definitely” work doesn’t work.

Only work in tiny increments. Test what you’re working on after each change, preferably using automated tests.

Use atomic commits

An atomic commit is a commit that’s only “about” one thing.

Atomic commits are valuable for at least two reasons. One, if you need to roll back a commit because it introduces a bug, you won’t have to also roll back other, unrelated (and perfectly good) code that was mixed into the same commit. Two, atomic commits make it easier to pinpoint the introduction of a bug than non-atomic commits.

A separate but related idea is to use small commits, which is just a specific application of “Only write a little code at a time”.

Don’t allow yourself to stay stuck

Being stuck is about the worst state to be in. Going down the wrong path is (counterintuitively) more productive than standing still. Going down the wrong path stirs up new thoughts and new information; standing still doesn’t.

If you can’t think of a good way to move forward, move forward in a bad way. You can always revert your work later, especially if you’re using small, frequent, atomic commits.

If you can’t think of a way to keep moving, try to precisely articulate what exactly it is you’re stuck on and why you’re stuck on it. Sometimes that very act helps you get unstuck.

If you think the reason you’re stuck is a lack of some necessary piece of knowledge, do some Google searches. Read some relevant blog posts and book passages. Post some forum questions. Ask some people in some Slack groups (and if you’re not in any technical Slack groups, join some). Go for a walk and then come back and try again. If all else fails, set the problem aside and work on something else instead. But whatever you do, don’t just sit there and think.

Debugging tips

Articulate the problem

If you can precisely articulate what the problem is, you’ve already gone a good ways toward the solution. Conversely, if you can’t articulate exactly what the problem is, your situation is almost hopeless.

Don’t fall into logical fallacies

One of the biggest dangers in debugging is the danger of tricking yourself into thinking you know something you don’t actually know.

If in the course of your investigation you uncover a rock solid truth, write it down on your list of things you know to be true.

If on the other hand you uncover something that seems to be true but you don’t have enough evidence to be 100% sure that it’s true, don’t write it down as a truth. It’s okay to write it down as a hypothesis, but if you confuse hypotheses with facts then you’re liable to get confused and waste some time.

Favor isolation over reason

I know of two ways to identify the cause of a bug. One is that I can study the code and perform experiments and investigations until I’ve pinpointed the line of code where the problem lies. The other method is that I can isolate the bug. The latter is usually many times quicker.

One of my favorite ways to isolate a bug is to use git bisect. Once I’ve found the commit that introduced the bug, it’s often pretty plain to see what part of the commit introduced the bug. If it’s not, I’ll usually negate the offending code by running git revert --no-commit on the offending commit and then re-introduce the offending code tiny bit by tiny bit until I’ve found the culprit. This methodology almost never fails to work.
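Roughly, the workflow looks like this (the commit SHAs are placeholders):

    $ git bisect start
    $ git bisect bad                 # the commit we're on now exhibits the bug
    $ git bisect good 1a2b3c4        # a commit we know was fine
    # git now checks out commits for us to test; we mark each one...
    $ git bisect good                # ...or: git bisect bad
    # ...until git announces the first bad commit. Then:
    $ git bisect reset
    $ git revert --no-commit 9f8e7d6 # negate the offending commit without committing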

Favor googling over thinking

Mental energy is a precious resource. Don’t waste mental energy over problems other people have already solved, especially if solving that particular problem does little or nothing to enhance your skills.

If you run a command and get a cryptic error, don’t sit and squint at it. Copy and paste the error message into Google.

Thinking for yourself of course has its time and place. Understanding things deeply has its time and place too. Exercise good judgment over when you should pause and understand what you’re seeing versus when you should plow through the problem as effortlessly as possible so you can get on with your planned work.

Learn to formulate good search queries for error messages

Let’s say I run a command and get the following error: /Users/jasonswett/.rvm/gems/ruby-2.5.1/gems/rspec-core-3.8.0/lib/rspec/core/reporter.rb:229:in require': cannot load such file -- rspec/core/profiler (LoadError)

Which part of the error should I google?

If I paste the whole thing into Google, the /Users/jasonswett/.rvm/gems/ruby-2.5.1/gems/rspec-core-3.8.0 part, which is unique to my computer, my Ruby version and my rspec-core version, narrows my search so much that I don’t even get a single result.

At the other extreme, require': cannot load such file is far too broad. Lots of different error messages could contain that text.

The sensible part of the error message to copy/paste is require': cannot load such file -- rspec/core/profiler (LoadError). It’s specific enough to have hope of matching other similar problems, but not so specific that it won’t return any results.

Learn to formulate good search queries in general

I’m not sure how to articulate what makes a good search query but here’s my attempt: a query that’s as short as it can possibly be while still containing all the relevant search terms.

Don’t give too much credence to the error message

Whenever I get an error message, I pretty much never pay attention to what it says, at least not as the first step.

When I get an error message, I look at the line number and file. Then I go to that line number and look to see if there’s anything obviously wrong. Probably about 90% of the time there is.

For the other 10% of the time, I’ll go back to the error message and read it to see if it offers any clues. If it does, I’ll make a change to see if that fixes the problem. If the error is so cryptic that I don’t know what to do, I’ll google the error message.

Make the terminal window big

If I had a dollar for every time I helped a student who was trying to debug a problem using a terminal window the size of a Post-It note, I’d be a rich man. Maximize the window so you can actually see what you’re doing.

Close unneeded tabs

Open tabs cost mental energy. The mental clutter of keeping all those tabs open costs more than simply re-finding a page when you actually need it, so it’s way cheaper just to close the tab and re-find the page later. I usually have only about two tabs open at any given time.

RSpec/Capybara integration tests: the ultimate guide

What exactly are integration tests?

The many different types of RSpec specs

When I started learning about Rails testing and RSpec I discovered that there are many different kinds of tests. I encountered terms like “model spec”, “controller spec”, “view spec”, “request spec”, “route spec”, “feature spec”, and more. I asked myself: Why do these different types of tests exist? Do I need to use all of them? Are some more important than others?

As I’ve gained experience with Rails testing I’ve come to believe that yes, some types of tests (or, to use the RSpec terminology, “specs”) are more important than other types of specs and no, I don’t need to use all of the types of specs that RSpec offers. Some types of specs I never use at all (e.g. view specs).

There are two types of tests I use way more than any other types of specs: model specs and feature specs. Let’s focus on these two types of specs for a bit. What are model specs and feature specs for, and what’s the difference?

Feature specs and model specs

Speaking loosely, model specs test ActiveRecord models by themselves and feature specs test the whole “stack” including model code, controller code, and any HTML/CSS/JavaScript together. Neither type of spec is better or worse (and it’s not an “either/or” situation anyway), the two types of specs just have different strengths and weaknesses in different scenarios.

The strengths and weaknesses of model specs

An advantage of model specs is that they’re “inexpensive”. Compared to feature specs, model specs are relatively fast to run and relatively fast to write. It tends to be slower to actually load the pages of an application and switch among them than to just test model code by itself with no browser involved. That’s an advantage of model specs. A disadvantage of model specs is that they don’t tell you anything about whether your whole application actually works from the user’s perspective. You can understand what I mean if you imagine a Rails model that works perfectly by itself but doesn’t have any working HTML/CSS/JavaScript plugged into that model to make the whole application work. In such a case the model specs could pass even though there’s no working feature built on top of the model code.

The strengths and weaknesses of feature specs

Feature specs have basically the opposite pros and cons. Unlike model specs, feature specs do tell you whether all the parts of your application are working together. There’s a cost to this, though, in that feature specs are relatively “expensive”. Feature specs are relatively time-consuming to run because you’re running more stuff than in a model spec. (It’s slower to have to bring up a web browser than to not have to.) Feature specs are also relatively time-consuming to write. The reason feature specs are time-consuming to write is that, unlike a model spec where you can exercise e.g. just one method at a time, a feature spec has to exercise a whole feature at a time. In order to test a feature it’s often necessary to have certain non-trivial conditions set up—if you want to test, for example, the creation of a hair salon appointment you first have to have a salon, a stylist, a client, and maybe more. This necessity makes the feature spec slower to write and slower to run than a model spec where you could test e.g. a method on a Stylist class all by itself.

Where integration tests come into the picture, and what integration tests are

If you’re okay with being imprecise, we can say that feature specs and integration tests are roughly the same thing. And I hope you’re okay with being imprecise because when it comes to testing terminology in general there’s very little consensus on what the various testing terms mean, so it’s kind of impossible to be precise all the time. We do have room to be a little more precise in this particular case, though.

In my experience it’s commonly agreed that an integration test is any test that tests two or more parts of an application together. What do we mean when we say “part”? A “part” could be pretty much anything. For example, it has been validly argued that model specs could be considered integration tests because model specs typically test Ruby code and database interaction, not just Ruby code in isolation with no database interaction. But it could also be validly argued that a model spec is not an integration test because a model spec just tests “one” thing, a model, without bringing controllers or HTML pages into the picture. (For our purposes though let’s say model specs are NOT integration tests, which is the basic view held, implicitly or explicitly, by most Rails developers.)

So, while general agreement exists on what an integration test is in a broad sense, there still is a little bit of room for interpretation once you get into the details. It’s kind of like the definition of a sandwich. Most people agree that a sandwich is composed of a piece of food surrounded by two pieces of bread, but there’s not complete consensus on whether e.g. a hamburger is a sandwich. Two different people could have different opinions on the matter and no one could say either is wrong because there’s not a single authoritative definition of the term.

Since feature specs exercise all the layers of an application stack, feature specs fall solidly and uncontroversially into the category of integration tests. This is so true that many developers (including myself) speak loosely and use the terms “feature spec” and “integration test” interchangeably, even though we’re being a little inaccurate by doing so. Inaccurate because all feature specs are integration tests but not all integration tests are feature specs. (For example, a test that exercises an ActiveRecord model interacting with the Stripe API could be considered an integration test even though it’s not a feature spec.)

I hope at this point you have some level of understanding of how feature specs and integration tests are different and where they overlap. Now that we’ve discussed feature specs vs. integration tests, let’s bring some other common testing terms into the picture. What about end-to-end tests and system tests?

What’s the difference between integration tests, acceptance tests, end-to-end tests, system tests, and feature specs?

A useful first step in discussing these four terms may be to name the origin of each term. Is it a Ruby/Rails-specific term or just a general testing term that we happen to use in the Rails world sometimes?

• Integration tests: general testing term, not Ruby/Rails-specific
• Acceptance tests: general testing term, not Ruby/Rails-specific
• End-to-end tests: general testing term, not Ruby/Rails-specific
• System tests: Rails-specific term, discussed in the official Rails guides
• Feature specs: an RSpec concept/term

Integration tests, acceptance tests and end-to-end tests

Before we talk about where system tests and feature specs fit in, let’s discuss integration tests, acceptance tests and end-to-end tests.

Like most testing terms, I’ve heard the terms integration tests, acceptance tests, and end-to-end tests used by different people to mean different things. It’s entirely possible that you could talk to three people and walk away thinking that integration tests, acceptance tests and end-to-end tests mean three different things. It’s also entirely possible that you could talk to three people and walk away thinking all three terms mean the same exact thing.

I’ll tell you how I interpret and use each of these three terms, starting with end-to-end tests.

To me, an end-to-end test is a test that tests all layers of an application stack under conditions that are very similar to production. So in a Rails application, a test would have to exercise the HTML/CSS/JavaScript, the controllers, the models, and the database in order to qualify as an end-to-end test. To contrast end-to-end tests with integration tests, all end-to-end tests are integration tests but not all integration tests are end-to-end tests. I would say that the only difference between the terms “end-to-end test” and “feature spec” is that feature spec is an RSpec-specific term while end-to-end test is a technology-agnostic industry term. I don’t tend to hear the term “end-to-end test” in Rails circles.

Like I said earlier, an integration test is often defined as a test that verifies that two or more parts of an application behave correctly not only in isolation but also when used together. For practical purposes, I usually hear Rails developers say “integration test” when they’re referring to RSpec feature specs.

The purpose of an acceptance test is quite a bit different (in my definition at least) from end-to-end tests or integration tests. The purpose of an acceptance test is to answer the question, “Does the implementation of this feature match the requirements of the feature?” As with end-to-end tests, I don’t ever hear Rails developers refer to feature specs as acceptance tests. I have come across this usage, though. One of my favorite testing books, Growing Object-Oriented Software, Guided by Tests, uses the term “acceptance test” to refer to what in RSpec/Rails would be a feature spec. So again, there’s a huge lack of consensus in the industry around testing terminology.

System tests and feature specs

Unlike many distinctions we’ve discussed so far, the difference between system tests and feature specs is happily pretty straightforward: when an RSpec user says integration test, they mean feature spec; when a MiniTest user says integration test, they mean system test. If you use RSpec you can focus on feature specs and ignore system tests. If you use MiniTest you can focus on system tests and ignore feature specs.

My usage of “integration test”

For the purposes of this article I’m going to swim with the current and use the terms “integration test” and “feature spec” synonymously from this point forward. When I say “integration test”, I mean feature spec.

What do I write integration tests for, and how?

We’ve discussed what integration tests are and aren’t. We’ve also touched on some related testing terms. If you’re like many developers getting started with testing, your next question might be: what do I write integration tests for, and how?

This question (what do I write tests for) is probably the most common question I hear from people who are new to testing. It was one of my own main points of confusion when I myself was getting started with testing.

What to write integration tests for

I can tell you what I personally write integration tests for in Rails: just about everything. Virtually every time I build a feature that a user will interact with in a browser, I’m going to write at least one integration test for that feature. Unfortunately this answer of mine, while true, is perhaps not very helpful. When someone asks what they should write tests for, that person is probably somewhat lost and that person is probably looking for some sort of toehold. So here’s what’s hopefully a more helpful answer.

If you’ve never written any sort of integration test (which again I’m sloppily using interchangeably with “feature spec”) then my advice would be to first do an integration test “hello world” just so you can see what’s what and get a tiny bit of practice under your belt. I have another post, titled A Rails testing “hello world” using RSpec and Capybara, that will help you do just that.

What if you’re a little further? What if you’re already comfortable with a “hello world” level of integration testing and you’re more curious about how to add real integration tests to your Rails application?

My advice would be to start with what’s easiest. If you’re working with a legacy project (legacy project here could just be read as “any existing project without a lot of test coverage”), it’s often the case that the structure of the code makes it particularly difficult to add tests. So in these cases you’re dealing with two separate challenges at once: a) you’re trying to learn testing (not easy) and b) you’re trying to overcome the obstacles peculiar to adding tests to legacy code (also not easy). To the extent that you can, I would encourage you to try to separate these two endeavors and just focus on getting practice writing integration tests first.

If you’re trying to add tests to an existing project, maybe you can find some areas of the application where adding integration tests is relatively easy. If there’s an area of your application where the UI simply provides CRUD operations on some sort of resource that has few or no dependencies, then that area might be a good candidate to begin with. A good clue would be to look for how many has_many/belongs_to calls you find in your various models (in other words, look for how many associations your models have). If you have a model that has a lot of associations, the CRUD interface for that model is probably not going to be all that easy to test because you’re going to have to spin up a lot of associated records in order to get the feature to function. If you can find a model with fewer dependencies, a good first integration test to write might be a test for updating a record.

How to write integration tests

Virtually all the integration tests I write follow the same pattern:

  1. Generate some test data
  2. Log into the application
  3. Visit the page I’m interested in
  4. Perform whatever clicks and typing need to happen in order to exercise the feature I’m testing
  5. Perform an assertion
    If you can follow this basic template, you can write integration tests for almost anything. The hardest part is usually the step of generating test data that will help get your program/feature into the right state for testing what you want to test.

    Most of the Rails features I write most of the time are pretty much just some variation of a CRUD interface for a Rails resource. So when I’m developing a feature, I’ll usually write an integration test for at least creating and updating the resource, and maybe deleting an instance of that resource. (I’ll pretty much never write a test for simply viewing a list of resources, the “R” in CRUD, because that functionality is usually exercised anyway in the course of testing other behavior.) Sometimes I write my integration tests before I write the application code that makes the tests pass. Usually, though, I generate the CRUD code first using Rails scaffolds and add the tests afterward. To put it another way, I usually don’t write my integration tests in a TDD fashion. In fact, it’s not really possible to use scaffolds and TDD at the same time. In the choice between having the benefits of scaffolds and having the benefits of TDD, I choose having the benefits of scaffolds. (I do, however, practice TDD when writing other types of tests besides integration tests.)

    Now that we’ve discussed in English what to write integration tests for and how, let’s continue fleshing out this answer using some actual code examples. The remainder of this article is a Rails integration test tutorial.

    Tutorial overview

    Project description

    In this short tutorial we’re going to create a small Rails application which is covered by a handful of integration tests. The application will be ludicrously small as far as Rails applications go, but a ludicrously small Rails application is all we’re going to need.

    Our application will have just a single model, City, which has just a single attribute, name. Since the application has to be called something, we’ll call it “Metropolis”.

    Tutorial outline

    Here are the steps we’re going to take:

    1. Initialize the application
    2. Create the city resource
    3. Write some integration tests for the city resource

    That’s all! Let’s dive in. The first step will be to set up our Rails project.

    Setting up our Rails project

    First let’s initialize the application. The -T flag means “no test framework”. We need the -T flag because, if it’s not there, Rails will assume we want MiniTest, which is not what we want in this case. I’m using the -d postgresql flag because I like PostgreSQL, but you can use whatever RDBMS you want.
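    Since the app is called Metropolis, the command looks like this:

        $ rails new metropolis -T -d postgresql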

    Next let’s cd into the project directory and create the database.
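    In other words:

        $ cd metropolis
        $ rails db:create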

    With that groundwork out of the way we can add our first resource and start adding our first integration tests.

    Writing our integration tests

    What we’re going to do in this section is generate the city scaffold, then pull up the city index page in the browser, then write some tests for the city resource.

    The tests we’re going to write for the city resource are:

    • Creating a city (with valid inputs)
    • Creating a city (with invalid inputs)
    • Updating a city (with valid inputs)
    • Deleting a city

    Creating the city resource

    The City resource will have only one attribute, name.
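    We can generate the whole CRUD scaffold for it (and run the migration) like so:

        $ rails g scaffold city name:string
        $ rails db:migrate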

    Let’s set our root route to cities#index so Rails has something to show when we visit localhost:3000.
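    The scaffold has already added resources :cities to config/routes.rb, so we just need the root line:

        # config/routes.rb
        Rails.application.routes.draw do
          root 'cities#index'
          resources :cities
        end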

    If we now run the rails server command and visit http://localhost:3000, we should see the CRUD interface for the City resource.

    Integration tests for City

    Before we can write our tests we need to install a few gems.

    Installing the necessary gems

    Let’s add the following to our Gemfile under the :development, :test group.
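    At minimum that means RSpec and Capybara; something like this:

        group :development, :test do
          gem 'rspec-rails' # the RSpec test framework plus its Rails integration
          gem 'capybara'    # lets our tests visit pages, fill in forms, and click buttons
        end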

    Remember to bundle install.

    In addition to running bundle install which will install the above gems, we also need to install RSpec into our Rails application which is a separate step. The rails g rspec:install command will add a couple RSpec configuration files into our project.

    The last bit of plumbing work we have to do before we can start writing integration tests is to create a directory where we can put the integration tests. I tend to create a directory called spec/features and this is what I’ve seen others do as well. There’s nothing special about the directory name features. It could be called anything at all and still work.

    Writing the first integration test (creating a city)

    The first integration test we’ll write will be a test for creating a city. The steps will be this:

    1. Visit the “new city” page
    2. Fill in the Name field with a city name (Minneapolis)
    3. Click the Create City button
    4. Visit the city index page
    5. Assert that the city we just added, Minneapolis, appears on the page

    Here are those five steps translated into code.

    1. visit new_city_path
    2. fill_in 'Name', with: 'Minneapolis'
    3. click_on 'Create City'
    4. visit cities_path
    5. expect(page).to have_content('Minneapolis')

    Finally, here are the contents of a file called spec/features/create_city_spec.rb with the full working test code.
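    Something along these lines (the scenario label is arbitrary):

        # spec/features/create_city_spec.rb
        require 'rails_helper'

        RSpec.describe 'Creating a city', type: :feature do
          scenario 'with valid inputs' do
            visit new_city_path
            fill_in 'Name', with: 'Minneapolis'
            click_on 'Create City'

            visit cities_path
            expect(page).to have_content('Minneapolis')
          end
        end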

    Let’s create the above file and then run the test.

    The test passes. There’s kind of a problem, though. How can we be sure that the test is actually doing its job? What if we accidentally wrote the test in such a way that it always passes, even if the underlying feature doesn’t work? Accidental “false positives” certainly do happen. It’s a good idea to make sure we don’t fool ourselves.

    Verifying that the test actually does its job

    How can we verify that a test doesn’t give a false positive? By breaking the feature and verifying that the test no longer passes.

    There are two ways to achieve this:

    1. Write the failing test before we write the feature itself (test-driven development)
    2. Write the feature, write the test, then break the feature

    Method #1 is not an option for us in this case because we already created the feature using scaffolding. That’s fine though. It’s easy enough to make a small change that breaks the feature.

    In our CitiesController, let’s replace the line if @city.save with simply if true. This way the flow through the application will continue as if everything worked, but the city record we’re trying to create won’t actually get created, and so the test should fail when it looks for the new city on the page.

    If we run the test again now, it does in fact fail.

    Now we can change if true back to if @city.save, knowing that our test really does protect against a regression should this city saving functionality ever break.

    We’ve just added a test for attempting (successfully) to create a city when all inputs are valid. Now let’s add a test that verifies that we get the desired behavior when not all inputs are valid.

    Integration test for trying to create a city with invalid inputs

    In our “valid inputs” case we followed the following steps.

    1. Visit the “new city” page
    2. Fill in the Name field with a city name (Minneapolis)
    3. Click the Create City button
    4. Visit the city index page
    5. Assert that the city we just added, Minneapolis, appears on the page

    For the invalid case we’ll follow a slightly different set of steps.

    1. Visit the “new city” page
    2. Leave the Name field blank
    3. Click the Create City button
    4. Assert that the page contains an error

    Here’s what these steps might look like when translated into code.

    1. visit new_city_path
    2. fill_in 'Name', with: ''
    3. click_on 'Create City'
    4. expect(page).to have_content("Name can't be blank")

    A comment about the above steps: step 2 is actually not really necessary. The Name field is blank to begin with. Explicitly setting the Name field to an empty string is superfluous and doesn’t make a bit of difference in how the test actually works. However, I’m including this step just to make it blatantly obvious that we’re submitting a form with an empty Name field. If we were to jump straight from visiting new_city_path to clicking the Create City button, it would probably be less clear what this test is all about.

    Here’s the full version of the “invalid inputs” test scenario alongside our original “valid inputs” scenario.
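    Together, the file looks something like this:

        # spec/features/create_city_spec.rb
        require 'rails_helper'

        RSpec.describe 'Creating a city', type: :feature do
          scenario 'with valid inputs' do
            visit new_city_path
            fill_in 'Name', with: 'Minneapolis'
            click_on 'Create City'

            visit cities_path
            expect(page).to have_content('Minneapolis')
          end

          scenario 'with invalid inputs' do
            visit new_city_path
            fill_in 'Name', with: ''
            click_on 'Create City'

            expect(page).to have_content("Name can't be blank")
          end
        end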

    Let’s see what we get when we run this test.

    The test fails.

    Instead of finding the text “Name can’t be blank” on the page, it found the text “City was successfully created.” Evidently, Rails happily accepted our blank Name input and created a city with an empty string for a name.

    To fix this behavior we can add a presence validator to the name attribute on the City model.
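    That’s a one-line change in app/models/city.rb:

        class City < ApplicationRecord
          validates :name, presence: true
        end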

    The test now passes.

    Integration test for updating a city

    The steps for this test will be:

    1. Create a city in the database
    2. Visit the “edit” page for that city
    3. Fill in the Name field with a different value from what it currently is
    4. Click the Update City button
    5. Visit the city index page
    6. Assert that the page contains the city’s new name

    Here are these steps translated into code.

    1. nyc = City.create!(name: 'NYC')
    2. visit edit_city_path(id: nyc.id)
    3. fill_in 'Name', with: 'New York City'
    4. click_on 'Update City'
    5. visit cities_path
    6. expect(page).to have_content('New York City')

    Here’s the full test file which we can put at spec/features/update_city_spec.rb.
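    Something like this:

        # spec/features/update_city_spec.rb
        require 'rails_helper'

        RSpec.describe 'Updating a city', type: :feature do
          scenario 'with valid inputs' do
            nyc = City.create!(name: 'NYC')

            visit edit_city_path(id: nyc.id)
            fill_in 'Name', with: 'New York City'
            click_on 'Update City'

            visit cities_path
            expect(page).to have_content('New York City')
          end
        end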

    If we run this test (rspec spec/features/update_city_spec.rb), it will pass. Like before, though, we don’t want to trust this test without seeing it fail once. Otherwise, again, we can’t be sure that the test isn’t giving us a false positive.

    Let’s change the line if @city.update(city_params) in CitiesController to if true so that the controller continues on without actually updating the city record. This should make the test fail.

    The test does in fact now fail.

    Integration test for deleting a city

    This will be the last integration test we write. Here are the steps we’ll follow.

    1. Create a city in the database
    2. Visit the city index page
    3. Assert that the page contains the name of our city
    4. Click the “Destroy” link
    5. Assert that the page no longer contains the name of our city

    Translated into code:

    1. City.create!(name: 'NYC')
    2. visit cities_path
    3. expect(page).to have_content('NYC')
    4. click_on 'Destroy'
    5. expect(page).not_to have_content('NYC')

    Here’s the full test file, spec/features/delete_city_spec.rb.
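    Something like this (note that, depending on your Rails version and Capybara driver, the scaffold’s Destroy link may need a JavaScript-capable driver, plus accept_confirm to handle the confirmation dialog):

        # spec/features/delete_city_spec.rb
        require 'rails_helper'

        RSpec.describe 'Deleting a city', type: :feature do
          scenario 'from the index page' do
            City.create!(name: 'NYC')

            visit cities_path
            expect(page).to have_content('NYC')

            click_on 'Destroy'
            expect(page).not_to have_content('NYC')
          end
        end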

    If we run this test, it will pass.

    To protect against a false positive and see the test fail, we can comment out the line @city.destroy in CitiesController.

    Now the test fails.

    Remember to change that line back so that the test passes again.

    At this point we’ve written four test cases:

    • Creating a city (with valid inputs)
    • Creating a city (with invalid inputs)
    • Updating a city (with valid inputs)
    • Deleting a city

    Let’s invoke the rspec command to run all four test cases. Everything should pass.

    Where to go next

    If you want to learn more about writing integration tests in Rails, here are a few recommendations.

    First, I recommend good old practice and repetition. If you want to get better at writing integration tests, write a whole bunch of integration tests. I would suggest building a side project of non-trivial size that you maintain over a period of months. Try to write integration tests for all the features in the app. If there’s a feature you can’t figure out how to write a test for, give yourself permission to skip that feature and come back to it later. Alternatively, since it’s just a side project without the time pressures of production work, give yourself permission to spend as much time as you need in order to get a test working for that feature.

    Second, I’ll recommend a couple books.

    Growing Object-Oriented Software, Guided by Tests. This is one of the first books I read when I was personally getting started with testing. I recommend it often and hear it recommended often by others. It’s not a Ruby book but it’s still highly applicable.

    The RSpec Book. If you’re going to be writing tests with RSpec, it seems like a pretty good idea to read the RSpec book. I also did a podcast interview with one of the authors where he and I talk about testing. I’m recommending this book despite the fact that it advocates Cucumber, which I am pretty strongly against.

    Effective Testing with RSpec 3. I have not yet picked up this book although I did do a podcast interview with one of the authors.

    My last book recommendation is my own book, Rails Testing for Beginners. I think my book offers a pretty good mix of model-level testing and integration-level testing with RSpec and Capybara.

Common causes of flickering/flapping/flaky tests

A flapping test is a test that sometimes passes and sometimes fails even though the application code being tested hasn’t changed. Flapping tests can be hard to reproduce, diagnose, and fix. Here are some of the common causes I know of for flapping tests. If you know the common causes of flapping tests it can go a long way toward diagnosis. Once diagnosis is out of the way, the battle is half over.

Race conditions

Let’s say you have a Capybara test that clicks a page element that fires off an AJAX request. The AJAX request completes, making some other page element clickable. Sometimes the AJAX request beats the test runner, meaning the test works fine. Sometimes the test runner beats the AJAX request, meaning the test breaks.
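Here’s a hypothetical sketch of the difference (the element names are made up):

    # Race-prone: the second click can run before the AJAX response has
    # updated the page.
    click_on 'Add attendee'
    click_on 'Remove attendee'

    # Less race-prone: Capybara matchers like have_link and have_content
    # retry until the element appears (up to Capybara.default_max_wait_time).
    click_on 'Add attendee'
    expect(page).to have_link('Remove attendee')
    click_on 'Remove attendee'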

“Leaking” tests (non-deterministic test suite)

Sometimes a test will fail to clean up after itself and “leak” state into other tests. To take a contrived but obvious example, let’s say test A checks that the “list customers” page shows exactly 5 customers and test B verifies that a customer can be created successfully. If test B doesn’t clean up after itself, then running test B before test A will cause test A to fail because there are now 6 customers instead of 5. This exact issue is almost never an issue in Rails (because each test runs inside a transaction) but database interaction isn’t the only possible form of leaky tests.

When I’m fixing this sort of issue I like to a) determine an order of running the tests that always causes my flapping test to fail and then b) figure out what’s happening in the earlier test that makes the later test fail. In RSpec I like to take note of the seed number, then run rspec --seed <seed number> which will re-run my tests in the same order.

Reliance on third-party code

Sometimes tests unexpectedly fail because some behavior of third-party code has changed. I actually sometimes consciously choose to live with this category of flapping test. If an API I rely on goes down and my tests fail, the tests arguably should fail because my application is genuinely broken. But if my tests are flapping in a way that’s not legitimate, I’d probably change my test to use something like VCR instead of hitting the real API.

A Rails model test “hello world”

The following is an excerpt from my book, Rails Testing for Beginners.

What we’re going to do

What we’re going to do in this post is:

  1. Initialize a new Rails application
  2. Install RSpec using the rspec-rails gem
  3. Generate a User model
  4. Write a single test for that User model

The test we’ll write will be a trivial one. The user will have both a first and last name. On the User model we’ll define a full_name method that concatenates first and last name. Our test will verify that this method works properly. Let’s now begin the first step, initializing the Rails application.

Initializing the Rails application

Let’s initialize the application using the -T flag meaning “no test framework”. (We want RSpec in this case, not the Rails default of MiniTest.)
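The app name doesn’t matter; I’ll use user_hello_world as a placeholder:

    $ rails new user_hello_world -T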

Now we can install RSpec.

Installing RSpec

To install RSpec, all we have to do is add rspec-rails to our Gemfile and then do a bundle install.
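In the Gemfile (the :development, :test group is the conventional place for it):

    group :development, :test do
      gem 'rspec-rails'
    end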

We’ve installed the RSpec gem but we still have to run rails g rspec:install which will create a couple configuration files for us.

Now we can move on to creating the User model.

Creating the User model

Let’s create a User model with just two attributes: first_name and last_name.
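The generator command (plus the migration) looks like this:

    $ rails g model user first_name:string last_name:string
    $ rails db:migrate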

Now we can write our test.

The User test

Writing the test

Here’s the test code:
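It’s only a few lines (I’m assuming the conventional spec/models/user_spec.rb location):

    require 'rails_helper'

    RSpec.describe User do
      describe '#full_name' do
        it 'returns the first and last names joined by a space' do
          user = User.new(first_name: 'Abraham', last_name: 'Lincoln')
          expect(user.full_name).to eq('Abraham Lincoln')
        end
      end
    end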

Let’s take a closer look at this test code.

Examining the test

Let’s focus for a moment on this part of the test code:
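Namely, the two lines that instantiate a user and check its full_name:

    user = User.new(first_name: 'Abraham', last_name: 'Lincoln')
    expect(user.full_name).to eq('Abraham Lincoln')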

This code is saying:

  1. Instantiate a new User object with first name Abraham and last name Lincoln.
  2. Verify that when we call this User object’s full_name method, it returns the full name, Abraham Lincoln.

What are the its and describes all about? Basically, these “it blocks” and “describe blocks” are there to allow us to put arbitrary labels on chunks of code to make the test more human-readable. (This is not precisely true but it’s true enough for our purposes for now.) As an illustration, I’ll drastically change the structure of the test:
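Something like this, with arbitrary extra nesting and deliberately unhelpful labels:

    require 'rails_helper'

    RSpec.describe User do
      describe 'stuff' do
        describe 'other stuff' do
          it 'does stuff' do
            user = User.new(first_name: 'Abraham', last_name: 'Lincoln')
            expect(user.full_name).to eq('Abraham Lincoln')
          end
        end
      end
    end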

This test will pass just fine. Every it block needs to be nested inside a describe block, but RSpec doesn’t care how deeply nested our test cases are or whether the descriptions we put on our tests (e.g. the vague and mysterious it 'does stuff') make any sense.

Running the test

If we run this test it will fail. It fails because the full_name method is not yet defined.

Let’s now write the full_name method.
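In app/models/user.rb, it can be as simple as this:

    class User < ApplicationRecord
      def full_name
        "#{first_name} #{last_name}"
      end
    end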

Now the test passes.

Code smell: long, procedural background jobs (and how to fix it)

I’ve worked on a handful of Rails applications that have pretty decent test coverage in terms of model specs and feature specs but are lacking in tests for background jobs. It seems that developers often find background jobs more difficult to test than Active Record models—otherwise the test coverage would presumably be better in that area. I’ve found background jobs of 100+ lines in length with no tests, even though the same application’s Active Record models had pretty good test coverage.

I’m not sure why this is. My suspicion is that background job code is somehow viewed as “separate” from the application code, as if for some reason it doesn’t need tests and wouldn’t benefit from an object-oriented structure like the rest of the application.

Whatever the root cause of this issue, the fix is fortunately often pretty simple. We can apply the extract class refactoring technique to turn the long, procedural background job into one or more neatly abstracted classes.
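Here’s a hypothetical sketch of the before and after (all the names are invented):

    # Before: one long, procedural job with no tests
    class SyncCustomersJob < ApplicationJob
      def perform
        # ...a hundred lines of fetching, transforming, and saving records...
      end
    end

    # After: the job is a thin wrapper around an extracted, easily testable class
    class SyncCustomersJob < ApplicationJob
      def perform
        CustomerSync.new.call
      end
    end

    class CustomerSync
      def call
        # the same logic as before, now reachable from a plain unit test
      end
    end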

Just as I would when embarking on the risky task of refactoring any piece of untested legacy code, I would use characterization testing to help me make sure I’m preserving the original behavior of the background job code as I move it into its own class.

A Rails testing “hello world” using RSpec and Capybara

The following is an excerpt from my book, Rails Testing for Beginners.

What we’re going to do

One of the biggest challenges in getting started with testing is just that—the simple act of getting started. What I’d like is for you, dear reader, to get a small but meaningful “win” under your belt as early as possible. If you’ve never written a single test in a Rails application before, then by the end of this chapter, you will have written your first test.

Here’s what we’re going to do:

    • Initialize a plain-as-possible Rails application
    • Create exactly one static page that says “Hello, world!”
    • Write a single test (using RSpec and Capybara) that verifies that our static page in fact says “Hello, world!”

The goal here is just to walk through the motions of writing a test. The test itself is rather pointless. I would probably never write a test in real life that just verifies the content of a static page. But the goal is not to write a realistic test but to provide you with a mental “Lego brick” that you can combine with other mental Lego bricks later. The ins and outs of writing tests get complicated quick enough that I think it’s valuable to start with an example that’s almost absurdly simple.

The tools we’ll be using

For this exercise we’ll be using RSpec, anecdotally the most popular of the Rails testing frameworks, and Capybara, a library that enables browser interaction (clicking buttons, filling out forms, etc.) using Ruby. You may find some of the RSpec or Capybara syntax confusing or indecipherable. You also may well not understand where the RSpec stops and the Capybara starts. For the purposes of this “hello world” exercise, that’s okay. Our goal right now is not deep understanding but to begin putting one foot in front of the other.

Initializing the application

Run the “rails new” command

First we’ll initialize this Rails app using the good old rails new command.
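Since the app is called hello_world, that’s:

    $ rails new hello_world -T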

You may be familiar with the -T flag. This flag means “no tests”. If we had done rails new hello_world without the -T flag, we would have gotten a Rails application with a test directory containing MiniTest tests. I want to use RSpec, so I want us to start with no MiniTest.

Let’s also create our application’s database at this point since we’ll have to do that eventually anyway.
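In other words:

    $ cd hello_world
    $ rails db:create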

Update our Gemfile

This is the step where RSpec and Capybara will start to come into the picture. Each library will be included in our application in the form of a gem. We’ll also include two other gems, selenium-webdriver and chromedriver-helper, which are necessary in order for Capybara to interact with the browser.

Let’s add the following to our Gemfile under the :development, :test group. I’ve added a comment next to each gem or group of gems describing its role. Even with the comments, it may not be abundantly clear at this moment what each gem is for. At the end of the exercise we’ll take a step back and talk about which library enabled which step of what we just did.
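It ends up looking roughly like this:

    group :development, :test do
      # RSpec itself, plus its Rails-specific support code
      gem 'rspec-rails'

      # Capybara lets our test code interact with the app the way a user would:
      # visiting pages, filling out forms, clicking buttons
      gem 'capybara'

      # These two let Capybara drive a real Chrome browser
      gem 'selenium-webdriver'
      gem 'chromedriver-helper'
    end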

Don’t forget to bundle install.

Install RSpec

Although we’ve already installed the RSpec gem, we haven’t installed RSpec into our application. Just like the Devise gem, for example, which requires us not only to add devise to our Gemfile but to also run rails g devise:install, RSpec installation is a two-step process. After we run rails g rspec:install we’ll have a spec directory in our application containing a couple config files.

Now that we’ve gotten the “plumbing” work out of the way, let’s write some actual application code.

Creating our static page

Let’s generate a controller, HelloController, with just a single action, index.
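That’s a single generator command:

    $ rails g controller hello index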

Let’s modify the action’s template so it just says “Hello, world!”.
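
In other words, replace the contents of app/views/hello/index.html.erb with just the text we want to see:

    Hello, world!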

Just as a sanity check, let’s start the Rails server and open up our new page in the browser to make sure it works as expected.
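
That means starting the server and visiting the route the generator wired up for us:

    rails server

Then open http://localhost:3000/hello/index in a browser and confirm the page says “Hello, world!”.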

We have our test infrastructure. We have the feature we’re going to test. Now let’s write the test itself.

Writing our test

Write the actual test

Let’s create a file called spec/hello_world_spec.rb with the following content:
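
Something like the following works. The describe/it wording is up to you; I’m writing it as a Capybara feature spec here, though a system spec would do just as well:

    require 'rails_helper'

    RSpec.describe 'Hello world', type: :feature do
      it 'says hello' do
        visit hello_index_path
        expect(page).to have_content('Hello, world!')
      end
    end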

If you focus on the “middle” two lines, the ones starting with visit and expect, you can see that this test takes the form of two steps: visit the hello world index path and verify that the page there says “Hello, world!” If you’d like to understand the test in more detail, below is an annotated version.
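
    # Pull in the Rails-specific RSpec configuration that rspec:install
    # generated for us (rails_helper.rb in turn loads spec_helper.rb and
    # boots the Rails environment)
    require 'rails_helper'

    # 'Hello world' is just a human-readable description of what we're
    # testing; type: :feature tells RSpec and Capybara that this is a
    # browser-style feature spec
    RSpec.describe 'Hello world', type: :feature do
      # Each 'it' block is a single test case
      it 'says hello' do
        # Step 1: use Capybara to visit the page at hello_index_path
        # (i.e. /hello/index)
        visit hello_index_path

        # Step 2: assert that the page contains the text "Hello, world!"
        expect(page).to have_content('Hello, world!')
      end
    end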

Now that we’ve written and broken down our test, let’s run it!

Run the test

This test can be run by simply typing rspec followed by the path to the test file.
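
In our case that’s:

    rspec spec/hello_world_spec.rb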

The test should pass. There’s a problem with what we just did, though. How can we be sure that the test is testing what we think it’s testing? It’s possible that the test is passing because our code works. It’s also possible that the test is passing because we made a mistake in writing our test. This concern may seem remote but “false positives” happen a lot more than you might think. The only way to be sure our test is working is to see our test fail first.

Make the test fail

Why does seeing a test fail prior to passing give us confidence that the test works? What we’re doing is exercising two scenarios. In the first scenario, the application code is broken. In the second scenario, the application code is working correctly. If our test passes under that first scenario, the scenario where we know the application code is broken, then we know our test isn’t doing its job properly. A test should always fail when its feature is broken and always pass when its feature works. Let’s now break our “feature” by changing the text on our page to something wrong.
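
For example, edit app/views/hello/index.html.erb so it no longer says what the test expects:

    Jello, world!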

When we run our test now we should see it fail with an error message like expected to find text "Hello, world!" in "Jello, world!".

Watch the test run in the browser

Add the following to the bottom of spec/rails_helper.rb:
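
One line that accomplishes this is the following, which tells Capybara to drive a real Chrome browser by default instead of its default :rack_test driver (which doesn’t use a browser at all). If you’ve configured your drivers differently, adjust accordingly:

    Capybara.default_driver = :selenium_chrome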

Then run the test again using the same rspec command as before.

This time, Capybara should pop open a browser where you can see our test run. When I’m actively working with a test and I want to see exactly what it’s doing, I often find the running of the test to be too fast for me, so I’ll temporarily add a sleep wherever I want my test to slow down. In this case I would put the sleep right after visiting hello_index_path.
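
Something like this, with an arbitrary number of seconds:

    visit hello_index_path
    sleep 5 # give ourselves a few seconds to look at the page
    expect(page).to have_content('Hello, world!')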

Review

There we have it: the simplest possible Rails + RSpec + Capybara test. Unfortunately “the simplest possible Rails + RSpec + Capybara test” is still not particularly simple in absolute terms, but it’s pretty simple compared to everything that goes on in a real production application.

Who’s to blame for bad code? Coders.

When time pressure from managers drives coders to write poor-quality code, some people think the responsible party is the managers. I say it’s the coders. Here’s why I think so.

The ubiquity of time pressure in software development work

I’ve worked on a lot of development projects with a lot of development teams. In the vast majority of cases, if not all cases, the development team is perceived as being “behind”. Sometimes the development team’s progress is slower than the pace the development team themselves set. Sometimes development progress is seen as behind relative to some arbitrary, unrealistic deadline set by someone outside the development team. Either way, no one ever comes by and says to the development team, “Wow, you guys are way ahead of where we expected!”

Often leadership will beg, plead, cajole, flatter, guilt-trip or bribe in order to increase the chances that their timeline will be met. They’ll ask the developers things like, “How can we ‘get creative’ to meet this timeline?” They’ll offer to have lunch, dinner, soda and alcohol brought in for a stretch of time so the developers can focus on nothing but the coding task at hand. Some leaders, lacking a sense of short-term/long-term thinking, will ask the developers to cut corners, to consciously incur technical debt in order to work faster. They might say things like “users don’t care about code” or “perfect is the enemy of the good”.

Caving to the pressure

Unfortunately, in my experience, most developers, most of the time, cave to the pressure to cut corners and deliver poor quality work. If you think I’m mistaken, I would ask you to show me a high-quality production codebase. There are very few in existence.

I myself am by no means innocent. I’ve caved to the pressure and delivered poor-quality work. But that doesn’t mean I think it’s excusable to do poor work or even (as some developers seem to believe) heroic.

Why bad code is the fault of coders, not managers

Why are coders, not managers, the ones responsible for bad code? Because coders are the ones who wrote the code. It was their fingers on the keyboard. The managers weren’t holding a gun to the developers’ heads and demanding they do a bad job.

“But,” a developer might say, “I had to cut corners in order to meet the deadline, and if I didn’t meet the deadline, I might get fired.” I don’t believe this reasoning stands up very well to scrutiny. I’d like to examine this invalid excuse for doing poor work as well as some others.

Invalid excuses for shipping poor work

Fallacy: cutting corners is faster

Why exactly is “bad code” bad? Bad code is bad precisely because it’s slow and expensive to work with. Saying, “I know this isn’t the right way to do it but it was faster” is a self-contradiction. The exact reason “the right way” is the right way is because it’s faster. Faster in the long run, of course. Sloppy, rushed work can be faster in the very short term, but any speed gain is usually negated the next time someone has to work with that area of code, which could be as soon as later that same day.

Fallacy: “they didn’t give me enough time to do a good job”

If I’m not given enough time to do a good job, the answer is not to do a bad job. The answer is that that thing can’t be done in that amount of time.

If my boss comes to me and says, “Can we do X in 6 weeks?” and my response is “It would take 12 weeks to do X, but if we cut a lot of corners we could do it in 6 weeks”, then all my boss hears is “WE CAN DO IT IN 6 WEEKS!”

Given the option between hitting a deadline with “bad code” and missing a deadline, managers will roughly always choose hitting the deadline with bad code. In this way managers are like children who ask over and over to eat candy for dinner. They can’t be trusted to take the pros and cons and make the right decision.

Fallacy: cutting corners is a business decision

It’s my boss’s role to tell me what to do and when to do it. It’s my job to, for the most part, comply. As a programmer I typically won’t have visibility into business decisions the way my boss does. As a programmer I will often not have the right skillset or level of information necessary to have a valid opinion on most business matters.

This idea that my boss can tell me what to do only applies down to a certain level of granularity though. I should let my boss tell me what to do but I shouldn’t let my boss tell me how to do it. I care about the professional reputation I have with my colleagues and with myself. I think there’s a certain minimum level of quality below which professionals should not be willing to go, even though it’s always an option to go lower.

Fallacy: “we can’t afford perfection”

I wish I had a dollar for every time I heard somebody say something like, “Guys, it doesn’t need to be perfect, it just needs to be good enough.”

The fallacy here is that the choice is between “perfect” and “good”. That’s a mistaken way of looking at it. The standard mode of operation during a rush period for developers is to pile sloppy Band-Aid fix on top of sloppy Band-Aid fix as if they desire nothing more strongly in life than to paint themselves into a corner from which they will never be able to escape. The alternative to this horror show isn’t “perfection”. The alternative is just what I might call a “minimally good job”.

Fallacy: “if I don’t comply with instructions to cut corners, I’ll get fired”

This might be true a very very very small percentage of the time. Most of the time it’s not, though. It’s pretty hard to get fired from a job. Firing someone is painful and expensive. Usually someone has to rack up many offenses before they finally get booted.

Fallacy: “we’ll clean it up after the deadline”

I find it funny that some people can make this claim with a straight face. Fixing sloppy work after the deadline “when we have more time” or “when things slow down” never happens. Deadlines, such as a go-live date, often translate to increased pressure and time scarcity because now there’s more traffic and visibility on the system. Deadlines are usually followed by more deadlines. You never get “more time”. Pressure rarely goes down over time. It usually goes up.

Fallacy: “I would have done a better job, but it wasn’t up to me”

Nope. I alone am responsible for the work I do. If someone pressured me to do the wrong thing and in response I chose to do the wrong thing, then I’m responsible for choosing to cave to the pressure.

Fallacy: “I haven’t been empowered to make things better”

It’s rare that a superior will come along and tell me, “Jason, you explicitly have permission to make things better around here.” The reality is that it’s up to me to observe what needs to be done better and help make it happen. I’m a believer in the saying that “it’s easier to get forgiveness than permission”. I tend to do what my professional judgment tells me needs to be done without necessarily asking my boss if it’s okay. Does this get me in trouble sometimes? Yes. I look at those hand-slaps as the cost of self-empowerment.

Plus, all employees have power over their superiors in that they have the power to leave. I don’t mean this in a “let me have my way or I quit!” way. I mean that if I work somewhere that continually tries to force me to do a bad job, I don’t have to just accept it as my fate in life. I can go work somewhere else. That doesn’t fix anything for the organization I’m leaving (bad organizations and bad people can’t be fixed anyway) but it does fix the problem for me.

My plea to my colleagues: don’t make excuses for poor work

I think developers make too many excuses and apologies for bad code. (And as a reminder, “bad code” means code that’s risky, time-consuming and expensive to work with.)

It’s not realistic to do a great job all the time, but I think we can at least do a little better. Right now I think we as a profession err too much on the side of justifying poor-quality work. I think we could stand to err more on the side of taking responsibility.

Sometimes it’s better for tests to hit the real API

A mysteriously broken test suite

The other day I ran my test suite and discovered that nine of my tests were suddenly and mysteriously broken. These tests were all related to the same feature: a physician search field that hits something called the NPPES NPI Registry, a public API.

My first thought was: is something wrong with the API? I visited the NPI Registry site and, sure enough, I got a 500 error when I clicked the link to their API URL. The API appeared to be broken. This was a Sunday so I didn’t necessarily expect the NPI API maintainers to fix it speedily. I decided to just ignore these nine specific tests for the time being.

By mid-Monday, to my surprise, my tests were still broken and the NPI API was still giving a 500 error. I would have expected them to have fixed it by then.

The surprising root cause of the bug

On Tuesday the API was still broken. On Wednesday the API was still broken. At this point I thought okay, maybe it’s not the case that the NPI API has just been down for four straight days with the NPI API maintainers not doing anything about it. Maybe the problem lies on my end. I decided to do some investigation.

Sure enough, the problem was on my end. Kind of. Evidently the NPI API maintainers had made a small tweak to how the API works. In addition to the fields I had already been sending like first_name and last_name, the NPI API now required an additional field, address_purpose. Without address_purpose, the NPI API would return a 500 error. I added address_purpose to my request and all my tests started passing again. And, of course, my application’s feature started working again.
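
To make the change concrete, here’s a rough sketch of the kind of request involved. The URL, the parameter values and the client code are illustrative; the only details taken from the story above are the first_name, last_name and address_purpose fields:

    require 'net/http'
    require 'json'

    # Hypothetical physician search against the NPI registry.
    # address_purpose is the field the API started requiring.
    uri = URI('https://npiregistry.cms.hhs.gov/api/')
    uri.query = URI.encode_www_form(
      first_name: 'Jane',
      last_name: 'Doe',
      address_purpose: 'LOCATION'
    )

    response = Net::HTTP.get_response(uri)
    physicians = JSON.parse(response.body)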

The lesson: my tests were right to hit the real API

When I first discovered that nine of my tests were failing due to a broken external API, my first thought was, “Man, maybe I should mock out those API calls so my tests don’t fail when the API breaks.” But then I thought about it a little harder. The fact that the third-party API was broken meant my application was broken. The tests were failing because they ought to have been failing.

It’s been my observation that developers new to testing often develop the belief that “third-party API calls should be mocked out”. I don’t think this is always true. I think the decision whether to mock third-party APIs should be made on a case-by-case basis depending on certain factors. These factors include, but aren’t limited to:

  • Would allowing my tests to hit the real API mess up production data?
  • Is the API rate-limited?
  • Is the API prone to sluggishness or outages?

(Thanks to Kostis Kapelonis, who I interviewed on The Ruby Testing Podcast, for helping me think about API interaction this way.)

If hitting the real API poses no danger to production data, I would probably be more inclined to let my tests hit the real API than to mock out the responses. (Or, to be more precise, I would want my test suite to include some integration tests that hit the real API, even if I might have some other API-related tests that mock out API responses for economy of speed.) My application is only really working if the API is working too.