Category Archives: Programming

Mystified by RSpec’s DSL? Some parentheses can add clarity

Many developers have a hard time wrapping their heads around RSpec’s DSL syntax. For a long time, RSpec’s syntax was a mystery to me.

Then one day I realized that RSpec’s syntax is just methods and blocks. There’s not some crazy ass metaprogramming going on that’s beyond my abilities to comprehend. Everything in RSpec I’ve ever used can be (as far as I can tell) boiled down to methods and blocks.

I’ll give a concrete example to aid understanding.

Below is an RSpec test I pulled from a project of mine, Mississippi.com, which I’ve live-coded in my Sausage Factory videos.

require 'rails_helper'

RSpec.describe Order, type: :model do
  subject { build(:order) }

  describe 'validations' do
    it { should validate_presence_of(:customer) }
  end

  describe '#total_cents' do
    it 'returns the total amount for the order' do
      order = create(
        :order,
        line_items: [
          create(:line_item, total_amount_cents: 5000),
          create(:line_item, total_amount_cents: 2500)
        ]
      )

      expect(order.total_cents).to eq(7500)
    end
  end
end

Now here’s the same test with all the optional parentheses included instead of left out. What might not have been obvious to you before, but is made clear by the addition of parentheses, is that describe and it are both just methods.

Like any Ruby method, it and describe are able to accept a block, which of course they always do. (If you don’t have a super firm grasp on blocks yet, I might suggest reading up on them and then writing some of your own methods which take blocks. I went through this exercise myself recently and found it illuminating.)
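To make this concrete, here’s a from-scratch imitation of describe and it (my own sketch, not RSpec’s actual implementation). Each one is just a method that takes a string and a block:

```ruby
# A toy imitation of RSpec's describe/it, showing that they're just
# methods that accept a description and a block.
def describe(description, &block)
  puts description
  block.call # run the block that was passed in
end

def it(description, &block)
  puts "  #{description}"
  block.call
end

describe('#total_cents') do
  it('returns the total amount for the order') do
    # expectations would go here
  end
end
```

Running this prints the two descriptions, indented the same way RSpec’s documentation formatter would print them.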

In addition to putting in all optional parentheses, I changed every block from brace syntax to do/end syntax. I think this makes it more clear when we’re dealing with a block versus a hash.

require('rails_helper')

RSpec.describe(Order, { type: :model }) do
  subject do
    build(:order)
  end

  describe('validations') do
    it do
      should(validate_presence_of(:customer))
    end
  end

  describe('#total_cents') do
    it('returns the total amount for the order') do
      order = create(
        :order,
        line_items: [
          create(:line_item, total_amount_cents: 5000),
          create(:line_item, total_amount_cents: 2500)
        ]
      )

      expect(order.total_cents).to(eq(7500))
    end
  end
end

The latter version is runnable just like the former (I checked!) because although I changed the syntax, I haven’t changed any of the functionality.

I hope seeing these two different versions of the same test is as eye-opening for you as it was for me.

Lastly, to be clear, I’m not suggesting that you permanently leave extra parentheses in your RSpec tests. I’m only suggesting that you temporarily add parentheses as a learning exercise.

How to test Ruby methods that involve puts or gets

I recently saw a post on Reddit where the OP asked how to test a method which involved puts and gets. The example the OP posted looked like the following (which I’ve edited very slightly for clarity):

class Example
  def ask_for_number
    puts "Input an integer 5 or above"
    loop do
      input = gets.to_i
      return true if input >= 5
      puts "Invalid. Try again:"
    end
  end
end

What makes the ask_for_number method challenging to test is a dependency. Most methods can be tested by saying, “When I pass in argument X, I expect return value Y.” This one isn’t so straightforward though. This is more like “When the user sees output X and then enters value V, expect subsequent output O.”

Instead of accepting arguments, this method gets its value from user input. And instead of necessarily returning a value, this method sometimes simply outputs more text.

How can we give this method the values it needs, and how can we observe the way the method behaves when we give it these values?

A solution using dependency injection (thanks to Myron Marston)

Originally, I had written my own solution to this problem, but then Myron Marston, co-author of Effective Testing with RSpec 3, supplied an answer of his own which was a lot better than mine. Here it is.

I comment on this solution some more below, but you can see in the initialize method that the input/output dependencies are being injected into the class. By default, we use $stdin/$stdout, and under test, we use something else for easier testability.

class Example
  def initialize(input: $stdin, output: $stdout)
    @input = input
    @output = output
  end

  def ask_for_number
    @output.puts "Input an integer 5 or above"
    loop do
      input = @input.gets.to_i
      return true if input >= 5
      @output.puts "Invalid. Try again:"
    end
  end
end

require 'stringio'

RSpec.describe Example do
  context 'with input greater than 5' do
    it 'asks for input only once' do
      output = ask_for_number_with_input(6)

      expect(output).to eq "Input an integer 5 or above\n"
    end
  end

  context 'with input equal to 5' do
    it 'asks for input only once' do
      output = ask_for_number_with_input(5)

      expect(output).to eq "Input an integer 5 or above\n"
    end
  end

  context 'with input less than 5' do
    it 'asks repeatedly, until a number 5 or greater is provided' do
      output = ask_for_number_with_input(2, 3, 6)

      expect(output).to eq <<~OUTPUT
        Input an integer 5 or above
        Invalid. Try again:
        Invalid. Try again:
      OUTPUT
    end
  end

  def ask_for_number_with_input(*input_numbers)
    input = StringIO.new(input_numbers.join("\n") + "\n")
    output = StringIO.new

    example = Example.new(input: input, output: output)
    expect(example.ask_for_number).to be true

    output.string
  end
end

Under test, an instance of StringIO can be used instead of $stdout, thus making the messages sent to @output visible and testable. That’s how we can “see inside” the Example class and test what at first glance appears to be a difficult-to-test piece of code.

Examples of pointless types of RSpec tests

A reader of mine recently shared with me a GitHub gist called rspec_model_testing_template.rb. He also said to me, “I would like your opinion on the value of the different tests that are specified in the gist. Which ones are necessary and which ones aren’t?”

In this post I’d like to point out which types of RSpec tests I think are pointless to write.

Testing the presence of associations

Here are some examples from the above gist of association tests:

it { expect(profile).to belong_to(:user) }
it { expect(user).to have_one(:profile) }
it { expect(classroom).to have_many(:students) }
it { expect(gallery).to accept_nested_attributes_for(:paintings) }

Unless I’m crazy, these sorts of tests don’t actually do anything. A test like it { expect(classroom).to have_many(:students) } verifies that classroom.rb contains the code has_many :students but that’s all the value the test provides.

I’ve heard these tests referred to as “tautological tests”. I had to look up that word when I first heard it, so here’s a definition for your convenience: “Tautology is useless restatement, or saying the same thing twice using different words.” That’s exactly what these tests are: a useless restatement.

What does it mean for a classroom to have many students and what sorts of capabilities does the existence of that association give us? Whatever those capabilities are is what we should be testing. For example, maybe we want to calculate the average student GPA per classroom. The ability to do classroom.average_student_gpa would be a valuable thing to write a test for.

If we test the behaviors that has_many :students enables, then we don’t have to directly test the :students association at all because we’re already indirectly testing the association by testing the behaviors that depend on it. For example, our test for the classroom.average_student_gpa method would break if the line has_many :students were taken away.

How do you know the difference between a tautological test and a genuinely valuable test? Here’s an analogy. What would you do if you wanted to test the brakes on your bike? Would you visually verify that your bike has brake components attached to it, and therefore logically conclude that your bike has working brakes? No, because that conclusion is logically invalid. The way to test your brakes is to actually try to use your brakes to make your bike stop. In other words, you wouldn’t test for the presence of brakes, you would test for the capability that your brakes enable.
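To illustrate, here’s what testing the behavior rather than the association might look like, sketched with plain Ruby objects standing in for the hypothetical Classroom and Student ActiveRecord models (average_student_gpa is an invented method):

```ruby
# Plain-Ruby stand-ins for the hypothetical Classroom/Student models.
Student = Struct.new(:gpa)

class Classroom
  def initialize(students)
    @students = students
  end

  # The capability that has_many :students enables -- this, not the
  # association itself, is the thing worth testing. This test would
  # break if the association were taken away.
  def average_student_gpa
    @students.sum(&:gpa) / @students.size.to_f
  end
end

classroom = Classroom.new([Student.new(3.0), Student.new(4.0)])
classroom.average_student_gpa # => 3.5
```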

Testing that a model responds to certain methods

it { expect(factory_instance).to respond_to(:public_method_name) }

There’s negligible value in simply testing that a model responds to a method. Better to test that that method does the right thing.
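For example, instead of asserting that a user responds to full_name, assert what full_name actually returns. Here’s the idea sketched with a plain Ruby object (User and full_name are hypothetical):

```ruby
# A plain-Ruby sketch; in a real app User would be a model.
User = Struct.new(:first_name, :last_name) do
  def full_name
    "#{first_name} #{last_name}"
  end
end

user = User.new('Jane', 'Doe')
user.respond_to?(:full_name) # => true, but proves almost nothing
user.full_name               # => "Jane Doe" -- the actual behavior
```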

Testing for the presence of callbacks

it { expect(user).to callback(:calculate_some_metrics).after(:save) }
it { expect(user).to callback(:track_new_user_signup).after(:create) }

Don’t verify that the callback got called. Verify that you got the result you expected the callback to produce.
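In other words, a test for the second example above would assert the observable effect of track_new_user_signup rather than the callback wiring. Here’s the idea as a plain-Ruby sketch (the class internals here are invented for illustration):

```ruby
class User
  attr_reader :signup_tracked

  def save
    # Imagine ActiveRecord running after_create callbacks here.
    track_new_user_signup
    true
  end

  private

  def track_new_user_signup
    @signup_tracked = true
  end
end

user = User.new
user.save
user.signup_tracked # => true -- the result the callback produces
```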

Testing for database columns and indexes

it { expect(user).to have_db_column(:political_stance).of_type(:string).with_options(default: 'undecided', null: false) }
it { expect(user).to have_db_index(:email).unique(true) }

I had actually never seen this before and didn’t know you could do it. I find it pointless for the same exact reasons that testing associations is pointless. Don’t test that the database has a particular column, test that the feature that uses that column works. Don’t test that the database has a uniqueness index, test that you get a graceful error message if you try to create a duplicate.
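For instance, the uniqueness index’s real job is to prevent duplicates, so test that creating a duplicate is handled gracefully. Here’s the idea as a plain-Ruby sketch (UserStore is invented; a real test would go through the model or the UI):

```ruby
class UserStore
  def initialize
    @emails = []
  end

  # Returns a friendly message instead of letting a raw database
  # uniqueness error bubble up -- the behavior worth testing.
  def create(email)
    return 'Email has already been taken' if @emails.include?(email)

    @emails << email
    'User created'
  end
end

store = UserStore.new
store.create('jane@example.com') # => "User created"
store.create('jane@example.com') # => "Email has already been taken"
```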

For better tests, test behavior, not implementation

In order to write tests that are actually valuable, test behavior, not implementation. For example, rather than testing an association, test the behavior that the association enables.

If you’re new to testing and would like some better guidance on how to write valuable tests, I might suggest my model spec tutorial, my RSpec/Capybara hello world, or my book, The Complete Guide to Rails Testing.

How I test JavaScript-heavy Rails applications

A common question I get is how to test JavaScript in Rails applications. My approach is almost radically simple and unsophisticated.

My Rails + JavaScript testing approach

I think of the fact that the application uses JavaScript like an inconsequential and irrelevant implementation detail. I test JavaScript-heavy applications using just RSpec + Capybara integration tests, the same exact way I’d test an application that has very little JavaScript or no JavaScript at all.

I don’t really have anything more to say about it since I literally don’t do anything different from my regular RSpec + Capybara tests.

Single-page applications

What about single-page applications? I still use the same approach. When I used to build Angular + Rails SPAs, I would add a before(:all) RSpec hook that would kick off a build of my Angular application before the test suite ran. After that point my RSpec + Capybara tests could interact with my SPA just as easily as if the application were a “traditional” Rails application.
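Such a hook might look something like the following (sketched here with before(:suite), which runs once for the whole suite; the build command is hypothetical and depends on your front-end tooling):

```ruby
# spec/rails_helper.rb (sketch)
RSpec.configure do |config|
  config.before(:suite) do
    # Build the front-end app once so Capybara can interact with it.
    system('npm run build') or raise 'Front-end build failed'
  end
end
```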

I’ve tried testing single-page applications using tools like Protractor or Cypress and I don’t like it. It’s awkward and cumbersome to try to drive the Rails app from that end. How do you spin up test data? In my experience, it’s very tedious. Much easier to drive testing from the Rails end and treat the client-side JavaScript application as an implementation detail.

Side note/rant: despite the popularity of single-page applications, “traditional” Rails applications are 100% fine. Using Rails with React/Vue/Angular/etc. isn’t “modern” and using Rails without any of those isn’t “outdated”. For most regular old boring business applications, Rails by itself without a front-end framework is not only a sufficient approach but a superior approach to an SPA because the complexity of development with plain Rails and only “JavaScript sprinkles” tends to be far lower than Rails with a JavaScript framework.

Testing JavaScript directly

Despite my typical approach of treating JavaScript as a detail, there are times when I want to have a little tighter control and test my JavaScript directly. In those cases I use Jasmine to test my JavaScript.

But it’s my goal to use such little JavaScript that I never get above that threshold of complexity where I feel the need to test my JavaScript directly with Jasmine. I’ve found that if I really try, I can get away with very little JavaScript in most applications without sacrificing any UI richness.

A repeatable, step-by-step process for writing Rails integration tests with Capybara

Many Rails developers who are new to writing tests struggle with the question of what to write tests for and how.

I’m about to share with you a repeatable formula that you can use to write an integration test for almost any Rails feature. It’s nothing particularly profound, it won’t result in 100% test coverage, and it won’t work in all cases, but it will certainly get you started if you’re stuck.

The three integration test cases I write for any Rails CRUD feature

Most features in most Rails applications are, for better or worse, some slight variation on CRUD operations. Here are the tests I almost always write for every CRUD feature.

  1. Creating a record with valid inputs
  2. Trying to create a record with invalid inputs
  3. Updating a record

Let me go into detail on each one of these. For each test case, let’s imagine we’re working with a resource called Location.

Creating a record with valid inputs

This test case will involve the following steps:

  1. If necessary, sign a user in (using login_as(create(:user)) as I describe in this post)
  2. Visit the “new” route for the resource (e.g. visit new_location_path)
  3. Fill out the form fields using fill_in, select, etc.
  4. Click the “submit” button
  5. Expect that the page has whatever content the resource’s index page has, e.g. expect(page).to have_content('Locations')

Here’s what that test might look like:

require 'rails_helper'

RSpec.describe 'Creating a location', type: :system do
  before do
    login_as(create(:user))
    create(:state, name: 'Michigan')
    visit new_location_path
  end

  scenario 'valid inputs' do
    fill_in 'Name', with: "Jason's House"
    fill_in 'Line 1', with: '69420 Cool Ave'
    fill_in 'City', with: 'Sand Lake'
    select 'Michigan', from: 'State'
    fill_in 'Zip code', with: '49343'
    click_on 'Save Location'

    expect(page).to have_content('Locations')
  end
end

Trying to create a record with invalid inputs

This test is pretty simple because all I have to do is not fill out the form. So the steps are:

  1. If necessary, sign a user in (using login_as(create(:user)))
  2. Visit the “new” route for the resource (e.g. visit new_location_path)
  3. Click the “submit” button
  4. Expect that the page shows an error (e.g. expect(page).to have_content("Name can't be blank"))

Here’s what a test of this type might look like.

require 'rails_helper'

RSpec.describe 'Creating a location', type: :system do
  before do
    login_as(create(:user))
    create(:state, name: 'Michigan')
    visit new_location_path
  end

  scenario 'invalid inputs' do
    click_on 'Save Location'
    expect(page).to have_content("Name can't be blank")
  end
end

Updating a record

The steps for this one go:

  1. If necessary, sign a user in (using login_as(create(:user)))
  2. Create an instance of the resource I’m testing (e.g. location = create(:location))
  3. Visit the “edit” route for the resource (e.g. visit edit_location_path(location))
  4. Fill in just one field with a different value
  5. Click the “submit” button
  6. Expect that the page has whatever content the resource’s index page has, e.g. expect(page).to have_content('Locations')

require 'rails_helper'

RSpec.describe 'Updating a location', type: :system do
  before do
    login_as(create(:user))
    location = create(:location)
    visit edit_location_path(location)
  end

  scenario 'valid inputs' do
    fill_in 'Name', with: "Jason's Filthy Shack"
    click_on 'Save Location'
    expect(page).to have_content('Locations')
  end
end

“But is that enough? Don’t I need more?”

No, this is not “enough”, and yes, you do need more…eventually. What I’m hoping to share with you here is a base that you can start with if you’re totally lost. Writing halfway-good tests is better than writing no tests, and even writing crappy tests is better than writing no tests. Once you get some practice following the formula above, you can expand on that formula to get a better level of test coverage.

Where to go next

If you get some practice with writing tests like the above and you want to go further, you might like my RSpec/Capybara integration test tutorial.

The difference between RSpec, Capybara and Cucumber

If you’re new to Rails testing you’ve probably come across the terms RSpec, Capybara and Cucumber.

All three are testing tools. What are they for? Do you need all of them? Here are some answers.

RSpec

RSpec is a testing framework. It’s what allows you to write and run your tests.

An analogous tool would be MiniTest. In my experience, most commercial Rails projects use RSpec and most open-source Ruby projects use MiniTest. At any Rails job you’re more likely to be using RSpec than MiniTest. (I’m not sure why this is the way it is.)

Capybara

Some Rails tests operate at a “low level”, meaning no browser interaction is involved. Other “high level” tests do actually spin up a browser and click links, fill out form fields, etc.

Low-level tests can be executed with just RSpec and nothing more. But for tests that use the browser, something more is needed.

This is where Capybara comes into the picture. Capybara provides helper methods like fill_in to fill in a form field, click_on to click a button, etc.

Please note that Capybara does NOT have to be used in conjunction with Cucumber. It’s completely possible to write integration tests in Rails with just RSpec and Capybara.

Cucumber

Cucumber is a tool for writing test cases in something close to English. Here’s an example from Wikipedia:

Scenario: Eric wants to withdraw money from his bank account at an ATM
    Given Eric has a valid Credit or Debit card
    And his account balance is $100
    When he inserts his card
    And withdraws $45
    Then the ATM should return $45
    And his account balance is $55

Cucumber can be connected with RSpec and Capybara and used to write integration tests.

My personal take on Cucumber is that while the English-like syntax might appear clearer at first glance, it’s actually less clear than bare RSpec/Capybara syntax. (Would a Ruby class be more understandable if it were English instead of Ruby?)

Cucumber adds both a layer of mental overhead and a layer of maintenance overhead on top of the RSpec + Capybara combination. I always try as hard as I can to steer new testers away from Cucumber.

“So, what should I use to test my Rails apps?”

My advice is to use the combination of RSpec + Capybara and forget about Cucumber. What if you don’t know where to start with writing RSpec/Capybara tests? If that’s the case, you might like to check out my guide to RSpec + Capybara testing, which includes a tutorial.

Stuck on a programming problem? These tactics will get you unstuck most of the time

Being stuck is possibly the worst state to be in when programming. And the longer you allow yourself to stay stuck, the harder it is to finally get unstuck. That’s why I find it important to try never to allow myself to get stuck. And in fact, I very rarely do get stuck for any meaningful length of time. Here are my favorite tactics for getting unstuck.

Articulate the problem to yourself

I teach corporate training classes. Often, my students get stuck. When they’re stuck, it’s usually because the problem they’re trying to solve is ill-defined. It’s not possible to solve a problem or achieve a goal when you don’t even know what the problem or goal is.

So when you get stuck, a good first question to ask is: do I even know exactly what the problem is that I’m trying to solve? I recommend going so far as to write it down.

If it’s hard to articulate your goal, there’s a good chance your goal is too large to be tractable. Large goals can’t be worked on directly. For example, if you say your goal is “get to the moon and back”, that goal is too big. What you have to articulate instead are things like figure out how to get into space, figure out how to land a spaceship on the moon, figure out how to build a spacesuit that lets people hang out in space for a while, etc.

Just try something

Far too often I see programmers stare at their code and reason about what would happen if they changed it in such-and-such a way. They run intricate thought experiments so they can, presumably, fix all the code in one go and arrive at the complete solution the very next time they try to run the program. Nine times out of ten, they make the change that they think will solve the problem and then discover they’re wrong.

A better way to move forward is to just try something. The cost of trying is very low compared to the cost of reasoning. Also, trying has the added bonus of supplying empirical data instead of untested hypotheses.

Articulate the problem to someone else

“Rubber duck debugging” is the famous tactic of explaining your problem to a rubber duck on your desk. The phenomenon is that even though the duck is incapable of supplying any advice, the very act of explaining the problem out loud leads you to realize the solution to the problem.

Ironically, most of the “rubber duck” debugging I’ve done in my career has involved explaining my issue to a sentient human being. The result is often the classic scenario where you spend ten minutes explaining the details of a problem to a co-worker, realize the way forward, and then thank your co-worker for his or her help, all without your co-worker uttering a word.

The explanation doesn’t have to be verbal in order for this tactic to work. Typing out the problem can be perfectly effective. This leads me to my next tactic.

Put up a forum question

In my experience many programmers seem to be hesitant to write their own forum questions. I’m not sure why this is, although I have some ideas. Maybe they feel like posting forum questions is for noobs. Maybe they think it will slow them down. Neither of these things is true, though. I’ve been programming for over 20 years and I still post Stack Overflow questions somewhat regularly. And rather than slowing me down, posting a forum question usually speeds me up, not least because the very act of typing out the forum question usually leads me to realize the answer on my own (and usually about 45 seconds after I post the question).

If you don’t know a good way, try a bad way

Theodore Roosevelt is credited with having said, “In any moment of decision, the best thing you can do is the right thing, the next best thing is the wrong thing, and the worst thing you can do is nothing.” I tend to agree.

Often I’m faced with a programming task that I don’t know how to complete in an elegant way. So, in these situations where I can’t think of a good solution, I do what I can, which is that I just move forward with a bad or stupid solution.

What often happens is that after I put my dumb solution in place, I’ll take a step back, look at what I’ve done, and a better solution will make itself fairly obvious to me. Or at least, a nice solution will be easier for me to think of at this stage than it would have been from the outset.

With programming and in general, people are much better at looking at a bad thing and saying how it could be made good than they are at coming up with a good thing out of thin air.

If you go down a bad path and find that you’ve completely painted yourself into a corner, it’s no big deal. If you’re using version control and using atomic commits, you can just revert back to your last good state. And you can start on a second attempt now that you’re a little older and wiser.

Study the hell out of the problem area

There’s a thought experiment I like to run in my head sometimes.

Let’s say I’m working on a Rails project and I can’t quite figure out how to get a certain form to behave how I want it to behave. I try several things, I check the docs, but I just can’t get it to work how I want.

In these situations I like to ask myself: “If I knew everything there was to know about this topic, would I be stuck?”

And the answer is of course not. The reason why I don’t understand how to get my form to work is exactly that—I don’t understand how to get my form to work. So what I need to do is set to work on gaining that understanding.

In an ideal world, a programmer could instantly pinpoint the exact piece of knowledge he or she is missing and then go find that piece of knowledge. And in fact, we’re very fortunate to live in a world where that’s possible. But it’s not possible 100% of the time.

In those cases I set down my code and say okay, universe, I guess this is how it’s gonna be. I’ll go to the relevant educational resource—in the case of this example perhaps the official Rails documentation on forms—and start reading it top to bottom. Again, if I knew everything about Rails forms then I wouldn’t have a problem, so I’ll set out on a journey to learn everything. Luckily, my level of understanding usually becomes sufficient to squash the problem long before I read everything there is to read about it.

Write some tests

One of the reasons I find tests to be a useful development or debugging tool is that tests help save me some mental juggling. When developing or debugging there are two jobs that need to be done: 1) write the code that makes the program behave as desired and 2) verify that the desired behavior is present.

If I write code without tests, I’m at risk of mentally mixing these two jobs and getting muddled. On the other hand, if I write a test for my desired behavior, I’m now free to completely forget about the verification step and focus fully on coding the solution. The fewer balls I have to mentally juggle, the faster I can work.

Take a break

One of my favorite Pink Floyd songs is “See Emily Play”. In the first chorus they sing “There is no other day / Let’s try it another way.” In a later chorus they sing “There is no other way / Let’s try it another day.” This is great debugging advice.

I often find that if I allow myself to forget about whatever problem I’m working on and go for a walk or something, my subconscious mind will set to work on the problem and surface the solution later. Sometimes this happens 15 minutes later. Sometimes it doesn’t happen until a day or a week later. In any case, it’s often effective.

Work on something else

If all else fails, you can always work on something else. This might feel like a failure, but it’s only a failure if you decide to give up on the original problem permanently. Setting aside one goal in exchange for making progress on a different goal is infinitely better than allowing yourself to stay stuck.

Logging the user in before Capybara feature specs/system specs

Logging in

If you’re building an app that has user login functionality, you’ll at some point need to write some tests that log the user in before performing the test steps.

One approach I’ve seen is to have Capybara actually navigate to the sign in page, fill_in the email and password fields, and hit submit. This works but, if you’re using Devise, it’s a little more complicated than necessary. There’s a better way.

The Devise docs have a pretty good solution. Just add this to spec/rails_helper.rb:

RSpec.configure do |config|
  config.include Warden::Test::Helpers
end

Then, any place you need to log in, you can do login_as(FactoryBot.create(:user)).

Log in before each test or log in once before all tests?

One of my readers, Justin K, wrote me with the following question:

If you use Capybara to do system tests, how do you handle authentication? Do you do the login step on each test or do you log in once and then just try and run all of your system level tests?

The answer is that I log in individually for each test.

The reason is that some of my tests require the user to be logged in, some of my tests require that the user is not logged in, and other tests require that some specific type of user (e.g. an admin user) is logged in.

I do often put my login_as call in a before block at the top of a test file to reduce duplication, but that doesn’t mean the login code runs only once for that file. A common misconception is that a before block gives a performance benefit by running its code only once. This is not the case: before is shorthand for before(:each), and any code inside it gets run before each individual it block. I never use before(:all) inside of individual tests because I want each test case to be as isolated as possible.
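You can see this per-example behavior with a tiny plain-Ruby imitation of before and it (a sketch of the semantics, not RSpec’s real internals):

```ruby
# A toy example group demonstrating before(:each) semantics: every
# registered before hook runs again for every example.
class FakeExampleGroup
  def initialize
    @before_hooks = []
  end

  def before(&block)
    @before_hooks << block
  end

  def it(&block)
    @before_hooks.each(&:call) # hooks run before EACH example
    block.call
  end
end

group = FakeExampleGroup.new
login_count = 0
group.before { login_count += 1 } # imagine login_as(create(:user))
group.it { }
group.it { }
login_count # => 2 -- the before block ran once per example
```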

All my best programming tips

What follows is a running list of all the programming tips I can think of that are worth sharing.

Many of these tips may seem so obvious they go without saying, but every single one is here because I’ve seen programmers fail to take advantage of them on multiple occasions, even very experienced programmers.

I’ve divided the tips into two sections: development tips and debugging tips. I’ve arranged them in no particular order.

Development tips

Articulate the goal

Before you sit down and start typing, clearly articulate in writing what it is you’re trying to accomplish. This will help you avoid going down random paths which will dilute your productivity. It will also help you work in small, complete units of work.

Break big jobs into small jobs

Some jobs seem so big and nebulous as to be intractable. For example, if I’m tasked with creating a schedule that contains recurring appointments, where do I start with that? It will probably make my job easier if I start with a smaller job, for example, thinking of how exactly to specify and codify a recurrence rule.

Keep everything working all the time

Refactoring is the act of improving the structure of a program without altering the behavior of the program. I’ve seen a lot of occasions where a programmer will attempt to perform a refactoring and fail to preserve the behavior of the code they’re trying to refactor. They end up having to throw away their work and start over fresh.

If in the course of refactoring I break or change some existing functionality, then I somehow have to find my way back to making the code behave as it originally did. This is hard.

Alternatively, if I take care never to let the code stray from its original behavior, I never have to worry about bringing back anything that got altered. This is relatively easy. I just have to work in very small units and regression-test at the end of each small change.

Continuously deliver

Continuous delivery is the practice of always having your program in a deployable state.

Let’s say I start working on a project on January 1st which has a launch date of July 1st. But then, unexpectedly, leadership comes to me on March 1st and says the launch date is now March 10th and we’re going to go live with whatever we have by then.

If I’ve been practicing continuous delivery, launching on March 10th is no big deal. At the end of each week (and the end of each day and perhaps even each hour) I’ve made sure that my application is 100% complete even if it might not be 100% finished.

Work on one thing at a time

It’s better to be all the way done with half your tasks than to be halfway done with all your tasks.

If I’m 100% done with 50% of 8 features, I can deploy four features. If I’m 50% done with 100% of 8 features, I can deploy zero features.

Also, open work costs mental bandwidth. Even if you believe it’s just as fast to jump between tasks as it is to focus on just one thing at a time, it’s more mentally expensive to have 5 balls in the air than just one.

Use automated tests

All code eventually gets tested, it’s just a question of when and how and by whom.

It’s much cheaper and faster for a bug to get caught by an automated test during development than for a feature to get sent to a QA person and then kicked back due to the bug.

Testing helps prevent the introduction of new bugs, helps prevent regressions, helps improve code design, helps enable refactoring, and helps aid the understandability of the codebase by serving as documentation.

Use clear names for things

When a variable name is unclear due to being a misnomer or when it’s unclear due to being abbreviated to obfuscation, it adds unnecessary mental friction.

Don’t abbreviate variable names, with the exception of universally-understood abbreviations (e.g. SSN or PIN) or hyperlocal temp variables (e.g. records.each { |r| puts r }).

A good rule for naming things: call things what they are.
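Here’s a contrived before/after sketch (the method and attribute names are made up for illustration) of how an unabbreviated name that calls things what they are reduces mental friction:

```ruby
# Unclear: abbreviated to the point of obfuscation.
def calc_tot(li)
  li.sum { |i| i[:amt_cents] }
end

# Clear: unabbreviated names that call things what they are.
def total_cents(line_items)
  line_items.sum { |line_item| line_item[:amount_cents] }
end

line_items = [{ amount_cents: 5000 }, { amount_cents: 2500 }]
total_cents(line_items) # => 7500
```

Both methods do the same thing, but only one of them tells you what it does.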

Don’t prematurely optimize

There’s a saying attributed to Kent Beck: “Make it work, then make it right, then make it fast.”

For the vast majority of the work I do, I never even need to get to the “make it fast” step. Unless you’re working on high-visibility features for a high-profile consumer app, performance bottlenecks often won’t emerge until weeks or months after the feature is initially deployed.

Only write a little code at a time

I have a saying that I repeat to myself: “Never underestimate your ability to screw stuff up.” I’ve often been amazed at how frequently a small change that will “definitely” work turns out not to work.

Only work in tiny increments. Test what you’re working on after each change, preferably using automated tests.

Use atomic commits

An atomic commit is a commit that’s only “about” one thing.

Atomic commits are valuable for at least two reasons. One, if you need to roll back a commit because it introduces a bug, you won’t have to also roll back other, unrelated (and perfectly good) code that was mixed into the same commit. Two, atomic commits make it easier to pinpoint the introduction of a bug than non-atomic commits.

A separate but related idea is to use small commits, which is just a specific application of “Only write a little code at a time”.
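As a sketch in a throwaway repo (the file names and commit messages here are made up), splitting two unrelated changes into two atomic commits looks like this:

```shell
# Sketch: a bug fix and an unrelated feature land as two separate,
# atomic commits rather than one mixed commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo 'fixed tax rounding' > tax.rb
echo 'new report page'    > report.rb

# Two unrelated changes exist, so they get two separate commits.
git add tax.rb
git commit -qm 'Fix tax rounding bug'
git add report.rb
git commit -qm 'Add sales report page'

git log --oneline   # two commits, each "about" one thing
```

If the tax fix later turns out to be wrong, it can be reverted without also reverting the (perfectly good) report page.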

Don’t allow yourself to stay stuck

Being stuck is about the worst state to be in. Going down the wrong path is (counterintuitively) more productive than standing still. Going down the wrong path stirs up new thoughts and new information; standing still doesn’t.

If you can’t think of a good way to move forward, move forward in a bad way. You can always revert your work later, especially if you’re using small, frequent, atomic commits.

If you can’t think of a way to keep moving, try to precisely articulate what exactly it is you’re stuck on and why you’re stuck on it. Sometimes that very act helps you get unstuck.

If you think the reason you’re stuck is a lack of some necessary piece of knowledge, do some Google searches. Read some relevant blog posts and book passages. Post some forum questions. Ask some people in some Slack groups (and if you’re not in any technical Slack groups, join some). Go for a walk and then come back and try again. If all else fails, set the problem aside and work on something else instead. But whatever you do, don’t just sit there and think.

Debugging tips

Articulate the problem

If you can precisely articulate what the problem is, you’ve already gone a good ways toward the solution. Conversely, if you can’t articulate exactly what the problem is, your situation is almost hopeless.

Don’t fall into logical fallacies

One of the biggest dangers in debugging is the danger of tricking yourself into thinking you know something you don’t actually know.

If in the course of your investigation you uncover a rock solid truth, write it down on your list of things you know to be true.

If on the other hand you uncover something that seems to be true but you don’t have enough evidence to be 100% sure that it’s true, don’t write it down as a truth. It’s okay to write it down as a hypothesis, but if you confuse hypotheses with facts then you’re liable to get confused and waste some time.

Favor isolation over reason

I know of two ways to identify the cause of a bug. One is that I can study the code and perform experiments and investigations until I’ve pinpointed the line of code where the problem lies. The other method is that I can isolate the bug. The latter is usually many times quicker.

One of my favorite ways to isolate a bug is to use git bisect. Once I’ve found the commit that introduced the bug, it’s often pretty plain to see what part of the commit introduced the bug. If it’s not, I’ll usually negate the offending code by doing a `git revert --no-commit` of the offending commit and then re-introduce the offending code tiny bit by tiny bit until I’ve found the culprit. This methodology almost never fails to work.
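Here’s a self-contained sketch of the git bisect part. Since a real debugging session can’t be reproduced here, this builds a throwaway repo with a deliberately planted bug in the third commit and lets `git bisect run` find that commit automatically:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo 'ok' > status.txt
git add status.txt && git commit -qm 'commit 1: works'
git commit -qm 'commit 2: still works' --allow-empty
echo 'bug' > status.txt
git add status.txt && git commit -qm 'commit 3: introduces the bug'
git commit -qm 'commit 4: unrelated change' --allow-empty

# bad = HEAD, good = the first commit. The script passed to `run`
# exits 0 when the checkout is good and nonzero when it's bad.
git bisect start HEAD HEAD~3
bisect_output=$(git bisect run sh -c 'grep -qx ok status.txt' 2>&1)
git bisect reset

# The output names "commit 3" as the first bad commit.
echo "$bisect_output" | tail -n 6
```

The same mechanics apply to a real bug: give bisect a known-good and known-bad commit, plus a command that exits nonzero when the bug is present, and it binary-searches the history for you.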

Favor googling over thinking

Mental energy is a precious resource. Don’t waste it on problems other people have already solved, especially if solving that particular problem does little or nothing to enhance your skills.

If you run a command and get a cryptic error, don’t sit and squint at it. Copy and paste the error message into Google.

Thinking for yourself of course has its time and place. Understanding things deeply has its time and place too. Exercise good judgment over when you should pause and understand what you’re seeing versus when you should plow through the problem as effortlessly as possible so you can get on with your planned work.

Learn to formulate good search queries for error messages

Let’s say I run a command and get the following error: /Users/jasonswett/.rvm/gems/ruby-2.5.1/gems/rspec-core-3.8.0/lib/rspec/core/reporter.rb:229:in `require': cannot load such file -- rspec/core/profiler (LoadError)

Which part of the error should I google?

If I paste the whole thing into Google, the /Users/jasonswett/.rvm/gems/ruby-2.5.1/gems/rspec-core-3.8.0 part, which is unique to my computer, my Ruby version and my rspec-core version, narrows my search so much that I don’t even get a single result.

At the other extreme, `require': cannot load such file is far too broad. Lots of different error messages could contain that text.

The sensible part of the error message to copy/paste is `require': cannot load such file -- rspec/core/profiler (LoadError). It’s specific enough to have hope of matching other similar problems, but not so specific that it won’t return any results.
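To make the split concrete, here’s a tiny, hypothetical Ruby snippet that trims the machine-specific file-and-line prefix off the error line, leaving exactly the googleable part:

```ruby
# Hypothetical helper: strip the machine-specific file/line prefix from
# a Ruby backtrace line, leaving the part worth googling.
message = "/Users/jasonswett/.rvm/gems/ruby-2.5.1/gems/rspec-core-3.8.0" \
          "/lib/rspec/core/reporter.rb:229:in `require': " \
          "cannot load such file -- rspec/core/profiler (LoadError)"

# The prefix is always "<path>:<line>:in " followed by a space.
query = message.sub(/\A\S+:\d+:in /, '')
puts query
# => `require': cannot load such file -- rspec/core/profiler (LoadError)
```

Everything the regex removes is unique to one machine and one gem version; everything it keeps is shared by anyone who has hit the same error.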

Learn to formulate good search queries in general

I’m not sure how to articulate what makes a good search query but here’s my attempt: a query that’s as short as it can possibly be while still containing all the relevant search terms.

Don’t give too much credence to the error message

Whenever I get an error message, I pretty much never pay attention to what it says, at least not as the first step.

When I get an error message, I look at the line number and file. Then I go to that line number and look to see if there’s anything obviously wrong. Probably about 90% of the time there is.

For the other 10% of the time, I’ll go back to the error message and read it to see if it offers any clues. If it does, I’ll make a change to see if that fixes the problem. If the error is so cryptic that I don’t know what to do, I’ll google the error message.

Make the terminal window big

If I had a dollar for every time I helped a student who was trying to debug a problem using a terminal window the size of a Post-It note, I’d be a rich man. Maximize the window so you can actually see what you’re doing.

Close unneeded tabs

Open tabs cost mental energy. Between the mental clutter of keeping a tab open and the small cost of re-finding the page later, it’s way cheaper just to re-find the page later. On average I myself usually have about two tabs open at any given time.

RSpec/Capybara integration tests: the ultimate guide

What exactly are integration tests?

The many different types of RSpec specs

When I started learning about Rails testing and RSpec I discovered that there are many different kinds of tests. I encountered terms like “model spec”, “controller spec”, “view spec”, “request spec”, “route spec”, “feature spec”, and more. I asked myself: Why do these different types of tests exist? Do I need to use all of them? Are some more important than others?

As I’ve gained experience with Rails testing I’ve come to believe that yes, some types of tests (or, to use the RSpec terminology, “specs”) are more important than other types of specs and no, I don’t need to use all of the types of specs that RSpec offers. Some types of specs I never use at all (e.g. view specs).

There are two types of tests I use way more than any other types of specs: model specs and feature specs. Let’s focus on these two types of specs for a bit. What are model specs and feature specs for, and what’s the difference?

Feature specs and model specs

Speaking loosely, model specs test ActiveRecord models by themselves and feature specs test the whole “stack” including model code, controller code, and any HTML/CSS/JavaScript together. Neither type of spec is better or worse (and it’s not an “either/or” situation anyway), the two types of specs just have different strengths and weaknesses in different scenarios.

The strengths and weaknesses of model specs

An advantage of model specs is that they’re “inexpensive”. Compared to feature specs, model specs are relatively fast to run and relatively fast to write, since it tends to be slower to actually load the pages of an application and navigate among them than to just test model code by itself with no browser involved. A disadvantage of model specs is that they don’t tell you anything about whether your whole application actually works from the user’s perspective. You can understand what I mean if you imagine a Rails model that works perfectly by itself but doesn’t have any working HTML/CSS/JavaScript plugged into it to make the whole application work. In such a case the model specs could pass even though there’s no working feature built on top of the model code.

The strengths and weaknesses of feature specs

Feature specs have basically the opposite pros and cons. Unlike model specs, feature specs do tell you whether all the parts of your application are working together. There’s a cost to this, though, in that feature specs are relatively “expensive”. Feature specs are relatively time-consuming to run because you’re running more stuff than in a model spec. (It’s slower to have to bring up a web browser than to not have to.) Feature specs are also relatively time-consuming to write. The reason feature specs are time-consuming to write is that, unlike a model spec where you can exercise e.g. just one method at a time, a feature spec has to exercise a whole feature at a time. In order to test a feature it’s often necessary to have certain non-trivial conditions set up—if you want to test, for example, the creation of a hair salon appointment you first have to have a salon, a stylist, a client, and maybe more. This necessity makes the feature spec slower to write and slower to run than a model spec where you could test e.g. a method on a `Stylist` class all by itself.

Where integration tests come into the picture, and what integration tests are

If you’re okay with being imprecise, we can say that feature specs and integration tests are roughly the same thing. And I hope you’re okay with being imprecise because when it comes to testing terminology in general there’s very little consensus on what the various testing terms mean, so it’s kind of impossible to be precise all the time. We do have room to be a little more precise in this particular case, though.

In my experience it’s commonly agreed that an integration test is any test that tests two or more parts of an application together. What do we mean when we say “part”? A “part” could be pretty much anything. For example, it has been validly argued that model specs could be considered integration tests because model specs typically test Ruby code and database interaction, not just Ruby code in isolation with no database interaction. But it could also be validly argued that a model spec is not an integration test because a model spec just tests “one” thing, a model, without bringing controllers or HTML pages into the picture. (For our purposes though let’s say model specs are NOT integration tests, which is the basic view held, implicitly or explicitly, by most Rails developers.)

So, while general agreement exists on what an integration test is in a broad sense, there still is a little bit of room for interpretation once you get into the details. It’s kind of like the definition of a sandwich. Most people agree that a sandwich is composed of a piece of food surrounded by two pieces of bread, but there’s not complete consensus on whether e.g. a hamburger is a sandwich. Two different people could have different opinions on the matter and no one could say either is wrong because there’s not a single authoritative definition of the term.

Since feature specs exercise all the layers of an application stack, feature specs fall solidly and uncontroversially into the category of integration tests. This is so true that many developers (including myself) speak loosely and use the terms “feature spec” and “integration test” interchangeably, even though we’re being a little inaccurate by doing so. Inaccurate because all feature specs are integration tests but not all integration tests are feature specs. (For example, a test that exercises an ActiveRecord model interacting with the Stripe API could be considered an integration test even though it’s not a feature spec.)

I hope at this point you have some level of understanding of how feature specs and integration tests are different and where they overlap. Now that we’ve discussed feature specs vs. integration tests, let’s bring some other common testing terms into the picture. What about end-to-end tests and system tests?

What’s the difference between integration tests, acceptance tests, end-to-end tests, system tests, and feature specs?

A useful first step in discussing these four terms may be to name the origin of each term. Is it a Ruby/Rails-specific term or just a general testing term that we happen to use in the Rails world sometimes?

Integration tests: general testing term, not Ruby/Rails-specific
Acceptance tests: general testing term, not Ruby/Rails-specific
End-to-end tests: general testing term, not Ruby/Rails-specific
System tests: Rails-specific, discussed in the official Rails guides
Feature specs: an RSpec concept/term

Integration tests, acceptance tests and end-to-end tests

Before we talk about where system tests and feature specs fit in, let’s discuss integration tests, acceptance tests and end-to-end tests.

Like most testing terms, I’ve heard the terms integration tests, acceptance tests, and end-to-end tests used by different people to mean different things. It’s entirely possible that you could talk to three people and walk away thinking that integration tests, acceptance tests and end-to-end tests mean three different things. It’s also entirely possible that you could talk to three people and walk away thinking all three terms mean the same exact thing.

I’ll tell you how I interpret and use each of these three terms, starting with end-to-end tests.

To me, an end-to-end test is a test that tests all layers of an application stack under conditions that are very similar to production. So in a Rails application, a test would have to exercise the HTML/CSS/JavaScript, the controllers, the models, and the database in order to qualify as an end-to-end test. To contrast end-to-end tests with integration tests, all end-to-end tests are integration tests but not all integration tests are end-to-end tests. I would say that the only difference between the terms “end-to-end test” and “feature spec” is that feature spec is an RSpec-specific term while end-to-end test is a technology-agnostic industry term. I don’t tend to hear the term “end-to-end test” in Rails circles.

Like I said earlier, an integration test is often defined as a test that verifies that two or more parts of an application behave correctly not only in isolation but also when used together. For practical purposes, I usually hear Rails developers say “integration test” when they’re referring to RSpec feature specs.

The purpose of an acceptance test is quite a bit different (in my definition at least) from that of end-to-end tests or integration tests. The purpose of an acceptance test is to answer the question, “Does the implementation of this feature match the requirements of the feature?” As with end-to-end tests, I don’t ever hear Rails developers refer to feature specs as acceptance tests. I have come across this usage, though. One of my favorite testing books, Growing Object-Oriented Software, Guided by Tests, uses the term “acceptance test” to refer to what in RSpec/Rails would be a feature spec. So again, there’s a huge lack of consensus in the industry around testing terminology.

System tests and feature specs

Unlike many distinctions we’ve discussed so far, the difference between system tests and feature specs is happily pretty straightforward: when an RSpec user says integration test, they mean feature spec; when a MiniTest user says integration test, they mean system test. If you use RSpec you can focus on feature specs and ignore system tests. If you use MiniTest you can focus on system tests and ignore feature specs.

My usage of “integration test”

For the purposes of this article I’m going to swim with the current and use the terms “integration test” and “feature spec” synonymously from this point forward. When I say “integration test”, I mean feature spec.

What do I write integration tests for, and how?

We’ve discussed what integration tests are and aren’t. We’ve also touched on some related testing terms. If you’re like many developers getting started with testing, your next question might be: what do I write integration tests for, and how?

This question (what do I write tests for) is probably the most common question I hear from people who are new to testing. It was one of my own main points of confusion when I myself was getting started with testing.

What to write integration tests for

I can tell you what I personally write integration tests for in Rails: just about everything. Virtually every time I build a feature that a user will interact with in a browser, I’m going to write at least one integration test for that feature. Unfortunately this answer of mine, while true, is perhaps not very helpful. When someone asks what they should write tests for, that person is probably somewhat lost and that person is probably looking for some sort of toehold. So here’s what’s hopefully a more helpful answer.

If you’ve never written any sort of integration test (which again I’m sloppily using interchangeably with “feature spec”) then my advice would be to first do an integration test “hello world” just so you can see what’s what and get a tiny bit of practice under your belt. I have another post, titled A Rails testing “hello world” using RSpec and Capybara, that will help you do just that.

What if you’re a little further? What if you’re already comfortable with a “hello world” level of integration testing and you’re more curious about how to add real integration tests to your Rails application?

My advice would be to start with what’s easiest. If you’re working with a legacy project (legacy project here could just be read as “any existing project without a lot of test coverage”), it’s often the case that the structure of the code makes it particularly difficult to add tests. So in these cases you’re dealing with two separate challenges at once: a) you’re trying to learn testing (not easy) and b) you’re trying to overcome the obstacles peculiar to adding tests to legacy code (also not easy). To the extent that you can, I would encourage you to try to separate these two endeavors and just focus on getting practice writing integration tests first.

If you’re trying to add tests to an existing project, maybe you can find some areas of the application where adding integration tests is relatively easy. If there’s an area of your application where the UI simply provides CRUD operations on some sort of resource that has few or no dependencies, then that area might be a good candidate to begin with. A good clue would be to look for how many `has_many`/`belongs_to` calls you find in your various models (in other words, look for how many associations your models have). If you have a model that has a lot of associations, the CRUD interface for that model is probably not going to be all that easy to test because you’re going to have to spin up a lot of associated records in order to get the feature to function. If you can find a model with fewer dependencies, a good first integration test to write might be a test for updating a record.

How to write integration tests

Virtually all the integration tests I write follow the same pattern:

  1. Generate some test data
  2. Log into the application
  3. Visit the page I’m interested in
  4. Perform whatever clicks and typing need to happen in order to exercise the feature I’m testing
  5. Perform an assertion

If you can follow this basic template, you can write integration tests for almost anything. The hardest part is usually the step of generating test data that will help get your program/feature into the right state for testing what you want to test.

Most of the Rails features I write most of the time are pretty much just some variation of a CRUD interface for a Rails resource. So when I’m developing a feature, I’ll usually write an integration test for at least creating and updating the resource, and maybe deleting an instance of that resource. (I’ll pretty much never write a test for simply viewing a list of resources, the “R” in CRUD, because that functionality is usually exercised anyway in the course of testing other behavior.) Sometimes I write my integration tests before I write the application code that makes the tests pass. Usually I generate the CRUD code first using Rails scaffolds and add the tests afterward. To put it another way, I usually don’t write my integration tests in a TDD fashion. In fact, it’s not really possible to use scaffolds and TDD at the same time. In the choice between having the benefits of scaffolds and having the benefits of TDD, I choose having the benefits of scaffolds. (I do, however, practice TDD when writing other types of tests besides integration tests.)

Now that we’ve discussed in English what to write integration tests for and how, let’s continue fleshing out this answer using some actual code examples. The remainder of this article is a Rails integration test tutorial.

Tutorial overview

Project description

In this short tutorial we’re going to create a small Rails application which is covered by a handful of integration tests. The application will be ludicrously small as far as Rails applications go, but a ludicrously small Rails application is all we’re going to need.

Our application will have just a single model, City, which has just a single attribute, name. Since the application has to be called something, we’ll call it “Metropolis”.

Tutorial outline

Here are the steps we’re going to take:

  1. Initialize the application
  2. Create the `city` resource
  3. Write some integration tests for the `city` resource

That’s all! Let’s dive in. The first step will be to set up our Rails project.

Setting up our Rails project

First let’s initialize the application. The `-T` flag means “no test framework”. We need the `-T` flag because, if it’s omitted, Rails will assume we want MiniTest, which we don’t. I’m using the `-d postgresql` flag because I like PostgreSQL, but you can use whatever RDBMS you want.

$ rails new metropolis -T -d postgresql

Next let’s `cd` into the project directory and create the database.

$ cd metropolis
$ rails db:create

With that groundwork out of the way we can add our first resource and start adding our first integration tests.

Writing our integration tests

What we’re going to do in this section is generate the city scaffold, then pull up the city index page in the browser, then write some tests for the city resource.

The tests we’re going to write for the city resource are:

  • Creating a city (with valid inputs)
  • Creating a city (with invalid inputs)
  • Updating a city (with valid inputs)
  • Deleting a city

Creating the city resource

The City resource will have only one attribute, name.

$ rails g scaffold city name:string
$ rails db:migrate

Let’s set our root route to cities#index so Rails has something to show when we visit localhost:3000.

Rails.application.routes.draw do
  resources :cities
  root 'cities#index'
end

If we now run the rails server command and visit http://localhost:3000, we should see the CRUD interface for the City resource.

$ rails server

Integration tests for City

Before we can write our tests we need to install a few gems.

Installing the necessary gems

Let’s add the following to our Gemfile under the :development, :test group.

group :development, :test do
  # The RSpec testing framework
  gem 'rspec-rails'
 
  # Capybara, the library that allows us to interact with the browser using Ruby
  gem 'capybara'
 
  # This gem helps Capybara interact with the web browser.
  gem 'webdrivers'
end

Remember to bundle install.

$ bundle install

In addition to running bundle install which will install the above gems, we also need to install RSpec into our Rails application which is a separate step. The rails g rspec:install command will add a couple RSpec configuration files into our project.

$ rails g rspec:install

The last bit of plumbing work we have to do before we can start writing integration tests is to create a directory where we can put the integration tests. I tend to create a directory called spec/features and this is what I’ve seen others do as well. There’s nothing special about the directory name features. It could be called anything at all and still work.

$ mkdir spec/features

Writing the first integration test (creating a city)

The first integration test we’ll write will be a test for creating a city. The steps will be:

  1. Visit the “new city” page
  2. Fill in the Name field with a city name (Minneapolis)
  3. Click the Create City button
  4. Visit the city index page
  5. Assert that the city we just added, Minneapolis, appears on the page

Here are those five steps translated into code.

  1. visit new_city_path
  2. fill_in 'Name', with: 'Minneapolis'
  3. click_on 'Create City'
  4. visit cities_path
  5. expect(page).to have_content('Minneapolis')

Finally, here are the contents of a file called spec/features/create_city_spec.rb with the full working test code.

# spec/features/create_city_spec.rb

require 'rails_helper'

RSpec.describe 'Creating a city', type: :feature do
  scenario 'valid inputs' do
    visit new_city_path
    fill_in 'Name', with: 'Minneapolis'
    click_on 'Create City'
    visit cities_path
    expect(page).to have_content('Minneapolis')
  end
end

Let’s create the above file and then run the test.

$ rspec spec/features/create_city_spec.rb

The test passes. There’s kind of a problem, though. How can we be sure that the test is actually doing its job? What if we accidentally wrote the test in such a way that it always passes, even if the underlying feature doesn’t work? Accidental “false positives” certainly do happen. It’s a good idea to make sure we don’t fool ourselves.

Verifying that the test actually does its job

How can we verify that a test doesn’t give a false positive? By breaking the feature and verifying that the test no longer passes.

There are two ways to achieve this:

  1. Write the failing test before we write the feature itself (test-driven development)
  2. Write the feature, write the test, then break the feature

Method #1 is not an option for us in this case because we already created the feature using scaffolding. That’s fine though. It’s easy enough to make a small change that breaks the feature.

In our CitiesController, let’s replace the line if @city.save with simply if true. This way the flow through the application will continue as if everything worked, but the city record we’re trying to create won’t actually get created, and so the test should fail when it looks for the new city on the page.

# app/controllers/cities_controller.rb

def create
  @city = City.new(city_params)

  respond_to do |format|
    #if @city.save
    if true
      format.html { redirect_to @city, notice: 'City was successfully created.' }
      format.json { render :show, status: :created, location: @city }
    else
      format.html { render :new }
      format.json { render json: @city.errors, status: :unprocessable_entity }
    end
  end
end

If we run the test again now, it does in fact fail.

F

Failures:

  1) Creating a city valid inputs
     Failure/Error: expect(page).to have_content('Minneapolis')
       expected to find text "Minneapolis" in "Cities\nName\nNew City"
     # ./spec/features/create_city_spec.rb:9:in `block (2 levels) in <top (required)>'

Finished in 0.23035 seconds (files took 0.89359 seconds to load)
1 example, 1 failure

Failed examples:

rspec ./spec/features/create_city_spec.rb:4 # Creating a city valid inputs

Now we can change if true back to if @city.save, knowing that our test really does protect against a regression should this city saving functionality ever break.

We’ve just added a test for attempting (successfully) to create a city when all inputs are valid. Now let’s add a test that verifies that we get the desired behavior when not all inputs are valid.

Integration test for trying to create a city with invalid inputs

In our “valid inputs” case we followed these steps.

  1. Visit the “new city” page
  2. Fill in the Name field with a city name (Minneapolis)
  3. Click the Create City button
  4. Visit the city index page
  5. Assert that the city we just added, Minneapolis, appears on the page

For the invalid case we’ll follow a slightly different set of steps.

  1. Visit the “new city” page
  2. Leave the Name field blank
  3. Click the Create City button
  4. Assert that the page contains an error

Here’s what these steps might look like when translated into code.

  1. visit new_city_path
  2. fill_in 'Name', with: ''
  3. click_on 'Create City'
  4. expect(page).to have_content("Name can't be blank")

A comment about the above steps: step 2 is actually not really necessary. The Name field is blank to begin with. Explicitly setting the Name field to an empty string is superfluous and doesn’t make a bit of difference in how the test actually works. However, I’m including this step just to make it blatantly obvious that we’re submitting a form with an empty Name field. If we were to jump straight from visiting new_city_path to clicking the Create City button, it would probably be less clear what this test is all about.

Here’s the full version of the “invalid inputs” test scenario alongside our original “valid inputs” scenario.

# spec/features/create_city_spec.rb

require 'rails_helper'

RSpec.describe 'Creating a city', type: :feature do
  scenario 'valid inputs' do
    visit new_city_path
    fill_in 'Name', with: 'Minneapolis'
    click_on 'Create City'
    visit cities_path
    expect(page).to have_content('Minneapolis')
  end

  scenario 'invalid inputs' do
    visit new_city_path
    fill_in 'Name', with: ''
    click_on 'Create City'
    expect(page).to have_content("Name can't be blank")
  end
end

Let’s see what we get when we run this test.

$ rspec spec/features/create_city_spec.rb

The test fails.

.F

Failures:

  1) Creating a city invalid inputs
     Failure/Error: expect(page).to have_content("Name can't be blank")
       expected to find text "Name can't be blank" in "City was successfully created.\nName:\nEdit | Back"
     # ./spec/features/create_city_spec.rb:16:in `block (2 levels) in <top (required)>'

Finished in 0.19976 seconds (files took 0.96104 seconds to load)
2 examples, 1 failure

Failed examples:

rspec ./spec/features/create_city_spec.rb:12 # Creating a city invalid inputs

Instead of finding the text Name can't be blank on the page, it found the text City was successfully created. Evidently, Rails happily accepted our blank Name input and created a city with an empty string for a name.

To fix this behavior we can add a presence validator to the name attribute on the City model.

# app/models/city.rb

class City < ApplicationRecord
  validates :name, presence: true
end

The test now passes.
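To see why the validation now makes the test pass, it can help to look at what a presence validation does at the model level. Here's a framework-free sketch in plain Ruby. The PlainCity class below is a hypothetical stand-in, not the app's ActiveRecord City model, and it only approximates the semantics of validates :name, presence: true.

```ruby
# A framework-free approximation of `validates :name, presence: true`.
# PlainCity is hypothetical -- it is not the app's ActiveRecord model.
class PlainCity
  attr_accessor :name

  def initialize(name: nil)
    @name = name
  end

  # Invalid when the name is nil or blank, mirroring Rails' presence check.
  def valid?
    errors.empty?
  end

  def errors
    name.to_s.strip.empty? ? ["Name can't be blank"] : []
  end
end

PlainCity.new(name: 'Minneapolis').valid? # => true
PlainCity.new(name: '').valid?            # => false
PlainCity.new(name: '').errors.first      # => "Name can't be blank"
```

In the real app, Rails renders those error messages on the form page, which is where our expect(page).to have_content("Name can't be blank") assertion finds them.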

Integration test for updating a city

The steps for this test will be:

  1. Create a city in the database
  2. Visit the “edit” page for that city
  3. Fill in the Name field with a different value from what it currently is
  4. Click the Update City button
  5. Visit the city index page
  6. Assert that the page contains the city’s new name

Here are these steps translated into code.

  1. nyc = City.create!(name: 'NYC')
  2. visit edit_city_path(id: nyc.id)
  3. fill_in 'Name', with: 'New York City'
  4. click_on 'Update City'
  5. visit cities_path
  6. expect(page).to have_content('New York City')

Here’s the full test file which we can put at spec/features/update_city_spec.rb.

# spec/features/update_city_spec.rb

require 'rails_helper'

RSpec.describe 'Updating a city', type: :feature do
  scenario 'valid inputs' do
    nyc = City.create!(name: 'NYC')
    visit edit_city_path(id: nyc.id)
    fill_in 'Name', with: 'New York City'
    click_on 'Update City'
    visit cities_path
    expect(page).to have_content('New York City')
  end
end

If we run this test (rspec spec/features/update_city_spec.rb), it will pass. Like before, though, we don’t want to trust this test without seeing it fail once. Otherwise, again, we can’t be sure that the test isn’t giving us a false positive.
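The "see it fail once" idea can be illustrated outside of Rails, too. Here's a toy plain-Ruby sketch, with all names hypothetical (nothing below comes from the app), where a broken flag simulates commenting out the line that persists the record. The point is that a trustworthy test must fail when the behavior it guards is broken.

```ruby
# Toy illustration of the sabotage-and-restore workflow. CityStore is a
# hypothetical stand-in for the controller/model, not the app's code.
class CityStore
  attr_reader :names

  def initialize(broken: false)
    @names = []
    @broken = broken # simulates commenting out the line that saves the record
  end

  def create(name)
    @names << name unless @broken
    'City was successfully created.' # the success notice renders either way
  end
end

# With the real behavior in place, the assertion passes:
store = CityStore.new
store.create('Minneapolis')
raise 'test is broken' unless store.names.include?('Minneapolis')

# With the save sabotaged, the same assertion fails -- which is exactly
# what proves the test depends on the behavior under test:
sabotaged = CityStore.new(broken: true)
sabotaged.create('Minneapolis')
raise 'false positive!' if sabotaged.names.include?('Minneapolis')
```

If the assertion still passed with the save commented out, we'd know the test was checking the wrong thing, just like a feature spec that only asserts on the success notice would.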

Let’s change the line if @city.update(city_params) in CitiesController to if true so that the controller continues on without actually updating the city record. This should make the test fail.

# app/controllers/cities_controller.rb

def update
  respond_to do |format|
    #if @city.update(city_params)
    if true
      format.html { redirect_to @city, notice: 'City was successfully updated.' }
      format.json { render :show, status: :ok, location: @city }
    else
      format.html { render :edit }
      format.json { render json: @city.errors, status: :unprocessable_entity }
    end
  end
end

The test does in fact now fail.

F

Failures:

  1) Updating a city valid inputs
     Failure/Error: expect(page).to have_content('New York City')
       expected to find text "New York City" in "Cities\nName NYC Show Edit Destroy\nNew City"
     # ./spec/features/update_city_spec.rb:10:in `block (2 levels) in <top (required)>'

Finished in 0.17966 seconds (files took 0.89554 seconds to load)
1 example, 1 failure

Failed examples:

rspec ./spec/features/update_city_spec.rb:4 # Updating a city valid inputs

Integration test for deleting a city

This will be the last integration test we write. Here are the steps we’ll follow.

  1. Create a city in the database
  2. Visit the city index page
  3. Assert that the page contains the name of our city
  4. Click the “Destroy” link
  5. Accept the “Are you sure?” alert
  6. Assert that the page no longer contains the name of our city

Translated into code:

  1. City.create!(name: 'NYC')
  2. visit cities_path
  3. expect(page).to have_content('NYC')
  4. click_on 'Destroy'
  5. accept_alert
  6. expect(page).not_to have_content('NYC')

Here’s the full test file, spec/features/delete_city_spec.rb.

# spec/features/delete_city_spec.rb

require 'rails_helper'

RSpec.describe 'Deleting a city', type: :feature do
  scenario 'success' do
    City.create!(name: 'NYC')
    visit cities_path
    expect(page).to have_content('NYC')

    click_on 'Destroy'
    accept_alert
    expect(page).not_to have_content('NYC')
  end
end

If we run this test, it will pass (assuming a JavaScript-capable driver such as Selenium is configured; Capybara's default rack_test driver can't interact with the confirmation dialog that accept_alert handles).

$ rspec spec/features/delete_city_spec.rb

To protect against a false positive and see the test fail, we can comment out the line @city.destroy in CitiesController.

# app/controllers/cities_controller.rb

def destroy
  #@city.destroy
  respond_to do |format|
    format.html { redirect_to cities_url, notice: 'City was successfully destroyed.' }
    format.json { head :no_content }
  end
end

Now the test fails.

F

Failures:

  1) Deleting a city success
     Failure/Error: expect(page).not_to have_content('NYC')
       expected not to find text "NYC" in "City was successfully destroyed.\nCities\nName NYC Show Edit Destroy\nNew City"
     # ./spec/features/delete_city_spec.rb:9:in `block (2 levels) in <top (required)>'

Finished in 0.18866 seconds (files took 0.88234 seconds to load)
1 example, 1 failure

Failed examples:

rspec ./spec/features/delete_city_spec.rb:4 # Deleting a city success

Remember to change that line back so that the test passes again.

At this point we’ve written four test cases:

  • Creating a city (with valid inputs)
  • Creating a city (with invalid inputs)
  • Updating a city (with valid inputs)
  • Deleting a city

Let’s invoke the rspec command to run all four test cases. Everything should pass.

$ rspec
....

Finished in 0.23001 seconds (files took 0.88377 seconds to load)
4 examples, 0 failures

Where to go next

If you want to learn more about writing integration tests in Rails, here are a few recommendations.

First, I recommend good old practice and repetition. If you want to get better at writing integration tests, write a whole bunch of integration tests. I would suggest building a side project of non-trivial size that you maintain over a period of months. Try to write integration tests for all the features in the app. If there’s a feature you can’t figure out how to write a test for, give yourself permission to skip that feature and come back to it later. Alternatively, since it’s just a side project without the time pressures of production work, give yourself permission to spend as much time as you need in order to get a test working for that feature.

Second, I’ll recommend a couple books.

Growing Object-Oriented Software, Guided by Tests. This is one of the first books I read when I was personally getting started with testing. I recommend it often and hear it recommended often by others. It’s not a Ruby book but it’s still highly applicable.

The RSpec Book. If you’re going to be writing tests with RSpec, it seems like a pretty good idea to read the RSpec book. I also did a podcast interview with one of the authors where he and I talk about testing. I’m recommending this book despite the fact that it advocates Cucumber, which I am pretty strongly against.

Effective Testing with RSpec 3. I have not yet picked up this book although I did do a podcast interview with one of the authors.

My last book recommendation is my own book, Rails Testing for Beginners. I think my book offers a pretty good mix of model-level testing and integration-level testing with RSpec and Capybara.