Category Archives: Automated Testing

How I approach test coverage metrics

Different developers have different opinions about test coverage. Some engineering organizations not only measure test coverage but have rules around it. Other developers think test coverage is basically BS and don’t measure it at all.

I’m somewhere in between. I think test coverage is a useful metric but only in a very approximate and limited sort of way.

If I encounter two codebases, one with 10% coverage and another with 90% coverage, I can probably safely conclude that the latter has the healthier test suite. But if the difference is between 90% and 100%, I’m not convinced that means much.

I personally measure test coverage on my projects, but I don’t try to optimize for it. Instead, I make testing a habit and let my habitual coding style be my guiding force instead of the test coverage metrics.

If you’re curious what type of test coverage my normal workflow naturally results in, I just checked the main project I’ve been working on for the last year or so and the coverage level is 96.62%, according to simplecov. I feel good about that number, although more important to me than the test coverage percentage is what it feels like to work with the codebase on a day-to-day basis. Are annoying regressions popping up all the time? Is new code hard to write tests for due to the surrounding code not having been written in an easily testable way? Then the codebase could probably benefit from more tests.
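In case it’s helpful, here’s a minimal sketch of the simplecov setup that produces a number like that. It goes at the very top of spec/spec_helper.rb, before any application code is loaded:

require 'simplecov'

# Must run before any application code is required so that
# coverage gets recorded for every file the suite loads.
SimpleCov.start 'rails'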

My general approach to Rails testing

My development workflow

The code I write is influenced by certain factors upstream of the code itself.

Before I start coding a feature, I like to do everything I can to try to ensure that the user story I’m working on is small and that it’s crisply defined. By small, I mean not more than a day’s worth of work. By crisply defined, I mean that the user story includes a reasonably precise and detailed description of what the scope of that story is.

I also like to put each user story through a reasonably thorough round of scrutiny. In my experience, user stories often aren’t thought through very thoroughly, and they contain a level of ambiguity or inconsistency such that they’re not actually possible to implement as described.

I find that if care is taken to make sure the user stories are high-quality before development work begins, the development work goes dramatically more smoothly.

Assuming I’m starting with good user stories, I’ll grab a story to work on and then break that story into subtasks. Each day I have a to-do list for the day. On that to-do list I’ll put the subtasks for whatever story I’m currently working on. Here’s an example of what that might look like, at least as a first draft:

Feature: As a staff member, I can manage insurance payments

  • Scaffolding for insurance payments
  • Feature spec for creating insurance payment (valid inputs)
  • Feature spec for creating insurance payment (invalid inputs)
  • Feature spec for updating insurance payment

(By the way, I write more about my specific formula for writing feature specs in my post https://www.codewithjason.com/repeatable-step-step-process-writing-rails-integration-tests-capybara/.)

Where my development workflow and my testing workflow meet

Looking at the list above, you can see that my to-do list is expressed mostly in terms of tests. I do it this way because I know that if I write a test for a certain piece of functionality, then I’ll of course have to build the functionality itself as part of the process.

When I use TDD and when I don’t

Whether or not I’ll use TDD on a feature depends largely on whether it’s a whole new CRUD interface or a more minor modification.

If I’m working on a whole CRUD interface, I’ll use scaffolding to generate the code, and then I’ll make a pass over everything and write tests. (Again, I write more about the details of this process here.) The fact that I use scaffolding for my CRUD interfaces makes TDD impossible. This is a trade-off I’m willing to make due to how much work scaffolding saves me and due to the fact that I pretty much never forget to write feature specs for my scaffold-generated code.

It’s also rare that I ever need to write any serious amount of model specs for scaffold-generated code. I usually write tests for my validators using shoulda-matchers, but that’s all. (I often end up writing more model specs later as the model grows in feature richness, but not in the very beginning.)
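To give a concrete idea of what those validator specs look like, here’s a minimal sketch using shoulda-matchers. The InsurancePayment model and its attributes are hypothetical:

require 'rails_helper'

RSpec.describe InsurancePayment, type: :model do
  # shoulda-matchers one-liners covering the model's validations
  it { should validate_presence_of(:amount) }
  it { should validate_presence_of(:paid_on) }
end

In the very beginning, that’s often the whole model spec file.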

If instead of writing a whole new CRUD interface I’m just making a modification to existing code (or fixing a bug), that’s a case where I usually will use TDD. I find that TDD in these cases is typically easier and faster than doing test-after or skipping tests altogether. If for example I need to add a new piece of data to a CSV file my program generates, I’ll go to the relevant test file and add a failing expectation for that new piece of data. Then I’ll go and add the code to put the data in place to make the test pass.
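As a sketch of what that might look like, suppose the CSV generator is a class called OrderExport (a hypothetical name) and the new piece of data is the order’s status. The first step would be a failing expectation along these lines:

require 'rails_helper'
require 'csv'

RSpec.describe OrderExport do
  it 'includes the order status' do
    order = create(:order, status: 'shipped')

    # Parse the generated CSV and check for the new column.
    # This fails until the code that outputs the column exists.
    csv = CSV.parse(OrderExport.new([order]).to_csv, headers: true)
    expect(csv.first['Status']).to eq('shipped')
  end
end

Then I’d write the code that makes the test pass.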

The other case where I usually practice TDD is if the feature I’m writing is not a CRUD-type feature but rather more of a model-based feature, where the interesting work happens “under the hood”. In those cases I also find TDD to be easier and faster than not-TDD.

The kinds of tests I write and the kind I don’t

I write more about this in my book, but I tend to mostly write model specs and feature specs. I find that most other types of tests tend to have very little value. By the way, I’m using the language of “spec” instead of “test” because I use RSpec instead of Minitest, but my high-level approach would be the exact same under any testing framework.

When I use mocks and stubs and when I don’t

I almost never use mocks or stubs in Rails projects. In 8+ years of Rails development, I’ve hardly ever done it.

How I think about test coverage

I care about test coverage enough to measure it, but I don’t care about test coverage enough to set a target for it or impose any kind of rule on myself or anything like that. I’ve found that the natural consequence of me following my normal testing workflow is that I end up with a pretty decent test coverage level. Today I checked the test coverage on the project I’ve been working on for the last year or so and the measurement was 97%.

The test coverage metric isn’t my main barometer for the health of a project’s tests, though. To me it seems much more useful to pay attention to how many exceptions pop up in production and what it feels like to do maintenance coding on the project. “What it feels like to do maintenance coding” is obviously not a very quantifiable metric, but of course not everything that counts can be counted.

How do I make testing a habitual part of my development work?

One of the most common questions asked by Rails developers new to testing is: how do I make testing a habitual part of my development work?

It’s one thing to know how to write tests. It’s another thing to actually write tests consistently as a normal part of your work.

To find out, I conducted a poll among some of my peers in the Rails world to see what keeps them in the habit of writing tests consistently. I also examined my own motivations.

When I drew out the commonalities among the answers, what I came up with was a trifecta not unlike Larry Wall’s three virtues of a great programmer. The trifecta is laziness, fear, and pride. Let’s examine each “virtue” individually.

Laziness

It might sound funny to name laziness as the first motivation for writing tests habitually. After all, tests seem like extra work. Writing tests consistently seems like something that would require discipline. But for me and many of the people who responded to my poll, it’s quite the opposite.

The alternative to automated tests

The alternative to writing tests isn’t just doing nothing. The alternative to writing tests is to perform manual testing, to let your users test your application for you in production, or most likely, a combination of the two. The alternative to writing tests is to suffer great pain and toil.

The laziness factor also extends beyond QA. I personally find that the process of writing features is often easier and more pleasant when I’m writing with the assistance of tests than when I’m not.

Mental energy

Mental energy is a finite, precious resource that (for me at least) starts full in the morning and depletes throughout the day. When I’m working I don’t ever want to use more than the minimum amount of mental exertion necessary to complete a task.

If I write a feature without using tests, I’m often juggling the “deciding what to do” work and the “actually doing it” work at the same time, which carries a cognitive cost more than twice that of performing the two jobs separately, in serial. When I build a feature with the aid of tests, the tests allow me to separate the “deciding what to do” work from the “actually doing it” work.

It works like this. First I capture what to do in the form of a test. Then I follow my own instructions by getting the test to pass. Then I repeat. This is a much lighter cognitive burden than if I were to juggle these different mental jobs and allows me to be productive for longer because I don’t run out of mental energy as early in the day.

Code understandability

It’s more difficult, time-consuming and unpleasant to work with messy code than to work with clear and tidy code.

Being a lazy person, I want nothing to do with difficult, time-consuming and unpleasant work. I want to do work that’s pleasant, quick and easy.

Unfortunately it’s not possible to have clean, understandable code without having automated tests. This might sound like a hyperbolic claim but it’s not. I can prove it based on a chain of truths.

The first truth is that it’s impossible to write a piece of code cleanly on the first try. Some amount of refactoring, typically a lot of refactoring, is necessary in order to get the code into a reasonably good state. This is true on a feature-by-feature basis but it’s especially true on the scale of a whole project codebase.

The second truth is that it’s impossible to do non-trivial refactorings without having automated tests. The feedback cycle is just too long when all the testing is done manually. Either that or the risk of refactoring without testing afterward is just too large to be justified.

So, if it’s impossible to have good code without refactoring, and it’s impossible to do refactoring without tests, then it’s impossible to have good code without tests.

My extreme personal laziness demands that I only write neat and understandable code. Therefore, I have to write tests in order to satisfy my laziness.

Fear

Fear is another powerful impetus for testing. If I don’t write tests for my features, it increases the risk that I release a bug to production. Bugs cause me shame and embarrassment. I don’t want to feel embarrassment or shame.

Bugs may also have negative business consequences to the company I work for. This could negatively affect the company’s ability or willingness to pay me as much as I want.

When laziness doesn’t drive me to write tests, fear often does.

Pride

Lastly there’s pride. (I find Larry Wall’s “hubris” a little too strong a word.)

Sometimes, when I’m tempted not to write a test for a feature, I imagine another developer stumbling across my work in the future and seeing that there are no tests. I imagine myself sheepishly admitting to that developer that I didn’t bother to write tests for that feature. Why didn’t I write tests? No good reason.

Arrogant as I am, this imaginary interaction brings me pain. I really don’t like the idea that somebody else would look at my work and make a (legitimate) negative judgment.

I also want my work to be exemplary. If we hire a junior developer where I work, I want to be able to point to my code and say “This is how we do it.” I don’t know how I would explain why my own test coverage is poor when I want theirs to be good.

Takeaways

I’m not driven to write tests out of discipline. I also don’t consider testing to be “extra” effort but rather an effort-saver.

The main forces that drive me to write tests are laziness, fear and pride. Mostly laziness.

How to write a test when the test implementation isn’t obvious

Why testers often suffer from “writer’s block”

When you write a test, you’re carrying out two jobs:

  1. Deciding what steps the test should carry out
  2. Translating those steps into code

For trivial tests, these two steps are easy enough that both can be done together, and without conscious thought. For non-trivial tests (especially tests that include complex setup steps), doing both these steps at once is too much mental juggling for a feeble human brain, or at least my feeble human brain.

This is when testers experience “writer’s block”.

Luckily, there’s a way out. When the implementation of a test eludes me, I consciously separate the job into the two steps listed above and tackle them one at a time.

How to overcome writer’s block

Let’s say, for example, that I want to write an integration test that tests a checkout process. Maybe I can’t think of all the code and setup steps that are necessary to carry out this test. I probably can, however, think of the steps I would take just as a regular human being manually testing the feature in a web browser. So, in my test file, I might write something like this:

scenario 'create order' do
  # create a product
  # visit the page for that product
  # add that product to my cart
  # visit my cart
  # put in my credit card details, etc.
  # click the purchase button
  # expect the page to have a success message
end

Now I have all the high-level steps listed out. I can tell my brain that it’s relieved of the duty of trying to hold all the test steps in memory. Now my brain’s capacity is freed up to do other jobs. The next job I’ll give it is translating the steps into code.

I’ll start with the first line.

scenario 'create order' do
  electric_dog_polisher = create(:product)
  # visit the page for that product
  # add that product to my cart
  # visit my cart
  # put in my credit card details, etc.
  # click the purchase button
  # expect the page to have a success message
end

That was easy enough. Now I’ll move on to the next line.

scenario 'create order' do
  electric_dog_polisher = create(:product)
  visit product_path(electric_dog_polisher)
  # add that product to my cart
  # visit my cart
  # put in my credit card details, etc.
  # click the purchase button
  # expect the page to have a success message
end

And I’ll go down the comments, line by line, translating each line into Ruby code.

Not every step will be trivially easy. But at the same time, it’s unlikely that any individual line is all that hard. What’s hard is trying to solve 26 different problems at once in your head. The way to make progress is to break those problems down into tiny parts, get those parts out of your head and onto the screen, and then tackle the tiny parts one at a time.
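For completeness, here’s a rough sketch of what the fully translated test might look like. The factory, paths, labels and success message are all hypothetical; yours will differ:

scenario 'create order' do
  electric_dog_polisher = create(:product)
  visit product_path(electric_dog_polisher)
  click_on 'Add to Cart'
  visit cart_path
  fill_in 'Card number', with: '4242424242424242'
  click_on 'Purchase'
  expect(page).to have_content('Thank you for your order')
end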

How do you tell which areas of a project’s test suite need attention?

A client of mine recently asked: “How do you determine, when first opening a project, which areas of testing need attention?”

There are three methods I use for determining what areas of a test suite need attention: 1) looking at the test suite code for “test smells”, 2) asking the team “where it hurts”, and 3) actually working directly with the project.

Looking at the code for “test smells”

Just as there are code smells, there are test smells. The book xUnit Test Patterns: Refactoring Test Code (which I highly recommend) has a list of test smells in the back. Some of these smells are detectable by looking at the code. Some of them can only be found experientially. Here are some that can be discovered by looking at the test code without running it.

Developers Not Writing Tests

This one is obvious. If there are very few tests in the test suite (or if the test suite doesn’t exist at all), then this is a clear opportunity for improvement.

Obscure Test

An Obscure Test is one where it’s hard to understand the meaning of a test just by looking at it, usually because the meaning is obscured by the presence of so many low-level details. These are relatively easy to spot.
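Here’s a small made-up illustration of the smell. The test below is really about an overdue order incurring a late fee, but you’d never know it from reading it, because the intent is buried under incidental details (every model, attribute and magic number here is hypothetical):

it 'test1' do
  c = Customer.create!(name: 'a', email: 'a@example.com', region_code: 14)
  o = Order.create!(customer: c, placed_at: 45.days.ago, status: 2)
  o.line_items.create!(sku: 'X100', quantity: 1, unit_price_cents: 5000)
  expect(o.reload.total_cents).to eq(6000)
end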

Test Code Duplication

Duplication is also pretty easy to spot. It’s also often relatively cheap and straightforward to fix.

Asking the team “where it hurts”

Often a team is well aware of where their testing deficiencies lie. Common problems include:

  • Not enough tests
  • Not enough testing skills on the team
  • Slow tests
  • Flaky tests
  • Application code that’s too hard to test

These issues can be easily uncovered simply by asking.

Actually working directly with the project

This is often the most effective way of telling which areas of the test suite need the most attention, although it’s also the slowest.

One challenge in identifying areas of a test suite that need attention is that not all tests have equal value. The test that tests the contact page is not as valuable as the test that tests the checkout page. So effort spent improving the contact page test might result in a much better test, but the exercise would still be a waste of time.

Another challenge is that not all code is equally testable. Let’s say I have an application with zero tests and no testing infrastructure. Logic might seem to dictate that I should write a test for the checkout page before I write a test for the contact page because the checkout page is much more economically valuable than the contact page.

But if my application lacks testing infrastructure, then the task of writing a test for the checkout page might be a prohibitively large undertaking because it would involve not only writing complex tests but also setting up the underlying test infrastructure. Maybe I’d be better off writing a test for the contact page first just to get a beachhead in place, then gradually working my way up to having tests for the checkout page.

These are things that might be hard to tell just by looking at the code. So when I start work on a new project, I’ll often use all three of these methods—looking at the code, talking with the team, working with the code—to determine which areas of the test suite need attention.

Mystified by RSpec’s DSL? Some parentheses can add clarity

Many developers have a hard time wrapping their heads around RSpec’s DSL syntax. For a long time, RSpec’s syntax was a mystery to me.

Then one day I realized that RSpec’s syntax is just methods and blocks. There’s not some crazy ass metaprogramming going on that’s beyond my abilities to comprehend. Everything in RSpec I’ve ever used can be (as far as I can tell) boiled down to methods and blocks.

I’ll give a concrete example to aid understanding.

Below is an RSpec test I pulled from a project of mine, Mississippi.com, which I’ve live-coded in my Sausage Factory videos.

require 'rails_helper'

RSpec.describe Order, type: :model do
  subject { build(:order) }

  describe 'validations' do
    it { should validate_presence_of(:customer) }
  end

  describe '#total_cents' do
    it 'returns the total amount for the order' do
      order = create(
        :order,
        line_items: [
          create(:line_item, total_amount_cents: 5000),
          create(:line_item, total_amount_cents: 2500)
        ]
      )

      expect(order.total_cents).to eq(7500)
    end
  end
end

Now here’s the same test with all the optional parentheses included instead of left out. What might not have been obvious to you before, but is made clear by the addition of parentheses, is that describe and it are both just methods.

Like any Ruby method, it and describe are able to accept a block, which of course they always do. (If you don’t have a super firm grasp on blocks yet, I might suggest reading up on them and then writing some of your own methods which take blocks. I went through this exercise myself recently and found it illuminating.)

In addition to putting in all optional parentheses, I changed every block from brace syntax to do/end syntax. I think this makes it more clear when we’re dealing with a block versus a hash.

require('rails_helper')

RSpec.describe(Order, { type: :model }) do
  subject do
    build(:order)
  end

  describe('validations') do
    it do
      should(validate_presence_of(:customer))
    end
  end

  describe('#total_cents') do
    it('returns the total amount for the order') do
      order = create(
        :order,
        line_items: [
          create(:line_item, total_amount_cents: 5000),
          create(:line_item, total_amount_cents: 2500)
        ]
      )

      expect(order.total_cents).to(eq(7500))
    end
  end
end

The latter version is runnable just like the former (I checked!) because although I changed the syntax, I haven’t changed any of the functionality.

I hope seeing these two different versions of the same test is as eye-opening for you as it was for me.

Lastly, to be clear, I’m not suggesting that you permanently leave extra parentheses in your RSpec tests. I’m only suggesting that you temporarily add parentheses as a learning exercise.

How to test Ruby methods that involve puts or gets

I recently saw a post on Reddit where the OP asked how to test a method which involved puts and gets. The example the OP posted looked like the following (which I’ve edited very slightly for clarity):

class Example
  def ask_for_number
    puts "Input an integer 5 or above"
    loop do
      input = gets.to_i
      return true if input >= 5
      puts "Invalid. Try again:"
    end
  end
end

What makes the ask_for_number method challenging to test is a dependency. Most methods can be tested by saying, “When I pass in argument X, I expect return value Y.” This one isn’t so straightforward though. This is more like “When the user sees output X and then enters value V, expect subsequent output O.”

Instead of accepting arguments, this method gets its value from user input. And instead of necessarily returning a value, this method sometimes simply outputs more text.

How can we give this method the values it needs, and how can we observe the way the method behaves when we give it these values?

A solution using dependency injection (thanks to Myron Marston)

Originally, I had written my own solution to this problem, but then Myron Marston, co-author of Effective Testing with RSpec 3, supplied an answer of his own which was a lot better than mine. Here it is.

I comment on this solution some more below, but you can see in the initialize method that the input/output dependencies are being injected into the class. By default, we use $stdin/$stdout, and under test, we use something else for easier testability.

class Example
  def initialize(input: $stdin, output: $stdout)
    @input = input
    @output = output
  end

  def ask_for_number
    @output.puts "Input an integer 5 or above"
    loop do
      input = @input.gets.to_i
      return true if input >= 5
      @output.puts "Invalid. Try again:"
    end
  end
end

require 'stringio'

RSpec.describe Example do
  context 'with input greater than 5' do
    it 'asks for input only once' do
      output = ask_for_number_with_input(6)

      expect(output).to eq "Input an integer 5 or above\n"
    end
  end

  context 'with input equal to 5' do
    it 'asks for input only once' do
      output = ask_for_number_with_input(5)

      expect(output).to eq "Input an integer 5 or above\n"
    end
  end

  context 'with input less than 5' do
    it 'asks repeatedly, until a number 5 or greater is provided' do
      output = ask_for_number_with_input(2, 3, 6)

      expect(output).to eq <<~OUTPUT
        Input an integer 5 or above
        Invalid. Try again:
        Invalid. Try again:
      OUTPUT
    end
  end

  def ask_for_number_with_input(*input_numbers)
    input = StringIO.new(input_numbers.join("\n") + "\n")
    output = StringIO.new

    example = Example.new(input: input, output: output)
    expect(example.ask_for_number).to be true

    output.string
  end
end

Under test, an instance of StringIO can be used instead of $stdout, thus making the messages sent to @output visible and testable. That’s how we can “see inside” the Example class and test what at first glance appears to be a difficult-to-test piece of code.

How I test JavaScript-heavy Rails applications

A common question I get is how to test JavaScript in Rails applications. My approach is almost radically simple and unsophisticated.

My Rails + JavaScript testing approach

I treat the fact that the application uses JavaScript as an inconsequential and irrelevant implementation detail. I test JavaScript-heavy applications using just RSpec + Capybara integration tests, the exact same way I’d test an application that has very little JavaScript or no JavaScript at all.

I don’t really have anything more to say about it since I literally don’t do anything different from my regular RSpec + Capybara tests.

Single-page applications

What about single-page applications? I still use the same approach. When I used to build Angular + Rails SPAs, I would add a before(:all) RSpec hook that would kick off a build of my Angular application before the test suite ran. After that point my RSpec + Capybara tests could interact with my SPA just as easily as if the application were a “traditional” Rails application.
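Here’s a minimal sketch of that idea using a suite-level hook. The client/ directory and build command are assumptions about the project layout:

# spec/rails_helper.rb
RSpec.configure do |config|
  config.before(:suite) do
    # Build the front-end once, before any tests run, so that
    # Capybara has compiled assets to interact with.
    system('cd client && npm run build') || raise('front-end build failed')
  end
end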

I’ve tried testing single-page applications using tools like Protractor or Cypress and I don’t like it. It’s awkward and cumbersome to try to drive the Rails app from that end. How do you spin up test data? In my experience, it’s very tedious. Much easier to drive testing from the Rails end and treat the client-side JavaScript application as an implementation detail.

Side note/rant: despite the popularity of single-page applications, “traditional” Rails applications are 100% fine. Using Rails with React/Vue/Angular/etc. isn’t “modern” and using Rails without any of those isn’t “outdated”. For most regular old boring business applications, Rails by itself without a front-end framework is not only a sufficient approach but a superior approach to an SPA because the complexity of development with plain Rails and only “JavaScript sprinkles” tends to be far lower than Rails with a JavaScript framework.

Testing JavaScript directly

Despite my typical approach of treating JavaScript as a detail, there are times when I want to have a little tighter control and test my JavaScript directly. In those cases I use Jasmine to test my JavaScript.

But my goal is to use so little JavaScript that I never get above that threshold of complexity where I feel the need to test my JavaScript directly with Jasmine. I’ve found that if I really try, I can get away with very little JavaScript in most applications without sacrificing any UI richness.

A repeatable, step-by-step process for writing Rails integration tests with Capybara

Many Rails developers who are new to writing tests struggle with the question of what to write tests for and how.

I’m about to share with you a repeatable formula that you can use to write an integration test for almost any Rails feature. It’s nothing particularly profound, it won’t result in 100% test coverage, and it won’t work in all cases, but it will certainly get you started if you’re stuck.

The three integration test cases I write for any Rails CRUD feature

Most features in most Rails applications are, for better or worse, some slight variation on CRUD operations. Here are the tests I almost always write for every CRUD feature.

  1. Creating a record with valid inputs
  2. Trying to create a record with invalid inputs
  3. Updating a record

Let me go into detail on each one of these. For each test case, let’s imagine we’re working with a resource called Location.

Creating a record with valid inputs

This test case will involve the following steps:

  1. If necessary, sign a user in (using login_as(create(:user)) as I describe in this post)
  2. Visit the “new” route for the resource (e.g. visit new_location_path)
  3. Fill out the form fields using fill_in, select, etc.
  4. Click the “submit” button
  5. Expect that the page has whatever content the resource’s index page has, e.g. expect(page).to have_content('Locations')

Here’s what that test might look like:

require 'rails_helper'

RSpec.describe 'Creating a location', type: :system do
  before do
    login_as(create(:user))
    create(:state, name: 'Michigan')
    visit new_location_path
  end

  scenario 'valid inputs' do
    fill_in 'Name', with: "Jason's House"
    fill_in 'Line 1', with: '69420 Cool Ave'
    fill_in 'City', with: 'Sand Lake'
    select 'Michigan', from: 'State'
    fill_in 'Zip code', with: '49343'
    click_on 'Save Location'

    expect(page).to have_content('Locations')
  end
end

Trying to create a record with invalid inputs

This test is pretty simple because all I have to do is not fill out the form. So the steps are:

  1. If necessary, sign a user in (using login_as(create(:user)))
  2. Visit the “new” route for the resource (e.g. visit new_location_path)
  3. Click the “submit” button
  4. Expect that the page shows an error (e.g. expect(page).to have_content("Name can't be blank"))

Here’s what a test of this type might look like.

require 'rails_helper'

RSpec.describe 'Creating a location', type: :system do
  before do
    login_as(create(:user))
    create(:state, name: 'Michigan')
    visit new_location_path
  end

  scenario 'invalid inputs' do
    click_on 'Save Location'
    expect(page).to have_content("Name can't be blank")
  end
end

Updating a record

The steps for this one go:

  1. If necessary, sign a user in (using login_as(create(:user)))
  2. Create an instance of the resource I’m testing (e.g. location = create(:location))
  3. Visit the “edit” route for the resource (e.g. visit edit_location_path(location))
  4. Fill in just one field with a different value
  5. Click the “submit” button
  6. Expect that the page has whatever content the resource’s index page has, e.g. expect(page).to have_content('Locations')
Here’s what that test might look like:

require 'rails_helper'

RSpec.describe 'Updating a location', type: :system do
  before do
    login_as(create(:user))
    location = create(:location)
    visit edit_location_path(location)
  end

  scenario 'valid inputs' do
    fill_in 'Name', with: "Jason's Filthy Shack"
    click_on 'Save Location'
    expect(page).to have_content('Locations')
  end
end

“But is that enough? Don’t I need more?”

No, this is not “enough”, and yes, you do need more…eventually. What I’m hoping to share with you here is a base that you can start with if you’re totally lost. Writing halfway-good tests is better than writing no tests, and even writing crappy tests is better than writing no tests. Once you get some practice following the formula above, you can expand on that formula to get a better level of test coverage.

Where to go next

If you get some practice with writing tests like the above and you want to go further, you might like my RSpec/Capybara integration test tutorial.

The difference between RSpec, Capybara and Cucumber

If you’re new to Rails testing you’ve probably come across the terms RSpec, Capybara and Cucumber.

All three are testing tools. What are they for? Do you need all of them? Here are some answers.

RSpec

RSpec is a testing framework. It’s what allows you to write and run your tests.

An analogous tool would be Minitest. In my experience, most commercial Rails projects use RSpec and most open-source Ruby projects use Minitest. At any Rails job you’re more likely to be using RSpec than Minitest. (I’m not sure why this is the way it is.)

Capybara

Some Rails tests operate at a “low level”, meaning no browser interaction is involved. Other “high level” tests do actually spin up a browser and click links, fill out form fields, etc.

Low-level tests can be executed with just RSpec and nothing more. But for tests that use the browser, something more is needed.

This is where Capybara comes into the picture. Capybara provides helper methods like fill_in to fill in a form field, click_on to click a button, etc.
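For example, a test that exercises a sign-up form might lean on those helpers like this (the path, labels and message are hypothetical):

scenario 'signing up' do
  visit new_user_path
  fill_in 'Email', with: 'user@example.com'
  click_on 'Sign up'
  expect(page).to have_content('Welcome')
end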

Please note that Capybara does NOT have to be used in conjunction with Cucumber. It’s completely possible to write integration tests in Rails with just RSpec and Capybara.

Cucumber

Cucumber is a tool for writing test cases in something close to English. Here’s an example from Wikipedia:

Scenario: Eric wants to withdraw money from his bank account at an ATM
    Given Eric has a valid Credit or Debit card
    And his account balance is $100
    When he inserts his card
    And withdraws $45
    Then the ATM should return $45
    And his account balance is $55

Cucumber can be connected with RSpec and Capybara and used to write integration tests.

My personal take on Cucumber is that while the English-like syntax might appear clearer at first glance, it’s actually less clear than bare RSpec/Capybara syntax. (Would a Ruby class be more understandable if it were English instead of Ruby?)
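To make the comparison concrete, here’s a rough sketch of how the same ATM scenario might read as a bare RSpec/Capybara test. The factory, path and labels are hypothetical, but to my eye this version is no less clear than the Cucumber one, and there’s no step-definition layer to maintain on top of it:

scenario 'withdrawing money at an ATM' do
  account = create(:account, balance_cents: 100_00)

  visit atm_path
  fill_in 'Amount', with: '45'
  click_on 'Withdraw'

  expect(page).to have_content('$45')
  expect(account.reload.balance_cents).to eq(55_00)
end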

Cucumber adds both a layer of mental overhead and a layer of maintenance overhead on top of the RSpec + Capybara combination. I always try as hard as I can to steer new testers away from Cucumber.

“So, what should I use to test my Rails apps?”

My advice is to use the combination of RSpec + Capybara and forget about Cucumber. What if you don’t know where to start with writing RSpec/Capybara tests? If that’s the case, you might like to check out my guide to RSpec + Capybara testing, which includes a tutorial.