Author Archives: Jason Swett

Stuck on a programming problem? These tactics will get you unstuck about 99% of the time

Being stuck is possibly the worst state to be in when programming. And the longer you allow yourself to stay stuck, the harder it is to finally get unstuck. That’s why I find it important to try never to allow myself to get stuck. And in fact, I very rarely do get stuck for any meaningful length of time. Here are my favorite tactics for getting unstuck.

Articulate the problem

I teach corporate training classes. Often, my students get stuck. When they’re stuck, it’s usually because the problem they’re trying to solve is ill-defined. It’s not possible to solve a problem or achieve a goal when you don’t even know what the problem or goal is.

So when you get stuck, a good first question to ask is: do I even know exactly what the problem is that I’m trying to solve? I recommend going so far as to write it down.

If it’s hard to articulate your goal, there’s a good chance your goal is too large to be tractable. Large goals can’t be worked on directly. For example, if you say your goal is “get to the moon and back”, that goal is too big. What you have to articulate instead are things like figure out how to get into space, figure out how to land a spaceship on the moon, figure out how to build a spacesuit that lets people hang out in space for a while, etc.

Just try something

Far too often I see programmers stare at their code and reason about what would happen if they changed it in such-and-such a way. They run intricate thought experiments so they can, presumably, fix all the code in one go and arrive at the complete solution the very next time they try to run the program. Nine times out of ten, they make the change that they think will solve the problem and then discover they’re wrong.

A better way to move forward is to just try something. The cost of trying is very low compared to the cost of reasoning. Also, trying has the added bonus of supplying empirical data instead of untested hypotheses.

“Rubber duck” it

“Rubber duck debugging” is the famous tactic of explaining your problem to a rubber duck on your desk. The phenomenon is that even though the duck is incapable of supplying any advice, the very act of explaining the problem out loud leads you to realize the solution to the problem.

Ironically, most of the “rubber duck” debugging I’ve done in my career has involved explaining my issue to a sentient human being. The result is often the classic scenario where you spend ten minutes explaining the details of a problem to a co-worker, realize the way forward, and then thank your co-worker for his or her help, all without your co-worker uttering a word.

The explanation doesn’t have to be verbal in order for this tactic to work. Typing out the problem can be perfectly effective. This leads me to my next tactic.

Put up a forum question

In my experience many programmers seem to be hesitant to write their own forum questions. I’m not sure why this is, although I have some ideas. Maybe they feel like posting forum questions is for noobs. Maybe they think it will slow them down. Neither of these things is true, though. I’ve been programming for over 20 years and I still post Stack Overflow questions somewhat regularly. And rather than slowing me down, posting a forum question usually speeds me up, not least because the very act of typing out the forum question usually leads me to realize the answer on my own (and usually about 45 seconds after I post the question).

If you don’t know a good way, try a bad way

Theodore Roosevelt is credited with having said, “In any moment of decision, the best thing you can do is the right thing, the next best thing is the wrong thing, and the worst thing you can do is nothing.” I tend to agree.

Often I’m faced with a programming task that I don’t know how to complete in an elegant way. So, in these situations where I can’t think of a good solution, I do what I can, which is that I just move forward with a bad or stupid solution.

What often happens is that after I put my dumb solution in place, I’ll take a step back, look at what I’ve done, and a better solution will make itself fairly obvious to me. Or at least, a nice solution will be easier for me to think of at this stage than it would have been from the outset.

With programming and in general, people are much better at looking at a bad thing and saying how it could be made good than they are at coming up with a good thing out of thin air.

If you go down a bad path and find that you’ve completely painted yourself into a corner, it’s no big deal. If you’re using version control and using atomic commits, you can just revert back to your last good state. And you can start on a second attempt now that you’re a little older and wiser.
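To make that concrete, here’s a minimal self-contained sketch in a throwaway repo (assuming git is installed; the file and commit message are made up):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name "Demo"

echo "good" > app.txt
git add app.txt
git commit -qm "Good state"

echo "painted into a corner" > app.txt   # a failed attempt, never committed

git reset --hard -q HEAD                 # back to the last good state
cat app.txt                              # prints "good"
```

If the dead end was already committed, `git revert` on the offending commit accomplishes the same thing without rewriting history.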

Study your buns off

There’s a thought experiment I like to run in my head sometimes.

Let’s say I’m working on a Rails project and I can’t quite figure out how to get a certain form to behave how I want it to behave. I try several things, I check the docs, but I just can’t get it to work how I want.

In these situations I like to ask myself: “If I knew everything there was to know about this topic, would I be stuck?”

And the answer is of course not. The reason why I don’t understand how to get my form to work is exactly that—I don’t understand how to get my form to work. So what I need to do is set to work on gaining that understanding.

In an ideal world, a programmer could instantly pinpoint the exact piece of knowledge he or she is missing and then go find that piece of knowledge. And in fact, we’re very fortunate to live in a world where that’s possible. But it’s not possible 100% of the time.

In those cases I set down my code and say okay, universe, I guess this is how it’s gonna be. I’ll go to the relevant educational resource—in the case of this example perhaps the official Rails documentation on forms—and start reading it top to bottom. Again, if I knew everything about Rails forms then I wouldn’t have a problem, so I’ll set out on a journey to learn everything. Luckily, my level of understanding usually becomes sufficient to squash the problem long before I read everything there is to read about it.

Write some tests

One of the reasons I find tests to be a useful development or debugging tool is that tests help save me some mental juggling. When developing or debugging there are two jobs that need to be done: 1) write the code that makes the program behave as desired and 2) verify that the desired behavior is present.

If I write code without tests, I’m at risk of mentally mixing these two jobs and getting muddled. On the other hand, if I write a test for my desired behavior, I’m now free to completely forget about the verification step and focus fully on coding the solution. The fewer balls I have to mentally juggle, the faster I can work.
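As a tiny self-contained illustration of the split (the business_days_between function and the desired behavior here are made up):

```ruby
require "date"

# Job 2, written once up front: verify the desired behavior.
def assert_equal(expected, actual)
  raise "expected #{expected.inspect}, got #{actual.inspect}" unless expected == actual
end

# Job 1: write the code, re-running the check after each change
# instead of re-verifying everything in my head.
# (Hypothetical desired behavior: count weekdays in a date range.)
def business_days_between(start_date, end_date)
  (start_date...end_date).count { |d| !d.saturday? && !d.sunday? }
end

# Mon Jan 1 2024 up to (not including) Mon Jan 8 2024 has five weekdays:
assert_equal 5, business_days_between(Date.new(2024, 1, 1), Date.new(2024, 1, 8))
puts "verification passed"
```

Once the check exists, the verification ball is out of the air: I can change the implementation freely and just re-run.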

Take a break

One of my favorite Pink Floyd songs is “See Emily Play”. In the first chorus they sing “There is no other day / Let’s try it another way.” In a later chorus they sing “There is no other way / Let’s try it another day.” This is great debugging advice.

I often find that if I allow myself to forget about whatever problem I’m working on and go for a walk or something, my subconscious mind will set to work on the problem and surface the solution later. Sometimes this happens 15 minutes later. Sometimes it doesn’t happen until a day or a week later. In any case, it’s often effective.

Work on something else

If all else fails, you can always work on something else. This might feel like a failure, but it’s only a failure if you decide to give up on the original problem permanently. Setting aside one goal in exchange for making progress on a different goal is infinitely better than allowing yourself to stay stuck.

Logging the user in before Capybara feature specs

Logging in

If you’re building an app that has user login functionality, you’ll at some point need to write some tests that log the user in before performing the test steps.

One approach I’ve seen is to have Capybara actually navigate to the sign in page, fill_in the email and password fields, and hit submit. This works but, if you’re using Devise, it’s a little more complicated than necessary. There’s a better way.

The Devise docs have a pretty good solution: a small addition to spec/rails_helper.rb.
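Assuming RSpec and Devise’s Warden-based test helpers, the addition looks something like this:

```ruby
# spec/rails_helper.rb
RSpec.configure do |config|
  # Warden's test helpers provide login_as/logout without going
  # through the sign-in form.
  config.include Warden::Test::Helpers

  config.before(:suite) { Warden.test_mode! }
  config.after(:each)   { Warden.test_reset! }
end
```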

Then, any place you need to log in, you can do login_as(FactoryBot.create(:user)).

Log in before each test or log in once before all tests?

One of my readers, Justin K, wrote me with the following question:

If you use Capybara to do system tests, how do you handle authentication? Do you do the login step on each test or do you log in once and then just try and run all of your system level tests?

The answer is that I log in individually for each test.

The reason is that some of my tests require the user to be logged in, some of my tests require that the user is not logged in, and other tests require that some specific type of user (e.g. an admin user) is logged in.

I do often put my login_as call in a before block at the top of a test file to reduce duplication, but that doesn’t mean the login runs only once for that file. A common misconception is that a before block gives a performance benefit by running its code only once. This is not the case. before is shorthand for before(:each), and any code inside it gets run before each individual it block. I never use before(:all) inside individual tests because I want each test case to be as isolated as possible.
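Sketched as a spec file (the :admin factory trait and the paths here are hypothetical):

```ruby
RSpec.describe "Admin dashboard", type: :system do
  # Shorthand for before(:each): this runs before EVERY example below,
  # so each test gets its own fresh login.
  before do
    login_as FactoryBot.create(:user, :admin)
  end

  it "shows the user list" do
    visit admin_users_path
    expect(page).to have_content("Users")
  end

  it "shows the settings page" do
    visit admin_settings_path
    expect(page).to have_content("Settings")
  end
end
```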

All my best programming tips

What follows is a running list of all the programming tips I can think of that are worth sharing.

Many of these tips may seem so obvious they go without saying, but every single one of these tips is here because I’ve seen programmers fail to take advantage of these tips on multiple occasions, even very experienced programmers.

I’ve divided the tips into two sections: development tips and debugging tips. I’ve arranged them in no particular order.

Development tips

Articulate the goal

Before you sit down and start typing, clearly articulate in writing what it is you’re trying to accomplish. This will help you avoid going down random paths which will dilute your productivity. It will also help you work in small, complete units of work.

Break big jobs into small jobs

Some jobs seem so big and nebulous as to be intractable. For example, if I’m tasked with creating a schedule that contains recurring appointments, where do I start with that? It will probably make my job easier if I start with a smaller job, for example, thinking of how exactly to specify and codify a recurrence rule.
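For example, that smaller job might start as nothing more than a sketch like this (all the names here are made up):

```ruby
require "date"

# A first stab at the smaller job: represent "every N weeks on a given
# weekday" as data, before any UI or persistence exists.
RecurrenceRule = Struct.new(:weekday, :interval_weeks)

def occurrences(rule, from:, count:)
  date = from
  date += 1 until date.wday == rule.weekday   # advance to the first matching weekday
  count.times.map { |i| date + (7 * rule.interval_weeks * i) }
end

# Every 2nd Tuesday (wday 2), starting from Mon Jan 1 2024:
puts occurrences(RecurrenceRule.new(2, 2), from: Date.new(2024, 1, 1), count: 3)
# prints 2024-01-02, 2024-01-16, 2024-01-30 (one per line)
```

Getting even this much codified turns the nebulous big job into a series of concrete small ones.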

Keep everything working all the time

Refactoring is the act of improving the structure of a program without altering the behavior of the program. I’ve seen a lot of occasions where a programmer will attempt to perform a refactoring and fail to preserve the behavior of the code they’re trying to refactor. They end up having to throw away their work and start over fresh.

If in the course of refactoring I break or change some existing functionality, then I somehow have to find my way back to making the code behave as it originally did. This is hard.

Alternatively, if I take care never to let the code stray from its original behavior, I never have to worry about bringing back anything that got altered. This is relatively easy. I just have to work in very small units and regression-test at the end of each small change.

Continuously deliver

Continuous delivery is the practice of always having your program in a deployable state.

Let’s say I start working on a project on January 1st which has a launch date of July 1st. But then, unexpectedly, leadership comes to me on March 1st and says the launch date is now March 10th and we’re going to go live with whatever we have by then.

If I’ve been practicing continuous delivery, launching on March 10th is no big deal. At the end of each week (and the end of each day and perhaps even each hour) I’ve made sure that my application is 100% complete even if it might not be 100% finished.

Work on one thing at a time

It’s better to be all the way done with half your tasks than to be halfway done with all your tasks.

If I’m 100% done with 50% of 8 features, I can deploy four features. If I’m 50% done with 100% of 8 features, I can deploy zero features.

Also, open work costs mental bandwidth. Even if you believe it’s just as fast to jump between tasks as it is to focus on just one thing at a time, it’s more mentally expensive to have 5 balls in the air than just one.

Use automated tests

All code eventually gets tested, it’s just a question of when and how and by whom.

It’s much cheaper and faster for a bug to get caught by an automated test during development than for a feature to get sent to a QA person and then kicked back due to the bug.

Testing helps prevent the introduction of new bugs, helps prevent regressions, helps improve code design, helps enable refactoring, and helps aid the understandability of the codebase by serving as documentation.

Use clear names for things

When a variable name is unclear due to being a misnomer or when it’s unclear due to being abbreviated to obfuscation, it adds unnecessary mental friction.

Don’t abbreviate variable names, with the exception of universally-understood abbreviations (e.g. SSN or PIN) or hyperlocal temp variables (e.g. records.each { |r| puts r }).

A good rule for naming things: call things what they are.
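As a contrived illustration (the domain and names here are made up):

```ruby
require "date"

GRACE_PERIOD_DAYS = 14

# Abbreviated to the point of obfuscation:
def calc_ed(cd)
  cd + GRACE_PERIOD_DAYS
end

# Identical behavior, but the names say what things are:
def trial_end_date(signup_date)
  signup_date + GRACE_PERIOD_DAYS
end

puts trial_end_date(Date.new(2024, 1, 1))  # prints 2024-01-15
```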

Don’t prematurely optimize

There’s a saying attributed to Kent Beck: “Make it work, then make it right, then make it fast.”

For the vast majority of the work I do, I never even need to get to the “make it fast” step. Unless you’re working on high-visibility features for a high-profile consumer app, performance bottlenecks probably won’t emerge until weeks or months after the feature is initially deployed.

Only write a little code at a time

I have a saying that I repeat to myself: “Never underestimate your ability to screw stuff up.” I’ve often been amazed by how often a small change that will “definitely” work doesn’t work.

Only work in tiny increments. Test what you’re working on after each change, preferably using automated tests.

Use atomic commits

An atomic commit is a commit that’s only “about” one thing.

Atomic commits are valuable for at least two reasons. One, if you need to roll back a commit because it introduces a bug, you won’t have to also roll back other, unrelated (and perfectly good) code that was mixed into the same commit. Two, atomic commits make it easier to pinpoint the introduction of a bug than non-atomic commits.

A separate but related idea is to use small commits, which is just a specific application of “Only write a little code at a time”.

Don’t allow yourself to stay stuck

Being stuck is about the worst state to be in. Going down the wrong path is (counterintuitively) more productive than standing still. Going down the wrong path stirs up new thoughts and new information; standing still doesn’t.

If you can’t think of a good way to move forward, move forward in a bad way. You can always revert your work later, especially if you’re using small, frequent, atomic commits.

If you can’t think of a way to keep moving, try to precisely articulate what exactly it is you’re stuck on and why you’re stuck on it. Sometimes that very act helps you get unstuck.

If you think the reason you’re stuck is a lack of some necessary piece of knowledge, do some Google searches. Read some relevant blog posts and book passages. Post some forum questions. Ask some people in some Slack groups (and if you’re not in any technical Slack groups, join some). Go for a walk and then come back and try again. If all else fails, set the problem aside and work on something else instead. But whatever you do, don’t just sit there and think.

Debugging tips

Articulate the problem

If you can precisely articulate what the problem is, you’ve already gone a good ways toward the solution. Conversely, if you can’t articulate exactly what the problem is, your situation is almost hopeless.

Don’t fall into logical fallacies

One of the biggest dangers in debugging is the danger of tricking yourself into thinking you know something you don’t actually know.

If in the course of your investigation you uncover a rock solid truth, write it down on your list of things you know to be true.

If on the other hand you uncover something that seems to be true but you don’t have enough evidence to be 100% sure that it’s true, don’t write it down as a truth. It’s okay to write it down as a hypothesis, but if you confuse hypotheses with facts then you’re liable to get confused and waste some time.

Favor isolation over reason

I know of two ways to identify the cause of a bug. One is that I can study the code and perform experiments and investigations until I’ve pinpointed the line of code where the problem lies. The other method is that I can isolate the bug. The latter is usually many times quicker.

One of my favorite ways to isolate a bug is to use git bisect. Once I’ve found the commit that introduced the bug, it’s often pretty plain to see what part of the commit introduced the bug. If it’s not, I’ll usually negate the offending code by doing a git revert --no-commit on the offending commit and then re-introduce the offending code tiny bit by tiny bit until I’ve found the culprit. This methodology almost never fails to work.
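Roughly, the workflow looks like this (v1.2.0 and <offending-sha> below are placeholders):

```shell
# Binary-search for the commit that introduced the bug.
git bisect start
git bisect bad                 # the commit you're on exhibits the bug
git bisect good v1.2.0         # a ref known to predate the bug

# git now checks out a midpoint commit. Test it, mark it with
# `git bisect good` or `git bisect bad`, and repeat until git
# names the first bad commit. Then return to where you started:
git bisect reset

# Negate the offending code without committing, so it can be
# re-introduced tiny bit by tiny bit:
git revert --no-commit <offending-sha>
```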

Favor googling over thinking

Mental energy is a precious resource. Don’t waste mental energy over problems other people have already solved, especially if solving that particular problem does little or nothing to enhance your skills.

If you run a command and get a cryptic error, don’t sit and squint at it. Copy and paste the error message into Google.

Thinking for yourself of course has its time and place. Understanding things deeply has its time and place too. Exercise good judgment over when you should pause and understand what you’re seeing versus when you should plow through the problem as effortlessly as possible so you can get on with your planned work.

Learn to formulate good search queries for error messages

Let’s say I run a command and get the following error: /Users/jasonswett/.rvm/gems/ruby-2.5.1/gems/rspec-core-3.8.0/lib/rspec/core/reporter.rb:229:in `require': cannot load such file -- rspec/core/profiler (LoadError)

Which part of the error should I google?

If I paste the whole thing into Google, the /Users/jasonswett/.rvm/gems/ruby-2.5.1/gems/rspec-core-3.8.0, which is unique to my computer, my Ruby version and my rspec-core version, narrows my search so much that I don’t even get a single result.

At the other extreme, `require': cannot load such file is far too broad. Lots of different error messages could contain that text.

The sensible part of the error message to copy/paste is `require': cannot load such file -- rspec/core/profiler (LoadError). It’s specific enough to have hope of matching other similar problems, but not so specific that it won’t return any results.

Learn to formulate good search queries in general

I’m not sure how to articulate what makes a good search query but here’s my attempt: a query that’s as short as it can possibly be while still containing all the relevant search terms.

Don’t give too much credence to the error message

Whenever I get an error message, I pretty much never pay attention to what it says, at least not as the first step.

When I get an error message, I look at the line number and file. Then I go to that line number and look to see if there’s anything obviously wrong. Probably about 90% of the time there is.

For the other 10% of the time, I’ll go back to the error message and read it to see if it offers any clues. If it does, I’ll make a change to see if that fixes the problem. If the error is so cryptic that I don’t know what to do, I’ll google the error message.

Make the terminal window big

If I had a dollar for every time I helped a student who was trying to debug a problem using a terminal window the size of a Post-It note, I’d be a rich man. Maximize the window so you can actually see what you’re doing.

Close unneeded tabs

Open tabs cost mental energy. Between the cost of re-finding the page later and the cost of being mentally cluttered by all those open tabs, it’s way cheaper just to re-find the page later. I myself usually have only about two tabs open at any given time.

My experience speaking at 7 tech conferences in 9 months including RailsConf, RubyHACK and RubyConf India

Why I wanted to speak at tech conferences

On and off since 2011 I’ve been a freelancer—or, to use the term that I think sounds better, consultant. The vast majority of the work I’ve done as a “consultant” has just been full-time-ish staff-aug contracting. In other words, I worked a lot of contract jobs where I was paid hourly as opposed to salary but by every other measure I was just another developer on my client’s development team, and my client was basically my employer.

Ever since the time I started freelancing it’s been my desire to move on the freelancer-consultant spectrum from the “freelance” end of the spectrum to the “consultant” end of the spectrum. My understanding is that “real” consultants are able to have higher fees, more schedule autonomy, more discretionary time, more fulfilling work, and a better lifestyle overall. After several years of studying successful consultants like Alan Weiss, I discovered that the way to attract clients who would hire me for true consulting is through speaking and writing on my areas of expertise.

Not all speaking and writing is equally effective for the purpose of attracting clients, of course. Before 2018 I had only delivered talks at local user groups and never at a conference. I wanted to start giving talks at conferences as a way to start more effectively attracting more of the types of clients I wanted to work with.

There’s also another reason I was interested in speaking at conferences. At the time I started speaking at conferences I was also starting to think about a certain book I was going to write on the topic of Rails testing. I figured if I could list some conferences I had spoken at in the bio on my sales page it would lend credibility to my book.

How I got accepted to my first conference

At one point in time I only knew of maybe 10 programming conferences, period. I imagined there were maybe a handful more. I find my naïveté funny now. There are certainly hundreds of programming conferences worldwide of various sizes. My challenge early on was that I couldn’t find conferences to apply to speak at.

Then at some point I discovered a CFP aggregator. In case you’re not familiar with the term CFP (call for papers) or the CFP process, here’s how the CFP process tends to work:

  1. The conference announces that its CFP is open
  2. Prospective speakers submit their talk proposals
  3. CFP closes, conference organizers select talks and notify selected speakers
  4. Speaker list is publicly announced

An actual proposal will usually contain things like:

  • Talk title
  • Talk length
  • Target audience level
  • Talk description
  • Speaker bio

My discovery of this aggregator allowed me to start sending proposals to a relatively large number of conferences. Ruby conferences were my preference but I didn’t limit myself to that. My initial goal was just to get a talk accepted to any conference that would accept me since I figured that was hard enough.

I’m not sure how many conferences I applied to before I finally got something accepted. It might have been about 30.

According to my account, it looks like I sent my first proposal on March 15th, 2018. The first conference I got accepted to was RubyConf Kenya. I was notified of acceptance on March 22nd, 2018. Unfortunately I had to decline due to a schedule conflict.

The next talk I got accepted was for DevOps Midwest in St. Louis. That talk got accepted on June 29th, 2018 and the conference took place in September. So I guess it took about three and a half months between the time I started submitting proposals (again, about 30 proposals by my guess) and the time I got accepted to a US-based conference.

My speaking experiences

Between the months of September 2018 and April 2019 I spoke at the following seven conferences.

  • DevOps Midwest (St. Louis)
  • Little Rock Tech Fest (Little Rock)
  • RubyConf India (Goa, India)
  • PyTennessee (Nashville)
  • Connectaha (Omaha)
  • RubyHACK (Salt Lake City)
  • RailsConf (Minneapolis)

DevOps Midwest, Little Rock Tech Fest

I feel fortunate that my first talk was at a small regional conference. My audience at DevOps Midwest was perhaps 50 people. I had spoken at meetups with an audience of more like 80 people before. So although the event was maybe a step up in terms of prestige, it wasn’t a new level in terms of audience size.

Little Rock Tech Fest is also a regional conference but it felt a little more like a national conference than DevOps Midwest.

RubyConf India

RubyConf India was my first national-level conference experience. Out of all seven conferences I’ve spoken at now, RubyConf India had the most intimidating setup. The conference was single-track (meaning there was only one stage and one talk at a time) so I was giving my talk in front of the WHOLE group of conference attendees, not just a handful who chose to come to my particular talk. My talk went pretty much fine except for an audio glitch which was out of my control. I also got nervous and talked too fast and ended my talk earlier than intended (which would not be the last time this would happen to me at a conference).

Unlike the first two conferences which I attended without my family, I brought my wife and two kids (5 and 8 at the time) with me to India. I figured my family wouldn’t mind missing out on St. Louis too much but it would be kind of a tough sell to go all the way to India without them, especially since I had already traveled to places like Africa, Amsterdam and Bulgaria without them (all business trips).

I want to briefly mention a few interesting cultural surprises I encountered in India.

  • I saw more than a few Indians wearing sweatshirts and wool caps in 90+ degree heat.
  • In India it’s normal for straight men to walk around in public holding hands.
  • There were not only stray cows everywhere but stray dogs everywhere. So many of them.
  • Indians don’t drink coffee. Real (non-instant) coffee was IMPOSSIBLE to find.
  • Despite my pleas, no one would give me spicy food. Too white.
  • Speaking of white: no one suspected us of being American. On one occasion a cab driver asked, “So, are you Swedish or English?” We got Russian a lot. There were a lot of Russians there in Goa, and even signs written in Russian.

A couple neat things about the conference: after my talk someone came up to me and said, “Your talk was good, but not as good as your podcast!” It was cool to meet a podcast listener there. Another attendee asked to take a selfie with me. Apparently he considered me famous. Lastly, I got to meet up with my new friend Swanand, a fellow student in 30×500, an entrepreneurship course I enrolled in in early 2018.

PyTennessee, Connectaha

Because I’m trying to become known as “the Rails testing guy”, my preference is to give testing-related talks at Ruby/Rails-related conferences. However, not all of my talk proposals were Ruby-related. In fact, I think most of my proposals were technology-agnostic.

The talk of mine that got accepted to the most conferences was a talk called Using Tests as a Tool to Wrangle Legacy Projects. This talk got accepted to RubyConf India, PyTennessee, Connectaha and RubyHACK.

So, interestingly, I found myself speaking at PyTennessee, a Python conference. It was an okay experience although I don’t believe I’ll do it again. I did enjoy Nashville though.

Connectaha, which was in Omaha, was one of my favorite conferences so far. It was a super well-organized conference. I’ll speak more to this in a bit.


RubyHACK

RubyHACK (High Altitude Coding Konference) was another favorite of mine. Both the level of organization and the location (Salt Lake City) were great. There were maybe a few hundred people in attendance.

I got a couple of neat ego strokes at RubyHACK. One, I got to meet one of my email subscribers in person who I had actually helped get his first programming job some months prior. Also, someone came up to me at one point and said, “You must be Jason!” Turns out he was a listener of The Ruby Testing Podcast. Not only was he a listener but he found out about RubyHACK through my podcast, and decided not only to come to the conference but to bring a co-worker as well.

The day after the conference I went up to Park City where a friend of mine lives and we went skiing.


RailsConf

RailsConf was the big one. When I got accepted to RailsConf, I couldn’t believe it. Ironically, the proposal of mine that got accepted to RailsConf was not only not Ruby-related and not testing-related but it wasn’t even all that technical.

Speaking at RailsConf was great for the prestige factor but I’m not sure it was among my favorite conference experiences. It was huge. Like 1600 attendees or something like that. I had never been to a conference that large before. Having now experienced a big conference like that, I think I prefer the smaller, more intimate conferences. Ironically, the smaller the conference, the more people you can meet. At a huge conference you might meet somebody and then never run into them again. I knew perhaps 10+ people who were going to be at RailsConf but I could hardly find them!

What makes a good or bad conference speaking experience

Speaking at seven different conferences exposed me to a decent range of organization quality.

Connectaha, one of the best-organized conferences, pre-emptively emailed me several times leading up to the conference, anticipating and answering any question I possibly could have had: hotel address, speaker dinner, etc. A certain other conference I spoke at had basically zero communication before the conference, leading me to wonder whether we were even really on or not.

I also experienced a range of levels of reception warmth. At RubyHACK, the conference organizers met with us speakers, bought us dinner, and thanked us for coming. This was much appreciated. At a certain other conference, the organizer set up an ad-hoc informal dinner a few hours in advance and I showed up at the restaurant straight from the airport with all my luggage, wasn’t able to find the group, and had to just go straight to my hotel and then go get dinner on my own. At the conference itself I never met the organizer. I left the conference the minute my talk was over and I don’t intend to ever attend again.

When I tell people I speak at conferences, they often ask me about compensation. My experience is that it can vary quite a bit. Out of the seven conferences I’ve spoken at so far, three covered no expenses at all. One conference paid for ALL expenses (plane ticket, hotel, cab fare, etc.) which I was very surprised by, especially since it was just a small regional conference. The other three conferences contributed at least something to travel expenses, usually roughly equivalent to one or two nights in a hotel. I have yet to actually make money directly from speaking at a conference.

My advice to hopeful conference speakers

If you’re a developer who hopes to speak at a conference soon, I have some advice, but first I have some meta-advice.

My meta-advice is to be very skeptical of any conference speaking advice. I’ve seen articles whose authors say things like “Here are what my accepted proposals have in common, so this is what works”. This does not strike me as a logically sound way to draw conclusions about what works and doesn’t work. There are a lot of variables involved in why a particular proposal would get accepted or rejected (fitness of topic for the conference, whether the submitter is from an underrepresented background or not, whether the speaker has an “in” with the conference organizers, what kind of mood the organizer was in at the time of evaluating the proposal, etc.). So be wary of advice that flows from the fallacy that successful tactics can be pinpointed experimentally.

Having said that, there are certainly things that work better than others. However, I don’t claim to know what they are. I don’t know whether my later talks got accepted because my proposals were better or because it’s simply a numbers game and I “sprayed and prayed” enough to succeed through sheer brute force. So instead of pretending to know what works and what doesn’t, let me share some articles from people who seem to know what they’re talking about:

There’s one thing I know for an absolute fact though. All other things being equal, you’ll get accepted to more conferences if you apply to more conferences. There are certain areas of life where dumb brute force is a perfectly effective tactic and I think this is one of them.

What if you don’t have any ideas for talks? I had this problem myself. I didn’t have any good talk ideas. My solution was just to submit bad ideas. Eventually one of my bad ideas got selected. I gave my talk, and in the process of doing so, I realized that I was trying to say too much. My talk was really like three talks squeezed into one. So I took one of those three talks and made it into its own talk. This is the “Using Tests as a Tool to Wrangle Legacy Projects” talk that got accepted to four conferences (actually more, IIRC). When I gave that talk it gave me ideas for more talks. The more talk ideas I generated, the more talk ideas I was able to generate.

One more piece of advice: I’ve been able to find out about more and more CFPs by following conferences on Twitter. The Twitter accounts I follow are almost exclusively conferences. You can see the list of accounts I follow here.

My plans from here

Seven conference talks in nine months is too much (and that wasn’t even my only business travel during that time). I never intended to do that many. When I got my first two talks accepted, of course I wasn’t going to turn down those opportunities. Then when I got accepted to RubyConf India, of COURSE I wasn’t going to turn down that opportunity. Then when I got accepted to RailsConf, of COURSE I wasn’t going to turn that down. Each opportunity was more un-turn-downable than the one before it. I think the time and money I invested in giving these talks will pay off in the long run, but the investment I made in the last nine months was frankly more than I would like to have made in such a short period of time.

In 2020 I’ll probably speak at far fewer conferences, and I’ll only speak at ones that are some combination of professionally relevant, geographically close to me, and/or favorable in terms of travel compensation.

Overall, I’m very glad that I’ve dipped my toe (perhaps my whole leg!) into these waters. Now I know what conference speaking is all about. I expect that this tactic will be part of my self-marketing strategy for a long time into the future.

Capybara: expect field to have value

I commonly find myself wanting to assert that a certain field contains a certain value. The way to do this with Capybara is documented on StackOverflow but, unfortunately, the answer there is buried in a little too much noise. I decided to create my own tiny noise-free blog post that contains the answer. Here it is:
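
For the impatient, the gist is Capybara’s have_field matcher with the :with option (the field label and expected value below are just placeholders for your own):

```ruby
# Assert that the field labeled "Name" currently contains "Minneapolis".
expect(page).to have_field('Name', with: 'Minneapolis')

# The negated form also exists:
expect(page).to have_no_field('Name', with: 'Chicago')
```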

RSpec/Capybara integration tests: the ultimate guide

What exactly are integration tests?

The many different types of RSpec specs

When I started learning about Rails testing and RSpec I discovered that there are many different kinds of tests. I encountered terms like “model spec”, “controller spec”, “view spec”, “request spec”, “route spec”, “feature spec”, and more. I asked myself: Why do these different types of tests exist? Do I need to use all of them? Are some more important than others?

As I’ve gained experience with Rails testing I’ve come to believe that yes, some types of tests (or, to use the RSpec terminology, “specs”) are more important than other types, and no, I don’t need to use all of the types of specs that RSpec offers. Some types of specs I never use at all (e.g. view specs).

There are two types of tests I use way more than any other types of specs: model specs and feature specs. Let’s focus on these two types of specs for a bit. What are model specs and feature specs for, and what’s the difference?

Feature specs and model specs

Speaking loosely, model specs test ActiveRecord models by themselves and feature specs test the whole “stack” including model code, controller code, and any HTML/CSS/JavaScript together. Neither type of spec is better or worse (and it’s not an “either/or” situation anyway), the two types of specs just have different strengths and weaknesses in different scenarios.

The strengths and weaknesses of model specs

An advantage of model specs is that they’re “inexpensive”. Compared to feature specs, model specs are relatively fast to run and relatively fast to write. It tends to be slower to actually load the pages of an application and switch among them than to just test model code by itself with no browser involved. That’s an advantage of model specs. A disadvantage of model specs is that they don’t tell you anything about whether your whole application actually works from the user’s perspective. You can understand what I mean if you imagine a Rails model that works perfectly by itself but doesn’t have any working HTML/CSS/JavaScript plugged into that model to make the whole application work. In such a case the model specs could pass even though there’s no working feature built on top of the model code.

The strengths and weaknesses of feature specs

Feature specs have basically the opposite pros and cons. Unlike model specs, feature specs do tell you whether all the parts of your application are working together. There’s a cost to this, though, in that feature specs are relatively “expensive”. Feature specs are relatively time-consuming to run because you’re running more stuff than in a model spec. (It’s slower to have to bring up a web browser than to not have to.) Feature specs are also relatively time-consuming to write. The reason feature specs are time-consuming to write is that, unlike a model spec where you can exercise e.g. just one method at a time, a feature spec has to exercise a whole feature at a time. In order to test a feature it’s often necessary to have certain non-trivial conditions set up—if you want to test, for example, the creation of a hair salon appointment you first have to have a salon, a stylist, a client, and maybe more. This necessity makes the feature spec slower to write and slower to run than a model spec where you could test e.g. a method on a Stylist class all by itself.

Where integration tests come into the picture, and what integration tests are

If you’re okay with being imprecise, we can say that feature specs and integration tests are roughly the same thing. And I hope you’re okay with being imprecise because when it comes to testing terminology in general there’s very little consensus on what the various testing terms mean, so it’s kind of impossible to be precise all the time. We do have room to be a little more precise in this particular case, though.

In my experience it’s commonly agreed that an integration test is any test that tests two or more parts of an application together. What do we mean when we say “part”? A “part” could be pretty much anything. For example, it has been validly argued that model specs could be considered integration tests because model specs typically test Ruby code and database interaction, not just Ruby code in isolation with no database interaction. But it could also be validly argued that a model spec is not an integration test because a model spec just tests “one” thing, a model, without bringing controllers or HTML pages into the picture. (For our purposes though let’s say model specs are NOT integration tests, which is the basic view held, implicitly or explicitly, by most Rails developers.)

So, while general agreement exists on what an integration test is in a broad sense, there still is a little bit of room for interpretation once you get into the details. It’s kind of like the definition of a sandwich. Most people agree that a sandwich is composed of a piece of food surrounded by two pieces of bread, but there’s not complete consensus on whether e.g. a hamburger is a sandwich. Two different people could have different opinions on the matter and no one could say either is wrong because there’s not a single authoritative definition of the term.

Since feature specs exercise all the layers of an application stack, feature specs fall solidly and uncontroversially into the category of integration tests. This is so true that many developers (including myself) speak loosely and use the terms “feature spec” and “integration test” interchangeably, even though we’re being a little inaccurate by doing so. Inaccurate because all feature specs are integration tests but not all integration tests are feature specs. (For example, a test that exercises an ActiveRecord model interacting with the Stripe API could be considered an integration test even though it’s not a feature spec.)

I hope at this point you have some level of understanding of how feature specs and integration tests are different and where they overlap. Now that we’ve discussed feature specs vs. integration tests, let’s bring some other common testing terms into the picture. What about end-to-end tests and system tests?

What’s the difference between integration tests, acceptance tests, end-to-end tests, system tests, and feature specs?

A useful first step in discussing these four terms may be to name the origin of each term. Is it a Ruby/Rails-specific term or just a general testing term that we happen to use in the Rails world sometimes?

• Integration tests: general testing term, not Ruby/Rails-specific
• Acceptance tests: general testing term, not Ruby/Rails-specific
• End-to-end tests: general testing term, not Ruby/Rails-specific
• System tests: Rails-specific, discussed in the official Rails guides
• Feature specs: an RSpec concept/term

Integration tests, acceptance tests and end-to-end tests

Before we talk about where system tests and feature specs fit in, let’s discuss integration tests, acceptance tests and end-to-end tests.

Like most testing terms, I’ve heard the terms integration tests, acceptance tests, and end-to-end tests used by different people to mean different things. It’s entirely possible that you could talk to three people and walk away thinking that integration tests, acceptance tests and end-to-end tests mean three different things. It’s also entirely possible that you could talk to three people and walk away thinking all three terms mean the same exact thing.

I’ll tell you how I interpret and use each of these three terms, starting with end-to-end tests.

To me, an end-to-end test is a test that tests all layers of an application stack under conditions that are very similar to production. So in a Rails application, a test would have to exercise the HTML/CSS/JavaScript, the controllers, the models, and the database in order to qualify as an end-to-end test. To contrast end-to-end tests with integration tests, all end-to-end tests are integration tests but not all integration tests are end-to-end tests. I would say that the only difference between the terms “end-to-end test” and “feature spec” is that feature spec is an RSpec-specific term while end-to-end test is a technology-agnostic industry term. I don’t tend to hear the term “end-to-end test” in Rails circles.

Like I said earlier, an integration test is often defined as a test that verifies that two or more parts of an application behave correctly not only in isolation but also when used together. For practical purposes, I usually hear Rails developers say “integration test” when they’re referring to RSpec feature specs.

The purpose of an acceptance test is quite a bit different (in my definition at least) from that of end-to-end tests or integration tests. The purpose of an acceptance test is to answer the question, “Does the implementation of this feature match the requirements of the feature?” As with end-to-end tests, I rarely hear Rails developers refer to feature specs as acceptance tests. I have come across this usage, though. One of my favorite testing books, Growing Object-Oriented Software, Guided by Tests, uses the term “acceptance test” to refer to what in RSpec/Rails would be a feature spec. So again, there’s a huge lack of consensus in the industry around testing terminology.

System tests and feature specs

Unlike many distinctions we’ve discussed so far, the difference between system tests and feature specs is happily pretty straightforward: when an RSpec user says integration test, they mean feature spec; when a MiniTest user says integration test, they mean system test. If you use RSpec you can focus on feature specs and ignore system tests. If you use MiniTest you can focus on system tests and ignore feature specs.

My usage of “integration test”

For the purposes of this article I’m going to swim with the current and use the terms “integration test” and “feature spec” synonymously from this point forward. When I say “integration test”, I mean feature spec.

What do I write integration tests for, and how?

We’ve discussed what integration tests are and aren’t. We’ve also touched on some related testing terms. If you’re like many developers getting started with testing, your next question might be: what do I write integration tests for, and how?

This question (what do I write tests for) is probably the most common question I hear from people who are new to testing. It was one of my own main points of confusion when I myself was getting started with testing.

What to write integration tests for

I can tell you what I personally write integration tests for in Rails: just about everything. Virtually every time I build a feature that a user will interact with in a browser, I’m going to write at least one integration test for that feature. Unfortunately this answer of mine, while true, is perhaps not very helpful. When someone asks what they should write tests for, that person is probably somewhat lost and that person is probably looking for some sort of toehold. So here’s what’s hopefully a more helpful answer.

If you’ve never written any sort of integration test (which again I’m sloppily using interchangeably with “feature spec”) then my advice would be to first do an integration test “hello world” just so you can see what’s what and get a tiny bit of practice under your belt. I have another post, titled A Rails testing “hello world” using RSpec and Capybara, that will help you do just that.

What if you’re a little further? What if you’re already comfortable with a “hello world” level of integration testing and you’re more curious about how to add real integration tests to your Rails application?

My advice would be to start with what’s easiest. If you’re working with a legacy project (legacy project here could just be read as “any existing project without a lot of test coverage”), it’s often the case that the structure of the code makes it particularly difficult to add tests. So in these cases you’re dealing with two separate challenges at once: a) you’re trying to learn testing (not easy) and b) you’re trying to overcome the obstacles peculiar to adding tests to legacy code (also not easy). To the extent that you can, I would encourage you to try to separate these two endeavors and just focus on getting practice writing integration tests first.

If you’re trying to add tests to an existing project, maybe you can find some areas of the application where adding integration tests is relatively easy. If there’s an area of your application where the UI simply provides CRUD operations on some sort of resource that has few or no dependencies, then that area might be a good candidate to begin with. A good clue would be to look for how many has_many/belongs_to calls you find in your various models (in other words, look for how many associations your models have). If you have a model that has a lot of associations, the CRUD interface for that model is probably not going to be all that easy to test because you’re going to have to spin up a lot of associated records in order to get the feature to function. If you can find a model with fewer dependencies, a good first integration test to write might be a test for updating a record.

How to write integration tests

Virtually all the integration tests I write follow the same pattern:

  1. Generate some test data
  2. Log into the application
  3. Visit the page I’m interested in
  4. Perform whatever clicks and typing need to happen in order to exercise the feature I’m testing
  5. Perform an assertion

If you can follow this basic template, you can write integration tests for almost anything. The hardest part is usually the step of generating test data that will get your program/feature into the right state for testing what you want to test.

Most of the Rails features I write are pretty much just some variation of a CRUD interface for a Rails resource. So when I’m developing a feature, I’ll usually write an integration test for at least creating and updating the resource, and maybe deleting an instance of that resource. (I’ll pretty much never write a test for simply viewing a list of resources, the “R” in CRUD, because that functionality is usually exercised anyway in the course of testing other behavior.) Sometimes I write my integration tests before I write the application code that makes the tests pass. Usually, though, I generate the CRUD code first using Rails scaffolds and add the tests afterward. To put it another way, I usually don’t write my integration tests in a TDD fashion. In fact, it’s not really possible to use scaffolds and TDD at the same time. In the choice between having the benefits of scaffolds and having the benefits of TDD, I choose having the benefits of scaffolds. (I do, however, practice TDD when writing other types of tests besides integration tests.)

    Now that we’ve discussed in English what to write integration tests for and how, let’s continue fleshing out this answer using some actual code examples. The remainder of this article is a Rails integration test tutorial.

    Tutorial overview

    Project description

    In this short tutorial we’re going to create a small Rails application which is covered by a handful of integration tests. The application will be ludicrously small as far as Rails applications go, but a ludicrously small Rails application is all we’re going to need.

    Our application will have just a single model, City, which has just a single attribute, name. Since the application has to be called something, we’ll call it “Metropolis”.

    Tutorial outline

    Here are the steps we’re going to take:

    1. Initialize the application
    2. Create the city resource
    3. Write some integration tests for the city resource

    That’s all! Let’s dive in. The first step will be to set up our Rails project.

    Setting up our Rails project

    First let’s initialize the application. The -T flag means “no test framework”. We need the -T flag because, if it’s not there, Rails will assume we want MiniTest, which is not what we want in this case. I’m using the -d postgresql flag because I like PostgreSQL, but you can use whatever RDBMS you want.
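
Using the app name we chose earlier, the command looks something like this:

```shell
rails new metropolis -T -d postgresql
```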

    Next let’s cd into the project directory and create the database.
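
Something like:

```shell
cd metropolis
rails db:create
```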

    With that groundwork out of the way we can add our first resource and start adding our first integration tests.

    Writing our integration tests

    What we’re going to do in this section is generate the city scaffold, then pull up the city index page in the browser, then write some tests for the city resource.

    The tests we’re going to write for the city resource are:

    • Creating a city (with valid inputs)
    • Creating a city (with invalid inputs)
    • Updating a city (with valid inputs)
    • Deleting a city

    Creating the city resource

    The City resource will have only one attribute, name.
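
Generating it with a scaffold (the approach this tutorial assumes) would look something like this:

```shell
rails g scaffold City name:string
rails db:migrate
```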

    Let’s set our root route to cities#index so Rails has something to show when we visit localhost:3000.
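
In config/routes.rb, that looks like this:

```ruby
# config/routes.rb
Rails.application.routes.draw do
  root 'cities#index'
  resources :cities # added by the scaffold generator
end
```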

    If we now run the rails server command and visit http://localhost:3000, we should see the CRUD interface for the City resource.

    Integration tests for City

    Before we can write our tests we need to install a few gems.

    Installing the necessary gems

    Let’s add the following to our Gemfile under the :development, :test group.
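
A plausible minimal version of that group (rspec-rails for the test framework, capybara for driving pages in feature specs; your exact gem list may differ):

```ruby
# Gemfile
group :development, :test do
  gem 'rspec-rails'
  gem 'capybara'
end
```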

    Remember to bundle install.

    In addition to running bundle install which will install the above gems, we also need to install RSpec into our Rails application which is a separate step. The rails g rspec:install command will add a couple RSpec configuration files into our project.

    The last bit of plumbing work we have to do before we can start writing integration tests is to create a directory where we can put the integration tests. I tend to create a directory called spec/features and this is what I’ve seen others do as well. There’s nothing special about the directory name features. It could be called anything at all and still work.
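
Creating it is a one-liner:

```shell
mkdir spec/features
```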

    Writing the first integration test (creating a city)

    The first integration test we’ll write will be a test for creating a city. The steps will be as follows:

    1. Visit the “new city” page
    2. Fill in the Name field with a city name (Minneapolis)
    3. Click the Create City button
    4. Visit the city index page
    5. Assert that the city we just added, Minneapolis, appears on the page

    Here are those five steps translated into code.

    1. visit new_city_path
    2. fill_in 'Name', with: 'Minneapolis'
    3. click_on 'Create City'
    4. visit cities_path
    5. expect(page).to have_content('Minneapolis')

    Finally, here are the contents of a file called spec/features/create_city_spec.rb with the full working test code.
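
A minimal working version of that file might look like this (the scenario wording is my own):

```ruby
# spec/features/create_city_spec.rb
require 'rails_helper'

RSpec.feature 'Creating a city', type: :feature do
  scenario 'with valid inputs' do
    # Steps 1-3: submit the "new city" form
    visit new_city_path
    fill_in 'Name', with: 'Minneapolis'
    click_on 'Create City'

    # Steps 4-5: verify the new city shows up on the index page
    visit cities_path
    expect(page).to have_content('Minneapolis')
  end
end
```

It can be run with rspec spec/features/create_city_spec.rb.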

    Let’s create the above file and then run the test.

    The test passes. There’s kind of a problem, though. How can we be sure that the test is actually doing its job? What if we accidentally wrote the test in such a way that it always passes, even if the underlying feature doesn’t work? Accidental “false positives” certainly do happen. It’s a good idea to make sure we don’t fool ourselves.

    Verifying that the test actually does its job

    How can we verify that a test doesn’t give a false positive? By breaking the feature and verifying that the test no longer passes.

    There are two ways to achieve this:

    1. Write the failing test before we write the feature itself (test-driven development)
    2. Write the feature, write the test, then break the feature

    Method #1 is not an option for us in this case because we already created the feature using scaffolding. That’s fine though. It’s easy enough to make a small change that breaks the feature.

    In our CitiesController, let’s replace the line if @city.save with simply if true. This way the flow through the application will continue as if everything worked, but the city record we’re trying to create won’t actually get created, and so the test should fail when it looks for the new city on the page.

    If we run the test again now, it does in fact fail.

    Now we can change if true back to if @city.save, knowing that our test really does protect against a regression should this city-saving functionality ever break.

    We’ve just added a test for attempting (successfully) to create a city when all inputs are valid. Now let’s add a test that verifies that we get the desired behavior when not all inputs are valid.

    Integration test for trying to create a city with invalid inputs

    In our “valid inputs” case we followed the following steps.

    1. Visit the “new city” page
    2. Fill in the Name field with a city name (Minneapolis)
    3. Click the Create City button
    4. Visit the city index page
    5. Assert that the city we just added, Minneapolis, appears on the page

    For the invalid case we’ll follow a slightly different set of steps.

    1. Visit the “new city” page
    2. Leave the Name field blank
    3. Click the Create City button
    4. Assert that the page contains an error

    Here’s what these steps might look like when translated into code.

    1. visit new_city_path
    2. fill_in 'Name', with: ''
    3. click_on 'Create City'
    4. expect(page).to have_content("Name can't be blank")

    A comment about the above steps: step 2 is actually not really necessary. The Name field is blank to begin with. Explicitly setting the Name field to an empty string is superfluous and doesn’t make a bit of difference in how the test actually works. However, I’m including this step just to make it blatantly obvious that we’re submitting a form with an empty Name field. If we were to jump straight from visiting new_city_path to clicking the Create City button, it would probably be less clear what this test is all about.

    Here’s the full version of the “invalid inputs” test scenario alongside our original “valid inputs” scenario.
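
Together, the two scenarios might look like this:

```ruby
# spec/features/create_city_spec.rb
require 'rails_helper'

RSpec.feature 'Creating a city', type: :feature do
  scenario 'with valid inputs' do
    visit new_city_path
    fill_in 'Name', with: 'Minneapolis'
    click_on 'Create City'

    visit cities_path
    expect(page).to have_content('Minneapolis')
  end

  scenario 'with invalid inputs' do
    visit new_city_path
    fill_in 'Name', with: '' # superfluous, but makes the intent obvious
    click_on 'Create City'

    expect(page).to have_content("Name can't be blank")
  end
end
```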

    Let’s see what we get when we run this test.

    The test fails.

    Instead of finding the text Name can't be blank on the page, it found the text City was successfully created.. Evidently, Rails happily accepted our blank Name input and created a city with an empty string for a name.

    To fix this behavior we can add a presence validator to the name attribute on the City model.
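
The validation is a one-liner on the model:

```ruby
# app/models/city.rb
class City < ApplicationRecord
  validates :name, presence: true
end
```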

    The test now passes.

    Integration test for updating a city

    The steps for this test will be:

    1. Create a city in the database
    2. Visit the “edit” page for that city
    3. Fill in the Name field with a different value from what it currently is
    4. Click the Update City button
    5. Visit the city index page
    6. Assert that the page contains the city’s new name

    Here are these steps translated into code.

    1. nyc = City.create!(name: 'NYC')
    2. visit edit_city_path(id: nyc.id)
    3. fill_in 'Name', with: 'New York City'
    4. click_on 'Update City'
    5. visit cities_path
    6. expect(page).to have_content('New York City')

    Here’s the full test file which we can put at spec/features/update_city_spec.rb.
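
That file might look like this:

```ruby
# spec/features/update_city_spec.rb
require 'rails_helper'

RSpec.feature 'Updating a city', type: :feature do
  scenario 'with valid inputs' do
    nyc = City.create!(name: 'NYC')

    visit edit_city_path(id: nyc.id)
    fill_in 'Name', with: 'New York City'
    click_on 'Update City'

    visit cities_path
    expect(page).to have_content('New York City')
  end
end
```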

    If we run this test (rspec spec/features/update_city_spec.rb), it will pass. Like before, though, we don’t want to trust this test without seeing it fail once. Otherwise, again, we can’t be sure that the test isn’t giving us a false positive.

    Let’s change the line if @city.update(city_params) in CitiesController to if true so that the controller continues on without actually updating the city record. This should make the test fail.

    The test does in fact now fail.

    Integration test for deleting a city

    This will be the last integration test we write. Here are the steps we’ll follow.

    1. Create a city in the database
    2. Visit the city index page
    3. Assert that the page contains the name of our city
    4. Click the “Destroy” link
    5. Assert that the page no longer contains the name of our city

    Translated into code:

    1. City.create!(name: 'NYC')
    2. visit cities_path
    3. expect(page).to have_content('NYC')
    4. click_on 'Destroy'
    5. expect(page).not_to have_content('NYC')

    Here’s the full test file, spec/features/delete_city_spec.rb.
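
That file might look like this:

```ruby
# spec/features/delete_city_spec.rb
require 'rails_helper'

RSpec.feature 'Deleting a city', type: :feature do
  scenario 'from the index page' do
    City.create!(name: 'NYC')

    visit cities_path
    expect(page).to have_content('NYC')

    click_on 'Destroy'
    expect(page).not_to have_content('NYC')
  end
end
```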

    If we run this test, it will pass.

    To protect against a false positive and see the test fail, we can comment out the line @city.destroy in CitiesController.

    Now the test fails.

    Remember to change that line back so that the test passes again.

    At this point we’ve written four test cases:

    • Creating a city (with valid inputs)
    • Creating a city (with invalid inputs)
    • Updating a city (with valid inputs)
    • Deleting a city

    Let’s invoke the rspec command to run all four test cases. Everything should pass.

    Where to go next

    If you want to learn more about writing integration tests in Rails, here are a few recommendations.

    First, I recommend good old practice and repetition. If you want to get better at writing integration tests, write a whole bunch of integration tests. I would suggest building a side project of non-trivial size that you maintain over a period of months. Try to write integration tests for all the features in the app. If there’s a feature you can’t figure out how to write a test for, give yourself permission to skip that feature and come back to it later. Alternatively, since it’s just a side project without the time pressures of production work, give yourself permission to spend as much time as you need in order to get a test working for that feature.

    Second, I’ll recommend a couple books.

    Growing Object-Oriented Software, Guided by Tests. This is one of the first books I read when I was personally getting started with testing. I recommend it often and hear it recommended often by others. It’s not a Ruby book but it’s still highly applicable.

    The RSpec Book. If you’re going to be writing tests with RSpec, it seems like a pretty good idea to read the RSpec book. I also did a podcast interview with one of the authors where he and I talk about testing. I’m recommending this book despite the fact that it advocates Cucumber, which I am pretty strongly against.

    Effective Testing with RSpec 3. I have not yet picked up this book although I did do a podcast interview with one of the authors.

    My last book recommendation is my own book, Rails Testing for Beginners. I think my book offers a pretty good mix of model-level testing and integration-level testing with RSpec and Capybara.

Common causes of flickering/flapping/flaky tests

A flapping test is a test that sometimes passes and sometimes fails even though the application code being tested hasn’t changed. Flapping tests can be hard to reproduce, diagnose, and fix. Here are some of the common causes of flapping tests that I know of. Knowing the common causes goes a long way toward diagnosis, and once diagnosis is out of the way, the battle is half over.

Race conditions

Let’s say you have a Capybara test that clicks a page element that fires off an AJAX request. The AJAX request completes, making some other page element clickable. Sometimes the AJAX request beats the test runner, meaning the test works fine. Sometimes the test runner beats the AJAX request, meaning the test breaks.
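
The usual fix is to lean on Capybara’s waiting behavior rather than assuming the AJAX request has finished. A sketch (the selectors and text here are hypothetical):

```ruby
# Brittle: all() returns immediately, so this can act on a page whose
# AJAX-rendered results haven't appeared yet.
# page.all('.result').first.click

# More robust: have_content and find retry until the element appears
# (up to Capybara.default_max_wait_time), so the test waits out the AJAX.
expect(page).to have_content('Search results')
find('.result', match: :first).click
```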

“Leaking” tests (non-deterministic test suite)

Sometimes a test will fail to clean up after itself and “leak” state into other tests. To take a contrived but obvious example, let’s say test A checks that the “list customers” page shows exactly 5 customers and test B verifies that a customer can be created successfully. If test B doesn’t clean up after itself, then running test B before test A will cause test A to fail because there are now 6 customers instead of 5. This exact issue is almost never an issue in Rails (because each test runs inside a transaction) but database interaction isn’t the only possible form of leaky tests.

When I’m fixing this sort of issue I like to a) determine an order of running the tests that always causes my flapping test to fail and then b) figure out what’s happening in the earlier test that makes the later test fail. In RSpec I like to take note of the seed number, then run rspec --seed <seed number> which will re-run my tests in the same order.

Reliance on third-party code

Sometimes tests unexpectedly fail because some behavior of third-party code has changed. I actually sometimes consciously choose to live with this category of flapping test. If an API I rely on goes down and my tests fail, the tests arguably should fail because my application is genuinely broken. But if my tests are flapping in a way that’s not legitimate, I’d probably change my test to use something like VCR instead of hitting the real API.
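
For example, a minimal VCR setup (assuming the vcr and webmock gems) might look like this:

```ruby
# spec/support/vcr.rb
require 'vcr'

VCR.configure do |config|
  config.cassette_library_dir = 'spec/cassettes' # where recorded responses live
  config.hook_into :webmock                      # intercept HTTP requests via webmock
  config.configure_rspec_metadata!               # lets you tag examples with :vcr
end
```

The first run of a :vcr-tagged example records the real HTTP responses to a cassette file; subsequent runs replay the cassette, so the test no longer depends on the live API.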

A Rails model test “hello world”

The following is an excerpt from my book, Rails Testing for Beginners.

What we’re going to do

What we’re going to do in this post is:

  1. Initialize a new Rails application
  2. Install RSpec using the rspec-rails gem
  3. Generate a User model
  4. Write a single test for that User model

The test we’ll write will be a trivial one. The user will have both a first and last name. On the User model we’ll define a full_name method that concatenates first and last name. Our test will verify that this method works properly. Let’s now begin the first step, initializing the Rails application.

Initializing the Rails application

Let’s initialize the application using the -T flag meaning “no test framework”. (We want RSpec in this case, not the Rails default of MiniTest.)
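Assuming we call the application something like `my_app` (the name is arbitrary), that looks like:

```shell
# -T skips the default Minitest test files; we'll add RSpec instead.
rails new my_app -T
cd my_app
```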

Now we can install RSpec.

Installing RSpec

To install RSpec, all we have to do is add rspec-rails to our Gemfile and then do a bundle install.
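In the Gemfile:

```ruby
# Gemfile
group :development, :test do
  gem 'rspec-rails'
end
```

Then run `bundle install`.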

We’ve installed the RSpec gem but we still have to run rails g rspec:install which will create a couple configuration files for us.
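The generator’s output looks roughly like this (abbreviated):

```shell
rails g rspec:install
#  create  .rspec
#  create  spec/spec_helper.rb
#  create  spec/rails_helper.rb
```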

Now we can move on to creating the User model.

Creating the User model

Let’s create a User model with just two attributes: first_name and last_name.
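Using the model generator:

```shell
rails g model user first_name:string last_name:string
rails db:migrate
```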

Now we can write our test.

The User test

Writing the test

Here’s the test code:
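A spec along these lines (a reconstruction based on the description above, not necessarily the book’s verbatim code):

```ruby
# spec/models/user_spec.rb
require 'rails_helper'

RSpec.describe User do
  describe '#full_name' do
    it 'returns the first and last names, concatenated' do
      user = User.new(first_name: 'Abraham', last_name: 'Lincoln')
      expect(user.full_name).to eq('Abraham Lincoln')
    end
  end
end
```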

Let’s take a closer look at this test code.

Examining the test

Let’s focus for a moment on this part of the test code:
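Reconstructed from the description that follows, that part is the two-line body of the example:

```ruby
user = User.new(first_name: 'Abraham', last_name: 'Lincoln')
expect(user.full_name).to eq('Abraham Lincoln')
```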

This code is saying:

  1. Instantiate a new User object with first name Abraham and last name Lincoln.
  2. Verify that when we call this User object’s full_name method, it returns the full name, Abraham Lincoln.

What are the its and describes all about? Basically, these “it blocks” and “describe blocks” are there to allow us to put arbitrary labels on chunks of code to make the test more human-readable. (This is not precisely true but it’s true enough for our purposes for now.) As an illustration, I’ll drastically change the structure of the test:
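Something like this (a reconstruction in the spirit of the original; the nonsense descriptions are the point):

```ruby
RSpec.describe User do
  describe 'a thing' do
    describe 'another thing' do
      describe 'yet another thing' do
        it 'does stuff' do
          user = User.new(first_name: 'Abraham', last_name: 'Lincoln')
          expect(user.full_name).to eq('Abraham Lincoln')
        end
      end
    end
  end
end
```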

This test will pass just fine. Every it block needs to be nested inside a describe block, but RSpec doesn’t care how deeply nested our test cases are or whether the descriptions we put on our tests (e.g. the vague and mysterious it 'does stuff') make any sense.

Running the test

If we run this test it will fail. It fails because the full_name method is not yet defined.

Let’s now write the full_name method.
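The method itself is a one-liner (sketched here from the description above):

```ruby
# app/models/user.rb
class User < ApplicationRecord
  def full_name
    "#{first_name} #{last_name}"
  end
end
```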

Now the test passes.

Reader Q&A: Kaemon’s question about test-first vs. test-after

Recently reader Kaemon L wrote me with the following question:

“As a beginner, is it better to write tests before you code to make it pass? or is it better to code first, write tests for the code to pass, and then add more tests as you come across bugs? In my experience so far learning RSpec, I’ve found it easier to code first and then write tests afterwards. Only because when I would try to write tests first I wasn’t exactly sure what needed to be tested, or how I was planning to write the code.”

This is a great question. In addressing this question I find it useful to realize that when you’re learning testing you’re actually embarking on two parallel endeavors:

1. Writing tests (outcome)
2. Learning how to write tests (education)

I think it’s useful to make the distinction between these two parts of the work. If you make the realization that achieving the desired outcome is only half your job, and the other half is learning, then it frees you up to do things “incorrectly” for the sake of moving forward.

With that out of the way, what’s actually better? Writing tests first or after?

I’m not sure that it makes sense to frame this question in terms of “better” or “worse”. When I think of test-driven development, I don’t think of it as “better” than test-after in all situations; I think of TDD as having certain advantages in certain scenarios.

What are the advantages of test-driven development?

TDD can separate the what from the how. If I write the test first, I can momentarily focus on what I want to accomplish and relieve my mind of the chore of thinking of how. Then, once I switch from writing the test to writing the implementation, I can free my mind of thinking about everything the feature needs to do and just focus on making the feature work.

TDD increases the chances that every single thing I’ve written is covered by a test. The “golden rule” of TDD (which I don’t always follow) is said to be “never write any new code without a failing test first”. If I follow that, I’m virtually guaranteed 100% test coverage.

TDD forces me to write easily-testable code. If I write the test first and the code after, I’m forced to write code that can be tested. There’s no other way. If I write the code first and try to test it afterward, I might find myself in a pickle. As a happy side benefit, code that’s easily testable happens to also usually be easy to understand and to work with.

TDD forces me to have a tight feedback loop. I write a test, I write some code. I write another test, I write some more code. When I write my tests after, I’m not forced to have such a fast feedback loop. There’s nothing stopping me from coding for hours before I stop myself and write a test.

If I choose to write my tests after my application code instead of before, I’m giving up the above benefits. But that doesn’t mean that test-after is automatically an inferior workflow in all situations.

Let’s go back to the two parallel endeavors listed above: writing tests and learning how to write tests. If I’m trying to write tests as I’m writing features and I just can’t figure out how to write the test first, then I have the following options:

1. Try to plow through and somehow write the test first anyway
2. Give up and don’t write any tests
3. Write the tests after

If #1 is too hard and I’m just hopelessly stuck, then #3 is a much better option than #2. Especially if I make a mental shift and switch from saying “I’m trying to write a test” to saying “I’m trying to learn how to write tests”. If all I’m trying to do is learn how to write tests, then anything goes. There’s literally nothing at all I could do wrong as part of my learning process, because the learning process is a separate job from producing results.

Lastly, what if I get to the stage in my career where I’m fully comfortable with testing? Is TDD better than test-after? I would personally consider myself fully comfortable with testing at this stage in my career (although of course no one is ever “done” learning). I deliberately do not practice TDD 100% of the time. Sometimes I just find it too hard to write the test first. In these cases sometimes I’ll do a “spike” where I write some throwaway code just to get a feel for what the path forward might look like. Then I’ll discard my throwaway code afterward and start over now that I’m smarter. Other times I’ll just begin with the implementation and keep a list of notes like “write test for case X, write test for case Y”.

To sum it all up: I’m not of the opinion that TDD is a universally superior workflow to non-TDD. I don’t think it’s important to hold oneself to TDD practices when learning testing. But once a person does reach a point of being comfortable with testing, TDD is an extremely valuable methodology to follow.

Reader Q&A: Tommy’s question about testing legacy code

Code with Jason subscriber Tommy C. recently wrote in with the following question:


So I have found that one of the hurdles to testing beginners face is that the code they are trying to test is not always very testable. This is either because they themselves have written it that way or because they have inherited it. So, this presents a sort of catch 22. You have code with no tests that is hard to test. You can’t refactor the code because there are no tests in place to ensure you have not changed the behavior of the code.

I noticed that you have said that you don’t bother to test controllers or use request specs. I agree that in your situation, since you write really thin controllers that is a good call. However, in my situation, I have inherited controllers that are doing some work that I would like to test. I would like to move that logic out eventually, but right now all I can get away with is adding some test/specs.

These are some of the things that make testing hard for me. When I’m working on a greenfield side project all is good, but you don’t always have clean, testable code to work with.


Chicken/egg problem

Tommy brings up a classic legacy project chicken/egg problem: you can’t add tests until you change the way the code is structured, but you’re afraid to change the structure of the code before you have tests. It’s a seemingly intractable problem but, luckily, there’s a path forward.

The answer is to make use of the Extract Method and Extract Class refactoring techniques, also known as Sprout Method and Sprout Class. The idea is that if you come across a method that’s too long to easily write a test for, you just grab a chunk of lines from the method and move it – completely unmodified – into its own method or class, and then write tests around that new method or class. These techniques are a way to be reasonably sure (not absolutely guaranteed, but reasonably sure) that your small change has not altered the behavior of the application.
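As a sketch (the report code below is invented for illustration, not from Tommy’s project), the move looks like this:

```ruby
# Imagine #generate used to contain the name-formatting lines inline,
# buried among many other lines. They've been moved, unmodified, into
# their own method, which can now be tested in isolation.
class ReportGenerator
  def initialize(customers)
    @customers = customers
  end

  def generate
    # ...imagine many lines of setup, querying, and rendering here...
    @customers.map { |c| display_name(c) }.join("\n")
  end

  # Extracted method: small and side-effect-free, so it's easy to put
  # a test around even though #generate as a whole is still untested.
  def display_name(customer)
    name = "#{customer[:last_name]}, #{customer[:first_name]}"
    customer[:vip] ? "#{name} (VIP)" : name
  end
end
```

Because the lines were moved without modification, the risk of a behavior change is low, and the new method gives you a beachhead of tested code to expand from.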

I learned about Sprout Method/Sprout Class from Michael Feathers and Martin Fowler. I also wrote a post about using Sprout Method in Ruby here.

Request specs

To address controller tests/request specs: Tommy might be referring to a post I wrote where I said most of the time I don’t use controller specs/request specs. (I also wrote about the same thing in my book, Rails Testing for Beginners.) There are two scenarios where I do use request specs, though: API-only projects and legacy projects that have a bunch of logic in the controllers. I think Tommy is doing the exact right thing by putting tests around the controller code and gradually moving the code out of the controllers over time.

If you, like Tommy, are trying to put tests on a legacy project and finding it difficult, don’t despair. It’s just a genuinely hard thing. That’s why people have written entire books about it!

Do you have a question about testing Rails legacy code, or about anything else to do with testing? Just email me or tweet me at @jasonswett. If I’m able to, I’ll write an answer to your question just like I did with Tommy’s.