Author Archives: Jason Swett

How to launch an EC2 instance for hosting a Rails application

This post is the first in my series on how to deploy a Ruby on Rails application to AWS.

This post will walk you through launching an EC2 instance using the AWS console GUI. By the end of this post you’ll have an Ubuntu EC2 instance up and running.

1. Choose the instance type

Log into your AWS console and go to the EC2 section under the Services menu.

On the left-hand menu, click Instances.

On the subsequent page, click Launch Instance.

You’ll be shown a list of Amazon Machine Images to choose from. Select Ubuntu Server (ami-0d5d9d301c853a04a).

On the next screen click Review and Launch without changing anything.

Click Launch on the screen that follows.

2. Create and download a key pair

After you click Launch you’ll be prompted to either create a key pair or choose an existing one. I’m not going to assume you have an existing key pair to use, so I’ll have you create a new one.

Choose “Create a new key pair”. For the name, use ec2-tutorial. Then click Download Key Pair.

If you’re wondering what a key pair is exactly, the short explanation is that a key pair is a way to ensure that only you can connect to your new EC2 instance. You’ll download your new key pair to your local machine, then anytime you SSH into your EC2 instance, you’ll specify that you want to use that key pair when you connect. If your local key pair matches what your EC2 instance has, you’ll be good to go. If not, you’ll be denied access.

3. Launch the EC2 instance

After you’ve downloaded your key pair (make sure you download that key pair!) click Launch Instances. At this point your EC2 instance will finally actually be launched.

4. SSH into your new instance as a test

While you’re waiting for your EC2 instance to be ready, move the ec2-tutorial.pem file to ~/.ssh/ec2-tutorial.pem.

Go back to Services > EC2 > Instances. Right-click on your instance and click Connect.

In the popup that comes up, copy the ssh command that appears under “Example:”. You won’t be able to use it yet, though.

You’ll need to change the command from this

ssh -i "ec2-tutorial.pem" ubuntu@<your-instance-public-dns>

to this

ssh -i "~/.ssh/ec2-tutorial.pem" ubuntu@<your-instance-public-dns>

The difference is that the initial command won’t have the correct path to ec2-tutorial.pem.

You’ll also need to change the permissions on ec2-tutorial.pem. The ssh program doesn’t like it when the specified key’s permissions are overly open. Change the permissions as follows:

Now you can finally run your SSH command.

When asked if you’re sure you want to continue connecting, say yes.

Congratulations. You’re now the proud owner of a fresh new EC2 instance!

Now we can move on to the next step, installing nginx and Passenger.

How I approach test coverage metrics

Different developers have different opinions about test coverage. Some engineering organizations not only measure test coverage but have rules around it. Other developers think test coverage is basically BS and don’t measure it at all.

I’m somewhere in between. I think test coverage is a useful metric but only in a very approximate and limited sort of way.

If I encounter two codebases, one with 10% coverage and another with 90% coverage, I can probably safely conclude that the latter codebase has a healthier test suite. But if the difference is between 90% and 100%, I’m not convinced that it means much.

I personally measure test coverage on my projects, but I don’t try to optimize for it. Instead, I make testing a habit and let my habitual coding style be my guiding force instead of the test coverage metrics.

If you’re curious what type of test coverage my normal workflow naturally results in, I just checked the main project I’ve been working on for the last year or so and the coverage level is 96.62%, according to simplecov. I feel good about that number, although more important to me than the test coverage percentage is what it feels like to work with the codebase on a day-to-day basis. Are annoying regressions popping up all the time? Is new code hard to write tests for due to the surrounding code not having been written in an easily testable way? Then the codebase could probably benefit from more tests.

Exosuit demo video #2: launching an EC2 instance from Exosuit’s web UI

Exosuit is a tool I’ve been working on to make Rails-AWS deployments almost as easy as Rails-Heroku deployments.

Back in late September 2019, I coded up an initial version of Exosuit and released a demo video of what I had built.

Since then a lot has changed, including my conception of what Exosuit even is.

My original thought was that Exosuit would be mainly a command-line tool, with a web UI in a supporting role. Now my thinking is that Exosuit will be mainly a web UI tool, with a command-line tool in a supporting role.

Here’s what I’m currently imagining the launch process to look like, roughly:

  1. Create an Exosuit account
  2. Connect Exosuit with your AWS account and let Exosuit launch an EC2 instance
  3. Run git push exosuit master to deploy your Rails application to your new EC2 instance

So far I have steps 1 and 2 complete. It might not sound like a lot, but it took me over a month of work!

Below is a demo video of what Exosuit can do so far.

If you’d like to get real-time updates on my progress with Exosuit, you can sign up for my email list below or follow me on Twitter.

My general approach to Rails testing

My development workflow

The code I write is influenced by certain factors upstream of the code itself.

Before I start coding a feature, I like to do everything I can to try to ensure that the user story I’m working on is small and that it’s crisply defined. By small, I mean not more than a day’s worth of work. By crisply defined, I mean that the user story includes a reasonably precise and detailed description of what the scope of that story is.

I also like to put each user story through a reasonably thorough scrutinization process. In my experience, user stories often aren’t very thoroughly thought through and contain a level of ambiguity or inconsistency such that they’re not actually possible to implement as described.

I find that if care is taken to make sure the user stories are high-quality before development work begins, the development work goes dramatically more smoothly.

Assuming I’m starting with good user stories, I’ll grab a story to work on and then break that story into subtasks. Each day I have a to-do list for the day. On that to-do list I’ll put the subtasks for whatever story I’m currently working on. Here’s an example of what that might look like, at least as a first draft:

Feature: As a staff member, I can manage insurance payments

  • Scaffolding for insurance payments
  • Feature spec for creating insurance payment (valid inputs)
  • Feature spec for creating insurance payment (invalid inputs)
  • Feature spec for updating insurance payment

(By the way, I write more about my specific formula for writing feature specs in a separate post.)

Where my development workflow and my testing workflow meet

Looking at the list above, you can see that my to-do list is expressed mostly in terms of tests. I do it this way because I know that if I write a test for a certain piece of functionality, then I’ll of course have to build the functionality itself as part of the process.

When I use TDD and when I don’t

Whether or not I’ll use TDD on a feature depends largely on whether it’s a whole new CRUD interface or whether it’s a more minor modification.

If I’m working on a whole CRUD interface, I’ll use scaffolding to generate the code, and then I’ll make a pass over everything and write tests. (Again, I write more about the details of this process here.) The fact that I use scaffolding for my CRUD interfaces makes TDD impossible. This is a trade-off I’m willing to make due to how much work scaffolding saves me and due to the fact that I pretty much never forget to write feature specs for my scaffold-generated code.

It’s also rare that I ever need to write any serious amount of model specs for scaffold-generated code. I usually write tests for my validators using shoulda-matchers, but that’s all. (I often end up writing more model specs later as the model grows in feature richness, but not in the very beginning.)

If instead of writing a whole new CRUD interface I’m just making a modification to existing code (or fixing a bug), that’s a case where I usually will use TDD. I find that TDD in these cases is typically easier and faster than doing test-after or skipping tests altogether. If for example I need to add a new piece of data to a CSV file my program generates, I’ll go to the relevant test file and add a failing expectation for that new piece of data. Then I’ll go and add the code to put the data in place to make the test pass.

The other case where I usually practice TDD is if the feature I’m writing is not a CRUD-type feature but rather more of a model-based feature, where the interesting work happens “under the hood”. In those cases I also find TDD to be easier and faster than not-TDD.

The kinds of tests I write and the kind I don’t

I write more about this in my book, but I tend to mostly write model specs and feature specs. I find that most other types of tests tend to have very little value. By the way, I’m using the language of “spec” instead of “test” because I use RSpec instead of Minitest, but my high-level approach would be the exact same under any testing framework.

When I use mocks and stubs and when I don’t

I almost never use mocks or stubs in Rails projects. In 8+ years of Rails development, I’ve hardly ever done it.

How I think about test coverage

I care about test coverage enough to measure it, but I don’t care about test coverage enough to set a target for it or impose any kind of rule on myself or anything like that. I’ve found that the natural consequence of me following my normal testing workflow is that I end up with a pretty decent test coverage level. Today I checked the test coverage on the project I’ve been working on for the last year or so and the measurement was 97%.

The test coverage metric isn’t my main barometer for the health of a project’s tests, though. To me it seems much more useful to pay attention to how many exceptions pop up in production and what it feels like to do maintenance coding on the project. “What it feels like to do maintenance coding” is obviously not a very quantifiable metric, but of course not everything that counts can be counted.

The difference between domains, domain models, object models and domain objects

I recently came across a question regarding the difference between domains and domain models. These terms probably mean different things to different people, but I’ll define the terms as I use them.


Domain

When I’m working on a software project, the domain is the conceptual area I’m working inside of. For example, if I’m working on an application that has to do with restaurants, the domain is restaurants.

Domain model

The world is a staggeringly complex place. Even relatively simple-seeming things like restaurants involve way more complexity than could be accurately captured in a software system. So instead of coding to a domain, we have to code to a domain model.

For me, a domain model is a separate thing from any particular code or piece of software. If I come up with a domain model for something to do with restaurants, I could express my domain model on a piece of paper if I wanted to (or just inside my head). My domain model is a standalone conceptual entity, regardless of whether I actually end up writing any software based on it or not.

A domain model also doesn’t even need to be consciously expressed in order to exist. In fact, on most software systems I’ve ever worked on, the domain model of the system only exists in the developers’ minds. The domain model isn’t something that someone planned at the beginning, it’s something that each developer synthesizes in his or her mind based on the code that exists in the application and based on what the developer understands about the domain itself.

Object model

The place where my domain model turns into actual code is in the object model. If my domain concepts include restaurant, order, and customer, then my object model will probably include objects like Restaurant, Order and Customer.

Domain object

Any object in my object model that also exists as a concept in my domain model I would call a domain object. In the previous example, Restaurant, Order and Customer would all be domain objects.

Not every object in a system is a domain object. Some objects are value objects. A value object is an object whose identity doesn’t matter. Examples of concepts that would make sense as value objects rather than domain objects are phone number values or money values.

One type of “object” that’s popular in the Rails world that I tend not to use is the service object, for reasons explained in the linked post.

Related reading: Domain Driven Design

Why I don’t like Dokku

In the process of building and talking about Exosuit, my tool to make AWS-Rails deployment easier, the question has come up a couple times: “Have you heard about Dokku?”

I’ve seen Dokku but I don’t want to use it. There are two reasons. First, I don’t want to have to involve Docker. (More precisely, I don’t want to make my Exosuit users have to involve Docker.) Second, I find Dokku aesthetically repulsive. I’ll explain what I mean by “aesthetically repulsive”.

Here are the installation instructions from the Dokku docs:
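Roughly (from the v0.18.3 era of the docs; the exact bootstrap URL may have changed since):

```shell
wget
sudo DOKKU_TAG=v0.18.3 bash
```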

Yuck. What’s all this stuff? What’s DOKKU_TAG=v0.18.3 all about?

Obviously, I’m a reasonably smart guy (unless you ask my wife) and I could figure out what these details mean. And of course, these details don’t even really matter that much. If I just copy and paste the commands into my terminal, Dokku will get installed.

My objection to the Dokku installation steps isn’t a technical objection. It’s an aesthetic objection. Compare Dokku’s installation commands with the installation command for the Heroku CLI:
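For Ubuntu, that’s a single line, roughly:

```shell
curl | sh
```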

So much nicer. And that’s exactly why the installation steps for Exosuit look like this:
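Something like the following (the repo URL and exact steps here are my reconstruction; see the Getting Started guide for the real ones):

```shell
git clone
cd exosuit
bin/exo launch
```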

So, yes, I’ve heard of Dokku, but I don’t want to use it. I realize that my objections to Dokku might seem silly/weird/crazy/stupid to some people. That’s fine. Exosuit is not for those people.

If you get what I’m trying to say with all this, no explanation is necessary, and if you don’t, no explanation will help.

How I wrote a command-line Ruby program to manage EC2 instances for me

Why I did this

Heroku is great, but not in 100% of cases

When I want to quickly deploy a Rails application, my go-to choice is Heroku. I’m a big fan of the idea that I can just run heroku create and have a production application online in just a matter of seconds.

Unfortunately, Heroku isn’t always a desirable option. If I’m just messing around, I don’t usually want to pay for Heroku features, but I also don’t always want my dynos to fall asleep after 30 minutes like on the free tier. (I’m aware that there are ways around this but I don’t necessarily want to deal with the hassle of all that.)

Also, sometimes I want finer control than what Heroku provides. I want to be “closer to the metal” with the ability to directly manage my EC2 instances, RDS instances, and other AWS services. Sometimes I desire this for cost reasons. Sometimes I just want to learn what I think is the valuable developer skill of knowing how to manage AWS infrastructure.

Unfortunately, using AWS by itself isn’t very easy.

Setting up Rails on bare EC2 is a time-consuming and brain-consuming hassle

Getting a Rails app standing up on AWS is pretty hard and time-consuming. I’m actually not even going to get into Rails-related stuff in this post because even the small task of getting an EC2 instance up and running—with no additional software installed on that instance—is a lot harder than I think it should be, and there’s a lot to discuss and improve just inside that step.

Just to briefly illustrate what a pain in the ass it is to get an EC2 instance launched and to SSH into it, here are the steps. The steps that follow are the command-line steps. I find the AWS GUI console steps roughly equally painful.

1. Use the AWS CLI create-key-pair command to create a key pair. This step is necessary for later when I want to SSH into my instance.

2. Think of a name for the key pair and save it somewhere. Thinking of a name might seem like a trivially small hurdle, but every tiny bit of mental friction adds up. I don’t want to have to think of a name, and I don’t want to have to think about where to put the file (even if that means just remembering that I want to put the key in ~/.ssh, which is the most likely case).

3. Use the run-instances command, passing in an AMI ID (AMI == Amazon Machine Image) and my key name. Now I have to go look up the run-instances syntax (because I sure as hell don’t remember it), look up my AMI ID, and remember what my key name is. (If you don’t know what an AMI ID is, it’s what determines whether the instance will be Ubuntu, Amazon Linux, Windows, etc.)

4. Use the describe-instances command to find out the public DNS name of the instance I just launched. This means I either have to search the JSON response of describe-instances for the PublicDnsName entry or apply a filter. Just like with every AWS CLI command, I’d have to go look up the exact syntax for this.

5. Run the ssh command, passing in my instance’s DNS and the path to my key. This step is probably the easiest, although it took me a long time to commit the exact ssh -i syntax to memory. For the record, the command is ssh -i ~/.ssh/my_key.pem followed by user@host. It’s a small pain in the ass to have to look up the public DNS for my instance again and remember whether my EC2 user is going to be ubuntu or ec2-user (it depends on which AMI I used).
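Spelled out as commands, the manual flow above looks roughly like this (the key name, AMI ID and paths are illustrative):

```shell
# 1-2. Create a key pair and save it somewhere
aws ec2 create-key-pair --key-name my_key \
  --query 'KeyMaterial' --output text > ~/.ssh/my_key.pem
chmod 400 ~/.ssh/my_key.pem

# 3. Launch an instance from an AMI
aws ec2 run-instances --image-id ami-0d5d9d301c853a04a \
  --instance-type t2.micro --key-name my_key

# 4. Dig the public DNS name out of the describe-instances JSON
aws ec2 describe-instances \
  --query 'Reservations[].Instances[].PublicDnsName'

# 5. SSH in (the user depends on the AMI: ubuntu for Ubuntu images)
ssh -i ~/.ssh/my_key.pem ubuntu@<public-dns-name>
```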

My goals for my AWS command-line tool

All this fuckery was a big hassle so I decided to write my own command-line tool to manage EC2 instances. I call the tool Exosuit. You can actually try it out yourself by following these instructions.

There were four specific capabilities I wanted Exosuit to have.

Launch an instance

By running bin/exo launch, it should launch an EC2 instance for me. It should assume I want Ubuntu. It should let me know when the instance is ready, and what its instance ID and public DNS are.

SSH into an instance

I should be able to run bin/exo ssh, get prompted for which instance I want to SSH into, and then get SSH’d into that instance.

List all running instances

I should be able to run bin/exo instances to see all my running instances. It should show the instance ID and public DNS for each.

Terminate instances

I should be able to run bin/exo terminate which will show me all my instance IDs and allow me to select one or more of them for termination.

How I did it

Side note: when I first wrote this, I forgot that the AWS SDK for Ruby existed, so I reinvented some wheels. Whoops. After I wrote this I refactored the project to use the AWS SDK instead of shelling out to the AWS CLI.

For brevity I’ll focus on the bin/exo launch command.

Using the AWS CLI run-instances command

The AWS CLI command for launching an instance looks like this:
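Roughly like this (the instance type and profile name here are illustrative):

```shell
aws ec2 run-instances \
  --image-id ami-0d5d9d301c853a04a \
  --count 1 \
  --instance-type t2.micro \
  --key-name normal-quiet-carrot \
  --profile personal
```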

Hopefully most of these flags are self-explanatory. You might wonder where the key name of normal-quiet-carrot came from. When the bin/exo launch command is run, Exosuit asks “Is there a file defined at .exosuit/config.yml that contains a key pair name and path? If not, create that file, create a new key pair with a random phrase for a name, and save the name and path to that file.”

Here’s what my .exosuit/config.yml looks like:
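Roughly (the exact YAML keys are a sketch; the real file holds a key pair name, a key path and optionally an aws_profile_name):

```yaml
key_pair:
  name: normal-quiet-carrot
  path: ~/.ssh/normal-quiet-carrot.pem
aws_profile_name: personal
```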

The aws_profile_name is something that I imagine most users aren’t likely to need. I personally happen to have multiple AWS accounts, so it’s necessary for me to send a --profile flag when using AWS CLI commands so AWS knows which account of mine to use. If a profile isn’t specified in .exosuit/config.yml, Exosuit will just leave the --profile flag off and everything will still work fine.

Abstracting the run-instances command

Once I had coded Exosuit to construct a few different AWS CLI commands (e.g. run-instances, terminate-instances), I noticed that things were getting a little repetitive. Most troubling, I had to always remember to include the --profile flag (just as I would if I were typing all this on the command line manually), and I didn’t always remember to do so. In those cases my command would get sent to the wrong account. That’s bad.

So I created an abstraction called AWSCommand. Here’s what a usage of it looks like:

You can probably see the resemblance it bears to the bare run-instances usage. Note the conspicuous absence of the profile flag, which is now automatically included every single time.

Listening for launch success

One of my least favorite things about manually launching EC2 instances is having to check periodically to see when they’ve started running. So I wanted Exosuit to tell me when my EC2 instance was running.

I achieved this by writing a loop that hits AWS once per second, checking the state of my new instance each time.

You might wonder what Instance.find and instance.running? do.

The Instance.find method will run the aws ec2 describe-instances command, parse the JSON response, then grab the relevant JSON data for whatever instance_id I passed to it. The return value is an instance of the Instance class.

When an instance of Instance is instantiated, an instance variable gets set (pardon all the “instances”) with all the JSON data for that instance that was returned by the AWS CLI. The instance.running? method simply looks at that JSON data (which has since been converted to a Ruby hash) and checks to see what the value of ['State']['Name'] is.

Here’s an abbreviated version of the Instance class for reference.

(By the way, all the Exosuit code is available on GitHub if you’d like to take a look.)

Success notification

As you can see from the code a couple snippets above, Exosuit lets me know once my instance has entered a running state. At this point I can run bin/exo ssh, bin/exo instances or bin/exo terminate to mess with my instance(s) as I please.

Demo video

Here’s a small sample of Exosuit in action:

Try it out yourself

If you’d like to try out Exosuit, just visit the Getting Started with Exosuit guide.

If you think this idea is cool and useful, please let me know by opening a GitHub issue for a feature you’d like to see, or tweeting at me, or simply starring the project on GitHub so I can gauge interest.

I hope you enjoyed this explanation and I look forward to sharing the next steps I take with this project.

Exosuit demo video #1: launching and SSHing into an EC2 instance

Update: this video is now out of date. See demo video #2 for a more up-to-date version.

Recently I decided to begin work on a tool that makes it easier to deploy Rails apps to AWS. My wish is for something that has the ease of use of Heroku, but the fine-grained control of AWS.

My tool, which is free and open source, is called Exosuit. Below is a demo video of what Exosuit can do so far which, given the fact that Exosuit has only existed since the day before this writing, isn’t very much. Currently Exosuit can launch an EC2 instance for you and let you SSH into it.

But I think even this little bit is pretty cool – I don’t know of any other method that lets you go from zero to SSH’d into your EC2 instance in 15 seconds like Exosuit does.

If you’d like to take Exosuit for a spin on your own computer, you can! Just visit the Exosuit repo and go to the getting started guide. And if you do try it, please tweet at me or send me an email with any thoughts or feedback you might have.

Rails service objects: why they’re bad, and what I use instead

The good and bad thing about Active Record models

In the early stages of a Rails project’s life, the pattern of putting all the model code into objects that inherit from Active Record works out pretty nicely. Each model object gets its own database table, some validations, some associations, and a few custom methods.

Later in the project’s lifecycle, this pattern of putting everything into Active Record objects gets less good. If there’s an Appointment model, for example, everything remotely related to an appointment gets put into the Appointment model, leading to models with tens of methods and hundreds if not thousands of lines of code.

Despite the fact that this style of coding—stuffing huge amounts of code into Active Record models—leads to a mess, many Rails projects are built this way. My guess is that the reason this happens is that developers see the organizational structures Rails provides (models, controllers, views, helpers, etc.) and don’t realize that they’re not limited to ONLY these structures. I myself for a long time didn’t realize that I wasn’t limited to only those structures.

Service objects as an alternative to the “Active Record grab bag” anti-pattern

A tactic I frequently hear recommended as an antidote to the “Active Record grab bag” anti-pattern is to use “service objects”. I put the term “service objects” in quotes because it seems to mean different things to different people.

For my purposes I’ll use the definition that I’ve been able to synthesize from several of the top posts I found when I googled for rails service objects.

The idea is this: instead of putting domain logic in Active Record objects, you put domain logic in service objects which live in a separate place in your Rails application, perhaps in app/services. Some example service class names I found in various service object posts I found online include:

  • TweetCreator
  • RegisterUser
  • CompleteOrder
  • NewRegistrationService

So, typically, a service object is responsible for carrying out some action. A TweetCreator creates a tweet, a RegisterUser object registers a user. This seems to be the most commonly held (or at least commonly written about) conception of a service object. It’s also apparently the idea behind the Interactor gem.

Why service objects are a bad idea

One of the great benefits of object-oriented programming is that we can bestow objects with a mix of behavior and data to give the objects powerful capabilities. Not only this, but we can map objects fairly neatly with concepts in the domain model in which we’re working, making the code more easily understandable by humans.

Service objects throw out the fundamental advantages of object-oriented programming.

Instead of having behavior and data neatly encapsulated in easy-to-understand objects with names like Tweet or User, we have conceptually dubious ideas like TweetCreator and RegisterUser. “Objects” like this aren’t abstractions of concepts in the domain model. They’re chunks of procedural code masquerading as object-oriented code.

A better alternative to service objects: domain objects

Let me take a couple service object examples I’ve found online and rework them into something better.

Tweet example

The first example I’ll use is TweetCreator from this TopTal article, the first result when I google for rails service objects.

I think it’s far better just to have a Tweet object.

Isn’t it more natural to say'hello').send than to say'hi').send_tweet? I think so. Rather than being this weird single-purpose procedure-carrier-outer, Tweet is just a simple representation of a real domain concept. This, to me, is what domain objects are all about.

The differences between the good and bad examples in this case are pretty small, so let me address the next example in the TopTal article which I think is worse.

Currency exchange example

First, the abstractions of MoneyManager, CurrencyExchanger, etc. aren’t really abstractions. I’m automatically skeptical of any object whose name ends in -er.

I’m not going to try to rework this example line for line because there’s too much there, but let’s see if we can start toward something better.

Someone could probably legitimately find fault with the details of my currency conversion logic (an area with which I have no experience) but hopefully the conceptual superiority of my approach over the MoneyManager approach is clear. A currency value is clearly a real thing in the real world, and so is a currency type and so is an exchange rate. Things like MoneyManager, CurrencyExchanger and ExchangeRateGetter are clearly just contrived. These latter objects (which again are really just collections or procedural code) would probably fall under the category of what Martin Fowler calls an anemic domain model.

I’ll run through one last example, a RegisterUser service object I found in this post.

User registration example

There’s no need for this special-purpose RegisterUser object. There’s already a concept in the domain model (if we think hard enough) of a user registration. Just like how the Devise gem has controller names like RegistrationsController, SessionsController, etc., we can consider there to be a concept of a user registration and simply create a new one.

So the above code becomes this:

By doing it this way we’re neither cluttering up the main User object nor creating an awkward RegisterUser object.

Suggestions for further reading

Enough With the Service Objects Already

I really enjoyed and agreed with Avdi Grimm’s Enough With the Service Objects Already. Avdi’s post helped me realize that most service objects are just wrappers for chunks of procedural code. What I wanted to add was that I think it’s usually possible to refactor that procedural code into meaningful objects. For example, in a piece of scheduling code I recently wrote, I have a concept of an AvailabilityBlock and a mechanism for detecting conflicts between them. Instead of taking the maybe “obvious” route of creating an AvailabilityBlockConflictDetector, I created an object called an AvailabilityBlockPair which can be used like this:, b).conflict?.

To me this is much nicer. Kind of like the UserRegistration example above, the concept of an AvailabilityBlockPair isn’t something that obviously exists in the domain, but it does exist in the domain if I consider it to be. It’s like drawing an arbitrary border around letters in a word search. Any word you can find on the page is really there if you can circle it.

Anemic Domain Model

Martin Fowler’s Anemic Domain Model post helped me articulate exactly what’s wrong with service objects, which seem to me to be a specific kind of Anemic Domain Model. My favorite passage from the article is: “The fundamental horror of this anti-pattern [Anemic Domain Model] is that it’s so contrary to the basic idea of object-oriented design; which is to combine data and process together. The anemic domain model is really just a procedural style design, exactly the kind of thing that object bigots like me (and Eric) have been fighting since our early days in Smalltalk. What’s worse, many people think that anemic objects are real objects, and thus completely miss the point of what object-oriented design is all about.”

Martin Fowler on Service Objects via the Ruby Rogues Parley mailing list

This GitHub Gist contains what’s supposedly an excerpt from an email Martin Fowler wrote. This email snippet helped clue me into the apparent fact that what most people call service objects are really just an implementation of the Command pattern. It seems to me that the Interactor gem is also an implementation of the Command pattern. I could see the Command pattern making sense in certain scenarios but I think a) when the Command pattern is used it should be called a Command and not a service object, and b) it’s not a good go-to pattern for all behavior in an application. I’ve seen engineering teams try to switch over big parts of their application to Interactors, thinking it’s a great default style of coding. I’m highly skeptical that this is the case. I want to write object-oriented code in Ruby, not procedural code in Interactor-script.

Don’t Create Objects That End With -ER

If an object ends in “-er” (e.g. Manager, Processor), chances are it’s not a valid abstraction. There’s probably a much more fitting domain object in there (or aggregate of domain objects) if you think hard enough.
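As a rough illustration (all class names here are hypothetical), the refactoring usually amounts to moving the -er’s behavior onto the object that owns the data:

```ruby
# Before: an "-er" object that operates on data it doesn't own.
class InvoiceCalculator
  def total(line_items)
    line_items.sum { |item| item[:price] * item[:quantity] }
  end
end

# After: the behavior lives on the domain object that owns the data.
class Invoice
  def initialize(line_items)
    @line_items = line_items
  end

  def total
    @line_items.sum { |item| item[:price] * item[:quantity] }
  end
end

Invoice.new([{ price: 10, quantity: 2 }, { price: 5, quantity: 1 }]).total # => 25
```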

The Devise gem code

I think the Devise gem does a pretty good job of finding non-obvious domain concepts and turning them into sensible domain objects. I’ve found it profitable to study the code in that gem.

Domain-Driven Design

I’m slowly trudging through this book at home as I write this post. I find the book a little boring and hard to get through but I’ve found the effort to be worth it.


I find the last sentence of Martin Fowler’s Anemic Domain Model article to be a great summary of what I’m trying to convey: “In general, the more behavior you find in the services, the more likely you are to be robbing yourself of the benefits of a domain model. If all your logic is in services, you’ve robbed yourself blind.”

Don’t use service objects. Use domain objects instead.

How I make testing a habit

A reader of mine recently asked how I make testing a habit.

The question is an understandable one. There’s a lot of friction in the process of writing tests. Even for someone like me, who has been in the habit of writing tests for most of my code for many years, the friction is still there. On many occasions I still have to “make myself” write tests, the same way I “make myself” brush my teeth every few days or so.

I actually had to think long and hard about the question. I definitely DO write tests, but I’m not sure that the force that drives me to write tests is exactly a habit.

Yesterday I realized what drives me to write tests. At some point, I discovered that coding with tests is easier, faster and more enjoyable than coding without tests.

So for me, writing tests is not a matter of self-discipline or anything like that. To the contrary, the driving force is something more like laziness. I’m taking the easier, faster, more enjoyable path instead of the harder, slower, more miserable one.

Let me explain each one of these three things in a little more detail.

How writing tests makes programming easier

Imagine you have some big, fuzzily defined programming task to carry out. You’re not sure what part to try to tackle first, or even what the parts of the task are.

It’s easy to see this kind of task as a monolithic, intractable mystery. You look around for an obvious toehold and there isn’t one. It’s overwhelming and discouraging.

Tests can serve as an aid in finding that non-obvious toehold. The act of writing a test separates the mental task of figuring out what from the mental task of figuring out how.
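For example, writing the test first states the what before any of the how exists. A minimal Minitest sketch (the Schedule class here is invented for illustration):

```ruby
require "minitest/autorun"

# The test below was "written first": it pins down WHAT a schedule
# should do, before deciding HOW conflict detection works.
class Schedule
  def initialize
    @blocks = []
  end

  def add_block(block)
    @blocks << block
  end

  # HOW: a naive pairwise overlap check, written to satisfy the test.
  def conflict?
    @blocks.combination(2).any? do |a, b|
      a[:start] < b[:end] && b[:start] < a[:end]
    end
  end
end

class ScheduleTest < Minitest::Test
  # WHAT: two overlapping blocks should register as a conflict.
  def test_overlapping_blocks_conflict
    schedule = Schedule.new
    schedule.add_block({ start: 9, end: 11 })
    schedule.add_block({ start: 10, end: 12 })
    assert schedule.conflict?
  end
end
```

Each failing assertion hands you a small, concrete next step, which is exactly the toehold the big fuzzy task was missing.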

How writing tests makes programming faster

Writing tests is often seen as a trade-off: it takes more time to write features with tests, but it saves you time in the long run. I’m not so sure this is the case. I’ve definitely had cases where I wasted 15 minutes trying to get a feature working by testing my code manually, failed, and then had the feature working in 5 minutes with the aid of tests.

The reason it’s faster to code with tests is that you don’t have to perform a full set of manual regression tests every time you make a change. Without tests, it’s all too easy to get into a game of whack-a-mole where new changes break old functionality.

How writing tests makes programming more enjoyable

As a person I’m lazy, impatient and easily bored. I also hate wasting time. This means that manually repeating the same 9 clicks through a feature over and over, and then multiplying that chore by however many possible paths there are through my feature, is a task that seems custom-made to torture me.

Automated tests can never fully replace manual testing (I’m still going to pull up every feature in the browser at least once) but they can go quite a long way. I find that coding using tests is quite a different experience from not using tests.

My testing habit

So my testing “habit” is really just a choice to take the path of least resistance. Since I believe my job will be easier, faster and more fun if I write tests, I usually take that path instead of the other, worse one. And if I forget this and start coding without tests, the pain I encounter usually reminds me that writing tests is the easier way.