Category Archives: Programming

Where to start with introducing TDD to a new Rails app

A Code With Jason reader recently wrote me with the following question:

How do I introduce TDD into a new Rails app? Where do I start? I am deeply knowledgeable on RSpec and use it a lot. I am not sure how to get started with testing in Rails. When should testing be introduced? (hopefully as soon as possible) What should I be testing vs. what should I assume works because it was tested in some other way? For example, why test generated code?

There are a lot of good questions here. I’ll pick a couple and address them individually.

How do I introduce TDD into a new Rails app?

This depends on your experience level with Rails and with testing.

I personally happen to be very experienced with both Rails and testing. When I build a new Rails application, I always start writing tests from the very beginning of the project.

If you’re new to Rails AND new to testing, I wouldn’t recommend trying to learn both at the same time. That’s too much to learn at once. Instead, get comfortable with Rails first and then start learning testing.

What if you’re comfortable with Rails but not testing yet? How do you introduce testing to a new Rails app then?

I would suggest starting with testing from the very beginning. Adding tests to your application is never going to be easier than it is at the very beginning. In fact, retroactively adding tests to an existing application is notoriously difficult.

The kinds of tests to start with

What kinds of tests should you add at the beginning? I would recommend starting with acceptance tests, also known to some as integration tests, end-to-end tests or, in RSpec terminology, feature specs. The reason I recommend starting with feature specs is that they’re conceptually easy to grasp. They’re a simulation of a human opening a browser and clicking on things and typing stuff.

You can start with a feature spec “hello world” where you just visit a static page and verify that it says “hello world”. Once you’re comfortable with that, you can add feature specs to all your CRUD operations. I have a step-by-step formula for writing feature specs that you can use too.
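To make that concrete, here's roughly what such a "hello world" feature spec might look like, using RSpec and Capybara. (The route and page here are made up for illustration.)

  require "rails_helper"

  RSpec.describe "Hello world", type: :feature do
    it "displays hello world" do
      # Simulates a human visiting the page in a browser
      visit "/hello"
      expect(page).to have_content("hello world")
    end
  end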

Once you’re somewhat comfortable with feature specs, you can move on to model specs.

Testing vs. TDD

I’ve been sloppy with my language so far. I’ve been talking about testing but the question was specifically about TDD. How do you introduce TDD into a new Rails app?

First let me say that I personally don’t do TDD 100% of the time. I maybe do it 60% of the time.

I mainly write two types of tests in Rails: feature specs and model specs. When I’m building a new feature, I often don’t know exactly what form the UI is going to take or what labels or IDs all the page elements are going to have. This makes it really hard for me to try to write a feature spec before I write any code. So, usually, I don’t bother.

Another reason I don’t always do TDD is that I often use scaffolds, and scaffolds and TDD are incompatible. With TDD you write the tests first and the code after. Since scaffolds give you all the code up front, it’s of course impossible to write the tests before the code because the code already exists.

So again, the only case when I do TDD is when I’m writing model code, and I would guess this is maybe 60% of the time.

How, then, should you start introducing TDD to a new Rails app? I personally start TDD’ing when I start writing my first model code. The model code is often pretty trivial for the first portion of a Rails app’s lifecycle, so it’s usually a while before I get into “serious TDD”.

What should I be testing vs. what should I assume works because it was tested in some other way? For example, why test generated code?

I’ll answer the second part of this question first.

Why test generated code?

I assume when we say “generated code” we’re mainly talking about scaffolding.

If you’re never ever going to modify the generated code there’s little value in writing tests for it. Scaffold-generated CRUD features pretty much always just work.

However, it’s virtually never the case that a scaffolded feature comes out exactly the way we want with no modification required. For anything but the most trivial models, a little tweaking of the scaffold-generated code is necessary in order to get what we need, and each tweak carries a risk of introducing a bug. So it’s not really generated code anymore. Once a human starts messing with it, all bets are off.

That’s not even the main value of writing tests for generated code though. The main value of the tests covering generated code (which, again, is rarely 100% actual generated code) is that the tests help protect against regressions that may be introduced later. If you have tests, you can refactor freely and be reasonably confident that you’re not breaking anything.

Having said that, I don’t try to test all parts of the code generated by a scaffold. There are some things that are pointless to test.

What should I test and what should I assume works?

When I generate a scaffold, there are certain types of tests I put on the scaffold-generated code. I have a step-by-step process I use to write specific test scenarios.

I typically write three feature specs: a feature spec for creating a record using valid inputs, a feature spec for trying to create a record using invalid inputs, and a feature spec for updating a record. That’s enough to give me confidence that things aren’t horribly broken.
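To illustrate, here's a sketch of the first of those three specs for a hypothetical scaffold-generated Customer resource. (The model and field names are made up, although "Customer was successfully created." is the flash message Rails scaffolding generates by default.)

  require "rails_helper"

  RSpec.describe "Creating a customer", type: :feature do
    it "creates the record when the inputs are valid" do
      visit new_customer_path
      fill_in "Name", with: "Jane Doe"
      fill_in "Email", with: "jane@example.com"
      click_on "Create Customer"
      expect(page).to have_content("Customer was successfully created.")
    end
  end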

At the model level, I’ll typically add tests for validations (I am TDD’ing in this case) and then add the validations. Usually I only need a few presence validations and maybe a uniqueness validation. It’s not common that I’ll need anything fancier than that at that time.
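Using shoulda-matchers (my usual tool for validator tests) and the same hypothetical Customer model, those validation specs might look something like this:

  require "rails_helper"

  RSpec.describe Customer, type: :model do
    # TDD-style: these are written first, fail, and then the
    # validations are added to the model to make them pass.
    it { is_expected.to validate_presence_of(:name) }
    it { is_expected.to validate_uniqueness_of(:email) }
  end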

There are certain other types of tests I’ve seen that I think are pointless. These include testing the presence of associations, testing that a model responds to certain methods, testing for the presence of callbacks, and testing for database columns and indexes. There’s no value in testing these things directly. These tests are tautological. What’s better is to write tests for the behaviors that these things (i.e. these associations, methods, etc.) enable.

Suggestions for further reading

Extracting a tidy PORO from a messy Active Record model

Skinny controller, fat model – the old best practice

In 2006 Jamis Buck wrote a famous post called Skinny Controller, Fat Model. In it Jamis observed that Rails developers often put too much logic in controllers, making the code harder to understand and to test than it needs to be.

“Skinny controllers, fat models” became a piece of accepted wisdom in the Rails community. I think it was a genuine step forward in the overall state of affairs.

The new best practice: skinny everything

But over time developers apparently came to a realization that while it’s better in certain ways to put bulky code in models than in controllers, the fatness, wherever you put it, is still kind of a problem.

Other ways of combating digital obesity were adopted or devised, including concerns, service objects, domain objects, Interactors, factories, something called Trailblazer, and probably others.

Not all weight loss methods are equally good

When we talk about making e.g. a controller less fat, we of course aren’t talking about obliterating the bulk from existence but rather just moving it to a more appropriate place. The world is complex and we can’t make the complexity go away. The art is in distributing the complexity throughout the code such that a human can understand what the application does, in spite of the complexity.

For whatever reason, it seems lately that “service objects” (a term which means different things to different people, hence the quotes) and Interactors are widely esteemed as great ways to reorganize complexity in a Rails application. I personally find Interactors and service objects unappealing. In fact, I think they’re terrible. Avdi Grimm seems to have a similar opinion.

I think service objects and Interactors add unnecessary complexity to an application without adding any meaning to justify their added complexity. To take every action and make an object named after it—e.g. AuthenticateUser, PlaceOrder, SendThankYou—seems like a superfluous, tautological extra step. I think we can do better.

The case for POROs and domain objects

In Domain Driven Design, Eric Evans describes the concept of domain objects, which I take to mean: objects that represent concepts from the domain.

To me this makes a lot of sense. Most Rails applications of course already make use of this concept. If there’s something called an appointment in the domain, there will be an Appointment class in the application. If there’s something called a patient in the domain, there will be a Patient class in the application.

It’s also possible to invent objects that don’t exist in the domain, that only exist to make the code easier to understand. It took me personally a long time to realize this. Examples of object-oriented programming often include things like creating a Dog and a Cat class, each of which inherits from Animal. These examples seem sensible and straightforward, but think of how many objects in Ruby and Rails have nothing to do with any corresponding real-world idea, e.g. Request, MimeNegotiation and SingularAssociation.

These types of objects take a little extra imagination to conceive of and name, but I think once a person opens up their mind to this way of coding, it gets easier and easier.

The form that I like to give these sorts of objects is POROs (Plain Old Ruby Objects).

My example code: a messy Active Record class

For the remainder of this post I’m going to take a certain big, messy Active Record class and refactor it into some number of POROs.

The code below is taken from a real production application. It was written by a terrible programmer: me in 2012. The class represents a stylist at a hair salon.

I invite you to have a nice scroll through the code and give yourself a light familiarity with it.

The PORO to extract: calendar-related code

The class is so big and the code is so bad that it’s hard to decide where to start. I could spend quite a lot of time refactoring this class, but to make it all fit into a reasonable blog post, I’ll extract just one PORO out of this class.

There are a few methods that I happen to know are related to displaying the stylist’s name on the calendar:

This is relatively easy low-hanging fruit. I can just conceive of a Calendar object and move these methods into it.

This won’t exactly work, though. Many of the values referred to inside this new class only exist in the context of a Stylist. We’ll need to pass in a Stylist instance.

Now that I see this much, I realize I’ve chosen an unfitting name for this class. All this code deals with figuring out where to show the current stylist in a list of all the stylists. There’s no way it makes sense to have a Calendar instance with just one single Stylist inside it. This class name needs to change.

Perhaps CalendarStylistListItem is a better name.

Now I can take a closer look at the contents of this class itself. I’m going to focus on this method first:

Why is salon concerned with highest_manual_order_index? That seems like too fine-grained a detail for it to care about. That also seems like perhaps a responsibility of the CalendarStylistListItem class. Let’s bring it in.

Now I’ll focus on the next method down, alphabetical_position.

Some of this code could certainly be refactored to be tidier, but I don’t see a lot with respect to object composition that needs to change. One thing that does stick out to me is salon.ordered_stylist_names. This seems like a concern of the CalendarStylistListItem, not of the salon. Let’s bring this method in.

In the process of doing this I see one more method that’s currently in Salon that would be more appropriate in CalendarStylistListItem: the has_manual_order? method. Let’s bring that in and rename it to salon_has_manual_order?.

Lastly, I’m pretty sure the only method needed from the outside is order_index. Let’s make manual_position and alphabetical_position private.
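Putting it all together, the finished class looks roughly like the sketch below. The method bodies here are simplified stand-ins rather than the real production logic (the manual_order_index column, for instance, is assumed for illustration). What matters is the shape: a PORO that wraps a stylist, with order_index as its only public method.

  class CalendarStylistListItem
    def initialize(stylist)
      @stylist = stylist
      @salon = stylist.salon
    end

    def order_index
      salon_has_manual_order? ? manual_position : alphabetical_position
    end

    private

    # Formerly Salon#has_manual_order?
    def salon_has_manual_order?
      !highest_manual_order_index.nil?
    end

    # Formerly a concern of Salon
    def highest_manual_order_index
      @salon.stylists.maximum(:manual_order_index)
    end

    def manual_position
      @stylist.manual_order_index
    end

    # Uses what was formerly Salon#ordered_stylist_names
    def alphabetical_position
      ordered_stylist_names.index(@stylist.name)
    end

    def ordered_stylist_names
      @salon.stylists.order(:name).pluck(:name)
    end
  end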

Conclusion

The process of creating my PORO, CalendarStylistListItem, didn’t require importing any libraries or learning any radically new techniques. All I had to do was identify a “missing abstraction” in my Stylist class and then conceive of a new object to bundle it up in.

The overall impact on the Stylist class was not all that profound, but cleaning up a big messy class is of course going to take quite a lot of work. I feel like this was a meaningful step in a positive direction.

Next time you encounter a bloated Active Record model, I hope that instead of reaching for a service object or Interactor, you might consider refactoring to a PORO.

My top five Rails performance tips for performance noobs

Performance is a topic that’s widely discussed and also, sadly, widely misunderstood.

I’m going to share five performance tips that have worked well for me in my 9+ years of using Rails and even before that, since much of what’s here is technology-agnostic.

I want to add a caveat: I’ve never worked for a big B2C startup that has needed to scale to millions of users. I’ve worked mostly on applications with modest user bases and modest performance requirements. Luckily, I imagine my career experiences are probably pretty similar to those of most Rails developers, so it’s highly likely that what has worked for me performance-wise will also work for you.

If you already know the basics of performance optimization and want to hear from someone who does Rails performance work all day every day, I recommend checking out the work of my friend Nate Berkopec, who I know as “The Rails Performance Guy”. (I’ve also had Nate on my podcast.)

Now onto my tips.

1. Don’t optimize prematurely

Before I came to Rails I worked with PHP. At one of my PHP jobs, I worked at a place where our official policy was to use single-quoted strings instead of double-quoted strings. The reason for this policy was that, supposedly, single-quoted strings were parsed ever so slightly faster than double-quoted strings.

In other words, someone in leadership took great care to implement and enforce a policy that was probably responsible for like 0.0001% of the total performance picture for any one of our apps. Penny wise, pound foolish.

Don’t do this. When you’re first building an application, the right approach to performance 99% of the time is to not think about it at all. Unlike bugs, which are best never deployed to production, performance problems are usually best dealt with after they’re discovered in production, after they prove themselves to be legitimate problems.

2. Profile and benchmark before optimizing

When I talk about performance profiling, I mean taking stock of performance across an application and judging where optimization effort is best spent. Some judgment and common sense come into the picture here. For example, if two pages are equally slow, and one of them is visited 100 times a day and the other one 1 time a month, obviously fixing the busier page will give a higher ROI.

In practice I rarely bother to do any actual profiling though. This is partly due to the nature of the work I personally have done historically. Most of my projects have been back-office internal-facing apps where I have a direct line of communication with the users or someone representative of the users. If there’s an important page that’s unacceptably slow, I’ll usually hear complaints about it or I’ll observe the sluggishness firsthand.

The next step I take before setting to work on optimization is to get a benchmark. This is just a fancy way of saying that I measure how fast the page currently loads. I of course can’t know how much of an improvement I made unless I know the difference between the “before” speed and the “after” speed.

The way I typically perform my benchmarking is to simply load a page in my browser while watching the output of the logs in a different window. At the bottom of the log it will say, e.g. “Completed 200 OK in 760ms”. I also use the sql_queries_count gem because sometimes it’s helpful to be aware of how many queries are being executed on the page.

3. The key to performance is usually not to add clever things but to remove stupid things

Performance optimization is often more akin to digging a ditch than to solving a Rubik’s cube.

Typically, the thing responsible for most of the performance cost of a page is waste. There’s a thing that could easily be done in an efficient way, but for whatever reason it was done in an inefficient way. So the task when optimizing a page is to find the waste and remove it. More on this in the next tip.

4. It’s almost always the database

The cause of most performance problems is waste. Most waste takes the form of making too many trips to the database.

This fact is so true and important that, in my 15+ years of programming experience, almost every performance optimization I’ve made has been some variation of “hit the database less”.

In Rails, the most common form of hitting the database too much is the famous “N+1 query problem”. Let’s say you have a page that shows a list of 10 things—let’s say clients—where each list item shows not only the client’s name but the client’s address. This means that a database query occurs for each one of those 10 clients. That’s the N in N+1. The +1 part of N+1 is the original query behind Client.all that grabs all the clients in the first place.

A common solution to the N+1 query problem, and one I use a lot, is eager loading. Instead of doing Client.limit(10), you can do Client.includes(:address).limit(10), which will grab all associated addresses ahead of time in just one query instead of in 10 separate queries. So instead of a total of 11 queries (1 for all clients, 1 for each of 10 addresses) you’ll have a total of just 2 queries (1 for all clients, 1 for all addresses).
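Here’s what that looks like in code, assuming a Client model with a has_one :address association (the city attribute is just for illustration):

  # N+1: one query for the clients, plus one address query per client
  # (11 queries total for 10 clients)
  Client.limit(10).each do |client|
    puts client.address.city
  end

  # Eager loaded: one query for the clients, one for all their
  # addresses (2 queries total)
  Client.includes(:address).limit(10).each do |client|
    puts client.address.city
  end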

Another way to make fewer trips to the database is to cache the results of expensive queries. I often use Rails’ low-level caching for this. However, this adds a certain amount of complexity that I’m not always eager to add. So I prefer to try to make fewer trips to the database and/or optimize my queries themselves, resorting to caching only if I have to.
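As a sketch, low-level caching of an expensive query might look like this (the model, cache key, and expiry here are illustrative):

  class Salon < ApplicationRecord
    has_many :appointments

    # Caches the result of an expensive aggregate query for an hour
    def appointment_count_this_year
      cache_key = "salon/#{id}/appointment_count/#{Date.current.year}"
      Rails.cache.fetch(cache_key, expires_in: 1.hour) do
        appointments.where(starts_at: Date.current.all_year).count
      end
    end
  end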

5. If it seems faster, it is faster

The only thing that ultimately matters is how fast users perceive the application to be. If a certain change makes an application feel faster, then for all intents and purposes, it is faster.

I once heard a story about a condo where the residents complained about the elevators being slow. Management’s reaction was not to immediately try to speed up the elevators somehow, which would presumably have been difficult and expensive. Instead, management installed mirrors on the walls next to the elevators. The complaints went away.

So, sometimes the right initial response to a performance problem is not necessarily to embark upon a big, expensive optimization project. Sometimes a change can be introduced that’s less invasive but still effective. For example, let’s say I have a calendar page with back/forward buttons to go from month to month, and let’s say each month takes 800ms to load. If I click the button and nothing at all happens for 800ms, it might feel a little slow. But if I change it so that immediately upon clicking the button I see an animated loading indicator, the page will feel faster. It sounds dumb but it’s absolutely true. Sometimes those little changes that improve the responsiveness of a page will increase its perceived performance, without needing to touch the actual performance at all.

6. Bonus tip: it’s almost never Ruby

Despite what you might read online about Ruby being “slow”, I’ve never ever encountered a performance problem where the culprit was the Ruby language itself. Whether an application is written in Ruby, Go, Python or whatever else, the performance bottleneck is almost always the database.

If all you know about performance is the five tips above, you’ll probably be able to tackle 80% of the performance challenges you encounter.

How to perform hotfixes without side effects or extra stress

A reader of mine recently wrote to me with a question which I’ll paraphrase.

The “tagalong” problem

Let’s say I have a production server. Let’s also say that in addition to that, I have a staging server where I push most changes before production so the changes can be reviewed. This works out fine most of the time.

But let’s say occasionally there’s a critical bug in production that needs to be fixed ASAP. I need to make my fix and push straight to production without the delay of going to staging first. Unfortunately when I do this I also push the last however many commits to production as well.

In other words, as a side effect of my bugfix, I end up pushing work to production that potentially wasn’t ready for production yet.

The fix

The solution is to always keep the master branch in a deployable state. I’ll mention a few tactics that can make this easier.

The first tactic is frequent deployments. I try to do all my work in the form of atomic commits which could each conceivably be deployed to production. And in fact, I tend to perform a deployment almost every time I commit to master or merge something into master.

The second tactic is feature branches. Feature branches help keep partway-done work out of master, helping ensure that a deployment of what’s on master never means deploying incomplete work. In my experience it’s good for a feature branch to have a lifespan of a few hours or a few days. Anything much longer than that gets dangerous because it leaves too much room for the feature branch to diverge from master.

But what if you need to work on a feature that spans, say, 4 weeks, and can’t be released until it’s fully complete? This is where my third tactic comes in, dark launching. Dark launching is where you separate the concept of deployment from the concept of release. Just because the code for a feature gets deployed to production doesn’t mean that that feature necessarily needs to be surfaced to users. The UI of that feature can be hidden using a feature flag, which can be as simple as something that says if false until you’re ready to unveil that feature and remove the code that hides it.
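In its most primitive form, the flag really can be an if false wrapped around the feature’s UI, like this (the link and path helper are made up for illustration):

  <% if false # flip to true when the feature is ready to be released %>
    <%= link_to "Insurance Billing", insurance_billing_path %>
  <% end %>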

Recommended reading

The things I’ve shared above are basically DevOps principles. To learn more you can check out the books The DevOps Handbook and Continuous Delivery.

How to deploy a Ruby on Rails application to AWS

Overview

This tutorial will show you how to deploy a Rails application to AWS.

There are a number of ways this task could be tackled. It can be done manually or it can be done using an infrastructure-as-code approach, with a tool like Ansible.

This tutorial shows how to deploy Rails to AWS manually.

Before you dive in, be forewarned: it’s kind of a monster of a task. There are a large number of steps involved, many of them tricky and error-prone. Be prepared for the full process to involve hours or even days of potentially frustrating work.

Contents

The size of the setup process makes it impractical to put everything into one post, so each step is its own post.

  1. Launch EC2 instance
  2. Install nginx and Passenger
  3. Add the Rails application to the nginx server
  4. Set up secrets
  5. Create RDS database

Don’t be discouraged if not everything works on the first try. It most likely won’t. My advice if something goes wrong is to just blow everything away and start again from the beginning. I find that that approach is, paradoxically, often the fastest.

Good luck!

How I approach test coverage metrics

Different developers have different opinions about test coverage. Some engineering organizations not only measure test coverage but have rules around it. Other developers think test coverage is basically BS and don’t measure it at all.

I’m somewhere in between. I think test coverage is a useful metric but only in a very approximate and limited sort of way.

If I encounter two codebases, one with 10% coverage and another with 90% coverage, I can of course probably safely conclude that the latter codebase has a healthier test suite. But if the difference is between 90% and 100%, I’m not convinced that that means much.

I personally measure test coverage on my projects, but I don’t try to optimize for it. Instead, I make testing a habit and let my habitual coding style be my guiding force instead of the test coverage metrics.

If you’re curious what type of test coverage my normal workflow naturally results in, I just checked the main project I’ve been working on for the last year or so and the coverage level is 96.62%, according to simplecov. I feel good about that number, although more important to me than the test coverage percentage is what it feels like to work with the codebase on a day-to-day basis. Are annoying regressions popping up all the time? Is new code hard to write tests for due to the surrounding code not having been written in an easily testable way? Then the codebase could probably benefit from more tests.

My general approach to Rails testing

My development workflow

The code I write is influenced by certain factors upstream of the code itself.

Before I start coding a feature, I like to do everything I can to try to ensure that the user story I’m working on is small and that it’s crisply defined. By small, I mean not more than a day’s worth of work. By crisply defined, I mean that the user story includes a reasonably precise and detailed description of what the scope of that story is.

I also like to put each user story through a reasonably thorough scrutinization process. In my experience, user stories often aren’t very thoroughly thought through and contain a level of ambiguity or inconsistency such that they’re not actually possible to implement as described.

I find that if care is taken to make sure the user stories are high-quality before development work begins, the development work goes dramatically more smoothly.

Assuming I’m starting with good user stories, I’ll grab a story to work on and then break that story into subtasks. Each day I have a to-do list for the day. On that to-do list I’ll put the subtasks for whatever story I’m currently working on. Here’s an example of what that might look like, at least as a first draft:

Feature: As a staff member, I can manage insurance payments

  • Scaffolding for insurance payments
  • Feature spec for creating insurance payment (valid inputs)
  • Feature spec for creating insurance payment (invalid inputs)
  • Feature spec for updating insurance payment

(By the way, I write more about my specific formula for writing feature specs in my post https://www.codewithjason.com/repeatable-step-step-process-writing-rails-integration-tests-capybara/.)

Where my development workflow and my testing workflow meet

Looking at the list above, you can see that my to-do list is expressed mostly in terms of tests. I do it this way because I know that if I write a test for a certain piece of functionality, then I’ll of course have to build the functionality itself as part of the process.

When I use TDD and when I don’t

Whether or not I’ll use TDD on a feature depends largely on whether it’s a whole new CRUD interface or whether it’s a more minor modification.

If I’m working on a whole CRUD interface, I’ll use scaffolding to generate the code, and then I’ll make a pass over everything and write tests. (Again, I write more about the details of this process here.) The fact that I use scaffolding for my CRUD interfaces makes TDD impossible. This is a trade-off I’m willing to make due to how much work scaffolding saves me and due to the fact that I pretty much never forget to write feature specs for my scaffold-generated code.

It’s also rare that I ever need to write any serious amount of model specs for scaffold-generated code. I usually write tests for my validators using shoulda-matchers, but that’s all. (I often end up writing more model specs later as the model grows in feature richness, but not in the very beginning.)

If instead of writing a whole new CRUD interface I’m just making a modification to existing code (or fixing a bug), that’s a case where I usually will use TDD. I find that TDD in these cases is typically easier and faster than doing test-after or skipping tests altogether. If for example I need to add a new piece of data to a CSV file my program generates, I’ll go to the relevant test file and add a failing expectation for that new piece of data. Then I’ll go and add the code to put the data in place to make the test pass.
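That failing expectation might look something like this (the export class, method, and column names here are hypothetical):

  require "rails_helper"
  require "csv"

  RSpec.describe ClientCsvExport do
    # Written first, TDD-style: fails until the export includes email
    it "includes the client's email address" do
      client = Client.create!(name: "Jane Doe", email: "jane@example.com")
      csv = CSV.parse(ClientCsvExport.new([client]).generate, headers: true)
      expect(csv.first["Email"]).to eq("jane@example.com")
    end
  end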

The other case where I usually practice TDD is if the feature I’m writing is not a CRUD-type feature but rather more of a model-based feature, where the interesting work happens “under the hood”. In those cases I also find TDD to be easier and faster than not-TDD.

The kinds of tests I write and the kind I don’t

I write more about this in my book, but I tend to mostly write model specs and feature specs. I find that most other types of tests tend to have very little value. By the way, I’m using the language of “spec” instead of “test” because I use RSpec instead of Minitest, but my high-level approach would be the exact same under any testing framework.

When I use mocks and stubs and when I don’t

I almost never use mocks or stubs in Rails projects. In 8+ years of Rails development, I’ve hardly ever done it.

How I think about test coverage

I care about test coverage enough to measure it, but I don’t care about test coverage enough to set a target for it or impose any kind of rule on myself or anything like that. I’ve found that the natural consequence of me following my normal testing workflow is that I end up with a pretty decent test coverage level. Today I checked the test coverage on the project I’ve been working on for the last year or so and the measurement was 97%.

The test coverage metric isn’t the main barometer for me though on the health level of a project’s tests. To me it seems much more useful to pay attention to how many exceptions pop up in production and what it feels like to do maintenance coding on the project. “What it feels like to do maintenance coding” is obviously not a very quantifiable metric, but of course not everything that counts can be counted.

The difference between domains, domain models, object models and domain objects

I recently came across a question regarding the difference between domains and domain models. These terms probably mean different things to different people, but I’ll define the terms as I use them.

Domain

When I’m working on a software project, the domain is the conceptual area I’m working inside of. For example, if I’m working on an application that has to do with restaurants, the domain is restaurants.

Domain model

The world is a staggeringly complex place. Even relatively simple-seeming things like restaurants involve way more complexity than could be accurately captured in a software system. So instead of coding to a domain, we have to code to a domain model.

For me, a domain model is a separate thing from any particular code or piece of software. If I come up with a domain model for something to do with restaurants, I could express my domain model on a piece of paper if I wanted to (or just inside my head). My domain model is a standalone conceptual entity, regardless of whether I actually end up writing any software based on it or not.

A domain model also doesn’t even need to be consciously expressed in order to exist. In fact, on most software systems I’ve ever worked on, the domain model of the system only exists in the developers’ minds. The domain model isn’t something that someone planned at the beginning, it’s something that each developer synthesizes in his or her mind based on the code that exists in the application and based on what the developer understands about the domain itself.

Object model

The place where my domain model turns into actual code is in the object model. If my domain concepts include restaurant, order, and customer, then my object model will probably include objects like Restaurant, Order and Customer.

Domain object

Any object in my object model that also exists as a concept in my domain model I would call a domain object. In the previous example, Restaurant, Order and Customer would all be domain objects.

Not every object in a system is a domain object. Some objects are value objects. A value object is an object whose identity doesn’t matter. Examples of concepts that would make sense as value objects rather than domain objects are phone number values or money values.

One type of “object” that’s popular in the Rails world that I tend not to use is the service object, for reasons explained in the linked post.

Related reading: Domain Driven Design

How I wrote a command-line Ruby program to manage EC2 instances for me

Why I did this

Heroku is great, but not in 100% of cases

When I want to quickly deploy a Rails application, my go-to choice is Heroku. I’m a big fan of the idea that I can just run heroku create and have a production application online in just a matter of seconds.

Unfortunately, Heroku isn’t always a desirable option. If I’m just messing around, I don’t usually want to pay for Heroku features, but I also don’t always want my dynos to fall asleep after 30 minutes like on the free tier. (I’m aware that there are ways around this but I don’t necessarily want to deal with the hassle of all that.)

Also, sometimes I want finer control than what Heroku provides. I want to be “closer to the metal” with the ability to directly manage my EC2 instances, RDS instances, and other AWS services. Sometimes I desire this for cost reasons. Sometimes I just want to learn what I think is the valuable developer skill of knowing how to manage AWS infrastructure.

Unfortunately, using AWS by itself isn’t very easy.

Setting up Rails on bare EC2 is a time-consuming and brain-consuming hassle

Getting a Rails app standing up on AWS is pretty hard and time-consuming. I’m actually not even going to get into Rails-related stuff in this post because even the small task of getting an EC2 instance up and running—with no additional software installed on that instance—is a lot harder than I think it should be, and there’s a lot to discuss and improve just inside that step.

Just to briefly illustrate what a pain in the ass it is to get an EC2 instance launched and to SSH into it, here are the steps. The steps that follow are the command-line steps. I find the AWS GUI console steps roughly equally painful.

1. Use the AWS CLI create-key-pair command to create a key pair. This step is necessary for later when I want to SSH into my instance.

2. Think of a name for the key pair and save it somewhere. Thinking of a name might seem like a trivially small hurdle, but every tiny bit of mental friction adds up. I don’t want to have to think of a name, and I don’t want to have to think about where to put the file (even if that means just remembering that I want to put the key in ~/.ssh, which is the most likely case).

3. Use the run-instances command, using an AMI ID (AMI == Amazon Machine Image) and passing in my key name. Now I have to go look up the run-instances syntax (because I sure as hell don’t remember it), look up my AMI ID, and remember what my key name is. (If you don’t know what an AMI ID is, that’s what determines whether the instance will be Ubuntu, Amazon Linux, Windows, etc.)

4. Use the describe-instances command to find out the public DNS name of the instance I just launched. This means I either have to search the JSON response of describe-instances for the PublicDnsName entry or apply a filter. Just like with every AWS CLI command, I’d have to go look up the exact syntax for this.

5. Run the ssh command, passing in my instance’s DNS and the path to my key. This step is probably the easiest, although it took me a long time to commit the exact ssh -i syntax to memory. For the record, the command is ssh -i ~/.ssh/my_key.pem ubuntu@mypublicdns.com. It’s a small pain in the ass to have to look up the public DNS for my instance again and remember whether my EC2 user is going to be ubuntu or ec2-user (it depends on what AMI I used).

My goals for my AWS command-line tool

All this fuckery was a big hassle so I decided to write my own command-line tool to manage EC2 instances. I call the tool Exosuit. You can actually try it out yourself by following these instructions.

There were four specific capabilities I wanted Exosuit to have.

Launch an instance

By running bin/exo launch, it should launch an EC2 instance for me. It should assume I want Ubuntu. It should let me know when the instance is ready, and what its instance ID and public DNS are.

SSH into an instance

I should be able to run bin/exo ssh, get prompted for which instance I want to SSH into, and then get SSH’d into that instance.

List all running instances

I should be able to run bin/exo instances to see all my running instances. It should show the instance ID and public DNS for each.

Terminate instances

I should be able to run bin/exo terminate which will show me all my instance IDs and allow me to select one or more of them for termination.

How I did it

Side note: when I first wrote this, I forgot that the AWS SDK for Ruby existed, so I reinvented some wheels. Whoops. After I wrote this I refactored the project to use the AWS SDK instead of shelling out to the AWS CLI.

For brevity I’ll focus on the bin/exo launch command.

Using the AWS CLI run-instances command

The AWS CLI command for launching an instance looks something like this (the AMI ID and profile name below are placeholders):
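  aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --key-name normal-quiet-carrot \
    --profile my-profile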

Hopefully most of these flags are self-explanatory. You might wonder where the key name of normal-quiet-carrot came from. When the bin/exo launch command is run, Exosuit checks whether there’s a file at .exosuit/config.yml that contains a key pair name and path. If not, it creates that file, creates a new key pair with a random phrase for a name, and saves the name and path to that file.

Here’s roughly what my .exosuit/config.yml looks like (the exact key names may differ):
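  # Illustrative; the exact keys may differ
  aws_profile_name: personal
  key_pair:
    name: normal-quiet-carrot
    path: ~/.ssh/normal-quiet-carrot.pem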

The aws_profile_name is something that I imagine most users aren’t likely to need. I personally happen to have multiple AWS accounts, so it’s necessary for me to send a --profile flag when using AWS CLI commands so AWS knows which account of mine to use. If a profile isn’t specified in .exosuit/config.yml, Exosuit will just leave the --profile flag off and everything will still work fine.

Abstracting the run-instances command

Once I had coded Exosuit to construct a few different AWS CLI commands (e.g. run-instances, terminate-instances), I noticed that things were getting a little repetitive. Most troubling, I had to always remember to include the --profile flag (just as I would if I were typing all this on the command line manually), and I didn’t always remember to do so. In those cases my command would get sent to the wrong account. That’s bad.

So I created an abstraction called AWSCommand. Here’s roughly what a usage of it looks like (simplified; the argument names here are illustrative):
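  # Illustrative sketch; the real interface may differ slightly
  command = AWSCommand.new(
    "run-instances",
    "image-id" => ami_id,
    "instance-type" => "t2.micro",
    "key-name" => key_pair_name
  )
  response = command.run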

You can probably see the resemblance it bears to the bare run-instances usage. Note the conspicuous absence of the profile flag, which is now automatically included every single time.

Listening for launch success

One of my least favorite things about manually launching EC2 instances is having to check periodically to see when they’ve started running. So I wanted Exosuit to tell me when my EC2 instance was running.

I achieved this by writing a loop that hits AWS once per second, checking the state of my new instance each time.
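Conceptually, the loop looks something like this (a simplified sketch, not the verbatim source):

  instance = Instance.find(instance_id)

  until instance.running?
    sleep 1
    instance = Instance.find(instance_id)
  end

  puts "Instance #{instance_id} is now running!"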

You might wonder what Instance.find and instance.running? do.

The Instance.find method will run the aws ec2 describe-instances command, parse the JSON response, then grab the relevant JSON data for whatever instance_id I passed to it. The return value is an instance of the Instance class.

When an instance of Instance is instantiated, an instance variable gets set (pardon all the “instances”) with all the JSON data for that instance that was returned by the AWS CLI. The instance.running? method simply looks at that JSON data (which has since been converted to a Ruby hash) and checks to see what the value of ['State']['Name'] is.

Here’s an abbreviated version of the Instance class for reference (simplified and reconstructed from the description above; the real class handles more details):
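  require "json"

  class Instance
    # Runs describe-instances, parses the JSON response, and returns
    # an Instance wrapping the data for the given instance ID
    def self.find(instance_id)
      response = JSON.parse(`aws ec2 describe-instances`)
      data = response["Reservations"]
        .flat_map { |reservation| reservation["Instances"] }
        .find { |i| i["InstanceId"] == instance_id }
      new(data)
    end

    def initialize(data)
      # All the JSON data for this instance, as a Ruby hash
      @data = data
    end

    def running?
      @data["State"]["Name"] == "running"
    end
  end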

(By the way, all the Exosuit code is available on GitHub if you’d like to take a look.)

Success notification

As you can see from the code a couple snippets above, Exosuit lets me know once my instance has entered a running state. At this point I can run bin/exo ssh, bin/exo instances or bin/exo terminate to mess with my instance(s) as I please.

Demo video

Here’s a small sample of Exosuit in action:

Try it out yourself

If you’d like to try out Exosuit, just visit the Getting Started with Exosuit guide.

If you think this idea is cool and useful, please let me know by opening a GitHub issue for a feature you’d like to see, or tweeting at me, or simply starring the project on GitHub so I can gauge interest.

I hope you enjoyed this explanation and I look forward to sharing the next steps I take with this project.

Exosuit demo video #1: launching and SSHing into an EC2 instance

Update: this video is now out of date. See demo video #2 for a more up-to-date version.

Recently I decided to begin work on a tool that makes it easier to deploy Rails apps to AWS. My wish is for something that has the ease of use of Heroku, but the fine-grained control of AWS.

My tool, which is free and open source, is called Exosuit. Below is a demo video of what Exosuit can do so far which, given the fact that Exosuit has only existed since the day before this writing, isn’t very much. Currently Exosuit can launch an EC2 instance for you and let you SSH into it.

But I think even this little bit is pretty cool – I don’t know of any other method that lets you go from zero to SSH’d into your EC2 instance in 15 seconds like Exosuit does.

If you’d like to take Exosuit for a spin on your own computer, you can! Just visit the Exosuit repo and go to the getting started guide. And if you do try it, please tweet me or send an email to jason@codewithjason.com with any thoughts or feedback you might have.