Category Archives: DevOps

The difference between Docker and Docker Compose

When I first started trying to learn how to Dockerize a Rails application, I was confused by the difference between Docker and Docker Compose. Many articles I came across mentioned Docker Compose, but they didn’t really explain what it was.

Background: development vs. production

First, a little bit of background. There are two ways to Dockerize an application. You can either use Docker to help with your development environment or your production environment.

Docker Compose is a tool that helps with your development environment. Docker Compose does not apply to production environments.

Let’s talk specifically about how Docker Compose can help with a development environment.

How Docker Compose helps development environments

For a Rails app, running a development environment is not as simple as just running rails server. In addition to the Rails server, you might have to run other services including an RDBMS (e.g. PostgreSQL), a key/value data store (e.g. Redis), a worker (i.e. the service that handles background jobs) and possibly webpack-dev-server.

Installing and running all these services manually can be tedious. Plus, not only do you have to run all this stuff, but you also have to somehow know which services your app needs in the first place.

Docker Compose gives you a way to say “Hey, my app uses PostgreSQL, Redis, a worker instance, and webpack-dev-server. Install all that stuff and run it for me.” As long as you have your Docker Compose config file set up properly, you can run the docker-compose up command and it will do all that for you.
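As a sketch of what that config file might look like (the service names and image versions here are hypothetical, not taken from a real project), a docker-compose.yml declares each dependency as a service:

```yaml
# docker-compose.yml (hypothetical sketch)
version: "3.8"

services:
  web:
    build: .            # the Rails app itself
  database:
    image: postgres:11  # an off-the-shelf PostgreSQL image
  redis:
    image: redis:5
  worker:
    build: .
    command: bundle exec sidekiq
```

With a file like this in place, docker-compose up starts every one of these services in one shot.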


  • There are two ways to Dockerize an app: for development and for production.
  • Docker Compose helps Dockerize an app for development.
  • Docker Compose provides a way to specify what services your development environment needs in order to function properly. Once configured, Docker Compose can install and run your services for you.

The two ways to Dockerize a Rails application

Here’s something that should be Docker 101, but for some reason people don’t really tell you: there’s no such thing as just “Dockerizing” a Rails app. There are two use cases for using Docker with Rails. Before you go down the path of Dockerizing your app, you have to pick which use case you’re after. The implementation for each is very different from the other.

You can Dockerize an app for development or you can Dockerize it for production.

Dockerizing for development

The reason you would want to Dockerize an app for development is to make it easier for a new developer to get a development environment set up on their machine.

When you have your app Dockerized for development, Docker can install and run services for you. For example, if your app uses PostgreSQL, Redis, webpack-dev-server, and Sidekiq, Docker will run all those processes for you, and before that Docker will install PostgreSQL and Redis if you don’t already have them.

This means that each new developer who starts the project doesn’t have to go through a potentially long and painful process of installing all the dependencies for your app before getting to work. They can just clone the repo, run a single command, and be up and running.

Dockerizing for production

The benefits of Dockerizing for production overlap with Dockerizing for development, but they’re not exactly the same.

In my development example I mentioned that you might have certain services running locally like PostgreSQL, Redis, webpack-dev-server, and Sidekiq. In a production environment, you of course don’t have all these processes running on a single machine or VM. Rather (if you’re using AWS, for example), you might have your database hosted on RDS, a Redis instance on ElastiCache, Sidekiq running on a worker EC2 instance, and no webpack-dev-server because that doesn’t apply in production. So the need is very different.

So unlike a development use case, a production Docker use case doesn’t include installing and running various services for you, because running all kinds of services on a host machine is not how a “modern” production infrastructure configuration works.


  • There are two ways to Dockerize a Rails app: for development and for production.
  • Dockerizing for development makes it easier to spin up a development environment.
  • Dockerizing for production helps avoid the problem of having snowflake servers.
  • Dockerizing for development could probably benefit just about any team, but Dockerizing for production is probably much more dependent on your app’s unique hosting needs.

How to run RSpec with headless Chrome/Chromium on Alpine Linux

Why Alpine Linux?

When you Dockerize a Rails application, you have to choose which distribution of Linux you want your container to use.

When I first started Dockerizing my applications I used Ubuntu for my containers because that was the distro I was familiar with. Unfortunately I discovered that using Ubuntu results in slow builds, slow running of commands, and large image sizes.

I discovered that Alpine Linux is popular for Docker containers because it affords better performance and smaller images.

Alpine + Capybara problems

Alpine had its own drawback though: I couldn’t run my tests, because getting a Capybara + ChromeDriver configuration working on Alpine wasn’t as straightforward as it is on Ubuntu.

The evident reason for this is that Alpine can’t run a normal Chrome package the way Ubuntu can. Instead, it’s typical on Alpine to use Chromium, which doesn’t quite play nice with Capybara the way Chrome does.

How to get Alpine + Capybara working

There are three steps to getting Capybara working on Alpine.

  1. Use selenium-webdriver instead of webdrivers
  2. Install chromium, chromium-chromedriver and selenium on your Docker image
  3. Configure a Capybara driver

Use selenium-webdriver instead of webdrivers

The first step is very simple: if you happen to be using the webdrivers gem in your Gemfile, replace it with selenium-webdriver.
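For example, assuming your Gemfile previously listed the webdrivers gem in the test group (the group shown here is an assumption; put the gem wherever webdrivers used to be), the change is just a swap:

```ruby
# Gemfile
group :test do
  # gem "webdrivers"       # remove this
  gem "selenium-webdriver"  # add this instead
end
```

Then run bundle install as usual.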

Install chromium, chromium-chromedriver and selenium on your Docker image

The next step is to alter your Dockerfile so that chromium, chromium-chromedriver and selenium are installed.

Below is a Dockerfile from one of my projects (which is based on Mike Rogers’ fantastic Docker-Rails template).

I’ll call out the relevant bits of the file.

chromium chromium-chromedriver python3 python3-dev py3-pip

This line, as you can see, installs chromium and chromium-chromedriver. It also installs pip3 and its dependencies because we need pip3 in order to install Selenium. (If you don’t know, pip3 is a Python package manager.)

Here’s the line that installs Selenium:

RUN pip3 install -U selenium

And here’s the full Dockerfile.

FROM ruby:2.7.2-alpine AS builder

RUN apk add --no-cache \
    build-base libffi-dev \
    nodejs yarn tzdata \
    postgresql-dev postgresql-client zlib-dev libxml2-dev libxslt-dev readline-dev bash \
    # For testing
    chromium chromium-chromedriver python3 python3-dev py3-pip \
    # Nice-to-haves
    git vim

RUN pip3 install -U selenium

FROM builder AS development

# Add the current apps files into docker image
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install any extra dependencies via Aptfile - These are installed on Heroku
# COPY Aptfile /usr/src/app/Aptfile
# RUN apk add --update $(cat /usr/src/app/Aptfile | xargs)

ENV PATH /usr/src/app/bin:$PATH

# Install latest bundler
RUN bundle config --global silence_root_warning 1

CMD ["rails", "server", "-b", "0.0.0.0", "-p", "3000"]

FROM development AS production

COPY Gemfile /usr/src/app
COPY .ruby-version /usr/src/app
COPY Gemfile.lock /usr/src/app

COPY package.json /usr/src/app
COPY yarn.lock /usr/src/app

# Install Ruby Gems
RUN bundle config set deployment 'true'
RUN bundle config set without 'development:test'
RUN bundle check || bundle install --jobs=$(nproc)

# Install Yarn Libraries
RUN yarn install --check-files

# Copy the rest of the app
COPY . /usr/src/app

# Precompile the assets
RUN RAILS_SERVE_STATIC_FILES=enabled SECRET_KEY_BASE=secret-key-base RAILS_ENV=production RACK_ENV=production NODE_ENV=production bundle exec rake assets:precompile

# Precompile Bootsnap
RUN RAILS_SERVE_STATIC_FILES=enabled SECRET_KEY_BASE=secret-key-base RAILS_ENV=production RACK_ENV=production NODE_ENV=production bundle exec bootsnap precompile --gemfile app/ lib/

The next step is to configure a Capybara driver.

Configure a Capybara driver

Below is the Capybara configuration that I use for one of my projects. This configuration is actually identical to the config I used before I started using Docker, so there’s nothing special here related to Alpine Linux, but the Alpine Linux configuration described in this post won’t work without something like this.

# spec/support/chrome.rb

driver = :selenium_chrome_headless

Capybara.server = :puma, {Silent: true}

Capybara.register_driver driver do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument("--window-size=1400,1400")

  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end

Capybara.javascript_driver = driver

RSpec.configure do |config|
  config.before(:each, type: :system) do
    driven_by driver
  end
end

Remember to make sure that the line in spec/rails_helper that includes files from spec/support is uncommented so this file gets loaded.
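If you’re using the standard generated rails_helper.rb, the line in question looks something like this (the exact glob may differ slightly depending on your rspec-rails version):

```ruby
# spec/rails_helper.rb
Dir[Rails.root.join('spec', 'support', '**', '*.rb')].sort.each { |f| require f }
```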

How to Dockerize a Rails application (“hello world” version)

Reasons to Dockerize a Rails application

I had to hear Docker explained many times before I finally grasped why it’s useful.

I’ll explain in my own words why I think Dockerizing my applications is something worth exploring. There are two benefits that I recognize: one for the development environment and one for the production environment.

Development environment benefits

When a new developer joins a team, that developer has to get set up with the team’s codebase(s) and get a development environment running. This can often be time-consuming and tedious. I’ve had experiences where getting a dev environment set up takes multiple days. If an application is Dockerized, spinning up a dev environment can be as simple as running a single command.

In addition to simplifying the setup of a development environment, Docker can simplify the running of a development environment. For example, in addition to running the Rails server, my application also needs to run Redis and Sidekiq. These services are listed in a file in my project, but you have to know that file is there, and you have to know to start the app using foreman start -f with that file. With Docker, you can tell it what services you need to run and Docker will just run the services for you.
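For reference, the kind of file I’m talking about is a Procfile-style list of processes read by foreman. A hypothetical one (these exact entries are made up for illustration) might look like:

```
web: bin/rails server
webpack: bin/webpack-dev-server
redis: redis-server
worker: bundle exec sidekiq
```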

Production environment benefits

There are a lot of different ways to deploy an application to production. None of them is particularly simple. As of this writing, my main production application is deployed to AWS using Ansible for infrastructure management. This is nice in many ways but it’s also somewhat duplicative. I’m using one (currently manual) way to set up my development environment and then another (codified, using Ansible) way to set up my production environment.

Docker allows me to set up both my development environment and production environment the same way, or at least close to the same way. Once I have my application Dockerized I can use a tool like Kubernetes to deploy my application to any cloud provider without having to do a large amount of unique infrastructure configuration myself the way I currently am with Ansible. (At least that’s my understanding. I’m not at the stage of actually running my application in production with Docker yet.)

What we’ll be doing in this tutorial

In this tutorial we’ll be Dockerizing a Rails application using Docker and a tool called Docker Compose.

The Dockerization we’ll be doing will be the kind that will give us a development environment. Dockerizing a Rails app for use in production hosting will be a separate later tutorial.

My aim for this tutorial is to cover the simplest possible example of Dockerizing a Rails application. What you’ll get as a result is unfortunately not robust enough to be usable as a development environment as-is, but will hopefully serve as a good exercise to build your Docker confidence and to serve as a good jumping-off point for creating a more robust Docker configuration.

In this example our Rails application will have a PostgreSQL database and no other external dependencies. No Redis, no Sidekiq. Just a database.


Prerequisites

I’m assuming that before you begin this tutorial you have both Docker and Docker Compose installed. I’m also assuming you’re using a Mac. Nothing else is required.

If you’ve never Dockerized anything before, I’d recommend that you check out my other post, How to Dockerize a Sinatra application, before digging into this one. The other post is simpler because there’s less stuff involved.

Fundamental Docker concepts

Let’s say I have a tiny Dockerized Ruby (not Rails, just Ruby) application. How did the application get Dockerized, and what does it mean for it to be Dockerized?

I’ll answer this question by walking sequentially through the concepts we’d make use of during the process of Dockerizing the application.


Images

When I run a Dockerized application, I’m running it from an image. An image is kind of like a blueprint for an application. The image doesn’t actually do anything; it’s just a definition.

If I wanted to Dockerize a Ruby application, I might create an image that says “I want to use Ruby 2.7.1, I have such-and-such application files, and I use such-and-such command to start my application”. I specify all these things in a particular way inside a Dockerfile, which I then use to build my Docker image.

Using the image, I’d be able to run my Dockerized application. The application would run inside a container.


Containers

Images are persistent. Once I create an image, it’s there on my computer (findable using the docker images command) until I delete it.

Containers are more ephemeral. When I use the docker run command to run an image, part of what happens is that I get a container. (Containers can be listed using docker container ls.) The container will exist for a while and then, when I kill my docker run process, the container will go away.
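The image/container lifecycle can be seen directly from the command line. Here’s a hypothetical session (the image name is made up):

```
$ docker build --tag my-app .   # build an image from a Dockerfile
$ docker images                 # the new image shows up here, persistently
$ docker run my-app             # run the image, which creates a container
$ docker container ls           # in another terminal: the container is listed
```

Once the docker run process is killed, docker container ls no longer shows the container, but docker images still shows the image.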

The difference between Docker and Docker Compose

One of the things that confused me in other Rails + Docker tutorials was the usage of Docker Compose. What is Docker Compose? Why do we need to use it in order to Dockerize Rails?

Docker Compose is a tool that lets you Dockerize an application that’s composed of multiple containers.

When in this example we Dockerize a Rails application that uses PostgreSQL, we can’t use just one image/container for that. We have to have one container for Rails and one container for PostgreSQL. Docker Compose lets us say “hey, my application has a Rails container AND a PostgreSQL container” and it lets us say how our various containers need to talk to each other.

The files involved in Dockerizing our application

Our Dockerized Rails application will have two containers: one for Rails and one for PostgreSQL. The PostgreSQL container can mostly be grabbed off the shelf using a base image. Since certain container needs are really common—e.g. a container for Python, a container for MySQL, etc.—Docker provides images for these things that we can grab and use in our application.

For our PostgreSQL need, we’ll grab the PostgreSQL 11.5 image from Docker Hub. Not much more than that is necessary for our PostgreSQL container.

Our Rails container is a little more involved. For that one we’ll use a Ruby 2.7.1 image plus our own Dockerfile that describes the Rails application’s dependencies.

All in all, Dockerizing our Rails application will involve two major files and one minor one. An explanation of each follows.


Dockerfile

The first file we’ll need is a Dockerfile which describes the configuration for our Rails application. The Dockerfile will basically say “use this version of Ruby, put the code in this particular place, install the gems using Bundler, install the JavaScript dependencies using Yarn, and run the application using this command”.

You’ll see the contents of the Dockerfile later in the tutorial.


docker-compose.yml

The docker-compose.yml file describes what our containers are and how they’re interrelated. Again, we’ll see the contents of this file shortly.


init.sql

This file plays a more minor role. In order for the PostgreSQL part of our application to function, we need a user with which to connect to the PostgreSQL instance. The only way to have a user is for us to create one. Docker allows us to have a file called init.sql which will be executed once per container, ever. That is, init.sql will run the first time we run our container and never again after that.
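A consequence of this run-once behavior: if you change init.sql later, restarting the containers won’t pick up the change. The init script only runs when the database’s data volume is empty, so you’d have to destroy the containers and their volumes to trigger it again:

```
$ docker-compose down --volumes   # removes the containers AND their volumes
$ docker-compose up               # the database initializes from scratch, running init.sql
```

(Note that this wipes your development database.)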

Dockerizing the application

Start from this repo called boats.

$ git clone

The master branch is un-Dockerized. You can start here and Dockerize the app yourself or you can switch to the docker branch which I’ve already Dockerized.


Dockerfile

Paste the following into a file called Dockerfile and put it right at the project root.

# Use the Ruby 2.7.1 image from Docker Hub
# as the base image.
FROM ruby:2.7.1

# Use a directory called /code in which to store
# this application's files. (The directory name
# is arbitrary and could have been anything.)
WORKDIR /code

# Copy all the application's files into the /code
# directory.
COPY . /code

# Run bundle install to install the Ruby dependencies.
RUN bundle install

# Install Yarn.
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update && apt-get install -y yarn

# Run yarn install to install JavaScript dependencies.
RUN yarn install --check-files

# Set "rails server -b 0.0.0.0" as the command to
# run when this container starts.
CMD ["rails", "server", "-b", "0.0.0.0"]


docker-compose.yml

Create another file called docker-compose.yml. Put this one at the project root as well.

# Use the file format compatible with Docker Compose 3.8
version: "3.8"

# Each thing that Docker Compose runs is referred to as
# a "service". In our case, our Rails application is one
# service ("web") and our PostgreSQL database instance
# is another service ("database").
services:
  database:
    # Use the postgres 11.5 base image for this container.
    image: postgres:11.5

    volumes:
      # We need to tell Docker where on the PostgreSQL
      # container we want to keep the PostgreSQL data.
      # In this case we're telling it to use a directory
      # called /var/lib/postgresql/data, although it
      # conceivably could have been something else.
      # We're associating this directory with something
      # called a volume. (You can see all your Docker
      # volumes by running +docker volume ls+.) The name
      # of our volume is db_data.
      - db_data:/var/lib/postgresql/data

      # This copies our init.sql into our container, to
      # a special file called
      # /docker-entrypoint-initdb.d/init.sql. Anything
      # at this location will get executed once per
      # container, i.e. it will get executed the first
      # time the container is created but not again.
      # The init.sql file is a one-liner that creates a
      # user called (arbitrarily) boats_development.
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql

  web:
    # The root directory from which we're building.
    build: .

    # This makes it so any code changes inside the project
    # directory get synced with Docker. Without this line,
    # we'd have to restart the container every time we
    # changed a file.
    volumes:
      - .:/code:cached

    # The command to be run when we run "docker-compose up".
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"

    # Expose port 3000.
    ports:
      - "3000:3000"

    # Specify that this container depends on the other
    # container which we've called "database".
    depends_on:
      - database

    # Specify the values of the environment variables
    # used in this container.
    environment:
      RAILS_ENV: development
      DATABASE_NAME: boats_development
      DATABASE_USER: boats_development
      DATABASE_HOST: database

# Declare the volumes that our application uses.
volumes:
  db_data:

init.sql

This one-liner is our third and final Docker-related file to add. It will create a PostgreSQL user for us called boats_development. Like the other two files, this one can also go at the project root.

CREATE USER boats_development SUPERUSER;


config/database.yml

We’re done adding our Docker files, but we still need to make one change to the Rails application itself. We need to modify the Rails app’s database configuration so that it knows it needs to be pointed at a PostgreSQL instance running in the container called database, not the same container the Rails app is running in.

default: &default
  adapter: postgresql
  encoding: unicode
  database: <%= ENV['DATABASE_NAME'] %>
  username: <%= ENV['DATABASE_USER'] %>
  password: <%= ENV['DATABASE_PASSWORD'] %>
  port: 5432
  host: <%= ENV['DATABASE_HOST'] %>
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  timeout: 5000

development:
  <<: *default

test:
  <<: *default

production:
  <<: *default

Building and running our Dockerized application

Run the following command to build the application we’ve just described in our configuration files.

$ docker-compose build

Once that has successfully completed, run docker-compose up to run our application’s containers.

$ docker-compose up

The very last step before we can see our Rails application in action is to create the database, just like we would if we were running a Rails app without Docker.

$ docker-compose run web rails db:create
$ docker-compose run web rails db:migrate

The docker-compose run command is what we use to run commands inside a container. Running docker-compose run web means “run this command in the container called web“.
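The same pattern works for any one-off command you’d normally run in a terminal, for example:

```
$ docker-compose run web rails console
$ docker-compose run web bundle exec rspec
```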

Finally, open http://localhost:3000 to see your Dockerized Rails application running in the browser.

$ open http://localhost:3000

Congratulations. You now have a Dockerized Rails application.

If you followed this tutorial and it didn’t work for you, please leave a comment with your problem, and if I can, I’ll help troubleshoot. Good luck.

How to Dockerize a Sinatra application

Why we’re doing this

Docker is difficult

In my experience, Dockerizing a Rails application for the first time is pretty hard. Actually, doing anything with Docker seems pretty hard. The documentation isn’t that good. Clear examples are hard to find.

Dockerizing Rails is too ambitious as a first goal

Whenever I do anything for the first time, I want to do the simplest, easiest possible version of that thing before I try anything more complicated. I also never want to try to learn more than one thing at once.

If I try to Dockerize a Rails application without any prior Docker experience, then I’m trying to learn the particulars of Dockerizing a Rails application while also learning the general principles of Docker at the same time. This isn’t a great way to go.

Dockerizing a Sinatra application gives us practice

Dockerizing a Sinatra application lets us learn some of the principles of Docker, and lets us get a small Docker win under our belt, without having to confront all the complications of Dockerizing a Rails application. (Sinatra is a very simple Ruby web application framework.)

After we Dockerize our Sinatra application we’ll have a little more confidence and a little more understanding than we did before. This confidence and understanding will be useful when we go to try to Dockerize a Rails application (which will be a future post).

By the way, if you’ve never worked with Sinatra before, don’t worry. No prior Sinatra experience is necessary.

What we’re going to do

Here’s what we’re going to do:

  1. Create a Sinatra application
  2. Run the Sinatra application to make sure it works
  3. Dockerize the Sinatra application
  4. Run the Sinatra application using Docker
  5. Shotgun a beer in celebration (optional)

I’m assuming you’re on a Mac and that you already have Docker installed. If you don’t want to copy/paste everything, I have a repo of all the files here.

(Side note: I must give credit to Marko Anastasov’s Dockerize a Sinatra Microservice post, from which this post draws heavily.)

Let’s get started.

Creating the Sinatra application

Our Sinatra “application” will have just one file. The application will have just one endpoint. Create a file called hello.rb with the following content.

# hello.rb

require 'sinatra'

get '/' do
  'It works!'
end

We’ll also need to create a Gemfile that says Sinatra is a dependency.

# Gemfile

source 'https://rubygems.org'

gem 'sinatra'

Lastly, for the Sinatra application, we’ll need to add the rackup file, config.ru.

# config.ru

require './hello'

run Sinatra::Application

After we run bundle install to install the Sinatra gem, we can run the Sinatra application by running ruby hello.rb.

$ bundle install
$ ruby hello.rb

Sinatra apps run on port 4567 by default, so let’s open up http://localhost:4567 in a browser.

$ open http://localhost:4567

If everything works properly, you should see “It works!” in the browser.

Dockerizing the Sinatra application

Dockerizing the Sinatra application will involve two steps. First, we’ll create a Dockerfile, which tells Docker how to package up the application. Next, we’ll use our Dockerfile to build a Docker image of our Sinatra application.

Creating the Dockerfile

Here’s what our Dockerfile looks like. You can put this file right at the root of the project alongside the Sinatra application files.

# Dockerfile

FROM ruby:2.7.4

WORKDIR /code
COPY . /code
RUN bundle install

EXPOSE 4567

CMD ["bundle", "exec", "rackup", "--host", "0.0.0.0", "-p", "4567"]

Since it might not be clear what each part of this file does, here’s an annotated version.

# Dockerfile

# Include the Ruby base image from Docker Hub
# in the image for this application, version 2.7.4.
FROM ruby:2.7.4

# Put all this application's files in a directory called /code.
# This directory name is arbitrary and could be anything.
WORKDIR /code
COPY . /code

# Run this command. RUN can be used to run anything. In our
# case we're using it to install our dependencies.
RUN bundle install

# Tell Docker to listen on port 4567.
EXPOSE 4567

# Tell Docker that when we run "docker run", we want it to
# run the following command:
# $ bundle exec rackup --host 0.0.0.0 -p 4567
CMD ["bundle", "exec", "rackup", "--host", "0.0.0.0", "-p", "4567"]

Building the Docker image

All we need to do to build the Docker image is to run the following command.

I’m choosing to tag this image as hello, although that’s an arbitrary choice that doesn’t connect with anything inside our Sinatra application. We could have tagged it with anything.

The . part of the command tells docker build that we’re targeting the current directory. In order to work, this command needs to be run at the project root.

$ docker build --tag hello .

Once the docker build command successfully completes, you should be able to run docker images and see the hello image listed.

Running the Docker image

To run the Docker image, we’ll run docker run. The -p 4567:4567 portion says “take whatever’s on port 4567 on the container and expose it on port 4567 on the host machine”.

$ docker run -p 4567:4567 hello

If we visit http://localhost:4567, we should see the Sinatra application being served.

$ open http://localhost:4567


Congratulations. You now have a Dockerized Ruby application!

With this experience behind you, you’ll be better equipped to Dockerize a Rails application the next time you try to take on that task.

How to launch an EC2 instance using Ansible

What this post covers

In this post I’m going to show what could be considered a “hello world” of Ansible + AWS, using Ansible to launch an EC2 instance.

Aside from the time required to set up an AWS account and install Ansible, you should be able to get your EC2 instance running in 20 minutes or less.

Why Ansible + AWS for Rails hosting?

AWS vs. Heroku

For hosting Rails applications, the service I’ve reached for the most in the past is Heroku.

Unfortunately it’s not always possible or desirable to use Heroku. Heroku can get expensive at scale. There are also sometimes legal barriers due to e.g. HIPAA.

So, for whatever reason, AWS is sometimes a more viable option than Heroku.

The challenges with AWS + Rails

Unfortunately once you leave Heroku and enter the land of AWS, you’re largely on your own in many ways. Unlike with Heroku, there’s not one single way to do your deployment. There are basically infinite possible ways.

One way is to deploy manually, but that has all the disadvantages you can imagine a manual solution would have, such as, most obviously, a bunch of manual work to do each time you deploy.

Another option is to use Elastic Beanstalk. Elastic Beanstalk is kind of like AWS’s answer to Heroku, but it’s not nearly as nice as Heroku, and customizations can be a little tricky/hacky.

The advantages of using Ansible

I’ve been using Ansible for the last several months both on a commercial project and for my own personal projects.

If you’re not familiar with Ansible, here’s a description from the docs: “Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates.”

Ansible is often seen mentioned with similar tools like Puppet and Chef. I went with Ansible because among Ansible, Puppet, and Chef, Ansible had documentation that I could actually comprehend.

I’ve personally been using Ansible for two things so far: 1) provisioning EC2 instances and 2) deploying my Rails application. When I say “provisioning” I mainly mean spinning up an EC2 instance and installing all the software on it that my Rails app needs, like PostgreSQL, Bundler, Yarn, etc.

I like using Ansible because it allows me to manage my infrastructure (at least to an extent so far) as code. Rather than, say, manually installing PostgreSQL and Bundler each time I provision a new EC2 instance, I write playbooks made up of tasks (playbook and task are Ansible terms) that say things like “install PostgreSQL” and “install the Bundler gem”. This makes the cognitive burden of maintenance way lower and it also makes my infrastructure setup less of a black box.
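To make the playbook/task terminology concrete, here’s a hypothetical task of the kind I’m describing (module arguments simplified for illustration):

```yaml
# One task inside a provisioning playbook
- name: Install PostgreSQL
  apt:
    name: postgresql
    state: present
```

A playbook is essentially a list of tasks like this one, run in order against a host.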

Instructions for provisioning an EC2 instance

Before you start you’ll need to have Ansible installed and of course have an AWS account available for use.

For this exercise we’re going to create two files. The first will be an Ansible playbook in a file called launch.yml which can be placed anywhere on your filesystem.

The launch playbook

Copy and paste from the content below into a file called launch.yml placed, again, wherever you want.

Make careful note of the region entry. I use us-west-2, so if you use that region also, you’ll need to make sure to look for your EC2 instance in that region and not in another one.

Also make note of the image entry. I believe EC2 images can vary from region to region, so make sure that the image ID you use does in fact exist in the region you use.

Lastly, replace my_ssh_key_name with the name of the EC2 key pair you normally use to SSH into your EC2 instances. (Note that this is the key pair’s name as it appears in AWS, not a local file path.)

- hosts: localhost
  gather_facts: false
  vars_files:
    - vars.yml

  tasks:
    - name: Provision instance
      ec2:
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        key_name: my_ssh_key_name
        instance_type: t2.micro
        image: ami-0d1cd67c26f5fca19
        wait: yes
        count: 1
        region: us-west-2

The vars file

Rather than hard-coding the entries for aws_access_key and aws_secret_key, which would of course be a bad idea if we were to commit our playbook to version control, we can have a separate file where we keep secret values. This separate file can either be added to a .gitignore or managed with something called Ansible Vault (which is outside the scope of this post).
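If you go the .gitignore route, it’s just a matter of adding the vars file to it, for example (assuming you run this from the directory containing launch.yml and vars.yml):

```shell
# Append the secrets file to .gitignore so it never gets committed
echo "vars.yml" >> .gitignore
```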

Create a file called vars.yml in the same directory where you put launch.yml. This file will only need the two lines below. You’ll of course need to replace my placeholder values with your real AWS access key and secret key.

aws_access_key: XXXXXXXXXXXXXXXX
aws_secret_key: XXXXXXXXXXXXXXXX

The launch command

With our playbook and vars file in place, we can now run the command to execute the playbook:

$ ansible-playbook -v launch.yml

You’ll probably see a couple warnings including No config file found; using defaults and [WARNING]: No inventory was parsed, only implicit localhost is available. These are normal and can be ignored. In many use cases for Ansible, we’re running our playbooks against remote server instances, but in this case we don’t even have any server instances yet, so we’re just running our playbook right on localhost. For whatever reason Ansible feels the need to warn us about this.


If you now open up your AWS console and go to EC2, you should now be able to see a fresh new EC2 instance.

To me this sure beats manually clicking through the EC2 instance launch wizard.

Good luck, and if you have any troubles, please let me know in the comments.