My hybrid approach to Dockerizing Rails applications

by Jason Swett

Why I started using Docker

I was a latecomer to the Docker craze. I simply didn’t have a problem that I felt like Docker would solve for me.

Then, in 2020, a scenario arose for me that finally made me feel a need for Docker. My team at work was growing. I was finding that onboarding new developers was painful due to all the setup work we had to do on each new developer’s machine. I suspected that Docker could ease my pain.

The details of my Docker use case

First, I want to explain the exact use case I used Docker for. Broadly, there are two reasons to Dockerize a Rails application: to help with your development environment or to help with your production infrastructure. My use case was the former: making my development environment easier to set up.

I was tired of having each developer install RVM, Ruby, RubyGems, PostgreSQL, Redis, etc. on their machine. I wanted new developers to be able to run a single command and have a complete development environment ready to go.

My good experiences with Docker

Regarding the goal of a complete development environment ready to go, I must say that Docker delivered on its promise. It took a lot of difficult work, but once I Dockerized my Rails application, it just worked. It was magical.

It was really helpful not to have to spend $HOURS to $DAYS getting a new developer set up with the development environment. It was nice not having to pair with people while we googled esoteric errors that only came up on their machine.

It was also really nice not to have to juggle services on my local machine. No more manually starting Rails, Sidekiq and Redis separately.

My bad experiences with Docker

Sadly, Docker’s benefits didn’t come without costs. Here are the downsides, in roughly descending order of severity.

Worse performance when running Rails commands

When a Rails app is Dockerized, you can no longer run commands like rails db:migrate directly, because your local computer (the “host machine”) is no longer the one running Rails; your Docker container is. Instead, you have to run docker-compose run web rails db:migrate.
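For context, the web service in that fully Dockerized setup is the Rails app itself, defined in docker-compose.yml alongside the database and Redis. Here’s a rough, hypothetical sketch of what such a service typically looks like (this isn’t my actual config; the mount path and port are made up for illustration):

# (nested under the services: key of docker-compose.yml)
web:
  build: .                                             # build the app's own Dockerfile
  command: bundle exec rails server -b 0.0.0.0 -p 3000
  volumes:
    - .:/app:cached   # bind-mount the app source into the container
  ports:
    - "3000:3000"
  depends_on:
    - postgres
    - redis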

Running Rails commands via Docker Compose took noticeably longer than running Rails commands straight on the host machine. These commands were so sluggish as to be intolerable.

Since tests are a big part of the workflow where I work, it was a real drag to have to pay this “performance tax” every time I wanted to run a test.

Worse performance when interacting with the app in the browser

Clicking around on stuff in the browser was slower as well. This issue wasn’t quite as bad as the command line issue but it was still bad enough to be a noticeable bummer.

No easy way to run tests non-headlessly

Most of the time I run my system specs headlessly. But it’s not uncommon for me to run across a difficult test which I need to run non-headlessly in order to see what’s going on with it.

Running tests non-headlessly directly on my host machine works fine. Trying to get my Docker container to open a browser for me, though, was a nightmare, and I never did get it figured out.

No easy binding.pry

Since the docker-compose up command runs all the services under a single parent process, you can’t just stick a binding.pry in your code and drop into a console inside the rails server process the way you can when you’re not using Docker. To be fair, I understand there are ways around this, and I didn’t try very hard to solve this particular problem, so I might be griping about nothing with this one.
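For what it’s worth, the workaround I’ve most often seen described (though I haven’t battle-tested it) is to give the Rails service a TTY and then attach to its container. A sketch, assuming a web service like the one sketched earlier:

# (additions to the hypothetical web service in docker-compose.yml)
web:
  tty: true          # allocate a pseudo-TTY for the rails server process
  stdin_open: true   # keep STDIN open, like `docker run -i`
# After `docker-compose up`, attach to the running container to get the
# binding.pry prompt:
#   docker attach $(docker-compose ps -q web)
# Detach again with Ctrl-P then Ctrl-Q so the container keeps running.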

The hybrid Docker solution I developed instead

The solution I ultimately landed on was a hybrid approach. I decided to run Rails natively and use Docker only for PostgreSQL and Redis. That way I don’t have that impenetrable seal between me and my Rails application, but I still don’t have to manually install and run PostgreSQL and Redis.

That still leaves Sidekiq and webpack-dev-server to run. Luckily, I found an easy fix: I just reverted to an old-fashioned solution, Foreman.

My Docker Compose config

Here’s what my docker-compose.yml looks like.

---
# The Compose 2.4 file format is intended for local development.
# Source: https://www.heroku.com/podcasts/codeish/57-discussing-docker-containers-and-kubernetes-with-a-docker-captain
version: '2.4'

services:
  postgres:
    image: postgres:13.1-alpine
    mem_limit: 256m
    volumes:
      - postgresql:/var/lib/postgresql/data:delegated
    ports:
      - "127.0.0.1:5432:5432"
    environment:
      PSQL_HISTFILE: /root/log/.psql_history
      POSTGRES_USER: mednote_development
      POSTGRES_PASSWORD: pgpassword
    restart: on-failure
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 2s
      retries: 10
    logging:
      driver: none

  redis:
    image: redis:4.0.14-alpine
    mem_limit: 64m
    volumes:
      - redis:/data:delegated
    ports:
      - "127.0.0.1:6379:6379"
    restart: on-failure
    logging:
      driver: none

volumes:
  postgresql:
  redis:
  storage:

That’s all I need for the Docker portion.
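On the Rails side, nothing special is needed to talk to these containers: since both publish their ports on localhost, the app connects to them the same way it would connect to natively installed services. Here’s a sketch of what the development section of config/database.yml can look like with the credentials from the compose file above (your real file may read these values from environment variables instead):

default: &default
  adapter: postgresql
  encoding: unicode
  host: localhost               # published by the postgres container above
  port: 5432
  username: mednote_development # matches POSTGRES_USER
  password: pgpassword          # matches POSTGRES_PASSWORD
  pool: 5

development:
  <<: *default
  database: mednote_development

Redis works the same way: Sidekiq just points at redis://localhost:6379, which the redis container publishes.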

My Procfile.dev

docker:    docker-compose up
web:       bundle exec puma -p 3000
worker:    bundle exec sidekiq
webpacker: bundle exec bin/webpack-dev-server

I can run all these things by doing foreman start -f Procfile.dev. Then, in a separate terminal window (or rather, a separate tmux pane) I run rails server so I can still use binding.pry tidily.

The trade-offs

The downside of this approach is that each developer still has to install RVM, Ruby, and RubyGems manually. But to me that’s a much smaller downside than the ones I experienced when my app was “fully” Dockerized.

Takeaways

  • Docker can be a great help in making it easier for each developer to get set up with a development environment.
  • Dockerizing an app “fully” can unfortunately come with some side effects that you may find unacceptable.
  • Using a hybrid approach can provide a sensible balance between the areas where Docker helps and the areas where working outside of Docker is better.

3 thoughts on “My hybrid approach to Dockerizing Rails applications”

  1. Francisco Quintero

    Once I thought about Dockerizing half the application but never tried it. It’s great to know it’s possible, considering that a fully Dockerized RoR app is a PITA to work with.

    For RVM, Ruby, and all the stuff that still needs to be installed, I was so tired of copy-pasting that I wrote a script that does it for me and works almost every time with few modifications.

    I’m copying this new technique into my scripts 😀

  2. David Backeus

    Provided you only need to support MacOS development environments, automating dev env bootstrapping via the conventional bin/setup script together with a Brewfile and a version manager like asdf or rbenv is pretty straightforward.

    I’ve personally spent multiple days debugging docker environments on MacOS and Windows. Mostly due to docker runtime not being 100% stable outside of Linux. But also due to issues with files leaking in from the host file system due to over-mounting.

    What exact issues have you encountered that would take “days” to solve when setting up a Rails app without docker?

  3. alex

    I am currently working on dockerizing a large rails app. Not surprised performance takes a hit but somehow I’d have expected less damage.

    If you don’t mind my asking, what steps have you taken to optimize the stack? what ruby version? did you use puma or nginx/apache? malloc configuration? code caching? caching via memcached/redis? what machine are you testing on? what’s the app’s memory footprint?

