Reader Q&A: Tommy’s question about testing legacy code

Code with Jason subscriber Tommy C. recently wrote in with the following question:

Jason,

So I have found that one of the hurdles to testing beginners face is that the code they are trying to test is not always very testable. This is either because they themselves have written it that way or because they have inherited it. So, this presents a sort of catch-22. You have code with no tests that is hard to test. You can’t refactor the code because there are no tests in place to ensure you have not changed the behavior of the code.

I noticed that you have said that you don’t bother to test controllers or use request specs. I agree that in your situation, since you write really thin controllers that is a good call. However, in my situation, I have inherited controllers that are doing some work that I would like to test. I would like to move that logic out eventually, but right now all I can get away with is adding some test/specs.

These are some of the things that make testing hard for me. When I’m working on a greenfield side project all is good, but you don’t always have clean, testable code to work with.

Thanks,
–Tommy

Chicken/egg problem

Tommy brings up a classic legacy project chicken/egg problem: you can’t add tests until you change the way the code is structured, but you’re afraid to change the structure of the code before you have tests. It’s a seemingly intractable problem but, luckily, there’s a path forward.

The answer is to make use of the Extract Method and Extract Class refactoring techniques (closely related to what Michael Feathers calls Sprout Method and Sprout Class). The idea is that if you come across a method that’s too long to easily write a test for, you just grab a chunk of lines from the method, move it (completely unmodified) into its own method or class, and then write tests around that new method or class. These techniques are a way to be reasonably sure (not absolutely guaranteed, but reasonably sure) that your small change has not altered the behavior of the application.

I learned about Sprout Method/Sprout Class from Michael Feathers and Martin Fowler. I also wrote a post about using Sprout Method in Ruby here.
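To make this concrete, here’s a minimal Ruby sketch of Extract Method. The class, the numbers, and the discount rule are all hypothetical, invented for illustration. The point is that the discount lines move out of the longer method completely unmodified, and only then do they get tests of their own:

```ruby
# Hypothetical "before": imagine #process was one long method.
# We grab the discount-calculation lines and move them, unmodified,
# into their own method so they can be tested in isolation.

class OrderProcessor
  def initialize(items)
    @items = items
  end

  def process
    total = @items.sum { |item| item[:price] * item[:quantity] }
    total - discount(total)
  end

  # Extracted ("sprouted") method: the same lines that used to be
  # inline in #process, now reachable directly from a test.
  def discount(total)
    return total * 0.1 if total >= 100
    0
  end
end
```

A test can now call `discount` directly without setting up everything `process` needs, which is exactly the foothold that’s missing in untested legacy code.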

Request specs

To address controller tests/request specs: Tommy might be referring to a post I wrote where I said most of the time I don’t use controller specs/request specs. (I also wrote about the same thing in my book, Rails Testing for Beginners.) There are two scenarios where I do use request specs, though: API-only projects and legacy projects that have a bunch of logic in the controllers. I think Tommy is doing the exact right thing by putting tests around the controller code and gradually moving the code out of the controllers over time.
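As a sketch of what such a request spec might look like (the controller, routes, FactoryBot factories, and Devise’s sign_in helper are all assumptions for illustration, not details from Tommy’s app):

```ruby
# spec/requests/invoices_spec.rb
# A hypothetical request spec pinning down the current behavior of a
# legacy controller before any logic gets moved out of it.
require "rails_helper"

RSpec.describe "Invoices", type: :request do
  describe "GET /invoices" do
    it "returns only the current user's invoices" do
      user = create(:user)
      invoice = create(:invoice, user: user)
      other_invoice = create(:invoice)

      sign_in user
      get "/invoices"

      expect(response.body).to include(invoice.number)
      expect(response.body).not_to include(other_invoice.number)
    end
  end
end
```

Once specs like this exist, the logic can be extracted out of the controller piece by piece with some confidence that behavior hasn’t changed.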

If you, like Tommy, are trying to put tests on a legacy project and finding it difficult, don’t despair. It’s just a genuinely hard thing. That’s why people have written entire books about it!

Do you have a question about testing Rails legacy code, or about anything else to do with testing? Just email me at jason@codewithjason.com or tweet me at @jasonswett. If I’m able to, I’ll write an answer to your question just like I did with Tommy’s.

What if I want to do test-first but I find it easier to do test-after?

Recently Code with Jason reader Kaemon L. wrote me with the following question:

“As a beginner, is it better to write tests before you code to make it pass? Or is it better to code first, write tests for the code to pass, and then add more tests as you come across bugs? In my experience so far learning RSpec, I’ve found it easier to code first and then write tests afterwards. Only because when I would try to write tests first I wasn’t exactly sure what needed to be tested, or how I was planning to write the code.”

This is a great question. In addressing this question I find it useful to realize that when you’re learning testing you’re actually embarking on two parallel endeavors.

The two parallel endeavors of learning testing

The two parallel endeavors are:

1. Writing tests (outcome)
2. Learning how to write tests (education)

I think it’s useful to make the distinction between these two parts of the work. If you realize that achieving the desired outcome is only half your job, and the other half is learning, then it frees you up to do things “incorrectly” for the sake of moving forward.

With that out of the way, what’s actually better? Writing tests first or after?

What’s better, test-first or test-after?

I’m not sure that it makes sense to frame this question in terms of “better” or “worse”. When I think of test-driven development, I don’t think of it as “better” than test-after in all situations, I think of TDD as having certain advantages in certain scenarios.

What are the advantages of test-driven development?

TDD can separate the what from the how. If I write the test first, I can momentarily focus on what I want to accomplish and relieve my mind of the chore of thinking of how. Then, once I switch from writing the test to writing the implementation, I can free my mind of thinking about everything the feature needs to do and just focus on making the feature work.

TDD increases the chances that every single thing I’ve written is covered by a test. The “golden rule” of TDD (which I don’t always follow) is said to be “never write any new code without a failing test first”. If I follow that, I’m virtually guaranteed 100% test coverage.

TDD forces me to write easily-testable code. If I write the test first and the code after, I’m forced to write code that can be tested. There’s no other way. If I write the code first and try to test it afterward, I might find myself in a pickle. As a happy side benefit, code that’s easily testable happens to also usually be easy to understand and to work with.

TDD forces me to have a tight feedback loop. I write a test, I write some code. I write another test, I write some more code. When I write my tests after, I’m not forced to have such a fast feedback loop. There’s nothing stopping me from coding for hours before I stop myself and write a test.
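Here’s a minimal sketch of that loop in plain Ruby (the Slug module and its rules are invented for illustration; in a Rails project this would be an RSpec example). The test states what before the implementation decides how:

```ruby
# A tiny stand-in for a test framework.
def assert(condition, message)
  raise message unless condition
end

# Red: this test is written before Slug.generate exists. Running it
# first and watching it fail proves the test is capable of failing.
def test_slug_generation
  assert Slug.generate("Hello, World!") == "hello-world",
         "expected a lowercase, hyphenated slug"
end

# Green: now write just enough implementation to make the test pass.
module Slug
  def self.generate(title)
    title.downcase.gsub(/[^a-z0-9]+/, "-").gsub(/\A-|-\z/, "")
  end
end

test_slug_generation
```

The loop then repeats: another small test, another small slice of implementation, never more than a few minutes between feedback.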

If I choose to write my tests after my application code instead of before, I’m giving up the above benefits. But that doesn’t mean that test-after is automatically an inferior workflow in all situations.

Learning how to test (process) vs. producing tests (result)

Let’s go back to the two parallel endeavors listed above: writing tests and learning how to write tests. If I’m trying to write tests as I’m writing features and I just can’t figure out how to write the test first, then I have the following options:
1. Try to plow through and somehow write the test first anyway
2. Give up and don’t write any tests
3. Write the tests after

If #1 is too hard and I’m just hopelessly stuck, then #3 is a much better option than #2. Especially if I make a mental shift and switch from saying “I’m trying to write a test” to saying “I’m trying to learn how to write tests”. If all I’m trying to do is learn how to write tests, then anything goes. There’s literally nothing at all I could do wrong as part of my learning process, because the learning process is a separate job from producing results.

What about later?

Lastly, what if I get to the stage in my career where I’m fully comfortable with testing? Is TDD better than test-after? I would personally consider myself fully comfortable with testing at this stage in my career (although of course no one is ever “done” learning). I deliberately do not practice TDD 100% of the time. Sometimes I just find it too hard to write the test first. In these cases sometimes I’ll do a “spike” where I write some throwaway code just to get a feel for what the path forward might look like. Then I’ll discard my throwaway code afterward and start over now that I’m smarter. Other times I’ll just begin with the implementation and keep a list of notes like “write test for case X, write test for case Y”.

TDD: Advantageous, indispensable, but not universally “better”

To sum it all up: I’m not of the opinion that TDD is a universally superior workflow to non-TDD. I don’t think it’s important to hold oneself to TDD practices when learning testing. But once a person does reach a point of being comfortable with testing, TDD is an extremely valuable methodology to follow.

Five benefits of automated testing

Why do I bother writing tests? I can think of five reasons why writing tests is helpful. There are more sub-reasons under these reasons, but I think any other benefit of testing can be traced back to one of these five.

Testing helps prevent bugs

I personally find that the process of writing tests gets me into a mindset of trying to think of all the paths through a piece of code, including all the ways the feature I’m writing could be abused. This means that a feature I’ve written tests for is less likely to be buggy than a feature I haven’t written tests for.

Testing helps prevent regressions

It never ceases to amaze me (and humble me and embarrass me) how frequently I’ll write a new piece of code and then discover that my addition breaks some older, supposedly rock-solid part of the application. To take a recent example, I recently added a database column in an application, an act which broke a static page! Automated testing is great for catching unexpected regressions like this.

Testing helps improve design

For the most part, code that is modular and loosely coupled is easily testable. Incidentally, code that is modular and loosely coupled also happens to tend to be easily understandable. Testing is often said to encourage “good design”, which is another way of saying that it encourages code that is easily understandable.

Testing helps enable refactoring

Because automated tests help catch regressions, tests help make refactoring possible. Refactoring is not only helpful, it’s necessary. As an application grows, it’s impossible for the entire codebase to stay cohesive and DRY without occasionally taking a step back, observing any repetition and lack of cohesion, and refactoring to bring the codebase back to an acceptably DRY and cohesive state.

Testing aids the understandability of the codebase by serving as documentation

If I look at a piece of code I wrote 8 months ago and I don’t remember what it does, I can look to the test I wrote to help me understand what the code is supposed to do.
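As a sketch of what that looks like, here’s a tiny, hypothetical class whose tests read as a plain-language description of its behavior (plain Ruby stands in for RSpec here; the class and its rules are invented):

```ruby
# A hypothetical late-fee rule. Eight months from now, the test names
# below answer "what is this code supposed to do?" faster than
# re-reading the implementation would.
class LateFee
  GRACE_PERIOD_DAYS = 5

  def self.for(days_overdue)
    return 0 if days_overdue <= GRACE_PERIOD_DAYS
    (days_overdue - GRACE_PERIOD_DAYS) * 2
  end
end

# Each test name states one behavior in plain language.
def charges_nothing_within_the_grace_period
  raise unless LateFee.for(5) == 0
end

def charges_two_dollars_per_day_after_the_grace_period
  raise unless LateFee.for(8) == 6
end

charges_nothing_within_the_grace_period
charges_two_dollars_per_day_after_the_grace_period
```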

What testing doesn’t do

Notice how each of my headings starts with “testing helps” or “testing aids“. Despite these very real and valuable benefits of testing listed above, testing does not provide guarantees. Tests don’t catch every bug or regression. Tests don’t (and can’t) prove the absence of bugs. Tests can’t tell you whether your UI is user-friendly or whether your product has business value. But tests can save you a heck of a lot of time and money and toil.

Continuous integration

As I’ve learned more and more about testing I’ve learned that there’s way more to testing than just automated tests. There are other practices which, when combined with writing automated tests, serve to make the development process smoother and improve the quality of the product the development team is working on.

One such practice is continuous integration. Perhaps the easiest way for me to explain what continuous integration (CI) is would be to describe its opposite.

The opposite of continuous integration

In 2008 I worked at a higher-education software startup in Austin, Texas. As I recall it, our developers tended to work in feature branches that would live for up to a few weeks. About once a month or so we would hold something called a “merge party”. I found the name funny because parties are supposed to be fun but merge parties were a fucking nightmare.

Here’s what would happen at a merge party. The development team (about 10 people) would get in a conference room. Someone would pull up some code on a screen. We’d attempt a merge and get a merge conflict. Anyone whose code was involved in the merge conflict would squint at the screen and try to judge whether the “left” code or the “right” code was the correct version. We’d repeat that process about 95 times until everyone wanted to kill themselves. You might wonder: How did we know, at the end of the merge party, whether our decisions were correct and the code still worked? Good question. There was no way to know. We just had to hope to hell we were right.

Later, in 2013, I worked at a healthcare startup. We used CircleCI there (a great tool BTW) and so our development manager believed that this automatically meant we were practicing CI. We weren’t, though. Once I was forced to code on a feature branch for about three months. I don’t recall the exact process of merging that branch back in but I certainly remember that dealing with merge conflicts at that job was part of our regular workflow.

So if I’m working on a team where we have long-living feature branches (that is, several days or more), then that’s not continuous integration, because the integrations happen infrequently. In continuous integration the integration happens continuously.

What continuous integration actually is

Continuous integration is when developers stitch their work together as frequently as possible.

The benefit of CI is that it reduces the pain of the integration process. There’s a saying: “When something hurts, do it more often.” The more frequently a team integrates their work, the less stuff there is to integrate. Merge conflicts will be less frequent and less severe, and when they happen, the code will be fresh in the developers’ minds and so the merge conflict resolution work will be fairly trivial. But in my experience, when a team practices CI, merge conflicts just don’t happen that much.

What continuous integration has to do with testing

A CI server like CircleCI or Jenkins tends to be associated with running tests. The idea here is that you always want to be sure that the test suite is passing on master, so you check the test suite after every single merge.

I want to be clear though that CI could certainly be practiced without having automated tests and that I think it would still be a really good idea. Merging infrequently without tests would still be more painful and risky than merging frequently. But given the choice, I’d of course prefer to verify that my post-integration master branch is in a working state by running a test suite rather than by running manual tests.

You could say that your test suite is your mechanism for ensuring with a reasonable confidence that each integration was successful.

The difference between continuous integration, continuous deployment and continuous delivery

Continuous integration and continuous deployment are often mentioned together. Continuous integration means always keeping a small difference between the master branch and any feature branch. Continuous deployment means always keeping a small difference between what’s in the production environment and what’s in the development environment.

CI and CD are beneficial for basically the same reasons. Small changes are less risky and less of a headache than big changes.

What’s the difference between continuous deployment and continuous delivery? Think about the difference between a Rails application and an iOS application. We can deploy a Rails application however frequently we want. There’s nothing stopping us from deploying, say, 50 times a day. But an iOS app has the whole review process and stuff. You can’t just deploy it willy-nilly. But that doesn’t mean you can’t always have a complete and working build sitting on the master branch of your iOS project. That’s continuous delivery – always having something ready to go. If you’re practicing continuous deployment, you’re practicing continuous delivery. But you can practice continuous delivery without necessarily practicing continuous deployment.

How to See Your Feature Specs Run in the Browser

When you write feature specs you can either have them run headlessly (i.e. invisibly) or not-headlessly, where you can actually see a browser instance being spun up and see your tests run in the browser.

I might be in the minority but I prefer to see the tests run in the browser, especially when I’m in the process of developing the tests.

Non-headless test running can be enabled with the following two steps.

First, add this to `spec/rails_helper.rb`:

Capybara.default_driver = :selenium_chrome

Then add these gems to the `Gemfile`:

group :development, :test do
  gem 'selenium-webdriver'
  gem 'chromedriver-helper'
end

And don’t forget to run `bundle install`.

Now, when you run your tests, they should come up in Chrome.
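If you want something to watch, here’s a hypothetical feature spec to try the setup with (the path, field labels, and page content are assumptions, not from a real app). With the driver set to `:selenium_chrome`, running it spins up a visible Chrome window and clicks through the form on its own:

```ruby
# spec/features/sign_in_spec.rb
# A minimal feature spec to exercise the non-headless driver.
require "rails_helper"

RSpec.describe "Signing in", type: :feature do
  it "greets the user after a successful sign-in" do
    visit "/sign_in"
    fill_in "Email", with: "user@example.com"
    fill_in "Password", with: "password"
    click_button "Sign in"

    expect(page).to have_content("Welcome back")
  end
end
```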

Reverse job ad

I’m looking for a new job. Inspired by this reverse job ad, I decided to create one of my own.

Who I am

I’m Jason Swett, software engineer. I’ve been coding since the ’90s. I’ve taught programming in Nigeria, Bulgaria, the Netherlands and even Missouri. I’m the host of the Ruby Testing Podcast and author of Angular for Rails Developers. Most of my work over the last six years has been in Ruby on Rails. I’m primarily a back-end engineer although I’ve done my fair share of JavaScript work as well.

What I’m looking for in my next role

In my next role I can see myself doing any combination of the following things:

  • Training and mentoring junior developers
  • Developing and documenting the organization’s processes (e.g. incident response process)
  • Helping foster a healthy relationship between engineering and other parts of the organization
  • Helping engineering follow Agile development methodologies (pragmatically, not dogmatically)
  • Helping engineering follow best practices like TDD, continuous integration and continuous delivery
  • Mopping the floor

Why I’m looking for a new job

While I’m quite happy at the job I’m working now, there are two logistical problems. First, I live in Eastern time, but the company is located on the west coast, and I often have to work Pacific hours. Second, I’m one of the only remote people at the company which I find a little bit isolating.

So in my next role I’m looking for something that will allow me to work remotely from Michigan. I’d prefer to work for a remote-first company if possible. (This is not a hard requirement though.)

I would also prefer to work for a product company as opposed to a development agency. I like to be able to work continuously on one single thing for a long period of time as opposed to shipping a project for a client and never seeing it again.

Want to talk?

If you’d like to talk about working together, my email address is jason@codewithjason.com.

How to Deploy a Rails Application with an Angular CLI Webpack Front-End

I’ve written previously about how to deploy an Angular CLI Webpack project without Rails.

I’ve also written about how to deploy an Angular 2/Rails 5 project, but not one that uses Angular CLI Webpack.

Following are instructions for deploying a Rails 5 app with an Angular 2 front-end that was generated with Angular CLI, Webpack version. (Specifically, Angular CLI version 1.0.0-beta.11-webpack.8.)

Create the Rails App

First, create an API-only Rails application.

Create the Angular App

Before you create the Angular app, make sure you have the following versions of the following things installed:

Angular CLI: 1.0.0-beta.11-webpack.8
NPM: 3.10.6
Node: 6.5.0

Then, just like I had us do in the Angular-only version of this post, we’ll create an Angular app with the silly name of bananas.

In this case it’s important that we call the Angular directory client. Don’t worry about why right now. I’ll explain shortly.

$ ng new bananas
$ mv bananas client

And also just like in the Angular-only version, we’ll want to make sure we can run ng-build without problems.

$ cd client
$ ng build

If you try it and it doesn’t work (which is very likely), just refer to the other post for how to fix it.

Modify package.json

We need to do a few things to package.json. Don’t worry if you don’t understand every item. I’ve shared my package.json below which you can just copy and paste if you want.

We want Heroku to run an ng build for us after it installs the Node dependencies. So we need to add this line:

"heroku-postbuild": "ng build",

We also need to move our devDependencies into dependencies because we need some of them in production.

We’ll want to remove the "start": "ng serve" script because it doesn’t apply.

Lastly, an absence of node-gyp will cause error messages to appear. So we’ll add this line:

"preinstall": "npm install -g node-gyp",

Following is what my package.json looks like.

{
  "name": "bananas",
  "version": "0.0.0",
  "license": "MIT",
  "angular-cli": {},
  "scripts": {
    "lint": "tslint \"src/**/*.ts\"",
    "test": "ng test",
    "pree2e": "webdriver-manager update",
    "e2e": "protractor",
    "preinstall": "npm install -g node-gyp",
    "heroku-postbuild": "ng build"
  },
  "private": true,
  "dependencies": {
    "@angular/common": "2.0.0-rc.5",
    "@angular/compiler": "2.0.0-rc.5",
    "@angular/core": "2.0.0-rc.5",
    "@angular/forms": "0.3.0",
    "@angular/http": "2.0.0-rc.5",
    "@angular/platform-browser": "2.0.0-rc.5",
    "@angular/platform-browser-dynamic": "2.0.0-rc.5",
    "@angular/router": "3.0.0-rc.1",
    "core-js": "^2.4.0",
    "rxjs": "5.0.0-beta.11",
    "ts-helpers": "^1.1.1",
    "zone.js": "0.6.12",
    "@types/jasmine": "^2.2.30",
    "angular-cli": "1.0.0-beta.11-webpack.8",
    "codelyzer": "~0.0.26",
    "jasmine-core": "2.4.1",
    "jasmine-spec-reporter": "2.5.0",
    "karma": "0.13.22",
    "karma-chrome-launcher": "0.2.3",
    "karma-jasmine": "0.3.8",
    "karma-remap-istanbul": "^0.2.1",
    "protractor": "4.0.3",
    "ts-node": "1.2.1",
    "tslint": "3.13.0",
    "typescript": "2.0.0"
  },
  "devDependencies": {
  }
}

Create public symlink

When someone visits /index.html in the browser, the file they’ll actually be served is Rails’ public/index.html. We can do a little trick where we symlink the public directory to client/dist. That way when someone visits /index.html they’ll be served client/dist/index.html, thus loading the Angular app.

Let’s kill the public directory and replace it with a symlink.

$ rm -rf public
$ ln -s client/dist public

Create the Heroku app

I’m assuming you have a Heroku account and you have Heroku CLI installed.

$ heroku create

Specify the buildpacks

Lastly, we’ll tell Heroku to use two particular buildpacks:

$ heroku buildpacks:add https://github.com/jasonswett/heroku-buildpack-nodejs
$ heroku buildpacks:add heroku/ruby

Remember when I said we needed to call our Angular directory client? That’s because of my custom Node buildpack. I’ve modified Heroku’s Node buildpack to look for package.json in client rather than at the root. This is what allows us to nest an Angular app inside of a Rails app and still have Heroku do what it needs to do with each.

Deploy the app

Make sure your code is all committed and do a git push.

$ git push heroku master

When it’s done, open the app. You should see “app works!”.

$ heroku open

If you want, I have a full repo with the code used in this example.

How to Deploy an AngularJS/Rails Single-Page Application to Heroku

Note: this post is somewhat aged. You may be interested to check out the Rails 5/Angular 2 version, which still applies somewhat to Rails 4/Angular 1.x.

Why deploying a single-page application is different

Before I explain how to deploy an Angular/Rails application to Heroku, it might make sense to explain why deploying a single-page application (SPA) is different from deploying a “traditional” web application.

The way I chose to structure the SPA in this example is to have all the client-side code a) outside the Rails asset pipeline and b) inside the same Git repo as Rails. I have a directory called client that sits at the top level of my project directory.

Gemfile
Gemfile.lock
README.rdoc
Rakefile
app
bin
client <-- This is where all the client-side code lives.
config
config.ru
db
lib
log
node_modules
spec
test
tmp
vendor

When I’m in development mode, I use Grunt to spin up a) a Rails server, which simply powers an API, and b) a client-side server.

In production the arrangement is a little different. In preparation for deployment, the grunt build command poops out a version of my client-side app into public. Rails will of course check for a file at public/index.html and, if one exists, serve that as the default page. In fact, if you run grunt build locally, spin up a development server and navigate to http://localhost:3000, you’ll see your SPA served there.

But it would be pretty tedious to have to manually run grunt build before each deployment. And even if you somehow automated that process so grunt build was run before each git push heroku master, it wouldn’t be ideal to check all the code generated by grunt build into version control.

Heroku’s deployments are Git-based. The version of my client-side app that gets served will never be checked into Git. This is the challenge.

Automatically building the client-side app on deployment

Fortunately, there is a way to tell Heroku to run grunt build after each deployment.

First let’s get grunt build functioning locally so you can see how it works.

Change the line `dist: 'dist'` to `dist: '../public'` under the `var appConfig` section of `client/Gruntfile.js`. For me this is found on line 19.

Now remove the public directory from the filesystem and from version control. (Adding public to .gitignore is not necessary.)

$ rm -rf public

If you now run grunt build, you’ll see the public directory populated with files. This is what we want to have happen in production each time we deploy our app.

Configuring the buildpacks

Next you’ll want to add a file at the root level of your project called .buildpacks that uses both Ruby and Node buildpacks:

https://github.com/jasonswett/heroku-buildpack-nodejs-grunt-compass
https://github.com/heroku/heroku-buildpack-ruby.git

You can see that I have my own fork of the Node buildpack.

In order to deploy this you might need to adjust your client/package.json and move all your devDependencies to just regular dependencies. I had to do this. Here’s my client/package.json:

// client/package.json

{
  "name": "lunchhub",
  "version": "0.0.0",
  "description": "The website you can literally eat.",
  "dependencies": {
    "source-map": "^0.1.37",
    "load-grunt-tasks": "^0.6.0",
    "time-grunt": "^0.3.1",
    "grunt": "^0.4.1",
    "grunt-autoprefixer": "^0.7.3",
    "grunt-concurrent": "^0.5.0",
    "grunt-connect-proxy": "^0.1.10",
    "grunt-contrib-clean": "^0.5.0",
    "grunt-contrib-compass": "^0.7.2",
    "grunt-contrib-concat": "^0.4.0",
    "grunt-contrib-connect": "^0.7.1",
    "grunt-contrib-copy": "^0.5.0",
    "grunt-contrib-cssmin": "^0.9.0",
    "grunt-contrib-htmlmin": "^0.3.0",
    "grunt-contrib-imagemin": "^0.7.0",
    "grunt-contrib-jshint": "^0.10.0",
    "grunt-contrib-uglify": "^0.4.0",
    "grunt-contrib-watch": "^0.6.1",
    "grunt-filerev": "^0.2.1",
    "grunt-google-cdn": "^0.4.0",
    "grunt-newer": "^0.7.0",
    "grunt-ngmin": "^0.0.3",
    "grunt-protractor-runner": "^1.1.0",
    "grunt-rails-server": "^0.1.0",
    "grunt-shell-spawn": "^0.3.0",
    "grunt-svgmin": "^0.4.0",
    "grunt-usemin": "^2.1.1",
    "grunt-wiredep": "^1.7.0",
    "jshint-stylish": "^0.2.0"
  },
  "engines": {
    "node": ">=0.10.0"
  },
  "scripts": {
    "test": "grunt test"
  }
}

I also registered a new task in my Gruntfile:

// client/Gruntfile.js

grunt.registerTask('heroku:production', 'build');

You can just put this near the bottom of the file next to all your other task registrations.

And since the Node buildpack will be using $NODE_ENV when it runs its commands, you need to specify the value of $NODE_ENV:

$ heroku config:set NODE_ENV=production

Lastly, tell Heroku about your custom buildpack (thanks to Sarah Vessels for catching this):

$ heroku config:add BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git

After you have all that stuff in place, you should be able to do a regular git push to Heroku and have your SPA totally work.

Don’t Use an Amateur Email Address

Have you ever gotten a business card from, e.g., a hair stylist and the email address on the card was something like kaylas_hairdesigns@gmail.com? An email address like that just screams “I’m an amateur. I don’t take myself seriously enough to invest the 10 minutes and 14 dollars that it would take to register my own domain.”

At the time of this writing, the email subscriber list for CodeWithJason.com has a little over 100 people on it. I would estimate that over 90% of the subscribers have an @gmail.com, @yahoo.com or @hotmail.com email address. These are people who supposedly care enough about their job search to sign up to get regular emails from me about it. Yet they apparently haven’t bothered to get themselves a non-Gmail email address. If that’s the state of affairs with this sample, what’s it like in the general population? Probably pretty fucking bad.

The reason I bring this up is to illustrate that if you get yourself a custom email address, you’re probably already ahead of 90% or more of your job-seeking competitors. The bar is low out there.

Some people have asked me what a good domain to register might be. For me personally, I have my own S Corp through which I do most of my freelance programming work. The name of my one-man business is Ben Franklin Labs. The domain I’ve registered is benfranklinlabs.com and my email address is jason@benfranklinlabs.com. That’s a good example for somebody like me who has his or her own business entity. I realize that it’s likely that you do not.

It’s probably more likely that you normally work as a full-time employee as opposed to a contractor like me. In that case, I would just recommend using .com. One of my personal domains is jasonswett.net (jasonswett.net instead of jasonswett.com because I wanted it to rhyme) and the email address I use under that domain is jason@jasonswett.net. It doesn’t matter a whole bunch what you choose. Just about anything is better than a Gmail address.