Author Archives: Jason Swett

Do I have to code in my free time in order to be a good programmer?

In programming interviews, job candidates are sometimes asked what kinds of side projects they work on in their spare time. The supposed implication is that if you work on side projects in your free time then that’s good, and if you don’t that’s bad.

This idea has led to a somewhat lively debate: do you have to code in your free time in order to be a good programmer?

The popular answer is an emphatic no. You can put in a solid 8-hour workday, do a kick-ass job, and then go home and relax, knowing you’re fulfilling all of your professional obligations. And actually, you might even be a better programmer for it, because you’re not running yourself ragged and burning yourself out.

But actually, both this question and the standard answer are misguided. In fact, they miss the point so thoroughly that they can’t even be called wrong. I’ll explain what I mean.

Drumming in your free time

Imagine I’m a professional drummer. I make my living by hiring out my drumming services at bar shows, weddings and parties. I’m a very competent drummer although maybe not a particularly exceptional one.

Imagine how funny it would be for me to go on an online forum and ask, “Do I have to practice drumming in my free time in order to be a good drummer?”

I can imagine a couple of inevitable responses. First of all, who’s this imaginary authority who’s going around handing down judgments about who’s a good drummer and who isn’t? And second, yes, of course you have to spend some time practicing if you want to get good, especially when you’re first starting out.

The question reveals a very confused way of looking at the whole situation.

The reality is that there’s an economic judgment call to be made. I can practice in my free time and get better faster, or I can choose not to practice in my free time and improve much more slowly, or perhaps even get worse. Neither choice is right or wrong. Neither choice automatically makes me “good” or “bad”. It’s simply a personal choice that I have to make for myself. The question is whether I personally find the benefits of practicing the drums to be worth the cost of practicing the drums.

An important factor that will inform my decision is the objectives that I’m personally pursuing. Am I, for example, trying to be the best drummer in New York City? Or do I just want to have a little fun on the weekends? The question is the same for everyone but the answer is going to be much different depending on what you want and what you’re willing to do to get it.

The drumming analogy makes it obvious how silly it is to ask directly if you spend your free time practicing. Maybe the person asking is trying to probe for “passion” (yuck). But passion is a means to an end, not an end in itself. Instead of looking for passion, the evaluator should look for the fruits of passion, i.e. being a good drummer.

Back to programming

Do your career goals, in combination with your current skill level, justify the extra cost of programming in your free time? If so, then coding in your free time is a rational choice. And if you decide that there are no factors in your life that make you want to code in your free time, then that’s a perfectly valid choice as well. There’s no right or wrong answer. You don’t “have to” or not have to. Rather, it’s a choice for each person to make for themselves.

Ruby memoization

What is memoization?

Memoization is a performance optimization technique.

The idea with memoization is: “When a method invokes an expensive operation, don’t perform that operation each time the method is called. Instead, just invoke the expensive operation once, remember the answer, and use that answer from now on each time the method is called.”

Below is an example that shows the benefit of memoization. The example is a class with two methods which both return the same result, but one is memoized and one is not.

The expensive operation in the example takes one second to run. As you can see from the benchmark I performed, the memoized method is dramatically more performant than the un-memoized one.

Running the un-memoized version 10 times takes 10 seconds (one second per run). Running the memoized version 10 times takes just over one second. That’s because the first call takes one second but each call after that takes a negligibly small amount of time.

class Product
  # This method is NOT memoized. This method will invoke the
  # expensive operation every single time it's called.
  def price
    expensive_calculation
  end

  # This method IS memoized. It will invoke the expensive
  # operation the first time it's called but never again
  # after that.
  def memoized_price
    @memoized_price ||= expensive_calculation
  end
  
  def expensive_calculation
    sleep(1)
    500
  end
end

require "benchmark"

product = Product.new
puts Benchmark.measure { 10.times { product.price } }
puts Benchmark.measure { 10.times { product.memoized_price } }
$ ruby memoized.rb
  0.000318   0.000362   0.000680 ( 10.038078)
  0.000040   0.000049   0.000089 (  1.003962)

Why is memoization called memoization?

I’ve always thought memoization was an awkward term due to its similarity to “memorization”. The obscurity of the name bugged me a little so I decided to look up its etymology.

According to Wikipedia, “memoization” is derived from the Latin word “memorandum”, which means “to be remembered”. “Memo” is short for memorandum, hence “memoization”.

When to use memoization

The art of performance optimization is a bag of many tricks: query optimization, background processing, caching, lazy UI loading, and other techniques.

Memoization is one trick in this bag of tricks. You can recognize its use case when an expensive method is called repeatedly without a change in return value.

This is not to say that every case where an expensive method is called repeatedly without a change in return value is automatically a good use case for memoization. Memoization (just like all performance techniques) is not without a cost, as we’ll see shortly. Memoization should only be used when the benefit exceeds the cost.

As with all performance techniques, memoization should only be used a) when you’re sure it’s needed and b) when you have a plan to measure the before/after performance effect. Otherwise what you’re doing is not performance optimization, you’re just randomly adding code (i.e. incurring costs) without knowing whether the costs you’re incurring are actually providing a benefit.

The costs of memoization

The main cost of memoization is that you risk introducing subtle bugs. Here are a couple examples of the kinds of bugs to which memoization is susceptible.

Instance confusion

Memoization works if and only if the return value will always be the same. Let’s say, for example, that you have a loop that makes use of an object which has a memoized method. Maybe this loop uses the same object instance in every single iteration, but you’re under the mistaken belief that a fresh instance is used for each iteration.

In this case the value from the object in the first iteration will be correct, but all the subsequent iterations risk being incorrect because they’ll use the value from the first iteration rather than getting their own fresh values.

If this type of bug sounds contrived, it’s not. It’s based on a real bug I once caused myself!
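To make the failure mode concrete, here’s a runnable sketch. (The ExchangeRate class and its rate-fetching behavior are invented for illustration; they’re not from a real codebase.)

class ExchangeRate
  def initialize
    @fetch_count = 0
  end

  # Memoized: the rate is fetched once and then remembered for the
  # lifetime of this particular instance.
  def usd_to_eur
    @usd_to_eur ||= fetch_current_rate
  end

  private

  # Stand-in for a call to an external service whose answer can
  # change from one call to the next.
  def fetch_current_rate
    @fetch_count += 1
    0.90 + (@fetch_count * 0.01)
  end
end

rate = ExchangeRate.new

# If you mistakenly believe each iteration gets a fresh ExchangeRate
# instance, you're in for a surprise: every iteration reuses the value
# memoized on the first one, even though the underlying rate has
# changed since then.
3.times { puts rate.usd_to_eur } # prints 0.91 three times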

Nil return values

In the example above, if expensive_calculation had returned nil, then the value wouldn’t have been memoized, because @memoized_price would be nil and nil is falsy. The ||= would see a falsy value every time and re-run the expensive operation on every call.

The risk of such a bug is probably low, and the consequences of the bug are probably small in most cases, but it’s a good category of bug to be aware of. An alternative solution is to use defined? rather than ||=-style lazy initialization; the defined? approach is not susceptible to the nil-is-falsy bug.
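Here’s a sketch of what a defined?-based version of the earlier Product example might look like:

class Product
  def memoized_price
    # defined? tells us whether the instance variable has been set at
    # all, so even a nil result gets remembered instead of triggering
    # the expensive operation again.
    return @memoized_price if defined?(@memoized_price)

    @memoized_price = expensive_calculation
  end

  def expensive_calculation
    sleep(1)
    nil # a legitimately nil result, which ||= would fail to memoize
  end
end

product = Product.new
product.memoized_price # slow: runs the expensive operation once
product.memoized_price # fast: the nil result was memoized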

Understandability

Last but certainly not least, code that involves memoization is harder to follow than code that doesn’t. This is probably the biggest cost of memoization. Code that’s hard to understand is hard to change. Code that’s hard to understand provides a place for bugs to hide.

Prudence pays off

Because memoization isn’t free, it’s not a good idea to habitually add memoization to methods as a default policy. Instead, add memoization on a case-by-case basis when it’s clearly justified.

Takeaways

  • Memoization is a performance optimization technique that prevents wasteful repeated calls to an expensive operation when the return value is the same each time.
  • Memoization should only be added when you’re sure it’s needed and you have a plan to verify the performance difference.
  • A good use case for memoization is when an expensive method is called repeatedly without a change in return value.
  • Memoization isn’t free. It carries with it the risk of subtle bugs. Therefore, don’t apply memoization indiscriminately. Only use it in cases where there’s a clear benefit.

The four phases of a test

When writing tests, or reading other people’s tests, it can be helpful to understand that tests are often structured in four distinct phases.

These phases are:

  1. Setup
  2. Exercise
  3. Assertion
  4. Teardown

Let’s illustrate these four phases using an example.

Test phase example

Let’s say we have an application that has a list of users that can receive messages. Only active users are allowed to receive messages. So, we need to assert that when a user is inactive, that user can’t receive messages.

Here’s how this test might go:

  1. Create a User record (setup)
  2. Set the user’s “active” status to false (exercise)
  3. Assert that the user is not “messageable” (assertion)
  4. Delete the User record we created in step 1 (teardown)

In parallel with this example, I’ll also use another example which is somewhat silly but also less abstract. Let’s imagine we’re designing a sharp-shooting robot that can fire a bow and accurately hit a target with an arrow. In order to test our robot’s design, we might:

  1. Get a fresh prototype of the robot from the machine shop (setup)
  2. Allow the robot to fire an arrow (exercise)
  3. Look at the target to make sure it was hit by the arrow (assertion)
  4. Return the prototype to the machine shop for disassembly (teardown)

Now let’s take a look at each step in more detail.

The purpose of each test phase

Setup

The setup phase typically creates all the data that’s needed in order for the test to operate. (There are other things that could conceivably happen during a setup phase, but for our current purposes we can think of the setup phase’s role as being to put data in place.)

In our case, the creation of the User record is all that’s involved in the setup step, although more complicated tests could of course create any number of database records and potentially establish relationships among them.

Exercise

The exercise phase walks through the motions of the feature we want to test. With our robot example, the exercise phase is when the robot fires the arrow. With our messaging example, the exercise phase is when the user gets put in an inactive state.

Side note: the distinction between setup and exercise may seem blurry, and indeed it sometimes is, especially in low-level tests like our current example. If someone were to argue that setting the user to inactive should actually be part of the setup, I’m not sure how I’d refute them. To help with the distinction in this case, imagine if we instead were writing an integration test that actually opened up a browser and simulated clicks. For this test, our setup would be the same (create a user record) but our exercise might be different. We might visit a settings page, uncheck an “active” checkbox, then save the form.
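As a hypothetical sketch of that integration test (assuming Capybara and an invented edit_settings_path route), the phases might look something like this:

RSpec.describe 'User settings', type: :system do
  it 'makes an inactive user unmessageable' do
    user = User.create!(email: 'test@example.com') # setup

    visit edit_settings_path(user)                 # exercise
    uncheck 'Active'
    click_button 'Save'

    expect(user.reload.messageable?).to be false   # assertion
    # (teardown: no explicit step; see the Teardown section below)
  end
end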

Assertion

The assertion phase is basically what all the other phases exist in support of. The assertion is the actual test part of the test, the thing that determines whether the test passes or fails.

Teardown

Each test needs to clean up after itself. If it didn’t, then each test would potentially pollute the world in which the test is running and affect the outcome of later tests, making the tests non-deterministic. We don’t want this. We want deterministic tests, i.e. tests that behave the same exact way every single time no matter what. The only thing that should make a test go from passing to failing or vice-versa is if the behavior that the test tests changes.

In reality, Rails tests tend not to have an explicit teardown step. The main pollutant we have to worry about with our tests is database data that gets left behind. RSpec is capable of taking care of this problem for us by running each test in a database transaction. The transaction starts before each test runs and gets rolled back after the test finishes. So really, the data never gets permanently persisted in the first place. So although I’m mentioning the teardown step here for completeness’ sake, you’re unlikely to see an explicit one in the wild.
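For reference, the setting that enables this behavior in rspec-rails typically lives in rails_helper.rb and looks something like this (the exact file layout varies by project):

RSpec.configure do |config|
  # Wrap each example in a database transaction that gets rolled back
  # when the example finishes, so no test data is left behind.
  config.use_transactional_fixtures = true
end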

A concrete example

See if you can identify the phases in the following RSpec test.

RSpec.describe User do
  let!(:user) { User.create!(email: 'test@example.com') }

  describe '#messageable?' do
    context 'is inactive' do
      it 'is false' do
        user.update!(active: false)
        expect(user.messageable?).to be false
        user.destroy!
      end
    end
  end
end

Here’s my annotated version.

RSpec.describe User do
  let!(:user) { User.create!(email: 'test@example.com') } # setup

  describe '#messageable?' do
    context 'is inactive' do
      it 'is false' do
        user.update!(active: false)           # exercise
        expect(user.messageable?).to be false # assertion
        user.destroy!                         # teardown
      end
    end
  end
end

Takeaway

Being familiar with the four phases of a test can help you overcome the writer’s block that testers sometimes feel when staring at a blank editor. “Write the setup” is an easier job than “write the whole test”.

Understanding the four phases of a test can also help make it easier to parse the meaning of existing tests.

The Brainpower Conservation Principle

When asked what he attributed his success in life to, Winston Churchill purportedly said, “Economy of effort. Never stand up when you can sit down, and never sit down when you can lie down.”

My philosophy with programming is basically the same. Here’s why.

Finite mental energy

Sometimes people say they wish there were more hours in the day. I see it a little differently. It seems to me that the scarce resource isn’t time but energy. I personally run out of energy (or willpower or however you want to put it) well before I run out of time in the day.

Most days for me there comes a certain time where I’m basically spent for the day and I don’t have much more work in me. (When I say “work” I mean work of all kinds, not just “work work”.) Sometimes that used-up point comes before the end of the workday. Sometimes it comes after. But that point almost always arrives before I’m ready to go to bed.

The way I see it, I get a finite supply of mental energy in the morning. The harder I think during the day, the faster the energy gets depleted, and the sooner it runs out. It would really be a pity if I were to waste my mental energy on trivialities and run out of energy after 3 hours of working instead of 8 hours of working. So I try to conserve brainpower as much as possible.

The ways I conserve brainpower

Below are some examples of wasteful ways of working alongside the more economical version.

Wasteful way → Economical way

  • Keep all your to-dos in your head → Keep a written to-do list
  • Perform work in units of large, fuzzily-defined tasks → Perform work in units of small, crisply-defined tasks
  • Perform work (and deployments) in large batches → Perform work serially, deploying each small change as soon as it’s finished
  • Try to multitask or switch among tasks → Work on one thing at a time
  • Write large chunks of code at a time → Program in short feedback loops
  • Regularly allow yourself to slip into a state of chaos → Take measures to always keep yourself in a state of order
  • Perform tests manually, or don’t test at all → Write automated tests
  • Mix the jobs of deciding what to do with writing the code for doing it → First decide what to do (and write down that decision), then write the code to do it
  • Puzzle over an error message → Google the error message
  • Think hard in order to determine the cause of a bug → Systematically find the location of the bug
  • When a change goes wrong, try to identify and fix the root cause → Revert to the last known good state and start over
  • Keep a whole bunch of browser tabs open → Only keep two or three tabs open at a time

The Brainpower Conservation Principle

I would state the Brainpower Conservation Principle as follows:

Each person gets a limited amount of mental energy each day. Never expend more mental energy on a task than is needed.

Following this principle can help you code faster, longer, and more enjoyably.

How I make a Git commit

Many programmers make Git commits in a haphazard way that makes it easy to make mistakes and commit things they didn’t mean to.

Here’s a six-step process that I use every time I make a Git commit.

1. Make sure I have a clean working state

When I say “working state” I’m referring to the state of your working tree, which is what you see when you run git status.

I always run git status before I start working on a feature. Otherwise I might start working, only to discover later that the work I’ve done is mixed in with some other, unrelated changes from earlier. Then I’ll have to fix my mistake. It’s cheaper just to check my working state in the beginning to make sure it’s clean.

2. Make the change

This step is of course the biggest, most complicated, and most time-consuming of the six, but its content is outside the scope of this post. What I will say is that I perform this step using feedback loops.

3. Run git status

When I think I’m finished with my change, I’ll run a git status. This will help me compare what I think I’m about to commit with what I’m actually about to commit. Those two things aren’t always the same thing.

4. Run git add .

Running git add . will stage all the current changes (including untracked files) to be committed.

5. Run git diff --staged

Running git diff --staged will show a line-by-line diff of everything that’s staged for commit. Just like the step where I ran git status, this step is to help compare what I think I’m about to commit with what I’m actually about to commit—this time at a line-by-line level rather than a file-by-file level.

6. Commit

Finally, I make the commit, using an appropriately descriptive commit message.

The reason I say appropriately descriptive commit message is that, in my opinion, different types of changes call for different types of commit messages. If the content of the commit makes the idea behind the commit blindingly obvious, then a vague commit message is totally fine. If the idea behind the commit can’t easily be inferred from the code that was changed, a more descriptive commit message is called for. There’s no sense in wasting brainpower on writing a highly descriptive commit message when none is called for.

Conclusion

By using this Git commit process, you can code faster and with fewer mistakes, while using up less brainpower.

The problem that ViewComponent solves for me

It’s easy to find information online regarding how to use ViewComponent. What’s not as easy to find is an explanation of why a person might want to use ViewComponent or what problem ViewComponent solves.

Here’s the problem that ViewComponent solves for me.

Cohesion

If you just stuff all your domain logic into Active Record models then the Active Record models grow too large and lose cohesion.

A model loses cohesion when its contents no longer relate to the same end purpose. Maybe there are a few methods that support feature A, a few methods that support feature B, and so on. The question “what idea does this model represent?” can’t be answered. The reason the question can’t be answered is because the model doesn’t represent just one idea, it represents a heterogeneous mix of ideas.

Because cohesive things are easier to understand than incohesive things, I try to organize my code into objects (and other structures) that have cohesion.

Achieving cohesion

There are two main ways that I try to achieve cohesion in my Rails apps.

POROs

The first way, the way that I use the most, is by organizing my code into plain old Ruby objects (POROs). For example, in the application I maintain at work, I have objects called AppointmentBalance, ChargeBalance, and InsuranceBalance which are responsible for the jobs of calculating the balances for various amounts that are owed.

I’m not using any fancy or new-fangled techniques in my POROs. I’m just using the principles of object-oriented programming. (If you’re new to OOP, I might recommend Steve McConnell’s Code Complete as a decent starting point.)

Regarding where I put my POROs, I just put them in app/models. As far as I’m concerned, PORO models are models every bit as much as Active Record models are.
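As a rough sketch of what such a PORO can look like (the schema and method names here are invented for illustration, not taken from my actual application):

# app/models/appointment_balance.rb
class AppointmentBalance
  def initialize(appointment)
    @appointment = appointment
  end

  # The balance owed is what was charged minus what was paid.
  def amount
    @appointment.charges.sum(&:amount) - @appointment.payments.sum(&:amount)
  end
end

A caller can then ask AppointmentBalance.new(appointment).amount, and the object has exactly one end purpose: calculating that balance.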

Concerns/mixins

Sometimes I have a piece of code which doesn’t quite fit in with any existing model, but it also doesn’t quite make sense as its own standalone model.

In these cases I’ll often use a concern or a mixin.
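For example, here’s a minimal sketch of a concern (the Archivable module and the archived_at column are invented for illustration):

# app/models/concerns/archivable.rb
module Archivable
  extend ActiveSupport::Concern

  included do
    scope :archived, -> { where.not(archived_at: nil) }
  end

  def archive!
    update!(archived_at: Time.current)
  end
end

# Any model with an archived_at column can now mix in the behavior.
class Appointment < ApplicationRecord
  include Archivable
end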

But even though POROs and concerns/mixins can go a really long way to give structure to my Rails apps, they can’t adequately cover everything.

Homeless code

I’ve found that I’m able to keep the vast majority of my code out of controllers and views. Most of my Rails apps’ code lives in the model.

But there’s still a good amount of code for which I can’t find a good home in the model. That tends to be view-related code. View-related code is often very fine-grained and detailed. It’s also often tightly coupled (at least from a conceptual standpoint) to the DOM, to the HTML, or to some other aspect of the view.

There are certain places where this code could go. None of them is great. Here are the options, as I see them, and why each is less than perfect.

The view

Perhaps the most obvious place to try to put view-related code is in the view itself. Most of the time this works out great. But when the view-related code is sufficiently complicated or voluminous, it creates a distraction. It creates a mixture of levels of abstraction, which makes the code harder to understand.

The controller

The controller is also not a great home for this view-related code. The problem of mixing levels of abstraction is still there. In addition, putting view-related code in a controller mixes concerns, which makes the controller code harder to understand.

The model

Another poorly-suited home for this view-related code is the model. There are two options here, neither of them great.

The first option is to put the view-related code into some existing model. This option isn’t great because it pollutes the model with peripheral details, creates a potential mixture of concerns and mixture of levels of abstraction, and makes the model lose cohesion.

The other option is to create a new, standalone model just for the view-related code. This is usually better than stuffing it into an existing model but it’s still not great. Now the view-related code and the view itself are at a distance from each other. Plus it creates a mixture of abstractions at a macro level because now the code in app/models contains view-related code.

Helpers

Lastly, one possible home for non-trivial view-related code is a helper. This can actually be a perfectly good solution sometimes. I use helpers a fair amount. But sometimes there are still problems.

Sometimes the view-related code is sufficiently complicated to require multiple methods. If I put these methods into a helper which is also home to other concerns, then we have a cohesion problem, and things get confusing. In those cases maybe I can put the view-related code into its own new helper, and maybe that’s fine. But sometimes that’s a lost opportunity because what I really want is a concept with meaning, and helpers (with their -Helper suffix) aren’t great for creating concepts with meaning.

No good home

The result is that when I have non-trivial view-related code, it doesn’t have a good home. Instead, my view-related code has to “stay with friends”. It’s an uncomfortable arrangement. The “friends” (controllers, models, etc.) wish that the view-related code would move out and get a place of its own, but it doesn’t have a place to go.

How ViewComponent provides a home for view-related code

A ViewComponent consists of two entities: 1) an ERB file and 2) a Ruby object. These two files share a name (e.g. save_button_component.html.erb and save_button_component.rb) and sit at a sibling level to each other in the filesystem. This makes it easy to see that they’re closely related to one another.
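For example, a hypothetical save-button component might consist of a file pair like this (the component and its contents are invented for illustration):

# app/components/save_button_component.rb
class SaveButtonComponent < ViewComponent::Base
  def initialize(record:)
    @record = record
  end

  # View logic that would otherwise clutter the template, a helper
  # or a model.
  def label
    @record.persisted? ? 'Update' : 'Save'
  end
end

<%# app/components/save_button_component.html.erb %>
<button type="submit"><%= label %></button>

A view can then render it with <%= render(SaveButtonComponent.new(record: @product)) %>.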

Ever since I started using ViewComponent I’ve had a much easier time working with views that have non-trivial logic. In those cases I just create a ViewComponent and put the logic in the ViewComponent.

Now my poor homeless view-related code can move into a nice, spacious, tidy new house that it gets all to itself. And just as important, it can get out of its friends’ hair.

And just in case you think this sounds like a “silver bullet” situation, it’s not. ViewComponents are a specific solution to a specific problem. I don’t use ViewComponent for everything; I only use it when a view has non-trivial logic associated with it that doesn’t have any other good place to live.

Takeaways

  • If you just stuff all your domain logic into Active Record models, your Active Record models will soon lose cohesion.
  • In my Rails apps, I mainly achieve cohesion through a mix of POROs and concerns/mixins (but mostly POROs).
  • Among the available options (views, controllers, models and helpers) it’s hard to find a good place to put non-trivial view-related code.
  • ViewComponent provides (in my mind) a reasonable place to put non-trivial view-related code.

Side note: if you’re interested in learning more about ViewComponent, you can listen to my podcast conversation with Joel Hawksley who created the tool. I also did a talk on ViewComponent which you can see below.

Atomic commits

Ruined soup

Let’s say I’m making a pot of tortilla soup. I’ve made tortilla soup before and it has always turned out well. This is a low-risk operation.

But for some reason, this time, I decide to get experimental (high risk). I add some cayenne pepper to the soup. Unfortunately, this makes the soup too spicy, and no one in my family except me will eat it. The soup is now basically garbage and I have to throw it out.

What just happened was bad. I mixed a small high-risk operation (cayenne) into a large low-risk operation (tortilla soup). Since cayenne is impossible to remove from soup, I’ve turned the entire operation into a high-risk operation. And, due to my poor choices, the whole operation got ruined.

Ruined bread

Now let’s say I’m making a different kind of soup, an experimental one. Let’s say I’m making another Mexican soup, pozole. I’ve never even eaten pozole before. In fact, I’m not even sure what pozole is. Who knows if my family will like it. This is a high-risk operation.

On the day I make pozole, I happen to have some homemade bread around. A lot of work went into making the bread. Unlike the pozole, the bread is a low-risk operation. In fact, it’s a no-risk operation because the bread is already made.

Since I know that bread usually goes well with soup, I decide to cut up the loaf of bread and serve it in bowls with the soup.

Unfortunately, when I serve the pozole for dinner, it’s discovered that my pozole is not good. No one, including me, eats very much. What’s more, I’ve wasted a whole loaf of homemade bread in addition to wasting the pozole ingredients.

In this case I mixed a small low-risk operation (the bread) into a large high-risk operation (the pozole). Now not only do I have to throw out the pozole but I have to throw out the bread with it.

What atomic commits are

The literal meaning of “atomic” is “cannot be cut”, from Greek. (The “a” part means “not” and the “tomic” part comes from a word that means “cut”.)

A commit that’s atomic is a commit that’s only “about” one thing. The commit can’t be cut into smaller logical pieces.

Why atomic commits are helpful

One of my programming principles is “keep everything working all the time”. It’s much easier to keep everything in a working state all the time than to allow things to slip into chaos and then try to recover.

But slipping into chaos once in a while is unavoidable. When that happens, the easiest way to get back to “order” usually isn’t to solve the problem, but rather to just blow away the bad work and revert back to your last known good state.

If your commits are non-atomic, then reverting back to your last known good state can be problematic. You’ll be forced to throw the baby out with the bathwater.

If you make my “tortilla soup mistake” and mix a small high-risk change in with a large low-risk change, then you’ll end up reverting a large amount of good work just because of a small amount of bad work. (You might be able to escape this problem if the two changes are sufficiently separate from each other but that’s often not the case.)

If you make my “pozole mistake” and mix a low-risk change in with a high-risk change, then you’ll have to revert the good change along with the bad one as well.

How to decide what constitutes an atomic commit

Unless you’re committing a one-character change, pretty much any commit could conceivably be cut into smaller pieces. So how do you decide what constitutes an atomic commit and what doesn’t?

The answer is that “atomicity” is subjective, and it’s up to you to decide. The heuristic I use to decide what to put in my commits is: the smallest thing that can be considered complete.

Sometimes when I code in front of people, they’re astonished by how frequently I make commits. But I figure since it’s super cheap to make a commit, why not turn the “commit frequency dial” up to the max?

Takeaways

  • An atomic commit is a commit that’s only “about” one thing.
  • Atomicity is subjective.
  • It’s easier to keep everything working all the time than to allow things to slip into a state of chaos and then try to recover.
  • When things do slip into a state of chaos, it’s usually cheaper to just revert to the last known working state and start over than to try to fix the actual problem.
  • Atomic commits ensure that, when you do have to revert, you don’t have to throw out the baby with the bathwater.

Order and chaos

Imagine you’re building a house out of wooden blocks.

Wooden blocks are easy to build with but they make a somewhat fragile structure. As you’re putting a new block in place, you accidentally bump the house and the whole thing comes tumbling down. The various shapes of blocks get all mixed together when the house crashes. It takes you several minutes to get the house rebuilt to the point of progress you were at before.

Once you’re almost done rebuilding the house, you accidentally bump it again and you send it crashing down a second time. You have to spend another several minutes rebuilding it. This process repeats several times before you finally succeed in building the house to completion.

In certain ways, writing a computer program is like building a house out of blocks. If you make a big enough mistake, you can throw the whole area you’re working on into disorder. Often, the degree of disorder is so great that it’s cheaper just to revert to where you were at some point in the past and start over than it is to recover from where you are now.

Order and chaos

As I’ve observed programmers working over the years (including observing myself), I’ve noticed two possible states a programmer can be in: order and chaos.

In a state of order, you’re in control. You know what you’re doing. Every step you take is a step that gets you closer to your objective.

In a state of chaos, you’re flailing. You’re confused. Your program is broken and you don’t know how to fix it. Chaos is a very expensive state to be in.

The cost of chaos

It costs drastically more to enter a state of chaos and then get back to order than it does to just stay in a state of order the whole time.

I think of order and chaos like a thin trail through thick, dark woods. Order is the trail and chaos is the woods. If you allow yourself to stray off the trail, you might be screwed: it can be so hard to find the trail again that you might never find it. Even though it costs a little extra effort to pay close attention to the trail as you follow it, that cost is tiny, and it’s obviously much lower than the cost of losing the trail and dying in the woods.

What causes programmers to enter a state of chaos, and how can they avoid allowing that to happen?

What causes chaos

Entering a state of chaos while programming might seem like an unavoidable fact of life. And to a certain extent, it is. But some programmers spend almost all their time in a state of chaos while others spend almost all their time in a state of order. It’s not just luck. They’re doing something different.

As I see it, the main risk factor for chaos is length of feedback cycle. The longer the feedback cycle, the higher the chances that something is going to go wrong. And the bigger a change, the harder it is to determine what went wrong, because there’s a greater amount of stuff to dig through in order to find the source of the problem.

How to stay in a state of order

The way to minimize your chances of slipping into a state of chaos is to work in tiny changes and check your work after each change.

As I described in my post about programming in feedback loops, this type of working process can look something like this:

  1. Decide what you want to accomplish
  2. Devise a test you can perform to see if #1 is done (manual or automated)
  3. Perform the test from step 2
  4. Write a line of code
  5. Repeat test from step 2 until it passes
  6. Repeat from step 1 with a new goal

Working in feedback loops of tiny changes is like hiking a trail in the woods and looking down every few steps to make sure you’re still on the trail.

Takeaways

  • When you’re programming, you’re always in either a state of order or a state of chaos.
  • Recovering from a state of chaos is expensive. Never entering a state of chaos in the first place is cheap.
  • The way to avoid entering a state of chaos is to work in feedback loops.

Don’t trust yourself

When I’m programming, I have a saying that I repeat to myself: “Never underestimate your ability to screw stuff up.”

It never ceases to amaze me how capable I am of messing up seemingly simple tasks.

Sometimes I get overconfident and, for example, I try to make a big change all at once without checking my work periodically. Usually I’m humbled. I end up creating a mess and I have to blow away my work and start over. I end up creating a lot more work than if I had just stayed humble and careful in the first place.

The advice that I have to frequently remind myself of, and which I give to you, is this. Recognize that your brain is weak with respect to the intellectually demanding task of programming. Work in small, careful steps as part of a feedback loop cycle. Always assume you made a mistake somewhere. Treat your work as guilty until proven innocent. Don’t trust yourself. You’re just setting yourself up for pain and frustration if you do.

Human brains are no match for programming

Why programming is hard for brains

The job of a programmer involves building new programs and changing existing programs.

In order to successfully work with a program, a programmer has to do two fairly difficult things. One is that the programmer has to learn and retain some number of facts about the program. The other is that, in order to understand existing behaviors and predict the effects of new changes, the programmer has to play out chains of logic in his or her head.

These two tasks involve what you could metaphorically call a person’s RAM (short-term memory), hard disk (long-term memory) and CPU (raw cognition ability).

The limitations of our mental hardware

RAM

When I poke around in a program before making a change in order to understand the substrate I’m working with, I might make mental notes which get stored in my short-term memory. Short-term memory has the obvious limitation that it’s impermanent. Six months from now I’ll no longer remember most of the stuff that’s in my short-term memory right now.

Hard disk

Other aspects of the program might enter my consciousness sufficiently frequently that they go into long-term memory. Long-term memory has the benefit of being more permanent, but the trade-off is that it costs more to get something into long-term memory than short-term memory. The cases where we can learn something easily and then remember it permanently are rare.

CPU

The other demanding mental task in programming is to look at a piece of code and run it in our minds. When I try to mentally run a piece of code, it’s like I’m combining my mental RAM with my CPU. I not only have to load some pieces of information into my short-term memory, but I also have to make a series of inferences about how those pieces of information will get transformed after a number of operations have been performed on them. This is a harder task, kind of like trying to multiply two three-digit numbers in one’s head while having to remember each intermediate product to be summed up at the end. People tend not to be great at it.

If human memory were perfect and if our mental processing power were infinite, programming would be easy. But unfortunately our hardware is very limited. The best we can do is to make accommodations.

What if we had infinitely powerful brains?

If humans had infinitely powerful brains, programming would look a lot different.

We could, for example, read a program’s entire source code in one sitting, memorize it all the first time around, then instantly infer how the program would behave given any particular inputs.

It wouldn’t particularly matter if the code was “good” or “bad”. For all we cared, you could name the variables a, b, c, and so on. We would of course have to be told somehow that a really means customer, b really means order and so on, but thanks to our perfect memories, we would only have to be told once and we would remember forever. And thanks to our infinitely powerful mental CPUs, it wouldn’t cost anything to map all the cryptic variable names to their real meanings over and over again.

But our brains are weak, so…

So we make accommodations. Here are a few examples of weaknesses our brains have and solutions (or at least mitigations) we’ve come up with to accommodate.

We can’t read and memorize an entire codebase

The best we can do is to understand one small part of a system, often at great cost. The solution? Try to organize our code in a modular, loosely-coupled way, so that it’s not necessary to understand an entire system in order to successfully work with it.

We can’t easily map fake names to real names

When a codebase is full of variables, functions and classes whose names are lies, we get confused very easily, even if we know the real meanings behind the lies. We can neither easily remember the mappings from the lies to the real names, nor can we easily perform the fake-name-to-real-name translations. So the solution (which seems obvious but is often not followed in practice) is to call things what they are.
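Here’s a contrived illustration of the difference (both methods do the same thing; only the names differ):

Customer = Struct.new(:status, :amount_owed)

# With names that lie or say nothing, the reader has to hold a mental
# mapping from the fake names to the real meanings.
def calc(list)
  list.select { |x| x.status == 'active' }.sum(&:amount_owed)
end

# With honest names, no mapping is needed.
def total_owed_by_active_customers(customers)
  customers.select { |customer| customer.status == 'active' }.sum(&:amount_owed)
end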

We’re not good at keeping track of state

Imagine two people, John and Randy, standing at the same spot on a map. John moves 10 feet north. Randy moves 200 feet west. John moves 10 feet west. A third person, Sally, shows up 11 feet to the west of Randy. Now they all move 3 feet south. Where did they all end up relative to John and Randy’s starting point? I’m guessing you probably had to read that sequence of events more than once in order to come up with the right answer. That’s because keeping track of state is hard. (Or, more precisely, it’s hard for us because we’re not computers.) One solution to this problem is to code in a declarative rather than imperative style.
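Here’s a small Ruby illustration of that difference (the Order struct is invented for the example):

Order = Struct.new(:amount, :paid) do
  def paid?
    paid
  end
end

orders = [Order.new(10, true), Order.new(20, false), Order.new(5, true)]

# Imperative: the reader has to mentally track `total` as it mutates
# across iterations.
total = 0
orders.each do |order|
  total += order.amount if order.paid?
end

# Declarative: the intent is stated directly, with no intermediate
# state to keep track of.
total = orders.select(&:paid?).sum(&:amount)

puts total # => 15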

Broadly speaking, our mental weaknesses are the whole reason we bother to write good code at all. We don’t write good code because we’re smart, we write good code because we’re stupid.

Takeaways

  • Our mental RAM, hard disk and CPU are all of very limited power.
  • Many of the techniques we use in programming are just accommodations for our mental weaknesses.
  • The more we acknowledge and embrace our intellectual limitations, the easier and more enjoyable time we’ll have with programming.