Author Archives: Jason Swett

How Ruby’s instance_exec works

In this post we’ll take a look at Ruby’s instance_exec, a method which can change the execution context of a block and help make DSL syntax less noisy.

Passing arguments to blocks

When calling a Ruby block using block.call, you can pass an argument (e.g. block.call("hello")) and the argument will be fed to the block.

Here’s an example of passing an argument when calling a block.

def word_fiddler(&block)
  block.call("hello")
end

word_fiddler do |word|
  puts word.upcase
end

In this case, the string "hello" gets passed for word, and word.upcase outputs HELLO.

We can also do something different and perhaps rather strange-seeming. We can use a method called instance_exec to execute our block in the context of whatever argument we send it.

Executing a block in a different context

Note how in the following example word.upcase has changed to just upcase.

def word_fiddler(&block)
  "hello".instance_exec(&block)
end

word_fiddler do
  puts upcase
end

The behavior is exactly the same and the output is identical. The only difference is how the behavior is expressed in the code.

How this works

Every command in Ruby operates in a context. Every context is an object. The default context is an object called main, which you can demonstrate by opening a Ruby console and typing self.

We can also demonstrate this for our earlier word_fiddler snippet.

def word_fiddler(&block)
  block.call("hello")
end

word_fiddler do |word|
  puts self # shows the current context
  puts word.upcase
end

If you run the above snippet, you’ll see the following output:

main
HELLO

The instance_exec method works because it changes the context of the block it invokes. Here’s our instance_exec snippet with a puts self line added.

def word_fiddler(&block)
  "hello".instance_exec(&block)
end

word_fiddler do
  puts self
  puts upcase
end

Instead of main, we now get hello.

hello
HELLO

Why instance_exec is useful

instance_exec can help make Ruby DSLs less verbose.

Consider the following Factory Bot snippet:

FactoryBot.define do
  factory :user do
    first_name { 'John' }
    last_name { 'Smith' }
  end
end

The code above consists of two blocks, one nested inside the other. There are three methods called in the snippet, or more precisely, there are three messages being sent: factory, first_name and last_name.

Who is the recipient of these messages? In other words, in what contexts are these two blocks being called?

It’s not the default context, main. The outer block is operating in the context of an instance of a class called FactoryBot::Syntax::Default::DSL, which is defined by the Factory Bot gem. This means that the factory message is getting sent to an instance of FactoryBot::Syntax::Default::DSL.

The inner block is operating in the context of a different object, an instance of FactoryBot::Declaration::Implicit. The first_name and last_name messages are getting sent to this object.

You can perhaps imagine what the Factory Bot syntax would have to look like if it were not possible to change blocks’ contexts using instance_exec. The syntax would be pretty verbose and noisy.
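To make the mechanism concrete, here’s a toy sketch of how a Factory-Bot-like DSL could be built with instance_exec. This is an invented illustration, not Factory Bot’s actual implementation; the ToyFactory class and define_factory method are made up for this example.

```ruby
# A toy factory DSL built on instance_exec. (Invented sketch, not
# how Factory Bot is actually implemented.)
class ToyFactory
  def initialize
    @attributes = {}
  end

  # Any message the block sends (first_name, last_name, etc.) lands
  # here and gets recorded as an attribute.
  def method_missing(name, *_args, &block)
    @attributes[name] = block.call
  end

  def respond_to_missing?(_name, _include_private = false)
    true
  end

  def build
    @attributes.dup
  end
end

def define_factory(&block)
  factory = ToyFactory.new
  # Execute the block in the context of the factory instance, so that
  # bare calls like first_name are messages to the factory.
  factory.instance_exec(&block)
  factory.build
end

user = define_factory do
  first_name { 'John' }
  last_name { 'Smith' }
end

puts user.inspect
```

Without instance_exec, the block would have to receive the factory explicitly and write something like f.first_name { 'John' }, which is exactly the noise the real DSL avoids.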

Takeaways

  • instance_exec is a method that executes a block in the context of a certain object.
  • instance_exec can help make DSL syntax less noisy and verbose.
  • Methods like Factory Bot’s factory and RSpec’s it and describe are possible because of instance_exec.

How to save time and brainpower by dividing the bugfix process into three distinct steps

The reason for dividing bugfix work into steps

I don’t like to waste time when I’m working. And there’s hardly a surer way to waste time than to be sitting in front of the computer not even knowing what piece of work I’m trying to accomplish.

I also don’t like to think any harder than I need to. Brainpower is a limited resource. I only get so much in one day. If I spend a lot of brainpower on a job that could have been done just as well with less brainpower, then I’ve wasted brainpower and I will have cheated myself.

Dividing a bugfix job into three distinct steps helps me not to violate either of these principles. When I’m working on reproducing the bug, I can focus on reproduction, to the exclusion of all else. This helps me stay focused, which helps me work quickly. Focusing on one step also helps cut down on mental juggling. If I decide that I’m only thinking about reproducing the bug right now and not trying to think about diagnosing the bug or fixing it, then that limits the amount of stuff I have to think about, which makes me a clearer, more efficient and faster thinker.

Here are the three steps I carry out when fixing a bug, as well as some tips for carrying them out effectively and efficiently.

Reproduction

When I become aware of the purported existence of a new bug, the first step I usually take is to try to reproduce the bug.

A key word here is “purported”. When a bug is brought to my attention, I don’t immediately form the belief that a bug exists. I can’t yet know for sure that a bug exists. All I can know for sure at that point is that someone told me that a bug exists. More than zero percent of the time, the problem is something other than a bug. Perhaps the problem was, for example, that the behavior of the system was inconsistent with the user’s expectations, but it was the user’s expectations that were wrong, not the behavior of the system.

Once I’m able to reproduce the bug manually, I often like to capture the bug reproduction in an automated test. The reason I like to do this is because it cuts down on mental juggling. Once the reproduction is captured in a test, I can safely unload the reproduction steps from my mind, freeing up mental RAM. I can now put all of my mental RAM toward the next debugging step, diagnosis.
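As an invented illustration of what capturing a reproduction might look like, suppose a user reports that a word counter overcounts when words are separated by more than one space. A minimal reproduction check (a plain assertion here, rather than a full test framework) might look like this:

```ruby
# Invented example: a buggy word counter that overcounts when words
# are separated by runs of spaces.
def word_count(text)
  text.split(/ /).length # buggy: consecutive spaces produce empty "words"
end

# The reproduction, captured as a check that fails while the bug is
# present and will pass once it's fixed.
expected = 2
actual = word_count('hello  world') # note the double space

if actual == expected
  puts 'Bug is fixed'
else
  puts "Bug reproduced: expected #{expected} words, got #{actual}"
end
```

With the reproduction captured this way, the reproduction steps no longer need to be held in memory; re-running the check is all it takes to know whether the bug is still present.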

Diagnosis

Many developers take a haphazard approach to debugging. They sit and stare at the code and try to reason through what might be going wrong. This works sometimes, but most of the time it’s a very painful and inefficient way to go about diagnosis.

A better approach is to try to find where the root cause of the bug is rather than determine what the root cause of the bug is. I write about this approach in detail in a separate post called The “20 Questions” method of debugging.

Fix

Sometimes a bugfix is an easy one-liner. Other times, the bug was merely a symptom of a deep and broad problem, and the fix is a serious project.

Whatever the size of the fix, there’s one thing that I always make sure of: I make sure that I’ve devised a test (manual or automated) that will fail when the bug is present and pass when the bug is absent. Otherwise I may fool myself into believing that I’ve fixed the bug when really my “fix” has had no effect.

Takeaways

  • You can work more efficiently when you know exactly what you’re working on.
  • Brainpower is a limited resource. Save brainpower for tasks that require high brainpower. Don’t waste brainpower on tasks that can be low-brainpower tasks.
  • Dividing a bugfix job into distinct steps can help you focus as well as save brainpower.
  • The three steps in fixing a bug are reproduction, diagnosis and fix.
  • Don’t take bug reports at face value.
  • Don’t try to find what the cause of the bug is, but rather try to find where the cause of the bug is. Once you accomplish that, the “what” will likely be obvious.
  • Before applying a bugfix, make sure you’ve devised a test (manual or automated) that will fail when the bug is present and pass when the bug is absent.

Why I don’t buy “duplication is cheaper than the wrong abstraction”

Note: If you disagree with what’s expressed in this post, I encourage you to also take a look at my more nuanced and comprehensive post on the topic of code duplication, which gives a more thorough refutation of some of the popular ideas around duplication.

“Duplication is cheaper than the wrong abstraction” is a saying I often hear repeated in the Rails community. I don’t really feel like the expression makes complete sense. I fear that it may lead developers to make poor decisions. Here’s what I take the expression to mean, why I can’t completely get on board with it, and what I would advise instead.

The idea (as I understand it)

My understanding of the “duplication is cheaper than the wrong abstraction” idea, based on Sandi Metz’s post about it, is as follows. When a programmer refactors a piece of code to be less duplicative, that programmer replaces the duplicative code with a new, non-duplicative abstraction. So far so good, perhaps. But, by creating this new abstraction, the programmer signals to posterity that this new abstraction is “the way things should be” and that this new abstraction ought not to be messed with. As a result, this abstraction gets bastardized over time as maintainers of the code need to change it yet simultaneously feel compelled to preserve it. The abstraction gets littered with conditional logic to behave differently in different scenarios, and eventually becomes an unreadable mess.

I hope this is a fair and reasonably faithful paraphrasing of Sandi’s idea. Here are the parts of the idea that I agree with, followed by the parts of the idea that I take issue with.

The parts I agree with

In my experience it’s absolutely true that existing code has a certain inertia to it. Whether it’s out of caution or laziness or some other motive, it seems that a common approach to existing code is “don’t mess with it too much”. This is often a pretty reasonable approach, especially in codebases with poor test coverage where broad refactorings aren’t very safe. Unfortunately the “don’t mess with it too much” approach (as Sandi correctly points out) often makes bad code even worse.

I also of course agree that it’s bad when an abstraction gets cluttered up with a bunch of conditional logic to behave differently in different scenarios. Once that happens, the abstraction can hardly be called an abstraction anymore. It’s like two people trying to live in one body.

I also agree with Sandi’s approach to cleaning up poorly-deduplicated code. First, back out the “improvements” and return to the duplicated state. Then begin the de-duplication work anew. Good approach.

The parts I take issue with

What exactly is meant by “the wrong abstraction”?

I think “the wrong abstraction” is a confused way of referring to poorly-deduplicated code. Here’s why.

It seems to me that what’s meant by “the wrong abstraction” is “a confusing piece of code littered with conditional logic”. I don’t really see how it makes sense to call that an abstraction at all, let alone the wrong abstraction.

Not every piece of code is an abstraction of course. To me, an abstraction is a piece of code that’s expressed in high-level language so that the distracting details are abstracted away. If I were to see a confusing piece of code littered with conditional logic, I wouldn’t see it and think “oh, there’s an incorrect abstraction”, I would just think, “oh, there’s a piece of crappy code”. It’s neither an abstraction nor wrong, it’s just bad code.

So instead of “duplication is cheaper than the wrong abstraction”, I would say “duplication is cheaper than confusing code littered with conditional logic”. But I actually wouldn’t say that, because I don’t believe duplication is cheaper. I think it’s usually much more expensive.

When duplication is dearer

I don’t see how it can be said, without qualification, that duplication is cheaper than the wrong abstraction. Certain things must be considered. How bad is the duplication? How bad is the de-duplicated code? Sometimes the duplication is cheaper, but sometimes it’s more expensive. How do you know unless you know how good or bad each alternative is? It depends on the scenario.

If the duplication is very small and obvious, and the alternative is to create a puzzling mess, then that duplication is absolutely cheaper than the bad code. But if the duplication is horrendous (for example, the same several lines of code duplicated across distant parts of the codebase dozens of times, and with inconsistent names which make the duplication hard to notice or track down) and the alternative is a piece of code that’s merely imperfect, then I would say that the duplication is more expensive.

In general, I find duplication to typically be much more expensive than the de-duplicated alternative. Duplication can bite you, hard. The worst is when there’s a piece of code that’s duplicated but you don’t know it’s duplicated. In that case you risk changing the code in one place without realizing you’re creating an inconsistency. It’s hard for a poor abstraction to have consequences worse than that.

Programmers’ reluctance to refactor isn’t a good justification for not DRYing up code

If a programmer “feels honor-bound to retain the existing abstraction” (to quote Sandi’s post), then to me that sounds like a symptom of a problem that’s distinct from duplication or bad abstractions. If developers are afraid to clean up poor code, then I don’t think the answer is to hold off on fixing duplication problems. I think the answer is to address the reasons why developers are reluctant to clean up existing code. Maybe that reason is a lack of automated tests and code review, or a lack of a culture of collective ownership. Whatever the underlying problem is, fixing that problem surely must be a better response than allowing duplication to live in your codebase.

My alternative take, summarized

Instead of “duplication is cheaper than the wrong abstraction”, I would say the following.

Duplication is bad. In fact, duplication is one of the most dangerous mistakes in coding. Except in very minor cases, duplication is virtually always worth fixing. But not all possible ways of addressing duplication are equally good. Don’t replace a piece of duplicative code with a confusing piece of code that’s made out of if statements.

When you find yourself adding if statements to a piece of code in order to get it to behave differently under different scenarios, you’re creating confusion. Don’t try to make one thing act like two things. Instead, separate it into two things.
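Here’s an invented before-and-after sketch of what “separate it into two things” can mean in practice:

```ruby
# One method trying to act like two things (invented example):
def format_name(user, for_invoice)
  if for_invoice
    "#{user[:last_name].upcase}, #{user[:first_name]}"
  else
    "#{user[:first_name]} #{user[:last_name]}"
  end
end

# The same behavior, separated into two things. Each method now has
# one job and no scenario-switching conditional.
def invoice_name(user)
  "#{user[:last_name].upcase}, #{user[:first_name]}"
end

def display_name(user)
  "#{user[:first_name]} #{user[:last_name]}"
end

user = { first_name: 'John', last_name: 'Smith' }
puts invoice_name(user) # prints "SMITH, John"
puts display_name(user) # prints "John Smith"
```

Callers now say which of the two things they want by name, instead of passing a flag into one method that has to act like both.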

If you feel reluctant to modify someone else’s code, ask why that is. Is it because you feel like you’ll get in trouble if you do? Is it because you don’t understand the code, and there’s little test coverage, and you’re afraid you’ll break something if you make changes that are too drastic? Whatever the underlying reason for your reluctance is, it’s a problem, because it’s holding your organization back from improving its code. Instead of adding more bad code on top of the existing bad code, see if there’s anything you can do to try to address these underlying issues.

Takeaways

  • As Sandi Metz says (but in my words), confusing code littered with conditionals is not a good way to address duplicative code.
  • A piece of code filled with conditionals isn’t really an abstraction or even “wrong”, it’s just a confusing piece of code.
  • Duplication is one of the most dangerous mistakes in coding, and almost always worth fixing. Unless someone really botches the job when de-duplicating a piece of code, the duplicated version is almost always more expensive to maintain than the de-duplicated version.
  • Try to foster a culture of collective ownership in your organization so that developers aren’t afraid to question or change existing code when the existing code gets out of sync with current needs.
  • Try to use risk-mitigating practices like automated testing, small changes, and continuous deployment so that when refactorings are needed, you’re not afraid to do them.

The “20 Questions” method of debugging

There are three steps to fixing any bug: 1) reproduction, 2) diagnosis and 3) fix. This post will focus on the second step, diagnosis.

There are two ways to approach diagnosing a bug. One is that you can stare at the screen, think really hard, and try to guess what might be going wrong. In my experience this is the most common debugging approach that programmers take. It’s neither very efficient nor very enjoyable.

Another way is that you can perform a series of tests to find where the cause of the bug lies without yet worrying about what the cause of the bug is. Once the location of the bug is found, it’s often so obvious what’s causing the bug that little to no thinking is required. This method of bug diagnosis is much easier and more enjoyable than just thinking really hard, although unfortunately it’s not very common.

20 Questions strategies

You’ve probably played the game 20 Questions, where one player thinks of an object and the other players ask up to 20 yes-or-no questions to try to guess what the object is.

Smart players of 20 Questions try to ask questions that divide the possibilities more or less in half. If all that’s known about the object is that it’s an animal, then it’s of course a waste of a guess to ask whether the animal is a turtle: if the answer is no, you’ve only eliminated a small fraction of the kinds of animals the target object might be, because not very many kinds of animals are turtles.

Better to ask something like “Does it live on land?” or “Is it extinct?” which will rule out larger chunks of possibilities.

The binary search algorithm

This “eliminate-half strategy” is the genius of the binary search algorithm, which repeatedly chops a sorted list in half to efficiently find the target item. If you can determine that an item is not present in one half of the list, then it’s necessarily true that the item lies in the other half of the list. If you’re searching a list of 1000 items, you can chop the list down to 500, then 250, then 125, 63, 32, 16, 8, 4, 2, 1. That’s only 10 steps for a list of 1000 items. I find it pretty impressive that a single item can be found in a list of 1000 items in only 10 steps.
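As a sketch, here’s what that halving looks like in code, along with a count of how many halving steps it takes on a 1000-item list:

```ruby
# A minimal binary search over a sorted array, returning the index of
# the target (or nil if absent) plus the number of halving steps taken.
def binary_search(sorted, target)
  low = 0
  high = sorted.length - 1
  steps = 0

  while low <= high
    steps += 1
    mid = (low + high) / 2
    case sorted[mid] <=> target
    when 0  then return [mid, steps]
    when -1 then low = mid + 1  # target must be in the upper half
    else         high = mid - 1 # target must be in the lower half
    end
  end

  [nil, steps]
end

index, steps = binary_search((1..1000).to_a, 777)
puts "Found at index #{index} in #{steps} steps" # never more than 10 steps
```

Each pass through the loop discards half of the remaining search area, which is why 10 steps always suffice for a list of 1000 items.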

The simplicity of the binary search algorithm must be why, a surprisingly large amount of the time, a plastic toy that costs $12.99 is able to correctly determine what object you’re thinking of within 20 guesses. The machine seems miraculously intelligent, but its job probably requires more drudgery than actual thinking. All that presumably has to be done is to place the world’s information into nested categories, each “higher” category being about twice as broad as the one below it. For a human to manually perform these categorizations would take a lot of work, but once the categorizations are made, carrying out the binary search steps is mindlessly simple.

If binary search is powerful enough that a cheap toy can use it to guess almost any object you can think of, it’s not hard to imagine how useful binary search can be in pinpointing a bug in a web application.

Playing “20 Questions” with your IT system

Whenever I’m faced with a bug, I play “20 Questions” with my IT system. I ask my system a series of yes-or-no questions. Each answer rules out a chunk of the system as the area which contains the bug. Once an answer has allowed me to rule out an area, I ask a question about the part of the system that hasn’t yet been ruled out. Eventually the area that hasn’t been ruled out becomes so tiny that (much of the time) it’s totally obvious what the root cause is.

Of course, I can’t literally “ask” questions of my system. Instead I devise a series of yes-or-no tests that I can perform. For example, let’s say my web service is not responding to requests. The first question I might ask is “Is the site actually down or is it down for just me?” For that question my test would involve visiting downforeveryoneorjustme.com and typing in the URL for my website. If the site is in fact down for everyone, then my next question might be “Are requests making it to my web server?” For that question my test would involve looking at my web server’s logs to see evidence of HTTP requests being received. And so on.

Unlike a real binary search algorithm, I don’t always do a 50/50 split. Often it’s more economical to do what you might call a “weighted binary search” instead.

Splitting on size vs. splitting on likelihood

The reason a binary search algorithm splits lists in half is because we want each yes-or-no question to eliminate as much search area as possible, but we don’t know whether the answer will be a yes or a no, and so splitting the list in half guarantees that we never eliminate less than half of the list. If for example we chose a 75/25 split instead of a 50/50 split, we risk eliminating only 25% of the possibilities rather than the other 75% of the possibilities. That would be a waste.

The 50/50 strategy makes sense if and only if you don’t know the probability of the target item appearing in one half or the other (or if the chances truly are 50/50). The strategy makes less sense if you have clues about where the target item lies.

For example, if I become aware that a certain bug was present after a certain deployment and absent before it, I don’t have to waste time stupidly dividing my entire codebase in half repeatedly in order to find the root cause of the bug. It’s reasonable for me to believe that the code introduced by the deployment is likely (although of course not certain) to contain the offending code. The change in the deployment might represent just 1% or less of the codebase, meaning that if my guess that the deployment introduced the bug was wrong, then I will only have eliminated 1% of the codebase and my search area will still be the remaining 99%. But because the risk of my guess being wrong is usually so low, and because the cost of performing the yes/no test is usually so small, it’s a win on average to do this kind of 1/99 split rather than a 50/50 split.

Takeaways

The “20 Questions Method of Debugging” is to perform a binary search on your codebase. Divide your codebase roughly in half, devise a test to tell you which half you can rule out as containing the root cause of the bug, and repeat until the remaining search area is so small that the root cause can’t help but reveal itself.

More precisely, the 20 Questions Method of Debugging is a “weighted binary search”. If you have a good reason to believe that the root cause lies in a certain area of your codebase, then you can save steps by immediately testing to see if you can rule out everything that’s not the suspect area rather than doing a dumb 50/50 split. You will probably “lose” sometimes and end up eliminating the smaller fraction rather than the larger fraction, but as long as your guesses are right enough of the time, this strategy will be more efficient on average than always doing a 50/50 split.

The 20 Questions Method of Debugging is a better method of debugging than the “stare at the screen and think really hard” method for two reasons. The first reason is that the 20 Questions Method is more efficient than the “sit and think” method because thinking is expensive but performing a series of yes-or-no tests is cheap. The second reason is my favorite, which is this: the “sit and think” method may never yield the right answer, but the 20 Questions Method is virtually guaranteed to eventually turn up the right answer through the power of sheer logic.

How to program in feedback loops

One of the classic mistakes that beginning programmers make is this:

  1. Spend a long time writing code without ever trying to run it
  2. Finally run the code and observe that it doesn’t work
  3. Puzzle over the mass of code that’s been written, trying to imagine what might have gone wrong

This is a painful, slow, wasteful way to program. There’s a better way.

Feedback loops

Computer programs are complicated. Human memory and reasoning are limited and fallible. Even a program consisting of just a few lines of code can defy our expectations for how the program ought to behave.

Knowing how underpowered our brains are in the face of the mentally demanding work of programming, we can benefit from taking some measures to lessen the demand on our brains.

One thing we can do is work in feedback loops.

The idea of a feedback loop is to shift the mindset from “I trust my mind very much, and so I don’t need empirical data from the outside world” to “I trust my mind very little, and so I need to constantly check my assumptions against empirical data from the outside world”.

How to program in feedback loops

The smallest feedback loop can go something like this.

  1. Decide what you want to accomplish
  2. Devise a manual test you can perform to see if #1 is done
  3. Perform the test from step 2
  4. Write a line of code
  5. Repeat test from step 2 until it passes
  6. Repeat from step 1 with a new goal

Let’s go over these steps individually with an example.

Decide what you want to accomplish

Let’s say you’re writing a program that reads a list of inputs from a file and processes those inputs somehow.

You can’t make your first goal “get program working”. That’s too big and vague of a goal. Maybe a good first goal could be “read from the file and print the first line to the screen”.

Devise a manual test you can perform to see if #1 is done

For this step, you could put some code in a file and then run that file on the command line. More precisely, “run the file on the command line” would be the test.

Perform test from #2

In this step, you would perform the test that you devised in the previous step, in this case running the program on the command line.

Significantly, you want to perform this test before you’ve actually attempted to make the test pass. The reason is that you want to avoid false positives. If you write some code and then perform the test and the test passes, you can’t always know whether the test passed because your code works or if the test passed because you devised an invalid test that will pass for reasons other than the code working. The risk of false positives may seem like a remote one but it happens all the time, even to experienced programmers.

Write a line of code

The key word here is “a”, as in “one”. I personally never trust myself to write more than one or two lines of code at a time, even after 15+ years of programming.

Running a test after each line of code, even if you don’t expect each line to sensibly progress your program toward its goal, helps ensure that you’re not making egregious mistakes. For example, if you add a line that contains a syntax error, it’s good to catch the syntax error in that line before adding more lines.

To continue the example, this step, at least the first iteration of this step, might involve you writing a line that attempts to read from a file (but not print output yet).

Repeat test from #2 until passing

After writing a line of code, perform your test again. Your test might pass. Your test might fail in an unsurprising way. Or something unexpected might happen, like an error.

Once your test results are in, write another line of code and repeat the process until your test passes.

Repeat from step 1

Once your test passes, decide on a new goal and repeat the whole process.
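To tie the example together, here’s a sketch of where the code might stand once the first goal (“read from the file and print the first line”) is reached. The filename and contents are invented for illustration:

```ruby
# Seed a sample input file so the example is self-contained.
File.write('inputs.txt', "first line\nsecond line\n")

# The code built up one line at a time, re-running the file after each
# addition. Running this file on the command line is the manual test.
first_line = File.readlines('inputs.txt').first.chomp
puts first_line # prints "first line"
```

The next goal (say, processing each line) would then start a fresh pass through the same loop: decide the goal, devise a test, run the test, write a line, repeat.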

Advanced feedback loops

Once you get comfortable coding in feedback loops, you can do something really exciting, which is to automate the testing step of the feedback loop. Not only can this make your coding faster but it can protect your entire application against regressions (when stuff that used to work stops working).

I consider automated testing a necessary skill for true professional-level programming, but it’s also a relatively “advanced” skill that you can ignore when you’re first getting started with programming.

Bigger feedback loops

In a well-run organization, programmers work in concentric circles of feedback loops. Lines of code are written in feedback loops that last seconds. Automated test suites provide feedback loops that last minutes. Individual tasks are designed for feedback loops that last hours or days. Large projects are broken into sub-projects that last days or weeks.

Takeaways

  • No one’s brain is powerful enough to write a whole program at once without running it.
  • Feedback loops shift your trust from your own fallible mind to hard empirical data.
  • Follow the feedback loop instructions in the post to code faster, more easily and more enjoyably.

How to avoid wasteful refactoring

I’m a big fan of good code. Naturally, I’m also a big fan of refactoring.

Having said that, I certainly think it’s possible to refactor too much. I think the time to stop refactoring is when you’re no longer sure that the work you’re doing is an improvement.

Don’t keep a refactoring if you’re not sure it’s an improvement

If I refactor a piece of code and I’m not sure the refactored version is better than the original version, then I don’t help anything by committing the new version. I should throw away the new version and write it off as a cost of doing business.

I might be tempted to commit the new version because I spent time on it but that’s just sunk cost fallacy talking.

How to tell if a refactoring is an improvement

One way to tell if a refactoring is an improvement is to use heuristics: clear names, short methods, small classes, etc. Heuristics, together with common sense, can of course get you pretty far.

The only way to really, truly, objectively, empirically know if a refactoring is an improvement is to take a time machine to the next time that area of code needs to be changed and A/B test the refactored version against the unrefactored version and see which was easier to work with. Obviously that’s not a performable test.

So we have to do the closest thing we can which is to perform the time-machine test in our imaginations. Sometimes the time-machine test is hard because we’re so familiar with the code that it’s impossible to imagine seeing it for the first time.

When we reach this point, where it’s hard to imagine how easily Maintainer-of-the-Future would be able to understand our code, it’s usually time to stop refactoring. It’s usually more efficient to wait and reassess the code the next time we have to work with it. By then we will have forgotten the context. Any faults in the code will be more glaringly obvious at that stage. The solutions will be more obvious too.

Refactoring too little

Having said all that, I actually think most developers refactor too little. They might be afraid of being accused of wasting time or gold-plating. But if you know how to tell with reasonable certainty whether a refactoring is an improvement, then you can be pretty sure you’re not wasting time. After all, the whole point of refactoring, and writing good code in general, is to make the code quick and easy to work with.

The surest way to avoid refactoring waste

The surest way to avoid refactoring waste is to keep a policy of doing refactorings before making behavior changes. If you need to make a change to a piece of code but that piece of code is too nasty for you to easily be able to change it, then refactor the code (separately from your behavior change) before changing it. As Kent Beck says, “for each desired change, make the change easy (warning: this may be hard), then make the easy change”.

You can think of this as “lazy loading” your refactorings. You can be sure that your refactorings aren’t a waste because your refactorings are always driven by real needs, not speculation.

Takeaways

  • The time before you change a piece of code is a great time to refactor it. That way you know that your refactoring is driven by a real need.
  • It’s also good to refactor a piece of code after you change it. Just be sure you know how to tell when your work is no longer an improvement, and don’t succumb to the sunk cost fallacy.
  • With those two points in mind, don’t be afraid to refactor aggressively.

Bugfixes don’t make good onboarding tasks

It’s a popular opinion that bugfixes are a good way to get acquainted with a codebase. I say they’re not. Here’s why.

Feedback loops

When a new developer starts at an organization, I find it helpful to keep the feedback loop tight. When a new developer can complete a small and easy task, then complete another small and easy task, then complete a slightly bigger and harder task and so on, it helps the developer get comfortable, gain confidence, and know where they stand.

This is why good early tasks have the following characteristics.

  • They’re small
  • They’re easy
  • They’re clearly and crisply defined
  • They don’t require any context that can’t be built into the task instructions
  • They have a clear expected completion time (e.g. “half a day of work”)

Here’s why bugfixes fail every single one of those tests.

Why bugfixes don’t make good onboarding tasks

Bugfixes aren’t always small and easy

Sometimes a bugfix is quick and easy. Oftentimes a bugfix is really hard. Some bugs are symptoms of underlying problems so deep that they can never reasonably be fixed.

Bugfix jobs are often hard to define

In order to fix a bug, the bug needs to 1) be reproduced, 2) be diagnosed and then 3) be fixed. It’s hard to convey to another person exactly what they need to do in order to carry out those steps. It’s going to vary wildly from bug to bug. There’s a lot of room for a person to “wander off the ranch”.

(Note, if the bug has been diagnosed with certainty before the bugfix is assigned to a developer, then the bugfix can be clearly defined and can make a perfectly good onboarding task.)

Bugfixes can require context

Many bugfixes require familiarity with the domain or the application itself in order to be reproduced, diagnosed and fixed successfully.

If a developer is faced with a task that requires him or her to go get some context, that can be a good thing. But often a bugfix requires some narrow piece of context that’s not all that profitable to know. In this case the context-gathering work just adds drag to the developer’s onboarding experience.

Bugfixes almost never have a clear expected completion time

Some bugs take five minutes to fix. Some bugs take weeks to fix. Some bugs require so much work to fix that they’re not even worth fixing.

It’s not great if a new developer is given a task and nobody knows quite how long to expect that developer to take in completing the task.

Takeaway

Unless the root cause has already been diagnosed and the fix can meet the “good onboarding task” criteria above, bugfixes don’t tend to make good onboarding tasks.

How map(&:some_method) works

The map method’s shorthand syntax

One of the most common uses of map in Ruby is to take an object and call some method on the object, like this.

[1, 2, 3].map { |number| number.to_s }

To save us from redundancy, Ruby has a shorthand version which is functionally equivalent to the above.

[1, 2, 3].map(&:to_s)

The shorthand version is nice but its syntax is a little mysterious. In this post I’ll explain why the syntax is what it is.

Passing symbols as blocks

Let’s leave the world of map for a moment and deal with a “regular” method.

I’m going to show you a method which takes a block and then demonstrate four different ways of passing a block to that method.

Side note: if you’re not too familiar with Proc objects yet, I would suggest reading my other posts on how Proc objects work and what the & in front of &block means before continuing.

First way: using a normal block

You’ve of course seen this way before. We call my_method and pass a regular block to it.

def my_method(&block)
  block.call("hello")
end

puts my_method { |value| value.upcase } # outputs "HELLO"

Second way: using Proc.new

If we wanted to, we could instead pass our block using Proc.new. Since my_method takes a block and not a Proc object, we would have to prefix Proc.new with an ampersand to convert the Proc object into a block.

(If you didn’t know, prefixing an expression with & will convert a proc to a block and a block to a proc. See this post for more details on how that works.)

def my_method(&block)
  block.call("hello")
end

puts my_method(&Proc.new { |value| value.upcase })

There would never really be a practical reason to write the code this way, but I wanted to show that it’s possible. This “second way” example also bridges the gap between the first way and the third way.

Third way: using to_proc

All symbols respond to a method called to_proc which returns a Proc object. If we do :upcase.to_proc, it gives us a Proc object that’s equivalent to what we would have gotten by doing Proc.new { |value| value.upcase }.
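To see this equivalence directly, we can build both Proc objects and call them ourselves. (The variable names here are just for illustration.)

```ruby
# :upcase.to_proc behaves the same as a Proc we could have built by hand.
from_symbol = :upcase.to_proc
by_hand = Proc.new { |value| value.upcase }

puts from_symbol.call("hello") # HELLO
puts by_hand.call("hello")     # HELLO
```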

def my_method(&block)
  block.call("hello")
end

puts my_method(&:upcase.to_proc)

Fourth way: passing a symbol

I’ll show one final way. When Ruby sees an argument that’s prefixed with an ampersand, it attempts to call to_proc on the argument. So the explicit to_proc in &:upcase.to_proc is actually superfluous. We can just pass &:upcase all by itself.

def my_method(&block)
  block.call("hello")
end

puts my_method(&:upcase)

What ultimately gets passed is the Proc object that results from calling :upcase.to_proc. Or, more precisely, what gets passed is the block that the & prefix produces from that Proc object, since & converts the Proc object to a block.
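We can demonstrate that Ruby really does call to_proc on an &-prefixed argument by defining to_proc on a class of our own. (The Shouter class below is hypothetical, purely for illustration.)

```ruby
def my_method(&block)
  block.call("hello")
end

# A hypothetical class that defines its own to_proc.
class Shouter
  def to_proc
    Proc.new { |value| value.upcase + "!" }
  end
end

# Ruby calls Shouter#to_proc when it sees the & prefix.
puts my_method(&Shouter.new) # HELLO!
```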

Passing symbols to map

With the understanding of the above, you now know that this:

[1, 2, 3].map(&:to_s)

Is equivalent to this:

[1, 2, 3].map(&:to_s.to_proc)

Which is equivalent to this:

[1, 2, 3].map(&Proc.new { |number| number.to_s })

Which, finally, is equivalent to this:

[1, 2, 3].map { |number| number.to_s }

So, contrary to the way it may seem, there aren’t two different “versions” of the map method. The shorthand syntax works because Ruby calls to_proc on an &-prefixed argument and passes the resulting Proc object as a block.
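Since the shorthand is just symbol-to-Proc conversion, it works with any method the elements respond to, not just to_s.

```ruby
# The &: shorthand calls the named method on each element.
puts ["apple", "banana"].map(&:upcase).inspect # ["APPLE", "BANANA"]
puts [1, 2, 3].map(&:even?).inspect            # [false, true, false]
```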

Takeaways

  • When an argument is prefixed with &, Ruby attempts to call to_proc on it.
  • All symbols respond to the to_proc method.
  • There aren’t two different versions of the map method. The shorthand syntax is possible due to the two points above.

Understanding Ruby closures

Why you’d want to know about Ruby closures

Ruby blocks are simultaneously one of the most fundamental parts of the language and perhaps one of the hardest to understand.

Ruby methods like map and each operate using blocks. You’ll also find heavy use of blocks in popular Ruby libraries including Ruby on Rails itself.

If you start to dig into Ruby blocks, you’ll discover that, in order to understand blocks, you have to understand something else called Proc objects.

And as if that weren’t enough, you’ll then discover that if you want to deeply understand Proc objects, you’ll have to understand closures.

The concept of a closure is one that suffers from 1) an arguably misleading name (more about this soon) and 2) unhelpful, jargony explanations online.

My goal with this post is to provide an explanation of closures in plain language that can be understood by someone without a Computer Science background. And in fact, a Computer Science background is not needed; it’s only the poor explanations of closures that make it seem that way.

Let’s dig deeper into what a closure actually is.

What a closure is

A closure is a record which stores a function plus (potentially) some variables.

I’m going to break this definition into parts to reduce the chances that any part of it is misunderstood.

  • A closure is a record
  • which stores a function
  • plus (potentially) some variables

I’m going to discuss each part of this definition individually.

First, a reminder: the whole reason we’re interested in Ruby closures is because of the Ruby concept called a Proc object, which is heavily involved in blocks. A Proc object is a closure. Therefore, all the examples of closures in this post will take the form of Proc objects.

If you’re not yet familiar with Proc objects, I would suggest taking a look at my other post, Understanding Ruby Proc objects, before continuing. It will help you understand the ideas in this post better.

First point: “A closure is a record”

A closure is a value that can be assigned to a variable or some other kind of “record”. The term “record” doesn’t have a special technical meaning here; we just use the word “record” because it’s broader than “variable”. Shortly we’ll see an example of a closure being assigned to something other than a variable.

Here’s a Proc object that’s assigned to a variable.

my_proc = Proc.new { puts "I'm in a closure!" }
my_proc.call

Remember that every Proc object is a closure. When we do Proc.new we’re creating a Proc object and thus a closure.

Here’s another closure example. Here, instead of assigning the closure to a variable, we’re assigning the closure to a key in a hash. The point here is that the thing a closure gets assigned to isn’t always a variable. That’s why we say “record” and not “variable”.

my_stuff = { my_proc: Proc.new { puts "I'm in a closure too!" } }
my_stuff[:my_proc].call

Second point: “which stores a function”

As you may have deduced, or as you may have already known, the code between the braces (puts "I'm in a closure!") is the function we’re talking about when we say “a closure is a record which stores a function”.

my_proc = Proc.new { puts "I'm in a closure!" }
my_proc.call

A closure can be thought of as a function “packed up” into a variable. (Or, more precisely, a variable or some other kind of record.)

Third point: “plus (potentially) some variables”

Here’s a Proc object (which, remember, is a closure) that involves an outside variable.

The variable number_of_exclamation_points gets included in the “environment” of the closure. Each time we call the closure that we’ve named amplifier, the number_of_exclamation_points variable gets incremented and one additional exclamation point gets added to the string that gets outputted.

number_of_exclamation_points = 0

amplifier = Proc.new do
  number_of_exclamation_points += 1
  "louder" + ("!" * number_of_exclamation_points)
end

puts amplifier.call # louder!
puts amplifier.call # louder!!
puts amplifier.call # louder!!!
puts amplifier.call # louder!!!!
puts number_of_exclamation_points # 4 - the original variable was mutated

As a side note, I find the name “closure” to be misleading. The fact that the above closure can mutate number_of_exclamation_points, a variable outside the function’s scope, seems to me like a decidedly un-closed idea. In fact, it seems like there’s a tunnel, an opening, between the closure and the outside scope, through which changes can leak.

I personally started having an easy time understanding closures once I stopped trying to connect the idea of “a closed thing” with the mechanics of how closures actually work.
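One consequence of a closure carrying its variables with it is that those variables survive even after the scope that created them has exited. Here’s a minimal sketch (the build_counter method name is just for illustration).

```ruby
def build_counter
  count = 0
  # The Proc captures count; count lives on inside the closure
  # even after build_counter returns and its local scope is gone.
  Proc.new { count += 1 }
end

counter = build_counter
puts counter.call # 1
puts counter.call # 2
```

Each call to build_counter produces a fresh closure with its own independent count.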

Takeaways

  • Ruby blocks heavily involve Proc objects.
  • Every Proc object is a closure.
  • A closure is a record which stores a function plus (potentially) some variables.

The two common ways to call a Ruby block

Ruby blocks can be difficult to understand. One of the details which presents an obstacle to fully understanding blocks is the fact that there is more than one way to call a block.

In this post we’ll go over the two most common ways of calling a Ruby block: block.call and yield.

There are also other ways to call a block, e.g. instance_exec. But that’s an “advanced” topic which I’ll leave out of the scope of this post.

Here are the two common ways of calling a Ruby block and why they exist.

The first way: block.call

Below is a method that accepts a block, then calls that block.

def hello(&block)
  block.call
end

hello { puts "hey!" }

If you run this code, you’ll see the output hey!.

You may wonder what the & in front of &block is all about. As I explained in a different post, the & converts the block into a Proc object. The block can’t be called directly using .call. The block has to be converted into a Proc object first and then .call is called on the Proc object.

I encourage you to read my two other posts about Proc objects and the & at the beginning of &block if you’d like to understand these parts more deeply.

The second way: yield

The example below is very similar to the first example, except instead of using block.call we’re using yield.

def hello(&block)
  yield
end

hello { puts "hey!" }

You may wonder: if we already have block.call, why does Ruby provide a second, slightly different way of calling a block?

One reason is that yield gives us a capability that block.call doesn’t have. In the below example, we define a method and then pass a block to it, but we never have to explicitly specify that the method takes a block.

def hello
  yield
end

hello { puts "hey!" }

As you can see, yield gives us the ability to call a block even if our method doesn’t explicitly take a block. (Side note: any Ruby method can be passed a block, even if the method doesn’t explicitly take one.)
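Because any method can receive a block implicitly, Ruby provides block_given? so a method can check whether a block was actually passed before yielding to it.

```ruby
def hello
  if block_given?
    yield
  else
    puts "no block passed"
  end
end

hello { puts "hey!" } # hey!
hello                 # no block passed
```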

The fact that yield exists raises the question: why not just use yield all the time?

The answer is that when you use block.call, you have the ability to pass the block to another method if you so choose, which is something you can’t do with yield.

When we put &block in a method’s signature, we can do more with the block than just call it using block.call. We could also, for example, choose not to call the block but rather pass the block to a different method which then calls the block.
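Here’s a small sketch of handing a block off to another method, which is what an explicit &block parameter makes possible. (The method names outer and inner are just for illustration.)

```ruby
def outer(&block)
  # Rather than calling the block here, pass it along to inner.
  inner(&block)
end

def inner(&block)
  block.call("hello")
end

puts outer { |value| value.upcase } # HELLO
```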

Takeaways

  • There are two common ways to call a Ruby block: block.call and yield.
  • Unlike block.call, yield gives us the ability to call a block even if our method doesn’t explicitly take a block.
  • Unlike using an implicit block and yield, using an explicit block allows us to pass the block to another method.