Note: before starting this post, I recommend reading my other posts about procs and closures for background.
Overview
What’s the difference between a proc and a lambda?
Lambdas actually are procs. Lambdas are just a special kind of proc and they behave a little bit differently from regular procs. In this post we’ll discuss the two main ways in which lambdas differ from regular procs:
The return keyword behaves differently
Arguments are handled differently
Let’s take a look at each one of these differences in more detail.
The behavior of “return”
In lambdas, return means “exit from this lambda”. In regular procs, return means “exit from the enclosing method”.
Below is an example, pulled straight from the official Ruby docs, which illustrates this difference.
def test_return
  # This is a lambda. The "return" just exits
  # from the lambda, nothing more.
  -> { return 3 }.call

  # This is a regular proc. The "return" returns
  # from the method, meaning control never reaches
  # the final "return 5" line.
  proc { return 4 }.call

  return 5
end

test_return # => 4
Argument handling
Argument matching
A proc will happily execute a call with the wrong number of arguments. A lambda requires exactly the right number of arguments.
> p = proc { |x, y| "x is #{x} and y is #{y}" }
> p.call(1)
=> "x is 1 and y is "
> p.call(1, 2, 3)
=> "x is 1 and y is 2"
> l = lambda { |x, y| "x is #{x} and y is #{y}" }
> l.call(1)
(irb):5:in `block in <main>': wrong number of arguments (given 1, expected 2) (ArgumentError)
> l.call(1, 2, 3)
(irb):14:in `block in <main>': wrong number of arguments (given 3, expected 2) (ArgumentError)
Array deconstruction
If you call a proc with an array instead of separate arguments, the array will get deconstructed, as if the array were preceded by a splat operator.
If you call a lambda with an array instead of separate arguments, the array will be interpreted as the first argument, and an ArgumentError will be raised because the second argument is missing.
> proc { |x, y| "x is #{x} and y is #{y}" }.call([1, 2])
=> "x is 1 and y is 2"
> lambda { |x, y| "x is #{x} and y is #{y}" }.call([1, 2])
(irb):9:in `block in <main>': wrong number of arguments (given 1, expected 2) (ArgumentError)
In other words, lambdas behave exactly like Ruby methods. Regular procs don’t.
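Incidentally, you can confirm at runtime which flavor you’re dealing with. Both lambdas and regular procs are instances of Proc, and Proc#lambda? tells them apart:

> l = -> { 3 }
> l.class
=> Proc
> l.lambda?
=> true
> p = proc { 4 }
> p.lambda?
=> false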
Takeaways
In lambdas, return means “exit from this lambda”. In regular procs, return means “exit from the enclosing method”.
A regular proc will happily execute a call with the wrong number of arguments. A lambda requires exactly the right number of arguments.
Regular procs deconstruct arrays in arguments. Lambdas don’t.
Lambdas behave exactly like methods. Regular procs behave differently.
If you’ve been programming for any length of time, you’ve probably come across the advice “don’t use global variables”.
Why are global variables so often advised against?
The reason is that global variables make a program less understandable. When you’re looking at a piece of code that uses a global variable, you don’t know if you’re seeing the whole picture. The code isn’t self-contained. In order to understand your piece of code, you potentially have to venture to some outside place to have a look at some other code that’s influencing your code at a distance.
The key idea is scope. If a local variable is defined inside of a function, for example, then that variable’s scope is limited to that function. Nobody from outside that function can see or mess with that variable. As another example, if a private instance variable is defined for a class, then that variable’s scope is limited to that class, and nobody from outside that class can see or mess with the variable.
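Here’s a tiny Ruby illustration of that idea (the method and variable names are just for the example):

def calculate_total
  tax_rate = 0.08 # local variable; its scope is limited to this method
  100 * (1 + tax_rate)
end

calculate_total # => 108.0
tax_rate # => NameError, because nothing outside the method can see it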
The broader a variable’s scope, the more code has to be brought into the picture in order to understand any of the code that involves that variable. If I have a function that depends on its own arguments and nothing else, then that function can be understood in isolation. All I need in order to understand the function (at least in terms of causes and effects, as opposed to conceptual understanding which may require outside context) is the code inside the function. If alternatively the function involves a class instance variable, for example, then I potentially need to look at the other places in the class that involve the instance variable in order to understand the behavior of the function.
The maximum scope a variable can have is global scope. In terms of understanding, a global variable presents the biggest burden and requires the most investigative work. That’s why global variables are so often cautioned against.
Having said that, it’s actually a little simplistic to say “global variables are bad”. It would be more precise to say “global variables are costly”. There are some scenarios where the cost of a global variable is worth the price. In those cases, the idea of a global variable could be said to be good because it’s less costly than the alternatives.
But in the vast majority of cases, it’s good to keep the scope of variables as small as possible. The smaller the scopes of your variables are, the more it will aid the understandability of your code.
All code has a maintenance cost. Some code of course is an absolute nightmare to maintain. We would say its maintenance cost is high. Other code is easier to maintain. We would say its maintenance cost is low, or at least relatively low compared to worse code.
When thinking about good code and bad code, it’s worth considering when exactly code’s maintenance cost is incurred. We might think of these occasions as “tollways”: roads we can’t travel without paying a toll.
Tollway 1: when the code is changed
For any particular piece of code, a toll is incurred every time that code needs to be changed. The size of the toll depends on how easy the code is to understand and change.
Tollway 2: when the code needs to be understood in order to support a change in a different area
Even if a piece of code doesn’t need to be changed, the code incurs a toll whenever it needs to be understood in order to make a different change. This dependency of understanding happens when pieces of code are coupled via inheritance, values passed in method calls, global variables, or any of the other ways that code can be coupled.
We could put code into two categories: “leaf code”, which may depend on other code but which no other code depends on, and “branch code”, which other code does depend on (and which may or may not depend on other code itself). Branch code incurs this second kind of toll; leaf code doesn’t.
Good code matters in proportion to future tollway traffic
When any new piece of code is added to a codebase, it may be possible to predict the future “tollway traffic” of that code.
Every codebase has some areas that change more frequently than others. If the code you’re adding lies in a high-change area, then it’s probably safe to predict that that code will have high future tollway traffic. On average it’s a good investment to spend time making this code especially good because the upfront effort will get paid back a little bit every time the code gets touched in the future.
Conversely, if there’s a piece of code that you have good reason to believe will change infrequently, it’s less important to make this code good, because the return on investment won’t be as great. (If you find out that your original prediction was wrong, it may be wise to refactor the code so you don’t end up paying more in toll fees than you have to.)
If there’s a piece of code that’s very clearly “branch code” (other code depends on it) then it’s usually a good idea to spend extra time to make sure this code is easily understandable. Most codebases have a small handful of key models which are touched by a large amount of code in the codebase. If the branch code is sound, it’s a great benefit. If the branch code has problems (e.g. some fundamental concept was poorly-named early on) then those problems will stretch their tentacles throughout the codebase and cause very expensive problems.
On the other hand, if a piece of code can be safely identified as leaf code, then it’s not so important to worry about making that code super high-quality.
But in general, it’s hard to predict whether a piece of code will have high or low future tollway traffic, so it’s good to err on the side of assuming high future tollway traffic. Rarely do codebases suffer from the problem that the code is too good.
Bad reasons to write bad code
It’s commonly believed that it’s wise to take on “strategic technical debt” in order to meet deadlines. In theory this is a smart way to go, but in practice it’s always a farce. The debt gets incurred but then never paid back.
It’s also a mistake to write crappy code because “users don’t care about code”. Users obviously don’t literally care about code, but users do experience the symptoms of bad code when the product is full of bugs and the development team’s productivity slows to a crawl.
Takeaways
A piece of code incurs a “toll” when it gets changed or when it needs to be understood in order to support a change in a different piece of code.
The return on investment of making a piece of code good is proportionate to the future tollway traffic that code will receive.
Predicting future tollway traffic is not always easy or possible, but it’s not always impossible either. Being judicious about when to spend extra effort on code quality and when to skip it is more economical than indiscriminately writing “medium-quality” code throughout the entire codebase.
If you want to be a competent Rails tester, there are a lot of different things you have to learn. The things you have to learn might be divided into three categories.
The first of these three categories is tools. For example, you have to choose a testing framework and learn how to use it. Then there are principles, such as the principle of testing behavior vs. implementation. Lastly, there are practices, like the practice of programming in feedback loops.
This post will focus on the first category, tools.
For better or worse, the testing tools most commercial Rails projects use are RSpec, Factory Bot and Capybara. When developers who are new to testing (and possibly Ruby) first see RSpec syntax, for example, they’re often confused.
Below is an example of a test written using RSpec, Factory Bot and Capybara. To a beginner the syntax may look very mysterious.
describe "Signing in", type: :system do
it "signs the user in" do
user = create(:user)
visit new_user_session_path
fill_in "Username", with: user.username
fill_in "Password", with: user.password
click_on "Submit"
expect(page).to have_content("Sign out")
end
end
The way to take the above snippet from something mysterious to something perfectly clear is to learn all the details of how RSpec, Factory Bot and Capybara work. And doing that will require us to become familiar with domain-specific languages (DSLs).
For each of RSpec, Factory Bot and Capybara, there’s a lot to learn. And independently of those tools, there’s a lot to be learned about DSLs as well. Therefore I recommend learning a bit about DSLs separately from learning about the details of each of those tools.
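To give a rough idea of the kind of trick involved, here’s a minimal sketch of a describe/it-style DSL. This is not how RSpec is actually implemented; it just shows how blocks and instance_eval can make plain method calls look like keywords:

class MiniSpec
  def self.describe(description, &block)
    puts description
    # Evaluate the block in the context of a fresh instance,
    # so bare calls to "it" inside the block resolve to our method.
    new.instance_eval(&block)
  end

  def it(description, &block)
    puts "  #{description}: #{block.call ? 'pass' : 'fail'}"
  end
end

MiniSpec.describe "Array#first" do
  it "returns the first element" do
    [1, 2, 3].first == 1
  end
end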
Here are some posts that can help you learn about DSLs. If you’re learning testing, I suggest going through these posts and seeing if you can connect them to the code you see in your Rails projects’ codebases. As you gain familiarity with DSL concepts and the ins and outs of your particular tools, your test syntax should look increasingly clear to you.
Learning how Ruby DSLs work can be difficult and time-consuming but it’s well worth it. And if you’re using testing tools that make use of DSLs, learning about DSLs is a necessary step toward becoming a fully competent Rails tester.
In programming interviews, job candidates are sometimes asked what kinds of side projects they work on in their spare time. The supposed implication is that if you work on side projects in your free time then that’s good, and if you don’t that’s bad.
This idea has led to a somewhat lively debate: do you have to code in your free time in order to be a good programmer?
The popular answer is an emphatic no. You can put in a solid 8-hour workday, do a kick-ass job, and then go home and relax, knowing you’re fully meeting all of your professional obligations. And actually, you might even be a better programmer because you’re not running yourself ragged and burning yourself out.
But actually, both this question and the standard answer are misguided. In fact, they miss the point so thoroughly that they can’t even be called wrong. I’ll explain what I mean.
Drumming in your free time
Imagine I’m a professional drummer. I make my living by hiring out my drumming services at bar shows, weddings and parties. I’m a very competent drummer although maybe not a particularly exceptional one.
Imagine how funny it would be for me to go on an online forum and ask, Do I have to practice drumming in my free time in order to be a good drummer?
I can imagine a couple inevitable responses. First of all, who’s this imaginary authority who’s going around and handing down judgments about who’s a good drummer or not? And second, yes, of course you have to spend some time practicing if you want to get good, especially when you’re first starting out.
The question reveals a very confused way of looking at the whole situation.
The reality is that there’s an economic judgment call to be made. I can choose to practice in my free time and get better faster, or I can choose not to practice in my free time and improve much more slowly, or perhaps even get worse. Neither choice is right or wrong. Neither choice automatically makes me “good” or “bad”. It’s simply a personal choice that I have to make for myself. The question is whether I personally find the benefits of practicing the drums to be worth the cost of practicing the drums.
An important factor that will inform my decision is the objectives that I’m personally pursuing. Am I, for example, trying to be the best drummer in New York City? Or do I just want to have a little fun on the weekends? The question is the same for everyone but the answer is going to be much different depending on what you want and what you’re willing to do to get it.
The drumming analogy makes it obvious how silly it is to ask directly if you spend your free time practicing. Maybe the person asking is trying to probe for “passion” (yuck). But passion is a means to an end, not an end in itself. Instead of looking for passion, the evaluator should look for the fruits of passion, i.e. being a good drummer.
Back to programming
Do your career goals, in combination with your current skill level, justify the extra cost of programming in your free time? If so, then coding in your free time is a rational choice. And if you decide that there are no factors in your life that make you want to code in your free time, then that’s a perfectly valid choice as well. There’s no right or wrong answer. You don’t “have to” or not have to. Rather, it’s a choice for each person to make for themselves.
Memoization is a performance optimization technique.
The idea with memoization is: “When a method invokes an expensive operation, don’t perform that operation each time the method is called. Instead, just invoke the expensive operation once, remember the answer, and use that answer from now on each time the method is called.”
Below is an example that shows the benefit of memoization. The example is a class with two methods which both return the same result, but one is memoized and one is not.
The expensive operation in the example takes one second to run. As you can see from the benchmark I performed, the memoized method is dramatically more performant than the un-memoized one.
Running the un-memoized version 10 times takes 10 seconds (one second per run). Running the memoized version 10 times takes only just over one second. That’s because the first call takes one second but the calls after that take a negligibly small amount of time.
class Product
  # This method is NOT memoized. This method will invoke the
  # expensive operation every single time it's called.
  def price
    expensive_calculation
  end

  # This method IS memoized. It will invoke the expensive
  # operation the first time it's called but never again
  # after that.
  def memoized_price
    @memoized_price ||= expensive_calculation
  end

  def expensive_calculation
    sleep(1)
    500
  end
end

require "benchmark"

product = Product.new
puts Benchmark.measure { 10.times { product.price } }
puts Benchmark.measure { 10.times { product.memoized_price } }
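The memoization in memoized_price happens via the ||= operator. a ||= b behaves roughly like a || (a = b): the assignment only happens when the left-hand side is nil or false. A quick illustration:

price = nil
price ||= 500 # price was nil, so the assignment happens
price ||= 999 # price is truthy now, so this line does nothing
price # => 500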
I’ve always thought memoization was an awkward term due to its similarity to “memorization”. The obscurity of the name bugged me a little so I decided to look up its etymology.
According to Wikipedia, “memoization” is derived from the Latin word “memorandum”, which means “to be remembered”. “Memo” is short for memorandum, hence “memoization”.
When to use memoization
The art of performance optimization is a bag of many tricks: query optimization, background processing, caching, lazy UI loading, and other techniques.
Memoization is one trick in this bag of tricks. You can recognize its use case when an expensive method is called repeatedly without a change in return value.
This is not to say that every time a case is encountered where an expensive method is called repeatedly without a change in return value that it’s automatically a good use case for memoization. Memoization (just like all performance techniques) is not without a cost, as we’ll see shortly. Memoization should only be used when the benefit exceeds the cost.
As with all performance techniques, memoization should only be used a) when you’re sure it’s needed and b) when you have a plan to measure the before/after performance effect. Otherwise what you’re doing is not performance optimization, you’re just randomly adding code (i.e. incurring costs) without knowing whether the costs you’re incurring are actually providing a benefit.
The costs of memoization
The main cost of memoization is that you risk introducing subtle bugs. Here are a couple examples of the kinds of bugs to which memoization is susceptible.
Instance confusion
Memoization works if and only if the return value will always be the same. Let’s say, for example, that you have a loop that makes use of an object which has a memoized method. Maybe this loop uses the same object instance in every single iteration, but you’re under the mistaken belief that a fresh instance is used for each iteration.
In this case the value from the object in the first iteration will be correct, but all the subsequent iterations risk being incorrect because they’ll use the value from the first iteration rather than getting their own fresh values.
If this type of bug sounds contrived, it’s not. It comes from a real example of a bug I once caused myself!
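Here’s a condensed sketch of the shape of that bug (the DiscountedProduct class is made up for illustration):

class DiscountedProduct
  attr_accessor :discount

  def price
    @price ||= 500 - discount # memoized on the first call
  end
end

product = DiscountedProduct.new

# We might believe each iteration works with a fresh instance,
# but it's actually the same instance every time.
[10, 20, 30].each do |discount|
  product.discount = discount
  puts product.price
end

# Prints 490 three times. The second and third iterations reuse the
# value memoized on the first iteration instead of recalculating.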
Nil return values
In the example above, if expensive_calculation had returned nil, then the value wouldn’t get memoized: @memoized_price would be nil on every call, and since nil is falsy, ||= would invoke the expensive calculation every single time.
The risk of such a bug is probably low, and the consequences of the bug are probably small in most cases, but it’s a good category of bug to be aware of. An alternative that isn’t susceptible to the nil-is-falsy bug is to check defined?(@memoized_price) rather than relying on ||= lazy initialization.
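Using the memoized_price example from earlier, the defined?-based version might look like this:

def memoized_price
  # defined? checks whether the instance variable has been set at all,
  # so even a memoized nil is remembered instead of recalculated.
  return @memoized_price if defined?(@memoized_price)

  @memoized_price = expensive_calculation
end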
Understandability
Last but certainly not least, code that involves memoization is harder to follow than code that doesn’t. This is probably the biggest cost of memoization. Code that’s hard to understand is hard to change. Code that’s hard to understand provides a place for bugs to hide.
Prudence pays off
Because memoization isn’t free, it’s not a good idea to habitually add memoization to methods as a default policy. Instead, add memoization on a case-by-case basis when it’s clearly justified.
Takeaways
Memoization is a performance optimization technique that prevents wasteful repeated calls to an expensive operation when the return value is the same each time.
Memoization should only be added when you’re sure it’s needed and you have a plan to verify the performance difference.
A good use case for memoization is when an expensive method is called repeatedly without a change in return value.
Memoization isn’t free. It carries with it the risk of subtle bugs. Therefore, don’t apply memoization indiscriminately. Only use it in cases where there’s a clear benefit.
When writing tests, or reading other people’s tests, it can be helpful to understand that tests are often structured in four distinct phases.
These phases are:
Setup
Exercise
Assertion
Teardown
Let’s illustrate these four phases using an example.
Test phase example
Let’s say we have an application that has a list of users that can receive messages. Only active users are allowed to receive messages. So, we need to assert that when a user is inactive, that user can’t receive messages.
Here’s how this test might go:
Create a User record (setup)
Set the user’s “active” status to false (exercise)
Assert that the user is not “messageable” (assertion)
Delete the User record we created in step 1 (teardown)
In parallel with this example, I’ll also use another example which is somewhat silly but also less abstract. Let’s imagine we’re designing a sharp-shooting robot that can fire a bow and accurately hit a target with an arrow. In order to test our robot’s design, we might:
Get a fresh prototype of the robot from the machine shop (setup)
Allow the robot to fire an arrow (exercise)
Look at the target to make sure it was hit by the arrow (assertion)
Return the prototype to the machine shop for disassembly (teardown)
Now let’s take a look at each step in more detail.
The purpose of each test phase
Setup
The setup phase typically creates all the data that’s needed in order for the test to operate. (There are other things that could conceivably happen during a setup phase, but for our current purposes we can think of the setup phase’s role as being to put data in place.) In our case, the creation of the User record is all that’s involved in the setup step, although more complicated tests could of course create any number of database records and potentially establish relationships among them.
Exercise
The exercise phase walks through the motions of the feature we want to test. With our robot example, the exercise phase is when the robot fires the arrow. With our messaging example, the exercise phase is when the user gets put in an inactive state.
Side note: the distinction between setup and exercise may seem blurry, and indeed it sometimes is, especially in low-level tests like our current example. If someone were to argue that setting the user to inactive should actually be part of the setup, I’m not sure how I’d refute them. To help with the distinction in this case, imagine if we instead were writing an integration test that actually opened up a browser and simulated clicks. For this test, our setup would be the same (create a user record) but our exercise might be different. We might visit a settings page, uncheck an “active” checkbox, then save the form.
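That browser-based exercise might look something like this in Capybara (the path and labels here are hypothetical):

visit edit_user_settings_path(user)
uncheck "Active"
click_on "Save"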
Assertion
The assertion phase is basically what all the other phases exist in support of. The assertion is the actual test part of the test, the thing that determines whether the test passes or fails.
Teardown
Each test needs to clean up after itself. If it didn’t, then each test would potentially pollute the world in which the test is running and affect the outcome of later tests, making the tests non-deterministic. We don’t want this. We want deterministic tests, i.e. tests that behave the same exact way every single time no matter what. The only thing that should make a test go from passing to failing or vice-versa is if the behavior that the test tests changes.
In reality, Rails tests tend not to have an explicit teardown step. The main pollutant we have to worry about with our tests is database data that gets left behind. RSpec is capable of taking care of this problem for us by running each test in a database transaction. The transaction starts before each test is run and is rolled back after the test finishes. So really, the data never gets permanently persisted in the first place. So although I’m mentioning the teardown step here for completeness’ sake, you’re unlikely to see it in the wild.
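In a typical rspec-rails setup, this behavior comes down to a single configuration setting:

RSpec.configure do |config|
  # Wrap each example in a database transaction that gets
  # rolled back when the example finishes.
  config.use_transactional_fixtures = true
end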
A concrete example
See if you can identify the phases in the following RSpec test.
RSpec.describe User do
  let!(:user) { User.create!(email: 'test@example.com') }

  describe '#messageable?' do
    context 'is inactive' do
      it 'is false' do
        user.update!(active: false)
        expect(user.messageable?).to be false
        user.destroy!
      end
    end
  end
end
Here’s my annotated version.
RSpec.describe User do
  let!(:user) { User.create!(email: 'test@example.com') } # setup

  describe '#messageable?' do
    context 'is inactive' do
      it 'is false' do
        user.update!(active: false)           # exercise
        expect(user.messageable?).to be false # assertion
        user.destroy!                         # teardown
      end
    end
  end
end
Takeaway
Being familiar with the four phases of a test can help you overcome the writer’s block that testers sometimes feel when staring at a blank editor. “Write the setup” is an easier job than “write the whole test”.
Understanding the four phases of a test can also help make it easier to parse the meaning of existing tests.
When asked to what he attributes his success in life, Winston Churchill purportedly said, “Economy of effort. Never stand up when you can sit down, and never sit down when you can lie down.”
My philosophy with programming is basically the same. Here’s why.
Finite mental energy
Sometimes people say they wish there were more hours in the day. I see it a little differently. It seems to me that the scarce resource isn’t time but energy. I personally run out of energy (or willpower or however you want to put it) well before I run out of time in the day.
Most days for me there comes a certain time where I’m basically spent for the day and I don’t have much more work in me. (When I say “work” I mean work of all kinds, not just “work work”.) Sometimes that used-up point comes before the end of the workday. Sometimes it comes after. But that point almost always arrives before I’m ready to go to bed.
The way I see it, I get a finite supply of mental energy in the morning. The harder I think during the day, the faster the energy gets depleted, and the sooner it runs out. It would really be a pity if I were to waste my mental energy on trivialities and run out of energy after 3 hours of working instead of 8 hours of working. So I try to conserve brainpower as much as possible.
The ways I conserve brainpower
Below are some examples of wasteful ways of working alongside the more economical version.
Wasteful way: Keep all your to-dos in your head
Economical way: Keep a written to-do list

Wasteful way: Perform work in units of large, fuzzily-defined tasks
Economical way: Perform work in units of small, crisply-defined tasks

Wasteful way: Perform work (and deployments) in large batches
Economical way: Perform work serially, deploying each small change as soon as it’s finished
Many programmers make Git commits in a haphazard way that makes it easy to make mistakes and commit things they didn’t mean to.
Here’s a six-step process that I use every time I make a Git commit.
1. Make sure I have a clean working state
In Git terminology, “working state” refers to what you get when you run git status.
I always run git status before I start working on a feature. Otherwise I might start working, only to discover later that the work I’ve done is mixed in with some other, unrelated changes from earlier. Then I’ll have to fix my mistake. It’s cheaper just to check my working state in the beginning to make sure it’s clean.
2. Make the change
This step is of course actually the biggest, most complicated, and most time-consuming, but the content of this step is outside the scope of this post. What I will say is that I perform this step using feedback loops.
3. Run git status
When I think I’m finished with my change, I’ll run a git status. This will help me compare what I think I’m about to commit with what I’m actually about to commit. Those two things aren’t always the same thing.
4. Run git add .
Running git add . will stage all the current changes (including untracked files) to be committed.
5. Run git diff --staged
Running git diff --staged will show a line-by-line diff of everything that’s staged for commit. Just like the step where I ran git status, this step is to help compare what I think I’m about to commit with what I’m actually about to commit—this time at a line-by-line level rather than a file-by-file level.
6. Commit
Finally, I make the commit, using an appropriately descriptive commit message.
The reason I say appropriately descriptive commit message is because, in my opinion, different types of changes call for different types of commit messages. If the content of the commit makes the idea behind the commit blindingly obvious, then a vague commit message is totally fine. If the idea behind the commit can’t easily be inferred from the code that was changed, a more descriptive commit is called for. No sense in wasting brainpower on writing a highly descriptive commit message when none is called for.
Conclusion
By using this Git commit process, you can code faster and with fewer mistakes, while using up less brainpower.
It’s easy to find information online regarding how to use ViewComponent. What’s not as easy to find is an explanation of why a person might want to use ViewComponent or what problem ViewComponent solves.
Here’s the problem that ViewComponent solves for me.
Cohesion
If you just stuff all your domain logic into Active Record models then the Active Record models grow too large and lose cohesion.
A model loses cohesion when its contents no longer relate to the same end purpose. Maybe there are a few methods that support feature A, a few methods that support feature B, and so on. The question “what idea does this model represent?” can’t be answered. The reason the question can’t be answered is because the model doesn’t represent just one idea, it represents a heterogeneous mix of ideas.
Because cohesive things are easier to understand than incohesive things, I try to organize my code into objects (and other structures) that have cohesion.
Achieving cohesion
There are two main ways that I try to achieve cohesion in my Rails apps.
POROs
The first way, the way that I use the most, is by organizing my code into plain old Ruby objects (POROs). For example, in the application I maintain at work, I have objects called AppointmentBalance, ChargeBalance, and InsuranceBalance which are responsible for the jobs of calculating the balances for various amounts that are owed.
I’m not using any fancy or new-fangled techniques in my POROs. I’m just using the principles of object-oriented programming. (If you’re new to OOP, I might recommend Steve McConnell’s Code Complete as a decent starting point.)
Regarding where I put my POROs, I just put them in app/models. As far as I’m concerned, PORO models are models every bit as much as Active Record models are.
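As a rough sketch (the details here are hypothetical, not the actual code from my app), such a PORO might look like this:

# app/models/appointment_balance.rb
class AppointmentBalance
  def initialize(appointment)
    @appointment = appointment
  end

  # The one cohesive job of this object: say how much
  # is still owed for the appointment.
  def amount_owed
    @appointment.charges.sum(&:amount) - @appointment.payments.sum(&:amount)
  end
end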
Concerns/mixins
Sometimes I have a piece of code which doesn’t quite fit in with any existing model, but it also doesn’t quite make sense as its own standalone model.
In these cases I’ll often use a concern or a mixin.
But even though POROs and concerns/mixins can go a really long way to give structure to my Rails apps, they can’t adequately cover everything.
Homeless code
I’ve found that I’m able to keep the vast majority of my code out of controllers and views. Most of my Rails apps’ code lives in the model.
But there’s still a good amount of code for which I can’t find a good home in the model. That tends to be view-related code. View-related code is often very fine-grained and detailed. It’s also often tightly coupled (at least from a conceptual standpoint) to the DOM, to the HTML, or to the view in some other way.
There are certain places where this code could go. None of them is great. Here are the options, as I see them, and why each is less than perfect.
The view
Perhaps the most obvious place to try to put view-related code is in the view itself. Most of the time this works out great. But when the view-related code is sufficiently complicated or voluminous, it creates a distraction. It creates a mixture of levels of abstraction, which makes the code harder to understand.
The controller
The controller is also not a great home for this view-related code. The problem of mixing levels of abstraction is still there. In addition, putting view-related code in a controller mixes concerns, which makes the controller code harder to understand.
The model
Another poorly-suited home for this view-related code is the model. There are two options, both not great.
The first option is to put the view-related code into some existing model. This option isn’t great because it pollutes the model with peripheral details, creates a potential mixture of concerns and mixture of levels of abstraction, and makes the model lose cohesion.
The other option is to create a new, standalone model just for the view-related code. This is usually better than stuffing it into an existing model but it’s still not great. Now the view-related code and the view itself are at a distance from each other. Plus it creates a mixture of abstractions at a macro level because now the code in app/models contains view-related code.
Helpers
Lastly, one possible home for non-trivial view-related code is a helper. This can actually be a perfectly good solution sometimes. I use helpers a fair amount. But sometimes there are still problems.
Sometimes the view-related code is sufficiently complicated to require multiple methods. If I put these methods into a helper which is also home to other concerns, then we have a cohesion problem, and things get confusing. In those cases maybe I can put the view-related code into its own new helper, and maybe that’s fine. But sometimes that’s a lost opportunity because what I really want is a concept with meaning, and helpers (with their -Helper suffix) aren’t great for creating concepts with meaning.
No good home
The result is that when I have non-trivial view-related code, it doesn’t have a good home. Instead, my view-related code has to “stay with friends”. It’s an uncomfortable arrangement. The “friends” (controllers, models, etc.) wish that the view-related code would move out and get a place of its own, but it doesn’t have a place to go.
How ViewComponent provides a home for view-related code
A ViewComponent consists of two entities: 1) an ERB file and 2) a Ruby object. These two files share a name (e.g. save_button_component.html.erb and save_button_component.rb) and sit at a sibling level to each other in the filesystem. This makes it easy to see that they’re closely related to one another.
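For a rough sketch of the shape this takes (the save button details here are invented for illustration, not taken from a real app):

# app/components/save_button_component.rb
class SaveButtonComponent < ViewComponent::Base
  def initialize(label: "Save", disabled: false)
    @label = label
    @disabled = disabled
  end
end

<%# app/components/save_button_component.html.erb %>
<button class="save-button" <%= "disabled" if @disabled %>>
  <%= @label %>
</button>

A view can then render the component with <%= render(SaveButtonComponent.new(label: "Save changes")) %>.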
Ever since I started using ViewComponent I’ve had a much easier time working with views that have non-trivial logic. In those cases I just create a ViewComponent and put the logic in the ViewComponent.
Now my poor homeless view-related code can move into a nice, spacious, tidy new house that it gets all to its own. And just as important, it can get out of its friends’ hair.
And just in case you think this sounds like a “silver bullet” situation, it’s not. The reason is that ViewComponents are a specific solution to a specific problem. I don’t use ViewComponent for everything; I only use ViewComponent when a view has non-trivial logic associated with it that doesn’t have any other good place to live.
Takeaways
If you just stuff all your domain logic into Active Record models, your Active Record models will soon lose cohesion.
In my Rails apps, I mainly achieve cohesion through a mix of POROs and concerns/mixins (but mostly POROs).
Among the available options (views, controllers, models and helpers) it’s hard to find a good place to put non-trivial view-related code.
ViewComponent provides (in my mind) a reasonable place to put non-trivial view-related code.
Side note: if you’re interested in learning more about ViewComponent, you can listen to my podcast conversation with Joel Hawksley, who created the tool. I also gave a conference talk on ViewComponent.