Examples of pointless types of RSpec tests

by Jason Swett

A reader of mine recently shared with me a GitHub gist called rspec_model_testing_template.rb. He also said to me, “I would like your opinion on the value of the different tests that are specified in the gist. Which ones are necessary and which ones aren’t?”

In this post I’d like to point out which types of RSpec tests I think are pointless to write.

Testing the presence of associations

The gist includes a number of association tests: shoulda-matchers-style one-liners that assert a model declares a given association.

Unless I’m crazy, these sorts of tests don’t actually do anything. A test like it { expect(classroom).to have_many(:students) } verifies that classroom.rb contains the code has_many :students, but that’s all the value the test provides.

I’ve heard these tests referred to as “tautological tests”. I had to look up that word when I first heard it, so here’s a definition for your convenience: “Tautology is useless restatement, or saying the same thing twice using different words.” That’s exactly what these tests are: a useless restatement.

What does it mean for a classroom to have many students and what sorts of capabilities does the existence of that association give us? Whatever those capabilities are is what we should be testing. For example, maybe we want to calculate the average student GPA per classroom. The ability to do classroom.average_student_gpa would be a valuable thing to write a test for.

If we test the behaviors that has_many :students enables, then we don’t have to directly test the :students association at all because we’re already indirectly testing the association by testing the behaviors that depend on it. For example, our test for the classroom.average_student_gpa method would break if the line has_many :students were taken away.
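To make that concrete, here is a minimal sketch in plain Ruby (no Rails; the Student and Classroom classes and the GPA figures are hypothetical stand-ins). The test-worthy thing is the average, which would break anyway if the underlying association disappeared.

```ruby
# Plain-Ruby sketch (no Rails): the class names and GPA figures are
# hypothetical. The test-worthy behavior is the average itself, which
# implicitly depends on the classroom-students relationship.
Student = Struct.new(:gpa)

class Classroom
  def initialize(students)
    @students = students # stand-in for what has_many :students provides
  end

  def average_student_gpa
    return 0.0 if @students.empty?
    @students.sum(&:gpa) / @students.size
  end
end

classroom = Classroom.new([Student.new(3.0), Student.new(4.0)])
classroom.average_student_gpa # => 3.5
```

If the collection behind average_student_gpa were removed, an assertion on the average would fail on its own, with no separate association test needed.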

How do you know the difference between a tautological test and a genuinely valuable test? Here’s an analogy. What would you do if you wanted to test the brakes on your bike? Would you visually verify that your bike has brake components attached to it, and therefore logically conclude that your bike has working brakes? No, because that conclusion is logically invalid. The way to test your brakes is to actually try to use your brakes to make your bike stop. In other words, you wouldn’t test for the presence of brakes, you would test for the capability that your brakes enable.

Testing that a model responds to certain methods

There’s negligible value in simply testing that a model responds to a method. Better to test that that method does the right thing.
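As a sketch (plain Ruby, with a hypothetical Invoice class): a respond_to-style check is satisfied by any implementation, correct or not, so the assertion worth writing is on the return value.

```ruby
# Sketch with a hypothetical Invoice class: respond_to? is satisfied by
# any implementation of total, correct or broken, so assert on the value.
class Invoice
  def initialize(line_item_prices)
    @line_item_prices = line_item_prices
  end

  def total
    @line_item_prices.sum
  end
end

invoice = Invoice.new([10, 20])
invoice.respond_to?(:total) # => true, even if total returned garbage
invoice.total               # => 30, the check that actually matters
```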

Testing for the presence of callbacks

Don’t verify that the callback got called. Verify that you got the result you expected the callback to produce.
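Here is a minimal illustration, with no Rails involved; the email-normalizing “callback” is a hypothetical example. The point is to assert on the state the callback leaves behind rather than on whether it fired.

```ruby
# No-Rails sketch: save applies what a before_save callback would do
# (normalizing the email). The test should look at the resulting state,
# not at whether the hook fired.
class User
  attr_reader :email

  def initialize(email)
    @email = email
  end

  def save
    @email = @email.strip.downcase # the callback's observable effect
    true
  end
end

user = User.new("  Alice@Example.COM ")
user.save
user.email # => "alice@example.com"
```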

Testing for database columns and indexes

I had actually never seen this before and didn’t know you could do it. I find it pointless for the same exact reasons that testing associations is pointless. Don’t test that the database has an index, test that the page loads sufficiently quickly.

Tips for writing valuable RSpec tests

Here’s how I tend to write model specs: for every method I create on a model, I try to poke at that method from every possible angle and make sure it returns the desired result.

For example, I recently added a feature in an application that made it impossible to schedule an appointment for a patient who has been inactivated. So I wrote three test cases: one where the patient is active (expect success), one where the patient is inactive (expect an error to get added to the object), and one where the patient was missing altogether (expect a different error on the object).
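Sketched in plain Ruby (Appointment, Patient, and the error messages are all hypothetical stand-ins for the real application code), the three cases might look like:

```ruby
# Hypothetical stand-ins for the app's real code: an Appointment that
# refuses inactive or missing patients, with errors collected on the
# object, covering the three cases described above.
Patient = Struct.new(:active)

class Appointment
  attr_reader :errors

  def initialize(patient)
    @patient = patient
    @errors = []
  end

  def schedulable?
    if @patient.nil?
      @errors << "patient is missing"
    elsif !@patient.active
      @errors << "patient is inactive"
    end
    @errors.empty?
  end
end

Appointment.new(Patient.new(true)).schedulable?  # => true  (active patient)
Appointment.new(Patient.new(false)).schedulable? # => false (inactive)
appointment = Appointment.new(nil)
appointment.schedulable?                         # => false (missing)
appointment.errors                               # => ["patient is missing"]
```

Each case pokes the same method from a different angle and asserts on the outcome, not on the implementation.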

My methodology for writing feature specs with Capybara is a little more involved. I describe it in detail here.

16 thoughts on “Examples of pointless types of RSpec tests”

  1. Drew Cooper

    It’s not about necessity or not. Someone worked really hard to make a behavioral test framework not test behaviors. This is something DHH could have used in his long rants against testing. He still would have been wrong but these are entertaining examples of bad active*-looking test code.

    There ought to be a tool to fail them somehow. Maybe something like blacklisting test cases that don’t increase code coverage. Something like mutation testing but not.

  2. Oluwasegun

    Thanks for the clarification.

    Does that mean that all the other tests you didn’t point out, most especially “ActiveModel validations”, are worth testing?

    1. Jason Swett (post author)

      Well, there’s not much left. I do think validations are worth testing. On the surface it seems like these tests are tautological too. I think the key difference with validation tests is that validations aren’t a means to an end the way associations are a means to an end. Validations are an end in themselves. The guiding principle is to test the behavior, not the implementation.

  3. TC

    Thanks for the post. I sometimes ponder what tests I have lurking around that are essentially useless.

    However, I personally like to use shoulda_matchers’ belongs_to, has_many, has_db_index, etc. family of tests during TDD, though I often go on to remove them once the feature is finished and I have other tests which implicitly or explicitly cover the same thing.

    For example, when building out some models with associations, adding a has_many expectation helps me to a) think about the relationship more deeply, and b) test that I have implemented it properly – in red/green fashion. Adding a has_many to your model is not a big deal in terms of typing, but is of course a significant step for young models (and their migrations) to take. Using these expectations gives me confidence that I won’t be scratching my head in half an hour because I have missed a relationship or got the foreign key the wrong way round in a migration :0)
    Should they stay in the specs longer-term? Probably not. If a model’s has_many declaration was accidentally lost e.g. in a bodged rebase, other tests would hopefully break, letting you know.

    With regard to has_db_index, I find it very useful in TDD. If I have identified a column that is missing an index, I write a test to prove to myself that the column or columns really aren’t covered, then create the migration to make the test pass.
    Quite useful. And if the index is key to the app then I would definitely keep this test in the suite long-term. Perhaps not as effective as actually testing real-time performance as you suggest (which is a good idea of course, and something I don’t do as much as I should), but much cheaper, and it does not affect test suite times adversely. If that index is at some point inadvertently removed by someone who does not appreciate its importance, your test will catch it. What’s not to like? :0)

    So I think in general I perhaps draw more of a distinction between ephemeral tests like the shoulda_matchers ones you mention, which I find helpful in TDD and am grateful for, and more ‘serious’ tests that deserve a longer shelf life in your code.

  4. Guillermo Siliceo

    I’ve found that when testing a concern, it is valuable to ensure the model including the concern implements certain methods as a way to ensure an expected API, at least before the developer implements all that’s needed for the concern.

    Think of a payable concern: you add it to the Order model, and then in the order_spec you add the shared_examples ‘behaves like a payable’. In that shared example you have
    it { expect(factory_instance).to respond_to(:cost) }
    so the developer who adds the concern to other models knows which methods they have to implement.

  5. Bob

    Great post. I agree with it all except ‘Testing the presence of associations’. Implicit understanding may work for in-house teams, but projects for clients need explicit testing.

    You wouldn’t just ride a bike to test the brakes, without first looking to see if it had any!

    Testing behaviour is absolutely better than testing for method calls, but:

    (a) that requires a developer mindset to ‘know’ the relationship is implicit within the test. When your client wants to tick their ‘a classroom can have many students’ acceptance criterion, a simple association test lets them tick the box quickly and move on, without you explaining the implicitness of the other tests.

    (b) associations often have conditions, class names, dependencies, counter caches, etc. A direct test of the association’s existence and structure is better than assuming each part is implicitly covered in other behaviour.

    (c) it’s common for classes to require associations that they themselves make no direct use of, so there is no implicit testing. For example, other than belonging to a gallery, the Painting class may never refer to the gallery within any methods.

    (d) in TDD you write the tests first; then when they all pass, you know the class is complete. No developer will take the extra time to ensure all associations are implicitly tested. A green board – move on!

    (e) when a developer receives the class specification, it clearly details the expected associations. It doesn’t leave the developer to implicitly guess them from the required functionality of its methods.

    Sorry to harp on, but I personally think explicitly testing for the existence and structure of associations is a key part of any unit test suite.

    1. Jason Swett (post author)

      Thanks for the kind words. I agree with some of what you say and disagree with some.

      My argument against your argument of “it’s common for classes to require associations, that they themselves make no direct use of, so there is no implicit testing” is that it would be more useful to write integration tests (meaning either feature specs or multi-model model specs) that ensure the proper behavior of the interactions of various classes through their associations.

  6. esteban

    A tautology does not mean redundancy. It means that certain things that are true stay true. That said, those ActiveRecord expectations could be useful depending on the context. What if the model is created via metaprogramming? What if you are doing some BDD and you need to assure that the initial structure stays the same after other operations?

    I agree that there are certain tests that do not add any value (like testing index presence), but that does not mean it’s the rule. Sometimes we just need the context in order to judge the importance of a test.

  7. Denny Mueller

    Sorry, but testing the presence of associations is important when doing TDD. I don’t manually check if the associations are there. Making sure that the specs go green while writing the model is the only important part.
    It also ensures that in the future no one accidentally removes or changes the options of an association without failing on the CI.

    All the other “useless” specs are not really useless but redundant. E.g. for it { expect(factory_instance).to respond_to(:public_method_name) }, you would test this by describing the method and the outcome, which obviously also ensures that the class responds to the method.
    The same goes for callbacks. Just test that something gets changed on the object when the hook gets triggered and you are good to go.

    I personally prefer to have too many specs than too few. I don’t want to go back into a world where you test your code by manually running it.

  8. Toby

    There are a lot of good things to think about here, but I would have to disagree with you on a couple of points, though perhaps they’re unique to longer-running organizations.

    We have a pre-existing database that our Rails app has connected to. As such, there have been developers that have set up models to point to tables, and those tables have ids to relate to other tables. So when we navigate the data in a database GUI, the data would seem to be there. Think of a Car having many Tires. It may even be that a Car has many Tires in the model. However, the Tire model was never set up properly, so when we do my_camry.tires, an error is thrown. So, sure, some of these tests may be largely pointless, but they’re also fast (to run and write) and ensure that all the models are set up properly.

    The other thing is checking to verify that a class responds to certain methods. Let’s say you have an application that takes advantage of duck typing. We definitely want to verify (in those classes’ tests) that the behavior is correct. However, we should also verify that the classes we want to use in our duck-typed implementation respond to the right messages. Things like aliases, inheritance (direct, or via ‘extends’), or using ‘includes’ are all ways that we bring methods in, and “expect(x).to respond_to(:message)” is a simple way to verify that the methods are there.

    The kinds of tests I would say are largely useless are the ones that say, “Expect the result to be a certain kind of object.” There have been a lot of times that I’ve seen a failure handler log an error and then return the same kind of object as is in the success path of that method. _This_ is a useless test. I have no idea if the code worked or not. Instead, we should be testing the actual values that come out of it—like a functional (as in functional programming) test would do: “Given 2 and 1, #sum(x, y) returns 3.”

    There are other kinds of useless tests, and I think this is always a worthwhile question. “What is it that we are REALLY testing here?” Great stuff to think about and I appreciate your write-up. Always a great discussion to have with yourself and the rest of your team when writing tests!

  9. Edward Ellis

    You say “Don’t test that the database has an index, test that the page loads sufficiently quickly.” That raises the question of what “sufficiently quickly” means. Are you implying that if you don’t have a performance specification you shouldn’t index your tables, or that you should make up arbitrary performance specifications and hard-code them into your tests?
    Sometimes a very simple test is quite sufficient to ensure that the code has not regressed.

    1. Jason Swett (post author)

      “Sufficiently quickly” means whatever you decide is fast enough. No, I’m not implying that you should hard-code arbitrary performance specifications into your tests.

      I think table indexes are a good idea but I don’t see much value in adding tests to check whether those indexes exist any more than I see value in adding a test to check whether a table itself exists. The guiding principle is that the thing to test for is the desired behavior, not the implementation.

  10. Michael Ball

    Taken to the extreme, “test the behavior, not the implementation” leads only to integration tests.
    Unit tests are mostly implementation tests, and testing the behavior of a smaller component is equivalent to testing the implementation of a “bigger” or encapsulating component.

    I absolutely agree that most tautological tests provide little value, but it’s not 0. Typing things twice reveals typos, and if some lines get deleted accidentally, then a test specific to a missing has_many call would potentially reveal the issue more clearly than a failing integration test might.

    Are these valuable enough for me to write those tests? Nah, not really. Autocomplete and code review really help, and I think I could figure out a missing has_many pretty quickly, but hey, that all may depend on your team.

    It’s the same thing for indexes. I don’t find they’re worth testing because strong code review covers that. Plus if you have other tools like schema.rb and PgHero you’ll have additional checks. If I’m always using AR migrations, then I don’t think I need to worry too much. However, if I’m working in some setup where I might have less control over the DB, then the test can be useful.

    But here’s where I think the implementation is useful to test, if you’re going to test it: the implementation is basically a guarantee about what’s in your DB. With perf testing, it can be really hard to determine what the appropriate baseline for “good enough” is, especially if your test suite runs in many different environments.
    Guaranteeing you have an index says “this is probably good enough for performance” or maybe “this is my backstop uniqueness check” while being easy to write and a reasonable proxy for other metrics.

    Every test strikes a balance between cost to write, cost to run, trust it gives you in the system, and information it gives you when it fails. To me many of these tests are pretty low on all counts – easy to write and run but not a lot of trust. If they’re your only tests, that’s no good, but if they’re part of a bigger suite then I think they’re mostly harmless, definitely not very valuable, but not as dangerous as super brittle tests either.

