Different developers have different opinions about test coverage. Some engineering organizations not only measure test coverage but have rules around it. Other developers think test coverage is basically BS and don’t measure it at all.
I’m somewhere in between. I think test coverage is a useful metric but only in a very approximate and limited sort of way.
If I encounter two codebases, one with 10% coverage and another with 90% coverage, I can probably safely conclude that the latter codebase has a healthier test suite. But if the difference is between 90% and 100%, I'm not convinced that means much.
I personally measure test coverage on my projects, but I don’t try to optimize for it. Instead, I make testing a habit and let my habitual coding style be my guiding force instead of the test coverage metrics.
If you’re curious what level of test coverage my normal workflow naturally results in, I just checked the main project I’ve been working on for the last year or so, and the coverage level is 96.62%, according to simplecov.

I feel good about that number, although more important to me than the test coverage percentage is what it feels like to work with the codebase on a day-to-day basis. Are annoying regressions popping up all the time? Is new code hard to write tests for due to the surrounding code not having been written in an easily testable way? Then the codebase could probably benefit from more tests.
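For reference, simplecov measures coverage by being required at the very top of the test helper, before any application code loads. Here's a minimal sketch of what that setup typically looks like in an RSpec suite; the filter and the 90% threshold are illustrative assumptions, not settings from my project:

```ruby
# spec/spec_helper.rb
# simplecov must be required (and started) before any application code
# is loaded, or that code won't be tracked.
require 'simplecov'

SimpleCov.start do
  # Exclude the specs themselves from the coverage report.
  add_filter '/spec/'

  # Optionally fail the suite if coverage drops below a threshold.
  # (90 is an arbitrary example value, not a recommendation.)
  minimum_coverage 90
end
```

After the suite runs, simplecov writes an HTML report (by default to `coverage/index.html`) showing the overall percentage and per-file breakdowns.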