One of the most common areas of feedback in code review is test coverage. "Can we add additional tests here?" and "Can we test for this edge case?" are questions just about every software engineer has left on a pull request, and most developers have been on the receiving end of these comments as well.
There's good reason for this: countless blog posts detail the value of unit, integration, and end-to-end testing. However, despite these often dogmatic approaches to testing and test coverage, I rarely see developers evangelizing the quality of the tests themselves.
In my experience, tests are rarely given the same level of scrutiny during code review as source code. And what little attention test code does receive tends to focus on how the tests affect coverage, not on whether the tests themselves are correct.
The question developers need to ask here is "Who watches the watchmen?" Software engineers rely on tests to guarantee the correctness of their code, yet they rarely put the same effort into the correctness of their tests. This is scary to me, because in my opinion bad or misleading tests are even more dangerous than no tests at all: they give developers confidence in their changes when they should have none.
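To make this concrete, here's a hypothetical sketch of a misleading test. The function and its test are invented for illustration: the test passes, but only because the chosen inputs happen to mask the bug, so it actively vouches for broken code.

```python
def apply_discount(price, percent):
    # Bug: the percentage is subtracted as an absolute amount
    # instead of being applied as a fraction of the price.
    return price - percent

def test_apply_discount():
    # Misleading test: with price=100 and percent=10, the buggy
    # absolute subtraction (100 - 10) and the intended percentage
    # discount (100 * 0.90) both produce 90, so this passes.
    assert apply_discount(100, 10) == 90

test_apply_discount()

# The bug only surfaces with inputs the test never tries:
# apply_discount(50, 10) returns 40, but a 10% discount should give 45.
```

A reviewer who only checks that the test exists and passes would approve this change with full confidence. A reviewer who questions the test's inputs would catch it immediately.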
There are three main points that I assess when evaluating test code:
Nothing feels worse than having every test in a project pass, only to discover later that a breaking change slipped through uncaught. Unfortunately, this happens far more often than it should. The only way to prevent it is to be cognizant that testing has a purpose beyond the coverage percentage, and to start giving test code the consideration it deserves.
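Coverage and verification are not the same thing, and a small invented example shows why. Both tests below execute every line of `running_total` (a hypothetical function, named here for illustration), so a coverage report scores them identically, but only one of them would ever catch a regression.

```python
def running_total(values):
    # Returns the cumulative sums of the input sequence.
    totals = []
    current = 0
    for v in values:
        current += v
        totals.append(current)
    return totals

def test_running_total_weak():
    # Executes every line (100% coverage) but asserts nothing,
    # so virtually any change to running_total() still "passes".
    running_total([1, 2, 3])

def test_running_total_strong():
    # Same coverage, but pins down the expected behavior,
    # including the empty-input edge case.
    assert running_total([1, 2, 3]) == [1, 3, 6]
    assert running_total([]) == []

test_running_total_weak()
test_running_total_strong()
```

The weak test inflates the coverage number while guaranteeing nothing; the strong test costs the same coverage and actually watches the watchmen.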
~ MAD