One of the most common areas of feedback in code review is test coverage. Software engineers usually strive for 100% coverage, and while non-trivial projects rarely achieve that ideal target, it is generally considered best practice to come as close as possible.
There's good reason for this: countless other blog posts detail the value of unit, integration, and end-to-end testing. However, despite these often dogmatic approaches to testing and test coverage, I rarely see developers evangelizing the quality of the tests themselves.
In my experience, tests are often not given the same level of scrutiny as source code during code review. Additionally, what relatively little attention test code does receive tends to focus on how the tests affect coverage, not on whether the tests themselves are correct.
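To make that concrete, here is a minimal sketch (in Python with unittest; `clamp` is a hypothetical function standing in for real code) of a test that looks great in a coverage report while verifying nothing:

```python
import unittest


def clamp(value: int, low: int, high: int) -> int:
    # Hypothetical function under test.
    return max(low, min(value, high))


class TestClamp(unittest.TestCase):
    def test_clamp(self):
        # Every line of clamp() executes, so the coverage report
        # shows 100% for it -- but with no assertions, this test
        # passes no matter what clamp() returns.
        clamp(5, 0, 10)   # in range
        clamp(-5, 0, 10)  # below range
        clamp(15, 0, 10)  # above range


if __name__ == "__main__":
    unittest.main()
```

A reviewer focused on the coverage number would wave this through; a reviewer focused on correctness would immediately ask where the assertions are.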
The question developers need to ask here is "Who watches the watchmen?" Software engineers rely on tests to guarantee the correctness of their source code, yet they rarely put the same effort into the correctness of their tests. In my opinion, bad or misleading tests are even more dangerous than no tests at all, because they give developers confidence in their changes when they should have none.
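As an illustration, consider a misleading test next to a meaningful one. This is again a hypothetical sketch: `apply_discount` stands in for real production code, and the bug-masking pattern is the point, not the function itself:

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    # Hypothetical function under test -- imagine its formula
    # contains a bug.
    return price * (1 - percent / 100)


class TestApplyDiscount(unittest.TestCase):
    def test_misleading(self):
        # Misleading: the expected value is recomputed with the same
        # formula as the implementation, so the test passes even if
        # the formula itself is wrong.
        price, percent = 100.0, 20.0
        self.assertEqual(apply_discount(price, percent),
                         price * (1 - percent / 100))

    def test_meaningful(self):
        # Meaningful: the expected value is known independently of
        # the implementation, so a breaking change to the formula
        # fails the test.
        self.assertAlmostEqual(apply_discount(100.0, 20.0), 80.0)


if __name__ == "__main__":
    unittest.main()
```

Both tests pass today and produce identical coverage, but only the second would catch a regression in the formula.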
There are three main points that I assess when evaluating test code:
Nothing feels worse than having every test in a project pass, only to discover later that a breaking change slipped through uncaught. Unfortunately, this happens far more often than it should. The only way to prevent it is to recognize that testing has a purpose beyond a coverage percentage, and to start giving test code the consideration it deserves.
~ MAD