spO0q

You don't write tests for yourself

You write code, so you write tests

You may have a different version:

You write tests before you write code

While that's true, it's easy to miss the point and write tests that don't add any value or, worse, that validate an incorrect implementation.

❌ Red signs to spot

  • the tests still pass when you change the requirements
  • the tests only run locally (not in the CI/CD pipeline)
  • the app generates bugs in production while all your tests pass
  • the tests simply duplicate production code logic (see the sketch after this list)
  • the tests break with every single refactoring
  • some of your tests do not include any assertion
  • the tests are not readable and hard to maintain
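Two of those signs are easy to show in code. Here is a minimal pytest-style sketch (all names are hypothetical): a test with no assertion can never fail, and a test that duplicates the production formula validates an incorrect implementation just as happily as a correct one.

```python
def apply_discount(price: float, rate: float) -> float:
    """Hypothetical production code under test."""
    return round(price * (1 - rate), 2)

def test_discount_runs():
    # Red sign: no assertion, so the test executes the code but can never fail.
    apply_discount(100.0, 0.2)

def test_discount_duplicated_logic():
    # Red sign: re-implements the production formula, so a bug in the
    # formula is copied into the test and both stay green.
    assert apply_discount(100.0, 0.2) == round(100.0 * (1 - 0.2), 2)

def test_discount_expected_value():
    # Better: assert a hand-computed value taken from the requirements.
    assert apply_discount(100.0, 0.2) == 80.0
```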

🚨 Less obvious warning signs

  • you focus on 100% coverage instead of the business logic
  • you write flaky tests (e.g., using non-deterministic dependencies) that do not produce the same result on every run, which can make the whole test suite less trustworthy (see the sketch after this list)
  • you are testing code that cannot really fail (e.g., getters & setters)
  • you only add new tests when you could also refactor the existing ones
  • your tests are redundant, and the same logic (e.g., the same if/elseif/else branches) is tested multiple times
  • some of your unit tests should actually be integration or functional tests
  • some of your tests lack assertions and only test trivial conditions
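Flakiness is often just an uncontrolled dependency. As a sketch, assuming a hypothetical helper that picks a random item: inject the random source instead of reaching for the global one, and seed it in the test.

```python
import random

def pick_sample(items, rng=random):
    """Hypothetical helper that picks a random item."""
    return rng.choice(items)

def test_pick_sample_flaky():
    # Flaky: depends on the global RNG, so the result changes on every run.
    assert pick_sample(["a", "b", "c"]) == "a"

def test_pick_sample_deterministic():
    # Deterministic: the same fixed seed yields the same pick on every run.
    items = ["a", "b", "c"]
    first = pick_sample(items, rng=random.Random(42))
    second = pick_sample(items, rng=random.Random(42))
    assert first == second
```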

❓ Coverage as a goal

Those situations are known issues, yet dev teams keep focusing on coverage.

Theoretically, a program with high code coverage has been tested more extensively and is less likely to contain bugs than one with low coverage.

Besides, teams do need metrics, and we love growth 🫠🤌🏻💗.

However, 100% coverage is a bit unrealistic, and such a goal can introduce useless tests that give a false sense of safety.

Note that it does not mean the metric itself is useless.

Low coverage can still flag untested parts of your code that hide unwanted behaviors you should fix.

Many developers rely on the CRAP (Change Risk Anti-Patterns) index, which pushes them to lower the complexity of their code or raise its test coverage.
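As a rough sketch, the CRAP score (as defined by Alberto Savoia and Bob Evans; the helper function below is mine) combines cyclomatic complexity and coverage, so a complex method only escapes a bad score by being tested or simplified:

```python
def crap_score(complexity: int, coverage_pct: float) -> float:
    # CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m)
    return complexity ** 2 * (1 - coverage_pct / 100) ** 3 + complexity

print(crap_score(complexity=10, coverage_pct=0))    # 110.0: complex and untested
print(crap_score(complexity=10, coverage_pct=100))  # 10.0: coverage tames complexity
print(crap_score(complexity=3, coverage_pct=50))    # 4.125: simple code barely registers
```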

🥊 1..2..3..fight!

Legacy code and "non-readable" tests can discourage refactoring.

In some cases, you may have to re-implement the feature before you can actually refactor the associated tests.

The idea is not to spend a few hours polishing existing tests while the actual code is not testable.

Here are a few tips you can safely apply:

  • use patterns like AAA (Arrange, Act, Assert) to make your tests more readable and maintainable (illustrated after this list)
  • isolate code blocks to ease unit testing (e.g., the atomic approach)
  • discuss tests in code reviews
  • avoid non-deterministic tests that rely on uncontrollable factors: random number generators without fixed seeds, direct use of external dependencies, or hidden internal state
  • follow the single responsibility principle (SRP) everywhere
  • limit each test to 1 scenario
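Here is a minimal AAA sketch (hypothetical Cart and Item classes, pytest assumed), covering one scenario with one action and one observable outcome:

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    price: float

class Cart:
    def __init__(self):
        self.items: list[Item] = []

    def add(self, item: Item) -> None:
        self.items.append(item)

    def total(self, discount_rate: float = 0.0) -> float:
        subtotal = sum(item.price for item in self.items)
        return round(subtotal * (1 - discount_rate), 2)

def test_cart_total_applies_discount():
    # Arrange: set up the objects and inputs the scenario needs.
    cart = Cart()
    cart.add(Item(name="book", price=20.0))

    # Act: perform exactly one action (one scenario per test).
    total = cart.total(discount_rate=0.1)

    # Assert: check the observable outcome, not the implementation details.
    assert total == 18.0
```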

Going further

Top comments (2)

david duymelinck

I think of tests as interactive documentation.

My tips are:

  • Use mocking if there is no other way to test
  • write or think of sad path tests: thinking about what bad inputs can cause makes the code more robust (see the sketch below)
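A minimal sad path sketch of that second tip (hypothetical parse_age function, pytest assumed): bad inputs should fail loudly, and the tests pin that down.

```python
import pytest

def parse_age(raw: str) -> int:
    """Hypothetical parser used to illustrate sad path testing."""
    age = int(raw)  # raises ValueError on non-numeric input
    if age < 0:
        raise ValueError("age cannot be negative")
    return age

def test_parse_age_happy_path():
    assert parse_age("42") == 42

@pytest.mark.parametrize("bad_input", ["", "abc", "-1"])
def test_parse_age_sad_paths(bad_input):
    # Sad path: each bad input must raise instead of returning garbage.
    with pytest.raises(ValueError):
        parse_age(bad_input)
```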
spO0q

I think of tests as interactive documentation

I really like this
