So if you wrote a unit test like this...
for (int i = 0; i < 100; i++) { assertEquals(i * i, square(i)); }
you would be given 100 points.
I would give this person 0 points (even if the test were checking something actually relevant), because assertions inside a loop make little sense, and tests with multiple asserts (especially in the form of a loop or a map) are difficult to work with.
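If those 100 iterations really do represent meaningful cases, the idiomatic fix is one assertion per case, for instance JUnit 5's `@ParameterizedTest`, which runs and reports each input as its own test. Here is a framework-free sketch of the same idea; `square` is a hypothetical method under test:

```java
class SquareTest {
    // Hypothetical method under test.
    static int square(int x) {
        return x * x;
    }

    // One check per case: a failure names the offending input instead of
    // aborting an opaque 100-iteration loop at the first mismatch.
    static void checkSquare(int input, int expected) {
        int actual = square(input);
        if (actual != expected) {
            throw new AssertionError(
                "square(" + input + "): expected " + expected + ", got " + actual);
        }
    }

    public static void main(String[] args) {
        checkSquare(0, 0);
        checkSquare(1, 1);
        checkSquare(7, 49);
        checkSquare(100, 10000);
        System.out.println("all cases passed");
    }
}
```

The point is not the helper itself but the granularity: each case stands alone, so a reviewer (or a metric) can see exactly what is being tested.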
The problem, essentially, is to find a metric which cannot [easily] be gamed. A metric based exclusively on the number of asserts is exactly like paying developers per LOC written. Just as pay-by-LOC leads to huge, impossible-to-maintain code, your current company policy leads to useless and possibly badly written tests.
If the number of asserts is irrelevant, the number of tests is irrelevant as well, and the same goes for many other metrics (including combined ones) one could imagine for this sort of situation.
Ideally, you would apply a systemic approach. In practice, this can hardly work in most software development companies, so I can suggest a few other things:
- Use pair reviews for tests, and track something similar to the WTFs-per-minute metric.
- Measure the impact of those tests on the number of bugs over time. This has several benefits:
  - Seems fair.
  - Can actually be measured, if you collect enough data about bug reports and their fate.
  - Is actually worth it!
- Use branch coverage, but combine it with other metrics (and with reviews). Branch coverage has its benefits, but testing CRUD code just to get a better grade is not the best way to spend developers' time.
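To see why branch coverage is a stronger signal than line coverage, consider this minimal sketch (the method name is hypothetical): the ternary sits on a single line, so one test with a positive input already yields 100% line coverage, while a branch-coverage tool such as JaCoCo would report only 50% until a negative input is added.

```java
class Abs {
    // Hypothetical method under test: one decision, two branch outcomes.
    static int abs(int x) {
        // A test suite containing only abs(5) executes this line
        // (100% line coverage) but exercises only the x >= 0 branch
        // (50% branch coverage). Adding abs(-5) covers both outcomes.
        return x < 0 ? -x : x;
    }
}
```

This is exactly the kind of gap a review should catch even when the coverage number looks respectable.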