  • I'm not clear on whether I would need to add more unit tests after changing/fixing the code in order to verify that the fix works. I'm guessing I would first write unit tests that fail, with all the data stubbed out that's necessary to reproduce the problem, and then add or change code until they pass? Commented Sep 23, 2012 at 9:44
  • 3
    It's also well worth auditing the existing tests that exercise this area of the application. Tests are just code, after all, so they can contain bugs. Commented Sep 23, 2012 at 9:52
  • 3
    @GregBair unless the tests that are there are right but not comprehensive... edge cases are a killer and may well be missed by the tests precisely because they are edge cases... Commented Sep 23, 2012 at 15:16
  • 1
    @GregBair I'm not sure I agree - if the tests are for the results of calculations and the issue is edge cases, then you're adding more tests for more cases, and the old tests shouldn't fail because the rules they're testing still apply Commented Sep 25, 2012 at 12:31
  • 1
    @GregBair but the key point here is that you have specific scenarios that are not being correctly identified by the tests. One has to assume that the tests are right just not comprehensive. So write the new tests, fix the code and then see what happens... Commented Sep 25, 2012 at 17:43
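
The workflow the comments describe — write a new test that reproduces the bug and fails, fix the code, and confirm the existing tests still pass — can be sketched as below. This is a minimal illustration, not anyone's actual code: `calculate_discount` and its edge-case bug are hypothetical stand-ins for the calculation being discussed.

```python
import unittest

def calculate_discount(price, percent):
    """Hypothetical function under test: price after a percentage discount.

    The (assumed) original bug: a discount of exactly 100% was an edge
    case the existing tests missed. The clamp below represents the fix.
    """
    if percent >= 100:  # the fix: a full discount means the item is free
        return 0.0
    return price * (1 - percent / 100)

class TestDiscountEdgeCases(unittest.TestCase):
    def test_full_discount_is_free(self):
        # New regression test: written first, so it fails against the
        # buggy code and passes once the fix is in place.
        self.assertEqual(calculate_discount(50.0, 100), 0.0)

    def test_half_discount(self):
        # Existing-style test: the old rules still apply, so it should
        # keep passing after the fix (per the comments above, if it
        # breaks, either the fix or the old test needs auditing).
        self.assertEqual(calculate_discount(50.0, 50), 25.0)

if __name__ == "__main__":
    unittest.main()
```

Running this with the fix commented out would show the new edge-case test failing while the old one passes, which is exactly the "red, then green" signal the commenters rely on.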