Prevent script bugs: Test early

Write bulletproof shell scripts. Learn key testing methods for reliable automation.

As your shell script grows in complexity, so too do the potential pitfalls. Catching errors early in the development process saves time and frustration down the line. By incorporating testing from the beginning, you can identify and fix problems before they snowball, leading to a more robust and reliable script.

This is the sixth post in a tutorial series on automation with shell scripts. In this installment, you’ll learn about ensuring that your program runs correctly. The previous post taught you how to build efficient scripts with templates.

Let’s talk about testing. In my experience as a Linux administrator with years of testing under my belt, I’ve seen proper testing of shell scripts either neglected or done in a rush.

I’ve also encountered cases in which testing was deliberately minimized or skipped altogether in the interest of meeting the pointy-haired bosses’ schedules.

As a sysadmin, my goal is to write shell scripts that work flawlessly under various conditions. This article explores different testing methods and guides you through implementing them.

About testing

“There is always one more bug.”

— Lubarsky’s Law of Cybernetic Entomology

Lubarsky is correct. You can never find all the bugs in your code. For every bug I find, there always seems to be another that pops up, usually at the worst time.

Testing goes beyond just finding bugs in the code. It’s also about verifying that the script actually fixes the problems it’s designed for, whatever their source (hardware, software, or even unexpected user actions). Testing also ensures the script is user-friendly and has a clear interface.

Following a well-defined process when writing and testing shell scripts can contribute to consistent and high-quality results.

My process is simple:

  1. Create a simple test plan.
  2. Start testing right at the beginning of development.
  3. Perform a final test when the code is complete.
  4. Move to production and test more.

The test plan

There are lots of different formats for test plans. I’ve worked with the full range: from keeping it all in my head, to a few notes jotted down on a sheet of paper, to a complex set of forms that required a full description of each test, which functional code it would exercise, what the test would accomplish, and what the inputs and results should be.

After years of experience, I now take the middle ground. Having at least a short written test plan ensures consistency from one test run to the next. How much detail you need depends on how formal your development and test processes are.

The sample test plan documents I found with my online searches were complex and intended for large organizations with very formal development and test processes. Although those test plans would be good for people with “test” in their job title, they don’t apply well to sysadmins’ more chaotic and time-dependent working conditions. As in most other aspects of the job, sysadmins need to be creative. So here’s a short list of things to consider including in your test plan. Modify it to suit your needs:

  • The name and a short description of the software being tested
  • A description of the software features to be tested
  • The starting conditions for each test
  • The procedures to follow for each test
  • A description of the desired outcome for each test
  • Specific tests designed to produce and check for negative outcomes
  • Tests for how the program handles unexpected inputs
  • A clear description of what constitutes pass or fail for each test
  • Fuzzy testing, which is described below

This list should give you some ideas for creating your test plans. Most sysadmins should keep it simple and fairly informal.
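
For example, here is roughly what a simple, informal plan might look like for a hypothetical script I’ll call diskcheck.sh. I find it convenient to keep the plan as comments at the top of the test driver so the plan and the tests travel together:

    #!/usr/bin/env bash
    # Test plan for diskcheck.sh (a hypothetical example script)
    #
    # Feature under test : report filesystems above a usage threshold
    # Starting conditions: run as a normal user on a host with at least
    #                      one mounted filesystem
    # Procedure          : run with "-w 80", with the invalid value
    #                      "-w 101", and with no options at all
    # Desired outcome    : list of filesystems over 80% usage; a usage
    #                      message and non-zero exit for the invalid value
    # Negative test      : "-w 101" must be rejected, not silently accepted
    # Pass/fail          : correct output and expected exit codes = pass;
    #                      anything else = fail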

Test early—test often

I always start testing my shell scripts as soon as I complete the first portion that is executable. This is true whether I am writing a short command-line program or a script that is an executable file.

I usually start creating new programs with the shell script template that we created in the previous article. I write the code for the Help function and test it. This is usually a trivial part of the process, but it helps me get started and ensures that things in the template are working properly at the outset. At this point, it’s easy to fix problems with the template portions of the script or to modify it to meet needs that the standard template does not.
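
The details of the template vary, but the Help function I test at this stage looks something like the following sketch. The script name and options here are placeholders, not part of the standard template; adjust them to match your own:

    #!/usr/bin/env bash
    # Sketch of a Help function and the option handling that calls it.
    # "cleanup.sh" and its options are hypothetical examples.

    Help()
    {
       echo "cleanup.sh - remove old temporary files."
       echo
       echo "Syntax: cleanup.sh [-h|-t|-v]"
       echo "Options:"
       echo "  -h   Print this help and exit."
       echo "  -t   Run in test mode."
       echo "  -v   Verbose output."
    }

    while getopts ":htv" option; do
       case $option in
          h) Help
             exit 0 ;;
          *) ;;   # the remaining options are handled as the script grows
       esac
    done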

Once the template and Help function are working, I move on to creating the body of the program by adding comments to document the programming steps required to meet the program specifications. Now I start adding code to meet the requirements stated in each comment. This code will probably require adding variables that are initialized in that section of the template—which is now becoming a shell script.
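
As a rough illustration, the body of a hypothetical log-cleanup script might start out as little more than comments, with code and variables filled in one requirement at a time:

    # Sketch of the comment-first approach for a hypothetical script.
    # Each comment states one requirement; the code that satisfies it is
    # added underneath and tested before moving on to the next one.

    LogDir="/var/log/myapp"   # new variable added to the initialization section
    KeepDays=14               # new variable added to the initialization section

    # 1. Verify that the log directory exists; exit with an error if not.
    if [ ! -d "$LogDir" ]; then
       echo "Error: $LogDir does not exist." >&2
       exit 1
    fi

    # 2. Find log files older than $KeepDays days.
    # 3. Compress the files found in step 2.
    # 4. Report how many files were compressed.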

This is where testing is more than just entering data and verifying the results. It takes a bit of extra work. Sometimes I add a command that simply prints the intermediate result of the code I just wrote and verify that. For more complex scripts, I add a -t option for “test mode.” In this case, the internal test code executes only when the -t option is entered on the command line.
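
Here’s a sketch of how such a -t option can be wired up. The option parsing and variable names are illustrative, not the only way to do it:

    # Hypothetical sketch of a -t "test mode" option.
    TestMode=0
    LogFile="/var/log/syslog"

    while getopts ":t" option; do
       case $option in
          t) TestMode=1 ;;
       esac
    done

    # Normal program logic.
    ErrorCount=$(grep -c "error" "$LogFile")

    # Intermediate results print only when -t was given on the command line.
    if [ "$TestMode" -eq 1 ]; then
       echo "TEST: LogFile=$LogFile ErrorCount=$ErrorCount" >&2
    fi

    echo "Found $ErrorCount errors in $LogFile"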

Final testing

After the code is complete, I go back to do a complete test of all the features and functions using known inputs to produce specific outputs. I also test some random inputs to see if the program can handle unexpected input.
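
A small driver script helps keep those known-input runs consistent from one final test to the next. This sketch assumes a script under test called ./myscript.sh with made-up inputs and expected outputs; substitute the values from your own test plan:

    #!/usr/bin/env bash
    # Minimal final-test driver: run the script under test with known
    # inputs and compare its output to what the test plan says to expect.
    # The script name, inputs, and expected outputs are placeholders.

    script="./myscript.sh"
    pass=0
    fail=0

    run_test() {
       local input="$1" expected="$2"
       local actual
       actual=$("$script" "$input" 2>&1)
       if [ "$actual" = "$expected" ]; then
          pass=$((pass + 1))
       else
          fail=$((fail + 1))
          echo "FAIL: input '$input' -> got '$actual', expected '$expected'"
       fi
    }

    run_test "2024-01-01" "Processed 1 record"
    run_test ""           "Error: no date supplied"

    echo "Passed: $pass  Failed: $fail"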

Final testing is intended to verify that the program is functioning as intended. A large part of the final test is to ensure that functions that worked earlier in the development cycle have not been broken by code that was added or changed later in the cycle.

If you have been testing the script as you add new code to it, you may think there should not be any surprises during the final test. Wrong! There are always surprises during final testing. Always. Expect those surprises, and be ready to spend time fixing them. If there were never any bugs discovered during final testing, there would be no point in doing a final test, would there?

Testing in production

Huh—what?

“Not until a program has been in production for at least six months will the most harmful error be discovered.”

— Troutman’s Programming Postulates

Yes, testing in production is now considered normal and desirable. Having been a tester myself, I find this reasonable. “But wait! That’s dangerous,” you say. My experience is that it’s no more dangerous than extensive and rigorous testing in a dedicated test environment. In some cases, there is no choice because there is no test environment—only production.

Sysadmins are no strangers to the need to test new or revised scripts in production. Any time a script is moved into production, that becomes the ultimate test. The production environment constitutes the most critical part of that test. Nothing that testers can dream up in a test environment can fully replicate the true production environment.

The allegedly new practice of testing in production is just the recognition of what sysadmins have known all along. The best test is production—so long as it is not the only test.

Fuzzy testing

The term “fuzzy testing” might sound like randomly hitting keys until something breaks. But there’s more to it: It’s about intentionally feeding the program unexpected or invalid data to see how it reacts.

Fuzzy testing is a bit like the time my son broke the code for a game in less than a minute with random input. That pretty much ended my attempts to write games for him.

Most test plans utilize very specific input that generates a specific result or output. Regardless of whether the test defines a positive or negative outcome as a success, it is still controlled, and the inputs and results are specified and expected, such as a specific error message for a specific failure mode.

Fuzzy testing is about dealing with randomness in all aspects of the test: starting conditions, unexpected and highly random input, random combinations of options, low memory, heavy CPU contention from other programs, multiple instances of the program under test, and any other random conditions you can think of to apply.
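
Even a crude loop that throws random strings at the script catches a surprising number of problems. This sketch assumes the script under test is ./myscript.sh and that exit codes 0 and 1 are the only acceptable results; adjust both assumptions to fit your program:

    #!/usr/bin/env bash
    # Crude fuzzing loop: feed random printable strings to the script
    # under test and flag any run that exits with an unexpected status.
    # The script name and the "acceptable" exit codes are assumptions.

    script="./myscript.sh"

    for i in $(seq 1 100); do
       # Random printable garbage of random length (1 to 40 characters).
       input=$(tr -dc '[:print:]' < /dev/urandom | head -c $((RANDOM % 40 + 1)))
       "$script" "$input" > /dev/null 2>&1
       status=$?
       # Treat 0 (success) and 1 (handled error) as acceptable; anything
       # else suggests the script did not handle the input gracefully.
       if [ "$status" -gt 1 ]; then
          echo "Unexpected exit status $status for input: $input"
       fi
    done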

I try to do some fuzzy testing from the beginning. If the Bash script cannot deal with significant randomness in its very early stages, then it is unlikely to get better as you add more code. This is a good time to catch these problems and fix them while the code is relatively simple. A bit of fuzzy testing at each stage is also useful in locating problems before they get masked by even more code.

After the code is completed, I like to do some more extensive fuzzy testing. Always do some fuzzy testing. I have certainly been surprised by some of the results. It is easy to test for the expected things, but users do not usually do the expected things with a script.

Summary

In this article, we’ve dived into the importance and methods of testing the shell scripts we write. As a sysadmin, I aim to create code that functions flawlessly under a variety of conditions. This is why thorough testing is an essential part of writing good scripts.

Now that you understand the benefits, it’s time to create your own test plan and start testing the Bash script template in a structured way.

Next up, we’ll look at initializing variables to ensure that your program runs under the correct set of conditions.
