How to Use Hypothesis and Pytest for Robust Property-Based Testing in Python
When writing unit tests, it’s hard to consider all possible edge cases and validate that your code works correctly.
Missed edge cases often surface in production, forcing a hurried patch to be deployed, only for a new bug to emerge later.
There will always be cases you didn’t consider, which makes this an ongoing maintenance job. Traditional unit testing solves only some of these issues.
Property-based testing is a complementary approach to traditional unit testing, where test cases are generated based on properties or constraints that the code should satisfy.
Hypothesis addresses this limitation by automatically generating test data based on specified strategies.
This allows developers to test a much broader range of inputs and outputs than traditional unit tests, increasing the likelihood of catching edge cases and unexpected behaviour.
In this article, we’ll explore how to use Hypothesis with Pytest for property-based testing.
We’ll start with the difference between example-based testing and property-based testing and then dive into two examples of how to use Hypothesis with Pytest.
First we’ll go through a simple example (string/array transformations) and then move on to a more complex one — building a Shopping Cart app.
Finally, we’ll discuss best practices for using Hypothesis with Pytest, including tips for writing your own Hypothesis strategies and touch on Model-based testing.
By the end of this article, you’ll be equipped with the knowledge and tools to use Hypothesis and Pytest for efficient and comprehensive property-based testing of your code.
Let’s get started!
Objectives
By the end of this tutorial you should be able to:
- Understand the key differences between example-based, property-based and model-based testing
- Use the Hypothesis library with Pytest to test your code and ensure coverage for a wide range of test data
- Apply property-based testing to your Python apps
- Build a Shopping App and test it using property-based testing
Example-Based Testing vs Property-Based Testing
Example-based testing and property-based testing are two common approaches to software testing, alongside model-based testing.
Example-based testing involves writing test cases that provide specific inputs and expected outputs for functions or methods.
These tests are easy to write and understand, and they can catch many common errors.
However, they are limited in scope and may not cover all possible edge cases or unexpected scenarios.
Property-based testing involves testing properties or invariants that code should satisfy, and then automatically generating test data to check if those properties hold true.
This approach can catch a much broader range of edge cases and unexpected behaviour that may not be covered by example-based testing.
However, it can be more challenging to write and understand these tests, and it may require more time and computation power to generate test data.
As a developer, you should strive to use a combination of both to capture as many edge cases as possible and produce robust, well-tested code.
Project Set Up
The project has the following structure:
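Based on the files referenced throughout this tutorial, the layout looks roughly like this:

```
.
├── requirements.txt
├── src
│   ├── random_operations.py
│   └── shopping_cart.py
└── tests
    └── unit
        ├── test_random_operations.py
        ├── test_shopping_cart.py
        └── test_shopping_cart_hypothesis.py
```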
Getting Started
To get started, clone the repo here, or create your own repo by creating a folder and running `git init` to initialise it.
Prerequisites
In this project, we’ll be using Python 3.10.10.
Create and activate a virtual environment, then install the requirements (packages).
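If you’re unsure how, one common way to create and activate a virtual environment (assuming the built-in venv module on macOS or Linux) is:

```bash
python -m venv .venv          # create the virtual environment in .venv
source .venv/bin/activate     # activate it for the current shell
```

Then install the dependencies: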
```bash
pip install -r requirements.txt
```
This will install the Hypothesis Library that we’ll be using throughout this tutorial.
Simple Example
To understand how to use this library, it’s always good to start with a simple example. Walk before trying to run.
Source Code
The source code for our simple application contains a bunch of Array (list) and String transformations in Python.
Array Operations
src/random_operations.py
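The full file lives in the repo; as a rough illustration (the function names below are assumptions, not the repo’s actual API), the array transformations might look something like this:

```python
# Illustrative array operations for src/random_operations.py (names are assumptions)
def sort_array(arr: list) -> list:
    """Return a new list sorted in ascending order."""
    return sorted(arr)


def remove_duplicates(arr: list) -> list:
    """Return the list with duplicate values removed, preserving order."""
    return list(dict.fromkeys(arr))
```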
String Operations
src/random_operations.py
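Similarly, the string transformations might look something like this (again, illustrative names only):

```python
# Illustrative string operations for src/random_operations.py (names are assumptions)
def reverse_string(s: str) -> str:
    """Return the string reversed."""
    return s[::-1]


def capitalize_string(s: str) -> str:
    """Return the string with its first character upper-cased."""
    return s.capitalize()
```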
Simple Example — Unit Tests
As you’ve seen in the above section, two popular types of testing are
- Example-based testing (tests input-output examples)
- Property-based testing (tests properties with various auto-generated input data)
Example-Based Testing
We write some simple example-based tests:
tests/unit/test_random_operations.py
```python
import pytest
```
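The full test file is in the repo; using the hypothetical function names from the sketches above, the example-based tests might look roughly like this:

```python
from src.random_operations import (
    capitalize_string,
    remove_duplicates,
    reverse_string,
    sort_array,
)


def test_sort_array():
    assert sort_array([3, 1, 2]) == [1, 2, 3]


def test_remove_duplicates():
    assert remove_duplicates([1, 1, 2]) == [1, 2]


def test_reverse_string():
    assert reverse_string("abc") == "cba"


def test_capitalize_string():
    assert capitalize_string("hello world") == "Hello world"
```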
Here we test our four functions the traditional way: you give an input and assert the result against an expected output.
Running The Unit Test
To run the unit tests, simply run
```bash
pytest ./tests/unit/test_random_operations.py -v -s
```
Great. But what if there are edge cases that you didn’t consider?
Let’s look at property-based testing.
Property-Based Testing
tests/unit/test_random_operations.py
```python
from hypothesis import given, strategies as st
```
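A minimal sketch of such property-based tests, reusing the hypothetical function names from the simple example:

```python
from hypothesis import given, strategies as st

from src.random_operations import reverse_string, sort_array


@given(st.lists(st.integers()))
def test_sort_array_is_ordered(arr):
    result = sort_array(arr)
    # Property: every element is <= the element that follows it
    assert all(a <= b for a, b in zip(result, result[1:]))


@given(st.text())
def test_reverse_string_twice_is_identity(s):
    # Property: reversing a string twice returns the original string
    assert reverse_string(reverse_string(s)) == s
```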
To use Hypothesis in this example, we import the `given`, `strategies` and `assume` helpers that ship with the library.
The `@given` decorator is placed just before each test, followed by a strategy.
A strategy is specified using the `st.X` methods, which include `st.lists()`, `st.integers()`, `st.text()` and so on.
The Hypothesis documentation has a comprehensive list of strategies.
Strategies are used to generate test data and can be heavily customised to your liking, for example: generate only positive or negative numbers, specific key/value pairs, integers with a maximum value of 100, and so on.
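A few examples of this kind of customisation, using standard Hypothesis strategies:

```python
from hypothesis import strategies as st

positive_ints = st.integers(min_value=1)                    # only positive integers
negative_ints = st.integers(max_value=-1)                   # only negative integers
capped_ints = st.integers(min_value=0, max_value=100)       # integers no larger than 100
non_empty_lists = st.lists(st.integers(), min_size=1)       # non-empty lists of integers
text_to_int_dicts = st.dictionaries(keys=st.text(), values=st.integers())  # key/value pairs
```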
In our test examples above, we generate simple test data using the relevant strategies and assert that the property logic holds, rather than asserting against specific hard-coded data.
Common issues discovered (that often aren’t accounted for) include:
- Empty lists, strings, and dicts as input
- Repeated values in lists
- Negative numbers, 0 as input
- Keys not present in dict
- Non-ASCII and other unexpected characters
These can be discovered easily using the Hypothesis library.
Running The Unit Test
To run the unit tests, simply run
```bash
pytest ./tests/unit/test_random_operations.py -v -s
```
Complex Example
Now that you’ve seen a basic example, it’s time to step up your testing game.
Let’s build a simple Shopping App that lets us
- Add items to the cart
- Remove items from the cart
- Calculate the total based on the quantity
- View cart items
- Clear cart
Source Code
src/shopping_cart.py
```python
import random
```
We define an `Item` enum, and the cart accepts only values of this custom `Item` type.
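The full implementation is in the repo; a minimal sketch of what such a cart might look like (the class and method names below are assumptions, not the repo’s exact API):

```python
from enum import Enum


class Item(Enum):
    APPLE = "apple"
    BANANA = "banana"
    ORANGE = "orange"


class ShoppingCart:
    def __init__(self) -> None:
        # Map each Item to a (price, quantity) pair
        self.items: dict[Item, tuple[float, int]] = {}

    def add_item(self, item: Item, price: float, quantity: int = 1) -> None:
        if not isinstance(item, Item):
            raise ValueError("Only Item enum members can be added to the cart")
        _, existing_quantity = self.items.get(item, (price, 0))
        self.items[item] = (price, existing_quantity + quantity)

    def remove_item(self, item: Item) -> None:
        self.items.pop(item, None)

    def get_total_price(self) -> float:
        return sum(price * quantity for price, quantity in self.items.values())

    def view_cart(self) -> dict:
        return dict(self.items)

    def clear_cart(self) -> None:
        self.items.clear()
```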
Complex Example — Unit Tests
Before diving head first into Property-based testing with Hypothesis, let’s get a feel for the code with some simple Example-based testing.
Example-Based Testing
tests/unit/test_shopping_cart.py
```python
import pytest
```
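Reusing the hypothetical ShoppingCart sketch from above, the example-based tests might look roughly like this:

```python
import pytest

from src.shopping_cart import Item, ShoppingCart


@pytest.fixture
def cart():
    # Function-scoped fixture: a fresh cart for every test
    return ShoppingCart()


def test_add_item(cart):
    cart.add_item(Item.APPLE, price=1.5, quantity=2)
    assert Item.APPLE in cart.view_cart()


def test_add_and_remove_item(cart):
    cart.add_item(Item.BANANA, price=0.5, quantity=1)
    cart.remove_item(Item.BANANA)
    assert Item.BANANA not in cart.view_cart()


def test_total(cart):
    cart.add_item(Item.APPLE, price=1.5, quantity=2)
    assert cart.get_total_price() == pytest.approx(3.0)


def test_clear_cart(cart):
    cart.add_item(Item.ORANGE, price=2.0, quantity=1)
    cart.clear_cart()
    assert cart.view_cart() == {}
```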
The above example illustrates simple example-based testing, where we test:
- Add items
- Add/Remove items
- Check the total
- Check that the clear cart functionality works
If you notice, we’ve used a fixture to initialise the Cart object with the default `function` scope.
If you’re unfamiliar with fixtures, this article on Pytest fixtures offers a solid base.
This resets the Cart object for each test, ensuring statelessness and test isolation.
Running The Unit Test
To run the unit tests, simply run
```bash
pytest ./tests/unit/test_shopping_cart.py -v -s
```
Property-Based Testing
Now let’s look at the big one: how to test our complex Shopping App with property-based testing.
We want to test properties like
- Assert the number of items in the cart is equal to the number of items added
- Assert that if we remove an item, the number of items in the cart equals the number of items added minus one
- Ensure the total calculation is correct.
And many more.
To keep this simple, we’ll just test the basic use cases above; you can of course go as granular as you like to cover all possible edge cases.
tests/unit/test_shopping_cart_hypothesis.py
```python
from typing import Callable
```
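A condensed sketch of what these property-based tests might look like, again using the hypothetical ShoppingCart API from earlier:

```python
import pytest
from hypothesis import given, strategies as st

from src.shopping_cart import Item, ShoppingCart


# Custom strategies
@st.composite
def item_strategy(draw):
    # Randomly choose a member of the Item enum
    return draw(st.sampled_from(Item))


price_strategy = st.floats(min_value=0.01, max_value=100)
quantity_strategy = st.integers(min_value=1, max_value=10)


@given(item=item_strategy(), price=price_strategy, quantity=quantity_strategy)
def test_add_item(item, price, quantity):
    cart = ShoppingCart()
    cart.add_item(item, price, quantity)
    # Property: the cart contains exactly the item we added
    assert list(cart.view_cart()) == [item]


@given(item=item_strategy(), price=price_strategy, quantity=quantity_strategy)
def test_add_then_remove_item(item, price, quantity):
    cart = ShoppingCart()
    cart.add_item(item, price, quantity)
    cart.remove_item(item)
    # Property: removing the only item leaves the cart empty
    assert cart.view_cart() == {}


@given(item=item_strategy(), price=price_strategy, quantity=quantity_strategy)
def test_total_price(item, price, quantity):
    cart = ShoppingCart()
    cart.add_item(item, price, quantity)
    # Property: the total equals price * quantity (within float tolerance)
    assert cart.get_total_price() == pytest.approx(price * quantity)
```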
Let’s break down the above tests.
Define strategies
To keep things simple to understand, we first define 3 strategies:
- Item strategy (randomly choose a value from our `Item` enum)
- Price strategy (randomly choose a float number from `0.01` to `100`)
- Quantity strategy (randomly choose an integer number from `1` to `10`)
We can then use these strategies in our tests with the `@given` decorator.
Running The Unit Test
To run the unit tests, simply run
```bash
pytest ./tests/unit/test_shopping_cart_hypothesis.py -v -s
```
Discover Bugs With Hypothesis
Often you will find your tests failing for input values you didn’t think about.
For example, when writing up these tests I caught bugs like:
- The same item added with different prices, which we need to account for.
In real life, this would not happen as each product will have its own unique SKU (stock-keeping unit). Nevertheless, it’s something to think about and fix.
Define Your Own Hypothesis Strategies
As mentioned above, you can create your own custom Hypothesis strategies and go as granular as you like.
In our examples, we defined the custom strategies below:
```python
# Create a strategy for items
```
We use the `st.composite` wrapper and draw a value from a predetermined strategy or from our custom enums (as in the case of `Item`).
Model-Based Testing in Hypothesis
Model-based testing is an approach where a formal model (often smaller scale) of the system is created, and test cases are generated from that model.
The Hypothesis library can be customized to generate test data that satisfies the properties and constraints of the model, allowing you to efficiently and effectively test your systems.
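As a brief taste, Hypothesis ships a stateful testing API for exactly this; a minimal sketch (again assuming the hypothetical ShoppingCart class from earlier) might look like:

```python
from hypothesis import strategies as st
from hypothesis.stateful import RuleBasedStateMachine, invariant, rule

from src.shopping_cart import Item, ShoppingCart


class CartMachine(RuleBasedStateMachine):
    def __init__(self):
        super().__init__()
        self.cart = ShoppingCart()
        self.expected_items = set()  # a simple model of which items should be in the cart

    @rule(item=st.sampled_from(Item),
          price=st.floats(min_value=0.01, max_value=100),
          quantity=st.integers(min_value=1, max_value=10))
    def add(self, item, price, quantity):
        self.cart.add_item(item, price, quantity)
        self.expected_items.add(item)

    @rule(item=st.sampled_from(Item))
    def remove(self, item):
        self.cart.remove_item(item)
        self.expected_items.discard(item)

    @invariant()
    def cart_matches_model(self):
        assert set(self.cart.view_cart()) == self.expected_items


TestCart = CartMachine.TestCase
```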
We’ll cover Model-Based testing with examples in a future article.
Conclusion
I hope you’ve enjoyed learning how to use the awesome Hypothesis testing library in this article.
In this article, we looked at example-based testing and how some of its shortcomings can be overcome with property-based testing.
We looked at two examples — a simple array/string transformation example and a Shopping Cart app.
For each example, we drew up example-based tests and followed with property-based testing to increase the amount and variation of test data.
As a good developer, you must always strive to test your application using as many realistic use cases as possible.
But we’re human after all and there’s always something you haven’t considered. For all of that, there’s Hypothesis.
If you have ideas for improvement or like for me to cover anything specific, please send me a message via Twitter, GitHub or Email.
Till the next time… Cheers!