Setting up testing with pytest and uv

Every Python project needs tests, but setting up a test suite from scratch involves decisions about project layout, dependency management, and configuration. This tutorial walks through the full setup using pytest and uv: creating a project, writing tests, using fixtures, measuring coverage, and configuring defaults.

Prerequisites

Install uv on your system.
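
One option is the standalone installer; see the uv documentation for platform-specific alternatives:

$ curl -LsSf https://astral.sh/uv/install.sh | sh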

Creating a Project with Tests

Start by creating a sample project with a test directory structure:

$ uv init testing-demo --package
Initialized project `testing-demo` at `/path/to/testing-demo`
$ cd testing-demo

This creates a Python package project with the following structure:

testing-demo/
├── pyproject.toml
├── README.md
└── src
    └── testing_demo
        └── __init__.py

The --package flag tells uv to scaffold the src/testing_demo/ layout instead of a flat main.py script. That layout is what makes from testing_demo.calculator import add work in tests below. If you skip the flag and pytest later reports ModuleNotFoundError: No module named 'testing_demo', that’s why.

Adding pytest as a Development Dependency

Add pytest to your project’s development dependencies:

$ uv add --dev pytest
Using CPython 3.14.4
Creating virtual environment at: .venv
Resolved 7 packages in 98ms
Installed 6 packages in 15ms
 + iniconfig==2.3.0
 + packaging==26.2
 + pluggy==1.6.0
 + pygments==2.20.0
 + pytest==9.0.3
 + testing-demo==0.1.0 (from file:///path/to/testing-demo)

If you see error: No 'pyproject.toml' found in current directory or any parent directory, you ran the command outside the testing-demo directory.

This command:

  • Updates your pyproject.toml with pytest as a development dependency
  • Creates the project’s lockfile (uv.lock)
  • Installs pytest in your project’s virtual environment

Open pyproject.toml and notice the new [dependency-groups] table. Pytest is registered there rather than under [project] dependencies, so anyone working on the project gets it installed, but it is never declared as a runtime dependency of the built package.
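
The table should look something like this (the exact pin reflects whatever version uv resolved):

[dependency-groups]
dev = [
    "pytest>=9.0.3",
]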

Creating a Simple Module to Test

Create a calculator module at src/testing_demo/calculator.py:

def add(a, b):
    return a + b


def subtract(a, b):
    return a - b


def multiply(a, b):
    return a * b


def divide(a, b):
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b
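
Before writing any tests, you can sanity-check the module directly from the command line:

$ uv run python -c "from testing_demo.calculator import add; print(add(1, 2))"
3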

Creating Test Files

Create a tests directory at the root of your project:

$ mkdir tests

Now, create a test file for the calculator module in tests/test_calculator.py:

import pytest
from testing_demo.calculator import add, subtract, multiply, divide


def test_add():
    assert add(1, 2) == 3
    assert add(-1, 1) == 0
    assert add(-1, -1) == -2


def test_subtract():
    assert subtract(3, 2) == 1
    assert subtract(2, 3) == -1
    assert subtract(0, 0) == 0


def test_multiply():
    assert multiply(2, 3) == 6
    assert multiply(-2, 3) == -6
    assert multiply(-2, -3) == 6


def test_divide():
    assert divide(6, 3) == 2
    assert divide(6, -3) == -2
    assert divide(-6, -3) == 2


def test_divide_by_zero():
    with pytest.raises(ValueError):
        divide(5, 0)

Running Tests

Run tests using uv:

$ uv run pytest
============================= test session starts ==============================
platform darwin -- Python 3.14.4, pytest-9.0.3, pluggy-1.6.0
rootdir: /path/to/testing-demo
configfile: pyproject.toml
collected 5 items

tests/test_calculator.py .....                                           [100%]

============================== 5 passed in 0.01s ===============================

Each dot represents a passing test. The platform line will show linux or win32 instead of darwin on those systems. Pytest 9 picks up pyproject.toml as a config file by default, which is why it shows in the header even though the project has no [tool.pytest.ini_options] section yet.

To see more detailed output, use the verbose flag:

$ uv run pytest -v
============================= test session starts ==============================
platform darwin -- Python 3.14.4, pytest-9.0.3, pluggy-1.6.0 -- /path/to/testing-demo/.venv/bin/python
cachedir: .pytest_cache
rootdir: /path/to/testing-demo
configfile: pyproject.toml
collecting ... collected 5 items

tests/test_calculator.py::test_add PASSED                                [ 20%]
tests/test_calculator.py::test_subtract PASSED                           [ 40%]
tests/test_calculator.py::test_multiply PASSED                           [ 60%]
tests/test_calculator.py::test_divide PASSED                             [ 80%]
tests/test_calculator.py::test_divide_by_zero PASSED                     [100%]

============================== 5 passed in 0.00s ===============================

Notice the new .pytest_cache/ directory pytest created in your project root. It stores test outcomes between runs to support features like --last-failed. Add .pytest_cache/ to .gitignore.
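
If your .gitignore doesn’t already cover them, entries like these keep generated files out of version control (the .coverage file appears in the next section):

# virtual environment, pytest cache, and coverage data
.venv/
.pytest_cache/
.coverage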

Adding Test Coverage

coverage.py measures which lines of code your tests execute. Add it as a development dependency:

$ uv add --dev coverage
Resolved 8 packages in 161ms
Installed 1 package in 6ms
 + coverage==7.13.5

Run your tests through coverage:

$ uv run coverage run -m pytest
============================= test session starts ==============================
...
============================== 5 passed in 0.01s ===============================

Notice the new .coverage file in your project root. That binary file holds the line-by-line execution data the next two commands turn into reports.

Then view the report:

$ uv run coverage report
Name                             Stmts   Miss  Cover
----------------------------------------------------
src/testing_demo/__init__.py         2      1    50%
src/testing_demo/calculator.py      10      0   100%
tests/test_calculator.py            21      0   100%
----------------------------------------------------
TOTAL                               33      1    97%

calculator.py is fully covered. The miss in src/testing_demo/__init__.py is the starter print() statement uv generated, which the tests never exercise.

To see which specific lines were missed:

$ uv run coverage report -m
Name                             Stmts   Miss  Cover   Missing
--------------------------------------------------------------
src/testing_demo/__init__.py         2      1    50%   2
src/testing_demo/calculator.py      10      0   100%
tests/test_calculator.py            21      0   100%
--------------------------------------------------------------
TOTAL                               33      1    97%

The Missing column shows line 2 of __init__.py is the uncovered line.
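
The report also counts the tests themselves. If you would rather measure only your package code, coverage.py reads its configuration from pyproject.toml; a minimal sketch:

[tool.coverage.run]
source = ["src"]

With that in place, coverage run -m pytest records only files under src/. And for more detail than the terminal table, uv run coverage html writes a browsable report to htmlcov/.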

Tip

You may see pytest-cov recommended elsewhere. It wraps coverage.py with a --cov flag for pytest. Using coverage directly means one fewer dependency and teaches you the tool that does the actual work.

Configuring pytest

Customize the default options when running pytest by adding the following to your pyproject.toml file:

[tool.pytest.ini_options]
addopts = "--maxfail=1"

Now re-run pytest on the command line. It will automatically run with this option set, stopping after the first failure.
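
Because pytest inserts addopts ahead of whatever you type on the command line, flags you pass explicitly take precedence. For example, --maxfail=0 (the default, meaning no limit) restores the normal behavior for a single run:

$ uv run pytest --maxfail=0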

Using Fixtures

Fixtures let you define reusable setup code that pytest injects into test functions automatically. They replace the setup/teardown pattern from unittest with something more composable.

Add this test file at tests/test_calculator_with_fixtures.py:

import pytest
from testing_demo.calculator import add, subtract, multiply, divide


@pytest.fixture
def sample_numbers():
    """Provide a pair of numbers for testing."""
    return (10, 5)


def test_add_with_fixture(sample_numbers):
    a, b = sample_numbers
    assert add(a, b) == 15


def test_subtract_with_fixture(sample_numbers):
    a, b = sample_numbers
    assert subtract(a, b) == 5


def test_multiply_with_fixture(sample_numbers):
    a, b = sample_numbers
    assert multiply(a, b) == 50


def test_divide_with_fixture(sample_numbers):
    a, b = sample_numbers
    assert divide(a, b) == 2.0

When pytest sees sample_numbers as a parameter name, it looks for a fixture with that name and passes its return value into the test. This keeps test data in one place and makes tests shorter.
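
Fixtures can also run cleanup code after a test finishes, which is how they replace unittest-style teardown. A minimal sketch using pytest’s yield syntax and its built-in tmp_path fixture (this calc_log fixture is hypothetical, not part of the tutorial project):

import pytest


@pytest.fixture
def calc_log(tmp_path):
    """Open a scratch log file for the test, then close it afterwards."""
    log = open(tmp_path / "calc.log", "w")  # setup runs before the test
    yield log                               # the test body runs here
    log.close()                             # teardown runs after the test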

Fixtures can also be shared across multiple test files by placing them in a tests/conftest.py file. Any fixture defined in conftest.py is available to all tests in the same directory and its subdirectories.
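
For example, moving sample_numbers into tests/conftest.py would make it available to both test files with no import:

# tests/conftest.py
import pytest


@pytest.fixture
def sample_numbers():
    """Provide a pair of numbers shared by every test file in tests/."""
    return (10, 5)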

Running Specific Tests

As a test suite grows, running every test on each change slows you down. pytest provides several ways to run a subset:

Run a single test file:

$ uv run pytest tests/test_calculator.py
============================= test session starts ==============================
collected 5 items

tests/test_calculator.py .....                                           [100%]

============================== 5 passed in 0.00s ===============================

Run a single test function:

$ uv run pytest tests/test_calculator.py::test_add
============================= test session starts ==============================
collected 1 item

tests/test_calculator.py .                                               [100%]

============================== 1 passed in 0.00s ===============================

Run tests matching a keyword expression:

$ uv run pytest -k "divide"
============================= test session starts ==============================
collected 9 items / 6 deselected / 3 selected

tests/test_calculator.py ..                                              [ 66%]
tests/test_calculator_with_fixtures.py .                                 [100%]

======================= 3 passed, 6 deselected in 0.01s ========================

The 9 items / 6 deselected / 3 selected line is how pytest tells you the filter worked. Three tests across both files contain “divide” in their names: test_divide and test_divide_by_zero from test_calculator.py, plus test_divide_with_fixture from the fixtures file added in the previous section.
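
This is also where the .pytest_cache/ directory from earlier pays off: pytest remembers outcomes between runs, so after a failure you can rerun only the tests that failed:

$ uv run pytest --last-failed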

Final Project Structure

After completing this tutorial, the project looks like this:

testing-demo/
├── pyproject.toml
├── README.md
├── uv.lock
├── src
│   └── testing_demo
│       ├── __init__.py
│       └── calculator.py
└── tests
    ├── test_calculator.py
    └── test_calculator_with_fixtures.py

This handbook is free, independent, and ad-free. If it saved you time, consider sponsoring it on GitHub.