
The Complete FastAPI × pytest Guide: Building “Fearless-to-Change” APIs with Unit Tests, API Tests, Integration Tests, and Mocking Strategies



Quick Overview (Summary)

  • To grow a FastAPI app safely, it’s important to combine unit tests, API tests, and integration tests in a balanced way.
  • Center your testing around pytest, and pair it with FastAPI’s TestClient and dependency overrides to easily verify HTTP-level behavior.
  • For domain logic (service layer) and repositories, use mocks or a test SQLite database so you can test quickly without depending on real DBs or external APIs.
  • For security areas like authentication/authorization (JWT, scopes), protect behavior by testing valid tokens, invalid tokens, and insufficient permissions, so feature additions and spec changes remain safe.
  • Finally, connect everything to CI (e.g., GitHub Actions) so that tests run automatically on every push, creating a cycle where you can break things—and immediately know it.

Who Benefits from This (Concrete Personas)

Solo Developers / Learners

  • You built an API with FastAPI, but you’ve barely written tests yet.
  • You still verify everything manually with a browser or curl, and feel nervous every time you make changes.
  • You roughly know pytest basics, but you’re unsure how to combine it properly with FastAPI.

For you, the “start here” minimal setup and the clear separation between unit tests and API tests will help a lot.

Backend Engineers in Small Teams

  • You’re building FastAPI in a team of 3–5, and testing style is starting to vary by person.
  • You have layers (service, repository, router), but you want a cleaner policy on what to test where.
  • Auth/authz specs change often, so you want tests to protect those behaviors.

For you, a test pyramid mindset (Unit > API > Integration) plus dependency override and mocking patterns are especially effective.

SaaS Teams / Startups

  • The product is moving fast and you deploy frequently.
  • When a bug happens, you want regression tests that ensure it never returns.
  • You want a test policy that is shareable across the team, including CI/CD integration.

For you, this guide helps turn pytest-based testing into a reusable team-wide testing policy template.


Accessibility Note (Readability / Structure)

  • The article flows step-by-step: overview → setup → unit tests → API tests → DB/repositories → auth/authz → async/background → fixture design → CI → roadmap.
  • Each section is kept to 2–3 short paragraphs where possible, focusing on key points and sample code.
  • Code blocks use fixed-width formatting; comments are minimal to avoid visual overload.
  • Terms are briefly defined on first use, then kept consistent afterward to reduce cognitive load.

Overall, the writing aims for WCAG AA–level readability for a technical article.


1. First, Clarify What You Want to Test

Before diving into techniques, it helps to clarify what “testing a FastAPI app” really means.

1.1 The Three Testing Layers

  1. Unit Tests

    • Verify small functions/classes behave as expected.
    • Ideally independent from DB/network/FastAPI itself—pure Python.
    • Examples: price calculation, permission rules, service-layer domain logic.
  2. API Tests (sometimes called integration or route tests)

    • Verify behavior through FastAPI routes using real HTTP-shaped requests.
    • DB/external APIs are swapped out with test versions or mocks so tests stay fast and repeatable.
  3. Integration / E2E Tests

    • Use a production-like setup (real DB, real external services, frontend) to verify full scenarios.
    • More expensive, so keep them few and focus on critical paths.

This article focuses mainly on unit tests and API tests, which form the practical core.

1.2 The Test Pyramid Mindset

The common “test pyramid” says:

  • Base: unit tests (many, fast)
  • Middle: API/integration tests (moderate number)
  • Top: E2E tests (few, but critical)

If you try to cover everything with E2E tests, you often get:

  • tests too slow to run regularly
  • failures that are hard to diagnose

So the best division is:

  • protect domain logic with unit tests
  • validate HTTP-specific behavior with API tests

2. Test Setup: The Basic pytest + FastAPI Toolkit

2.1 Installing pytest and Common Commands

FastAPI's documentation recommends pytest. Here's the basic setup:

pip install pytest

Conventions:

  • file names: test_*.py or *_test.py
  • function names: start with test_
  • run from project root with pytest

pytest             # run all tests
pytest tests/api   # run a specific directory
pytest -k "login"  # run tests whose names contain "login"

2.2 FastAPI’s TestClient

FastAPI is built on Starlette, so you can use TestClient:

from fastapi.testclient import TestClient
from app.main import app

client = TestClient(app)

def test_read_root():
    res = client.get("/")
    assert res.status_code == 200
    assert res.json() == {"message": "Hello, World"}

TestClient is synchronous (get, post, etc.), which makes it easy to use with standard pytest tests.

For async testing, you can combine httpx.AsyncClient with pytest-asyncio, but starting with TestClient is usually the simplest.


3. Unit Tests: Protect the Service Layer and Domain Logic

Start by testing the logic that doesn’t depend on FastAPI itself.

3.1 Pure Functions (Example: Price Calculation)

# app/domain/billing/service.py
def calc_price(base: int, discount_rate: float) -> int:
    if not (0 <= discount_rate <= 1):
        raise ValueError("discount_rate must be between 0 and 1")
    price = int(base * (1 - discount_rate))
    if price < 0:
        price = 0
    return price

# tests/domain/test_billing_service.py
from app.domain.billing.service import calc_price
import pytest

def test_calc_price_normal():
    assert calc_price(1000, 0.2) == 800

def test_calc_price_zero():
    assert calc_price(1000, 1.0) == 0

def test_calc_price_invalid_discount():
    with pytest.raises(ValueError):
        calc_price(1000, 1.5)

Even small unit tests like this make it immediately clear whether a change broke core business rules.
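As the cases multiply, pytest.mark.parametrize keeps them in one table instead of many near-identical functions (calc_price is repeated inline so the sketch is self-contained):

```python
import pytest

def calc_price(base: int, discount_rate: float) -> int:
    if not (0 <= discount_rate <= 1):
        raise ValueError("discount_rate must be between 0 and 1")
    return max(int(base * (1 - discount_rate)), 0)

@pytest.mark.parametrize(
    "base, rate, expected",
    [
        (1000, 0.2, 800),  # normal discount
        (1000, 1.0, 0),    # full discount
        (999, 0.5, 499),   # fractional result is truncated
    ],
)
def test_calc_price(base, rate, expected):
    assert calc_price(base, rate) == expected
```

Each tuple becomes its own test case in the report, so a failing edge case is pinpointed immediately.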

3.2 Service Layer + Mock Repository

Assume a layered structure like “service + repository”:

# app/domain/articles/services.py
from dataclasses import dataclass
from app.domain.articles.models import Article
from app.domain.articles.schemas import ArticleCreate, ArticleRead

class ArticleRepositoryProtocol:
    def add(self, article: Article) -> Article: ...
    def get(self, article_id: int) -> Article | None: ...

@dataclass
class ArticleService:
    repo: ArticleRepositoryProtocol

    def create_article(self, author_id: int, data: ArticleCreate) -> ArticleRead:
        article = Article(
            title=data.title,
            body=data.body,
            author_id=author_id,
            status="draft",
        )
        saved = self.repo.add(article)
        return ArticleRead.model_validate(saved)

In tests, avoid DB and inject an in-memory repository:

# tests/domain/test_article_service.py
from app.domain.articles.services import ArticleService
from app.domain.articles.schemas import ArticleCreate
from app.domain.articles.models import Article

class InMemoryArticleRepo:
    def __init__(self):
        self.items: list[Article] = []
        self._id = 1

    def add(self, article: Article) -> Article:
        article.id = self._id
        self._id += 1
        self.items.append(article)
        return article

    def get(self, article_id: int) -> Article | None:
        for a in self.items:
            if a.id == article_id:
                return a
        return None

def test_create_article_sets_draft_status():
    repo = InMemoryArticleRepo()
    service = ArticleService(repo=repo)

    data = ArticleCreate(title="Hello", body="World")
    result = service.create_article(author_id=42, data=data)

    assert result.status == "draft"
    assert result.title == "Hello"
    assert result.author_id == 42

This makes service-layer changes safer, because tests quickly catch unintended behavior changes.
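If you prefer not to hand-write a fake, unittest.mock.Mock works for the same job. The sketch below uses deliberately simplified stand-ins for Article and ArticleService (the real classes live under app.domain.articles) to stay self-contained:

```python
from dataclasses import dataclass
from unittest.mock import Mock

@dataclass
class Article:  # simplified stand-in for the real model
    title: str
    author_id: int
    status: str = "draft"

@dataclass
class ArticleService:  # simplified stand-in for the real service
    repo: object

    def create_article(self, author_id: int, title: str) -> Article:
        return self.repo.add(Article(title=title, author_id=author_id))

def test_create_article_delegates_to_repo():
    repo = Mock()
    repo.add.side_effect = lambda a: a  # echo the article back, like a save would

    service = ArticleService(repo=repo)
    result = service.create_article(author_id=42, title="Hello")

    repo.add.assert_called_once()      # the service persisted exactly once
    assert result.status == "draft"
    assert result.author_id == 42
```

A Mock is quicker to set up; the in-memory fake from above pays off when many tests share realistic repository behavior.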


4. API Tests: Verify Behavior Through Endpoints

Now move to HTTP-facing behavior.

4.1 Testing a Simple CRUD Endpoint

Example routes:

# app/api/v1/articles.py
from fastapi import APIRouter, Depends
from app.domain.articles.schemas import ArticleCreate, ArticleRead
from app.domain.articles.services import ArticleService
from app.deps.services import get_article_service

router = APIRouter(prefix="/articles", tags=["articles"])

@router.get("", response_model=list[ArticleRead])
def list_articles(
    service: ArticleService = Depends(get_article_service),
):
    return service.list_articles()

@router.post("", response_model=ArticleRead, status_code=201)
def create_article(
    payload: ArticleCreate,
    service: ArticleService = Depends(get_article_service),
):
    return service.create_article(author_id=1, data=payload)

Basic test:

# tests/api/test_articles_api.py
from fastapi.testclient import TestClient
from app.main import app

client = TestClient(app)

def test_create_and_list_articles():
    res = client.post("/articles", json={"title": "T1", "body": "B1"})
    assert res.status_code == 201
    created = res.json()
    assert created["id"] is not None
    assert created["title"] == "T1"

    res_list = client.get("/articles")
    assert res_list.status_code == 200
    items = res_list.json()
    assert len(items) >= 1

4.2 Dependency Overrides

In real projects, you don’t want API tests to connect to production DBs.
FastAPI’s app.dependency_overrides lets you replace dependencies during tests.

# tests/api/conftest.py
import pytest
from fastapi.testclient import TestClient
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from app.main import app
from app.infra.db.base import Base
from app.infra.db.base import get_db  # production dependency

TEST_DATABASE_URL = "sqlite:///./test.db"

engine = create_engine(TEST_DATABASE_URL, connect_args={"check_same_thread": False})
TestingSessionLocal = sessionmaker(bind=engine, autoflush=False, autocommit=False)

@pytest.fixture(scope="session", autouse=True)
def setup_database():
    Base.metadata.create_all(bind=engine)
    yield
    Base.metadata.drop_all(bind=engine)

def override_get_db():
    db = TestingSessionLocal()
    try:
        yield db
    finally:
        db.close()

@pytest.fixture()
def client():
    app.dependency_overrides[get_db] = override_get_db
    client = TestClient(app)
    yield client
    app.dependency_overrides.clear()

Now your tests can just use the fixture:

# tests/api/test_articles_api.py
def test_create_article_uses_test_db(client: TestClient):
    res = client.post("/articles", json={"title": "T1", "body": "B1"})
    assert res.status_code == 201

This pattern is extremely useful—worth making a reusable template in your project.


5. Testing DB / Repository Layers: Transactions and Rollbacks

A common challenge: “How do I keep the DB clean between tests?”

5.1 Create Tables Once, Roll Back Per Test

A standard pattern:

  • create/drop tables once per session
  • start a transaction per test
  • rollback after each test

# tests/db/conftest.py (example)
@pytest.fixture()
def db_session():
    connection = engine.connect()
    transaction = connection.begin()
    session = TestingSessionLocal(bind=connection)

    try:
        yield session
    finally:
        session.close()
        transaction.rollback()
        connection.close()

Then test repositories:

# tests/domain/test_article_repository.py
from app.infra.articles.sqlalchemy_repo import SqlAlchemyArticleRepository
from app.domain.articles.models import Article

def test_add_article(db_session):
    repo = SqlAlchemyArticleRepository(db_session)
    article = Article(title="T", body="B", author_id=1, status="draft")
    saved = repo.add(article)

    assert saved.id is not None

    again = repo.get(saved.id)
    assert again is not None
    assert again.title == "T"

Rollbacks keep the DB clean even as your test suite grows.
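The isolation idea is independent of SQLAlchemy. Here is the same begin-then-rollback pattern sketched with the stdlib sqlite3 module, so it runs anywhere:

```python
import sqlite3

def fresh_connection() -> sqlite3.Connection:
    # in-memory DB with a minimal schema, standing in for the session-scoped setup
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT)")
    conn.commit()
    return conn

def test_rollback_keeps_db_clean():
    conn = fresh_connection()
    # the "test body" writes a row inside an implicit transaction
    conn.execute("INSERT INTO articles (title) VALUES (?)", ("T",))
    # the "teardown" rolls back instead of committing
    conn.rollback()
    count = conn.execute("SELECT COUNT(*) FROM articles").fetchone()[0]
    assert count == 0  # nothing leaked into the table
```

The SQLAlchemy fixture above does exactly this, just with a connection-bound session so the ORM's own commits stay inside the outer transaction.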


6. Auth / Authorization / JWT Testing

Security tests add a huge amount of confidence.

6.1 Testing Login

# tests/api/test_auth_api.py
from fastapi.testclient import TestClient
from app.main import app

client = TestClient(app)

def test_login_success():
    res = client.post(
        "/auth/token",
        data={"username": "alice", "password": "password123"},
    )
    assert res.status_code == 200
    body = res.json()
    assert "access_token" in body
    assert body["token_type"] == "bearer"

def test_login_failure():
    res = client.post(
        "/auth/token",
        data={"username": "alice", "password": "wrong"},
    )
    assert res.status_code == 401

6.2 Protected Route + Token Header

def get_token_for(username: str, password: str) -> str:
    res = client.post(
        "/auth/token",
        data={"username": username, "password": password},
    )
    assert res.status_code == 200
    return res.json()["access_token"]

def test_protected_route_requires_token():
    res = client.get("/me")
    assert res.status_code == 401

def test_protected_route_with_valid_token():
    token = get_token_for("alice", "password123")
    res = client.get("/me", headers={"Authorization": f"Bearer {token}"})
    assert res.status_code == 200
    body = res.json()
    assert body["username"] == "alice"

6.3 Scope-Based Routes

def test_requires_write_scope():
    # prepare a token with only "articles:read", etc. (via test helpers)
    token = get_token_for("alice", "password123")  # example: read-only user
    res = client.post(
        "/articles",
        json={"title": "T1", "body": "B1"},
        headers={"Authorization": f"Bearer {token}"},
    )
    assert res.status_code in (401, 403)

For permission tests, naming them after specs (e.g., test_admin_can_delete_user) makes future maintenance much easier.
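The scope rule itself can also be unit-tested without any HTTP. has_required_scopes below is a hypothetical helper standing in for whatever check your Security dependency performs:

```python
def has_required_scopes(token_scopes: set[str], required: set[str]) -> bool:
    # a token is authorized only if it carries every required scope
    return required.issubset(token_scopes)

def test_write_requires_write_scope():
    assert has_required_scopes({"articles:read", "articles:write"}, {"articles:write"})
    assert not has_required_scopes({"articles:read"}, {"articles:write"})

def test_empty_requirement_always_passes():
    assert has_required_scopes(set(), set())
```

Keeping the rule in a pure function like this means the API tests only need to confirm the wiring (401/403 responses), not every scope combination.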


7. Async and Background Work Testing

FastAPI supports async I/O and background tasks, but testing needs a bit of care.

7.1 BackgroundTasks

Since BackgroundTasks work runs after the response is sent, it’s often best to unit-test the task function itself.

# app/tasks/audit.py
def write_audit_log(user_id: int, action: str) -> None:
    print(f"{user_id} {action}")

# tests/tasks/test_audit.py
from app.tasks.audit import write_audit_log

def test_write_audit_log_runs_without_error(capsys):
    write_audit_log(1, "login")
    captured = capsys.readouterr()
    assert "login" in captured.out

In API tests, check that the task is registered, while testing task logic separately.

7.2 Celery Job Queues

If you use Celery, running tasks synchronously in tests is often easier:

# tests/conftest.py (example)
from app.celery_app import celery_app

def pytest_configure():
    celery_app.conf.update(task_always_eager=True)

# tests/tasks/test_long_add.py
from app.tasks import long_add

def test_long_add():
    res = long_add.delay(1, 2)
    assert res.result == 3

8. Fixtures and Test Data: Keep Tests Clean and Reusable

pytest fixtures help share state and data cleanly.

8.1 A “Create User” Fixture

# tests/fixtures/users.py
import pytest
from sqlalchemy.orm import Session
from app.models.user import User
from app.core.security import hash_password

@pytest.fixture
def user_alice(db_session: Session) -> User:
    alice = User(
        username="alice",
        hashed_password=hash_password("password123"),
        is_active=True,
    )
    db_session.add(alice)
    db_session.commit()
    db_session.refresh(alice)
    return alice

Reusing in API tests:

# tests/api/test_auth_api.py
def test_login_with_fixture(client, user_alice):
    res = client.post(
        "/auth/token",
        data={"username": "alice", "password": "password123"},
    )
    assert res.status_code == 200

8.2 Factory Pattern

As test data grows, factory_boy or similar tools can help.
But early on, simple helper functions are often enough—no need to force complexity.
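A plain keyword-argument helper already covers most needs. This sketch uses a simplified User dataclass in place of the real app.models.user.User:

```python
from dataclasses import dataclass

@dataclass
class User:  # simplified stand-in for the real model
    username: str
    hashed_password: str
    is_active: bool

def make_user(**overrides) -> User:
    # sensible defaults; each test overrides only what it cares about
    defaults = {
        "username": "alice",
        "hashed_password": "hashed-password123",
        "is_active": True,
    }
    defaults.update(overrides)
    return User(**defaults)

def test_inactive_user_helper():
    user = make_user(is_active=False)
    assert user.username == "alice"   # default kept
    assert user.is_active is False    # override applied
```

When this pattern stops scaling (related objects, sequences, unique fields), that's the point where factory_boy starts earning its keep.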


9. CI Integration: Make Tests “Always Running”

Connect tests to CI so they run automatically on pushes and PRs.

9.1 Minimal GitHub Actions Workflow

# .github/workflows/tests.yaml
name: Run tests

on:
  push:
    branches: ["main"]
  pull_request:
    branches: ["main"]

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: Install deps
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install -r requirements-dev.txt

      - name: Run pytest
        run: pytest

This makes it possible to enforce the rule: “Don’t merge if tests fail.”


10. Common Pitfalls and Gentle Fixes

  • Symptom: tests are slow and nobody runs them
    Likely cause: each test hits production DBs/external APIs
    Fix: use test DBs or mocks via dependency overrides
  • Symptom: failures are hard to diagnose
    Likely cause: one test does too many things
    Fix: narrow each test to a single purpose
  • Symptom: auth API tests are annoying to write
    Likely cause: you log in and fetch tokens every time
    Fix: create helper functions or an authenticated-client fixture
  • Symptom: DB state leaks between tests
    Likely cause: no rollback / isolation
    Fix: use per-test transactions and rollbacks (or a clean DB setup)
  • Symptom: unit vs API boundaries are unclear
    Likely cause: no policy for “what to test where”
    Fix: decide “domain logic in unit tests, HTTP behavior in API tests”

11. Adoption Roadmap (Grow It Step by Step)

  1. Install pytest and write just one unit test
    Start with a pure function (like price calculation).

  2. Write one API test using TestClient
    A simple endpoint like /health works well.

  3. Add a test DB + dependency override for DB-dependent API tests
    Try CRUD tests using SQLite + transaction rollback.

  4. Increase service-layer unit tests using mock repositories
    Protect important business rules without DB or FastAPI.

  5. Add auth/authz tests
    Login success/failure, token required, permission denied cases.

  6. Hook into CI so tests run automatically
    At minimum, run tests before merging into main.

  7. Expand tooling as needed
    Add factories, advanced mocking, async tests, etc., based on project complexity.


Summary

  • For FastAPI testing, the core toolkit is pytest + TestClient + dependency overrides.
  • Protect domain logic with unit tests, and validate HTTP behavior (headers, status codes, JSON) with API tests.
  • Swap DB/external APIs with test DBs and mocks so tests stay fast and repeatable.
  • Cover auth/authz (JWT, scopes) with tests to make spec changes safer.
  • You don’t need to build everything at once—start with one unit test and one API test, and grow from there.

I’m quietly cheering for your project to be protected by tests—so you can experiment more freely, without fear of breaking things.

