The Complete Guide to Background Processing with FastAPI × Celery/Redis
How to Separate Heavy Work from Your API to Keep Services Stable
Introduction (What You’ll Be Able to Do After Reading)
This article carefully explains how to add background processing and batch-like async jobs to a FastAPI-based Web API.
- Return a response to user requests quickly, while letting time-consuming tasks run behind the scenes
- Safely make tasks asynchronous, such as sending emails, generating reports, image processing, or issuing large volumes of external API requests
- Build the classic three-part setup using Celery + Redis: FastAPI (API server), workers, and a message broker
You’ll learn the whole flow with sample code.
Who Benefits From This (Specific Reader Profiles)
Solo developers / learners
- You want to trigger tasks that take several seconds to minutes (e.g., “signup confirmation email” or “PDF report generation”) from FastAPI
- You’ve used BackgroundTasks, but you haven’t tried a production-style async setup with separated processes
- You’ve heard of Redis and Celery, but you’re still unclear on “how do they fit together?”
→ By reading through this, you’ll get hands-on with the standard, proven background-job flow using a Celery worker and Redis broker.
Backend engineers in small teams
- You’re building internal systems or a small-to-mid SaaS with FastAPI and you’re unsure how to design “heavy tasks” or “tasks that must retry”
- Timeouts in synchronous APIs and slowdowns during peaks are starting to bother you
- You want a shared team pattern: “From here onward, we send it to the job queue.”
→ By learning responsibility separation between FastAPI and Celery, plus retry/scheduling patterns, you’ll get a solid picture of an ops-friendly design.
SaaS teams / startups
- Your product has many scaling workloads: image processing, bulk sending, data aggregation, etc.
- You want to start with a simple worker setup, while keeping future microservices or event-driven architecture in mind
- You’re already thinking ahead about scaling out workers and monitoring queues
→ Using the role split “FastAPI = synchronous API,” “Celery = async worker,” “Redis = queue broker,” you can build a foundation that scales step-by-step.
Accessibility Notes (Readability Considerations)
- The article is structured as: overview → why background processing → Celery/Redis roles → implementation → error handling & retries → scheduling → ops tips → roadmap.
- Specialized terms (broker, worker, job, task, etc.) are briefly explained on first mention and then used consistently to reduce cognitive load.
- Code blocks are kept short, with minimal essential comments to reduce visual burden.
- Paragraphs are intentionally short and bullet points are used to support self-paced reading.
Overall, the structure aims for a clear, step-by-step experience and a text layout mindful of WCAG AA-style readability.
1. Why Background Processing Is Necessary
Let’s clarify why you should separate heavy work from the API in the first place.
1.1 What happens if you put heavy work directly inside the API?
Consider an endpoint like this:
- A user submits a form
- The server generates a PDF report (10 seconds)
- It calls an external email provider (plus network wait)
- It returns the result
If you try to complete all of this inside a single HTTP request, you’ll likely see:
- Client-side timeouts
- Uvicorn workers being blocked by heavy tasks, forcing other requests to wait
- External-service latency or temporary failures directly harming user experience
1.2 The core idea of background processing
This is where background processing comes in:
- The HTTP request returns quickly with “Job accepted/registered.”
- The real heavy work is executed by a separate process (a worker) that pulls jobs from a queue.
- Results and progress are checked later via another API, a notification, or a dashboard.
In other words:
- FastAPI = the receptionist at the front desk
- Celery worker = staff doing the work in the back
- Redis (broker) = the place where the “to-do list” is stored
Splitting responsibilities like this improves fault tolerance and scalability.
2. Understanding Celery and Redis (High-Level Roles)
Here’s the common combo: Celery + Redis.
2.1 What is Celery?
Celery is a Python distributed task/job queue.
- Runs “tasks” (functions) asynchronously in separate processes
- Uses a message broker (Redis, RabbitMQ, etc.) to enqueue and dequeue tasks
- Provides retries, scheduling (periodic jobs), result storage (result backend), and more
FastAPI’s official docs also point to Celery for heavier background work.
2.2 What is Redis?
Redis is an in-memory datastore that is also often used as a message broker.
- For Celery, it works as “the place where task queues are stored”
- Lightweight, fast, and easy to run locally, in containers, or in the cloud
RabbitMQ is also common, but Redis is simpler to start with, so it’s a great first choice.
3. Environment Setup: Running FastAPI + Celery + Redis
Now let’s set up a runnable environment.
3.1 Required packages
Example:
pip install "fastapi[standard]" "celery[redis]" redis
- fastapi[standard]: FastAPI plus Uvicorn and other basics
- celery[redis]: Celery core with Redis support
- redis: Python Redis client
(Adjust versions to fit your project.)
3.2 Start Redis (local development)
Docker makes local Redis easy:
docker run -d --name redis -p 6379:6379 redis:7
Now Redis is available at localhost:6379.
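If you want to confirm the container is reachable (an optional check, assuming the container name redis used above):
docker exec -it redis redis-cli ping
A reply of PONG means the broker is ready.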
4. Build a Minimal FastAPI + Celery Setup
Let’s define the FastAPI app and the Celery app.
4.1 Example project structure
project/
  app/
    __init__.py
    main.py          # FastAPI API server
    celery_app.py    # Celery app definition
    tasks.py         # background tasks
  requirements.txt
4.2 Define the Celery app
# app/celery_app.py
from celery import Celery

celery_app = Celery(
    "worker",
    broker="redis://localhost:6379/0",   # broker (Redis)
    backend="redis://localhost:6379/1",  # result backend (optional)
)

celery_app.conf.update(
    task_routes={
        "app.tasks.send_email": {"queue": "emails"},
        "app.tasks.generate_report": {"queue": "reports"},
    },
    task_serializer="json",
    result_serializer="json",
    accept_content=["json"],
)
This uses localhost for simplicity; in production you should read these URLs from environment variables or settings classes.
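As a minimal sketch of that idea (the variable names CELERY_BROKER_URL and CELERY_RESULT_BACKEND are just a convention assumed here, not anything Celery requires):
# app/celery_app.py (environment-based variant)
import os

from celery import Celery

celery_app = Celery(
    "worker",
    broker=os.environ.get("CELERY_BROKER_URL", "redis://localhost:6379/0"),
    backend=os.environ.get("CELERY_RESULT_BACKEND", "redis://localhost:6379/1"),
)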
4.3 Define tasks
# app/tasks.py
from time import sleep

from app.celery_app import celery_app

@celery_app.task(name="app.tasks.send_email")
def send_email(to: str, subject: str, body: str) -> str:
    # Real implementation would send an email
    sleep(5)  # pretend this is expensive
    return f"Email sent to {to}"

@celery_app.task(name="app.tasks.generate_report")
def generate_report(user_id: int) -> str:
    # Heavy aggregation or PDF generation
    sleep(10)
    return f"Report generated for user {user_id}"
Key points:
- Any function decorated with @celery_app.task becomes a “task”
- Calling send_email.delay(...) does not execute immediately—it enqueues the task
5. Enqueue Tasks from FastAPI
Now register tasks from FastAPI endpoints.
5.1 FastAPI app definition
# app/main.py
from fastapi import FastAPI
from pydantic import BaseModel

from app.tasks import send_email, generate_report

app = FastAPI(title="FastAPI + Celery Example")

class EmailRequest(BaseModel):
    to: str
    subject: str
    body: str

@app.post("/emails")
def create_email(req: EmailRequest):
    task = send_email.delay(req.to, req.subject, req.body)
    return {"task_id": task.id}

@app.post("/reports/{user_id}")
def create_report(user_id: int):
    task = generate_report.delay(user_id)
    return {"task_id": task.id}
The HTTP request returns only a task_id quickly, and the worker handles execution.
5.2 API to check task results (optional)
If you set a result backend (we pointed it at Redis above), you can query task state:
# app/main.py (additional endpoint)
from app.celery_app import celery_app

@app.get("/tasks/{task_id}")
def get_task_status(task_id: str):
    result = celery_app.AsyncResult(task_id)
    return {
        "task_id": task_id,
        "status": result.status,  # PENDING / STARTED / SUCCESS / FAILURE ...
        # on failure, result.result is an exception object, so stringify it for the JSON response
        "result": str(result.result) if result.failed() else result.result,
    }
Typical frontend flow:
- Call /emails or /reports/{user_id} and receive a task_id
- Poll /tasks/{task_id} to track status (a rough client-side sketch follows below)
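Assuming the requests library and the endpoints defined above (the one-second interval is arbitrary), that flow might look like this:
# poll_task.py (illustrative client, not part of the app)
import time

import requests

BASE = "http://127.0.0.1:8000"

resp = requests.post(
    f"{BASE}/emails",
    json={"to": "test@example.com", "subject": "Hello", "body": "Hi"},
)
task_id = resp.json()["task_id"]

while True:
    status = requests.get(f"{BASE}/tasks/{task_id}").json()
    if status["status"] in ("SUCCESS", "FAILURE"):
        print(status)
        break
    time.sleep(1)  # wait before polling again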
6. Start Workers and Confirm It Works
At this point you have:
- FastAPI (HTTP entrypoint)
- Celery worker (background executor)
- Redis (broker)
Now run it.
6.1 Start FastAPI
uvicorn app.main:app --reload
6.2 Start Celery worker
From the project root:
celery -A app.celery_app.celery_app worker -Q celery,emails,reports --loglevel=info
-A points to the import path of your Celery app; here, celery_app is defined in app/celery_app.py.
Because task_routes sends send_email and generate_report to the emails and reports queues, the worker must listen on those queues via -Q (in addition to the default celery queue); otherwise the tasks would sit in Redis unprocessed.
6.3 Quick test
From another terminal:
curl -X POST "http://127.0.0.1:8000/emails" \
-H "Content-Type: application/json" \
-d '{"to": "test@example.com", "subject": "Hello", "body": "Hi"}'
You should immediately receive:
{"task_id": "xxxxxxxx-xxxx-...."}
And in worker logs you’ll see something like “Email sent to …”.
You can also check status via GET /tasks/{task_id}.
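For example (replace the placeholder with the task_id returned earlier):
curl "http://127.0.0.1:8000/tasks/<task_id>"
Once the worker has finished, the status field should read SUCCESS and result should contain the task’s return value.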
7. Retries, Timeouts, and Other Controls for Failures
In real systems, temporary failures and network errors are unavoidable. Celery supports retries and time limits.
7.1 Automatic retry example
# app/tasks.py (excerpt)
from celery import shared_task
import requests

@shared_task(
    bind=True,
    max_retries=5,
    default_retry_delay=10,  # seconds
)
def send_notification(self, endpoint: str, payload: dict) -> str:
    try:
        r = requests.post(endpoint, json=payload, timeout=5)
        r.raise_for_status()
    except requests.RequestException as exc:
        raise self.retry(exc=exc)
    return "ok"
- max_retries: maximum number of retries
- default_retry_delay: retry interval (seconds)
- self.retry(): schedules the next attempt by re-enqueuing the task
7.2 Task time limits
Use time_limit and soft_time_limit to cap runtime:
@celery_app.task(
    name="app.tasks.heavy_task",
    time_limit=60,
    soft_time_limit=50,
)
def heavy_task():
    ...
This helps prevent runaway tasks from consuming workers and harming other jobs.
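When the soft limit fires, Celery raises SoftTimeLimitExceeded inside the task, giving you a chance to clean up before the hard limit kills it. A minimal sketch (the task name is hypothetical and the sleep stands in for real work):
# app/tasks.py (sketch: reacting to the soft time limit)
from time import sleep

from celery.exceptions import SoftTimeLimitExceeded

from app.celery_app import celery_app

@celery_app.task(name="app.tasks.heavy_task_with_cleanup", time_limit=60, soft_time_limit=50)
def heavy_task_with_cleanup() -> str:
    try:
        sleep(55)  # stand-in for long-running work
        return "done"
    except SoftTimeLimitExceeded:
        # release locks, flush partial results, etc. before the hard limit (60 s) kills the task
        return "aborted"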
8. Scheduling (Periodic Jobs)
For “generate a report daily at midnight” or “run aggregation every 5 minutes,” use Celery Beat.
8.1 Configure beat schedule
# app/celery_app.py (add beat schedule)
from celery.schedules import crontab

celery_app.conf.beat_schedule = {
    "generate-daily-reports": {
        "task": "app.tasks.generate_report",
        "schedule": crontab(hour=0, minute=0),
        "args": (1,),  # example: user_id=1
    },
}
8.2 Start Beat
Run the scheduler as a separate process:
celery -A app.celery_app.celery_app beat --loglevel=info
Then:
- Beat enqueues tasks on schedule
- Workers consume and execute them
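For local development only, you can also embed the scheduler inside a worker process with the -B flag instead of running Beat separately (not recommended for production):
celery -A app.celery_app.celery_app worker -B -Q celery,emails,reports --loglevel=info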
9. When to Use FastAPI BackgroundTasks vs Celery
FastAPI provides a lightweight background feature: BackgroundTasks.
from fastapi import BackgroundTasks

def send_email_sync(to: str):
    ...

@app.post("/signup")
def signup(..., background_tasks: BackgroundTasks):
    background_tasks.add_task(send_email_sync, user.email)
    return {"status": "ok"}
It’s very convenient, but:
- Runs in the same process as FastAPI
- If the process dies, tasks are lost
- No advanced retry/scheduling features
Celery, on the other hand:
- Runs tasks in separate processes/containers
- Tasks remain in the queue even if a worker temporarily dies
- Supports retries, scheduling, result storage, and more
A simple rule of thumb:
- Short tasks where failure isn’t critical → BackgroundTasks
- Heavy tasks or tasks requiring reliable execution/retry → Celery (job queue)
10. Ops-Oriented Design Tips
A few practical points for operating Celery in real systems.
10.1 Logging and monitoring
- Aggregate success/failure logs separately as “worker logs,” not just FastAPI logs (a signal-based sketch follows after this list)
- Track failure rates and average duration as metrics to find bottlenecks
- For key tasks, define alert thresholds such as “3 consecutive failures”
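One lightweight way to get those worker-side failure logs is Celery’s signal hooks. A minimal sketch (the logger name is arbitrary):
# app/celery_app.py (sketch: log every task failure)
import logging

from celery.signals import task_failure

logger = logging.getLogger("celery.task_failures")

@task_failure.connect
def log_task_failure(sender=None, task_id=None, exception=None, **kwargs):
    # sender is the task object, exception is what the task raised
    logger.error("Task %s (%s) failed: %r", sender.name if sender else "?", task_id, exception)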
10.2 Queue design
- Separate queues by task nature (e.g., emails, reports, default)
- Allocate different numbers of workers per queue to prioritize critical work (example commands below)
- Consider backpressure: if the queue grows too large, add API-side throttling or acceptance limits
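For example, dedicated workers per queue can be started like this (the concurrency numbers are arbitrary):
celery -A app.celery_app.celery_app worker -Q emails --concurrency=4 --loglevel=info
celery -A app.celery_app.celery_app worker -Q reports,celery --concurrency=2 --loglevel=info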
10.3 User experience on errors
- Decide how to communicate failures to users (notifications, automatic retries, support prompts)
- Design end-to-end flows such as “email when finished” or “update dashboard status” together with the API
11. Adoption Roadmap (Step-by-Step)
You don’t have to adopt everything at once. Here’s a gradual path:
1. Start by making light work asynchronous with BackgroundTasks
   - For example, email sending.
2. Run Celery + Redis locally
   - Build a small PoC based on the minimal setup in this article.
3. Move one heavy task to Celery
   - Start with report generation or heavy aggregation.
4. Add task-status APIs and dashboards
   - Use task_id to show progress/results.
5. Introduce retries, queue splitting, and scheduling where needed
   - Apply advanced features only to the tasks that truly need them.
6. Deploy to production (Docker / orchestration)
   - Split containers: FastAPI, Celery workers, Redis; manage via Docker Compose or Kubernetes.
12. Summary
- Putting heavy work inside FastAPI can easily cause delays, timeouts, and throughput drops.
- Combining Celery + Redis makes it straightforward to move heavy or retry-critical work to separate worker processes.
- The basic split is: FastAPI = enqueue + fast response, Celery = execute reliably in the background.
- Think of BackgroundTasks for “light tasks” and Celery for “tasks requiring reliability and scale,” and migrate gradually.
- You don’t need to aim for a huge job-queue system immediately. Start by extracting one task, and your service can grow into a more resilient structure that’s harder to crash and less likely to bog down.
Thank you for reading this far.
I hope your “reliable workers behind the scenes” quietly take on more and more of the load in your FastAPI app—one steady step at a time.

