Three Python web frameworks. Three concurrency models.
## TL;DR

| | Chirp | Flask | FastAPI |
|---|---|---|---|
| Concurrency | Sync, thread-based | Sync, process-based | Async, process-based |
| Free-threading | Yes (3.14t) | No | No |
| Shared app state | One app, many threads | One app per process | One app per process |
| Best for | HTMX, server-rendered | General purpose | APIs, async I/O |
## The concurrency model

| | Flask | FastAPI | Chirp |
|---|---|---|---|
| Protocol | WSGI | ASGI | ASGI |
| Per request | One at a time per process | Many via event loop | One per thread |
| Scale with | Processes (Gunicorn --workers 4) | Processes (Uvicorn --workers 4) | Threads (one process) |
| App instances | One per process | One per process | One shared across threads |
| I/O-bound | Blocks | Scales well | Blocks (use async) |
| CPU-bound | Blocks process | Blocks event loop | Parallel (3.14t) |
Chirp is designed for this model: frozen config, ContextVar request isolation, and double-checked locking. Under the GIL you would not get real parallelism; on free-threaded Python you do.
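The two techniques named above can be sketched in plain stdlib Python. This is an illustration of the pattern, not Chirp's internals; the names (`current_request`, `get_template_cache`) are hypothetical:

```python
import threading
from contextvars import ContextVar

# Each request binds its own value; worker threads never see each other's
# request, even though they share one app instance.
current_request: ContextVar[dict] = ContextVar("current_request")

_template_cache = None
_cache_lock = threading.Lock()

def get_template_cache() -> dict:
    """Double-checked locking: build a shared resource once across threads."""
    global _template_cache
    if _template_cache is None:            # first check, lock-free fast path
        with _cache_lock:
            if _template_cache is None:    # second check, under the lock
                _template_cache = {"search.html": "<compiled template>"}
    return _template_cache

def handle(request: dict) -> str:
    token = current_request.set(request)   # per-thread/per-task isolation
    try:
        cache = get_template_cache()
        return f"{request['path']} -> {len(cache)} templates"
    finally:
        current_request.reset(token)
```

The double check matters because the fast path runs on every request; only the rare first call pays for the lock.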
## When free-threading matters

| Free-threading matters | Free-threading doesn't matter |
|---|---|
| CPU-bound work per request (templates, Markdown, rendering) | I/O-bound app (database, external APIs) |
| Share immutable config across workers without process duplication | Pure API — FastAPI's async model fits |
| Server-rendered apps (HTMX, forms) with thread-based concurrency | Python 3.13 or earlier — free-threading is 3.14+ |
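The left column comes down to one observation: the same thread pool that takes turns under the GIL runs CPU-bound work on separate cores under a free-threaded build, with no code changes. A minimal sketch (the workload is an arbitrary stand-in for rendering):

```python
from concurrent.futures import ThreadPoolExecutor

def render_heavy(n: int) -> int:
    # Stand-in for CPU-bound per-request work (templates, Markdown).
    total = 0
    for i in range(n):
        total += i * i
    return total

def handle_requests(sizes: list[int]) -> list[int]:
    # Under the GIL these threads interleave; on free-threaded 3.14t
    # they run in parallel. Same code either way.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(render_heavy, sizes))
```

On an I/O-bound workload the threads spend their time waiting either way, which is why the right column doesn't care.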
## Same route, three frameworks

A search route that returns a full page for browser navigation and a fragment for HTMX swaps. Same behavior, different code:
```python
# Chirp
from chirp import App, AppConfig, Page, Request

app = App(AppConfig())

@app.route("/search")
def search(request: Request):
    q = request.query.get("q", "")
    books = find_books(q)
    return Page("search.html", "results", books=books, query=q)

# One return type. Full page or fragment based on HX-Request header.
# No conditionals. No make_response().
```
```python
# Flask
from flask import Flask, request, render_template
from jinja2_fragments.flask import render_block  # third-party jinja2-fragments

app = Flask(__name__)

@app.route("/search")
def search():
    q = request.args.get("q", "")
    books = find_books(q)
    if request.headers.get("HX-Request"):
        return render_block("search.html", "results", books=books, query=q)
    return render_template("search.html", books=books, query=q)

# Manual branch, plus an extra package for block rendering. Forget the
# HX-Request check once and you get a full page inside a div.
# No type-level contract.
```
```python
# FastAPI
from fastapi import FastAPI, Request
from fastapi.templating import Jinja2Templates

app = FastAPI()
templates = Jinja2Templates(directory="templates")

@app.get("/search")
async def search(request: Request, q: str = ""):
    books = find_books(q)
    if request.headers.get("hx-request"):
        return templates.TemplateResponse(
            "partials/results.html",
            {"request": request, "books": books},
        )
    return templates.TemplateResponse(
        "search.html", {"request": request, "books": books}
    )

# Same manual branch. FastAPI typically uses separate partial templates
# (partials/results.html) rather than block rendering. Two templates
# to maintain for one logical view.
```
Chirp's Page type encapsulates the decision. No conditional. The return type is the contract.
## OOB multi-fragment — update multiple targets in one response

When you add a contact, you need to update the table and the count badge. Chirp's OOB returns multiple fragments; each can target a different id:
```python
# Chirp
@app.route("/contacts", methods=["POST"])
async def add_contact(request: Request):
    form = await request.form()
    result = validate(form, _CONTACT_RULES)
    if not result:
        return ValidationError(
            "contacts.html", "contact_form",
            retarget="#form-section",
            errors=result.errors,
            form={"name": form.get("name", ""), "email": form.get("email", "")},
        )
    _add_contact(form.get("name", ""), form.get("email", ""))
    contacts = _get_contacts()
    return OOB(
        Fragment("contacts.html", "contact_table", contacts=contacts),
        Fragment("contacts.html", "contact_count", target="contact-count", count=len(contacts)),
    )

# One response. Two DOM targets updated. No manual hx-swap-oob wiring.
```
```python
# Flask
@app.route("/contacts", methods=["POST"])
def add_contact():
    form = request.form
    errors = validate_contact(form)
    if errors:
        return render_template("partials/form.html", errors=errors, form=form), 422
    _add_contact(form["name"], form["email"])  # helper, distinct from the view
    contacts = _get_contacts()
    # Must render two partials, add hx-swap-oob to each, concatenate
    table = render_template("partials/table.html", contacts=contacts)
    count = render_template("partials/count.html", count=len(contacts))
    return table + count  # or use a wrapper that adds OOB attributes

# No built-in OOB. You manually add hx-swap-oob="true" to each extra
# fragment, on an element whose id matches the target.
```
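The wrapper mentioned in the comment is small. A hypothetical shape (`oob` and `oob_response` are not Flask APIs, just helpers you would write yourself):

```python
def oob(fragment_html: str, target_id: str) -> str:
    """Wrap a rendered fragment so HTMX swaps it out-of-band into #target_id."""
    return f'<div id="{target_id}" hx-swap-oob="true">{fragment_html}</div>'

def oob_response(main_html: str, *extras: tuple[str, str]) -> str:
    # The first fragment in an HTMX response swaps normally; every extra
    # (html, target_id) pair gets the out-of-band marker.
    return main_html + "".join(oob(html, tid) for html, tid in extras)
```

This is the wiring Chirp's OOB type does for you.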
```python
# FastAPI
@app.post("/contacts")
async def add_contact(request: Request):
    form = await request.form()
    errors = validate_contact(form)
    if errors:
        return templates.TemplateResponse("partials/form.html", {...}, status_code=422)
    _add_contact(form["name"], form["email"])  # helper, distinct from the view
    contacts = _get_contacts()
    table = templates.TemplateResponse("partials/table.html", {...})
    count = templates.TemplateResponse("partials/count.html", {...})
    # HTMX OOB requires concatenating HTML with hx-swap-oob on each part

# No built-in OOB type. Manual concatenation or custom response builder.
```
Chirp's OOB and ValidationError are first-class types. Flask and FastAPI require manual OOB markup and 422 handling. See chirp/examples/contacts for the full CRUD app.
## Streaming with concurrent awaitables

Chirp's Stream resolves multiple async values concurrently, then streams rendered chunks. No manual asyncio.gather or chunk assembly:
```python
# Chirp
@app.route("/")
async def index():
    return Stream(
        "dashboard.html",
        stats=load_stats(),  # awaitable
        feed=load_feed(),    # awaitable
    )

# Framework resolves both concurrently, streams HTML as chunks arrive.
# Same template. No StreamingResponse boilerplate.
```
```python
# Flask
@app.route("/")
def index():
    # No native streaming with concurrent I/O. Options:
    # 1. Block: stats, feed = sync_fetch_both() — no concurrency
    # 2. ThreadPool: run async in thread, block — awkward
    # 3. SSE: different pattern, not progressive HTML
    stats = load_stats_sync()
    feed = load_feed_sync()
    return render_template("dashboard.html", stats=stats, feed=feed)
```
```python
# FastAPI
import asyncio

@app.get("/")
async def index(request: Request):
    stats, feed = await asyncio.gather(load_stats(), load_feed())
    return templates.TemplateResponse(
        "dashboard.html", {"request": request, "stats": stats, "feed": feed}
    )

# Can do concurrent fetch, but no progressive HTML streaming.
# For StreamingResponse you'd build a custom generator.
```
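That custom generator can be sketched with plain asyncio: start every awaitable at once and yield an HTML chunk as each one finishes. The chunk format and names here are illustrative, not any framework's API:

```python
import asyncio
from typing import AsyncIterator, Awaitable

async def stream_chunks(parts: dict[str, Awaitable[str]]) -> AsyncIterator[str]:
    """Resolve named awaitables concurrently; yield HTML in completion order."""
    async def tagged(name: str, aw: Awaitable[str]) -> tuple[str, str]:
        return name, await aw

    tasks = [asyncio.ensure_future(tagged(n, aw)) for n, aw in parts.items()]
    for fut in asyncio.as_completed(tasks):
        name, html = await fut
        # Each chunk carries its target id, so the client can place it
        # as soon as it arrives instead of waiting for the slowest part.
        yield f'<div id="{name}">{html}</div>'
```

In FastAPI you would hand this generator to StreamingResponse with media_type="text/html"; this assembly is roughly what Chirp's Stream hides.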
## Response headers (HX-Trigger, HX-Push-Url)

Chirp lets you return a tuple of (body, status, headers) for HTMX response headers:
```python
# Chirp — delete contact, trigger client-side event
@app.route("/contacts/{contact_id}", methods=["DELETE"])
def delete_contact(contact_id: int):
    _delete_contact(contact_id)
    contacts = _get_contacts()
    return (
        Fragment("contacts.html", "contact_table", contacts=contacts),
        200,
        {"HX-Trigger": "contactDeleted"},
    )

# Edit contact, push URL to history
@app.route("/contacts/{contact_id}/edit")
def edit_contact(contact_id: int):
    contact = _get_contact(contact_id)
    return (
        Fragment("contacts.html", "edit_row", contact=contact),
        200,
        {"HX-Push-Url": f"/contacts/{contact_id}/edit"},
    )
```
| | Chirp | Flask / FastAPI |
|---|---|---|
| Headers | (body, status, headers) tuple | make_response() or Response(..., headers={}) |
| Style | Declarative in handler | Imperative |
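The declarative style costs a framework very little: a handler return value just gets normalized before the response is built. A hypothetical normalizer, sketching the tuple convention above (not Chirp's actual code):

```python
def normalize(rv):
    """Accept body, (body, status), or (body, status, headers) from a handler."""
    status, headers = 200, {}
    if isinstance(rv, tuple):
        if len(rv) == 3:
            body, status, headers = rv
        elif len(rv) == 2:
            body, status = rv
        else:
            body = rv[0]
    else:
        body = rv
    return body, status, headers
```

Flask accepts similar tuples for plain responses; the difference is that Chirp pairs the convention with fragment types, so HTMX headers ride along without make_response().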
## Hello world — minimal app
```python
# Chirp
from chirp import App, AppConfig

app = App(AppConfig())

@app.route("/")
def index():
    return "Hello, World!"

app.run()  # or: pounce myapp:app --workers 4
```
```python
# Flask
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello, World!"

# Run: gunicorn -w 4 app:app
```
```python
# FastAPI
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def index():
    return "Hello, World!"

# Run: uvicorn app:app --workers 4
```
| | Chirp | Flask | FastAPI |
|---|---|---|---|
| Handler | Sync | Sync | Async |
| Scale with | Threads (3.14t) | Processes | Processes |
## Where each shines (and stumbles)

Chirp shines: type-driven responses. No make_response() or HTMX conditionals. Thread-based concurrency on 3.14t. One app instance shared across workers. Built for server-rendered + HTMX.

Chirp stumbles: newer. Smaller ecosystem. Python 3.14+ for free-threading. Requires Pounce or a compatible ASGI server.

Flask shines: mature. Huge ecosystem. Simple. WSGI everywhere. Works on any Python 3.8+.

Flask stumbles: process-based. No shared app state across workers. HTMX requires manual branching. No built-in type-driven response model.

FastAPI shines: async. OpenAPI. Pydantic. Great for JSON APIs. Fast. Well-documented.

FastAPI stumbles: async for I/O; CPU-bound work blocks. Process-based. Server-rendered HTML is not the primary use case. HTMX support is manual.

| | Shines | Stumbles |
|---|---|---|
| Chirp | Type-driven responses, no HTMX conditionals, thread-based on 3.14t | Newer, smaller ecosystem, 3.14+ for free-threading |
| Flask | Mature, huge ecosystem, WSGI everywhere | Process-based, manual HTMX branching |
| FastAPI | Async, OpenAPI, Pydantic, great for APIs | CPU-bound blocks event loop, HTMX manual |
## Type-driven responses

Chirp's response model is different. You return a type — Template, Fragment, Page, Stream, Suspense, EventStream — and the framework negotiates. See Type-Driven Responses for the full story.
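The negotiation itself is ordinary type dispatch. A sketch of the idea only, not Chirp's internals; `negotiate` and this minimal `Page` are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Page:
    template: str
    fragment: str

def negotiate(response, headers: dict) -> str:
    # The return type carries both outcomes; the framework picks one
    # per request based on the HX-Request header.
    if isinstance(response, Page):
        if headers.get("HX-Request"):
            return f"fragment {response.fragment!r} of {response.template!r}"
        return f"full page {response.template!r}"
    return str(response)  # plain strings pass through
```

The handler never inspects headers; the branch lives in one place instead of in every route.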
## When to choose which

| Use case | Choice | Why |
|---|---|---|
| General purpose, safe default | Flask | Mature, huge ecosystem |
| APIs, async I/O, OpenAPI | FastAPI | Best for API-first apps |
| Server-rendered, HTMX, free-threading | Chirp | Thread-based concurrency, type-driven responses |