I’m developing a production application using FastAPI where the API response schemas are frequently changing due to stakeholder requirements. The app is nearing production, and we have comprehensive unit tests in place.
Our testing framework consists of:
- Unit tests using pytest
- Two static files:
  - expected_outputs.json: contains expected API responses
  - mock_inputs.json: contains test input data
- Tests that compare mocked FastAPI client responses against expected outputs
The problem is that, as stakeholders regularly request new fields in our API responses, our unit tests keep breaking because the JSON responses no longer match the expected outputs defined in our test files. While we can remove obsolete schemas, I'm questioning whether maintaining these unit tests is worthwhile given the constantly evolving schema.
Here’s a simplified version of our current testing approach:
```python
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel


class UserCreate(BaseModel):
    username: str
    email: str


class User(UserCreate):
    id: int


app = FastAPI()


@app.post("/users/", response_model=User)
async def create_user(user: UserCreate):
    # Simplified example
    return User(id=1, **user.model_dump())


# Original test
def test_create_user(client):
    ## In our real code we refer to the objects in mock_inputs.json
    response = client.post("/users/", json={
        "username": "john_doe",
        "email": "john@example.com"
    })
    assert response.status_code == 200
    ## In our real code we assert against the objects in expected_outputs.json
    assert response.json() == {
        "id": 1,
        "username": "john_doe",
        "email": "john@example.com"
    }
    assert response.json()['id'] == 1
    ...
    assert response.json()['email'] == "john@example.com"
```
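For completeness, here is a rough sketch of how the real suite wires the two JSON files into a test. The file contents and the `create_user` key are stand-ins for illustration (written to a temp directory so the sketch is self-contained), not our actual fixtures:

```python
import json
import tempfile
from pathlib import Path

# Stand-ins for the real fixture files; contents are hypothetical.
fixtures = Path(tempfile.mkdtemp())
(fixtures / "mock_inputs.json").write_text(json.dumps({
    "create_user": {"username": "john_doe", "email": "john@example.com"}
}))
(fixtures / "expected_outputs.json").write_text(json.dumps({
    "create_user": {"id": 1, "username": "john_doe", "email": "john@example.com"}
}))


def load_case(name):
    """Return (input payload, expected response) for one named test case."""
    inputs = json.loads((fixtures / "mock_inputs.json").read_text())
    expected = json.loads((fixtures / "expected_outputs.json").read_text())
    return inputs[name], expected[name]
```

A test then becomes `payload, expected = load_case("create_user")` followed by the same `client.post(...)` and exact-equality assertion as above, which is exactly why every schema change ripples into both JSON files.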
When a new field is added, for example `full_name`, the test also has to be modified:
```python
class UserCreate(BaseModel):
    username: str
    email: str
    full_name: Optional[str] = None
    phone: Optional[str] = None


# Updated test
def test_create_user(client):
    ## In our real code we refer to the objects in mock_inputs.json
    response = client.post("/users/", json={
        "username": "john_doe",
        "email": "john@example.com",
        "full_name": "John Locke"
    })
    assert response.status_code == 200
    ## In our real code we assert against the objects in expected_outputs.json
    assert response.json() == {
        "id": 1,
        "username": "john_doe",
        "email": "john@example.com",
        "full_name": "John Locke",
        "phone": None
    }
    assert response.json()['id'] == 1
    ...
    assert response.json()['full_name'] == "John Locke"  ## Added assertion for the new field
```
Right now, we are adding the new fields to the expected outputs and mock inputs (and sometimes removing old fields when required), but this is hard to maintain: we have to update the fixtures for a large number of endpoints, and there are instances where tens of new fields are added at once.

It is also worth mentioning that the schemas in question are rather big (25+ fields per schema on average), and we would like to test multiple scenarios, such as optional fields being null and fields having different data types.
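To make the "multiple scenarios" point concrete, here is a sketch of the kind of case matrix we would want per endpoint. The payloads and expected statuses are illustrative, not our real fixtures, and the 422 for a wrong type reflects FastAPI's default validation error status:

```python
# Hypothetical scenario matrix for the /users/ endpoint: each case is
# (description, payload, expected_status). With 25+ fields per schema,
# enumerating cases like these by hand is what makes maintenance painful.
SCENARIOS = [
    ("all fields present",
     {"username": "john_doe", "email": "john@example.com",
      "full_name": "John Locke", "phone": "555-0100"}, 200),
    ("optional field explicitly null",
     {"username": "john_doe", "email": "john@example.com",
      "full_name": None}, 200),
    ("wrong data type",
     {"username": 123, "email": "john@example.com"}, 422),
]


def run_scenarios(client):
    """Post each payload and check only the status code, not the full body."""
    for description, payload, expected_status in SCENARIOS:
        response = client.post("/users/", json=payload)
        assert response.status_code == expected_status, description
```

Multiplying a matrix like this by every endpoint and every schema change is the scale of the maintenance burden in question.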