Hi there! I'm Maneshwar. Right now, I’m building LiveAPI, a first-of-its-kind tool that helps you automatically index API endpoints across all your repositories. LiveAPI makes it easier to discover, understand, and interact with APIs in large infrastructures.
Recently, while indexing some deeply structured API data into Meilisearch, I ran into a frustrating bug.
At first, it looked like a random ingestion failure, but the real issue was hidden in the fine print:
"A document cannot contain more than 65535 fields."
— Meilisearch error message
This is one of those issues that doesn’t show up until you scale — and when it does, it’s subtle, annoying, and non-trivial to debug.
In this post, I’ll walk through:
- Why this happens
- How deeply nested JSON structures can sabotage you
- And the exact transformation I used to solve it
What Caused the Bug?
Meilisearch automatically flattens nested objects when you index a document. This means every nested path becomes its own field.
Take a look at this document I was trying to index:
```json
{
  "project_id": "c610...",
  "api": {
    "path": "/polls/tags/",
    "spec": {
      "get": {
        "description": "Retrieves a list of all available tags...",
        "summary": "Get Tags",
        "tags": ["Tag"],
        "x-confidence": 100,
        "x-sourceFiles": [
          {
            "fileName": "views.py",
            "filePath": "polls-app-backend/mysite/polls/views.py",
            "handlerName": "tags_manager",
            "lineNumber": 185
          }
        ],
        "x-subProjectIndex": "Mysite",
        "x-subProjectRoot": "polls-app-backend/mysite"
      }
    }
  },
  "project": {
    "project_id": 2846,
    "user": 516,
    ...
  },
  "type": "api"
}
```
Looks innocent, right?
But under the hood, Meilisearch turns each nested object into a dot-notated path. For example:
```
api.spec.get.description
api.spec.get.x-sourceFiles[0].fileName
api.spec.get.x-sourceFiles[0].filePath
...
```
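To see how quickly these paths add up, here's a small illustrative sketch of dot-notation flattening. This mirrors the behavior described above, not Meilisearch's internal code, and `flattenFields` is a hypothetical helper:

```javascript
// Hypothetical illustration of dot-notation flattening (not Meilisearch's code):
// every leaf value in a nested document becomes its own dot-notated field.
function flattenFields(obj, prefix = "") {
  const fields = {};
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (Array.isArray(value) && value.every((v) => v !== null && typeof v === "object")) {
      // arrays of objects: each element expands into its own set of fields
      value.forEach((item, i) => {
        Object.assign(fields, flattenFields(item, `${path}[${i}]`));
      });
    } else if (value !== null && typeof value === "object" && !Array.isArray(value)) {
      Object.assign(fields, flattenFields(value, path));
    } else {
      fields[path] = value; // primitives and primitive arrays count as one field
    }
  }
  return fields;
}

// 200 source-file entries with 4 keys each → 800 fields from a single array
const doc = {
  api: {
    sourceFiles: Array.from({ length: 200 }, (_, i) => ({
      fileName: `f${i}.py`,
      filePath: `src/f${i}.py`,
      handlerName: "handler",
      lineNumber: i,
    })),
  },
};
console.log(Object.keys(flattenFields(doc)).length); // 800
```

Run a counter like this over your own documents before indexing and you can spot a field explosion before Meilisearch does.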
Now imagine you have hundreds of these `x-sourceFiles` entries, each with multiple fields. You'll quickly blow past Meilisearch's 65,535-field limit.
How Much Nesting Is Too Much?
Each unique path counts as one field. So:
- 10 array items × 5 keys each = 50 fields
- 100 endpoints with source files = 100 × 5 = 500 fields (just from that one key)
- Add in description, summary, tags, confidence, etc.
Even modest APIs with OpenAPI-like structures can hit this limit fast.
The Fix: Flatten the Structure
To fix this, I removed unnecessary nested objects and converted deep substructures into strings.
Here’s the updated version that works perfectly:
```json
{
  "project_id": "c610...",
  "api_path": "/polls/tags/",
  "api_method": "get",
  "api_description": "Retrieves a list of all available tags...",
  "api_summary": "Get Tags",
  "api_tags": ["Tag"],
  "api_confidence": 100,
  "api_sourceFiles": "{\"fileName\":\"views.py\",\"filePath\":\"polls-app-backend/mysite/polls/views.py\",\"handlerName\":\"tags_manager\",\"lineNumber\":185}",
  "api_subProjectIndex": "Mysite",
  "api_subProjectRoot": "polls-app-backend/mysite",
  "project_project_id": 2846,
  "project_user": 516,
  ...
  "type": "api"
}
```
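The transformation itself is mechanical. Here's a minimal sketch, assuming one HTTP method per spec object; `flattenApiDoc` is a hypothetical helper name, and the field names match the flattened document above:

```javascript
// Sketch: turn the nested document into a flat one with prefixed keys,
// stringifying the array of objects so it counts as a single field.
// Assumes one method (e.g. "get") per spec; adapt to your own schema.
function flattenApiDoc(doc) {
  const [method, op] = Object.entries(doc.api.spec)[0];
  return {
    project_id: doc.project_id,
    api_path: doc.api.path,
    api_method: method,
    api_description: op.description,
    api_summary: op.summary,
    api_tags: op.tags,
    api_confidence: op["x-confidence"],
    api_sourceFiles: JSON.stringify(op["x-sourceFiles"]),
    api_subProjectIndex: op["x-subProjectIndex"],
    api_subProjectRoot: op["x-subProjectRoot"],
    project_project_id: doc.project.project_id,
    project_user: doc.project.user,
    type: doc.type,
  };
}
```

Map over your documents with a function like this right before calling the indexing API.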
Instead of nesting the `x-sourceFiles` object, I simply stringified it with `JSON.stringify`. This means:
- Just one field instead of 4–5
- Still searchable (full-text)
- No more field explosion
If I needed to query or filter on `filePath` later, I'd parse the field client-side after retrieval.
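That client-side step is a single `JSON.parse`. A sketch, with `sourceFilesFrom` as a hypothetical helper and a hand-built search hit standing in for a real Meilisearch response:

```javascript
// After retrieving a hit, parse the stringified field back into objects
// before filtering on it client-side.
function sourceFilesFrom(hit) {
  return JSON.parse(hit.api_sourceFiles || "[]");
}

const hit = {
  api_sourceFiles:
    '[{"fileName":"views.py","filePath":"polls-app-backend/mysite/polls/views.py"}]',
};
const matches = sourceFilesFrom(hit).filter((f) => f.filePath.endsWith("views.py"));
console.log(matches.length); // 1
```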
TL;DR: How to Avoid the Meilisearch 65535 Field Limit
| ✅ Do | ❌ Don’t |
|---|---|
| Flatten nested objects | Use large, deeply nested structures |
| Stringify complex arrays | Keep arrays of objects |
| Use prefix naming (`api_*`) | Mix unrelated fields at the top level |
| Audit your field count at scale | Wait for ingestion to break |
Final Thoughts
This issue is a classic example of how something as small as a nested array of objects can cause production-scale failures. Thankfully, Meilisearch makes it fast and easy to index — once you understand its limitations.
If you’re storing API specs, logs, traces, or anything with hierarchical data: flatten early, flatten smart.
LiveAPI helps you get all your backend APIs documented in a few minutes.
With LiveAPI, you can generate interactive API docs that allow users to search and execute endpoints directly from the browser.
If you're tired of updating Swagger manually or syncing Postman collections, give it a shot.