MCP Servers & Cloudflare Managers: FAQ
Get answers about our MCP servers (SQLite, PostgreSQL, MySQL, Memory Journal) and Cloudflare managers (D1, KV, R2, DO). Installation guides, configuration, and troubleshooting for AI-assisted development and cloud management.
General Questions
A Model Context Protocol (MCP) server is a specialized application that extends AI assistants with additional capabilities. MCP servers provide tools, resources, and prompts that allow AI assistants to perform tasks they couldn't do natively, such as database operations, file management, or specialized data processing.
Our MCP servers integrate seamlessly with Claude Desktop, Cursor, ChatGPT, Gemini, and other MCP-compatible clients, giving you powerful capabilities for database management, project journaling, and development workflows.
Adamic provides four production-ready MCP servers and four Cloudflare management applications:
MCP Servers:
- SQLite MCP Server v2.6.4 - 73 specialized tools across 14 categories including JSON operations, statistical analysis, semantic vector search, and SpatiaLite geospatial operations
- PostgreSQL MCP Server v1.2.0 - 63 tools, 10 intelligent resources, and 10 guided prompts. Features tool filtering, pgvector, PostGIS, and zero known vulnerabilities
- MySQL MCP Server v2.1.0 - 191 specialized tools, 18 observability resources, and 19 AI-powered prompts. Features OAuth 2.1 authentication, smart tool filtering, MySQL Router/ProxySQL/InnoDB Cluster integrations, and strict TypeScript with 97% test coverage
- Memory Journal MCP v3.0.0 - 27 tools, 14 prompts, 14 resources. TypeScript rewrite with Pure JS Stack, backup/restore tools, and semantic search
Cloudflare Managers:
- D1 Manager v2.0.0 - D1 database management with Drizzle ORM, Time Travel, Read Replication, R2 backups, and FTS5 search
- KV Manager v2.1.0 - Workers KV management with visual color tags, dual metadata system, bulk operations, and cross-namespace search
- R2 Manager v2.0.0 - R2 bucket management with AI Search integration, job history, rate limiting, and multi-bucket downloads
- DO Manager v1.2.0 - Durable Objects management with cross-namespace instance migration, freeze/unfreeze protection, admin hooks, SQL console, and batch operations
Use SQLite MCP Server when:
- You need a local, file-based database without setup
- Working with embedded databases or mobile apps
- You want 73 specialized tools including JSON helpers and SpatiaLite
- Tool filtering is important (v2.6.4 feature for MCP client limits)
Use PostgreSQL MCP Server when:
- You have an existing PostgreSQL database (versions 13-18)
- Need enterprise features like pgvector embeddings or PostGIS
- Want intelligent resources for database meta-awareness
- You need tool filtering for client limits and token savings (v1.2.0)
Both servers are production-ready, free, open source, and support Docker deployment.
Yes! All MCP servers and Cloudflare managers are completely free and open source under the MIT License. You can use them for personal projects, commercial applications, or modify them to suit your needs.
Available on GitHub and Docker Hub at no cost, with no usage limits or subscription fees.
MCP Servers: Installation & Setup
All four MCP servers support multiple installation methods:
Method 1: Docker (Recommended)
SQLite (Docker):

```shell
docker pull writenotenow/sqlite-mcp-server:latest
docker run -i --rm -v $(pwd):/workspace writenotenow/sqlite-mcp-server:latest --db-path /workspace/database.db
```

PostgreSQL (Docker):

```shell
docker pull writenotenow/postgres-mcp-enhanced:latest
docker run -i --rm -e DATABASE_URI="postgresql://user:pass@localhost:5432/db" writenotenow/postgres-mcp-enhanced:latest --access-mode=restricted
```

MySQL MCP (npm - recommended):

```shell
npm install -g @neverinfamous/mysql-mcp
```

MySQL MCP (Docker):

```shell
docker pull writenotenow/mysql-mcp:latest
docker run -i --rm writenotenow/mysql-mcp:latest --transport stdio --mysql mysql://user:password@host.docker.internal:3306/database
```

Memory Journal (npm - recommended):

```shell
npm install -g memory-journal-mcp
```

Memory Journal (Docker):

```shell
docker pull writenotenow/memory-journal-mcp:latest
docker run -i --rm -v ./data:/app/data writenotenow/memory-journal-mcp:latest
```

Method 2: Python/PyPI

SQLite (PyPI):

```shell
pip install sqlite-mcp-server-enhanced
```

PostgreSQL (PyPI):

```shell
pip install postgres-mcp-enhanced
```

Memory Journal (PyPI):

```shell
pip install memory-journal-mcp
```

Method 3: From Source
Clone from GitHub and follow the README instructions for each project.
Add MCP servers to your client's configuration file:
Cursor IDE - Edit ~/.cursor/mcp.json or %USERPROFILE%\.cursor\mcp.json (Windows):
```json
{
  "mcpServers": {
    "sqlite-mcp": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "-v", "$(pwd):/workspace",
               "writenotenow/sqlite-mcp-server:latest",
               "--db-path", "/workspace/database.db"]
    },
    "postgres-mcp": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "-e", "DATABASE_URI",
               "writenotenow/postgres-mcp-enhanced:latest",
               "--access-mode=restricted"],
      "env": {
        "DATABASE_URI": "postgresql://user:pass@localhost:5432/db"
      }
    },
    "memory-journal": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "-v", "./data:/app/data",
               "writenotenow/memory-journal-mcp:latest",
               "python", "src/server.py"]
    }
  }
}
```

Claude Desktop - Similar configuration in Claude's settings.
Restart your MCP client after configuration changes.
All MCP Servers:
- Python 3.12, 3.13, or 3.14 (for SQLite/PostgreSQL/Memory Journal)
- Node.js 18+ (for MySQL MCP)
- Docker (optional but recommended)
- MCP-compatible client (Cursor, Claude Desktop, etc.)
PostgreSQL MCP Server:
- PostgreSQL 13-18 installed and running
- DATABASE_URI environment variable
- Required extensions: pg_stat_statements (built-in)
- Optional: pgvector v0.8.0, PostGIS v3.5.0
MySQL MCP Server:
- MySQL 5.7+ or 8.0+ server
- Connection string or environment variables (MYSQL_HOST, MYSQL_PORT, MYSQL_USER, MYSQL_PASSWORD, MYSQL_DATABASE)
- Optional: MySQL Router, ProxySQL, or MySQL Shell for ecosystem integrations
SQLite MCP Server:
- SQLite 3.50+ (for JSONB support)
- No database setup required
Memory Journal MCP:
- Optional: GITHUB_TOKEN for GitHub integration
- Git installed for repository context
SQLite MCP Server
SQLite MCP Server v2.6.4 provides 73 tools across 14 categories:
- Core Database (15 tools) - CRUD operations, schema management, transactions
- JSON Helper Tools (6 tools) - Simplified JSON operations with auto-normalization
- Text Processing (9 tools) - Regex, fuzzy matching, phonetic search
- Statistical Analysis (8 tools) - Descriptive stats, percentiles, time series
- Virtual Tables (8 tools) - CSV, R-Tree, series generation
- Semantic Search (8 tools) - Embeddings, vector similarity, hybrid search
- Geospatial (7 tools) - SpatiaLite spatial operations
- Full-Text Search (3 tools) - FTS5 creation, indexing, BM25 ranking
See the complete documentation for details.
Tool filtering helps address MCP client tool limits (like Windsurf's 100-tool limit). Use the SQLITE_MCP_TOOL_FILTER environment variable:
```shell
# Reduce to ~50 tools (Windsurf-compatible)
SQLITE_MCP_TOOL_FILTER="-vector,-stats,-spatial,-text"

# Core + JSON only
SQLITE_MCP_TOOL_FILTER="-fts,-vector,-virtual,-spatial,-text,-stats,-admin,-misc"

# Disable admin but keep vacuum and backup
SQLITE_MCP_TOOL_FILTER="-admin,+vacuum_database,+backup_database"
```

Available groups: core, fts, vector, json, virtual, spatial, text, stats, admin, misc
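The filter semantics above can be sketched as set operations: start from all tools, subtract each `-group`, then re-add each `+tool`. This is an illustrative sketch only, with invented group and tool names; the server's actual parsing logic may differ.

```python
# Hypothetical sketch of "-group,+tool" filter semantics.
def apply_tool_filter(tools_by_group: dict[str, list[str]], filter_spec: str) -> set[str]:
    """Start from all tools, drop '-group' entries, re-add '+tool' entries."""
    enabled = {tool for tools in tools_by_group.values() for tool in tools}
    for token in filter_spec.split(","):
        token = token.strip()
        if token.startswith("-"):
            enabled -= set(tools_by_group.get(token[1:], []))
        elif token.startswith("+"):
            enabled.add(token[1:])
    return enabled

# Toy registry (not the real tool list)
tools = {
    "core": ["read_query", "write_query"],
    "admin": ["vacuum_database", "backup_database", "integrity_check"],
}
print(sorted(apply_tool_filter(tools, "-admin,+vacuum_database")))
# ['read_query', 'vacuum_database', 'write_query']
```

The `+` override is applied after the `-` subtraction, which is why `-admin,+vacuum_database` keeps vacuum while dropping the rest of the admin group.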
JSON auto-normalization (v2.6.0+) automatically fixes Python-style JSON for SQLite compatibility:
- Single quotes → double quotes
- Python `True`/`False` → JSON `true`/`false`
- Python `None` → JSON `null`
- Trailing commas removed
- Security validation prevents malicious input
Use JSON Helper Tools like json_insert, json_update, and json_query for simplified operations.
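For intuition, the normalization above can be approximated in a few lines of Python by parsing the text as a Python literal and re-emitting strict JSON. This is a sketch of the idea only, not the server's actual implementation:

```python
# Illustrative sketch: normalize Python-style JSON by round-tripping
# through ast.literal_eval (handles quotes, True/None, trailing commas).
import ast
import json

def normalize_python_json(text: str) -> str:
    """Parse a Python literal and re-emit strict JSON."""
    return json.dumps(ast.literal_eval(text))

print(normalize_python_json("{'active': True, 'note': None,}"))
# {"active": true, "note": null}
```

Note that `ast.literal_eval` rejects arbitrary code, which gives a similar safety property to the input validation mentioned above.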
PostgreSQL MCP Server
63 Tools across 9 categories:
- Core Database (9) - Schema management, SQL execution, health monitoring
- JSON Operations (11) - JSONB operations, validation, security
- Text Processing (5) - Similarity search, full-text, fuzzy matching
- Statistical Analysis (8) - Stats, correlation, regression, time series
- Performance Intelligence (6) - Query optimization, index tuning
- Vector/Semantic Search (8) - pgvector embeddings, similarity, clustering
- Geospatial (7) - PostGIS distance, spatial queries, GIS operations
- Backup & Recovery (4) - Backup planning, restore validation
- Monitoring & Alerting (5) - Real-time monitoring, capacity planning
10 Resources - Database meta-awareness: schema, capabilities, performance, health, extensions, indexes, connections, replication, vacuum, locks, statistics
10 Prompts - Guided workflows: optimize_query, index_tuning, database_health_check, setup_pgvector, json_operations, performance_baseline, backup_strategy, setup_postgis, explain_analyze_workflow, extension_setup
Restricted Mode (Production):
- Read-only operations
- Query validation
- Resource limits
- Recommended for production use
Unrestricted Mode (Development):
- Full database access
- Write operations allowed
- Parameter binding protection
- Use only in development environments
```shell
# Restricted mode
postgres-mcp --access-mode=restricted

# Unrestricted mode
postgres-mcp --access-mode=unrestricted
```

Install the pgvector extension and use the setup_pgvector prompt:
```sql
-- Install pgvector
CREATE EXTENSION IF NOT EXISTS vector;

-- Create table with vector column
CREATE TABLE embeddings (
    id SERIAL PRIMARY KEY,
    content TEXT,
    embedding vector(1536) -- OpenAI embeddings dimension
);

-- Create HNSW index for fast similarity search
CREATE INDEX ON embeddings USING hnsw (embedding vector_cosine_ops);
```

Use the MCP server's vector search tools for similarity queries and clustering.
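For intuition about what `vector_cosine_ops` ranks by, here is the cosine distance it is based on, sketched in plain Python. Use the extension's operators for real queries; this is only to show the metric:

```python
# Cosine distance = 1 - cosine similarity, the metric behind
# pgvector's vector_cosine_ops / <=> operator.
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

print(cosine_distance([1.0, 0.0], [1.0, 0.0]))  # 0.0 (identical direction)
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0 (orthogonal)
```

Smaller distances mean more similar embeddings, which is why nearest-neighbor queries over the HNSW index order results ascending by `<=>`.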
MySQL MCP Server
MySQL MCP Server v2.1.0 provides 191 tools across 24 categories:
- Core Database (8 tools) - Read/write queries, tables, indexes
- Transactions (7 tools) - BEGIN, COMMIT, ROLLBACK, savepoints
- JSON Operations (17 tools) - MySQL 5.7+ native JSON functions, merge, diff, stats
- Text Processing (6 tools) - REGEXP, LIKE, SOUNDEX
- Full-Text Search (4 tools) - Natural language search with BM25
- Performance (8 tools) - EXPLAIN, query analysis, slow query detection
- Optimization (4 tools) - Index hints, recommendations
- Admin (6 tools) - OPTIMIZE, ANALYZE, CHECK tables
- Monitoring (7 tools) - PROCESSLIST, status variables
- Backup (4 tools) - Export, import, mysqldump integration
- Replication (5 tools) - Master/slave, binlog management
- Partitioning (4 tools) - Partition management
- Spatial/GIS (12 tools) - Geospatial operations
- Security (9 tools) - Audit, SSL, encryption, data masking
- Cluster (10 tools) - Group Replication, InnoDB Cluster
- Roles (8 tools) - MySQL 8.0 role management
- DocStore (9 tools) - Document Store / X DevAPI collections
- Router (9 tools) - MySQL Router REST API integration
- ProxySQL (12 tools) - ProxySQL management
- Shell (10 tools) - MySQL Shell utilities
See the complete documentation for details.
AI IDEs like Cursor have tool limits (typically 40-50 tools). With 191 tools available, you must use tool filtering via the --tool-filter argument:
Recommended Shortcuts:
- starter (38 tools) - Core, JSON, transactions, text - best for most users
- essential (15 tools) - Minimal footprint
- dev-power (45 tools) - Core, schema, performance, stats
- dba-monitor (35 tools) - Core, monitoring, performance, sys schema
- ecosystem (31 tools) - MySQL Router, ProxySQL, Shell
```shell
# Use the starter shortcut (recommended)
mysql-mcp --transport stdio --mysql mysql://user:pass@localhost:3306/db --tool-filter starter

# Add specific groups to a shortcut
--tool-filter "starter,spatial"

# Remove specific tools
--tool-filter "starter,-mysql_drop_table"
```

MySQL MCP provides first-class integrations for the MySQL ecosystem:
MySQL Router (for InnoDB Cluster)
Connect to the Router REST API for cluster topology and routing status:
```shell
MYSQL_ROUTER_URL=https://localhost:8443
MYSQL_ROUTER_USER=rest_api
MYSQL_ROUTER_PASSWORD=router_password
MYSQL_ROUTER_INSECURE=true
```

ProxySQL Admin

Manage query rules, server pools, and caching:

```shell
PROXYSQL_HOST=localhost
PROXYSQL_PORT=6032
PROXYSQL_USER=radmin
PROXYSQL_PASSWORD=radmin
```

InnoDB Cluster

Connect directly to a cluster node for Group Replication status:

```shell
mysql-mcp --transport stdio --mysql mysql://cluster_admin:password@localhost:3307/mysql --tool-filter cluster
```

Use --tool-filter ecosystem to load Router, ProxySQL, and Shell tools together.
Memory Journal MCP Server
Memory Journal MCP v3.0.0 provides comprehensive GitHub integration:
Automatic Context Capture:
- Current repository, branch, and commit
- GitHub Issues - Auto-detect from branch names
- GitHub Pull Requests - Auto-detect current PR
- GitHub Projects (user & org)
- GitHub Actions workflow runs
v3.0.0 TypeScript Rewrite Features:
- Pure JS Stack with no native dependencies
- Backup/restore tools with auto-backup safety
- Health diagnostics via memory://health resource
- MCP 2025-11-25 compliance with behavioral annotations
Issues & PRs Features:
- Auto-fetch and link entries
- PR lifecycle tracking
- 3 PR workflow prompts: pr-summary, code-review-prep, pr-retrospective
- 3 new resources: Issue/PR entries, PR timelines
Configure GitHub tokens for Projects, Issues, PRs, and Actions integration:
```shell
# User repositories and projects
export GITHUB_TOKEN="your_personal_access_token"

# Organization projects (optional)
export GITHUB_ORG_TOKEN="your_org_token"
export DEFAULT_ORG="your-org-name"
```

Required Scopes:
- repo - Access repositories, issues, and PRs
- project - Access GitHub Projects
- read:org - Read org projects (org token only)

Fallback: If tokens are not set, Memory Journal uses the gh CLI if available.
Memory Journal builds knowledge graphs linking related work:
5 Relationship Types:
- implements - Implementation of a spec or design
- tests - Test coverage for an implementation
- depends_on - Dependency relationships
- related_to - General connections
- blocks - Blocker relationships
Mermaid Visualization:
Use visualize_relationships tool or access memory://graph/recent resource for live diagrams showing how different pieces of work connect.
Example: Link a spec → implementation → tests → PR for complete context visibility.
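A chain like the one above can be rendered as Mermaid text from typed edges. This is a hypothetical sketch in the spirit of the visualize_relationships tool; the node names and edge list are invented for illustration:

```python
# Render (source, relationship, target) triples as a Mermaid graph.
def to_mermaid(edges: list[tuple[str, str, str]]) -> str:
    lines = ["graph LR"]
    for src, rel, dst in edges:
        lines.append(f"    {src} -->|{rel}| {dst}")
    return "\n".join(lines)

# Invented example: spec -> implementation -> tests -> PR
edges = [
    ("feature", "implements", "spec"),
    ("test_suite", "tests", "feature"),
    ("pr_42", "related_to", "feature"),
]
print(to_mermaid(edges))
```

Pasting the output into any Mermaid renderer draws the labeled knowledge graph.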
Cloudflare Managers: Setup & Authentication
All Cloudflare managers (D1, KV, R2, DO) use Cloudflare Access for enterprise authentication:
Step 1: Configure Zero Trust
- Go to Cloudflare Zero Trust
- Configure authentication provider (GitHub OAuth, Google, etc.)
- Create an Access Application for your domain
- Copy the Application Audience (AUD) tag
Step 2: Create API Token
- Go to Cloudflare API Tokens
- Create Custom Token with required permissions (see below)
Step 3: Set Secrets
```shell
npx wrangler secret put ACCOUNT_ID
npx wrangler secret put API_KEY
npx wrangler secret put TEAM_DOMAIN
npx wrangler secret put POLICY_AUD
```

Common Issues:
- Authentication loop - Verify TEAM_DOMAIN includes https://
- Access denied - Check POLICY_AUD matches your Access application
Required permissions by manager:
| Manager | Required Permissions |
|---|---|
| D1 Manager | Account → D1 → Edit |
| KV Manager | Account → Workers KV Storage → Edit |
| R2 Manager | Account → R2 → Edit |
| DO Manager | Account → Workers Scripts → Read; Account → D1 → Edit (if managing D1-backed DOs) |
Note: Both API Tokens (Bearer auth) and Global API Keys (X-Auth-Key auth) are supported.
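To make the POLICY_AUD check concrete: Cloudflare Access attaches a JWT to requests, and the manager compares its `aud` claim against the configured value. The sketch below only decodes the payload to show where that comparison happens; real validation must also verify the token signature against the TEAM_DOMAIN public keys, and the token built here is fake:

```python
# Decode a JWT payload and check the aud claim (signature check omitted).
import base64
import json

def aud_matches(jwt: str, policy_aud: str) -> bool:
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return policy_aud in claims.get("aud", [])

# Build a fake, unsigned token purely for demonstration
header = base64.urlsafe_b64encode(b'{"alg":"none"}').decode().rstrip("=")
payload = base64.urlsafe_b64encode(json.dumps({"aud": ["abc123"]}).encode()).decode().rstrip("=")
print(aud_matches(f"{header}.{payload}.", "abc123"))  # True
```

If this comparison fails you get "Access denied", which is why the AUD tag must be copied exactly from the Zero Trust dashboard.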
All managers follow a similar deployment process:
Prerequisites:
- Node.js 18+ and npm
- Cloudflare account
- Wrangler CLI installed
- Domain managed by Cloudflare (or use workers.dev)
Deployment Steps:
- Clone the repository
- Install dependencies: `npm install`
- Authenticate: `npx wrangler login`
- Create required resources (D1 database, R2 bucket, etc.)
- Configure `wrangler.toml`
- Set secrets (ACCOUNT_ID, API_KEY, TEAM_DOMAIN, POLICY_AUD)
- Build and deploy: `npm run build && npx wrangler deploy`
See individual manager documentation for specific requirements.
Yes! D1 Manager and KV Manager support Docker deployment for development and testing:
D1 Manager
```shell
docker pull writenotenow/d1-manager:latest
docker run -d -p 8080:8080 \
  -e ACCOUNT_ID=your_account_id \
  -e API_KEY=your_api_token \
  -e TEAM_DOMAIN=https://yourteam.cloudflareaccess.com \
  -e POLICY_AUD=your_aud_tag \
  writenotenow/d1-manager:latest
```

KV Manager

```shell
docker pull writenotenow/kv-manager:latest
docker run -d -p 8787:8787 \
  -e ACCOUNT_ID=your_account_id \
  -e API_KEY=your_api_token \
  -e TEAM_DOMAIN=https://yourteam.cloudflareaccess.com \
  -e POLICY_AUD=your_aud_tag \
  writenotenow/kv-manager:latest
```

Note: For production, deploy to Cloudflare Workers for best performance and edge distribution.
D1 Manager
D1 Manager v2.0.0 is a comprehensive web application for managing Cloudflare D1 databases:
Database Management:
- Create, rename, clone, delete, download, and optimize databases
- Grid/List view toggle with quick actions
- Bulk operations (download, optimize, delete)
- Import databases (SQL files or paste content)
- Color picker for visual organization (27 colors)
Advanced Features:
- Drizzle ORM Console - Introspect schemas, migration status/history, generate SQL, push changes with dry-run
- Time Travel - Point-in-time recovery with bookmarks and checkpoint history
- Read Replication - Enable/disable global read replicas
- R2 Backup/Restore - Automated backups with scheduled daily/weekly/monthly options
- FTS5 Full-Text Search - Manage virtual tables with quick actions
D1 Manager provides comprehensive backup features via R2 integration:
Manual Backups:
- Backup databases to R2 before delete, rename, STRICT mode, or FTS5 conversion
- Create backups from database cards
- Unified hub with undo history and R2 snapshots
Scheduled Backups:
- Daily, weekly, or monthly schedules per database
- Cron triggers with next-run tracking
- Enable/disable controls
- Job history integration
Setup Requirements:
```shell
# Create R2 bucket
npx wrangler r2 bucket create d1-manager-backups
```

Add to wrangler.toml - R2 Bucket:

```toml
[[r2_buckets]]
binding = "BACKUP_BUCKET"
bucket_name = "d1-manager-backups"
```

Add to wrangler.toml - Durable Objects:

```toml
[[durable_objects.bindings]]
name = "BACKUP_DO"
class_name = "BackupDO"
script_name = "d1-manager"
```

KV Manager
KV Manager v2.1.0 provides two metadata storage options:
1. KV Native Metadata (1024 bytes limit):
- Stored directly in Cloudflare KV
- Fast access with KV reads
- Limited to 1024 bytes
- Good for simple key-value pairs
2. D1 Custom Metadata (unlimited):
- Stored in Cloudflare D1 database
- Unlimited size for complex metadata
- Supports tags and advanced search
- Better for structured data
Use both systems together: KV Native for fast access, D1 Custom for rich metadata and search capabilities.
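The routing idea behind the dual system can be sketched by size: serialized metadata at or under the 1024-byte KV cap stays native, anything larger belongs in D1. Function names here are illustrative, not the manager's actual API:

```python
# Route metadata by serialized size against the KV native cap.
import json

KV_METADATA_LIMIT = 1024  # bytes, Cloudflare KV per-key metadata cap

def choose_metadata_store(metadata: dict) -> str:
    size = len(json.dumps(metadata).encode("utf-8"))
    return "kv-native" if size <= KV_METADATA_LIMIT else "d1-custom"

print(choose_metadata_store({"tag": "prod"}))        # kv-native
print(choose_metadata_store({"notes": "x" * 2000}))  # d1-custom
```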
KV Manager provides comprehensive bulk operations:
- Bulk Delete - Delete thousands of keys efficiently
- Bulk Copy - Copy keys between namespaces
- Bulk TTL Update - Set expiration on multiple keys
- Bulk Tag Operations - Add/remove tags from multiple keys
- Import/Export - JSON/NDJSON support with collision handling (replace/skip/fail)
- R2 Backup & Restore - Cloud-native backup with batch operations
All bulk operations are tracked in Job History with event timelines and progress monitoring.
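The three collision modes mentioned for import (replace/skip/fail) can be sketched as follows; the store is a plain dict standing in for a KV namespace, purely for illustration:

```python
# Import keys with replace / skip / fail collision handling.
def import_keys(store: dict, incoming: dict, mode: str = "skip") -> dict:
    for key, value in incoming.items():
        if key in store:
            if mode == "fail":
                raise KeyError(f"collision on {key!r}")
            if mode == "skip":
                continue
        store[key] = value  # new key, or mode == "replace"
    return store

print(import_keys({"a": 1}, {"a": 9, "b": 2}, mode="skip"))     # {'a': 1, 'b': 2}
print(import_keys({"a": 1}, {"a": 9, "b": 2}, mode="replace"))  # {'a': 9, 'b': 2}
```

"fail" aborts on the first collision, which is the safest default when an import must not silently overwrite existing data.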
R2 Manager
R2 Manager supports chunked uploads up to 500MB, but Cloudflare enforces plan-based limits:
| Plan | Max File Size | Features |
|---|---|---|
| Free | 100MB | Basic features |
| Pro | 100MB | Enhanced support |
| Business | 200MB | Priority support |
| Enterprise | 500MB | Full features + SLA |
Features:
- Chunked uploads with automatic retry (10MB chunks)
- MD5 checksum verification for integrity
- Upload progress tracking
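The chunking and checksum steps above can be sketched as splitting the payload into 10MB pieces and hashing each one; the actual HTTP upload and retry logic is omitted:

```python
# Split a payload into 10MB chunks, each paired with its MD5 digest.
import hashlib

CHUNK_SIZE = 10 * 1024 * 1024  # 10MB chunks

def split_with_checksum(data: bytes) -> list[tuple[bytes, str]]:
    chunks = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        chunks.append((chunk, hashlib.md5(chunk).hexdigest()))
    return chunks

payload = b"x" * (25 * 1024 * 1024)  # 25MB -> chunks of 10 + 10 + 5 MB
chunks = split_with_checksum(payload)
print(len(chunks))  # 3
```

Per-chunk digests let the receiver verify each piece independently, so a corrupted chunk can be retried without re-uploading the whole file.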
R2 Manager v2.0.0 integrates with Cloudflare AI Search for semantic search:
Setup Steps:
- Add AI binding to wrangler.toml:
  ```toml
  [ai]
  binding = "AI"
  ```
- Check compatibility - View indexable file types (txt, md, json, yaml, etc.)
- Create AI Search instance via Cloudflare Dashboard or R2 Manager UI
- Sync bucket to index files (up to 4MB each)
Search Modes:
- AI Search - Get AI-generated answers based on your data
- Semantic Search - Retrieve relevant documents without AI generation
Features:
- Compatibility analysis with visual reports
- Instance management (list, sync, query)
- Direct Cloudflare Dashboard link
DO (Durable Objects) Manager
DO Manager requires admin hooks to manage Durable Objects. Two options:
Option A: NPM Package (Recommended)
```shell
npm install do-manager-admin-hooks
```

```typescript
import { withAdminHooks } from 'do-manager-admin-hooks';

export class MyDurableObject extends withAdminHooks() {
  async fetch(request: Request): Promise<Response> {
    // Handle admin requests first
    const adminResponse = await this.handleAdminRequest(request);
    if (adminResponse) return adminResponse;

    // Your custom logic here
    return new Response('Hello from my Durable Object!');
  }
}
```

Option B: Manual Copy-Paste
Click "Get Admin Hook Code" in the namespace view to generate copy-paste TypeScript code for your DO class.
Configuration Options:
```typescript
export class SecureDO extends withAdminHooks({
  basePath: '/admin',     // Change endpoint path (default: '/admin')
  requireAuth: true,      // Require authentication
  adminKey: 'secret-key', // Admin key for auth
}) {
  // ...
}
```

DO Manager v1.2.0 provides comprehensive Durable Objects management:
v1.2.0 Features:
- Instance Migration - Move instances between namespaces with three cutover modes: Copy Only (non-destructive), Copy + Delete (removes source), Copy + Freeze (locks source)
- Freeze/Unfreeze - Read-only protection for critical instances with snowflake indicator and API enforcement (423 Locked)
Namespace Management:
- Auto-discover DO namespaces from Cloudflare API
- Manual configuration for custom setups
- Clone namespace configurations with deep clone support
- System namespace filtering (prevents accidental deletion)
- Support for SQLite and KV storage backends
Instance Management:
- Track instances by name or hex ID
- Create, clone, and delete instances
- Download instance storage as JSON
- Color tags and custom freeform tags for organization
- Instance diff - Compare storage between instances
Advanced Features:
- SQL Console V2 - Rich editor with syntax highlighting, auto-complete, and query templates
- Batch Operations - Multi-select for bulk download, delete, and backup
- Storage Management - View/edit storage with JSON support, import keys from JSON
- Automated Database Migrations - In-app prompts for schema upgrades
- R2 Backup & Restore - Snapshot DO storage to R2 with restore capability
- Global Search - Cross-namespace key and value search
Quick Reference: Feature Comparisons
| Feature | SQLite MCP | PostgreSQL MCP | MySQL MCP | Memory Journal |
|---|---|---|---|---|
| Version | v2.6.4 | v1.2.0 | v2.1.0 | v3.0.0 |
| Tools | 73 tools | 63 tools | 191 tools | 27 tools |
| Resources | 7 resources | 10 resources | 18 resources | 14 resources |
| Prompts | 7 prompts | 10 prompts | 19 prompts | 14 prompts |
| Runtime | Python 3.12-3.14 | Python 3.12-3.14 | Node.js 18+ | Python 3.12-3.14 |
| Database | SQLite 3.50+ (local) | PostgreSQL 13-18 | MySQL 5.7+ / 8.0+ | SQLite (embedded) |
| Setup | Easy (no DB setup) | Medium (requires PG) | Medium (requires MySQL) | Easy (auto-creates DB) |
| Geospatial | ✓ SpatiaLite (7 tools) | ✓ PostGIS (7 tools) | ✓ Spatial (12 tools) | - |
| JSON Operations | ✓ 6 helpers + normalize | ✓ 11 JSONB tools | ✓ 17 JSON tools | - |
| Tool Filtering | ✓ v2.6.4 feature | ✓ v1.2.0 feature | ✓ 24 groups + shortcuts | - |
| Ecosystem | - | - | ✓ Router, ProxySQL, Shell | ✓ GitHub integration |
| Security | SQL injection protection | Zero vulns, security modes | OAuth 2.1, TLS/SSL | Local-first |
| Best For | Local DBs, embedded apps | Enterprise PostgreSQL | MySQL, InnoDB Cluster | Project journaling, AI dev |
| Feature | D1 Manager | KV Manager | R2 Manager | DO Manager |
|---|---|---|---|---|
| Version | v2.0.0 | v2.1.0 | v2.0.0 | v1.2.0 |
| Manages | D1 Databases | KV Namespaces | R2 Buckets | Durable Objects |
| Authentication | ✓ Zero Trust | ✓ Zero Trust | ✓ Zero Trust | ✓ Zero Trust |
| Docker Support | ✓ | ✓ | ✓ (dev only) | - |
| Backup/Restore | ✓ R2 backups, scheduled | ✓ R2 backups | Native R2 storage | ✓ R2 backups |
| Bulk Operations | ✓ Download, optimize, delete | ✓ Delete, copy, TTL, tags | ✓ Multi-bucket download | ✓ Download, delete, backup |
| Search | ✓ FTS5 full-text | ✓ Cross-namespace | ✓ Cross-bucket + AI Search | ✓ Global key/value search |
| Job History | ✓ | ✓ | ✓ | ✓ |
| Unique Features | Drizzle ORM, Time Travel, Read Replication | Color tags, dual metadata, unlimited tags | AI Search integration, rate limiting, chunked uploads | Admin hooks, SQL console, alarms, instance cloning |
| Best For | SQLite database management, ORM integration | Key-value storage, metadata management | Object storage, large files, semantic search | Stateful objects, real-time apps |
Troubleshooting
Common causes and solutions:
- Configuration file location incorrect
  - Cursor: `~/.cursor/mcp.json` (Mac/Linux) or `%USERPROFILE%\.cursor\mcp.json` (Windows)
  - Claude Desktop: Check Claude settings for the config location
- JSON syntax errors
  - Validate JSON syntax at jsonlint.com
  - Check for missing commas, quotes, or brackets
- Docker not running
  - Verify Docker is running: `docker ps`
  - Check Docker Desktop is started
- Restart required
  - Restart your MCP client after configuration changes
  - Some clients may need a full application restart
Troubleshooting steps:
- Verify PostgreSQL is running: `pg_isready -h localhost -p 5432`
- Check DATABASE_URI format
  - Correct: `postgresql://user:pass@localhost:5432/dbname`
  - Include the port number (default: 5432)
  - URL-encode special characters in the password
- Firewall/network issues
  - Check if port 5432 is accessible
  - Verify `pg_hba.conf` allows connections
  - For Docker: use `host.docker.internal` instead of `localhost`
- Authentication
  - Verify the username and password are correct
  - Check the user has sufficient privileges
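URL-encoding the password when building DATABASE_URI can be done with the standard library; the credentials below are placeholders:

```python
# Build a DATABASE_URI with a URL-encoded password.
from urllib.parse import quote_plus

user = "app"
password = "p@ss:w/rd"  # contains URI-reserved characters
uri = f"postgresql://{user}:{quote_plus(password)}@localhost:5432/dbname"
print(uri)
# postgresql://app:p%40ss%3Aw%2Frd@localhost:5432/dbname
```

Without encoding, the `@` in the password would be parsed as the host separator and the connection would fail.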
Install required extensions:

```sql
-- Required (built-in)
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;

-- Optional - pgvector
CREATE EXTENSION IF NOT EXISTS vector;

-- Optional - PostGIS
CREATE EXTENSION IF NOT EXISTS postgis;
```

If extensions are missing:
- pgvector: Install from pgvector GitHub
- PostGIS: Install from PostGIS.net
- Restart PostgreSQL after installing extensions
Common fixes:
- TEAM_DOMAIN format
  - Must include https://
  - Correct: `https://yourteam.cloudflareaccess.com`
  - Wrong: `yourteam.cloudflareaccess.com`
- POLICY_AUD mismatch
- Verify AUD tag matches your Access application exactly
- Get AUD from Zero Trust dashboard → Access → Applications
- API token permissions
- Ensure token has required permissions (see "What API token permissions do I need?" above)
- Token must not be expired
- Clear browser cookies
- Cloudflare Access stores authentication cookies
- Clear cookies and try again
Solutions by platform:
Windows:
- Ensure Docker Desktop is running
- Enable file sharing in Docker Desktop settings
- Use absolute paths or `$(pwd)` for volume mounts

Mac/Linux:
- Check file permissions: `chmod 755 /path/to/directory`
- Run Docker with the correct user: `docker run --user $(id -u):$(id -g)`
- Verify volume mount paths exist

General:
- Use absolute paths for volume mounts
- Ensure directories exist before mounting
- Check Docker logs: `docker logs [container-id]`
