Bring the Model Context Protocol to life with a clean, scalable TypeScript implementation using Express, Zod, and http-error-kit.
Recap from Day 1
In our previous article, we explored why MCP matters: it's a protocol layer that acts as the context brain for multi-agent systems. Instead of cramming memory and goals into prompts, we delegate context management to an MCP Server that can provide structured responses to any agent.
Today, we build that brain.
This article is not a checklist — it's a detailed walkthrough. You'll not only see code, but understand why each part exists and how it helps agents operate more intelligently.
💡 This sets the stage for future articles, where agents will begin to fetch, mutate, and even share context without frontend dependency.
What We’re Building
In this article, we’re building the first working version of the architectural flow introduced in Day 1 — where the frontend queries the MCP Server, receives structured context, and passes it to the agent or LLM for execution. This establishes the foundation of context routing and separation of concerns that will evolve further in later articles. We'll create a minimal but real MCP Server with the following features:
Accepts structured requests from agents
Responds with memory, persona, tools, and goals
Uses http-error-kit for clean error handling
In-memory store for now (extendable later)
We'll use TypeScript + Express for clarity and familiarity.
How Agents Interact with the MCP
Let’s begin with the big picture.
What does an agent need to function?
A role/persona (Who am I?)
Memory (What have I done?)
Goals (What should I achieve?)
Tools (What can I use?)
The MCP server gives the agent all this in a single response. The agent sends a request like:
{
"agent_id": "planner-001",
"task_id": "goal-execution-42",
"request_type": "context_bundle"
}
This says: "Hey MCP, I'm PlannerGPT, working on Task 42. Give me what I need."
The MCP returns:
{
"persona": "PlannerGPT",
"system_prompt": "You are a planning agent...",
"memory": {...},
"tools": ["scheduler"],
"next_steps": ["Break down UI work"]
}
The agent now has everything to act independently.
🧠 This context-driven setup enables agents to become self-operating — requesting their own data without hardcoding prompt logic or relying on frontends.
Tech Stack
TypeScript + Express for the server, http-error-kit for consistent error responses, and a plain in-memory object standing in for a database.
Project Structure
mcp-server/
├── src/
│ ├── index.ts # Entry point
│ ├── routes/context.ts # Core endpoint
│ ├── lib/contextBuilder.ts # Context logic (optional layer)
│ ├── lib/memoryStore.ts # Simulated in-memory data
│ └── types.ts # Shared types
├── package.json
├── tsconfig.json
This is intentionally minimal. You can later split logs, DB integrations, auth layers, and OpenAPI docs as the system grows.
Step 1: Schema Definitions (types.ts)
Define the contract between agent and MCP:
export type AgentContextRequest = {
  agent_id: string;
  task_id: string;
  request_type: "context_bundle";
};
export type AgentContextResponse = {
  persona: string;
  system_prompt: string;
  memory: Record<string, any>;
  tools: string[];
  next_steps: string[];
};
This sets a standard format for every context exchange. MCP stays consistent no matter how many agents/tools you support.
Why this matters:
agent_id is used to fetch memory/persona
task_id can be used for tracking session history or task-level memory later
request_type supports extensibility: in the future, you can add "tool_request", "log_feedback", etc.
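The intro mentions Zod for validation; if you want to enforce this contract at runtime without pulling in a dependency yet, a hand-rolled type guard does the job. A minimal sketch (the `isContextRequest` helper is our own, not part of the article's files):

```typescript
// Shape of the incoming request, mirroring AgentContextRequest from types.ts
type AgentContextRequest = {
  agent_id: string;
  task_id: string;
  request_type: "context_bundle";
};

// Runtime type guard: narrows an unknown request body to AgentContextRequest.
function isContextRequest(body: unknown): body is AgentContextRequest {
  if (typeof body !== "object" || body === null) return false;
  const b = body as Record<string, unknown>;
  return (
    typeof b.agent_id === "string" &&
    typeof b.task_id === "string" &&
    b.request_type === "context_bundle"
  );
}
```

In the route handler you could call `isContextRequest(req.body)` before touching any fields and throw a `BadRequestError` when it returns false.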
Step 2: Mock Memory Layer (lib/memoryStore.ts)
For now, we simulate memory:
// Local shape for stored context (mirrors AgentContextResponse in types.ts)
type AgentContext = {
  memory: Record<string, any>;
  persona: string;
  system_prompt: string;
  tools: string[];
  next_steps: string[];
};
const agentMemory: Record<string, AgentContext> = {
  "planner-001": {
    memory: {
      past_steps: ["Initial UI layout", "Tooling setup"],
      user_feedback: "Focus on mobile responsiveness"
    },
    persona: "PlannerGPT",
    system_prompt: "You are a planner...",
    tools: ["notepad", "scheduler"],
    next_steps: ["Break down frontend work"]
  }
};
export const getAgentContext = (agent_id: string): AgentContext | undefined => {
  return agentMemory[agent_id];
};
This allows us to:
Store structured state per agent
Simulate retrieval of memory, persona, tools
Quickly prototype without DB overhead
Think of this as a mock database. In real setups, you'd connect to Redis, Postgres, Supabase, etc.
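When you outgrow the object literal, the same read contract can sit in front of a `Map` (or later a Redis/Postgres client) without the route changing at all. A sketch, with a hypothetical `setAgentContext` write path that is not part of the article's code:

```typescript
// Minimal context shape stored per agent (subset of the MCP response).
type StoredContext = {
  persona: string;
  system_prompt: string;
  memory: Record<string, unknown>;
  tools: string[];
  next_steps: string[];
};

const store = new Map<string, StoredContext>();

// Same read contract as getAgentContext in memoryStore.ts:
// returns undefined for unknown agents, which the route turns into a 404.
export const getAgentContext = (agent_id: string): StoredContext | undefined =>
  store.get(agent_id);

// Hypothetical write path — shows where a real DB-backed
// implementation would slot in later.
export const setAgentContext = (agent_id: string, ctx: StoredContext): void => {
  store.set(agent_id, ctx);
};
```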
Step 3: The Context Endpoint (routes/context.ts)
import express from 'express';
import { getAgentContext } from '../lib/memoryStore';
import { BadRequestError, NotFoundError } from 'http-error-kit';
const router = express.Router();
router.post('/', (req, res, next) => {
try {
const { agent_id, request_type } = req.body;
if (request_type !== "context_bundle") {
throw new BadRequestError("Unsupported request type");
}
const context = getAgentContext(agent_id);
if (!context) {
throw new NotFoundError("No context found for this agent");
}
res.json(context);
} catch (err) {
next(err);
}
});
export default router;
This route powers the frontend → MCP call chain.
By supporting one endpoint (/context), we simplify the contract. In future versions, you could:
Add /feedback for reflection
Add /tool-result to store tool outputs
Add /task-complete to log transitions
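As request types multiply, one way to keep the single endpoint manageable is a small dispatch table keyed by request_type. A sketch, with the future types stubbed out as placeholders (the handler bodies here are assumptions, not the article's implementation):

```typescript
type RequestType = "context_bundle" | "tool_request" | "log_feedback";

// Map each request_type to a handler. Unknown types fall through to
// undefined, which the route can translate into a BadRequestError.
const handlers: Record<RequestType, (agent_id: string) => string> = {
  context_bundle: (id) => `bundle for ${id}`,      // would call getAgentContext(id)
  tool_request: (id) => `tools for ${id}`,         // stub — future endpoint
  log_feedback: (id) => `feedback logged for ${id}`, // stub — future endpoint
};

export function dispatch(request_type: string, agent_id: string): string | undefined {
  const handler = handlers[request_type as RequestType];
  return handler ? handler(agent_id) : undefined;
}
```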
Step 4: Server Bootstrap (index.ts)
import express from 'express';
import contextRouter from './routes/context';
const app = express();
const port = 3000;
app.use(express.json());
app.use('/context', contextRouter);
// Central error handler
app.use((err: any, req: any, res: any, next: any) => {
res.status(err.statusCode || 500).json({ message: err.message });
});
app.listen(port, () => {
console.log(`MCP Server running on port ${port}`);
});
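The central handler boils down to a pure mapping from an error to a status and body, which is easy to unit-test without spinning up Express. A sketch assuming errors may carry a statusCode, as the handler above already expects (`toErrorResponse` is our own name, not an http-error-kit export):

```typescript
type ErrorResponse = { status: number; body: { message: string } };

// Pure version of the Express error handler: default to 500 when the
// error carries no statusCode (plain Error, programming bugs, etc.).
export function toErrorResponse(err: { statusCode?: number; message: string }): ErrorResponse {
  return {
    status: err.statusCode ?? 500,
    body: { message: err.message },
  };
}
```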
Why this matters:
Plug-and-play entry point.
You can now POST to /context to simulate agent requests.
How the Agent Consumes It
Let’s say you're calling MCP from a GPT agent via fetch:
const res = await fetch("http://localhost:3000/context", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
agent_id: "planner-001",
task_id: "goal-001",
request_type: "context_bundle"
})
});
const context = await res.json();
// use context.persona, context.memory, etc.
Your agent now dynamically adjusts prompts, roles, and behaviors based on this response.
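For instance, the agent might fold the bundle into a single system prompt before calling the LLM. A hypothetical `buildSystemPrompt` helper, not part of the MCP contract:

```typescript
// Shape of the MCP response the agent receives.
type ContextBundle = {
  persona: string;
  system_prompt: string;
  memory: Record<string, unknown>;
  tools: string[];
  next_steps: string[];
};

// Hypothetical helper: flattens the context bundle into one system prompt.
export function buildSystemPrompt(ctx: ContextBundle): string {
  return [
    ctx.system_prompt,
    `Available tools: ${ctx.tools.join(", ")}`,
    `Next steps: ${ctx.next_steps.join("; ")}`,
  ].join("\n");
}
```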
🤖 In Day 3, we’ll give agents even more power — letting them fetch, mutate, and adapt context in real-time. This is the first step toward autonomy.
✅ What We Achieved Today
You created an MCP server from scratch
You understand why request/response schemas matter
You know how agents plug into this
You can extend it with DBs, queues, and auth later
This is foundational infrastructure for LLM-native apps.
⏭️ Next Up: Connecting with Agents
In Day 3, we’ll:
Write agent code that fetches context from MCP
Customize prompts and behaviors per role
Explore memory mutation and task coordination
Let’s go from "protocol exists" to "agents are actually using it".
Stay weird, stay modular.
Stay tuned for Day 3.