Introduction
In recent years, the evolution of large language models (LLMs), such as Claude, has been remarkable. These models are now being widely used across various areas of software engineering, from development support to design discussions.
However, one inherent limitation of LLMs is that their knowledge is confined to the point in time when their training data was last updated. This means they often fail to properly understand or utilize relatively new open-source libraries.
For example, consider our recently published TypeScript package @praha/byethrow, a lightweight and simple Result type library inspired by neverthrow.
When asked to generate code that uses this library, Claude or ChatGPT often returns inaccurate or misleading responses because they are unaware of it. In some cases, they even hallucinate functions that don't exist or suggest incorrect usage patterns.
To address this issue, we built an MCP (Model Context Protocol) server that allows LLMs to reference the TSDoc and understand new libraries more accurately.
https://www.npmjs.com/package/@praha/byethrow-mcp
This article outlines the concept and construction of that MCP server.
What is an MCP Server?
MCP (Model Context Protocol) is a protocol designed to supply LLMs with additional context and access to external tools.
The typical workflow looks like this:
- The LLM sends a request to an external MCP server
- The MCP server executes logic based on the request and returns a response
- The LLM uses that response to take further actions
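Under the hood, these exchanges are JSON-RPC 2.0 messages. The following sketch shows the shape of a tool-call round trip; the `tools/call` method name is defined by MCP, while the tool name and document text below are made-up examples:

```typescript
// Shape of an MCP tool call (JSON-RPC 2.0). The method name
// 'tools/call' comes from the MCP spec; the tool name and
// payload below are illustrative, not real protocol constants.
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'FunctionReference',      // a tool exposed by the server
    arguments: { name: 'andThen' }, // tool-specific input
  },
};

const response = {
  jsonrpc: '2.0',
  id: 1, // matches the request id
  result: {
    content: [{ type: 'text', text: '# Result.andThen\n...' }],
  },
};

console.log(`${request.method} -> ${response.result.content[0].type}`);
// tools/call -> text
```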
How Our MCP Works
Overall Structure
LLM (e.g., Claude)
  ↓ stdin
MCP Server (Node.js)
  ↓
TSDoc Markdown (pre-generated with TypeDoc)
Technologies Used
- TypeDoc: generates Markdown docs from the TSDoc comments in @praha/byethrow
- @modelcontextprotocol/sdk: MCP server framework
- Rslib: bundles the TypeScript code into ESM format
Request Flow
- The LLM queries available tools from the MCP server
- It selects a tool and makes a request to the MCP
- The MCP returns pre-generated Markdown documentation
- The LLM uses this to generate more accurate completions and explanations
Sample Response
If you ask Claude:
Explain byethrow in 3 lines
Normally, Claude might respond inaccurately. But with MCP enabled, you could get a response like:
@praha/byethrow is a library that improves error handling in TypeScript using functional programming. It uses a Result type to explicitly handle errors instead of throwing exceptions, ensuring type-safe error flows. It provides combinators like pipe, andThen, and map to write readable, composable, and debuggable code.
This allows LLMs to answer based on accurate API knowledge.
Building the MCP
Preparing Documentation
The core information source is TSDoc output in Markdown format. This makes it easier for LLMs to consume and understand the documentation.
Install Required Packages
npm install --save-dev typedoc typedoc-plugin-markdown
typedoc.json Configuration
{
"$schema": "https://typedoc.org/schema.json",
"entryPoints": ["./src/index.ts"],
"plugin": ["typedoc-plugin-markdown"],
"router": "kind",
"readme": "none",
"indexFormat": "table",
"hidePageHeader": true,
"hideBreadcrumbs": true,
"useCodeBlocks": true,
"preserveLinkText": false
}
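To keep the generated Markdown in sync with the source, doc generation can be wired into the package scripts. A sketch of what that might look like in package.json; the script names and build order are assumptions, not the package's actual setup:

```json
{
  "scripts": {
    "docs": "typedoc",
    "build": "npm run docs && rslib build"
  }
}
```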
Removing In-Page Links
In-page links like [FunctionName](./Functionname) are often noisy for LLMs.
To remove them, create a custom plugin:
// @ts-check
import { MarkdownPageEvent } from 'typedoc-plugin-markdown';
export const load = (app) => {
app.renderer.on(MarkdownPageEvent.END, (page) => {
page.contents = page.contents
.replaceAll(/Defined in: \[[^\]]+\]\([^\)]+\)\s*/g, '')
.replaceAll(/\[(`?)(.+?)\1\]\([^\)]+\)/g, (_match, _quote, label) => `\`${label}\``);
});
};
Rslib Build Configuration
Include Markdown as a build asset:
// rslib.config.ts
import { defineConfig } from '@rslib/core';
export default defineConfig({
source: {
tsconfigPath: './tsconfig.build.json',
},
lib: [{ format: 'esm' }],
tools: {
rspack: {
module: {
rules: [
{
test: /\.md$/,
type: 'asset/source',
},
],
},
},
},
});
Add this to avoid type errors with Rspack:
// global.d.ts
/// <reference types="@rspack/core/module" />
MCP Server Implementation
Server Initialization
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import packageJson from '../package.json';
const server = new McpServer(
{
name: '@praha/byethrow',
version: packageJson.version,
},
{
instructions: 'Use this server to retrieve up-to-date documentation and code examples for @praha/byethrow.',
},
);
Document Loader
const loadDocument = (key: string) => {
  // Resolve the Markdown file bundled from ../docs via Rspack's context API
  return import.meta.webpackContext('../docs')(key) as string;
};
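import.meta.webpackContext is Rspack-specific and only works in bundled output. If you are not bundling, a plain fs-based loader gives the same behavior; a sketch, assuming the Markdown files ship in a docs directory next to the build:

```typescript
import fs from 'node:fs';
import path from 'node:path';

// Resolve a key such as './functions/Result.map.md' against a docs
// directory on disk instead of the bundled Rspack context.
const loadDocument = (directory: string, key: string): string => {
  return fs.readFileSync(path.join(directory, key), 'utf8');
};

// Usage (the docs path here is an assumption about your layout):
// const text = loadDocument(path.join(process.cwd(), 'docs'), './modules/Result.md');
```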
Tool Registration
- ModuleReference
server.tool(
'ModuleReference',
'Returns an overview of @praha/byethrow and its exported modules.',
() => ({
content: [{
type: 'text',
text: loadDocument('./modules/Result.md'),
}],
}),
);
- FunctionReference
import { z } from 'zod';
server.tool(
'FunctionReference',
'Returns reference and examples for specific functions.',
{ name: z.string().describe('Function name.') },
({ name }) => ({
content: [{
type: 'text',
text: loadDocument(`./functions/Result.${name}.md`),
}],
}),
);
- TypeReference
server.tool(
'TypeReference',
'Returns reference and examples for specific types.',
{ name: z.string().describe('Type name.') },
({ name }) => ({
content: [{
type: 'text',
text: loadDocument(`./types/Result.${name}.md`),
}],
}),
);
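The name arguments to FunctionReference and TypeReference come straight from the model, so a lookup for a non-existent document should fail gracefully rather than crash the server. A minimal sketch of that guard; loadDocument is stubbed here for illustration, and isError is the standard MCP flag for marking a tool result as an error:

```typescript
// Stubbed loader for illustration; the real one resolves bundled Markdown.
const documents: Record<string, string> = {
  './functions/Result.map.md': '# Result.map',
};

const loadDocument = (key: string): string => {
  const found = documents[key];
  if (found === undefined) throw new Error(`Unknown document: ${key}`);
  return found;
};

// Wrap the tool handler so unknown names yield an MCP error result
// instead of an unhandled exception.
const functionReference = (name: string) => {
  try {
    return {
      isError: false,
      content: [{ type: 'text' as const, text: loadDocument(`./functions/Result.${name}.md`) }],
    };
  } catch {
    return {
      isError: true,
      content: [{ type: 'text' as const, text: `No documentation found for "${name}".` }],
    };
  }
};

console.log(functionReference('map').content[0].text); // # Result.map
console.log(functionReference('nope').isError);        // true
```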
Connecting via stdio
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
const transport = new StdioServerTransport();
await server.connect(transport);
Verifying the MCP Server
To verify the setup, add the following to your VSCode settings.json:
{
"mcp": {
"servers": {
"my-mcp-server": {
"type": "stdio",
"command": "node",
"args": ["{path-to-mcp}/dist/index.js"]
}
}
}
}
If the log shows the following, it's working:
[info] Starting server my-mcp-server
[info] Connection state: starting
[info] Starting server from LocalProcess extension host
[info] Connection state: running
[info] Discovered 3 tools
Public Repository
The full source code and configuration for the MCP server described in this article are available at:
https://github.com/praha-inc/byethrow/tree/main/packages/mcp
Feel free to clone and try it locally.
Summary
LLMs are powerful but not omniscient. They are bound by a "knowledge cutoff." However, by introducing context-providing layers like MCP, we can bridge this gap.
If you're developing new libraries or using internal tools in your organization, consider adopting this approach to enhance your LLM workflows.