balaji thadagam kandavel

Step-by-Step Guide: Setting Up MCP Locally with LLMs Using TypeScript

Introduction

In this tutorial, we will walk step by step through setting up the Model Context Protocol (MCP) locally and integrating it with Large Language Models (LLMs) using TypeScript. MCP is an open protocol for exposing resources and tools to LLM applications; here we use it to serve an API specification that an LLM can read and modify dynamically.

By following this guide, you will:

  • Install and configure MCP locally.
  • Set up an MCP server to expose API specifications.
  • Use an LLM to modify API specifications.
  • Interact with MCP using an MCP client.

Let's get started!

Prerequisites

Before proceeding, ensure you have the following installed:

  • Node.js (18.x or later; the MCP and OpenAI SDKs require a modern Node runtime)
  • TypeScript (npm install -g typescript)
  • MCP SDK (npm install @modelcontextprotocol/sdk)
  • An OpenAI-compatible LLM SDK (e.g., npm install openai)

Step 1: Setting Up an MCP Server Locally

The MCP server will serve as a centralized repository for API specifications, which the LLM can modify.

Create a new project and install dependencies

mkdir mcp-llm-api
cd mcp-llm-api
npm init -y
npm install @modelcontextprotocol/sdk zod openai typescript @types/node

Initialize an MCP server in TypeScript

Create a file server.ts and add the following code:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "LocalMCPServer", version: "1.0.0" });

// In-memory API specification that the LLM will modify in Step 2.
let apiSpec = { openapi: "3.0.0", info: { title: "Sample API", version: "1.0.0" }, paths: {} };

// Expose the spec as a readable MCP resource at the URI api://spec.
server.resource("api-spec", "api://spec", async (uri) => ({
  contents: [{ uri: uri.href, text: JSON.stringify(apiSpec) }]
}));

// MCP servers communicate over a transport rather than opening a port;
// stdio is the simplest transport for local development.
await server.connect(new StdioServerTransport());
// Log to stderr: stdout is reserved for MCP protocol messages over stdio.
console.error("MCP server running over stdio");

Compile and run the server: npx tsc && node server.js

(Compile through a project-level tsconfig.json rather than tsc server.ts; when a file is passed directly, tsc ignores tsconfig.json.) Your MCP server now runs locally over stdio, exposing the API specification as the api://spec resource. The client built in Step 3 launches it as a subprocess, so you can stop this process once you have confirmed it starts.
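The ES module imports and top-level await in server.ts need a modern compiler configuration. If you do not have one yet, a minimal tsconfig.json along these lines works (a suggested baseline, not something the MCP SDK prescribes); also set "type": "module" in package.json so Node treats the compiled .js files as ES modules:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true
  }
}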

Step 2: Adding LLM-Assisted API Updates

Next, we'll allow an LLM to propose updates to the API specification based on developer requests.

Install dotenv for loading the API key (the openai package was already installed in Step 1): npm install dotenv

Modify the MCP server to include an update tool

Edit server.ts to include an LLM-powered update tool. Add the imports at the top of the file and register the tool before the server.connect(...) call:

import { OpenAI } from "openai";
import dotenv from "dotenv";

dotenv.config();

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

server.tool(
  "propose_update",
  { request: z.string() },
  async ({ request }) => {
    const prompt = `Modify the following OpenAPI spec based on this request: "${request}". Respond with the updated spec as raw JSON only.\n${JSON.stringify(apiSpec)}`;
    // gpt-4 is a chat model, so it is called through the chat completions endpoint.
    const llmResponse = await openai.chat.completions.create({
      model: "gpt-4",
      messages: [{ role: "user", content: prompt }],
      max_tokens: 500,
    });

    // JSON.parse throws if the model returns anything other than valid JSON;
    // see the defensive helper below for one way to soften this.
    apiSpec = JSON.parse((llmResponse.choices[0].message.content ?? "").trim());
    return { content: [{ type: "text", text: JSON.stringify(apiSpec) }] };
  }
);

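Models sometimes wrap JSON in markdown code fences even when asked not to. A small defensive helper like the following (a suggested addition, not part of the MCP or OpenAI SDKs) makes the parse step more robust; call JSON.parse(extractJson(...)) in the tool handler instead of parsing the raw response:

// Strip a surrounding ```json ... ``` fence if the model added one (hypothetical helper).
function extractJson(raw: string): string {
  const fenced = raw.match(/```(?:json)?\s*([\s\S]*?)```/);
  return (fenced ? fenced[1] : raw).trim();
}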

Ensure you have an OpenAI API key stored in a .env file:

OPENAI_API_KEY=your_openai_api_key_here

Recompile so the changes take effect: npx tsc (the client starts a fresh server process on each run, so there is no separate server to restart).

Step 3: Setting Up an MCP Client to Interact with the Server

Now, let's create an MCP client to retrieve and update API specifications.

Create a client script

Create a file client.ts and add:

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({ name: "LocalMCPClient", version: "1.0.0" });

// Launch the compiled server as a subprocess and talk to it over stdio.
await client.connect(new StdioClientTransport({ command: "node", args: ["server.js"] }));

const currentSpec = await client.readResource({ uri: "api://spec" });
console.log("Current API Spec:", currentSpec.contents[0]?.text);

const changeRequest = "Add a GET /reports endpoint for user reports by date range.";
const toolResult = await client.callTool({ name: "propose_update", arguments: { request: changeRequest } });
console.log("Updated API Spec:", JSON.stringify(toolResult.content));

Compile and run the client: npx tsc && node client.js

The client retrieves the current API spec, submits a change request, and receives the updated API specification from the LLM.
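For the change request above, a successful run might print an updated spec along these lines (illustrative output only; the exact structure depends on what the model returns):

{
  "openapi": "3.0.0",
  "info": { "title": "Sample API", "version": "1.0.0" },
  "paths": {
    "/reports": {
      "get": {
        "summary": "Get user reports by date range",
        "parameters": [
          { "name": "startDate", "in": "query", "schema": { "type": "string", "format": "date" } },
          { "name": "endDate", "in": "query", "schema": { "type": "string", "format": "date" } }
        ],
        "responses": { "200": { "description": "A list of user reports" } }
      }
    }
  }
}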

Step 4: Testing and Validating API Updates

Now that the MCP setup is complete, we need to ensure updates are handled correctly.

Testing Steps

Compile both scripts: npx tsc

Run the client: node client.js (it launches the compiled server as a subprocess over stdio)

Verify that the API specification is modified as expected.

Rerun the client with different change requests to exercise other modifications.
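Because the updated spec comes straight from an LLM, it is worth validating before trusting it. Here is a minimal sketch using zod (the schema is an assumed, simplified shape of an OpenAPI document, not a full validator):

import { z } from "zod";

// Simplified sanity check for an OpenAPI 3.x document (assumed shape).
const OpenApiShape = z.object({
  openapi: z.string().startsWith("3."),
  info: z.object({ title: z.string(), version: z.string() }),
  paths: z.record(z.any()),
});

// Throws a descriptive error if the LLM returned a malformed spec.
export function assertValidSpec(candidate: unknown) {
  return OpenApiShape.parse(candidate);
}

Calling assertValidSpec right after JSON.parse in the propose_update handler rejects malformed updates before they overwrite the stored spec.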

Step 5: Automating API Updates with CI/CD

To fully leverage MCP, API updates can be integrated into CI/CD pipelines.

Example: Adding API Changes in a GitHub Action

name: MCP API Update
on:
  push:
    branches:
      - main
jobs:
  update-api:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v4
      - name: Set Up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 18
      - name: Install Dependencies
        run: npm install
      - name: Run MCP Client to Update API
        run: npx tsc && node client.js
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

This keeps the API specification up to date as changes are pushed. Store OPENAI_API_KEY as a repository secret, and in a real pipeline have the client persist the updated spec (for example, by writing it to a file and committing it back).

Conclusion

By setting up MCP locally with LLMs, we:
✔ Installed and configured an MCP server.
✔ Exposed an API specification as an MCP resource.
✔ Allowed an LLM to propose API updates dynamically.
✔ Created an MCP client to interact with the server.
✔ Automated API updates within CI/CD pipelines.

With this setup, API modifications become faster, automated, and AI-assisted, reducing manual spec editing and keeping documentation consistent.

Start integrating MCP + LLMs today to revolutionize your API development process!
