# MCP Client Using LangChain / Python
This simple Model Context Protocol (MCP) client demonstrates the use of MCP server tools by a LangChain ReAct agent.
It leverages a utility function `convert_mcp_to_langchain_tools()` from
`langchain_mcp_tools`.
This function handles parallel initialization of multiple specified MCP servers
and converts their available tools into a list of LangChain-compatible tools
(`List[BaseTool]`).
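The flow described above can be sketched as follows. This is an illustrative outline, not the app's actual code: the server names, commands, and model name are examples (in this app they come from `llm_mcp_config.json5`), and the sketch assumes `langchain_mcp_tools` returns the tool list together with a cleanup callback, as its documentation describes.

```python
# Sketch: wiring MCP server tools into a LangChain ReAct agent.
# Server entries and model name below are illustrative assumptions;
# the real app reads them from llm_mcp_config.json5.
import asyncio
import os

# MCP server definitions in the shape the converter expects
# (a command plus args per server).
mcp_servers = {
    "fetch": {"command": "uvx", "args": ["mcp-server-fetch"]},
    "filesystem": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "."],
    },
}


async def main() -> None:
    # Imported inside the function so the sketch can be read (and the
    # config dict inspected) without these dependencies installed.
    from langchain_anthropic import ChatAnthropic
    from langchain_mcp_tools import convert_mcp_to_langchain_tools
    from langgraph.prebuilt import create_react_agent

    # Starts all listed servers in parallel and wraps their tools
    # as LangChain-compatible BaseTool objects.
    tools, cleanup = await convert_mcp_to_langchain_tools(mcp_servers)
    try:
        llm = ChatAnthropic(model="claude-3-5-haiku-latest")
        agent = create_react_agent(llm, tools)
        result = await agent.ainvoke(
            {"messages": [("user", "Fetch https://example.com and summarize it")]}
        )
        print(result["messages"][-1].content)
    finally:
        await cleanup()  # shut down the MCP server subprocesses


# Guarded so the sketch is inert unless explicitly opted in
# (it needs API keys and the MCP server packages to actually run).
if __name__ == "__main__" and os.environ.get("RUN_MCP_DEMO"):
    asyncio.run(main())
```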
LLMs from Anthropic, OpenAI and Groq are currently supported.
A TypeScript version of this MCP client is available here.
## Prerequisites
- Python 3.11+
- [optional] `uv` (`uvx`) installed to run Python package-based MCP servers
- [optional] npm 7+ (`npx`) to run Node.js package-based MCP servers
- API keys from Anthropic, OpenAI, and/or Groq as needed
## Setup
- Install dependencies:

  ```bash
  make install
  ```

- Set up API keys:

  ```bash
  cp .env.template .env
  ```

  - Update `.env` as needed.
  - `.gitignore` is configured to ignore `.env` to prevent accidental commits of the credentials.
- Configure LLM and MCP server settings in `llm_mcp_config.json5` as needed.

  - The configuration file format for MCP servers follows the same structure as
    Claude for Desktop, with one difference: the key name `mcpServers` has been
    changed to `mcp_servers` to follow the snake_case convention commonly used
    in JSON configuration files.
  - The file format is JSON5, where comments and trailing commas are allowed.
  - The format is further extended to replace `${...}` notations with the values
    of corresponding environment variables.
  - Keep all the credentials and private info in the `.env` file and refer to
    them with `${...}` notation as needed.
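Putting those points together, a minimal `llm_mcp_config.json5` might look like the sketch below. The server entries, model name, and environment variable are illustrative examples, not the project's shipped defaults:

```json5
{
  // LLM settings -- provider and model shown here are just examples
  "llm": {
    "model_provider": "anthropic",
    "model": "claude-3-5-haiku-latest",
  },

  // note: snake_case "mcp_servers", not Claude for Desktop's "mcpServers"
  "mcp_servers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"],
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        // resolved from the .env file via the ${...} extension
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_PERSONAL_ACCESS_TOKEN}",
      },
    },  // trailing commas are allowed in JSON5
  },
}
```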
## Usage
Run the app:

```bash
make start
```

It takes a while on the first run.

Run in verbose mode:

```bash
make start-v
```

See command-line options:

```bash
make start-h
```

At the prompt, you can simply press Enter to use example queries that perform MCP server tool invocations.

Example queries can be configured in `llm_mcp_config.json5`.
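For instance, the example queries could be listed in the config roughly like this. The key name and query strings are assumptions for illustration; check `llm_mcp_config.json5` itself for the exact key and defaults:

```json5
{
  // hypothetical example -- queries offered when you press Enter at the prompt
  "example_queries": [
    "Fetch the top headline from bbc.com",
    "Read and briefly summarize the LICENSE file",
  ],
}
```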