diff --git a/.github/actions/setup-python/action.yml b/.github/actions/setup-python/action.yml
index 382fd60..33816c5 100644
--- a/.github/actions/setup-python/action.yml
+++ b/.github/actions/setup-python/action.yml
@@ -5,7 +5,7 @@ inputs:
python-version:
description: Python version
required: false
- default: "3.9"
+ default: "3.10"
runs:
using: "composite"
diff --git a/README.md b/README.md
index 76ef536..356effc 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,7 @@
# nonebot-plugin-llmchat
-_✨ 支持多API预设配置的AI群聊插件 ✨_
+_✨ 支持多API预设、MCP协议、联网搜索、视觉模型的AI群聊插件 ✨_
@@ -17,33 +17,39 @@ _✨ 支持多API预设配置的AI群聊插件 ✨_
-
+
+
## 📖 介绍
-1. **多API预设支持**
+1. **支持MCP协议**
+ - 可以连接各种支持MCP协议的LLM工具
+ - 通过连接一些搜索MCP服务器可以实现在线搜索
+ - 兼容 Claude.app 的配置格式
+
+2. **多API预设支持**
- 可配置多个LLM服务预设(如不同模型/API密钥)
- 支持运行时通过`API预设`命令热切换API配置
- 内置服务开关功能(预设名为`off`时停用)
-2. **多种回复触发方式**
+3. **多种回复触发方式**
- @触发 + 随机概率触发
- 支持处理回复消息
- 群聊消息顺序处理,防止消息错乱
-3. **分群聊上下文记忆管理**
+4. **分群聊上下文记忆管理**
- 分群聊保留对话历史记录(可配置保留条数)
- 自动合并未处理消息,降低API用量
- 支持`记忆清除`命令手动重置对话上下文
-4. **分段回复支持**
+5. **分段回复支持**
- 支持多段式回复(由LLM决定如何回复)
- 可@群成员(由LLM插入)
- 可选输出AI的思维过程(需模型支持)
-5. **可自定义性格**
+6. **可自定义性格**
- 可动态修改群组专属系统提示词(`/修改设定`)
- 支持自定义默认提示词
@@ -100,8 +106,11 @@ _✨ 支持多API预设配置的AI群聊插件 ✨_
| LLMCHAT__PAST_EVENTS_SIZE | 否 | 10 | 触发回复时发送的群消息数量(1-20),越大token消耗量越多 |
| LLMCHAT__REQUEST_TIMEOUT | 否 | 30 | API请求超时时间(秒) |
| LLMCHAT__DEFAULT_PRESET | 否 | off | 默认使用的预设名称,配置为off则为关闭 |
-| LLMCHAT__RANDOM_TRIGGER_PROB | 否 | 0.05 | 随机触发概率(0-1] |
+| LLMCHAT__RANDOM_TRIGGER_PROB | 否 | 0.05 | 默认随机触发概率 [0, 1] |
| LLMCHAT__DEFAULT_PROMPT | 否 | 你的回答应该尽量简洁、幽默、可以使用一些语气词、颜文字。你应该拒绝回答任何政治相关的问题。 | 默认提示词 |
+| LLMCHAT__BLACKLIST_USER_IDS | 否 | [] | 黑名单用户ID列表,机器人将不会处理黑名单用户的消息 |
+| LLMCHAT__IGNORE_PREFIXES | 否 | [] | 需要忽略的消息前缀列表,匹配到这些前缀的消息不会处理 |
+| LLMCHAT__MCP_SERVERS | 否 | {} | MCP服务器配置,具体见下表 |
其中LLMCHAT__API_PRESETS为一个列表,每项配置有以下的配置项
| 配置项 | 必填 | 默认值 | 说明 |
@@ -112,6 +121,24 @@ _✨ 支持多API预设配置的AI群聊插件 ✨_
| model_name | 是 | 无 | 模型名称 |
| max_tokens | 否 | 2048 | 最大响应token数 |
| temperature | 否 | 0.7 | 生成温度 |
+| proxy | 否 | 无 | 请求API时使用的HTTP代理 |
+| support_mcp | 否 | False | 是否支持MCP协议 |
+| support_image | 否 | False | 是否支持图片输入 |
+
+
+LLMCHAT__MCP_SERVERS为一个dict,key为服务器名称,value配置的格式基本兼容 Claude.app 的配置格式,具体支持如下
+| 配置项 | 必填 | 默认值 | 说明 |
+|:-----:|:----:|:----:|:----:|
+| command | stdio服务器必填 | 无 | stdio服务器MCP命令 |
+| args | 否 | [] | stdio服务器MCP命令参数 |
+| env | 否 | {} | stdio服务器环境变量 |
+| url | sse服务器必填 | 无 | sse服务器地址 |
+
+以下为在 Claude.app 的MCP服务器配置基础上增加的字段
+| 配置项 | 必填 | 默认值 | 说明 |
+|:-----:|:----:|:----:|:----:|
+| friendly_name | 否 | 无 | 友好名称,用于调用时发送提示信息 |
+| additional_prompt | 否 | 无 | 关于这个工具的附加提示词 |
配置示例
@@ -125,15 +152,38 @@ _✨ 支持多API预设配置的AI群聊插件 ✨_
"name": "aliyun-deepseek-v3",
"api_key": "sk-your-api-key",
"model_name": "deepseek-v3",
- "api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1"
+ "api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1",
+ "proxy": "http://10.0.0.183:7890"
},
{
- "name": "deepseek-r1",
+ "name": "deepseek-v1",
"api_key": "sk-your-api-key",
- "model_name": "deepseek-reasoner",
- "api_base": "https://api.deepseek.com"
+ "model_name": "deepseek-chat",
+ "api_base": "https://api.deepseek.com",
+ "support_mcp": true
+ },
+ {
+ "name": "some-vision-model",
+ "api_key": "sk-your-api-key",
+ "model_name": "some-vision-model",
+ "api_base": "https://some-vision-model.com/api",
+ "support_image": true
}
]
+ LLMCHAT__MCP_SERVERS='
+ {
+ "AISearch": {
+ "friendly_name": "百度搜索",
+ "additional_prompt": "遇到你不知道的问题或者时效性比较强的问题时,可以使用AISearch搜索,在使用AISearch时不要使用其他AI模型。",
+ "url": "http://appbuilder.baidu.com/v2/ai_search/mcp/sse?api_key=Bearer+"
+ },
+ "fetch": {
+ "friendly_name": "网页浏览",
+ "command": "uvx",
+ "args": ["mcp-server-fetch"]
+ }
+ }
+ '
'
@@ -142,7 +192,7 @@ _✨ 支持多API预设配置的AI群聊插件 ✨_
**如果`LLMCHAT__DEFAULT_PRESET`没有配置,则插件默认为关闭状态,请使用`API预设+[预设名]`开启插件**
-配置完成后@机器人即可手动触发回复,另外在机器人收到群聊消息时会根据`LLMCHAT__RANDOM_TRIGGER_PROB`配置的概率随机自动触发回复。
+配置完成后@机器人即可手动触发回复,另外在机器人收到群聊消息时会根据`LLMCHAT__RANDOM_TRIGGER_PROB`配置的概率或群聊中使用指令设置的概率随机自动触发回复。
### 指令表
@@ -154,6 +204,8 @@ _✨ 支持多API预设配置的AI群聊插件 ✨_
| 修改设定 | 管理 | 否 | 群聊 | 设定 | 修改机器人的设定,最好在修改之后执行一次记忆清除 |
| 记忆清除 | 管理 | 否 | 群聊 | 无 | 清除机器人的记忆 |
| 切换思维输出 | 管理 | 否 | 群聊 | 无 | 切换是否输出AI的思维过程的开关(需模型支持) |
+| 设置主动回复概率 | 管理 | 否 | 群聊 | 主动回复概率 | 主动回复概率需为 [0, 1] 的浮点数,0为完全关闭主动回复 |
### 效果图
-
\ No newline at end of file
+
+
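`LLMCHAT__MCP_SERVERS`的值即为一段普通JSON。下面是一个仅用标准库的示意(函数名为示例用途自拟,并非插件API),演示其解析过程以及"每个服务器必须声明stdio的`command`或sse的`url`之一"这条规则:

```python
import json

def load_mcp_servers(raw: str) -> dict[str, dict]:
    """解析 LLMCHAT__MCP_SERVERS 风格的 JSON 字符串,
    并检查每个服务器是否声明了 stdio 'command' 或 sse 'url'"""
    servers = json.loads(raw)
    for name, cfg in servers.items():
        if not (cfg.get("command") or cfg.get("url")):
            raise ValueError(f"server {name!r} needs either 'command' or 'url'")
    return servers

# 以上文 README 示例中的 fetch 服务器为例
raw = '{"fetch": {"friendly_name": "网页浏览", "command": "uvx", "args": ["mcp-server-fetch"]}}'
servers = load_mcp_servers(raw)
```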
diff --git a/img/mcp_demo.jpg b/img/mcp_demo.jpg
new file mode 100644
index 0000000..159954c
Binary files /dev/null and b/img/mcp_demo.jpg differ
diff --git a/nonebot_plugin_llmchat/__init__.py b/nonebot_plugin_llmchat/__init__.py
old mode 100644
new mode 100755
index 96cce19..d3c6605
--- a/nonebot_plugin_llmchat/__init__.py
+++ b/nonebot_plugin_llmchat/__init__.py
@@ -1,14 +1,17 @@
import asyncio
+import base64
from collections import defaultdict, deque
from datetime import datetime
import json
import os
import random
import re
+import ssl
import time
-from typing import TYPE_CHECKING, Optional
+from typing import TYPE_CHECKING
import aiofiles
+import httpx
from nonebot import (
get_bot,
get_driver,
@@ -27,6 +30,7 @@ from nonebot.rule import Rule
from openai import AsyncOpenAI
from .config import Config, PresetConfig
+from .mcpclient import MCPClient
require("nonebot_plugin_localstore")
import nonebot_plugin_localstore as store
@@ -35,13 +39,14 @@ require("nonebot_plugin_apscheduler")
from nonebot_plugin_apscheduler import scheduler
if TYPE_CHECKING:
- from collections.abc import Iterable
-
- from openai.types.chat import ChatCompletionMessageParam
+ from openai.types.chat import (
+ ChatCompletionContentPartParam,
+ ChatCompletionMessageParam,
+ )
__plugin_meta__ = PluginMetadata(
name="llmchat",
- description="支持多API预设配置的AI群聊插件",
+ description="支持多API预设、MCP协议、联网搜索、视觉模型的AI群聊插件",
usage="""@机器人 + 消息 开启对话""",
type="application",
homepage="https://github.com/FuQuan233/nonebot-plugin-llmchat",
@@ -55,8 +60,8 @@ tasks: set["asyncio.Task"] = set()
def pop_reasoning_content(
- content: Optional[str],
-) -> tuple[Optional[str], Optional[str]]:
+ content: str | None,
+) -> tuple[str | None, str | None]:
if content is None:
return None, None
@@ -75,13 +80,14 @@ def pop_reasoning_content(
class GroupState:
def __init__(self):
self.preset_name = plugin_config.default_preset
- self.history = deque(maxlen=plugin_config.history_size)
+ self.history = deque(maxlen=plugin_config.history_size * 2)
self.queue = asyncio.Queue()
self.processing = False
self.last_active = time.time()
self.past_events = deque(maxlen=plugin_config.past_events_size)
- self.group_prompt: Optional[str] = None
+ self.group_prompt: str | None = None
self.output_reasoning_content = False
+ self.random_trigger_prob = plugin_config.random_trigger_prob
group_states: dict[int, GroupState] = defaultdict(GroupState)
@@ -159,6 +165,16 @@ async def is_triggered(event: GroupMessageEvent) -> bool:
if state.preset_name == "off":
return False
+ # 黑名单用户
+ if event.user_id in plugin_config.blacklist_user_ids:
+ return False
+
+ # 忽略特定前缀的消息
+ msg_text = event.get_plaintext().strip()
+ for prefix in plugin_config.ignore_prefixes:
+ if msg_text.startswith(prefix):
+ return False
+
state.past_events.append(event)
# 原有@触发条件
@@ -166,7 +182,7 @@ async def is_triggered(event: GroupMessageEvent) -> bool:
return True
# 随机触发条件
- if random.random() < plugin_config.random_trigger_prob:
+ if random.random() < state.random_trigger_prob:
return True
return False
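本段新增的触发判定(黑名单、忽略前缀、@触发、分群随机概率)可以浓缩成一个纯函数来理解。以下是示意性写法,参数与函数名为说明用途自拟,并非插件实际接口:

```python
import random

def should_trigger(text: str, user_id: int, at_bot: bool,
                   blacklist: set[int], ignore_prefixes: list[str],
                   prob: float) -> bool:
    # 黑名单用户的消息一律不处理,即使是@机器人
    if user_id in blacklist:
        return False
    # 命中忽略前缀(如其他机器人的指令)的消息直接跳过
    if any(text.strip().startswith(p) for p in ignore_prefixes):
        return False
    # @触发必定回复,否则按分群概率随机触发
    return at_bot or random.random() < prob
```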
@@ -175,7 +191,7 @@ async def is_triggered(event: GroupMessageEvent) -> bool:
# 消息处理器
handler = on_message(
rule=Rule(is_triggered),
- priority=10,
+ priority=99,
block=False,
)
@@ -196,17 +212,65 @@ async def handle_message(event: GroupMessageEvent):
task.add_done_callback(tasks.discard)
tasks.add(task)
+async def process_images(event: GroupMessageEvent) -> list[str]:
+ base64_images = []
+ for segment in event.get_message():
+ if segment.type == "image":
+ image_url = segment.data.get("url") or segment.data.get("file")
+ if image_url:
+ try:
+ # 处理高版本 httpx 的 [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] 报错
+ ssl_context = ssl.create_default_context()
+ ssl_context.check_hostname = False
+ ssl_context.verify_mode = ssl.CERT_NONE
+ ssl_context.set_ciphers("DEFAULT@SECLEVEL=2")
+
+ # 下载图片并将图片转换为base64
+ async with httpx.AsyncClient(verify=ssl_context) as client:
+ response = await client.get(image_url, timeout=10.0)
+ if response.status_code != 200:
+ logger.error(f"下载图片失败: {image_url}, 状态码: {response.status_code}")
+ continue
+ image_data = response.content
+ base64_data = base64.b64encode(image_data).decode("utf-8")
+ base64_images.append(base64_data)
+ except Exception as e:
+ logger.error(f"处理图片时出错: {e}")
+ logger.debug(f"共处理 {len(base64_images)} 张图片")
+ return base64_images
+
+async def send_split_messages(message_handler, content: str):
+ """
+ 将消息按分隔符分段并发送
+ """
+ logger.info(f"准备发送分段消息,分段数:{len(content.split(''))}")
+ for segment in content.split(""):
+ # 跳过空消息
+ if not segment.strip():
+ continue
+ segment = segment.strip() # 删除前后多余的换行和空格
+ await asyncio.sleep(2) # 避免发送过快
+ logger.debug(f"发送消息分段 内容:{segment[:50]}...") # 只记录前50个字符避免日志过大
+ await message_handler.send(Message(segment))
async def process_messages(group_id: int):
state = group_states[group_id]
preset = get_preset(group_id)
# 初始化OpenAI客户端
- client = AsyncOpenAI(
- base_url=preset.api_base,
- api_key=preset.api_key,
- timeout=plugin_config.request_timeout,
- )
+ if preset.proxy != "":
+ client = AsyncOpenAI(
+ base_url=preset.api_base,
+ api_key=preset.api_key,
+ timeout=plugin_config.request_timeout,
+ http_client=httpx.AsyncClient(proxy=preset.proxy),
+ )
+ else:
+ client = AsyncOpenAI(
+ base_url=preset.api_base,
+ api_key=preset.api_key,
+ timeout=plugin_config.request_timeout,
+ )
logger.info(
f"开始处理群聊消息 群号:{group_id} 当前队列长度:{state.queue.qsize()}"
@@ -232,78 +296,141 @@ async def process_messages(group_id: int):
下面是关于你性格的设定,如果设定中提到让你扮演某个人,或者设定中有提到名字,则优先使用设定中的名字。
{state.group_prompt or plugin_config.default_prompt}
"""
+ if preset.support_mcp:
+ systemPrompt += "你也可以使用一些工具,下面是关于这些工具的额外说明:\n"
+ for mcp_name, mcp_config in plugin_config.mcp_servers.items():
+ if mcp_config.additional_prompt:
+ systemPrompt += f"{mcp_name}:{mcp_config.additional_prompt}"
+ systemPrompt += "\n"
- messages: Iterable[ChatCompletionMessageParam] = [
+ messages: list[ChatCompletionMessageParam] = [
{"role": "system", "content": systemPrompt}
]
- messages += list(state.history)[-plugin_config.history_size :]
+ while len(state.history) > 0 and state.history[0]["role"] != "user":
+ state.history.popleft()
+
+ messages += list(state.history)[-plugin_config.history_size * 2 :]
# 没有未处理的消息说明已经被处理了,跳过
if state.past_events.__len__() < 1:
break
+ content: list[ChatCompletionContentPartParam] = []
+
# 将机器人错过的消息推送给LLM
- content = ",".join([format_message(ev) for ev in state.past_events])
+ past_events_snapshot = list(state.past_events)
+ for ev in past_events_snapshot:
+ text_content = format_message(ev)
+ content.append({"type": "text", "text": text_content})
+
+ # 将消息中的图片转成 base64
+ if preset.support_image:
+ base64_images = await process_images(ev)
+ for base64_image in base64_images:
+ content.append({"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"}})
+
+ new_messages: list[ChatCompletionMessageParam] = [
+ {"role": "user", "content": content}
+ ]
logger.debug(
f"发送API请求 模型:{preset.model_name} 历史消息数:{len(messages)}"
)
+
+ client_config = {
+ "model": preset.model_name,
+ "max_tokens": preset.max_tokens,
+ "temperature": preset.temperature,
+ "timeout": 60,
+ }
+
+ mcp_client = MCPClient(plugin_config.mcp_servers)
+ if preset.support_mcp:
+ await mcp_client.connect_to_servers()
+ available_tools = await mcp_client.get_available_tools()
+ client_config["tools"] = available_tools
+
response = await client.chat.completions.create(
- model=preset.model_name,
- messages=[*messages, {"role": "user", "content": content}],
- max_tokens=preset.max_tokens,
- temperature=preset.temperature,
- timeout=60,
+ **client_config,
+ messages=messages + new_messages,
)
if response.usage is not None:
logger.debug(f"收到API响应 使用token数:{response.usage.total_tokens}")
- # 请求成功后再保存历史记录,保证user和assistant穿插,防止R1模型报错
- state.history.append({"role": "user", "content": content})
- state.past_events.clear()
+ message = response.choices[0].message
+
+ # 处理响应并处理工具调用
+ while preset.support_mcp and message.tool_calls:
+ new_messages.append({
+ "role": "assistant",
+ "tool_calls": [tool_call.model_dump() for tool_call in message.tool_calls]
+ })
+
+ # 发送LLM调用工具时的回复,一般没有
+ if message.content:
+ await send_split_messages(handler, message.content)
+
+ # 处理每个工具调用
+ for tool_call in message.tool_calls:
+ tool_name = tool_call.function.name
+ tool_args = json.loads(tool_call.function.arguments)
+
+ # 发送工具调用提示
+ await handler.send(Message(f"正在使用{mcp_client.get_friendly_name(tool_name)}"))
+
+ # 执行工具调用
+ result = await mcp_client.call_tool(tool_name, tool_args)
+
+ new_messages.append({
+ "role": "tool",
+ "tool_call_id": tool_call.id,
+ "content": str(result)
+ })
+
+ # 将工具调用的结果交给 LLM
+ response = await client.chat.completions.create(
+ **client_config,
+ messages=messages + new_messages,
+ )
+
+ message = response.choices[0].message
+
+ await mcp_client.cleanup()
reply, matched_reasoning_content = pop_reasoning_content(
response.choices[0].message.content
)
- reasoning_content: Optional[str] = (
+ reasoning_content: str | None = (
getattr(response.choices[0].message, "reasoning_content", None)
or matched_reasoning_content
)
+ new_messages.append({
+ "role": "assistant",
+ "content": reply,
+ })
+
+ # 请求成功后再保存历史记录,保证user和assistant穿插,防止R1模型报错
+ for message in new_messages:
+ state.history.append(message)
+ state.past_events.clear()
+
if state.output_reasoning_content and reasoning_content:
- bot = get_bot(str(event.self_id))
- await bot.send_group_forward_msg(
- group_id=group_id,
- messages=build_reasoning_forward_nodes(
- bot.self_id, reasoning_content
- ),
- )
+ try:
+ bot = get_bot(str(event.self_id))
+ await bot.send_group_forward_msg(
+ group_id=group_id,
+ messages=build_reasoning_forward_nodes(
+ bot.self_id, reasoning_content
+ ),
+ )
+ except Exception as e:
+ logger.error(f"合并转发消息发送失败:\n{e!s}\n")
assert reply is not None
- logger.info(
- f"准备发送回复消息 群号:{group_id} 消息分段数:{len(reply.split(''))}"
- )
- for r in reply.split(""):
- # 似乎会有空消息的情况导致string index out of range异常
- if len(r) == 0 or r.isspace():
- continue
- # 删除前后多余的换行和空格
- r = r.strip()
- await asyncio.sleep(2)
- logger.debug(
- f"发送消息分段 内容:{r[:50]}..."
- ) # 只记录前50个字符避免日志过大
- await handler.send(Message(r))
-
- # 添加助手回复到历史
- state.history.append(
- {
- "role": "assistant",
- "content": reply,
- }
- )
+ await send_split_messages(handler, reply)
except Exception as e:
logger.opt(exception=e).error(f"API请求失败 群号:{group_id}")
@@ -357,7 +484,7 @@ async def handle_edit_preset(event: GroupMessageEvent, args: Message = CommandAr
reset_handler = on_command(
"记忆清除",
- priority=99,
+ priority=1,
block=True,
permission=(SUPERUSER | GROUP_ADMIN | GROUP_OWNER),
)
@@ -372,6 +499,30 @@ async def handle_reset(event: GroupMessageEvent, args: Message = CommandArg()):
await reset_handler.finish("记忆已清空")
+set_prob_handler = on_command(
+ "设置主动回复概率",
+ priority=1,
+ block=True,
+ permission=(SUPERUSER | GROUP_ADMIN | GROUP_OWNER),
+)
+
+
+@set_prob_handler.handle()
+async def handle_set_prob(event: GroupMessageEvent, args: Message = CommandArg()):
+ group_id = event.group_id
+ prob = 0
+
+ try:
+ prob = float(args.extract_plain_text().strip())
+ if prob < 0 or prob > 1:
+ raise ValueError
+ except Exception as e:
+ await set_prob_handler.finish(f"输入有误,请使用 [0,1] 的浮点数\n{e!s}")
+
+ group_states[group_id].random_trigger_prob = prob
+ await set_prob_handler.finish(f"主动回复概率已设为 {prob}")
+
+
# 预设切换命令
think_handler = on_command(
"切换思维输出",
@@ -409,6 +560,7 @@ async def save_state():
"last_active": state.last_active,
"group_prompt": state.group_prompt,
"output_reasoning_content": state.output_reasoning_content,
+ "random_trigger_prob": state.random_trigger_prob,
}
for gid, state in group_states.items()
}
@@ -430,11 +582,12 @@ async def load_state():
state = GroupState()
state.preset_name = state_data["preset"]
state.history = deque(
- state_data["history"], maxlen=plugin_config.history_size
+ state_data["history"], maxlen=plugin_config.history_size * 2
)
state.last_active = state_data["last_active"]
state.group_prompt = state_data["group_prompt"]
state.output_reasoning_content = state_data["output_reasoning_content"]
+ state.random_trigger_prob = state_data.get("random_trigger_prob", plugin_config.random_trigger_prob)
group_states[int(gid)] = state
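`process_messages`中新增的工具调用处理遵循标准的 OpenAI function-calling 循环:模型请求工具时,把`assistant`的`tool_calls`与对应的`tool`结果追加进消息列表,再次补全,直到模型不再请求工具。以下是一个自包含的示意(用简化的替身类型代替SDK类型,并非插件的实际代码):

```python
import json
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class ToolCall:
    id: str
    name: str
    arguments: str  # JSON编码的参数,与OpenAI schema一致

@dataclass
class ModelMessage:
    content: Optional[str] = None
    tool_calls: list = field(default_factory=list)

def run_tool_loop(message: ModelMessage,
                  complete: Callable[[list], ModelMessage],
                  call_tool: Callable[[str, dict], str]) -> tuple[list, ModelMessage]:
    """持续追加 assistant 的 tool_calls 与 tool 结果消息,
    直到模型回复中不再包含工具调用为止"""
    new_messages: list = []
    while message.tool_calls:
        new_messages.append({
            "role": "assistant",
            "tool_calls": [{"id": tc.id, "type": "function",
                            "function": {"name": tc.name, "arguments": tc.arguments}}
                           for tc in message.tool_calls],
        })
        for tc in message.tool_calls:
            result = call_tool(tc.name, json.loads(tc.arguments))
            new_messages.append({"role": "tool", "tool_call_id": tc.id,
                                 "content": str(result)})
        message = complete(new_messages)
    # 最终回复作为普通 assistant 消息保存,保证历史记录中 user/assistant 穿插
    new_messages.append({"role": "assistant", "content": message.content})
    return new_messages, message
```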
diff --git a/nonebot_plugin_llmchat/config.py b/nonebot_plugin_llmchat/config.py
old mode 100644
new mode 100755
index c5e4f37..d658875
--- a/nonebot_plugin_llmchat/config.py
+++ b/nonebot_plugin_llmchat/config.py
@@ -10,7 +10,20 @@ class PresetConfig(BaseModel):
model_name: str = Field(..., description="模型名称")
max_tokens: int = Field(2048, description="最大响应token数")
temperature: float = Field(0.7, description="生成温度(0-2]")
+ proxy: str = Field("", description="HTTP代理服务器")
+ support_mcp: bool = Field(False, description="是否支持MCP")
+ support_image: bool = Field(False, description="是否支持图片输入")
+class MCPServerConfig(BaseModel):
+ """MCP服务器配置"""
+ command: str | None = Field(None, description="stdio模式下MCP命令")
+ args: list[str] | None = Field([], description="stdio模式下MCP命令参数")
+ env: dict[str, str] | None = Field({}, description="stdio模式下MCP命令环境变量")
+ url: str | None = Field(None, description="sse模式下MCP服务器地址")
+
+ # 额外字段
+ friendly_name: str | None = Field(None, description="MCP服务器友好名称")
+ additional_prompt: str | None = Field(None, description="额外提示词")
class ScopedConfig(BaseModel):
"""LLM Chat Plugin配置"""
@@ -29,6 +42,12 @@ class ScopedConfig(BaseModel):
"你的回答应该尽量简洁、幽默、可以使用一些语气词、颜文字。你应该拒绝回答任何政治相关的问题。",
description="默认提示词",
)
+ mcp_servers: dict[str, MCPServerConfig] = Field({}, description="MCP服务器配置")
+ blacklist_user_ids: set[int] = Field(set(), description="黑名单用户ID列表")
+ ignore_prefixes: list[str] = Field(
+ default_factory=list,
+ description="需要忽略的消息前缀列表,匹配到这些前缀的消息不会处理"
+ )
class Config(BaseModel):
diff --git a/nonebot_plugin_llmchat/mcpclient.py b/nonebot_plugin_llmchat/mcpclient.py
new file mode 100644
index 0000000..55e1b44
--- /dev/null
+++ b/nonebot_plugin_llmchat/mcpclient.py
@@ -0,0 +1,83 @@
+import asyncio
+from contextlib import AsyncExitStack
+
+from mcp import ClientSession, StdioServerParameters
+from mcp.client.sse import sse_client
+from mcp.client.stdio import stdio_client
+from nonebot import logger
+
+from .config import MCPServerConfig
+
+
+class MCPClient:
+ def __init__(self, server_config: dict[str, MCPServerConfig]):
+ logger.info(f"正在初始化MCPClient,共有{len(server_config)}个服务器配置")
+ self.server_config = server_config
+ self.sessions = {}
+ self.exit_stack = AsyncExitStack()
+ logger.debug("MCPClient初始化成功")
+
+ async def connect_to_servers(self):
+ logger.info(f"开始连接{len(self.server_config)}个MCP服务器")
+ for server_name, config in self.server_config.items():
+ logger.debug(f"正在连接服务器[{server_name}]")
+ if config.url:
+ sse_transport = await self.exit_stack.enter_async_context(sse_client(url=config.url))
+ read, write = sse_transport
+ self.sessions[server_name] = await self.exit_stack.enter_async_context(ClientSession(read, write))
+ await self.sessions[server_name].initialize()
+ elif config.command:
+ stdio_transport = await self.exit_stack.enter_async_context(
+ stdio_client(StdioServerParameters(**config.model_dump()))
+ )
+ read, write = stdio_transport
+ self.sessions[server_name] = await self.exit_stack.enter_async_context(ClientSession(read, write))
+ await self.sessions[server_name].initialize()
+ else:
+ raise ValueError("Server config must have either url or command")
+
+ logger.info(f"已成功连接到MCP服务器[{server_name}]")
+
+ async def get_available_tools(self):
+ logger.info(f"正在从{len(self.sessions)}个已连接的服务器获取可用工具")
+ available_tools = []
+
+ for server_name, session in self.sessions.items():
+ logger.debug(f"正在列出服务器[{server_name}]中的工具")
+ response = await session.list_tools()
+ tools = response.tools
+ logger.debug(f"在服务器[{server_name}]中找到{len(tools)}个工具")
+
+ available_tools.extend(
+ {
+ "type": "function",
+ "function": {
+ "name": f"{server_name}___{tool.name}",
+ "description": tool.description,
+ "parameters": tool.inputSchema,
+ },
+ }
+ for tool in tools
+ )
+ return available_tools
+
+ async def call_tool(self, tool_name: str, tool_args: dict):
+ server_name, real_tool_name = tool_name.split("___")
+ logger.info(f"正在服务器[{server_name}]上调用工具[{real_tool_name}]")
+ session = self.sessions[server_name]
+ try:
+ response = await asyncio.wait_for(session.call_tool(real_tool_name, tool_args), timeout=30)
+ except asyncio.TimeoutError:
+ logger.error(f"调用工具[{real_tool_name}]超时")
+ return f"调用工具[{real_tool_name}]超时"
+ logger.debug(f"工具[{real_tool_name}]调用完成,响应: {response}")
+ return response.content
+
+ def get_friendly_name(self, tool_name: str):
+ server_name, real_tool_name = tool_name.split("___")
+ return (self.server_config[server_name].friendly_name or server_name) + " - " + real_tool_name
+
+ async def cleanup(self):
+ logger.debug("正在清理MCPClient资源")
+ await self.exit_stack.aclose()
+ logger.debug("MCPClient资源清理完成")
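`MCPClient`把所有服务器的工具拍平成一份OpenAI工具列表,靠`<server>___<tool>`前缀在调用时路由回对应的MCP会话。以下为该命名方案的示意(辅助函数为说明用途自拟,并非类的成员):

```python
SEP = "___"

def qualify(server: str, tool: str) -> str:
    # 发送给LLM的是一份拍平的工具列表,因此工具名携带所属服务器前缀
    return f"{server}{SEP}{tool}"

def resolve(qualified: str) -> tuple[str, str]:
    # partition() 只在第一个分隔符处切分,
    # 即使工具名本身包含 "___" 也能解析出正确的服务器名
    server, _, tool = qualified.partition(SEP)
    return server, tool
```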
diff --git a/poetry.lock b/poetry.lock
index 4cc2b18..50d98ce 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry 2.0.1 and should not be changed by hand.
+# This file is automatically @generated by Poetry 2.1.2 and should not be changed by hand.
[[package]]
name = "aiofiles"
@@ -44,7 +44,7 @@ typing_extensions = {version = ">=4.5", markers = "python_version < \"3.13\""}
[package.extras]
doc = ["Sphinx (>=7.4,<8.0)", "packaging", "sphinx-autodoc-typehints (>=1.2.0)", "sphinx_rtd_theme"]
-test = ["anyio[trio]", "coverage[toml] (>=7)", "exceptiongroup (>=1.2.0)", "hypothesis (>=4.0)", "psutil (>=5.9)", "pytest (>=7.0)", "trustme", "truststore (>=0.9.1)", "uvloop (>=0.21)"]
+test = ["anyio[trio]", "coverage[toml] (>=7)", "exceptiongroup (>=1.2.0)", "hypothesis (>=4.0)", "psutil (>=5.9)", "pytest (>=7.0)", "trustme", "truststore (>=0.9.1) ; python_version >= \"3.10\"", "uvloop (>=0.21) ; platform_python_implementation == \"CPython\" and platform_system != \"Windows\" and python_version < \"3.14\""]
trio = ["trio (>=0.26.1)"]
[[package]]
@@ -70,7 +70,7 @@ mongodb = ["pymongo (>=3.0)"]
redis = ["redis (>=3.0)"]
rethinkdb = ["rethinkdb (>=2.4.0)"]
sqlalchemy = ["sqlalchemy (>=1.4)"]
-test = ["APScheduler[etcd,mongodb,redis,rethinkdb,sqlalchemy,tornado,zookeeper]", "PySide6", "anyio (>=4.5.2)", "gevent", "pytest", "pytz", "twisted"]
+test = ["APScheduler[etcd,mongodb,redis,rethinkdb,sqlalchemy,tornado,zookeeper]", "PySide6 ; platform_python_implementation == \"CPython\" and python_version < \"3.14\"", "anyio (>=4.5.2)", "gevent ; python_version < \"3.14\"", "pytest", "pytz", "twisted ; python_version < \"3.14\""]
tornado = ["tornado (>=4.3)"]
twisted = ["twisted"]
zookeeper = ["kazoo"]
@@ -99,6 +99,21 @@ files = [
{file = "cfgv-3.4.0.tar.gz", hash = "sha256:e52591d4c5f5dead8e0f673fb16db7949d2cfb3f7da4582893288f0ded8fe560"},
]
+[[package]]
+name = "click"
+version = "8.1.8"
+description = "Composable command line interface toolkit"
+optional = false
+python-versions = ">=3.7"
+groups = ["main"]
+files = [
+ {file = "click-8.1.8-py3-none-any.whl", hash = "sha256:63c132bbbed01578a06712a2d1f497bb62d9c1c0d329b7903a866228027263b2"},
+ {file = "click-8.1.8.tar.gz", hash = "sha256:ed53c9d8990d83c2a27deae68e4ee337473f6330c040a31d4225c9574d16096a"},
+]
+
+[package.dependencies]
+colorama = {version = "*", markers = "platform_system == \"Windows\""}
+
[[package]]
name = "colorama"
version = "0.4.6"
@@ -166,7 +181,7 @@ files = [
[package.extras]
docs = ["furo (>=2024.8.6)", "sphinx (>=8.1.3)", "sphinx-autodoc-typehints (>=3)"]
testing = ["covdefaults (>=2.3)", "coverage (>=7.6.10)", "diff-cover (>=9.2.1)", "pytest (>=8.3.4)", "pytest-asyncio (>=0.25.2)", "pytest-cov (>=6)", "pytest-mock (>=3.14)", "pytest-timeout (>=2.3.1)", "virtualenv (>=20.28.1)"]
-typing = ["typing-extensions (>=4.12.2)"]
+typing = ["typing-extensions (>=4.12.2) ; python_version < \"3.11\""]
[[package]]
name = "h11"
@@ -221,12 +236,24 @@ httpcore = "==1.*"
idna = "*"
[package.extras]
-brotli = ["brotli", "brotlicffi"]
+brotli = ["brotli ; platform_python_implementation == \"CPython\"", "brotlicffi ; platform_python_implementation != \"CPython\""]
cli = ["click (==8.*)", "pygments (==2.*)", "rich (>=10,<14)"]
http2 = ["h2 (>=3,<5)"]
socks = ["socksio (==1.*)"]
zstd = ["zstandard (>=0.18.0)"]
+[[package]]
+name = "httpx-sse"
+version = "0.4.0"
+description = "Consume Server-Sent Event (SSE) messages with HTTPX."
+optional = false
+python-versions = ">=3.8"
+groups = ["main"]
+files = [
+ {file = "httpx-sse-0.4.0.tar.gz", hash = "sha256:1e81a3a3070ce322add1d3529ed42eb5f70817f45ed6ec915ab753f961139721"},
+ {file = "httpx_sse-0.4.0-py3-none-any.whl", hash = "sha256:f329af6eae57eaa2bdfd962b42524764af68075ea87370a2de920af5341e318f"},
+]
+
[[package]]
name = "identify"
version = "2.6.7"
@@ -360,7 +387,34 @@ colorama = {version = ">=0.3.4", markers = "sys_platform == \"win32\""}
win32-setctime = {version = ">=1.0.0", markers = "sys_platform == \"win32\""}
[package.extras]
-dev = ["Sphinx (==8.1.3)", "build (==1.2.2)", "colorama (==0.4.5)", "colorama (==0.4.6)", "exceptiongroup (==1.1.3)", "freezegun (==1.1.0)", "freezegun (==1.5.0)", "mypy (==v0.910)", "mypy (==v0.971)", "mypy (==v1.13.0)", "mypy (==v1.4.1)", "myst-parser (==4.0.0)", "pre-commit (==4.0.1)", "pytest (==6.1.2)", "pytest (==8.3.2)", "pytest-cov (==2.12.1)", "pytest-cov (==5.0.0)", "pytest-cov (==6.0.0)", "pytest-mypy-plugins (==1.9.3)", "pytest-mypy-plugins (==3.1.0)", "sphinx-rtd-theme (==3.0.2)", "tox (==3.27.1)", "tox (==4.23.2)", "twine (==6.0.1)"]
+dev = ["Sphinx (==8.1.3) ; python_version >= \"3.11\"", "build (==1.2.2) ; python_version >= \"3.11\"", "colorama (==0.4.5) ; python_version < \"3.8\"", "colorama (==0.4.6) ; python_version >= \"3.8\"", "exceptiongroup (==1.1.3) ; python_version >= \"3.7\" and python_version < \"3.11\"", "freezegun (==1.1.0) ; python_version < \"3.8\"", "freezegun (==1.5.0) ; python_version >= \"3.8\"", "mypy (==v0.910) ; python_version < \"3.6\"", "mypy (==v0.971) ; python_version == \"3.6\"", "mypy (==v1.13.0) ; python_version >= \"3.8\"", "mypy (==v1.4.1) ; python_version == \"3.7\"", "myst-parser (==4.0.0) ; python_version >= \"3.11\"", "pre-commit (==4.0.1) ; python_version >= \"3.9\"", "pytest (==6.1.2) ; python_version < \"3.8\"", "pytest (==8.3.2) ; python_version >= \"3.8\"", "pytest-cov (==2.12.1) ; python_version < \"3.8\"", "pytest-cov (==5.0.0) ; python_version == \"3.8\"", "pytest-cov (==6.0.0) ; python_version >= \"3.9\"", "pytest-mypy-plugins (==1.9.3) ; python_version >= \"3.6\" and python_version < \"3.8\"", "pytest-mypy-plugins (==3.1.0) ; python_version >= \"3.8\"", "sphinx-rtd-theme (==3.0.2) ; python_version >= \"3.11\"", "tox (==3.27.1) ; python_version < \"3.8\"", "tox (==4.23.2) ; python_version >= \"3.8\"", "twine (==6.0.1) ; python_version >= \"3.11\""]
+
+[[package]]
+name = "mcp"
+version = "1.6.0"
+description = "Model Context Protocol SDK"
+optional = false
+python-versions = ">=3.10"
+groups = ["main"]
+files = [
+ {file = "mcp-1.6.0-py3-none-any.whl", hash = "sha256:7bd24c6ea042dbec44c754f100984d186620d8b841ec30f1b19eda9b93a634d0"},
+ {file = "mcp-1.6.0.tar.gz", hash = "sha256:d9324876de2c5637369f43161cd71eebfd803df5a95e46225cab8d280e366723"},
+]
+
+[package.dependencies]
+anyio = ">=4.5"
+httpx = ">=0.27"
+httpx-sse = ">=0.4"
+pydantic = ">=2.7.2,<3.0.0"
+pydantic-settings = ">=2.5.2"
+sse-starlette = ">=1.6.1"
+starlette = ">=0.27"
+uvicorn = ">=0.23.1"
+
+[package.extras]
+cli = ["python-dotenv (>=1.0.0)", "typer (>=0.12.4)"]
+rich = ["rich (>=13.9.4)"]
+ws = ["websockets (>=15.0.1)"]
[[package]]
name = "msgpack"
@@ -867,7 +921,7 @@ typing-extensions = ">=4.12.2"
[package.extras]
email = ["email-validator (>=2.0.0)"]
-timezone = ["tzdata"]
+timezone = ["tzdata ; python_version >= \"3.9\" and platform_system == \"Windows\""]
[[package]]
name = "pydantic-core"
@@ -982,6 +1036,30 @@ files = [
[package.dependencies]
typing-extensions = ">=4.6.0,<4.7.0 || >4.7.0"
+[[package]]
+name = "pydantic-settings"
+version = "2.9.1"
+description = "Settings management using Pydantic"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "pydantic_settings-2.9.1-py3-none-any.whl", hash = "sha256:59b4f431b1defb26fe620c71a7d3968a710d719f5f4cdbbdb7926edeb770f6ef"},
+ {file = "pydantic_settings-2.9.1.tar.gz", hash = "sha256:c509bf79d27563add44e8446233359004ed85066cd096d8b510f715e6ef5d268"},
+]
+
+[package.dependencies]
+pydantic = ">=2.7.0"
+python-dotenv = ">=0.21.0"
+typing-inspection = ">=0.4.0"
+
+[package.extras]
+aws-secrets-manager = ["boto3 (>=1.35.0)", "boto3-stubs[secretsmanager]"]
+azure-key-vault = ["azure-identity (>=1.16.0)", "azure-keyvault-secrets (>=4.8.0)"]
+gcp-secret-manager = ["google-cloud-secret-manager (>=2.23.1)"]
+toml = ["tomli (>=2.0.1)"]
+yaml = ["pyyaml (>=6.0.1)"]
+
[[package]]
name = "pygtrie"
version = "2.5.0"
@@ -1112,6 +1190,44 @@ files = [
{file = "sniffio-1.3.1.tar.gz", hash = "sha256:f4324edc670a0f49750a81b895f35c3adb843cca46f0530f79fc1babb23789dc"},
]
+[[package]]
+name = "sse-starlette"
+version = "2.3.3"
+description = "SSE plugin for Starlette"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "sse_starlette-2.3.3-py3-none-any.whl", hash = "sha256:8b0a0ced04a329ff7341b01007580dd8cf71331cc21c0ccea677d500618da1e0"},
+ {file = "sse_starlette-2.3.3.tar.gz", hash = "sha256:fdd47c254aad42907cfd5c5b83e2282be15be6c51197bf1a9b70b8e990522072"},
+]
+
+[package.dependencies]
+anyio = ">=4.7.0"
+starlette = ">=0.41.3"
+
+[package.extras]
+examples = ["fastapi"]
+uvicorn = ["uvicorn (>=0.34.0)"]
+
+[[package]]
+name = "starlette"
+version = "0.46.2"
+description = "The little ASGI library that shines."
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "starlette-0.46.2-py3-none-any.whl", hash = "sha256:595633ce89f8ffa71a015caed34a5b2dc1c0cdb3f0f1fbd1e69339cf2abeec35"},
+ {file = "starlette-0.46.2.tar.gz", hash = "sha256:7f7361f34eed179294600af672f565727419830b54b7b084efe44bb82d2fccd5"},
+]
+
+[package.dependencies]
+anyio = ">=3.6.2,<5"
+
+[package.extras]
+full = ["httpx (>=0.27.0,<0.29.0)", "itsdangerous", "jinja2", "python-multipart (>=0.0.18)", "pyyaml"]
+
[[package]]
name = "tomli"
version = "2.2.1"
@@ -1119,7 +1235,7 @@ description = "A lil' TOML parser"
optional = false
python-versions = ">=3.8"
groups = ["main"]
-markers = "python_version < \"3.11\""
+markers = "python_version == \"3.10\""
files = [
{file = "tomli-2.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678e4fa69e4575eb77d103de3df8a895e1591b48e740211bd1067378c69e8249"},
{file = "tomli-2.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:023aa114dd824ade0100497eb2318602af309e5a55595f76b626d6d9f3b7b0a6"},
@@ -1189,6 +1305,21 @@ files = [
{file = "typing_extensions-4.12.2.tar.gz", hash = "sha256:1a7ead55c7e559dd4dee8856e3a88b41225abfe1ce8df57b7c13915fe121ffb8"},
]
+[[package]]
+name = "typing-inspection"
+version = "0.4.0"
+description = "Runtime typing introspection tools"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "typing_inspection-0.4.0-py3-none-any.whl", hash = "sha256:50e72559fcd2a6367a19f7a7e610e6afcb9fac940c650290eed893d61386832f"},
+ {file = "typing_inspection-0.4.0.tar.gz", hash = "sha256:9765c87de36671694a67904bf2c96e395be9c6439bb6c87b5142569dcdd65122"},
+]
+
+[package.dependencies]
+typing-extensions = ">=4.12.0"
+
[[package]]
name = "tzdata"
version = "2025.1"
@@ -1220,6 +1351,26 @@ tzdata = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
devenv = ["check-manifest", "pytest (>=4.3)", "pytest-cov", "pytest-mock (>=3.3)", "zest.releaser"]
+[[package]]
+name = "uvicorn"
+version = "0.34.2"
+description = "The lightning-fast ASGI server."
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "uvicorn-0.34.2-py3-none-any.whl", hash = "sha256:deb49af569084536d269fe0a6d67e3754f104cf03aba7c11c40f01aadf33c403"},
+ {file = "uvicorn-0.34.2.tar.gz", hash = "sha256:0e929828f6186353a80b58ea719861d2629d766293b6d19baf086ba31d4f3328"},
+]
+
+[package.dependencies]
+click = ">=7.0"
+h11 = ">=0.8"
+typing-extensions = {version = ">=4.0", markers = "python_version < \"3.11\""}
+
+[package.extras]
+standard = ["colorama (>=0.4) ; sys_platform == \"win32\"", "httptools (>=0.6.3)", "python-dotenv (>=0.13)", "pyyaml (>=5.1)", "uvloop (>=0.14.0,!=0.15.0,!=0.15.1) ; sys_platform != \"win32\" and sys_platform != \"cygwin\" and platform_python_implementation != \"PyPy\"", "watchfiles (>=0.13)", "websockets (>=10.4)"]
+
[[package]]
name = "virtualenv"
version = "20.29.2"
@@ -1239,7 +1390,7 @@ platformdirs = ">=3.9.1,<5"
[package.extras]
docs = ["furo (>=2023.7.26)", "proselint (>=0.13)", "sphinx (>=7.1.2,!=7.3)", "sphinx-argparse (>=0.4)", "sphinxcontrib-towncrier (>=0.2.1a0)", "towncrier (>=23.6)"]
-test = ["covdefaults (>=2.3)", "coverage (>=7.2.7)", "coverage-enable-subprocess (>=1)", "flaky (>=3.7)", "packaging (>=23.1)", "pytest (>=7.4)", "pytest-env (>=0.8.2)", "pytest-freezer (>=0.4.8)", "pytest-mock (>=3.11.1)", "pytest-randomly (>=3.12)", "pytest-timeout (>=2.1)", "setuptools (>=68)", "time-machine (>=2.10)"]
+test = ["covdefaults (>=2.3)", "coverage (>=7.2.7)", "coverage-enable-subprocess (>=1)", "flaky (>=3.7)", "packaging (>=23.1)", "pytest (>=7.4)", "pytest-env (>=0.8.2)", "pytest-freezer (>=0.4.8) ; platform_python_implementation == \"PyPy\" or platform_python_implementation == \"CPython\" and sys_platform == \"win32\" and python_version >= \"3.13\"", "pytest-mock (>=3.11.1)", "pytest-randomly (>=3.12)", "pytest-timeout (>=2.1)", "setuptools (>=68)", "time-machine (>=2.10) ; platform_python_implementation == \"CPython\""]
[[package]]
name = "wcwidth"
@@ -1267,7 +1418,7 @@ files = [
]
[package.extras]
-dev = ["black (>=19.3b0)", "pytest (>=4.6.2)"]
+dev = ["black (>=19.3b0) ; python_version >= \"3.6\"", "pytest (>=4.6.2)"]
[[package]]
name = "yarl"
@@ -1368,5 +1519,5 @@ propcache = ">=0.2.0"
[metadata]
lock-version = "2.1"
-python-versions = "^3.9"
-content-hash = "5675eb652e3b158a0e30e448971b218da514dae36513a5ab99ea5c2f7a216a05"
+python-versions = "^3.10"
+content-hash = "c33b411db9144768bcd4d912397c3a9789dd34edfc67b8e1458b00d2a2e2733a"
diff --git a/pyproject.toml b/pyproject.toml
index 73df72e..7c17df2 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "nonebot-plugin-llmchat"
-version = "0.1.8"
+version = "0.2.5"
description = "Nonebot AI group chat plugin supporting multiple API preset configurations"
license = "GPL"
authors = ["FuQuan i@fuquan.moe"]
@@ -11,13 +11,14 @@ documentation = "https://github.com/FuQuan233/nonebot-plugin-llmchat#readme"
keywords = ["nonebot", "nonebot2", "llm", "ai"]
[tool.poetry.dependencies]
-python = "^3.9"
+python = "^3.10"
openai = ">=1.0.0"
nonebot2 = "^2.2.0"
aiofiles = ">=24.0.0"
nonebot-plugin-apscheduler = "^0.5.0"
nonebot-adapter-onebot = "^2.0.0"
nonebot-plugin-localstore = "^0.7.3"
+mcp = "^1.6.0"
[tool.poetry.group.dev.dependencies]
ruff = "^0.8.0"
@@ -26,7 +27,7 @@ pre-commit = "^4.0.0"
[tool.ruff]
line-length = 130
-target-version = "py39"
+target-version = "py310"
[tool.ruff.format]
line-ending = "lf"
@@ -64,7 +65,7 @@ force-sort-within-sections = true
keep-runtime-typing = true
[tool.pyright]
-pythonVersion = "3.9"
+pythonVersion = "3.10"
pythonPlatform = "All"
defineConstant = { PYDANTIC_V2 = true }
executionEnvironments = [{ root = "./" }]