Compare commits

...

48 commits
v0.1.8 ... main

Author SHA1 Message Date
d640f16abe 🔖 bump llmchat version 0.2.5 2025-09-01 10:56:31 +08:00
1600cba172 Support ignoring messages with specific prefixes #21 2025-09-01 10:51:30 +08:00
9f81a38d5b 🐛 Extend MCP timeout to 30 seconds to avoid execution failures 2025-09-01 10:45:18 +08:00
53d57beba3 🔖 bump llmchat version 0.2.4 2025-08-20 12:48:13 +08:00
ea635fd147 🐛 Fix duplicate messages being sent to the LLM 2025-08-20 12:38:39 +08:00
5014d3014b 🐛 Fix hang caused by a stuck MCP server 2025-08-20 11:40:54 +08:00
89baec6abc 📘 Update README 2025-05-19 14:17:25 +08:00
19ff0026c0 🐛 Fix "deque mutated during iteration" error 2025-05-16 21:43:08 +08:00
52ada66616 🔖 bump llmchat version 0.2.3 2025-05-13 14:02:23 +08:00
cf2d549f02 📘 Update meta information 2025-05-13 14:02:03 +08:00
6c27cf56fa 🐛 Fix commands themselves triggering replies 2025-05-13 13:43:06 +08:00
3d85ea90ef 🐛 Fix only the last message's images being processed when handling multiple messages 2025-05-13 13:41:28 +08:00
7edd7c913e 🐛 Fix replies not being split into segments during MCP calls 2025-05-13 11:23:52 +08:00
84d3851936 🐛 Fix image URL not found on some protocol clients 2025-05-12 15:26:39 +08:00
ee2a045116 🔖 bump llmchat version 0.2.2 2025-05-11 15:45:57 +08:00
6f69cc3cff Support user blacklist #20 2025-05-11 15:42:13 +08:00
ed1b9792e7 📘 Update README 2025-05-11 15:05:26 +08:00
FuQuan 0ddf8e5626 Merge pull request #19 from duolanda/main: support vision models 2025-05-11 14:51:14 +08:00
duolanda 5e048c9472 ♻️ fix lint problems 2025-05-11 00:41:05 +08:00
duolanda f2d1521158 support vision models 2025-05-10 22:58:44 +08:00
db9794a18a 🐛 Fix error when the first message in history is not a user message 2025-04-28 20:19:47 +08:00
FuQuan c9c22a8630 📘 Update README 2025-04-27 18:08:50 +08:00
8013df564a 🔖 bump llmchat version 0.2.1 2025-04-27 11:57:34 +08:00
e3973baa37 🐛 Fix assistant messages not being correctly added to history 2025-04-27 11:56:38 +08:00
fd18f41f17 🔖 bump llmchat version 0.2.0 2025-04-26 23:00:14 +08:00
3e9e691faf 📘 Update plugin description 2025-04-26 22:47:40 +08:00
48fe2515e9 📘 Update README 2025-04-26 22:46:32 +08:00
506024c8f5 ♻️ Handle empty MCP server friendly names 2025-04-26 22:45:53 +08:00
318e5b91c1 ♻️ fix lint problems 2025-04-26 22:02:23 +08:00
6aea492281 ♻️ fix lint problems 2025-04-26 21:55:44 +08:00
0f09015042 ♻️ fix lint problems 2025-04-26 21:54:37 +08:00
3c1ac4b68b ♻️ fix lint problems 2025-04-26 00:27:10 +08:00
ed2f9051ef 🐛 add missing dependencies 2025-04-26 00:18:36 +08:00
eb1038e09e support Model Context Protocol (MCP) 2025-04-25 23:52:18 +08:00
dfe3b5308c 🔖 bump llmchat version 0.1.11 2025-03-01 21:57:52 +08:00
17564b5463 🐛 fix error when the history_size was set to an odd number 2025-03-01 21:57:27 +08:00
7408f90b3c 📘 update README #14 2025-02-28 21:09:27 +08:00
da5621abbe ♻️ fix lint problems 2025-02-28 21:06:59 +08:00
f89e41754c ♻️ fix lint problems 2025-02-28 18:27:50 +08:00
05eb132f85 🔖 bump llmchat version 0.1.10 2025-02-28 18:19:43 +08:00
a7b57ae375 support api proxy #14 2025-02-28 18:19:04 +08:00
4af60b8145 🐛 fix random_trigger_prob per group not working 2025-02-28 18:02:15 +08:00
b01f2825d1 🔖 bump llmchat version 0.1.9 2025-02-28 11:31:07 +08:00
405d5d367f ♻️ use logging when forwarding messages fails 2025-02-28 11:28:20 +08:00
a6fa27dd9a 📘 update README 2025-02-28 11:23:36 +08:00
e8db066647 ♻️ code format 2025-02-28 11:21:15 +08:00
6db55055a2 set random_trigger_prob per group 2025-02-28 11:15:22 +08:00
505fab406b 🐛 Fix message history order and add error handling for reasoning content 2025-02-21 18:07:28 +08:00
8 changed files with 549 additions and 90 deletions

View file

@@ -5,7 +5,7 @@ inputs:
python-version:
description: Python version
required: false
default: "3.9"
default: "3.10"
runs:
using: "composite"

README.md
View file

@@ -8,7 +8,7 @@
# nonebot-plugin-llmchat
_✨ AI group-chat plugin with multiple API preset configurations ✨_
_✨ AI group-chat plugin with multiple API presets, MCP protocol, web search, and vision model support ✨_
<a href="./LICENSE">
@@ -17,33 +17,39 @@ _✨ AI group-chat plugin with multiple API preset configurations ✨_
<a href="https://pypi.python.org/pypi/nonebot-plugin-llmchat">
<img src="https://img.shields.io/pypi/v/nonebot-plugin-llmchat.svg" alt="pypi">
</a>
<img src="https://img.shields.io/badge/python-3.9+-blue.svg" alt="python">
<img src="https://img.shields.io/badge/python-3.10+-blue.svg" alt="python">
<a href="https://deepwiki.com/FuQuan233/nonebot-plugin-llmchat"><img src="https://deepwiki.com/badge.svg" alt="Ask DeepWiki"></a>
</div>
## 📖 Introduction
1. **Multiple API presets**
1. **MCP protocol support**
- Connects to any LLM tool that supports the MCP protocol
- Online search is possible by connecting search-capable MCP servers
- The configuration format is largely compatible with Claude.app
2. **Multiple API presets**
- Configure multiple LLM service presets (e.g. different models / API keys)
- Hot-swap API configurations at runtime via the `API预设` command
- Built-in service switch (the preset name `off` disables the plugin)
2. **Multiple reply trigger methods**
3. **Multiple reply trigger methods**
- @mention triggering plus random-probability triggering
- Supports handling reply messages
- Group messages are processed in order to prevent mixed-up conversations
3. **Per-group context memory management**
4. **Per-group context memory management**
- Conversation history is kept per group (number of retained messages is configurable)
- Unprocessed messages are merged automatically to reduce API usage
- The `记忆清除` command manually resets the conversation context
4. **Split-reply support**
5. **Split-reply support**
- Multi-segment replies, with the LLM deciding how to split them
- The LLM can insert @mentions of group members
- Optionally outputs the AI's reasoning process (requires model support)
5. **Customizable personality**
6. **Customizable personality**
- Dynamically modify the group-specific system prompt (`修改设定`)
- Supports a custom default prompt
@@ -100,8 +106,11 @@ _✨ AI group-chat plugin with multiple API preset configurations ✨_
| LLMCHAT__PAST_EVENTS_SIZE | No | 10 | Number of group messages sent when a reply is triggered (1-20); larger values consume more tokens |
| LLMCHAT__REQUEST_TIMEOUT | No | 30 | API request timeout (seconds) |
| LLMCHAT__DEFAULT_PRESET | No | off | Name of the default preset; set to off to disable the plugin |
| LLMCHAT__RANDOM_TRIGGER_PROB | No | 0.05 | Random trigger probability (0-1] |
| LLMCHAT__RANDOM_TRIGGER_PROB | No | 0.05 | Default random trigger probability, [0, 1] |
| LLMCHAT__DEFAULT_PROMPT | No | 你的回答应该尽量简洁、幽默、可以使用一些语气词、颜文字。你应该拒绝回答任何政治相关的问题。 | Default prompt |
| LLMCHAT__BLACKLIST_USER_IDS | No | [] | Blacklisted user IDs; the bot will not process messages from these users |
| LLMCHAT__IGNORE_PREFIXES | No | [] | Message prefixes to ignore; messages matching any of these prefixes are not processed |
| LLMCHAT__MCP_SERVERS | No | {} | MCP server configuration; see the table below |
LLMCHAT__API_PRESETS is a list; each entry supports the following options (see the full example in the details block below):
| Option | Required | Default | Description |
@@ -112,6 +121,24 @@ _✨ AI group-chat plugin with multiple API preset configurations ✨_
| model_name | Yes | None | Model name |
| max_tokens | No | 2048 | Maximum response tokens |
| temperature | No | 0.7 | Sampling temperature |
| proxy | No | None | HTTP proxy used for API requests |
| support_mcp | No | False | Whether the preset supports the MCP protocol |
| support_image | No | False | Whether the preset supports image input |
LLMCHAT__MCP_SERVERS is likewise a dict keyed by server name; the value format is largely compatible with Claude.app's configuration format and supports the following options:
| Option | Required | Default | Description |
|:-----:|:----:|:----:|:----:|
| command | Required for stdio servers | None | MCP command (stdio servers) |
| args | No | [] | MCP command arguments (stdio servers) |
| env | No | {} | Environment variables (stdio servers) |
| url | Required for SSE servers | None | Server address (SSE servers) |
The following fields are additions on top of Claude.app's MCP server configuration:
| Option | Required | Default | Description |
|:-----:|:----:|:----:|:----:|
| friendly_name | No | None | Friendly name, used in the notice sent when the tool is invoked |
| additional_prompt | No | None | Additional prompt about this tool |
<details open>
<summary>Configuration example</summary>
@@ -125,15 +152,38 @@ _✨ AI group-chat plugin with multiple API preset configurations ✨_
"name": "aliyun-deepseek-v3",
"api_key": "sk-your-api-key",
"model_name": "deepseek-v3",
"api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1"
"api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1",
"proxy": "http://10.0.0.183:7890"
},
{
"name": "deepseek-r1",
"name": "deepseek-v1",
"api_key": "sk-your-api-key",
"model_name": "deepseek-reasoner",
"api_base": "https://api.deepseek.com"
"model_name": "deepseek-chat",
"api_base": "https://api.deepseek.com",
"support_mcp": true
},
{
"name": "some-vison-model",
"api_key": "sk-your-api-key",
"model_name": "some-vison-model",
"api_base": "https://some-vison-model.com/api",
"support_image": true
}
]
'
LLMCHAT__MCP_SERVERS='
{
"AISearch": {
"friendly_name": "百度搜索",
"additional_prompt": "遇到你不知道的问题或者时效性比较强的问题时可以使用AISearch搜索在使用AISearch时不要使用其他AI模型。",
"url": "http://appbuilder.baidu.com/v2/ai_search/mcp/sse?api_key=Bearer+<your-api-key>"
},
"fetch": {
"friendly_name": "网页浏览",
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
'
</details>
@@ -142,7 +192,7 @@ _✨ AI group-chat plugin with multiple API preset configurations ✨_
**If `LLMCHAT__DEFAULT_PRESET` is not configured, the plugin starts disabled; use `API预设+[预设名]` (API preset + preset name) to enable it**
Once configured, @mention the bot to trigger a reply manually; the bot will also auto-reply to group messages at random, with the probability configured by `LLMCHAT__RANDOM_TRIGGER_PROB`.
Once configured, @mention the bot to trigger a reply manually; the bot will also auto-reply to group messages at random, with the probability configured by `LLMCHAT__RANDOM_TRIGGER_PROB` or set per group via the corresponding command.
### Command list
@@ -154,6 +204,8 @@ _✨ AI group-chat plugin with multiple API preset configurations ✨_
| 修改设定 | Admin | No | Group chat | prompt text | Modify the bot's persona; it is best to run a memory reset afterwards |
| 记忆清除 | Admin | No | Group chat | None | Clear the bot's memory |
| 切换思维输出 | Admin | No | Group chat | None | Toggle output of the AI's reasoning process (requires model support) |
| 设置主动回复概率 | Admin | No | Group chat | trigger probability | The probability must be a float in [0, 1]; 0 disables auto-replies entirely |
### Screenshots
![](img/demo.png)
![](img/mcp_demo.jpg)
![](img/demo.png)

BIN
img/mcp_demo.jpg Normal file

Binary file not shown.

Size: 404 KiB

271
nonebot_plugin_llmchat/__init__.py Normal file → Executable file
View file

@@ -1,14 +1,17 @@
import asyncio
import base64
from collections import defaultdict, deque
from datetime import datetime
import json
import os
import random
import re
import ssl
import time
from typing import TYPE_CHECKING, Optional
from typing import TYPE_CHECKING
import aiofiles
import httpx
from nonebot import (
get_bot,
get_driver,
@@ -27,6 +30,7 @@ from nonebot.rule import Rule
from openai import AsyncOpenAI
from .config import Config, PresetConfig
from .mcpclient import MCPClient
require("nonebot_plugin_localstore")
import nonebot_plugin_localstore as store
@@ -35,13 +39,14 @@ require("nonebot_plugin_apscheduler")
from nonebot_plugin_apscheduler import scheduler
if TYPE_CHECKING:
from collections.abc import Iterable
from openai.types.chat import ChatCompletionMessageParam
from openai.types.chat import (
ChatCompletionContentPartParam,
ChatCompletionMessageParam,
)
__plugin_meta__ = PluginMetadata(
name="llmchat",
description="支持多API预设配置的AI群聊插件",
description="支持多API预设、MCP协议、联网搜索、视觉模型的AI群聊插件",
usage="""@机器人 + 消息 开启对话""",
type="application",
homepage="https://github.com/FuQuan233/nonebot-plugin-llmchat",
@@ -55,8 +60,8 @@ tasks: set["asyncio.Task"] = set()
def pop_reasoning_content(
content: Optional[str],
) -> tuple[Optional[str], Optional[str]]:
content: str | None,
) -> tuple[str | None, str | None]:
if content is None:
return None, None
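(The body of pop_reasoning_content is elided by this diff. As a rough sketch only, assuming the reasoning is wrapped in <think>…</think> tags; the tag name and regex below are illustrative assumptions, not taken from the source:)

import re

def pop_reasoning_content_sketch(content: str | None) -> tuple[str | None, str | None]:
    # Hypothetical sketch: split "<think>…</think>" reasoning out of a reply,
    # returning (reply_without_reasoning, reasoning_or_None).
    if content is None:
        return None, None
    match = re.search(r"<think>(.*?)</think>", content, re.DOTALL)
    if match is None:
        return content, None
    reply = (content[: match.start()] + content[match.end():]).strip()
    return reply, match.group(1).strip()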
@@ -75,13 +80,14 @@ def pop_reasoning_content(
class GroupState:
def __init__(self):
self.preset_name = plugin_config.default_preset
self.history = deque(maxlen=plugin_config.history_size)
self.history = deque(maxlen=plugin_config.history_size * 2)
self.queue = asyncio.Queue()
self.processing = False
self.last_active = time.time()
self.past_events = deque(maxlen=plugin_config.past_events_size)
self.group_prompt: Optional[str] = None
self.group_prompt: str | None = None
self.output_reasoning_content = False
self.random_trigger_prob = plugin_config.random_trigger_prob
group_states: dict[int, GroupState] = defaultdict(GroupState)
@@ -159,6 +165,16 @@ async def is_triggered(event: GroupMessageEvent) -> bool:
if state.preset_name == "off":
return False
# Skip blacklisted users
if event.user_id in plugin_config.blacklist_user_ids:
return False
# Ignore messages with specific prefixes
msg_text = event.get_plaintext().strip()
for prefix in plugin_config.ignore_prefixes:
if msg_text.startswith(prefix):
return False
state.past_events.append(event)
# Original @mention trigger condition
@@ -166,7 +182,7 @@ async def is_triggered(event: GroupMessageEvent) -> bool:
return True
# Random trigger condition
if random.random() < plugin_config.random_trigger_prob:
if random.random() < state.random_trigger_prob:
return True
return False
@@ -175,7 +191,7 @@ async def is_triggered(event: GroupMessageEvent) -> bool:
# Message handler
handler = on_message(
rule=Rule(is_triggered),
priority=10,
priority=99,
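# runs after the command matchers (priority 1), presumably so commands are not swallowed by the chat handler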
block=False,
)
@@ -196,17 +212,65 @@ async def handle_message(event: GroupMessageEvent):
task.add_done_callback(tasks.discard)
tasks.add(task)
async def process_images(event: GroupMessageEvent) -> list[str]:
base64_images = []
for segment in event.get_message():
if segment.type == "image":
image_url = segment.data.get("url") or segment.data.get("file")
if image_url:
try:
# Work around [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] errors raised by newer httpx versions
ssl_context = ssl.create_default_context()
ssl_context.check_hostname = False
ssl_context.verify_mode = ssl.CERT_NONE
ssl_context.set_ciphers("DEFAULT@SECLEVEL=2")
# Download the image and convert it to base64
async with httpx.AsyncClient(verify=ssl_context) as client:
response = await client.get(image_url, timeout=10.0)
if response.status_code != 200:
logger.error(f"下载图片失败: {image_url}, 状态码: {response.status_code}")
continue
image_data = response.content
base64_data = base64.b64encode(image_data).decode("utf-8")
base64_images.append(base64_data)
except Exception as e:
logger.error(f"处理图片时出错: {e}")
logger.debug(f"共处理 {len(base64_images)} 张图片")
return base64_images
async def send_split_messages(message_handler, content: str):
"""
Split the content on the <botbr> delimiter and send each segment
"""
logger.info(f"准备发送分段消息,分段数:{len(content.split('<botbr>'))}")
for segment in content.split("<botbr>"):
# Skip empty segments
if not segment.strip():
continue
segment = segment.strip()  # strip extra leading/trailing newlines and spaces
await asyncio.sleep(2)  # avoid sending too fast
logger.debug(f"发送消息分段 内容:{segment[:50]}...")  # log only the first 50 characters to keep logs small
await message_handler.send(Message(segment))
async def process_messages(group_id: int):
state = group_states[group_id]
preset = get_preset(group_id)
# Initialize the OpenAI client
client = AsyncOpenAI(
base_url=preset.api_base,
api_key=preset.api_key,
timeout=plugin_config.request_timeout,
)
if preset.proxy != "":
client = AsyncOpenAI(
base_url=preset.api_base,
api_key=preset.api_key,
timeout=plugin_config.request_timeout,
http_client=httpx.AsyncClient(proxy=preset.proxy),
)
else:
client = AsyncOpenAI(
base_url=preset.api_base,
api_key=preset.api_key,
timeout=plugin_config.request_timeout,
)
logger.info(
f"开始处理群聊消息 群号:{group_id} 当前队列长度:{state.queue.qsize()}"
@@ -232,78 +296,141 @@ async def process_messages(group_id: int):
下面是关于你性格的设定如果设定中提到让你扮演某个人或者设定中有提到名字则优先使用设定中的名字
{state.group_prompt or plugin_config.default_prompt}
"""
if preset.support_mcp:
systemPrompt += "你也可以使用一些工具,下面是关于这些工具的额外说明:\n"
for mcp_name, mcp_config in plugin_config.mcp_servers.items():
if mcp_config.addtional_prompt:
systemPrompt += f"{mcp_name}{mcp_config.addtional_prompt}"
systemPrompt += "\n"
messages: Iterable[ChatCompletionMessageParam] = [
messages: list[ChatCompletionMessageParam] = [
{"role": "system", "content": systemPrompt}
]
messages += list(state.history)[-plugin_config.history_size :]
while len(state.history) > 0 and state.history[0]["role"] != "user":
state.history.popleft()
messages += list(state.history)[-plugin_config.history_size * 2 :]
# No unprocessed messages means they were already handled; skip
if len(state.past_events) < 1:
break
content: list[ChatCompletionContentPartParam] = []
# Push the messages the bot missed to the LLM
content = ",".join([format_message(ev) for ev in state.past_events])
past_events_snapshot = list(state.past_events)
for ev in past_events_snapshot:
text_content = format_message(ev)
content.append({"type": "text", "text": text_content})
# Convert images in the message to base64
if preset.support_image:
base64_images = await process_images(ev)
for base64_image in base64_images:
content.append({"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"}})
new_messages: list[ChatCompletionMessageParam] = [
{"role": "user", "content": content}
]
logger.debug(
f"发送API请求 模型:{preset.model_name} 历史消息数:{len(messages)}"
)
client_config = {
"model": preset.model_name,
"max_tokens": preset.max_tokens,
"temperature": preset.temperature,
"timeout": 60,
}
mcp_client = MCPClient(plugin_config.mcp_servers)
if preset.support_mcp:
await mcp_client.connect_to_servers()
available_tools = await mcp_client.get_available_tools()
client_config["tools"] = available_tools
response = await client.chat.completions.create(
model=preset.model_name,
messages=[*messages, {"role": "user", "content": content}],
max_tokens=preset.max_tokens,
temperature=preset.temperature,
timeout=60,
**client_config,
messages=messages + new_messages,
)
if response.usage is not None:
logger.debug(f"收到API响应 使用token数{response.usage.total_tokens}")
# Save history only after the request succeeds, keeping user and assistant messages interleaved to avoid errors with R1-style models
state.history.append({"role": "user", "content": content})
state.past_events.clear()
message = response.choices[0].message
# Process the response, handling any tool calls
while preset.support_mcp and message.tool_calls:
new_messages.append({
"role": "assistant",
"tool_calls": [tool_call.model_dump() for tool_call in message.tool_calls]
})
# Send any reply the LLM produced alongside the tool calls (usually none)
if message.content:
await send_split_messages(handler, message.content)
# Handle each tool call
for tool_call in message.tool_calls:
tool_name = tool_call.function.name
tool_args = json.loads(tool_call.function.arguments)
# Send a notice that a tool is being invoked
await handler.send(Message(f"正在使用{mcp_client.get_friendly_name(tool_name)}"))
# Execute the tool call
result = await mcp_client.call_tool(tool_name, tool_args)
new_messages.append({
"role": "tool",
"tool_call_id": tool_call.id,
"content": str(result)
})
# Hand the tool results back to the LLM
response = await client.chat.completions.create(
**client_config,
messages=messages + new_messages,
)
message = response.choices[0].message
await mcp_client.cleanup()
reply, matched_reasoning_content = pop_reasoning_content(
response.choices[0].message.content
)
reasoning_content: Optional[str] = (
reasoning_content: str | None = (
getattr(response.choices[0].message, "reasoning_content", None)
or matched_reasoning_content
)
new_messages.append({
"role": "assistant",
"content": reply,
})
# Save history only after the request succeeds, keeping user and assistant messages interleaved to avoid errors with R1-style models
for message in new_messages:
state.history.append(message)
state.past_events.clear()
if state.output_reasoning_content and reasoning_content:
bot = get_bot(str(event.self_id))
await bot.send_group_forward_msg(
group_id=group_id,
messages=build_reasoning_forward_nodes(
bot.self_id, reasoning_content
),
)
try:
bot = get_bot(str(event.self_id))
await bot.send_group_forward_msg(
group_id=group_id,
messages=build_reasoning_forward_nodes(
bot.self_id, reasoning_content
),
)
except Exception as e:
logger.error(f"合并转发消息发送失败:\n{e!s}\n")
assert reply is not None
logger.info(
f"准备发送回复消息 群号:{group_id} 消息分段数:{len(reply.split('<botbr>'))}"
)
for r in reply.split("<botbr>"):
# Empty segments can apparently occur and cause a "string index out of range" error
if len(r) == 0 or r.isspace():
continue
# Strip extra leading/trailing newlines and spaces
r = r.strip()
await asyncio.sleep(2)
logger.debug(
f"发送消息分段 内容:{r[:50]}..."
)  # log only the first 50 characters to keep logs small
await handler.send(Message(r))
# Append the assistant reply to history
state.history.append(
{
"role": "assistant",
"content": reply,
}
)
await send_split_messages(handler, reply)
except Exception as e:
logger.opt(exception=e).error(f"API请求失败 群号:{group_id}")
@@ -357,7 +484,7 @@ async def handle_edit_preset(event: GroupMessageEvent, args: Message = CommandArg(
reset_handler = on_command(
"记忆清除",
priority=99,
priority=1,
block=True,
permission=(SUPERUSER | GROUP_ADMIN | GROUP_OWNER),
)
@@ -372,6 +499,30 @@ async def handle_reset(event: GroupMessageEvent, args: Message = CommandArg()):
await reset_handler.finish("记忆已清空")
set_prob_handler = on_command(
"设置主动回复概率",
priority=1,
block=True,
permission=(SUPERUSER | GROUP_ADMIN | GROUP_OWNER),
)
@set_prob_handler.handle()
async def handle_set_prob(event: GroupMessageEvent, args: Message = CommandArg()):
group_id = event.group_id
prob = 0
try:
prob = float(args.extract_plain_text().strip())
if prob < 0 or prob > 1:
raise ValueError
except Exception as e:
await reset_handler.finish(f"输入有误,请使用 [0,1] 的浮点数\n{e!s}")
group_states[group_id].random_trigger_prob = prob
await reset_handler.finish(f"主动回复概率已设为 {prob}")
# Reasoning-output toggle command
think_handler = on_command(
"切换思维输出",
@@ -409,6 +560,7 @@ async def save_state():
"last_active": state.last_active,
"group_prompt": state.group_prompt,
"output_reasoning_content": state.output_reasoning_content,
"random_trigger_prob": state.random_trigger_prob,
}
for gid, state in group_states.items()
}
@@ -430,11 +582,12 @@ async def load_state():
state = GroupState()
state.preset_name = state_data["preset"]
state.history = deque(
state_data["history"], maxlen=plugin_config.history_size
state_data["history"], maxlen=plugin_config.history_size * 2
)
state.last_active = state_data["last_active"]
state.group_prompt = state_data["group_prompt"]
state.output_reasoning_content = state_data["output_reasoning_content"]
state.random_trigger_prob = state_data.get("random_trigger_prob", plugin_config.random_trigger_prob)
group_states[int(gid)] = state

19
nonebot_plugin_llmchat/config.py Normal file → Executable file
View file

@@ -10,7 +10,20 @@ class PresetConfig(BaseModel):
model_name: str = Field(..., description="模型名称")
max_tokens: int = Field(2048, description="最大响应token数")
temperature: float = Field(0.7, description="生成温度0-2]")
proxy: str = Field("", description="HTTP代理服务器")
support_mcp: bool = Field(False, description="是否支持MCP")
support_image: bool = Field(False, description="是否支持图片输入")
class MCPServerConfig(BaseModel):
"""MCP服务器配置"""
command: str | None = Field(None, description="stdio模式下MCP命令")
args: list[str] | None = Field([], description="stdio模式下MCP命令参数")
env: dict[str, str] | None = Field({}, description="stdio模式下MCP命令环境变量")
url: str | None = Field(None, description="sse模式下MCP服务器地址")
# Extra fields
friendly_name: str | None = Field(None, description="MCP服务器友好名称")
addtional_prompt: str | None = Field(None, description="额外提示词")
class ScopedConfig(BaseModel):
"""LLM Chat Plugin配置"""
@@ -29,6 +42,12 @@ class ScopedConfig(BaseModel):
"你的回答应该尽量简洁、幽默、可以使用一些语气词、颜文字。你应该拒绝回答任何政治相关的问题。",
description="默认提示词",
)
mcp_servers: dict[str, MCPServerConfig] = Field({}, description="MCP服务器配置")
blacklist_user_ids: set[int] = Field(set(), description="黑名单用户ID列表")
ignore_prefixes: list[str] = Field(
default_factory=list,
description="需要忽略的消息前缀列表,匹配到这些前缀的消息不会处理"
)
class Config(BaseModel):

83
nonebot_plugin_llmchat/mcpclient.py Normal file
View file

@@ -0,0 +1,83 @@
import asyncio
from contextlib import AsyncExitStack
from mcp import ClientSession, StdioServerParameters
from mcp.client.sse import sse_client
from mcp.client.stdio import stdio_client
from nonebot import logger
from .config import MCPServerConfig
class MCPClient:
def __init__(self, server_config: dict[str, MCPServerConfig]):
logger.info(f"正在初始化MCPClient共有{len(server_config)}个服务器配置")
self.server_config = server_config
self.sessions = {}
self.exit_stack = AsyncExitStack()
logger.debug("MCPClient初始化成功")
async def connect_to_servers(self):
logger.info(f"开始连接{len(self.server_config)}个MCP服务器")
for server_name, config in self.server_config.items():
logger.debug(f"正在连接服务器[{server_name}]")
if config.url:
sse_transport = await self.exit_stack.enter_async_context(sse_client(url=config.url))
read, write = sse_transport
self.sessions[server_name] = await self.exit_stack.enter_async_context(ClientSession(read, write))
await self.sessions[server_name].initialize()
elif config.command:
stdio_transport = await self.exit_stack.enter_async_context(
stdio_client(StdioServerParameters(**config.model_dump()))
)
read, write = stdio_transport
self.sessions[server_name] = await self.exit_stack.enter_async_context(ClientSession(read, write))
await self.sessions[server_name].initialize()
else:
raise ValueError("Server config must have either url or command")
logger.info(f"已成功连接到MCP服务器[{server_name}]")
async def get_available_tools(self):
logger.info(f"正在从{len(self.sessions)}个已连接的服务器获取可用工具")
available_tools = []
for server_name, session in self.sessions.items():
logger.debug(f"正在列出服务器[{server_name}]中的工具")
response = await session.list_tools()
tools = response.tools
logger.debug(f"在服务器[{server_name}]中找到{len(tools)}个工具")
available_tools.extend(
{
"type": "function",
"function": {
"name": f"{server_name}___{tool.name}",
"description": tool.description,
"parameters": tool.inputSchema,
},
}
for tool in tools
)
return available_tools
async def call_tool(self, tool_name: str, tool_args: dict):
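# tool_name was namespaced as "<server>___<tool>" by get_available_tools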
server_name, real_tool_name = tool_name.split("___")
logger.info(f"正在服务器[{server_name}]上调用工具[{real_tool_name}]")
session = self.sessions[server_name]
try:
response = await asyncio.wait_for(session.call_tool(real_tool_name, tool_args), timeout=30)
except asyncio.TimeoutError:
logger.error(f"调用工具[{real_tool_name}]超时")
return f"调用工具[{real_tool_name}]超时"
logger.debug(f"工具[{real_tool_name}]调用完成,响应: {response}")
return response.content
def get_friendly_name(self, tool_name: str):
server_name, real_tool_name = tool_name.split("___")
return (self.server_config[server_name].friendly_name or server_name) + " - " + real_tool_name
async def cleanup(self):
logger.debug("正在清理MCPClient资源")
await self.exit_stack.aclose()
logger.debug("MCPClient资源清理完成")

175
poetry.lock generated
View file

@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 2.0.1 and should not be changed by hand.
# This file is automatically @generated by Poetry 2.1.2 and should not be changed by hand.
[[package]]
name = "aiofiles"
@@ -44,7 +44,7 @@ typing_extensions = {version = ">=4.5", markers = "python_version < \"3.13\""}
[package.extras]
doc = ["Sphinx (>=7.4,<8.0)", "packaging", "sphinx-autodoc-typehints (>=1.2.0)", "sphinx_rtd_theme"]
test = ["anyio[trio]", "coverage[toml] (>=7)", "exceptiongroup (>=1.2.0)", "hypothesis (>=4.0)", "psutil (>=5.9)", "pytest (>=7.0)", "trustme", "truststore (>=0.9.1)", "uvloop (>=0.21)"]
test = ["anyio[trio]", "coverage[toml] (>=7)", "exceptiongroup (>=1.2.0)", "hypothesis (>=4.0)", "psutil (>=5.9)", "pytest (>=7.0)", "trustme", "truststore (>=0.9.1) ; python_version >= \"3.10\"", "uvloop (>=0.21) ; platform_python_implementation == \"CPython\" and platform_system != \"Windows\" and python_version < \"3.14\""]
trio = ["trio (>=0.26.1)"]
[[package]]
@@ -70,7 +70,7 @@ mongodb = ["pymongo (>=3.0)"]
redis = ["redis (>=3.0)"]
rethinkdb = ["rethinkdb (>=2.4.0)"]
sqlalchemy = ["sqlalchemy (>=1.4)"]
test = ["APScheduler[etcd,mongodb,redis,rethinkdb,sqlalchemy,tornado,zookeeper]", "PySide6", "anyio (>=4.5.2)", "gevent", "pytest", "pytz", "twisted"]
test = ["APScheduler[etcd,mongodb,redis,rethinkdb,sqlalchemy,tornado,zookeeper]", "PySide6 ; platform_python_implementation == \"CPython\" and python_version < \"3.14\"", "anyio (>=4.5.2)", "gevent ; python_version < \"3.14\"", "pytest", "pytz", "twisted ; python_version < \"3.14\""]
tornado = ["tornado (>=4.3)"]
twisted = ["twisted"]
zookeeper = ["kazoo"]
@@ -99,6 +99,21 @@ files = [
{file = "cfgv-3.4.0.tar.gz", hash = "sha256:e52591d4c5f5dead8e0f673fb16db7949d2cfb3f7da4582893288f0ded8fe560"},
]
[[package]]
name = "click"
version = "8.1.8"
description = "Composable command line interface toolkit"
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
{file = "click-8.1.8-py3-none-any.whl", hash = "sha256:63c132bbbed01578a06712a2d1f497bb62d9c1c0d329b7903a866228027263b2"},
{file = "click-8.1.8.tar.gz", hash = "sha256:ed53c9d8990d83c2a27deae68e4ee337473f6330c040a31d4225c9574d16096a"},
]
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[[package]]
name = "colorama"
version = "0.4.6"
@@ -166,7 +181,7 @@ files = [
[package.extras]
docs = ["furo (>=2024.8.6)", "sphinx (>=8.1.3)", "sphinx-autodoc-typehints (>=3)"]
testing = ["covdefaults (>=2.3)", "coverage (>=7.6.10)", "diff-cover (>=9.2.1)", "pytest (>=8.3.4)", "pytest-asyncio (>=0.25.2)", "pytest-cov (>=6)", "pytest-mock (>=3.14)", "pytest-timeout (>=2.3.1)", "virtualenv (>=20.28.1)"]
typing = ["typing-extensions (>=4.12.2)"]
typing = ["typing-extensions (>=4.12.2) ; python_version < \"3.11\""]
[[package]]
name = "h11"
@@ -221,12 +236,24 @@ httpcore = "==1.*"
idna = "*"
[package.extras]
brotli = ["brotli", "brotlicffi"]
brotli = ["brotli ; platform_python_implementation == \"CPython\"", "brotlicffi ; platform_python_implementation != \"CPython\""]
cli = ["click (==8.*)", "pygments (==2.*)", "rich (>=10,<14)"]
http2 = ["h2 (>=3,<5)"]
socks = ["socksio (==1.*)"]
zstd = ["zstandard (>=0.18.0)"]
[[package]]
name = "httpx-sse"
version = "0.4.0"
description = "Consume Server-Sent Event (SSE) messages with HTTPX."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "httpx-sse-0.4.0.tar.gz", hash = "sha256:1e81a3a3070ce322add1d3529ed42eb5f70817f45ed6ec915ab753f961139721"},
{file = "httpx_sse-0.4.0-py3-none-any.whl", hash = "sha256:f329af6eae57eaa2bdfd962b42524764af68075ea87370a2de920af5341e318f"},
]
[[package]]
name = "identify"
version = "2.6.7"
@@ -360,7 +387,34 @@ colorama = {version = ">=0.3.4", markers = "sys_platform == \"win32\""}
win32-setctime = {version = ">=1.0.0", markers = "sys_platform == \"win32\""}
[package.extras]
dev = ["Sphinx (==8.1.3)", "build (==1.2.2)", "colorama (==0.4.5)", "colorama (==0.4.6)", "exceptiongroup (==1.1.3)", "freezegun (==1.1.0)", "freezegun (==1.5.0)", "mypy (==v0.910)", "mypy (==v0.971)", "mypy (==v1.13.0)", "mypy (==v1.4.1)", "myst-parser (==4.0.0)", "pre-commit (==4.0.1)", "pytest (==6.1.2)", "pytest (==8.3.2)", "pytest-cov (==2.12.1)", "pytest-cov (==5.0.0)", "pytest-cov (==6.0.0)", "pytest-mypy-plugins (==1.9.3)", "pytest-mypy-plugins (==3.1.0)", "sphinx-rtd-theme (==3.0.2)", "tox (==3.27.1)", "tox (==4.23.2)", "twine (==6.0.1)"]
dev = ["Sphinx (==8.1.3) ; python_version >= \"3.11\"", "build (==1.2.2) ; python_version >= \"3.11\"", "colorama (==0.4.5) ; python_version < \"3.8\"", "colorama (==0.4.6) ; python_version >= \"3.8\"", "exceptiongroup (==1.1.3) ; python_version >= \"3.7\" and python_version < \"3.11\"", "freezegun (==1.1.0) ; python_version < \"3.8\"", "freezegun (==1.5.0) ; python_version >= \"3.8\"", "mypy (==v0.910) ; python_version < \"3.6\"", "mypy (==v0.971) ; python_version == \"3.6\"", "mypy (==v1.13.0) ; python_version >= \"3.8\"", "mypy (==v1.4.1) ; python_version == \"3.7\"", "myst-parser (==4.0.0) ; python_version >= \"3.11\"", "pre-commit (==4.0.1) ; python_version >= \"3.9\"", "pytest (==6.1.2) ; python_version < \"3.8\"", "pytest (==8.3.2) ; python_version >= \"3.8\"", "pytest-cov (==2.12.1) ; python_version < \"3.8\"", "pytest-cov (==5.0.0) ; python_version == \"3.8\"", "pytest-cov (==6.0.0) ; python_version >= \"3.9\"", "pytest-mypy-plugins (==1.9.3) ; python_version >= \"3.6\" and python_version < \"3.8\"", "pytest-mypy-plugins (==3.1.0) ; python_version >= \"3.8\"", "sphinx-rtd-theme (==3.0.2) ; python_version >= \"3.11\"", "tox (==3.27.1) ; python_version < \"3.8\"", "tox (==4.23.2) ; python_version >= \"3.8\"", "twine (==6.0.1) ; python_version >= \"3.11\""]
[[package]]
name = "mcp"
version = "1.6.0"
description = "Model Context Protocol SDK"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "mcp-1.6.0-py3-none-any.whl", hash = "sha256:7bd24c6ea042dbec44c754f100984d186620d8b841ec30f1b19eda9b93a634d0"},
{file = "mcp-1.6.0.tar.gz", hash = "sha256:d9324876de2c5637369f43161cd71eebfd803df5a95e46225cab8d280e366723"},
]
[package.dependencies]
anyio = ">=4.5"
httpx = ">=0.27"
httpx-sse = ">=0.4"
pydantic = ">=2.7.2,<3.0.0"
pydantic-settings = ">=2.5.2"
sse-starlette = ">=1.6.1"
starlette = ">=0.27"
uvicorn = ">=0.23.1"
[package.extras]
cli = ["python-dotenv (>=1.0.0)", "typer (>=0.12.4)"]
rich = ["rich (>=13.9.4)"]
ws = ["websockets (>=15.0.1)"]
[[package]]
name = "msgpack"
@@ -867,7 +921,7 @@ typing-extensions = ">=4.12.2"
[package.extras]
email = ["email-validator (>=2.0.0)"]
timezone = ["tzdata"]
timezone = ["tzdata ; python_version >= \"3.9\" and platform_system == \"Windows\""]
[[package]]
name = "pydantic-core"
@@ -982,6 +1036,30 @@ files = [
[package.dependencies]
typing-extensions = ">=4.6.0,<4.7.0 || >4.7.0"
[[package]]
name = "pydantic-settings"
version = "2.9.1"
description = "Settings management using Pydantic"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "pydantic_settings-2.9.1-py3-none-any.whl", hash = "sha256:59b4f431b1defb26fe620c71a7d3968a710d719f5f4cdbbdb7926edeb770f6ef"},
{file = "pydantic_settings-2.9.1.tar.gz", hash = "sha256:c509bf79d27563add44e8446233359004ed85066cd096d8b510f715e6ef5d268"},
]
[package.dependencies]
pydantic = ">=2.7.0"
python-dotenv = ">=0.21.0"
typing-inspection = ">=0.4.0"
[package.extras]
aws-secrets-manager = ["boto3 (>=1.35.0)", "boto3-stubs[secretsmanager]"]
azure-key-vault = ["azure-identity (>=1.16.0)", "azure-keyvault-secrets (>=4.8.0)"]
gcp-secret-manager = ["google-cloud-secret-manager (>=2.23.1)"]
toml = ["tomli (>=2.0.1)"]
yaml = ["pyyaml (>=6.0.1)"]
[[package]]
name = "pygtrie"
version = "2.5.0"
@@ -1112,6 +1190,44 @@ files = [
{file = "sniffio-1.3.1.tar.gz", hash = "sha256:f4324edc670a0f49750a81b895f35c3adb843cca46f0530f79fc1babb23789dc"},
]
[[package]]
name = "sse-starlette"
version = "2.3.3"
description = "SSE plugin for Starlette"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "sse_starlette-2.3.3-py3-none-any.whl", hash = "sha256:8b0a0ced04a329ff7341b01007580dd8cf71331cc21c0ccea677d500618da1e0"},
{file = "sse_starlette-2.3.3.tar.gz", hash = "sha256:fdd47c254aad42907cfd5c5b83e2282be15be6c51197bf1a9b70b8e990522072"},
]
[package.dependencies]
anyio = ">=4.7.0"
starlette = ">=0.41.3"
[package.extras]
examples = ["fastapi"]
uvicorn = ["uvicorn (>=0.34.0)"]
[[package]]
name = "starlette"
version = "0.46.2"
description = "The little ASGI library that shines."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "starlette-0.46.2-py3-none-any.whl", hash = "sha256:595633ce89f8ffa71a015caed34a5b2dc1c0cdb3f0f1fbd1e69339cf2abeec35"},
{file = "starlette-0.46.2.tar.gz", hash = "sha256:7f7361f34eed179294600af672f565727419830b54b7b084efe44bb82d2fccd5"},
]
[package.dependencies]
anyio = ">=3.6.2,<5"
[package.extras]
full = ["httpx (>=0.27.0,<0.29.0)", "itsdangerous", "jinja2", "python-multipart (>=0.0.18)", "pyyaml"]
[[package]]
name = "tomli"
version = "2.2.1"
@@ -1119,7 +1235,7 @@ description = "A lil' TOML parser"
optional = false
python-versions = ">=3.8"
groups = ["main"]
markers = "python_version < \"3.11\""
markers = "python_version == \"3.10\""
files = [
{file = "tomli-2.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678e4fa69e4575eb77d103de3df8a895e1591b48e740211bd1067378c69e8249"},
{file = "tomli-2.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:023aa114dd824ade0100497eb2318602af309e5a55595f76b626d6d9f3b7b0a6"},
@@ -1189,6 +1305,21 @@ files = [
{file = "typing_extensions-4.12.2.tar.gz", hash = "sha256:1a7ead55c7e559dd4dee8856e3a88b41225abfe1ce8df57b7c13915fe121ffb8"},
]
[[package]]
name = "typing-inspection"
version = "0.4.0"
description = "Runtime typing introspection tools"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "typing_inspection-0.4.0-py3-none-any.whl", hash = "sha256:50e72559fcd2a6367a19f7a7e610e6afcb9fac940c650290eed893d61386832f"},
{file = "typing_inspection-0.4.0.tar.gz", hash = "sha256:9765c87de36671694a67904bf2c96e395be9c6439bb6c87b5142569dcdd65122"},
]
[package.dependencies]
typing-extensions = ">=4.12.0"
[[package]]
name = "tzdata"
version = "2025.1"
@@ -1220,6 +1351,26 @@ tzdata = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
devenv = ["check-manifest", "pytest (>=4.3)", "pytest-cov", "pytest-mock (>=3.3)", "zest.releaser"]
[[package]]
name = "uvicorn"
version = "0.34.2"
description = "The lightning-fast ASGI server."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "uvicorn-0.34.2-py3-none-any.whl", hash = "sha256:deb49af569084536d269fe0a6d67e3754f104cf03aba7c11c40f01aadf33c403"},
{file = "uvicorn-0.34.2.tar.gz", hash = "sha256:0e929828f6186353a80b58ea719861d2629d766293b6d19baf086ba31d4f3328"},
]
[package.dependencies]
click = ">=7.0"
h11 = ">=0.8"
typing-extensions = {version = ">=4.0", markers = "python_version < \"3.11\""}
[package.extras]
standard = ["colorama (>=0.4) ; sys_platform == \"win32\"", "httptools (>=0.6.3)", "python-dotenv (>=0.13)", "pyyaml (>=5.1)", "uvloop (>=0.14.0,!=0.15.0,!=0.15.1) ; sys_platform != \"win32\" and sys_platform != \"cygwin\" and platform_python_implementation != \"PyPy\"", "watchfiles (>=0.13)", "websockets (>=10.4)"]
[[package]]
name = "virtualenv"
version = "20.29.2"
@@ -1239,7 +1390,7 @@ platformdirs = ">=3.9.1,<5"
[package.extras]
docs = ["furo (>=2023.7.26)", "proselint (>=0.13)", "sphinx (>=7.1.2,!=7.3)", "sphinx-argparse (>=0.4)", "sphinxcontrib-towncrier (>=0.2.1a0)", "towncrier (>=23.6)"]
test = ["covdefaults (>=2.3)", "coverage (>=7.2.7)", "coverage-enable-subprocess (>=1)", "flaky (>=3.7)", "packaging (>=23.1)", "pytest (>=7.4)", "pytest-env (>=0.8.2)", "pytest-freezer (>=0.4.8)", "pytest-mock (>=3.11.1)", "pytest-randomly (>=3.12)", "pytest-timeout (>=2.1)", "setuptools (>=68)", "time-machine (>=2.10)"]
test = ["covdefaults (>=2.3)", "coverage (>=7.2.7)", "coverage-enable-subprocess (>=1)", "flaky (>=3.7)", "packaging (>=23.1)", "pytest (>=7.4)", "pytest-env (>=0.8.2)", "pytest-freezer (>=0.4.8) ; platform_python_implementation == \"PyPy\" or platform_python_implementation == \"CPython\" and sys_platform == \"win32\" and python_version >= \"3.13\"", "pytest-mock (>=3.11.1)", "pytest-randomly (>=3.12)", "pytest-timeout (>=2.1)", "setuptools (>=68)", "time-machine (>=2.10) ; platform_python_implementation == \"CPython\""]
[[package]]
name = "wcwidth"
@@ -1267,7 +1418,7 @@ files = [
]
[package.extras]
dev = ["black (>=19.3b0)", "pytest (>=4.6.2)"]
dev = ["black (>=19.3b0) ; python_version >= \"3.6\"", "pytest (>=4.6.2)"]
[[package]]
name = "yarl"
@@ -1368,5 +1519,5 @@ propcache = ">=0.2.0"
[metadata]
lock-version = "2.1"
python-versions = "^3.9"
content-hash = "5675eb652e3b158a0e30e448971b218da514dae36513a5ab99ea5c2f7a216a05"
python-versions = "^3.10"
content-hash = "c33b411db9144768bcd4d912397c3a9789dd34edfc67b8e1458b00d2a2e2733a"

pyproject.toml
View file

@@ -1,6 +1,6 @@
[tool.poetry]
name = "nonebot-plugin-llmchat"
version = "0.1.8"
version = "0.2.5"
description = "Nonebot AI group chat plugin supporting multiple API preset configurations"
license = "GPL"
authors = ["FuQuan i@fuquan.moe"]
@@ -11,13 +11,14 @@ documentation = "https://github.com/FuQuan233/nonebot-plugin-llmchat#readme"
keywords = ["nonebot", "nonebot2", "llm", "ai"]
[tool.poetry.dependencies]
python = "^3.9"
python = "^3.10"
openai = ">=1.0.0"
nonebot2 = "^2.2.0"
aiofiles = ">=24.0.0"
nonebot-plugin-apscheduler = "^0.5.0"
nonebot-adapter-onebot = "^2.0.0"
nonebot-plugin-localstore = "^0.7.3"
mcp = "^1.6.0"
[tool.poetry.group.dev.dependencies]
ruff = "^0.8.0"
@@ -26,7 +27,7 @@ pre-commit = "^4.0.0"
[tool.ruff]
line-length = 130
target-version = "py39"
target-version = "py310"
[tool.ruff.format]
line-ending = "lf"
@@ -64,7 +65,7 @@ force-sort-within-sections = true
keep-runtime-typing = true
[tool.pyright]
pythonVersion = "3.9"
pythonVersion = "3.10"
pythonPlatform = "All"
defineConstant = { PYDANTIC_V2 = true }
executionEnvironments = [{ root = "./" }]