Compare commits

..

17 commits

Author SHA1 Message Date
7c7e270851 🔖 Bump version to 0.5.0 2025-11-07 17:17:00 +08:00
162afd426c 📘 Update README 2025-11-07 17:16:25 +08:00
2f78d4023a Allow the superuser to change other users' presets in private chat and report the current preset 2025-11-07 17:13:27 +08:00
a47ae8ef16 🐛 Rework the tool-list caching mechanism and fix tool-call issues 2025-11-07 16:59:19 +08:00
FuQuan e082f7db9f Merge pull request #23 from KawakazeNotFound/main (add private chat support) 2025-11-07 16:54:33 +08:00
FuQuan b8afa12c9f 📘 Update README 2025-11-07 14:23:04 +08:00
FuQuan fe39e2aba4 ♻️ Fix lint problems 2025-11-07 12:13:53 +08:00
FuQuan e542deabdb Merge private-chat commands; update related permissions and state management 2025-11-07 11:58:16 +08:00
FuQuan 1f41ed084e Merge private-chat commands; update related permissions and state management 2025-11-07 11:56:34 +08:00
KawakazeNotFound 53f3f185e7 Fix tool_call errors easily triggered by long text 2025-11-06 16:33:41 +08:00
KawakazeNotFound 3089bb51ae Fix Ruff issues 2025-11-06 15:14:44 +08:00
KawakazeNotFound e293b05fa1 Private chat passes initial testing 2025-11-06 15:11:49 +08:00
KawakazeNotFound a8d3213e48 Update Poetry 2025-11-06 14:52:44 +08:00
KawakazeNotFound 7ea7a26681 Stop tracking .DS_Store 2025-11-06 14:45:25 +08:00
KawakazeNotFound 2c04afc86a Attempt to fix Ruff Lint issues 2025-11-06 11:27:41 +08:00
KawakazeNotFound 2fecb746b3 Trigger checks 2025-11-06 11:20:33 +08:00
XokoukioX 36a47fa5e2 WIP: private chat implementation 2025-11-06 00:31:17 +08:00
7 changed files with 422 additions and 158 deletions

.gitignore (vendored)

@@ -174,3 +174,4 @@ pyrightconfig.json
!.vscode/launch.json
!.vscode/extensions.json
!.vscode/*.code-snippets
.DS_Store

README.md

@@ -8,7 +8,7 @@
# nonebot-plugin-llmchat
_✨ 支持多API预设、MCP协议、内置工具、联网搜索、视觉模型的AI群聊插件 ✨_
_✨ 支持多API预设、MCP协议、内置工具、联网搜索、视觉模型、群聊&私聊的AI对话插件 ✨_
<a href="./LICENSE">
@@ -48,6 +48,12 @@ _✨ 支持多API预设、MCP协议、内置工具、联网搜索、视觉模型
- 支持处理回复消息
- 群聊消息顺序处理,防止消息错乱
1. **群聊和私聊支持**
- 支持群聊场景(默认启用)
- 支持私聊场景(可选启用)
- 分别管理群聊和私聊的对话记忆
- 灵活的权限配置
1. **分群聊上下文记忆管理**
- 分群聊保留对话历史记录(可配置保留条数)
- 自动合并未处理消息降低API用量
@@ -120,6 +126,8 @@ _✨ 支持多API预设、MCP协议、内置工具、联网搜索、视觉模型
| LLMCHAT__BLACKLIST_USER_IDS | 否 | [] | 黑名单用户ID列表机器人将不会处理黑名单用户的消息 |
| LLMCHAT__IGNORE_PREFIXES | 否 | [] | 需要忽略的消息前缀列表,匹配到这些前缀的消息不会处理 |
| LLMCHAT__MCP_SERVERS | 否 | {} | MCP服务器配置具体见下表 |
| LLMCHAT__ENABLE_PRIVATE_CHAT | 否 | False | 是否启用私聊功能 |
| LLMCHAT__PRIVATE_CHAT_PRESET | 否 | off | 私聊默认使用的预设名称 |
### 内置OneBot工具
@@ -172,6 +180,8 @@ LLMCHAT__MCP_SERVERS同样为一个dictkey为服务器名称value配置的
NICKNAME=["谢拉","Cierra","cierra"]
LLMCHAT__HISTORY_SIZE=20
LLMCHAT__DEFAULT_PROMPT="前面忘了,你是一个猫娘,后面忘了"
LLMCHAT__ENABLE_PRIVATE_CHAT=true
LLMCHAT__PRIVATE_CHAT_PRESET="deepseek-v1"
LLMCHAT__API_PRESETS='
[
{
@@ -237,11 +247,11 @@ LLMCHAT__MCP_SERVERS同样为一个dictkey为服务器名称value配置的
## 🎉 使用
**如果`LLMCHAT__DEFAULT_PRESET`没有配置,则插件默认为关闭状态,请使用`API预设+[预设名]`开启插件**
**如果`LLMCHAT__DEFAULT_PRESET`没有配置,则插件默认为关闭状态,请使用`API预设+[预设名]`开启插件, 私聊同理。**
配置完成后@机器人即可手动触发回复,另外在机器人收到群聊消息时会根据`LLMCHAT__RANDOM_TRIGGER_PROB`配置的概率或群聊中使用指令设置的概率随机自动触发回复。
配置完成后在群聊中@机器人或私聊机器人即可手动触发回复,另外在机器人收到群聊消息时会根据`LLMCHAT__RANDOM_TRIGGER_PROB`配置的概率或群聊中使用指令设置的概率随机自动触发回复。
### 指令表
### 群聊指令表
以下指令均仅对发送的群聊生效,不同群聊配置不互通。
@@ -253,6 +263,17 @@ LLMCHAT__MCP_SERVERS同样为一个dictkey为服务器名称value配置的
| 切换思维输出 | 管理 | 否 | 群聊 | 无 | 切换是否输出AI的思维过程的开关需模型支持 |
| 设置主动回复概率 | 管理 | 否 | 群聊 | 主动回复概率 | 主动回复概率需为 [0, 1] 的浮点数0为完全关闭主动回复 |
### 私聊指令表
以下指令仅在启用私聊功能(`LLMCHAT__ENABLE_PRIVATE_CHAT=true`)后可用,这些指令均只对发送者的私聊生效。
| 指令 | 权限 | 参数 | 说明 |
|:-----:|:----:|:----:|:----:|
| API预设 | 主人 | [QQ号\|群号] [预设名] | 查看或修改使用的API预设缺省[QQ号\|群号]则对当前聊天生效 |
| 修改设定 | 所有人 | 设定 | 修改私聊机器人的设定 |
| 记忆清除 | 所有人 | 无 | 清除私聊的机器人记忆 |
| 切换思维输出 | 所有人 | 无 | 切换是否输出私聊AI的思维过程的开关需模型支持 |
### 效果图
![](img/mcp_demo.jpg)
![](img/demo.png)


@@ -21,8 +21,8 @@ from nonebot import (
on_message,
require,
)
from nonebot.adapters.onebot.v11 import GroupMessageEvent, Message, MessageSegment
from nonebot.adapters.onebot.v11.permission import GROUP_ADMIN, GROUP_OWNER
from nonebot.adapters.onebot.v11 import GroupMessageEvent, Message, MessageSegment, PrivateMessageEvent
from nonebot.adapters.onebot.v11.permission import GROUP_ADMIN, GROUP_OWNER, PRIVATE
from nonebot.params import CommandArg
from nonebot.permission import SUPERUSER
from nonebot.plugin import PluginMetadata
@@ -86,16 +86,36 @@ class GroupState:
self.last_active = time.time()
self.past_events = deque(maxlen=plugin_config.past_events_size)
self.group_prompt: str | None = None
self.user_prompt: str | None = None
self.output_reasoning_content = False
self.random_trigger_prob = plugin_config.random_trigger_prob
# 初始化私聊状态
class PrivateChatState:
def __init__(self):
self.preset_name = plugin_config.private_chat_preset
self.history = deque(maxlen=plugin_config.history_size * 2)
self.queue = asyncio.Queue()
self.processing = False
self.last_active = time.time()
self.past_events = deque(maxlen=plugin_config.past_events_size)
self.group_prompt: str | None = None
self.user_prompt: str | None = None
self.output_reasoning_content = False
group_states: dict[int, GroupState] = defaultdict(GroupState)
private_chat_states: dict[int, PrivateChatState] = defaultdict(PrivateChatState)
# 获取当前预设配置
def get_preset(group_id: int) -> PresetConfig:
state = group_states[group_id]
def get_preset(context_id: int, is_group: bool = True) -> PresetConfig:
if is_group:
state = group_states[context_id]
else:
state = private_chat_states[context_id]
for preset in plugin_config.api_presets:
if preset.name == state.preset_name:
return preset
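The two state maps above are `defaultdict`s keyed by group ID or user ID, so a fresh state object is created lazily the first time a chat is seen. A minimal standalone sketch of that pattern (simplified field set, values for illustration only):

```python
from collections import defaultdict

class PrivateChatState:
    """Simplified stand-in for the plugin's per-user state."""
    def __init__(self):
        self.preset_name = "off"   # the plugin defaults this to LLMCHAT__PRIVATE_CHAT_PRESET
        self.history: list[dict] = []

private_chat_states: dict[int, PrivateChatState] = defaultdict(PrivateChatState)

state = private_chat_states[123456789]      # first lookup lazily creates the state
state.preset_name = "deepseek-v1"
assert private_chat_states[123456789] is state
```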
@@ -103,12 +123,12 @@ def get_preset(group_id: int) -> PresetConfig:
# 消息格式转换
def format_message(event: GroupMessageEvent) -> str:
def format_message(event: GroupMessageEvent | PrivateMessageEvent) -> str:
text_message = ""
if event.reply is not None:
if isinstance(event, GroupMessageEvent) and event.reply is not None:
text_message += f"[回复 {event.reply.sender.nickname} 的消息 {event.reply.message.extract_plain_text()}]\n"
if event.is_tome():
if isinstance(event, GroupMessageEvent) and event.is_tome():
text_message += f"@{next(iter(driver.config.nickname))} "
for msgseg in event.get_message():
@@ -123,6 +143,7 @@ def format_message(event: GroupMessageEvent) -> str:
elif msgseg.type == "text":
text_message += msgseg.data.get("text", "")
if isinstance(event, GroupMessageEvent):
message = {
"SenderNickname": str(event.sender.card or event.sender.nickname),
"SenderUserId": str(event.user_id),
@@ -130,6 +151,14 @@ def format_message(event: GroupMessageEvent) -> str:
"MessageID": event.message_id,
"SendTime": datetime.fromtimestamp(event.time).isoformat(),
}
else: # PrivateMessageEvent
message = {
"SenderNickname": str(event.sender.nickname),
"SenderUserId": str(event.user_id),
"Message": text_message,
"MessageID": event.message_id,
"SendTime": datetime.fromtimestamp(event.time).isoformat(),
}
return json.dumps(message, ensure_ascii=False)
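A rough illustration of the JSON this produces for a private message (field names from the diff above, values invented):

```python
import json
from datetime import datetime

# Hypothetical output shape of format_message for a PrivateMessageEvent; values are made up.
message = {
    "SenderNickname": "Alice",
    "SenderUserId": "123456789",
    "Message": "你好",
    "MessageID": 1234,
    "SendTime": datetime.fromtimestamp(1762503180).isoformat(),
}
print(json.dumps(message, ensure_ascii=False))
```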
@@ -157,9 +186,10 @@ def build_reasoning_forward_nodes(self_id: str, reasoning_content: str):
return nodes
async def is_triggered(event: GroupMessageEvent) -> bool:
async def is_triggered(event: GroupMessageEvent | PrivateMessageEvent) -> bool:
"""扩展后的消息处理规则"""
if isinstance(event, GroupMessageEvent):
state = group_states[event.group_id]
if state.preset_name == "off":
@@ -187,6 +217,33 @@ async def is_triggered(event: GroupMessageEvent) -> bool:
return False
elif isinstance(event, PrivateMessageEvent):
# 检查私聊功能是否启用
if not plugin_config.enable_private_chat:
return False
state = private_chat_states[event.user_id]
if state.preset_name == "off":
return False
# 黑名单用户
if event.user_id in plugin_config.blacklist_user_ids:
return False
# 忽略特定前缀的消息
msg_text = event.get_plaintext().strip()
for prefix in plugin_config.ignore_prefixes:
if msg_text.startswith(prefix):
return False
state.past_events.append(event)
# 私聊默认触发
return True
return False
# 消息处理器
handler = on_message(
@@ -197,22 +254,31 @@ handler = on_message(
@handler.handle()
async def handle_message(event: GroupMessageEvent):
async def handle_message(event: GroupMessageEvent | PrivateMessageEvent):
if isinstance(event, GroupMessageEvent):
group_id = event.group_id
logger.debug(
f"收到群聊消息 群号:{group_id} 用户:{event.user_id} 内容:{event.get_plaintext()}"
)
state = group_states[group_id]
context_id = group_id
else: # PrivateMessageEvent
user_id = event.user_id
logger.debug(
f"收到私聊消息 用户:{user_id} 内容:{event.get_plaintext()}"
)
state = private_chat_states[user_id]
context_id = user_id
await state.queue.put(event)
if not state.processing:
state.processing = True
task = asyncio.create_task(process_messages(group_id))
is_group = isinstance(event, GroupMessageEvent)
task = asyncio.create_task(process_messages(context_id, is_group))
task.add_done_callback(tasks.discard)
tasks.add(task)
async def process_images(event: GroupMessageEvent) -> list[str]:
async def process_images(event: GroupMessageEvent | PrivateMessageEvent) -> list[str]:
base64_images = []
for segement in event.get_message():
if segement.type == "image":
@@ -253,9 +319,16 @@ async def send_split_messages(message_handler, content: str):
logger.debug(f"发送消息分段 内容:{segment[:50]}...") # 只记录前50个字符避免日志过大
await message_handler.send(Message(segment))
async def process_messages(group_id: int):
async def process_messages(context_id: int, is_group: bool = True):
if is_group:
group_id = context_id
state = group_states[group_id]
preset = get_preset(group_id)
else:
user_id = context_id
state = private_chat_states[user_id]
group_id = None
preset = get_preset(context_id, is_group)
# 初始化OpenAI客户端
if preset.proxy != "":
@@ -272,32 +345,56 @@ async def process_messages(group_id: int):
timeout=plugin_config.request_timeout,
)
chat_type = "群聊" if is_group else "私聊"
context_type = "群号" if is_group else "用户"
logger.info(
f"开始处理群聊消息 群号:{group_id} 当前队列长度:{state.queue.qsize()}"
f"开始处理{chat_type}消息 {context_type}{context_id} 当前队列长度:{state.queue.qsize()}"
)
while not state.queue.empty():
event = await state.queue.get()
logger.debug(f"从队列获取消息 群号:{group_id} 消息ID{event.message_id}")
if is_group:
logger.debug(f"从队列获取消息 群号:{context_id} 消息ID{event.message_id}")
group_id = context_id
else:
logger.debug(f"从队列获取消息 用户:{context_id} 消息ID{event.message_id}")
group_id = None
past_events_snapshot = []
mcp_client = MCPClient.get_instance(plugin_config.mcp_servers)
try:
systemPrompt = f"""
我想要你帮我在群聊中闲聊大家一般叫你{"".join(list(driver.config.nickname))}我将会在后面的信息中告诉你每条群聊信息的发送者和发送时间你可以直接称呼发送者为他对应的昵称
你的回复需要遵守以下几点规则
- 你可以使用多条消息回复每两条消息之间使用<botbr>分隔<botbr>前后不需要包含额外的换行和空格
- <botbr>消息中不应该包含其他类似的标记
- 不要使用markdown或者html聊天软件不支持解析换行请用换行符
- 你应该以普通人的方式发送消息每条消息字数要尽量少一些应该倾向于使用更多条的消息回复
- 代码则不需要分段用单独的一条消息发送
- 请使用发送者的昵称称呼发送者你可以礼貌地问候发送者但只需要在第一次回答这位发送者的问题时问候他
- 你有at群成员的能力只需要在某条消息中插入[CQ:at,qq=QQ号]也就是CQ码at发送者是非必要的你可以根据你自己的想法at某个人
- 你有引用某条消息的能力使用[CQ:reply,id=消息id]来引用
- 如果有多条消息你应该优先回复提到你的一段时间之前的就不要回复了也可以直接选择不回复
- 如果你选择完全不回复你只需要直接输出一个<botbr>
- 如果你需要思考的话你应该思考尽量少以节省时间
下面是关于你性格的设定如果设定中提到让你扮演某个人或者设定中有提到名字则优先使用设定中的名字
{state.group_prompt or plugin_config.default_prompt}
"""
# 构建系统提示,分成多行以满足行长限制
chat_type = "群聊" if is_group else "私聊"
bot_names = "".join(list(driver.config.nickname))
default_prompt = (state.group_prompt if is_group else state.user_prompt) or plugin_config.default_prompt
system_lines = [
f"我想要你帮我在{chat_type}中闲聊,大家一般叫你{bot_names}",
"我将会在后面的信息中告诉你每条信息的发送者和发送时间,你可以直接称呼发送者为他对应的昵称。",
"你的回复需要遵守以下几点规则:",
"- 你可以使用多条消息回复,每两条消息之间使用<botbr>分隔,<botbr>前后不需要包含额外的换行和空格。",
"- 除<botbr>外,消息中不应该包含其他类似的标记。",
"- 不要使用markdown或者html聊天软件不支持解析换行请用换行符。",
"- 你应该以普通人的方式发送消息,每条消息字数要尽量少一些,应该倾向于使用更多条的消息回复。",
"- 代码则不需要分段,用单独的一条消息发送。",
"- 请使用发送者的昵称称呼发送者,你可以礼貌地问候发送者,但只需要在"
"第一次回答这位发送者的问题时问候他。",
"- 你有引用某条消息的能力,使用[CQ:reply,id=消息id]来引用。",
"- 如果有多条消息,你应该优先回复提到你的,一段时间之前的就不要回复了,也可以直接选择不回复。",
"- 如果你选择完全不回复,你只需要直接输出一个<botbr>。",
"- 如果你需要思考的话,你应该尽量少思考,以节省时间。",
]
if is_group:
system_lines += [
"- 你有at群成员的能力只需要在某条消息中插入[CQ:at,qq=QQ号]"
"也就是CQ码。at发送者是非必要的你可以根据你自己的想法at某个人。",
]
system_lines += [
"下面是关于你性格的设定,如果设定中提到让你扮演某个人,或者设定中有提到名字,则优先使用设定中的名字。",
default_prompt,
]
systemPrompt = "\n".join(system_lines)
if preset.support_mcp:
systemPrompt += "你也可以使用一些工具,下面是关于这些工具的额外说明:\n"
for mcp_name, mcp_config in plugin_config.mcp_servers.items():
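One detail worth noting in the `system_lines` list above: where a rule is split across two quoted strings with no comma between them (the greeting rule and the at-rule), Python's implicit literal concatenation joins them into a single list element. A quick standalone illustration:

```python
lines = [
    "rule one",
    "rule two, first half "
    "and second half",        # no comma after the previous literal, so this is ONE element
    "rule three",
]
assert len(lines) == 3
print("\n".join(lines))
```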
@@ -349,7 +446,7 @@ async def process_messages(group_id: int):
}
if preset.support_mcp:
available_tools = await mcp_client.get_available_tools()
available_tools = await mcp_client.get_available_tools(is_group)
client_config["tools"] = available_tools
response = await client.chat.completions.create(
@@ -363,7 +460,7 @@ async def process_messages(group_id: int):
message = response.choices[0].message
# 处理响应并处理工具调用
while preset.support_mcp and message.tool_calls:
while preset.support_mcp and message and message.tool_calls:
new_messages.append({
"role": "assistant",
"tool_calls": [tool_call.model_dump() for tool_call in message.tool_calls]
@@ -381,13 +478,19 @@ async def process_messages(group_id: int):
# 发送工具调用提示
await handler.send(Message(f"正在使用{mcp_client.get_friendly_name(tool_name)}"))
# 执行工具调用传递群组和机器人信息用于QQ工具
if is_group:
result = await mcp_client.call_tool(
tool_name,
tool_args,
group_id=event.group_id,
bot_id=str(event.self_id)
)
else:
result = await mcp_client.call_tool(
tool_name,
tool_args,
bot_id=str(event.self_id)
)
new_messages.append({
"role": "tool",
@@ -403,11 +506,17 @@ async def process_messages(group_id: int):
message = response.choices[0].message
# 安全检查:确保 message 不为 None
if not message:
logger.error("API 响应中的 message 为 None")
await handler.send(Message("服务暂时不可用,请稍后再试"))
return
reply, matched_reasoning_content = pop_reasoning_content(
response.choices[0].message.content
message.content
)
reasoning_content: str | None = (
getattr(response.choices[0].message, "reasoning_content", None)
getattr(message, "reasoning_content", None)
or matched_reasoning_content
)
@@ -416,7 +525,7 @@ async def process_messages(group_id: int):
"content": reply,
}
reply_images = getattr(response.choices[0].message, "images", None)
reply_images = getattr(message, "images", None)
if reply_images:
# openai的sdk里的assistant消息暂时没有images字段需要单独处理
@@ -452,7 +561,7 @@ async def process_messages(group_id: int):
await handler.send(image_msg)
except Exception as e:
logger.opt(exception=e).error(f"API请求失败 群号:{group_id}")
logger.opt(exception=e).error(f"API请求失败 {'群号' if is_group else '用户'}{context_id}")
# 如果在处理过程中出现异常恢复未处理的消息到state中
state.past_events.extendleft(reversed(past_events_snapshot))
await handler.send(Message(f"服务暂时不可用,请稍后再试\n{e!s}"))
@@ -468,22 +577,94 @@ preset_handler = on_command("API预设", priority=1, block=True, permission=SUPE
@preset_handler.handle()
async def handle_preset(event: GroupMessageEvent, args: Message = CommandArg()):
group_id = event.group_id
preset_name = args.extract_plain_text().strip()
async def handle_preset(event: GroupMessageEvent | PrivateMessageEvent, args: Message = CommandArg()):
# 解析命令参数
args_text = args.extract_plain_text().strip()
args_parts = args_text.split(maxsplit=1)
if preset_name == "off":
group_states[group_id].preset_name = preset_name
await preset_handler.finish("已关闭llmchat")
target_id = None
preset_name = None
# 可用预设列表
available_presets = {p.name for p in plugin_config.api_presets}
# 只在私聊中允许 SUPERUSER 修改他人预设
if isinstance(event, PrivateMessageEvent) and args_parts and args_parts[0].isdigit():
# 第一个参数是纯数字,且不是预设名
if args_parts[0] not in available_presets:
target_id = int(args_parts[0])
# 判断目标是群聊还是私聊
if target_id in group_states:
state = group_states[target_id]
is_group_target = True
elif target_id in private_chat_states:
state = private_chat_states[target_id]
is_group_target = False
else:
# 默认创建私聊状态
state = private_chat_states[target_id]
is_group_target = False
# 如果只有目标 ID没有预设名返回当前预设
if len(args_parts) == 1:
context_type = "群聊" if is_group_target else "私聊"
available_presets_str = "\n- ".join(available_presets)
await preset_handler.finish(
f"{context_type} {target_id} 当前API预设{state.preset_name}\n可用API预设\n- {available_presets_str}"
)
# 有预设名,进行修改
preset_name = args_parts[1]
context_id = target_id
else:
# 第一个参数虽然是数字但也是预设名,按普通流程处理
target_id = None
preset_name = args_text
if not plugin_config.enable_private_chat:
return
context_id = event.user_id
state = private_chat_states[context_id]
is_group_target = False
else:
# 普通情况:修改自己的预设
preset_name = args_text
if isinstance(event, GroupMessageEvent):
context_id = event.group_id
state = group_states[context_id]
is_group_target = True
else: # PrivateMessageEvent
if not plugin_config.enable_private_chat:
return
context_id = event.user_id
state = private_chat_states[context_id]
is_group_target = False
# 处理关闭功能
if preset_name == "off":
state.preset_name = preset_name
if target_id:
context_type = "群聊" if is_group_target else "私聊"
await preset_handler.finish(f"已关闭 {context_type} {context_id} 的llmchat功能")
elif isinstance(event, GroupMessageEvent):
await preset_handler.finish("已关闭llmchat群聊功能")
else:
await preset_handler.finish("已关闭llmchat私聊功能")
# 检查预设是否存在
if preset_name not in available_presets:
available_presets_str = "\n- ".join(available_presets)
await preset_handler.finish(
f"当前API预设{group_states[group_id].preset_name}\n可用API预设\n- {available_presets_str}"
f"当前API预设{state.preset_name}\n可用API预设\n- {available_presets_str}"
)
group_states[group_id].preset_name = preset_name
# 切换预设
state.preset_name = preset_name
if target_id:
context_type = "群聊" if is_group_target else "私聊"
await preset_handler.finish(f"已将 {context_type} {context_id} 切换至API预设{preset_name}")
else:
await preset_handler.finish(f"已切换至API预设{preset_name}")
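The branching in `handle_preset` above boils down to: in private chat the superuser may prefix the command with a target QQ/group number, and a bare numeric first token is treated as a target only when it is not itself a preset name; with no preset after the target, the current preset is reported instead of changed. A standalone sketch of that parsing (hypothetical helper, not part of the plugin):

```python
def parse_preset_args(args_text: str, is_private: bool, presets: set[str]):
    """Return (target_id, preset_name); target_id is None for 'current chat'."""
    parts = args_text.split(maxsplit=1)
    if is_private and parts and parts[0].isdigit() and parts[0] not in presets:
        target_id = int(parts[0])
        preset = parts[1] if len(parts) > 1 else None   # None: just show the current preset
        return target_id, preset
    return None, args_text or None

assert parse_preset_args("123456789 deepseek-v1", True, {"deepseek-v1"}) == (123456789, "deepseek-v1")
assert parse_preset_args("123456789", True, {"deepseek-v1"}) == (123456789, None)
assert parse_preset_args("deepseek-v1", False, {"deepseek-v1"}) == (None, "deepseek-v1")
```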
@@ -491,16 +672,23 @@ edit_preset_handler = on_command(
"修改设定",
priority=1,
block=True,
permission=(SUPERUSER | GROUP_ADMIN | GROUP_OWNER),
permission=(SUPERUSER | GROUP_ADMIN | GROUP_OWNER | PRIVATE),
)
@edit_preset_handler.handle()
async def handle_edit_preset(event: GroupMessageEvent, args: Message = CommandArg()):
group_id = event.group_id
group_prompt = args.extract_plain_text().strip()
async def handle_edit_preset(event: GroupMessageEvent | PrivateMessageEvent, args: Message = CommandArg()):
if isinstance(event, GroupMessageEvent):
context_id = event.group_id
state = group_states[context_id]
else: # PrivateMessageEvent
if not plugin_config.enable_private_chat:
return
context_id = event.user_id
state = private_chat_states[context_id]
group_states[group_id].group_prompt = group_prompt
group_prompt = args.extract_plain_text().strip()
state.group_prompt = group_prompt
await edit_preset_handler.finish("修改成功")
@@ -508,16 +696,23 @@ reset_handler = on_command(
"记忆清除",
priority=1,
block=True,
permission=(SUPERUSER | GROUP_ADMIN | GROUP_OWNER),
permission=(SUPERUSER | GROUP_ADMIN | GROUP_OWNER | PRIVATE),
)
@reset_handler.handle()
async def handle_reset(event: GroupMessageEvent, args: Message = CommandArg()):
group_id = event.group_id
async def handle_reset(event: GroupMessageEvent | PrivateMessageEvent, args: Message = CommandArg()):
if isinstance(event, GroupMessageEvent):
context_id = event.group_id
state = group_states[context_id]
else: # PrivateMessageEvent
if not plugin_config.enable_private_chat:
return
context_id = event.user_id
state = private_chat_states[context_id]
group_states[group_id].past_events.clear()
group_states[group_id].history.clear()
state.past_events.clear()
state.history.clear()
await reset_handler.finish("记忆已清空")
@@ -531,32 +726,39 @@ set_prob_handler = on_command(
@set_prob_handler.handle()
async def handle_set_prob(event: GroupMessageEvent, args: Message = CommandArg()):
group_id = event.group_id
prob = 0
context_id = event.group_id
state = group_states[context_id]
try:
prob = float(args.extract_plain_text().strip())
if prob < 0 or prob > 1:
raise ValueError
except Exception as e:
await reset_handler.finish(f"输入有误,请使用 [0,1] 的浮点数\n{e!s}")
raise ValueError("概率值必须在0-1之间")
except ValueError as e:
await set_prob_handler.finish(f"输入有误,请使用 [0,1] 的浮点数\n{e!s}")
return
group_states[group_id].random_trigger_prob = prob
await reset_handler.finish(f"主动回复概率已设为 {prob}")
state.random_trigger_prob = prob
await set_prob_handler.finish(f"主动回复概率已设为 {prob}")
# 预设切换命令
# 思维输出切换命令
think_handler = on_command(
"切换思维输出",
priority=1,
block=True,
permission=(SUPERUSER | GROUP_ADMIN | GROUP_OWNER),
permission=(SUPERUSER | GROUP_ADMIN | GROUP_OWNER | PRIVATE),
)
@think_handler.handle()
async def handle_think(event: GroupMessageEvent, args: Message = CommandArg()):
async def handle_think(event: GroupMessageEvent | PrivateMessageEvent, args: Message = CommandArg()):
if isinstance(event, GroupMessageEvent):
state = group_states[event.group_id]
else: # PrivateMessageEvent
if not plugin_config.enable_private_chat:
return
state = private_chat_states[event.user_id]
state.output_reasoning_content = not state.output_reasoning_content
await think_handler.finish(
@@ -570,6 +772,7 @@ async def handle_think(event: GroupMessageEvent, args: Message = CommandArg()):
data_dir = store.get_plugin_data_dir()
# 获取插件数据文件
data_file = store.get_plugin_data_file("llmchat_state.json")
private_data_file = store.get_plugin_data_file("llmchat_private_state.json")
async def save_state():
@@ -591,6 +794,24 @@ async def save_state():
async with aiofiles.open(data_file, "w", encoding="utf8") as f:
await f.write(json.dumps(data, ensure_ascii=False))
# 保存私聊状态
if plugin_config.enable_private_chat:
logger.info(f"开始保存私聊状态到文件:{private_data_file}")
private_data = {
uid: {
"preset": state.preset_name,
"history": list(state.history),
"last_active": state.last_active,
"group_prompt": state.group_prompt,
"output_reasoning_content": state.output_reasoning_content,
}
for uid, state in private_chat_states.items()
}
os.makedirs(os.path.dirname(private_data_file), exist_ok=True)
async with aiofiles.open(private_data_file, "w", encoding="utf8") as f:
await f.write(json.dumps(private_data, ensure_ascii=False))
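With that, the private-chat state file mirrors the group one. An illustrative example of what `llmchat_private_state.json` might contain (keys and field names as in `save_state` above, values invented):

```python
import json

example = {
    "123456789": {                     # user ID as the top-level key
        "preset": "deepseek-v1",
        "history": [{"role": "user", "content": "你好"}],
        "last_active": 1762503180.0,
        "group_prompt": None,          # serialized as null
        "output_reasoning_content": False,
    }
}
print(json.dumps(example, ensure_ascii=False, indent=2))
```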
async def load_state():
"""从文件加载群组状态"""
@@ -612,6 +833,23 @@ async def load_state():
state.random_trigger_prob = state_data.get("random_trigger_prob", plugin_config.random_trigger_prob)
group_states[int(gid)] = state
# 加载私聊状态
if plugin_config.enable_private_chat:
logger.info(f"从文件加载私聊状态:{private_data_file}")
if os.path.exists(private_data_file):
async with aiofiles.open(private_data_file, encoding="utf8") as f:
private_data = json.loads(await f.read())
for uid, state_data in private_data.items():
state = PrivateChatState()
state.preset_name = state_data["preset"]
state.history = deque(
state_data["history"], maxlen=plugin_config.history_size * 2
)
state.last_active = state_data["last_active"]
state.group_prompt = state_data["group_prompt"]
state.output_reasoning_content = state_data["output_reasoning_content"]
private_chat_states[int(uid)] = state
# 注册生命周期事件
@driver.on_startup


@@ -49,6 +49,8 @@ class ScopedConfig(BaseModel):
default_factory=list,
description="需要忽略的消息前缀列表,匹配到这些前缀的消息不会处理"
)
enable_private_chat: bool = Field(False, description="是否启用私聊功能")
private_chat_preset: str = Field("off", description="私聊默认使用的预设名称")
class Config(BaseModel):
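A minimal standalone sketch of how the two new settings behave (re-declaring just these fields outside the plugin; defaults as in the diff above):

```python
from pydantic import BaseModel, Field

class ScopedConfig(BaseModel):
    enable_private_chat: bool = Field(False, description="是否启用私聊功能")
    private_chat_preset: str = Field("off", description="私聊默认使用的预设名称")

cfg = ScopedConfig()                                     # defaults: private chat disabled
print(cfg.enable_private_chat, cfg.private_chat_preset)  # False off

# Equivalent to setting LLMCHAT__ENABLE_PRIVATE_CHAT / LLMCHAT__PRIVATE_CHAT_PRESET in .env:
cfg = ScopedConfig(enable_private_chat=True, private_chat_preset="deepseek-v1")
```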


@@ -106,21 +106,11 @@ class MCPClient:
return SessionContext()
async def get_available_tools(self):
"""获取可用工具列表,使用缓存机制"""
if self._tools_cache is not None:
logger.debug("返回缓存的工具列表")
return self._tools_cache
logger.info(f"初始化工具列表缓存,需要连接{len(self.server_config)}个服务器")
async def init_tools_cache(self):
"""初始化工具列表缓存"""
if not self._cache_initialized:
available_tools = []
# 添加OneBot内置工具
onebot_tools = self.onebot_tools.get_available_tools()
available_tools.extend(onebot_tools)
logger.debug(f"添加了{len(onebot_tools)}个OneBot内置工具")
# 添加MCP服务器工具
logger.info(f"初始化工具列表缓存,需要连接{len(self.server_config)}个服务器")
for server_name in self.server_config.keys():
logger.debug(f"正在从服务器[{server_name}]获取工具列表")
async with self._create_session_context(server_name) as session:
@@ -143,7 +133,19 @@ class MCPClient:
# 缓存工具列表
self._tools_cache = available_tools
self._cache_initialized = True
logger.info(f"工具列表缓存完成,共缓存{len(available_tools)}个工具")
async def get_available_tools(self, is_group: bool):
"""获取可用工具列表,使用缓存机制"""
await self.init_tools_cache()
available_tools = self._tools_cache.copy() if self._tools_cache else []
if is_group:
# 群聊场景包含OneBot工具和MCP工具
available_tools.extend(self.onebot_tools.get_available_tools())
logger.debug(f"获取可用工具列表,共{len(available_tools)}个工具")
return available_tools
async def call_tool(self, tool_name: str, tool_args: dict, group_id: int | None = None, bot_id: str | None = None):
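The net effect of the reworked cache: MCP server tools are fetched once and reused, while the OneBot built-ins are appended per call only for group chats, so private chats never see group-only tools. A standalone sketch of that pattern (stand-in tool names, not the real client):

```python
import asyncio

class ToolProvider:
    """Simplified stand-in for MCPClient's caching behaviour."""
    def __init__(self):
        self._tools_cache: list[str] | None = None
        self._cache_initialized = False

    async def init_tools_cache(self) -> None:
        if not self._cache_initialized:
            # in the plugin this connects to every configured MCP server once
            self._tools_cache = ["mcp_search", "mcp_fetch"]
            self._cache_initialized = True

    async def get_available_tools(self, is_group: bool) -> list[str]:
        await self.init_tools_cache()
        tools = self._tools_cache.copy() if self._tools_cache else []
        if is_group:
            tools.append("onebot_group_tool")   # OneBot built-ins only in group chat
        return tools

async def main():
    provider = ToolProvider()
    print(await provider.get_available_tools(is_group=True))   # cached MCP tools + OneBot tool
    print(await provider.get_available_tools(is_group=False))  # cached MCP tools only

asyncio.run(main())
```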

poetry.lock (generated)

@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 2.1.2 and should not be changed by hand.
# This file is automatically @generated by Poetry 2.2.1 and should not be changed by hand.
[[package]]
name = "aiofiles"
@@ -662,14 +662,14 @@ typing-extensions = ">=4.0.0,<5.0.0"
[[package]]
name = "nonebot2"
version = "2.4.1"
version = "2.4.4"
description = "An asynchronous python bot framework."
optional = false
python-versions = "<4.0,>=3.9"
groups = ["main"]
files = [
{file = "nonebot2-2.4.1-py3-none-any.whl", hash = "sha256:fec95f075efc89dbe9ce148618b413b02f46ba284200367749b035e794695111"},
{file = "nonebot2-2.4.1.tar.gz", hash = "sha256:8fea364318501ed79721403a8ecd76587bc884d58c356260f691a8bbda9b05e6"},
{file = "nonebot2-2.4.4-py3-none-any.whl", hash = "sha256:8885d02906f1def83c138f298a7aa99ca1975351f44d8d290ea0eeec5aec1f0b"},
{file = "nonebot2-2.4.4.tar.gz", hash = "sha256:b367c17f31ae0d548e374bb80b719ed12885620f29f3cbc305a5a88a6175f4e3"},
]
[package.dependencies]
@@ -679,17 +679,17 @@ loguru = ">=0.6.0,<1.0.0"
pydantic = ">=1.10.0,<2.5.0 || >2.5.0,<2.5.1 || >2.5.1,<2.10.0 || >2.10.0,<2.10.1 || >2.10.1,<3.0.0"
pygtrie = ">=2.4.1,<3.0.0"
python-dotenv = ">=0.21.0,<2.0.0"
tomli = {version = ">=2.0.1,<3.0.0", markers = "python_version < \"3.11\""}
typing-extensions = ">=4.4.0,<5.0.0"
tomli = {version = ">=2.0.1,<3.0.0", markers = "python_full_version < \"3.11.0\""}
typing-extensions = ">=4.6.0,<5.0.0"
yarl = ">=1.7.2,<2.0.0"
[package.extras]
aiohttp = ["aiohttp[speedups] (>=3.11.0,<4.0.0)"]
all = ["Quart (>=0.18.0,<1.0.0)", "aiohttp[speedups] (>=3.11.0,<4.0.0)", "fastapi (>=0.93.0,<1.0.0)", "httpx[http2] (>=0.26.0,<1.0.0)", "uvicorn[standard] (>=0.20.0,<1.0.0)", "websockets (>=10.0)"]
all = ["aiohttp[speedups] (>=3.11.0,<4.0.0)", "fastapi (>=0.93.0,<1.0.0)", "httpx[http2] (>=0.26.0,<1.0.0)", "uvicorn[standard] (>=0.20.0,<1.0.0)", "websockets (>=15.0)"]
fastapi = ["fastapi (>=0.93.0,<1.0.0)", "uvicorn[standard] (>=0.20.0,<1.0.0)"]
httpx = ["httpx[http2] (>=0.26.0,<1.0.0)"]
quart = ["Quart (>=0.18.0,<1.0.0)", "uvicorn[standard] (>=0.20.0,<1.0.0)"]
websockets = ["websockets (>=10.0)"]
quart = ["quart (>=0.18.0,<1.0.0)", "uvicorn[standard] (>=0.20.0,<1.0.0)"]
websockets = ["websockets (>=15.0)"]
[[package]]
name = "nonemoji"

pyproject.toml

@@ -1,6 +1,6 @@
[tool.poetry]
name = "nonebot-plugin-llmchat"
version = "0.4.1"
version = "0.5.0"
description = "Nonebot AI group chat plugin supporting multiple API preset configurations"
license = "GPL"
authors = ["FuQuan i@fuquan.moe"]