🎉 Initial commit

This commit is contained in:
FuQuan233 2025-02-14 17:56:24 +08:00
commit bbcd2377a1
8 changed files with 752 additions and 0 deletions

.github/workflows/pypi-publish.yml vendored Normal file

@@ -0,0 +1,36 @@
name: Publish
on:
push:
tags:
- '*'
workflow_dispatch:
jobs:
pypi-publish:
name: Upload release to PyPI
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- name: Set up Python
uses: actions/setup-python@v1
with:
python-version: "3.x"
- name: Install pypa/build
run: >-
python -m
pip install
build
--user
- name: Build a binary wheel and a source tarball
run: >-
python -m
build
--sdist
--wheel
--outdir dist/
.
- name: Publish distribution to PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
password: ${{ secrets.PYPI_API_TOKEN }}

.gitignore vendored Normal file

@@ -0,0 +1,176 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# Poetry local configuration file - https://python-poetry.org/docs/configuration/#local-configuration
poetry.toml
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm-python
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# ruff
.ruff_cache/
# LSP config files
pyrightconfig.json
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
# VisualStudioCode
.vscode/*
!.vscode/settings.json
!.vscode/tasks.json
!.vscode/launch.json
!.vscode/extensions.json
!.vscode/*.code-snippets

README.md Normal file

@@ -0,0 +1,157 @@
<div align="center">
<a href="https://v2.nonebot.dev/store"><img src="https://github.com/A-kirami/nonebot-plugin-llmchat/blob/resources/nbp_logo.png" width="180" height="180" alt="NoneBotPluginLogo"></a>
<br>
<p><img src="https://github.com/A-kirami/nonebot-plugin-llmchat/blob/resources/NoneBotPlugin.svg" width="240" alt="NoneBotPluginText"></p>
</div>
<div align="center">
# nonebot-plugin-llmchat
_✨ An AI group-chat plugin with support for multiple API presets ✨_
<a href="./LICENSE">
<img src="https://img.shields.io/github/license/FuQuan233/nonebot-plugin-llmchat.svg" alt="license">
</a>
<a href="https://pypi.python.org/pypi/nonebot-plugin-llmchat">
<img src="https://img.shields.io/pypi/v/nonebot-plugin-llmchat.svg" alt="pypi">
</a>
<img src="https://img.shields.io/badge/python-3.9+-blue.svg" alt="python">
</div>
## 📖 Introduction
1. **Multiple API presets**
   - Configure several LLM service presets (e.g. different models or API keys)
   - Hot-swap the active API configuration at runtime with the `API预设` command
   - Built-in on/off switch (the preset name `off` disables the service)
2. **Multiple reply triggers**
   - Trigger by @-mention, plus random-probability triggering
   - Handles reply messages
   - Group messages are processed in order to prevent interleaved replies
3. **Per-group context memory**
   - Conversation history is kept per group (the number of retained messages is configurable)
   - Unprocessed messages are merged automatically to reduce API usage
   - The `记忆清除` command manually resets the conversation context
4. **Segmented replies**
   - Multi-part replies; the LLM decides how to respond
   - The LLM can @-mention group members
   - Optional output of the model's reasoning process (requires model support)
5. **Customizable persona**
   - The group-specific system prompt can be changed dynamically (`修改设定`)
   - A custom default prompt is supported
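The segmented-reply mechanism (feature 4 above) can be sketched as follows. This is a simplified illustration, not the plugin's exact code: the LLM is instructed to separate message segments with the literal marker `<botbr>`, and each non-empty segment is then sent as its own chat message.

```python
def split_reply(reply: str) -> list[str]:
    # Split on the <botbr> marker, trim surrounding whitespace,
    # and drop empty segments so nothing blank is sent to the group.
    segments = [seg.strip() for seg in reply.split("<botbr>")]
    return [seg for seg in segments if seg]
```

For example, `split_reply("hello<botbr>\nworld \n<botbr>")` yields the two segments `hello` and `world`.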
## 💿 Installation
<details open>
<summary>Install with nb-cli</summary>
Open a terminal in the root directory of your nonebot2 project and run the following command:

    nb plugin install nonebot-plugin-llmchat

</details>
<details>
<summary>Install with a package manager</summary>
Open a terminal in the plugin directory of your nonebot2 project and run the command for the package manager you use:
<details>
<summary>pip</summary>

    pip install nonebot-plugin-llmchat

</details>
<details>
<summary>pdm</summary>

    pdm add nonebot-plugin-llmchat

</details>
<details>
<summary>poetry</summary>

    poetry add nonebot-plugin-llmchat

</details>
<details>
<summary>conda</summary>

    conda install nonebot-plugin-llmchat

</details>
Then open the `pyproject.toml` file in the root of your nonebot2 project and append the following to the `[tool.nonebot]` section:

    plugins = ["nonebot_plugin_llmchat"]

</details>
## ⚙️ Configuration
Add the required options from the table below to the `.env` file of your nonebot2 project.

| Option | Required | Default | Description |
|:-----:|:----:|:----:|:----:|
| NICKNAME | Yes | None | Bot nickname; a built-in NoneBot option that this plugin requires |
| LLMCHAT__API_PRESETS | Yes | None | See the table below |
| LLMCHAT__HISTORY_SIZE | No | 20 | Number of context messages kept for the LLM (1-40); larger values consume more tokens |
| LLMCHAT__PAST_EVENTS_SIZE | No | 10 | Number of group messages sent when a reply is triggered (1-20); larger values consume more tokens |
| LLMCHAT__REQUEST_TIMEOUT | No | 30 | API request timeout in seconds |
| LLMCHAT__DEFAULT_PRESET | No | off | Name of the preset used by default; `off` disables the plugin |
| LLMCHAT__RANDOM_TRIGGER_PROB | No | 0.05 | Random trigger probability, in (0, 1] |
| LLMCHAT__STORAGE_PATH | No | data/llmchat_state.json | Path of the state storage file |
| LLMCHAT__DEFAULT_PROMPT | No | 你的回答应该尽量简洁、幽默、可以使用一些语气词、颜文字。你应该拒绝回答任何政治相关的问题。 | Default system prompt |

`LLMCHAT__API_PRESETS` is a list; each item supports the following options:

| Option | Required | Default | Description |
|:-----:|:----:|:----:|:----:|
| name | Yes | None | Preset name (unique identifier) |
| api_base | Yes | None | API base URL |
| api_key | Yes | None | API key |
| model_name | Yes | None | Model name |
| max_tokens | No | 2048 | Maximum response tokens |
| temperature | No | 0.7 | Sampling temperature |
<details open>
<summary>Configuration example</summary>
NICKNAME=["谢拉","Cierra","cierra"]
LLMCHAT__HISTORY_SIZE=20
LLMCHAT__DEFAULT_PROMPT="前面忘了,你是一个猫娘,后面忘了"
LLMCHAT__API_PRESETS='
[
{
"name": "aliyun-deepseek-v3",
"api_key": "sk-your-api-key",
"model_name": "deepseek-v3",
"api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1"
},
{
"name": "deepseek-r1",
"api_key": "sk-your-api-key",
"model_name": "deepseek-reasoner",
"api_base": "https://api.deepseek.com"
}
]
'
</details>
## 🎉 Usage
Once configured, @-mention the bot to trigger a reply manually. The bot also replies to group messages at random, with the probability set by `LLMCHAT__RANDOM_TRIGGER_PROB`.
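The trigger behaviour just described can be sketched as follows. This is a simplified model, not the plugin's exact code (the real rule also checks the per-group `off` preset):

```python
import random

# Assumed example value; in the plugin this comes from
# LLMCHAT__RANDOM_TRIGGER_PROB (default 0.05).
RANDOM_TRIGGER_PROB = 0.05

def should_reply(is_at_me: bool, rng: random.Random) -> bool:
    if is_at_me:
        return True  # an @-mention always triggers a reply
    # otherwise reply with the configured probability
    return rng.random() < RANDOM_TRIGGER_PROB
```

With the default probability, roughly one in twenty ordinary group messages triggers a reply, while every @-mention does.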
### Commands
The commands below only affect the group they are sent in; configuration is not shared between groups.

| Command | Permission | Needs @ | Scope | Arguments | Description |
|:-----:|:----:|:----:|:----:|:----:|:----:|
| API预设 | Superuser | No | Group | [preset name] | Show or change the active API preset; an invalid or unknown name returns the preset list |
| 修改设定 | Admin | No | Group | prompt | Change the bot's persona; it is best to run 记忆清除 afterwards |
| 记忆清除 | Admin | No | Group | None | Clear the bot's memory |
| 切换思维输出 | Admin | No | Group | None | Toggle output of the model's reasoning process (requires model support) |
### Screenshot
![](img/demo.png)

BIN
img/demo.png Normal file (binary image, 1.1 MiB)

@@ -0,0 +1,334 @@
import aiofiles
from nonebot import get_plugin_config, on_message, logger, on_command, get_driver
from nonebot.plugin import PluginMetadata
from nonebot.adapters.onebot.v11 import GroupMessageEvent, Message
from nonebot.adapters.onebot.v11.permission import GROUP_ADMIN, GROUP_OWNER
from nonebot.params import CommandArg
from nonebot.rule import Rule
from nonebot.permission import SUPERUSER
from typing import Dict
from datetime import datetime
from collections import deque
import asyncio
from openai import AsyncOpenAI
from .config import Config, PresetConfig
import time
import json
import os
import random
from apscheduler.schedulers.asyncio import AsyncIOScheduler
__plugin_meta__ = PluginMetadata(
name="llmchat",
description="支持多API预设配置的AI群聊插件",
usage="""@机器人 + 消息 开启对话""",
type="application",
homepage="https://github.com/FuQuan233/nonebot-plugin-llmchat",
config=Config,
supported_adapters={"~onebot.v11"},
)
pluginConfig = get_plugin_config(Config).llmchat
driver = get_driver()
# Per-group runtime state
class GroupState:
    def __init__(self):
        self.preset_name = pluginConfig.default_preset
        self.history = deque(maxlen=pluginConfig.history_size)
        self.queue = asyncio.Queue()
        self.processing = False
        self.last_active = time.time()
        self.past_events = deque(maxlen=pluginConfig.past_events_size)
        self.group_prompt = None
        self.output_reasoning_content = False

group_states: Dict[int, GroupState] = {}
# Return the preset currently selected for a group
def get_preset(group_id: int) -> PresetConfig:
    state = group_states[group_id]
    for preset in pluginConfig.api_presets:
        if preset.name == state.preset_name:
            return preset
    return pluginConfig.api_presets[0]  # fall back to the first preset
# Convert a group message event into the JSON format sent to the LLM
def format_message(event: GroupMessageEvent) -> Dict:
    text_message = ""
    if event.reply is not None:
        text_message += f"[回复 {event.reply.sender.nickname} 的消息 {event.reply.message.extract_plain_text()}]\n"
    if event.is_tome():
        text_message += f"@{list(driver.config.nickname)[0]} "
    for msgseg in event.get_message():
        if msgseg.type == "at":
            text_message += msgseg.data.get("name", "")
        elif msgseg.type == "image":
            text_message += "[图片]"
        elif msgseg.type == "voice":
            text_message += "[语音]"
        elif msgseg.type == "face":
            pass
        elif msgseg.type == "text":
            text_message += msgseg.data.get("text", "")
    message = {
        "SenderNickname": str(event.sender.card or event.sender.nickname),
        "SenderUserId": str(event.user_id),
        "Message": text_message,
        "SendTime": datetime.fromtimestamp(event.time).isoformat(),
    }
    return json.dumps(message, ensure_ascii=False)
async def isTriggered(event: GroupMessageEvent) -> bool:
    """Reply-trigger rule: an @-mention always triggers; otherwise trigger at random."""
    group_id = event.group_id
    if group_id not in group_states:
        logger.info(f"初始化群组状态,群号:{group_id}")
        group_states[group_id] = GroupState()
    state = group_states[group_id]
    if state.preset_name == "off":
        return False
    state.past_events.append(event)
    # @-mentions always trigger a reply
    if event.is_tome():
        return True
    # random trigger
    if random.random() < pluginConfig.random_trigger_prob:
        return True
    return False
# Message handler
handler = on_message(
    rule=Rule(isTriggered),
    priority=10,
    block=False,
)

@handler.handle()
async def handle_message(event: GroupMessageEvent):
    group_id = event.group_id
    logger.debug(f"收到群聊消息 群号:{group_id} 用户:{event.user_id} 内容:{event.get_plaintext()}")
    if group_id not in group_states:
        group_states[group_id] = GroupState()
    state = group_states[group_id]
    await state.queue.put(event)
    if not state.processing:
        state.processing = True
        asyncio.create_task(process_messages(group_id))
async def process_messages(group_id: int):
    state = group_states[group_id]
    preset = get_preset(group_id)

    # Initialize the OpenAI-compatible client
    client = AsyncOpenAI(
        base_url=preset.api_base,
        api_key=preset.api_key,
        timeout=pluginConfig.request_timeout
    )

    logger.info(f"开始处理群聊消息 群号:{group_id} 当前队列长度:{state.queue.qsize()}")
    while not state.queue.empty():
        event = await state.queue.get()
        logger.debug(f"从队列获取消息 群号:{group_id} 消息ID{event.message_id}")
        try:
            systemPrompt = (
                f'''
我想要你帮我在群聊中闲聊大家一般叫你{"".join(list(driver.config.nickname))}我将会在后面的信息中告诉你每条群聊信息的发送者和发送时间你可以直接称呼发送者为他对应的昵称
你的回复需要遵守以下几点规则
- 你可以使用多条消息回复每两条消息之间使用<botbr>分隔<botbr>前后不需要包含额外的换行和空格
- 除<botbr>消息中不应该包含其他类似的标记
- 不要使用markdown格式聊天软件不支持markdown解析
- 你应该以普通人的方式发送消息每条消息字数要尽量少一些应该倾向于使用更多条的消息回复
- 代码则不需要分段用单独的一条消息发送
- 请使用发送者的昵称称呼发送者你可以礼貌地问候发送者但只需要在第一次回答这位发送者的问题时问候他
- 你有at群成员的能力只需要在某条消息中插入[CQ:at,qq=QQ号]也就是CQ码at发送者是非必要的你可以根据你自己的想法at某个人
- 如果有多条消息你应该优先回复提到你的一段时间之前的就不要回复了也可以直接选择不回复
- 如果你需要思考的话你应该思考尽量少以节省时间
下面是关于你性格的设定如果设定中提到让你扮演某个人或者设定中有提到名字则优先使用设定中的名字
{state.group_prompt or pluginConfig.default_prompt}
'''
            )
            messages = [{"role": "system", "content": systemPrompt}]
            messages += list(state.history)[-pluginConfig.history_size:]

            # No pending events means everything was already handled; skip
            if len(state.past_events) < 1:
                break

            # Push the messages the bot missed to the LLM in one request
            content = ",".join([format_message(ev) for ev in state.past_events])

            logger.debug(f"发送API请求 模型:{preset.model_name} 历史消息数:{len(messages)}")
            response = await client.chat.completions.create(
                model=preset.model_name,
                messages=messages + [{"role": "user", "content": content}],
                max_tokens=preset.max_tokens,
                temperature=preset.temperature,
                timeout=60
            )
            logger.debug(f"收到API响应 使用token数{response.usage.total_tokens}")

            reply = response.choices[0].message.content

            # Save history only after a successful request, so user and assistant
            # turns alternate (prevents errors with R1-style models)
            state.history.append({"role": "user", "content": content})
            state.past_events.clear()

            reasoning_content: str | None = getattr(response.choices[0].message, "reasoning_content", None)
            if state.output_reasoning_content and reasoning_content:
                await handler.send(Message(reasoning_content))

            logger.info(f"准备发送回复消息 群号:{group_id} 消息分段数:{len(reply.split('<botbr>'))}")
            for r in reply.split("<botbr>"):
                # Strip surrounding whitespace/newlines and skip empty segments
                # (the original tail check mistakenly tested r[0] instead of r[-1])
                r = r.strip()
                if not r:
                    continue
                await asyncio.sleep(2)
                logger.debug(f"发送消息分段 内容:{r[:50]}...")  # log only the first 50 chars
                await handler.send(Message(r))

            # Append the assistant reply to the history
            state.history.append({
                "role": "assistant",
                "content": reply,
            })
        except Exception as e:
            logger.error(f"API请求失败 群号:{group_id} 错误:{str(e)}", exc_info=True)
            await handler.send(Message(f"服务暂时不可用,请稍后再试\n{str(e)}"))
        finally:
            state.queue.task_done()
    state.processing = False
# Preset-switch command
preset_handler = on_command("API预设", priority=1, block=True, permission=SUPERUSER)

@preset_handler.handle()
async def handle_preset(event: GroupMessageEvent, args: Message = CommandArg()):
    group_id = event.group_id
    preset_name = args.extract_plain_text().strip()
    if group_id not in group_states:
        group_states[group_id] = GroupState()
    if preset_name == "off":
        group_states[group_id].preset_name = preset_name
        await preset_handler.finish("已关闭llmchat")
    available_presets = {p.name for p in pluginConfig.api_presets}
    if preset_name not in available_presets:
        # Precompute the joined list: backslashes are not allowed inside
        # f-string expressions before Python 3.12
        preset_list = "\n- ".join(available_presets)
        await preset_handler.finish(f"当前API预设{group_states[group_id].preset_name}\n可用API预设\n- {preset_list}")
    group_states[group_id].preset_name = preset_name
    await preset_handler.finish(f"已切换至API预设{preset_name}")
# Persona-edit command (renamed so it no longer shadows preset_handler)
prompt_handler = on_command("修改设定", priority=1, block=True, permission=(SUPERUSER|GROUP_ADMIN|GROUP_OWNER))

@prompt_handler.handle()
async def handle_prompt(event: GroupMessageEvent, args: Message = CommandArg()):
    group_id = event.group_id
    group_prompt = args.extract_plain_text().strip()
    if group_id not in group_states:
        group_states[group_id] = GroupState()
    group_states[group_id].group_prompt = group_prompt
    await prompt_handler.finish("修改成功")
reset_handler = on_command("记忆清除", priority=99, block=True, permission=(SUPERUSER|GROUP_ADMIN|GROUP_OWNER))

@reset_handler.handle()
async def handle_reset(event: GroupMessageEvent):
    group_id = event.group_id
    if group_id not in group_states:
        group_states[group_id] = GroupState()
    group_states[group_id].past_events.clear()
    group_states[group_id].history.clear()
    # Finish via reset_handler (the original called preset_handler.finish here)
    await reset_handler.finish("记忆已清空")
# Reasoning-output toggle command
reasoning_handler = on_command("切换思维输出", priority=1, block=True, permission=(SUPERUSER|GROUP_ADMIN|GROUP_OWNER))

@reasoning_handler.handle()
async def handle_reasoning_toggle(event: GroupMessageEvent):
    group_id = event.group_id
    if group_id not in group_states:
        group_states[group_id] = GroupState()
    if group_states[group_id].output_reasoning_content:
        group_states[group_id].output_reasoning_content = False
        await reasoning_handler.finish("已关闭思维输出")
    else:
        group_states[group_id].output_reasoning_content = True
        await reasoning_handler.finish("已开启思维输出")
# region Persistence and scheduled tasks
async def save_state():
    """Persist all group states to the storage file"""
    logger.info(f"开始保存群组状态到文件:{pluginConfig.storage_path}")
    data = {
        gid: {
            "preset": state.preset_name,
            "history": list(state.history),
            "last_active": state.last_active,
            "group_prompt": state.group_prompt,
            "output_reasoning_content": state.output_reasoning_content
        }
        for gid, state in group_states.items()
    }
    os.makedirs(os.path.dirname(pluginConfig.storage_path), exist_ok=True)
    async with aiofiles.open(pluginConfig.storage_path, "w") as f:
        await f.write(json.dumps(data, ensure_ascii=False))

async def load_state():
    """Load group states from the storage file"""
    logger.info(f"从文件加载群组状态:{pluginConfig.storage_path}")
    if not os.path.exists(pluginConfig.storage_path):
        return
    async with aiofiles.open(pluginConfig.storage_path, "r") as f:
        data = json.loads(await f.read())
        for gid, state_data in data.items():
            state = GroupState()
            state.preset_name = state_data["preset"]
            state.history = deque(state_data["history"], maxlen=pluginConfig.history_size)
            state.last_active = state_data["last_active"]
            state.group_prompt = state_data["group_prompt"]
            state.output_reasoning_content = state_data["output_reasoning_content"]
            group_states[int(gid)] = state
# Register lifecycle hooks
@driver.on_startup
async def init_plugin():
    logger.info("插件启动初始化")
    await load_state()
    scheduler = AsyncIOScheduler()
    # Save state every 5 minutes
    scheduler.add_job(save_state, 'interval', minutes=5)
    scheduler.start()

@driver.on_shutdown
async def cleanup_plugin():
    logger.info("插件关闭清理")
    await save_state()


@@ -0,0 +1,25 @@
from pydantic import BaseModel, Field
from typing import List

class PresetConfig(BaseModel):
    """API preset configuration"""
    name: str = Field(..., description="预设名称(唯一标识)")
    api_base: str = Field(..., description="API基础地址")
    api_key: str = Field(..., description="API密钥")
    model_name: str = Field(..., description="模型名称")
    max_tokens: int = Field(2048, description="最大响应token数")
    temperature: float = Field(0.7, description="生成温度(0-2]")

class ScopedConfig(BaseModel):
    """LLM Chat plugin configuration"""
    api_presets: List[PresetConfig] = Field(..., description="API预设列表(至少配置1个预设)")
    history_size: int = Field(20, description="LLM上下文消息保留数量")
    past_events_size: int = Field(10, description="触发回复时发送的群消息数量")
    request_timeout: int = Field(30, description="API请求超时时间")
    default_preset: str = Field("off", description="默认使用的预设名称")
    random_trigger_prob: float = Field(0.05, ge=0.0, le=1.0, description="随机触发概率(0-1]")
    storage_path: str = Field("data/llmchat_state.json", description="状态存储文件路径")
    default_prompt: str = Field("你的回答应该尽量简洁、幽默、可以使用一些语气词、颜文字。你应该拒绝回答任何政治相关的问题。", description="默认提示词")

class Config(BaseModel):
    llmchat: ScopedConfig


@@ -0,0 +1 @@
nonebot_plugin_llmchat

pyproject.toml Normal file

@@ -0,0 +1,23 @@
[tool.poetry]
name = "nonebot-plugin-llmchat"
version = "0.1.0"
description = "Nonebot AI group chat plugin supporting multiple API preset configurations"
license = "GPL"
authors = ["FuQuan <i@fuquan.moe>"]
readme = "README.md"
homepage = "https://github.com/FuQuan233/nonebot-plugin-llmchat"
repository = "https://github.com/FuQuan233/nonebot-plugin-llmchat"
documentation = "https://github.com/FuQuan233/nonebot-plugin-llmchat#readme"
keywords = ["nonebot", "nonebot2", "llm", "ai"]
[tool.poetry.dependencies]
python = "^3.9"
openai = ">=1.0.0"
nonebot2 = "^2.2.0"
aiofiles = ">=24.0.0"
nonebot-plugin-apscheduler = "^0.5.0"
nonebot-adapter-onebot = "^2.0.0"
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"