
Conversation

@dingyi222666
Member

@dingyi222666 dingyi222666 commented Sep 16, 2025

Overview

Refactor the ChatLuna platform architecture on top of the Vue reactivity system, modernize plugin lifecycle management, and improve system performance and stability.

🔧 Core changes

Reactivity system:

  • Integrate @vue/reactivity for centralized state management and automatic updates
  • Convert static method calls into computed reactive values
  • Effect-based cleanup system to prevent memory leaks
  • Reactive watchers replace manual event listeners

Plugin system modernization:

  • Remove manual plugin registration in favor of automatic install/uninstall
  • Convert asynchronous tool creation to a synchronous pattern
  • Unify plugin lifecycle management
  • Add a registerRenderer method to keep the API consistent

Error handling and abort signals:

  • Complete abort-signal support via RunnableConfig
  • Improve ChatLunaError propagation
  • Strengthen room availability checks and error handling
  • Improve SSE response error handling

Performance and stability:

  • Modernize config parsing into a synchronous operation
  • Improve memory management and null safety
  • Standardize the tool registration pattern
  • Eliminate redundant async propagation

📦 Package updates

  • Core package: Vue reactivity integration, tool management, and the schema system
  • All adapters: updated plugin architecture and reactive patterns, better abort-signal handling
  • Search service: reactive browsing chain and optimized model creation
  • Localization: updated model availability display format

🎯 Key benefits

  1. Automatic UI updates: the UI refreshes automatically when tools/models change
  2. Simpler development: plugin-registration boilerplate removed
  3. Better performance: synchronous operations and less event overhead
  4. Memory safety: automatic cleanup and proper resource release
  5. More stability: centralized state management and improved error handling
  6. Robust abort handling: operation cancellation and resource cleanup supported

Version: 1.3.0-alpha.41

Replace static method calls with computed reactive values throughout the platform service layer.
Key changes include:
- Added @vue/reactivity dependency for computed refs and reactive objects
- Converted PlatformService methods to return ComputedRef values
- Updated all consumers to use .value accessor for computed properties
- Implemented reactive model and embeddings initialization in ChatInterface
- Added automatic chain recreation on model/embeddings changes via watchers
- Converted static arrays/objects to reactive equivalents in service layer

This enables automatic UI updates when platform models or configurations change,
improving the user experience with real-time platform availability updates.
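As a rough illustration of this conversion (not the actual PlatformService code; the names and store shape below are made up for the example), a static getter backed by a reactive map becomes a ComputedRef like so:

import { computed, reactive, ComputedRef } from '@vue/reactivity'

// hypothetical reactive store of model names keyed by platform
const models = reactive<Record<string, string[]>>({})

// before: a static method returning a snapshot array
// after: a ComputedRef that re-evaluates whenever `models` changes
function getAllModels(): ComputedRef<string[]> {
    return computed(() => Object.values(models).flat())
}

// consumers unwrap with .value
const allModels = getAllModels()
console.log(allModels.value) // []

models['openai'] = ['gpt-4o-mini']
console.log(allModels.value) // ['gpt-4o-mini'], recomputed automatically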
…ema systems

- Update plugin chat chain to use ComputedRef for reactive tool management
- Replace manual schema updates with centralized reactive schema utilities
- Extract schema update logic into shared utility functions
- Add schema.ts utility with reactive model, embeddings, and vector store schemas
- Simplify config modules by delegating to reactive schema system
- Enable automatic UI updates when tools/models change through Vue reactivity

This change improves performance by eliminating redundant schema updates and
provides a more maintainable architecture for managing dynamic configurations.
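For orientation, the shared reactive schema utility pattern can be sketched roughly as follows; it assumes Koishi's ctx.schema.set, a ComputedRef of model names, and an illustrative schema id, and the real modelSchema in utils/schema.ts may differ in signature and details:

import { Context, Schema } from 'koishi'
import { watch, ComputedRef } from '@vue/reactivity'

// keeps a union schema in sync with a reactive list of model names
export function modelSchemaSketch(ctx: Context, models: ComputedRef<string[]>) {
    watch(
        models,
        (names) => {
            // re-register the schema whenever the model list changes
            ctx.schema.set(
                'model',
                Schema.union(names.map((name) => Schema.const(name)))
            )
        },
        { immediate: true }
    )
}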
@coderabbitai
Contributor

coderabbitai bot commented Sep 16, 2025

Walkthrough

Large amounts of internal state and public getters/factories switch to Vue reactivity (reactive / ComputedRef / Ref), with call sites consistently reading .value; Koishi schema helpers are added and registered at startup; several createTool/fromLLM/fromLLMAndTools become synchronous; RunnableConfig is introduced for cancellation and finer-grained error propagation; assorted log and documentation text tweaks.

Changes

Cohort / File(s) Summary
Rooms and availability
packages/core/src/chains/rooms.ts, packages/core/src/middlewares/room/*.ts, packages/core/src/middlewares/model/resolve_model.ts, packages/core/src/middlewares/room/list_room.ts, packages/core/src/middlewares/room/room_info.ts
ComputedRef values returned by platform/model/chain getters are consistently unwrapped via .value; fixConversationRoomAvailability now returns a boolean and re-validates after the upsert; room list/detail add an async availability query and display its result.
Platform core and type signatures
packages/core/src/llm-core/platform/service.ts, packages/core/src/llm-core/platform/types.ts, multiple adapter/client/requester files (openai/gemini/deepseek/...)
Internal maps become reactive and most getters now return ComputedRef; several createTool/createFunction change from async to sync; getModels/refreshModels and similar gain an optional RunnableConfig (signal support) with adjusted error propagation (ChatLunaError preserved/rethrown).
Core exports and ready flow
packages/core/src/index.ts
Added export * from '@vue/reactivity'; some setup steps in the ready flow are no longer awaited; log format and string adjustments.
Chain/tool renames and reactivity
packages/core/src/llm-core/chain/chat_chain.ts, packages/core/src/llm-core/chain/plugin_chat_chain.ts, packages/search-service/src/chain/browsing_chain.ts
ChatHub... renamed to ChatLuna...; the plugin chain and browsing chain hold tools/summaryModel as ComputedRef/Ref and access them via .value, static factories now return synchronously, and the active-tool diffing and executor-rebuild logic was adjusted.
Chat app refactor (reactive)
packages/core/src/llm-core/chat/app.ts, packages/core/src/llm-core/chat/default.ts, packages/core/src/services/chat.ts
llm/embeddings/modelInfo become ComputedRef with computed/watch driving automatic chain recreation; createChatModel/createEmbeddings return ComputedRef; event-driven schema updates are replaced by utils/schema.
Koishi schema helpers
packages/core/src/utils/schema.ts, packages/search-service/src/config.ts, packages/long-memory/src/plugins/config.ts
Added modelSchema/embeddingsSchema/chatChainSchema/vectorStoreSchema, which populate ctx.schema via computed/watch, replacing inline or event-driven schema maintenance inside individual modules.
Plugin/service lifecycle and tool registration
multiple packages/*-adapter/src/index.ts, packages/plugin-common/src/plugins/*, packages/mcp-client/src/service.ts, packages/core/src/services/chat.ts
Many plugin.registerToService() calls removed from ready; await plugin.parseConfig(...) changed to fire-and-forget (no await); many createTool implementations go from async to sync; several naming/signature adjustments in the ChatLunaService API (install/uninstall, etc.).
Image / multimodal handling
packages/image-service/src/index.ts, packages/core/src/middlewares/chat/read_chat_message.ts
image-service preloads the model (ComputedRef) at startup; message handling reads .value and performs capability checks; capability/model checks in middleware now access .value.
Long memory / vectors / knowledge paths
packages/long-memory/src/*, packages/long-memory/src/utils/*, packages/plugin-common/src/plugins/knowledge.ts
Model options registered via modelSchema; at runtime, models/embeddings are unwrapped via createChatModel(...).then(m => m.value) or model.value; some tools are made synchronous with null/fallback checks added.
Network/stream and error tweaks
packages/core/src/llm-core/platform/api.ts, packages/core/src/utils/sse.ts, multiple adapter/requester files
ModelRequester.get gains optional params (fetch init); stream error handling rethrows early on network/timeout/abort errors to avoid miscounting; sse.checkResponse now reads response.text() for the error string when the response is not ok.
Docs and logging tweaks
README*.MD, packages/core/README.MD, several middleware log messages
Documentation text updates such as replacing "插件" with "Agent"; various log capitalization/format fixes and minor TODO comment tweaks (e.g. "Auto delete task running", capitalized "Call tool").

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant UI as Koishi UI
  participant Schema as utils/schema
  participant Platform as PlatformService (ComputedRef)
  participant Chat as Chat App

  Note over Platform: Platform getters return ComputedRef; callers read .value
  UI->>Schema: Initialize model/chat-mode/schema
  Schema->>Platform: Subscribe to getAllModels/getChatChains (computed/watch)
  Platform-->>Schema: ComputedRef updates (.value changes)
  Schema-->>UI: ctx.schema.set(...)

  User->>Chat: Request to create/use a conversation
  Chat->>Platform: Call createChatModel/getAllModels (returns ComputedRef)
  Chat->>Chat: createChain uses llm.value / embeddings.value / modelInfo.value
  Chat-->>User: Return chain instance
  Note over Chat,Platform: watch(llm/embeddings) triggers chain rebuild
sequenceDiagram
  autonumber
  participant Plugin as ChatLunaPlugin
  participant Platform as PlatformService
  participant Tools as ComputedRef<Tools>
  participant Chain as PluginChain

  Plugin->>Platform: registerChatChainProvider(name, create sync)
  Plugin->>Tools: getTools() -> ComputedRef
  Plugin->>Chain: fromLLMAndTools(llm, tools: ComputedRef)
  Chain->>Tools: Read tools.value and compute the diff
  Note over Chain: _getActiveTools updates this.activeTools from tools.value and rebuilds the executor

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Poem

A bunny taps at keys and wire, .value starlight quietly appears,
Reactivity flows, chains rebuild anew, schema blooms before our eyes.
The little rabbit hops to celebrate: the code is tidy and it smiles. 🐇✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name Status Explanation
Title Check ✅ Passed The title "[Refactor] 响应式系统实现,更好的重载实现,减少 async 传播性" accurately reflects the PR's main intent: migrating the codebase to @vue/reactivity with widespread ComputedRef/.value usage, while converting many async factories (such as createTool/createChatChain) to synchronous ones to reduce async propagation. It is concise and consistent with the raw_summary's description of the large set of file changes, and it signals both that this is a refactor and its two core directions (reactivity and reduced async propagation), making the theme and impact easy to identify in the change history.
Description Check ✅ Passed The PR description is thorough and highly consistent with the change set: it clearly lists the reactive integration, plugin lifecycle modernization, RunnableConfig abort support, error-handling improvements, and the async-to-sync conversion of creation paths, matching the concrete per-file changes noted in the raw_summary (ComputedRef introduction, removal of plugin.registerToService, createTool async→sync, and so on). It covers the affected scope and the expected benefits, so it satisfies this lenient description check.
Docstring Coverage ✅ Passed No functions found in the changes. Docstring coverage check skipped.




@dingyi222666 dingyi222666 changed the title [Refactor] 使用Vue响应式系统实现计算属性重构 [Refactor] 响应式系统实现计算属性 Sep 16, 2025
@dingyi222666 dingyi222666 changed the title [Refactor] 响应式系统实现计算属性 [Refactor] 响应式系统实现,更好的重载实现 Sep 16, 2025
…and optimize model creation

- Update browsing chain to use ComputedRef for reactive tool management
- Optimize summary model creation to use shared computed reference with watch
- Fix deprecated parameter documentation in CreateToolParams interface
- Remove redundant model creation calls in tool registration
- Improve performance by eliminating duplicate model instantiation
- Add reactive tool filtering with computed properties

This change extends the Vue reactivity integration to the search service,
improving efficiency and maintaining consistency with the core reactive system.
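A hedged sketch of the reactive tool filtering this message refers to, assuming a service whose getTools() returns a ComputedRef of tool names (the interface below is illustrative, not the real PlatformService type):

import { computed, ComputedRef } from '@vue/reactivity'

interface ToolsSource {
    getTools(): ComputedRef<string[]>
    getTool(name: string): unknown
}

// filters the reactive tool list; the result updates whenever tools register or unregister
function filterTools(service: ToolsSource, keep: (name: string) => boolean) {
    const names = service.getTools()
    return computed(() =>
        names.value
            .filter(keep)
            .map((name) => ({ name, tool: service.getTool(name) }))
    )
}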
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
packages/core/src/middlewares/auth/set_auth_group.ts (1)

109-115: Inverted model-validation check: a fully valid model list is reported as invalid

checkModelList returns true when the list contains an invalid model; negating it with ! here raises an error when every model is valid and lets invalid ones pass.

Drop the negation and treat a truthy result as invalid:

-                    if (
-                        supportModels != null &&
-                        !checkModelList(service, supportModels)
-                    ) {
+                    if (
+                        supportModels != null &&
+                        checkModelList(service, supportModels)
+                    ) {
packages/core/src/middlewares/auth/create_auth_group.ts (1)

66-72: Same inversion bug: valid models are judged invalid

The creation flow has the same issue as set_auth_group; remove the negation.

Fix as follows:

-                    if (
-                        supportModels != null &&
-                        !checkModelList(service, supportModels)
-                    ) {
+                    if (
+                        supportModels != null &&
+                        checkModelList(service, supportModels)
+                    ) {
packages/core/src/middlewares/model/set_default_embeddings.ts (1)

34-64: Missing early return in the error branches may leave targetEmbeddings[0] undefined

When zero or more than one entry matches, context.message is set but execution still proceeds to use targetEmbeddings[0], producing undefined behavior.

Add a STOP return in both branches:

             if (targetEmbeddings.length > 1) {
                 const buffer: string[] = []
                 ...
-                context.message = buffer.join('\n')
+                context.message = buffer.join('\n')
+                return ChainMiddlewareRunStatus.STOP
             } else if (targetEmbeddings.length === 0) {
-                context.message = session.text('.model_not_found')
+                context.message = session.text('.model_not_found')
+                return ChainMiddlewareRunStatus.STOP
             }
 
             const fullName = targetEmbeddings[0]
packages/core/src/index.ts (1)

1-1: Fix: import.meta is empty in CJS output; switch to ESM or add a runtime guard

The import.meta usages below will make fileURLToPath(import.meta.url) fail in CJS output:

  • packages/core/src/preset.ts:321
  • packages/image-renderer/src/index.ts:36, 81

Required: build the affected packages as ESM (package.json "type":"module" or esm bundler output), or guard import.meta.url at runtime (for example: const root = (typeof __dirname !== 'undefined' && __dirname) || (typeof import.meta === 'object' && import.meta.url ? fileURLToPath(import.meta.url) : undefined);). Rebuild afterwards and confirm the pipeline warnings are gone.

packages/search-service/src/index.ts (1)

99-137: Fix TS1308 (await in a synchronous callback): mark the callback async

The callback is synchronous but uses await inside, so compilation fails with TS1308; the minimal fix is changing (params) => to async (params) =>.
File: packages/search-service/src/index.ts, lines 99-137

-            (params) => {
+            async (params) => {
                 const tools = getTools(
                     ctx.chatluna.platform,
                     (name) =>
                         name === 'web-search' ||
                         name === 'web-browser' ||
                         name === 'puppeteer_browser'
                 )

                 const keywordExtractModel =
                     config.summaryModel.length > 0
                         ? await createModel(ctx, config.summaryModel)
                         : undefined

Re-run the build afterwards to verify TS1308 is gone.

🧹 Nitpick comments (14)
packages/core/src/index.ts (1)

25-25: Avoid re-exporting all of Vue Reactivity from the core package; expose only a stable subset or a namespaced export

Re-exporting the whole package inflates the public API surface, couples tightly to the upstream version, and can introduce naming conflicts or tree-shaking regressions. Prefer a minimal export, or re-export under a reactivity namespace.

Suggested minimal export (or export * as reactivity from '@vue/reactivity'):

-export * from '@vue/reactivity'
+export {
+  computed,
+  ref,
+  shallowRef,
+  reactive,
+  unref,
+  toRaw,
+  effectScope,
+  watch
+} from '@vue/reactivity'
packages/core/src/llm-core/platform/types.ts (1)

21-41: Minor copy issue: the comment "is no passed" should read "is not passed"

A small detail, but it affects how polished the docs look.

packages/core/src/middlewares/model/search_model.ts (1)

38-44: .value unwrapping is correct, but guard against an empty/undefined query

When query is undefined or an empty string, String.includes may not behave as intended. Normalize it explicitly before searching to avoid matching the literal string "undefined".

Apply this patch:

-            context.message = await pagination.searchPage(
-                (value) => value.includes(query),
-                page,
-                limit
-            )
+            const q = (query ?? '').trim()
+            context.message = await pagination.searchPage(
+                q ? (value) => value.includes(q) : () => true,
+                page,
+                limit
+            )
packages/core/src/middlewares/model/set_default_vectorstore.ts (1)

22-25: Be friendly to user input: match case-insensitively

The current includes match is case-sensitive. For usability, lowercase both sides before matching.

Example patch:

-            const targetVectorStoreProviders =
-                service.vectorStores.value.filter((vectorStoreProviderName) =>
-                    vectorStoreProviderName.includes(setVectorStore)
-                )
+            const needle = setVectorStore.toLowerCase()
+            const targetVectorStoreProviders =
+                service.vectorStores.value.filter((name) =>
+                    name.toLowerCase().includes(needle)
+                )
packages/core/src/middlewares/chat/chat_time_limit_check.ts (1)

118-126: Inconsistent return type: set message and return STOP

oldChatLimitCheck returns a raw string when no model is available, which can break the enum return value the middleware chain expects.

Suggested change:

-        if (
+        if (
             (config.defaultModel === '无' ||
                 config.defaultModel.trim().length < 1) &&
             ctx.chatluna.platform.getAllModels(ModelType.all).value.length < 1
         ) {
-            return session.text('chatluna.not_available_model')
+            context.message = session.text('chatluna.not_available_model')
+            return ChainMiddlewareRunStatus.STOP
         }
packages/core/src/middlewares/room/set_room.ts (1)

315-319: Wrong prompt argument: pass the current chatMode instead of visibility

The mistake makes the prompt text inconsistent with the item that is actually invalid.

Patch:

-                    session.text('.invalid_chat_mode', [
-                        visibility,
-                        availableChatModes.join(', ')
-                    ])
+                    session.text('.invalid_chat_mode', [
+                        chatMode,
+                        availableChatModes.join(', ')
+                    ])
packages/search-service/src/index.ts (1)

143-150: Suggestion: have getTools return a ComputedRef to stay consistent with the core reactive toolchain

The core has migrated to ComputedRef<ChatLunaTool[]>. Returning a static array here loses reactive updates.

Example refactor (adapt to the project's actual types):

+import { computed } from '@vue/reactivity'
@@
-function getTools(service: PlatformService, filter: (name: string) => boolean) {
-  const tools = service.getTools().filter(filter)
-
-  return tools.map((name) => ({
-    name,
-    tool: service.getTool(name)
-  }))
-}
+function getTools(service: PlatformService, filter: (name: string) => boolean) {
+  return computed(() =>
+    service
+      .getTools()
+      .filter(filter)
+      .map((name) => ({ name, tool: service.getTool(name) }))
+  )
+}

If ChatLunaBrowsingChain.fromLLMAndTools accepts both plain arrays and ComputedRef, the current implementation can stay; otherwise apply the refactor above.

packages/core/src/llm-core/chain/chat_chain.ts (1)

33-33: The new public member may expose internal implementation details

Exposing chain: ChatLunaLLMChain as a public member can break encapsulation. Consider whether external code really needs direct access to this internal chain.

If some of the chain's functionality must be exposed, expose it through dedicated methods rather than the whole chain object:

-    chain: ChatLunaLLMChain
+    private _chain: ChatLunaLLMChain
+
+    // expose only the methods that are needed
+    get llm() {
+        return this._chain.llm
+    }
packages/core/src/services/chat.ts (2)

136-143: The watch callback lacks cleanup logic

After resolve() is called in the watch callback the watcher should be stopped, but timeoutId() only clears the timeout timer and does not stop the watch.

Add stop logic for the watch:

-        watch(
+        const stopWatcher = watch(
             models,
             () => {
                 resolve()
                 timeoutId()
+                stopWatcher()
             },
             { deep: true }
         )

240-258: createChatModel returning a ComputedRef may cause type inconsistencies

The method returns ComputedRef<ChatLunaChatModel | undefined>, yielding undefined when the client is unavailable. That pushes extra null-checking work onto consumers.

Consider throwing inside the computed instead of returning undefined, or document clearly when the returned value may be undefined.

packages/core/src/llm-core/platform/service.ts (2)

235-241: getClient's lazy creation is reasonable, but watch out for concurrency

Creating the client automatically when it is missing is good design, but concurrent creation is possible.

Add concurrency control:

+    private _clientCreationPromises: Map<string, Promise<BasePlatformClient>> = new Map()
+
     async getClient(platform: string) {
         if (!this._platformClients[platform]) {
-            await this.createClient(platform)
+            // prevent concurrent creation
+            if (!this._clientCreationPromises.has(platform)) {
+                this._clientCreationPromises.set(platform, this.createClient(platform))
+            }
+            await this._clientCreationPromises.get(platform)
+            this._clientCreationPromises.delete(platform)
         }
 
         return computed(() => this._platformClients[platform])
     }

342-348: Resetting reactive state in dispose may leak memory

Re-initializing with reactive({}) may not correctly release the old reactive references.

Clear the existing keys with Object.keys instead of reassigning:

     dispose() {
         this._tmpVectorStores.clear()
-        this._platformClients = reactive({})
-        this._models = reactive({})
-        this._tools = reactive({})
-        this._chatChains = reactive({})
+        // clear existing keys so the reactive references are preserved
+        Object.keys(this._platformClients).forEach(key => delete this._platformClients[key])
+        Object.keys(this._models).forEach(key => delete this._models[key])
+        Object.keys(this._tools).forEach(key => delete this._tools[key])
+        Object.keys(this._chatChains).forEach(key => delete this._chatChains[key])
+        Object.keys(this._vectorStore).forEach(key => delete this._vectorStore[key])
     }
packages/core/src/llm-core/chat/app.ts (2)

252-256: The embeddings watch lacks an undefined check

Unlike the llm watch, the embeddings watch does not check for undefined values.

Add an undefined check for consistency:

         watch(embeddings, (newValue: Embeddings | undefined) => {
+            if (newValue === undefined) {
+                this._embeddings = undefined
+                this._chain = undefined
+                return
+            }
             this._embeddings = newValue
             this._chain = createChain()
         })

385-393: The type-check logic can be simplified

Since the instanceof check returns immediately, the else branch is effectively unreachable.

Simplified type check:

-        if (llmModel.value instanceof ChatLunaChatModel) {
-            return [llmModel, llmInfo]
-        }
-
-        throw new ChatLunaError(
-            ChatLunaErrorCode.MODEL_INIT_ERROR,
-            new Error(`Model ${llmModelName} is not a chat model`)
-        )
+        if (!(llmModel.value instanceof ChatLunaChatModel)) {
+            throw new ChatLunaError(
+                ChatLunaErrorCode.MODEL_INIT_ERROR,
+                new Error(`Model ${llmModelName} is not a chat model`)
+            )
+        }
+        
+        return [llmModel, llmInfo]
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 957ffed and ba2939e.

⛔ Files ignored due to path filters (1)
  • packages/core/package.json is excluded by !**/*.json
📒 Files selected for processing (25)
  • packages/core/src/chains/rooms.ts (4 hunks)
  • packages/core/src/index.ts (1 hunks)
  • packages/core/src/llm-core/chain/chat_chain.ts (4 hunks)
  • packages/core/src/llm-core/chain/plugin_chat_chain.ts (6 hunks)
  • packages/core/src/llm-core/chat/app.ts (5 hunks)
  • packages/core/src/llm-core/chat/default.ts (4 hunks)
  • packages/core/src/llm-core/platform/service.ts (5 hunks)
  • packages/core/src/llm-core/platform/types.ts (1 hunks)
  • packages/core/src/middlewares/auth/create_auth_group.ts (1 hunks)
  • packages/core/src/middlewares/auth/set_auth_group.ts (1 hunks)
  • packages/core/src/middlewares/chat/chat_time_limit_check.ts (2 hunks)
  • packages/core/src/middlewares/chat/read_chat_message.ts (1 hunks)
  • packages/core/src/middlewares/model/list_all_embeddings.ts (1 hunks)
  • packages/core/src/middlewares/model/list_all_model.ts (1 hunks)
  • packages/core/src/middlewares/model/list_all_vectorstore.ts (1 hunks)
  • packages/core/src/middlewares/model/search_model.ts (1 hunks)
  • packages/core/src/middlewares/model/set_default_embeddings.ts (1 hunks)
  • packages/core/src/middlewares/model/set_default_vectorstore.ts (1 hunks)
  • packages/core/src/middlewares/room/create_room.ts (2 hunks)
  • packages/core/src/middlewares/room/resolve_room.ts (1 hunks)
  • packages/core/src/middlewares/room/set_room.ts (4 hunks)
  • packages/core/src/services/chat.ts (8 hunks)
  • packages/core/src/utils/schema.ts (1 hunks)
  • packages/search-service/src/config.ts (1 hunks)
  • packages/search-service/src/index.ts (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-15T09:29:19.344Z
Learnt from: dingyi222666
PR: ChatLunaLab/chatluna#543
File: packages/core/src/llm-core/vectorstores/base.ts:0-0
Timestamp: 2025-09-15T09:29:19.344Z
Learning: In packages/core/src/llm-core/vectorstores/base.ts, ChatLunaSaveableVectorStore's free() method is designed to be overridden by subclasses; the base class performs no extra resource cleanup. Resource cleanup is the responsibility of the concrete implementation classes.

Applied to files:

  • packages/core/src/llm-core/platform/service.ts
🧬 Code graph analysis (11)
packages/search-service/src/config.ts (1)
packages/core/src/utils/schema.ts (1)
  • modelSchema (14-26)
packages/core/src/utils/schema.ts (1)
packages/core/src/llm-core/platform/service.ts (1)
  • PlatformService (28-349)
packages/core/src/middlewares/chat/chat_time_limit_check.ts (1)
packages/mcp-client/src/service.ts (1)
  • client (231-233)
packages/core/src/llm-core/chat/default.ts (5)
packages/core/src/services/types.ts (1)
  • Context (21-23)
packages/core/src/llm-core/platform/service.ts (2)
  • PlatformService (28-349)
  • getTools (174-176)
packages/core/src/utils/schema.ts (4)
  • modelSchema (14-26)
  • vectorStoreSchema (77-89)
  • embeddingsSchema (34-49)
  • chatChainSchema (57-69)
packages/core/src/llm-core/chain/chat_chain.ts (1)
  • ChatLunaChatChain (27-135)
packages/core/src/llm-core/chain/plugin_chat_chain.ts (1)
  • ChatLunaPluginChain (42-396)
packages/core/src/middlewares/model/set_default_embeddings.ts (1)
packages/core/src/llm-core/chat/app.ts (1)
  • embeddings (268-270)
packages/core/src/chains/rooms.ts (1)
packages/core/src/llm-core/platform/client.ts (1)
  • config (72-74)
packages/core/src/llm-core/chain/plugin_chat_chain.ts (1)
packages/core/src/llm-core/platform/types.ts (1)
  • ChatLunaTool (59-63)
packages/core/src/services/chat.ts (5)
packages/mcp-client/src/service.ts (1)
  • client (231-233)
packages/core/src/utils/error.ts (1)
  • ChatLunaError (13-41)
packages/core/src/llm-core/model/in_memory.ts (1)
  • EmptyEmbeddings (26-38)
packages/core/src/llm-core/platform/types.ts (1)
  • CreateChatLunaLLMChainParams (49-57)
packages/core/src/llm-core/platform/client.ts (1)
  • config (72-74)
packages/core/src/llm-core/platform/service.ts (2)
packages/core/src/llm-core/platform/types.ts (6)
  • CreateClientFunction (69-69)
  • ChatLunaTool (59-63)
  • ModelInfo (77-82)
  • ChatLunaChainInfo (11-17)
  • CreateVectorStoreFunction (65-67)
  • PlatformClientNames (75-75)
packages/core/src/llm-core/utils/count_tokens.ts (1)
  • parseRawModelName (194-199)
packages/core/src/llm-core/chain/chat_chain.ts (3)
packages/core/src/llm-core/prompt/type.ts (1)
  • PresetTemplate (80-110)
packages/core/src/llm-core/memory/langchain/buffer_memory.ts (1)
  • BufferMemory (50-97)
packages/core/src/services/variable.ts (1)
  • ChatLunaVariableService (28-329)
packages/core/src/llm-core/chat/app.ts (2)
packages/core/src/llm-core/utils/count_tokens.ts (1)
  • parseRawModelName (194-199)
packages/core/src/llm-core/model/in_memory.ts (1)
  • emptyEmbeddings (40-40)
🪛 GitHub Actions: Build
packages/core/src/index.ts

[warning] 36-36: "import.meta" is not available with the "cjs" output format and will be empty


[warning] 81-81: "import.meta" is not available with the "cjs" output format and will be empty

packages/search-service/src/index.ts

[error] 110-110: TS1308: 'await' expressions are only allowed within async functions and at the top levels of modules.


[warning] 36-36: "import.meta" is not available with the "cjs" output format and will be empty


[warning] 81-81: "import.meta" is not available with the "cjs" output format and will be empty

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: build
  • GitHub Check: lint
🔇 Additional comments (40)
packages/core/src/middlewares/auth/set_auth_group.ts (1)

404-405: LGTM: unwrapping the available model list via .value is correct

Consistent with the platform service's convention of returning ComputedRef<string[]>.

packages/core/src/middlewares/model/list_all_model.ts (1)

38-39: LGTM: models.value is unwrapped before pagination

Matches the new reactive return type and avoids passing a Ref into the paginator.

packages/core/src/middlewares/model/list_all_vectorstore.ts (1)

34-35: LGTM: service.vectorStores.value used for pagination

Matches the reactive wrapper's intended read pattern.

packages/core/src/middlewares/auth/create_auth_group.ts (1)

348-349: LGTM: validation via availableModels.value.includes

Consistent with the platform service's ComputedRef return values.

packages/core/src/middlewares/model/set_default_embeddings.ts (1)

28-32: LGTM: embeddings are filtered via .value

Fits the new reactive return types.

packages/core/src/llm-core/platform/types.ts (1)

14-17: Keep createFunction async-compatible (use a MaybePromise / Promise union)
Forcing a synchronous return is a breaking change; a repository search only found the definition at the type declaration (packages/core/src/llm-core/platform/types.ts:14-17), with no implementations or call sites, so async initialization cannot be ruled out. Change the type to a union to stay backward compatible:

-    ) => ChatLunaLLMChainWrapper
+    ) => ChatLunaLLMChainWrapper | Promise<ChatLunaLLMChainWrapper>

Manually confirm whether any implementation or call site in the repository is asynchronous and adjust the type accordingly.

packages/core/src/middlewares/room/resolve_room.ts (1)

150-152: LGTM: the length check now uses the reactive array via .value

Consistent with the global migration; the semantics are correct.

packages/core/src/middlewares/model/list_all_embeddings.ts (1)

37-38: LGTM: pagination input changed to models.value

Consistent with the reactive migration; the behavior is correct.

packages/core/src/middlewares/room/set_room.ts (3)

181-182: LGTM: the model list lookup now goes through .value

Consistent with the reactive migration.


305-308: LGTM: chat modes are now sourced from chatChains.value

Keeps things consistent globally.


371-388: LGTM: .value unwrapping in checkRoomAvailability

Consistent with the migration; the semantics are correct.

packages/core/src/middlewares/room/create_room.ts (1)

150-153: getAllModels/chatChains return types and empty-array handling confirmed (verified)

  • service.getAllModels(type) is implemented as computed(() => string[]), so its return type is ComputedRef<string[]> (packages/core/src/llm-core/platform/service.ts).
  • ctx.chatluna.platform.chatChains is implemented as computed(() => Object.values(this._chatChains)); call sites access { name, description }, giving ComputedRef<Array<{name: string, description?: string}>> (packages/core/src/llm-core/platform/service.ts, packages/core/src/utils/schema.ts).
  • The empty-array case is handled: packages/core/src/middlewares/room/resolve_room.ts returns chatluna.not_available_model when models is empty, and create_room/set_room also handle findModel == null; no change needed.
packages/core/src/middlewares/chat/chat_time_limit_check.ts (1)

139-149: Confirm whether the return value is a ComputedRef (with the suggested fix)

  • Verified: PlatformService.getClient returns computed(() => this._platformClients[platform]), so service.getClient(...) indeed returns a ComputedRef<BasePlatformClient | undefined> (the client obtained via await at the call site is therefore a ComputedRef).
  • Verified: ClientConfigPool.getConfig has the signature getConfig(lockSelectConfig = false): ClientConfigWrapper (it returns a ClientConfigWrapper whose callers read the config via .value, e.g. this.configPool.getConfig(true).value). So client.value.configPool.getConfig(true) returns a ClientConfigWrapper rather than the raw config type, and the actual config must be accessed through .value. When no usable config is found, getConfig throws or returns a wrapper holding undefined (callers should null-check the wrapper or its value).
  • Suggestion: keeping the current form (client.value.configPool.getConfig(true)) is reasonable, but make sure both client (a ComputedRef) and clientConfig.value (the actual config) are null-checked; chat_time_limit_check.ts already checks both, as expected. To reduce confusion you can either:
    • explicitly annotate the variable type as ComputedRef (e.g. const clientRef = await ctx.chatluna.platform.getClient(...); const client = clientRef.value), or
    • unwrap .value at the use site and name the result client or config, avoiding misleading chained .value calls.
packages/search-service/src/config.ts (1)

1-7: The change looks good!

This successfully migrates the dynamic, event-driven schema updates to centralized reactive configuration. Importing and calling modelSchema(ctx) to set up the model-related schema is cleaner and consistent with the reactive patterns used across the codebase.

packages/core/src/chains/rooms.ts (4)

124-124: Correct use of the .value accessor

.value is used to obtain the actual array from the ComputedRef wrapper, consistent with the project-wide migration to Vue reactivity.


156-156: Reactive access pattern applied correctly

The ComputedRef is again correctly unwrapped via .value, keeping the code consistent.


177-182: The refactor improves code quality

Hoisting the models lookup to the top of the function avoids repeated calls, and the actual model array is correctly accessed via .value. This is more efficient and follows reactive-programming best practice.


192-197: Reactive access stays consistent

.value is used correctly everywhere the model list is accessed, including platform model retrieval and default-model selection.

packages/core/src/llm-core/chat/default.ts (4)

16-19: Centralized schema initialization implemented successfully

Replacing the previously scattered, event-driven schema updates with centralized initialization at startup is a solid architectural improvement. Calling modelSchema, vectorStoreSchema, embeddingsSchema, and chatChainSchema at startup simplifies configuration management and improves maintainability.


70-76: The synchronous return pattern for ChatLunaChatChain is cleaner

Removing the async wrapper and returning the result of ChatLunaChatChain.fromLLM directly is simpler and more efficient.


85-98: Reactive tool management implemented correctly

getTools(service) returns the tool list as a ComputedRef and passes it to ChatLunaPluginChain.fromLLMAndTools. This lets tool changes propagate automatically to the components that use them.


102-106: Elegant reactive tool conversion

The getTools function neatly converts the platform service's list of tool names into a reactive array of tool instances. Wrapping it in computed ensures dependents update automatically when the tool list changes.

packages/core/src/utils/schema.ts (4)

14-26: The reactive schema update mechanism is well designed

modelSchema watches changes to the model names and uses immediate: true so the schema is populated right at initialization. This pattern avoids manual event listeners and automates UI schema updates.


91-102: The helper functions are implemented sensibly

getModelNames correctly wraps its result in computed so the return value is reactive. Adding the "无" (none) option as a default value is a thoughtful touch.


104-112: Vector store schema handled well

getVectorStores mirrors getModelNames and likewise provides the "无" (none) option, keeping the API consistent.


114-121: Good i18n support in the chat chain schema

getChatChainNames uses Schema.i18n for multilingual descriptions, which is internationalization best practice.

packages/core/src/llm-core/chain/plugin_chat_chain.ts (5)

58-58: Reactive tools property type updated correctly

The tools property changed from ChatLunaTool[] to ComputedRef<ChatLunaTool[]>, in line with the project-wide migration to the reactive system.


94-104: The static factory signature update is reasonable

fromLLMAndTools is no longer async and accepts tools as a ComputedRef<ChatLunaTool[]>. The synchronous return simplifies call sites.


167-170: Neat reactive value unwrapping

_getActiveTools reads the actual tool array via this.tools.value and assigns it to the toolsRef variable, keeping the following code clear.


184-199: Active-tool management logic remains correct

Although oldActiveTools is used as a reference to this.activeTools, JavaScript reference semantics mean modifications to oldActiveTools affect this.activeTools directly, so the original logic remains correct.


205-205: Return value logic updated correctly

When no tools change, it returns [toolsRef, oldActiveTools.length === toolsRef.length], correctly comparing against the unwrapped reactive value.

packages/core/src/llm-core/chain/chat_chain.ts (1)

19-30: The interface and class renames look reasonable

Renaming ChatHub to ChatLuna matches the naming convention used throughout this PR and is a reasonable branding unification.

packages/core/src/services/chat.ts (2)

111-119: Replacing event listeners with the reactive system is elegant

Switching event-driven model loading to a reactive watch simplifies the code and removes manual event-handling overhead, which better fits Vue reactivity best practices.


598-611: Reactive updates to the supported-model list are implemented well

Using watch to update the _supportModels list automatically avoids manual state maintenance, a good application of the reactive system.

packages/core/src/llm-core/platform/service.ts (3)

29-39: Introducing the reactive system improves responsiveness

Migrating internal state management from plain objects to Vue's reactive system, together with ComputedRef, gives a better reactive data flow and matches the overall direction of this PR's refactor.


139-156: getModels returning a ComputedRef is implemented well

Returning a ComputedRef ensures reactive updates, the sorting logic is unchanged, and the empty-array case is handled appropriately.


119-122: The createChatChainFunction type change is breaking; confirm every implementation has been updated

Switching from an async Promise return to a synchronous ChatLunaLLMChainWrapper return is a breaking API change. Initial search results: the registration in packages/core/src/llm-core/chat/default.ts uses a synchronous arrow function returning ChatLunaChatChain.fromLLM(...) (looks compatible); packages/core/src/services/chat.ts registers via the variable func around line 694, and it is not yet confirmed whether func returns synchronously. Please confirm or fix: every implementation registered through registerChatChain returns a ChatLunaLLMChainWrapper (not a Promise), or revert the type to async.

Locations (developer action needed)

  • packages/core/src/llm-core/chat/default.ts (registerChatChain usage)
  • packages/core/src/services/chat.ts (around line ~694, where func is passed in)

Suggested verification script (run from the repository root):

#!/bin/bash
set -euo pipefail
rg -n "registerChatChain\(" packages/core -n -C3 || true
sed -n '1,240p' packages/core/src/llm-core/chat/default.ts || true
sed -n '650,740p' packages/core/src/services/chat.ts || true
rg -n "fromLLM" packages/core -S || true
rg -n "CreateChatLunaLLMChainParams|ChatLunaLLMChainWrapper" packages/core -S || true
packages/core/src/llm-core/chat/app.ts (3)

178-183: Reactive type definitions are clear

Wrapping embeddings, llm, and modelInfo in ComputedRef gives clear type definitions that fit the reactive design pattern.


231-243: createChain is well encapsulated

Extracting the chain-creation logic into a standalone function makes it easy to reuse on reactive updates; a sensible design.


320-371: Excellent reactive implementation of _initEmbeddings

Wrapping the embeddings initialization logic in computed, with a solid fallback mechanism and error handling, is well done. The logging is thorough as well.

- Update ChatLunaService to return computed refs for embeddings with proper error handling
- Migrate image-service to use reactive schema system and computed model references
- Replace manual schema listeners with centralized modelSchema utility in long-memory
- Update long-memory utilities to handle computed model references correctly
- Modify plugin-common knowledge system to work with reactive models
- Enhance search providers to use computed embeddings and model references
- Add proper null checks and fallbacks for reactive model access
- Improve error handling in Wikipedia provider with computed model validation

This completes the Vue reactivity migration, ensuring all services properly
handle computed model references and maintain consistency across the platform.
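The unwrap-with-null-check pattern this commit relies on can be sketched as follows (createChatModel's exact signature is assumed here for illustration; ChatModelLike is a stand-in type):

import { ComputedRef } from '@vue/reactivity'

interface ChatModelLike {
    invoke(prompt: string): Promise<string>
}

type CreateChatModel = (
    platform: string,
    name: string
) => Promise<ComputedRef<ChatModelLike | undefined>>

async function resolveModel(
    createChatModel: CreateChatModel,
    platform: string,
    name: string
): Promise<ChatModelLike> {
    const modelRef = await createChatModel(platform, name)

    // a ComputedRef is always truthy, so the null check must be on .value
    const model = modelRef.value
    if (model == null) {
        throw new Error(`model ${platform}/${name} is not loaded yet`)
    }
    return model
}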
…ency

- Fix package.json indentation for schema export
- Improve null checking consistency in chat app watchers
- Fix computed ref access in image capability validation middleware
- Clean up unused imports in chat service
- Enhance error handling with proper null checks for computed values

These minor fixes ensure proper handling of computed refs and maintain
code quality standards across the reactive system implementation.
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (9)
packages/core/src/middlewares/chat/read_chat_message.ts (3)

118-131: Three issues in filename derivation and logging: redundant null coalescing, a stray closing brace, and the wrong extension source

  • element.attrs['filename'] ?? element.attrs['filename'] coalesces with itself, so there is no real fallback field (such as name).
  • The template string contains an extra }, producing a malformed filename.
  • The debug output uses element.attrs.ext, which differs from the ext parsed above and may be undefined.
-            const { buffer } = await readImage(ctx, url)
-            let fileName =
-                element.attrs['filename'] ?? element.attrs['filename']
+            const { buffer } = await readImage(ctx, url)
+            let fileName = element.attrs['filename'] ?? element.attrs['name']
@@
-            if (fileName == null || fileName.length > 50) {
-                fileName = `${await hashString(url, 8)}}`
+            if (!fileName || fileName.length > 50) {
+                fileName = `${await hashString(url, 8)}.${ext}`
             }
@@
-            logger.debug(
-                fileName,
-                `${await hashString(url, 8)}.${element.attrs.ext}`
-            )
+            logger.debug(fileName)

170-174: Avoid += on message.content directly; it can break structured messages

message.content may be an array; += implicitly converts an object array to a string and corrupts the message structure. Reuse the existing addMessageContent.

-                    logger.debug(`audio2text: ${content}`)
-                    message.content += content
+                    logger.debug(`audio2text: ${content}`)
+                    addMessageContent(message, content)

188-195: readImage has SSRF/DoS risk: no protocol/host validation and no timeout or size limits

It currently requests any URL unconditionally. Restrict to http/https, block local addresses (localhost/127.0.0.1/::1, etc.), and add a timeout and a maximum response-body size to reduce SSRF and large-file DoS risk.

-    const response = await ctx.http(url, {
+    // basic validation: allow only http/https and block local hostnames
+    if (!/^https?:\/\//i.test(url)) {
+        throw new Error('only http(s) protocol is allowed for image url')
+    }
+    const u = new URL(url)
+    const blockedHosts = new Set(['localhost', '127.0.0.1', '::1'])
+    if (blockedHosts.has(u.hostname)) {
+        throw new Error(`blocked host: ${u.hostname}`)
+    }
+
+    const response = await ctx.http(url, {
         responseType: 'arraybuffer',
         method: 'get',
+        timeout: 10_000,
+        // if ctx.http is axios-based, maxContentLength applies; otherwise use the equivalent option
+        maxContentLength: 10 * 1024 * 1024,
         headers: {
             'User-Agent':
                 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'
         }
     })
packages/plugin-common/src/plugins/knowledge.ts (1)

71-81: Fix: this.chain being null when no knowledge base is selected causes a runtime exception

createSearchChain returns null when no knowledge base is found, so this.chain(input, []) then throws a TypeError. Short-circuit and return at the call site to avoid the NPE.

   async _call(input: string) {
     try {
       if (!this.chain) {
         this.chain = await createSearchChain(this.ctx, this.knowledgeId)
       }
 
+      if (!this.chain) {
+        this.ctx.logger.warn('Knowledge tool is not initialized (no knowledge selected).')
+        return 'No knowledge is configured or selected.'
+      }
+
       const documents = await this.chain(input, [])
 
       return documents.map((document) => document.pageContent).join('\n')
     } catch (e) {
packages/long-memory/src/utils/layer.ts (2)

90-97: Inconsistent return type: return an empty array instead of undefined

retrieveMemory returns bare when uninitialized, but its signature promises Promise<EnhancedMemory[]>.

   async retrieveMemory(searchContent: string): Promise<EnhancedMemory[]> {
     if (!this.vectorStore) {
       logger?.warn('Vector store not initialized')
-      return
+      return []
     }

238-243: Threshold computation bug: minSimilarityScore can be negative

Math.min(0.1, X - 0.3) is pinned at 0.1 in most cases and turns negative when X < 0.3. Clamp the value to the 0-1 range with a sensible lower bound.

-  const retriever = ScoreThresholdRetriever.fromVectorStore(vectorStore, {
-    minSimilarityScore: Math.min(0.1, config.longMemorySimilarity - 0.3),
+  const retriever = ScoreThresholdRetriever.fromVectorStore(vectorStore, {
+    // lower the target threshold by 0.3 and clamp to [0.1, 0.9]
+    minSimilarityScore: Math.max(0.1, Math.min(0.9, config.longMemorySimilarity - 0.3)),
packages/search-service/src/providers/wikipedia.ts (2)

106-113: Logic bug: the error check is inverted, so successful requests log an error and return empty results

Only return early when the error field is present.

-  if (!searchResults.error) {
-    logger.error(
-      `Error fetching search results for query "${query}" in ${this.baseUrl}: ${JSON.stringify(searchResults.error)}`
-    )
-    return []
-  }
+  if (searchResults.error) {
+    logger?.error(
+      `Error fetching search results for query "${query}" in ${this.baseUrl}: ${JSON.stringify(searchResults.error)}`
+    )
+    return []
+  }

293-299: Null propagation risk: summaryModel construction may fail and later use will hit an NPE

Loosen the type and disable keyword extraction on failure (or provide a computed property that returns null).

-  let summaryModel: ComputedRef<ChatLunaChatModel>
+  let summaryModel: ComputedRef<ChatLunaChatModel> | undefined
 
   try {
     summaryModel = await createModel(ctx, config.summaryModel)
   } catch (error) {
     logger?.error(error)
   }

Also null-check before passing it in apply (or change the WikipediaSearchProvider constructor parameter type to ComputedRef<ChatLunaChatModel> | undefined).

packages/core/src/services/chat.ts (1)

111-144: The watch is never released: risk of memory leaks and repeated triggers

The watch inside awaitLoadPlatform is not stopped after resolve/reject, so it keeps listening in the same context and fires the callback repeatedly. Capture the stop handle and stop the watcher once done; canceling the timeout on completion is also cleaner.

Suggested change:

-        watch(
-            models,
-            () => {
-                resolve()
-                timeoutId()
-            },
-            { deep: true }
-        )
+        const stop = watch(
+            models,
+            () => {
+                resolve()
+                timeoutId()
+                stop()
+            },
+            { deep: true }
+        )
♻️ Duplicate comments (4)
packages/core/src/middlewares/chat/read_chat_message.ts (1)

91-96: Fix the parsedModelInfo null check to avoid reading .value on undefined at runtime

When model is empty or getModelInfo returns undefined, parsedModelInfo.value throws immediately. Access it safely with optional chaining or early destructuring, and tolerate a missing capabilities field as well.

-            if (
-                parsedModelInfo.value != null &&
-                !parsedModelInfo.value.capabilities.includes(
-                    ModelCapabilities.ImageInput
-                )
-            ) {
+            const capabilities = parsedModelInfo?.value?.capabilities
+            if (capabilities && !capabilities.includes(ModelCapabilities.ImageInput)) {
packages/core/src/services/chat.ts (3)

269-287: The empty fallback and warning handling for embeddings is in place, fixing the duplicate-throw issue

It now logs a warning and returns emptyEmbeddings, which matches the reactive expectations.


679-683: registerChatChainProvider now returns synchronously: confirm callers and docs have migrated

The interface here is now synchronous, matching the earlier suggestion; please confirm that no call sites still return a Promise, and call out this breaking change in the migration docs.

A script to check for leftover async usage:

#!/bin/bash
set -euo pipefail
rg -nP --hidden 'registerChatChainProvider\([^)]*,\s*async\s*\(' || true
rg -nP --hidden 'Promise\s*<\s*ChatLunaLLMChainWrapper\s*>' || true

245-257: Inconsistent with the reactive convention: createChatModel throws when the client is unavailable

The outer check throws, yet the returned computed then allows undefined anyway. Unify the behavior: do not throw; log a warning and return undefined inside the computed so callers can degrade gracefully by null-checking .value. Otherwise callers that do not awaitLoadPlatform first (such as long-memory) are interrupted synchronously.

-        if (client.value == null) {
-            throw new ChatLunaError(
-                ChatLunaErrorCode.MODEL_ADAPTER_NOT_FOUND,
-                new Error(`The platform ${platformName} no available`)
-            )
-        }
-
         return computed(() => {
             if (client.value == null) {
-                return undefined
+                this.ctx.logger.warn(
+                    `The platform ${platformName} not available`
+                )
+                return undefined
             }
             return client.value.createModel(model) as ChatLunaChatModel
         })
🧹 Nitpick comments (23)
packages/long-memory/src/plugins/config.ts (2)

1-1: Use a type-only import to avoid runtime bindings and potential bundling overhead

Importing Context as type-only is cleaner.

-import { Context } from 'koishi'
+import type { Context } from 'koishi'

(Extra suggestion, outside this hunk) Similarly, the Config import on Line 2 can be made type-only:

-import { Config } from '..'
+import type { Config } from '..'

6-6: apply does not need async, and the unused config should get an underscore prefix; also confirm the watcher cleanup strategy

There is no await here and config is unused. Drop async and rename the parameter to _config to silence the linter.

-export async function apply(ctx: Context, config: Config) {
-    modelSchema(ctx)
-}
+export function apply(ctx: Context, _config: Config) {
+    modelSchema(ctx)
+}

Also, please confirm that modelSchema(ctx) cleans up its watch by binding to the ctx lifecycle (for example returning a stop handle invoked on plugin unload, or binding internally). Otherwise hot reloads or repeated start/stop cycles can cause duplicate subscriptions and memory leaks. If the core implementation already handles this, ignore this note and add a remark saying so.

packages/core/src/middlewares/chat/read_chat_message.ts (1)

107-112: Complete the extension whitelist and accept jpg

The common jpg extension is not covered; recognize it and normalize it to jpeg.

-            if (!['png', 'jpeg'].includes(ext)) {
+            if (ext === 'jpg') ext = 'jpeg'
+            if (!['png', 'jpeg'].includes(ext)) {
                 ext = 'jpeg'
             }
packages/search-service/src/provide.ts (3)

9-10: Is importing ComputedRef from the plugin root necessary? Importing directly from @vue/reactivity reduces coupling

Unless koishi-plugin-chatluna explicitly and stably re-exports ComputedRef, this import adds package coupling and circular-dependency risk. Importing from the Vue reactivity source is safer.

If adjusting, change it to:

-import { ComputedRef } from 'koishi-plugin-chatluna'
+import { ComputedRef } from '@vue/reactivity'

27-27: _embeddings is uninitialized: this may error under strictPropertyInitialization

The field is not initialized in the constructor, so strict mode reports "Property has no initializer…"; and since _getEmbeddings() has a branch that returns null, the field type should also allow a "not ready" state.

Make the field optional or initialize it to null explicitly:

-private _embeddings: ComputedRef<ChatLunaBaseEmbeddings>
+private _embeddings?: ComputedRef<ChatLunaBaseEmbeddings>

109-128: Annotate the return type of _getEmbeddings and simplify the assignment, to ease later deduplication and caching

The return type is currently unannotated, so any/unknown inference spreads easily; awaiting directly inside the assignment expression also makes it harder to add in-flight deduplication later.

Suggestion:

-    private async _getEmbeddings() {
+    private async _getEmbeddings(): Promise<ComputedRef<ChatLunaBaseEmbeddings> | null> {

         try {
           const [platform, model] = parseRawModelName(
               this.ctx.chatluna.config.defaultEmbeddings
           )
-            this._embeddings = await this.ctx.chatluna.createEmbeddings(
-                platform,
-                model
-            )
+            const emb = await this.ctx.chatluna.createEmbeddings(platform, model)
+            this._embeddings = emb
         } catch (e) {
             logger.warn(
                 `Get embeddings failed: ${e}. Try check your defaultEmbeddings`
             )
             return null
         }

         return this._embeddings
     }

Additionally, if _reRankResults can be called concurrently, consider introducing a private _embeddingsP?: Promise<ComputedRef<ChatLunaBaseEmbeddings> | null> for in-flight deduplication.

packages/plugin-common/src/plugins/knowledge.ts (2)

100-114: Robustness: length-check searchKnowledge and construct the regex correctly

The current truthy check lets an empty array through the branch and builds an empty regex, which may match far too many results.

-  if (searchKnowledge) {
-    const regex =
-      typeof searchKnowledge === 'string'
-        ? searchKnowledge
-        : searchKnowledge.join('|')
+  if (Array.isArray(searchKnowledge) && searchKnowledge.length > 0) {
+    // escape special characters so user input cannot break the regex
+    const escape = (s: string) => s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
+    const regex =
+      typeof (searchKnowledge as unknown) === 'string'
+        ? escape(searchKnowledge as unknown as string)
+        : searchKnowledge.map(escape).join('|')

146-153: Stay consistent with the reactive refactor: avoid unwrapping the model too early or ignoring null values

Unwrapping via .then(...).value loses reactivity, and the case where model.value is empty is not handled. Obtain the ComputedRef first and null-check before use; if the knowledge chain already supports reactivity, prefer passing the ComputedRef through.

-  const [platform, modelName] = parseRawModelName(config?.model)
-
-  // TODO: send computed value to knowledge
-  const model = await ctx.chatluna
-    .createChatModel(platform, modelName)
-    .then((model) => model.value as ChatLunaChatModel)
-
-  return ctx.chatluna_knowledge.chains[config.mode](model, retriever)
+  const [platform, modelName] = parseRawModelName(config.model)
+  const modelRef = await ctx.chatluna.createChatModel(platform, modelName)
+  const model = modelRef.value as ChatLunaChatModel | null
+  if (!model) {
+    throw new ChatLunaError(
+      ChatLunaErrorCode.KNOWLEDGE_CONFIG_INVALID,
+      new Error(`model "${modelName}" is not loaded`)
+    )
+  }
+  // if the knowledge chain already accepts ComputedRef, pass modelRef directly to keep reactivity
+  return ctx.chatluna_knowledge.chains[config.mode](model, retriever)

Please confirm whether ctx.chatluna_knowledge.chains[mode] already has an overload accepting ComputedRef<ChatLunaChatModel>; if so, pass modelRef instead.

packages/long-memory/src/utils/layer.ts (2)

221-229: Robustness: the embedding model may be null; validate explicitly and fail fast

createEmbeddings(...).then(m => m.value) may yield null, and passing it into createVectorStore will then fail at runtime.

-  const embeddingModel = await ctx.chatluna
-    .createEmbeddings(platform, model)
-    .then((model) => model.value)
+  const embeddingModel = await ctx.chatluna
+    .createEmbeddings(platform, model)
+    .then(ref => ref.value)
+
+  if (!embeddingModel) {
+    throw new Error(`Embeddings model "${model}" is not loaded.`)
+  }

Also applies to: 230-236


139-145: Logging consistency: avoid bare console.error

Use the Koishi/plugin logger consistently for better observability and consolidated output.

-      } catch (e) {
-        console.error(e)
-      }
+      } catch (e) {
+        logger?.error(e)
+      }
packages/core/src/llm-core/platform/types.ts (2)

20-41: Doc comment wording (English): small grammar slip and guidance

The comment "is no passed to the function" is ungrammatical and should read "is not passed...". Also consistently emphasize obtaining the context through parentConfig.configurable.

- * @deprecated This parameter is no passed to the function.
+ * @deprecated This parameter is not passed to the function.
  * Please use the `configurable` in `parentConfig` parameter of {@link StructuredTool._call} to access `model`.

Apply the same fix to the other three occurrences.


11-17: API change risk: createFunction going from async to sync may break third-party implementations

Keep accepting asynchronous returns by retaining a union return type during the transition, reducing the impact on the ecosystem.

 export interface ChatLunaChainInfo {
   name: string
   description?: Dict<string>
   createFunction: (
     params: CreateChatLunaLLMChainParams
-  ) => ChatLunaLLMChainWrapper
+  ) => ChatLunaLLMChainWrapper | Promise<ChatLunaLLMChainWrapper>
 }

Also audit the repository for createFunction call sites that depend on await:

#!/bin/bash
rg -nP "createFunction\s*\(" -C2
packages/search-service/src/chain/browsing_chain.ts (2)

198-204: Null guard: the tool list may be empty or the target tool may not be found

this.tools.value.find(...) returns undefined when the tools are not loaded or no name matches, and the following .createTool call then throws.

-  private async _selectTool(name: string): Promise<StructuredTool> {
-    const chatLunaTool = this.tools.value.find((tool) => tool.name === name)
+  private async _selectTool(name: string): Promise<StructuredTool> {
+    const list = this.tools?.value ?? []
+    const chatLunaTool = list.find((tool) => tool.name === name)
+    if (!chatLunaTool) {
+      throw new ChatLunaError(
+        ChatLunaErrorCode.UNKNOWN_ERROR,
+        new Error(`Tool "${name}" not found`)
+      )
+    }
 
     return chatLunaTool.tool.createTool({
       embeddings: this.embeddings
     })
   }

366-373: Small optimization: use { once: true } on the AbortSignal listener to avoid a potential leak

The impact is minimal, but a one-shot listener is cleaner.

-  signal?.addEventListener('abort', (event) => {
+  signal?.addEventListener('abort', (event) => {
     reject(new ChatLunaError(ChatLunaErrorCode.ABORTED))
-  })
+  }, { once: true })

Apply the same change to both identical occurrences.

Also applies to: 399-404

packages/search-service/src/providers/wikipedia.ts (2)

87-92: Ineffective check: a ComputedRef is always truthy; check .value instead

Otherwise no warning is emitted even when the model is not loaded.

-  if (!model) {
+  if (!model?.value) {
     logger?.warn(
       'No keywordExtract model provided, skip enhanced keyword extract'
     )
   }

98-101: Consistency: keep the condition aligned with the extraction logic

search() uses this.model to decide whether to extract keywords, but the real null check lives inside _extractKeyword. Either null-check .value consistently, or always call _extractKeyword and let it decide internally.

-  if (this.model) {
+  if (this.model?.value) {
     query = await this._extractKeyword(query)
     logger?.debug(`Extracted keyword For Wikipedia: ${query}`)
   }

Also applies to: 152-164

packages/image-service/src/index.ts (4)

37-39: Robustness: handle model-creation failure so plugin initialization does not crash

Add a try/catch, and null-check with an early return in the later interception logic.

-  const [platform, modelName] = parseRawModelName(config.model)
-  const model = await ctx.chatluna.createChatModel(platform, modelName)
+  const [platform, modelName] = parseRawModelName(config.model)
+  let model
+  try {
+    model = await ctx.chatluna.createChatModel(platform, modelName)
+  } catch (e) {
+    logger.warn(`Create model failed for image-service: ${String(e)}`)
+  }

142-150: Format compatibility: make extension and MIME inference more robust, covering jpg/webp and friends

Only png/jpeg are currently allowed; add the other common formats, or prefer inferring from the content-type response header.

-  let ext = url.match(/\.([^.]*)$/)?.[1]
-
-  if (!['png', 'jpeg'].includes(ext)) {
-    ext = 'jpeg'
-  }
+  let ext = url.match(/\.([a-zA-Z0-9]+)(?:$|\?)/)?.[1]?.toLowerCase()
+  const ct = (response.headers['content-type'] as string | undefined)?.toLowerCase()
+  if (!ext && ct?.startsWith('image/')) {
+    ext = ct.split('/')[1]
+  }
+  if (ext === 'jpg') ext = 'jpeg'
+  if (!['png', 'jpeg', 'webp', 'gif'].includes(ext ?? '')) {
+    ext = 'jpeg'
+  }

163-171: Remove the debug output

console.log(images) should become a debug log or be removed.

-  console.log(images)
+  logger?.debug(() => `images to process: ${images.length}`)

133-140: Security/ops note: downloading external images carries SSRF and large-file risk

  • Restrict the protocol (http/https only) and add a domain/CIDR allowlist or block internal addresses.
  • Limit the maximum response-body size and set a timeout.
  • Log download failures and the source domain for traceability.

Also applies to: 151-155

packages/core/src/services/chat.ts (3)

136-143: deep watching is unnecessary

models is an array snapshot behind a ComputedRef; deep: true is usually not needed and only causes extra triggers.

-            { deep: true }
+            {}

588-601: The watch is not registered as a disposable; prefer immediate initialization plus a releasable stop handle

_supportModels only gets its initial value once models first changes, and the watch is not tracked in _disposables for cleanup. Run it once immediately and make it stoppable on dispose.

-        watch(
-            models,
-            () => {
-                this._supportModels = (models.value ?? []).map(
-                    (model) => `${this.platformName}/${model.name}`
-                )
-            },
-            { deep: true }
-        )
+        const stop = watch(
+            models,
+            () => {
+                this._supportModels = (models.value ?? []).map(
+                    (model) => `${this.platformName}/${model.name}`
+                )
+            },
+            { deep: true, immediate: true }
+        )
+        this._disposables.push(stop)

773-774: Simplify how the config is obtained, reducing coupling

BasePlatformClient already provides a config getter; use it directly instead of reaching into configPool.

-        const config = client.value.configPool.getConfig(true).value
+        const config = client.value.config
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ba2939e and 3dccd12.

⛔ Files ignored due to path filters (1)
  • packages/core/package.json is excluded by !**/*.json
📒 Files selected for processing (13)
  • packages/core/src/llm-core/chat/app.ts (5 hunks)
  • packages/core/src/llm-core/platform/types.ts (1 hunks)
  • packages/core/src/middlewares/chat/read_chat_message.ts (1 hunks)
  • packages/core/src/services/chat.ts (8 hunks)
  • packages/image-service/src/index.ts (4 hunks)
  • packages/long-memory/src/plugins/config.ts (1 hunks)
  • packages/long-memory/src/utils/chat-history.ts (2 hunks)
  • packages/long-memory/src/utils/layer.ts (1 hunks)
  • packages/plugin-common/src/plugins/knowledge.ts (1 hunks)
  • packages/search-service/src/chain/browsing_chain.ts (5 hunks)
  • packages/search-service/src/index.ts (9 hunks)
  • packages/search-service/src/provide.ts (4 hunks)
  • packages/search-service/src/providers/wikipedia.ts (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/core/src/llm-core/chat/app.ts
🧰 Additional context used
🧬 Code graph analysis (9)
packages/plugin-common/src/plugins/knowledge.ts (1)
packages/core/src/llm-core/platform/model.ts (2)
  • modelName (553-555)
  • ChatLunaChatModel (96-572)
packages/long-memory/src/utils/layer.ts (1)
packages/core/src/services/chat.ts (1)
  • platform (294-296)
packages/long-memory/src/plugins/config.ts (1)
packages/core/src/utils/schema.ts (1)
  • modelSchema (14-26)
packages/long-memory/src/utils/chat-history.ts (1)
packages/core/src/services/chat.ts (1)
  • platform (294-296)
packages/search-service/src/provide.ts (2)
packages/core/src/llm-core/model/in_memory.ts (1)
  • EmptyEmbeddings (26-38)
packages/core/src/llm-core/vectorstores/memory.ts (1)
  • MemoryVectorStore (142-350)
packages/image-service/src/index.ts (3)
packages/core/src/utils/schema.ts (1)
  • modelSchema (14-26)
packages/core/src/services/chat.ts (1)
  • platform (294-296)
packages/core/src/llm-core/platform/model.ts (2)
  • modelName (553-555)
  • ChatLunaChatModel (96-572)
packages/core/src/services/chat.ts (2)
packages/core/src/llm-core/model/in_memory.ts (1)
  • emptyEmbeddings (40-40)
packages/core/src/llm-core/platform/client.ts (1)
  • config (72-74)
packages/search-service/src/providers/wikipedia.ts (2)
packages/search-service/src/chain/browsing_chain.ts (1)
  • model (515-517)
packages/core/src/llm-core/platform/model.ts (1)
  • ChatLunaChatModel (96-572)
packages/search-service/src/index.ts (1)
packages/core/src/llm-core/platform/client.ts (2)
  • config (72-74)
  • createModel (99-105)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: build
  • GitHub Check: lint
  • GitHub Check: build
  • GitHub Check: lint
🔇 Additional comments (7)
packages/long-memory/src/plugins/config.ts (1)

3-3: Confirm the subpath export and dependency declarations so the build stays usable

koishi-plugin-chatluna/utils/schema relies on a package.exports subpath export. Please confirm:

  • the core package koishi-plugin-chatluna declares exports["./utils/schema"] in its package.json;
  • this package, packages/long-memory, lists koishi-plugin-chatluna under dependencies or peerDependencies in its package.json;
  • the TS moduleResolution / bundler configuration supports package.exports (one of Node16/NodeNext/Bundler).

If any of these is missing, production builds or ESM/CJS interop may fail.

You can run the following read-only script to verify:

#!/bin/bash
set -euo pipefail

echo "== 检查 long-memory 依赖中的 koishi-plugin-chatluna =="
jq -r '.dependencies["koishi-plugin-chatluna"] // .peerDependencies["koishi-plugin-chatluna"] // empty' packages/long-memory/package.json

echo "== 寻找名为 koishi-plugin-chatluna 的包并检查其 exports['./utils/schema'] =="
pkg_json=$(fd -t f package.json packages | while read -r f; do
  name=$(jq -r '.name // empty' "$f")
  if [ "$name" = "koishi-plugin-chatluna" ]; then
    echo "$f"
    break
  fi
done)

if [ -z "${pkg_json:-}" ]; then
  echo "未找到 name=koishi-plugin-chatluna 的 package.json" && exit 1
fi

echo "核心包: $pkg_json"
jq -r '.exports["./utils/schema"] // "MISSING_EXPORTS_SUBPATH"' "$pkg_json"

echo "== 粗略检查 tsconfig 的 moduleResolution(可能存在多个 tsconfig) =="
fd -t f tsconfig.json | xargs -I{} sh -c 'echo {}; jq -r ".compilerOptions.moduleResolution // empty" "{}"'
packages/search-service/src/provide.ts (1)

145-145: Passing embeddings.value into the vector store constructor is correct

MemoryVectorStore expects an EmbeddingsInterface instance, so passing embeddings.value fits the new reactive wrapper. LGTM.

packages/image-service/src/index.ts (1)

44-59: The capability check is OK, but include the platform and model name in the log context

It helps troubleshooting; keep the early-return logic as is.

packages/plugin-common/src/plugins/knowledge.ts (1)

106-111: Naming consistency / migration check: the chathub_knowledge collection is still used

The whole repository has migrated from ChatHub to ChatLuna; should the database collection name migrate as well? If compatibility is needed, is there a strategy for reading/writing both collections at once?

A quick script to locate references and migration traces:

#!/bin/bash
rg -nP "chathub_knowledge|chatluna_knowledge" -C2

Also applies to: 115-119

packages/search-service/src/index.ts (3)

138-147: The reactive getTools wrapper is sound

Mapping the ComputedRef from PlatformService.getTools() into a collection of tool instances is concise and readable.


155-158: The createModel flow is sound

It awaits awaitLoadPlatform internally before returning the ComputedRef, matching the new reactive convention.


16-17: Confirm the re-export source

Are computed and watch re-exports from 'koishi-plugin-chatluna'? To avoid build-time errors, confirm the core package actually exports them.

Verify with this script:

#!/bin/bash
set -euo pipefail
rg -nP 'export\s+\{\s*computed\s*,\s*watch\s*\}' packages/core/src/index.ts || true

- Wrap reactive watchers in effect scope to prevent memory leaks in chat service
- Add proper null checking for optional keyword extract model in search service
- Improve model initialization logic to handle 'empty' model configuration
- Fix computed ref access patterns for embeddings validation
- Add proper cleanup for reactive watchers using ctx.effect

This ensures reactive watchers are properly disposed and prevents potential
memory leaks while maintaining robust null safety throughout the system.
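A minimal sketch of the effect-scope idea, assuming watchers created inside an effectScope are collected by it (as in recent @vue/reactivity releases) and that the returned stop handle is what gets registered with the plugin context (for example via ctx.effect); the actual chat service wiring may differ:

import { effectScope, watch, ComputedRef } from '@vue/reactivity'

// groups several watchers so they can all be stopped together on dispose
function watchModels(
    models: ComputedRef<string[]>,
    onChange: (names: string[]) => void
): () => void {
    const scope = effectScope()

    scope.run(() => {
        watch(models, onChange, { immediate: true })
        // any additional watchers created here belong to the same scope
    })

    // hand this stop function to the plugin lifecycle for cleanup
    return () => scope.stop()
}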
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (2)
packages/search-service/src/index.ts (2)

36-47: 已正确对可空模型做守卫并关联清理,避免 watch(null) 崩溃与重启风暴

  • 判空后再 watch,且通过 ctx.effect 注册 stop 清理,生命周期管理到位。与此前反馈相符。

56-56: 仍有一次未守卫的 .value 解引用,默认配置会直接抛错(阻断路径)

summaryModel 默认是 'empty',此时 keywordExtractModel 为 null;这里直接使用 .value 会在构造 PuppeteerBrowserTool 时抛出异常。请与下文一致提供回退到对话模型。

应用此修复:

-                const browserTool = new PuppeteerBrowserTool(
-                    ctx,
-                    keywordExtractModel.value,
-                    params.embeddings,
-                    {
+                const browserTool = new PuppeteerBrowserTool(
+                    ctx,
+                    keywordExtractModel?.value ?? params.model,
+                    params.embeddings,
+                    {
🧹 Nitpick comments (4)
packages/search-service/src/index.ts (4)

59-68: 避免硬编码超时,尊重配置项 puppeteerTimeout/puppeteerIdleTimeout

当前固定 6s/30s 与 3min,忽略了配置里的默认/用户值。建议使用配置,并在质量模式下设置“至少 30s”的下限。

可按下述方式改造:

-                        timeout:
-                            summaryType !== SummaryType.Quality
-                                ? 6 * Time.second
-                                : 30 * Time.second,
-                        idleTimeout: 3 * Time.minute
+                        timeout:
+                            summaryType !== SummaryType.Quality
+                                ? config.puppeteerTimeout
+                                : Math.max(config.puppeteerTimeout, 30 * Time.second),
+                        idleTimeout: config.puppeteerIdleTimeout

85-89: 与 web-search 保持一致:为 web-browser 的模型也提供回退

当前仅可选链,未回退到 params.model。为一致性与健壮性建议添加回退。

-                return new PuppeteerBrowserTool(
-                    ctx,
-                    keywordExtractModel?.value,
-                    params.embeddings
-                )
+                return new PuppeteerBrowserTool(
+                    ctx,
+                    keywordExtractModel?.value ?? params.model,
+                    params.embeddings
+                )

142-149: 命名阴影易混淆,微调可读性

内部 const tools = service.getTools() 与返回的 computed 名称重复,阅读时容易混淆。建议改名为 toolNamestoolListRef

-function getTools(service: PlatformService, filter: (name: string) => boolean) {
-    const tools = service.getTools()
-    return computed(() =>
-        tools.value.filter(filter).map((name) => ({
+function getTools(service: PlatformService, filter: (name: string) => boolean) {
+    const toolNames = service.getTools()
+    return computed(() =>
+        toolNames.value.filter(filter).map((name) => ({
             name,
             tool: service.getTool(name)
         }))
     )
 }

152-161: createModel 返回类型建议明确(可选)

如条件允许,考虑补充返回类型(如 Promise<ComputedRef<ChatLunaChatModel> | null>),便于下游正确使用 .value 并让编译期发现误用;若暂不引入类型依赖,可先保持现状。
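
为说明收益,这里给出一个补充了显式返回类型的最小示意(接口与命名均为假设,不绑定具体依赖):

import { computed, type ComputedRef } from '@vue/reactivity'

// 假设的模型来源接口,仅示意"等待加载 + 响应式解析"的流程
interface ModelSource<M> {
    waitLoad(platform: string): Promise<void>
    resolve(platform: string, name: string): M | undefined
}

export async function createModelRef<M>(
    source: ModelSource<M>,
    platform: string,
    name: string
): Promise<ComputedRef<M | undefined> | null> {
    if (!platform || !name) return null
    await source.waitLoad(platform)
    // 下游通过 .value 取用,编译期即可发现误用
    return computed(() => source.resolve(platform, name))
}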

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3dccd12 and fc60a25.

📒 Files selected for processing (3)
  • packages/core/src/services/chat.ts (8 hunks)
  • packages/search-service/src/index.ts (9 hunks)
  • packages/search-service/src/provide.ts (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/search-service/src/provide.ts
  • packages/core/src/services/chat.ts
🧰 Additional context used
🧬 Code graph analysis (1)
packages/search-service/src/index.ts (2)
packages/core/src/llm-core/platform/client.ts (1)
  • createModel (99-105)
packages/core/src/services/chat.ts (2)
  • stop (324-330)
  • platform (296-298)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: build
  • GitHub Check: lint
  • GitHub Check: lint
  • GitHub Check: build
🔇 Additional comments (5)
packages/search-service/src/index.ts (5)

16-16: 引入响应式 API 合理

新增的 computed/watch 来源一致,满足后续响应式改造使用场景。


74-76: 统一采用可选链并回退,避免空引用

这里已改为 keywordExtractModel?.value,并向下游传递,符合预期。


118-118: summaryModel 的回退策略合理

keywordExtractModel?.value ?? params.model 能在缺省模型时平滑回退。


391-391: 移除多余类型断言 OK

.i18n(... ) 去掉 as Schema<Config> 更简洁,无功能影响。


102-135: 确认:fromLLMAndTools 接受 ComputedRef,provider 可为同步

已核实:packages/search-service/src/chain/browsing_chain.ts 中 fromLLMAndTools 的第二个参数类型为 ComputedRef<ChatLunaToolWrapper[]>;packages/core/src/services/chat.ts 中 registerChatChainProvider 期望同步返回 ChatLunaLLMChainWrapper。
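
为直观说明"同步 provider + ComputedRef 工具列表"的配合方式,附一个最小示意(接口名均为假设,仅示意调用关系):

import { type ComputedRef } from '@vue/reactivity'

interface ToolWrapper { name: string }
interface LLMLike { invoke(input: string): Promise<string> }
interface ChainWrapperLike { call(input: string): Promise<string> }

// provider 同步返回链包装器;链在每次调用时读取 tools.value,自动拿到最新工具
export function createBrowsingChain(
    llm: LLMLike,
    tools: ComputedRef<ToolWrapper[]>
): ChainWrapperLike {
    return {
        async call(input) {
            const names = tools.value.map((t) => t.name).join(', ')
            return llm.invoke(`[available tools: ${names}]\n${input}`)
        }
    }
}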

- Update core package version to 1.3.0-alpha.41
- Update all adapter packages to depend on new core version
- Update all service packages to use latest core version
- Maintain version consistency across the monorepo

This version bump includes the Vue reactivity integration and
all related improvements from the refactor/computed-system branch.
@dingyi222666 dingyi222666 changed the title [Refactor] 响应式系统实现,更好的重载实现 [Refactor] 响应式系统实现,更好的重载实现,减少 async 传播性 Sep 16, 2025
…nagement

- Remove manual plugin registration requirement across all packages
- Implement automatic plugin installation/uninstallation in core service
- Convert async tool creation to synchronous pattern for better performance
- Add automatic effect-based cleanup for all plugin registrations
- Unify plugin lifecycle management with proper dispose handling
- Modernize config parsing to synchronous operations
- Add registerRenderer method to ChatLunaPlugin for consistent API
- Update all adapters and services to use new plugin architecture
- Improve memory management with automatic disposable cleanup
- Standardize tool registration patterns across all packages

This change simplifies plugin development by removing boilerplate code and
provides a more robust, automatically managed plugin lifecycle system.
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (27)
packages/mcp-client/src/service.ts (5)

110-122: type 可能为 undefined 导致运行时崩溃;并补齐未覆盖的传输类型分支

当 url 存在但不含 "sse" 且 type 未定义时,调用 type.includes('sse') 会抛异常;同时缺少兜底分支会导致 transport 未赋值。

应用以下修复,使用安全判断并在不匹配时抛出明确错误:

-            } else if (url.includes('sse') || type.includes('sse')) {
+            } else if ((typeof type === 'string' && type.includes('sse')) || url.includes('sse')) {
               transport = new SSEClientTransport(new URL(url), {
                 requestInit: {
                   headers: headers ?? {}
                 }
               })
-            } else if (url.startsWith('http')) {
+            } else if (url.startsWith('http')) {
               transport = new StreamableHTTPClientTransport(new URL(url), {
                 requestInit: {
                   headers: headers ?? {}
                 }
               })
-            }
+            } else {
+              throw new Error('Unsupported MCP server transport config: expected stdio, sse, or http(s).')
+            }

124-137: 日志可能外泄敏感信息(env/headers/命令行)

把完整 serverConfig/headers/env 打到日志中有泄密风险。仅记录必要字段(如 url/type),错误日志附上 error 即可。

-            logger.debug(
-                `Connecting to server at ${JSON.stringify(serverConfig)}`
-            )
+            logger.debug('Connecting to MCP server', { url, type })
...
-                logger.debug('MCP client connected at', serverConfig)
+                logger.debug('MCP client connected', { url, type })
...
-                logger.error(
-                    `Failed to connect to server at ${JSON.stringify(
-                        serverConfig
-                    )}`,
-                    error
-                )
+                logger.error('Failed to connect to MCP server', { url, type, error })

77-83: 解析失败时不应把原始配置串写入日志

this.config.servers 可能包含凭据,直接输出到日志存在泄露风险。

-            logger.error(
-                'Failed to parse MCP servers configuration',
-                error,
-                this.config.servers
-            )
+            logger.error('Failed to parse MCP servers configuration', error)

179-187: 忽略了 enabled 开关:被标记禁用的工具仍会注册

未按配置跳过禁用工具,可能引入意外工具调用。

         for (const name in this._globalTools) {
           const toolConfig = this._globalTools[name]
           const mcpTool = mcpTools.tools.find((t) => t.name === name)

           if (!mcpTool) {
             logger.warn(`Tool ${name} not found in MCP`)
             continue
           }
+          if (toolConfig.enabled === false) {
+            logger.info(`Skip disabled MCP tool: ${name}`)
+            continue
+          }
...
-            this._plugin.registerTool(langChainTool.name, {
+            this._plugin.registerTool(langChainTool.name, {
               createTool: () => langChainTool,
-              selector(history) {
+              selector(history) {
+                if (toolConfig.enabled === false) return false
                 if (toolConfig.selector.length === 0) {
                   return true
                 }
                 return history.some((message) =>
                   toolConfig.selector.some((selector) =>
                     getMessageContent(message.content).includes(
                       selector
                     )
                   )
                 )
               }
             })

Also applies to: 206-221


189-197: serverName 传入错误 — 目前传入的是工具名,需改为 MCP 服务标识或明确由 SDK 路由

验证:packages/mcp-client/src/service.ts(registerClientTools,约行190)向 callTool 传入了 serverName: name(name 为工具名);packages/mcp-client/src/utils.ts(callTool,约行431)将 { name: toolName, arguments: args } 传给 client.callTool,未把 serverName 用于请求路由,serverName 仅用于错误/结果处理。原评论成立。

操作建议:

  • 在 packages/mcp-client/src/service.ts(约行190)将 serverName 改为真实的 MCP 服务标识(或删除/重命名该字段以避免混淆)。
  • 或者,若需按服务器路由,则在 packages/mcp-client/src/utils.ts 将 serverName 加入传给 SDK 的参数(callToolArgs)并确认 @modelcontextprotocol/sdk 支持该参数;或为不同 MCP 服务使用独立 Client 实例以确保路由明确。
packages/rwkv-adapter/src/index.ts (1)

60-60: 插件名称拼写错误将导致插件识别/加载失败

应为 chatluna-rwkv-adapter 而非 chatluna-rmkv-adapter。这会直接影响插件注册与依赖注入。

应用以下修复:

-export const name = 'chatluna-rmkv-adapter'
+export const name = 'chatluna-rwkv-adapter'
packages/hunyuan-adapter/src/index.ts (1)

64-64: 插件名称大小写不一致,建议统一为全小写以避免注册/匹配问题

建议改为 chatluna-hunyuan-adapter(与仓库其它适配器命名保持一致)。

-export const name = 'chatluna-Hunyuan-adapter'
+export const name = 'chatluna-hunyuan-adapter'
packages/core/src/services/chat.ts (4)

239-257: createChatModel 提前抛错与响应式退化策略不一致

当前在 computed 外部若 client.value == null 直接抛错,但 createEmbeddings 采用“记录 warn + 返回可用占位/undefined”的策略。为保持一致性与可组合性,建议移除外部抛错,在 computed 内返回 undefined(或记录 warn)。

- if (client.value == null) {
-   throw new ChatLunaError(
-     ChatLunaErrorCode.MODEL_ADAPTER_NOT_FOUND,
-     new Error(`The platform ${platformName} no available`)
-   )
- }
-
 return computed(() => {
   if (client.value == null) {
-    return undefined
+    this.ctx.logger.warn(`The platform ${platformName} not available`)
+    return undefined
   }
   return client.value.createModel(model) as ChatLunaChatModel
 })

733-734: WebSocket 清理未绑定 this,可能导致未正确关闭

ctx.effect(() => webSocket.close) 返回的是未绑定的函数引用,dispose 时 this 丢失会导致 close 失效,形成资源泄漏。

- this.ctx.effect(() => webSocket.close)
+ this.ctx.effect(() => () => webSocket.close())

779-800: 修复 client 可能为 null 导致的 NPE:为配置读取添加空检查并回退并发限制

packages/core/src/services/chat.ts 行 779-800:避免直接访问 client.value;使用可选链或显式空检查读取配置(例如 client?.value?.configPool.getConfig(true)?.value),并在调用 this._modelQueue.wait 时使用回退并发限制:优先 config?.concurrentMaxSize,次之 this._service.config.chatConcurrentMaxSize,最后回退到 1。

补充端到端用例:平台未加载或 client 为 null 时触发 chat 不应抛错,应按照回退并发限制排队。


957-986: 按平台 dispose 未能中止请求 — 使用了 conversationId 而非 requestId

问题:_requestIdMap 的 key 是 requestId(见 packages/core/src/services/chat.ts:812–814 的 this._requestIdMap.set(requestId, abortController)),但 dispose(platform) 用 conversationId 去 this._requestIdMap.get(conversationId),因此不会中止对应的 AbortController,导致请求/资源泄漏(dispose 位于 packages/core/src/services/chat.ts:957–986)。
最小修复:维护 requestId -> platform(或 conversationId -> Set)映射;在 dispose(platform) 时遍历该映射,abort 并删除匹配 platform 的 requestId(同时同步清理映射)。示例 patch:

@@ class ChatInterfaceWrapper {
- private _requestIdMap: Map<string, AbortController> = new Map()
+ private _requestIdMap: Map<string, AbortController> = new Map()
+ private _requestIdToPlatform: Map<string, string> = new Map()
@@ chat(...)
- this._requestIdMap.set(requestId, abortController)
+ this._requestIdMap.set(requestId, abortController)
+ this._requestIdToPlatform.set(requestId, platform)
@@ finally
- this._requestIdMap.delete(requestId)
+ this._requestIdMap.delete(requestId)
+ this._requestIdToPlatform.delete(requestId)
@@ dispose(platform?: string)
- const conversationIds = this._platformToConversations.get(platform)
- if (!conversationIds?.length) return
-
- for (const conversationId of conversationIds) {
-   this._conversations.delete(conversationId)
-   // Terminate platform-related requests
-   const controller = this._requestIdMap.get(conversationId)
-   if (controller) {
-     controller.abort()
-     this._requestIdMap.delete(conversationId)
-   }
- }
+ for (const [reqId, plat] of this._requestIdToPlatform.entries()) {
+   if (plat === platform) {
+     this._requestIdMap.get(reqId)?.abort()
+     this._requestIdMap.delete(reqId)
+     this._requestIdToPlatform.delete(reqId)
+   }
+ }
+ // 清理缓存的会话
+ const conversationIds = this._platformToConversations.get(platform)
+ conversationIds?.forEach((id) => this._conversations.delete(id))
  this._platformToConversations.delete(platform)

已用 rg 确认 set/delete/abort 的位置,问题成立。

packages/long-memory/src/plugins/tool.ts (2)

52-64: Schema 与实现不一致:layer 实际被当作可选项使用,应加默认值

实现中以空值回退默认层,但当前 Zod schema 将 layer 定义为必填数组。建议将其改为可选并提供默认值,减少调用端负担并避免解析失败。

应用以下变更(同类建议见下文另两个工具):

-    layer: z
-        .array(
-            z.union([
-                z.literal('user'),
-                z.literal('preset_user'),
-                z.literal('preset'),
-                z.literal('global')
-            ])
-        )
-        .describe('The layer of the memory')
+    layer: z
+        .enum(['user', 'preset_user', 'preset', 'global'])
+        .array()
+        .default(['preset_user'])
+        .optional()
+        .describe('The layer(s) of the memory')

243-249: 修复建议:将 deleteMemories 的默认层参数统一为数组并收紧 MemoryRetrievalLayerType 索引

packages/long-memory/src/plugins/tool.ts 在多处(约行 ~86、~175、~243)将 input.layer 映射为 MemoryRetrievalLayerType 时未使用 keyof 收紧索引,且在无输入时回退为单个枚举值,这与统一传入数组的预期不一致并可能导致运行时或类型异常,建议按下述修改统一为数组并加上类型断言:

-                input.layer != null
-                    ? input.layer.map(
-                          (layer) =>
-                              MemoryRetrievalLayerType[layer.toUpperCase()]
-                      )
-                    : MemoryRetrievalLayerType.PRESET_USER
+                input.layer != null
+                    ? input.layer.map(
+                          (layer) =>
+                              MemoryRetrievalLayerType[
+                                  layer.toUpperCase() as keyof typeof MemoryRetrievalLayerType
+                              ]
+                      )
+                    : [MemoryRetrievalLayerType.PRESET_USER]
packages/plugin-common/src/plugins/command.ts (1)

301-336: 命令拼接未处理空格/引号,存在解析歧义

当前直接用空格拼接 args/options,值中若含空格或引号会导致语义改变。建议对参数和值进行安全引用。

   private parseInput(input: Record<string, any>): string {
     try {
-      const args: string[] = []
-      const options: string[] = []
+      const args: string[] = []
+      const options: string[] = []
+      const quote = (s: string) =>
+        typeof s === 'string' && /\s|["']/.test(s) ? JSON.stringify(s) : String(s)

       // 处理参数
       this.command.arguments.forEach((arg) => {
         if (arg.name in input) {
-          args.push(String(input[arg.name]))
+          args.push(quote(input[arg.name]))
         }
       })

       // 处理选项
       this.command.options.forEach((opt) => {
         if (opt.name in input && opt.name !== 'help') {
           if (opt.type === 'boolean') {
             if (input[opt.name]) {
               options.push(`--${opt.name}`)
             }
           } else {
-            options.push(`--${opt.name}`, String(input[opt.name]))
+            options.push(`--${opt.name}`, quote(input[opt.name]))
           }
         }
       })
packages/plugin-common/src/plugins/cron.ts (1)

118-138: 定时命令参数未做引用处理,带空格文本会被切分

与 command 工具类似,echo/command 内容包含空格时需加引号,避免 schedule 解析错误。

-    if (type === 'command') {
-        return `schedule ${interval} -- ${args[0]}`
-    }
+    const quote = (s: string) => (/\s|["']/.test(s) ? JSON.stringify(s) : s)
+    if (type === 'command') {
+        return `schedule ${interval} -- ${quote(args[0])}`
+    }
@@
-    if (args[1] === 'group') {
-        result.push(args[0])
+    if (args[1] === 'group') {
+        result.push(quote(args[0]))
         return result.join(' ')
     }
@@
-    result.push('-u')
-    result.push('@' + args[1])
-    result.push(args[0])
+    result.push('-u')
+    result.push('@' + args[1])
+    result.push(quote(args[0]))
packages/plugin-common/src/plugins/fs.ts (4)

12-12: Zod 导入方式错误(默认导入),运行时会是 undefined

zod 无默认导出;当前写法在启用 esModuleInterop 也会得到 undefined,调用 z.object 会崩溃。

-import z from 'zod'
+import { z } from 'zod'

145-156: 路径越界校验不安全:startsWith 容易被前缀绕过(/scope 与 /scopee)且未处理符号链接

需统一用 realpath/resolve + relative 校验,确保访问在 scope 内。建议提取工具方法并在所有文件操作中复用。

 class FileStore implements BaseFileStore {
   constructor(private _scope: string) {}
+
+  private async resolveInScope(p: string): Promise<string> {
+    const scope = path.resolve(this._scope || '/')
+    const real = await fs.realpath(path.resolve(p))
+    const rel = path.relative(scope, real)
+    if (rel === '' || (!rel.startsWith('..') && !path.isAbsolute(rel))) {
+      return real
+    }
+    throw new Error(`path "${p}" is not in scope "${this._scope}"`)
+  }
@@
-  async readFile(path: string): Promise<string> {
-    if (!path.startsWith(this._scope)) {
-      throw new Error(`path "${path}" is not in scope "${this._scope}"`)
-    }
-    return JSON.stringify({
-      path,
-      content: (await fs.readFile(path)).toString()
-    })
-  }
+  async readFile(p: string): Promise<string> {
+    const file = await this.resolveInScope(p)
+    return JSON.stringify({
+      path: file,
+      content: (await fs.readFile(file, 'utf-8')).toString()
+    })
+  }

说明:请同样在 writeFile/listFiles/grep/glob/editFile/rename 等入口使用 resolveInScope,对目录也应先 realpath 后校验;遍历过程若遇到符号链接,需再次 realpath 校验,防止通过 symlink 跳出作用域。


265-291: glob/grep 递归遍历未对 symlink 出作用域做二次校验

_findFiles 中对符号链接仅判断文件类型,未校验 realpath 是否仍在 scope 内。请在加入 results 或递归前调用 resolveInScope。


694-701: MultiRenameTool 使用字符串 replace 与 glob 模式不匹配,重命名会失效或产生异常结果

pattern 是 micromatch 的 glob,直接对完整路径做 String.replace(pattern, replacement) 并不生效。建议:

  • 方案A(最小变更):用 micromatch.makeRe(pattern, { dot: true }) 生成 RegExp 后再执行 replace;
  • 方案B(更稳妥):仅对 basename 做替换,并新增 replacePattern(正则)或提供模板占位符(如 {name}{ext})。
-const newFileName = file.replace(pattern, replacement)
+const re = micromatch.makeRe(pattern, { dot: true })
+const newFileName = file.replace(re, (_m) =>
+  path.join(path.dirname(file), replacement)
+)

并请补充测试:含子目录、隐藏文件、Windows 路径、中文文件名、重复命名冲突(需去重或报错)。

packages/plugin-common/src/plugins/code_sandbox.ts (1)

55-69: 已杀死的解释器句柄未置空,后续复用会失败

定时 kill 后 this.interpreter 仍为已关闭实例,createSandBox 不会重建,导致后续 runCode 失败。

-  private interpreter: Sandbox
+  private interpreter?: Sandbox
@@
-  ctx.setInterval(
-      async () => {
-          await this.interpreter?.kill()
-      },
-      1000 * 60 * 30
-  )
+  ctx.setInterval(
+    async () => {
+      try { await this.interpreter?.kill() } finally { this.interpreter = undefined }
+    },
+    1000 * 60 * 30
+  )

并在 createSandBox 中相应将类型调整为可选:

- if (this.interpreter == null) {
+ if (!this.interpreter) {
     this.interpreter = await Sandbox.create({ apiKey: this.apiKey })
 }
packages/plugin-common/src/plugins/todos.ts (1)

9-23: 必须修复:todosStore 为无界内存占用(添加 TTL/回收/持久化)

packages/plugin-common/src/plugins/todos.ts 中的 todosStore(声明:9–23)为 Map,代码中未发现清理或过期逻辑;写入位于 135–137,读取/抛错位于 175–177 与 213–215。长期运行会导致内存无限增长。

  • 建议至少采取其一:添加 expire 字段并定期清理(定时任务);或设置最大容量并实现 LRU/FIFO 淘汰;或将数据持久化到数据库并按需加载/回收。
  • 在写/读处加入存在性与过期校验,读取过期项时自动清理或返回明确错误/提示。
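
按第一条建议(过期字段 + 定期清理),附一个最小示意(字段与默认 TTL 均为假设):

// 给内存 Map 增加过期时间与定期清理,避免长期运行导致无限增长
interface TodoEntry<T> {
    value: T
    expireAt: number
}

export class ExpiringStore<T> {
    private store = new Map<string, TodoEntry<T>>()

    constructor(private ttl: number = 1000 * 60 * 60 * 24) {}

    set(key: string, value: T) {
        this.store.set(key, { value, expireAt: Date.now() + this.ttl })
    }

    get(key: string): T | undefined {
        const entry = this.store.get(key)
        if (!entry) return undefined
        if (entry.expireAt < Date.now()) {
            // 读取到过期项时自动清理
            this.store.delete(key)
            return undefined
        }
        return entry.value
    }

    // 可交给 ctx.setInterval 定期调用
    sweep() {
        const now = Date.now()
        for (const [key, entry] of this.store) {
            if (entry.expireAt < now) this.store.delete(key)
        }
    }
}
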
packages/plugin-common/src/plugins/openapi.ts (3)

163-172: 未定义 args.path 时会抛错(for...in on undefined)

当无 path 参数时,for (const key in args.path) 会抛错。请做空值防护。

应用如下修复:

-        for (const key in args.path) {
-            if (decodedPathname.includes(`{${key}}`)) {
-                decodedPathname = decodedPathname.replace(
-                    `{${key}}`,
-                    encodeURIComponent(args.path[key])
-                )
-            }
-        }
+        for (const [key, val] of Object.entries((args as any).path ?? {})) {
+            if (decodedPathname.includes(`{${key}}`)) {
+                decodedPathname = decodedPathname.replace(
+                    `{${key}}`,
+                    encodeURIComponent(String(val))
+                )
+            }
+        }

176-178: query/header 同样缺少空值防护

args.queryargs.header 可能为 undefined,当前循环会抛错。

-        for (const key in args.query) {
-            queryParams.append(key, args.query[key])
-        }
+        for (const [key, val] of Object.entries((args as any).query ?? {})) {
+            queryParams.append(key, String(val))
+        }

-        for (const key in args.header) {
-            ;(init.headers as Headers).append(key, args.header[key])
-        }
+        for (const [key, val] of Object.entries((args as any).header ?? {})) {
+            ;(init.headers as Headers).append(key, String(val))
+        }

Also applies to: 185-187


341-347: 工具名生成可能为空或以数字开头且存在碰撞风险

Math.random().toString(36).substring(7) 可能返回空串;且无去重。建议使用 UUID 并确保首字符为字母。

-        let normalizedName = generateRandomString()
-
-        while (/^[0-9]/.test(normalizedName[0])) {
-            normalizedName = generateRandomString()
-        }
+        let normalizedName = (typeof crypto !== 'undefined' && 'randomUUID' in crypto)
+            ? crypto.randomUUID().replace(/-/g, '')
+            : (Date.now().toString(36) + Math.random().toString(36).slice(2))
+        if (!/^[A-Za-z]/.test(normalizedName)) {
+            normalizedName = `t_${normalizedName}`
+        }

如需保持辅助函数,请在 generateRandomString 内实现上述逻辑并去重(可在注册前用 plugin.hasTool(normalizedName) 重试生成)。
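
一个可能的实现示意(hasTool 是否存在请以实际 API 为准,这里通过回调注入以避免依赖具体签名):

import { randomUUID } from 'node:crypto'

// 生成以字母开头、且未与已注册工具冲突的名称
export function generateToolName(hasTool: (name: string) => boolean): string {
    for (let attempt = 0; attempt < 10; attempt++) {
        let name = randomUUID().replace(/-/g, '').slice(0, 16)
        if (!/^[a-z]/i.test(name)) name = `t${name.slice(1)}`
        if (!hasTool(name)) return name
    }
    throw new Error('failed to generate a unique tool name')
}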

packages/plugin-common/src/plugins/knowledge.ts (1)

100-105: 空数组判断导致默认知识库分支永不走

searchKnowledge 是数组,空数组也被视为 truthy,因此 else 分支不可达。应判断长度。

-    if (searchKnowledge) {
+    if (searchKnowledge && searchKnowledge.length > 0) {
packages/plugin-common/src/plugins/request.ts (2)

104-116: 缺少 URL 校验与超时控制,存在 SSRF/资源耗尽风险(GET)

需显式限制协议与内网目标,并为外部请求设置超时。

-        try {
-            const res = await this._plugin.fetch(url, {
-                headers: this.headers
-            })
+        try {
+            const u = new URL(url)
+            if (!/^https?:$/.test(u.protocol)) {
+                throw new Error('Only http/https protocols are allowed')
+            }
+            if (isBlockedHost(u.hostname)) {
+                throw new Error('Target host is not allowed')
+            }
+            const res = await this._plugin.fetch(url, {
+                headers: this.headers,
+                // Node 18+ / undici: AbortSignal.timeout
+                // 若运行时不支持,可降级为自定义超时控制
+                signal: (AbortSignal as any).timeout?.(15000)
+            })
             const text = await res.text()
             return text.slice(0, this.maxOutputLength)
         } catch (error) {
-            return `Web fetch failed: ${error.message}`
+            const msg = error instanceof Error ? error.message : String(error)
+            return `Web fetch failed: ${msg}`
         }

额外新增的辅助函数(放在文件底部或合适位置):

import net from 'node:net'

function isBlockedHost(host: string): boolean {
  const lowered = host.toLowerCase()
  if (['localhost', '127.0.0.1', '::1', '0.0.0.0'].includes(lowered)) return true
  const ipType = net.isIP(lowered)
  if (ipType) {
    // 简单私网段拦截
    const octets = lowered.split('.').map(Number)
    const inRange =
      (octets[0] === 10) ||
      (octets[0] === 172 && octets[1] >= 16 && octets[1] <= 31) ||
      (octets[0] === 192 && octets[1] === 168) ||
      (octets[0] === 169 && octets[1] === 254)
    return inRange
  }
  return false
}

151-168: POST 同步完善同样的 URL/超时与错误处理

与 GET 一致的安全与鲁棒性要求。

-        try {
-            const res = await this._plugin.fetch(url, {
+        try {
+            const u = new URL(url)
+            if (!/^https?:$/.test(u.protocol)) {
+                throw new Error('Only http/https protocols are allowed')
+            }
+            if (isBlockedHost(u.hostname)) {
+                throw new Error('Target host is not allowed')
+            }
+            const res = await this._plugin.fetch(url, {
                 method: 'POST',
                 headers: {
                     'Content-Type': 'application/json',
                     ...this.headers
                 },
-                body: JSON.stringify(data)
+                body: JSON.stringify(data),
+                signal: (AbortSignal as any).timeout?.(15000)
             })
             const text = await res.text()
             return text.slice(0, this.maxOutputLength)
         } catch (error) {
-            return `Web POST failed: ${error.message}`
+            const msg = error instanceof Error ? error.message : String(error)
+            return `Web POST failed: ${msg}`
         }
♻️ Duplicate comments (5)
packages/claude-adapter/src/index.ts (1)

17-35: 与 spark-adapter 相同的初始化竞态风险:parseConfig 未 await

同样需要确认 parseConfig 是否同步,否则需在 initClients 前确保配置可用。

packages/rwkv-adapter/src/index.ts (1)

9-26: parseConfig 未 await 的竞态风险

与其它适配器一致,请确认 parseConfig 已同步化或在 initClients 前等待配置可用。

packages/hunyuan-adapter/src/index.ts (1)

13-31: parseConfig 未 await 的竞态风险

需与其它适配器一致确认同步化,或在 initClients 前等待配置就绪。

packages/qwen-adapter/src/index.ts (1)

9-27: parseConfig 未 await 的竞态风险

同类问题,请确认 parseConfig 行为,必要时在 initClients 前等待配置可用。

packages/wenxin-adapter/src/index.ts (1)

12-30: parseConfig 未 await 的竞态风险

同类问题,请确认 parseConfig 行为,必要时在 initClients 前等待配置可用。

🧹 Nitpick comments (50)
packages/variable-extension/src/index.ts (1)

12-14: ready 钩子缺少错误处理,建议补充日志与降级

若中间件初始化抛错,当前不会记录日志,易于“静默失败”。建议 try/catch 并输出错误,避免难以排查。

-    ctx.on('ready', async () => {
-        await plugins(ctx, config, plugin)
-    })
+    ctx.on('ready', async () => {
+        try {
+            await plugins(ctx, config, plugin)
+        } catch (err) {
+            logger?.error(err, 'variable-extension plugins init failed')
+        }
+    })
packages/zhipu-adapter/src/index.ts (3)

13-27: 对 apiKeys 去重与清洗,避免重复或空 key 生成多余客户端

重复 key 会浪费并发与配额;空字符串会导致鉴权失败。建议先去重并过滤空值后再映射。

-            return config.apiKeys.map((apiKey) => {
+            const apiKeys = Array.from(new Set(config.apiKeys.filter(Boolean)))
+            return apiKeys.map((apiKey) => {
               return {
                 apiKey,
                 apiEndpoint: '',
                 platform: 'zhipu',
                 chatLimit: config.chatTimeLimit,
                 timeout: config.timeout,
                 maxRetries: config.maxRetries,
                 concurrentMaxSize: config.chatConcurrentMaxSize,
                 webSearch: config.webSearch,
                 retrieval: config.retrieval
                   .filter((item) => item[1])
                   .map((item) => item[0])
               } satisfies ZhipuClientConfig
             })

24-26: 用解构提升可读性并减少魔法下标

当前对二元组使用 [0]/[1] 可读性一般,建议用解构。

-                    retrieval: config.retrieval
-                        .filter((item) => item[1])
-                        .map((item) => item[0])
+                    retrieval: config.retrieval
+                        .filter(([, enabled]) => enabled)
+                        .map(([name]) => name)

12-28: 避免参数名遮蔽外层 config

回调参数名与外层 config 同名,易混淆,建议重命名为 cfg。

-        plugin.parseConfig((config) => {
-            return config.apiKeys.map((apiKey) => {
+        plugin.parseConfig((cfg) => {
+            return cfg.apiKeys.map((apiKey) => {
                 return {
                     apiKey,
                     apiEndpoint: '',
                     platform: 'zhipu',
-                    chatLimit: config.chatTimeLimit,
-                    timeout: config.timeout,
-                    maxRetries: config.maxRetries,
-                    concurrentMaxSize: config.chatConcurrentMaxSize,
-                    webSearch: config.webSearch,
-                    retrieval: config.retrieval
+                    chatLimit: cfg.chatTimeLimit,
+                    timeout: cfg.timeout,
+                    maxRetries: cfg.maxRetries,
+                    concurrentMaxSize: cfg.chatConcurrentMaxSize,
+                    webSearch: cfg.webSearch,
+                    retrieval: cfg.retrieval
                         .filter((item) => item[1])
                         .map((item) => item[0])
                 } satisfies ZhipuClientConfig
             })
         })
packages/mcp-client/src/service.ts (6)

99-107: parsedArgs 清洗逻辑不稳健(对象/数组被 toString 误判)

对 env/args 等非字符串字段使用 toString().trim() 会产生误判。请基于类型做判空与空集合判断。

-                for (const key in parsedArgs) {
-                    if (
-                        parsedArgs[key] === undefined ||
-                        parsedArgs[key] === null ||
-                        parsedArgs[key].toString().trim() === ''
-                    ) {
-                        delete parsedArgs[key]
-                    }
-                }
+                for (const key in parsedArgs) {
+                  const val = (parsedArgs as any)[key]
+                  if (val == null) {
+                    delete (parsedArgs as any)[key]
+                  } else if (typeof val === 'string' && val.trim() === '') {
+                    delete (parsedArgs as any)[key]
+                  } else if (Array.isArray(val) && val.length === 0) {
+                    delete (parsedArgs as any)[key]
+                  } else if (typeof val === 'object' && !Array.isArray(val) && Object.keys(val).length === 0) {
+                    delete (parsedArgs as any)[key]
+                  }
+                }

141-146: listTools 缺少容错;失败会中断后续逻辑

建议捕获异常并回退为空列表,避免服务初始化被整体打断。

-        const mcpTools = await this._client.listTools()
+        const mcpTools = await this._client.listTools().catch((e) => {
+          logger.error('Failed to list MCP tools', e)
+          return { tools: [] }
+        })

170-173: 同上:注册前的 listTools 也需容错

与前面建议一致,避免局部失败阻断工具注册。

-        const mcpTools = await this._client.listTools()
+        const mcpTools = await this._client.listTools().catch((e) => {
+          logger.error('Failed to list MCP tools', e)
+          return { tools: [] }
+        })

62-62: serverConfigs 的类型标注可读性与健壮性不足

使用 Config['server'][0][] 较诡异,建议改为 Array<Config['server'][number]> 提升直观与兼容性。

-        let serverConfigs: Config['server'][0][] = []
+        let serverConfigs: Array<Config['server'][number]> = []

52-58: 通过 setTimeout 延迟注册易引入竞态

建议以明确的生命周期信号(如连接完成/工具清单加载完成的 Promise)驱动注册,而非固定 100ms 延迟。
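
一个用 Promise 驱动注册的最小示意(类名与调用时机均为假设):

// 用"连接完成"的 Promise 替代固定 100ms 的 setTimeout
export class ConnectionGate {
    private readonly ready: Promise<void>
    private markReady!: () => void

    constructor() {
        this.ready = new Promise<void>((resolve) => {
            this.markReady = resolve
        })
    }

    // 在 client.connect() 成功并完成 listTools() 后调用
    open() {
        this.markReady()
    }

    // 注册方 await whenReady() 后再执行工具注册
    whenReady(): Promise<void> {
        return this.ready
    }
}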


1-1: 移除无用的 ESLint 规则豁免

未使用 eval,无需禁用 no-eval。

-/* eslint-disable no-eval */
packages/gemini-adapter/src/index.ts (2)

15-27: 在映射前过滤空 API Key,避免创建无效客户端实例

Schema 默认给出 ['', defaultEndpoint],当前会生成无效实例。建议过滤空 key:

-        plugin.parseConfig((config) => {
-            return config.apiKeys.map(([apiKey, apiEndpoint]) => {
+        plugin.parseConfig((config) => {
+            return config.apiKeys
+                .filter(([apiKey]) => apiKey?.trim().length > 0)
+                .map(([apiKey, apiEndpoint]) => {
                 return {
                     apiKey,
                     apiEndpoint,
                     platform: config.platform,
                     chatLimit: config.chatTimeLimit,
                     timeout: config.timeout,
                     maxRetries: config.maxRetries,
                     concurrentMaxSize: config.chatConcurrentMaxSize
                 }
-            })
-        })
+                })
+        })

14-14: 在 ctx.dispose 时释放资源,防止监听/作用域泄漏

建议补充释放钩子(示例):

ctx.on('dispose', () => {
  // 如 ChatLunaPlugin 暴露了 stop/close/dispose,请在此调用
  // (示例) plugin.dispose?.()
})
packages/doubao-adapter/src/index.ts (2)

14-30: parseConfig 非阻塞→请确认是否安全;必要时恢复 await

同类改动在多个适配器出现。若 parseConfig 仍为异步,会与 initClients 存在竞态风险。请结合上条核验脚本结论决定是否需要 await。


14-26: 过滤空 API Key,避免无效实例与无意义重试

与其他适配器保持一致,建议在 map 前 filter:

-        plugin.parseConfig((config) => {
-            return config.apiKeys.map(([apiKey, apiEndpoint]) => {
+        plugin.parseConfig((config) => {
+            return config.apiKeys
+                .filter(([apiKey]) => apiKey?.trim().length > 0)
+                .map(([apiKey, apiEndpoint]) => {
packages/openai-adapter/src/index.ts (2)

14-30: 确认 parseConfig 的同步语义,避免与 initClients 的竞态

逻辑同上;若 parseConfig 含异步准备,应恢复 await 或在插件内部提供 ready 等待点。


14-26: 过滤空 API Key

保持与 Schema 默认值兼容同时避免创建无效客户端:

-        plugin.parseConfig((config) => {
-            return config.apiKeys.map(([apiKey, apiEndpoint]) => {
+        plugin.parseConfig((config) => {
+            return config.apiKeys
+                .filter(([apiKey]) => apiKey?.trim().length > 0)
+                .map(([apiKey, apiEndpoint]) => {
packages/ollama-adapter/src/index.ts (1)

14-30: 确认 parseConfig 非阻塞调用是否安全

尽管本适配器不依赖 API Key,但若 parseConfig 内仍存在异步拉取/校验,仍可能与 initClients 竞争。建议按全局结论处理(同步/await/ready)。

packages/azure-openai-adapter/src/index.ts (3)

19-40: parseConfig 非阻塞→与 initClients 存在潜在时序问题

同系列适配器问题,建议依据 parseConfig 的真实签名决定是否需要 await 或内部 ready。


25-29: TypeScript 类型收窄不充分:reduce 初始值为 {} 导致 acc 推断为 {}

当前写法易触发索引类型报错或 any 泄漏。建议显式标注 Record 类型或改用 Object.fromEntries:

-                    supportModels: config.supportModels.reduce((acc, value) => {
-                        acc[value.model] = value
-                        return acc
-                    }, {}),
+                    supportModels: config.supportModels.reduce<
+                        Record<string, Config['supportModels'][number]>
+                    >((acc, value) => {
+                        acc[value.model] = value
+                        return acc
+                    }, {} as Record<string, Config['supportModels'][number]>),

或(可读性更好):

supportModels: Object.fromEntries(
  config.supportModels.map(v => [v.model, v])
) as Record<string, Config['supportModels'][number]>,

19-36: (可选)过滤空 API Key

减少无效客户端与错误日志噪声:

-        plugin.parseConfig((config) => {
-            return config.apiKeys.map(([apiKey, apiEndpoint]) => {
+        plugin.parseConfig((config) => {
+            return config.apiKeys
+                .filter(([apiKey]) => apiKey?.trim().length > 0)
+                .map(([apiKey, apiEndpoint]) => {
packages/deepseek-adapter/src/index.ts (2)

14-30: 确认 parseConfig 的同步性/幂等性,避免与 initClients 竞态

与其他适配器一致的改动,请按全局验证结论处理。


14-26: 过滤空 API Key

与 Schema 默认值配合,避免空 key 产生无效实例:

-        plugin.parseConfig((config) => {
-            return config.apiKeys.map(([apiKey, apiEndpoint]) => {
+        plugin.parseConfig((config) => {
+            return config.apiKeys
+                .filter(([apiKey]) => apiKey?.trim().length > 0)
+                .map(([apiKey, apiEndpoint]) => {
packages/dify-adapter/src/index.ts (1)

14-37: parseConfig 非阻塞→请确认是否会早于 initClients 完成

本适配器使用单一配置对象,同样需要确认 parseConfig 是否在 initClients 之前同步完成,避免后续客户端在 additionalModel 尚未构建时启动。

packages/openai-like-adapter/src/index.ts (1)

29-45: parseConfig 非阻塞→潜在竞态;另外建议过滤空 API Key

  • 若 parseConfig 仍包含异步流程,请恢复 await 或引入 ready。
  • 过滤空 key,避免初始化无效客户端:
-        plugin.parseConfig((config) => {
-            return config.apiKeys.map(([apiKey, apiEndpoint]) => {
+        plugin.parseConfig((config) => {
+            return config.apiKeys
+                .filter(([apiKey]) => apiKey?.trim().length > 0)
+                .map(([apiKey, apiEndpoint]) => {
packages/hunyuan-adapter/src/index.ts (1)

14-14: Prettier 风格告警

静态检查提示空白字符问题,请统一通过格式化工具修复(如 pnpm lint --fix)。

packages/core/src/llm-core/platform/types.ts (1)

20-43: JSDoc 文案小错误:语法应为 “is not passed to the function”

当前为 “is no passed…”。建议统一更正 4 处注释,避免歧义。

- * @deprecated This parameter is no passed to the function.
+ * @deprecated This parameter is not passed to the function.

同时已在说明中指向 StructuredTool._callparentConfig.configurable,表述方向正确,无需改动。

packages/core/src/services/chat.ts (4)

100-101: 日志占位符使用 %c 可能无效

Koishi Logger/Node.js 不支持浏览器样式占位符 %c。建议改为 %s 或模板字符串,避免日志噪音。

- this.ctx.logger.success(`Plugin %c was installed`, platformName)
+ this.ctx.logger.success('Plugin %s was installed', platformName)

594-605: _supportModels 的 watch 建议使用 immediate

当前不触发变更时 _supportModels 仍为空。用 immediate: true 可在初始化时填充,避免在首次读取前为空。

- ctx.effect(() =>
-   watch(
-     models,
-     () => {
-       this._supportModels = (models.value ?? []).map(
-         (model) => `${this.platformName}/${model.name}`
-       )
-     },
-     { deep: true }
-   )
- )
+ ctx.effect(() =>
+   watch(
+     models,
+     (val) => {
+       this._supportModels = (val ?? []).map(
+         (m) => `${this.platformName}/${m.name}`
+       )
+     },
+     { immediate: true }
+   )
+ )

649-658: 弃用方法应避免抛异常制造堆栈开销

registerToService 内通过抛异常再捕获只为日志提供堆栈,不必要。直接 logger.warn 即可。

- try {
-   throw new Error('Please remove this method')
- } catch (e) {
-   this.ctx.logger.warn(
-     `Now the plugin support auto installation, Please remove call this method`,
-     e
-   )
- }
+ this.ctx.logger.warn(
+   'Auto-install is supported. registerToService() is deprecated and can be removed.'
+ )

802-807: _platformToConversations 累积重复项,存在内存增长风险

多次对同一会话聊天会重复 push。建议改为 Set 或去重。

- const conversationIds = this._platformToConversations.get(platform) ?? []
- conversationIds.push(conversationId)
- this._platformToConversations.set(platform, conversationIds)
+ const set = this._platformToConversations.get(platform) ?? []
+ const next = new Set(set)
+ next.add(conversationId)
+ this._platformToConversations.set(platform, Array.from(next))
packages/image-renderer/src/index.ts (1)

58-65: 改用 plugin.registerRenderer 的迁移正确,但建议使用传入参数避免捕获旧配置

当前回调忽略传入的 (ctx, config),而是使用闭包捕获的外部变量,可能在热更新/重载后使用陈旧配置。

- plugin.registerRenderer('image', (_: Context) => {
-   return new ImageRenderer(ctx, config)
- })
+ plugin.registerRenderer('image', (ctx, cfg) => new ImageRenderer(ctx, cfg))
 
- plugin.registerRenderer('mixed-image', (_: Context) => {
-   return new MixedImageRenderer(ctx, config)
- })
+ plugin.registerRenderer('mixed-image', (ctx, cfg) => new MixedImageRenderer(ctx, cfg))
packages/long-memory/src/plugins/tool.ts (9)

13-17: apply 可去掉 async,避免不必要的 Promise 传播

本函数体内未使用 await。建议移除 async 以匹配本 PR“减少 async 传播性”的目标。

应用以下变更:

-export async function apply(
+export function apply(
   ctx: Context,
   config: Config,
   plugin: ChatLunaPlugin
 ) {

请确认调用方未依赖 Promise<void> 签名。若有,可保留 async 或在调用方去掉多余的 await


66-71: 构造器参数未使用:移除私有字段以减小对象尺寸

params 仅被保存但未使用。建议不存为字段,仅作为位置参数接收。

-    constructor(
-        private ctx: Context,
-        private params: CreateToolParams
-    ) {
+    constructor(
+        private ctx: Context,
+        _params: CreateToolParams
+    ) {
         super({})
     }

113-139: 与上方一致:memory_addlayer 也应可选且有默认值

保持一致性,避免 schema 与实现分叉。

-    layer: z
-        .array(
-            z.union([
-                z.literal('user'),
-                z.literal('preset_user'),
-                z.literal('preset'),
-                z.literal('global')
-            ])
-        )
-        .describe('The layer of the memory')
+    layer: z
+        .enum(['user', 'preset_user', 'preset', 'global'])
+        .array()
+        .default(['preset_user'])
+        .optional()
+        .describe('The layer(s) of the memory')

141-146: 构造器未使用的 params 字段:同样精简

MemorySearchTool 保持一致。

-    constructor(
-        private ctx: Context,
-        private params: CreateToolParams
-    ) {
+    constructor(
+        private ctx: Context,
+        _params: CreateToolParams
+    ) {
         super({})
     }

181-183: 在内部记录错误日志,但对外保持简洁消息

工具输出保留简洁文案即可,但建议补充内部日志便于排障。

-            return 'An error occurred while adding memories.'
+            this.ctx.logger('chatluna/long-memory').error(error, 'memory_add failed')
+            return 'An error occurred while adding memories.'

请确认 Koishi 版本下 ctx.logger(scope) 的使用方式是否一致;若不同,可改为在模块顶部创建 const logger = ctx.logger('chatluna/long-memory') 并注入/闭包使用。


208-223: 移除 as any 并统一 layer 的 schema 定义

as any 掩盖类型问题。可用 z.enum([...]).array().default(...).optional() 消除。

-    schema = z.object({
-        memoryIds: z
-            .array(z.string())
-            .describe('Array of memory IDs to delete'),
-        layer: z
-            .array(
-                z.union([
-                    z.literal('user'),
-                    z.literal('preset_user'),
-                    z.literal('preset'),
-                    z.literal('global')
-                ])
-            )
-            .describe('The layer of the memory')
-        // eslint-disable-next-line @typescript-eslint/no-explicit-any
-    }) as any
+    schema = z.object({
+        memoryIds: z.array(z.string()).describe('Array of memory IDs to delete'),
+        layer: z
+            .enum(['user', 'preset_user', 'preset', 'global'])
+            .array()
+            .default(['preset_user'])
+            .optional()
+            .describe('The layer(s) of the memory')
+    })

80-91: 去重建议:抽取层映射工具函数,统一默认与索引

三处出现相同的层字符串→枚举映射与默认处理逻辑。建议在本文件(或共享 util)中抽取,例如:

const toRetrievalLayers = (
  layers?: Array<'user'|'preset_user'|'preset'|'global'>
) => (layers?.map(l =>
  MemoryRetrievalLayerType[l.toUpperCase() as keyof typeof MemoryRetrievalLayerType]
) ?? [MemoryRetrievalLayerType.PRESET_USER]);

随后各处直接调用 toRetrievalLayers(input.layer),降低重复与出错面。

Also applies to: 168-179, 239-251


90-95: 错误处理:建议统一记录异常,返回稳定的用户文案

三处 catch 统一打印内部日志(含会话/工具名/层)并返回简短提示,便于排障与观测一致性。

如需,我可以补上结构化日志字段(conversationId、toolName、layers、error.name 等)并接入你们的日志采集。

Also applies to: 181-183, 252-254
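
一个统一错误上报辅助函数的最小示意(字段为假设;ctx.logger(scope) 的可用性请以所用 Koishi 版本为准):

import { Context } from 'koishi'

// 三处 catch 统一调用:内部记录结构化日志,对外返回稳定的简短文案
export function reportMemoryToolError(
    ctx: Context,
    info: { toolName: string; conversationId?: string; layers?: string[] },
    error: unknown
): string {
    ctx.logger('chatluna/long-memory').error(
        'tool %s failed (conversation=%s, layers=%s): %s',
        info.toolName,
        info.conversationId ?? 'unknown',
        (info.layers ?? []).join(',') || 'default',
        error instanceof Error ? error.message : String(error)
    )
    return `An error occurred while running ${info.toolName}.`
}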


168-179: 增强类型安全:对 MemoryRetrievalLayerType 索引做 keyof 断言,并可选将默认层改为数组

经核实,ChatLunaLongMemoryService.addMemories 的 types 参数接受 MemoryRetrievalLayerType | MemoryRetrievalLayerType[],传入单个枚举不是错误;但应增强索引的类型安全并(可选地)与其它调用保持一致。

建议修改(packages/long-memory/src/plugins/tool.ts — 参考行:86、175、246):

-                input.layer != null
-                    ? input.layer.map(
-                          (layer) =>
-                              MemoryRetrievalLayerType[layer.toUpperCase()]
-                      )
-                    : MemoryRetrievalLayerType.PRESET_USER
+                input.layer != null
+                    ? input.layer.map(
+                          (layer) =>
+                              MemoryRetrievalLayerType[
+                                  layer.toUpperCase() as keyof typeof MemoryRetrievalLayerType
+                              ]
+                      )
+                    : [MemoryRetrievalLayerType.PRESET_USER]
packages/plugin-common/src/plugins/draw.ts (1)

58-61: 占位符替换建议:用 replaceAll 以避免遗漏多处 {prompt}

若 drawCommand 中出现多个 {prompt},String.replace 只会替换首个。可改为 replaceAll。

-const elements = await session.execute(
-    this.drawCommand.replace('{prompt}', input),
-    true
-)
+const elements = await session.execute(
+    this.drawCommand.replaceAll('{prompt}', input),
+    true
+)
packages/plugin-common/src/plugins/command.ts (1)

33-41: randomUUID 来源不明确:建议显式引入 node:crypto,避免编译/类型不一致

部分环境未开启 DOM lib 时,直接使用全局 crypto 可能导致 TS 报错或运行时不一致。

-import { Context, h } from 'koishi'
+import { Context, h } from 'koishi'
+import { randomUUID } from 'node:crypto'
@@
-          normalizedName = crypto.randomUUID().substring(0, 16)
+          normalizedName = randomUUID().substring(0, 16)
@@
-            while (/^[0-9]/.test(normalizedName[0])) {
-                normalizedName = crypto.randomUUID().substring(0, 16)
-            }
+            while (/^[0-9]/.test(normalizedName[0])) {
+                normalizedName = randomUUID().substring(0, 16)
+            }
packages/plugin-common/src/plugins/cron.ts (1)

24-27: selector 直接访问 history[-1] 存在空数组风险

当 history 为空时会抛异常。建议先判空。

-    selector(history) {
-        return fuzzyQuery(
-            getMessageContent(history[history.length - 1].content),
-            [
+    selector(history) {
+        if (!history?.length) return false
+        return fuzzyQuery(
+            getMessageContent(history[history.length - 1].content),
+            [
packages/plugin-common/src/plugins/code_sandbox.ts (1)

114-131: 微调:result.text 判断重复且发送优先级可简化

存在两处 result.text。可去重并按优先级合并。

- result.markdown ||
-   result.javascript ||
-   result.json ||
-   result.html ||
-   result.latex ||
-   result.text ||
-   result.svg
+ result.markdown ??
+   result.javascript ??
+   result.json ??
+   result.html ??
+   result.latex ??
+   result.text ??
+   result.svg
packages/plugin-common/src/plugins/openapi.ts (3)

200-212: 死代码与主体处理不一致

const body = {} 从未被赋值,后续 else if (Object.keys(body).length > 0) 分支永远不会执行。应删除该分支,或补全 formData 等主体处理。

-        const body: any = {}
...
-        } else if (Object.keys(body).length > 0) {
-            init.body = JSON.stringify(body)
-            ;(init.headers as Headers).append(
-                'Content-Type',
-                'application/json'
-            )
-        }
+        }

103-106: typo:类名标识

lc_name() 返回 'OpenAPIluginTool' 少了 P,会影响 LangChain 序列化标识。

-    static lc_name() {
-        return 'OpenAPIluginTool'
-    }
+    static lc_name() {
+        return 'OpenAPIPluginTool'
+    }

399-410: 返回类型与实现不一致

parseSpec 声明返回 OpenAPIV3.Document,但出错时返回 undefined。请调整返回类型,并避免直接向控制台打印错误(统一使用 logger)。

-function parseSpec(content: string): OpenAPIV3.Document {
+function parseSpec(content: string): OpenAPIV3.Document | undefined {
     try {
         if (content.trim().startsWith('{')) {
             return JSON.parse(content) as OpenAPIV3.Document
         } else {
             return YAML.load(content) as OpenAPIV3.Document
         }
     } catch (error) {
-        console.error('Error parsing the OpenAPI spec:', error)
+        logger?.warn?.('Error parsing the OpenAPI spec:', error)
         return undefined
     }
 }
packages/plugin-common/src/plugins/knowledge.ts (1)

122-125: 返回类型与实现不一致(可能返回 null)

createSearchChain 声明 Promise<ReturnType<Chain>>,但可能返回 null。应放宽类型或返回 no-op 链。

-): Promise<ReturnType<Chain>> {
+): Promise<ReturnType<Chain> | null> {

或返回 (async () => []) as ReturnType<Chain> 之类的空链以避免上游判空分支扩散。

packages/plugin-common/src/plugins/request.ts (2)

82-88: 模式校验可更严格:使用 z.string().url()

当前仅 string() 未校验 URL 格式。建议使用 z.string().url()

-        url: z
-            .string()
+        url: z
+            .string()
+            .url()

两处同改。

Also applies to: 124-135


137-139: 避免默认 Infinity 输出上限

默认 Infinity 可能导致内存压力;建议与 GET 对齐的有限默认值(如 30000)。

-    maxOutputLength = Infinity
+    maxOutputLength = 30000
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fc60a25 and fcfabf6.

⛔ Files ignored due to path filters (26)
  • packages/azure-openai-adapter/package.json is excluded by !**/*.json
  • packages/claude-adapter/package.json is excluded by !**/*.json
  • packages/core/package.json is excluded by !**/*.json
  • packages/deepseek-adapter/package.json is excluded by !**/*.json
  • packages/dify-adapter/package.json is excluded by !**/*.json
  • packages/doubao-adapter/package.json is excluded by !**/*.json
  • packages/embeddings-service/package.json is excluded by !**/*.json
  • packages/gemini-adapter/package.json is excluded by !**/*.json
  • packages/hunyuan-adapter/package.json is excluded by !**/*.json
  • packages/image-renderer/package.json is excluded by !**/*.json
  • packages/image-service/package.json is excluded by !**/*.json
  • packages/long-memory/package.json is excluded by !**/*.json
  • packages/mcp-client/package.json is excluded by !**/*.json
  • packages/ollama-adapter/package.json is excluded by !**/*.json
  • packages/openai-adapter/package.json is excluded by !**/*.json
  • packages/openai-like-adapter/package.json is excluded by !**/*.json
  • packages/plugin-common/package.json is excluded by !**/*.json
  • packages/qwen-adapter/package.json is excluded by !**/*.json
  • packages/rwkv-adapter/package.json is excluded by !**/*.json
  • packages/search-service/package.json is excluded by !**/*.json
  • packages/shared/package.json is excluded by !**/*.json
  • packages/spark-adapter/package.json is excluded by !**/*.json
  • packages/variable-extension/package.json is excluded by !**/*.json
  • packages/vector-store-service/package.json is excluded by !**/*.json
  • packages/wenxin-adapter/package.json is excluded by !**/*.json
  • packages/zhipu-adapter/package.json is excluded by !**/*.json
📒 Files selected for processing (41)
  • packages/azure-openai-adapter/src/index.ts (1 hunks)
  • packages/claude-adapter/src/index.ts (1 hunks)
  • packages/core/src/llm-core/platform/config.ts (1 hunks)
  • packages/core/src/llm-core/platform/service.ts (6 hunks)
  • packages/core/src/llm-core/platform/types.ts (2 hunks)
  • packages/core/src/services/chat.ts (12 hunks)
  • packages/deepseek-adapter/src/index.ts (1 hunks)
  • packages/dify-adapter/src/index.ts (1 hunks)
  • packages/doubao-adapter/src/index.ts (1 hunks)
  • packages/embeddings-service/src/index.ts (0 hunks)
  • packages/gemini-adapter/src/index.ts (1 hunks)
  • packages/hunyuan-adapter/src/index.ts (1 hunks)
  • packages/image-renderer/src/index.ts (2 hunks)
  • packages/image-service/src/index.ts (5 hunks)
  • packages/long-memory/src/index.ts (0 hunks)
  • packages/long-memory/src/plugins/tool.ts (3 hunks)
  • packages/mcp-client/src/service.ts (1 hunks)
  • packages/ollama-adapter/src/index.ts (1 hunks)
  • packages/openai-adapter/src/index.ts (1 hunks)
  • packages/openai-like-adapter/src/index.ts (1 hunks)
  • packages/plugin-common/src/index.ts (0 hunks)
  • packages/plugin-common/src/plugins/code_sandbox.ts (1 hunks)
  • packages/plugin-common/src/plugins/command.ts (1 hunks)
  • packages/plugin-common/src/plugins/cron.ts (1 hunks)
  • packages/plugin-common/src/plugins/draw.ts (1 hunks)
  • packages/plugin-common/src/plugins/fs.ts (1 hunks)
  • packages/plugin-common/src/plugins/group.ts (1 hunks)
  • packages/plugin-common/src/plugins/knowledge.ts (2 hunks)
  • packages/plugin-common/src/plugins/music.ts (1 hunks)
  • packages/plugin-common/src/plugins/openapi.ts (1 hunks)
  • packages/plugin-common/src/plugins/request.ts (1 hunks)
  • packages/plugin-common/src/plugins/think.ts (4 hunks)
  • packages/plugin-common/src/plugins/todos.ts (1 hunks)
  • packages/qwen-adapter/src/index.ts (1 hunks)
  • packages/rwkv-adapter/src/index.ts (1 hunks)
  • packages/search-service/src/index.ts (9 hunks)
  • packages/spark-adapter/src/index.ts (1 hunks)
  • packages/variable-extension/src/index.ts (1 hunks)
  • packages/vector-store-service/src/index.ts (0 hunks)
  • packages/wenxin-adapter/src/index.ts (1 hunks)
  • packages/zhipu-adapter/src/index.ts (1 hunks)
💤 Files with no reviewable changes (4)
  • packages/long-memory/src/index.ts
  • packages/plugin-common/src/index.ts
  • packages/embeddings-service/src/index.ts
  • packages/vector-store-service/src/index.ts
🚧 Files skipped from review as they are similar to previous changes (3)
  • packages/search-service/src/index.ts
  • packages/core/src/llm-core/platform/service.ts
  • packages/image-service/src/index.ts
🧰 Additional context used
🧬 Code graph analysis (6)
packages/plugin-common/src/plugins/fs.ts (1)
packages/plugin-common/src/plugin.ts (1)
  • plugin (18-49)
packages/core/src/llm-core/platform/config.ts (1)
packages/core/src/llm-core/platform/client.ts (1)
  • config (72-74)
packages/plugin-common/src/plugins/request.ts (1)
packages/plugin-common/src/plugin.ts (1)
  • plugin (18-49)
packages/variable-extension/src/index.ts (3)
packages/variable-extension/src/plugin.ts (1)
  • plugin (8-26)
packages/core/src/services/chat.ts (1)
  • ChatLunaPlugin (554-741)
packages/core/src/utils/logger.ts (1)
  • createLogger (7-17)
packages/image-renderer/src/index.ts (2)
packages/image-renderer/src/renders/image.ts (1)
  • ImageRenderer (31-197)
packages/image-renderer/src/renders/mixed-image.ts (1)
  • MixedImageRenderer (33-301)
packages/core/src/services/chat.ts (3)
packages/core/src/services/types.ts (1)
  • Context (21-23)
packages/core/src/llm-core/model/in_memory.ts (1)
  • emptyEmbeddings (40-40)
packages/core/src/llm-core/platform/client.ts (1)
  • config (72-74)
🪛 GitHub Check: CodeFactor
packages/hunyuan-adapter/src/index.ts

[warning] 14-14: packages/hunyuan-adapter/src/index.ts#L14
Insert · (prettier/prettier)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: lint
  • GitHub Check: build
  • GitHub Check: lint
  • GitHub Check: build

- Add gh pr edit permission to Claude auto-approve list
- Fix indentation in hunyuan-adapter parseConfig call
… management

- Wrap all watch calls in modelSchema, embeddingsSchema, chatChainSchema, and vectorStoreSchema with ctx.effect
- Ensures proper cleanup and lifecycle management for reactive watchers
- Aligns with Vue reactivity integration pattern across the codebase
- Add method overloads for createChatModel and createEmbeddings to support both separate and combined parameter formats
- Remove deprecated randomChatModel and randomEmbeddings methods
- Improve logging messages in auto delete task with clearer formatting
- Add proper TypeScript types (ComputedRef, Embeddings) for better type safety
- Remove push trigger from GitHub Actions build workflow to reduce unnecessary builds

Breaking Changes:
- Removed randomChatModel() and randomEmbeddings() methods
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
packages/core/src/services/chat.ts (1)

701-709: registerChatChainProvider 签名已同步化

🧹 Nitpick comments (1)
packages/core/src/services/chat.ts (1)

671-680: registerToService 已弃用并给出明确警告

通过警告提示开发者移除此方法的调用,因为插件现在支持自动安装。

是否需要我帮助创建迁移指南,说明如何从旧的 registerToService 模式迁移到新的自动安装模式?
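
迁移方向可参考下面的最小示意(ChatLunaPluginLike 为假设的接口,仅示意调用关系,具体方法签名以 core 包实际导出为准):

interface ChatLunaPluginLike<C, P> {
    /** @deprecated 插件已支持自动安装/卸载,调用处可直接删除 */
    registerToService(): void
    parseConfig(parser: (config: C) => P[]): void
    initClients(): Promise<void>
}

// 旧写法:手动注册到服务
export async function setupLegacy<C, P>(
    plugin: ChatLunaPluginLike<C, P>,
    parse: (config: C) => P[]
) {
    plugin.registerToService()
    plugin.parseConfig(parse)
    await plugin.initClients()
}

// 新写法:去掉 registerToService,安装/卸载由 ready / dispose 钩子自动完成
export async function setupModern<C, P>(
    plugin: ChatLunaPluginLike<C, P>,
    parse: (config: C) => P[]
) {
    plugin.parseConfig(parse)
    await plugin.initClients()
}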

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7aae96a and 2d62d1b.

⛔ Files ignored due to path filters (1)
  • .github/workflows/build.yml is excluded by !**/*.yml
📒 Files selected for processing (2)
  • packages/core/src/index.ts (3 hunks)
  • packages/core/src/services/chat.ts (11 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/core/src/index.ts
🧰 Additional context used
🧬 Code graph analysis (1)
packages/core/src/services/chat.ts (3)
packages/core/src/llm-core/utils/count_tokens.ts (1)
  • parseRawModelName (194-199)
packages/core/src/llm-core/model/in_memory.ts (1)
  • emptyEmbeddings (40-40)
packages/core/src/llm-core/platform/client.ts (1)
  • config (72-74)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: build
  • GitHub Check: lint
🔇 Additional comments (12)
packages/core/src/services/chat.ts (12)

89-102: 安装方法命名已更新为 installPlugin

插件生命周期方法从 registerToService 改为 installPlugin/uninstallPlugin,更清晰地表达了插件的安装/卸载语义。


113-121: 响应式模型加载检测实现正确

使用 ComputedRef 包装的 models 值并通过 .value 访问,符合 Vue 响应式系统的使用模式。


138-147: 使用 effect 和 watch 监听模型变化

正确使用了 ctx[Context.origin].effect() 来注册 watch,确保在上下文销毁时自动清理监听器。


243-274: createChatModel 返回 ComputedRef 包装的模型

将返回类型改为 ComputedRef<ChatLunaChatModel | undefined> 符合响应式架构设计,能够自动响应平台客户端的变化。
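
消费侧的用法可参考下面的最小示意(ChatModelLike 为假设的接口,仅示意 .value 的读取与降级):

import { type ComputedRef } from '@vue/reactivity'

interface ChatModelLike {
    invoke(input: string): Promise<string>
}

// 每次使用前读取 .value;平台客户端变化时无需手动刷新
export async function ask(
    model: ComputedRef<ChatModelLike | undefined>,
    question: string
): Promise<string> {
    const current = model.value
    if (!current) {
        // 平台尚未加载或已被卸载:按上文约定优雅降级,而不是抛出异常
        return 'model is not available yet'
    }
    return current.invoke(question)
}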


276-313: createEmbeddings 优雅降级处理

当平台不可用或模型不是 embeddings 类型时,返回 emptyEmbeddings 作为降级方案,避免了抛出异常导致的中断。


345-346: 停止时调用 uninstallPlugin 进行清理

在服务停止时正确调用 uninstallPlugin 而不是 dispose,与新的插件生命周期管理保持一致。


593-598: 插件生命周期钩子设置正确

dispose 时自动卸载插件,在 ready 时自动安装插件,实现了插件的自动生命周期管理。


616-626: 使用 watch 监听模型列表变化

通过 ctx.effect() 注册 watch 来监听模型列表变化,自动更新 _supportModels,避免了手动管理监听器的复杂性。


629-635: parseConfig 改为同步方法

parseConfig 从异步改为同步符合响应式系统的设计,减少了不必要的异步传播。


686-699: 使用 effect 注册客户端、向量存储和工具

所有注册方法都使用 ctx.effect() 包装,确保在上下文销毁时自动清理注册的资源。


711-718: 新增 registerRenderer 方法

新增的渲染器注册方法使用 ctx.effect() 确保自动清理,符合统一的生命周期管理模式。


801-802: 响应式访问客户端配置

通过 client.value.configPool.getConfig(true).value 链式访问响应式值,正确处理了多层 ComputedRef 包装。

- Change "plugin mode" to "agent mode" terminology across all README files
- Add DeepWiki badge and link to core package README
- Update adapter list with new Doubao adapter and remove deprecated adapters
- Remove GPT Free and ChatGLM adapters from supported list
- Add completed roadmap items (image multi-modal support, MCP protocol client)
- Improve Star History Chart with theme support for dark/light modes
- Update documentation links and badges for better user experience
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (7)
README_EN.MD (2)

21-21: Terminology consistency: replace the remaining "Plugin" wording with "Agent"

To avoid confusing readers, change the "Plugin" wording still shown in the README to "Agent" (two examples):

  • Line 21 table header: "Plugin Mode & Streaming Output"
  • Line 58, OpenAI row: "plugin/browsing modes"

It can be fixed as follows:

- | Preset | Plugin Mode & Streaming Output | Image Rendering  |
+ | Preset | Agent Mode & Streaming Output | Image Rendering  |
-| [OpenAI](./packages/openai-adapter/README.md) | Official API | Customizable, plugin/browsing modes | Paid API |
+| [OpenAI](./packages/openai-adapter/README.md) | Official API | Customizable, agent/browsing modes | Paid API |

Please check whether the other documents (README.MD, README_JP.MD, the site docs, and screenshot captions) also need to be switched to "Agent". If so, I can provide a batch list of suggested changes.

Also applies to: 58-58


15-15: Check the version and status wording

This line shows "1.0 Official Release (preparing for v2)", while this PR is tagged 1.3.0-alpha.41. Consider syncing the English README with the current mainline status or adding a short note (for example: v1 is in stable maintenance, v2 is under development).

packages/core/README.MD (5)

13-13: Community and documentation badges work overall; two small optimizations

  • The QQ group link uses http; if an https link is officially supported, switch to it to avoid mixed-content warnings.
  • The "See-Document" badge text reads awkwardly; the more common "Docs" is preferable. If you like, apply the diff below.
-[![doc](https://img.shields.io/badge/See-Document-green)](https://chatluna.chat/)
+[![doc](https://img.shields.io/badge/Docs-website-green)](https://chatluna.chat/)

32-32: Terminology consistency: align "agent" with the "Plugin Mode" wording above

The screenshot table header still says "Plugin Mode & Streaming Output". After changing this line to "agent", pick one term (Agent mode or Plugin mode) and use it consistently everywhere, so users are not misled into thinking these are different modes.


46-47: Roadmap link path error and discoverability

  • The MCP doc's relative path is invalid from this file's location (packages/core/); it should point to the sibling package one level up.
  • Consider adding an official doc/guide link for "image multi-modal input support" as well, to make it easier to find.

Apply this fix:

-- [x] [MCP Protocol Client Support](./packages/mcp-client/README.md)
+- [x] [MCP Protocol Client Support](../mcp-client/README.md)

Want me to add the "multi-modal input" documentation link while we're at it? I can provide a direct link to image-service or the guide page.


64-64: The doc link for the new Doubao model uses an invalid relative path (other rows in the same table have the same problem)

This file lives in packages/core/, so the ./packages/... prefix used in the table resolves to packages/core/packages/... and returns 404. It should use the sibling-package relative path ../xxx. Fix this row first, then replace the prefix across the whole table.

-| [Doubao](./packages/doubao-adapter/README.md) | Local Client, Official API Access | Model developed by ByteDance, offers free quota | Performance slightly better than Zhipu in tests |
+| [Doubao](../doubao-adapter/README.md) | Local Client, Official API Access | Model developed by ByteDance, offers free quota | Performance slightly better than Zhipu in tests |

If needed, I can submit a script that checks and fixes the relative path prefixes in the README in bulk.


143-149: Lightweight optimization for the Star History snippet

GitHub renders <picture> fine; to reduce first-paint jitter, add loading="lazy" (and optionally fixed dimensions) to the <img>. Example:

-   <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=ChatLunaLab/chatluna&type=Date" />
+   <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=ChatLunaLab/chatluna&type=Date" loading="lazy" />

Also, the page already includes a Repobeats chart; if the number of third-party embeds is a concern, consider merging the two or moving this section lower to reduce visual noise.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2d62d1b and 740e29b.

📒 Files selected for processing (3)
  • README.MD (1 hunks)
  • README_EN.MD (1 hunks)
  • packages/core/README.MD (5 hunks)
✅ Files skipped from review due to trivial changes (1)
  • README.MD
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: lint
  • GitHub Check: build
🔇 Additional comments (1)
README_EN.MD (1)

32-32: LGTM: the plugin → agent wording update is correct

Consistent with this PR's agent/reactive-refactor direction.

- Update screenshot table headers to use "Agent Mode" instead of "Plugin Mode" in both Chinese and English README files
- Standardize debug logging messages with proper capitalization:
  - "original content" → "Original content"
  - "call tool" → "Call tool"
  - "current balance" → "Current balance"
- Ensure consistent terminology across documentation and code
- Remove async keyword from setupServices, setupPermissions, and setupEntryPoint functions
- This change improves performance as these functions don't need to return promises
- Fix tool comparison logic in plugin chat chain to use tool IDs instead of object references
- Add unique ID and name assignment to tools during registration using randomUUID
- Improve executor recreation detection with better tool difference calculation
- Add debug logging for executor recreation with tool names and IDs
- Extend ChatLunaTool interface with optional name and id properties
- Fix tool filtering logic to properly handle active/inactive state changes
- Remove unnecessary Promise.all wrapper for tool creation in executor
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/core/src/llm-core/chain/plugin_chat_chain.ts (1)

167-210: Critical: the tool-difference calculation has variable shadowing and a wrong return value, causing runtime errors and incorrect tool selection

Problems:

  • Variable shadowing: oldActiveTools.find((tool) => tool.id === tool[0].id) uses the name tool in both the outer and inner scope, so the inner tool[0] evaluates to undefined at runtime and accessing .id on it throws.
  • Logic error: when differenceTools.length < 1, the function returns toolsRef (all tools) instead of the current active subset, incorrectly enabling every tool.

Suggestion: compute the next active set directly and decide whether to rebuild based on set equality:

-    private _getActiveTools(
-        session: Session,
-        messages: BaseMessage[]
-    ): [ChatLunaTool[], boolean] {
-        const oldActiveTools: ChatLunaTool[] = this.activeTools
-        const toolsRef = this.tools.value
-        const newActiveTools: [ChatLunaTool, boolean][] = toolsRef.map(
-            (tool) => {
-                const base = tool.selector(messages)
-                if (tool.authorization) {
-                    return [tool, tool.authorization(session) && base]
-                }
-                return [tool, base]
-            }
-        )
-        const differenceTools = newActiveTools.filter((tool) => {
-            const include = oldActiveTools.find(
-                (tool) => tool.id === tool[0].id
-            )
-            return !include || (include && tool[1] === false)
-        })
-        if (differenceTools.length < 1) {
-            return [toolsRef, oldActiveTools.length === toolsRef.length]
-        }
-        for (const differenceTool of differenceTools) {
-            if (differenceTool[1] === true) {
-                oldActiveTools.push(differenceTool[0])
-                continue
-            }
-            const index = oldActiveTools.findIndex(
-                (tool) => tool === differenceTool[0]
-            )
-            if (index > -1) {
-                oldActiveTools.splice(index, 1)
-            }
-        }
-        return [oldActiveTools, true]
-    }
+    private _getActiveTools(
+        session: Session,
+        messages: BaseMessage[]
+    ): [ChatLunaTool[], boolean] {
+        const nextActive = this.tools.value.filter((tool) => {
+            const base = tool.selector(messages)
+            const auth = tool.authorization ? tool.authorization(session) : true
+            return base && auth
+        })
+
+        const prev = this.activeTools
+        const prevIds = new Set(prev.map((t) => t.id))
+        const nextIds = new Set(nextActive.map((t) => t.id))
+
+        const sameSize = prevIds.size === nextIds.size
+        const sameSet = sameSize && [...prevIds].every((id) => nextIds.has(id))
+
+        if (sameSet) {
+            return [prev, false]
+        }
+
+        this.activeTools = nextActive
+        return [this.activeTools, true]
+    }
🧹 Nitpick comments (6)
packages/core/src/index.ts (4)

25-25: Avoid a star re-export of the whole @vue/reactivity package; shrink the public API surface

Cross-package reuse is fine, but a star export amplifies coupling and the risk of duplicate instances (harder to debug if a dependant pulls in a different version directly or transitively). Prefer exporting only the symbols you need, or provide a thin wrapper.

Optional change, for example:

-export * from '@vue/reactivity'
+export {
+  computed,
+  reactive,
+  ref,
+  watch,
+  type ComputedRef,
+  type Ref
+} from '@vue/reactivity'

71-101: Timing of dynamically installing the entry_point plugin inside ready

Per Koishi semantics, if a plugin is installed while the app is already in the ready state, its ready callback should fire immediately, so the logic works. But if the goal is only to isolate scope/dependencies, you could call initializeComponents() directly here and drop one level of indirection (depending on whether you want the plugin scope's automatic cleanup).

Consider adding a log line to confirm that this ready callback really fires exactly once at runtime (helps with on-site troubleshooting).


212-229: Deleted-count statistic does not match the message

logger.success uses rooms.length, but the actual successes are success.length. When some deletions fail, the success count is over-reported.

Suggested fix:

-        logger.success(
-            `Successfully deleted %d rooms: %s`,
-            rooms.length,
-            success.map((room) => room.roomName).join(',')
-        )
+        logger.success(
+            `Successfully deleted %d rooms: %s`,
+            success.length,
+            success.map((room) => room.roomName).join(',') || 'N/A'
+        )

58-63: Synchronous setup calls no longer awaited in the ready callback: drop async (verified)

In packages/core/src/index.ts, the ctx.on('ready', async () => { ... }) callback (around lines 58–63) calls setupProxy, setupServices, setupPermissions, and setupEntryPoint, all of which are plain functions (defined around lines 71 / 157 / 168); change the callback to a synchronous (() => { ... }). The ready callback inside entryPointPlugin that does use await should keep async.

packages/core/src/llm-core/chat/app.ts (2)

231-241: ComputedRef-based chain creation logic is clear

Unwrapping llm/embeddings/modelInfo via .value and centralizing them in createChain() reads well.

You could additionally watch(modelInfo, ...) so the chain is rebuilt when model capabilities change, rather than only when the LLM or embeddings change.


253-256: Assigning _embeddings may hit an undefined type mismatch

newValue is typed Embeddings | undefined while the field is Embeddings. It is unlikely to be undefined at runtime, but at the type level it is safer to fall back to emptyEmbeddings:

Suggestion:

-        watch(embeddings, (newValue: Embeddings | undefined) => {
-            this._embeddings = newValue
-            this._chain = createChain()
-        })
+        watch(embeddings, (newValue: Embeddings | undefined) => {
+            this._embeddings = newValue ?? emptyEmbeddings
+            this._chain = createChain()
+        })
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 740e29b and 02948cf.

📒 Files selected for processing (8)
  • README.MD (2 hunks)
  • README_EN.MD (2 hunks)
  • packages/core/src/index.ts (7 hunks)
  • packages/core/src/llm-core/chain/plugin_chat_chain.ts (7 hunks)
  • packages/core/src/llm-core/chat/app.ts (6 hunks)
  • packages/core/src/llm-core/platform/service.ts (7 hunks)
  • packages/core/src/llm-core/platform/types.ts (2 hunks)
  • packages/core/src/middlewares/model/request_model.ts (2 hunks)
✅ Files skipped from review due to trivial changes (3)
  • packages/core/src/middlewares/model/request_model.ts
  • README.MD
  • README_EN.MD
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/core/src/llm-core/platform/types.ts
  • packages/core/src/llm-core/platform/service.ts
🧰 Additional context used
🧬 Code graph analysis (3)
packages/core/src/index.ts (2)
packages/core/src/services/types.ts (1)
  • Context (21-23)
packages/core/src/utils/types.ts (1)
  • PromiseLikeDisposable (3-3)
packages/core/src/llm-core/chat/app.ts (3)
packages/core/src/llm-core/platform/types.ts (1)
  • ModelInfo (81-86)
packages/core/src/llm-core/utils/count_tokens.ts (1)
  • parseRawModelName (194-199)
packages/core/src/llm-core/model/in_memory.ts (1)
  • emptyEmbeddings (40-40)
packages/core/src/llm-core/chain/plugin_chat_chain.ts (2)
packages/core/src/llm-core/platform/types.ts (1)
  • ChatLunaTool (61-67)
packages/core/src/index.ts (1)
  • logger (38-38)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: lint
  • GitHub Check: build
🔇 Additional comments (7)
packages/core/src/index.ts (2)

157-166: Service installation and uninstallability handled well

Pushing forkScopeToDisposable(ctx.plugin(...)) into disposables, combined with the outer dispose hook, satisfies the reversible/uninstallable requirement.


168-191: Cleanable wrapping of permission definitions/providers is OK

Wrapping the returned cleanup functions and pushing them into disposables lets the outer layer reclaim them uniformly; reasonable approach.

packages/core/src/llm-core/chat/app.ts (2)

162-162: Minor log wording change, no functional impact

Capitalizing the first letter is more consistent; keep it.


245-251: Rebuild-on-LLM-change logic avoids the "clear then immediately rebuild" problem

Returning early when newValue == null is the correct behavior.

packages/core/src/llm-core/chain/plugin_chat_chain.ts (3)

58-58: Public property tools changed to a ComputedRef

Consistent with the overall reactive migration; a reasonable change.

Please confirm all call sites have been switched from an array to ComputedRef<ChatLunaTool[]> (especially the factories and adapters), as in the sketch below.
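
An illustrative call-site migration implied by this change (names are assumed): before the refactor a consumer read chain.tools directly as an array; now it reads chain.tools.value:

import { computed, ref, type ComputedRef } from '@vue/reactivity'

interface ChatLunaTool { name: string }

const registered = ref<ChatLunaTool[]>([{ name: 'web-search' }])
const tools: ComputedRef<ChatLunaTool[]> = computed(() => registered.value)

// before: tools.map((t) => t.name)
// after:
console.log(tools.value.map((t) => t.name)) // ['web-search']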


94-105: fromLLMAndTools now returns synchronously

Backward compatible (awaiting a non-Promise value still works), and the signature now matches the actual implementation.
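
A tiny demonstration of the backward-compatibility claim: await on a non-Promise value simply resolves to the value, so callers that still await the factory keep working.

async function demo() {
    const chain = { name: 'plugin-chat-chain' } // stand-in for the chain instance
    const awaited = await chain                 // awaiting a plain value is a no-op
    console.log(awaited === chain)              // true
}
void demo()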


249-254: The debug log when rebuilding the executor is useful

Including tool names and ids makes it easy to trace tool-selection changes.

- Remove redundant synchronous error check in createChatModel method
- Delegate error handling to computed reactive wrapper for better consistency
- Simplify logging messages for plugin installation/uninstallation
- Improve reactive pattern by handling null client values in computed function
- Align error handling with Vue reactivity best practices
- Fix context reference from this.ctx[Context.origin] to this[Context.origin]
- Ensure proper scope binding for effect lifecycle management in awaitLoadPlatform
- Maintain consistency with Vue reactivity integration pattern
… management

- Convert summaryModel from direct ChatLunaChatModel to Ref<ChatLunaChatModel> in browsing chain
- Update PuppeteerBrowserTool to accept ComputedRef<ChatLunaChatModel> for reactive model handling
- Replace manual watch-based plugin restart with computed reactive summary model fallback
- Improve model access patterns with null-safe value extraction (summaryModel.value ?? fallback)
- Remove unnecessary watch setup and manual plugin restarts for better reactivity integration
- Standardize debug logging message capitalization for consistency
- Simplify plugin initialization by moving it inside ready event handler
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/search-service/src/tools/puppeteerBrowserTool.ts (1)

726-732: Structured text is flattened to single spaces, destroying paragraph/table structure

.replace(/\s+/g, ' ') removes all line breaks, which badly degrades the summaries. Keep the line breaks and only normalize runs of spaces/tabs.

-            return text
-                .trim()
-                .replace(/\n{3,}/g, '\n\n')
-                .trim()
-                .replace(/\s+/g, ' ')
+            return text
+                .trim()
+                .replace(/\n{3,}/g, '\n\n')
+                // 保留换行,仅压缩连续的空格/制表符
+                .replace(/[ \t]{2,}/g, ' ')
♻️ Duplicate comments (1)
packages/search-service/src/index.ts (1)

102-104: keywordExtractModel.value dereferenced without a null check (flagged previously)

-                const summaryModel = computed(
-                    () => keywordExtractModel.value ?? params.model
-                )
+                const summaryModel = computed(
+                    () => keywordExtractModel?.value ?? params.model
+                )
🧹 Nitpick comments (9)
packages/search-service/src/tools/puppeteerBrowserTool.ts (5)

115-118: The summarize branch dereferences config without a null check, which can throw at runtime

When the caller passes no config, or no configurable.model, this line throws. Use optional chaining.

-                        this.model?.value ?? config.configurable.model,
+                        this.model?.value ?? config?.configurable?.model,

150-157: Error handling uses error.message directly, which breaks for non-Error values, and logging is inconsistent

Centralize the error-stringification logic and log through ctx.logger.

Example change (apply the same to the other catch blocks):

@@
-        } catch (error) {
-            console.error(error)
-            return `Error opening page: ${error.message}`
+        } catch (error) {
+            this.ctx.logger?.error(error)
+            const msg = error instanceof Error ? error.message : String(error)
+            return `Error opening page: ${msg}`
         }

Also applies to: 171-174, 733-735, 795-797, 810-812, 825-827, 837-838, 884-886


41-43: idleTimeout comment and value disagree (180000 is actually 3 minutes)

Either fix the comment or change the default to 300000 to keep it at 5 minutes.

-    private readonly idleTimeout: number = 180000 // 5 minutes idle timeout
+    private readonly idleTimeout: number = 300000 // 5 minutes idle timeout

46-47: waitUntil has no initial value; provide a safe default

To reduce inconsistent behavior, default to 'domcontentloaded'.

-    private waitUntil: PuppeteerLifeCycleEvent
+    private waitUntil: PuppeteerLifeCycleEvent = 'domcontentloaded'

144-145: No-op get call

On a cache hit this branch only calls get and discards the result, which is effectively a no-op.

-        } else {
-            this.pages.get(url)
-        }
+        }
packages/search-service/src/chain/browsing_chain.ts (4)

199-204: tools.value.find may return undefined; the subsequent dereference then throws

This happens when the tool is unregistered or filtered out. Either raise an explicit error or degrade gracefully.

-        const chatLunaTool = this.tools.value.find((tool) => tool.name === name)
-
-        return chatLunaTool.tool.createTool({
+        const chatLunaTool = this.tools.value.find((tool) => tool.name === name)
+        if (!chatLunaTool) {
+            throw new ChatLunaError(
+                ChatLunaErrorCode.UNKNOWN_ERROR,
+                new Error(`Tool "${name}" not found`)
+            )
+        }
+        return chatLunaTool.tool.createTool({
             embeddings: this.embeddings
         })

166-176: The sub-chains use a "snapshot" of summaryModel.value and will not hot-update when the model changes

If the summary model is switched at runtime, formatQuestionChain/contextualCompressionChain will not notice. Confirm whether this is by design; otherwise watch summaryModel and rebuild the sub-chains, or pick the llm dynamically at call time.

Also applies to: 170-176


495-498: Grammatical errors in a user-visible English prompt

Rewrite it as natural English.

-                new AIMessage(
-                    "OK. I understand. I will respond to the your's question using the same language as your input. What's the your's question?"
-                )
+                new AIMessage(
+                    "OK, I understand. I will respond to your question using the same language as your input. What's your question?"
+                )

280-281: The %c placeholder has no effect in a Node environment

Remove it or switch to an ordinary placeholder.

-        logger?.debug(`final response %c`, finalResponse.text)
+        logger?.debug(`final response: ${finalResponse.text}`)
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 02948cf and 637461b.

📒 Files selected for processing (4)
  • packages/core/src/services/chat.ts (11 hunks)
  • packages/search-service/src/chain/browsing_chain.ts (7 hunks)
  • packages/search-service/src/index.ts (8 hunks)
  • packages/search-service/src/tools/puppeteerBrowserTool.ts (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/core/src/services/chat.ts
🧰 Additional context used
🧬 Code graph analysis (3)
packages/search-service/src/tools/puppeteerBrowserTool.ts (1)
packages/core/src/llm-core/platform/model.ts (1)
  • ChatLunaChatModel (96-572)
packages/search-service/src/chain/browsing_chain.ts (2)
packages/core/src/llm-core/platform/model.ts (1)
  • ChatLunaChatModel (96-572)
packages/core/src/llm-core/chain/base.ts (1)
  • ChatLunaLLMChain (271-359)
packages/search-service/src/index.ts (3)
packages/core/src/utils/logger.ts (1)
  • createLogger (7-17)
packages/search-service/src/tools/puppeteerBrowserTool.ts (1)
  • PuppeteerBrowserTool (21-925)
packages/core/src/llm-core/utils/count_tokens.ts (1)
  • parseRawModelName (194-199)
🔇 Additional comments (1)
packages/search-service/src/index.ts (1)

65-67: The model passed to SearchTool may be undefined; confirm whether it falls back to the conversation model internally

If there is no fallback, pass params.model as the safety net.

-                    keywordExtractModel?.value,
+                    keywordExtractModel?.value ?? params.model,

- Fix variable naming conflict in plugin chat chain tool filtering (tool -> newTool/oldTool)
- Add proper null safety to PuppeteerBrowserTool model parameter (ChatLunaChatModel | undefined)
- Create dedicated browserModelRef computed for better type isolation in search service
- Add time unit 's' to delay timeout logging messages for clarity
- Ensure consistent type handling across reactive model references
Enhance client initialization and model availability checking by adding RunnableConfig support and implementing reactive lifecycle management.

- Add RunnableConfig parameter to client initialization and model operations
- Implement abort controller for cancelling operations when context is disposed
- Add room model availability display in both English and Chinese locales
- Improve error handling and logging in platform service
- Fix context effect lifecycle management in chat service
- Replace broken syntax with proper scope checks in client error handling

This improves the user experience by showing real-time model availability status and prevents resource leaks through proper cleanup.
Enhance error handling across all adapters and add proper abort signal propagation for cancellable operations.

- Add RunnableConfig abort signal support to all adapter clients and requesters
- Improve ChatLunaError propagation throughout the platform stack
- Fix room availability checking with better error handling and UI feedback
- Enhance SSE response error handling for better debugging
- Add proper error handling in embeddings initialization with early returns
- Update locale strings for better model availability display formatting
- Strengthen platform service error handling and remove unnecessary logging

This ensures proper cleanup of resources when operations are cancelled and provides better error diagnostics for users and administrators.
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 9

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (9)
packages/openai-like-adapter/src/client.ts (1)

84-93: formattedModels does not set maxTokens, breaking the ModelInfo contract and risking misuse downstream

The ModelInfo interface requires maxTokens: number. Casting with as ModelInfo sidesteps the check and lets the contract drift; getModelMaxContextSize(info) or other consumers may assume the field exists.

At minimum, provide a sensible default consistent with additionalModels (for example 4096), or infer the context window from the model name. Minimal fix first:

   const formattedModels = filteredModels.map(
     (model) =>
       ({
         name: model,
         type: isEmbeddingModel(model)
           ? ModelType.embeddings
           : ModelType.llm,
+        maxTokens: 4096,
         ...supportToolCalling(model)
       }) as ModelInfo
   )

If a helper that derives the context length from the model name already exists, prefer it over the hard-coded default.

packages/azure-openai-adapter/src/client.ts (1)

60-71: Do not unconditionally claim ImageInput support for every model

Unconditionally adding ModelCapabilities.ImageInput to capabilities can mislead the UI or trigger call errors on models without multimodal support (for example legacy text-only models or embeddings). Enable it only for models known to support multimodal/vision input, or better, declare capabilities explicitly at the configuration layer.

A conservative heuristic can serve as a stopgap (and add a type guard to avoid the type drift from filter(Boolean)):

-                        capabilities: [
-                            ModelCapabilities.ImageInput,
-                            llmType === 'LLM 大语言模型(函数调用)'
-                                ? ModelCapabilities.ToolCall
-                                : undefined,
-                            model.includes('gpt-5') ||
-                            model.includes('o1') ||
-                            model.includes('o3') ||
-                            model.includes('o4')
-                                ? ModelCapabilities.Thinking
-                                : undefined
-                        ].filter(Boolean),
+                        capabilities: [
+                            (model.includes('gpt-4o') ||
+                                model.includes('gpt-4.1') ||
+                                model.includes('gpt-4-vision') ||
+                                model.includes('o1') ||
+                                model.includes('o3')) &&
+                            llmType !== 'Embeddings 嵌入模型'
+                                ? ModelCapabilities.ImageInput
+                                : undefined,
+                            llmType === 'LLM 大语言模型(函数调用)'
+                                ? ModelCapabilities.ToolCall
+                                : undefined,
+                            model.includes('gpt-5') ||
+                            model.includes('o1') ||
+                            model.includes('o3') ||
+                            model.includes('o4')
+                                ? ModelCapabilities.Thinking
+                                : undefined
+                        ].filter(
+                            (c): c is ModelCapabilities => Boolean(c)
+                        ),

Note: the best practice is to declare capabilities explicitly in configuration; the heuristic here is only a fallback.

packages/deepseek-adapter/src/requester.ts (1)

65-88: The abort signal is not bubbled up as ChatLunaError(ABORTED) per convention, so upper layers misread it as a generic init error

When an AbortError is caught, the current branch falls into the "wrap in a plain Error" path, and the upstream client then wraps it as MODEL_INIT_ERROR instead of preserving the cancellation semantics. Check for AbortError explicitly in the catch and throw ChatLunaError(ABORTED).

Apply the following patch (including the required enum import):

-import { ChatLunaError } from 'koishi-plugin-chatluna/utils/error'
+import { ChatLunaError, ChatLunaErrorCode } from 'koishi-plugin-chatluna/utils/error'
@@
   } catch (e) {
-            if (e instanceof ChatLunaError) {
-                throw e
-            }
+            if (e instanceof ChatLunaError) {
+                throw e
+            }
+            if ((e as Error)?.name === 'AbortError') {
+                throw new ChatLunaError(ChatLunaErrorCode.ABORTED, e as Error)
+            }
             const error = new Error(
                 'error when listing deepseek models, Result: ' +
                     JSON.stringify(data)
             )
             throw error
   }
packages/openai-adapter/src/requester.ts (1)

64-89: AbortError is not translated to ChatLunaError(ABORTED); cancellation semantics are lost

This does not match the platform client's ABORTED branch and is easily mistaken for an initialization failure.

-import { ChatLunaError } from 'koishi-plugin-chatluna/utils/error'
+import { ChatLunaError, ChatLunaErrorCode } from 'koishi-plugin-chatluna/utils/error'
@@
   } catch (e) {
-            if (e instanceof ChatLunaError) {
-                throw e
-            }
+            if (e instanceof ChatLunaError) {
+                throw e
+            }
+            if ((e as Error)?.name === 'AbortError') {
+                throw new ChatLunaError(ChatLunaErrorCode.ABORTED, e as Error)
+            }
 
             const error = new Error(
                 'error when listing openai models, Result: ' +
                     JSON.stringify(data)
             )
             throw error
   }
packages/core/src/services/chat.ts (2)

817-839: Possible null dereference and unsafe read of the concurrency limit

client.value may be undefined, and the code reads straight through configPool. During unload, or before the platform is ready, this triggers a null-pointer error. Suggestion: wait once for the platform to become ready and provide a fallback for the concurrency limit.

-        const client = await this._platformService.getClient(platform)
-        const config = client.value.configPool.getConfig(true).value
+        const clientRef = await this._platformService.getClient(platform)
+        let clientVal = clientRef.value
+        if (!clientVal) {
+            await this._service.awaitLoadPlatform(platform, 30_000)
+            clientVal = clientRef.value
+        }
+        if (!clientVal) {
+            throw new ChatLunaError(ChatLunaErrorCode.MODEL_INIT_ERROR)
+        }
+        const concurrentMaxSize =
+            clientVal.config?.concurrentMaxSize ??
+            this._service.config.chatConcurrentMaxSize ??
+            3
...
-                this._modelQueue.wait(
-                    platform,
-                    requestId,
-                    config.concurrentMaxSize
-                )
+                this._modelQueue.wait(platform, requestId, concurrentMaxSize)

797-799: Unloading by platform aborts all requests, and a key mismatch leaks map entries

  • dispose(platform?) unconditionally aborts every request, so unloading a single platform also terminates requests for other platforms.
  • _requestIdMap is keyed by requestId, but the platform branch looks entries up by conversationId, which never matches and leaves stale mappings behind.

Suggestion: extend the _requestIdMap entries to carry the platform and conversation ID and abort only matching entries; also drop the pointless conversationId lookup.

-    private _requestIdMap: Map<string, AbortController> = new Map()
+    private _requestIdMap: Map<
+        string,
+        { controller: AbortController; platform: string; conversationId: string }
+    > = new Map()
-            const abortController = new AbortController()
-            this._requestIdMap.set(requestId, abortController)
+            const abortController = new AbortController()
+            this._requestIdMap.set(requestId, {
+                controller: abortController,
+                platform,
+                conversationId
+            })
-        const abortController = this._requestIdMap.get(requestId)
-        if (!abortController) {
+        const entry = this._requestIdMap.get(requestId)
+        if (!entry) {
             return false
         }
-        abortController.abort(
+        entry.controller.abort(
             new ChatLunaError(ChatLunaErrorCode.ABORTED, undefined, true)
         )
-        // Terminate all related requests
-        for (const controller of this._requestIdMap.values()) {
-            controller.abort(
-                new ChatLunaError(ChatLunaErrorCode.ABORTED, undefined, true)
-            )
-        }
+        // Terminate requests
+        if (!platform) {
+            for (const entry of this._requestIdMap.values()) {
+                entry.controller.abort(
+                    new ChatLunaError(ChatLunaErrorCode.ABORTED, undefined, true)
+                )
+            }
+            this._requestIdMap.clear()
+        } else {
+            for (const [rid, entry] of this._requestIdMap.entries()) {
+                if (entry.platform === platform) {
+                    entry.controller.abort(
+                        new ChatLunaError(
+                            ChatLunaErrorCode.ABORTED,
+                            undefined,
+                            true
+                        )
+                    )
+                    this._requestIdMap.delete(rid)
+                }
+            }
+        }
-        // Clean up resources for specific platform
-        const conversationIds = this._platformToConversations.get(platform)
-        if (!conversationIds?.length) return
-
-        for (const conversationId of conversationIds) {
-            this._conversations.delete(conversationId)
-            // Terminate platform-related requests
-            const controller = this._requestIdMap.get(conversationId)
-            if (controller) {
-                controller.abort(
-                    new ChatLunaError(
-                        ChatLunaErrorCode.ABORTED,
-                        undefined,
-                        true
-                    )
-                )
-                this._requestIdMap.delete(conversationId)
-            }
-        }
+        // Clean up resources for specific platform
+        const conversationIds = this._platformToConversations.get(platform)
+        if (!conversationIds?.length) return
+        for (const conversationId of conversationIds) {
+            this._conversations.delete(conversationId)
+        }

Also applies to: 851-853, 909-918, 998-1004, 1014-1035

packages/core/src/chains/rooms.ts (1)

202-213: Self-contradictory logic: when the target model is not found, the code still calls find(...modelName) and dereferences undefined.name

It needs to pick an existing fallback model (for example the first one).

-            } else if (
-                !platformModels.some((model) => model.name === modelName)
-            ) {
-                const model =
-                    platformName +
-                    '/' +
-                    platformModels.find((model) => model.name === modelName)
-                        .name
-
-                config.defaultModel = model
+            } else if (
+                !platformModels.some((m) => m.name === modelName)
+            ) {
+                const fallback =
+                    platformName + '/' + platformModels[0].name
+                config.defaultModel = fallback
packages/core/src/llm-core/platform/client.ts (1)

43-59: The ABORTED branch does not release the lock, causing a permanent deadlock

There is no unlock() before the throw; wrap the locked section in try/finally, remove the unlock() calls scattered across branches, and make sure every path releases the lock.

-        const unlock = await this._lock.lock()
-
-        let retryCount = 0
-        while (retryCount < (this.config.maxRetries ?? 1)) {
-            try {
-                await this.init(config)
-                unlock()
-                return true
-            } catch (e) {
+        const unlock = await this._lock.lock()
+        try {
+            let retryCount = 0
+            const maxRetries = this.config?.maxRetries ?? 1
+            while (retryCount < maxRetries) {
+                try {
+                    await this.init(config)
+                    return true
+                } catch (e) {
                     if (
                         e instanceof ChatLunaError &&
                         e.errorCode === ChatLunaErrorCode.ABORTED
                     ) {
-                        throw e
+                        throw e
                     }
-
-                if (retryCount === this.config.maxRetries - 1) {
+                    if (retryCount === maxRetries - 1) {
                         const oldConfig = this.configPool.getConfig(true)
                         // refresh
                         this.configPool.getConfig(false)
                         this.configPool.markConfigStatus(oldConfig.value, false)
                         this.ctx.logger.error(e)
                         if (this.configPool.findAvailableConfig() !== null) {
                             retryCount = 0
                             continue
                         }
-                        unlock()
                         return false
                     }
-            }
-
-            retryCount++
-        }
-
-        unlock()
-
-        return false
+                }
+                retryCount++
+            }
+            return false
+        } finally {
+            unlock()
+        }

Also applies to: 83-85

packages/core/src/llm-core/platform/service.ts (1)

247-281: Merging the model list keeps stale entries and never updates existing metadata

Models are only appended when absent, so capability/maxTokens updates for existing models never take effect, and deleted models are never removed.

Use a "new overwrites old" dedup merge, or simply take the fresh result as authoritative:

-        const availableModels = this._models[platform] ?? []
-        // filter existing models
-        this._models[platform] = availableModels.concat(
-            models.filter(
-                (m) => !availableModels.some((am) => am.name === m.name)
-            )
-        )
+        const prev = this._models[platform] ?? []
+        const next = new Map<string, ModelInfo>(prev.map(m => [m.name, m]))
+        for (const m of models) next.set(m.name, m) // 用新信息覆盖旧条目
+        this._models[platform] = Array.from(next.values())

If decommissioned models should be removed as well, take the set difference against prev here.

♻️ Duplicate comments (1)
packages/core/src/llm-core/chat/app.ts (1)

321-336: Critical: embeddings are parsed before the null check, short-circuiting the fallback (regression)

parseRawModelName(this._input.embeddings) runs before the null and '无' checks, so parseRawModelName(undefined) throws MODEL_NOT_FOUND and the emptyEmbeddings fallback is never reached. This was pointed out and fixed in an earlier review; this change reintroduces it.

Check for null first, then parse:

-        const [platform, modelName] = parseRawModelName(this._input.embeddings)
-
-        if (
-            this._input.embeddings == null ||
-            this._input.embeddings.length < 1 ||
-            this._input.embeddings === '无'
-        ) {
-            return computed(() => emptyEmbeddings)
-        }
+        if (
+            this._input.embeddings == null ||
+            this._input.embeddings.length < 1 ||
+            this._input.embeddings === '无'
+        ) {
+            return computed(() => emptyEmbeddings)
+        }
+        const [platform, modelName] = parseRawModelName(this._input.embeddings)
🧹 Nitpick comments (26)
packages/core/src/utils/sse.ts (4)

140-155: Extract the error body by Content-Type and narrow the union with `in`, avoiding both instanceof-on-Reader and double-quoting strings via JSON.stringify

Switching the error body to response.text() handles non-JSON, but JSON.stringify-ing that string when building the message adds extra quotes; and using instanceof ReadableStreamDefaultReader on the union type is fragile across runtimes/realms. Narrow with `in`, enter the branch only for a Response with !ok, parse the error body based on content-type, and avoid the extra quotes.

Suggested patch (changes limited to the range selected for this comment):

 export async function checkResponse(
-    response: fetchType.Response | ReadableStreamDefaultReader<string>
+    response: fetchType.Response | ReadableStreamDefaultReader<string>
 ) {
-    if (!(response instanceof ReadableStreamDefaultReader || response.ok)) {
-        const error = await response.text().catch(() => '')
-
-        throw new ChatLunaError(
-            ChatLunaErrorCode.NETWORK_ERROR,
-            new Error(
-                `${response.status} ${response.statusText} ${JSON.stringify(
-                    error
-                )}`
-            )
-        )
-    }
+    // 使用 `in` 做类型缩小,避免对 Reader 做 instanceof 检查
+    if ('ok' in response && !response.ok) {
+        let errorText = ''
+        try {
+            const ct = response.headers?.get('content-type') ?? ''
+            if (ct.includes('application/json')) {
+                const data = await response.json()
+                errorText = typeof data === 'string' ? data : JSON.stringify(data)
+            } else {
+                errorText = await response.text()
+            }
+        } catch {
+            // ignore
+        }
+        const message =
+            `${response.status} ${response.statusText}` +
+            (errorText ? ` ${errorText}` : '')
+        throw new ChatLunaError(
+            ChatLunaErrorCode.NETWORK_ERROR,
+            new Error(message),
+        )
+    }
 }

Regression checks:

  • The final error-message format for non-2xx responses under both JSON and text content-types (no extra quotes).
  • Passing an SSE Reader must not trigger the error branch.

141-142: The ReadableStreamDefaultReader generic should be Uint8Array (fetch body), not string

A Web Streams ReadableStreamDefaultReader reading a fetch body yields Uint8Array chunks, which are later turned into text by a TextDecoder. Annotating the generic as string misleads type usage and weakens IDE/type checking.

Suggested patch (align the generic annotation in all the places below):

 export async function checkResponse(
-    response: fetchType.Response | ReadableStreamDefaultReader<string>
+    response: fetchType.Response | ReadableStreamDefaultReader<Uint8Array>
 ) {
@@
-async function* readSSE(reader: ReadableStreamDefaultReader) {
+async function* readSSE(reader: ReadableStreamDefaultReader<Uint8Array>) {
@@
 export async function sse(
-    response: fetchType.Response | ReadableStreamDefaultReader<string>,
+    response: fetchType.Response | ReadableStreamDefaultReader<Uint8Array>,
@@
 export async function* rawSeeAsIterable(
-    response: fetchType.Response | ReadableStreamDefaultReader<string>,
+    response: fetchType.Response | ReadableStreamDefaultReader<Uint8Array>,
     cacheCount: number = 0
 ) {
@@
-            : (response.body.getReader() as ReadableStreamDefaultReader<string>)
+            : (response.body.getReader() as ReadableStreamDefaultReader<Uint8Array>)

Also applies to: 158-161, 179-184, 191-201


185-187: Allow onEvent to return false to stop consuming the stream early (a backward-compatible enhancement)

The return value of onEvent is currently ignored. To let callers stop SSE consumption once a condition is met, break out of the loop when onEvent explicitly returns false.

Suggested patch:

-    for await (const rawChunk of rawSeeAsIterable(response, cacheCount)) {
-        await onEvent(rawChunk)
-    }
+    for await (const rawChunk of rawSeeAsIterable(response, cacheCount)) {
+        const ret = await onEvent(rawChunk)
+        if (ret === false) break
+    }

191-194: Naming slip: rawSeeAsIterable should probably be rawSseAsIterable; add an alias export to improve readability without breaking the external API

Keep the existing export and add a semantic alias (a non-breaking improvement).

Suggested alias export at the end of the file:

+// 兼容命名别名:更贴近 SSE 语义
+export { rawSeeAsIterable as rawSseAsIterable }

Also applies to: 235-237

packages/openai-like-adapter/src/client.ts (2)

71-82: The capabilities filter lets null through the types; use a type guard for a precise type

.filter(Boolean) does not narrow to ModelCapabilities[] in TypeScript; use a type guard to avoid later type assertions.

It can be changed to:

   return {
     capabilities: [
       ModelCapabilities.ToolCall,
       supportImageInput(model)
         ? ModelCapabilities.ImageInput
         : null
-    ].filter(Boolean)
+    ].filter((c): c is ModelCapabilities => c != null)
   }

67-70: Keep embedding models (do not drop embeddings in the first filter pass)

isNonLLMModel (packages/shared/src/client.ts) filters models by keywords such as 'image', 'whisper', 'tts', 'dall-e', and 'rerank'; isEmbeddingModel matches 'embed', 'bge', 'instructor-large', and 'm3e'. The openai-like-adapter currently filters with !isNonLLMModel first and then classifies with isEmbeddingModel, so a model whose name contains both 'image' and 'embed' gets dropped even though it is an embedding model. Adjust the filter to keep embeddings, for example:

  • rawModels.filter(m => !isNonLLMModel(m) || isEmbeddingModel(m))
  • or exclude isEmbeddingModel(m) inside isNonLLMModel first

Where to change: packages/openai-like-adapter/src/client.ts (around lines 67–90); function definitions: packages/shared/src/client.ts.

packages/azure-openai-adapter/src/client.ts (3)

76-81: Error-propagation strategy is heading the right way; normalize the catch variable's type and avoid non-Error values

e is currently passed to ChatLunaError as-is; if e is not an Error (or instanceof fails across realms), stack information is lost or the type does not match. Normalize it once while keeping the existing semantics.

         } catch (e) {
-            if (e instanceof ChatLunaError) {
-                throw e
-            }
-            throw new ChatLunaError(ChatLunaErrorCode.MODEL_INIT_ERROR, e)
+            if (e instanceof ChatLunaError) throw e
+            const err =
+                e instanceof Error
+                    ? e
+                    : new Error(typeof e === 'string' ? e : JSON.stringify(e))
+            throw new ChatLunaError(ChatLunaErrorCode.MODEL_INIT_ERROR, err)
         }

57-64: Determining model type/capabilities from Chinese display strings is tightly coupled and brittle

The checks llmType === 'Embeddings 嵌入模型' and llmType === 'LLM 大语言模型(函数调用)' depend on localized copy and will break after i18n or config refactors. Suggestions:

  • change modelType to a stable enum (e.g. ModelType.llm | ModelType.embeddings), or
  • provide capabilities: ModelCapabilities[] explicitly in supportModels and keep only fallback inference here.

72-73: Fix the Azure adapter's maxTokens fallback (avoid 100_000)

packages/azure-openai-adapter/src/client.ts (refreshModels, around lines 52–73) currently uses maxTokens: token ?? 100_000. The schema in packages/azure-openai-adapter/src/index.ts already defaults supportModels.contextSize to 4096, but falling back to 100_000 when the config entry is missing gives embeddings or older models an oversized context, increasing the risk of truncation, rejections, or cost. Suggestions:

  • prefer the contextSize from supportModels;
  • if it is missing, use a more conservative fallback (16_384 or 32_768), or differentiate by model family (small values for embeddings/legacy models, larger values for advanced LLMs).
packages/deepseek-adapter/src/requester.ts (1)

69-76: Use response.ok + response.json() and throw a ChatLunaError with context on non-2xx responses

The current text() -> JSON.parse path handles error responses poorly and carries too little information.

-            const response = await this.get(
+            const response = await this.get(
                 'models',
                 {},
                 { signal: config?.signal }
             )
-            data = await response.text()
-            data = JSON.parse(data as string)
+            if (!response.ok) {
+                const body = await response.text().catch(() => '')
+                throw new ChatLunaError(
+                    ChatLunaErrorCode.NETWORK_ERROR,
+                    new Error(`HTTP ${response.status} ${response.statusText}: ${body}`)
+                )
+            }
+            data = await response.json()
packages/openai-adapter/src/requester.ts (1)

68-76: Likewise, use response.ok + response.json() and throw a diagnosable error on non-2xx responses

Improves observability and robustness.

-            data = await response.text()
-            data = JSON.parse(data as string)
+            if (!response.ok) {
+                const body = await response.text().catch(() => '')
+                throw new ChatLunaError(
+                    ChatLunaErrorCode.NETWORK_ERROR,
+                    new Error(`HTTP ${response.status} ${response.statusText}: ${body}`)
+                )
+            }
+            data = await response.json()
packages/openai-adapter/src/client.ts (1)

75-87: ModelInfo leaves maxTokens unset and relies on an assertion to dodge the type check

Set it explicitly to avoid inconsistencies when the limit is read later.

                 .map((model) => {
                     return {
                         name: model,
                         type: model.includes('embedding')
                             ? ModelType.embeddings
                             : ModelType.llm,
+                        maxTokens: this._config.maxTokens,
                         capabilities: [
                             ModelCapabilities.ToolCall,
                             supportImageInput(model)
                                 ? ModelCapabilities.ImageInput
                                 : undefined
                         ].filter(Boolean)
                     } as ModelInfo
                 })
packages/deepseek-adapter/src/client.ts (1)

58-66: ModelInfo does not set maxTokens, risking inconsistent limit reads

Same issue as the OpenAI client; fill it in.

                 .map((model) => {
                     return {
                         name: model,
                         type: model.includes('deepseek')
                             ? ModelType.llm
                             : ModelType.embeddings,
+                        maxTokens: this._config.maxTokens,
                         capabilities: [ModelCapabilities.ToolCall]
                     } as ModelInfo
                 })
packages/core/src/llm-core/platform/api.ts (1)

123-135: The comment contradicts the behavior: this rethrows to bypass error counting/degradation rather than "ignoring" errors

Update the comment to remove the ambiguity.

-                // Ignore network errors
+                // 直接抛出网络/取消/不安全内容错误:不计入错误次数与降级
packages/shared/src/requester.ts (1)

406-414: Unify the error type as ChatLunaError and keep the original exception context

In the non-ChatLunaError branch, getModels throws a plain Error, which is inconsistent with the rest of the file and drops context. Change it to ChatLunaError(API_REQUEST_FAILED, e).

-        throw new Error(
-            'error when listing openai models, Result: ' + JSON.stringify(data)
-        )
+        throw new ChatLunaError(
+            ChatLunaErrorCode.API_REQUEST_FAILED,
+            e as Error
+        )
packages/core/src/services/chat.ts (4)

126-132: Unnecessary try/catch just to construct an Error

Creating timeoutError via throw-and-catch reads poorly; a plain new Error is enough.

-        let timeoutError: Error | null = null
-
-        try {
-            throw new Error(
-                `Timeout waiting for platform ${pluginName} to load`
-            )
-        } catch (e) {
-            timeoutError = e
-        }
+        const timeoutError = new Error(
+            `Timeout waiting for platform ${pluginName} to load`
+        )

102-102: The %c log placeholder has no effect; use %s instead

Koishi/Node has no %c (browser styling) semantics, so the output contains stray characters.

-        this.ctx.logger.success(`Plugin %c installed`, platformName)
+        this.ctx.logger.success(`Plugin %s installed`, platformName)
-        this.ctx.logger.success(
-            'Plugin %c uninstalled',
-            targetPlugin.platformName
-        )
+        this.ctx.logger.success(
+            'Plugin %s uninstalled',
+            targetPlugin.platformName
+        )

Also applies to: 170-173


291-296: Minor wording flaw in the warning log

"no available" is ungrammatical; use "not available" or a localized message.

-                    this.ctx.logger.warn(
-                        `The platform ${platformName} no available`
-                    )
+                    this.ctx.logger.warn(
+                        `The platform ${platformName} not available`
+                    )

689-696: Avoid implementing a notice by throwing an exception

registerToService only exists to signal the timing change; a direct logger.warn suffices, no throw/catch needed.

-        try {
-            throw new Error('Please remove this method')
-        } catch (e) {
-            this.ctx.logger.warn(
-                `Now the plugin support auto installation, Please remove call this method`,
-                e
-            )
-        }
+        this.ctx.logger.warn(
+            'Auto-install is supported now. Please remove calls to registerToService().'
+        )
packages/core/src/middlewares/room/list_room.ts (1)

90-95: Duplicate availability check (twice per room)

checkConversationRoomAvailability is called once for room_model and once for room_availability on the same room; merge these into a single call.

 async function formatRoomInfo(
   ctx: Context,
   session: Session,
   room: ConversationRoom
 ) {
   const buffer = []
+  const available = await checkConversationRoomAvailability(ctx, room)

   buffer.push(session.text('.room_name', [room.roomName]))
 ...
-  buffer.push(
-      session.text('.room_model', [
-          room.model,
-          await checkConversationRoomAvailability(ctx, room)
-      ])
-  )
+  buffer.push(session.text('.room_model', [room.model, available]))
 ...
-  buffer.push(
-      session.text('.room_availability', [
-          await checkConversationRoomAvailability(ctx, room)
-      ])
-  )
+  buffer.push(session.text('.room_availability', [available]))

Also applies to: 100-103

packages/gemini-adapter/src/requester.ts (1)

209-219: Unify the error type and keep the context

In the non-ChatLunaError branch, throw ChatLunaError(MODEL_INIT_ERROR, e) instead, consistent with the other requests and without losing the stack.

-            const error = new Error(
-                'error when listing gemini models, Error: ' + e.message
-            )
-            error.stack = e.stack
-            error.cause = e.cause
-            throw error
+            throw new ChatLunaError(
+                ChatLunaErrorCode.MODEL_INIT_ERROR,
+                e as Error
+            )
packages/core/src/llm-core/chat/app.ts (2)

231-241: modelInfo may be null; guard before the support check

service.getModelInfo() returns ComputedRef<ModelInfo | null>, so calling this._supportChatMode(modelInfo.value) directly may receive null.

Minimal change:

-                supportChatChain: this._supportChatMode(modelInfo.value)
+                supportChatChain:
+                    !!modelInfo.value && this._supportChatMode(modelInfo.value)

Or reflect the nullability in _initModel()'s return type (see the next item).


347-362: Make sure the computed branches are exhaustive and always return Embeddings

If an unknown client type appears, the current branches implicitly return undefined. Add a fallback at the end:

         return computed(() => {
           // ...现有分支
-        })
+          return emptyEmbeddings
+        })
packages/core/src/llm-core/platform/service.ts (3)

70-75: Keep a tool's own id/name when provided, otherwise generate them

Overwriting the incoming id/name can break a tool's stable identity; fill them in only when absent.

-        toolCreator.id = randomUUID()
-        toolCreator.name = name
+        toolCreator.id ??= randomUUID()
+        toolCreator.name ??= name

283-304: Register the client before refreshing, so it can be read while events fire

refreshClient() emits events such as model-added, but at that point _platformClients[platform] has not been assigned yet; listeners reading getClient(platform) get undefined.

-        const client = createClientFunction(this.ctx)
-
-        await this.refreshClient(client, platform, config)
-
-        this._platformClients[platform] = client
+        const client = createClientFunction(this.ctx)
+        this._platformClients[platform] = client
+        await this.refreshClient(client, platform, config)

337-343: dispose does not clean up the tool and vector store registration caches

Only _tmpVectorStores and a few maps are cleared; _tmpTools is missed (and, if desired, a deliberate reset of _vectorStore/_createClientFunctions). At minimum, clear _tmpTools:

     dispose() {
         this._tmpVectorStores.clear()
         this._platformClients = reactive({})
         this._models = reactive({})
-        this._tools = reactive({})
+        this._tools = reactive({})
+        this._tmpTools = {}
         this._chatChains = reactive({})
     }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 637461b and af3fe8a.

⛔ Files ignored due to path filters (2)
  • packages/core/src/locales/en-US.yml is excluded by !**/*.yml
  • packages/core/src/locales/zh-CN.yml is excluded by !**/*.yml
📒 Files selected for processing (25)
  • packages/azure-openai-adapter/src/client.ts (1 hunks)
  • packages/core/src/chains/rooms.ts (4 hunks)
  • packages/core/src/llm-core/chain/plugin_chat_chain.ts (7 hunks)
  • packages/core/src/llm-core/chat/app.ts (6 hunks)
  • packages/core/src/llm-core/platform/api.ts (3 hunks)
  • packages/core/src/llm-core/platform/client.ts (4 hunks)
  • packages/core/src/llm-core/platform/service.ts (11 hunks)
  • packages/core/src/middlewares/chat/chat_time_limit_check.ts (2 hunks)
  • packages/core/src/middlewares/chat/message_delay.ts (2 hunks)
  • packages/core/src/middlewares/model/resolve_model.ts (1 hunks)
  • packages/core/src/middlewares/room/list_room.ts (1 hunks)
  • packages/core/src/middlewares/room/room_info.ts (2 hunks)
  • packages/core/src/services/chat.ts (14 hunks)
  • packages/core/src/utils/sse.ts (1 hunks)
  • packages/deepseek-adapter/src/client.ts (3 hunks)
  • packages/deepseek-adapter/src/requester.ts (2 hunks)
  • packages/gemini-adapter/src/client.ts (3 hunks)
  • packages/gemini-adapter/src/requester.ts (3 hunks)
  • packages/openai-adapter/src/client.ts (3 hunks)
  • packages/openai-adapter/src/requester.ts (2 hunks)
  • packages/openai-like-adapter/src/client.ts (3 hunks)
  • packages/openai-like-adapter/src/requester.ts (3 hunks)
  • packages/search-service/src/index.ts (8 hunks)
  • packages/search-service/src/tools/puppeteerBrowserTool.ts (4 hunks)
  • packages/shared/src/requester.ts (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • packages/core/src/middlewares/chat/message_delay.ts
🚧 Files skipped from review as they are similar to previous changes (4)
  • packages/core/src/middlewares/chat/chat_time_limit_check.ts
  • packages/search-service/src/tools/puppeteerBrowserTool.ts
  • packages/search-service/src/index.ts
  • packages/core/src/llm-core/chain/plugin_chat_chain.ts
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-15T09:29:19.344Z
Learnt from: dingyi222666
PR: ChatLunaLab/chatluna#543
File: packages/core/src/llm-core/vectorstores/base.ts:0-0
Timestamp: 2025-09-15T09:29:19.344Z
Learning: 在 packages/core/src/llm-core/vectorstores/base.ts 中,ChatLunaSaveableVectorStore 的 free() 方法设计为等待子类重写实现,不在基类中添加额外的资源清理操作。资源清理由具体实现类负责处理。

Applied to files:

  • packages/core/src/llm-core/platform/service.ts
🧬 Code graph analysis (19)
packages/deepseek-adapter/src/requester.ts (1)
packages/core/src/utils/error.ts (1)
  • ChatLunaError (13-41)
packages/gemini-adapter/src/client.ts (2)
packages/core/src/llm-core/platform/types.ts (1)
  • ModelInfo (81-86)
packages/core/src/utils/error.ts (1)
  • ChatLunaError (13-41)
packages/azure-openai-adapter/src/client.ts (1)
packages/core/src/utils/error.ts (1)
  • ChatLunaError (13-41)
packages/openai-like-adapter/src/requester.ts (2)
packages/core/src/llm-core/platform/client.ts (2)
  • config (88-90)
  • getModels (92-118)
packages/shared/src/requester.ts (1)
  • getModels (382-414)
packages/openai-adapter/src/requester.ts (1)
packages/core/src/utils/error.ts (1)
  • ChatLunaError (13-41)
packages/core/src/middlewares/room/list_room.ts (1)
packages/core/src/chains/rooms.ts (1)
  • checkConversationRoomAvailability (102-139)
packages/core/src/middlewares/model/resolve_model.ts (1)
packages/core/src/chains/rooms.ts (1)
  • fixConversationRoomAvailability (141-173)
packages/core/src/middlewares/room/room_info.ts (1)
packages/core/src/chains/rooms.ts (1)
  • checkConversationRoomAvailability (102-139)
packages/openai-adapter/src/client.ts (2)
packages/core/src/llm-core/platform/types.ts (1)
  • ModelInfo (81-86)
packages/core/src/utils/error.ts (1)
  • ChatLunaError (13-41)
packages/deepseek-adapter/src/client.ts (2)
packages/core/src/llm-core/platform/types.ts (1)
  • ModelInfo (81-86)
packages/core/src/utils/error.ts (1)
  • ChatLunaError (13-41)
packages/openai-like-adapter/src/client.ts (2)
packages/core/src/llm-core/platform/types.ts (1)
  • ModelInfo (81-86)
packages/core/src/utils/error.ts (1)
  • ChatLunaError (13-41)
packages/core/src/llm-core/platform/client.ts (3)
packages/azure-openai-adapter/src/client.ts (1)
  • config (31-33)
packages/core/src/utils/error.ts (1)
  • ChatLunaError (13-41)
packages/core/src/llm-core/platform/types.ts (1)
  • ModelInfo (81-86)
packages/core/src/llm-core/platform/api.ts (1)
packages/core/src/utils/error.ts (1)
  • ChatLunaError (13-41)
packages/shared/src/requester.ts (4)
packages/core/src/utils/error.ts (1)
  • ChatLunaError (13-41)
packages/core/src/llm-core/platform/client.ts (2)
  • getModels (92-118)
  • config (88-90)
packages/openai-like-adapter/src/requester.ts (1)
  • getModels (116-126)
packages/core/src/services/chat.ts (3)
  • ChatLunaPlugin (572-780)
  • Config (1074-1082)
  • Config (1084-1113)
packages/core/src/llm-core/chat/app.ts (4)
packages/core/src/llm-core/platform/model.ts (3)
  • ChatLunaChatModel (96-572)
  • modelInfo (557-559)
  • modelName (553-555)
packages/core/src/llm-core/platform/types.ts (1)
  • ModelInfo (81-86)
packages/core/src/llm-core/utils/count_tokens.ts (1)
  • parseRawModelName (194-199)
packages/core/src/llm-core/model/in_memory.ts (1)
  • emptyEmbeddings (40-40)
packages/gemini-adapter/src/requester.ts (2)
packages/gemini-adapter/src/types.ts (1)
  • GeminiModelInfo (121-133)
packages/core/src/utils/error.ts (1)
  • ChatLunaError (13-41)
packages/core/src/chains/rooms.ts (3)
packages/core/src/services/types.ts (1)
  • Context (21-23)
packages/core/src/llm-core/platform/client.ts (1)
  • config (88-90)
packages/core/src/services/chat.ts (2)
  • Config (1074-1082)
  • Config (1084-1113)
packages/core/src/services/chat.ts (4)
packages/core/src/llm-core/utils/count_tokens.ts (1)
  • parseRawModelName (194-199)
packages/core/src/llm-core/model/in_memory.ts (1)
  • emptyEmbeddings (40-40)
packages/core/src/llm-core/platform/client.ts (1)
  • config (88-90)
packages/core/src/utils/error.ts (1)
  • ChatLunaError (13-41)
packages/core/src/llm-core/platform/service.ts (3)
packages/core/src/llm-core/platform/types.ts (6)
  • CreateClientFunction (73-73)
  • ChatLunaTool (61-67)
  • ModelInfo (81-86)
  • ChatLunaChainInfo (11-17)
  • CreateVectorStoreFunction (69-71)
  • CreateToolParams (19-43)
packages/core/src/llm-core/utils/count_tokens.ts (1)
  • parseRawModelName (194-199)
packages/core/src/llm-core/platform/client.ts (2)
  • config (88-90)
  • isAvailable (38-86)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: lint
  • GitHub Check: build
🔇 Additional comments (17)
packages/openai-like-adapter/src/client.ts (3)

105-109: The re-throw strategy is sound and avoids double-wrapping ChatLunaError

The original ChatLunaError is left unwrapped while every other error is wrapped as MODEL_INIT_ERROR, as expected.
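
The pattern reads roughly like the sketch below (simplified stand-ins; the real ChatLunaError and error codes live in koishi-plugin-chatluna/utils/error):

class ChatLunaError extends Error {
    constructor(public errorCode: number, cause?: Error) {
        super(cause?.message)
    }
}
const MODEL_INIT_ERROR = 1 // illustrative code only

async function refreshModelsSafely(refresh: () => Promise<void>) {
    try {
        await refresh()
    } catch (e) {
        // Preserve an already-typed error; wrap everything else exactly once.
        if (e instanceof ChatLunaError) throw e
        throw new ChatLunaError(MODEL_INIT_ERROR, e as Error)
    }
}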


48-52: Confirmed — getModels signature and pass-through are correct

OpenAIRequester.getModels(config?: RunnableConfig) is defined in packages/openai-like-adapter/src/requester.ts and forwards config to the downstream getModels(requestContext, config). The pass-through in refreshModels(config?: RunnableConfig) is fine.


26-26: Use a type-only import to avoid a runtime dependency

The import is only used for type annotations; make it a type-only import so no runtime import is emitted after compilation:

-import { RunnableConfig } from '@langchain/core/runnables'
+import type { RunnableConfig } from '@langchain/core/runnables'

The verification script run earlier searched no files (rg returned "No files were searched"), so this environment cannot confirm whether @langchain/core is declared in the repo or how the relevant tsconfig options are set. Run the following commands at the repo root and paste the output back for verification:

#!/bin/bash
set -euo pipefail
echo "1) 查找声明了 @langchain/core 的 package.json:"
find . -name 'package.json' -not -path './node_modules/*' -not -path '*/node_modules/*' -print0 | \
  while IFS= read -r -d '' f; do
    name=$(jq -r '.name // "(no-name)"' "$f" 2>/dev/null || echo "(no-name)")
    ver=$(jq -r '(.dependencies["@langchain/core"] // .devDependencies["@langchain/core"]) // empty' "$f" 2>/dev/null || true)
    if [[ -n "$ver" ]]; then echo "$name -> @langchain/core@$ver ($f)"; fi
  done

echo "2) 检查 tsconfig 文件的 compilerOptions 设置:"
find . -name 'tsconfig*.json' -not -path './node_modules/*' -not -path '*/node_modules/*' -print0 | \
  while IFS= read -r -d '' f; do
    ina=$(jq -r '.compilerOptions.importsNotUsedAsValues // empty' "$f" 2>/dev/null || true)
    vms=$(jq -r '.compilerOptions.verbatimModuleSyntax // empty' "$f" 2>/dev/null || true)
    echo "$f: importsNotUsedAsValues=${ina:-(absent)}, verbatimModuleSyntax=${vms:-(absent)}"
  done
packages/azure-openai-adapter/src/client.ts (1)

104-105: Confirm the semantics of llmType: 'openai'

The Azure adapter sets llmType to 'openai'; if downstream code routes or aggregates by this field, Azure may be miscategorized as the public OpenAI cloud. Confirm what consumers expect; if the field really means "protocol/compatibility layer", consider renaming it to something explicit (e.g. provider: 'azure-openai' | 'openai') or using a dedicated enum.
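One hedged way the suggestion could look (the field and type names are illustrative, not existing ChatLuna code):

// Keep the wire protocol the adapter speaks separate from the actual vendor.
type Provider = 'openai' | 'azure-openai'

interface ClientMeta {
    /** Compatibility layer / wire protocol. */
    llmType: 'openai'
    /** Actual vendor, safe to use for routing and statistics. */
    provider: Provider
}

const azureMeta: ClientMeta = { llmType: 'openai', provider: 'azure-openai' }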

packages/openai-adapter/src/client.ts (1)

48-51: Passing RunnableConfig through to the model refresh: LGTM
Aligned with the requester's new signature, which enables cancellation and timeout control.
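A minimal sketch of how a caller might thread cancellation through such a config (the listModels helper and URL are invented; only the optional signal field on RunnableConfig is assumed):

import type { RunnableConfig } from '@langchain/core/runnables'

// Hypothetical downstream call that honours config.signal while fetching.
async function listModels(url: string, config?: RunnableConfig): Promise<string[]> {
    const res = await fetch(url, { signal: config?.signal })
    if (!res.ok) throw new Error(`HTTP ${res.status}`)
    return (await res.json()) as string[]
}

const controller = new AbortController()
// Cancel the refresh after 10 seconds; the rejection surfaces as an AbortError.
setTimeout(() => controller.abort(), 10_000)
void listModels('https://example.invalid/v1/models', { signal: controller.signal })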

packages/deepseek-adapter/src/client.ts (1)

47-51: Passing RunnableConfig through to the model refresh: LGTM
Aligned with the requester's new signature.

packages/openai-like-adapter/src/requester.ts (1)

116-126: Approved — getModels forwards RunnableConfig and stays backward compatible

Call sites were verified via rg output; some pass config and some do not. Because the parameter is optional (config?: RunnableConfig), the signature is backward compatible, the change is safe, and it is approved for merge.

packages/core/src/middlewares/model/resolve_model.ts (1)

33-50: Stopping the chain immediately for unavailable rooms, plus the improved prompt: LGTM

The new prompt and fail-fast behavior are safer and consistent with fixConversationRoomAvailability now returning a boolean.

packages/gemini-adapter/src/client.ts (1)

49-101: Cancellation signal and error propagation added to refreshModels: LGTM

Consistent with the RunnableConfig passthrough in the shared requester/platform layer. Please also confirm that the requester's getModels parameter is optional (see the comment on that file).

packages/gemini-adapter/src/requester.ts (1)

646-653: GET wrapper: header merge order is sound, LGTM

Defaults come first, so the caller can override them.
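A small, self-contained illustration of that merge order using the standard Headers API (the header names are examples only):

// Defaults first, caller-supplied headers last, so the later entries win.
function mergeHeaders(
    defaults: Record<string, string>,
    overrides?: HeadersInit
): Headers {
    const merged = new Headers(defaults)
    if (overrides) {
        new Headers(overrides).forEach((value, key) => merged.set(key, value))
    }
    return merged
}

const headers = mergeHeaders(
    { 'Content-Type': 'application/json', 'X-Client': 'chatluna' },
    { 'X-Client': 'custom' }
)
// headers.get('X-Client') === 'custom' — the caller's override took effect.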

packages/core/src/chains/rooms.ts (1)

121-125: Consistent use of .value unwrapping: LGTM

Unwrapping getModels(...).value uniformly matches the project-wide reactive refactor.

Also applies to: 153-157

packages/core/src/middlewares/room/room_info.ts (1)

46-51: The i18n argument count changed — update all locale strings accordingly

.room_model now receives two interpolation arguments, [model, available]. It is already called with two arguments in packages/core/src/middlewares/room/list_room.ts (lines 91–93) and packages/core/src/middlewares/room/room_info.ts (lines 47–49). Make sure every locale string contains both placeholders, otherwise they will be misaligned or render incorrectly.
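A hedged sketch of what the two-argument call implies for locale templates (the key name and the positional {0}/{1} placeholder syntax are assumptions for illustration, not the exact locale format):

// Each locale string must now consume both interpolation arguments: [model, available].
const enUS = {
    room_model: 'Model: {0} (available: {1})'
}

// Tiny formatter matching the assumed positional syntax.
function format(template: string, args: unknown[]): string {
    return template.replace(/\{(\d+)\}/g, (_, i) => String(args[Number(i)]))
}

format(enUS.room_model, ['gpt-4o', true]) // "Model: gpt-4o (available: true)"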

packages/core/src/llm-core/platform/client.ts (2)

92-118: Model refresh and fault-tolerance rework: LGTM

Re-throwing ABORTED directly while other errors clear the cache and return an empty list is reasonable semantics.
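A compact sketch of that error policy (the error class and cache field below are simplified stand-ins for the real client):

class ChatLunaError extends Error {
    constructor(public errorCode: 'ABORTED' | 'MODEL_INIT_ERROR') {
        super(errorCode)
    }
}

class ModelCache {
    private models: string[] = []

    async refresh(fetchModels: () => Promise<string[]>): Promise<string[]> {
        try {
            this.models = await fetchModels()
            return this.models
        } catch (e) {
            // Cancellation is not a platform failure: let the caller see it.
            if (e instanceof ChatLunaError && e.errorCode === 'ABORTED') throw e
            // Any other failure: drop stale entries and degrade to "no models".
            this.models = []
            return []
        }
    }
}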


120-125: init delegating to getModels for consistency

Focusing initialization on the refreshModels path keeps the interface clean.

Please confirm that every concrete platform client has updated its refreshModels(config?: RunnableConfig) signature accordingly and forwards config correctly in its implementation.

packages/core/src/llm-core/chat/app.ts (1)

162-168: Log message tweak: LGTM

Consistently capitalizing the first letter reads better.

packages/core/src/llm-core/platform/service.ts (2)

239-246: getClient returning a ComputedRef: LGTM

Lazy creation plus reactive reads, consistent with the direction of the global migration.
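A minimal sketch of the lazy-create-plus-reactive-read idea using @vue/reactivity (the client registry shape below is invented for illustration, not the actual PlatformService):

import { computed, reactive, type ComputedRef } from '@vue/reactivity'

interface Client { platform: string; available: boolean }

// Invented registry: platform name -> created client.
const clientPool = reactive(new Map<string, Client>())

// Creation happens on demand, outside the computed ...
function ensureClient(platform: string): void {
    if (!clientPool.has(platform)) {
        clientPool.set(platform, { platform, available: true })
    }
}

// ... while consumers hold a reactive view that tracks later registry changes.
function getClient(platform: string): ComputedRef<Client | undefined> {
    ensureClient(platform)
    return computed(() => clientPool.get(platform))
}

const openai = getClient('openai')
console.log(openai.value?.available) // true
clientPool.delete('openai')
console.log(openai.value) // undefined — the computed tracked the registry change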


142-160: Numeric-aware sorting plus type filtering: LGTM

localeCompare with { numeric: true } combined with the type filter works well.
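For reference, a tiny example of what numeric-aware ordering buys over plain lexicographic sorting (the names are made up):

const names = ['model-10', 'model-2', 'model-1']

// Plain sort() would yield ['model-1', 'model-10', 'model-2'].
const sorted = [...names].sort((a, b) =>
    a.localeCompare(b, undefined, { numeric: true })
)
console.log(sorted) // ['model-1', 'model-2', 'model-10']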

Enhance type safety and fix potential null pointer issues across core components and reactive system integration.

- Add proper null checks and optional types for config, model info, and chat components
- Fix ChatInterface lifecycle management with proper dispose handlers
- Improve reactive watch conditions to prevent premature resolution
- Enhance fetch parameter handling in ModelRequester to avoid method conflicts
- Add effect-wrapped watch calls for proper cleanup in ChatInterface
- Fix retry logic to handle undefined config gracefully
- Update Gemini adapter parameter type consistency

These changes ensure better stability when components are disposed or when models become unavailable during reactive updates.
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
packages/core/src/chains/rooms.ts (2)

151-157: Missing fallback validation for invalid models; null may be passed into getModels() and throw

The result of parseRawModelName(room.model) is used without validation; when platformName/modelName is empty, null/undefined is passed into getModels(). Fall back to the template model when parsing fails.

Apply this patch:

-    const [platformName, modelName] = parseRawModelName(room.model)
+    if (!room.model) {
+        // Fall back to the template model
+        room.model = (await getTemplateConversationRoom(ctx, config)).model
+    }
+    const [platformName, modelName] = parseRawModelName(room.model)
+    if (!platformName || !modelName) {
+        // Could not parse the platform/model name; fall back
+        room.model = (await getTemplateConversationRoom(ctx, config)).model
+    }
     const platformModels = platformService.getModels(
         platformName,
         ModelType.llm
     ).value

204-213: Logic error: after determining the model is "not included", .find(=== modelName) is still used, yielding undefined.name

The branch is entered when the target model does not exist, yet the code then looks up a model with that same name and reads .name, which will throw. Pick an available fallback model instead.

Apply this patch:

-            } else if (
-                !platformModels.some((model) => model.name === modelName)
-            ) {
-                const model =
-                    platformName +
-                    '/' +
-                    platformModels.find((model) => model.name === modelName)
-                        .name
-
-                config.defaultModel = model
-            }
+            } else if (!platformModels.some((m) => m.name === modelName)) {
+                const fallback =
+                    platformModels.find((m) => m.name.includes('4o')) ??
+                    platformModels[0]
+                config.defaultModel = `${platformName}/${fallback.name}`
+            }
packages/core/src/services/chat.ts (1)

1000-1014: Requests are not terminated correctly during per-platform disposal: _requestIdMap is accessed by conversationId (key mismatch)

_requestIdMap is keyed by requestId, so looking it up by conversationId fails and platform-scoped requests are never aborted on disposal.

Apply the following fix (add requestId-to-platform/conversation mappings and abort by platform on disposal):

@@
-            this._requestIdMap.set(requestId, abortController)
+            this._requestIdMap.set(requestId, abortController)
+            this._requestIdToPlatform.set(requestId, platform)
+            this._requestIdToConversationId.set(requestId, conversationId)
@@
-            this._requestIdMap.delete(requestId)
+            this._requestIdMap.delete(requestId)
+            this._requestIdToPlatform.delete(requestId)
+            this._requestIdToConversationId.delete(requestId)
@@
         if (!platform) {
             // Clean up all resources
             this._conversations.clear()
             this._requestIdMap.clear()
             this._platformToConversations.clear()
+            this._requestIdToPlatform.clear()
+            this._requestIdToConversationId.clear()
             return
         }
@@
-        // Clean up resources for specific platform
-        const conversationIds = this._platformToConversations.get(platform)
-        if (!conversationIds?.length) return
-
-        for (const conversationId of conversationIds) {
-            this._conversations.delete(conversationId)
-            // Terminate platform-related requests
-            const controller = this._requestIdMap.get(conversationId)
-            if (controller) {
-                controller.abort(
-                    new ChatLunaError(
-                        ChatLunaErrorCode.ABORTED,
-                        undefined,
-                        true
-                    )
-                )
-                this._requestIdMap.delete(conversationId)
-            }
-        }
-
-        this._platformToConversations.delete(platform)
+        // Clean up resources for specific platform
+        const conversationIds = this._platformToConversations.get(platform)
+        if (conversationIds?.length) {
+            for (const conversationId of conversationIds) {
+                this._conversations.delete(conversationId)
+            }
+            this._platformToConversations.delete(platform)
+        }
+
+        for (const [requestId, controller] of this._requestIdMap.entries()) {
+            if (this._requestIdToPlatform.get(requestId) === platform) {
+                controller.abort(
+                    new ChatLunaError(ChatLunaErrorCode.ABORTED, undefined, true)
+                )
+                this._requestIdMap.delete(requestId)
+                this._requestIdToPlatform.delete(requestId)
+                this._requestIdToConversationId.delete(requestId)
+            }
+        }

Add the fields on the class (insert at an appropriate location):

// New: reverse indexes keyed by request
private _requestIdToPlatform: Map<string, string> = new Map()
private _requestIdToConversationId: Map<string, string> = new Map()

Also applies to: 1016-1037, 853-855, 905-909

♻️ Duplicate comments (5)
packages/core/src/chains/rooms.ts (1)

199-201: One remaining misuse of models[0] (should be models.value[0])

In the reactive setup this returns undefined, so the default-model fallback silently stops working.

Apply this patch:

-                    models.value.find((model) => model.includes('4o')) ??
-                    models[0]
+                    models.value.find((m) => m.includes('4o')) ??
+                    models.value[0]
packages/core/src/llm-core/chat/app.ts (3)

182-187: Inconsistent nullable ModelInfo types (null vs undefined); standardize on null

service.getModelInfo() returns ComputedRef&lt;ModelInfo | null&gt;, but this declaration uses | undefined and needs null checks everywhere. Standardizing on null reduces ambiguity.

@@
-        let llm: ComputedRef<ChatLunaChatModel>
-
-        let modelInfo: ComputedRef<ModelInfo>
+        let llm: ComputedRef<ChatLunaChatModel>
+        let modelInfo: ComputedRef<ModelInfo | null>
@@
-    ): Promise<
-        [ComputedRef<ChatLunaChatModel>, ComputedRef<ModelInfo | undefined>]
-    > {
+    ): Promise<
+        [ComputedRef<ChatLunaChatModel>, ComputedRef<ModelInfo | null>]
+    > {

(createChain already uses modelInfo?.value != null, so no further change is needed there.)

Also applies to: 235-246, 379-397


331-340: Critical: embeddings are parsed before the null check, so this still throws early (regression)

parseRawModelName(this._input.embeddings) runs before the emptiness check, so an empty embeddings value or '无' throws MODEL_NOT_FOUND immediately. Do the null check first, then parse.

Apply this fix:

-    private async _initEmbeddings(service: PlatformService) {
-        const [platform, modelName] = parseRawModelName(this._input.embeddings)
-
-        if (
-            this._input.embeddings == null ||
-            this._input.embeddings.length < 1 ||
-            this._input.embeddings === '无'
-        ) {
-            return computed(() => emptyEmbeddings)
-        }
-
-        const clientRef = await service.getClient(platform)
+    private async _initEmbeddings(service: PlatformService) {
+        if (
+            this._input.embeddings == null ||
+            this._input.embeddings.length < 1 ||
+            this._input.embeddings === '无'
+        ) {
+            return computed(() => emptyEmbeddings)
+        }
+
+        const [platform, modelName] = parseRawModelName(this._input.embeddings)
+        const clientRef = await service.getClient(platform)

Also applies to: 342-345


251-266: watch is not tied to the instance lifecycle; the chain can still be rebuilt after delete (leak/race)

The watch is bound via ctx.effect to context disposal, not to the ChatInterface instance. After delete(), a change to llm/embeddings will still trigger a chain rebuild.

Store the stop handles and stop them explicitly in delete():

@@
-    private _chain: ChatLunaLLMChainWrapper | undefined
-    private _embeddings: Embeddings | undefined
+    private _chain: ChatLunaLLMChainWrapper | undefined
+    private _embeddings: Embeddings | undefined
+    private _stopWatchLlm?: () => void
+    private _stopWatchEmb?: () => void
@@
-        this.ctx.effect(() =>
-            watch(llm, (newValue: ChatLunaChatModel | undefined) => {
-                if (newValue == null) {
-                    this._chain = undefined
-                    return
-                }
-                this._chain = createChain()
-            })
-        )
+        this._stopWatchLlm = watch(
+            llm,
+            (newValue: ChatLunaChatModel | undefined) => {
+                if (newValue == null) {
+                    this._chain = undefined
+                    return
+                }
+                this._chain = createChain()
+            }
+        )
+        this.ctx.effect(() => this._stopWatchLlm!)
@@
-        this.ctx.effect(() =>
-            watch(embeddings, (newValue: Embeddings | undefined) => {
-                this._embeddings = newValue
-                this._chain = createChain()
-            })
-        )
+        this._stopWatchEmb = watch(
+            embeddings,
+            (newValue: Embeddings | undefined) => {
+                this._embeddings = newValue
+                this._chain = createChain()
+            }
+        )
+        this.ctx.effect(() => this._stopWatchEmb!)
@@
     async delete(ctx: Context, room: ConversationRoom): Promise<void> {
         await this.clearChatHistory()
+        this._stopWatchLlm?.()
+        this._stopWatchEmb?.()
+        this._stopWatchLlm = undefined
+        this._stopWatchEmb = undefined
         this._chain = undefined

Also applies to: 287-296, 36-38

packages/core/src/llm-core/platform/api.ts (1)

187-205: GET: merge order is fixed; additionally, normalize HeadersInit

  • 👍 The merge order is now built-ins first, then initHeaders, then the caller's headers, so method/headers are no longer clobbered by params. (Carries over the earlier suggestion.)
  • As above, normalize with new Headers() to improve compatibility with all HeadersInit shapes.

Apply this patch:

-        // eslint-disable-next-line @typescript-eslint/no-unused-vars
-        const { headers: initHeaders, method: _m, ...rest } = params
-        return this._plugin.fetch(requestUrl, {
-            ...rest,
-            method: 'GET',
-            headers: {
-                ...this.buildHeaders(),
-                ...(initHeaders as Record<string, string> | undefined),
-                ...headers
-            }
-        })
+        // eslint-disable-next-line @typescript-eslint/no-unused-vars
+        const { headers: initHeaders, method: _m, ...rest } = params
+        const mergedHeaders = new Headers(this.buildHeaders())
+        if (initHeaders) {
+            new Headers(initHeaders).forEach((v, k) => mergedHeaders.set(k, v))
+        }
+        if (headers) {
+            Object.entries(headers).forEach(([k, v]) => mergedHeaders.set(k, v))
+        }
+        return this._plugin.fetch(requestUrl, {
+            ...rest,
+            method: 'GET',
+            headers: mergedHeaders
+        })
🧹 Nitpick comments (6)
packages/core/src/llm-core/chat/app.ts (2)

279-281: The embeddings getter may return undefined (runtime risk)

The field is nullable but the getter's declared type is not. Fall back to emptyEmbeddings:

-    get embeddings(): Embeddings {
-        return this._embeddings
-    }
+    get embeddings(): Embeddings {
+        return this._embeddings ?? emptyEmbeddings
+    }

46-49: ctx.on('dispose') only clears fields but does not stop the reactive side effects

Only _chain/_embeddings are nulled out; the watches are never stopped. Per the suggestion above, also call the stored stop handles here.

 ctx.on('dispose', () => {
-    this._chain = undefined
-    this._embeddings = undefined
+    this._stopWatchLlm?.()
+    this._stopWatchEmb?.()
+    this._stopWatchLlm = undefined
+    this._stopWatchEmb = undefined
+    this._chain = undefined
+    this._embeddings = undefined
 })
packages/core/src/services/chat.ts (3)

114-121: awaitLoadPlatform's initial fast path lacks a null guard

models.value may be undefined. Use a null-safe check, consistent with the watch below.

-        if (models.value.length > 0) {
+        if ((models.value?.length ?? 0) > 0) {
             resolve()
             return promise
         }

262-270: Type safety: createChatModel should verify the instance type

The current cast to ChatLunaChatModel is unsafe; the platform may return an embeddings model instead.

         return computed(() => {
             if (client.value == null) {
                 return undefined
             }
-            return client.value.createModel(model) as ChatLunaChatModel
+            const created = client.value.createModel(model)
+            return created instanceof ChatLunaChatModel ? created : undefined
         })

272-280: API semantics suggestion: createEmbeddings always has a fallback, so the return type can be narrowed

The implementation always returns a valid Embeddings (or emptyEmbeddings); the signature does not need | undefined.

-    ): Promise<ComputedRef<Embeddings | undefined>>
+    ): Promise<ComputedRef<Embeddings>>
@@
-    ): Promise<ComputedRef<Embeddings | undefined>>
+    ): Promise<ComputedRef<Embeddings>>

(The implementation already matches these semantics, so no body changes are needed. If this is a breaking change for external consumers, note it in the changelog.)

Also applies to: 281-311

packages/core/src/llm-core/platform/api.ts (1)

174-185: POST: header merging is not robust against all HeadersInit shapes (normalize consistently)

params.headers is currently cast to Record&lt;string,string&gt; and spread; if the caller passes a Headers object or string[][], the spread loses its contents. Normalize with new Headers() to avoid dropping headers.

Apply this patch:

-        // eslint-disable-next-line @typescript-eslint/no-unused-vars
-        const { headers: initHeaders, method: _m, ...rest } = params
-        return this._plugin.fetch(requestUrl, {
-            ...rest,
-            method: 'POST',
-            headers: {
-                ...this.buildHeaders(),
-                ...(initHeaders as Record<string, string> | undefined)
-            },
-            body
-        })
+        // eslint-disable-next-line @typescript-eslint/no-unused-vars
+        const { headers: initHeaders, method: _m, ...rest } = params
+        const mergedHeaders = new Headers(this.buildHeaders())
+        if (initHeaders) {
+            new Headers(initHeaders).forEach((v, k) => mergedHeaders.set(k, v))
+        }
+        return this._plugin.fetch(requestUrl, {
+            ...rest,
+            method: 'POST',
+            headers: mergedHeaders,
+            body
+        })

Also, avoid mutating the data argument in place (a side effect): copy first, then strip undefined values (out of scope for this change; for reference only):

const payload = Object.fromEntries(
  Object.entries(data ?? {}).filter(([, v]) => v !== undefined)
)
const body = JSON.stringify(payload)
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between af3fe8a and 6a845a7.

📒 Files selected for processing (6)
  • packages/core/src/chains/rooms.ts (4 hunks)
  • packages/core/src/llm-core/chat/app.ts (7 hunks)
  • packages/core/src/llm-core/platform/api.ts (3 hunks)
  • packages/core/src/llm-core/platform/client.ts (4 hunks)
  • packages/core/src/services/chat.ts (14 hunks)
  • packages/gemini-adapter/src/requester.ts (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/gemini-adapter/src/requester.ts
🧰 Additional context used
🧬 Code graph analysis (5)
packages/core/src/llm-core/platform/client.ts (3)
packages/azure-openai-adapter/src/client.ts (1)
  • config (31-33)
packages/core/src/utils/error.ts (1)
  • ChatLunaError (13-41)
packages/core/src/llm-core/platform/types.ts (1)
  • ModelInfo (81-86)
packages/core/src/chains/rooms.ts (2)
packages/core/src/services/types.ts (1)
  • Context (21-23)
packages/core/src/llm-core/platform/client.ts (1)
  • config (90-92)
packages/core/src/llm-core/platform/api.ts (1)
packages/core/src/utils/error.ts (1)
  • ChatLunaError (13-41)
packages/core/src/services/chat.ts (5)
packages/core/src/services/types.ts (1)
  • Context (21-23)
packages/core/src/llm-core/utils/count_tokens.ts (1)
  • parseRawModelName (194-199)
packages/core/src/llm-core/model/in_memory.ts (1)
  • emptyEmbeddings (40-40)
packages/core/src/llm-core/platform/client.ts (1)
  • config (90-92)
packages/core/src/utils/error.ts (1)
  • ChatLunaError (13-41)
packages/core/src/llm-core/chat/app.ts (3)
packages/core/src/llm-core/platform/model.ts (3)
  • ChatLunaChatModel (96-572)
  • modelInfo (557-559)
  • modelName (553-555)
packages/core/src/llm-core/utils/count_tokens.ts (1)
  • parseRawModelName (194-199)
packages/core/src/llm-core/model/in_memory.ts (1)
  • emptyEmbeddings (40-40)
🔇 Additional comments (10)
packages/core/src/chains/rooms.ts (4)

121-126: Correct: reading .value off the reactive result

Unwrapping getModels(...).value is right and avoids treating the ComputedRef as an array.


179-185: Fetching the full model list centrally and reading it via .value: OK

Preferring a model whose name contains "4o", otherwise falling back to the first entry, matches expectations.


192-196: Correct: the platform model list is unwrapped through the reactive interface

getModels(...).value is used consistently.


171-173: Signature changed to a Promise — confirmed that callers use the return value

A repository search shows a single call site: packages/core/src/middlewares/model/resolve_model.ts:38, called as const success = await fixConversationRoomAvailability(...). The function is defined in packages/core/src/chains/rooms.ts. No further changes are needed.

packages/core/src/services/chat.ts (1)

734-736: Confirmed that addRenderer returns a cleanup function — no change needed

A search confirms that in packages/core/src/render.ts the signature is public addRenderer(...): () => void (see lines 85–88), so the current this.ctx.effect(() => this.ctx.chatluna.renderer.addRenderer(name, renderer)) automatically uses the returned cleanup function as the effect's disposer; no change is required.
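A small self-contained sketch of that register-returns-disposer pattern (the registry and effect helper below are illustrative, not the Koishi/cordis implementation):

type Renderer = (content: string) => string
type Disposer = () => void

class RendererRegistry {
    private renderers = new Map<string, Renderer>()

    // Registration hands back its own undo, so callers never unregister by hand.
    addRenderer(name: string, renderer: Renderer): Disposer {
        this.renderers.set(name, renderer)
        return () => this.renderers.delete(name)
    }
}

// Minimal stand-in for ctx.effect: run the setup now, keep the disposer for teardown.
class Scope {
    private disposers: Disposer[] = []
    effect(setup: () => Disposer): void {
        this.disposers.push(setup())
    }
    dispose(): void {
        this.disposers.forEach((dispose) => dispose())
        this.disposers = []
    }
}

const scope = new Scope()
const registry = new RendererRegistry()
scope.effect(() => registry.addRenderer('markdown', (s) => s))
scope.dispose() // the 'markdown' renderer is removed automatically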

packages/core/src/llm-core/platform/api.ts (2)

11-14: Error type imports: good consistency

The new imports of ChatLunaError/ChatLunaErrorCode match their subsequent usage; no issues.


123-135: Fix the AbortError type narrowing in the catch branch and correct the comment

Reason: reading e.name directly is unsafe when useUnknownInCatchVariables is enabled; also handle undici's ABORT_ERR code; and the comment should match the actual behavior (re-throwing upward).
File: packages/core/src/llm-core/platform/api.ts (around lines 123–135)

-            if (
-                (e instanceof ChatLunaError &&
-                    (e.errorCode === ChatLunaErrorCode.NETWORK_ERROR ||
-                        e.errorCode === ChatLunaErrorCode.API_REQUEST_TIMEOUT ||
-                        e.errorCode === ChatLunaErrorCode.ABORTED ||
-                        e.errorCode ===
-                            ChatLunaErrorCode.API_UNSAFE_CONTENT)) ||
-                e.name === 'AbortError'
-            ) {
-                // Ignore network errors
-                throw e
-            }
+            // Ignorable/aborted errors that should not count toward error stats: re-throw upward
+            if (
+                (e instanceof ChatLunaError &&
+                    (e.errorCode === ChatLunaErrorCode.NETWORK_ERROR ||
+                        e.errorCode === ChatLunaErrorCode.API_REQUEST_TIMEOUT ||
+                        e.errorCode === ChatLunaErrorCode.ABORTED ||
+                        e.errorCode === ChatLunaErrorCode.API_UNSAFE_CONTENT)) ||
+                (e instanceof Error &&
+                    (e.name === 'AbortError' ||
+                        // undici/WHATWG may use the ABORT_ERR code
+                        (e as any).code === 'ABORT_ERR'))
+            ) {
+                throw e
+            }

Please confirm whether the repository enables useUnknownInCatchVariables (if so, the type guards above are necessary).

packages/core/src/llm-core/platform/client.ts (3)

101-119: getModels error propagation and cleanup logic are sound

  • ABORTED is re-thrown directly;
  • other errors are logged, the cache is cleared, and an empty list is returned, as expected.

122-127: init/refreshModels uniformly pass RunnableConfig: good consistency

Matches the callers' new config flow; no issues.


90-92: After making this.config optional (T | undefined), downstream call sites need verification

The getter changed from T to T | undefined, so direct member access may dereference undefined. The automated search in the sandbox could not traverse the repository (rg reported "No files were searched"), so it cannot confirm whether any call sites skip optional chaining.

Action: run the following locally and paste the output, or manually confirm that every this.config.xxx access uses optional chaining or an explicit null check:

# List every occurrence of this.config
rg -n -uu --hidden --no-ignore -g '!**/dist/**' -P '\bthis\.config' -C2

# Optional-chaining form (this.config?.foo)
rg -n -uu --hidden --no-ignore -g '!**/dist/**' -P '\bthis\.config\?\.' -C2

# Direct dot access (this.config.foo)
rg -n -uu --hidden --no-ignore -g '!**/dist/**' -P '\bthis\.config\.[A-Za-z_]\w*' -C2

# Direct dot access with optional-chaining entries filtered out (quick comparison)
rg -n -uu --hidden --no-ignore -g '!**/dist/**' -P '\bthis\.config\.[A-Za-z_]\w*' -C2 | rg -v '\bthis\.config\?\.'

Close this item once you confirm there is no unguarded direct access, or that explicit null checks are in place.

Fix resource cleanup and add platform availability validation to prevent runtime errors.

- Add unlock() call before throwing aborted errors in BasePlatformClient to prevent deadlocks
- Add platform availability checks in ChatInterfaceWrapper before accessing client config
- Wait for platform loading if client is not initially available
- Throw descriptive error when platform remains unavailable after loading attempt
- Remove redundant request termination logic in platform disposal

These changes prevent lock leaks during abort scenarios and ensure platform clients are properly loaded before use.
Remove redundant request termination code from platform-specific disposal to avoid duplicate cleanup.

- Remove manual request controller abortion in platform disposal
- Simplify conversation cleanup to only remove conversation records
- Request termination is already handled by the global dispose method

This prevents potential double-cleanup issues and simplifies the disposal flow.
@dingyi222666 dingyi222666 force-pushed the refactor/computed-system branch from 59d0a3b to 4771480 on September 17, 2025 00:30
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
packages/core/src/llm-core/platform/client.ts (1)

64-75: After switching configs, the model instance cache should be cleared to avoid reusing stale credentials/endpoints

Refreshing only the config and _modelInfos without clearing _modelPool means createModel() keeps returning old instances, producing the "config refreshed but still unavailable / still hitting the old route" symptom.

                     this.configPool.markConfigStatus(oldConfig.value, false)
                     this.ctx.logger.error(e)
                     if (this.configPool.findAvailableConfig() !== null) {
-                        retryCount = 0
+                        // Switched to a new config; clear stale caches so old instances are not reused
+                        this._modelPool = {}
+                        this._modelInfos = {}
+                        retryCount = 0
                         continue
                     }
packages/core/src/services/chat.ts (1)

114-151: awaitLoadPlatform: guard against premature resolution and resource leaks (tighten the watch and timeout cleanup)

  • The early check uses models.value.length, which can throw when value is undefined.
  • The watch hangs off ctx.effect but is never stopped after the Promise resolves/rejects, so the lingering listener costs memory/CPU; it is not stopped on timeout either.
  • Watching a ComputedRef generally does not need { deep: true }.

Hold the stop handle explicitly and call it on resolve/reject, add the null-safe check, and drop deep:

-        if (models.value.length > 0) {
+        if ((models.value?.length ?? 0) > 0) {
             resolve()
             return promise
         }
 ...
-        const timeoutId = this.ctx.setTimeout(() => {
-            reject(timeoutError)
-        }, timeout)
-
-        this[Context.origin].effect(() =>
-            watch(
-                models,
-                () => {
-                    if ((models.value?.length ?? 0) > 0) {
-                        resolve()
-                        timeoutId()
-                    }
-                },
-                { deep: true }
-            )
-        )
+        let stop: (() => void) | undefined
+        const timeoutDispose = this.ctx.setTimeout(() => {
+            stop?.()
+            reject(timeoutError)
+        }, timeout)
+
+        stop = watch(models, () => {
+            if ((models.value?.length ?? 0) > 0) {
+                resolve()
+                timeoutDispose()
+                stop?.()
+            }
+        })
+        // Bind to the lifecycle so ctx.dispose also cleans this up
+        this[Context.origin].effect(() => stop!)
♻️ Duplicate comments (1)
packages/core/src/llm-core/platform/client.ts (1)

47-50: maxRetries is not recomputed after a config refresh; the meaning of 0 attempts is unclear; the loop condition has a redundant null check

  • maxRetries is not updated after the config refresh, so the retry count may still follow the old config (raised previously).
  • A config value of exactly 0 means "never try at all" — is that intended? If not, clamp to at least 1.
  • The ?? 1 in while (retryCount < (maxRetries ?? 1)) is redundant.
-        const maxRetries = this.config?.maxRetries ?? 1
+        let maxRetries = Math.max(1, this.config?.maxRetries ?? 1)
@@
-        while (retryCount < (maxRetries ?? 1)) {
+        while (retryCount < maxRetries) {
@@
-                if (retryCount === maxRetries - 1) {
+                if (retryCount === maxRetries - 1) {
                     const oldConfig = this.configPool.getConfig(true)
                     // refresh
                     this.configPool.getConfig(false)
                     this.configPool.markConfigStatus(oldConfig.value, false)
                     this.ctx.logger.error(e)
                     if (this.configPool.findAvailableConfig() !== null) {
                         retryCount = 0
+                        // Config switched; recompute the retry cap from the new config
+                        maxRetries = Math.max(1, this.config?.maxRetries ?? 1)
                         continue
                     }

Note: if "0 means don't try at all" is intentional, drop the Math.max(1, …) clamp, return the value as-is, and document that semantic explicitly in a comment.

Also applies to: 63-75

🧹 Nitpick comments (7)
packages/core/src/llm-core/platform/client.ts (2)

38-41: Don't ignore the RunnableConfig passed in (the cache short-circuit should be conditional)

Currently a non-empty _modelInfos returns true immediately, ignoring any cancellation/timeout config the caller passed. Take the cache short-circuit only when no config is provided.

-        if (Object.values(this._modelInfos).length > 0) {
-            return true
-        }
+        if (Object.values(this._modelInfos).length > 0 && !config) {
+            return true
+        }

106-109: Simplify the map construction with fromEntries (optional)

-            this._modelInfos = {}
-
-            for (const model of models) {
-                this._modelInfos[model.name] = model
-            }
+            this._modelInfos = Object.fromEntries(
+                models.map((m) => [m.name, m] as const)
+            )
packages/core/src/services/chat.ts (5)

255-270: createChatModel: add a runtime type guard before returning instead of the unsafe cast

The current as ChatLunaChatModel cast can cause runtime errors for callers when the name actually refers to an embeddings model. Add an instanceof check and return undefined on a mismatch (consistent with the function signature):

-        return computed(() => {
-            if (client.value == null) {
-                return undefined
-            }
-            return client.value.createModel(model) as ChatLunaChatModel
-        })
+        return computed(() => {
+            if (client.value == null) return undefined
+            const m = client.value.createModel(model!)
+            if (m instanceof ChatLunaChatModel) return m
+            this.ctx.logger.warn(
+                `The model ${model} is not a chat model, returning undefined`
+            )
+            return undefined
+        })

291-310: Minor log wording fix (non-functional): more natural English phrasing

-                    this.ctx.logger.warn(
-                        `The platform ${platformName} no available`
-                    )
+                    this.ctx.logger.warn(
+                        `The platform ${platformName} is not available`
+                    )
...
-            this.ctx.logger.warn(
-                `The model ${modelName} is not embeddings, return empty embeddings`
-            )
+            this.ctx.logger.warn(
+                `The model ${modelName} is not an embeddings model, returning empty embeddings`
+            )

838-850: Fallback handling when the concurrency cap's source is unclear

config.concurrentMaxSize comes from the platform client config; if it is missing, concurrency is unbounded or the implementation relies on an implicit default. Merge in the plugin-level default as a fallback:

-            await Promise.all([
+            const maxConcurrent =
+                (config as any)?.concurrentMaxSize ??
+                this._service.config.chatConcurrentMaxSize ??
+                3
+            await Promise.all([
                 this._conversationQueue.wait(conversationId, requestId, 0),
-                this._modelQueue.wait(
-                    platform,
-                    requestId,
-                    config.concurrentMaxSize
-                )
+                this._modelQueue.wait(platform, requestId, maxConcurrent)
             ])

Please confirm whether ClientConfig.concurrentMaxSize is a defined field; if not, adopt the fallback above or add it to the type.


58-60: Convert imports used only for types to type-only imports, reducing runtime dependencies

-import { Renderer } from 'koishi-plugin-chatluna'
-import { Embeddings } from '@langchain/core/embeddings'
-import { RunnableConfig } from '@langchain/core/runnables'
+import type { Renderer } from 'koishi-plugin-chatluna'
+import type { Embeddings } from '@langchain/core/embeddings'
+import type { RunnableConfig } from '@langchain/core/runnables'

272-280: Return type does not match the implementation (optional)

createEmbeddings is typed as ComputedRef&lt;Embeddings | undefined&gt;, but the implementation always returns emptyEmbeddings (never undefined). If there is no compatibility burden, narrow the signature to ComputedRef&lt;Embeddings&gt;; otherwise document the "never undefined" guarantee explicitly.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 59d0a3b and 4771480.

📒 Files selected for processing (2)
  • packages/core/src/llm-core/platform/client.ts (4 hunks)
  • packages/core/src/services/chat.ts (13 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-09-17T00:25:27.172Z
Learnt from: dingyi222666
PR: ChatLunaLab/chatluna#548
File: packages/core/src/llm-core/chat/app.ts:0-0
Timestamp: 2025-09-17T00:25:27.172Z
Learning: In the ChatInterface class, reactive watch calls are already wrapped in ctx.effect() so cleanup happens automatically and memory leaks are avoided. The _chain and _embeddings fields have been updated to nullable types.

Applied to files:

  • packages/core/src/services/chat.ts
📚 Learning: 2025-09-17T00:25:27.172Z
Learnt from: dingyi222666
PR: ChatLunaLab/chatluna#548
File: packages/core/src/llm-core/chat/app.ts:0-0
Timestamp: 2025-09-17T00:25:27.172Z
Learning: In the ChatInterface class, reactive watch calls are wrapped in ctx.effect() so cleanup happens automatically and memory leaks are avoided. The _chain and _embeddings fields were updated to nullable types (| undefined), and a ctx.on('dispose') handler adds an extra cleanup safeguard. Using the Koishi effect system this way is more elegant than manually managing stop handles.

Applied to files:

  • packages/core/src/services/chat.ts
🧬 Code graph analysis (2)
packages/core/src/llm-core/platform/client.ts (2)
packages/azure-openai-adapter/src/client.ts (1)
  • config (31-33)
packages/core/src/utils/error.ts (1)
  • ChatLunaError (13-41)
packages/core/src/services/chat.ts (5)
packages/core/src/services/types.ts (1)
  • Context (21-23)
packages/core/src/llm-core/utils/count_tokens.ts (1)
  • parseRawModelName (194-199)
packages/core/src/llm-core/model/in_memory.ts (1)
  • emptyEmbeddings (40-40)
packages/core/src/llm-core/platform/client.ts (1)
  • config (91-93)
packages/core/src/utils/error.ts (1)
  • ChatLunaError (13-41)
🔇 Additional comments (8)
packages/core/src/llm-core/platform/client.ts (5)

15-19: Importing RunnableConfig/ChatLunaError: reasonable and necessary

The type dependencies match their later usage; no issues.


55-61: The ABORTED branch unlocks before throwing — nice

This removes the deadlock path.
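A minimal sketch of why releasing the lock before re-throwing matters (the lock and error below are simplified stand-ins for the real locking and ChatLunaError types):

class Mutex {
    private locked = false
    async lock(): Promise<void> {
        // Busy-wait sketch; a real mutex would queue waiters.
        while (this.locked) await new Promise((r) => setTimeout(r, 10))
        this.locked = true
    }
    unlock(): void {
        this.locked = false
    }
}

const requestLock = new Mutex()

async function guardedCall<T>(
    run: () => Promise<T>,
    signal: AbortSignal
): Promise<T> {
    await requestLock.lock()
    if (signal.aborted) {
        // Release the lock before propagating the abort, otherwise it leaks
        // and every subsequent caller deadlocks on lock().
        requestLock.unlock()
        throw new Error('ABORTED')
    }
    try {
        return await run()
    } finally {
        requestLock.unlock()
    }
}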


110-120: Clear error-handling policy

  • ABORTED is re-thrown directly;
  • other exceptions are logged, the cache is cleared, and an empty array is returned.

123-128: Forwarding config through init/refreshModels matches the new flow

The interfaces line up and the semantics are clear.


91-93: Keep the optionality of BasePlatformClient.config consistent with subclass signatures

packages/core/src/llm-core/platform/client.ts:91 declares get config(): T | undefined, while packages/azure-openai-adapter/src/client.ts:31 declares get config(): AzureOpenAIClientConfig (non-optional). Check whether the TypeScript compiler reports an incompatible override; if it does, pick one of the following:

  • make the base class non-optional (get config(): T) and guarantee it never returns undefined;
  • or make the subclass optional (get config(): AzureOpenAIClientConfig | undefined) and either validate explicitly at the call sites or use a non-null assertion.
packages/core/src/services/chat.ts (3)

590-596: Plugin lifecycle automation (auto install/uninstall on ready/dispose) is in place

Installing on ctx.on('ready') and uninstalling on ctx.on('dispose') fits this PR's automation goal and prevents resource leaks from forgotten manual steps. LGTM.
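A hedged sketch of that lifecycle shape for a Koishi plugin (the installTools/uninstallTools helpers are placeholders for illustration, not ChatLuna's actual registration methods):

import { Context } from 'koishi'

// Placeholder registration helpers for illustration only.
function installTools(ctx: Context): void {
    ctx.logger('demo').info('tools installed')
}
function uninstallTools(ctx: Context): void {
    ctx.logger('demo').info('tools uninstalled')
}

export function apply(ctx: Context) {
    // Register once the app (and its dependencies) are ready.
    ctx.on('ready', () => installTools(ctx))
    // Mirror the registration on teardown so plugin reloads never leak state.
    ctx.on('dispose', () => uninstallTools(ctx))
}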


925-927: Using an ABORTED error object as the abort reason (with silent logging) is a sound approach

Passing new ChatLunaError(ChatLunaErrorCode.ABORTED, undefined, true) in stopChat and dispose lets upstream code identify aborts precisely while avoiding redundant error logs. LGTM.

Also applies to: 1012-1014


705-737: Confirmed: the register* methods return unregister functions, and ctx.effect cleans them up correctly

Verified in packages/core/src/llm-core/platform/service.ts: registerClient / registerTool / registerVectorStore / registerChatChain each return the corresponding unregister function (e.g. return () => this.unregisterClient(name); the returns are around lines 66, 74, 117, and 133), so the ctx.effect(() => this._platformService.register...) pattern in packages/core/src/services/chat.ts does not leak resources.
