Skip to content

Conversation

@dingyi222666
Copy link
Member

This PR refactors the timeout handling mechanism in model requests to use standard Promise APIs instead of custom utilities, improving code maintainability and following JavaScript best practices.

Key Changes

  • Replace custom utilities with Promise.race: Removed runAsync and withResolver dependencies in favor of native Promise.race for timeout handling
  • Simplify _withTimeout method: Cleaner implementation using standard Promise APIs with proper cleanup in finally blocks
  • Refactor _embeddingWithRetry: Eliminated nested async Promise executor anti-pattern, restructured to use Promise.race directly
  • Improve error handling: Preserved complete error stack traces by pre-creating timeout errors
  • Add stream validation: Added hasChunk validation in _streamResponseChunks to detect empty responses
  • Code cleanup: Removed redundant null checks and unused imports

Technical Details

The refactoring maintains identical functionality and error handling behavior while:

  • Using Promise.race for timeout competition between actual requests and timeout promises
  • Ensuring proper cleanup of timeout timers via finally blocks to prevent memory leaks
  • Preserving error stack traces by throwing and catching timeout errors upfront
  • Following Promise best practices and avoiding common anti-patterns

Testing

All existing functionality remains unchanged. The refactoring is purely structural and does not alter the external API or behavior.

Other Changes

  • Bump version to 1.3.0-alpha.66
  • Update peer dependencies across all adapter packages

- Replace runAsync/withResolver with Promise.race for cleaner async flow
- Simplify _withTimeout method using standard Promise APIs
- Refactor _embeddingWithRetry to remove nested async Promise executor
- Add proper timeout cleanup with finally blocks
- Preserve error stack traces by pre-creating timeout errors
- Add hasChunk validation in _streamResponseChunks
- Remove redundant null checks in _generateWithRetry
- Remove unused imports (runAsync, withResolver)

This improves code maintainability and follows Promise best practices
while maintaining the same functionality and error handling behavior.
@coderabbitai
Copy link
Contributor

coderabbitai bot commented Oct 15, 2025

Caution

Review failed

The pull request is closed.

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

对聊天与嵌入请求的超时与错误处理进行统一改造;流式聊天新增 hasChunk 检测并在无片段时抛出 API_REQUEST_FAILED;嵌入请求改为通过 this.caller.call(makeRequest) 并统一超时/错误封装;OpenAI agent 在输入为 null 时改为返回单个工具化异常响应。

Changes

Cohort / File(s) Change Summary
LLM 平台超时与错误处理重构
packages/core/src/llm-core/platform/model.ts
- 移除对 runAsync/withResolver 的依赖
- 聊天流新增 hasChunk 检测,若无片段抛出 API_REQUEST_FAILED
- 以 _withTimeout + Promise.race 统一超时处理并清理定时器
- 嵌入请求改为通过 this.caller.call(makeRequest) 进行请求与重试
- 统一将非 ChatLunaError 错误封装为 API_REQUEST_FAILED,并新增 timeoutError 流程
OpenAI agent 空输入错误表达改动
packages/core/src/llm-core/agent/openai/index.ts
- 当输入为 null 时,不再返回空数组;改为返回单个工具调用对象:tool: '_Exception', toolInput: 'Something unknown error. Please try again.', log: 'Input is null',将空输入以工具化异常结果表达。

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor Client
  participant ChatModel as ChatModel
  participant Timeout as _withTimeout (Promise.race)
  participant API as Provider API
  participant Errors as Error Mapper

  Client->>ChatModel: streamChat(request)
  ChatModel->>Timeout: run(operation, timeoutMs)
  Timeout->>API: start streaming
  note over ChatModel,API: hasChunk = false
  API-->>ChatModel: onChunk(data)
  ChatModel->>ChatModel: hasChunk = true
  Timeout-->>ChatModel: settled (data or timeout)
  alt timeout
    ChatModel->>Errors: TimeoutError
    Errors-->>Client: throw API_REQUEST_FAILED
  else completed
    alt no chunks produced
      ChatModel->>Errors: No chunks produced
      Errors-->>Client: throw API_REQUEST_FAILED
    else chunks produced
      ChatModel-->>Client: stream results
    end
  end
  note right of Timeout: 清理定时器
Loading
sequenceDiagram
  autonumber
  actor Client
  participant Agent as OpenAI Agent
  participant Tool as ToolInvoker

  Client->>Agent: run(input)
  alt input == null
    Agent->>Tool: return tool {_Exception, "Something unknown error. Please try again."}
    Tool-->>Client: tool-based error result
  else normal
    Agent->>...: normal workflow
  end
Loading

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

小兔敲键整清流,超时赛跑不再愁。
Promise.race 风中舞,定时清理一并收。
若输入空白成疑问,用工具之声来报忧。 🐇✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name Status Explanation
Title Check ✅ Passed 该标题清晰地概括了此次变更的核心内容,即在模型请求中优化超时处理机制,简洁明了且与实际更改高度相关,符合标题应描述主要改动的要求。
Description Check ✅ Passed 该描述详尽地列出了使用 Promise.race 重构超时处理、清理无用依赖、改进错误处理和流验证等关键改动,与实际变更集高度一致且非离题,满足描述相关性的宽松要求。
Docstring Coverage ✅ Passed No functions found in the changes. Docstring coverage check skipped.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5782c80 and 71fe419.

📒 Files selected for processing (1)
  • packages/core/src/llm-core/agent/openai/index.ts (1 hunks)

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@gemini-code-assist
Copy link
Contributor

Summary of Changes

Hello @dingyi222666, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the asynchronous timeout handling mechanism within model requests. By migrating from custom utility functions to standard JavaScript Promise.race and other native Promise APIs, the changes aim to improve code maintainability, readability, and adherence to modern JavaScript best practices. The refactoring ensures identical functionality and error handling behavior, focusing purely on structural improvements without altering external APIs.

Highlights

  • Refactored Timeout Handling: Replaced custom runAsync and withResolver utilities with native Promise.race for managing request timeouts, aligning with modern JavaScript best practices.
  • Simplified _withTimeout Method: The timeout logic is now cleaner, leveraging standard Promise APIs and ensuring proper timer cleanup using finally blocks to prevent memory leaks.
  • Restructured _embeddingWithRetry: Eliminated a nested async Promise executor anti-pattern, directly integrating Promise.race for timeout management and simplifying the overall structure.
  • Enhanced Error Handling: Timeout errors are now pre-created to preserve complete stack traces, significantly improving debugging capabilities.
  • Added Stream Validation: Introduced hasChunk validation in _streamResponseChunks to detect and handle cases where no data chunks are received from a stream, throwing an API_REQUEST_FAILED error.
  • Dependency Updates: Bumped the core koishi-plugin-chatluna version to 1.3.0-alpha.66 and updated its peer dependencies across all adapter packages.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Copy link
Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Code Review

This is an excellent refactoring that significantly improves the timeout handling logic by replacing custom promise utilities with the standard Promise.race API. The changes make the code cleaner, more maintainable, and less prone to errors like un-cleared timeouts. The addition of stream validation to detect empty responses is also a great enhancement for robustness. I have one minor suggestion to further refine the error handling to avoid nesting errors.

Replace silent empty array return with explicit error response when
input is null in the OpenAI agent. This provides better error handling
and debugging information by returning an _Exception tool call with a
descriptive error message instead of silently failing.

This improves error visibility and helps diagnose issues when the
agent receives unexpected null input values.
@dingyi222666 dingyi222666 merged commit 6fb287e into v1-dev Oct 15, 2025
1 of 2 checks passed
@dingyi222666 dingyi222666 deleted the fix/model-request branch October 15, 2025 21:25
Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 3

🧹 Nitpick comments (2)
packages/core/src/llm-core/platform/model.ts (2)

197-198: 空流检测建议补充上下文信息

hasChunk 检测本身合理,但仅抛出 API_REQUEST_FAILED,缺少诊断信息。建议附上模型名/请求信息,便于定位问题。

可应用如下修改:

-        if (!hasChunk) {
-            throw new ChatLunaError(ChatLunaErrorCode.API_REQUEST_FAILED)
-        }
+        if (!hasChunk) {
+            throw new ChatLunaError(
+                ChatLunaErrorCode.API_REQUEST_FAILED,
+                new Error(`Empty stream: no chunks received from model ${this.modelName}`)
+            )
+        }

Also applies to: 247-252


708-733: 去重与健壮性:复用 _withTimeout 或在本地实现中增加超时防护与类型兼容

当前实现与 _withTimeout 重复,且缺少对非正/非法超时值的绕过处理;计时器类型也仅限 Node。

可选方案 A(复用 _withTimeout,最简):

-        const makeRequest = async () => {
-            let timeoutId: NodeJS.Timeout
-
-            const timeoutPromise = new Promise<number[] | number[][]>(
-                // eslint-disable-next-line promise/param-names
-                (_, reject) => {
-                    timeoutId = setTimeout(() => {
-                        reject(timeoutError)
-                    }, request.timeout)
-                }
-            )
-
-            try {
-                const data = await Promise.race([
-                    this._client.embeddings(request),
-                    timeoutPromise
-                ])
-                return data
-            } catch (e) {
-                if (e instanceof ChatLunaError) {
-                    throw e
-                }
-                throw new ChatLunaError(ChatLunaErrorCode.API_REQUEST_FAILED, e)
-            } finally {
-                clearTimeout(timeoutId)
-            }
-        }
+        const makeRequest = async () => {
+            try {
+                // 非正或非法超时:直接请求(不设超时)
+                if (!Number.isFinite(request.timeout) || (request.timeout as number) <= 0) {
+                    return await this._client.embeddings(request)
+                }
+                return await this._withTimeout(
+                    () => this._client.embeddings(request),
+                    request.timeout as number
+                )
+            } catch (e) {
+                if (e instanceof ChatLunaError) throw e
+                throw new ChatLunaError(ChatLunaErrorCode.API_REQUEST_FAILED, e as Error)
+            }
+        }

可选方案 B(保留本地实现,但更稳健):

  • 使用 ReturnType | undefined
  • 非正/非法超时直接 return this._client.embeddings(request)
  • finally 判空 clearTimeout
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e5a020a and 5782c80.

⛔ Files ignored due to path filters (26)
  • packages/adapter-azure-openai/package.json is excluded by !**/*.json
  • packages/adapter-claude/package.json is excluded by !**/*.json
  • packages/adapter-deepseek/package.json is excluded by !**/*.json
  • packages/adapter-dify/package.json is excluded by !**/*.json
  • packages/adapter-doubao/package.json is excluded by !**/*.json
  • packages/adapter-gemini/package.json is excluded by !**/*.json
  • packages/adapter-hunyuan/package.json is excluded by !**/*.json
  • packages/adapter-ollama/package.json is excluded by !**/*.json
  • packages/adapter-openai-like/package.json is excluded by !**/*.json
  • packages/adapter-openai/package.json is excluded by !**/*.json
  • packages/adapter-qwen/package.json is excluded by !**/*.json
  • packages/adapter-rwkv/package.json is excluded by !**/*.json
  • packages/adapter-spark/package.json is excluded by !**/*.json
  • packages/adapter-wenxin/package.json is excluded by !**/*.json
  • packages/adapter-zhipu/package.json is excluded by !**/*.json
  • packages/core/package.json is excluded by !**/*.json
  • packages/extension-long-memory/package.json is excluded by !**/*.json
  • packages/extension-mcp/package.json is excluded by !**/*.json
  • packages/extension-tools/package.json is excluded by !**/*.json
  • packages/extension-variable/package.json is excluded by !**/*.json
  • packages/renderer-image/package.json is excluded by !**/*.json
  • packages/service-embeddings/package.json is excluded by !**/*.json
  • packages/service-image/package.json is excluded by !**/*.json
  • packages/service-search/package.json is excluded by !**/*.json
  • packages/service-vector-store/package.json is excluded by !**/*.json
  • packages/shared-adapter/package.json is excluded by !**/*.json
📒 Files selected for processing (1)
  • packages/core/src/llm-core/platform/model.ts (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
packages/core/src/llm-core/platform/model.ts (1)
packages/core/src/utils/error.ts (1)
  • ChatLunaError (13-41)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: build
  • GitHub Check: lint

Comment on lines +368 to +381
new Promise((resolve, reject) => {
setTimeout(() => {
reject(timeoutError)
}, request.timeout)
}),

new Promise<number[] | number[][]>(
// eslint-disable-next-line no-async-promise-executor
async (resolve, reject) => {
let data: number[] | number[][]

try {
data =
await this._client.embeddings(request)
} catch (e) {
if (e instanceof ChatLunaError) {
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

timeout 为 undefined/<=0 会导致“立即超时”;需加防护并兼容计时器类型

当前 setTimeout(undefined) 等价 0,会立刻 reject,造成所有缺省/0 超时的请求立即失败。另,NodeJS.Timeout 在浏览器环境不兼容。

建议:

  • 对非正/非法超时值直接绕过超时包装,按无限超时执行。
  • 计时器类型改为 ReturnType,并在 finally 中判空清理。
-        let timeoutId: NodeJS.Timeout
+        // 非正或非法超时:不启用超时,直接执行
+        if (!Number.isFinite(timeout) || timeout <= 0) {
+            return await func()
+        }
+        let timeoutId: ReturnType<typeof setTimeout> | undefined

         // eslint-disable-next-line promise/param-names
         const timeoutPromise = new Promise<T>((_, reject) => {
             timeoutId = setTimeout(() => {
                 reject(timeoutError)
             }, timeout)
         })

         try {
             return await Promise.race([func(), timeoutPromise])
         } finally {
-            clearTimeout(timeoutId)
+            if (timeoutId != null) clearTimeout(timeoutId)
         }

为确认所有调用点均传入了有效超时值,可运行以下脚本检查调用与默认值:


🏁 Script executed:

#!/bin/bash
# 查找 Chat/Embeddings 请求中对 timeout 的设置与传递
rg -nP -C3 '\btimeout\s*:' packages | sed -n '1,200p'
echo "---- constructors ----"
rg -nP -C3 'new\s+ChatLunaChatModel|class\s+ChatLunaChatModel' packages
rg -nP -C3 'class\s+ChatLunaEmbeddings|new\s+ChatLunaEmbeddings' packages

Length of output: 32411


🏁 Script executed:

#!/bin/bash
rg -n -C3 "Promise.race" packages/core/src/llm-core/platform/model.ts

Length of output: 405


为 chat & embeddings 调用添加 timeout 非正值兼容及浏览器定时器类型

  • packages/core/src/llm-core/platform/model.ts 中 ChatLunaChatModel(368–381 行)和 ChatLunaEmbeddings(720–724 行)调用处:
    • timeout 非正或非有限,跳过超时包装,直接 await func() 或 embeddings 请求;
    • let timeoutId: NodeJS.Timeout 改为 let timeoutId: ReturnType<typeof setTimeout> | undefined
    • finally 中对 timeoutId 判空后再 clearTimeout(timeoutId)
🤖 Prompt for AI Agents
In packages/core/src/llm-core/platform/model.ts around lines 368–381 (and
similarly update ChatLunaEmbeddings at lines ~720–724), the timeout wrapper
assumes a positive finite timeout and NodeJS timer type; update logic so that if
timeout is not a finite number > 0 you skip the timeout wrapping and simply
await func(), change let timeoutId: NodeJS.Timeout to let timeoutId:
ReturnType<typeof setTimeout> | undefined, assign timeoutId when setTimeout is
called, and in the finally block only call clearTimeout(timeoutId) after
checking timeoutId is not undefined; apply the same pattern to the embeddings
call sites.

Comment on lines +696 to +707
)
}
}

if (data) {
resolve(data)
}

reject(
Error(
`error when calling ${this.modelName} embeddings, Result: ` +
JSON.stringify(data)
)
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

嵌入请求:0/未定义超时将“立即超时”;需与聊天路径一致做防护

request.timeout 通过 null 合并从 this.timeout 兜底,但显式传入 0 时会被当作 0ms 立即失败。通常 0 表示“不超时”。

建议在构造超时前做同样的防护;同时保持跨环境计时器类型:

-        let timeoutError: Error | null = null
+        let timeoutError: Error | null = null
         try {
             throw new ChatLunaError(
                 ChatLunaErrorCode.API_REQUEST_TIMEOUT,
                 new Error(`timeout when calling ${this.modelName} embeddings`),
                 true
             )
         } catch (e) {
             timeoutError = e
         }

并在 makeRequest 内加入防护(见下一段 diff)。

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In packages/core/src/llm-core/platform/model.ts around lines 696-707, the
timeout construction for embedding requests throws an immediate timeout when
request.timeout is explicitly 0; mirror the chat-path guard by computing the
effective timeout as (request.timeout ?? this.timeout) and treat 0 as "no
timeout" (i.e., coalesce 0 to undefined/null) before creating the ChatLunaError;
ensure timer variables use a cross-environment type (e.g., ReturnType<typeof
setTimeout> | null) rather than number, and add the same defensive check inside
makeRequest so an explicitly passed 0 does not cause an immediate timeout.

Comment on lines +736 to 740
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

避免对 ChatLunaError 的二次包装;与变更说明保持一致

外层 catch 会把内部已抛出的 ChatLunaError 再包一层 API_REQUEST_FAILED,导致丢失原始错误码(如 TIMEOUT)并重复日志。这与“仅对非 ChatLunaError 进行包装”的目标不一致。

修改为仅包装非 ChatLunaError:

-        try {
-            return await this.caller.call(makeRequest)
-        } catch (e) {
-            throw new ChatLunaError(ChatLunaErrorCode.API_REQUEST_FAILED, e)
-        }
+        try {
+            return await this.caller.call(makeRequest)
+        } catch (e) {
+            if (e instanceof ChatLunaError) throw e
+            throw new ChatLunaError(ChatLunaErrorCode.API_REQUEST_FAILED, e as Error)
+        }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
try {
return await this.caller.call(makeRequest)
} catch (e) {
if (e instanceof ChatLunaError) {
throw new ChatLunaError(ChatLunaErrorCode.API_REQUEST_FAILED, e)
} else {
throw new ChatLunaError(ChatLunaErrorCode.API_REQUEST_FAILED, e)
}
throw new ChatLunaError(ChatLunaErrorCode.API_REQUEST_FAILED, e)
}
try {
return await this.caller.call(makeRequest)
} catch (e) {
if (e instanceof ChatLunaError) throw e
throw new ChatLunaError(ChatLunaErrorCode.API_REQUEST_FAILED, e as Error)
}
🤖 Prompt for AI Agents
In packages/core/src/llm-core/platform/model.ts around lines 736-740, the
current catch always wraps the caught error into a
ChatLunaError(API_REQUEST_FAILED), which double-wraps and hides existing
ChatLunaError codes; change the catch to rethrow the error if it is already a
ChatLunaError, otherwise wrap non-ChatLunaError exceptions in a new
ChatLunaError with code API_REQUEST_FAILED so original error codes and stack are
preserved.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants