
Conversation


@dingyi222666 dingyi222666 commented Jan 2, 2026

This PR introduces a configurable threshold for Infinite Context compression and adds a manual compression command to room management.

New Features

  • Configurable Infinite Context Threshold: Users can now set infiniteContextThreshold in the configuration (range: 50%–95%, default 85%) to control when history compression is triggered.
  • Manual Compression Command: Added chatluna.room.compress command to manually trigger context compression for a specified room.
  • Improved File Message Handling: Enhanced read_chat_message to preserve and include file URLs in message attributes.

Bug Fixes

N/A

Other Changes

  • Updated InfiniteContextManager to honor the configurable threshold.
  • Added i18n support (zh-CN, en-US) for the new configuration setting and the compress command.
  • Integrated the new compress_room middleware into the core middleware stack.

…l compression

- Add `infiniteContextThreshold` configuration to control when compression triggers.
- Implement manual compression command `chatluna.room.compress`.
- Update `InfiniteContextManager` to honor the configurable threshold.
- Add i18n support for the new configuration and command.
- Small fix in `read_chat_message` to preserve file URLs.

coderabbitai bot commented Jan 2, 2026

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds room context compression: a CLI command, a configuration option, a compression method on the chat interface, middleware routing, and a service-layer wrapper. The call chain is coordinated through a concurrency queue and invokes the threshold-driven compression logic of InfiniteContextManager.

Changes

Cohort / File(s): Change Summary

CLI command layer (packages/core/src/commands/room.ts)
  Adds the chatluna.room.compress command, which triggers the compression flow and sets room_resolve.name.

Configuration layer (packages/core/src/config.ts)
  Adds infiniteContextThreshold: number to the public Config and an infiniteContext.infiniteContextThreshold entry in the schema (0.5–0.95, default 0.85).

Chat interface core (packages/core/src/llm-core/chat/app.ts)
  Adds async compressContext(): Promise<boolean> to ChatInterface and passes the configured threshold when creating InfiniteContextManager.

InfiniteContext management (packages/core/src/llm-core/chat/infinite_context.ts)
  Adds an optional threshold?: number to InfiniteContextManagerOptions; the threshold computation uses options.threshold ?? 0.85.

Middleware layer (packages/core/src/middleware.ts, packages/core/src/middlewares/room/compress_room.ts)
  Adds the compress_room middleware file and inserts it into the middleware chain; implements room resolution, calls the service to compress, and returns a success/failure message; extends the Chain type declarations.

Service layer (packages/core/src/services/chat.ts)
  Adds compressContext(room: ConversationRoom) to the ChatLuna service's ChatInterfaceWrapper, controlling concurrency through the session/model queues and delegating to ChatInterface.

File handling (packages/core/src/middlewares/chat/read_chat_message.ts)
  Sets element.attrs['chatluna_file_url'] = file.url after processing temporary file elements.
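As a quick illustration of the option pattern described in the table above, a minimal sketch might look like the following. This is an illustrative reconstruction, not the actual infinite_context.ts source:

```typescript
// Sketch of the configurable-threshold pattern; the names mirror
// InfiniteContextManagerOptions from the table above, but this is
// an illustration only.
interface InfiniteContextManagerOptions {
    threshold?: number // fraction of the context window, expected 0.5-0.95
}

// Token count at which compression should trigger, falling back to the
// previous hard-coded 0.85 when no threshold is configured.
function compressionTokenLimit(
    maxTokenLimit: number,
    options: InfiniteContextManagerOptions = {}
): number {
    return Math.floor(maxTokenLimit * (options.threshold ?? 0.85))
}
```

With a 128,000-token model and the default threshold, for example, compression would trigger once history exceeds about 108,800 tokens.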

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant User
    participant CLI
    participant Chain as ChatChain
    participant Service as ChatLunaService
    participant Wrapper as ChatInterfaceWrapper
    participant Chat as ChatInterface
    participant ICM as InfiniteContextManager

    User->>CLI: Run chatluna.room.compress
    CLI->>Chain: Dispatch the compress_room command
    Chain->>Chain: Resolve room / room_resolve

    alt Room not found
        Chain-->>User: Return no_room message
    else Room resolved
        Chain->>Service: compressContext(room)
        Service->>Wrapper: compressContext(room)
        Wrapper->>Wrapper: Enqueue / acquire queue lock
        Wrapper->>Chat: compressContext()
        Chat->>ICM: Request compression (with threshold)
        ICM->>ICM: Run compression logic
        ICM-->>Chat: Return result (true/false)
        Chat-->>Wrapper: Return result
        Wrapper->>Wrapper: Release queue lock
        Wrapper-->>Chain: Pass result back
        Chain-->>User: Return success or failure message
    end
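The queue-guarded delegation in the diagram above can be sketched as follows. The promise-chain lock here is an assumption for illustration only; the real wrapper relies on chatluna's internal session/model queues:

```typescript
// Illustrative sketch of "acquire queue lock -> delegate -> release".
// ChatInterfaceWrapperSketch is a hypothetical name; it is not the
// actual chatluna class.
class ChatInterfaceWrapperSketch {
    private queue: Promise<unknown> = Promise.resolve()

    constructor(
        private readonly chat: { compressContext(): Promise<boolean> }
    ) {}

    // Serialize compression requests so only one runs at a time.
    compressContext(): Promise<boolean> {
        const result = this.queue.then(() => this.chat.compressContext())
        // Keep the chain alive even if a compression call rejects.
        this.queue = result.catch(() => undefined)
        return result
    }
}
```

Chaining on a stored promise gives a lightweight mutex without extra dependencies, which is one common way to enforce per-conversation concurrency limits.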

Estimated code review effort

🎯 3 (Medium) | ⏱️ ~25 minutes

Possibly related PRs

🐰 The rabbit says:
Old words stored in the room's tall tower; a light tap on the threshold tidies them away.
One command sets the chain in motion; the queue stands guard as compression goes its way.
The rabbit's ears twitch with delight: the data now is light and bright.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
Title check ✅ Passed: The title clearly and accurately summarizes the core of this change: support for a configurable infinite context threshold and manual compression.
Description check ✅ Passed: The description closely matches the changeset, covering the infinite context threshold configuration, the manual compression command, and file URL handling; it is thorough and specific.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feat/context-system

📜 Recent review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 402e63b and b31eb05.

📒 Files selected for processing (1)
  • packages/core/src/services/chat.ts
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-12-04T20:25:49.237Z
Learnt from: dingyi222666
Repo: ChatLunaLab/chatluna PR: 653
File: packages/core/src/llm-core/platform/model.ts:169-176
Timestamp: 2025-12-04T20:25:49.237Z
Learning: In packages/core/src/llm-core/platform/model.ts, the code intentionally uses unsafe nested property access (e.g., options?.variables_hide['built']['conversationId']) without full optional chaining to enable explicit TypeErrors for debugging and diagnosis when the expected structure is missing. This fail-fast approach is preferred over silently returning undefined.

Applied to files:

  • packages/core/src/services/chat.ts
🧬 Code graph analysis (1)
packages/core/src/services/chat.ts (4)
packages/core/src/types.ts (1)
  • ConversationRoom (4-19)
packages/core/src/llm-core/utils/count_tokens.ts (1)
  • parseRawModelName (195-205)
packages/core/src/utils/error.ts (1)
  • ChatLunaError (13-41)
packages/core/src/llm-core/platform/client.ts (1)
  • config (91-93)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: lint
  • GitHub Check: build
🔇 Additional comments (1)
packages/core/src/services/chat.ts (1)

237-242: LGTM!

The method implementation is consistent with the existing clearChatHistory pattern and correctly delegates the call to the underlying ChatInterfaceWrapper.



Comment @coderabbitai help to get the list of available commands and usage tips.

@gemini-code-assist
Contributor

Summary of Changes

Hello @dingyi222666, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the chat system's context management capabilities by introducing both configurable automatic compression and a new manual compression command. It provides users with greater flexibility and control over how conversation history is managed, ensuring efficient use of context windows. Additionally, it refines file message processing to better retain important metadata.

Highlights

  • Configurable Infinite Context Threshold: Users can now set the infiniteContextThreshold in the configuration, allowing them to control when history compression is triggered. This threshold can be set between 50% and 95% of the model's context limit, with a default of 85%.
  • Manual Compression Command: A new command, chatluna.room.compress, has been added to enable users to manually trigger context compression for a specified room, providing more direct control over conversation history management.
  • Improved File Message Handling: The read_chat_message function has been enhanced to preserve and include file URLs within the message attributes, specifically under element.attrs['chatluna_file_url'].
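The attribute-preservation step above can be sketched as a small helper. The element shape is an assumption modeled loosely on message elements; only the chatluna_file_url key comes from the PR itself:

```typescript
// Hypothetical element shape for illustration; not the actual
// read_chat_message types.
interface MessageElement {
    type: string
    attrs: Record<string, unknown>
}

// Record the original file URL on the element so downstream
// middleware can still reach the uploaded file.
function preserveFileUrl(
    element: MessageElement,
    file: { url: string }
): MessageElement {
    element.attrs['chatluna_file_url'] = file.url
    return element
}
```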


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review: /gemini review performs a code review for the current pull request in its current state.
  • Pull Request Summary: /gemini summary provides a summary of the current pull request in its current state.
  • Comment: @gemini-code-assist responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help: /gemini help displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a configurable threshold for infinite context compression and adds a manual compression command, which are great enhancements for managing conversation history. The implementation is solid and follows existing patterns in the codebase. I've provided a couple of suggestions for the new compress_room middleware to improve its robustness in room resolution and error handling. Overall, this is a valuable feature addition.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
packages/core/src/llm-core/chat/app.ts (1)

409-422: Return-value semantics are unclear; consider strengthening error handling

The true returned by this method only indicates that the compression logic was invoked, not that compression actually ran. Because compressIfNeeded decides internally, based on the threshold, whether to compress, the caller cannot tell from the return value whether compression actually happened. In addition, unlike the handling of compression errors in processChat (lines 143-149), this method lacks error capture.

Suggestions:

  1. Consider returning a more explicit status (for example an enum or object) indicating "compressed", "threshold not reached", or "cannot compress"
  2. Add try-catch error handling following the processChat pattern
🔎 Suggested improvement
 async compressContext(): Promise<boolean> {
     const wrapper = await this.getChatLunaLLMChainWrapper()
     if (!wrapper) {
         return false
     }
 
     const manager = this._ensureInfiniteContextManager()
     if (!manager) {
         return false
     }
 
+    try {
         await manager.compressIfNeeded(wrapper)
-        return true
+        return true
+    } catch (error) {
+        logger.error('Error compressing context:', error)
+        return false
+    }
 }
📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d4434ce and 402e63b.

⛔ Files ignored due to path filters (4)
  • packages/core/src/locales/en-US.schema.yml is excluded by !**/*.yml
  • packages/core/src/locales/en-US.yml is excluded by !**/*.yml
  • packages/core/src/locales/zh-CN.schema.yml is excluded by !**/*.yml
  • packages/core/src/locales/zh-CN.yml is excluded by !**/*.yml
📒 Files selected for processing (8)
  • packages/core/src/commands/room.ts
  • packages/core/src/config.ts
  • packages/core/src/llm-core/chat/app.ts
  • packages/core/src/llm-core/chat/infinite_context.ts
  • packages/core/src/middleware.ts
  • packages/core/src/middlewares/chat/read_chat_message.ts
  • packages/core/src/middlewares/room/compress_room.ts
  • packages/core/src/services/chat.ts
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-17T00:25:27.195Z
Learnt from: dingyi222666
Repo: ChatLunaLab/chatluna PR: 548
File: packages/core/src/llm-core/chat/app.ts:0-0
Timestamp: 2025-09-17T00:25:27.195Z
Learning: 在 ChatInterface 类中,响应式 watch 调用通过 ctx.effect() 包装来自动处理清理工作,避免内存泄漏。字段 _chain 和 _embeddings 的类型已更新为可空类型 (| undefined),并添加 ctx.on('dispose') 处理器提供额外的清理保障。这种使用 Koishi effect 系统的方式比手动管理 stop 句柄更优雅。

Applied to files:

  • packages/core/src/middlewares/room/compress_room.ts
🧬 Code graph analysis (2)
packages/core/src/middlewares/room/compress_room.ts (2)
packages/core/src/chains/chain.ts (1)
  • ChatChain (14-366)
packages/core/src/chains/rooms.ts (1)
  • getAllJoinedConversationRoom (390-445)
packages/core/src/services/chat.ts (1)
packages/core/src/types.ts (1)
  • ConversationRoom (4-19)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: lint
  • GitHub Check: build
🔇 Additional comments (10)
packages/core/src/middleware.ts (1)

42-42: Change looks good!

The new compress_room middleware is correctly imported and registered in the middleware chain, placed after clear_room, which matches the logical ordering of room operations.

Also applies to: 110-110

packages/core/src/commands/room.ts (1)

109-117: LGTM!

The new chatluna.room.compress command is implemented consistently with the other room commands, correctly invoking the compress_room action through chain.receiveCommand.

packages/core/src/llm-core/chat/infinite_context.ts (2)

21-21: The configurable threshold option is implemented correctly.

Adding the optional threshold parameter to InfiniteContextManagerOptions allows the compression threshold to be configured externally while preserving backward compatibility.


65-67: The threshold computation is correct.

Math.floor(maxTokenLimit * (this.options.threshold ?? 0.85)) replaces the hard-coded 0.85 threshold. When no threshold is provided it falls back to 0.85, preserving backward compatibility, and Math.floor ensures the token count is an integer.

packages/core/src/config.ts (2)

32-32: The Config interface extension is correct.

The infiniteContextThreshold: number field added to the Config interface stores the infinite context compression threshold.


114-118: The configuration schema is well defined.

The schema for infiniteContextThreshold uses Schema.percent() with sensible constraints:

  • Minimum 50%, avoiding overly aggressive compression
  • Maximum 95%, ensuring compression triggers before the limit is reached
  • Default 85%, matching the previous hard-coded value
  • Step 1%, allowing fine-grained control

These settings match the PR's goals and line up correctly with the implementation in infinite_context.ts.
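A standalone sketch of those constraints, for illustration only (the actual definition uses koishi's Schema.percent(), which this does not reproduce, and resolveInfiniteContextThreshold is a hypothetical name):

```typescript
// Hypothetical validator mirroring the schema constraints above
// (min 50%, max 95%, default 85%).
function resolveInfiniteContextThreshold(value?: number): number {
    const threshold = value ?? 0.85
    if (threshold < 0.5 || threshold > 0.95) {
        throw new RangeError(
            `infiniteContextThreshold must be within [0.5, 0.95], got ${threshold}`
        )
    }
    return threshold
}
```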

packages/core/src/llm-core/chat/app.ts (1)

545-546: LGTM!

Passing the configured threshold to InfiniteContextManager is implemented correctly and follows the same pattern as the other configuration parameters.

packages/core/src/services/chat.ts (1)

237-242: LGTM!

The method follows the same delegation pattern as clearChatHistory; the implementation is concise and consistent.

packages/core/src/middlewares/room/compress_room.ts (2)

39-50: LGTM!

The compression execution logic has thorough error handling, correctly sets both success and failure messages, and includes appropriate logging.


6-56: LGTM!

The middleware's overall structure follows the project's standard pattern, integrates correctly into the chain lifecycle (after lifecycle-handle_command and before lifecycle-request_model), and has complete type declarations.

Mirror the pattern used in the chat method by acquiring a model-queue
token in compressContext to enforce platform concurrentMaxSize limits.
@dingyi222666 dingyi222666 merged commit c776c3e into v1-dev Jan 3, 2026
5 checks passed
@dingyi222666 dingyi222666 deleted the feat/context-system branch January 3, 2026 09:47