Conversation

@dingyi222666
Member

This PR fixes inconsistent modelMaxContextSize handling across adapters and bumps package versions.

Bug Fixes

  • Standardized modelMaxContextSize handling across all adapters to ensure consistent behavior
  • Fixed adapters that were not properly respecting the modelMaxContextSize configuration
  • Improved context size validation logic in multiple adapter implementations

Other Changes

  • Bumped core package version to 1.3.0-alpha.76
  • Bumped shared-adapter package version to 1.0.15
  • Updated all adapter and extension packages to reference new core and shared-adapter versions

…apters

- Extract modelMaxContextSize calculation to a consistent pattern
- Use getModelMaxContextSize helper where available
- Ensure fallback to 128_000 tokens when maxTokens is undefined
- Apply consistent calculation for maxTokenLimit across all adapters
- Remove unused logger import from EMGAS extractor

This ensures all adapters handle context size limits uniformly and prevents
potential issues with undefined maxTokens values.
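
A minimal sketch of the pattern those bullets describe, assuming hypothetical shapes for the model info and the getModelMaxContextSize helper (the actual chatluna signatures may differ):

```typescript
// Hypothetical illustration of the consistent pattern; the real helper
// and config shapes in chatluna may differ.
interface ModelInfo {
    // Context window reported for the model; may be undefined.
    maxTokens?: number
}

const FALLBACK_CONTEXT_SIZE = 128_000

// Prefer an explicitly configured modelMaxContextSize, then the model's
// reported maxTokens, then the 128_000-token fallback named above.
function getModelMaxContextSize(
    model: ModelInfo,
    configuredMaxContextSize?: number
): number {
    return configuredMaxContextSize ?? model.maxTokens ?? FALLBACK_CONTEXT_SIZE
}

// Each adapter would then derive its maxTokenLimit the same way:
const maxTokenLimit = getModelMaxContextSize({ maxTokens: undefined })
// => 128000
```
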
Update package versions across all adapters, extensions, and services:
- Core: 1.3.0-alpha.75 -> 1.3.0-alpha.76
- Shared adapter: 1.0.14 -> 1.0.15
- All adapter packages updated to reference new core and shared-adapter versions
- Extension and service packages updated with new core peer dependency

This version bump follows the previous fix for modelMaxContextSize handling.
@coderabbitai
Contributor

coderabbitai bot commented Oct 27, 2025

Note

Currently processing new changes in this PR. This may take a few minutes, please wait...

📥 Commits

Reviewing files that changed from the base of the PR and between 1314d63 and 0204d12.

⛔ Files ignored due to path filters (26)
  • packages/adapter-azure-openai/package.json is excluded by !**/*.json
  • packages/adapter-claude/package.json is excluded by !**/*.json
  • packages/adapter-deepseek/package.json is excluded by !**/*.json
  • packages/adapter-dify/package.json is excluded by !**/*.json
  • packages/adapter-doubao/package.json is excluded by !**/*.json
  • packages/adapter-gemini/package.json is excluded by !**/*.json
  • packages/adapter-hunyuan/package.json is excluded by !**/*.json
  • packages/adapter-ollama/package.json is excluded by !**/*.json
  • packages/adapter-openai-like/package.json is excluded by !**/*.json
  • packages/adapter-openai/package.json is excluded by !**/*.json
  • packages/adapter-qwen/package.json is excluded by !**/*.json
  • packages/adapter-rwkv/package.json is excluded by !**/*.json
  • packages/adapter-spark/package.json is excluded by !**/*.json
  • packages/adapter-wenxin/package.json is excluded by !**/*.json
  • packages/adapter-zhipu/package.json is excluded by !**/*.json
  • packages/core/package.json is excluded by !**/*.json
  • packages/extension-long-memory/package.json is excluded by !**/*.json
  • packages/extension-mcp/package.json is excluded by !**/*.json
  • packages/extension-tools/package.json is excluded by !**/*.json
  • packages/extension-variable/package.json is excluded by !**/*.json
  • packages/renderer-image/package.json is excluded by !**/*.json
  • packages/service-embeddings/package.json is excluded by !**/*.json
  • packages/service-image/package.json is excluded by !**/*.json
  • packages/service-search/package.json is excluded by !**/*.json
  • packages/service-vector-store/package.json is excluded by !**/*.json
  • packages/shared-adapter/package.json is excluded by !**/*.json

📒 Files selected for processing (10)
  • packages/adapter-azure-openai/src/client.ts (2 hunks)
  • packages/adapter-claude/src/client.ts (1 hunks)
  • packages/adapter-openai-like/src/client.ts (1 hunks)
  • packages/adapter-openai/src/client.ts (2 hunks)
  • packages/adapter-qwen/src/client.ts (1 hunks)
  • packages/adapter-rwkv/src/client.ts (2 hunks)
  • packages/adapter-spark/src/client.ts (1 hunks)
  • packages/adapter-wenxin/src/client.ts (1 hunks)
  • packages/adapter-zhipu/src/client.ts (1 hunks)
  • packages/extension-long-memory/src/layers/emgas/extractor.ts (0 hunks)
 _______________________________________
< For a good time, call 1-800-COD-RABT. >
 ---------------------------------------
  \
   \   (\__/)
       (•ㅅ•)
       /   づ

Comment @coderabbitai help to get the list of available commands and usage tips.

@gemini-code-assist
Contributor

Summary of Changes

Hello @dingyi222666, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on enhancing the robustness and consistency of context window management within the system's various model adapters. It ensures that all adapters correctly interpret and apply modelMaxContextSize configurations, preventing potential issues with model context limits. Concurrently, it includes a comprehensive update of package versions across the entire project, aligning all components with the latest alpha releases of the core and shared adapter libraries.

Highlights

  • Standardized Context Size Handling: The pull request standardizes the handling of modelMaxContextSize across all adapters, ensuring consistent behavior for managing model context limits.
  • Improved Context Size Validation: Adapters that were not properly respecting the modelMaxContextSize configuration have been fixed, and the underlying context size validation logic has been improved.
  • Core Package Version Bump: The koishi-plugin-chatluna core package has been updated to version 1.3.0-alpha.76.
  • Shared Adapter Version Bump: The @chatluna/v1-shared-adapter package has been updated to version 1.0.15.
  • Dependency Updates Across Packages: All adapter and extension packages have been updated to reference the new koishi-plugin-chatluna core and @chatluna/v1-shared-adapter versions, along with their own minor version increments.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature               Command               Description
Code Review           /gemini review        Performs a code review for the current pull request in its current state.
Pull Request Summary  /gemini summary       Provides a summary of the current pull request in its current state.
Comment               @gemini-code-assist   Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                  /gemini help          Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request effectively standardizes modelMaxContextSize handling across various adapters and bumps package versions. The changes are a good step towards consistency. However, I've identified a few areas where the implementation can be improved for clarity, correctness, and to prevent potential runtime issues. Specifically, I've pointed out a potential bug in the claude adapter's token limit calculation and suggested refactoring in other adapters to handle potentially undefined values for modelMaxContextSize more safely and clearly. Applying these suggestions will enhance the robustness and maintainability of the codebase.
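
As an illustration of the kind of defensive handling the review asks for, here is a sketch with assumed names, not the claude adapter's actual code:

```typescript
// Illustrative only: defensive token-limit calculation when
// modelMaxContextSize may be undefined.
function resolveTokenLimit(
    modelMaxContextSize: number | undefined,
    requestedMaxTokens: number
): number {
    // ?? (unlike ||) only falls back on null/undefined, so a valid
    // falsy value would not be silently replaced.
    const contextSize = modelMaxContextSize ?? 128_000
    // Clamp the per-request budget to the resolved context window.
    return Math.min(requestedMaxTokens, contextSize)
}
```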

@dingyi222666 merged commit e850d43 into v1-dev Oct 27, 2025
4 of 5 checks passed
@dingyi222666 deleted the chore/update-version branch October 27, 2025 12:34