pi0_fast: fix off-by-one in next-token positions; add regression test #707
This PR fixes an off-by-one bug in pi0_fast.py when computing positions for the next decoded token. Prefix token positions are 0-indexed (0..L-1), so the first new token after a prefix of length L must be placed at position L (not L+1).
Fix: drop the + 1 when building positions.
Fixes #705.
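To make the indexing concrete, here is a minimal sketch of the convention the fix restores. It is illustrative only; `prefix_len` and the variable names are placeholders, not the identifiers used in pi0_fast.py:

```python
# Minimal sketch of the position convention (not the actual pi0_fast.py code).
import jax.numpy as jnp

prefix_len = 5                              # hypothetical prefix length L
prefix_positions = jnp.arange(prefix_len)   # prefix tokens sit at 0..L-1 -> [0, 1, 2, 3, 4]

next_pos_buggy = prefix_len + 1             # old behavior: 6, leaving a gap at position 5
next_pos_fixed = prefix_len                 # fixed behavior: 5, contiguous with the prefix

assert next_pos_fixed == int(prefix_positions[-1]) + 1
```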
Changes
- src/openpi/models/pi0_fast.py: one-line change to compute contiguous, zero-indexed positions.
- src/openpi/models/pi0_fast_positions_test.py: small regression test asserting (see the sketch after this list) that:
  - the next token after a prefix of length L is placed at position L
  - advancing the decode step increments positions by 1
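The assertions boil down to something like the sketch below; `next_token_positions` is a hypothetical stand-in for the model's position computation, and the real test exercises the actual decode path:

```python
# Hedged sketch of the two regression assertions; `next_token_positions` is a
# hypothetical helper standing in for the fixed position computation.
import jax.numpy as jnp


def next_token_positions(prefix_len: int, step: int) -> jnp.ndarray:
    # Zero-indexed and contiguous: decode step `step` lands at prefix_len + step.
    return jnp.asarray(prefix_len + step)


def test_next_token_follows_prefix():
    # The next token after a prefix of length L is placed at position L.
    assert int(next_token_positions(prefix_len=7, step=0)) == 7


def test_positions_advance_by_one():
    # Advancing the decode step increments the position by exactly 1.
    assert int(next_token_positions(prefix_len=7, step=1)) == (
        int(next_token_positions(prefix_len=7, step=0)) + 1
    )
```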
Scope / Compatibility
- Affects the JAX pi0-FAST decode path only.
- No API changes. No impact on the PyTorch path (pi0-FAST is not supported there).
- No performance impact.
How I tested
- `uv run pytest -q src/openpi/models/pi0_fast_positions_test.py` passes.
- Verified the minimal repro's before/after values locally.
- Ran the ruff and ruff-format hooks.