
[src/MaxText/inference/offline_engine.py] Make OfflineEngine a fluent interface#2983

Open
SamuelMarks wants to merge 1 commit intoAI-Hypercomputer:mainfrom
SamuelMarks:offline-inference-init-FLUENT

Conversation

@SamuelMarks
Collaborator

Description

[src/MaxText/inference/offline_engine.py] OfflineEngine.__init__ takes many configuration flags (`enable_batch_prefill`, `min_decode_steps`, `prefill_lengths`, `eos_ids`) to configure internal workers and prefill helpers. The setup logic branches based on these flags (e.g., choosing `BatchedPrefillProcessor` vs `PrefillProcessor`); [tests/{grpo_trainer_correctness_test.py,inference/benchmark_offline_engine.py,offline_engine_test.py}] Update tests for new fluent interface

TL;DR: Instead of passing a long list of arguments (some mutually exclusive or dependent on flags like `enable_batch_prefill`) directly to `__init__`, we now use a Builder pattern. The builder validates the configuration (e.g., checking `scan_layers` vs `batch_prefill`) before the engine is created.

Before:

# Initialization was brittle with many positional/keyword args
engine = OfflineEngine(
    config=self.config,
    params=self.params,
    min_decode_steps=10,
    enable_batch_prefill=True,
    batch_prefill_max_batch_size=16,
    tokenizer=self.tokenizer,
    eos_ids=[100, 101],
    rng=self.rng
)

After:

# Initialization is now fluent, explicit, and pre-validated
engine = (
    OfflineEngineBuilder(self.config)
    .set_params(self.params)
    .enable_batch_prefill(max_batch_size=16)
    .set_decoding_params(min_steps=10)
    .set_tokenizer(self.tokenizer_path) # or pass a tokenizer instance directly via the builder
    .set_eos_ids([100, 101])
    .set_rng(self.rng)
    .build() # Validation happens here
)
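For readers unfamiliar with the pattern, here is a minimal, hypothetical sketch of how such a fluent builder can defer validation to `build()`. The class and attribute names below are illustrative assumptions for this example only; the actual `OfflineEngineBuilder` in `src/MaxText/inference/offline_engine.py` may differ, and a plain dict stands in for the real `OfflineEngine`:

```python
# Illustrative sketch only -- names and validation rules are assumptions,
# not the actual MaxText implementation.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class _EngineSettings:
  """Configuration collected by the builder; checked only at build() time."""
  params: object | None = None
  batch_prefill: bool = False
  batch_prefill_max_batch_size: int | None = None
  min_decode_steps: int = 1
  eos_ids: list[int] = field(default_factory=list)


class OfflineEngineBuilder:
  def __init__(self, config):
    self._config = config
    self._settings = _EngineSettings()

  def set_params(self, params):
    self._settings.params = params
    return self  # returning self is what makes the interface fluent

  def enable_batch_prefill(self, max_batch_size):
    self._settings.batch_prefill = True
    self._settings.batch_prefill_max_batch_size = max_batch_size
    return self

  def set_decoding_params(self, min_steps):
    self._settings.min_decode_steps = min_steps
    return self

  def set_eos_ids(self, eos_ids):
    self._settings.eos_ids = list(eos_ids)
    return self

  def build(self):
    # Cross-flag validation happens here, before any engine state exists.
    if self._settings.batch_prefill and not getattr(self._config, "scan_layers", True):
      raise ValueError("batch prefill requires scan_layers=True")
    if self._settings.params is None:
      raise ValueError("params must be set before build()")
    # Stand-in for constructing the real OfflineEngine.
    return {"config": self._config, **vars(self._settings)}
```

The key design point is that each setter mutates private state and returns `self`, so invalid flag combinations never reach the engine constructor: they fail fast inside `build()` with a descriptive error.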

Tests

CI

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run end-to-end tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.

@codecov

codecov bot commented Jan 21, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.


@github-actions

This PR has been automatically marked as stale because it has not had recent activity. It will be closed soon if no further activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the stale Automatically applied to stale PRs. label Feb 20, 2026