Conversation
Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits.
Code Review
This pull request effectively addresses an inference precision issue by ensuring that the specified inference_precision is respected throughout the InferenceEngineCacheKV. The changes correctly cast tensors to the desired data type, which resolves the inconsistencies noted. Additionally, the modification to conditionally add "thinking tokens" only when not using a KV cache is a logical improvement for consistency. The code is well-structured, and I have a couple of minor suggestions to enhance conciseness.
```python
inference_dtype = (
    force_inference_dtype
    if force_inference_dtype is not None
    else torch.float32
)
```
would you make this change? This way it will match what we do in _prepare_model_inputs().
```python
inference_dtype = (
    self.force_inference_dtype
    if self.force_inference_dtype is not None
    else torch.float32
)
```
Pull request overview
Fixes prediction inconsistencies across fit_modes by aligning inference-time dtype handling and preventing KV-cache inference from adding extra “thinking” tokens that would change context length / cache behavior.
Changes:
- Make preprocessing reproducible across repeated `predict()` calls by overriding the preprocessing random state in the on-demand inference engine.
- Force model parameters and input tensors to the requested `inference_precision` dtype for KV-cache inference.
- Skip adding thinking tokens during KV-cache prediction (`single_eval_pos == 0`) to keep the cacheable context stable.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| `src/tabpfn/inference.py` | Adjusts preprocessing seeding override and forces inference dtype casting for the KV-cache path. |
| `src/tabpfn/architectures/base/transformer.py` | Avoids adding thinking tokens during KV-cache prediction to preserve cache consistency. |
```diff
     y_train=self.y_train,
     feature_schema=self.feature_schema,
     parallel_mode="in-order",
-    override_random_state=np.random.default_rng(self.static_seed),
+    override_random_state=self.static_seed,
 )
```
`override_random_state` is now passed as an int (`self.static_seed`). In `TabPFNEnsemblePreprocessor.fit_transform_ensemble_members_iterator`, the random state is selected via `override_random_state or self.random_state`, which ignores an override of 0 (since 0 is falsy) and falls back to `self.random_state`, reintroducing non-deterministic preprocessing across `predict` calls. Prefer either passing a truthy override (e.g., a `np.random.Generator`, as before) or, better, changing the downstream selection to `override_random_state if override_random_state is not None else self.random_state` so that seed 0 is respected.
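The falsy-zero pitfall described here can be reproduced in isolation. The helper names below are illustrative, not TabPFN's actual code:

```python
def pick_seed_falsy(override_random_state, random_state):
    # Buggy selection: `or` treats a seed of 0 as "no override supplied".
    return override_random_state or random_state

def pick_seed_explicit(override_random_state, random_state):
    # Correct selection: only None means "no override", so seed 0 is respected.
    return (
        override_random_state
        if override_random_state is not None
        else random_state
    )

assert pick_seed_falsy(0, 42) == 42      # override of 0 silently ignored
assert pick_seed_explicit(0, 42) == 0    # override of 0 honored
assert pick_seed_explicit(None, 42) == 42
```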
```python
is_kv_cache_prediction = (
    self.cache_trainset_representation and single_eval_pos == 0
)
if self.add_thinking_tokens is not None and not is_kv_cache_prediction:
    embedded_input, single_eval_pos = self.add_thinking_tokens(
```
This change alters when thinking tokens are added (they're skipped for KV-cache prediction when `single_eval_pos == 0`). There's currently no test covering this specific behavior/contract (e.g., that the fit_with_cache prediction path doesn't append thinking tokens and stays consistent with other fit modes for a fixed seed). Please add/adjust a unit/integration test to lock this in; re-enabling the existing skipped "fit modes return equal results" tests (or adding a targeted regression test for #631) would help prevent regressions.
@klemens-floege hey, I checked the failing tests, but they aren't caused by anything I changed in my PR. After #757, this test can fail, but my local changes don't touch modality_detection.py or test_modality_detection.py. What would you suggest? The macOS test I can fix; it is a consistency test, so no big issue (I already mentioned it in the PR description), but the other platform errors shouldn't depend on my changes. The test seems flaky, or at least environment-sensitive.
@egeonur could you please try a simple rebase to main? For the changelog test you need to add an .md file in the changelog folder. Thank you :)
Force-pushed ca19e16 to c65229e
@klemens-floege I rebased, fixed the consistency test, and added the .md file. Can you run the checks again to see whether the tests are fixed? 🤞
@klemens-floege I get the same errors again, but #815 had the same failing tests. My local Python version is 3.11, and I guess this error happens because the CI paths use Python 3.14, so the behaviour of pd.to_numeric or s.isna() may have changed in Python 3.14.
klemens-floege
left a comment
Hi, thanks again for contributing! I ran the reproduction script locally from your branch (on CPU) and the inconsistencies are still present:
- no_cache vs cache_preproc: 0.013
- no_cache vs kv_cache: 1.423
A few thoughts:
On the consistency tests: I'd prefer not to adjust the random dataset inside the estimators to fix the test failures — those tests are there to verify that the internal model behaviour hasn't changed, so relaxing them isn't a great signal. If adjusting the random state inside the estimators is truly unavoidable to fix the inconsistency, that may be acceptable, but it should be a deliberate decision.
What I think would be valuable: Could you add a test to test_regressor.py and test_classifier.py that explicitly verifies the different fit_mode options produce consistent predictions? Something like:
```python
def test_fit_mode_consistency(regressor_or_classifier):
    # assert predictions from low_memory, fit_preprocessors, fit_with_cache
    # are all within float tolerance of each other
```
This would both document the expected behaviour and catch regressions going forward.
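A self-contained version of that suggested test might look like the following sketch. The stub estimator here stands in for the real TabPFN estimator and test fixtures, which are assumptions, not the project's actual API:

```python
import numpy as np

class StubEstimator:
    """Stand-in for the real estimator; accepts fit_mode but, as the test
    contract requires, predictions must not depend on it."""

    def __init__(self, fit_mode, random_state=0):
        self.fit_mode = fit_mode
        self.random_state = random_state

    def fit(self, X, y):
        return self

    def predict(self, X):
        # Deterministic for a fixed random_state, independent of fit_mode.
        rng = np.random.default_rng(self.random_state)
        return rng.normal(size=len(X))

def test_fit_mode_consistency():
    X, y = np.zeros((8, 3)), np.zeros(8)
    X_test = np.ones((4, 3))
    preds = {
        fit_mode: StubEstimator(fit_mode=fit_mode).fit(X, y).predict(X_test)
        for fit_mode in ("low_memory", "fit_preprocessors", "fit_with_cache")
    }
    baseline = preds["low_memory"]
    for p in preds.values():
        np.testing.assert_allclose(p, baseline, rtol=1e-6)

test_fit_mode_consistency()
```

In the real test, the stub would be replaced by the actual regressor/classifier fixture, and the tolerance would be chosen to match the configured `inference_precision`.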
When I run this with uv run on my local CPU I get this result:
@egeonur I ran your script with device set to cpu: … It could be that the pandas version makes a difference; I will do more digging. I think a concrete good next step in this PR is to add a new test to …
hey @egeonur, I'm just merging #852, which makes … Re the tests, we have …
Hey @oscarkey, I was about to write the same update. I tested my fork again on a different MacBook with an M3 chip yesterday, and I still got zero diffs. That matches what I saw before.

What confuses me is that @klemens-floege said he still did not see improvements, and reported these max relative diffs: … So from his side it looks like the fix is not taking effect. I am wondering whether this could be a branch/version mismatch somewhere, although I am not certain. I tested with the same library versions he mentioned: …

On my side, with float64 precision, I again got zero diffs. The key thing I found is this change in PR #802: … That part seems to solve the issue by overriding the random state setting to …

So I am not sure why Klemens still sees no improvement with my local version, especially because my solution was built on top of yours. Even if it did not fully fix everything on his side, I would have expected at least some improvement. If you have time to check whether switching to this PR changes the results when you run it, I'd really appreciate it. Then I can pull your latest changes from main and try again with the seed improvement.
hey @egeonur. I applied your static seed fix and ran … I think it should also be safe to update …
Force-pushed cea20d8 to 080cf3b
@oscarkey I removed the … On your earlier comment about simplifying …

If that's what you had in mind, I'm happy to make that cleanup too. I just wanted to confirm exactly what you'd prefer to remove. Other than that, I think this issue is fixed. All relevant tests passed locally. I'm still seeing some unrelated mypy errors in …
oscarkey
left a comment
> do you mean removing the fit-mode-specific cases in `TEST_CASES`

yes exactly! To update the reference predictions, I would suggest renaming `[prefix]_fit_preprocessors.json` to just `[prefix].json`, and deleting the low_memory/fit_with_cache reference predictions. You can also revert all other reference prediction changes, as the predictions should not have changed.
```python
inference_dtype = (
    self.force_inference_dtype
    if self.force_inference_dtype is not None
    else torch.float32
)
X_test = torch.as_tensor(
    X_test,
    dtype=inference_dtype,
    device=self.device,
)
```
Same again? This already happens below?
The lower cast only overlaps partially. My change makes the KV-cache path normalize inputs up front with `inference_dtype = force_inference_dtype` (or `torch.float32` as the default), even when X and y already arrive as tensors. Before that, this branch only cast non-tensor inputs to float32, so tensor inputs could retain an arbitrary dtype, and `force_inference_dtype` was not applied consistently during cache building. The later block only re-applies the forced dtype after GPU preprocessing, so it does not replace the up-front normalization or the default float32 path.
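The dtype-preservation behavior at issue can be illustrated with a small NumPy sketch; `torch.as_tensor` behaves analogously when handed an existing tensor with no explicit `dtype`:

```python
import numpy as np

# Converting an existing array without specifying a dtype keeps its dtype,
# so an "only cast non-tensor inputs" branch lets float64 inputs slip through.
x64 = np.asarray([1.0, 2.0], dtype=np.float64)

kept = np.asarray(x64)                       # no dtype given: stays float64
forced = np.asarray(x64, dtype=np.float32)   # normalized up front

assert kept.dtype == np.float64
assert forced.dtype == np.float32
```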
With the seed held fixed, that difference was observable in output consistency. Before this change, I saw:
max relative diffs
no_cache vs no_cache_repeat: 0.0
no_cache vs cache_preproc: 0.0
no_cache vs kv_cache: 1.191185e-06
After normalizing the KV-cache path to use self.force_inference_dtype instead of defaulting to float32, the discrepancy went to zero:
max relative diffs
no_cache vs no_cache_repeat: 0.0
no_cache vs cache_preproc: 0.0
no_cache vs kv_cache: 0.0
When I last checked, TabPFN-2.6 doesn't currently support the fit_with_cache fit_mode, so these were the results from TabPFN-2.5. If you say the casts are redundant I can revert them, but without that change the small difference will remain, imo.
ahhh I see my mistake! I think there might still be some redundancy (see my other comments), but if we fix that I think we're ready to go :)
Force-pushed cc4ad6d to 146ca15
@oscarkey I updated as you suggested. I simplified tests/test_consistency.py to snapshot only the default estimator behavior, removed the fit-mode-specific consistency cases, and renamed …
oscarkey
left a comment
hey, thanks for making the changes, I think we're nearly there!
I think the changes in reference_predictions can be reverted? The only thing that should change is *_fit_preprocessors.json should be renamed to *.json, but the contents of the files shouldn't change.
```python
inference_dtype = (
    self.force_inference_dtype
    if self.force_inference_dtype is not None
    else torch.float32
)
X_test = torch.as_tensor(
    X_test,
    dtype=inference_dtype,
    device=self.device,
)
```
ahhh I see my mistake! I think there might still be some redundancy (see my other comments), but if we fix that I think we're ready to go :)
```python
if self.force_inference_dtype is not None:
    model.type(self.force_inference_dtype)
    X_test = X_test.type(self.force_inference_dtype)
```
Same as above, hopefully we can delete this line.
```python
inference_dtype = (
    self.force_inference_dtype
    if self.force_inference_dtype is not None
    else torch.float32
)
```

```python
inference_dtype = (
    force_inference_dtype
    if force_inference_dtype is not None
    else torch.float32
)
```
would you make this change? This way it will match what we do in _prepare_model_inputs().
```python
X = X.type(force_inference_dtype)
y = y.type(force_inference_dtype)
```
Hopefully we can delete these lines as we do it above? I guess there's a risk _maybe_run_gpu_preprocessing() doesn't preserve the type. But, judging by the other inference engines, it looks like it does?
Issue
Fixes #631.
I tried to fix the precision issue. The first fix came from #784, which doesn't add thinking tokens, so that single_eval_pos stays zero and the KV cache can be used during prediction. I cast all tensors to the given dtype, so the results became:
no_cache vs repeat: 0.0
no_cache vs fit_preprocessors: 0.0
no_cache vs fit_with_cache: 0.0
The only caveat is that when I ran the script with float32 there were still some inconsistencies, like:
no_cache vs repeat: 5.3390077e-06
no_cache vs fit_preprocessors: 5.3390077e-06
no_cache vs fit_with_cache: 5.5486857e-06
but I am guessing this might be related to the lower precision (float64 vs float32).
Also, tests/test_consistency.py fails locally, but I assume the reference predictions are stored for future comparisons in case something deviates.
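The "max relative diffs" quoted throughout this thread can be computed with a helper along these lines; the exact reduction used by the reproduction script is an assumption:

```python
import numpy as np

def max_relative_diff(a, b, eps=1e-12):
    # Elementwise |a - b| / max(|a|, |b|, eps), reduced to a single scalar.
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    denom = np.maximum(np.maximum(np.abs(a), np.abs(b)), eps)
    return float(np.max(np.abs(a - b) / denom))

# Identical predictions give exactly 0.0; a 1e-6 perturbation shows up
# at roughly that magnitude.
assert max_relative_diff([1.0, 2.0], [1.0, 2.0]) == 0.0
assert abs(max_relative_diff([1.0], [1.0 + 1e-6]) - 1e-6) < 1e-8
```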
Motivation and Context
code to run on local machine for above results:
Public API Changes
How Has This Been Tested?
Tested locally without GPU, only on a MacBook CPU.
Collecting system and dependency information...
PyTorch version: 2.10.0
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.7.3 (arm64)
GCC version: Could not collect
Clang version: 17.0.0 (clang-1700.0.13.5)
CMake version: version 3.31.1
Libc version: N/A
Python version: 3.11.9 (main, Nov 22 2024, 14:33:40) [Clang 14.0.3 (clang-1403.0.22.14.1)] (64-bit runtime)
Python platform: macOS-15.7.3-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Dependency Versions:
tabpfn: 6.4.1
torch: 2.10.0
numpy: 2.4.2
scipy: 1.17.1
pandas: 2.3.3
scikit-learn: 1.8.0
typing_extensions: 4.15.0
einops: 0.8.2
huggingface-hub: 1.5.0
Checklist
- Changelog entry added (see changelog/README.md), or "no changelog needed" label requested.