Conversation


@Stonepia commented Dec 19, 2025

[Draft] This PR is not ready yet and is for reference only.

This PR loads the model layer by layer, quantizing each layer as it is loaded, to avoid OOM.

The current logic first loads the full model and then quantizes it, which can raise OOM while the full-precision model is being loaded. This PR switches to loading one layer, quantizing it, and then loading the next layer, so peak GPU memory usage stays roughly at the size of a single full-precision layer.
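A minimal sketch of the idea in plain PyTorch, not the PR's actual code: `load_and_quantize_layerwise` and `quantize_fn` are hypothetical names, and it assumes the checkpoint's state-dict keys are prefixed by the model's top-level submodule names.

```python
import torch.nn as nn


def load_and_quantize_layerwise(model: nn.Module, state_dict: dict,
                                quantize_fn, device: str = "cuda") -> nn.Module:
    """Load one top-level submodule's weights at a time, quantize that
    submodule on the device, then move on, so the full-precision model
    never sits on the GPU all at once."""
    for name, module in model.named_children():
        # Slice out only this layer's tensors from the full checkpoint.
        prefix = name + "."
        layer_sd = {
            k[len(prefix):]: v
            for k, v in state_dict.items()
            if k.startswith(prefix)
        }
        module.load_state_dict(layer_sd)

        # Move just this layer to the GPU and quantize it in place.
        # `quantize_fn` is a placeholder for whatever quantization entry
        # point the PR actually uses.
        module.to(device)
        quantize_fn(module)
    return model
```

With torchao, `quantize_fn` could be something like `lambda m: quantize_(m, int8_weight_only())`, though the config this PR targets is not shown here.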


pytorch-bot bot commented Dec 19, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3518

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed label Dec 19, 2025