Question
I'm trying to find practical recommended global settings for DCP minContextLimit / maxContextLimit, especially when mixing several models.
Right now I mainly use:
- GPT-5.4
- other GPT-family models
- DeepSeek models
- GLM models
Problem
With GPT-5.4 in particular, DCP's default behavior feels too aggressive to me: it starts pushing compression earlier and more often than I would like.
I understand that:
- percentages are supported
- per-model overrides exist
- very high percentages on 1M-class models may be too late
But I still haven't found a clear answer to this question:
What global thresholds do you actually recommend as a good starting point for users who mix GPT / DeepSeek / GLM models?
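For reference, here is roughly what I have set globally right now. The two key names are the ones I'm asking about; everything else in the snippet (the wrapping object, percentages written as strings) is just my assumption about the config shape, so please correct me if it's off:

```jsonc
// Current global settings only – no per-model overrides yet.
// Writing percentages as strings is my guess at the accepted format.
{
  "minContextLimit": "30%",
  "maxContextLimit": "40%"
}
```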
What I'm looking for
I would really appreciate guidance like:
- a recommended global range, for example whether something like 25% / 35%, 30% / 40%, or another pair is a better default starting point
- whether GPT-family models usually need more relaxed settings than DeepSeek / GLM
- whether you recommend global percentage settings first, then adding per-model overrides only if needed
- whether there are any known good defaults specifically for GPT-5.4 if it feels compression-happy
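
To make the last two points concrete, the end state I imagine looks something like the sketch below: conservative global percentages first, plus a more relaxed override only for gpt-5.4. The "models" block and the specific override numbers are purely hypothetical on my part, since I don't know DCP's actual per-model override syntax; treat it as shorthand for whatever the real mechanism is.

```jsonc
// Hypothetical sketch only – global defaults plus one per-model override.
// "models" is a placeholder key name; DCP's real override syntax may differ,
// and the override numbers are stand-ins, not a recommendation.
{
  "minContextLimit": "30%",
  "maxContextLimit": "40%",
  "models": {
    "gpt-5.4": {
      "minContextLimit": "40%",
      "maxContextLimit": "55%"
    }
  }
}
```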
My environment
- OpenCode: 1.14.33
- DCP: @tarquinen/opencode-dcp@latest (currently resolves to 3.1.9 in my environment)
- Main model often used: gpt-5.4
Why I'm asking
There are many discussions about DCP being too aggressive or about large-context percentages being tricky, but I haven't found a concrete, maintained recommendation for:
- global settings that work reasonably well across multiple model families, and
- what people should try first when GPT-5.4 feels too eager to compress.
Even a short maintainer recommendation like "start with X/Y globally, then override GPT-5.4 to A/B if needed" would be very helpful.