mirror of https://github.com/router-for-me/CLIProxyAPIPlus.git (synced 2026-04-12 09:14:15 +00:00)
fix(copilot): use dynamic API limits to prevent prompt token overflow
The Copilot API enforces per-account prompt token limits (128K individual, 168K business) that differ from the static 200K context length advertised by the proxy. This mismatch caused Claude Code to accumulate context beyond the actual limit, triggering "prompt token count exceeds the limit of 128000" errors.

Changes:

- Extract max_prompt_tokens and max_output_tokens from the Copilot /models API response (capabilities.limits) and use them as the authoritative ContextLength and MaxCompletionTokens values
- Add CopilotModelLimits struct and Limits() helper to parse limits from the existing Capabilities map
- Fix GitLab Duo context-1m beta header not being set when routing through the Anthropic gateway (gitlab_duo_force_context_1m attr was set but only gin headers were checked)
- Fix flaky parallel tests that shared global model registry state
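The commit message names a CopilotModelLimits struct and a Limits() helper that parse capabilities.limits out of the existing Capabilities map. A minimal sketch of that parsing is below; the names CopilotModelLimits and Limits come from the commit message, while the ModelEntry type and the exact map layout are assumptions for illustration (JSON numbers decoded into a map[string]any arrive as float64).

```go
package main

import "fmt"

// CopilotModelLimits mirrors the capabilities.limits object that the
// Copilot /models endpoint returns (per-account prompt/output limits).
type CopilotModelLimits struct {
	MaxPromptTokens int `json:"max_prompt_tokens"`
	MaxOutputTokens int `json:"max_output_tokens"`
}

// ModelEntry is a hypothetical stand-in for the proxy's model entry type,
// holding the already-decoded capabilities object from the API response.
type ModelEntry struct {
	Capabilities map[string]any
}

// Limits extracts the per-account limits from the Capabilities map.
// It returns nil when the response carried no limits object, so callers
// can fall back to the static defaults.
func (e *ModelEntry) Limits() *CopilotModelLimits {
	raw, ok := e.Capabilities["limits"].(map[string]any)
	if !ok {
		return nil
	}
	limits := &CopilotModelLimits{}
	// encoding/json decodes numbers into any as float64.
	if v, ok := raw["max_prompt_tokens"].(float64); ok {
		limits.MaxPromptTokens = int(v)
	}
	if v, ok := raw["max_output_tokens"].(float64); ok {
		limits.MaxOutputTokens = int(v)
	}
	return limits
}

func main() {
	entry := &ModelEntry{Capabilities: map[string]any{
		"limits": map[string]any{
			"max_prompt_tokens": 128000.0,
			"max_output_tokens": 16384.0,
		},
	}}
	if l := entry.Limits(); l != nil {
		fmt.Println(l.MaxPromptTokens, l.MaxOutputTokens)
	}
}
```

With this shape, the caller in the diff below only overrides ContextLength and MaxCompletionTokens when the parsed values are positive, keeping the static fallbacks otherwise.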
```diff
@@ -1626,6 +1626,21 @@ func FetchGitHubCopilotModels(ctx context.Context, auth *cliproxyauth.Auth, cfg
 			m.MaxCompletionTokens = defaultCopilotMaxCompletionTokens
 		}
 
+		// Override with real limits from the Copilot API when available.
+		// The API returns per-account limits (individual vs business) under
+		// capabilities.limits, which are more accurate than our static
+		// fallback values. We use max_prompt_tokens as ContextLength because
+		// that's the hard limit the Copilot API enforces on prompt size —
+		// exceeding it triggers "prompt token count exceeds the limit" errors.
+		if limits := entry.Limits(); limits != nil {
+			if limits.MaxPromptTokens > 0 {
+				m.ContextLength = limits.MaxPromptTokens
+			}
+			if limits.MaxOutputTokens > 0 {
+				m.MaxCompletionTokens = limits.MaxOutputTokens
+			}
+		}
+
 		models = append(models, m)
 	}
 
```