- Introduced `DisableImageGenerationMode` with support for `false`, `true`, and `chat` values.
- Updated payload handling to preserve `image_generation` on images endpoints when `chat` mode is enabled.
- Modified OpenAI image handlers (`ImagesGenerations`, `ImagesEdits`) to respect tri-state logic.
- Added unit tests for `DisableImageGenerationMode` behavior and endpoint-specific handling.
- Enhanced configuration diff logging to support `DisableImageGenerationMode`.
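The tri-state behavior described above can be sketched as a small decision helper. This is an illustrative sketch, not the project's actual types: the names `DisableImageGenerationMode`, `shouldStripImageGeneration`, and the constant identifiers are assumptions; only the three values `false`/`true`/`chat` and the "preserve on images endpoints in `chat` mode" rule come from the commits.

```go
package main

import "fmt"

// DisableImageGenerationMode models the tri-state flag (illustrative names).
// "false" keeps image_generation everywhere, "true" strips it everywhere,
// and "chat" strips it from chat payloads while images endpoints keep it.
type DisableImageGenerationMode string

const (
	ImageGenEnabled  DisableImageGenerationMode = "false"
	ImageGenDisabled DisableImageGenerationMode = "true"
	ImageGenChatOnly DisableImageGenerationMode = "chat"
)

// shouldStripImageGeneration reports whether the image_generation tool
// should be removed from a payload bound for the given endpoint kind.
func shouldStripImageGeneration(mode DisableImageGenerationMode, isImagesEndpoint bool) bool {
	switch mode {
	case ImageGenDisabled:
		return true
	case ImageGenChatOnly:
		return !isImagesEndpoint // preserved on images endpoints only
	default:
		return false
	}
}

func main() {
	fmt.Println(shouldStripImageGeneration(ImageGenChatOnly, true))  // images endpoint keeps the tool
	fmt.Println(shouldStripImageGeneration(ImageGenChatOnly, false)) // chat payloads drop it
}
```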
- Added logic to remove `tool_choice` entries of type `image_generation` from payloads when `disable-image-generation` is enabled.
- Updated `ApplyPayloadConfigWithRoot` to handle new removal logic.
- Added unit tests to verify `tool_choice` removal behavior.
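The `tool_choice` removal might look roughly like the following. This is a minimal sketch under assumed payload shapes (a decoded JSON map where `tool_choice` is either a string like `"auto"` or an object with a `type` field); the helper name is hypothetical.

```go
package main

import "fmt"

// stripImageGenerationToolChoice removes a tool_choice entry that forces
// the image_generation tool. Field shapes are illustrative: a string
// tool_choice ("auto", "none", ...) is left alone, while an object with
// {"type":"image_generation"} is deleted from the payload.
func stripImageGenerationToolChoice(payload map[string]any) {
	tc, ok := payload["tool_choice"].(map[string]any)
	if !ok {
		return // absent, or a plain string form
	}
	if typ, _ := tc["type"].(string); typ == "image_generation" {
		delete(payload, "tool_choice")
	}
}

func main() {
	p := map[string]any{"tool_choice": map[string]any{"type": "image_generation"}}
	stripImageGenerationToolChoice(p)
	_, present := p["tool_choice"]
	fmt.Println(present) // false
}
```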
- Added `disable-image-generation` configuration flag to disable the `image_generation` tool globally.
- Updated payload handling to remove `image_generation` tools from request payload arrays when the flag is enabled.
- Modified OpenAI image handlers (`ImagesGenerations`, `ImagesEdits`) to return 404 when the feature is disabled.
- Enhanced configuration diff logging to track changes for the `disable-image-generation` flag.
- Added accompanying unit tests for the new feature in payload helpers and image handler logic.
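The tools-array filtering could be sketched like this; the function name and the decoded-JSON shapes are assumptions, with only the "drop every `image_generation` entry when the flag is enabled" behavior taken from the commits.

```go
package main

import "fmt"

// removeImageGenerationTools drops every entry whose "type" is
// "image_generation" from a decoded tools array. An illustrative sketch
// of the payload filtering that disable-image-generation enables.
func removeImageGenerationTools(tools []map[string]any) []map[string]any {
	filtered := make([]map[string]any, 0, len(tools))
	for _, t := range tools {
		if typ, _ := t["type"].(string); typ == "image_generation" {
			continue // stripped when the flag is on
		}
		filtered = append(filtered, t)
	}
	return filtered
}

func main() {
	tools := []map[string]any{
		{"type": "image_generation", "output_format": "png"},
		{"type": "web_search"},
	}
	out := removeImageGenerationTools(tools)
	fmt.Println(len(out), out[0]["type"]) // 1 web_search
}
```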
The loadCodeAssist polling call hardcoded the User-Agent to
google-api-nodejs-client/9.15.1. Google Cloud Code returns the
paidTier object WITHOUT the availableCredits array for that UA,
so updateAntigravityCreditsBalance always saw "no credits", set the
hint to Available=false for every Google One AI Ultra account, and
the conductor-level credits fallback could never find a candidate.
Switching to resolveUserAgent(auth) (the same UA used for
streamGenerateContent / generateContent) makes the response include
availableCredits, so the credits hint is populated correctly and the
fallback can actually inject enabledCreditTypes:["GOOGLE_ONE_AI"]
when free tier is exhausted.
- Implemented `appendReasoningContent` to process a `thinking` block's signature and text as reasoning input.
- Added test cases to validate reasoning content conversion with and without text.
- Added `buildAdditionalModelRecord` to filter out zero-token usage details.
- Introduced `hasNonZeroTokenUsage` helper function for token usage validation.
- Updated tests to cover scenarios for zero and non-zero token usage.
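The zero-token filtering can be illustrated with a small sketch. The struct shape and field names here are hypothetical; only the helper names and the "keep a usage detail only if some counter is non-zero" rule are from the commit.

```go
package main

import "fmt"

// tokenUsageDetail is a hypothetical per-category token-count record.
type tokenUsageDetail struct {
	InputTokens  int
	OutputTokens int
	CachedTokens int
}

// hasNonZeroTokenUsage reports whether at least one counter is non-zero,
// mirroring the validation helper described above.
func hasNonZeroTokenUsage(d tokenUsageDetail) bool {
	return d.InputTokens != 0 || d.OutputTokens != 0 || d.CachedTokens != 0
}

// buildAdditionalModelRecord keeps only details with real token usage,
// matching the zero-filtering behavior the commit describes.
func buildAdditionalModelRecord(details []tokenUsageDetail) []tokenUsageDetail {
	kept := make([]tokenUsageDetail, 0, len(details))
	for _, d := range details {
		if hasNonZeroTokenUsage(d) {
			kept = append(kept, d)
		}
	}
	return kept
}

func main() {
	in := []tokenUsageDetail{{}, {InputTokens: 12, OutputTokens: 3}}
	fmt.Println(len(buildAdditionalModelRecord(in))) // 1
}
```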
- Introduced a `Disabled` flag to OpenAI compatibility configurations.
- Updated routing, auth selection, and API handling logic to respect the `Disabled` state.
- Extended relevant APIs, YAML configurations, and data structures to include the `Disabled` field.
- Adjusted all relevant loops and filters to skip disabled providers.
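The skip-disabled filtering amounts to a loop like the one below. The provider struct, its YAML tags, and the function name are illustrative; the commit only specifies a `Disabled` field that routing and auth selection must respect.

```go
package main

import "fmt"

// openAICompatProvider is an illustrative config record; the Disabled
// field mirrors the flag added to the OpenAI compatibility configuration.
type openAICompatProvider struct {
	Name     string `yaml:"name"`
	BaseURL  string `yaml:"base-url"`
	Disabled bool   `yaml:"disabled"`
}

// activeProviders shows the filter the routing and auth-selection loops
// now apply: skip any provider marked Disabled.
func activeProviders(all []openAICompatProvider) []openAICompatProvider {
	var out []openAICompatProvider
	for _, p := range all {
		if p.Disabled {
			continue
		}
		out = append(out, p)
	}
	return out
}

func main() {
	ps := []openAICompatProvider{{Name: "a"}, {Name: "b", Disabled: true}}
	fmt.Println(len(activeProviders(ps))) // 1
}
```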
Closes: #3060, #3059, #2977
GPT-5.5 was correctly removed from codex-free tier in 7b89583c
(since free accounts cannot access it), but the test was not updated
to reflect this. This caused TestCodexStaticModelsIncludeGPT55 to
fail on the free subtest.
Changes:
- Remove free tier from GPT-5.5 inclusion test
- Add new TestCodexFreeModelsExcludeGPT55 to explicitly verify
that free tier does NOT include GPT-5.5
- Added IP ban logic to `AuthenticateManagementKey` and Redis protocol handlers, blocking requests after multiple failed attempts.
- Introduced unit tests to validate IP ban behavior across localhost and remote clients.
- Synchronized Redis protocol's authentication policy with management key validation.
- Added `protocol_multiplexer.go`, enabling support for both HTTP and Redis protocols on a single listener.
- Introduced `redis_queue_protocol.go` to handle Redis-compatible RESP commands for queue management.
- Integrated `redisqueue` package, supporting in-memory queuing with expiration pruning.
- Updated server initialization to manage a shared listener and multiplex connections.
- Adjusted `Handler` to adopt `AuthenticateManagementKey` for modular key validation, supporting both HTTP and Redis flows.
Remove deferred body optimization and maxErrorLog constants that were
unrelated to credits fallback. Keep only MarkCreditsUsed/CreditsUsed
helpers for flagging requests that consumed AI credits.
- findAllAntigravityCreditsCandidateAuths now filters by PinnedAuthMetadataKey
to prevent credential isolation violations during credits fallback
- Release deferredBody reference on success path to avoid holding large
payloads in memory for the lifetime of the gin context
CountTokens upstream API does not support enabledCreditTypes, so
remove the dead credits fallback path from ExecuteCount and delete
the unused tryAntigravityCreditsExecuteCount method. Fix gofmt on
credits test file.
Move credits handling from executor-level retry to conductor-level
orchestration. When all free-tier auths are exhausted (429/503), the
conductor discovers auths with available Google One AI credits and
retries with enabledCreditTypes injected via context flag.
Key changes:
- Add AntigravityCreditsHint system for tracking per-auth credits state
- Conductor tries credits fallback after all auths fail (Execute/Stream/Count)
- Executor injects enabledCreditTypes only when conductor sets context flag
- Credits fallback respects provider scope (requires antigravity in providers)
- Add context cancellation check in credits fallback to avoid wasted requests
- Remove executor-level attemptCreditsFallback and preferCredits machinery
- Restructure 429 decision logic (parse details first, keyword fallback)
- Expand shouldAbort to cover INVALID_ARGUMENT/FAILED_PRECONDITION/500+UNKNOWN
- Support human-readable retry delay parsing (e.g. "1h43m56s")
Codex CLI gates the built-in image_generation tool behind
AuthMode::Chatgpt (OAuth only). When clients connect via API key
auth through CPA, the tool is absent from requests, making image
generation unavailable through the reverse proxy.
Changes:
1. Inject image_generation tool (codex_executor.go):
Add ensureImageGenerationTool() that appends
{"type":"image_generation","output_format":"png"} to the tools
array if not already present. Applied to all three execution
paths: Execute, executeCompact, and ExecuteStream.
2. Route aliases for Codex CLI direct access (server.go):
Add /backend-api/codex/responses routes that map to the same
OpenAI Responses API handlers as /v1/responses. This allows
Codex CLI to connect via chatgpt_base_url config while keeping
AuthMode::Chatgpt, which enables the built-in image_generation
tool on the client side.
3. Unit tests (codex_executor_imagegen_test.go):
Cover no-tools, existing tools, already-present, empty array,
and mixed built-in tool scenarios.
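Step 1 above can be sketched as follows. The tool payload `{"type":"image_generation","output_format":"png"}` and the idempotence requirement are from the commit; the decoded-map shapes are assumptions about how the real `ensureImageGenerationTool` in codex_executor.go represents tools.

```go
package main

import "fmt"

// ensureImageGenerationTool appends the built-in image_generation tool to
// the tools array unless an entry of that type is already present, so
// repeated application is a no-op.
func ensureImageGenerationTool(tools []map[string]any) []map[string]any {
	for _, t := range tools {
		if typ, _ := t["type"].(string); typ == "image_generation" {
			return tools // already present, leave untouched
		}
	}
	return append(tools, map[string]any{
		"type":          "image_generation",
		"output_format": "png",
	})
}

func main() {
	out := ensureImageGenerationTool([]map[string]any{{"type": "web_search"}})
	fmt.Println(len(out))                              // 2
	fmt.Println(len(ensureImageGenerationTool(out)))   // 2 (idempotent)
}
```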
- Added `GPT-Image-2` as a built-in model to avoid dependency on remote updates for Codex.
- Updated model tier functions (`CodexFree`, `CodexTeam`, etc.) to include built-in models via `WithCodexBuiltins`.
- Introduced new handlers for image generation and edit operations under `OpenAIAPIHandler`.
- Extended tests to validate 503 response for unsupported image model requests.
- Refactored `/healthz` handler to support `HEAD` requests alongside `GET`.
- Updated tests to include validation for `HEAD` requests with expected status and empty body.
Closes: #2929
Anthropic has moved the 1M-context-window feature to General Availability,
so the context-1m-2025-08-07 beta flag is no longer accepted and now causes
400 Bad Request errors when forwarded upstream.
Remove the X-CPA-CLAUDE-1M detection and the corresponding injection of the
now-invalid beta header. Also drop the unused net/textproto import that was
only needed for the header-key lookup.