LLMClient.prepare(request) returned a PreparedRequest with target: unknown.
Callers building debug UIs / request previews / plan rendering had to cast
target to the adapter's native shape at every read.
Adds PreparedRequestOf<Target> in schema and a generic Target = unknown
parameter on LLMClient.prepare so callers can opt in to a typed view:
const prepared = yield* client.prepare<OpenAIChatTarget>(request)
prepared.target.model // typed
prepared.target.messages // typed
The runtime payload is unchanged — the adapter still emits target: unknown
and the consumer asserts the shape they expect from the configured adapter.
The cast lives at the public boundary in adapter.ts; everything else stays
honest about runtime types.
Existing callers without the type argument still get target: unknown and
nothing breaks. A test in openai-chat.test.ts proves the narrowing at the
type level.
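A minimal sketch of the typed view (simplified; the real PreparedRequest carries more than target):
interface PreparedRequestOf<Target = unknown> {
  readonly target: Target  // adapter-native payload; stays unknown unless the caller opts in
}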
Schema.toTaggedUnion('type') already provides LLMEvent.guards but uses
kebab-case bracket access (LLMEvent.guards['tool-call']). Adds an LLMEvent.is
namespace with camelCase aliases that delegate to the same guards, so
consumers can write events.filter(LLMEvent.is.toolCall) instead of
events.filter(LLMEvent.guards['tool-call']).
Migrated all callsites in src/llm.ts and the two test files for consistency.
LLMEvent.guards / .match / .cases / .isAnyOf remain available for callers
who want the Effect-canonical API.
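The alias pattern, as a sketch (only the tool-call guard is named above; the
other aliases follow the same delegation, and the import path is assumed):
import { LLMEvent } from "@opencode-ai/llm"
const is = {
  toolCall: LLMEvent.guards["tool-call"],  // one camelCase alias per kebab-case guard
} as const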
After the auth-axis migration, the OpenCode bridge consults this enum
solely to decide whether to read `provider.key` and stamp it on
`model.apiKey`. The bearer / anthropic-api-key / google-api-key
distinctions used to control which header the bridge wrote; that is now
the job of the adapter's Auth axis.
Three of four variants were write-only after the migration. Collapse to:
- 'key' — provider needs an API key
- 'none' — provider does not (e.g. local)
Updated all six provider resolvers and the resolver test fixtures.
Two cleanups to make the adapter constructor surface honest about what is
canonical and what is an escape hatch:
- Adapter.compose existed to override pieces of an existing adapter, used
by OpenAI-compatible Chat before the four-axis migration. After the
migration nothing references it; OpenAI-compatible Chat composes via
fromProtocol({ protocol: OpenAIChat.protocol, ... }) instead. Delete
the function and its ComposeInput type.
- Adapter.define is the lower-level escape hatch for adapters whose
behavior genuinely cannot fit the Protocol/Endpoint/Auth/Framing model.
Its name implied it was the canonical entry point. Renamed to
Adapter.unsafe so the four-axis Adapter.fromProtocol(...) reads as the
obvious primary path and the escape hatch carries its escape semantics
in its name.
Updated test fixtures in adapter.test.ts and the AGENTS.md guidance.
The optional 'provider' field on Adapter / AdapterInput / FromProtocolInput
existed as a registry filter: requests with a different model.provider could
not find adapters that set it. After the four-axis migration no adapter
needed it (and an earlier pass removed it from the five migrated providers
because setting it broke session/llm-native tests).
Drop the field entirely and collapse the registry to a single-tier protocol
lookup. If a future deployment genuinely needs to be scoped (e.g. an
Azure-only OpenAI Responses adapter), reintroduce as 'scopedTo' with an
explicit name. Solve when needed, not before.
Also drops the test that exercised the now-removed two-tier lookup
('prefers provider-specific adapters over protocol fallbacks').
The commit that promoted queryParams to a typed ModelRef field updated
the implementation but left two JSDoc/doc references pointing at the old
model.native.queryParams path.
Promotes queryParams to a first-class ModelRef field used by Endpoint.baseURL,
so deployment-level URL query params (Azure api-version, OpenAI-compatible
provider knobs) live in a typed home instead of an opaque `native` bag.
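An illustrative ModelRef fragment (only queryParams and native are the point
here; other fields are elided and the api-version value is a placeholder):
const model = {
  queryParams: { "api-version": "2024-06-01" },  // e.g. an Azure deployment knob
  native: {},  // reserved for genuinely provider-specific opaque options
}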
Also removes write-only dead fields from `native`:
- openaiCompatibleProvider (set by family helper, never read)
- opencodeProviderID, opencodeModelID (set by opencode bridge + native session
builder, never read)
- npm (set by opencode bridge, never read)
After this commit `model.native` only carries genuinely provider-specific
opaque options that no other adapter cares about (Bedrock's aws_credentials
+ aws_region for SigV4). Drops the now-dead ProviderShared.queryParams
helper. Updates AGENTS.md: the guidance on `native` is now implicit in
the new schema JSDoc.
The two paths are independent: `model.apiKey` produces a synchronous
Bearer auth, while AWS credentials need an effectful SigV4 sign.
Hoist the bearer path out of `Effect.gen` and reuse `Auth.bearer`
directly, keeping the SigV4 path as a focused `Effect.gen` that owns
the credential lookup, signing, and header merge.
Inlines the now single-use `headersForSigning` and `signed` setup.
After the apiKey migration, every adapter explicitly specified `auth`,
and three of them (OpenAI Chat, OpenAI Responses, OpenAI-compatible Chat)
all wrote `auth: Auth.bearer`. `Auth.bearer` is a no-op when
`model.apiKey` is unset, so making it the default is strictly safer than
the previous `Auth.passthrough` default — bearer-style adapters drop
their explicit `auth` line, and adapters that need a different scheme
opt out via `Auth.apiKeyHeader(...)` (Anthropic, Gemini) or a custom
`Auth` (Bedrock SigV4 + Bearer).
Update doc comments on `fromProtocol.auth`, `Auth` type, and
`packages/llm/AGENTS.md` to reflect the new default.
Add an optional `apiKey` field to `ModelRef` so authentication is no
longer baked into `model.headers` at construction time. Each provider
adapter now passes an `Auth` to `Adapter.fromProtocol` that reads
`request.model.apiKey` per request:
- OpenAI Chat / Responses / OpenAI-compatible Chat: `Auth.bearer`
- Anthropic Messages: `Auth.apiKeyHeader("x-api-key")`
- Gemini: `Auth.apiKeyHeader("x-goog-api-key")`
- Bedrock Converse: custom auth that uses `apiKey` for Bearer auth
and falls back to SigV4 with AWS credentials
The `model()` constructors no longer fold the API key into
`model.headers`. The OpenCode bridge sets `apiKey` directly instead of
building auth headers via the now-deleted `authHeader` helper. Test
assertions move from `headers: { authorization: "Bearer ..." }` to
`apiKey: "..."`.
Both helpers had the same shape: read `request.model.apiKey`, no-op if
absent, otherwise merge a one-key header object. Lift that into a tiny
`fromApiKey(from)` helper and define both in terms of it.
The public surface (`Auth.bearer`, `Auth.apiKeyHeader`) is unchanged.
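One plausible shape for the shared helper (whether Auth merges into existing
headers or just returns the extra header is elided; semantics follow the
description above):
const fromApiKey = (from: (key: string) => Record<string, string>) =>
  (request: { model: { apiKey?: string } }): Record<string, string> =>
    request.model.apiKey === undefined ? {} : from(request.model.apiKey)
const bearer = fromApiKey((key) => ({ authorization: `Bearer ${key}` }))
const apiKeyHeader = (name: string) => fromApiKey((key) => ({ [name]: key }))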
Removes the provider field from the five migrated Adapter.fromProtocol
calls. Setting provider scopes the adapter in the registry so requests
must use the same provider id, which broke session/llm-native tests
that build models with provider 'amazon-bedrock' against the
bedrock-converse adapter.
Adapters should stay protocol-only by default and only set provider
when the deployment is genuinely scoped (e.g. an Azure-only adapter
that does not work for native OpenAI). This restores the original
protocol-only registration.
Per style guide, single-use values should be inlined. Each adapter had
a module-private constant used exactly once in its Adapter.fromProtocol
call. Inlining removes the named constants (4 DEFAULT_BASE_URL copies,
defaultBaseURL, and ANTHROPIC_VERSION) without loss of clarity — the
string literal appears at the point of use.
After migration to Adapter.fromProtocol, the sse() convenience wrapper
and withQuery() URL builder are no longer called anywhere — Framing.sse
and Endpoint.baseURL handle their responsibilities directly. Also
inlines two exported-but-unused test constants (helloPrompt,
weatherPrompt) per style guide.
Updates the AGENTS.md adapter section to describe the four orthogonal
axes that make up an adapter today (Protocol + Endpoint + Auth + Framing)
and the canonical Adapter.fromProtocol composition. Adds a folder layout
overview so the dependency direction (provider/* imports protocol/auth/
endpoint/framing, never the other way) is visible.
Extracts a Protocol implementation per provider and wires the adapter
through Adapter.fromProtocol with explicit Endpoint, Auth, and Framing:
- OpenAI Responses — Endpoint.baseURL with /responses path.
- Anthropic Messages — adds anthropic-version header via the headers slot.
- Gemini — endpoint embeds the model id and pins ?alt=sse at the URL level.
- Bedrock Converse — keeps SigV4-or-Bearer auth as a typed Auth function;
AWS event-stream framing is a typed Framing value alongside the protocol;
Endpoint.baseURL gains a function-typed default so the URL host can carry
the per-request region.
Recorded replay byte-identical across all six adapters; full provider
suite 83 pass, full llm suite 122 pass, opencode typecheck clean.
Extracts OpenAIChat.protocol so that:
- openai-chat is now a four-line Adapter.fromProtocol composition over
the protocol, the OpenAI base URL, default passthrough auth, and SSE
framing.
- openai-compatible-chat reuses OpenAIChat.protocol verbatim. The whole
adapter is one Adapter.fromProtocol call that pins protocolId to
openai-compatible-chat and requires a caller-supplied baseURL.
Bug fixes in OpenAIChat.protocol now propagate to DeepSeek, TogetherAI,
Cerebras, Baseten, Fireworks, DeepInfra, and any future OpenAI-compatible
deployment without touching their files. Recorded replay byte-identical.
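Roughly what that single composition looks like (imports elided; option names
follow the four-axis surface and exact shapes are simplified):
export const adapter = Adapter.fromProtocol({
  protocolId: "openai-compatible-chat",  // pinned so routing stays distinct from openai-chat
  protocol: OpenAIChat.protocol,         // reused verbatim
  endpoint: Endpoint.baseURL(),          // caller-supplied baseURL required on the model
  framing: Framing.sse,
})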
Introduces the four orthogonal axes that an LLM adapter is composed of:
- Protocol — semantic API contract (lowering, validation, encoding,
parsing). Examples: OpenAI Chat, Anthropic Messages, Bedrock Converse.
- Endpoint — URL construction (baseURL + path + query params).
- Auth — per-request transport authentication. Defaults to passthrough
for adapters whose auth header is baked into model.headers.
- Framing — byte stream to frames (SSE today; AWS event stream next).
Adds Adapter.fromProtocol(...) which composes these into the existing
AdapterDefinition shape so LLMClient.make(...) and the runtime registry
do not change. Existing adapters keep working through Adapter.define
until they migrate one at a time.
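A rough type-level sketch of the composition (the axis types are stand-ins
here; the real ones carry generics and richer shapes):
type Protocol = unknown; type Endpoint = unknown; type Auth = unknown; type Framing = unknown
interface FromProtocolInput {
  readonly protocol: Protocol  // semantic API contract: lowering, validation, encoding, parsing
  readonly endpoint: Endpoint  // URL construction: baseURL + path + query params
  readonly auth?: Auth         // per-request transport auth; passthrough when omitted
  readonly framing: Framing    // byte stream to frames (SSE, AWS event stream)
}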
Frees up the Protocol name for the upcoming Protocol implementation type
that owns request lowering, target validation, and stream parsing as a
single composable unit. Field names on ModelRef and Adapter stay as
'protocol' since they carry the string discriminator value.
Wires the prompt-side tool resolver to also surface opencode-native
`Tool.Def[]` alongside the AI SDK record it already builds. With
`OPENCODE_EXPERIMENTAL_LLM_NATIVE=1` set, real production sessions
that satisfy the gate now stream through `LLMNativeTools.runWithTools`
instead of `streamText` — the LLM-native path goes from
"plumbing-only" to "actually used."
Changes:
- `prompt.ts:resolveTools` collects `Tool.Def[]` from the registry
loop and tracks a feasibility flag. MCP tools (which only have AI
SDK shape) flip the flag off; the synthesized `StructuredOutput`
tool that the json_schema branch injects also flips it. The return
shape becomes `{ tools, nativeTools }` where `nativeTools` is
`undefined` whenever any non-registry tool source contributes —
callers fall through to the AI SDK path automatically. The
registry path stays in sync because every `tools[item.id] =
tool({...})` is paired with a `nativeTools.push(item)` at the same
loop iteration.
- The single caller (`prompt.ts:1396`) destructures the new shape
and passes `nativeTools` through to `handle.process(...)`. The
json_schema branch sets `nativeTools = undefined` after injecting
`StructuredOutput` so the gate falls through for structured-output
sessions.
- `runNative` (in `session/llm.ts`) gains two safety nets that work
regardless of caller behavior:
1. Coverage check: if AI SDK tools are non-empty, every key must
have a matching `Tool.Def` in `nativeTools`. A partial set
falls through. Defends against future callers that might
emit a partial native list.
2. Filter parity: `runNative` now calls the existing
`resolveTools(input)` (the in-file permission/user-disabled
filter) and intersects its keys with `nativeTools`, then
feeds the filtered AI SDK record to the dispatcher and the
filtered native list to `LLMNative.request`. Without this,
sessions could see permission-disabled tools advertised on
one path but not the other.
- The dispatch path uses the filtered AI SDK tools record as the
execute table: `LLMNativeTools.runWithTools({ tools:
filteredAITools, ... })`. Tool definitions sent to the model are
the filtered native list. Every tool the model sees can dispatch.
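The resolveTools return-shape change from the first item above, as a sketch
(the AI SDK tool type is abbreviated):
import type { Tool } from "@/tool"
type ResolveToolsResult = {
  tools: Record<string, unknown>                    // AI SDK record, unchanged
  nativeTools: ReadonlyArray<Tool.Def> | undefined  // undefined once any non-registry source contributes
}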
What this enables: a session opted into the experimental flag, with
a clean toolset (registry-only, no MCP, no structured output),
running an Anthropic model, now exercises the streaming-dispatch
loop end-to-end. Tool calls fire as soon as the model finishes
streaming each tool's input; results land in the stream the moment
each handler resolves. Multi-round behavior matches phase 2 step 2b.
What this still does NOT do (deferred to step 4):
- Parity test harness comparing native vs AI SDK event sequences for
the same scripted session. Until that lands, broader confidence
comes from running real sessions with the flag set.
- MCP support on the native path. Sessions with MCP servers
configured stay on AI SDK indefinitely.
- Native support for the synthesized `StructuredOutput` tool.
Verification: opencode typecheck clean for `src/session/*` (the
TUI-side errors visible in the working tree are Kit's parallel
work, untouched here); bridge area tests 36/0/0 across
`llm-native.test.ts` + `llm-native-stream.test.ts` +
`llm-bridge.test.ts`; `prompt.test.ts` still 47/0/0 (no regression
from the resolveTools shape change).
Lands the streaming-dispatch tool loop for the LLM-native path. When
the gate-passing session has `nativeTools` populated, the native
runner forks an AI SDK `tool.execute(...)` the moment a `tool-call`
event arrives mid-stream and injects a synthetic `tool-result` event
back into the same stream when the handler resolves. Long-running
tools no longer block subsequent tool-call streaming; the user sees
each result land as soon as that specific handler completes.
The driver loops across rounds: when a round ends with `reason:
"tool-calls"` AND the dispatchers produced at least one result, the
runner builds a continuation `LLMRequest` (assistant message echoing
text/reasoning/tool-call content + tool messages with results) and
recurses. Stops on a non-`tool-calls` finish, when `maxSteps`
(default 10, mirrors `ToolRuntime.run`) is reached, or when the
underlying scope is interrupted.
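The continuation decision, reduced to a sketch:
const shouldContinue = (
  finishReason: string, resultCount: number, step: number, maxSteps: number,
) => finishReason === "tool-calls" && resultCount > 0 && step < maxSteps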
New file `session/llm-native-tools.ts`:
- `runWithTools({ client, request, tools, abort, maxSteps? })` is the
public entry point. Returns a `Stream<LLMEvent, LLMError,
RequestExecutor.Service>` of merged model events + synthetic tool
results, ready to flow through `LLMNativeEvents.mapper` for
consumption by the existing session processor.
- `runOneRound` is the internal building block. It opens an unbounded
`Queue<LLMEvent, LLMError | Cause.Done>`, forks a producer that
streams the model and pushes each event to the queue, and forks a
dispatcher (via a scope-bound `FiberSet`) for every
non-provider-executed `tool-call`. Each dispatcher's result is
pushed back into the same queue. After the model stream completes,
the producer awaits `FiberSet.awaitEmpty` and ends the queue;
consumers see end-of-stream. A `Deferred<RoundState>` resolves
alongside so the multi-round driver can decide whether to recurse.
- `dispatchTool` wraps the AI SDK `tool.execute(input, { toolCallId,
messages, abortSignal })` call. Unknown-tool and execute-throws
paths produce `tool-error` events instead of failing the stream
(mirrors `ToolRuntime.run`'s defect-vs-recoverable boundary), so
the model can self-correct on the next round.
Wired into `runNative` (`session/llm.ts`): when `input.nativeTools`
is non-empty, the upstream becomes `LLMNativeTools.runWithTools(...)`
instead of `nativeClient.stream(...)`; the AI SDK `tools` record
flows in as the dispatch table. Zero-tool sessions still take the
direct-stream path (one round, no dispatch overhead).
Mapper update (`session/llm-native-events.ts`): `tool-result` events
whose `result.value` matches the opencode `Tool.ExecuteResult` shape
(`{ output: string, title?: string, metadata?: object }`) now flow
through to the AI-SDK-shaped session event with their `title` and
`metadata` preserved. Provider-executed and synthetic results that
don't match still fall back to `stringifyResult`. Without this, the
session processor would see every native tool result as
`{ title: "", metadata: {}, output: <JSON of the whole record> }`.
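The shape probe, as a sketch (the real check may be stricter about title and
metadata):
const isExecuteResult = (
  value: unknown,
): value is { output: string; title?: string; metadata?: object } =>
  typeof value === "object" && value !== null &&
  typeof (value as { output?: unknown }).output === "string"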
Smoke test (`test/session/llm-native-stream.test.ts`): scripts a
two-round Anthropic SSE backend — round 1 issues a `lookup` tool
call, round 2 replies with text after the tool result feeds back.
Asserts the full event sequence threads through `runWithTools`,
the dispatcher, and the mapper:
- `tool-call` event has the streamed JSON input parsed.
- `tool-result` event carries the `ExecuteResult` shape with
`title` + `output` populated (proving the mapper update works).
- Round 2 text-delta arrives after the synthetic tool-result.
- Final `finish` event has `finishReason: "stop"` (loop terminated).
What this still does NOT do (deferred to step 3):
- No production caller populates `nativeTools` yet; that's the
`prompt.ts:resolveTools` change. Until that lands, the gate keeps
every real session on the AI SDK path.
- No parity harness comparing native + AI SDK event sequences for
the same scripted session. That's step 4.
Verification: opencode typecheck clean; 36/0/0 across the three
bridge-area tests; 125/0/0 across the LLM package.
Adds opt-in `nativeTools?: ReadonlyArray<Tool.Def>` to `LLM.StreamInput`
so callers that route through the native path can attach typed
opencode tool definitions alongside the AI SDK `tools` record. The
gate in `runNative` widens accordingly: a session can use the native
path when it has zero tools (existing behavior) OR when it explicitly
provides `nativeTools` matching its AI SDK `tools` (new opt-in). When
`nativeTools` reaches `LLMNative.request`, the existing
`toolDefinition` converter folds each `Tool.Def` into the request's
`tools` array and the LLM core lowers it onto the wire.
This commit deliberately does NOT include the dispatch loop. A
session that opts in by setting `nativeTools` and that triggers a
`tool-call` from the model will see the call event but no
`tool-result` because the native path has no execute handler yet.
That's why no production caller populates `nativeTools`: phase 2
step 2b will land the dispatch loop and only then will real
production sessions route through here.
What this lays in place:
- `StreamInput.nativeTools` typed against `Tool.Def[]` from `@/tool`.
Aliased to `OpenCodeTool` at the import to dodge a clash with the
AI SDK `Tool` type that the same file already imports.
- The `runNative` gate flips from "no tools allowed" to "either no
tools, or `nativeTools` is supplied". An AI SDK tool count > 0
with `nativeTools` undefined still falls through, so existing
production sessions are unaffected.
- `LLMNative.request` already accepted `tools: ReadonlyArray<Tool.Def>`
and converts via `toolDefinition`. We just forward the input
through; no LLM-bridge change.
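The widened gate, reduced to its boolean shape:
const nativeGateOpen = (
  aiSdkToolCount: number,
  nativeTools: ReadonlyArray<unknown> | undefined,
) => aiSdkToolCount === 0 || nativeTools !== undefined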
Smoke coverage: a new test in `llm-native-stream.test.ts` builds a
typed `Tool.Def` (Effect Schema parameters), routes it through
`LLMNative.request` + `LLMClient.prepare`, and asserts the prepared
Anthropic target carries the tool as an `input_schema` block with
the expected JSON Schema shape. This validates the conversion path
that phase 2 step 2b will exercise from inside `runNative`.
Verification: opencode typecheck clean; 35/0/0 across the three
bridge-area tests (`llm-native.test.ts`, `llm-native-stream.test.ts`,
`llm-bridge.test.ts`).
Adds `test/session/llm-native-stream.test.ts` — one focused test that
proves the end-to-end wire-up `runNative` relies on actually produces
session events from a scripted Anthropic SSE response.
The test stays self-contained:
- Builds a fake Anthropic `Provider.Info` + `Provider.Model` via
`ProviderTest`.
- Builds an `LLMRequest` via `LLMNative.request(...)` from a
`MessageV2.WithParts` user message — the same call shape `runNative`
uses inside `session/llm.ts`.
- Creates an `LLMClient` with the same adapters list + `ProviderPatch.defaults`
list as `runNative`. The adapters are imported directly from
`@opencode-ai/llm`; if `runNative`'s `NATIVE_ADAPTERS` array changes,
this test's `adapters` constant has to follow (commented).
- Provides a single fixed-response HTTP layer that returns a scripted
Anthropic SSE body. The layer helper is inlined (12 lines) rather
than imported from `packages/llm/test/lib/http.ts` so the test
doesn't reach across package boundaries.
- Pipes the LLM stream through `LLMNativeEvents.mapper()` exactly as
`runNative` does (`Stream.flatMap` + lazy `Stream.concat` for
flush), runs it to completion, and asserts the key session events:
`text-start` precedes `text-delta`, `finish-step` carries
`finishReason: "stop"`, and `finish` carries the merged usage totals.
This does NOT test the dispatch gate inside `session/llm.ts`
(`!Flag.OPENCODE_EXPERIMENTAL_LLM_NATIVE`, missing `nativeMessages`,
tools present, non-Anthropic protocol). Those are simple boolean
conditions and don't need separate coverage. It also does not exercise
the production `Service` layer — that's deferred to Phase 2 step 2
(tool support) and Phase 2 step 3 (production caller wiring).
What the test buys: confidence that the conversion pipeline works and
catches regressions in `LLMNative.request`, the LLM adapter set, or
`LLMNativeEvents.mapper` before they would surface in a real session.
Verification: 34/0/0 across the three bridge-area tests
(`llm-native.test.ts` + `llm-native-stream.test.ts` +
`llm-bridge.test.ts`); opencode typecheck clean.
Adds the parallel `runNative()` path inside `session/llm.ts` so a narrow
slice of sessions can flow through `@opencode-ai/llm` instead of the AI
SDK `streamText`. Behavior is gated and shipped off by default; only
callers that opt in see any difference.
The full migration plan (audit gap #4) is parallel-path-with-flag,
prove parity test-by-test, flip default last. This commit is phase 1:
get the wire-up in place behind a flag with one protocol so we can see
whether the design holds before committing to the full migration.
Wire-up summary:
- New flag `OPENCODE_EXPERIMENTAL_LLM_NATIVE` (also enabled by the
umbrella `OPENCODE_EXPERIMENTAL`). Off by default.
- The session-LLM `live` layer now consumes `RequestExecutor.Service`,
and the `defaultLayer` provides `RequestExecutor.defaultLayer` so a
Node fetch HTTP client backs every native stream.
- `runNative(input)` returns `Stream<Event> | undefined`. `undefined`
means "fall through to AI SDK." It returns a real stream only when
every gate passes: the flag is set, the caller populated
`input.nativeMessages` (the bridge needs typed `MessageV2.WithParts`,
not the AI SDK `messages` array), the session has zero tools (Phase
2 will lift this), and the bridge routes the model to a protocol in
`NATIVE_PROTOCOLS`.
- `NATIVE_PROTOCOLS` is a single-entry set today: `anthropic-messages`.
Other adapters are imported and registered with the client so the
Phase 2 expansion is a one-line edit, not an architecture change.
- Stream wiring: client.stream(req) -> Stream.flatMap(event ->
fromIterable(map.map(event))) -> Stream.concat(suspended
fromIterable(map.flush())) -> Stream.provideService(
RequestExecutor.Service, executor). The flush stream is built lazily
with `Stream.unwrap(Effect.sync(...))` so it observes the mapper
final state after every upstream event has been mapped.
- The mapper (`LLMNativeEvents.mapper`) emits AI-SDK-shaped session
events from `LLMEvent` so downstream consumers see one shape.
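The wiring above as a code sketch (client, map, executor, and
RequestExecutor come from the surrounding runNative scope; signatures are
simplified):
import { Effect, Stream } from "effect"
const events = client.stream(request).pipe(
  Stream.flatMap((event) => Stream.fromIterable(map.map(event))),
  Stream.concat(Stream.unwrap(Effect.sync(() => Stream.fromIterable(map.flush())))),
  Stream.provideService(RequestExecutor.Service, executor),
)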
What this does NOT do (deferred to later phases):
- No tool support on the native path (skipped, falls through).
- No parity harness yet; Phase 2 builds it.
- No production traffic; flag is off by default and no production
caller populates `nativeMessages`.
- No reasoning/cache/multi-modal coverage. Anthropic supports reasoning
and cache via existing patches, so those start working as soon as a
caller routes a real session through.
Verification: opencode typecheck clean, bridge tests still green
(33/0/0 across llm-native.test.ts + llm-bridge.test.ts); LLM package
tests green (123/0/0).
Five review findings; all small, all independent.
H2: Bedrock used raw `JSON.parse` and `JSON.stringify` despite the
package rule against ad-hoc JSON encoders. The in-loop parse on each
event-stream frame goes through `ProviderShared.parseJson` (yielded
inside `Effect.gen`); the `decodeChunk` error fallback uses
`ProviderShared.encodeJson` instead of `JSON.stringify` for the raw
field on `ProviderChunkError`. No behavior change — just channels
JSON through the shared Schema-driven codec.
H3: `BedrockConverse.toHttp` built a `baseHeaders` record with
`content-type: application/json` and passed it through both auth
paths. The bearer path called `jsonPost` with the raw model headers
(no manual content-type), the SigV4 path used `baseHeaders` plus the
signed result. Two paths produced subtly different header sets and
both relied on `jsonPost` overwriting/adding the same content-type
key. Simplify: drop the unused bearer-side construction; rename the
SigV4 input to `headersForSigning` and document why content-type
must be present at signing time (signature covers it).
M4: Lift `isRecord` from `gemini.ts` into `ProviderShared.isRecord`
so adapters share one definition. The duplicates in `llm.ts` (LLM IR
layer) and `llm-native.ts` (OpenCode bridge) stay where they are —
those are at different layers and importing from `provider/` would
invert the dependency direction. Net effect: the provider layer
goes from 2 copies to 1.
L8: `TransportError` lost everything but the message string.
Surface the originating reason tag (`Timeout` / `TransportError` /
`ResponseError` / `RequestError`) and the request URL when
available, both as optional Schema fields. Consumers that don't
care keep getting the same `message` rendering; consumers that do
can finally render "timed out connecting to https://..." instead
of "HTTP transport failed".
M9 + L3: Two dead branches. Anthropic's `processChunk` had
`?? ""` fallbacks for `partial_json` after an early-return guard
already proved it non-empty. OpenAI Chat's `mapFinishReason` had
`if (reason === undefined || reason === null) return "unknown"`
followed by `return "unknown"` — both branches went to the same
place. Drop the unreachable code.
120 LLM-package tests + 33 OpenCode bridge tests still green.
Three review findings collapsed into one ProviderShared pass.
M1: Five adapters duplicated the same six-line block:
const ChunkJson = Schema.fromJsonString(Chunk)
const TargetJson = Schema.fromJsonString(Target)
const decodeChunkSync = Schema.decodeUnknownSync(ChunkJson)
const encodeTarget = Schema.encodeSync(TargetJson)
const decodeTarget = Schema.decodeUnknownEffect(Draft.pipe(Schema.decodeTo(Target)))
const decodeChunk = (data) => Effect.try({...chunkError(...)})
Lift it into `ProviderShared.codecs({ adapter, draft, target, chunk,
chunkErrorMessage })` returning `{ encodeTarget, decodeTarget,
decodeChunk }`. The result drops directly into `Adapter.define`'s
`validate` field (uses `validateWith` internally to map parse errors
to InvalidRequestError). Adopted in OpenAI Chat, OpenAI Responses,
Anthropic Messages, and Gemini. Bedrock has a custom event-stream
`decodeChunk` that takes `unknown` (not `string`) so it keeps its
inline codecs.
M2: Four adapters defined an identical `ToolAccumulator` interface
(`{ readonly id: string; readonly name: string; readonly input:
string }`). Lift to `ProviderShared.ToolAccumulator`. Anthropic
extends it locally with `providerExecuted` for hosted tools.
M3: The five `mapUsage` implementations had subtly different
`totalTokens` policies — OpenAI Chat passed through whatever the
provider sent, OpenAI Responses unconditionally summed inputs and
output (publishing `totalTokens: 0` when both were `undefined`),
Anthropic and Gemini guarded with conditionals, Bedrock used a
`(...) || undefined` falsy fallback. Add `ProviderShared.totalTokens`
with one rule: prefer provider-supplied total, else sum inputs and
outputs only when at least one is defined, else `undefined`. Fixes
the OpenAI Responses `totalTokens: 0` bug.
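The rule, as a sketch (the real helper's signature may differ):
const totalTokens = (provided?: number, input?: number, output?: number) =>
  provided !== undefined ? provided
  : input === undefined && output === undefined ? undefined
  : (input ?? 0) + (output ?? 0)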
M6: Anthropic's `mergeUsage` recomputed `totalTokens` from the merged
input/output via two nested ?? chains and a conditional sum.
Simplified to use the same totalTokens helper, with `inputTokens` and
`outputTokens` extracted as locals so the merge is one ?? per field
and the comment explains why merging exists (Anthropic emits usage
on `message_start` and `message_delta`).
No behavior changes other than the OpenAI Responses fix; existing
tests pass unchanged. 120 LLM-package tests + 33 OpenCode bridge
tests green.
Two issues from the review of the LLM package's six adapters.
H1: Inconsistent apiKey precedence. Five of six adapters spread the
caller's headers first then set the auth header (apiKey wins), but
`OpenAICompatibleChat.model` did the opposite (caller headers won).
That meant a user passing both `apiKey` and `headers.authorization`
would get auth from a different source depending on which adapter
they routed through. Flip the OpenAI-compatible adapter to match the
rest, and add a comment documenting the rule: apiKey wins, callers
who want their own auth header should omit `apiKey` entirely.
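The precedence rule in code form (helper name illustrative; header key per
the existing tests):
const withAuth = (callerHeaders: Record<string, string>, apiKey: string) => ({
  ...callerHeaders,                   // caller headers first
  authorization: `Bearer ${apiKey}`,  // apiKey wins; omit apiKey to keep your own auth header
})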
H4: Gemini tool-schema sanitization was split across two functions
that both ran on every Gemini request — `convertJsonSchema` in the
adapter (lossy projection: drop empty objects, derive nullable from
type-array, allowlist of preserved keys, recursive properties/items)
and `sanitizeGeminiSchemaNode` registered as a default `tool-schema`
patch (fix-up: integer enums to strings, dangling required filtering,
untyped array typing, scalar property stripping). Both passes only
ran on Gemini models; debugging a tool schema rejection meant
checking both files.
Fold the patch's rules into the adapter as `sanitizeToolSchemaNode`,
running before the existing projection step (renamed
`projectToolSchemaNode`). Compose them in `convertToolSchema` and use
that in `lowerTool`. Delete the patch from `provider/patch.ts` and
`ProviderPatch.defaults`. The behavior is unchanged — same input,
same output — but the rules now live in one file with a header
comment explaining the two concerns.
The matching test in `gemini.test.ts` no longer needs to opt into a
patch list; it now asserts the adapter alone produces the sanitized
shape.
Closes audit gap #3. The bridge now extracts the encrypted reasoning
blob from `MessageV2.ReasoningPart.metadata` and surfaces it on
`LLM.ReasoningPart.encrypted`, where the Anthropic and Bedrock
adapters lower it to the wire — Anthropic emits `thinking.signature`,
Bedrock emits `reasoningContent.reasoningText.signature`. Without
this, multi-turn sessions with reasoning models would lose the
encrypted state on every step and break the chain.
The encrypted blob originates in three different places depending on
how the session was started:
1. AI-SDK Anthropic sessions store it as
`metadata.anthropic.signature` (per AI SDK provider-keyed
convention).
2. AI-SDK OpenAI sessions store it as
`metadata.openai.reasoningEncryptedContent`.
3. Future LLM-native sessions will store it as a top-level
`metadata.encrypted` string (cleanest shape — provider-agnostic,
matches the LLM IR field name).
The new `encryptedReasoning` helper probes all three locations in
order, so existing OpenCode sessions can be served by the LLM-native
path without re-recording reasoning content. The full `metadata`
record continues to flow through to `LLM.ReasoningPart.metadata`
unchanged, preserving any provider-specific fields adapters might
read in the future.
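A sketch of the probe order (paths from the list above; the real helper also
guards each intermediate shape):
const encryptedReasoning = (metadata?: Record<string, any>): string | undefined =>
  metadata?.anthropic?.signature ??               // 1. AI-SDK Anthropic
  metadata?.openai?.reasoningEncryptedContent ??  // 2. AI-SDK OpenAI
  (typeof metadata?.encrypted === "string" ? metadata.encrypted : undefined)  // 3. LLM-native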
OpenAI Responses encrypted reasoning round-trip is intentionally out
of scope: the LLM-package adapter doesn't yet model reasoning items
in the request body. That's a separate adapter feature requiring new
input-item schema variants and is deferred until needed.
Tests (5 new in llm-native.test.ts):
- AI-SDK Anthropic signature extracted into LLM.ReasoningPart.encrypted.
- End-to-end Anthropic lowering: bridge → client.prepare → target with
`thinking.signature` populated correctly.
- AI-SDK OpenAI reasoningEncryptedContent extracted (forward
compatibility — useful when the OpenAI Responses adapter gains
reasoning-item lowering).
- Top-level metadata.encrypted extracted (LLM-native session shape).
- No known key in metadata leaves `encrypted` undefined.
Verified: 33/0/0 across native + bridge tests (was 28; +5 from the
new reasoning extraction tests).
Closes audit gap #2 (FilePart → MediaPart not implemented).
The bridge now lowers `MessageV2.FilePart` on user messages into
`LLM.MediaPart`, unblocking image and document inputs. The first
pass supports `data:` URLs only — the inline base64 form most
commonly produced by the OpenCode UI for pasted screenshots and
attached files. `http(s):` and `file:` URLs are explicitly
rejected with a clear error so a future fetch / filesystem-read
path can plug in cleanly without regressing safety.
Implementation:
- New `lowerFilePart` helper extracts the base64 payload from a
data URL via a single regex; failure yields a typed
`UnsupportedContentError` carrying both the partType and a
`reason` that includes the offending URL for debuggability.
- New `lowerUserPart` dispatches user-side parts: text →
`LLM.text`, file → `MediaPart`. Returns identity-empty
for any unsupported part type the static gate would have caught.
- `userMessage` is now `Effect.fnUntraced` so file conversion can
yield typed errors. `lowerMessage` (the per-message dispatcher,
renamed from `messages` to free the local name) cascades the
Effect through the request flow via `Effect.forEach`.
- `supportsPart` static gate now allows `file` parts on user
messages. Assistant messages still reject file parts (the LLM
IR's MediaPart isn't valid in assistant content for any
adapter we ship today).
- `UnsupportedContentError` gains an optional `reason` field that
appends to the canonical message as `<base>: <reason>`. Existing
static-gate failures keep the same shape (no reason).
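The data-URL extraction at the core of lowerFilePart, as a sketch (regex
illustrative; the real helper yields a typed UnsupportedContentError instead
of undefined):
const base64FromDataUrl = (url: string): string | undefined =>
  url.startsWith("data:") ? /;base64,(.+)$/.exec(url)?.[1] : undefined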
Tests (3 new, 1 rewritten):
- Image data URL with filename round-trips to MediaPart with
base64-stripped data.
- PDF data URL preserves filename and base64 payload.
- `https:` URL rejected with an error mentioning the file
partType, the message ID, and the offending URL.
- The pre-existing "fails instead of dropping unsupported native
parts" test now uses a reasoning part on a user message
(reasoning is valid for assistants only) since file parts with
data URLs are no longer rejected by the static gate.
Out of scope, intentional follow-ups:
- HTTP/HTTPS URL fetching (would need HttpClient.HttpClient and a
decision on caching, retries, size limits).
- File path / file:// URL reading (would need FileSystem.FileSystem
and a permission check against the session's working directory).
- File parts on assistant messages (LLM IR doesn't model
assistant-side media; defer until we hit a provider that needs it).
- text/plain and application/x-directory file parts that the
AI-SDK path converts to text inline at message-v2.ts:791 — for
the bridge, those should be converted upstream before reaching
LLMNative.request rather than handled here.
Verified: bun typecheck clean, 28/0/0 across native + bridge
tests (was 21; +7 from the FilePart additions plus the rewritten
unsupported-parts test).
Lift the prompt-cache policy out of OpenCode's bridge and into the
LLM package as a typed, gated patch. The policy mirrors the AI-SDK
applyCaching path (packages/opencode/src/provider/transform.ts:229):
mark the first 2 system parts and the last 2 messages with an
ephemeral cache hint, gated on `model.capabilities.cache.prompt`.
Adapters lower the hint structurally — Anthropic emits
`cache_control: { type: "ephemeral" }` on the marked block,
Bedrock emits a positional `cachePoint: { type: "default" }`
after the marked block (added in 9d7d518ac). The capability gate
keeps non-cache adapters (OpenAI Responses, Gemini, OpenAI-compat
Chat) hint-free.
Why a Patch and not bridge code:
- packages/llm/AGENTS.md TODO explicitly calls for cache hint patches
- Other consumers of @opencode-ai/llm get caching for free
- The bridge stays focused on shape conversion (MessageV2 → LLMRequest)
- Patches compose via ProviderPatch.defaults (now includes this one)
- The capability gate is a typed predicate, not provider-name matching
Implementation:
- New `cachePromptHints` patch in provider/patch.ts. The
`withCacheOnLastText` helper uses Array.findLastIndex (codebase
idiom) and short-circuits when no text part exists so messages
with only tool-result content are returned identity-equal.
- `EPHEMERAL_CACHE` is a single shared CacheHint instance — no
per-request allocation, preserves `instanceof` for any consumer
that checks class identity.
- Added to `ProviderPatch.defaults` so existing callers that pass
`defaults` get cache support automatically.
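The last-text targeting, as a sketch (part shape simplified):
const withCacheOnLastText = <P extends { type: string }>(
  parts: ReadonlyArray<P>,
  hint: unknown,
) => {
  const index = parts.findLastIndex((part) => part.type === "text")
  return index === -1
    ? parts  // identity-equal: pure tool-result messages allocate nothing new
    : parts.map((part, i) => (i === index ? { ...part, cache: hint } : part))
}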
Tests (5 new in patch.test.ts):
- Marks first 2 system parts on cache-capable models.
- Marks last text part of last 2 messages.
- Targets the last text part when a message has trailing
non-text content (assistant text + tool-call).
- Returns content unchanged (identity-equal) when no text part
exists, so pure tool-result messages don't allocate.
- No-op when the model does not advertise prompt caching.
Bridge cleanup:
- Removed `applyCachePolicy`, `withCacheOnLastText`,
`updateMessageContent`, `EPHEMERAL_CACHE` from llm-native.ts
(-30 lines of bridge-side cache code).
- Dropped now-unused `CacheHint`, `LLMRequest`, `Message` imports.
- The bridge's only responsibility is now MessageV2 lowering;
callers wire `patches: ProviderPatch.defaults` at client
construction.
OpenCode tests rewritten:
- Old: assert on `request.system[N].cache` (bridge internals).
- New: assert on `prepared.target` after running through
`LLMClient.make({ adapters, patches: ProviderPatch.defaults })
.prepare(request)` — verifies the full lowering end-to-end.
- Anthropic: target.system[0..1] carry `cache_control: ephemeral`,
target.messages[1..2] carry it on the final text block.
- Bedrock: target has `cachePoint` markers after each cached block.
- Non-cache (OpenAI Responses): JSON.stringify(target) contains
none of `cache_control` / `cachePoint` / `ephemeral`.
Verified: bun typecheck clean across both packages, 120/0/0 in LLM
package (was 113; +7 from new patch tests counting parameter
variations), 21/0/0 in OpenCode native+bridge tests.
Close the parity gaps deferred from the original Bedrock pass.
Schema additions on the Converse target:
- BedrockImageBlock for { image: { format, source: { bytes } } }.
Supported formats per Converse docs: png, jpeg, gif, webp.
- BedrockDocumentBlock for { document: { format, name, source: { bytes } } }.
Supported formats: pdf, csv, doc, docx, xls, xlsx, html, txt, md.
- BedrockCachePointBlock for the positional { cachePoint: { type } }
marker. Currently emits the only Bedrock cache type, 'default'. A
TODO marks where to map ttlSeconds → ttl ('5m' | '1h') once we have
a recorded cassette to validate the wire shape.
Lowering:
- TextPart and SystemPart cache hints emit a positional cachePoint
marker right after their text block. Both 'ephemeral' and
'persistent' CacheHint types map onto Bedrock's 'default' since
Bedrock does not distinguish — this matches the convention the
Anthropic adapter uses (cache?.type === 'ephemeral' check).
- MediaPart routes by mediaType: 'image/*' → image block, everything
else → document block. MIME type → format mapping is via
IMAGE_FORMATS / DOCUMENT_FORMATS records typed with 'as const
satisfies' so the keys stay narrow at compile time.
- A small textWithCache helper collapses the 'push text, push
cachePoint if cache is set' pattern that would otherwise repeat at
three callsites (system, user-text, assistant-text).
- Bytes are encoded via ProviderShared.mediaBytes — the shared
helper Kit landed in c3346f7dc.
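The mapping style, as a sketch (entries follow the supported image formats
above; the jpg alias is per the tests below):
const IMAGE_FORMATS = {
  "image/png": "png",
  "image/jpeg": "jpeg",
  "image/jpg": "jpeg",
  "image/gif": "gif",
  "image/webp": "webp",
} as const satisfies Record<string, "png" | "jpeg" | "gif" | "webp">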
Bug fix: lowerSystem was dead code in the previous draft. The
prepare() function still inlined the pre-cache .map(...) that
discarded system cache hints. prepare() now calls lowerSystem so
the cachePoint markers actually flow through.
Tests (7 new fixtures, all green):
- Cache hint on system / user-text / assistant-text emits cachePoint
after text in each context.
- No cache hint → no cachePoint emitted (regression guard).
- Image lowering covers png / jpeg / jpg-alias / webp.
- Uint8Array image bytes are base64-encoded ([1,2,3,4,5] → AQIDBAU=).
- Document lowering with filename round-trip and missing-filename
fallback to 'document.<format>'.
- Unsupported image MIME (image/svg+xml) is rejected with a clear
error message.
- Unsupported document MIME (application/x-tar) is rejected with a
clear error message.
Recorded cassettes for cache hints, images, and documents are still
TODO — the wire shapes are exercised deterministically here and will
be validated against a live model in a follow-up cassette pass.
Verified: bun typecheck clean, 113 pass / 0 fail / 0 skip (was 106;
+7 from the new fixture tests).
Phase A continuation of the ProviderShared dedupe pass. Three more
patterns lifted into ProviderShared so they're written once:
ProviderShared.invalidRequest(message) — replaces six identical
`const invalid = (message) => new InvalidRequestError({ message })`
one-liners across openai-chat, openai-responses, anthropic-messages,
gemini, openai-compatible-chat, and bedrock-converse. Each adapter
keeps a short `const invalid = ProviderShared.invalidRequest` alias
so the 27 callsite `yield* invalid("...")` patterns are unchanged.
Bedrock's SigV4 catch path and the openai-compatible-chat baseURL
guard both go through the helper now too.
ProviderShared.validateWith(decode) — replaces the identical
`(draft) => decode(draft).pipe(Effect.mapError((e) =>
invalid(e.message)))` lambda body in five adapters. Same line count
but shorter, names the pattern, and keeps the `decode → mapError →
InvalidRequestError` translation in one canonical spot.
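The lifted pattern, as a sketch (error types simplified; a stand-in is
declared for the invalidRequest helper from the paragraph above):
import { Effect } from "effect"
declare const invalidRequest: (message: string) => Error
const validateWith = <Draft, Target, E extends { message: string }>(
  decode: (draft: Draft) => Effect.Effect<Target, E>,
) =>
  (draft: Draft) =>
    decode(draft).pipe(Effect.mapError((e) => invalidRequest(e.message)))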
ProviderShared.jsonPost({ url, body, headers }) — replaces the
five-adapter pattern of `HttpClientRequest.post(url).pipe(setHeaders,
bodyText)` for JSON-body POSTs. Sets `content-type: application/json`
last so caller headers can override everything except the
content-type. Bedrock uses it for both the bearer-auth and SigV4-
signed paths; SigV4 still signs against `baseHeaders` (which already
contained content-type) so the signature matches what the helper
ultimately sends.
Net change: -73 / +86 (+13 in shared.ts mostly JSDoc; -86 across the
six adapters). The `HttpClientRequest` and `InvalidRequestError`
imports are dropped from the five SSE adapters and from Bedrock since
they're no longer referenced directly.
Verified: `bun typecheck` clean, 106 pass / 0 fail / 0 skip
(unchanged).
Update the adapter authoring guide to reflect the dedupe pass:
- Generalize the `parse` bullet from `ProviderShared.sse` to
`ProviderShared.framed` and call out the two framing dialects
in use today (SSE for OpenAI/Anthropic/Gemini/compat, AWS event
stream for Bedrock).
- Spell out that `framed`'s `framing` parameter is the seam for
new wire formats; the rest of the pipeline is shared.
- New 'Shared adapter helpers' subsection enumerating the
`ProviderShared` exports a new adapter author should reach for
before hand-rolling: `framed`, `sse`, `sseFraming`, `joinText`,
`parseToolInput`, `parseJson`, `chunkError`.
- Closing nudge: lift 3-5 line repeats into ProviderShared rather
than copy them between adapters.
Doc-only — no code or test changes.
Promote three repeated patterns out of individual adapters into
ProviderShared so a fifth or sixth adapter doesn't write the same
glue code over again.
ProviderShared.joinText(parts) — replaces the per-adapter `text()`
helper that joined an array of parts with newlines. Used by OpenAI
Chat (system content, user text, assistant text), OpenAI Responses
(system content), and Gemini (systemInstruction). The dead copies in
Anthropic Messages and Bedrock are gone.
ProviderShared.parseToolInput(adapter, name, raw) — replaces the
identical `parseJson(adapter, raw || "{}", \`Invalid JSON input
for <adapter> tool call <name>\`)` invocation in finishToolCall
across Anthropic, OpenAI Chat, OpenAI Responses, and Bedrock. Uniform
error message and the empty-string-to-"{}" fallback handled in one
place.
ProviderShared.framed(...) — generalizes the existing `sse()` helper
so the protocol-specific framing layer is pluggable. The shared
shape is bytes → frames → chunk → (state, events) with mapError /
mapEffect / mapAccumEffect / catchCause as the spine; framing is
the only varying step.
ProviderShared.sseFraming — the SSE-specific framing implementation
(decodeText + Sse.decode + filter [DONE]). The existing `sse()`
helper now delegates to `framed` with this framing, keeping the
adapter API surface identical.
Bedrock's parseStream — collapses to a single `ProviderShared.framed`
call with its own `eventStreamFraming` step. The cursor-based byte
buffer + AWS event-stream codec live as inputs to framed; everything
else is shared with the SSE adapters. Bedrock now has the same
`catchCause → streamError` terminal-error normalization that SSE
adapters have (it was missing before this refactor).
Net effect across the llm package: -66 lines / +114 lines but the
+114 is mostly JSDoc on the new helpers; adapter implementations
shrink. A future protocol (Bedrock InvokeModel, Vertex Gemini binary
streaming, etc.) plugs in by supplying its `framing` step.
Verified: `bun typecheck` clean, 106 pass / 0 fail / 0 skip
(unchanged from before the refactor).
Cleanup of the Bedrock adapter (ba1705d) following parallel review
passes for code reuse, code quality, and efficiency.
- Drop dead `text` join helper and unused `TextPart` import.
- Schema-validate `model.native.aws_credentials` instead of seven
manual `typeof` guards in `credentialsFromInput`. Removes the
unsafe `as Record<string, unknown>` cast and fixes the dead
`native?.region` fallback (the `model()` constructor only writes
`aws_region`).
- Skip the JSON.parse → JSON.stringify → Schema.fromJsonString triple
round-trip in the frame consumer. The eventstream codec already
hands us a UTF-8 payload; parse once and feed the wrapped object
directly to `Schema.decodeUnknownSync(BedrockChunk)`.
- Replace O(n²) buffer concat in `consumeFrames` with a cursor-based
state `{ buffer, offset }`. Compaction happens once per network
chunk via `appendChunk` instead of per frame; frame slicing is
zero-copy via `subarray`. Bounded buffer growth regardless of
stream length.
- Rename `ParserState.finishReason` → `pendingStopReason` (raw
string) and defer the `mapFinishReason` call to the single emit
site, plus the `onHalt` fallback. Tightens the helper's signature
to `(reason: string)` so the chunk-typed `messageStop.stopReason`
flows through without the optional widening.
- Restructure `signRequest` to take an object parameter (was four
positional args), and replace the manual `forEach`-into-record with
`Object.fromEntries(signed.headers.entries())`.
- Inline single-use `status` and `useTools` variables.
- Widen `fixedResponse` to accept `ConstructorParameters<typeof Response>[0]`
so binary fixtures (`Uint8Array`, streams) flow without casts. The
Bedrock test's `fixedBytes` helper now wraps it cleanly.
- Tidy `captureResponseBody` into a ternary returning the union shape
directly so the call site spreads the captured object without
reaching for `bodyEncoding` explicitly.
Verified: `bun typecheck` clean, 106 pass / 0 fail / 0 skip
(unchanged from before the refactor).
Implements the AWS Bedrock Converse streaming protocol as the 5th
first-class adapter in @opencode-ai/llm. Single `bedrock-converse`
adapter covers all underlying models (Anthropic, Llama, Mistral,
Cohere, Nova, Titan) since Converse is uniform.
Wire format: messages with text / reasoning / toolUse / toolResult
content blocks, system blocks, inferenceConfig, toolConfig with
toolSpec + toolChoice. Image / document / cache-point content types
are still TODO.
Streaming: AWS event stream binary framing via @smithy/eventstream-codec.
Each frame is decoded then dispatched by `:event-type` header into
the chunk schema. Bedrock splits the finish across `messageStop`
(reason) and `metadata` (usage) — the parser stashes the reason and
emits a single consolidated `request-finish` event when metadata
arrives, with an `onHalt` fallback for truncated streams.
Auth: two paths. Bearer API key (newer) when the consumer sets
`model.headers.authorization = 'Bearer <key>'`. SigV4 signing via
aws4fetch otherwise — credentials live on `model.native.aws_credentials`
and are signed at `toHttp` time so STS-vended tokens are picked up
when the consumer rebuilds the model. The adapter rejects requests
with neither auth path with a clear InvalidRequestError.
Routing: `@ai-sdk/amazon-bedrock` lowers to `bedrock-converse` via
the new `AmazonBedrock` provider routing module; the OpenCode
`llm-bridge.ts` registers it.
Cassette format: response bodies under
`application/vnd.amazon.eventstream` and `application/octet-stream`
content types are now stored as base64 with `bodyEncoding: 'base64'`
on the response snapshot — text round-tripping mangled the CRC32
fields in event-stream frames. Existing cassettes (SSE/JSON) omit
the field and decode as text unchanged.
Tests: 11 deterministic fixtures (prepare / lower messages / lower
tool config / decode text+usage / decode tool calls / decode
reasoning / decode throttling exception / auth path validation /
SigV4 plumbing) + 2 recorded cassettes against live Bedrock
(`us.amazon.nova-micro-v1:0` in us-east-1) for streaming text and
streaming tool calls.
AGENTS.md: documents the Bedrock auth model, binary cassette format,
and updates the protocol coverage / cassette backlog.
Deps: @smithy/eventstream-codec, @smithy/util-utf8, aws4fetch (~40KB
combined; matches AI SDK's approach).
Add a `providerExecuted: boolean` flag to `tool-call` and `tool-result`
events plus the persisted `ToolResultPart`. When set, the tool runtime
skips client dispatch (the provider already executed the tool) and folds
both events into the assistant message so the next round's history
carries the call + result for context.
Anthropic: decode `server_tool_use` blocks and the three server tool
result block types (`web_search_tool_result`, `code_execution_tool_result`,
`web_fetch_tool_result`) into `tool-call` / `tool-result` events with
`providerExecuted: true`. Round-trip the same parts back into the
provider when the assistant message is replayed in subsequent requests.
Result block error payloads (`*_tool_result_error`) surface as
`result.type === "error"`.
OpenAI Responses: decode hosted tool items emitted via
`response.output_item.done` (`web_search_call`, `file_search_call`,
`code_interpreter_call`, `computer_use_call`, `image_generation_call`,
`mcp_call`, `local_shell_call`) as `tool-call` + `tool-result` pairs
with `providerExecuted: true`. Each tool's input fields are pulled out
explicitly; the full item is passed through as the result payload so
consumers can read outputs / sources / status without re-decoding.
Tool runtime: extend the dispatch decision so provider-executed
tool-calls bypass the handler lookup, and tool-result events with
`providerExecuted: true` are appended to the assistant content for
round-trip rather than being treated as a separate tool message.
Tests: 7 new deterministic fixtures cover Anthropic decode (success +
error result + round-trip + unknown server tool name), OpenAI Responses
decode (web_search_call, code_interpreter_call), and tool-runtime
skip-dispatch.
AGENTS.md updates the runtime section to describe pass-through behavior
and notes the transport-agnostic design that keeps a future WebSocket
adapter (e.g. OpenAI Codex backend) as a sibling rather than a core
rewrite.
Captures both model rounds of the typed ToolRuntime tool loop into a
single multi-interaction cassette: round 1 carries the user prompt and
returns a get_weather tool call; round 2 carries the assistant tool call
plus tool result and returns a final answer.
Verifies the multi-interaction cassette infrastructure end-to-end against
a real provider.
The cassette layer already stored interactions in an array, but replay
always used find-first structural matching and cassettes were written
as one minified JSON line. That makes tool-loop and retry recordings
unworkable: identical requests collapse to one response, and large
recordings are unreadable on review.
- Add `sequentialMatcher` for position-based dispatch so identical
retries map to recorded responses in order via an internal cursor.
- Pretty-print cassette JSON on write and reformat existing fixtures so
multi-interaction diffs stay reviewable.
- Add deterministic `record-replay.test.ts` covering default vs
sequential dispatch and cursor exhaustion.
- Add an OpenAI Chat tool-loop recorded test scaffold gated behind
`OPENAI_API_KEY` so a single `RECORD=true` run captures every
model round of the loop into one cassette file.
- Update AGENTS.md to document multi-interaction cassettes and the
matcher options, and mark the cassette ergonomics TODO complete.
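The cursor idea behind sequentialMatcher (first item above), as a sketch; the
real matcher plugs into the replay layer's dispatch hook:
const sequentialMatcher = <R>(responses: ReadonlyArray<R>) => {
  let cursor = 0
  return (): R | undefined =>
    cursor < responses.length ? responses[cursor++] : undefined  // undefined = cursor exhausted
}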
Simplify pass after the typed ToolRuntime initial drop. Findings from a
parallel review (code reuse + quality + perf):
src/tool.ts
- Tool now carries memoized decode/encode codecs and a precomputed
ToolDefinition, derived once at tool() construction time. The runtime no
longer rebuilds Schema closures or JSON Schema docs per call/per run.
- Constrains parameters/success to Schema.Codec<T, any, never, never> so
the codecs have no service requirements. Drops the 'as unknown as' casts
the runtime needed previously.
- Fixes a latent bug: schemas with $ref now correctly emit $defs on
ToolDefinition.inputSchema (toJsonSchemaDocument's definitions were
silently dropped before).
src/tool-runtime.ts
- Uses LLMRequest constructor instead of 'as LLMRequest' casts.
- Default tool dispatch concurrency is 10 (was 'unbounded'); exposed via
RunOptions.concurrency. Unbounded is still available for handlers that
do not share a saturable resource.
- Drops dead 'usage' state, the single-use Dispatched interface, and the
DEFAULT_MAX_STEPS constant per the inline-when-used style rule.
- accumulate() now factors text-delta and reasoning-delta into one helper.
test/lib/openai-chunks.ts (new)
- Shared deltaChunk / usageChunk / toolCallChunk / finishChunk helpers.
test/lib/http.ts
- scriptedResponses moved here from tool-runtime.test.ts so future
multi-step adapter tests can reuse it. Also picks up parallel work that
swapped HandlerInput to a 'respond' callback for cleaner Response
construction.
test/tool-runtime.test.ts
- Uses LLMEvent.guards for typed event filtering instead of cast-and-check.
- Concurrent test now uses sseEvents + deltaChunk instead of a hand-rolled
body string.
Includes parallel callsite updates in test/adapter.test.ts and
test/provider/openai-compatible-chat.test.ts that adopt the 'respond' API
in lib/http.ts.
Schema-first, Effect-first tool loop:
- 'tool({ description, parameters, success, execute })' constructs a fully
typed Tool. parameters and success are Effect Schemas; execute is typed
against them and returns Effect<Success, ToolFailure>. Handler dependencies
are closed over at construction time so the runtime never sees per-tool
services.
- 'ToolRuntime.run(client, { request, tools, maxSteps?, stopWhen? })' streams
the model, decodes tool-call inputs against parameters, dispatches to the
matching handler, encodes results against success, emits tool-result events,
appends assistant + tool messages, and re-streams. Stops on non-tool-calls
finish, maxSteps, or stopWhen.
- Three recoverable error paths emit tool-error events so the model can
self-correct: unknown tool name, input fails parameters Schema, handler
returns ToolFailure. Defects fail the stream.
- 'ToolFailure' added to the schema and exported as the single forced error
channel for handlers.
- Tool definitions on the LLMRequest are derived via toJsonSchemaDocument so
consumers don't write JSON Schema by hand.
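What constructing a tool looks like under this surface (schemas illustrative;
the `tool` import path is assumed):
import { Effect, Schema } from "effect"
const getWeather = tool({
  description: "Current weather for a city",
  parameters: Schema.Struct({ city: Schema.String }),
  success: Schema.Struct({ temperature: Schema.Number }),
  execute: ({ city }) => Effect.succeed({ temperature: 21 }),  // a real handler would use `city`
})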
8 deterministic fixture tests cover the loop, errors, maxSteps, stopWhen, and
parallel tool calls in one step.
Per the package style guide, sync if/return functions that need to fail
should yield the error directly via Effect.gen rather than laddering
Effect.fail / Effect.succeed across every branch.
Touches all four adapters' tool-choice lowering. The naming-required
validation now reads as 'guard, then return' rather than being embedded in a
chain of monadic returns. Behavior unchanged.
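Illustrative before/after of that rule, using an invented tool-choice check rather than any adapter's real code; the error class and its yieldable behavior are assumptions in the style of the package's guidance.
```ts
import { Data, Effect } from "effect"

class MissingToolNameError extends Data.TaggedError("MissingToolNameError")<{}> {}

type ToolChoice = { type: "auto" | "tool"; name?: string }

// Before: every branch returns Effect.fail / Effect.succeed.
const lowerBefore = (choice: ToolChoice) =>
  choice.type === "tool" && choice.name === undefined
    ? Effect.fail(new MissingToolNameError())
    : Effect.succeed(choice)

// After: guard, then return, inside Effect.gen; the error is yielded directly.
const lowerAfter = (choice: ToolChoice) =>
  Effect.gen(function* () {
    if (choice.type === "tool" && choice.name === undefined) {
      return yield* new MissingToolNameError()
    }
    return choice
  })
```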
Every adapter's parse already produces LLMEvents (via the process callback in
the shared sse helper), and every raise was Stream.make(event). The Chunk type
parameter, the raise field, the RaiseState interface, and the Stream.flatMap
raise step in client.stream were all pure overhead.
- Adapter contract shrinks from <Draft, Target, Chunk> to <Draft, Target>.
- All four adapters drop their raise: (event) => Stream.make(event) line.
- client.stream skips the no-op flatMap.
- AGENTS.md adapter section reflects the simpler contract.
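A rough before/after sketch of that shrink. The generic parameter lists and the raise shape come from the text above; the member signatures, stand-in types, and the string body are simplified assumptions, not the real contract.
```ts
import type { Stream } from "effect"

// Stand-ins for the package's real types.
type LLMEvent = { type: string }
type ChunkError = { _tag: "ProviderChunkError" }

// Before: parse produced adapter-specific Chunks and raise lifted each one into a
// single-event stream (always Stream.make(event)).
interface AdapterBefore<Draft, Target, Chunk> {
  prepare: (draft: Draft) => Target
  parse: (body: string) => Stream.Stream<Chunk, ChunkError>
  raise: (chunk: Chunk) => Stream.Stream<LLMEvent, never>
}

// After: Chunk, raise, and the client.stream flatMap over raise are gone; parse emits
// LLMEvents directly.
interface AdapterAfter<Draft, Target> {
  prepare: (draft: Draft) => Target
  parse: (body: string) => Stream.Stream<LLMEvent, ChunkError>
}
```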
Updates the AGENTS.md TODO list:
- mark Responses, Anthropic, and Gemini adapter coverage as done
- mark the Gemini schema sanitizer port as done
- add concrete next-step items for OpenCode integration: ModelRef bridge,
request bridge, provider-quirk patches, request/stream parity tests, and
a flagged rollout against existing session/llm.test.ts cases
- add OpenAI-compatible Chat, Bedrock Converse, and Vertex routing as
outstanding adapter/dispatch decisions
Gemini rejects integer enums, dangling required fields, untyped arrays, and
object keywords on scalar schemas. The sanitizer was previously a divergent
copy in OpenCode; this lands it in the package as a tool-schema patch with
deterministic tests and selects it for Gemini-protocol or Gemini-named models.
Also tightens the Gemini test suite: covers tool-choice none, drops the
tool-input-delta assertion that Gemini does not actually emit, and confirms
total usage stays undefined when only thoughtsTokenCount arrives.
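As a hedged illustration (not the package's fixtures), this is the kind of tool schema the sanitizer has to rewrite before Gemini will accept it:
```ts
// Each commented line names one of the rejections listed above; the sanitizer rewrites or
// drops these constructs. The exact rewritten output is not shown here.
const problematic = {
  type: "object",
  properties: {
    level: { type: "integer", enum: [1, 2, 3] }, // integer enum: rejected
    tags: { type: "array" },                     // untyped array with no items: rejected
    note: { type: "string", properties: {} },    // object keyword on a scalar: rejected
  },
  required: ["level", "missing"],                // "missing" is a dangling required field
}
```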
- shared sse helper now expects Effectful decodeChunk and process callbacks,
so adapter parsers can be Effect.gen and yield typed ProviderChunkError
instead of throwing across the sync mapAccum boundary.
- parseJson returns Effect<unknown, ProviderChunkError> via Effect.try,
matching the package style guide on yieldable errors.
- OpenAI Chat finalizes accumulated tool inputs eagerly when finish_reason
arrives, surfacing JSON parse failures at the boundary instead of at halt.
onHalt stays sync and just emits from state.
- generate's runFold reducer now mutates the accumulator instead of
reallocating the events array on every chunk, dropping O(n^2) growth on
long streams.
- Structurally match recorded requests by canonical JSON so non-deterministic
field ordering doesn't break replay.
- Pluggable header allow-list and body redaction hook on the record/replay
layer, so adapters with non-default auth (Anthropic, Bedrock) can plug in
without touching this file.
- Move the cassette-name dedupe set inside recordedTests() so two describe
files using different prefixes can run in parallel.
- Replace inline SSE template literals and per-file HTTP layers with shared
test/lib helpers (sseEvents, fixedResponse, dynamicResponse, truncatedStream).
- Tighten recorded-test assertions to exact text and usage so adapter parser
regressions surface immediately instead of passing fuzzy length>0 checks.
- Add cancellation and mid-stream transport-error tests for the OpenAI Chat
adapter.
- Add cross-phase patch tests that verify each phase sees an updated
PatchContext and that same-order patches sort deterministically by id.
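Two of those bullets are easy to picture with small sketches: the parseJson shape (the error class here is a stand-in; its constructor arguments are assumed) and one plausible canonical-JSON matcher for the record/replay layer (illustrative, not the actual implementation).
```ts
import { Effect } from "effect"

// Stand-in for the package's tagged error; the real class lives in its schema module.
class ProviderChunkError {
  readonly _tag = "ProviderChunkError"
  constructor(readonly props: { raw: string; cause: unknown }) {}
}

// parseJson: a yieldable failure instead of a SyntaxError thrown across the SSE boundary.
const parseJson = (raw: string): Effect.Effect<unknown, ProviderChunkError> =>
  Effect.try({
    try: () => JSON.parse(raw) as unknown,
    catch: (cause) => new ProviderChunkError({ raw, cause }),
  })

// Structural request matching: canonicalize JSON (keys sorted recursively) so
// non-deterministic field ordering doesn't break cassette replay.
const canonical = (value: unknown): string =>
  JSON.stringify(value, (_key, v) =>
    v !== null && typeof v === "object" && !Array.isArray(v)
      ? Object.fromEntries(Object.entries(v as Record<string, unknown>).sort(([a], [b]) => a.localeCompare(b)))
      : v,
  )

const matches = (recorded: unknown, incoming: unknown) => canonical(recorded) === canonical(incoming)
```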
Reorganizes the Installation service implementation by grouping info, method, latest, and upgrade methods into a single result object. This improves code locality and makes the service interface more maintainable. Also adds a clarifying comment explaining why the package manager's resolver is used for version lookups (to ensure registries, mirrors, auth, proxies, and dist-tags match upgrade behavior).
The cross-spawn-spawner module has been moved from src/effect/ to src/
to simplify the core package structure. The src/types.d.ts file which
contained unused type declarations has also been removed. All imports
throughout the codebase have been updated to reflect the new location.
This change reduces the package's internal complexity by flattening the
module hierarchy and removing dead code, making future maintenance easier.
Moved the cross-spawn-spawner module from packages/opencode to packages/core
to enable code sharing across the monorepo. This consolidates the process
spawning infrastructure into the core package so other packages can use
cross-platform child process spawning without duplicating the implementation.
Updated all import statements across the codebase to reference the new
location (@opencode-ai/core/effect/cross-spawn-spawner). Removed the
local copy from the opencode package along with its tests.
Move the Global module from packages/opencode/src/global to packages/core/src/global
to provide a unified location for managing XDG directories and application paths.
This eliminates duplicate path definitions across packages and ensures consistent
access to data, config, cache, state, log, and bin directories throughout the codebase.
Moves effect logging, observability, runtime utilities, flags, installation
version info, and process utilities from opencode to core package. This
enables better code sharing across packages and establishes core as the
single source of truth for foundational utilities.
All internal imports updated to use @opencode-ai/core paths for consistency.
The permission configuration previously used a generic record type that didn't offer editor completions. Updated the schema to explicitly list all tool permission keys (read, edit, glob, grep, list, bash, task, external_directory, lsp, skill, todowrite, question, webfetch, websearch, codesearch, doom_loop) with proper types, enabling autocomplete when editing permission files.
Fixes an issue where GitHub artifact downloads could strip executable bits
from binaries, causing Docker builds to fail when using unpacked dist files
directly rather than published tarballs. The chmod now runs before the
publish check to guarantee binaries are executable.
Users can now pass custom OpenTelemetry resource attributes via the OTEL_RESOURCE_ATTRIBUTES environment variable (comma-separated key=value format). These attributes are automatically included in all telemetry data sent from both the main process and workspace environments, enabling better observability integration with existing monitoring systems that rely on custom resource tags.
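A sketch of the comma-separated key=value format (the env var name comes from the change above; the parsing code is illustrative, not the actual implementation):
```ts
// OTEL_RESOURCE_ATTRIBUTES="team=ai-tools,deployment.environment=staging,service.version=1.2.3"
const parseResourceAttributes = (raw: string): Record<string, string> =>
  Object.fromEntries(
    raw
      .split(",")
      .map((pair) => pair.trim())
      .filter((pair) => pair.includes("="))
      .map((pair) => {
        const index = pair.indexOf("=")
        return [pair.slice(0, index), pair.slice(index + 1)] as const
      }),
  )
```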
Users can now see when transient failures occur during assistant responses,
such as rate limits or provider overloads, giving visibility into which
issues were encountered and automatically resolved before the final response.
Move the step function from session-entry.ts to session-entry-stepper.ts and remove immer dependency. Add static fromEvent factory methods to Synthetic, Assistant, and Compaction classes for cleaner event-to-entry conversion.
- switch plugin loader tests to the effect npm module
- return Option.none() for mocked npm entrypoints
- keep test fixtures aligned with the current Npm.add contract
Previously when passing a session ID directly, the route was set during initial
render which could cause navigation issues before the router was fully ready.
Now the session navigation happens after initialization completes, ensuring
the TUI properly loads the requested session when users resume with --session-id.
On Windows, native terminals don't support POSIX suspend (ctrl+z), so we now
assign ctrl+z to input undo instead of terminal suspend. Terminal suspend is
disabled on Windows to avoid conflicts with the undo functionality.
Config is now loaded eagerly during project bootstrap so users can see config loading in traces during startup. This helps diagnose configuration issues earlier in the initialization flow.
NPM installation logic has been refactored with a unified reify function and improved InstallFailedError that includes both the packages being installed and the target directory. This provides users with complete context when package installations fail, making it easier to identify which dependency or project directory caused the issue.
Extract error handling, parsing logic, and variable substitution into dedicated
modules. This reduces duplication between tui.json and opencode.json parsing
and makes the config system easier to extend for future config formats.
Adds explanatory comments to config.ts and plugin.ts clarifying:
- How plugin specs are stored and normalized during config loading
- Why plugin_origins tracks provenance for location-sensitive decisions
- Why path-like specs are resolved early to prevent reinterpretation during merges
- How plugin deduplication works while keeping origin metadata for writes and diagnostics
Fixes potential plugin resolution issues when switching between projects by wrapping
plugin loading in Instance.provide(). This ensures each plugin resolves dependencies
relative to its correct project directory instead of inheriting context from whatever
instance happened to be active.
Also reorganizes config loading code into focused modules (command.ts, managed.ts,
plugin.ts) to make the codebase easier to maintain and test.
Ensures users on the prod channel have their data persisted to the same
database as latest and beta channels, preventing data fragmentation
across different release channels.
For the beta branch, skip Windows and Linux code signing along with artifact
downloads, so beta builds don't go through production release processes.
The CLI imports every top-level command before argument parsing has
decided which handler will run. This makes simple invocations pay for
the full command graph up front and slows down the default startup path.
Parse the root argv first and load only the command module that matches
the selected top-level command. Keep falling back to the default TUI
path for non-command positionals, and preserve root help, version, and
completion handling.
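A hedged sketch of that shape; the command names and module paths are placeholders, not the CLI's real layout.
```ts
// Only the module for the selected top-level command is imported; anything that isn't a
// known command falls back to the TUI path, and root help/version are handled before this.
const loadCommand = async (positional: string | undefined) => {
  switch (positional) {
    case "run":
      return (await import("./cmd/run")).RunCommand
    case "serve":
      return (await import("./cmd/serve")).ServeCommand
    default:
      return (await import("./cmd/tui")).TuiCommand
  }
}
```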
- Create script/github/close-issues.ts to close stale issues after 60 days
- Add GitHub Action workflow to run daily at 2 AM
- Remove old stale-issues workflow to avoid conflicts
The stale-issues workflow was hitting the default 30 operations limit,
preventing it from processing all 2900+ issues/PRs. Increased to 1000
to handle the full backlog. Also pinned to exact v10.2.0 for reproducibility.
The compaction message now correctly indicates the current session was compacted rather than the entire history, making it clearer to users which conversation data was optimized.
- Add concurrency settings to cancel outdated runs when new commits are pushed
- Add contents: read permission for security hardening
- Remove redundant required job that checked test results
- Replace Server.App() with Server.Default() for internal server access
- Extract server app creation into Server.createApp(opts) for testability
- Move CORS whitelist from module-level variable to function parameter
- Update all tests to use Server.Default() instead of Server.App()
When the sidebar was collapsed (not on mobile), the background color was showing as the stronger variant instead of matching the base background. This fixes the hover state detection so users see a consistent lighter background when the sidebar is in collapsed mode.
People change models and thinking settings while composing, so keeping those controls next to the Add file button avoids hunting in the footer and reduces context switching mid-message.
Auto-accept now lives in the footer dock beside the thinking control so it stays easy to find without crowding the text box.
The Add file button moves to the bottom-left of the editor and the input gets a bit more bottom padding so the control row doesn’t overlap what you’re typing.
Restore the previous prompt control layout after the dock/position changes made the composer feel less familiar.
This brings auto-accept back to its prior spot and returns Add file to the previous placement.
Auto-accept now lives in the footer dock beside the thinking control so it stays easy to find without crowding the text box.
The Add file button moves to the bottom-left of the editor and the input gets a bit more bottom padding so the control row doesn’t overlap what you’re typing.
The test now validates that the database file is named according to the current installation channel (latest/beta get 'opencode.db', others get sanitized names). This ensures users' data is stored in the correct location based on their update channel.
Allows users to skip automatic database migrations by setting the
OPENCODE_SKIP_MIGRATIONS environment variable. Useful for testing
scenarios or when manually managing database state.
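A minimal sketch of the guard (the variable name comes from the change above; the surrounding code is illustrative):
```ts
// Any non-empty value skips automatic migrations; unset keeps the default behavior.
const maybeMigrate = async (migrate: () => Promise<void>) => {
  if (process.env["OPENCODE_SKIP_MIGRATIONS"]) return
  await migrate()
}
```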
Improve readability of the usage graph y-axis label by spelling out
'Requests per 5 hour' instead of the abbreviated 'Requests/5h'. Fix
layout on smaller screens by removing negative margin that was
causing the graph to overflow its container.
Improve visual clarity of request limits on the Go pricing page by replacing
dot-based chart with animated horizontal bars that directly show model names
and exact request counts. Add proper OpenGraph and Twitter Card meta tags for
better social sharing discovery.
Adds a list of the included AI models (GLM-5, Kimi K2.5, and MiniMax M2.5) to the Go page so users know exactly what model access their subscription provides.
Users can now see a clear visual comparison of request limits between Free tier and Go tier across all available models, making it easier to understand the value of a Go subscription at a glance.
Fixed incorrect relative import paths in Bosnian, French, Italian,
Korean, Norwegian, Portuguese, Turkish, and Chinese docs that were
referencing config.mjs from the wrong directory level. This resolves
build errors when viewing translated documentation pages.
When merging PRs into the beta branch, the sync script now attempts to automatically resolve merge conflicts using opencode before failing. This reduces manual intervention needed for beta releases when multiple PRs have overlapping changes.
Replace Bun.Glob usage with a new Glob utility wrapper around the npm 'glob' package.
This moves us off Bun-specific APIs toward standard Node.js compatible solutions.
Changes:
- Add new src/util/glob.ts utility module with scan(), scanSync(), and match()
- Default include option is 'file' (only returns files, not directories)
- Add symlink option (default: false) to control symlink following
- Migrate all 12 files using Bun.Glob to use the new Glob utility
- Add comprehensive tests for the glob utility
Breaking changes:
- Removed support for include: 'dir' option (use include: 'all' and filter manually)
- symlink now defaults to false (was true in most Bun.Glob usages)
Files migrated:
- src/util/log.ts
- src/util/filesystem.ts
- src/tool/truncation.ts
- src/session/instruction.ts
- src/storage/json-migration.ts
- src/storage/storage.ts
- src/project/project.ts
- src/cli/cmd/tui/context/theme.tsx
- src/config/config.ts
- src/tool/registry.ts
- src/skill/skill.ts
- src/file/ignore.ts
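A hedged usage sketch of the new utility; the scan/match names and the include/symlink options come from the change above, while the exact signatures and import path are assumptions.
```ts
import { Glob } from "./src/util/glob" // path from the change above; export shape assumed

// scan() returns only files by default (include: 'file') and does not follow symlinks
// unless symlink: true is passed.
const sources = await Glob.scan({ cwd: process.cwd(), pattern: "src/**/*.ts" })
const everything = await Glob.scan({ cwd: process.cwd(), pattern: "**/*", include: "all", symlink: true })
const isConfig = Glob.match("**/opencode.json", "packages/opencode/opencode.json")
```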
The publish script now recursively transforms export paths to handle nested export objects. This ensures all SDK entry points are correctly mapped to their compiled dist/ locations and type definitions when publishing to npm.
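An illustrative recursion over a package.json "exports" map; the actual source-to-dist mapping rules in the publish script may differ.
```ts
type Exports = string | { [condition: string]: Exports }

// Walk nested export objects and rewrite leaf paths; the replace rules are assumptions.
const remap = (entry: Exports): Exports =>
  typeof entry === "string"
    ? entry.replace(/^\.\/src\//, "./dist/").replace(/\.tsx?$/, ".js")
    : Object.fromEntries(Object.entries(entry).map(([key, value]) => [key, remap(value)]))
```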
Added default type parameter 'any' to readJson<T> so users can call it without specifying a type when they don't need strict typing. This reduces boilerplate for quick JSON reads where type safety isn't required.
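The shape implied above, with the signature details assumed:
```ts
// T defaults to any, so quick reads need no type argument; callers that want safety can
// still pass one, e.g. readJson<Config>("opencode.json").
export const readJson = async <T = any>(path: string): Promise<T> => (await Bun.file(path).json()) as T
```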
We receive a large number of AI-generated security reports and don't have the resources to review them all. This policy clarifies that such submissions will result in an automatic ban to protect our maintainers' time.
Use consistent strong color for active mode icons instead of different
colors for shell vs normal mode, making the active state more visually
clear to users.
- Force process exit after TUI thread completes to prevent lingering processes
- Add 5-second timeout to worker shutdown to prevent indefinite hangs during cleanup
- Replace warning yellow with distinct orange color for modified files in git diff indicators
- Increase button padding for better visual balance in session header and status popover
Adds a visual warning indicator to permission request dialogs to make
them more noticeable and help users understand when the agent needs
approval to use tools. Also improves the layout with consistent
spacing and icon alignment.
Moves the user message meta row out of the bubble width constraints and truncates long metadata while keeping the timestamp visible with consistent middot spacing.
Use border-border-weak-base for the titlebar status and open actions (including the open button divider) and adjust the English copy-path label casing.
Please provide a description of the issue, the changes you made to fix it, and why they work. It is expected that you understand why your changes work and if you do not understand why at least say as much so a maintainer knows how much to value the PR.
**If you paste a large, clearly AI-generated description here, your PR may be IGNORED or CLOSED!**
### How did you verify your code works?
### Screenshots / recordings
_If this is a UI change, please include a screenshot or recording._
### Checklist
- [ ] I have tested my changes locally
- [ ] I have not included unrelated changes in this PR
_If you do not follow this template your PR will be automatically rejected._
opencode run --agent docs --model opencode/gpt-5.3-codex <<'EOF'
Update localized docs to match the latest English docs changes.
Changed English doc files:
2. You MUST use the Task tool for translation work and launch subagents with subagent_type `translator` (defined in .opencode/agent/translator.md).
3. Do not translate directly in the primary agent. Use translator subagent output as the source for locale text updates.
4. Run translator subagent Task calls in parallel whenever file/locale translation work is independent.
5. Use only the minimum tools needed for this task (read/glob, file edits, and translator Task). Do not use shell, web, search, or GitHub tools for translation work.
6. Preserve frontmatter keys, internal links, code blocks, and existing locale-specific metadata unless the English change requires an update.
7. Keep locale docs structure aligned with their corresponding English pages.
8. Do not modify English source docs in packages/web/src/content/docs/*.mdx.
9. If no locale updates are needed, make no changes.
opencode run -m opencode/claude-sonnet-4-6 "Issue #${{ github.event.issue.number }} was previously flagged as non-compliant and has been edited.
Lookup this issue with gh issue view ${{ github.event.issue.number }}.
Re-check whether the issue now follows our contributing guidelines and issue templates.
This project has three issue templates that every issue MUST use one of:
1. Bug Report - requires a Description field with real content
2. Feature Request - requires a verification checkbox and description, title should start with [FEATURE]:
3. Question - requires the Question field with real content
Additionally check:
- No AI-generated walls of text (long, AI-generated descriptions are not acceptable)
- The issue has real content, not just template placeholder text left unchanged
- Bug reports should include some context about how to reproduce
- Feature requests should explain the problem or need
- We want to push the user to provide a system description and related information
Do NOT be nitpicky about optional fields. Only flag real problems like: no template used, required fields empty or placeholder text only, obviously AI-generated walls of text, or completely empty/nonsensical content.
- Only include sections that have at least one notable entry
- Keep one bullet per commit you keep
- Skip commits that are entirely internal, CI, tests, refactors, or otherwise not user-facing
- Start each bullet with a capital letter
- Prefer what changed for users over what code changed internally
- Do not copy raw commit prefixes like `fix:` or `feat:` or trailing PR numbers like `(#123)`
- Community attribution is deterministic: only preserve an existing `(@username)` suffix from the changelog input
- If an input bullet has no `(@username)` suffix, do not add one
- Never add a new `(@username)` suffix from `git show`, commit authors, names, or email addresses
- If no notable entries remain and there is no contributor block, write exactly `No notable changes.`
- If no notable entries remain but there is a contributor block, omit all release sections and return only the contributor block
- If the input contains `## Community Contributors Input`, append the block below that heading to the end of the final file verbatim
- Do not add, remove, rewrite, or reorder contributor names or commit titles in that block
- Do not derive the thank-you section from the main summary bullets
- Do not include the heading `## Community Contributors Input` in the final file
- Focus on writing the least words to get your point across - users will skim read the changelog, so we should be precise
**Importantly, the changelog is for users (who are at least slightly technical); they may use the TUI, Desktop, SDK, Plugins, and so forth. Be thorough: flow-on effects may not be immediately apparent. For example, a package upgrade looks internal but may patch a bug, or a refactor may also stabilise a race condition that fixes bugs for users. The PR title/body + commit message will give you the author's context, usually containing the outcome, not just the technical detail.**
Use this folder for locale-specific translation guidance that supplements `.opencode/agent/translator.md`.
The global glossary in `translator.md` remains the source of truth for shared do-not-translate terms (commands, code, paths, product names, etc.). These locale files capture community learnings about phrasing and terminology preferences.
## File Naming
- One file per locale
- Use lowercase locale slugs that match docs locales when possible (for example, `zh-cn.md`, `zh-tw.md`)
- If only language-level guidance exists, use the language code (for example, `fr.md`)
- Some repo locale slugs may be aliases/non-BCP47 for consistency (for example, `br` for Brazilian Portuguese / `pt-BR`)
## What To Put In A Locale File
- **Sources**: PRs/issues/discussions that motivated the guidance
- **Do Not Translate (Locale Additions)**: locale-specific terms or casing decisions
- **Preferred Terms**: recurring UI/docs words with preferred translations
- **Guidance**: tone, style, and consistency notes
- **Avoid** (optional): common literal translations or wording we should avoid
- If the repo uses a locale alias slug, document the alias in **Guidance** (for example, prose may mention `pt-BR` while config/examples use `br`)
Prefer guidance that is:
- Repeated across multiple docs/screens
- Easy to apply consistently
- Backed by a community contribution or review discussion
description: Use this when you are working on file operations like reading, writing, scanning, or deleting files. It summarizes the preferred file APIs and patterns used in this repo. It also notes when to use filesystem helpers for directories.
---
## Use this when
- Editing file I/O or scans in `packages/opencode`
- Handling directory operations or external tools
## Bun file APIs (from Bun docs)
- `Bun.file(path)` is lazy; call `text`, `json`, `stream`, `arrayBuffer`, `bytes`, `exists` to read.
description: Work with Effect v4 / effect-smol TypeScript code in this repo
---
# Effect
This codebase uses Effect for typed, composable TypeScript services, schemas, and workflows.
## Source Of Truth
Use the current Effect v4 / effect-smol source, not memory or older Effect v2/v3 examples.
1. If `.opencode/references/effect-smol` is missing, clone `https://github.com/Effect-TS/effect-smol` there. Do this in the project, not in the skill folder.
2. Search `.opencode/references/effect-smol` for exact APIs, examples, tests, and naming patterns before answering or implementing Effect-specific code.
3. Also inspect existing repo code for local house style before introducing new patterns.
4. Prefer answers and implementations backed by specific source files or nearby repo examples.
## Guidelines
- Prefer current Effect v4 APIs and project-local patterns over old blog posts, examples, or package-memory guesses.
- Use `Effect.gen(function* () { ... })` for multi-step workflows.
- Use `Effect.fn("Name")` or `Effect.fnUntraced(...)` for named effects when adding reusable service methods or important workflows.
- Prefer Effect `Schema` for API and domain data shapes. Use branded schemas for IDs and `Schema.TaggedErrorClass` for typed domain errors when modeling new error surfaces.
- Keep HTTP handlers thin: decode input, read request context, call services, and map transport errors. Put business rules in services.
- In Effect service code, prefer Effect-aware platform abstractions and dependencies over ad hoc promises where the surrounding code already does so.
- Keep layer composition explicit. Avoid broad hidden provisioning that makes missing dependencies hard to see.
- In tests, prefer the repo's existing Effect test helpers and live tests for filesystem, git, child process, locks, or timing behavior.
- Do not introduce `any`, non-null assertions, unchecked casts, or older Effect APIs just to satisfy types.
- Do not answer from memory. Verify against `.opencode/references/effect-smol` or nearby code first.
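A small sketch in the spirit of these guidelines; the names are invented and the exact effect-smol signatures (Effect.fn, Schema.TaggedErrorClass, yieldable errors) should be verified against `.opencode/references/effect-smol` before copying.
```ts
import { Effect, Schema } from "effect"

// Typed domain error per the guideline above; the constructor shape is an assumption.
class JournalError extends Schema.TaggedErrorClass<JournalError>()("JournalError", {
  message: Schema.String,
}) {}

// Named, multi-step workflow: Effect.fn wraps a generator and names the span.
const journal = Effect.fn("Journal.append")(function* (entry: string) {
  if (entry.length === 0) {
    return yield* new JournalError({ message: "empty entry" })
  }
  yield* Effect.log(`appending ${entry.length} chars`)
  return entry.length
})
```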
Use this tool to assign and/or label a GitHub issue.
You can assign the following users:
- thdxr
- adamdotdevin
- fwang
- jayair
- kommander
- rekram1-node
You can use the following labels:
- nix
- opentui
- perf
- web
- zen
- docs
Always try to assign an issue; if in doubt, assign rekram1-node to it.
## Breakdown of responsibilities:
### thdxr
Dax is responsible for managing core parts of the application. For large feature requests, API changes, or things that require significant changes to the codebase, assign him.
This relates primarily to the OpenCode server but has overlap with just about anything.
### adamdotdevin
Adam is responsible for managing the Desktop/Web app. If there is an issue relating to the desktop app or the `opencode web` command, assign him.
### fwang
Frank is responsible for managing Zen. If you see complaints about OpenCode Zen (the dashboard, model quality, billing issues, etc.), assign him to the issue.
### jayair
Jay is responsible for documentation. If there is an issue relating to documentation, assign him.
### kommander
Sebastian is responsible for managing OpenTUI (a library for building terminal user interfaces). OpenCode's TUI is built with OpenTUI. If there are issues about:
- random characters on screen
- keybinds not working on different terminals
- general terminal stuff
Then assign the issue to him.
### rekram1-node
ALL BUGS SHOULD BE assigned to rekram1-node unless they have the `opentui` label.
Assign Aiden to an issue as a catch-all if you can't assign anyone else. Most of the time this will be bugs/polish things.
If no one else makes sense to assign, assign rekram1-node to it.
Always assign Aiden if the issue mentions "acp", "zed", or model performance issues.
## Breakdown of Labels:
### nix
Any issue that mentions nix or nixos should have a nix label
### opentui
Anything relating to the TUI itself should have an opentui label
### perf
Anything related to slow performance, high RAM, high CPU usage, or any other performance-related issue should have a perf label
### desktop
Anything related to `opencode web` command or the desktop app should have a desktop label. Never add this label for anything terminal/tui related
### zen
Anything related to OpenCode Zen, billing, or model quality from Zen should have a zen label
### docs
Anything related to the documentation should have a docs label
- Keep things in one function unless composable or reusable
- Avoid `try`/`catch` where possible
- Avoid using the `any` type
- Prefer single word variable names where possible
- Use Bun APIs when possible, like `Bun.file()`
- Rely on type inference when possible; avoid explicit type annotations or interfaces unless necessary for exports or clarity
- Prefer functional array methods (flatMap, filter, map) over for loops; use type guards on filter to maintain type inference downstream
### Naming
Prefer single word names for variables and functions. Only use multiple words if necessary.
```ts
// Good
const foo = 1
function journal(dir: string) {}
// Bad
const fooBar = 1
function prepareJournal(dir: string) {}
```
- In `src/config`, follow the existing self-export pattern at the top of the file (for example `export * as ConfigAgent from "./agent"`) when adding a new config module.
Reduce total variable count by inlining when a value is only used once.
```bash
brew install anomalyco/tap/opencode # macOS and Linux (recommended, always up to date)
brew install opencode # macOS and Linux (official brew formula, updated less)
sudo pacman -S opencode # Arch Linux (Stable)
paru -S opencode-bin # Arch Linux (Latest from AUR)
mise use -g opencode # Any OS
nix run nixpkgs#opencode # or github:anomalyco/opencode for latest dev branch
```
> [!TIP]
> Remove versions older than 0.1.x before installing.
### Desktop App (BETA)
OpenCode is also available as a desktop application. Download it directly from the [releases page](https://github.com/anomalyco/opencode/releases) or from [opencode.ai/download](https://opencode.ai/download).
OpenCode includes two built-in agents that you can switch between with the `Tab` key.
- **build** - Default agent with full access for development work
- **plan** - Read-only agent for analysis and code exploration
  - Denies file edits by default
  - Asks for permission before running bash commands
  - Ideal for exploring unfamiliar codebases or planning changes
A **general** subagent is also included for complex searches and multi-step tasks.
It is used internally and can be invoked with `@general` in messages.
Learn more about [agents](https://opencode.ai/docs/agents).
### Documentation
For more information on how to configure OpenCode, [**see our docs**](https://opencode.ai/docs).
### Contributing
If you would like to contribute to OpenCode, please read our [contributing docs](./CONTRIBUTING.md) before submitting a pull request.
### Building on OpenCode
If you are working on a project related to OpenCode and use "opencode" as part of its name, for example "opencode-dashboard" or "opencode-mobile", please add a note to your README clarifying that the project was not built by the OpenCode team and is not affiliated with us in any way.
### FAQ
#### How is this different from Claude Code?
It is very similar to Claude Code in terms of capability. Here are the key differences:
- 100% open source
- Not locked to any provider. Although we recommend the models available through [OpenCode Zen](https://opencode.ai/zen), OpenCode can also be used with Claude, OpenAI, Google, or even local models. As models improve, the gaps between them will close and prices will drop, so being provider-agnostic is important.
- Out-of-the-box LSP support
- Focused on the TUI. OpenCode is built by neovim users and the creators of [terminal.shop](https://terminal.shop); we are pushing the limits of what is possible in the terminal.
- Client/server architecture. This lets OpenCode run on your computer while you drive it remotely from a mobile app, meaning the TUI frontend is just one of many possible clients.
---
**Join our community** [Discord](https://discord.gg/opencode) | [X.com](https://x.com/opencode)
```bash
brew install anomalyco/tap/opencode # macOS and Linux (recommended, always up to date)
brew install opencode # macOS and Linux (official brew formula, updated less often)
sudo pacman -S opencode # Arch Linux (Stable)
paru -S opencode-bin # Arch Linux (Latest from AUR)
mise use -g opencode # Any OS
nix run nixpkgs#opencode # or github:anomalyco/opencode for the latest dev branch
```
> [!TIP]
> Remove versions older than 0.1.x before installing.
### Desktop App (BETA)
OpenCode is also available as a desktop application. Download it directly from the [releases page](https://github.com/anomalyco/opencode/releases) or from [opencode.ai/download](https://opencode.ai/download).
OpenCode includes two built-in agents that you can switch between with the `Tab` key.
- **build** - Default agent with full access for development work
- **plan** - Read-only agent for analysis and code exploration
  - Denies file edits by default
  - Asks for permission before running bash commands
  - Ideal for exploring unfamiliar codebases or planning changes
A **general** subagent is also included for complex searches and multi-step tasks.
It is used internally and can be invoked with `@general` in messages.
Learn more about [agents](https://opencode.ai/docs/agents).
### Documentation
For more information on how to configure OpenCode, [**see our docs**](https://opencode.ai/docs).
### Contributing
If you would like to contribute to OpenCode, please read our [contributing docs](./CONTRIBUTING.md) before submitting a pull request.
### Building on OpenCode
If you are working on a project related to OpenCode and use "opencode" as part of its name, for example "opencode-dashboard" or "opencode-mobile", please add a note to your README clarifying that the project was not built by the OpenCode team and is not affiliated with us in any way.
### FAQ
#### How is this different from Claude Code?
It is very similar to Claude Code in terms of capability. Here are the key differences:
- 100% open source
- Not locked to any provider. Although we recommend the models available through [OpenCode Zen](https://opencode.ai/zen), OpenCode can also be used with Claude, OpenAI, Google, or even local models. As models improve, the gaps between them will close and prices will drop, so being provider-agnostic is important.
- Out-of-the-box LSP support
- Focused on the TUI. OpenCode is built by neovim users and the creators of [terminal.shop](https://terminal.shop); we are pushing the limits of what is possible in the terminal.
- Client/server architecture. This lets OpenCode run on your computer while you drive it remotely from a mobile app, meaning the TUI frontend is just one of many possible clients.
---
**Join our community** [Discord](https://discord.gg/opencode) | [X.com](https://x.com/opencode)
```bash
brew install anomalyco/tap/opencode # macOS and Linux (recommended, always up to date)
brew install opencode # macOS and Linux (official brew formula, updated less often)
sudo pacman -S opencode # Arch Linux (Stable)
paru -S opencode-bin # Arch Linux (Latest from AUR)
mise use -g opencode # Any OS
nix run nixpkgs#opencode # or github:anomalyco/opencode for the latest dev branch
```
> [!TIP]
> Remove versions older than 0.1.x before installing.
### Desktop App (BETA)
OpenCode is also available as a desktop application. Download it directly from the [releases page](https://github.com/anomalyco/opencode/releases) or from [opencode.ai/download](https://opencode.ai/download).
OpenCode includes two built-in agents that you can switch between with the `Tab` key.
- **build** - Default agent with full access for development work
- **plan** - Read-only agent for analysis and code exploration
  - Denies file edits by default
  - Asks for permission before running bash commands
  - Ideal for exploring unfamiliar codebases or planning changes
A **general** subagent is also included for complex searches and multi-step tasks.
It is used internally and can be invoked with `@general` in messages.
Learn more about [agents](https://opencode.ai/docs/agents).
### Documentation
For more information on how to configure OpenCode, [**see our docs**](https://opencode.ai/docs).
### Contributing
If you would like to contribute to OpenCode, please read our [contributing docs](./CONTRIBUTING.md) before submitting a pull request.
### Building on OpenCode
If you are working on a project related to OpenCode and use "opencode" as part of its name, for example "opencode-dashboard" or "opencode-mobile", please add a note to your README clarifying that the project was not built by the OpenCode team and is not affiliated with us in any way.
### FAQ
#### How is this different from Claude Code?
It is very similar to Claude Code in terms of capability. Here are the key differences:
- 100% open source
- Not locked to any provider. Although we recommend the models available through [OpenCode Zen](https://opencode.ai/zen), OpenCode can also be used with Claude, OpenAI, Google, or even local models. As models improve, the gaps between them will close and prices will drop, so being provider-agnostic is important.
- Out-of-the-box LSP support
- Focused on the TUI. OpenCode is built by neovim users and the creators of [terminal.shop](https://terminal.shop); we are pushing the limits of what is possible in the terminal.
- Client/server architecture. This lets OpenCode run on your computer while you drive it remotely from a mobile app, meaning the TUI frontend is just one of many possible clients.
---
**Join our community** [Discord](https://discord.gg/opencode) | [X.com](https://x.com/opencode)
Your app is ready to be deployed!
## E2E Testing
Playwright starts the Vite dev server automatically via `webServer`, and UI tests need an opencode backend (defaults to `localhost:4096`).
Use the local runner to create a temp sandbox, seed data, and run the tests.
```bash
bunx playwright install
bunx playwright install chromium
bun run test:e2e:local
bun run test:e2e:local -- --grep "settings"
```