Compare commits


122 Commits

Author SHA1 Message Date
Kit Langton
40819e1865 refactor(http-recorder): extract cassette service 2026-05-03 20:12:01 -04:00
Kit Langton
4b54bc38f1 refactor(http-recorder): make record mode explicit 2026-05-03 17:55:06 -04:00
Kit Langton
73a9372e7a test(llm): add gemini golden loop coverage 2026-05-03 17:30:58 -04:00
Kit Langton
bc20399f6e test(llm): add groq recordings and cost report 2026-05-03 16:25:48 -04:00
Kit Langton
039c0506cf test(llm): expand flagship recorded coverage 2026-05-03 16:03:36 -04:00
Kit Langton
49340f517c test(llm): simplify golden tool loop scenarios 2026-05-03 15:29:25 -04:00
Kit Langton
db74983861 test(llm): add openrouter golden tool loops 2026-05-03 14:44:49 -04:00
Kit Langton
3cae82f708 docs(llm): document provider recording strategy 2026-05-03 14:32:36 -04:00
Kit Langton
9e3868fed7 test(llm): add openrouter recorded coverage 2026-05-03 14:07:33 -04:00
Kit Langton
a1c1d0766f chore(llm): use effect for recording env setup 2026-05-03 14:07:14 -04:00
Kit Langton
b560f98962 chore(llm): improve recording env setup ux 2026-05-03 13:50:03 -04:00
Kit Langton
cd866f32a1 chore(llm): add recording env setup 2026-05-03 13:40:53 -04:00
Kit Langton
fea96490e3 fix(llm): expand compatible provider bridge 2026-05-03 13:11:45 -04:00
Kit Langton
6224ce84fd fix(llm): align provider tool semantics 2026-05-03 12:58:14 -04:00
Kit Langton
c519ff2ce8 feat(llm): add consistent tagged checks 2026-05-03 12:49:56 -04:00
Kit Langton
6736923a35 fix(llm): preserve provider-native continuation state 2026-05-03 12:47:03 -04:00
Kit Langton
eea8764ce1 simplify(llm): extract conversation semantics 2026-05-03 12:22:51 -04:00
Kit Langton
6e23e56a49 simplify(llm): trim native runtime overhead 2026-05-03 10:13:09 -04:00
Kit Langton
b7930c75f4 simplify(llm): share tagged schema helper 2026-05-03 09:30:58 -04:00
Kit Langton
195e0bb0c3 simplify(llm): define events with schema tags 2026-05-03 09:14:37 -04:00
Kit Langton
de79fd7306 fix(test): use instance helper in native fallback test 2026-05-03 08:50:17 -04:00
Kit Langton
1caeaf70bb Merge remote-tracking branch 'origin/dev' into HEAD 2026-05-03 08:48:44 -04:00
Kit Langton
9065d79a9a fix(llm): preserve native protocol state 2026-05-03 00:20:50 -04:00
Kit Langton
046e459d65 fix(llm): map Responses tool calls finish reason 2026-05-01 17:47:46 -04:00
Kit Langton
652ef9c09a fix(llm): use Azure api-key auth for OpenAI adapters 2026-05-01 17:11:44 -04:00
Kit Langton
e9d84c6db7 fix(llm): preserve native stream fallback parity 2026-05-01 08:54:05 -04:00
Kit Langton
116a5c2e74 docs(llm): document prepare<Target>, PreparedRequestOf, and LLMEvent.is.* in AGENTS.md 2026-05-01 08:12:38 -04:00
Kit Langton
8f338ef6dc simplify(llm): inline single-use llmEventIs const and drop redundant as const 2026-05-01 08:12:37 -04:00
Kit Langton
75f467bae3 feat(llm): expose PreparedRequestOf<Target> on LLMClient.prepare
LLMClient.prepare(request) returned a PreparedRequest with target: unknown.
Callers building debug UIs / request previews / plan rendering had to cast
target to the adapter's native shape at every read.

Adds PreparedRequestOf<Target> in schema and a generic Target = unknown
parameter on LLMClient.prepare so callers can opt in to a typed view:

  const prepared = yield* client.prepare<OpenAIChatTarget>(request)
  prepared.target.model            // typed
  prepared.target.messages         // typed

The runtime payload is unchanged — the adapter still emits target: unknown
and the consumer asserts the shape they expect from the configured adapter.
The cast lives at the public boundary in adapter.ts; everything else stays
honest about runtime types.

Existing callers without the type argument still get target: unknown and
nothing breaks. Test in openai-chat.test.ts proves the narrowing at the
type level.
2026-05-01 08:12:37 -04:00
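The typed-view pattern described above reduces to a few lines of plain TypeScript. This is a hypothetical sketch, not the package's actual code; `FakeChatTarget` and the `prepare` signature here are illustrative stand-ins for the real `LLMClient.prepare` and adapter targets:

```typescript
// Hypothetical model of the pattern: the runtime value keeps
// `target: unknown`; the generic parameter only narrows the *view*.
interface PreparedRequest<Target = unknown> {
  readonly target: Target
}

// Stand-in for the adapter boundary: it genuinely produces `unknown`,
// and the single cast lives here, at the public surface.
function prepare<Target = unknown>(native: unknown): PreparedRequest<Target> {
  return { target: native as Target }
}

// A caller that knows its configured adapter opts in to a typed view.
interface FakeChatTarget {
  model: string
  messages: ReadonlyArray<{ role: string; content: string }>
}

const prepared = prepare<FakeChatTarget>({
  model: "demo-model",
  messages: [{ role: "user", content: "hi" }],
})

// prepared.target.model is now typed; callers that omit the type
// argument still get `target: unknown` and nothing breaks.
```

The runtime payload is untouched in both cases; only the compile-time view changes.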
Kit Langton
f4de3e801e feat(llm): add LLMEvent.is.* camelCase narrowing helpers
Schema.toTaggedUnion('type') already provides LLMEvent.guards but uses
kebab-case bracket access (LLMEvent.guards['tool-call']). Adds an LLMEvent.is
namespace with camelCase aliases that delegate to the same guards, so
consumers can write events.filter(LLMEvent.is.toolCall) instead of
events.filter(LLMEvent.guards['tool-call']).

Migrated all callsites in src/llm.ts and the two test files for consistency.
LLMEvent.guards / .match / .cases / .isAnyOf remain available for callers
who want the Effect-canonical API.
2026-05-01 08:12:37 -04:00
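The alias pattern is simple enough to show in a dependency-free sketch. Assuming a guards record keyed by the kebab-case tag (as the commit describes `Schema.toTaggedUnion('type')` producing), the `is` namespace just re-exports the same functions under camelCase names; everything below is illustrative, not the package's real event union:

```typescript
// Hypothetical two-variant event union standing in for LLMEvent.
type Event =
  | { type: "text-delta"; text: string }
  | { type: "tool-call"; name: string }

// Guards keyed by the kebab-case discriminator, bracket-access style.
const guards = {
  "text-delta": (e: Event): e is Extract<Event, { type: "text-delta" }> =>
    e.type === "text-delta",
  "tool-call": (e: Event): e is Extract<Event, { type: "tool-call" }> =>
    e.type === "tool-call",
}

// camelCase aliases: the very same guard functions, friendlier call sites.
const is = {
  textDelta: guards["text-delta"],
  toolCall: guards["tool-call"],
}

const events: Event[] = [
  { type: "text-delta", text: "hi" },
  { type: "tool-call", name: "lookup" },
]

// events.filter(is.toolCall) instead of events.filter(guards["tool-call"]).
const calls = events.filter(is.toolCall)
```

Because the aliases delegate by reference, both spellings narrow identically and stay in sync for free.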
Kit Langton
a0165b2ae8 docs(llm): fix stale field names in ProtocolID comment and AGENTS.md code example 2026-05-01 08:12:37 -04:00
Kit Langton
9363c70acd simplify(llm): drop redundant auth: "key" from resolvers (it is the default) 2026-05-01 08:12:37 -04:00
Kit Langton
5ec2673af2 simplify(llm): inline resolveAdapter into compile 2026-05-01 08:12:37 -04:00
Kit Langton
8b414cdb5a refactor(llm): collapse ProviderAuth to 'key' | 'none'
After the auth-axis migration, the OpenCode bridge consults this enum
solely to decide whether to read `provider.key` and stamp it on
`model.apiKey`. The bearer / anthropic-api-key / google-api-key
distinctions used to control which header the bridge wrote; that is now
the job of the adapter's Auth axis.

Three of four variants were write-only after the migration. Collapse to:

- 'key'  — provider needs an API key
- 'none' — provider does not (e.g. local)

Updated all six provider resolvers and the resolver test fixtures.
2026-05-01 08:12:37 -04:00
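A minimal sketch of the collapsed enum and the single read site the commit describes, with hypothetical `Provider` / `Model` shapes (the real bridge types differ):

```typescript
// After the collapse, the bridge asks exactly one question:
// does this provider need an API key?
type ProviderAuth = "key" | "none"

interface Provider { auth: ProviderAuth; key?: string }
interface Model { apiKey?: string }

// The only decision left after the auth-axis migration: read
// provider.key and stamp it on model.apiKey, or leave the model alone.
function stampApiKey(provider: Provider, model: Model): Model {
  return provider.auth === "key" ? { ...model, apiKey: provider.key } : model
}
```

Which header the key ultimately lands in is no longer this enum's concern; that decision moved to the adapter's Auth axis.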
Kit Langton
49913ff041 refactor(llm): rename Adapter.define -> Adapter.unsafe; drop Adapter.compose
Two cleanups to make the adapter constructor surface honest about what is
canonical and what is an escape hatch:

- Adapter.compose existed to override pieces of an existing adapter, used
  by OpenAI-compatible Chat before the four-axis migration. After the
  migration nothing references it; OpenAI-compatible Chat composes via
  fromProtocol({ protocol: OpenAIChat.protocol, ... }) instead. Delete
  the function and its ComposeInput type.

- Adapter.define is the lower-level escape hatch for adapters whose
  behavior genuinely cannot fit the Protocol/Endpoint/Auth/Framing model.
  Its name implied it was the canonical entry point. Renamed to
  Adapter.unsafe so the four-axis Adapter.fromProtocol(...) reads as the
  obvious primary path and the escape hatch carries its escape semantics
  in its name.

Updated test fixtures in adapter.test.ts and the AGENTS.md guidance.
2026-05-01 08:12:37 -04:00
Kit Langton
bb7f52b24d refactor(llm): remove ambiguous Adapter provider scoping field
The optional 'provider' field on Adapter / AdapterInput / FromProtocolInput
existed as a registry filter: requests with a different model.provider could
not find adapters that set it. After the four-axis migration no adapter
needed it (and an earlier pass removed it from the five migrated providers
because setting it broke session/llm-native tests).

Drop the field entirely and collapse the registry to a single-tier protocol
lookup. If a future deployment genuinely needs to be scoped (e.g. an
Azure-only OpenAI Responses adapter), reintroduce as 'scopedTo' with an
explicit name. Solve when needed, not before.

Also drops the test that exercised the now-removed two-tier lookup
('prefers provider-specific adapters over protocol fallbacks').
2026-05-01 08:12:37 -04:00
Kit Langton
e7ff19bb5f simplify(llm): stringify endpoint URL once in Adapter.fromProtocol
url.toString() was called twice on the same URL object — once for auth
and once for jsonPost. Convert to string immediately and reuse.
2026-05-01 08:12:37 -04:00
Kit Langton
bb859e2e2c simplify(llm): remove redundant queryParams from OpenAICompatibleChatModelInput
queryParams is now inherited from ModelInput (via ModelRef) after the
typed-field promotion. The explicit re-declaration was dead weight.
2026-05-01 08:12:37 -04:00
Kit Langton
61a18bdbd0 simplify(llm): fix stale model.native.queryParams references in docs
The commit that promoted queryParams to a typed ModelRef field updated
the implementation but left two JSDoc/doc references pointing at the old
model.native.queryParams path.
2026-05-01 08:12:37 -04:00
Kit Langton
f86a6790a2 refactor(llm): move queryParams off model.native to typed field
Promotes queryParams to a first-class ModelRef field used by Endpoint.baseURL,
so deployment-level URL query params (Azure api-version, OpenAI-compatible
provider knobs) live in a typed home instead of an opaque `native` bag.

Also removes write-only dead fields from `native`:

- openaiCompatibleProvider (set by family helper, never read)
- opencodeProviderID, opencodeModelID (set by opencode bridge + native session
  builder, never read)
- npm (set by opencode bridge, never read)

After this commit `model.native` only carries genuinely provider-specific
opaque options that no other adapter cares about (Bedrock's aws_credentials
+ aws_region for SigV4). Drops the now-dead ProviderShared.queryParams
helper. Updates AGENTS.md: the documentation of `native` is now implicit
through the new schema JSDoc.
2026-05-01 08:12:37 -04:00
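The promotion can be sketched in a few lines. This is a hypothetical reduction, assuming a `ModelRef` with a `baseURL` and the new typed `queryParams` field; the real `Endpoint.baseURL` signature differs:

```typescript
// queryParams lives in a typed home on the model ref instead of an
// opaque `native` bag, and endpoint construction folds it into the URL.
interface ModelRef {
  baseURL: string
  queryParams?: Readonly<Record<string, string>>
}

function endpointURL(model: ModelRef, path: string): string {
  const url = new URL(path, model.baseURL)
  for (const [key, value] of Object.entries(model.queryParams ?? {})) {
    url.searchParams.set(key, value)
  }
  return url.toString()
}

// Deployment-level knobs like Azure's api-version now type-check.
const azureStyle = endpointURL(
  { baseURL: "https://example.azure.test/openai/", queryParams: { "api-version": "2024-02-01" } },
  "responses",
)
```

Adapters that care only about the path never touch `queryParams`; deployments that need URL knobs set one typed field.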
Kit Langton
d8b9672234 simplify(llm): split Bedrock auth into bearer fast path + sigv4 gen
The two paths are independent: `model.apiKey` produces a synchronous
Bearer auth, while AWS credentials need an effectful sigv4 sign.
Hoist the bearer path out of `Effect.gen` and reuse `Auth.bearer`
directly, keeping the SigV4 path as a focused `Effect.gen` that owns
the credential lookup, signing, and header merge.

Inlines the now single-use `headersForSigning` and `signed` setup.
2026-05-01 08:12:37 -04:00
Kit Langton
042bf6c822 simplify(llm): default Adapter.fromProtocol auth to Auth.bearer
After the apiKey migration, every adapter explicitly specified `auth`,
and three of them (OpenAI Chat, OpenAI Responses, OpenAI-compatible Chat)
all wrote `auth: Auth.bearer`. `Auth.bearer` is a no-op when
`model.apiKey` is unset, so making it the default is strictly safer than
the previous `Auth.passthrough` default — bearer-style adapters drop
their explicit `auth` line, and adapters that need a different scheme
opt out via `Auth.apiKeyHeader(...)` (Anthropic, Gemini) or a custom
`Auth` (Bedrock SigV4 + Bearer).

Update doc comments on `fromProtocol.auth`, `Auth` type, and
`packages/llm/AGENTS.md` to reflect the new default.
2026-05-01 08:12:37 -04:00
Kit Langton
5d08e28cd9 refactor(llm): move auth secret from headers onto ModelRef.apiKey
Add an optional `apiKey` field to `ModelRef` so authentication is no
longer baked into `model.headers` at construction time. Each provider
adapter now passes an `Auth` to `Adapter.fromProtocol` that reads
`request.model.apiKey` per request:

- OpenAI Chat / Responses / OpenAI-compatible Chat: `Auth.bearer`
- Anthropic Messages:  `Auth.apiKeyHeader("x-api-key")`
- Gemini:              `Auth.apiKeyHeader("x-goog-api-key")`
- Bedrock Converse:    custom auth that uses `apiKey` for Bearer auth
                       and falls back to SigV4 with AWS credentials

The `model()` constructors no longer fold the API key into
`model.headers`. The OpenCode bridge sets `apiKey` directly instead of
building auth headers via the now-deleted `authHeader` helper. Test
assertions move from `headers: { authorization: "Bearer ..." }` to
`apiKey: "..."`.
2026-05-01 08:12:37 -04:00
Kit Langton
4f294852a6 simplify(llm): share core between Auth.bearer and Auth.apiKeyHeader
Both helpers had the same shape: read `request.model.apiKey`, no-op if
absent, otherwise merge a one-key header object. Lift that into a tiny
`fromApiKey(from)` helper and define both in terms of it.

The public surface (`Auth.bearer`, `Auth.apiKeyHeader`) is unchanged.
2026-05-01 08:12:36 -04:00
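The shared core the commit describes fits in about ten lines. A dependency-free sketch with hypothetical type names (`AuthRequest`, `HeaderMap`); the real `Auth` signature in the package may differ:

```typescript
interface AuthRequest { model: { apiKey?: string } }
type HeaderMap = Readonly<Record<string, string>>
type Auth = (request: AuthRequest, headers: HeaderMap) => HeaderMap

// The shared shape: read request.model.apiKey, no-op if absent,
// otherwise merge a one-key header object built by `from`.
const fromApiKey =
  (from: (key: string) => HeaderMap): Auth =>
  (request, headers) => {
    const key = request.model.apiKey
    return key === undefined ? headers : { ...headers, ...from(key) }
  }

// Both public helpers defined in terms of it, as in the commit.
const bearer: Auth = fromApiKey((key) => ({ authorization: `Bearer ${key}` }))
const apiKeyHeader = (name: string): Auth =>
  fromApiKey((key) => ({ [name]: key }))
```

The public surface is unchanged; only the duplicated read/no-op/merge dance is lifted into one place.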
Kit Langton
a676b12b7b fix(llm): keep adapters provider-less by default
Removes the provider field from the five migrated Adapter.fromProtocol
calls. Setting provider scopes the adapter in the registry so requests
must use the same provider id, which broke session/llm-native tests
that build models with provider 'amazon-bedrock' against the
bedrock-converse adapter.

Adapters should stay protocol-only by default and only set provider
when the deployment is genuinely scoped (e.g. an Azure-only adapter
that does not work for native OpenAI). Restoring the original
protocol-only registration.
2026-05-01 08:12:36 -04:00
Kit Langton
31740c1d36 simplify(llm): inline single-use DEFAULT_BASE_URL / defaultBaseURL constants
Per style guide, single-use values should be inlined. Each adapter had
a module-private constant used exactly once in its Adapter.fromProtocol
call. Inlining removes 6 named constants (4 DEFAULT_BASE_URL, 1
defaultBaseURL, and ANTHROPIC_VERSION) without loss of clarity — the
string literal appears at the point of use.
2026-05-01 08:12:36 -04:00
Kit Langton
9928917899 simplify(llm): remove dead ProviderShared.sse and withQuery helpers
After migration to Adapter.fromProtocol, the sse() convenience wrapper
and withQuery() URL builder are no longer called anywhere — Framing.sse
and Endpoint.baseURL handle their responsibilities directly. Also
inlines two exported-but-unused test constants (helloPrompt,
weatherPrompt) per style guide.
2026-05-01 08:12:36 -04:00
Kit Langton
98cb886faf docs(llm): document Protocol/Endpoint/Auth/Framing architecture
Updates the AGENTS.md adapter section to describe the four orthogonal
axes that make up an adapter today (Protocol + Endpoint + Auth + Framing)
and the canonical Adapter.fromProtocol composition. Adds a folder layout
overview so the dependency direction (provider/* imports protocol/auth/
endpoint/framing, never the other way) is visible.
2026-05-01 08:12:36 -04:00
Kit Langton
bdd01cad33 refactor(llm): migrate remaining adapters to fromProtocol
Extracts a Protocol implementation per provider and wires the adapter
through Adapter.fromProtocol with explicit Endpoint, Auth, and Framing:

- OpenAI Responses — Endpoint.baseURL with /responses path.
- Anthropic Messages — adds anthropic-version header via the headers slot.
- Gemini — endpoint embeds the model id and pins ?alt=sse at the URL level.
- Bedrock Converse — keeps SigV4-or-Bearer auth as a typed Auth function;
  AWS event-stream framing is a typed Framing value alongside the protocol;
  Endpoint.baseURL gains a function-typed default so the URL host can carry
  the per-request region.

Recorded replay byte-identical across all six adapters; full provider
suite 83 pass, full llm suite 122 pass, opencode typecheck clean.
2026-05-01 08:12:36 -04:00
Kit Langton
6ed160ae02 refactor(llm): migrate OpenAI Chat adapters to fromProtocol
Extracts OpenAIChat.protocol so that:

- openai-chat is now a four-line Adapter.fromProtocol composition over
  the protocol, the OpenAI base URL, default passthrough auth, and SSE
  framing.
- openai-compatible-chat reuses OpenAIChat.protocol verbatim. The whole
  adapter is one Adapter.fromProtocol call that pins protocolId to
  openai-compatible-chat and requires a caller-supplied baseURL.

Bug fixes in OpenAIChat.protocol now propagate to DeepSeek, TogetherAI,
Cerebras, Baseten, Fireworks, DeepInfra, and any future OpenAI-compatible
deployment without touching their files. Recorded replay byte-identical.
2026-05-01 08:12:36 -04:00
Kit Langton
7505da95d3 feat(llm): add Protocol, Endpoint, Auth, Framing primitives
Introduces the four orthogonal axes that an LLM adapter is composed of:

- Protocol — semantic API contract (lowering, validation, encoding,
  parsing). Examples: OpenAI Chat, Anthropic Messages, Bedrock Converse.
- Endpoint — URL construction (baseURL + path + query params).
- Auth — per-request transport authentication. Defaults to passthrough
  for adapters whose auth header is baked into model.headers.
- Framing — byte stream to frames (SSE today; AWS event stream next).

Adds Adapter.fromProtocol(...) which composes these into the existing
AdapterDefinition shape so LLMClient.make(...) and the runtime registry
do not change. Existing adapters keep working through Adapter.define
until they migrate one at a time.
2026-05-01 08:12:36 -04:00
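The four-axis composition can be sketched without any Effect machinery. Everything below is a hypothetical, synchronous reduction: the real `Protocol`, `Endpoint`, `Auth`, and `Framing` types are richer (effectful, streaming), but the shape of `fromProtocol` is the point:

```typescript
type Frame = string
interface Protocol<Req> {
  encode: (req: Req) => unknown     // request lowering
  parse: (frame: Frame) => unknown  // stream parsing
}
type Endpoint<Req> = (req: Req) => string                       // URL construction
type Auth<Req> = (req: Req, h: Record<string, string>) => Record<string, string>
type Framing = (bytes: string) => Frame[]                       // bytes -> frames

interface Adapter<Req> {
  toHttp: (req: Req) => { url: string; headers: Record<string, string>; body: unknown }
  frames: (bytes: string) => unknown[]
}

function fromProtocol<Req>(input: {
  protocol: Protocol<Req>
  endpoint: Endpoint<Req>
  auth?: Auth<Req> // defaults to passthrough, as described above
  framing: Framing
}): Adapter<Req> {
  const auth = input.auth ?? ((_req, headers) => headers)
  return {
    toHttp: (req) => ({
      url: input.endpoint(req),
      headers: auth(req, { "content-type": "application/json" }),
      body: input.protocol.encode(req),
    }),
    frames: (bytes) => input.framing(bytes).map(input.protocol.parse),
  }
}

// A toy adapter: identity-ish protocol, fixed endpoint, SSE-like framing.
const demo = fromProtocol<{ q: string }>({
  protocol: { encode: (r) => ({ q: r.q }), parse: (f) => JSON.parse(f) },
  endpoint: () => "https://example.test/v1/chat",
  framing: (bytes) => bytes.split("\n\n").filter((f) => f.length > 0),
})
```

Because each axis is independent, swapping one (say, SSE framing for AWS event-stream framing) never touches the other three.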
Kit Langton
6099b3dfe9 refactor(llm): rename Protocol type to ProtocolID
Frees up the Protocol name for the upcoming Protocol implementation type
that owns request lowering, target validation, and stream parsing as a
single composable unit. Field names on ModelRef and Adapter stay as
'protocol' since they carry the string discriminator value.
2026-05-01 08:12:36 -04:00
Kit Langton
20bab34b01 test(llm): share recorded provider scenarios 2026-05-01 08:12:36 -04:00
Kit Langton
cd7487a73b test(llm): add focused recorded test filters 2026-05-01 08:12:36 -04:00
Kit Langton
a921eb88e6 test(opencode): cover Azure native request mapping 2026-05-01 08:12:36 -04:00
Kit Langton
f2f7a338de feat(llm): resolve Azure provider natively 2026-05-01 08:12:36 -04:00
Kit Langton
7141036ec4 refactor(llm): simplify provider resolver defaults 2026-05-01 08:12:36 -04:00
Kit Langton
b0be03facd refactor(llm): clarify provider resolution 2026-05-01 08:12:35 -04:00
Kit Langton
1cd53b27ec chore(llm): clean up PR docs 2026-05-01 08:12:35 -04:00
Kit Langton
59f39a922f chore(opencode): drop local LLM adapter spec from branch 2026-05-01 08:12:35 -04:00
Kit Langton
7fba0efbd9 fix(opencode): update native LLM imports after rebase 2026-05-01 08:12:35 -04:00
Kit Langton
0e558e13c7 feat(opencode): populate nativeTools from prompt.ts so production sessions can route through the native path (audit gap #4 phase 2 step 3)
Wires the prompt-side tool resolver to also surface opencode-native
`Tool.Def[]` alongside the AI SDK record it already builds. With
`OPENCODE_EXPERIMENTAL_LLM_NATIVE=1` set, real production sessions
that satisfy the gate now stream through `LLMNativeTools.runWithTools`
instead of `streamText` — the LLM-native path goes from
"plumbing-only" to "actually used."

Changes:

- `prompt.ts:resolveTools` collects `Tool.Def[]` from the registry
  loop and tracks a feasibility flag. MCP tools (which only have AI
  SDK shape) flip the flag off; the synthesized `StructuredOutput`
  tool that the json_schema branch injects also flips it. The return
  shape becomes `{ tools, nativeTools }` where `nativeTools` is
  `undefined` whenever any non-registry tool source contributes —
  callers fall through to the AI SDK path automatically. The
  registry path stays in sync because every `tools[item.id] =
  tool({...})` is paired with a `nativeTools.push(item)` at the same
  loop iteration.

- The single caller (`prompt.ts:1396`) destructures the new shape
  and passes `nativeTools` through to `handle.process(...)`. The
  json_schema branch sets `nativeTools = undefined` after injecting
  `StructuredOutput` so the gate falls through for structured-output
  sessions.

- `runNative` (in `session/llm.ts`) gains two safety nets that work
  regardless of caller behavior:

    1. Coverage check: if AI SDK tools are non-empty, every key must
       have a matching `Tool.Def` in `nativeTools`. A partial set
       falls through. Defends against future callers that might
       emit a partial native list.

    2. Filter parity: `runNative` now calls the existing
       `resolveTools(input)` (the in-file permission/user-disabled
       filter) and intersects its keys with `nativeTools`, then
       feeds the filtered AI SDK record to the dispatcher and the
       filtered native list to `LLMNative.request`. Without this,
       sessions could see permission-disabled tools advertised on
       one path but not the other.

- The dispatch path uses the filtered AI SDK tools record as the
  execute table: `LLMNativeTools.runWithTools({ tools:
  filteredAITools, ... })`. Tool definitions sent to the model are
  the filtered native list. Every tool the model sees can dispatch.

What this enables: a session opted into the experimental flag, with
a clean toolset (registry-only, no MCP, no structured output),
running an Anthropic model, now exercises the streaming-dispatch
loop end-to-end. Tool calls fire as soon as the model finishes
streaming each tool's input; results land in the stream the moment
each handler resolves. Multi-round behavior matches phase 2 step 2b.

What this still does NOT do (deferred to step 4):

- Parity test harness comparing native vs AI SDK event sequences for
  the same scripted session. Until that lands, broader confidence
  comes from running real sessions with the flag set.
- MCP support on the native path. Sessions with MCP servers
  configured stay on AI SDK indefinitely.
- Native support for the synthesized `StructuredOutput` tool.

Verification: opencode typecheck clean for `src/session/*` (the
TUI-side errors visible in the working tree are Kit's parallel
work, untouched here); bridge area tests 36/0/0 across
`llm-native.test.ts` + `llm-native-stream.test.ts` +
`llm-bridge.test.ts`; `prompt.test.ts` still 47/0/0 (no regression
from the resolveTools shape change).
2026-05-01 08:12:35 -04:00
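The two safety nets in `runNative` reduce to plain set logic. A hypothetical sketch (the function name `nativeToolPlan`, the `ToolDef` shape, and the `allowed` set standing in for the in-file permission filter are all illustrative):

```typescript
interface ToolDef { id: string }

function nativeToolPlan(
  aiTools: Record<string, unknown>,            // AI SDK tools record
  nativeTools: ReadonlyArray<ToolDef> | undefined,
  allowed: ReadonlySet<string>,                // permission/user-disabled filter
): { aiTools: Record<string, unknown>; nativeTools: ToolDef[] } | undefined {
  if (nativeTools === undefined) return undefined // caller opted out
  const nativeIds = new Set(nativeTools.map((t) => t.id))

  // 1. Coverage check: every AI SDK key needs a matching Tool.Def;
  //    a partial native list falls through to the AI SDK path.
  for (const key of Object.keys(aiTools)) {
    if (!nativeIds.has(key)) return undefined
  }

  // 2. Filter parity: intersect both tool sets with the permission
  //    filter so both paths advertise the same subset.
  const filteredAI: Record<string, unknown> = {}
  for (const key of Object.keys(aiTools)) {
    if (allowed.has(key)) filteredAI[key] = aiTools[key]
  }
  return {
    aiTools: filteredAI,
    nativeTools: nativeTools.filter((t) => allowed.has(t.id) && t.id in aiTools),
  }
}
```

`undefined` here plays the same role as in the commit: fall through to the AI SDK path rather than ship a mismatched toolset.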
Kit Langton
afa57acfda refactor(llm): extract HTTP recorder package 2026-05-01 08:12:35 -04:00
Kit Langton
189161ed62 feat(opencode): streaming tool dispatch and multi-round loop on the native path (audit gap #4 phase 2 step 2b)
Lands the streaming-dispatch tool loop for the LLM-native path. When
the gate-passing session has `nativeTools` populated, the native
runner forks an AI SDK `tool.execute(...)` the moment a `tool-call`
event arrives mid-stream and injects a synthetic `tool-result` event
back into the same stream when the handler resolves. Long-running
tools no longer block subsequent tool-call streaming; the user sees
each result land as soon as that specific handler completes.

The driver loops across rounds: when a round ends with `reason:
"tool-calls"` AND the dispatchers produced at least one result, the
runner builds a continuation `LLMRequest` (assistant message echoing
text/reasoning/tool-call content + tool messages with results) and
recurses. Stops on a non-`tool-calls` finish, when `maxSteps`
(default 10, mirrors `ToolRuntime.run`) is reached, or when the
underlying scope is interrupted.

New file `session/llm-native-tools.ts`:

- `runWithTools({ client, request, tools, abort, maxSteps? })` is the
  public entry point. Returns a `Stream<LLMEvent, LLMError,
  RequestExecutor.Service>` of merged model events + synthetic tool
  results, ready to flow through `LLMNativeEvents.mapper` for
  consumption by the existing session processor.
- `runOneRound` is the internal building block. It opens an unbounded
  `Queue<LLMEvent, LLMError | Cause.Done>`, forks a producer that
  streams the model and pushes each event to the queue, and forks a
  dispatcher (via a scope-bound `FiberSet`) for every
  non-provider-executed `tool-call`. Each dispatcher's result is
  pushed back into the same queue. After the model stream completes,
  the producer awaits `FiberSet.awaitEmpty` and ends the queue;
  consumers see end-of-stream. A `Deferred<RoundState>` resolves
  alongside so the multi-round driver can decide whether to recurse.
- `dispatchTool` wraps the AI SDK `tool.execute(input, { toolCallId,
  messages, abortSignal })` call. Unknown-tool and execute-throws
  paths produce `tool-error` events instead of failing the stream
  (mirrors `ToolRuntime.run`'s defect-vs-recoverable boundary), so
  the model can self-correct on the next round.

Wired into `runNative` (`session/llm.ts`): when `input.nativeTools`
is non-empty, the upstream becomes `LLMNativeTools.runWithTools(...)`
instead of `nativeClient.stream(...)`; the AI SDK `tools` record
flows in as the dispatch table. Zero-tool sessions still take the
direct-stream path (one round, no dispatch overhead).

Mapper update (`session/llm-native-events.ts`): `tool-result` events
whose `result.value` matches the opencode `Tool.ExecuteResult` shape
(`{ output: string, title?: string, metadata?: object }`) now flow
through to the AI-SDK-shaped session event with their `title` and
`metadata` preserved. Provider-executed and synthetic results that
don't match still fall back to `stringifyResult`. Without this, the
session processor would see every native tool result as
`{ title: "", metadata: {}, output: <JSON of the whole record> }`.

Smoke test (`test/session/llm-native-stream.test.ts`): scripts a
two-round Anthropic SSE backend — round 1 issues a `lookup` tool
call, round 2 replies with text after the tool result feeds back.
Asserts the full event sequence threads through `runWithTools`,
the dispatcher, and the mapper:

- `tool-call` event has the streamed JSON input parsed.
- `tool-result` event carries the `ExecuteResult` shape with
  `title` + `output` populated (proving the mapper update works).
- Round 2 text-delta arrives after the synthetic tool-result.
- Final `finish` event has `finishReason: "stop"` (loop terminated).

What this still does NOT do (deferred to step 3):

- No production caller populates `nativeTools` yet; that's the
  `prompt.ts:resolveTools` change. Until that lands, the gate keeps
  every real session on the AI SDK path.
- No parity harness comparing native + AI SDK event sequences for
  the same scripted session. That's step 4.

Verification: opencode typecheck clean; 36/0/0 across the three
bridge-area tests; 125/0/0 across the LLM package.
2026-05-01 08:12:35 -04:00
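The multi-round driver's termination logic can be shown in a synchronous sketch. The real code streams events and forks dispatch fibers; here a round is just a function returning a finish reason plus any tool results, and the names (`runRounds`, `Round`) are hypothetical:

```typescript
interface Round {
  finishReason: "stop" | "tool-calls"
  toolResults: string[]
}

function runRounds(
  round: (step: number) => Round,
  maxSteps = 10, // mirrors the commit's ToolRuntime.run-matching default
): { steps: number; finishReason: string } {
  let step = 0
  for (;;) {
    step += 1
    const r = round(step)
    // Continue only when the model asked for tools AND the dispatchers
    // produced at least one result, and the step budget is not exhausted.
    const shouldContinue =
      r.finishReason === "tool-calls" &&
      r.toolResults.length > 0 &&
      step < maxSteps
    if (!shouldContinue) return { steps: step, finishReason: r.finishReason }
  }
}
```

This mirrors the stop conditions described above: a non-`tool-calls` finish, a round that produced no results, or hitting `maxSteps` (scope interruption is the one condition a synchronous sketch cannot show).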
Kit Langton
fa8f7a1dca feat(opencode): plumb nativeTools through StreamInput (audit gap #4 phase 2 step 2a)
Adds opt-in `nativeTools?: ReadonlyArray<Tool.Def>` to `LLM.StreamInput`
so callers that route through the native path can attach typed
opencode tool definitions alongside the AI SDK `tools` record. The
gate in `runNative` widens accordingly: a session can use the native
path when it has zero tools (existing behavior) OR when it explicitly
provides `nativeTools` matching its AI SDK `tools` (new opt-in). When
`nativeTools` reaches `LLMNative.request`, the existing
`toolDefinition` converter folds each `Tool.Def` into the request's
`tools` array and the LLM core lowers it onto the wire.

This commit deliberately does NOT include the dispatch loop. A
session that opts in by setting `nativeTools` and that triggers a
`tool-call` from the model will see the call event but no
`tool-result` because the native path has no execute handler yet.
That's why no production caller populates `nativeTools`: phase 2
step 2b will land the dispatch loop and only then will real
production sessions route through here.

What this lays in place:

- `StreamInput.nativeTools` typed against `Tool.Def[]` from `@/tool`.
  Aliased to `OpenCodeTool` at the import to dodge a clash with the
  AI SDK `Tool` type that the same file already imports.
- The `runNative` gate flips from "no tools allowed" to "either no
  tools, or `nativeTools` is supplied". An AI SDK tool count > 0
  with `nativeTools` undefined still falls through, so existing
  production sessions are unaffected.
- `LLMNative.request` already accepted `tools: ReadonlyArray<Tool.Def>`
  and converts via `toolDefinition`. We just forward the input
  through; no LLM-bridge change.

Smoke coverage: a new test in `llm-native-stream.test.ts` builds a
typed `Tool.Def` (Effect Schema parameters), routes it through
`LLMNative.request` + `LLMClient.prepare`, and asserts the prepared
Anthropic target carries the tool as an `input_schema` block with
the expected JSON Schema shape. This validates the conversion path
that phase 2 step 2b will exercise from inside `runNative`.

Verification: opencode typecheck clean; 35/0/0 across the three
bridge-area tests (`llm-native.test.ts`, `llm-native-stream.test.ts`,
`llm-bridge.test.ts`).
2026-05-01 08:12:35 -04:00
Kit Langton
afba37d330 test(opencode): smoke test for LLM-native stream wire-up (audit gap #4 phase 2)
Adds `test/session/llm-native-stream.test.ts` — one focused test that
proves the end-to-end wire-up `runNative` relies on actually produces
session events from a scripted Anthropic SSE response.

The test stays self-contained:

- Builds a fake Anthropic `Provider.Info` + `Provider.Model` via
  `ProviderTest`.
- Builds an `LLMRequest` via `LLMNative.request(...)` from a
  `MessageV2.WithParts` user message — the same call shape `runNative`
  uses inside `session/llm.ts`.
- Creates an `LLMClient` with the same adapters list + `ProviderPatch.defaults`
  list as `runNative`. The adapters are imported directly from
  `@opencode-ai/llm`; if `runNative`'s `NATIVE_ADAPTERS` array changes,
  this test's `adapters` constant has to follow (commented).
- Provides a single fixed-response HTTP layer that returns a scripted
  Anthropic SSE body. The layer helper is inlined (12 lines) rather
  than imported from `packages/llm/test/lib/http.ts` so the test
  doesn't reach across package boundaries.
- Pipes the LLM stream through `LLMNativeEvents.mapper()` exactly as
  `runNative` does (`Stream.flatMap` + lazy `Stream.concat` for
  flush), runs it to completion, and asserts the key session events:
  `text-start` precedes `text-delta`, `finish-step` carries
  `finishReason: "stop"`, and `finish` carries the merged usage totals.

This does NOT test the dispatch gate inside `session/llm.ts`
(`!Flag.OPENCODE_EXPERIMENTAL_LLM_NATIVE`, missing `nativeMessages`,
tools present, non-Anthropic protocol). Those are simple boolean
conditions and don't need separate coverage. It also does not exercise
the production `Service` layer — that's deferred to Phase 2 step 2
(tool support) and Phase 2 step 3 (production caller wiring).

What the test buys: confidence that the conversion pipeline works and
catches regressions in `LLMNative.request`, the LLM adapter set, or
`LLMNativeEvents.mapper` before they would surface in a real session.

Verification: 34/0/0 across the three bridge-area tests
(`llm-native.test.ts` + `llm-native-stream.test.ts` +
`llm-bridge.test.ts`); opencode typecheck clean.
2026-05-01 08:12:35 -04:00
Kit Langton
fc3a1bfd34 feat(opencode): wire LLM-native stream path behind opt-in flag (audit gap #4 phase 1)
Adds the parallel `runNative()` path inside `session/llm.ts` so a narrow
slice of sessions can flow through `@opencode-ai/llm` instead of the AI
SDK `streamText`. Behavior is gated and shipped off by default; only
callers that opt in see any difference.

The full migration plan (audit gap #4) is parallel-path-with-flag,
prove parity test-by-test, flip default last. This commit is phase 1:
get the wire-up in place behind a flag with one protocol so we can see
whether the design holds before committing to the full migration.

Wire-up summary:

- New flag `OPENCODE_EXPERIMENTAL_LLM_NATIVE` (also enabled by the
  umbrella `OPENCODE_EXPERIMENTAL`). Off by default.
- The session-LLM `live` layer now consumes `RequestExecutor.Service`,
  and the `defaultLayer` provides `RequestExecutor.defaultLayer` so a
  Node fetch HTTP client backs every native stream.
- `runNative(input)` returns `Stream<Event> | undefined`. `undefined`
  means "fall through to AI SDK." It returns a real stream only when
  every gate passes: the flag is set, the caller populated
  `input.nativeMessages` (the bridge needs typed `MessageV2.WithParts`,
  not the AI SDK `messages` array), the session has zero tools (Phase
  2 will lift this), and the bridge routes the model to a protocol in
  `NATIVE_PROTOCOLS`.
- `NATIVE_PROTOCOLS` is a single-entry set today: `anthropic-messages`.
  Other adapters are imported and registered with the client so the
  Phase 2 expansion is a one-line edit, not an architecture change.
- Stream wiring: client.stream(req) -> Stream.flatMap(event ->
  fromIterable(map.map(event))) -> Stream.concat(suspended
  fromIterable(map.flush())) -> Stream.provideService(
  RequestExecutor.Service, executor). The flush stream is built lazily
  with `Stream.unwrap(Effect.sync(...))` so it observes the mapper
  final state after every upstream event has been mapped.
- The mapper (`LLMNativeEvents.mapper`) emits AI-SDK-shaped session
  events from `LLMEvent` so downstream consumers see one shape.

What this does NOT do (deferred to later phases):

- No tool support on the native path (skipped, falls through).
- No parity harness yet; Phase 2 builds it.
- No production traffic; flag is off by default and no production
  caller populates `nativeMessages`.
- No reasoning/cache/multi-modal coverage. Anthropic supports reasoning
  and cache via existing patches, so those start working as soon as a
  caller routes a real session through.

Verification: opencode typecheck clean, bridge tests still green
(33/0/0 across llm-native.test.ts + llm-bridge.test.ts); LLM package
tests green (123/0/0).
2026-05-01 08:12:35 -04:00
Kit Langton
0ba8ca63b6 refactor(llm): Bedrock JSON-codec compliance, signing-headers cleanup, and small dedup
Five review findings; all small, all independent.

H2: Bedrock used raw `JSON.parse` and `JSON.stringify` despite the
package rule against ad-hoc JSON encoders. The in-loop parse on each
event-stream frame now goes through `ProviderShared.parseJson`
(yielded inside `Effect.gen`); the `decodeChunk` error fallback now
uses `ProviderShared.encodeJson` instead of `JSON.stringify` for the
raw field on `ProviderChunkError`. No behavior change — just channels
JSON through the shared Schema-driven codec.

H3: `BedrockConverse.toHttp` built a `baseHeaders` record with
`content-type: application/json` and passed it through both auth
paths. The bearer path called `jsonPost` with the raw model headers
(no manual content-type), while the SigV4 path used `baseHeaders`
plus the signed result. The two paths produced subtly different
header sets, and both relied on `jsonPost` overwriting or adding the
same content-type key. Simplify: drop the unused bearer-side
construction; rename the
SigV4 input to `headersForSigning` and document why content-type
must be present at signing time (signature covers it).

M4: Lift `isRecord` from `gemini.ts` into `ProviderShared.isRecord`
so adapters share one definition. The duplicates in `llm.ts` (LLM IR
layer) and `llm-native.ts` (OpenCode bridge) stay where they are —
those are at different layers and importing from `provider/` would
invert the dependency direction. Net effect: the provider layer
goes from 2 copies to 1.
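
A minimal sketch of the shared guard (the exact shape of
`ProviderShared.isRecord` is assumed from its name):

```typescript
// Narrow an unknown value to a plain record; arrays and null are
// objects in JS, so both must be excluded explicitly.
const isRecord = (value: unknown): value is Record<string, unknown> =>
  typeof value === "object" && value !== null && !Array.isArray(value)
```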

L8: `TransportError` lost everything but the message string.
Surface the originating reason tag (`Timeout` / `TransportError` /
`ResponseError` / `RequestError`) and the request URL when
available, both as optional Schema fields. Consumers that don't
care keep getting the same `message` rendering; consumers that do
can finally render "timed out connecting to https://..." instead
of "HTTP transport failed".

M9 + L3: Two dead branches. Anthropic's `processChunk` had
`?? ""` fallbacks for `partial_json` after an early-return guard
already proved it non-empty. OpenAI Chat's `mapFinishReason` had
`if (reason === undefined || reason === null) return "unknown"`
followed by `return "unknown"` — both branches went to the same
place. Drop the unreachable code.

120 LLM-package tests + 33 OpenCode bridge tests still green.
2026-05-01 08:12:35 -04:00
Kit Langton
38af0dc6f8 refactor(llm): centralize codec scaffolding, ToolAccumulator, and totalTokens policy
Three review findings collapsed into one ProviderShared pass.

M1: Five adapters duplicated the same six-line block:

    const ChunkJson = Schema.fromJsonString(Chunk)
    const TargetJson = Schema.fromJsonString(Target)
    const decodeChunkSync = Schema.decodeUnknownSync(ChunkJson)
    const encodeTarget = Schema.encodeSync(TargetJson)
    const decodeTarget = Schema.decodeUnknownEffect(Draft.pipe(Schema.decodeTo(Target)))
    const decodeChunk = (data) => Effect.try({...chunkError(...)})

Lift it into `ProviderShared.codecs({ adapter, draft, target, chunk,
chunkErrorMessage })` returning `{ encodeTarget, decodeTarget,
decodeChunk }`. The result drops directly into `Adapter.define`'s
`validate` field (uses `validateWith` internally to map parse errors
to InvalidRequestError). Adopted in OpenAI Chat, OpenAI Responses,
Anthropic Messages, and Gemini. Bedrock has a custom event-stream
`decodeChunk` that takes `unknown` (not `string`) so it keeps its
inline codecs.

M2: Four adapters defined an identical `ToolAccumulator` interface
(`{ readonly id: string; readonly name: string; readonly input:
string }`). Lift to `ProviderShared.ToolAccumulator`. Anthropic
extends it locally with `providerExecuted` for hosted tools.

M3: The five `mapUsage` implementations had subtly different
`totalTokens` policies — OpenAI Chat passed through whatever the
provider sent, OpenAI Responses unconditionally summed inputs and
output (publishing `totalTokens: 0` when both were `undefined`),
Anthropic and Gemini guarded with conditionals, Bedrock used a
`(...) || undefined` falsy fallback. Add `ProviderShared.totalTokens`
with one rule: prefer provider-supplied total, else sum inputs and
outputs only when at least one is defined, else `undefined`. Fixes
the OpenAI Responses `totalTokens: 0` bug.
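
The one-rule policy described above, sketched as a plain function
(the real `ProviderShared.totalTokens` signature is assumed):

```typescript
// Prefer the provider-supplied total; else sum only when at least one
// side is defined; else stay undefined (this is what fixes the
// OpenAI Responses `totalTokens: 0` bug).
const totalTokens = (usage: {
  readonly totalTokens?: number
  readonly inputTokens?: number
  readonly outputTokens?: number
}): number | undefined => {
  if (usage.totalTokens !== undefined) return usage.totalTokens
  if (usage.inputTokens !== undefined || usage.outputTokens !== undefined)
    return (usage.inputTokens ?? 0) + (usage.outputTokens ?? 0)
  return undefined
}
```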

M6: Anthropic's `mergeUsage` recomputed `totalTokens` from the merged
input/output via two nested ?? chains and a conditional sum.
Simplified to use the same totalTokens helper, with `inputTokens` and
`outputTokens` extracted as locals so the merge is one ?? per field
and the comment explains why merging exists (Anthropic emits usage
on `message_start` and `message_delta`).

No behavior changes other than the OpenAI Responses fix; existing
tests pass unchanged. 120 LLM-package tests + 33 OpenCode bridge
tests green.
2026-05-01 08:12:35 -04:00
Kit Langton
8bbbceef92 fix(llm): unify apiKey precedence and consolidate Gemini schema conversion
Two issues from the review of the LLM package's six adapters.

H1: Inconsistent apiKey precedence. Five of six adapters spread the
caller's headers first then set the auth header (apiKey wins), but
`OpenAICompatibleChat.model` did the opposite (caller headers won).
That meant a user passing both `apiKey` and `headers.authorization`
would get auth from a different source depending on which adapter
they routed through. Flip the OpenAI-compatible adapter to match the
rest, and add a comment documenting the rule: apiKey wins, callers
who want their own auth header should omit `apiKey` entirely.

H4: Gemini tool-schema sanitization was split across two functions
that both ran on every Gemini request — `convertJsonSchema` in the
adapter (lossy projection: drop empty objects, derive nullable from
type-array, allowlist of preserved keys, recursive properties/items)
and `sanitizeGeminiSchemaNode` registered as a default `tool-schema`
patch (fix-up: integer enums to strings, dangling required filtering,
untyped array typing, scalar property stripping). Both passes only
ran on Gemini models; debugging a tool schema rejection meant
checking both files.

Fold the patch's rules into the adapter as `sanitizeToolSchemaNode`,
running before the existing projection step (renamed
`projectToolSchemaNode`). Compose them in `convertToolSchema` and use
that in `lowerTool`. Delete the patch from `provider/patch.ts` and
`ProviderPatch.defaults`. The behavior is unchanged — same input,
same output — but the rules now live in one file with a header
comment explaining the two concerns.

The matching test in `gemini.test.ts` no longer needs to opt into a
patch list; it now asserts the adapter alone produces the sanitized
shape.
2026-05-01 08:12:35 -04:00
Kit Langton
d00db17902 feat(opencode): add native LLM event bridge 2026-05-01 08:12:35 -04:00
Kit Langton
f59996362e feat(opencode): round-trip encrypted reasoning content through the bridge
Closes audit gap #3. The bridge now extracts the encrypted reasoning
blob from `MessageV2.ReasoningPart.metadata` and surfaces it on
`LLM.ReasoningPart.encrypted`, where the Anthropic and Bedrock
adapters lower it to the wire — Anthropic emits `thinking.signature`,
Bedrock emits `reasoningContent.reasoningText.signature`. Without
this, multi-turn sessions with reasoning models would lose the
encrypted state on every step and break the chain.

The encrypted blob originates in three different places depending on
how the session was started:

1. AI-SDK Anthropic sessions store it as
   `metadata.anthropic.signature` (per AI SDK provider-keyed
   convention).
2. AI-SDK OpenAI sessions store it as
   `metadata.openai.reasoningEncryptedContent`.
3. Future LLM-native sessions will store it as a top-level
   `metadata.encrypted` string (cleanest shape — provider-agnostic,
   matches the LLM IR field name).

The new `encryptedReasoning` helper probes all three locations in
order, so existing OpenCode sessions can be served by the LLM-native
path without re-recording reasoning content. The full `metadata`
record continues to flow through to `LLM.ReasoningPart.metadata`
unchanged, preserving any provider-specific fields adapters might
read in the future.
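
The in-order probe can be sketched like this; the key paths come from
the list above, but the helper's exact shape is an assumption:

```typescript
// Probe the three known metadata locations in order and return the
// first string found. `any` stands in for the real metadata schema.
const encryptedReasoning = (
  metadata: Record<string, any> | undefined,
): string | undefined => {
  if (metadata === undefined) return undefined
  const candidates = [
    metadata.anthropic?.signature, // 1. AI-SDK Anthropic convention
    metadata.openai?.reasoningEncryptedContent, // 2. AI-SDK OpenAI
    metadata.encrypted, // 3. LLM-native top-level shape
  ]
  return candidates.find((v): v is string => typeof v === "string")
}
```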

OpenAI Responses encrypted reasoning round-trip is intentionally out
of scope: the LLM-package adapter doesn't yet model reasoning items
in the request body. That's a separate adapter feature requiring new
input-item schema variants and is deferred until needed.

Tests (5 new in llm-native.test.ts):
- AI-SDK Anthropic signature extracted into LLM.ReasoningPart.encrypted.
- End-to-end Anthropic lowering: bridge → client.prepare → target with
  `thinking.signature` populated correctly.
- AI-SDK OpenAI reasoningEncryptedContent extracted (forward
  compatibility — useful when the OpenAI Responses adapter gains
  reasoning-item lowering).
- Top-level metadata.encrypted extracted (LLM-native session shape).
- No known key in metadata leaves `encrypted` undefined.

Verified: 33/0/0 across native + bridge tests (was 28; +5 from the
new reasoning extraction tests).
2026-05-01 08:12:35 -04:00
Kit Langton
b653261772 feat(opencode): bridge user FilePart to LLM MediaPart for vision input
Closes audit gap #2 (FilePart → MediaPart not implemented).

The bridge now lowers `MessageV2.FilePart` on user messages into
`LLM.MediaPart`, unblocking image and document inputs. The first
pass supports `data:` URLs only — the inline base64 form most
commonly produced by the OpenCode UI for pasted screenshots and
attached files. `http(s):` and `file:` URLs are explicitly
rejected with a clear error so a future fetch / filesystem-read
path can plug in cleanly without regressing safety.

Implementation:
- New `lowerFilePart` helper extracts the base64 payload from a
  data URL via a single regex; failure yields a typed
  `UnsupportedContentError` carrying both the partType and a
  `reason` that includes the offending URL for debuggability.
- New `lowerUserPart` dispatches user-side parts: text →
  `LLM.text`, file → `MediaPart`. Returns an empty result for any
  unsupported part type the static gate would already have caught.
- `userMessage` is now `Effect.fnUntraced` so file conversion can
  yield typed errors. `lowerMessage` (the per-message dispatcher,
  renamed from `messages` to free the local name) cascades the
  Effect through the request flow via `Effect.forEach`.
- `supportsPart` static gate now allows `file` parts on user
  messages. Assistant messages still reject file parts (the LLM
  IR's MediaPart isn't valid in assistant content for any
  adapter we ship today).
- `UnsupportedContentError` gains an optional `reason` field that
  appends to the canonical message as `<base>: <reason>`. Existing
  static-gate failures keep the same shape (no reason).
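
The single-regex extraction can be sketched as below; the actual
pattern and error plumbing in `lowerFilePart` are assumptions based on
the description (real data URLs may carry extra parameters this toy
regex ignores):

```typescript
// Extract the media type and base64 payload from an inline data URL.
// http(s): and file: URLs fail the match, which is where the typed
// UnsupportedContentError is raised in the real bridge.
const parseDataUrl = (
  url: string,
): { mediaType: string; base64: string } | undefined => {
  const match = /^data:([^;,]+);base64,(.+)$/.exec(url)
  if (match === null) return undefined
  return { mediaType: match[1], base64: match[2] }
}
```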

Tests (3 new, 1 rewritten):
- Image data URL with filename round-trips to MediaPart with
  base64-stripped data.
- PDF data URL preserves filename and base64 payload.
- `https:` URL rejected with an error mentioning the file
  partType, the message ID, and the offending URL.
- The pre-existing "fails instead of dropping unsupported native
  parts" test now uses a reasoning part on a user message
  (reasoning is valid for assistants only) since file parts with
  data URLs are no longer rejected by the static gate.

Out of scope, intentional follow-ups:
- HTTP/HTTPS URL fetching (would need HttpClient.HttpClient and a
  decision on caching, retries, size limits).
- File path / file:// URL reading (would need FileSystem.FileSystem
  and a permission check against the session's working directory).
- File parts on assistant messages (LLM IR doesn't model
  assistant-side media; defer until we hit a provider that needs it).
- text/plain and application/x-directory file parts that the
  AI-SDK path converts to text inline at message-v2.ts:791 — for
  the bridge, those should be converted upstream before reaching
  LLMNative.request rather than handled here.

Verified: bun typecheck clean, 28/0/0 across native + bridge
tests (was 21; +7 from the FilePart additions plus the rewritten
unsupported-parts test).
2026-05-01 08:12:34 -04:00
Kit Langton
5f08d6cbd6 feat(llm): cachePromptHints patch with first-2 system / last-2 messages policy
Lift the prompt-cache policy out of OpenCode's bridge and into the
LLM package as a typed, gated patch. The policy mirrors the AI-SDK
applyCaching path (packages/opencode/src/provider/transform.ts:229):
mark the first 2 system parts and the last 2 messages with an
ephemeral cache hint, gated on `model.capabilities.cache.prompt`.

Adapters lower the hint structurally — Anthropic emits
`cache_control: { type: "ephemeral" }` on the marked block,
Bedrock emits a positional `cachePoint: { type: "default" }`
after the marked block (added in 9d7d518ac). The capability gate
keeps non-cache adapters (OpenAI Responses, Gemini, OpenAI-compat
Chat) hint-free.

Why a Patch and not bridge code:
- packages/llm/AGENTS.md TODO explicitly calls for cache hint patches
- Other consumers of @opencode-ai/llm get caching for free
- The bridge stays focused on shape conversion (MessageV2 → LLMRequest)
- Patches compose via ProviderPatch.defaults (now includes this one)
- The capability gate is a typed predicate, not provider-name matching

Implementation:
- New `cachePromptHints` patch in provider/patch.ts. The
  `withCacheOnLastText` helper uses Array.findLastIndex (codebase
  idiom) and short-circuits when no text part exists so messages
  with only tool-result content are returned identity-equal.
- `EPHEMERAL_CACHE` is a single shared CacheHint instance — no
  per-request allocation, preserves `instanceof` for any consumer
  that checks class identity.
- Added to `ProviderPatch.defaults` so existing callers that pass
  `defaults` get cache support automatically.

Tests (5 new in patch.test.ts):
- Marks first 2 system parts on cache-capable models.
- Marks last text part of last 2 messages.
- Targets the last text part when a message has trailing
  non-text content (assistant text + tool-call).
- Returns content unchanged (identity-equal) when no text part
  exists, so pure tool-result messages don't allocate.
- No-op when the model does not advertise prompt caching.

Bridge cleanup:
- Removed `applyCachePolicy`, `withCacheOnLastText`,
  `updateMessageContent`, `EPHEMERAL_CACHE` from llm-native.ts
  (-30 lines of bridge-side cache code).
- Dropped now-unused `CacheHint`, `LLMRequest`, `Message` imports.
- The bridge's only responsibility is now MessageV2 lowering;
  callers wire `patches: ProviderPatch.defaults` at client
  construction.

OpenCode tests rewritten:
- Old: assert on `request.system[N].cache` (bridge internals).
- New: assert on `prepared.target` after running through
  `LLMClient.make({ adapters, patches: ProviderPatch.defaults })
  .prepare(request)` — verifies the full lowering end-to-end.
- Anthropic: target.system[0..1] carry `cache_control: ephemeral`,
  target.messages[1..2] carry it on the final text block.
- Bedrock: target has `cachePoint` markers after each cached block.
- Non-cache (OpenAI Responses): JSON.stringify(target) contains
  none of `cache_control` / `cachePoint` / `ephemeral`.

Verified: bun typecheck clean across both packages, 120/0/0 in LLM
package (was 113; +7 from new patch tests counting parameter
variations), 21/0/0 in OpenCode native+bridge tests.
2026-05-01 08:12:34 -04:00
Kit Langton
3cd13c87c4 refactor(llm): standardize native request APIs 2026-05-01 08:12:34 -04:00
Kit Langton
653a830cf6 refactor(llm): clarify tool definition API 2026-05-01 08:12:34 -04:00
Kit Langton
33ef3b01f8 test(opencode): cover native Gemini parity 2026-05-01 08:12:34 -04:00
Kit Langton
a26f2c905f test(opencode): cover native OpenAI-compatible parity 2026-05-01 08:12:34 -04:00
Kit Langton
03a97a64a3 chore(llm): fix low-hanging lint warnings 2026-05-01 08:12:34 -04:00
Kit Langton
096c305a55 feat(llm): Bedrock Converse cache hints, image, and document blocks
Close the parity gaps deferred from the original Bedrock pass.

Schema additions on the Converse target:
- BedrockImageBlock for { image: { format, source: { bytes } } }.
  Supported formats per Converse docs: png, jpeg, gif, webp.
- BedrockDocumentBlock for { document: { format, name, source: { bytes } } }.
  Supported formats: pdf, csv, doc, docx, xls, xlsx, html, txt, md.
- BedrockCachePointBlock for the positional { cachePoint: { type } }
  marker. Currently emits the only Bedrock cache type, 'default'. A
  TODO marks where to map ttlSeconds → ttl ('5m' | '1h') once we have
  a recorded cassette to validate the wire shape.

Lowering:
- TextPart and SystemPart cache hints emit a positional cachePoint
  marker right after their text block. Both 'ephemeral' and
  'persistent' CacheHint types map onto Bedrock's 'default' since
  Bedrock does not distinguish — this matches the convention the
  Anthropic adapter uses (cache?.type === 'ephemeral' check).
- MediaPart routes by mediaType: 'image/*' → image block, everything
  else → document block. MIME type → format mapping is via
  IMAGE_FORMATS / DOCUMENT_FORMATS records typed with 'as const
  satisfies' so the keys stay narrow at compile time.
- A small textWithCache helper collapses the 'push text, push
  cachePoint if cache is set' pattern that would otherwise repeat at
  three callsites (system, user-text, assistant-text).
- Bytes are encoded via ProviderShared.mediaBytes — the shared
  helper Kit landed in c3346f7dc.
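
The MIME → format routing with `as const satisfies` can be sketched
as follows; the format lists come from the commit (documents
abbreviated), but the lookup shape is an assumption:

```typescript
// `as const satisfies` keeps the keys narrow at compile time while
// still checking the record shape.
const IMAGE_FORMATS = {
  "image/png": "png",
  "image/jpeg": "jpeg",
  "image/jpg": "jpeg", // alias
  "image/gif": "gif",
  "image/webp": "webp",
} as const satisfies Record<string, string>

// Subset of the document formats the commit lists.
const DOCUMENT_FORMATS = {
  "application/pdf": "pdf",
  "text/csv": "csv",
  "text/html": "html",
  "text/plain": "txt",
  "text/markdown": "md",
} as const satisfies Record<string, string>

// Route by mediaType: image/* → image block, else → document block;
// unknown MIME types return undefined (a typed error in the adapter).
const routeMedia = (
  mediaType: string,
): { kind: "image" | "document"; format: string } | undefined => {
  const images: Record<string, string> = IMAGE_FORMATS
  const documents: Record<string, string> = DOCUMENT_FORMATS
  if (mediaType.startsWith("image/")) {
    const format: string | undefined = images[mediaType]
    return format === undefined ? undefined : { kind: "image", format }
  }
  const format: string | undefined = documents[mediaType]
  return format === undefined ? undefined : { kind: "document", format }
}
```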

Bug fix: lowerSystem was dead code in the previous draft. The
prepare() function still inlined the pre-cache .map(...) that
discarded system cache hints. prepare() now calls lowerSystem so
the cachePoint markers actually flow through.

Tests (7 new fixtures, all green):
- Cache hint on system / user-text / assistant-text emits cachePoint
  after text in each context.
- No cache hint → no cachePoint emitted (regression guard).
- Image lowering covers png / jpeg / jpg-alias / webp.
- Uint8Array image bytes are base64-encoded ([1,2,3,4,5] → AQIDBAU=).
- Document lowering with filename round-trip and missing-filename
  fallback to 'document.<format>'.
- Unsupported image MIME (image/svg+xml) is rejected with a clear
  error message.
- Unsupported document MIME (application/x-tar) is rejected with a
  clear error message.

Recorded cassettes for cache hints, images, and documents are still
TODO — the wire shapes are exercised deterministically here and will
be validated against a live model in a follow-up cassette pass.

Verified: bun typecheck clean, 113 pass / 0 fail / 0 skip (was 106;
+7 from the new fixture tests).
2026-05-01 08:12:34 -04:00
Kit Langton
ecd73f26fc refactor(llm): simplify adapter shared logic 2026-05-01 08:12:34 -04:00
Kit Langton
1a839c6233 refactor(opencode): tighten native LLM bridge boundaries 2026-05-01 08:12:34 -04:00
Kit Langton
c69f2bb15e refactor(llm): centralize InvalidRequestError, validate, and JSON POST
Phase A continuation of the ProviderShared dedupe pass. Three more
patterns lifted into ProviderShared so they're written once:

ProviderShared.invalidRequest(message) — replaces six identical
`const invalid = (message) => new InvalidRequestError({ message })`
one-liners across openai-chat, openai-responses, anthropic-messages,
gemini, openai-compatible-chat, and bedrock-converse. Each adapter
keeps a short `const invalid = ProviderShared.invalidRequest` alias
so the 27 callsite `yield* invalid("...")` patterns are unchanged.
Bedrock's SigV4 catch path and the openai-compatible-chat baseURL
guard both go through the helper now too.

ProviderShared.validateWith(decode) — replaces the identical
`(draft) => decode(draft).pipe(Effect.mapError((e) =>
invalid(e.message)))` lambda body in five adapters. Same line count
but shorter, names the pattern, and keeps the `decode → mapError →
InvalidRequestError` translation in one canonical spot.

ProviderShared.jsonPost({ url, body, headers }) — replaces the
five-adapter pattern of `HttpClientRequest.post(url).pipe(setHeaders,
bodyText)` for JSON-body POSTs. Sets `content-type: application/json`
last so caller headers can override everything except the
content-type. Bedrock uses it for both the bearer-auth and SigV4-
signed paths; SigV4 still signs against `baseHeaders` (which already
contained content-type) so the signature matches what the helper
ultimately sends.
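
The "content-type set last" rule reduces to spread ordering; this is
a plain-record sketch, whereas the real helper builds an Effect
`HttpClientRequest`:

```typescript
// Caller headers spread first, content-type written last so it always
// wins; everything else remains overridable by the caller.
const jsonPostHeaders = (
  headers: Record<string, string>,
): Record<string, string> => ({
  ...headers,
  "content-type": "application/json",
})
```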

Net change: -73 / +86, net +13: the additions land in shared.ts
(mostly JSDoc), the deletions across the six adapters. The
`HttpClientRequest` and `InvalidRequestError`
imports are dropped from the five SSE adapters and from Bedrock since
they're no longer referenced directly.

Verified: `bun typecheck` clean, 106 pass / 0 fail / 0 skip
(unchanged).
2026-05-01 08:12:34 -04:00
Kit Langton
339db0e885 docs(llm): document ProviderShared helpers and framing seam
Update the adapter authoring guide to reflect the dedupe pass:

- Generalize the `parse` bullet from `ProviderShared.sse` to
  `ProviderShared.framed` and call out the two framing dialects
  in use today (SSE for OpenAI/Anthropic/Gemini/compat, AWS event
  stream for Bedrock).
- Spell out that `framed`'s `framing` parameter is the seam for
  new wire formats; the rest of the pipeline is shared.
- New 'Shared adapter helpers' subsection enumerating the
  `ProviderShared` exports a new adapter author should reach for
  before hand-rolling: `framed`, `sse`, `sseFraming`, `joinText`,
  `parseToolInput`, `parseJson`, `chunkError`.
- Closing nudge: lift 3-5 line repeats into ProviderShared rather
  than copy them between adapters.

Doc-only — no code or test changes.
2026-05-01 08:12:34 -04:00
Kit Langton
fa2a5d1fdb feat(opencode): convert native LLM message history 2026-05-01 08:12:34 -04:00
Kit Langton
3a94622e76 refactor(llm): dedupe adapter scaffolding into ProviderShared
Promote three repeated patterns out of individual adapters into
ProviderShared so a fifth or sixth adapter doesn't write the same
glue code over again.

ProviderShared.joinText(parts) — replaces the per-adapter `text()`
helper that joined an array of parts with newlines. Used by OpenAI
Chat (system content, user text, assistant text), OpenAI Responses
(system content), and Gemini (systemInstruction). The dead copies in
Anthropic Messages and Bedrock are gone.

ProviderShared.parseToolInput(adapter, name, raw) — replaces the
identical `parseJson(adapter, raw || "{}", \`Invalid JSON input
for <adapter> tool call <name>\`)` invocation in finishToolCall
across Anthropic, OpenAI Chat, OpenAI Responses, and Bedrock. Uniform
error message and the empty-string-to-"{}" fallback handled in one
place.

ProviderShared.framed(...) — generalizes the existing `sse()` helper
so the protocol-specific framing layer is pluggable. The shared
shape is bytes → frames → chunk → (state, events) with mapError /
mapEffect / mapAccumEffect / catchCause as the spine; framing is
the only varying step.

ProviderShared.sseFraming — the SSE-specific framing implementation
(decodeText + Sse.decode + filter [DONE]). The existing `sse()`
helper now delegates to `framed` with this framing, keeping the
adapter API surface identical.
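
The pluggable-framing spine can be sketched with generators; the
Effect combinators (`mapError` / `mapAccumEffect` / `catchCause`) are
replaced with plain iteration, and all names are illustrative:

```typescript
// The only protocol-specific step: bytes in, decoded frames out.
type Framing = (bytes: Iterable<Uint8Array>) => Iterable<string>

// Shared spine: bytes → frames → events. Error normalization from the
// real pipeline is omitted here.
function* framed<Event>(
  bytes: Iterable<Uint8Array>,
  framing: Framing,
  decode: (frame: string) => Event,
): Generator<Event> {
  for (const frame of framing(bytes)) yield decode(frame)
}

// Toy newline framing standing in for sseFraming; a trailing partial
// line without a newline is dropped for brevity, and a real framing
// would use a streaming text decoder.
const lineFraming: Framing = function* (bytes) {
  let buffer = ""
  for (const chunk of bytes) {
    buffer += new TextDecoder().decode(chunk)
    const lines = buffer.split("\n")
    buffer = lines.pop() ?? ""
    yield* lines.filter((line) => line.length > 0)
  }
}
```

A new wire format (the Bedrock event stream, say) supplies only its
own `Framing`; `framed` stays untouched.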

Bedrock's parseStream — collapses to a single `ProviderShared.framed`
call with its own `eventStreamFraming` step. The cursor-based byte
buffer + AWS event-stream codec live as inputs to framed; everything
else is shared with the SSE adapters. Bedrock now has the same
`catchCause → streamError` terminal-error normalization that SSE
adapters have (it was missing before this refactor).

Net effect across the llm package: -66 lines / +114 lines but the
+114 is mostly JSDoc on the new helpers; adapter implementations
shrink. A future protocol (Bedrock InvokeModel, Vertex Gemini binary
streaming, etc.) plugs in by supplying its `framing` step.

Verified: `bun typecheck` clean, 106 pass / 0 fail / 0 skip
(unchanged from before the refactor).
2026-05-01 08:12:34 -04:00
Kit Langton
778b1762b0 feat(opencode): convert native LLM tool definitions 2026-05-01 08:12:34 -04:00
Kit Langton
bab2fbc7f6 refactor(llm): simplify Bedrock Converse adapter after review
Cleanup of the Bedrock adapter (ba1705d) following parallel review
passes for code reuse, code quality, and efficiency.

- Drop dead `text` join helper and unused `TextPart` import.
- Schema-validate `model.native.aws_credentials` instead of seven
  manual `typeof` guards in `credentialsFromInput`. Removes the
  unsafe `as Record<string, unknown>` cast and fixes the dead
  `native?.region` fallback (the `model()` constructor only writes
  `aws_region`).
- Skip the JSON.parse → JSON.stringify → Schema.fromJsonString triple
  round-trip in the frame consumer. The eventstream codec already
  hands us a UTF-8 payload; parse once and feed the wrapped object
  directly to `Schema.decodeUnknownSync(BedrockChunk)`.
- Replace O(n²) buffer concat in `consumeFrames` with a cursor-based
  state `{ buffer, offset }`. Compaction happens once per network
  chunk via `appendChunk` instead of per frame; frame slicing is
  zero-copy via `subarray`. Bounded buffer growth regardless of
  stream length.
- Rename `ParserState.finishReason` → `pendingStopReason` (raw
  string) and defer the `mapFinishReason` call to the single emit
  site, plus the `onHalt` fallback. Tightens the helper's signature
  to `(reason: string)` so the chunk-typed `messageStop.stopReason`
  flows through without the optional widening.
- Restructure `signRequest` to take an object parameter (was four
  positional args), and replace the manual `forEach`-into-record with
  `Object.fromEntries(signed.headers.entries())`.
- Inline single-use `status` and `useTools` variables.
- Widen `fixedResponse` to accept `ConstructorParameters<typeof Response>[0]`
  so binary fixtures (`Uint8Array`, streams) flow without casts. The
  Bedrock test's `fixedBytes` helper now wraps it cleanly.
- Tidy `captureResponseBody` into a ternary returning the union shape
  directly so the call site spreads the captured object without
  reaching for `bodyEncoding` explicitly.
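
The cursor-based buffer can be sketched as below; a 4-byte big-endian
length prefix stands in for the AWS event-stream prelude, and the
helper names are assumptions:

```typescript
interface BufferState { buffer: Uint8Array; offset: number }

// Compact consumed bytes once per network chunk, not per frame, so
// buffer growth stays bounded regardless of stream length.
const appendChunk = (state: BufferState, chunk: Uint8Array): BufferState => {
  const pending = state.buffer.subarray(state.offset)
  const next = new Uint8Array(pending.length + chunk.length)
  next.set(pending)
  next.set(chunk, pending.length)
  return { buffer: next, offset: 0 }
}

// Zero-copy frame slicing via subarray; only the cursor advances.
const nextFrame = (state: BufferState): Uint8Array | undefined => {
  const available = state.buffer.length - state.offset
  if (available < 4) return undefined
  const view = new DataView(state.buffer.buffer, state.buffer.byteOffset + state.offset)
  const length = view.getUint32(0) // total frame length, prefix included
  if (available < length) return undefined
  const frame = state.buffer.subarray(state.offset + 4, state.offset + length)
  state.offset += length
  return frame
}
```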

Verified: `bun typecheck` clean, 106 pass / 0 fail / 0 skip
(unchanged from before the refactor).
2026-05-01 08:12:34 -04:00
Kit Langton
0da7d8a2a1 feat(opencode): add native LLM request builder 2026-05-01 08:12:33 -04:00
Kit Langton
769d6123d5 feat(llm): add Bedrock Converse adapter
Implements the AWS Bedrock Converse streaming protocol as the 5th
first-class adapter in @opencode-ai/llm. Single `bedrock-converse`
adapter covers all underlying models (Anthropic, Llama, Mistral,
Cohere, Nova, Titan) since Converse is uniform.

Wire format: messages with text / reasoning / toolUse / toolResult
content blocks, system blocks, inferenceConfig, toolConfig with
toolSpec + toolChoice. Image / document / cache-point content types
are still TODO.

Streaming: AWS event stream binary framing via @smithy/eventstream-codec.
Each frame is decoded then dispatched by `:event-type` header into
the chunk schema. Bedrock splits the finish across `messageStop`
(reason) and `metadata` (usage) — the parser stashes the reason and
emits a single consolidated `request-finish` event when metadata
arrives, with an `onHalt` fallback for truncated streams.

Auth: two paths. Bearer API key (newer) when the consumer sets
`model.headers.authorization = 'Bearer <key>'`. SigV4 signing via
aws4fetch otherwise — credentials live on `model.native.aws_credentials`
and are signed at `toHttp` time so STS-vended tokens are picked up
when the consumer rebuilds the model. The adapter rejects requests
with neither auth path with a clear InvalidRequestError.

Routing: `@ai-sdk/amazon-bedrock` lowers to `bedrock-converse` via
the new `AmazonBedrock` provider routing module; the OpenCode
`llm-bridge.ts` registers it.

Cassette format: response bodies under
`application/vnd.amazon.eventstream` and `application/octet-stream`
content types are now stored as base64 with `bodyEncoding: 'base64'`
on the response snapshot — text round-tripping mangled the CRC32
fields in event-stream frames. Existing cassettes (SSE/JSON) omit
the field and decode as text unchanged.

Tests: 11 deterministic fixtures (prepare / lower messages / lower
tool config / decode text+usage / decode tool calls / decode
reasoning / decode throttling exception / auth path validation /
SigV4 plumbing) + 2 recorded cassettes against live Bedrock
(`us.amazon.nova-micro-v1:0` in us-east-1) for streaming text and
streaming tool calls.

AGENTS.md: documents the Bedrock auth model, binary cassette format,
and updates the protocol coverage / cassette backlog.

Deps: @smithy/eventstream-codec, @smithy/util-utf8, aws4fetch (~40KB
combined; matches AI SDK's approach).
2026-05-01 08:12:33 -04:00
Kit Langton
6c887b0faa refactor(llm): brand provider and model identifiers 2026-05-01 08:12:33 -04:00
Kit Langton
4e3f678b24 feat(llm): add provider-routed adapter composition 2026-05-01 08:12:33 -04:00
Kit Langton
e1c6bf92fb feat(llm): provider-executed tool pass-through
Add a `providerExecuted: boolean` flag to `tool-call` and `tool-result`
events plus the persisted `ToolResultPart`. When set, the tool runtime
skips client dispatch (the provider already executed the tool) and folds
both events into the assistant message so the next round's history
carries the call + result for context.

Anthropic: decode `server_tool_use` blocks and the three server tool
result block types (`web_search_tool_result`, `code_execution_tool_result`,
`web_fetch_tool_result`) into `tool-call` / `tool-result` events with
`providerExecuted: true`. Round-trip the same parts back into the
provider when the assistant message is replayed in subsequent requests.
Result block error payloads (`*_tool_result_error`) surface as
`result.type === "error"`.

OpenAI Responses: decode hosted tool items emitted via
`response.output_item.done` (`web_search_call`, `file_search_call`,
`code_interpreter_call`, `computer_use_call`, `image_generation_call`,
`mcp_call`, `local_shell_call`) as `tool-call` + `tool-result` pairs
with `providerExecuted: true`. Each tool's input fields are pulled out
explicitly; the full item is passed through as the result payload so
consumers can read outputs / sources / status without re-decoding.

Tool runtime: extend the dispatch decision so provider-executed
tool-calls bypass the handler lookup, and tool-result events with
`providerExecuted: true` are appended to the assistant content for
round-trip rather than being treated as a separate tool message.

Tests: 7 new deterministic fixtures cover Anthropic decode (success +
error result + round-trip + unknown server tool name), OpenAI Responses
decode (web_search_call, code_interpreter_call), and tool-runtime
skip-dispatch.

AGENTS.md updates the runtime section to describe pass-through behavior
and notes the transport-agnostic design that keeps a future WebSocket
adapter (e.g. OpenAI Codex backend) as a sibling rather than a core
rewrite.
2026-05-01 08:11:29 -04:00
Kit Langton
b5ca62d1ea test(llm): record OpenAI Chat tool-loop cassette
Captures both model rounds of the typed ToolRuntime tool loop into a
single multi-interaction cassette: round 1 carries the user prompt and
returns a get_weather tool call; round 2 carries the assistant tool call
plus tool result and returns a final answer.

Verifies the multi-interaction cassette infrastructure end-to-end against
a real provider.
2026-05-01 08:11:29 -04:00
Kit Langton
ca8d700a14 feat(llm): support multi-interaction cassettes with sequential matcher
The cassette layer already stored interactions in an array, but replay
always used find-first structural matching, and cassettes were written
as one minified JSON line. That made tool-loop and retry recordings
unworkable: identical requests collapsed to one response, and large
recordings were unreadable in review.

- Add `sequentialMatcher` for position-based dispatch so identical
  retries map to recorded responses in order via an internal cursor.
- Pretty-print cassette JSON on write and reformat existing fixtures so
  multi-interaction diffs stay reviewable.
- Add deterministic `record-replay.test.ts` covering default vs
  sequential dispatch and cursor exhaustion.
- Add an OpenAI Chat tool-loop recorded test scaffold gated behind
  `OPENAI_API_KEY` so a single `RECORD=true` run captures every
  model round of the loop into one cassette file.
- Update AGENTS.md to document multi-interaction cassettes and the
  matcher options, and mark the cassette ergonomics TODO complete.
2026-05-01 08:11:29 -04:00
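The difference between find-first and position-based dispatch can be sketched as below. This is an illustrative sketch only; the real matcher API in the cassette layer differs, and the string-keyed `Interaction` shape is a stand-in.

```typescript
interface Interaction { request: string; response: string }

// Find-first structural matching: identical requests collapse to the
// first recorded response.
function structuralMatcher(interactions: Interaction[]) {
  return (request: string) => interactions.find((i) => i.request === request)?.response
}

// Position-based dispatch: an internal cursor maps identical requests to
// recorded responses in order, and exhaustion is an explicit failure.
function sequentialMatcher(interactions: Interaction[]) {
  let cursor = 0
  return (request: string) => {
    if (cursor >= interactions.length) throw new Error("cassette exhausted")
    const next = interactions[cursor++]
    if (next.request !== request) throw new Error("out-of-order request")
    return next.response
  }
}
```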
Kit Langton
ca198f739e refactor(llm): cache tool codecs and tighten ToolRuntime types
Simplification pass after the initial typed ToolRuntime drop. Findings from a
parallel review (code reuse + quality + perf):

src/tool.ts
- Tool now carries memoized decode/encode codecs and a precomputed
  ToolDefinition, derived once at tool() construction time. The runtime no
  longer rebuilds Schema closures or JSON Schema docs per call/per run.
- Constrains parameters/success to Schema.Codec<T, any, never, never> so
  the codecs have no service requirements. Drops the 'as unknown as' casts
  the runtime needed previously.
- Fixes a latent bug: schemas with $ref now correctly emit $defs on
  ToolDefinition.inputSchema (toJsonSchemaDocument's definitions were
  silently dropped before).

src/tool-runtime.ts
- Uses LLMRequest constructor instead of 'as LLMRequest' casts.
- Default tool dispatch concurrency is 10 (was 'unbounded'); exposed via
  RunOptions.concurrency. Unbounded is still available for handlers that
  do not share a saturable resource.
- Drops dead 'usage' state, the single-use Dispatched interface, and the
  DEFAULT_MAX_STEPS constant per the inline-when-used style rule.
- accumulate() now factors text-delta and reasoning-delta into one helper.

test/lib/openai-chunks.ts (new)
- Shared deltaChunk / usageChunk / toolCallChunk / finishChunk helpers.

test/lib/http.ts
- scriptedResponses moved here from tool-runtime.test.ts so future
  multi-step adapter tests can reuse it. Also picks up parallel work that
  swapped HandlerInput to a 'respond' callback for cleaner Response
  construction.

test/tool-runtime.test.ts
- Uses LLMEvent.guards for typed event filtering instead of cast-and-check.
- Concurrent test now uses sseEvents + deltaChunk instead of a hand-rolled
  body string.

Includes parallel callsite updates in test/adapter.test.ts and
test/provider/openai-compatible-chat.test.ts that adopt the 'respond' API
in lib/http.ts.
2026-05-01 08:11:29 -04:00
Kit Langton
6a7735e14c test(llm): cover OpenAI-compatible Chat parity 2026-05-01 08:11:29 -04:00
Kit Langton
3a2cb7f8ac feat(llm): add typed ToolRuntime
Schema-first, Effect-first tool loop:

- 'tool({ description, parameters, success, execute })' constructs a fully
  typed Tool. parameters and success are Effect Schemas; execute is typed
  against them and returns Effect<Success, ToolFailure>. Handler dependencies
  are closed over at construction time so the runtime never sees per-tool
  services.
- 'ToolRuntime.run(client, { request, tools, maxSteps?, stopWhen? })' streams
  the model, decodes tool-call inputs against parameters, dispatches to the
  matching handler, encodes results against success, emits tool-result events,
  appends assistant + tool messages, and re-streams. Stops on a non-tool-call
  finish, maxSteps, or stopWhen.
- Three recoverable error paths emit tool-error events so the model can
  self-correct: unknown tool name, input fails parameters Schema, handler
  returns ToolFailure. Defects fail the stream.
- 'ToolFailure' added to the schema and exported as the single forced error
  channel for handlers.
- Tool definitions on the LLMRequest are derived via toJsonSchemaDocument so
  consumers don't write JSON Schema by hand.

8 deterministic fixture tests cover the loop, errors, maxSteps, stopWhen, and
parallel tool calls in one step.
2026-05-01 08:11:29 -04:00
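The loop shape described above can be sketched Effect-free. The real runtime streams LLMEvents and decodes inputs/outputs against Effect Schemas, neither of which is modeled here; every name below is hypothetical.

```typescript
type ToolCall = { name: string; input: unknown }
type StepResult = { toolCalls: ToolCall[]; finished: boolean }
type Handler = (input: unknown) => unknown

function runToolLoop(
  step: (messages: unknown[]) => StepResult,
  tools: Record<string, Handler>,
  maxSteps = 8,
): unknown[] {
  const messages: unknown[] = []
  for (let i = 0; i < maxSteps; i++) {
    const result = step(messages)
    // Stop on a finish without tool calls.
    if (result.finished || result.toolCalls.length === 0) return messages
    for (const call of result.toolCalls) {
      const handler = tools[call.name]
      // Unknown tool name is a recoverable error: emit it so the model can
      // self-correct on the next round instead of failing the stream.
      if (!handler) {
        messages.push({ role: "tool", error: `unknown tool: ${call.name}` })
        continue
      }
      messages.push({ role: "tool", result: handler(call.input) })
    }
  }
  return messages // maxSteps reached
}
```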
Kit Langton
b4a7cf638f feat(llm): add OpenAI-compatible provider helpers 2026-05-01 08:11:29 -04:00
Kit Langton
0cc992fc7c feat(llm): add OpenAI-compatible Chat adapter 2026-05-01 08:11:29 -04:00
Kit Langton
ca29f8a6ef test(llm): cover provider-error events and HTTP sad paths
Locks down the error contract before OpenCode integration:
- mid-stream provider errors (Anthropic 'event: error', OpenAI Responses
  'type: error') surface as 'provider-error' LLMEvents
- HTTP 4xx responses fail with ProviderRequestError before stream parsing
  begins (the executor contract)

Anthropic already had both. Adds:
- OpenAI Responses: provider-error fixture, code-fallback fixture, HTTP 400
- OpenAI Chat: HTTP 400 sad path
- AGENTS.md TODO refreshed; live recordings of provider errors still pending
2026-05-01 08:11:29 -04:00
Kit Langton
afe3990f27 refactor(llm): convert lowerToolChoice helpers to yieldable form
Per the package style guide, sync if/return functions that need to fail
should yield the error directly via Effect.gen rather than ladder
Effect.fail / Effect.succeed across every branch.

Touches all four adapters' tool-choice lowering. The naming-required
validation now reads as 'guard, then return' rather than embedded in a
chain of monadic returns. Behavior unchanged.
2026-05-01 08:11:29 -04:00
Kit Langton
8a4699e8e7 refactor(llm): drop vestigial Chunk type and raise step
Every adapter's parse already produces LLMEvents (via the process callback in
the shared sse helper), and every raise was Stream.make(event). The Chunk type
parameter, the raise field, the RaiseState interface, and the Stream.flatMap
raise step in client.stream were all pure overhead.

- Adapter contract shrinks from <Draft, Target, Chunk> to <Draft, Target>.
- All four adapters drop their raise: (event) => Stream.make(event) line.
- client.stream skips the no-op flatMap.
- AGENTS.md adapter section reflects the simpler contract.
2026-05-01 08:11:29 -04:00
Kit Langton
74b2e5781c refactor(llm): remove unused SSE invalid chunk option 2026-05-01 08:11:28 -04:00
Kit Langton
6573673875 docs(llm): mark Responses/Anthropic/Gemini done and outline OpenCode integration
Updates the AGENTS.md TODO list:
- mark Responses, Anthropic, and Gemini adapter coverage as done
- mark the Gemini schema sanitizer port as done
- add concrete next-step items for OpenCode integration: ModelRef bridge,
  request bridge, provider-quirk patches, request/stream parity tests, and
  a flagged rollout against existing session/llm.test.ts cases
- add OpenAI-compatible Chat, Bedrock Converse, and Vertex routing as
  outstanding adapter/dispatch decisions
2026-05-01 08:11:28 -04:00
Kit Langton
3561938e41 feat(llm): port Gemini tool-schema sanitizer as a patch
Gemini rejects integer enums, dangling required fields, untyped arrays, and
object keywords on scalar schemas. The sanitizer was previously a divergent
copy in OpenCode; this lands it in the package as a tool-schema patch with
deterministic tests and selects it for Gemini-protocol or Gemini-named models.

Also tightens the Gemini test suite: covers tool-choice none, drops the
tool-input-delta assertion that Gemini does not actually emit, and confirms
total usage stays undefined when only thoughtsTokenCount arrives.
2026-05-01 08:11:28 -04:00
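A narrow sketch of the kinds of rewrites such a sanitizer performs, covering only the dangling-required, integer-enum, and untyped-array cases named above. The concrete rewrites here (dropping the enum, defaulting items to a string type) are hypothetical choices for illustration; the actual rule set in the package is broader and may differ.

```typescript
type JsonSchema = Record<string, unknown>

function sanitizeForGemini(schema: JsonSchema): JsonSchema {
  const out: JsonSchema = { ...schema }
  // Integer enums are rejected: re-type them as plain integers (hypothetical rewrite).
  if (out.type === "integer" && Array.isArray(out.enum)) delete out.enum
  // Drop required entries that name properties the schema does not define.
  if (Array.isArray(out.required)) {
    const props = (out.properties as JsonSchema | undefined) ?? {}
    out.required = (out.required as string[]).filter((k) => k in props)
    if ((out.required as string[]).length === 0) delete out.required
  }
  // Untyped arrays need an items schema (string items is a hypothetical default).
  if (out.type === "array" && out.items === undefined) out.items = { type: "string" }
  return out
}
```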
Kit Langton
e476b63a28 refactor(llm): yieldable parser errors and linear runFold
- shared sse helper now expects Effectful decodeChunk and process callbacks,
  so adapter parsers can be Effect.gen and yield typed ProviderChunkError
  instead of throwing across the sync mapAccum boundary.
- parseJson returns Effect<unknown, ProviderChunkError> via Effect.try,
  matching the package style guide on yieldable errors.
- OpenAI Chat finalizes accumulated tool inputs eagerly when finish_reason
  arrives, surfacing JSON parse failures at the boundary instead of at halt.
  onHalt stays sync and just emits from state.
- generate's runFold reducer now mutates the accumulator instead of
  reallocating the events array on every chunk, dropping O(n^2) growth on
  long streams.
2026-05-01 08:11:28 -04:00
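The O(n^2) claim above comes from copying the accumulator on every chunk. A minimal sketch of the two reducer styles, with hypothetical names:

```typescript
// Copying reducer: allocates a fresh array per chunk, O(n^2) total work
// across a stream of n chunks.
function foldCopying(events: ReadonlyArray<string>, chunk: string): ReadonlyArray<string> {
  return [...events, chunk]
}

// Mutating reducer: appends in place, linear total work. Safe when the
// accumulator is owned by the fold and never shared mid-stream.
function foldMutating(events: string[], chunk: string): string[] {
  events.push(chunk)
  return events
}
```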
Kit Langton
850eeae24c test(llm): cover Gemini stream edge cases 2026-05-01 08:11:28 -04:00
Kit Langton
8d97b38983 feat(llm): add Gemini adapter 2026-05-01 08:11:28 -04:00
Kit Langton
9a05675200 refactor(llm): share provider stream parsing 2026-05-01 08:11:28 -04:00
Kit Langton
0f4e54d6e8 feat(llm): add Anthropic Messages adapter 2026-05-01 08:11:28 -04:00
Kit Langton
aec6c5983d feat(llm): add OpenAI Responses adapter 2026-05-01 08:11:28 -04:00
Kit Langton
18d618d051 test(llm): harden cassette matching and add streaming edge-case coverage
- Structurally match recorded requests by canonical JSON so non-deterministic
  field ordering doesn't break replay.
- Pluggable header allow-list and body redaction hook on the record/replay
  layer, so adapters with non-default auth (Anthropic, Bedrock) can plug in
  without touching this file.
- Move the cassette-name dedupe set inside recordedTests() so two describe
  files using different prefixes can run in parallel.
- Replace inline SSE template literals and per-file HTTP layers with shared
  test/lib helpers (sseEvents, fixedResponse, dynamicResponse, truncatedStream).
- Tighten recorded-test assertions to exact text and usage so adapter parser
  regressions surface immediately instead of passing fuzzy length>0 checks.
- Add cancellation and mid-stream transport-error tests for the OpenAI Chat
  adapter.
- Add cross-phase patch tests that verify each phase sees an updated
  PatchContext and that same-order patches sort deterministically by id.
2026-05-01 08:11:28 -04:00
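Structural matching by canonical JSON can be sketched with a recursive key-sorting serializer, assuming plain JSON values. This is an illustrative sketch; the names are hypothetical and the layer's real canonicalization may handle more cases.

```typescript
// Re-serialize with sorted object keys so two structurally equal requests
// produce byte-identical strings regardless of field ordering.
function canonicalJson(value: unknown): string {
  if (Array.isArray(value)) return `[${value.map(canonicalJson).join(",")}]`
  if (value !== null && typeof value === "object") {
    const obj = value as Record<string, unknown>
    const entries = Object.keys(obj)
      .sort()
      .map((k) => `${JSON.stringify(k)}:${canonicalJson(obj[k])}`)
    return `{${entries.join(",")}}`
  }
  return JSON.stringify(value)
}

function requestsMatch(a: unknown, b: unknown): boolean {
  return canonicalJson(a) === canonicalJson(b)
}
```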
Kit Langton
412a1bec44 test(llm): clean Effect test utilities 2026-05-01 08:11:28 -04:00
Kit Langton
04468304e7 refactor(llm): simplify adapter execution API 2026-05-01 08:11:28 -04:00
Kit Langton
ca9e0cfa3c test(llm): record OpenAI tool result flow 2026-05-01 08:11:28 -04:00
Kit Langton
f02652353e test(llm): add provider patch coverage 2026-05-01 08:11:28 -04:00
Kit Langton
1e0f6ee242 feat(llm): add adapter registry ergonomics 2026-05-01 08:11:28 -04:00
Kit Langton
36ab9fa584 docs(llm): add package todo list 2026-05-01 08:11:27 -04:00
Kit Langton
d96bf0d566 feat(llm): add OpenAI Chat adapter 2026-05-01 08:11:27 -04:00
Kit Langton
79683710c0 feat(llm): move core to package 2026-05-01 08:11:27 -04:00
Kit Langton
edd176c490 feat(llm): add initial patch API 2026-05-01 08:11:27 -04:00
729 changed files with 42712 additions and 18680 deletions


@@ -209,6 +209,182 @@ jobs:
packages/opencode/dist/opencode-windows-x64
packages/opencode/dist/opencode-windows-x64-baseline
build-tauri:
needs:
- build-cli
- version
continue-on-error: false
env:
AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
AZURE_SUBSCRIPTION_ID: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
AZURE_TRUSTED_SIGNING_ACCOUNT_NAME: ${{ secrets.AZURE_TRUSTED_SIGNING_ACCOUNT_NAME }}
AZURE_TRUSTED_SIGNING_CERTIFICATE_PROFILE: ${{ secrets.AZURE_TRUSTED_SIGNING_CERTIFICATE_PROFILE }}
AZURE_TRUSTED_SIGNING_ENDPOINT: ${{ secrets.AZURE_TRUSTED_SIGNING_ENDPOINT }}
strategy:
fail-fast: false
matrix:
settings:
- host: macos-latest
target: x86_64-apple-darwin
- host: macos-latest
target: aarch64-apple-darwin
# github-hosted: blacksmith lacks ARM64 MSVC cross-compilation toolchain
- host: windows-2025
target: aarch64-pc-windows-msvc
- host: blacksmith-4vcpu-windows-2025
target: x86_64-pc-windows-msvc
- host: blacksmith-4vcpu-ubuntu-2404
target: x86_64-unknown-linux-gnu
- host: blacksmith-8vcpu-ubuntu-2404-arm
target: aarch64-unknown-linux-gnu
runs-on: ${{ matrix.settings.host }}
steps:
- uses: actions/checkout@v3
with:
fetch-tags: true
- uses: apple-actions/import-codesign-certs@v2
if: ${{ runner.os == 'macOS' }}
with:
keychain: build
p12-file-base64: ${{ secrets.APPLE_CERTIFICATE }}
p12-password: ${{ secrets.APPLE_CERTIFICATE_PASSWORD }}
- name: Verify Certificate
if: ${{ runner.os == 'macOS' }}
run: |
CERT_INFO=$(security find-identity -v -p codesigning build.keychain | grep "Developer ID Application")
CERT_ID=$(echo "$CERT_INFO" | awk -F'"' '{print $2}')
echo "CERT_ID=$CERT_ID" >> $GITHUB_ENV
echo "Certificate imported."
- name: Setup Apple API Key
if: ${{ runner.os == 'macOS' }}
run: |
echo "${{ secrets.APPLE_API_KEY_PATH }}" > $RUNNER_TEMP/apple-api-key.p8
- uses: ./.github/actions/setup-bun
- name: Azure login
if: runner.os == 'Windows'
uses: azure/login@v2
with:
client-id: ${{ env.AZURE_CLIENT_ID }}
tenant-id: ${{ env.AZURE_TENANT_ID }}
subscription-id: ${{ env.AZURE_SUBSCRIPTION_ID }}
- uses: actions/setup-node@v4
with:
node-version: "24"
- name: Cache apt packages
if: contains(matrix.settings.host, 'ubuntu')
uses: actions/cache@v4
with:
path: ~/apt-cache
key: ${{ runner.os }}-${{ matrix.settings.target }}-apt-${{ hashFiles('.github/workflows/publish.yml') }}
restore-keys: |
${{ runner.os }}-${{ matrix.settings.target }}-apt-
- name: install dependencies (ubuntu only)
if: contains(matrix.settings.host, 'ubuntu')
run: |
mkdir -p ~/apt-cache && chmod -R a+rw ~/apt-cache
sudo apt-get update
sudo apt-get install -y --no-install-recommends -o dir::cache::archives="$HOME/apt-cache" libwebkit2gtk-4.1-dev libappindicator3-dev librsvg2-dev patchelf
sudo chmod -R a+rw ~/apt-cache
- name: install Rust stable
uses: dtolnay/rust-toolchain@stable
with:
targets: ${{ matrix.settings.target }}
- uses: Swatinem/rust-cache@v2
with:
workspaces: packages/desktop/src-tauri
shared-key: ${{ matrix.settings.target }}
- name: Prepare
run: |
cd packages/desktop
bun ./scripts/prepare.ts
env:
OPENCODE_VERSION: ${{ needs.version.outputs.version }}
GITHUB_TOKEN: ${{ steps.committer.outputs.token }}
OPENCODE_CLI_ARTIFACT: ${{ (runner.os == 'Windows' && 'opencode-cli-windows') || 'opencode-cli' }}
RUST_TARGET: ${{ matrix.settings.target }}
GH_TOKEN: ${{ github.token }}
GITHUB_RUN_ID: ${{ github.run_id }}
- name: Resolve tauri portable SHA
if: contains(matrix.settings.host, 'ubuntu')
run: echo "TAURI_PORTABLE_SHA=$(git ls-remote https://github.com/tauri-apps/tauri.git refs/heads/feat/truly-portable-appimage | cut -f1)" >> "$GITHUB_ENV"
# Fixes AppImage build issues, can be removed when https://github.com/tauri-apps/tauri/pull/12491 is released
- name: Install tauri-cli from portable appimage branch
uses: taiki-e/cache-cargo-install-action@v3
if: contains(matrix.settings.host, 'ubuntu')
with:
tool: tauri-cli
git: https://github.com/tauri-apps/tauri
# branch: feat/truly-portable-appimage
rev: ${{ env.TAURI_PORTABLE_SHA }}
- name: Show tauri-cli version
if: contains(matrix.settings.host, 'ubuntu')
run: cargo tauri --version
- name: Setup git committer
id: committer
uses: ./.github/actions/setup-git-committer
with:
opencode-app-id: ${{ vars.OPENCODE_APP_ID }}
opencode-app-secret: ${{ secrets.OPENCODE_APP_SECRET }}
- name: Build and upload artifacts
uses: tauri-apps/tauri-action@390cbe447412ced1303d35abe75287949e43437a
timeout-minutes: 60
with:
projectPath: packages/desktop
uploadWorkflowArtifacts: true
tauriScript: ${{ (contains(matrix.settings.host, 'ubuntu') && 'cargo tauri') || '' }}
args: --target ${{ matrix.settings.target }} --config ${{ (github.ref_name == 'beta' && './src-tauri/tauri.beta.conf.json') || './src-tauri/tauri.prod.conf.json' }} --verbose
updaterJsonPreferNsis: true
releaseId: ${{ needs.version.outputs.release }}
tagName: ${{ needs.version.outputs.tag }}
releaseDraft: true
releaseAssetNamePattern: opencode-desktop-[platform]-[arch][ext]
repo: ${{ (github.ref_name == 'beta' && 'opencode-beta') || '' }}
releaseCommitish: ${{ github.sha }}
env:
GITHUB_TOKEN: ${{ steps.committer.outputs.token }}
TAURI_BUNDLER_NEW_APPIMAGE_FORMAT: true
TAURI_SIGNING_PRIVATE_KEY: ${{ secrets.TAURI_SIGNING_PRIVATE_KEY }}
TAURI_SIGNING_PRIVATE_KEY_PASSWORD: ${{ secrets.TAURI_SIGNING_PRIVATE_KEY_PASSWORD }}
APPLE_CERTIFICATE: ${{ secrets.APPLE_CERTIFICATE }}
APPLE_CERTIFICATE_PASSWORD: ${{ secrets.APPLE_CERTIFICATE_PASSWORD }}
APPLE_SIGNING_IDENTITY: ${{ env.CERT_ID }}
APPLE_API_ISSUER: ${{ secrets.APPLE_API_ISSUER }}
APPLE_API_KEY: ${{ secrets.APPLE_API_KEY }}
APPLE_API_KEY_PATH: ${{ runner.temp }}/apple-api-key.p8
- name: Verify signed Windows desktop artifacts
if: runner.os == 'Windows'
shell: pwsh
run: |
$files = @(
"${{ github.workspace }}\packages\desktop\src-tauri\sidecars\opencode-cli-${{ matrix.settings.target }}.exe"
)
$files += Get-ChildItem "${{ github.workspace }}\packages\desktop\src-tauri\target\${{ matrix.settings.target }}\release\bundle\nsis\*.exe" | Select-Object -ExpandProperty FullName
foreach ($file in $files) {
$sig = Get-AuthenticodeSignature $file
if ($sig.Status -ne "Valid") {
throw "Invalid signature for ${file}: $($sig.Status)"
}
}
build-electron:
needs:
- build-cli
@@ -304,7 +480,7 @@ jobs:
- name: Prepare
run: bun ./scripts/prepare.ts
working-directory: packages/desktop
working-directory: packages/desktop-electron
env:
OPENCODE_VERSION: ${{ needs.version.outputs.version }}
OPENCODE_CHANNEL: ${{ (github.ref_name == 'beta' && 'beta') || 'prod' }}
@@ -315,7 +491,7 @@ jobs:
- name: Build
run: bun run build
working-directory: packages/desktop
working-directory: packages/desktop-electron
env:
OPENCODE_CHANNEL: ${{ (github.ref_name == 'beta' && 'beta') || 'prod' }}
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
@@ -329,7 +505,7 @@ jobs:
- name: Package and publish
if: needs.version.outputs.release
run: npx electron-builder ${{ matrix.settings.platform_flag }} --publish always --config electron-builder.config.ts
working-directory: packages/desktop
working-directory: packages/desktop-electron
timeout-minutes: 60
env:
OPENCODE_CHANNEL: ${{ (github.ref_name == 'beta' && 'beta') || 'prod' }}
@@ -343,43 +519,19 @@ jobs:
- name: Package (no publish)
if: ${{ !needs.version.outputs.release }}
run: npx electron-builder ${{ matrix.settings.platform_flag }} --publish never --config electron-builder.config.ts
working-directory: packages/desktop
working-directory: packages/desktop-electron
timeout-minutes: 60
env:
OPENCODE_CHANNEL: ${{ (github.ref_name == 'beta' && 'beta') || 'prod' }}
- name: Create and upload macOS .app.tar.gz
if: runner.os == 'macOS' && needs.version.outputs.release
working-directory: packages/desktop/dist
env:
GH_TOKEN: ${{ steps.committer.outputs.token }}
run: |
if [[ "${{ matrix.settings.target }}" == "x86_64-apple-darwin" ]]; then
APP_DIR="mac"
OUT_NAME="opencode-desktop-mac-x64.app.tar.gz"
elif [[ "${{ matrix.settings.target }}" == "aarch64-apple-darwin" ]]; then
APP_DIR="mac-arm64"
OUT_NAME="opencode-desktop-mac-arm64.app.tar.gz"
else
echo "Unknown macOS target: ${{ matrix.settings.target }}"
exit 1
fi
APP_PATH=$(find "$APP_DIR" -maxdepth 1 -name "*.app" -type d | head -1)
if [ -z "$APP_PATH" ]; then
echo "No .app bundle found in $APP_DIR"
exit 1
fi
tar -czf "$OUT_NAME" -C "$(dirname "$APP_PATH")" "$(basename "$APP_PATH")"
gh release upload "v${{ needs.version.outputs.version }}" "$OUT_NAME" --clobber --repo "${{ needs.version.outputs.repo }}"
- name: Verify signed Windows Electron artifacts
if: runner.os == 'Windows'
shell: pwsh
run: |
$files = @()
$files += Get-ChildItem "${{ github.workspace }}\packages\desktop\dist\*.exe" | Select-Object -ExpandProperty FullName
$files += Get-ChildItem "${{ github.workspace }}\packages\desktop\dist\*unpacked\*.exe" | Select-Object -ExpandProperty FullName
$files += Get-ChildItem "${{ github.workspace }}\packages\desktop\dist\*unpacked\resources\opencode-cli.exe" -ErrorAction SilentlyContinue | Select-Object -ExpandProperty FullName
$files += Get-ChildItem "${{ github.workspace }}\packages\desktop-electron\dist\*.exe" | Select-Object -ExpandProperty FullName
$files += Get-ChildItem "${{ github.workspace }}\packages\desktop-electron\dist\*unpacked\*.exe" | Select-Object -ExpandProperty FullName
$files += Get-ChildItem "${{ github.workspace }}\packages\desktop-electron\dist\*unpacked\resources\opencode-cli.exe" -ErrorAction SilentlyContinue | Select-Object -ExpandProperty FullName
foreach ($file in $files | Select-Object -Unique) {
$sig = Get-AuthenticodeSignature $file
@@ -390,20 +542,21 @@ jobs:
- uses: actions/upload-artifact@v4
with:
name: opencode-desktop-${{ matrix.settings.target }}
path: packages/desktop/dist/*
name: opencode-electron-${{ matrix.settings.target }}
path: packages/desktop-electron/dist/*
- uses: actions/upload-artifact@v4
if: needs.version.outputs.release
with:
name: latest-yml-${{ matrix.settings.target }}
path: packages/desktop/dist/latest*.yml
path: packages/desktop-electron/dist/latest*.yml
publish:
needs:
- version
- build-cli
- sign-cli-windows
- build-tauri
- build-electron
if: always() && !failure() && !cancelled()
runs-on: blacksmith-4vcpu-ubuntu-2404
@@ -430,6 +583,13 @@ jobs:
node-version: "24"
registry-url: "https://registry.npmjs.org"
- name: Setup git committer
id: committer
uses: ./.github/actions/setup-git-committer
with:
opencode-app-id: ${{ vars.OPENCODE_APP_ID }}
opencode-app-secret: ${{ secrets.OPENCODE_APP_SECRET }}
- uses: actions/download-artifact@v4
with:
name: opencode-cli
@@ -451,13 +611,6 @@ jobs:
pattern: latest-yml-*
path: /tmp/latest-yml
- name: Setup git committer
id: committer
uses: ./.github/actions/setup-git-committer
with:
opencode-app-id: ${{ vars.OPENCODE_APP_ID }}
opencode-app-secret: ${{ secrets.OPENCODE_APP_SECRET }}
- name: Cache apt packages (AUR)
uses: actions/cache@v4
with:
@@ -486,5 +639,3 @@ jobs:
GH_REPO: ${{ needs.version.outputs.repo }}
NPM_CONFIG_PROVENANCE: false
LATEST_YML_DIR: /tmp/latest-yml
TAURI_SIGNING_PRIVATE_KEY: ${{ secrets.TAURI_SIGNING_PRIVATE_KEY }}
TAURI_SIGNING_PRIVATE_KEY_PASSWORD: ${{ secrets.TAURI_SIGNING_PRIVATE_KEY_PASSWORD }}

.gitignore vendored

@@ -3,6 +3,7 @@ node_modules
.worktrees
.sst
.env
.env.local
.idea
.vscode
.codex

.gitleaksignore Normal file

@@ -0,0 +1,5 @@
# Fake secret-looking strings used by HTTP recorder redaction tests.
afa57acfda894e0ebf3c637dd710310b705c0a2f:packages/http-recorder/test/record-replay.test.ts:generic-api-key:69
afa57acfda894e0ebf3c637dd710310b705c0a2f:packages/http-recorder/test/record-replay.test.ts:generic-api-key:92
afa57acfda894e0ebf3c637dd710310b705c0a2f:packages/http-recorder/test/record-replay.test.ts:generic-api-key:146
afa57acfda894e0ebf3c637dd710310b705c0a2f:packages/http-recorder/test/record-replay.test.ts:gcp-api-key:71


@@ -18,12 +18,9 @@ Do not use `git log` or author metadata when deciding attribution.
Rules:
- Write the final file with release sections in this order:
- Write the final file with sections in this order:
`## Core`, `## TUI`, `## Desktop`, `## SDK`, `## Extensions`
- Only include sections that have at least one notable entry
- Within each release section, keep bug fixes grouped under `### Bugfixes`
- Keep other notable entries under `### Improvements` when a section has bug fixes too
- Omit empty subsections
- Keep one bullet per commit you keep
- Skip commits that are entirely internal, CI, tests, refactors, or otherwise not user-facing
- Start each bullet with a capital letter

bun.lock

@@ -29,7 +29,7 @@
},
"packages/app": {
"name": "@opencode-ai/app",
"version": "1.14.38",
"version": "1.14.33",
"dependencies": {
"@kobalte/core": "catalog:",
"@opencode-ai/core": "workspace:*",
@@ -85,7 +85,7 @@
},
"packages/console/app": {
"name": "@opencode-ai/console-app",
"version": "1.14.38",
"version": "1.14.33",
"dependencies": {
"@cloudflare/vite-plugin": "1.15.2",
"@ibm/plex": "6.4.1",
@@ -119,7 +119,7 @@
},
"packages/console/core": {
"name": "@opencode-ai/console-core",
"version": "1.14.38",
"version": "1.14.33",
"dependencies": {
"@aws-sdk/client-sts": "3.782.0",
"@jsx-email/render": "1.1.1",
@@ -146,7 +146,7 @@
},
"packages/console/function": {
"name": "@opencode-ai/console-function",
"version": "1.14.38",
"version": "1.14.33",
"dependencies": {
"@ai-sdk/anthropic": "3.0.64",
"@ai-sdk/openai": "3.0.48",
@@ -170,7 +170,7 @@
},
"packages/console/mail": {
"name": "@opencode-ai/console-mail",
"version": "1.14.38",
"version": "1.14.33",
"dependencies": {
"@jsx-email/all": "2.2.3",
"@jsx-email/cli": "1.4.3",
@@ -194,7 +194,7 @@
},
"packages/core": {
"name": "@opencode-ai/core",
"version": "1.14.38",
"version": "1.14.33",
"bin": {
"opencode": "./bin/opencode",
},
@@ -228,7 +228,42 @@
},
"packages/desktop": {
"name": "@opencode-ai/desktop",
"version": "1.14.38",
"version": "1.14.33",
"dependencies": {
"@opencode-ai/app": "workspace:*",
"@opencode-ai/ui": "workspace:*",
"@sentry/solid": "catalog:",
"@solid-primitives/i18n": "2.2.1",
"@solid-primitives/storage": "catalog:",
"@solidjs/meta": "catalog:",
"@tauri-apps/api": "^2",
"@tauri-apps/plugin-clipboard-manager": "~2",
"@tauri-apps/plugin-deep-link": "~2",
"@tauri-apps/plugin-dialog": "~2",
"@tauri-apps/plugin-http": "~2",
"@tauri-apps/plugin-notification": "~2",
"@tauri-apps/plugin-opener": "^2",
"@tauri-apps/plugin-os": "~2",
"@tauri-apps/plugin-process": "~2",
"@tauri-apps/plugin-shell": "~2",
"@tauri-apps/plugin-store": "~2",
"@tauri-apps/plugin-updater": "~2",
"@tauri-apps/plugin-window-state": "~2",
"solid-js": "catalog:",
},
"devDependencies": {
"@actions/artifact": "4.0.0",
"@sentry/vite-plugin": "catalog:",
"@tauri-apps/cli": "^2",
"@types/bun": "catalog:",
"@typescript/native-preview": "catalog:",
"typescript": "~5.6.2",
"vite": "catalog:",
},
},
"packages/desktop-electron": {
"name": "@opencode-ai/desktop-electron",
"version": "1.14.33",
"dependencies": {
"drizzle-orm": "catalog:",
"effect": "catalog:",
@@ -274,7 +309,7 @@
},
"packages/enterprise": {
"name": "@opencode-ai/enterprise",
"version": "1.14.38",
"version": "1.14.33",
"dependencies": {
"@opencode-ai/core": "workspace:*",
"@opencode-ai/ui": "workspace:*",
@@ -303,7 +338,7 @@
},
"packages/function": {
"name": "@opencode-ai/function",
"version": "1.14.38",
"version": "1.14.33",
"dependencies": {
"@octokit/auth-app": "8.0.1",
"@octokit/rest": "catalog:",
@@ -317,9 +352,40 @@
"typescript": "catalog:",
},
},
"packages/http-recorder": {
"name": "@opencode-ai/http-recorder",
"version": "0.0.0",
"dependencies": {
"@effect/platform-node": "catalog:",
"effect": "catalog:",
},
"devDependencies": {
"@tsconfig/bun": "catalog:",
"@types/bun": "catalog:",
"@typescript/native-preview": "catalog:",
},
},
"packages/llm": {
"name": "@opencode-ai/llm",
"version": "1.14.25",
"dependencies": {
"@smithy/eventstream-codec": "4.2.14",
"@smithy/util-utf8": "4.2.2",
"aws4fetch": "1.0.20",
"effect": "catalog:",
},
"devDependencies": {
"@clack/prompts": "1.0.0-alpha.1",
"@effect/platform-node": "catalog:",
"@opencode-ai/http-recorder": "workspace:*",
"@tsconfig/bun": "catalog:",
"@types/bun": "catalog:",
"@typescript/native-preview": "catalog:",
},
},
"packages/opencode": {
"name": "opencode",
"version": "1.14.38",
"version": "1.14.33",
"bin": {
"opencode": "./bin/opencode",
},
@@ -361,6 +427,7 @@
"@octokit/graphql": "9.0.2",
"@octokit/rest": "catalog:",
"@openauthjs/openauth": "catalog:",
"@opencode-ai/llm": "workspace:*",
"@opencode-ai/plugin": "workspace:*",
"@opencode-ai/script": "workspace:*",
"@opencode-ai/sdk": "workspace:*",
@@ -461,7 +528,7 @@
},
"packages/plugin": {
"name": "@opencode-ai/plugin",
"version": "1.14.38",
"version": "1.14.33",
"dependencies": {
"@opencode-ai/sdk": "workspace:*",
"effect": "catalog:",
@@ -496,7 +563,7 @@
},
"packages/sdk/js": {
"name": "@opencode-ai/sdk",
"version": "1.14.38",
"version": "1.14.33",
"dependencies": {
"cross-spawn": "catalog:",
},
@@ -511,7 +578,7 @@
},
"packages/slack": {
"name": "@opencode-ai/slack",
"version": "1.14.38",
"version": "1.14.33",
"dependencies": {
"@opencode-ai/sdk": "workspace:*",
"@slack/bolt": "^3.17.1",
@@ -546,7 +613,7 @@
},
"packages/ui": {
"name": "@opencode-ai/ui",
"version": "1.14.38",
"version": "1.14.33",
"dependencies": {
"@kobalte/core": "catalog:",
"@opencode-ai/core": "workspace:*",
@@ -595,7 +662,7 @@
},
"packages/web": {
"name": "@opencode-ai/web",
"version": "1.14.38",
"version": "1.14.33",
"dependencies": {
"@astrojs/cloudflare": "12.6.3",
"@astrojs/markdown-remark": "6.3.1",
@@ -671,7 +738,7 @@
"@types/bun": "1.3.12",
"@types/cross-spawn": "6.0.6",
"@types/luxon": "3.7.1",
"@types/node": "24.12.2",
"@types/node": "22.13.9",
"@types/semver": "7.7.1",
"@typescript/native-preview": "7.0.0-dev.20251207.1",
"ai": "6.0.168",
@@ -1535,10 +1602,16 @@
"@opencode-ai/desktop": ["@opencode-ai/desktop@workspace:packages/desktop"],
"@opencode-ai/desktop-electron": ["@opencode-ai/desktop-electron@workspace:packages/desktop-electron"],
"@opencode-ai/enterprise": ["@opencode-ai/enterprise@workspace:packages/enterprise"],
"@opencode-ai/function": ["@opencode-ai/function@workspace:packages/function"],
"@opencode-ai/http-recorder": ["@opencode-ai/http-recorder@workspace:packages/http-recorder"],
"@opencode-ai/llm": ["@opencode-ai/llm@workspace:packages/llm"],
"@opencode-ai/plugin": ["@opencode-ai/plugin@workspace:packages/plugin"],
"@opencode-ai/script": ["@opencode-ai/script@workspace:packages/script"],
@@ -2233,8 +2306,54 @@
"@tauri-apps/api": ["@tauri-apps/api@2.10.1", "", {}, "sha512-hKL/jWf293UDSUN09rR69hrToyIXBb8CjGaWC7gfinvnQrBVvnLr08FeFi38gxtugAVyVcTa5/FD/Xnkb1siBw=="],
"@tauri-apps/cli": ["@tauri-apps/cli@2.10.1", "", { "optionalDependencies": { "@tauri-apps/cli-darwin-arm64": "2.10.1", "@tauri-apps/cli-darwin-x64": "2.10.1", "@tauri-apps/cli-linux-arm-gnueabihf": "2.10.1", "@tauri-apps/cli-linux-arm64-gnu": "2.10.1", "@tauri-apps/cli-linux-arm64-musl": "2.10.1", "@tauri-apps/cli-linux-riscv64-gnu": "2.10.1", "@tauri-apps/cli-linux-x64-gnu": "2.10.1", "@tauri-apps/cli-linux-x64-musl": "2.10.1", "@tauri-apps/cli-win32-arm64-msvc": "2.10.1", "@tauri-apps/cli-win32-ia32-msvc": "2.10.1", "@tauri-apps/cli-win32-x64-msvc": "2.10.1" }, "bin": { "tauri": "tauri.js" } }, "sha512-jQNGF/5quwORdZSSLtTluyKQ+o6SMa/AUICfhf4egCGFdMHqWssApVgYSbg+jmrZoc8e1DscNvjTnXtlHLS11g=="],
"@tauri-apps/cli-darwin-arm64": ["@tauri-apps/cli-darwin-arm64@2.10.1", "", { "os": "darwin", "cpu": "arm64" }, "sha512-Z2OjCXiZ+fbYZy7PmP3WRnOpM9+Fy+oonKDEmUE6MwN4IGaYqgceTjwHucc/kEEYZos5GICve35f7ZiizgqEnQ=="],
"@tauri-apps/cli-darwin-x64": ["@tauri-apps/cli-darwin-x64@2.10.1", "", { "os": "darwin", "cpu": "x64" }, "sha512-V/irQVvjPMGOTQqNj55PnQPVuH4VJP8vZCN7ajnj+ZS8Kom1tEM2hR3qbbIRoS3dBKs5mbG8yg1WC+97dq17Pw=="],
"@tauri-apps/cli-linux-arm-gnueabihf": ["@tauri-apps/cli-linux-arm-gnueabihf@2.10.1", "", { "os": "linux", "cpu": "arm" }, "sha512-Hyzwsb4VnCWKGfTw+wSt15Z2pLw2f0JdFBfq2vHBOBhvg7oi6uhKiF87hmbXOBXUZaGkyRDkCHsdzJcIfoJC2w=="],
"@tauri-apps/cli-linux-arm64-gnu": ["@tauri-apps/cli-linux-arm64-gnu@2.10.1", "", { "os": "linux", "cpu": "arm64" }, "sha512-OyOYs2t5GkBIvyWjA1+h4CZxTcdz1OZPCWAPz5DYEfB0cnWHERTnQ/SLayQzncrT0kwRoSfSz9KxenkyJoTelA=="],
"@tauri-apps/cli-linux-arm64-musl": ["@tauri-apps/cli-linux-arm64-musl@2.10.1", "", { "os": "linux", "cpu": "arm64" }, "sha512-MIj78PDDGjkg3NqGptDOGgfXks7SYJwhiMh8SBoZS+vfdz7yP5jN18bNaLnDhsVIPARcAhE1TlsZe/8Yxo2zqg=="],
"@tauri-apps/cli-linux-riscv64-gnu": ["@tauri-apps/cli-linux-riscv64-gnu@2.10.1", "", { "os": "linux", "cpu": "none" }, "sha512-X0lvOVUg8PCVaoEtEAnpxmnkwlE1gcMDTqfhbefICKDnOTJ5Est3qL0SrWxizDackIOKBcvtpejrSiVpuJI1kw=="],
"@tauri-apps/cli-linux-x64-gnu": ["@tauri-apps/cli-linux-x64-gnu@2.10.1", "", { "os": "linux", "cpu": "x64" }, "sha512-2/12bEzsJS9fAKybxgicCDFxYD1WEI9kO+tlDwX5znWG2GwMBaiWcmhGlZ8fi+DMe9CXlcVarMTYc0L3REIRxw=="],
"@tauri-apps/cli-linux-x64-musl": ["@tauri-apps/cli-linux-x64-musl@2.10.1", "", { "os": "linux", "cpu": "x64" }, "sha512-Y8J0ZzswPz50UcGOFuXGEMrxbjwKSPgXftx5qnkuMs2rmwQB5ssvLb6tn54wDSYxe7S6vlLob9vt0VKuNOaCIQ=="],
"@tauri-apps/cli-win32-arm64-msvc": ["@tauri-apps/cli-win32-arm64-msvc@2.10.1", "", { "os": "win32", "cpu": "arm64" }, "sha512-iSt5B86jHYAPJa/IlYw++SXtFPGnWtFJriHn7X0NFBVunF6zu9+/zOn8OgqIWSl8RgzhLGXQEEtGBdR4wzpVgg=="],
"@tauri-apps/cli-win32-ia32-msvc": ["@tauri-apps/cli-win32-ia32-msvc@2.10.1", "", { "os": "win32", "cpu": "ia32" }, "sha512-gXyxgEzsFegmnWywYU5pEBURkcFN/Oo45EAwvZrHMh+zUSEAvO5E8TXsgPADYm31d1u7OQU3O3HsYfVBf2moHw=="],
"@tauri-apps/cli-win32-x64-msvc": ["@tauri-apps/cli-win32-x64-msvc@2.10.1", "", { "os": "win32", "cpu": "x64" }, "sha512-6Cn7YpPFwzChy0ERz6djKEmUehWrYlM+xTaNzGPgZocw3BD7OfwfWHKVWxXzdjEW2KfKkHddfdxK1XXTYqBRLg=="],
"@tauri-apps/plugin-clipboard-manager": ["@tauri-apps/plugin-clipboard-manager@2.3.2", "", { "dependencies": { "@tauri-apps/api": "^2.8.0" } }, "sha512-CUlb5Hqi2oZbcZf4VUyUH53XWPPdtpw43EUpCza5HWZJwxEoDowFzNUDt1tRUXA8Uq+XPn17Ysfptip33sG4eQ=="],
"@tauri-apps/plugin-deep-link": ["@tauri-apps/plugin-deep-link@2.4.8", "", { "dependencies": { "@tauri-apps/api": "^2.10.1" } }, "sha512-Cd2Cs960MGuGONeIwxOPx9wqwedetAHOGlwK5boJ/SMTfAtAyfErpfVPEn+EJzgXsJun8EKzsEumHjr+64V4fw=="],
"@tauri-apps/plugin-dialog": ["@tauri-apps/plugin-dialog@2.7.0", "", { "dependencies": { "@tauri-apps/api": "^2.10.1" } }, "sha512-4nS/hfGMGCXiAS3LtVjH9AgsSAPJeG/7R+q8agTFqytjnMa4Zq95Bq8WzVDkckpanX+yyRHXnRtrKXkANKDHvw=="],
"@tauri-apps/plugin-http": ["@tauri-apps/plugin-http@2.5.8", "", { "dependencies": { "@tauri-apps/api": "^2.10.1" } }, "sha512-oxd7oypzQeu8kAfFCrw534Kq7Cw+NzozcnCY21O4rz3A+veJiIiuSCMIprgGcZOcLAXFP9GmDhKUbhuKWcunRw=="],
"@tauri-apps/plugin-notification": ["@tauri-apps/plugin-notification@2.3.3", "", { "dependencies": { "@tauri-apps/api": "^2.8.0" } }, "sha512-Zw+ZH18RJb41G4NrfHgIuofJiymusqN+q8fGUIIV7vyCH+5sSn5coqRv/MWB9qETsUs97vmU045q7OyseCV3Qg=="],
"@tauri-apps/plugin-opener": ["@tauri-apps/plugin-opener@2.5.3", "", { "dependencies": { "@tauri-apps/api": "^2.8.0" } }, "sha512-CCcUltXMOfUEArbf3db3kCE7Ggy1ExBEBl51Ko2ODJ6GDYHRp1nSNlQm5uNCFY5k7/ufaK5Ib3Du/Zir19IYQQ=="],
"@tauri-apps/plugin-os": ["@tauri-apps/plugin-os@2.3.2", "", { "dependencies": { "@tauri-apps/api": "^2.8.0" } }, "sha512-n+nXWeuSeF9wcEsSPmRnBEGrRgOy6jjkSU+UVCOV8YUGKb2erhDOxis7IqRXiRVHhY8XMKks00BJ0OAdkpf6+A=="],
"@tauri-apps/plugin-process": ["@tauri-apps/plugin-process@2.3.1", "", { "dependencies": { "@tauri-apps/api": "^2.8.0" } }, "sha512-nCa4fGVaDL/B9ai03VyPOjfAHRHSBz5v6F/ObsB73r/dA3MHHhZtldaDMIc0V/pnUw9ehzr2iEG+XkSEyC0JJA=="],
"@tauri-apps/plugin-shell": ["@tauri-apps/plugin-shell@2.3.5", "", { "dependencies": { "@tauri-apps/api": "^2.10.1" } }, "sha512-jewtULhiQ7lI7+owCKAjc8tYLJr92U16bPOeAa472LHJdgaibLP83NcfAF2e+wkEcA53FxKQAZ7byDzs2eeizg=="],
"@tauri-apps/plugin-store": ["@tauri-apps/plugin-store@2.4.2", "", { "dependencies": { "@tauri-apps/api": "^2.8.0" } }, "sha512-0ClHS50Oq9HEvLPhNzTNFxbWVOqoAp3dRvtewQBeqfIQ0z5m3JRnOISIn2ZVPCrQC0MyGyhTS9DWhHjpigQE7A=="],
"@tauri-apps/plugin-updater": ["@tauri-apps/plugin-updater@2.10.1", "", { "dependencies": { "@tauri-apps/api": "^2.10.1" } }, "sha512-NFYMg+tWOZPJdzE/PpFj2qfqwAWwNS3kXrb1tm1gnBJ9mYzZ4WDRrwy8udzWoAnfGCHLuePNLY1WVCNHnh3eRA=="],
"@tauri-apps/plugin-window-state": ["@tauri-apps/plugin-window-state@2.4.1", "", { "dependencies": { "@tauri-apps/api": "^2.8.0" } }, "sha512-OuvdrzyY8Q5Dbzpj+GcrnV1iCeoZbcFdzMjanZMMcAEUNy/6PH5pxZPXpaZLOR7whlzXiuzx0L9EKZbH7zpdRw=="],
"@tediousjs/connection-string": ["@tediousjs/connection-string@0.5.0", "", {}, "sha512-7qSgZbincDDDFyRweCIEvZULFAw5iz/DeunhvuxpL31nfntX3P4Yd4HkHBRg9H8CdqY1e5WFN1PZIz/REL9MVQ=="],
"@testing-library/dom": ["@testing-library/dom@10.4.1", "", { "dependencies": { "@babel/code-frame": "^7.10.4", "@babel/runtime": "^7.12.5", "@types/aria-query": "^5.0.1", "aria-query": "5.3.0", "dom-accessibility-api": "^0.5.9", "lz-string": "^1.5.0", "picocolors": "1.1.1", "pretty-format": "^27.0.2" } }, "sha512-o4PXJQidqJl82ckFaXUeoAW+XysPLauYI43Abki5hABd853iMhitooc6znOnczgbTYmEP6U6/y1ZyKAIsvMKGg=="],
@@ -2335,7 +2454,7 @@
"@types/nlcst": ["@types/nlcst@2.0.3", "", { "dependencies": { "@types/unist": "*" } }, "sha512-vSYNSDe6Ix3q+6Z7ri9lyWqgGhJTmzRjZRqyq15N0Z/1/UnVsno9G/N40NBijoYx2seFDIl0+B2mgAb9mezUCA=="],
"@types/node": ["@types/node@24.12.2", "", { "dependencies": { "undici-types": "~7.16.0" } }, "sha512-A1sre26ke7HDIuY/M23nd9gfB+nrmhtYyMINbjI1zHJxYteKR6qSMX56FsmjMcDb3SMcjJg5BiRRgOCC/yBD0g=="],
"@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/node-fetch": ["@types/node-fetch@2.6.13", "", { "dependencies": { "@types/node": "*", "form-data": "^4.0.4" } }, "sha512-QGpRVpzSaUs30JBSGPjOg4Uveu384erbHBoT1zeONvyCfwQxIkUshLAOqN/k9EjGviPRmWTTe6aH2qySWKTVSw=="],
@@ -4867,7 +4986,7 @@
"undici": ["undici@7.25.0", "", {}, "sha512-xXnp4kTyor2Zq+J1FfPI6Eq3ew5h6Vl0F/8d9XU5zZQf1tX9s2Su1/3PiMmUANFULpmksxkClamIZcaUqryHsQ=="],
"undici-types": ["undici-types@7.16.0", "", {}, "sha512-Zz+aZWSj8LE6zoxD+xrjh4VfkIG8Ya6LvYkZqtUQGJPZjYl53ypCaUwWqo7eI0x66KBGeRo+mlBEkMSeSZ38Nw=="],
"undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"unenv": ["unenv@2.0.0-rc.24", "", { "dependencies": { "pathe": "^2.0.3" } }, "sha512-i7qRCmY42zmCwnYlh9H2SvLEypEFGye5iRmEMKjcGi7zk9UquigRjFtTLz0TYqr0ZGLZhaMHl/foy1bZR+Cwlw=="],
@@ -5417,8 +5536,6 @@
"@gitlab/opencode-gitlab-auth/open": ["open@10.2.0", "", { "dependencies": { "default-browser": "^5.2.1", "define-lazy-prop": "^3.0.0", "is-inside-container": "^1.0.0", "wsl-utils": "^0.1.0" } }, "sha512-YgBpdJHPyQ2UE5x+hlSXcnejzAvD0b22U2OuAP+8OnlJT+PjWPxtgmGqKKc+RgTM63U9gN0YzrYc71R2WT/hTA=="],
"@happy-dom/global-registrator/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@hey-api/openapi-ts/open": ["open@11.0.0", "", { "dependencies": { "default-browser": "^5.4.0", "define-lazy-prop": "^3.0.0", "is-in-ssh": "^1.0.0", "is-inside-container": "^1.0.0", "powershell-utils": "^0.1.0", "wsl-utils": "^0.3.0" } }, "sha512-smsWv2LzFjP03xmvFoJ331ss6h+jixfA4UUV/Bsiyuu4YJPfN+FIQGOIiv4w9/+MoHkfkJ22UIaQWRVFRfH6Vw=="],
"@hey-api/openapi-ts/semver": ["semver@7.7.3", "", { "bin": { "semver": "bin/semver.js" } }, "sha512-SdsKMrI9TdgjdweUSR9MweHA4EJ8YxHn8DFaDisvhVlUOe4BF1tLD7GAj0lIqWVl+dPb/rExr0Btby5loQm20Q=="],
@@ -5547,10 +5664,18 @@
"@opencode-ai/desktop/@actions/artifact": ["@actions/artifact@4.0.0", "", { "dependencies": { "@actions/core": "^1.10.0", "@actions/github": "^6.0.1", "@actions/http-client": "^2.1.0", "@azure/core-http": "^3.0.5", "@azure/storage-blob": "^12.15.0", "@octokit/core": "^5.2.1", "@octokit/plugin-request-log": "^1.0.4", "@octokit/plugin-retry": "^3.0.9", "@octokit/request": "^8.4.1", "@octokit/request-error": "^5.1.1", "@protobuf-ts/plugin": "^2.2.3-alpha.1", "archiver": "^7.0.1", "jwt-decode": "^3.1.2", "unzip-stream": "^0.3.1" } }, "sha512-HCc2jMJRAfviGFAh0FsOR/jNfWhirxl7W6z8zDtttt0GltwxBLdEIjLiweOPFl9WbyJRW1VWnPUSAixJqcWUMQ=="],
"@opencode-ai/desktop/marked": ["marked@15.0.12", "", { "bin": { "marked": "bin/marked.js" } }, "sha512-8dD6FusOQSrpv9Z1rdNMdlSgQOIP880DHqnohobOmYLElGEqAL/JvxvuxZO16r4HtjTlfPRDC1hbvxC9dPN2nA=="],
"@opencode-ai/desktop/typescript": ["typescript@5.6.3", "", { "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" } }, "sha512-hjcS1mhfuyi4WW8IWtjP7brDrG2cuDZukyrYrSauoXGNgx0S7zceP07adYkJycEr56BOUTNPzbInooiN3fn1qw=="],
"@opencode-ai/desktop-electron/@actions/artifact": ["@actions/artifact@4.0.0", "", { "dependencies": { "@actions/core": "^1.10.0", "@actions/github": "^6.0.1", "@actions/http-client": "^2.1.0", "@azure/core-http": "^3.0.5", "@azure/storage-blob": "^12.15.0", "@octokit/core": "^5.2.1", "@octokit/plugin-request-log": "^1.0.4", "@octokit/plugin-retry": "^3.0.9", "@octokit/request": "^8.4.1", "@octokit/request-error": "^5.1.1", "@protobuf-ts/plugin": "^2.2.3-alpha.1", "archiver": "^7.0.1", "jwt-decode": "^3.1.2", "unzip-stream": "^0.3.1" } }, "sha512-HCc2jMJRAfviGFAh0FsOR/jNfWhirxl7W6z8zDtttt0GltwxBLdEIjLiweOPFl9WbyJRW1VWnPUSAixJqcWUMQ=="],
"@opencode-ai/desktop-electron/marked": ["marked@15.0.12", "", { "bin": { "marked": "bin/marked.js" } }, "sha512-8dD6FusOQSrpv9Z1rdNMdlSgQOIP880DHqnohobOmYLElGEqAL/JvxvuxZO16r4HtjTlfPRDC1hbvxC9dPN2nA=="],
"@opencode-ai/desktop-electron/typescript": ["typescript@5.6.3", "", { "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" } }, "sha512-hjcS1mhfuyi4WW8IWtjP7brDrG2cuDZukyrYrSauoXGNgx0S7zceP07adYkJycEr56BOUTNPzbInooiN3fn1qw=="],
"@opencode-ai/llm/@smithy/eventstream-codec": ["@smithy/eventstream-codec@4.2.14", "", { "dependencies": { "@aws-crypto/crc32": "5.2.0", "@smithy/types": "^4.14.1", "@smithy/util-hex-encoding": "^4.2.2", "tslib": "^2.6.2" } }, "sha512-erZq0nOIpzfeZdCyzZjdJb4nVSKLUmSkaQUVkRGQTXs30gyUGeKnrYEg+Xe1W5gE3aReS7IgsvANwVPxSzY6Pw=="],
"@opencode-ai/llm/@smithy/util-utf8": ["@smithy/util-utf8@4.2.2", "", { "dependencies": { "@smithy/util-buffer-from": "^4.2.2", "tslib": "^2.6.2" } }, "sha512-75MeYpjdWRe8M5E3AW0O4Cx3UadweS+cwdXjwYGBW5h/gxxnbeZ877sLPX/ZJA9GVTlL/qG0dXP29JWFCD1Ayw=="],
"@opencode-ai/ui/@solid-primitives/resize-observer": ["@solid-primitives/resize-observer@2.1.3", "", { "dependencies": { "@solid-primitives/event-listener": "^2.4.3", "@solid-primitives/rootless": "^1.5.2", "@solid-primitives/static-store": "^0.1.2", "@solid-primitives/utils": "^6.3.2" }, "peerDependencies": { "solid-js": "^1.6.12" } }, "sha512-zBLje5E06TgOg93S7rGPldmhDnouNGhvfZVKOp+oG2XU8snA+GoCSSCz1M+jpNAg5Ek2EakU5UVQqL152WmdXQ=="],
"@opencode-ai/web/@shikijs/transformers": ["@shikijs/transformers@3.20.0", "", { "dependencies": { "@shikijs/core": "3.20.0", "@shikijs/types": "3.20.0" } }, "sha512-PrHHMRr3Q5W1qB/42kJW6laqFyWdhrPF2hNR9qjOm1xcSiAO3hAHo7HaVyHE6pMyevmy3i51O8kuGGXC78uK3g=="],
@@ -5593,24 +5718,16 @@
"@slack/bolt/path-to-regexp": ["path-to-regexp@8.4.2", "", {}, "sha512-qRcuIdP69NPm4qbACK+aDogI5CBDMi1jKe0ry5rSQJz8JVLsC7jV8XpiJjGRLLol3N+R5ihGYcrPLTno6pAdBA=="],
"@slack/logger/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@slack/oauth/@slack/logger": ["@slack/logger@3.0.0", "", { "dependencies": { "@types/node": ">=12.0.0" } }, "sha512-DTuBFbqu4gGfajREEMrkq5jBhcnskinhr4+AnfJEk48zhVeEv3XnUKGIX98B74kxhYsIMfApGGySTn7V3b5yBA=="],
"@slack/oauth/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@slack/socket-mode/@slack/logger": ["@slack/logger@3.0.0", "", { "dependencies": { "@types/node": ">=12.0.0" } }, "sha512-DTuBFbqu4gGfajREEMrkq5jBhcnskinhr4+AnfJEk48zhVeEv3XnUKGIX98B74kxhYsIMfApGGySTn7V3b5yBA=="],
"@slack/socket-mode/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@slack/socket-mode/@types/ws": ["@types/ws@7.4.7", "", { "dependencies": { "@types/node": "*" } }, "sha512-JQbbmxZTZehdc2iszGKs5oC3NFnjeay7mtAWrdt7qNtAVK0g19muApzAy4bm9byz79xa2ZnO/BOBC2R8RC5Lww=="],
"@slack/socket-mode/ws": ["ws@7.5.10", "", { "peerDependencies": { "bufferutil": "^4.0.1", "utf-8-validate": "^5.0.2" }, "optionalPeers": ["bufferutil", "utf-8-validate"] }, "sha512-+dbF1tHwZpXcbOJdVOkzLDxZP1ailvSxM6ZweXTegylPny803bFhA+vqBYw4s31NSAk4S2Qz+AKXK9a4wkdjcQ=="],
"@slack/web-api/@slack/logger": ["@slack/logger@3.0.0", "", { "dependencies": { "@types/node": ">=12.0.0" } }, "sha512-DTuBFbqu4gGfajREEMrkq5jBhcnskinhr4+AnfJEk48zhVeEv3XnUKGIX98B74kxhYsIMfApGGySTn7V3b5yBA=="],
"@slack/web-api/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@slack/web-api/eventemitter3": ["eventemitter3@3.1.2", "", {}, "sha512-tvtQIeLVHjDkJYnzf2dgVMxfuSGJeM/7UCG17TT4EumTfNtF+0nebF/4zWOIkCreAbtNqhGEboB6BWrwqNaw4Q=="],
"@slack/web-api/form-data": ["form-data@2.5.5", "", { "dependencies": { "asynckit": "^0.4.0", "combined-stream": "^1.0.8", "es-set-tostringtag": "^2.1.0", "hasown": "^2.0.2", "mime-types": "^2.1.35", "safe-buffer": "^5.2.1" } }, "sha512-jqdObeR2rxZZbPSGL+3VckHMYtu+f9//KXBsVny6JSX/pa38Fy+bGjuG8eW/H6USNQWhLi8Num++cU2yOCNz4A=="],
@@ -5671,62 +5788,8 @@
"@testing-library/dom/dom-accessibility-api": ["dom-accessibility-api@0.5.16", "", {}, "sha512-X7BJ2yElsnOJ30pZF4uIIDfBEVgF4XEBxL9Bxhy6dnrm5hkzqmsWHGTiHqRiITNhMyFLyAiWndIJP7Z1NTteDg=="],
"@types/body-parser/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/cacache/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/cacheable-request/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/connect/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/cross-spawn/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/express-serve-static-core/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/fontkit/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/fs-extra/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/is-stream/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/jsonwebtoken/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/keyv/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/mssql/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/node-fetch/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/npm-registry-fetch/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/npmcli__arborist/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/npmlog/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/pacote/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/plist/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/plist/xmlbuilder": ["xmlbuilder@15.1.1", "", {}, "sha512-yMqGBqtXyeN1e3TGYvgNgDVZ3j84W4cwkOXQswghol6APgZWaff9lnbvN7MHYJOiXsvGPXtjTYJEiC9J2wv9Eg=="],
"@types/readable-stream/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/responselike/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/sax/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/send/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/serve-static/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/ssri/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/tunnel/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/ws/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@types/yauzl/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"@vitest/expect/@vitest/utils": ["@vitest/utils@3.2.4", "", { "dependencies": { "@vitest/pretty-format": "3.2.4", "loupe": "^3.1.4", "tinyrainbow": "^2.0.0" } }, "sha512-fB2V0JFrQSMsCo9HiSq3Ezpdv4iYaXRG1Sx8edX3MwxfyNn83mKiGzOcH+Fkxt4MHxr3y42fQi1oeAInqgX2QA=="],
"@vitest/expect/tinyrainbow": ["tinyrainbow@2.0.0", "", {}, "sha512-op4nsTR47R6p0vMUUoYl/a+ljLFVtlfaXkLQmqfLR1qHma1h/ysYk4hEXZ880bf2CYgTskvTa/e196Vd5dDQXw=="],
@@ -5803,8 +5866,6 @@
"builder-util-runtime/sax": ["sax@1.6.0", "", {}, "sha512-6R3J5M4AcbtLUdZmRv2SygeVaM7IhrLXu9BmnOGmmACak8fiUtOsYNWUS4uK7upbmHIBbLBeFeI//477BKLBzA=="],
"bun-types/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"bun-webgpu/@webgpu/types": ["@webgpu/types@0.1.69", "", {}, "sha512-RPmm6kgRbI8e98zSD3RVACvnuktIja5+yLgDAkTmxLr90BEwdTXRQWNLF3ETTTyH/8mKhznZuN5AveXYFEsMGQ=="],
"c12/chokidar": ["chokidar@5.0.0", "", { "dependencies": { "readdirp": "^5.0.0" } }, "sha512-TQMmc3w+5AxjpL8iIiwebF73dRDF4fBIieAqGn9RGCWaEVwQ6Fb2cGe31Yns0RRIzii5goJ1Y7xbMwo1TxMplw=="],
@@ -5813,8 +5874,6 @@
"clone-response/mimic-response": ["mimic-response@1.0.1", "", {}, "sha512-j5EctnkH7amfV/q5Hgmoal1g2QHFJRraOtmx0JpIqkxhBhI/lJSl1nMpQ45hVarwNETOoWEimndZ4QK0RHxuxQ=="],
"cloudflare/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"compress-commons/is-stream": ["is-stream@2.0.1", "", {}, "sha512-hFoiJiTl63nn+kstHGBtewWSKnQLpyb155KHheA1l39uvtO9nWIop1p3udqPcUd/xbF1VLMO4n7OI6p7RbngDg=="],
"condense-newlines/kind-of": ["kind-of@3.2.2", "", { "dependencies": { "is-buffer": "^1.1.5" } }, "sha512-NOW9QQXMoZGg/oqnVNoNTTIFEIid1627WCffUBJEdMxYApq7mNE7CpzucIPc+ZQg25Phej7IJSmX3hO+oblOtQ=="],
@@ -5849,8 +5908,6 @@
"effect/@standard-schema/spec": ["@standard-schema/spec@1.1.0", "", {}, "sha512-l2aFy5jALhniG5HgqrD6jXLi/rUWrKvqN/qJx6yoJsgKhblVd+iqqU4RCXavm/jPityDo5TCvKMnpjKnOriy0w=="],
"electron/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"electron-builder/chalk": ["chalk@4.1.2", "", { "dependencies": { "ansi-styles": "^4.1.0", "supports-color": "^7.1.0" } }, "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA=="],
"electron-builder/yargs": ["yargs@17.7.2", "", { "dependencies": { "cliui": "^8.0.1", "escalade": "^3.1.1", "get-caller-file": "^2.0.5", "require-directory": "^2.1.1", "string-width": "^4.2.3", "y18n": "^5.0.5", "yargs-parser": "^21.1.1" } }, "sha512-7dSzzRQ++CKnNI/krKnYRV7JKKPUXMEh61soaHKg9mrWEhzFWhFnxPxGl+69cD1Ou63C13NUPCnmIcrvqCuM6w=="],
@@ -5905,8 +5962,6 @@
"gray-matter/js-yaml": ["js-yaml@3.14.2", "", { "dependencies": { "argparse": "^1.0.7", "esprima": "^4.0.0" }, "bin": { "js-yaml": "bin/js-yaml.js" } }, "sha512-PMSmkqxr106Xa156c2M265Z+FTrPl+oxd/rgOQy2tijQeK5TxQ43psO1ZCwhVOSdnn+RzkzlRz/eY4BgJBYVpg=="],
"happy-dom/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"happy-dom/ws": ["ws@8.20.0", "", { "peerDependencies": { "bufferutil": "^4.0.1", "utf-8-validate": ">=5.0.2" }, "optionalPeers": ["bufferutil", "utf-8-validate"] }, "sha512-sAt8BhgNbzCtgGbt2OxmpuryO63ZoDk/sqaB/znQm94T4fCEsy/yV+7CdC1kJhOU9lboAEU7R3kquuycDoibVA=="],
"html-minifier-terser/commander": ["commander@10.0.1", "", {}, "sha512-y4Mg2tXshplEbSGzx7amzPwKKOCGuoSRP/CjEdwwk0FOGlUbq6lKuoyDZTNZkmxHdJtp54hdfY/JUrdL7Xfdug=="],
@@ -5919,8 +5974,6 @@
"iconv-corefoundation/node-addon-api": ["node-addon-api@1.7.2", "", {}, "sha512-ibPK3iA+vaY1eEjESkQkM0BbCqFOaZMiXRTtdB0u7b4djtY6JnsjvPdUHVMg6xQt3B8fpTTWHI9A+ADjM9frzg=="],
"image-q/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"js-beautify/glob": ["glob@10.5.0", "", { "dependencies": { "foreground-child": "^3.1.0", "jackspeak": "^3.1.2", "minimatch": "^9.0.4", "minipass": "^7.1.2", "package-json-from-dist": "^1.0.0", "path-scurry": "^1.11.1" }, "bin": { "glob": "dist/esm/bin.mjs" } }, "sha512-DfXN8DfhJ7NH3Oe7cFmu3NCu1wKbkReJ8TorzSAFbSKrlNaQSKfIzqYqVY8zlbs2NLBbWpRiU52GX2PbaBVNkg=="],
"js-beautify/nopt": ["nopt@7.2.1", "", { "dependencies": { "abbrev": "^2.0.0" }, "bin": { "nopt": "bin/nopt.js" } }, "sha512-taM24ViiimT/XntxbPyJQzCG+p4EKOpgD3mxFwW38mGjVUrfERQOeY4EDHjdnptttfHuHQXFx+lTP08Q+mLa/w=="],
@@ -6041,8 +6094,6 @@
"proper-lockfile/signal-exit": ["signal-exit@3.0.7", "", {}, "sha512-wnD2ZE+l+SPC/uoS0vXeE9L1+0wuaMqKlfz9AMUo38JsyLSBWSFcHR1Rri62LZc12vLr1gb3jl7iwQhgwpAbGQ=="],
"protobufjs/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"raw-body/iconv-lite": ["iconv-lite@0.4.24", "", { "dependencies": { "safer-buffer": ">= 2.1.2 < 3" } }, "sha512-v3MXnZAcvnywkTUEZomIActle7RXXeedOR31wwl7VlyoXO4Qi9arvSenNQWne1TcRwhCL1HwLI21bEqdpj8/rA=="],
"readable-stream/buffer": ["buffer@6.0.3", "", { "dependencies": { "base64-js": "^1.3.1", "ieee754": "^1.2.1" } }, "sha512-FTiCpNxtwiZZHEZbcbTIcZjERVICn9yq/pDFkTl95/AxzD1naBctN7YO68riM/gLSDY7sdrMby8hofADYuuqOA=="],
@@ -6073,8 +6124,6 @@
"shiki/@shikijs/types": ["@shikijs/types@3.20.0", "", { "dependencies": { "@shikijs/vscode-textmate": "^10.0.2", "@types/hast": "^3.0.4" } }, "sha512-lhYAATn10nkZcBQ0BlzSbJA3wcmL5MXUUF8d2Zzon6saZDlToKaiRX60n2+ZaHJCmXEcZRWNzn+k9vplr8Jhsw=="],
"sitemap/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"sitemap/sax": ["sax@1.6.0", "", {}, "sha512-6R3J5M4AcbtLUdZmRv2SygeVaM7IhrLXu9BmnOGmmACak8fiUtOsYNWUS4uK7upbmHIBbLBeFeI//477BKLBzA=="],
"slice-ansi/ansi-styles": ["ansi-styles@6.2.3", "", {}, "sha512-4Dj6M28JB+oAH8kFkTLUo+a2jwOFkuqb3yucU0CANcRRUbxS0cP0nZYCGjcc3BNXwRIsUVmDGgzawme7zvJHvg=="],
@@ -6097,14 +6146,10 @@
"strip-ansi-cjs/ansi-regex": ["ansi-regex@5.0.1", "", {}, "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ=="],
"stripe/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"sucrase/commander": ["commander@4.1.1", "", {}, "sha512-NOKm8xhkzAjzFx8B2v5OAHT+u5pRQc2UCa2Vq9jYL/31o2wi9mxBA7LIFs3sV5VSC49z6pEhfbMULvShKj26WA=="],
"tar/yallist": ["yallist@5.0.0", "", {}, "sha512-YgvUTfwqyc7UXVMrB+SImsVYSmTS8X/tSrtdNZMImM+n7+QTriRXyXim0mBrTXNeqzVF0KWGgHPeiyViFFrNDw=="],
"tedious/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"terser/commander": ["commander@2.20.3", "", {}, "sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ=="],
"tiny-async-pool/semver": ["semver@5.7.2", "", { "bin": { "semver": "bin/semver" } }, "sha512-cBznnQ9KjJqU67B52RMC65CMarK2600WFnbkcaiwWq3xy/5haFJlshgnpjovMVJ+Hff49d8GEn0b87C5pDQ10g=="],
@@ -6443,8 +6488,6 @@
"@gitlab/opencode-gitlab-auth/open/wsl-utils": ["wsl-utils@0.1.0", "", { "dependencies": { "is-wsl": "^3.1.0" } }, "sha512-h3Fbisa2nKGPxCpm89Hk33lBLsnaGBvctQopaBSOW/uIs6FTe1ATyAnKFJrzVs9vpGdsTe73WF3V4lIsk4Gacw=="],
"@happy-dom/global-registrator/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@jsx-email/cli/esbuild/@esbuild/aix-ppc64": ["@esbuild/aix-ppc64@0.19.12", "", { "os": "aix", "cpu": "ppc64" }, "sha512-bmoCYyWdEL3wDQIVbcyzRyeKLgk2WtWLTWz1ZIAZF/EGbNOwSA6ew3PftJ1PqMiOOGu0OyFMzG53L0zqIpPeNA=="],
"@jsx-email/cli/esbuild/@esbuild/android-arm": ["@esbuild/android-arm@0.19.12", "", { "os": "android", "cpu": "arm" }, "sha512-qg/Lj1mu3CdQlDEEiWrlC4eaPZ1KztwGJ9B6J+/6G+/4ewxJg7gqj8eVYWvao1bXrqGiW2rsBZFSX3q2lcW05w=="],
@@ -6615,8 +6658,12 @@
"@octokit/rest/@octokit/core/before-after-hook": ["before-after-hook@4.0.0", "", {}, "sha512-q6tR3RPqIB1pMiTRMFcZwuG5T8vwp+vUvEG0vuI6B+Rikh5BfPp2fQ82c925FOs+b0lcFQ8CFrL+KbilfZFhOQ=="],
"@opencode-ai/desktop-electron/@actions/artifact/@actions/http-client": ["@actions/http-client@2.2.3", "", { "dependencies": { "tunnel": "^0.0.6", "undici": "^5.25.4" } }, "sha512-mx8hyJi/hjFvbPokCg4uRd4ZX78t+YyRPtnKWwIl+RzNaVuFpQHfmlGVfsKEJN8LwTCvL+DfVgAM04XaHkm6bA=="],
"@opencode-ai/desktop/@actions/artifact/@actions/http-client": ["@actions/http-client@2.2.3", "", { "dependencies": { "tunnel": "^0.0.6", "undici": "^5.25.4" } }, "sha512-mx8hyJi/hjFvbPokCg4uRd4ZX78t+YyRPtnKWwIl+RzNaVuFpQHfmlGVfsKEJN8LwTCvL+DfVgAM04XaHkm6bA=="],
"@opencode-ai/llm/@smithy/eventstream-codec/@smithy/types": ["@smithy/types@4.14.1", "", { "dependencies": { "tslib": "^2.6.2" } }, "sha512-59b5HtSVrVR/eYNei3BUj3DCPKD/G7EtDDe7OEJE7i7FtQFugYo6MxbotS8mVJkLNVf8gYaAlEBwwtJ9HzhWSg=="],
"@opencode-ai/web/@shikijs/transformers/@shikijs/core": ["@shikijs/core@3.20.0", "", { "dependencies": { "@shikijs/types": "3.20.0", "@shikijs/vscode-textmate": "^10.0.2", "@types/hast": "^3.0.4", "hast-util-to-html": "^9.0.5" } }, "sha512-f2ED7HYV4JEk827mtMDwe/yQ25pRiXZmtHjWF8uzZKuKiEsJR7Ce1nuQ+HhV9FzDcbIo4ObBCD9GPTzNuy9S1g=="],
"@opencode-ai/web/@shikijs/transformers/@shikijs/types": ["@shikijs/types@3.20.0", "", { "dependencies": { "@shikijs/vscode-textmate": "^10.0.2", "@types/hast": "^3.0.4" } }, "sha512-lhYAATn10nkZcBQ0BlzSbJA3wcmL5MXUUF8d2Zzon6saZDlToKaiRX60n2+ZaHJCmXEcZRWNzn+k9vplr8Jhsw=="],
@@ -6637,14 +6684,6 @@
"@sentry/cli/which/isexe": ["isexe@2.0.0", "", {}, "sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw=="],
"@slack/logger/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@slack/oauth/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@slack/socket-mode/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@slack/web-api/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@slack/web-api/form-data/mime-types": ["mime-types@2.1.35", "", { "dependencies": { "mime-db": "1.52.0" } }, "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw=="],
"@slack/web-api/p-queue/eventemitter3": ["eventemitter3@4.0.7", "", {}, "sha512-8guHBZCwKnFhYdHr2ysuRWErTwhoN2X8XELRlrRwpmfeY2jjuUN4taQMsULKUVo1K4DvZl+0pgfyoysHxvmvEw=="],
@@ -6671,60 +6710,6 @@
"@tailwindcss/oxide-wasm32-wasi/@napi-rs/wasm-runtime/@tybys/wasm-util": ["@tybys/wasm-util@0.10.1", "", { "dependencies": { "tslib": "^2.4.0" } }, "sha512-9tTaPJLSiejZKx+Bmog4uSubteqTvFrVrURwkmHixBo0G4seD0zUxp98E1DzUBJxLQ3NPwXrGKDiVjwx/DpPsg=="],
"@types/body-parser/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/cacache/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/cacheable-request/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/connect/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/cross-spawn/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/express-serve-static-core/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/fontkit/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/fs-extra/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/is-stream/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/jsonwebtoken/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/keyv/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/mssql/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/node-fetch/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/npm-registry-fetch/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/npmcli__arborist/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/npmlog/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/pacote/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/plist/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/readable-stream/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/responselike/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/sax/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/send/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/serve-static/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/ssri/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/tunnel/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/ws/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@types/yauzl/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"@vitest/expect/@vitest/utils/@vitest/pretty-format": ["@vitest/pretty-format@3.2.4", "", { "dependencies": { "tinyrainbow": "^2.0.0" } }, "sha512-IVNZik8IVRJRTr9fxlitMKeJeXFFFN0JaB9PHPGQ8NKQbGpfjlTx9zO4RefN8gp7eqjNy8nyK3NZmBzOPeIxtA=="],
"accepts/mime-types/mime-db": ["mime-db@1.52.0", "", {}, "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg=="],
@@ -6773,12 +6758,8 @@
"body-parser/debug/ms": ["ms@2.0.0", "", {}, "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A=="],
"bun-types/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"c12/chokidar/readdirp": ["readdirp@5.0.0", "", {}, "sha512-9u/XQ1pvrQtYyMpZe7DXKv2p5CNvyVwzUB6uhLAnQwHMSgKMBR62lc7AHljaeteeHXn11XTAaLLUVZYVZyuRBQ=="],
"cloudflare/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"crc/buffer/ieee754": ["ieee754@1.2.1", "", {}, "sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA=="],
"cross-spawn/which/isexe": ["isexe@2.0.0", "", {}, "sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw=="],
@@ -6797,8 +6778,6 @@
"electron-winstaller/fs-extra/universalify": ["universalify@0.1.2", "", {}, "sha512-rBJeI5CXAlmy1pV+617WB9J63U6XcazHHF2f2dbJix4XzpUF0RS3Zbj0FGIOCAva5P/d/GBOYaACQ1w+0azUkg=="],
"electron/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"esbuild-plugin-copy/chokidar/readdirp": ["readdirp@3.6.0", "", { "dependencies": { "picomatch": "^2.2.1" } }, "sha512-hOS089on8RduqdbhvQ5Z37A0ESjsqz6qnRcffsMU3495FuTdqSm+7bhJ29JvIOsBDEEnan5DPu9t3To9VRlMzA=="],
"express/debug/ms": ["ms@2.0.0", "", {}, "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A=="],
@@ -6811,14 +6790,10 @@
"gray-matter/js-yaml/argparse": ["argparse@1.0.10", "", { "dependencies": { "sprintf-js": "~1.0.2" } }, "sha512-o5Roy6tNG4SL/FOkCAN6RzjiakZS25RLYFrcMttJqbdd8BWrnA+fGz57iN5Pb06pvBGvl5gQ0B48dJlslXvoTg=="],
"happy-dom/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"iconv-corefoundation/cli-truncate/slice-ansi": ["slice-ansi@3.0.0", "", { "dependencies": { "ansi-styles": "^4.0.0", "astral-regex": "^2.0.0", "is-fullwidth-code-point": "^3.0.0" } }, "sha512-pSyv7bSTC7ig9Dcgbw9AuRNUb5k5V6oDudjZoMBSr13qpLBG7tB+zgCkARjq7xIUgdz5P1Qe8u+rSGdouOOIyQ=="],
"iconv-corefoundation/cli-truncate/string-width": ["string-width@4.2.3", "", { "dependencies": { "emoji-regex": "^8.0.0", "is-fullwidth-code-point": "^3.0.0", "strip-ansi": "^6.0.1" } }, "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g=="],
"image-q/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"js-beautify/glob/jackspeak": ["jackspeak@3.4.3", "", { "dependencies": { "@isaacs/cliui": "^8.0.2" }, "optionalDependencies": { "@pkgjs/parseargs": "^0.11.0" } }, "sha512-OGlZQpz2yfahA/Rd1Y8Cd9SIEsqvXkLVoSw/cgwhnhFMDbsQFeZYoJJ7bIZBS9BcamUW96asq/npPWugM+RQBw=="],
"js-beautify/glob/minimatch": ["minimatch@9.0.9", "", { "dependencies": { "brace-expansion": "^2.0.2" } }, "sha512-OBwBN9AL4dqmETlpS2zasx+vTeWclWzkblfZk7KTA5j3jeOONz/tRCnZomUyvNg83wL5Zv9Ss6HMJXAgL8R2Yg=="],
@@ -6837,8 +6812,6 @@
"motion/framer-motion/motion-utils": ["motion-utils@12.36.0", "", {}, "sha512-eHWisygbiwVvf6PZ1vhaHCLamvkSbPIeAYxWUuL3a2PD/TROgE7FvfHWTIH4vMl798QLfMw15nRqIaRDXTlYRg=="],
"mssql/tedious/@types/node": ["@types/node@22.13.9", "", { "dependencies": { "undici-types": "~6.20.0" } }, "sha512-acBjXdRJ3A6Pb3tqnw9HZmyR3Fiol3aGxRCK1x3d+6CDAMjl7I649wpSd+yNURCjbOUGu9tqtLKnTGxmK6CyGw=="],
"mssql/tedious/iconv-lite": ["iconv-lite@0.6.3", "", { "dependencies": { "safer-buffer": ">= 2.1.2 < 3.0.0" } }, "sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw=="],
"opencode-gitlab-auth/open/wsl-utils": ["wsl-utils@0.1.0", "", { "dependencies": { "is-wsl": "^3.1.0" } }, "sha512-h3Fbisa2nKGPxCpm89Hk33lBLsnaGBvctQopaBSOW/uIs6FTe1ATyAnKFJrzVs9vpGdsTe73WF3V4lIsk4Gacw=="],
@@ -6889,8 +6862,6 @@
"pkg-up/find-up/locate-path": ["locate-path@3.0.0", "", { "dependencies": { "p-locate": "^3.0.0", "path-exists": "^3.0.0" } }, "sha512-7AO748wWnIhNqAuaty2ZWHkQHRSNfPVIsPIfwEOWO22AmaoVrWavlOcMR5nzTLNYvp36X220/maaRsrec1G65A=="],
"protobufjs/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"readable-stream/buffer/ieee754": ["ieee754@1.2.1", "", {}, "sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA=="],
"readdir-glob/minimatch/brace-expansion": ["brace-expansion@2.1.0", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-TN1kCZAgdgweJhWWpgKYrQaMNHcDULHkWwQIspdtjV4Y5aurRdZpjAqn6yX3FPqTA9ngHCc4hJxMAMgGfve85w=="],
@@ -6901,16 +6872,10 @@
"send/debug/ms": ["ms@2.0.0", "", {}, "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A=="],
"sitemap/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"storybook/open/wsl-utils": ["wsl-utils@0.1.0", "", { "dependencies": { "is-wsl": "^3.1.0" } }, "sha512-h3Fbisa2nKGPxCpm89Hk33lBLsnaGBvctQopaBSOW/uIs6FTe1ATyAnKFJrzVs9vpGdsTe73WF3V4lIsk4Gacw=="],
"string-width-cjs/strip-ansi/ansi-regex": ["ansi-regex@5.0.1", "", {}, "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ=="],
"stripe/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"tedious/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"tw-to-css/tailwindcss/chokidar": ["chokidar@3.6.0", "", { "dependencies": { "anymatch": "~3.1.2", "braces": "~3.0.2", "glob-parent": "~5.1.2", "is-binary-path": "~2.1.0", "is-glob": "~4.0.1", "normalize-path": "~3.0.0", "readdirp": "~3.6.0" }, "optionalDependencies": { "fsevents": "~2.3.2" } }, "sha512-7VT13fmjotKpGipCW9JEQAusEPE+Ei8nl6/g4FBAmIm0GOOLMua9NDDo/DWp0ZAxCr3cPq5ZpBqmPAQgDda2Pw=="],
"tw-to-css/tailwindcss/glob-parent": ["glob-parent@6.0.2", "", { "dependencies": { "is-glob": "^4.0.3" } }, "sha512-XxwI8EOhVQgWp6iDL+3b0r86f4d6AX6zSU55HfB4ydCEuXLXc5FcYeOu+nnGftS4TEju/11rt4KJPTMgbfmv4A=="],
@@ -7145,6 +7110,8 @@
"@octokit/rest/@octokit/core/@octokit/types/@octokit/openapi-types": ["@octokit/openapi-types@27.0.0", "", {}, "sha512-whrdktVs1h6gtR+09+QsNk2+FO+49j6ga1c55YZudfEG+oKJVvJLQi3zkOm5JjiUXAagWK2tI2kTGKJ2Ys7MGA=="],
"@opencode-ai/desktop-electron/@actions/artifact/@actions/http-client/undici": ["undici@5.29.0", "", { "dependencies": { "@fastify/busboy": "^2.0.0" } }, "sha512-raqeBD6NQK4SkWhQzeYKd1KmIG6dllBOTt55Rmkt4HtI9mwdWtJljnrXjAFUBLTSN67HWrOIZ3EPF4kjUw80Bg=="],
"@opencode-ai/desktop/@actions/artifact/@actions/http-client/undici": ["undici@5.29.0", "", { "dependencies": { "@fastify/busboy": "^2.0.0" } }, "sha512-raqeBD6NQK4SkWhQzeYKd1KmIG6dllBOTt55Rmkt4HtI9mwdWtJljnrXjAFUBLTSN67HWrOIZ3EPF4kjUw80Bg=="],
"@sentry/bundler-plugin-core/glob/minimatch/brace-expansion": ["brace-expansion@2.1.0", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-TN1kCZAgdgweJhWWpgKYrQaMNHcDULHkWwQIspdtjV4Y5aurRdZpjAqn6yX3FPqTA9ngHCc4hJxMAMgGfve85w=="],
@@ -7213,8 +7180,6 @@
"js-beautify/glob/path-scurry/lru-cache": ["lru-cache@10.4.3", "", {}, "sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ=="],
"mssql/tedious/@types/node/undici-types": ["undici-types@6.20.0", "", {}, "sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg=="],
"opencontrol/@modelcontextprotocol/sdk/express/accepts": ["accepts@2.0.0", "", { "dependencies": { "mime-types": "^3.0.0", "negotiator": "^1.0.0" } }, "sha512-5cvg6CtKwfgdmVqY1WIiXKc3Q1bkRqGLi+2W/6ao+6Y7gu/RCwRuAhGEzh5B4KlszSuTLgZYuqFqo5bImjNKng=="],
"opencontrol/@modelcontextprotocol/sdk/express/body-parser": ["body-parser@2.2.2", "", { "dependencies": { "bytes": "^3.1.2", "content-type": "^1.0.5", "debug": "^4.4.3", "http-errors": "^2.0.0", "iconv-lite": "^0.7.0", "on-finished": "^2.4.1", "qs": "^6.14.1", "raw-body": "^3.0.1", "type-is": "^2.0.1" } }, "sha512-oP5VkATKlNwcgvxi0vM0p/D3n2C3EReYVX+DNYs5TjZFn/oQt2j+4sVJtSMr18pdRr8wjTcBl6LoV+FUwzPmNA=="],


@@ -1,8 +1,8 @@
{
"nodeModules": {
"x86_64-linux": "sha256-YBTnGuKDthi9wM4UrY0CMNqAzwnM6rN5XROyJOqYbQ8=",
"aarch64-linux": "sha256-7B1dxtYOc9t+e3lzF8O02YtjsvogyuZjHSanWw1XPio=",
"aarch64-darwin": "sha256-y0CzJRL4WHMxVbZPg3O7Dd+66TbITJbiv0oqhZ6URWw=",
"x86_64-darwin": "sha256-KW0Cx/ddKM4sQcpKhKwYu8qL6zYlm12kcUlgp66Wf50="
"x86_64-linux": "sha256-9wTDLZsuGjkWyVOb6AG2VRYPiaSj/lnXwVkSwNeDcns=",
"aarch64-linux": "sha256-gmKlL2fQxY8bo+//8m9e1TNYJK3RXa4i8xsgtd046bc=",
"aarch64-darwin": "sha256-ENSJK+7rZi3m342mjtGg9N0P6zWEypXMpI7QdFMydbc=",
"x86_64-darwin": "sha256-gkxCxGh5dlwj03vZdz20pbiAwFEDpAlu/5iU8cwZOGI="
}
}


@@ -7,7 +7,7 @@
"packageManager": "bun@1.3.13",
"scripts": {
"dev": "bun run --cwd packages/opencode --conditions=browser src/index.ts",
"dev:desktop": "bun --cwd packages/desktop dev",
"dev:desktop": "bun --cwd packages/desktop-electron dev",
"dev:web": "bun --cwd packages/app dev",
"dev:console": "ulimit -n 10240 2>/dev/null; bun run --cwd packages/console/app dev",
"dev:storybook": "bun --cwd packages/storybook storybook",
@@ -39,7 +39,7 @@
"ulid": "3.0.1",
"@kobalte/core": "0.13.11",
"@types/luxon": "3.7.1",
"@types/node": "24.12.2",
"@types/node": "22.13.9",
"@types/semver": "7.7.1",
"@tsconfig/node22": "22.0.2",
"@tsconfig/bun": "1.0.9",


@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/app",
"version": "1.14.38",
"version": "1.14.33",
"description": "",
"type": "module",
"exports": {


@@ -15,7 +15,6 @@ import { terminalFontFamily, useSettings } from "@/context/settings"
import type { LocalPTY } from "@/context/terminal"
import { disposeIfDisposable, getHoveredLinkText, setOptionIfSupported } from "@/utils/runtime-adapters"
import { terminalWriter } from "@/utils/terminal-writer"
import { terminalWebSocketURL } from "@/utils/terminal-websocket-url"
const TOGGLE_TERMINAL_ID = "terminal.toggle"
const DEFAULT_TOGGLE_TERMINAL_KEYBIND = "ctrl+`"
@@ -68,6 +67,13 @@ const debugTerminal = (...values: unknown[]) => {
console.debug("[terminal]", ...values)
}
const errorName = (err: unknown) => {
if (!err || typeof err !== "object") return
if (!("name" in err)) return
const errorName = err.name
return typeof errorName === "string" ? errorName : undefined
}
const useTerminalUiBindings = (input: {
container: HTMLDivElement
term: Term
@@ -472,34 +478,14 @@ export const Terminal = (props: TerminalProps) => {
const gone = () =>
client.pty
.get({ ptyID: id }, { throwOnError: false })
.then((result) => result.response.status === 404)
.get({ ptyID: id })
.then(() => false)
.catch((err) => {
if (errorName(err) === "NotFoundError") return true
debugTerminal("failed to inspect terminal session", err)
return false
})
const connectToken = async () => {
const result = await client.pty
.connectToken(
{ ptyID: id, directory },
{
throwOnError: false,
headers: { "x-opencode-ticket": "1" },
},
)
.catch((err: unknown) => {
if (err instanceof Error && err.message.includes("Request is not supported")) return
throw err
})
if (!result) return
if (result.response.status === 200 && result.data?.ticket) return result.data.ticket
if (result.response.status === 404 || result.response.status === 405) return
if (result.response.status === 403)
throw new Error("PTY connect ticket rejected by origin or CSRF checks. Check the server CORS config.")
throw new Error(`PTY connect ticket failed with ${result.response.status}`)
}
const retry = (err: unknown) => {
if (disposed) return
if (reconn !== undefined) return
@@ -519,30 +505,22 @@ export const Terminal = (props: TerminalProps) => {
}, ms)
}
const open = async () => {
const open = () => {
if (disposed) return
drop?.()
const ticket = await connectToken().catch((err) => {
fail(err)
return undefined
})
if (once.value) return
if (disposed) return
const next = new URL(url + `/pty/${id}/connect`)
next.searchParams.set("directory", directory)
next.searchParams.set("cursor", String(seek))
next.protocol = next.protocol === "https:" ? "wss:" : "ws:"
if (!sameOrigin && password) {
next.searchParams.set("auth_token", btoa(`${username}:${password}`))
// For same-origin requests, let the browser reuse the page's existing auth.
next.username = username
next.password = password
}
const socket = new WebSocket(
terminalWebSocketURL({
url,
id,
directory,
cursor: seek,
ticket,
sameOrigin,
username,
password,
authToken: server.current?.type === "http" ? server.current.authToken : false,
}),
)
const socket = new WebSocket(next)
socket.binaryType = "arraybuffer"
ws = socket


@@ -35,9 +35,6 @@ type TauriApi = {
const tauriApi = () => (window as unknown as { __TAURI__?: TauriApi }).__TAURI__
const currentDesktopWindow = () => tauriApi()?.window?.getCurrentWindow?.()
const currentThemeWindow = () => tauriApi()?.webviewWindow?.getCurrentWebviewWindow?.()
const titlebarHeight = 40
const minTitlebarZoom = 0.25
const windowsControlsBaseWidth = 138 // 3 native Windows caption buttons at 46px each.
export function Titlebar() {
const layout = useLayout()
@@ -54,14 +51,7 @@ export function Titlebar() {
const windows = createMemo(() => platform.platform === "desktop" && platform.os === "windows")
const web = createMemo(() => platform.platform === "web")
const zoom = () => platform.webviewZoom?.() ?? 1
const titlebarZoom = () => (windows() ? Math.max(zoom(), minTitlebarZoom) : zoom())
const counterZoom = () => (windows() && titlebarZoom() < 1 ? 1 / titlebarZoom() : 1)
const minHeight = () => {
if (mac()) return `${titlebarHeight / zoom()}px`
if (windows()) return `${titlebarHeight / Math.min(titlebarZoom(), 1)}px`
return undefined
}
const windowsControlsWidth = () => `${windowsControlsBaseWidth / Math.max(titlebarZoom(), 1)}px`
const minHeight = () => (mac() ? `${40 / zoom()}px` : undefined)
const [history, setHistory] = createStore({
stack: [] as string[],
@@ -175,161 +165,156 @@ export function Titlebar() {
return (
<header
class="h-10 shrink-0 bg-background-base relative overflow-hidden"
class="h-10 shrink-0 bg-background-base relative grid grid-cols-[minmax(0,1fr)_auto_minmax(0,1fr)] items-center"
style={{ "min-height": minHeight() }}
data-tauri-drag-region
onMouseDown={drag}
onDblClick={maximize}
>
<div
class="grid h-full min-h-full w-full grid-cols-[minmax(0,1fr)_auto_minmax(0,1fr)] items-center"
style={{ zoom: counterZoom() }}
classList={{
"flex items-center min-w-0": true,
"pl-2": !mac(),
}}
>
<div
classList={{
"flex items-center min-w-0": true,
"pl-2": !mac(),
}}
>
<Show when={mac()}>
<div class="h-full shrink-0" style={{ width: `${72 / zoom()}px` }} />
<div class="xl:hidden w-10 shrink-0 flex items-center justify-center">
<IconButton
icon="menu"
variant="ghost"
class="titlebar-icon rounded-md"
onClick={layout.mobileSidebar.toggle}
aria-label={language.t("sidebar.menu.toggle")}
aria-expanded={layout.mobileSidebar.opened()}
/>
</div>
</Show>
<Show when={!mac()}>
<div class="xl:hidden w-[48px] shrink-0 flex items-center justify-center">
<IconButton
icon="menu"
variant="ghost"
class="titlebar-icon rounded-md"
onClick={layout.mobileSidebar.toggle}
aria-label={language.t("sidebar.menu.toggle")}
aria-expanded={layout.mobileSidebar.opened()}
/>
</div>
</Show>
<div class="flex items-center gap-1 shrink-0">
<TooltipKeybind
class={web() ? "hidden xl:flex shrink-0 ml-14" : "hidden xl:flex shrink-0 ml-2"}
placement="bottom"
title={language.t("command.sidebar.toggle")}
keybind={command.keybind("sidebar.toggle")}
<Show when={mac()}>
<div class="h-full shrink-0" style={{ width: `${72 / zoom()}px` }} />
<div class="xl:hidden w-10 shrink-0 flex items-center justify-center">
<IconButton
icon="menu"
variant="ghost"
class="titlebar-icon rounded-md"
onClick={layout.mobileSidebar.toggle}
aria-label={language.t("sidebar.menu.toggle")}
aria-expanded={layout.mobileSidebar.opened()}
/>
</div>
</Show>
<Show when={!mac()}>
<div class="xl:hidden w-[48px] shrink-0 flex items-center justify-center">
<IconButton
icon="menu"
variant="ghost"
class="titlebar-icon rounded-md"
onClick={layout.mobileSidebar.toggle}
aria-label={language.t("sidebar.menu.toggle")}
aria-expanded={layout.mobileSidebar.opened()}
/>
</div>
</Show>
<div class="flex items-center gap-1 shrink-0">
<TooltipKeybind
class={web() ? "hidden xl:flex shrink-0 ml-14" : "hidden xl:flex shrink-0 ml-2"}
placement="bottom"
title={language.t("command.sidebar.toggle")}
keybind={command.keybind("sidebar.toggle")}
>
<Button
variant="ghost"
class="group/sidebar-toggle titlebar-icon w-8 h-6 p-0 box-border"
onClick={layout.sidebar.toggle}
aria-label={language.t("command.sidebar.toggle")}
aria-expanded={layout.sidebar.opened()}
>
<Button
variant="ghost"
class="group/sidebar-toggle titlebar-icon w-8 h-6 p-0 box-border"
onClick={layout.sidebar.toggle}
aria-label={language.t("command.sidebar.toggle")}
aria-expanded={layout.sidebar.opened()}
<Icon size="small" name={layout.sidebar.opened() ? "sidebar-active" : "sidebar"} />
</Button>
</TooltipKeybind>
<div class="hidden xl:flex items-center shrink-0">
<Show when={params.dir}>
<div
class="flex items-center shrink-0 w-8 mr-1"
aria-hidden={layout.sidebar.opened() ? "true" : undefined}
>
<Icon size="small" name={layout.sidebar.opened() ? "sidebar-active" : "sidebar"} />
</Button>
</TooltipKeybind>
<div class="hidden xl:flex items-center shrink-0">
<Show when={params.dir}>
<div
class="flex items-center shrink-0 w-8 mr-1"
aria-hidden={layout.sidebar.opened() ? "true" : undefined}
class="transition-opacity"
classList={{
"opacity-100 duration-120 ease-out": !layout.sidebar.opened(),
"opacity-0 duration-120 ease-in delay-0 pointer-events-none": layout.sidebar.opened(),
}}
>
<div
class="transition-opacity"
classList={{
"opacity-100 duration-120 ease-out": !layout.sidebar.opened(),
"opacity-0 duration-120 ease-in delay-0 pointer-events-none": layout.sidebar.opened(),
}}
<TooltipKeybind
placement="bottom"
title={language.t("command.session.new")}
keybind={command.keybind("session.new")}
openDelay={2000}
>
<TooltipKeybind
placement="bottom"
title={language.t("command.session.new")}
keybind={command.keybind("session.new")}
openDelay={2000}
>
<Button
variant="ghost"
icon={creating() ? "new-session-active" : "new-session"}
class="titlebar-icon w-8 h-6 p-0 box-border"
disabled={layout.sidebar.opened()}
tabIndex={layout.sidebar.opened() ? -1 : undefined}
onClick={() => {
if (!params.dir) return
navigate(`/${params.dir}/session`)
}}
aria-label={language.t("command.session.new")}
aria-current={creating() ? "page" : undefined}
/>
</TooltipKeybind>
</div>
<Button
variant="ghost"
icon={creating() ? "new-session-active" : "new-session"}
class="titlebar-icon w-8 h-6 p-0 box-border"
disabled={layout.sidebar.opened()}
tabIndex={layout.sidebar.opened() ? -1 : undefined}
onClick={() => {
if (!params.dir) return
navigate(`/${params.dir}/session`)
}}
aria-label={language.t("command.session.new")}
aria-current={creating() ? "page" : undefined}
/>
</TooltipKeybind>
</div>
</div>
</Show>
<div
class="flex items-center shrink-0"
classList={{
"-translate-x-[36px]": layout.sidebar.opened() && !!params.dir,
"duration-180 ease-out": !layout.sidebar.opened(),
"duration-180 ease-in": layout.sidebar.opened(),
}}
>
<Show when={hasProjects() && nav()}>
<div class="flex items-center gap-0 transition-transform">
<Tooltip placement="bottom" value={language.t("common.goBack")} openDelay={2000}>
<Button
variant="ghost"
icon="chevron-left"
class="titlebar-icon w-6 h-6 p-0 box-border"
disabled={!canBack()}
onClick={back}
aria-label={language.t("common.goBack")}
/>
</Tooltip>
<Tooltip placement="bottom" value={language.t("common.goForward")} openDelay={2000}>
<Button
variant="ghost"
icon="chevron-right"
class="titlebar-icon w-6 h-6 p-0 box-border"
disabled={!canForward()}
onClick={forward}
aria-label={language.t("common.goForward")}
/>
</Tooltip>
</div>
</Show>
<div
class="flex items-center shrink-0"
classList={{
"-translate-x-[36px]": layout.sidebar.opened() && !!params.dir,
"duration-180 ease-out": !layout.sidebar.opened(),
"duration-180 ease-in": layout.sidebar.opened(),
}}
>
<Show when={hasProjects() && nav()}>
<div class="flex items-center gap-0 transition-transform">
<Tooltip placement="bottom" value={language.t("common.goBack")} openDelay={2000}>
<Button
variant="ghost"
icon="chevron-left"
class="titlebar-icon w-6 h-6 p-0 box-border"
disabled={!canBack()}
onClick={back}
aria-label={language.t("common.goBack")}
/>
</Tooltip>
<Tooltip placement="bottom" value={language.t("common.goForward")} openDelay={2000}>
<Button
variant="ghost"
icon="chevron-right"
class="titlebar-icon w-6 h-6 p-0 box-border"
disabled={!canForward()}
onClick={forward}
aria-label={language.t("common.goForward")}
/>
</Tooltip>
</div>
</Show>
<div id="opencode-titlebar-left" class="flex items-center gap-3 min-w-0 px-2" />
{["beta", "dev"].includes(import.meta.env.VITE_OPENCODE_CHANNEL) && (
<div class="bg-icon-interactive-base text-[#FFF] font-medium px-2 rounded-sm uppercase font-mono">
{import.meta.env.VITE_OPENCODE_CHANNEL.toUpperCase()}
</div>
)}
</div>
<div id="opencode-titlebar-left" class="flex items-center gap-3 min-w-0 px-2" />
{["beta", "dev"].includes(import.meta.env.VITE_OPENCODE_CHANNEL) && (
<div class="bg-icon-interactive-base text-[#FFF] font-medium px-2 rounded-sm uppercase font-mono">
{import.meta.env.VITE_OPENCODE_CHANNEL.toUpperCase()}
</div>
)}
</div>
</div>
</div>
</div>
<div class="min-w-0 flex items-center justify-center pointer-events-none">
<div id="opencode-titlebar-center" class="pointer-events-auto min-w-0 flex justify-center w-fit max-w-full" />
</div>
<div class="min-w-0 flex items-center justify-center pointer-events-none">
<div id="opencode-titlebar-center" class="pointer-events-auto min-w-0 flex justify-center w-fit max-w-full" />
</div>
<div
classList={{
"flex items-center min-w-0 justify-end": true,
"pr-2": !windows(),
}}
data-tauri-drag-region
onMouseDown={drag}
>
<div id="opencode-titlebar-right" class="flex items-center gap-1 shrink-0 justify-end" />
<Show when={windows()}>
{!tauriApi() && <div class="shrink-0" style={{ width: windowsControlsWidth() }} />}
<div data-tauri-decorum-tb class="flex flex-row" />
</Show>
</div>
<div
classList={{
"flex items-center min-w-0 justify-end": true,
"pr-2": !windows(),
}}
data-tauri-drag-region
onMouseDown={drag}
>
<div id="opencode-titlebar-right" class="flex items-center gap-1 shrink-0 justify-end" />
<Show when={windows()}>
{!tauriApi() && <div class="w-36 shrink-0" />}
<div data-tauri-decorum-tb class="flex flex-row" />
</Show>
</div>
</header>
)


@@ -1,53 +0,0 @@
import { describe, expect, test } from "bun:test"
import { resolveServerList, ServerConnection } from "./server"
describe("resolveServerList", () => {
test("lets startup auth_token credentials override a persisted same-url server", () => {
const list = resolveServerList({
stored: [{ url: "https://server.example.test" }],
props: [
{
type: "http",
authToken: true,
http: {
url: "https://server.example.test",
username: "opencode",
password: "secret",
},
},
],
})
expect(list).toHaveLength(1)
expect(list[0]?.type).toBe("http")
expect(list[0]?.http).toEqual({
url: "https://server.example.test",
username: "opencode",
password: "secret",
})
expect(list[0]?.type === "http" ? list[0].authToken : false).toBe(true)
expect(ServerConnection.key(list[0]!) as string).toBe("https://server.example.test")
})
test("keeps persisted credentials when startup has no auth_token", () => {
const list = resolveServerList({
stored: [
{
url: "https://server.example.test",
username: "opencode",
password: "saved",
},
],
props: [{ type: "http", http: { url: "https://server.example.test" } }],
})
expect(list).toHaveLength(1)
expect(list[0]?.type).toBe("http")
expect(list[0]?.http).toEqual({
url: "https://server.example.test",
username: "opencode",
password: "saved",
})
expect(list[0]?.type === "http" ? list[0].authToken : true).toBeUndefined()
})
})


@@ -33,33 +33,6 @@ function isLocalHost(url: string) {
if (host === "localhost" || host === "127.0.0.1") return "local"
}
export function resolveServerList(input: {
props?: Array<ServerConnection.Any>
stored: StoredServer[]
}): Array<ServerConnection.Any> {
const servers = [
...input.stored.map((value) =>
typeof value === "string"
? {
type: "http" as const,
http: { url: value },
}
: value,
),
...(input.props ?? []),
]
const deduped = new Map<ServerConnection.Key, ServerConnection.Any>()
for (const value of servers) {
const conn: ServerConnection.Any = "type" in value ? value : { type: "http", http: value }
const key = ServerConnection.key(conn)
if (deduped.has(key) && conn.type === "http" && !conn.authToken) continue
deduped.set(key, conn)
}
return [...deduped.values()]
}
export namespace ServerConnection {
type Base = { displayName?: string }
@@ -73,7 +46,6 @@ export namespace ServerConnection {
export type Http = {
type: "http"
http: HttpBase
authToken?: boolean
} & Base
export type Sidecar = {
@@ -141,7 +113,26 @@ export const { use: useServer, provider: ServerProvider } = createSimpleContext(
const url = (x: StoredServer) => (typeof x === "string" ? x : "type" in x ? x.http.url : x.url)
const allServers = createMemo((): Array<ServerConnection.Any> => {
return resolveServerList({ stored: store.list, props: props.servers })
const servers = [
...(props.servers ?? []),
...store.list.map((value) =>
typeof value === "string"
? {
type: "http" as const,
http: { url: value },
}
: value,
),
]
const deduped = new Map(
servers.map((value) => {
const conn: ServerConnection.Any = "type" in value ? value : { type: "http", http: value }
return [ServerConnection.key(conn), conn]
}),
)
return [...deduped.values()]
})
const [state, setState] = createStore({
@@ -183,7 +174,7 @@ export const { use: useServer, provider: ServerProvider } = createSimpleContext(
function add(input: ServerConnection.Http) {
const url_ = normalizeServerUrl(input.http.url)
if (!url_) return
const conn: ServerConnection.Http = { ...input, authToken: undefined, http: { ...input.http, url: url_ } }
const conn = { ...input, http: { ...input.http, url: url_ } }
return batch(() => {
const existing = store.list.findIndex((x) => url(x) === url_)
if (existing !== -1) {


@@ -1,9 +1,6 @@
import { beforeAll, describe, expect, mock, test } from "bun:test"
type ServerKey = Parameters<typeof import("./terminal").getTerminalServerScope>[1]
let getWorkspaceTerminalCacheKey: (dir: string, scope?: string) => string
let getTerminalServerScope: typeof import("./terminal").getTerminalServerScope
let getWorkspaceTerminalCacheKey: (dir: string) => string
let getLegacyTerminalStorageKeys: (dir: string, legacySessionID?: string) => string[]
let migrateTerminalState: (value: unknown) => unknown
@@ -20,7 +17,6 @@ beforeAll(async () => {
}))
const mod = await import("./terminal")
getWorkspaceTerminalCacheKey = mod.getWorkspaceTerminalCacheKey
getTerminalServerScope = mod.getTerminalServerScope
getLegacyTerminalStorageKeys = mod.getLegacyTerminalStorageKeys
migrateTerminalState = mod.migrateTerminalState
})
@@ -29,45 +25,6 @@ describe("getWorkspaceTerminalCacheKey", () => {
test("uses workspace-only directory cache key", () => {
expect(getWorkspaceTerminalCacheKey("/repo")).toBe("/repo:__workspace__")
})
test("can include a server scope", () => {
expect(getWorkspaceTerminalCacheKey("/repo", "wsl:Debian")).toBe("wsl:Debian:/repo:__workspace__")
})
})
describe("getTerminalServerScope", () => {
test("preserves local server keys", () => {
expect(
getTerminalServerScope(
{ type: "sidecar", variant: "base", http: { url: "http://127.0.0.1:4096" } },
"sidecar" as ServerKey,
),
).toBeUndefined()
expect(
getTerminalServerScope(
{ type: "http", http: { url: "http://localhost:4096" } },
"http://localhost:4096" as ServerKey,
),
).toBeUndefined()
expect(
getTerminalServerScope({ type: "http", http: { url: "http://[::1]:4096" } }, "http://[::1]:4096" as ServerKey),
).toBeUndefined()
})
test("scopes non-local server keys", () => {
expect(
getTerminalServerScope(
{ type: "sidecar", variant: "wsl", distro: "Debian", http: { url: "http://127.0.0.1:4096" } },
"wsl:Debian" as ServerKey,
),
).toBe("wsl:Debian" as ServerKey)
expect(
getTerminalServerScope(
{ type: "http", http: { url: "https://example.com" } },
"https://example.com" as ServerKey,
),
).toBe("https://example.com" as ServerKey)
})
})
describe("getLegacyTerminalStorageKeys", () => {


@@ -4,7 +4,6 @@ import { batch, createEffect, createMemo, createRoot, on, onCleanup } from "soli
import { useParams } from "@solidjs/router"
import { useSDK } from "./sdk"
import type { Platform } from "./platform"
import { ServerConnection, useServer } from "./server"
import { defaultTitle, titleNumber } from "./terminal-title"
import { Persist, persisted, removePersisted } from "@/utils/persist"
@@ -83,31 +82,10 @@ export function migrateTerminalState(value: unknown) {
}
}
-export function getWorkspaceTerminalCacheKey(dir: string, scope?: string) {
-  if (scope) return `${scope}:${dir}:${WORKSPACE_KEY}`
+export function getWorkspaceTerminalCacheKey(dir: string) {
   return `${dir}:${WORKSPACE_KEY}`
 }
export function getTerminalServerScope(conn: ServerConnection.Any | undefined, key: ServerConnection.Key) {
if (!conn) return
if (conn.type === "sidecar" && conn.variant === "base") return
if (conn.type === "http") {
try {
const url = new URL(conn.http.url)
if (
url.hostname === "localhost" ||
url.hostname === "127.0.0.1" ||
url.hostname === "::1" ||
url.hostname === "[::1]"
)
return
} catch {
return key
}
}
return key
}
export function getLegacyTerminalStorageKeys(dir: string, legacySessionID?: string) {
if (!legacySessionID) return [`${dir}/terminal.v1`]
return [`${dir}/terminal/${legacySessionID}.v1`, `${dir}/terminal.v1`]
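The server-scoping check removed in this hunk can be exercised standalone. Below is a hedged sketch, not the real implementation: the actual code matches against a `ServerConnection` union, while `scopeForHttpServer` and `LOCAL_HOSTS` are names invented here for illustration. Local hosts yield no scope (`undefined`); any other server is scoped by its key.

```typescript
// Hypothetical standalone version of the host check from the removed
// getTerminalServerScope: local HTTP servers get no scope (undefined),
// remote servers are scoped by their server key.
const LOCAL_HOSTS = new Set(["localhost", "127.0.0.1", "::1", "[::1]"])

function scopeForHttpServer(serverUrl: string, key: string): string | undefined {
  try {
    const url = new URL(serverUrl)
    if (LOCAL_HOSTS.has(url.hostname)) return undefined
  } catch {
    // Unparseable URL: fall back to scoping by key, as the original did.
    return key
  }
  return key
}
```

Note that the WHATWG `URL` API reports IPv6 hostnames with brackets (`"[::1]"`), which is why both spellings appear in the set.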
@@ -132,16 +110,15 @@ const trimTerminal = (pty: LocalPTY) => {
}
}
-export function clearWorkspaceTerminals(dir: string, sessionIDs?: string[], platform?: Platform, scope?: string) {
-  const key = getWorkspaceTerminalCacheKey(dir, scope)
+export function clearWorkspaceTerminals(dir: string, sessionIDs?: string[], platform?: Platform) {
+  const key = getWorkspaceTerminalCacheKey(dir)
   for (const cache of caches) {
     const entry = cache.get(key)
     entry?.value.clear()
   }
-  void removePersisted(Persist.workspace(dir, scope ? `terminal:${scope}` : "terminal"), platform)
+  void removePersisted(Persist.workspace(dir, "terminal"), platform)
-  if (scope) return
const legacy = new Set(getLegacyTerminalStorageKeys(dir))
for (const id of sessionIDs ?? []) {
for (const key of getLegacyTerminalStorageKeys(dir, id)) {
@@ -153,17 +130,12 @@ export function clearWorkspaceTerminals(dir: string, sessionIDs?: string[], plat
}
}
-function createWorkspaceTerminalSession(
-  sdk: ReturnType<typeof useSDK>,
-  dir: string,
-  legacySessionID?: string,
-  scope?: string,
-) {
-  const legacy = scope ? [] : getLegacyTerminalStorageKeys(dir, legacySessionID)
+function createWorkspaceTerminalSession(sdk: ReturnType<typeof useSDK>, dir: string, legacySessionID?: string) {
+  const legacy = getLegacyTerminalStorageKeys(dir, legacySessionID)
   const [store, setStore, _, ready] = persisted(
     {
-      ...Persist.workspace(dir, scope ? `terminal:${scope}` : "terminal", legacy),
+      ...Persist.workspace(dir, "terminal", legacy),
migrate: migrateTerminalState,
},
createStore<{
@@ -385,12 +357,8 @@ export const { use: useTerminal, provider: TerminalProvider } = createSimpleCont
gate: false,
init: () => {
const sdk = useSDK()
const server = useServer()
const params = useParams()
const cache = new Map<string, TerminalCacheEntry>()
const scope = createMemo(() => {
return getTerminalServerScope(server.current, server.key)
})
caches.add(cache)
onCleanup(() => caches.delete(cache))
@@ -414,9 +382,9 @@ export const { use: useTerminal, provider: TerminalProvider } = createSimpleCont
}
}
-  const loadWorkspace = (dir: string, legacySessionID: string | undefined, serverScope: string | undefined) => {
+  const loadWorkspace = (dir: string, legacySessionID?: string) => {
     // Terminals are workspace-scoped so tabs persist while switching sessions in the same directory.
-    const key = getWorkspaceTerminalCacheKey(dir, serverScope)
+    const key = getWorkspaceTerminalCacheKey(dir)
const existing = cache.get(key)
if (existing) {
cache.delete(key)
@@ -425,7 +393,7 @@ export const { use: useTerminal, provider: TerminalProvider } = createSimpleCont
}
const entry = createRoot((dispose) => ({
-      value: createWorkspaceTerminalSession(sdk, dir, legacySessionID, serverScope),
+      value: createWorkspaceTerminalSession(sdk, dir, legacySessionID),
dispose,
}))
@@ -434,16 +402,16 @@ export const { use: useTerminal, provider: TerminalProvider } = createSimpleCont
return entry.value
}
-  const workspace = createMemo(() => loadWorkspace(params.dir!, params.id, scope()))
+  const workspace = createMemo(() => loadWorkspace(params.dir!, params.id))
createEffect(
on(
-        () => ({ dir: params.dir, id: params.id, scope: scope() }),
+        () => ({ dir: params.dir, id: params.id }),
         (next, prev) => {
           if (!prev?.dir) return
-          if (next.dir === prev.dir && next.id === prev.id && next.scope === prev.scope) return
-          loadWorkspace(prev.dir, prev.id, prev.scope).trimAll()
+          if (next.dir === prev.dir && next.id === prev.id) return
+          loadWorkspace(prev.dir, prev.id).trimAll()
},
{ defer: true },
),

View File

@@ -7,7 +7,6 @@ import { type Platform, PlatformProvider } from "@/context/platform"
import { dict as en } from "@/i18n/en"
import { dict as zh } from "@/i18n/zh"
import { handleNotificationClick } from "@/utils/notification-click"
import { authFromToken } from "@/utils/server"
import pkg from "../package.json"
import { ServerConnection } from "./context/server"
@@ -112,13 +111,6 @@ const getDefaultUrl = () => {
return getCurrentUrl()
}
const clearAuthToken = () => {
const params = new URLSearchParams(location.search)
if (!params.has("auth_token")) return
params.delete("auth_token")
history.replaceState(null, "", location.pathname + (params.size ? `?${params}` : "") + location.hash)
}
const platform: Platform = {
platform: "web",
version: pkg.version,
@@ -154,16 +146,7 @@ if (import.meta.env.VITE_SENTRY_DSN) {
}
if (root instanceof HTMLElement) {
-  const auth = authFromToken(new URLSearchParams(location.search).get("auth_token"))
-  clearAuthToken()
-  const server: ServerConnection.Http = {
-    type: "http",
-    authToken: !!auth,
-    http: {
-      url: getCurrentUrl(),
-      ...auth,
-    },
-  }
+  const server: ServerConnection.Http = { type: "http", http: { url: getCurrentUrl() } }
render(
() => (
<PlatformProvider value={platform}>

View File

@@ -35,7 +35,7 @@ import type { DragEvent } from "@thisbeyond/solid-dnd"
import { useProviders } from "@/hooks/use-providers"
import { showToast, Toast, toaster } from "@opencode-ai/ui/toast"
import { useGlobalSDK } from "@/context/global-sdk"
-import { clearWorkspaceTerminals, getTerminalServerScope } from "@/context/terminal"
+import { clearWorkspaceTerminals } from "@/context/terminal"
import { dropSessionCaches, pickSessionCacheEvictions } from "@/context/global-sync/session-cache"
import {
clearSessionPrefetchInflight,
@@ -1557,7 +1557,6 @@ export default function Layout(props: ParentProps) {
directory,
sessions.map((s) => s.id),
platform,
getTerminalServerScope(server.current, server.key),
)
await globalSDK.client.instance.dispose({ directory }).catch(() => undefined)

View File

@@ -37,7 +37,6 @@ export function TerminalPanel() {
const [store, setStore] = createStore({
autoCreated: false,
activeDraggable: undefined as string | undefined,
recovered: {} as Record<string, boolean>,
view: typeof window === "undefined" ? 1000 : (window.visualViewport?.height ?? window.innerHeight),
})
@@ -146,21 +145,6 @@ export function TerminalPanel() {
const all = terminal.all
const ids = createMemo(() => all().map((pty) => pty.id))
const recoverTerminal = (key: string, id: string, clone: (id: string) => Promise<void>) => {
if (store.recovered[key]) return
setStore("recovered", key, true)
void clone(id)
}
const terminalRecoveryKey = (pty: { id: string; title: string; titleNumber: number }) => {
return String(pty.titleNumber || pty.title || pty.id)
}
const markTerminalConnected = (key: string, id: string, trim: (id: string) => void) => {
setStore("recovered", key, false)
trim(id)
}
const handleTerminalDragStart = (event: unknown) => {
const id = getDraggableId(event)
if (!id) return
@@ -296,9 +280,9 @@ export function TerminalPanel() {
<Terminal
pty={pty()}
autoFocus={opened()}
-              onConnect={() => markTerminalConnected(terminalRecoveryKey(pty()), id, ops.trim)}
+              onConnect={() => ops.trim(id)}
               onCleanup={ops.update}
-              onConnectError={() => recoverTerminal(terminalRecoveryKey(pty()), id, ops.clone)}
+              onConnectError={() => ops.clone(id)}
/>
</div>
)}

View File

@@ -1,23 +0,0 @@
import { describe, expect, test } from "bun:test"
import { authFromToken, authTokenFromCredentials } from "./server"
describe("authFromToken", () => {
test("decodes basic auth credentials from auth_token", () => {
expect(authFromToken(btoa("kit:secret"))).toEqual({ username: "kit", password: "secret" })
})
test("defaults blank username to opencode", () => {
expect(authFromToken(btoa(":secret"))).toEqual({ username: "opencode", password: "secret" })
})
test("ignores malformed tokens", () => {
expect(authFromToken("not base64")).toBeUndefined()
expect(authFromToken(btoa("missing-separator"))).toBeUndefined()
})
})
describe("authTokenFromCredentials", () => {
test("encodes credentials with the default username", () => {
expect(authTokenFromCredentials({ password: "secret" })).toBe(btoa("opencode:secret"))
})
})

View File

@@ -1,21 +1,5 @@
import { createOpencodeClient } from "@opencode-ai/sdk/v2/client"
import type { ServerConnection } from "@/context/server"
import { decode64 } from "@/utils/base64"
export function authTokenFromCredentials(input: { username?: string; password: string }) {
return btoa(`${input.username ?? "opencode"}:${input.password}`)
}
export function authFromToken(token: string | null) {
const decoded = decode64(token ?? undefined)
if (!decoded) return
const separator = decoded.indexOf(":")
if (separator === -1) return
return {
username: decoded.slice(0, separator) || "opencode",
password: decoded.slice(separator + 1),
}
}
export function createSdkForServer({
server,
@@ -26,7 +10,7 @@ export function createSdkForServer({
const auth = (() => {
if (!server.password) return
return {
-      Authorization: `Basic ${authTokenFromCredentials({ username: server.username, password: server.password })}`,
+      Authorization: `Basic ${btoa(`${server.username ?? "opencode"}:${server.password}`)}`,
}
})()
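For reference, the deleted Basic auth helpers can be sketched standalone. This is an approximation: the global `atob` stands in for the project's `decode64` utility, so the error handling is hedged rather than a faithful copy.

```typescript
// Sketch of the removed Basic auth helpers. atob stands in for the
// project's decode64; both map invalid base64 to "no credentials".
function authTokenFromCredentials(input: { username?: string; password: string }): string {
  return btoa(`${input.username ?? "opencode"}:${input.password}`)
}

function authFromToken(token: string | null): { username: string; password: string } | undefined {
  let decoded: string
  try {
    decoded = atob(token ?? "")
  } catch {
    return undefined // not valid base64
  }
  const separator = decoded.indexOf(":")
  if (separator === -1) return undefined // no "user:pass" shape
  return {
    username: decoded.slice(0, separator) || "opencode",
    password: decoded.slice(separator + 1),
  }
}
```

The deleted tests above (decode `kit:secret`, default blank usernames to `opencode`, reject malformed tokens) pass against this sketch.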

View File

@@ -1,52 +0,0 @@
import { describe, expect, test } from "bun:test"
import { terminalWebSocketURL } from "./terminal-websocket-url"
describe("terminalWebSocketURL", () => {
test("uses query auth without embedding credentials in websocket URL", () => {
const url = terminalWebSocketURL({
url: "http://127.0.0.1:49365",
id: "pty_test",
directory: "/tmp/project",
cursor: 0,
sameOrigin: false,
username: "opencode",
password: "secret",
})
expect(url.protocol).toBe("ws:")
expect(url.username).toBe("")
expect(url.password).toBe("")
expect(url.searchParams.get("auth_token")).toBe(btoa("opencode:secret"))
})
test("omits query auth for same-origin saved credentials", () => {
const url = terminalWebSocketURL({
url: "https://app.example.test",
id: "pty_test",
directory: "/tmp/project",
cursor: 10,
sameOrigin: true,
username: "opencode",
password: "secret",
})
expect(url.protocol).toBe("wss:")
expect(url.searchParams.has("auth_token")).toBe(false)
})
test("uses query auth for same-origin credentials from auth_token", () => {
const url = terminalWebSocketURL({
url: "https://app.example.test",
id: "pty_test",
directory: "/tmp/project",
cursor: 10,
sameOrigin: true,
username: "opencode",
password: "secret",
authToken: true,
})
expect(url.protocol).toBe("wss:")
expect(url.searchParams.get("auth_token")).toBe(btoa("opencode:secret"))
})
})

View File

@@ -1,28 +0,0 @@
import { authTokenFromCredentials } from "@/utils/server"
export function terminalWebSocketURL(input: {
url: string
id: string
directory: string
cursor: number
ticket?: string
sameOrigin?: boolean
username?: string
password?: string
authToken?: boolean
}) {
const next = new URL(`${input.url}/pty/${input.id}/connect`)
next.searchParams.set("directory", input.directory)
next.searchParams.set("cursor", String(input.cursor))
next.protocol = next.protocol === "https:" ? "wss:" : "ws:"
if (input.ticket) {
next.searchParams.set("ticket", input.ticket)
return next
}
if (input.password && (!input.sameOrigin || input.authToken))
next.searchParams.set(
"auth_token",
authTokenFromCredentials({ username: input.username, password: input.password }),
)
return next
}
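The deleted helper's behavior is reproducible with the standard `URL` API. A minimal sketch (the `ticket` branch is omitted, and the base64 encoding that the original delegated to `authTokenFromCredentials` is inlined):

```typescript
// Sketch of the removed terminalWebSocketURL: swap http(s) for ws(s) and
// attach auth_token as a query parameter only for cross-origin connections
// (or when same-origin credentials came from an auth_token to begin with).
function ptyWebSocketURL(input: {
  url: string
  id: string
  directory: string
  cursor: number
  sameOrigin?: boolean
  username?: string
  password?: string
  authToken?: boolean
}): URL {
  const next = new URL(`${input.url}/pty/${input.id}/connect`)
  next.searchParams.set("directory", input.directory)
  next.searchParams.set("cursor", String(input.cursor))
  // http: and ws: are both "special" schemes, so this reassignment is allowed.
  next.protocol = next.protocol === "https:" ? "wss:" : "ws:"
  if (input.password && (!input.sameOrigin || input.authToken))
    next.searchParams.set("auth_token", btoa(`${input.username ?? "opencode"}:${input.password}`))
  return next
}
```

Credentials never end up in the URL's userinfo component, which matches the first deleted test above.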

View File

@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/console-app",
"version": "1.14.38",
"version": "1.14.33",
"type": "module",
"license": "MIT",
"scripts": {

View File

@@ -2,11 +2,11 @@ import type { APIEvent } from "@solidjs/start"
import type { DownloadPlatform } from "../types"
const prodAssetNames: Record<string, string> = {
"darwin-aarch64-dmg": "opencode-desktop-mac-arm64.dmg",
"darwin-x64-dmg": "opencode-desktop-mac-x64.dmg",
"windows-x64-nsis": "opencode-desktop-win-x64.exe",
"darwin-aarch64-dmg": "opencode-desktop-darwin-aarch64.dmg",
"darwin-x64-dmg": "opencode-desktop-darwin-x64.dmg",
"windows-x64-nsis": "opencode-desktop-windows-x64.exe",
"linux-x64-deb": "opencode-desktop-linux-amd64.deb",
"linux-x64-appimage": "opencode-desktop-linux-x86_64.AppImage",
"linux-x64-appimage": "opencode-desktop-linux-amd64.AppImage",
"linux-x64-rpm": "opencode-desktop-linux-x86_64.rpm",
} satisfies Record<DownloadPlatform, string>
@@ -32,6 +32,13 @@ export async function GET({ params: { platform, channel } }: APIEvent) {
const resp = await fetch(
`https://github.com/anomalyco/${channel === "stable" ? "opencode" : "opencode-beta"}/releases/latest/download/${assetName}`,
{
cf: {
// in case gh releases has rate limits
cacheTtl: 60 * 5,
cacheEverything: true,
},
} as any,
)
const downloadName = downloadNames[platform]

View File

@@ -158,13 +158,11 @@ export async function handler(
Object.entries(obj).flatMap(([k, v]) => {
if (Array.isArray(v)) return [[k, v]]
if (typeof v === "object") return [[k, replacer(v)]]
if (typeof v === "string") {
if (v === "$ip") return [[k, ip]]
if (v === "$workspace") return authInfo?.workspaceID ? [[k, authInfo?.workspaceID]] : []
if (v.startsWith("$header.")) {
const headerValue = input.request.headers.get(v.slice(8))
return headerValue ? [[k, headerValue]] : []
}
if (v === "$ip") return [[k, ip]]
if (v === "$workspace") return authInfo?.workspaceID ? [[k, authInfo?.workspaceID]] : []
if (v.startsWith("$header.")) {
const headerValue = input.request.headers.get(v.slice(8))
return headerValue ? [[k, headerValue]] : []
}
return [[k, v]]
}),
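The substitution rules shown in the hunk above (`$ip`, `$workspace`, `$header.*`) are easy to test in isolation. A sketch with invented names (`substitute`, `ctx`) standing in for the handler's closure state:

```typescript
// Standalone sketch of the placeholder substitution: string values like
// "$ip", "$workspace", and "$header.x-foo" are replaced at request time;
// unresolvable placeholders drop their key entirely.
function substitute(
  obj: Record<string, unknown>,
  ctx: { ip: string; workspaceID?: string; headers: Map<string, string> },
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(obj).flatMap(([k, v]): [string, unknown][] => {
      if (typeof v !== "string") return [[k, v]]
      if (v === "$ip") return [[k, ctx.ip]]
      if (v === "$workspace") return ctx.workspaceID ? [[k, ctx.workspaceID]] : []
      if (v.startsWith("$header.")) {
        // "$header." is 8 characters, so slice(8) yields the header name.
        const headerValue = ctx.headers.get(v.slice(8))
        return headerValue ? [[k, headerValue]] : []
      }
      return [[k, v]]
    }),
  )
}
```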
@@ -919,13 +917,6 @@ export async function handler(
"tokens.cache_read": cacheReadTokens,
"tokens.cache_write_5m": cacheWrite5mTokens,
"tokens.cache_write_1h": cacheWrite1hTokens,
"cost.input.microcents": centsToMicroCents(inputCost),
"cost.output.microcents": centsToMicroCents(outputCost),
"cost.reasoning.microcents": reasoningCost ? centsToMicroCents(reasoningCost) : undefined,
"cost.cache_read.microcents": cacheReadCost ? centsToMicroCents(cacheReadCost) : undefined,
"cost.cache_write.microcents": cacheWrite5mCost ? centsToMicroCents(cacheWrite5mCost) : undefined,
"cost.total.microcents": centsToMicroCents(totalCostInCent),
// deprecated - remove after May 20, 2026
"cost.input": Math.round(inputCost),
"cost.output": Math.round(outputCost),
"cost.reasoning": reasoningCost ? Math.round(reasoningCost) : undefined,

View File

@@ -1,7 +1,7 @@
{
"$schema": "https://json.schemastore.org/package.json",
"name": "@opencode-ai/console-core",
"version": "1.14.38",
"version": "1.14.33",
"private": true,
"type": "module",
"license": "MIT",

View File

@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/console-function",
"version": "1.14.38",
"version": "1.14.33",
"$schema": "https://json.schemastore.org/package.json",
"private": true,
"type": "module",

View File

@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/console-mail",
"version": "1.14.38",
"version": "1.14.33",
"dependencies": {
"@jsx-email/all": "2.2.3",
"@jsx-email/cli": "1.4.3",

View File

@@ -1,6 +1,6 @@
{
"$schema": "https://json.schemastore.org/package.json",
"version": "1.14.38",
"version": "1.14.33",
"name": "@opencode-ai/core",
"type": "module",
"license": "MIT",

View File

@@ -24,7 +24,6 @@ export namespace AppFileSystem {
readonly isDir: (path: string) => Effect.Effect<boolean>
readonly isFile: (path: string) => Effect.Effect<boolean>
readonly existsSafe: (path: string) => Effect.Effect<boolean>
readonly readFileStringSafe: (path: string) => Effect.Effect<string | undefined, Error>
readonly readJson: (path: string) => Effect.Effect<unknown, Error>
readonly writeJson: (path: string, data: unknown, mode?: number) => Effect.Effect<void, Error>
readonly ensureDir: (path: string) => Effect.Effect<void, Error>
@@ -48,12 +47,6 @@ export namespace AppFileSystem {
return yield* fs.exists(path).pipe(Effect.orElseSucceed(() => false))
})
const readFileStringSafe = Effect.fn("FileSystem.readFileStringSafe")(function* (path: string) {
return yield* fs
.readFileString(path)
.pipe(Effect.catchReason("PlatformError", "NotFound", () => Effect.succeed(undefined)))
})
const isDir = Effect.fn("FileSystem.isDir")(function* (path: string) {
const info = yield* fs.stat(path).pipe(Effect.catch(() => Effect.void))
return info?.type === "Directory"
@@ -170,7 +163,6 @@ export namespace AppFileSystem {
return Service.of({
...fs,
existsSafe,
readFileStringSafe,
isDir,
isFile,
readDirectoryEntries,

View File

@@ -72,6 +72,13 @@ export const Flag = {
OPENCODE_ENABLE_EXA: truthy("OPENCODE_ENABLE_EXA") || OPENCODE_EXPERIMENTAL || truthy("OPENCODE_EXPERIMENTAL_EXA"),
OPENCODE_EXPERIMENTAL_BASH_DEFAULT_TIMEOUT_MS: number("OPENCODE_EXPERIMENTAL_BASH_DEFAULT_TIMEOUT_MS"),
OPENCODE_EXPERIMENTAL_OUTPUT_TOKEN_MAX: number("OPENCODE_EXPERIMENTAL_OUTPUT_TOKEN_MAX"),
// Opt-in to the LLM-native stream path in `session/llm.ts`. Today this
// routes a narrow slice of sessions (text-only, Anthropic, with explicit
// `nativeMessages` populated by the caller) through the
// `@opencode-ai/llm` core stack instead of `streamText` from the AI SDK.
// Everything else falls through to the existing path. The flag will go
// away once parity is proven across all six protocols.
OPENCODE_EXPERIMENTAL_LLM_NATIVE: OPENCODE_EXPERIMENTAL || truthy("OPENCODE_EXPERIMENTAL_LLM_NATIVE"),
OPENCODE_EXPERIMENTAL_OXFMT: OPENCODE_EXPERIMENTAL || truthy("OPENCODE_EXPERIMENTAL_OXFMT"),
OPENCODE_EXPERIMENTAL_LSP_TY: truthy("OPENCODE_EXPERIMENTAL_LSP_TY"),
OPENCODE_EXPERIMENTAL_LSP_TOOL: OPENCODE_EXPERIMENTAL || truthy("OPENCODE_EXPERIMENTAL_LSP_TOOL"),
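The `truthy(...)` and `number(...)` env readers used throughout this flag table are not shown in the hunk; the real implementations live elsewhere in the package, so treat the following as an assumed sketch of their shape rather than the actual code:

```typescript
// Assumed shape of the env readers backing the Flag table above.
function truthy(name: string): boolean {
  const value = process.env[name]?.toLowerCase()
  return value === "true" || value === "1"
}

function number(name: string): number | undefined {
  const value = process.env[name]
  if (value === undefined || value === "") return undefined
  const parsed = Number(value)
  return Number.isNaN(parsed) ? undefined : parsed
}
```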

View File

@@ -71,8 +71,6 @@ export const layer = Layer.effect(
Effect.sync(() => Service.of(make())),
)
export const defaultLayer = layer
export const layerWith = (input: Partial<Interface>) =>
Layer.effect(
Service,

View File

@@ -65,34 +65,6 @@ describe("AppFileSystem", () => {
)
})
describe("readFileStringSafe", () => {
it(
"returns file contents when file exists",
Effect.gen(function* () {
const fs = yield* AppFileSystem.Service
const filesys = yield* FileSystem.FileSystem
const tmp = yield* filesys.makeTempDirectoryScoped()
const file = path.join(tmp, "exists.txt")
yield* filesys.writeFileString(file, "hello")
const result = yield* fs.readFileStringSafe(file)
expect(result).toBe("hello")
}),
)
it(
"returns undefined for missing file (NotFound)",
Effect.gen(function* () {
const fs = yield* AppFileSystem.Service
const filesys = yield* FileSystem.FileSystem
const tmp = yield* filesys.makeTempDirectoryScoped()
const result = yield* fs.readFileStringSafe(path.join(tmp, "does-not-exist.txt"))
expect(result).toBeUndefined()
}),
)
})
describe("readJson / writeJson", () => {
it(
"round-trips JSON data",

packages/desktop-electron/.gitignore vendored Normal file
View File

@@ -0,0 +1,28 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*
node_modules
dist
dist-ssr
*.local
# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?
out/
resources/opencode-cli*
resources/icons

View File

@@ -0,0 +1,4 @@
# Desktop package notes
- Renderer process should only call `window.api` from `src/preload`.
- Main process should register IPC handlers in `src/main/ipc.ts`.

View File

@@ -0,0 +1,32 @@
# OpenCode Desktop
Native OpenCode desktop app, built with Tauri v2.
## Development
From the repo root:
```bash
bun install
bun run --cwd packages/desktop tauri dev
```
This starts the Vite dev server on http://localhost:1420 and opens the native window.
If you only want the web dev server (no native shell):
```bash
bun run --cwd packages/desktop dev
```
## Build
To create a production `dist/` and build the native app bundle:
```bash
bun run --cwd packages/desktop tauri build
```
## Prerequisites
Running the desktop app requires additional Tauri dependencies (Rust toolchain, platform-specific libraries). See the [Tauri prerequisites](https://v2.tauri.app/start/prerequisites/) for setup instructions.

View File

@@ -27,7 +27,7 @@ const channel = (() => {
})()
const getBase = (): Configuration => ({
artifactName: "opencode-desktop-${os}-${arch}.${ext}",
artifactName: "opencode-electron-${os}-${arch}.${ext}",
directories: {
output: "dist",
buildResources: "resources",

View File

(Binary image files changed in this diff; the before/after previews report identical dimensions and sizes for each, so the previews are omitted here.)
Some files were not shown because too many files have changed in this diff.