Compare commits


49 Commits

Author SHA1 Message Date
opencode
8ba48ed71d release: v1.0.68 2025-11-16 20:38:48 +00:00
Aiden Cline
cf266f6162 fix: promptCacheKey set unnecessarily 2025-11-16 14:32:57 -06:00
GitHub Action
1e6589526d ignore: update download stats 2025-11-16 2025-11-16 12:04:11 +00:00
Frank
f6b3ffaf64 wip: zen 2025-11-16 03:32:13 -05:00
GitHub Action
5d765d63d4 chore: format code 2025-11-16 08:30:36 +00:00
Frank
0e12dd62a3 zen: usage paging 2025-11-16 03:29:52 -05:00
opencode
2b957b5d1c release: v1.0.67 2025-11-16 07:49:52 +00:00
GitHub Action
31c7a0157c chore: format code 2025-11-16 07:44:06 +00:00
Aiden Cline
e728b94bca fix: panic when theme has 'none' 2025-11-16 01:43:23 -06:00
opencode
49040c0130 release: v1.0.66 2025-11-16 07:27:25 +00:00
Aiden Cline
0d05238ee6 fix: initial val 2025-11-16 01:14:49 -06:00
Aiden Cline
9b8a7da1e6 fix: history jsonl file corruption cases (#4364) 2025-11-16 00:50:13 -06:00
Zeno Jiricek
61fd21182c docs: mise installation command (#2938) 2025-11-15 21:44:28 -06:00
GitHub Action
487c2b5e76 chore: format code 2025-11-16 03:38:13 +00:00
xiaojie.zj
0e4703b227 add: add zenmux doc and header (#3597)
Co-authored-by: xiaojie.zj <xiaojie.zj@antgroup.com>
2025-11-15 21:37:30 -06:00
Alvin Johansson
84e0232bd5 Add Flexoki theme (#3986) 2025-11-15 21:28:13 -06:00
Luke Parker
35fbb011b2 fix: Diff view now ignores line endings changes/windows autocrlf (#4356) 2025-11-15 21:18:39 -06:00
Aiden Cline
6527a123f0 fix aur build (#4359) 2025-11-15 20:16:19 -06:00
Aiden Cline
0377cfd37c fix: omit ref for todo tool 2025-11-15 19:19:36 -06:00
Aiden Cline
edc933d816 tweak: make zod error more prompty 2025-11-15 13:19:24 -06:00
GitHub Action
0d608f6014 ignore: update download stats 2025-11-15 2025-11-15 12:04:09 +00:00
Chris Olszewski
69a45ef7d7 fix: snapshot history when running from git worktrees (#4312) 2025-11-15 01:02:00 -06:00
Baptiste Cavallo
1056b36eae experimental batch tool (#2983)
Co-authored-by: GitHub Action <action@github.com>
2025-11-15 00:54:36 -06:00
Aiden Cline
35c737ac68 tweak: only show dropdown for 3+ items (#4345) 2025-11-14 23:45:48 -06:00
Abílio Costa
725a2c2e95 docs: clarify that config files are merged, not replaced (#4342)
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-14 17:49:47 -06:00
Tyler Gannon
c724d2392f fix: replace union type with enum "true"/"false" in /find/file endpoint (#4338) 2025-11-14 17:48:23 -06:00
Frank
f5230d1f02 fix: incorrect sonnet price calculation 2025-11-14 18:46:43 -05:00
GitHub Action
078111bd96 chore: format code 2025-11-14 22:44:36 +00:00
sredfern
736f8882f5 fix(provider): support local file paths for custom providers (#4323) 2025-11-14 16:43:59 -06:00
Brian Cheung
37cf365927 feat: support images in mcp tool responses (#4100)
Co-authored-by: opencode-agent[bot] <opencode-agent[bot]@users.noreply.github.com>
Co-authored-by: rekram1-node <rekram1-node@users.noreply.github.com>
2025-11-14 15:00:52 -06:00
Aiden Cline
b939470302 fix: add azure exclusion 2025-11-14 11:54:00 -06:00
Aiden Cline
ef4b2baedc set verbosity to low for gpt-5.1 (match codex) 2025-11-14 11:52:29 -06:00
Dax Raad
64d28ea457 fix sdk types 2025-11-14 12:42:46 -05:00
Dax Raad
2520780846 fix sdk types 2025-11-14 12:42:46 -05:00
Shantur Rathore
986c60353e set promptCacheKey for openai compatible providers (#4203)
Co-authored-by: GitHub Action <action@github.com>
2025-11-14 11:41:01 -06:00
Dax Raad
5fc26c958a add global.event.subscribe() to sdk 2025-11-14 12:32:43 -05:00
Frank
c1cf9cda6a doc: add baseten provider 2025-11-14 12:19:58 -05:00
GitHub Action
10d376eab2 ignore: update download stats 2025-11-14 2025-11-14 12:04:48 +00:00
Frank
53fc8a861b zen: add gpt-5-nano model 2025-11-14 00:59:42 -05:00
Frank
1d8330331c zen: use gpt-5-nano as small model 2025-11-14 00:59:00 -05:00
Frank
7a03c7fe38 zen: add gpt5.1 to docs 2025-11-13 23:47:38 -05:00
Frank
09bd32169c zen: hide alpha models 2025-11-13 23:10:06 -05:00
Dax Raad
7ec32f834e improve read tool end-of-file detection to prevent infinite loops 2025-11-13 21:41:06 -05:00
GitHub Action
205492c7e8 chore: format code 2025-11-14 01:16:58 +00:00
Aiden Cline
4c2e888709 no mr llm, you may not read that 2025-11-13 19:16:07 -06:00
opencode
c78fd097d1 release: v1.0.65 2025-11-14 00:10:30 +00:00
Dax Raad
340966195b handle config errors gracefully 2025-11-13 18:59:09 -05:00
GitHub Action
92604b391b chore: format code 2025-11-13 22:39:53 +00:00
Aiden Cline
0c51feb9c2 fix: max tokens when using models like opus with providers other than anthropic (#4307) 2025-11-13 16:39:09 -06:00
59 changed files with 1946 additions and 912 deletions

View File

@@ -4,7 +4,7 @@ on:
push:
branches:
- dev
- - windows
+ - fix-build
- v0
concurrency: ${{ github.workflow }}-${{ github.ref }}

View File

@@ -30,6 +30,7 @@ scoop bucket add extras; scoop install extras/opencode # Windows
choco install opencode # Windows
brew install opencode # macOS and Linux
paru -S opencode-bin # Arch Linux
mise use --pin -g ubi:sst/opencode # Any OS
```
> [!TIP]

View File

@@ -139,3 +139,6 @@
| 2025-11-11 | 729,769 (+7,481) | 677,501 (+9,276) | 1,407,270 (+16,757) |
| 2025-11-12 | 740,180 (+10,411) | 686,454 (+8,953) | 1,426,634 (+19,364) |
| 2025-11-13 | 749,905 (+9,725) | 696,157 (+9,703) | 1,446,062 (+19,428) |
| 2025-11-14 | 759,928 (+10,023) | 705,237 (+9,080) | 1,465,165 (+19,103) |
| 2025-11-15 | 765,955 (+6,027) | 712,870 (+7,633) | 1,478,825 (+13,660) |
| 2025-11-16 | 771,069 (+5,114) | 716,596 (+3,726) | 1,487,665 (+8,840) |

View File

@@ -40,7 +40,7 @@
},
"packages/console/core": {
"name": "@opencode-ai/console-core",
"version": "1.0.64",
"version": "1.0.68",
"dependencies": {
"@aws-sdk/client-sts": "3.782.0",
"@jsx-email/render": "1.1.1",
@@ -67,7 +67,7 @@
},
"packages/console/function": {
"name": "@opencode-ai/console-function",
"version": "1.0.64",
"version": "1.0.68",
"dependencies": {
"@ai-sdk/anthropic": "2.0.0",
"@ai-sdk/openai": "2.0.2",
@@ -91,7 +91,7 @@
},
"packages/console/mail": {
"name": "@opencode-ai/console-mail",
"version": "1.0.64",
"version": "1.0.68",
"dependencies": {
"@jsx-email/all": "2.2.3",
"@jsx-email/cli": "1.4.3",
@@ -115,7 +115,7 @@
},
"packages/desktop": {
"name": "@opencode-ai/desktop",
"version": "1.0.64",
"version": "1.0.68",
"dependencies": {
"@kobalte/core": "catalog:",
"@opencode-ai/sdk": "workspace:*",
@@ -155,7 +155,7 @@
},
"packages/function": {
"name": "@opencode-ai/function",
"version": "1.0.64",
"version": "1.0.68",
"dependencies": {
"@octokit/auth-app": "8.0.1",
"@octokit/rest": "22.0.0",
@@ -171,7 +171,7 @@
},
"packages/opencode": {
"name": "opencode",
"version": "1.0.64",
"version": "1.0.68",
"bin": {
"opencode": "./bin/opencode",
},
@@ -249,7 +249,7 @@
},
"packages/plugin": {
"name": "@opencode-ai/plugin",
"version": "1.0.64",
"version": "1.0.68",
"dependencies": {
"@opencode-ai/sdk": "workspace:*",
"zod": "catalog:",
@@ -269,7 +269,7 @@
},
"packages/sdk/js": {
"name": "@opencode-ai/sdk",
"version": "1.0.64",
"version": "1.0.68",
"devDependencies": {
"@hey-api/openapi-ts": "0.81.0",
"@tsconfig/node22": "catalog:",
@@ -280,7 +280,7 @@
},
"packages/slack": {
"name": "@opencode-ai/slack",
"version": "1.0.64",
"version": "1.0.68",
"dependencies": {
"@opencode-ai/sdk": "workspace:*",
"@slack/bolt": "^3.17.1",
@@ -293,7 +293,7 @@
},
"packages/ui": {
"name": "@opencode-ai/ui",
"version": "1.0.64",
"version": "1.0.68",
"dependencies": {
"@kobalte/core": "catalog:",
"@opencode-ai/sdk": "workspace:*",
@@ -323,7 +323,7 @@
},
"packages/web": {
"name": "@opencode-ai/web",
"version": "1.0.64",
"version": "1.0.68",
"dependencies": {
"@astrojs/cloudflare": "12.6.3",
"@astrojs/markdown-remark": "6.3.1",

View File

@@ -7,7 +7,7 @@
"dev:remote": "VITE_AUTH_URL=https://auth.dev.opencode.ai bun sst shell --stage=dev bun dev",
"build": "./script/generate-sitemap.ts && vinxi build && ../../opencode/script/schema.ts ./.output/public/config.json",
"start": "vinxi start",
"version": "1.0.64"
"version": "1.0.68"
},
"dependencies": {
"@ibm/plex": "6.4.1",

View File

@@ -22,8 +22,8 @@ const getModelsInfo = query(async (workspaceID: string) => {
return withActor(async () => {
return {
all: Object.entries(ZenData.list().models)
.filter(([id, _model]) => !["claude-3-5-haiku", "minimax-m2"].includes(id))
.filter(([id, _model]) => !id.startsWith("an-"))
.filter(([id, _model]) => !["claude-3-5-haiku"].includes(id))
.filter(([id, _model]) => !id.startsWith("alpha-"))
.sort(([_idA, modelA], [_idB, modelB]) => modelA.name.localeCompare(modelB.name))
.map(([id, model]) => ({ id, name: model.name })),
disabled: await Model.listDisabled(),

View File

@@ -1,88 +1,111 @@
.root {
[data-component="empty-state"] {
padding: var(--space-20) var(--space-6);
text-align: center;
border: 1px dashed var(--color-border);
border-radius: var(--border-radius-sm);
display: flex;
flex-direction: column;
gap: var(--space-2);
/* Empty state */
[data-component="empty-state"] {
padding: var(--space-20) var(--space-6);
text-align: center;
border: 1px dashed var(--color-border);
border-radius: var(--border-radius-sm);
p {
line-height: 1.5;
font-size: var(--font-size-sm);
color: var(--color-text-muted);
}
}
[data-slot="usage-table"] {
overflow-x: auto;
}
[data-slot="usage-table-element"] {
width: 100%;
border-collapse: collapse;
p {
font-size: var(--font-size-sm);
color: var(--color-text-muted);
}
}
thead {
border-bottom: 1px solid var(--color-border);
/* Table container */
[data-slot="usage-table"] {
overflow-x: auto;
}
/* Table element */
[data-slot="usage-table-element"] {
width: 100%;
border-collapse: collapse;
font-size: var(--font-size-sm);
thead {
border-bottom: 1px solid var(--color-border);
}
th {
padding: var(--space-3) var(--space-4);
text-align: left;
font-weight: normal;
color: var(--color-text-muted);
text-transform: uppercase;
}
td {
padding: var(--space-3) var(--space-4);
border-bottom: 1px solid var(--color-border-muted);
color: var(--color-text-muted);
font-family: var(--font-mono);
&[data-slot="usage-date"] {
color: var(--color-text);
}
th {
padding: var(--space-3) var(--space-4);
text-align: left;
font-weight: normal;
color: var(--color-text-muted);
text-transform: uppercase;
&[data-slot="usage-model"] {
font-family: var(--font-sans);
color: var(--color-text-secondary);
max-width: 200px;
word-break: break-word;
}
td {
padding: var(--space-3) var(--space-4);
border-bottom: 1px solid var(--color-border-muted);
color: var(--color-text-muted);
font-family: var(--font-mono);
&[data-slot="usage-cost"] {
color: var(--color-text);
font-weight: 500;
}
}
&[data-slot="usage-date"] {
color: var(--color-text);
}
tbody tr:last-child td {
border-bottom: none;
}
}
&[data-slot="usage-model"] {
font-family: var(--font-sans);
font-weight: 400;
color: var(--color-text-secondary);
max-width: 200px;
word-break: break-word;
}
/* Pagination */
[data-slot="pagination"] {
display: flex;
justify-content: flex-end;
gap: var(--space-2);
padding: var(--space-4) 0;
border-top: 1px solid var(--color-border-muted);
margin-top: var(--space-2);
&[data-slot="usage-cost"] {
color: var(--color-text);
}
button {
padding: var(--space-2) var(--space-4);
background: var(--color-bg-secondary);
border: 1px solid var(--color-border);
border-radius: var(--border-radius-sm);
color: var(--color-text);
font-size: var(--font-size-sm);
cursor: pointer;
transition: all 0.15s ease;
&:hover:not(:disabled) {
background: var(--color-bg-tertiary);
border-color: var(--color-border-hover);
}
tbody tr {
&:last-child td {
border-bottom: none;
}
}
@media (max-width: 40rem) {
th,
td {
padding: var(--space-2) var(--space-3);
font-size: var(--font-size-xs);
}
th {
&:nth-child(2) /* Model */ {
display: none;
}
}
td {
&:nth-child(2) /* Model */ {
display: none;
}
}
&:disabled {
opacity: 0.5;
cursor: not-allowed;
}
}
}
/* Mobile responsive */
@media (max-width: 40rem) {
[data-slot="usage-table-element"] {
th,
td {
padding: var(--space-2) var(--space-3);
font-size: var(--font-size-xs);
}
/* Hide Model column on mobile */
th:nth-child(2),
td:nth-child(2) {
display: none;
}
}
}

View File

@@ -1,91 +1,59 @@
import { Billing } from "@opencode-ai/console-core/billing.js"
- import { query, useParams, createAsync } from "@solidjs/router"
- import { createMemo, For, Show } from "solid-js"
+ import { createAsync, query, useParams } from "@solidjs/router"
+ import { createMemo, For, Show, createEffect } from "solid-js"
import { formatDateUTC, formatDateForTable } from "../common"
import { withActor } from "~/context/auth.withActor"
- import styles from "./usage-section.module.css"
+ import "./usage-section.module.css"
import { createStore } from "solid-js/store"
const getUsageInfo = query(async (workspaceID: string) => {
const PAGE_SIZE = 50
async function getUsageInfo(workspaceID: string, page: number) {
"use server"
return withActor(async () => {
- return await Billing.usages()
+ return await Billing.usages(page, PAGE_SIZE)
}, workspaceID)
}, "usage.list")
}
const queryUsageInfo = query(getUsageInfo, "usage.list")
export function UsageSection() {
const params = useParams()
// ORIGINAL CODE - COMMENTED OUT FOR TESTING
- const usage = createAsync(() => getUsageInfo(params.id!))
+ const usage = createAsync(() => queryUsageInfo(params.id!, 0))
const [store, setStore] = createStore({ page: 0, usage: [] as Awaited<ReturnType<typeof getUsageInfo>> })
// DUMMY DATA FOR TESTING
// const usage = () => [
// {
// timeCreated: new Date(Date.now() - 86400000 * 0).toISOString(), // Today
// model: "claude-3-5-sonnet-20241022",
// inputTokens: 1247,
// outputTokens: 423,
// cost: 125400000, // $1.254
// },
// {
// timeCreated: new Date(Date.now() - 86400000 * 0.5).toISOString(), // 12 hours ago
// model: "claude-3-haiku-20240307",
// inputTokens: 892,
// outputTokens: 156,
// cost: 23500000, // $0.235
// },
// {
// timeCreated: new Date(Date.now() - 86400000 * 1).toISOString(), // Yesterday
// model: "claude-3-5-sonnet-20241022",
// inputTokens: 2134,
// outputTokens: 687,
// cost: 234700000, // $2.347
// },
// {
// timeCreated: new Date(Date.now() - 86400000 * 1.3).toISOString(), // 1.3 days ago
// model: "gpt-4o-mini",
// inputTokens: 567,
// outputTokens: 234,
// cost: 8900000, // $0.089
// },
// {
// timeCreated: new Date(Date.now() - 86400000 * 2).toISOString(), // 2 days ago
// model: "claude-3-opus-20240229",
// inputTokens: 1893,
// outputTokens: 945,
// cost: 445600000, // $4.456
// },
// {
// timeCreated: new Date(Date.now() - 86400000 * 2.7).toISOString(), // 2.7 days ago
// model: "gpt-4o",
// inputTokens: 1456,
// outputTokens: 532,
// cost: 156800000, // $1.568
// },
// {
// timeCreated: new Date(Date.now() - 86400000 * 3).toISOString(), // 3 days ago
// model: "claude-3-haiku-20240307",
// inputTokens: 634,
// outputTokens: 89,
// cost: 12300000, // $0.123
// },
// {
// timeCreated: new Date(Date.now() - 86400000 * 4).toISOString(), // 4 days ago
// model: "claude-3-5-sonnet-20241022",
// inputTokens: 3245,
// outputTokens: 1123,
// cost: 387200000, // $3.872
// },
// ]
createEffect(() => {
setStore({ usage: usage() })
}, [usage])
const hasResults = createMemo(() => store.usage && store.usage.length > 0)
const canGoPrev = createMemo(() => store.page > 0)
const canGoNext = createMemo(() => store.usage && store.usage.length === PAGE_SIZE)
const goPrev = async () => {
const usage = await getUsageInfo(params.id!, store.page - 1)
setStore({
page: store.page - 1,
usage,
})
}
const goNext = async () => {
const usage = await getUsageInfo(params.id!, store.page + 1)
setStore({
page: store.page + 1,
usage,
})
}
return (
<section class={styles.root}>
<section>
<div data-slot="section-title">
<h2>Usage History</h2>
<p>Recent API usage and costs.</p>
</div>
<div data-slot="usage-table">
<Show
- when={usage() && usage()!.length > 0}
+ when={hasResults()}
fallback={
<div data-component="empty-state">
<p>Make your first API call to get started.</p>
@@ -103,7 +71,7 @@ export function UsageSection() {
</tr>
</thead>
<tbody>
- <For each={usage()!}>
+ <For each={store.usage}>
{(usage) => {
const date = createMemo(() => new Date(usage.timeCreated))
return (
@@ -121,6 +89,16 @@ export function UsageSection() {
</For>
</tbody>
</table>
<Show when={canGoPrev() || canGoNext()}>
<div data-slot="pagination">
<button disabled={!canGoPrev()} onClick={goPrev}>
</button>
<button disabled={!canGoNext()} onClick={goNext}>
</button>
</div>
</Show>
</Show>
</div>
</section>

View File

@@ -291,7 +291,7 @@ export async function handler(
async function authenticate(modelInfo: ModelInfo, providerInfo: ProviderInfo) {
const apiKey = opts.parseApiKey(input.request.headers)
- if (!apiKey) {
+ if (!apiKey || apiKey === "public") {
if (modelInfo.allowAnonymous) return
throw new AuthError("Missing API key.")
}
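The hunk above can be read as the following standalone sketch: a literal `"public"` key is now treated the same as a missing key, and is permitted only when the model allows anonymous access. The function and parameter names here are illustrative stand-ins, not the repo's exact types.

```typescript
// Hypothetical sketch of the authenticate() change shown above.
// Throws unless a real key is present or the model allows anonymous access.
function checkApiKey(apiKey: string | undefined, allowAnonymous: boolean): boolean {
  if (!apiKey || apiKey === "public") {
    if (allowAnonymous) return true // anonymous access permitted
    throw new Error("Missing API key.")
  }
  return true // a real key was supplied
}
```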

View File

@@ -1,7 +1,7 @@
{
"$schema": "https://json.schemastore.org/package.json",
"name": "@opencode-ai/console-core",
"version": "1.0.64",
"version": "1.0.68",
"private": true,
"type": "module",
"dependencies": {

View File

@@ -57,14 +57,15 @@ export namespace Billing {
)
}
- export const usages = async () => {
+ export const usages = async (page = 0, pageSize = 50) => {
return await Database.use((tx) =>
tx
.select()
.from(UsageTable)
.where(eq(UsageTable.workspaceID, Actor.workspace()))
.orderBy(sql`${UsageTable.timeCreated} DESC`)
- .limit(100),
+ .limit(pageSize)
+ .offset(page * pageSize),
)
}
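The `limit`/`offset` change above is standard offset pagination: page `n` skips `n * pageSize` rows. As a minimal sketch (the `pageSlice` helper and `Row` type are illustrative, not from the repo), the same arithmetic over an in-memory array looks like this:

```typescript
// Offset pagination: the equivalent of .limit(pageSize).offset(page * pageSize)
// applied to an in-memory array instead of a SQL query.
type Row = { id: number }

function pageSlice<T>(rows: T[], page: number, pageSize = 50): T[] {
  const offset = page * pageSize
  return rows.slice(offset, offset + pageSize) // slice clamps past the end
}
```

A page shorter than `pageSize` signals the last page, which is exactly how the UI's `canGoNext` memo decides whether to enable the "next" button.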

View File

@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/console-function",
"version": "1.0.64",
"version": "1.0.68",
"$schema": "https://json.schemastore.org/package.json",
"private": true,
"type": "module",

View File

@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/console-mail",
"version": "1.0.64",
"version": "1.0.68",
"dependencies": {
"@jsx-email/all": "2.2.3",
"@jsx-email/cli": "1.4.3",

View File

@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/desktop",
"version": "1.0.64",
"version": "1.0.68",
"description": "",
"type": "module",
"scripts": {

View File

@@ -1,7 +1,7 @@
id = "opencode"
name = "OpenCode"
description = "The AI coding agent built for the terminal"
version = "1.0.64"
version = "1.0.68"
schema_version = 1
authors = ["Anomaly"]
repository = "https://github.com/sst/opencode"
@@ -11,26 +11,26 @@ name = "OpenCode"
icon = "./icons/opencode.svg"
[agent_servers.opencode.targets.darwin-aarch64]
archive = "https://github.com/sst/opencode/releases/download/v1.0.64/opencode-darwin-arm64.zip"
archive = "https://github.com/sst/opencode/releases/download/v1.0.68/opencode-darwin-arm64.zip"
cmd = "./opencode"
args = ["acp"]
[agent_servers.opencode.targets.darwin-x86_64]
archive = "https://github.com/sst/opencode/releases/download/v1.0.64/opencode-darwin-x64.zip"
archive = "https://github.com/sst/opencode/releases/download/v1.0.68/opencode-darwin-x64.zip"
cmd = "./opencode"
args = ["acp"]
[agent_servers.opencode.targets.linux-aarch64]
archive = "https://github.com/sst/opencode/releases/download/v1.0.64/opencode-linux-arm64.zip"
archive = "https://github.com/sst/opencode/releases/download/v1.0.68/opencode-linux-arm64.zip"
cmd = "./opencode"
args = ["acp"]
[agent_servers.opencode.targets.linux-x86_64]
archive = "https://github.com/sst/opencode/releases/download/v1.0.64/opencode-linux-x64.zip"
archive = "https://github.com/sst/opencode/releases/download/v1.0.68/opencode-linux-x64.zip"
cmd = "./opencode"
args = ["acp"]
[agent_servers.opencode.targets.windows-x86_64]
archive = "https://github.com/sst/opencode/releases/download/v1.0.64/opencode-windows-x64.zip"
archive = "https://github.com/sst/opencode/releases/download/v1.0.68/opencode-windows-x64.zip"
cmd = "./opencode.exe"
args = ["acp"]

View File

@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/function",
"version": "1.0.64",
"version": "1.0.68",
"$schema": "https://json.schemastore.org/package.json",
"private": true,
"type": "module",

View File

@@ -1,6 +1,6 @@
{
"$schema": "https://json.schemastore.org/package.json",
"version": "1.0.64",
"version": "1.0.68",
"name": "opencode",
"type": "module",
"private": true,

View File

@@ -1,6 +1,5 @@
#!/usr/bin/env bun
- import solidPlugin from "../node_modules/@opentui/solid/scripts/solid-plugin"
import path from "path"
import fs from "fs"
import { $ } from "bun"
@@ -10,6 +9,9 @@ const __filename = fileURLToPath(import.meta.url)
const __dirname = path.dirname(__filename)
const dir = path.resolve(__dirname, "..")
+ const solidPluginPath = path.resolve(dir, "node_modules/@opentui/solid/scripts/solid-plugin.ts")
+ const solidPlugin = (await import(solidPluginPath)).default
process.chdir(dir)
import pkg from "../package.json"

View File

@@ -0,0 +1,10 @@
import { EventEmitter } from "events"
export const GlobalBus = new EventEmitter<{
event: [
{
directory: string
payload: any
},
]
}>()
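The new `GlobalBus` file above uses Node's typed `EventEmitter`, where the generic parameter maps each event name to its listener-argument tuple. A minimal sketch of the same pattern (the `"/tmp/project"` directory and demo payload are invented for illustration):

```typescript
import { EventEmitter } from "events"

// Typed EventEmitter: the "event" channel carries a single object argument
// with the same shape as the GlobalBus payload in the diff above.
const bus = new EventEmitter<{
  event: [{ directory: string; payload: any }]
}>()

const seen: string[] = []
bus.on("event", (e) => seen.push(e.directory)) // listener runs synchronously on emit
bus.emit("event", { directory: "/tmp/project", payload: { type: "demo" } })
```

This gives compile-time checking of both `emit` and `on` call sites while remaining a plain `EventEmitter` at runtime, which is presumably why it suits a process-wide bridge between per-instance buses and the SDK's `global.event.subscribe()`.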

View File

@@ -2,6 +2,7 @@ import z from "zod"
import type { ZodType } from "zod"
import { Log } from "../util/log"
import { Instance } from "../project/instance"
import { GlobalBus } from "./global"
export namespace Bus {
const log = Log.create({ service: "bus" })
@@ -29,22 +30,26 @@ export namespace Bus {
}
export function payloads() {
return z.discriminatedUnion(
"type",
registry
.entries()
.map(([type, def]) => {
return z
.object({
type: z.literal(type),
properties: def.properties,
})
.meta({
ref: "Event" + "." + def.type,
})
})
.toArray() as any,
)
return z
.discriminatedUnion(
"type",
registry
.entries()
.map(([type, def]) => {
return z
.object({
type: z.literal(type),
properties: def.properties,
})
.meta({
ref: "Event" + "." + def.type,
})
})
.toArray() as any,
)
.meta({
ref: "Event",
})
}
export async function publish<Definition extends EventDefinition>(
@@ -65,6 +70,10 @@ export namespace Bus {
pending.push(sub(payload))
}
}
GlobalBus.emit("event", {
directory: Instance.directory,
payload,
})
return Promise.all(pending)
}

View File

@@ -4,7 +4,7 @@ import { onMount } from "solid-js"
import { createStore, produce } from "solid-js/store"
import { clone } from "remeda"
import { createSimpleContext } from "../../context/helper"
import { appendFile } from "fs/promises"
import { appendFile, writeFile } from "fs/promises"
import type { AgentPart, FilePart, TextPart } from "@opencode-ai/sdk"
export type PromptInfo = {
@@ -24,6 +24,8 @@ export type PromptInfo = {
)[]
}
const MAX_HISTORY_ENTRIES = 50
export const { use: usePromptHistory, provider: PromptHistoryProvider } = createSimpleContext({
name: "PromptHistory",
init: () => {
@@ -33,8 +35,23 @@ export const { use: usePromptHistory, provider: PromptHistoryProvider } = create
const lines = text
.split("\n")
.filter(Boolean)
.map((line) => JSON.parse(line))
setStore("history", lines as PromptInfo[])
.map((line) => {
try {
return JSON.parse(line)
} catch {
return null
}
})
.filter((line): line is PromptInfo => line !== null)
.slice(-MAX_HISTORY_ENTRIES)
setStore("history", lines)
// Rewrite file with only valid entries to self-heal corruption
if (lines.length > 0) {
const content = lines.map((line) => JSON.stringify(line)).join("\n") + "\n"
writeFile(historyFile.name!, content).catch(() => {})
}
})
const [store, setStore] = createStore({
@@ -64,14 +81,26 @@ export const { use: usePromptHistory, provider: PromptHistoryProvider } = create
return store.history.at(store.index)
},
append(item: PromptInfo) {
item = clone(item)
appendFile(historyFile.name!, JSON.stringify(item) + "\n")
const entry = clone(item)
let trimmed = false
setStore(
produce((draft) => {
draft.history.push(item)
draft.history.push(entry)
if (draft.history.length > MAX_HISTORY_ENTRIES) {
draft.history = draft.history.slice(-MAX_HISTORY_ENTRIES)
trimmed = true
}
draft.index = 0
}),
)
if (trimmed) {
const content = store.history.map((line) => JSON.stringify(line)).join("\n") + "\n"
writeFile(historyFile.name!, content).catch(() => {})
return
}
appendFile(historyFile.name!, JSON.stringify(entry) + "\n").catch(() => {})
},
}
},
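The corruption fix in the history diff above boils down to a tolerant JSONL parse: instead of letting one corrupt line throw and lose the whole history, bad lines are dropped and only the most recent entries are kept. A self-contained sketch of that parsing step (the `parseHistory` name is illustrative):

```typescript
// Self-healing JSONL parse: skip lines that fail JSON.parse rather than
// letting a single corrupt line break loading, then cap the history size.
function parseHistory(text: string, max = 50): unknown[] {
  return text
    .split("\n")
    .filter(Boolean) // drop empty lines, including the trailing newline
    .map((line) => {
      try {
        return JSON.parse(line)
      } catch {
        return null // corrupt line: discard instead of throwing
      }
    })
    .filter((line) => line !== null)
    .slice(-max) // keep only the most recent `max` entries
}
```

The diff then rewrites the file from the surviving entries, so a one-off corruption (e.g. a torn write) heals itself on the next load.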

View File

@@ -1,13 +1,18 @@
import { useRenderer } from "@opentui/solid"
import { createSimpleContext } from "./helper"
import { FormatError } from "@/cli/error"
export const { use: useExit, provider: ExitProvider } = createSimpleContext({
name: "Exit",
init: (input: { onExit?: () => Promise<void> }) => {
const renderer = useRenderer()
- return async () => {
+ return async (reason?: any) => {
renderer.destroy()
await input.onExit?.()
if (reason) {
const formatted = FormatError(reason) ?? JSON.stringify(reason)
process.stderr.write(formatted + "\n")
}
process.exit(0)
}
},

View File

@@ -17,6 +17,8 @@ import { useSDK } from "@tui/context/sdk"
import { Binary } from "@/util/binary"
import { createSimpleContext } from "./helper"
import type { Snapshot } from "@/snapshot"
import { useExit } from "./exit"
import { onMount } from "solid-js"
export const { use: useSync, provider: SyncProvider } = createSimpleContext({
name: "Sync",
@@ -215,28 +217,36 @@ export const { use: useSync, provider: SyncProvider } = createSimpleContext({
}
})
// blocking
Promise.all([
sdk.client.config.providers({ throwOnError: true }).then((x) => setStore("provider", x.data!.providers)),
sdk.client.app.agents({ throwOnError: true }).then((x) => setStore("agent", x.data ?? [])),
sdk.client.config.get({ throwOnError: true }).then((x) => setStore("config", x.data!)),
]).then(() => {
setStore("status", "partial")
// non-blocking
const exit = useExit()
onMount(() => {
// blocking
Promise.all([
sdk.client.session.list().then((x) =>
setStore(
"session",
(x.data ?? []).toSorted((a, b) => a.id.localeCompare(b.id)),
),
),
sdk.client.command.list().then((x) => setStore("command", x.data ?? [])),
sdk.client.lsp.status().then((x) => setStore("lsp", x.data!)),
sdk.client.mcp.status().then((x) => setStore("mcp", x.data!)),
sdk.client.formatter.status().then((x) => setStore("formatter", x.data!)),
]).then(() => {
setStore("status", "complete")
})
sdk.client.config.providers({ throwOnError: true }).then((x) => setStore("provider", x.data!.providers)),
sdk.client.app.agents({ throwOnError: true }).then((x) => setStore("agent", x.data ?? [])),
sdk.client.config.get({ throwOnError: true }).then((x) => setStore("config", x.data!)),
])
.then(() => {
setStore("status", "partial")
// non-blocking
Promise.all([
sdk.client.session.list().then((x) =>
setStore(
"session",
(x.data ?? []).toSorted((a, b) => a.id.localeCompare(b.id)),
),
),
sdk.client.command.list().then((x) => setStore("command", x.data ?? [])),
sdk.client.lsp.status().then((x) => setStore("lsp", x.data!)),
sdk.client.mcp.status().then((x) => setStore("mcp", x.data!)),
sdk.client.formatter.status().then((x) => setStore("formatter", x.data!)),
]).then(() => {
setStore("status", "complete")
})
})
.catch(async (e) => {
await exit(e)
})
})
const result = {

View File

@@ -9,6 +9,7 @@ import catppuccin from "./theme/catppuccin.json" with { type: "json" }
import cobalt2 from "./theme/cobalt2.json" with { type: "json" }
import dracula from "./theme/dracula.json" with { type: "json" }
import everforest from "./theme/everforest.json" with { type: "json" }
import flexoki from "./theme/flexoki.json" with { type: "json" }
import github from "./theme/github.json" with { type: "json" }
import gruvbox from "./theme/gruvbox.json" with { type: "json" }
import kanagawa from "./theme/kanagawa.json" with { type: "json" }
@@ -105,6 +106,7 @@ export const DEFAULT_THEMES: Record<string, ThemeJson> = {
cobalt2,
dracula,
everforest,
flexoki,
github,
gruvbox,
kanagawa,
@@ -128,7 +130,10 @@ function resolveTheme(theme: ThemeJson, mode: "dark" | "light") {
const defs = theme.defs ?? {}
function resolveColor(c: ColorValue): RGBA {
if (c instanceof RGBA) return c
if (typeof c === "string") return c.startsWith("#") ? RGBA.fromHex(c) : resolveColor(defs[c])
if (typeof c === "string") {
if (c === "transparent" || c === "none") return RGBA.fromInts(0, 0, 0, 0)
return c.startsWith("#") ? RGBA.fromHex(c) : resolveColor(defs[c])
}
return resolveColor(c[mode])
}
return Object.fromEntries(
@@ -864,18 +869,21 @@ function generateSyntax(theme: Theme) {
scope: ["diff.plus"],
style: {
foreground: theme.diffAdded,
background: theme.diffAddedBg,
},
},
{
scope: ["diff.minus"],
style: {
foreground: theme.diffRemoved,
background: theme.diffRemovedBg,
},
},
{
scope: ["diff.delta"],
style: {
foreground: theme.diffContext,
background: theme.diffContextBg,
},
},
{
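The `resolveColor` fix in the theme diff above handles three cases: the special `"transparent"`/`"none"` strings (previously a panic when a theme used `'none'`), hex literals, and named references into the theme's `defs` table, recursing on the latter and on per-mode objects. An illustrative re-implementation, with a plain tuple standing in for the repo's `RGBA` class:

```typescript
// Stand-in for the repo's RGBA class: [r, g, b, a] with 0-255 channels.
type RGBA = [number, number, number, number]
type ColorValue = string | { dark: ColorValue; light: ColorValue }

function resolveColor(c: ColorValue, defs: Record<string, ColorValue>, mode: "dark" | "light"): RGBA {
  if (typeof c === "string") {
    // the fix: "transparent"/"none" resolve to fully transparent black
    if (c === "transparent" || c === "none") return [0, 0, 0, 0]
    if (c.startsWith("#")) {
      const n = parseInt(c.slice(1), 16) // parse "#rrggbb"
      return [(n >> 16) & 0xff, (n >> 8) & 0xff, n & 0xff, 255]
    }
    return resolveColor(defs[c], defs, mode) // named reference into defs
  }
  return resolveColor(c[mode], defs, mode) // { dark, light } object: pick mode
}
```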

View File

@@ -0,0 +1,237 @@
{
"$schema": "https://opencode.ai/theme.json",
"defs": {
"black": "#100F0F",
"base950": "#1C1B1A",
"base900": "#282726",
"base850": "#343331",
"base800": "#403E3C",
"base700": "#575653",
"base600": "#6F6E69",
"base500": "#878580",
"base300": "#B7B5AC",
"base200": "#CECDC3",
"base150": "#DAD8CE",
"base100": "#E6E4D9",
"base50": "#F2F0E5",
"paper": "#FFFCF0",
"red400": "#D14D41",
"red600": "#AF3029",
"orange400": "#DA702C",
"orange600": "#BC5215",
"yellow400": "#D0A215",
"yellow600": "#AD8301",
"green400": "#879A39",
"green600": "#66800B",
"cyan400": "#3AA99F",
"cyan600": "#24837B",
"blue400": "#4385BE",
"blue600": "#205EA6",
"purple400": "#8B7EC8",
"purple600": "#5E409D",
"magenta400": "#CE5D97",
"magenta600": "#A02F6F"
},
"theme": {
"primary": {
"dark": "orange400",
"light": "blue600"
},
"secondary": {
"dark": "blue400",
"light": "purple600"
},
"accent": {
"dark": "purple400",
"light": "orange600"
},
"error": {
"dark": "red400",
"light": "red600"
},
"warning": {
"dark": "orange400",
"light": "orange600"
},
"success": {
"dark": "green400",
"light": "green600"
},
"info": {
"dark": "cyan400",
"light": "cyan600"
},
"text": {
"dark": "base200",
"light": "black"
},
"textMuted": {
"dark": "base600",
"light": "base600"
},
"background": {
"dark": "black",
"light": "paper"
},
"backgroundPanel": {
"dark": "base950",
"light": "base50"
},
"backgroundElement": {
"dark": "base900",
"light": "base100"
},
"border": {
"dark": "base700",
"light": "base300"
},
"borderActive": {
"dark": "base600",
"light": "base500"
},
"borderSubtle": {
"dark": "base800",
"light": "base200"
},
"diffAdded": {
"dark": "green400",
"light": "green600"
},
"diffRemoved": {
"dark": "red400",
"light": "red600"
},
"diffContext": {
"dark": "base600",
"light": "base600"
},
"diffHunkHeader": {
"dark": "blue400",
"light": "blue600"
},
"diffHighlightAdded": {
"dark": "green400",
"light": "green600"
},
"diffHighlightRemoved": {
"dark": "red400",
"light": "red600"
},
"diffAddedBg": {
"dark": "#1A2D1A",
"light": "#D5E5D5"
},
"diffRemovedBg": {
"dark": "#2D1A1A",
"light": "#F7D8DB"
},
"diffContextBg": {
"dark": "base950",
"light": "base50"
},
"diffLineNumber": {
"dark": "base600",
"light": "base600"
},
"diffAddedLineNumberBg": {
"dark": "#152515",
"light": "#C5D5C5"
},
"diffRemovedLineNumberBg": {
"dark": "#251515",
"light": "#E7C8CB"
},
"markdownText": {
"dark": "base200",
"light": "black"
},
"markdownHeading": {
"dark": "purple400",
"light": "purple600"
},
"markdownLink": {
"dark": "blue400",
"light": "blue600"
},
"markdownLinkText": {
"dark": "cyan400",
"light": "cyan600"
},
"markdownCode": {
"dark": "cyan400",
"light": "cyan600"
},
"markdownBlockQuote": {
"dark": "yellow400",
"light": "yellow600"
},
"markdownEmph": {
"dark": "yellow400",
"light": "yellow600"
},
"markdownStrong": {
"dark": "orange400",
"light": "orange600"
},
"markdownHorizontalRule": {
"dark": "base600",
"light": "base600"
},
"markdownListItem": {
"dark": "orange400",
"light": "orange600"
},
"markdownListEnumeration": {
"dark": "cyan400",
"light": "cyan600"
},
"markdownImage": {
"dark": "magenta400",
"light": "magenta600"
},
"markdownImageText": {
"dark": "cyan400",
"light": "cyan600"
},
"markdownCodeBlock": {
"dark": "base200",
"light": "black"
},
"syntaxComment": {
"dark": "base600",
"light": "base600"
},
"syntaxKeyword": {
"dark": "green400",
"light": "green600"
},
"syntaxFunction": {
"dark": "orange400",
"light": "orange600"
},
"syntaxVariable": {
"dark": "blue400",
"light": "blue600"
},
"syntaxString": {
"dark": "cyan400",
"light": "cyan600"
},
"syntaxNumber": {
"dark": "purple400",
"light": "purple600"
},
"syntaxType": {
"dark": "yellow400",
"light": "yellow600"
},
"syntaxOperator": {
"dark": "base300",
"light": "base600"
},
"syntaxPunctuation": {
"dark": "base300",
"light": "base600"
}
}
}

View File

@@ -1225,9 +1225,7 @@ ToolRegistry.register<typeof WriteTool>({
container: "block",
render(props) {
const { theme, syntax } = useTheme()
-const lines = createMemo(() => {
-return props.input.content?.split("\n") ?? []
-})
+const lines = createMemo(() => props.input.content?.split("\n") ?? [], [] as string[])
const code = createMemo(() => {
if (!props.input.content) return ""
const text = props.input.content

View File

@@ -60,13 +60,19 @@ export function Sidebar(props: { sessionID: string }) {
</box>
<Show when={Object.keys(sync.data.mcp).length > 0}>
<box>
-<box flexDirection="row" gap={1} onMouseDown={() => setMcpExpanded(!mcpExpanded())}>
-<text fg={theme.text}>{mcpExpanded() ? "▼" : "▶"}</text>
+<box
+flexDirection="row"
+gap={1}
+onMouseDown={() => Object.keys(sync.data.mcp).length > 2 && setMcpExpanded(!mcpExpanded())}
+>
+<Show when={Object.keys(sync.data.mcp).length > 2}>
+<text fg={theme.text}>{mcpExpanded() ? "▼" : "▶"}</text>
+</Show>
<text fg={theme.text}>
<b>MCP</b>
</text>
</box>
-<Show when={mcpExpanded()}>
+<Show when={Object.keys(sync.data.mcp).length <= 2 || mcpExpanded()}>
<For each={Object.entries(sync.data.mcp)}>
{([key, item]) => (
<box flexDirection="row" gap={1}>
@@ -100,13 +106,19 @@ export function Sidebar(props: { sessionID: string }) {
</Show>
<Show when={sync.data.lsp.length > 0}>
<box>
-<box flexDirection="row" gap={1} onMouseDown={() => setLspExpanded(!lspExpanded())}>
-<text fg={theme.text}>{lspExpanded() ? "▼" : "▶"}</text>
+<box
+flexDirection="row"
+gap={1}
+onMouseDown={() => sync.data.lsp.length > 2 && setLspExpanded(!lspExpanded())}
+>
+<Show when={sync.data.lsp.length > 2}>
+<text fg={theme.text}>{lspExpanded() ? "▼" : "▶"}</text>
+</Show>
<text fg={theme.text}>
<b>LSP</b>
</text>
</box>
-<Show when={lspExpanded()}>
+<Show when={sync.data.lsp.length <= 2 || lspExpanded()}>
<For each={sync.data.lsp}>
{(item) => (
<box flexDirection="row" gap={1}>
@@ -132,13 +144,19 @@ export function Sidebar(props: { sessionID: string }) {
</Show>
<Show when={todo().length > 0}>
<box>
-<box flexDirection="row" gap={1} onMouseDown={() => setTodoExpanded(!todoExpanded())}>
-<text fg={theme.text}>{todoExpanded() ? "▼" : "▶"}</text>
+<box
+flexDirection="row"
+gap={1}
+onMouseDown={() => todo().length > 2 && setTodoExpanded(!todoExpanded())}
+>
+<Show when={todo().length > 2}>
+<text fg={theme.text}>{todoExpanded() ? "▼" : "▶"}</text>
+</Show>
<text fg={theme.text}>
<b>Todo</b>
</text>
</box>
-<Show when={todoExpanded()}>
+<Show when={todo().length <= 2 || todoExpanded()}>
<For each={todo()}>
{(todo) => (
<text style={{ fg: todo.status === "in_progress" ? theme.success : theme.textMuted }}>
@@ -151,13 +169,19 @@ export function Sidebar(props: { sessionID: string }) {
</Show>
<Show when={diff().length > 0}>
<box>
-<box flexDirection="row" gap={1} onMouseDown={() => setDiffExpanded(!diffExpanded())}>
-<text fg={theme.text}>{diffExpanded() ? "▼" : "▶"}</text>
+<box
+flexDirection="row"
+gap={1}
+onMouseDown={() => diff().length > 2 && setDiffExpanded(!diffExpanded())}
+>
+<Show when={diff().length > 2}>
+<text fg={theme.text}>{diffExpanded() ? "▼" : "▶"}</text>
+</Show>
<text fg={theme.text}>
<b>Modified Files</b>
</text>
</box>
-<Show when={diffExpanded()}>
+<Show when={diff().length <= 2 || diffExpanded()}>
<For each={diff() || []}>
{(item) => {
const file = createMemo(() => {

View File

@@ -43,6 +43,7 @@ export const rpc = {
}
},
async shutdown() {
Log.Default.info("worker shutting down")
await Instance.disposeAll()
await server.stop(true)
},

View File

@@ -622,6 +622,7 @@ export namespace Config {
.optional(),
chatMaxRetries: z.number().optional().describe("Number of retries for chat completions on failure"),
disable_paste_summary: z.boolean().optional(),
batch_tool: z.boolean().optional().describe("Enable the batch tool"),
})
.optional(),
})

View File

@@ -53,10 +53,14 @@ export const Instance = {
await State.dispose(Instance.directory)
},
async disposeAll() {
+Log.Default.info("disposing all instances")
for (const [_key, value] of cache) {
-await context.provide(await value, async () => {
-await Instance.dispose()
-})
+const awaited = await value.catch(() => {})
+if (awaited) {
+await context.provide(await value, async () => {
+await Instance.dispose()
+})
+}
}
cache.clear()
},

View File

@@ -23,6 +23,14 @@ export namespace ModelsDev {
output: z.number(),
cache_read: z.number().optional(),
cache_write: z.number().optional(),
context_over_200k: z
.object({
input: z.number(),
output: z.number(),
cache_read: z.number().optional(),
cache_write: z.number().optional(),
})
.optional(),
}),
limit: z.object({
context: z.number(),

View File

@@ -53,7 +53,7 @@ export namespace Provider {
return {
autoload: Object.keys(input.models).length > 0,
-options: {},
+options: hasKey ? {} : { apiKey: "public" },
}
},
openai: async () => {
@@ -209,6 +209,17 @@ export namespace Provider {
},
}
},
zenmux: async () => {
return {
autoload: false,
options: {
headers: {
"HTTP-Referer": "https://opencode.ai/",
"X-Title": "opencode",
},
},
}
},
}
const state = Instance.state(async () => {
@@ -470,7 +481,15 @@ export namespace Provider {
const key = Bun.hash.xxHash32(JSON.stringify({ pkg, options }))
const existing = s.sdk.get(key)
if (existing) return existing
-const installedPath = await BunProc.install(pkg, "latest")
+let installedPath: string
+if (!pkg.startsWith("file://")) {
+installedPath = await BunProc.install(pkg, "latest")
+} else {
+log.info("loading local provider", { pkg })
+installedPath = pkg
+}
// The `google-vertex-anthropic` provider points to the `@ai-sdk/google-vertex` package.
// Ref: https://github.com/sst/models.dev/blob/0a87de42ab177bebad0620a889e2eb2b4a5dd4ab/providers/google-vertex-anthropic/provider.toml
// However, the actual export is at the subpath `@ai-sdk/google-vertex/anthropic`.
@@ -582,6 +601,9 @@ export namespace Provider {
if (providerID === "github-copilot") {
priority = priority.filter((m) => m !== "claude-haiku-4.5")
}
if (providerID === "opencode" || providerID === "local") {
priority = ["gpt-5-nano"]
}
for (const item of priority) {
for (const model of Object.keys(provider.info.models)) {
if (model.includes(item)) return getModel(providerID, model)

View File

@@ -128,7 +128,12 @@ export namespace ProviderTransform {
return undefined
}
-export function options(providerID: string, modelID: string, sessionID: string): Record<string, any> | undefined {
+export function options(
+providerID: string,
+modelID: string,
+npm: string,
+sessionID: string,
+): Record<string, any> | undefined {
const result: Record<string, any> = {}
if (providerID === "openai") {
@@ -144,6 +149,10 @@ export namespace ProviderTransform {
result["reasoningEffort"] = "medium"
}
if (modelID.endsWith("gpt-5.1") && providerID !== "azure") {
result["textVerbosity"] = "low"
}
if (providerID === "opencode") {
result["promptCacheKey"] = sessionID
result["include"] = ["reasoning.encrypted_content"]
@@ -176,7 +185,7 @@ export namespace ProviderTransform {
}
export function maxOutputTokens(
-providerID: string,
+npm: string,
options: Record<string, any>,
modelLimit: number,
globalLimit: number,
@@ -184,7 +193,7 @@ export namespace ProviderTransform {
const modelCap = modelLimit || globalLimit
const standardLimit = Math.min(modelCap, globalLimit)
-if (providerID === "anthropic") {
+if (npm === "@ai-sdk/anthropic") {
const thinking = options?.["thinking"]
const budgetTokens = typeof thinking?.["budgetTokens"] === "number" ? thinking["budgetTokens"] : 0
const enabled = thinking?.["type"] === "enabled"

View File

@@ -40,6 +40,7 @@ import type { ContentfulStatusCode } from "hono/utils/http-status"
import { TuiEvent } from "@/cli/cmd/tui/event"
import { Snapshot } from "@/snapshot"
import { SessionSummary } from "@/session/summary"
import { GlobalBus } from "@/bus/global"
const ERRORS = {
400: {
@@ -117,6 +118,56 @@ export namespace Server {
timer.stop()
}
})
.get(
"/global/event",
describeRoute({
description: "Get events",
operationId: "global.event",
responses: {
200: {
description: "Event stream",
content: {
"text/event-stream": {
schema: resolver(
z
.object({
directory: z.string(),
payload: Bus.payloads(),
})
.meta({
ref: "GlobalEvent",
}),
),
},
},
},
},
}),
async (c) => {
log.info("global event connected")
return streamSSE(c, async (stream) => {
stream.writeSSE({
data: JSON.stringify({
type: "server.connected",
properties: {},
}),
})
async function handler(event: any) {
await stream.writeSSE({
data: JSON.stringify(event),
})
}
GlobalBus.on("event", handler)
await new Promise<void>((resolve) => {
stream.onAbort(() => {
GlobalBus.off("event", handler)
resolve()
log.info("global event disconnected")
})
})
})
},
)
.use(async (c, next) => {
const directory = c.req.query("directory") ?? process.cwd()
return Instance.provide({
@@ -1137,7 +1188,7 @@ export namespace Server {
"query",
z.object({
query: z.string(),
-dirs: z.union([z.literal("true"), z.literal("false")]).optional(),
+dirs: z.enum(["true", "false"]).optional(),
}),
),
async (c) => {
@@ -1721,11 +1772,7 @@ export namespace Server {
description: "Event stream",
content: {
"text/event-stream": {
-schema: resolver(
-Bus.payloads().meta({
-ref: "Event",
-}),
-),
+schema: resolver(Bus.payloads())
},
},
},

View File

@@ -396,15 +396,20 @@ export namespace Session {
read: cachedInputTokens,
},
}
+const costInfo =
+input.model.cost?.context_over_200k && tokens.input + tokens.cache.read > 200_000
+? input.model.cost.context_over_200k
+: input.model.cost
return {
cost: new Decimal(0)
-.add(new Decimal(tokens.input).mul(input.model.cost?.input ?? 0).div(1_000_000))
-.add(new Decimal(tokens.output).mul(input.model.cost?.output ?? 0).div(1_000_000))
-.add(new Decimal(tokens.cache.read).mul(input.model.cost?.cache_read ?? 0).div(1_000_000))
-.add(new Decimal(tokens.cache.write).mul(input.model.cost?.cache_write ?? 0).div(1_000_000))
+.add(new Decimal(tokens.input).mul(costInfo?.input ?? 0).div(1_000_000))
+.add(new Decimal(tokens.output).mul(costInfo?.output ?? 0).div(1_000_000))
+.add(new Decimal(tokens.cache.read).mul(costInfo?.cache_read ?? 0).div(1_000_000))
+.add(new Decimal(tokens.cache.write).mul(costInfo?.cache_write ?? 0).div(1_000_000))
// TODO: update models.dev to have better pricing model, for now:
// charge reasoning tokens at the same rate as output tokens
-.add(new Decimal(tokens.reasoning).mul(input.model.cost?.output ?? 0).div(1_000_000))
+.add(new Decimal(tokens.reasoning).mul(costInfo?.output ?? 0).div(1_000_000))
.toNumber(),
tokens,
}
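
The tiered pricing change above can be sketched as follows. This is an illustrative simplification (plain numbers instead of Decimal, input/output rates only; the type and function names are made up for the sketch, not the actual implementation):

```typescript
// Sketch of the over-200k pricing tier: when input plus cached-read tokens
// exceed 200k, the whole request is billed at the context_over_200k rates.
type Rates = { input: number; output: number }
type ModelCost = Rates & { context_over_200k?: Rates }

function requestCost(
  tokens: { input: number; cacheRead: number; output: number },
  cost: ModelCost,
): number {
  // Pick the rate tier based on total prompt-side tokens.
  const tier =
    cost.context_over_200k && tokens.input + tokens.cacheRead > 200_000
      ? cost.context_over_200k
      : cost
  // Rates are per million tokens, as in the diff above.
  return (tokens.input * tier.input + tokens.output * tier.output) / 1_000_000
}
```

The real code also bills cache reads, cache writes, and reasoning tokens, and uses Decimal arithmetic to avoid floating-point drift.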

View File

@@ -266,7 +266,7 @@ export namespace SessionPrompt {
: undefined,
topP: agent.topP ?? ProviderTransform.topP(model.providerID, model.modelID),
options: {
-...ProviderTransform.options(model.providerID, model.modelID, input.sessionID),
+...ProviderTransform.options(model.providerID, model.modelID, model.npm ?? "", input.sessionID),
...model.info.options,
...agent.options,
},
@@ -345,7 +345,7 @@ export namespace SessionPrompt {
maxRetries: 0,
activeTools: Object.keys(tools).filter((x) => x !== "invalid"),
maxOutputTokens: ProviderTransform.maxOutputTokens(
-model.providerID,
+model.npm ?? "",
params.options,
model.info.limit.output,
OUTPUT_TOKEN_MAX,
@@ -671,15 +671,31 @@ export namespace SessionPrompt {
result,
)
-const output = result.content
-.filter((x: any) => x.type === "text")
-.map((x: any) => x.text)
-.join("\n\n")
+const textParts: string[] = []
+const attachments: MessageV2.FilePart[] = []
+for (const item of result.content) {
+if (item.type === "text") {
+textParts.push(item.text)
+} else if (item.type === "image") {
+attachments.push({
+id: Identifier.ascending("part"),
+sessionID: input.sessionID,
+messageID: input.processor.message.id,
+type: "file",
+mime: item.mimeType,
+url: `data:${item.mimeType};base64,${item.data}`,
+})
+}
+// Add support for other types if needed
+}
return {
title: "",
metadata: result.metadata ?? {},
-output,
+output: textParts.join("\n\n"),
+attachments,
+content: result.content, // directly return content to preserve ordering when outputting to model
}
}
item.toModelOutput = (result) => {
@@ -1819,7 +1835,7 @@ export namespace SessionPrompt {
const small =
(await Provider.getSmallModel(input.providerID)) ?? (await Provider.getModel(input.providerID, input.modelID))
const options = {
-...ProviderTransform.options(small.providerID, small.modelID, input.session.id),
+...ProviderTransform.options(small.providerID, small.modelID, small.npm ?? "", input.session.id),
...small.info.options,
}
if (small.providerID === "openai" || small.modelID.includes("gpt-5")) {

View File

@@ -24,10 +24,16 @@ export namespace Snapshot {
})
.quiet()
.nothrow()
+// Configure git to not convert line endings on Windows
+await $`git --git-dir ${git} config core.autocrlf false`.quiet().nothrow()
log.info("initialized")
}
-await $`git --git-dir ${git} add .`.quiet().cwd(Instance.directory).nothrow()
-const hash = await $`git --git-dir ${git} write-tree`.quiet().cwd(Instance.directory).nothrow().text()
+await $`git --git-dir ${git} --work-tree ${Instance.worktree} add .`.quiet().cwd(Instance.directory).nothrow()
+const hash = await $`git --git-dir ${git} --work-tree ${Instance.worktree} write-tree`
+.quiet()
+.cwd(Instance.directory)
+.nothrow()
+.text()
log.info("tracking", { hash, cwd: Instance.directory, git })
return hash.trim()
}
@@ -40,8 +46,12 @@ export namespace Snapshot {
export async function patch(hash: string): Promise<Patch> {
const git = gitdir()
-await $`git --git-dir ${git} add .`.quiet().cwd(Instance.directory).nothrow()
-const result = await $`git --git-dir ${git} diff --name-only ${hash} -- .`.quiet().cwd(Instance.directory).nothrow()
+await $`git --git-dir ${git} --work-tree ${Instance.worktree} add .`.quiet().cwd(Instance.directory).nothrow()
+const result =
+await $`git -c core.autocrlf=false --git-dir ${git} --work-tree ${Instance.worktree} diff --name-only ${hash} -- .`
+.quiet()
+.cwd(Instance.directory)
+.nothrow()
// If git diff fails, return empty patch
if (result.exitCode !== 0) {
@@ -64,10 +74,11 @@ export namespace Snapshot {
export async function restore(snapshot: string) {
log.info("restore", { commit: snapshot })
const git = gitdir()
-const result = await $`git --git-dir=${git} read-tree ${snapshot} && git --git-dir=${git} checkout-index -a -f`
-.quiet()
-.cwd(Instance.worktree)
-.nothrow()
+const result =
+await $`git --git-dir ${git} --work-tree ${Instance.worktree} read-tree ${snapshot} && git --git-dir ${git} --work-tree ${Instance.worktree} checkout-index -a -f`
+.quiet()
+.cwd(Instance.worktree)
+.nothrow()
if (result.exitCode !== 0) {
log.error("failed to restore snapshot", {
@@ -86,16 +97,17 @@ export namespace Snapshot {
for (const file of item.files) {
if (files.has(file)) continue
log.info("reverting", { file, hash: item.hash })
-const result = await $`git --git-dir=${git} checkout ${item.hash} -- ${file}`
+const result = await $`git --git-dir ${git} --work-tree ${Instance.worktree} checkout ${item.hash} -- ${file}`
.quiet()
.cwd(Instance.worktree)
.nothrow()
if (result.exitCode !== 0) {
const relativePath = path.relative(Instance.worktree, file)
-const checkTree = await $`git --git-dir=${git} ls-tree ${item.hash} -- ${relativePath}`
-.quiet()
-.cwd(Instance.worktree)
-.nothrow()
+const checkTree =
+await $`git --git-dir ${git} --work-tree ${Instance.worktree} ls-tree ${item.hash} -- ${relativePath}`
+.quiet()
+.cwd(Instance.worktree)
+.nothrow()
if (checkTree.exitCode === 0 && checkTree.text().trim()) {
log.info("file existed in snapshot but checkout failed, keeping", {
file,
@@ -112,8 +124,12 @@ export namespace Snapshot {
export async function diff(hash: string) {
const git = gitdir()
-await $`git --git-dir ${git} add .`.quiet().cwd(Instance.directory).nothrow()
-const result = await $`git --git-dir=${git} diff ${hash} -- .`.quiet().cwd(Instance.worktree).nothrow()
+await $`git --git-dir ${git} --work-tree ${Instance.worktree} add .`.quiet().cwd(Instance.directory).nothrow()
+const result =
+await $`git -c core.autocrlf=false --git-dir ${git} --work-tree ${Instance.worktree} diff ${hash} -- .`
+.quiet()
+.cwd(Instance.worktree)
+.nothrow()
if (result.exitCode !== 0) {
log.warn("failed to get diff", {
@@ -143,7 +159,7 @@ export namespace Snapshot {
export async function diffFull(from: string, to: string): Promise<FileDiff[]> {
const git = gitdir()
const result: FileDiff[] = []
-for await (const line of $`git --git-dir=${git} diff --no-renames --numstat ${from} ${to} -- .`
+for await (const line of $`git -c core.autocrlf=false --git-dir ${git} --work-tree ${Instance.worktree} diff --no-renames --numstat ${from} ${to} -- .`
.quiet()
.cwd(Instance.directory)
.nothrow()
@@ -151,8 +167,18 @@ export namespace Snapshot {
if (!line) continue
const [additions, deletions, file] = line.split("\t")
const isBinaryFile = additions === "-" && deletions === "-"
-const before = isBinaryFile ? "" : await $`git --git-dir=${git} show ${from}:${file}`.quiet().nothrow().text()
-const after = isBinaryFile ? "" : await $`git --git-dir=${git} show ${to}:${file}`.quiet().nothrow().text()
+const before = isBinaryFile
+? ""
+: await $`git -c core.autocrlf=false --git-dir ${git} --work-tree ${Instance.worktree} show ${from}:${file}`
+.quiet()
+.nothrow()
+.text()
+const after = isBinaryFile
+? ""
+: await $`git -c core.autocrlf=false --git-dir ${git} --work-tree ${Instance.worktree} show ${to}:${file}`
+.quiet()
+.nothrow()
+.text()
result.push({
file,
before,

View File

@@ -0,0 +1,108 @@
import z from "zod"
import { Tool } from "./tool"
import DESCRIPTION from "./batch.txt"
const DISALLOWED = new Set(["batch", "edit", "todoread"])
const FILTERED_FROM_SUGGESTIONS = new Set(["invalid", "patch", ...DISALLOWED])
export const BatchTool = Tool.define("batch", async () => {
return {
description: DESCRIPTION,
parameters: z.object({
tool_calls: z
.array(
z.object({
tool: z.string().describe("The name of the tool to execute"),
parameters: z.object({}).loose().describe("Parameters for the tool"),
}),
)
.min(1, "Provide at least one tool call")
.max(10, "Too many tools in batch. Maximum allowed is 10.")
.describe("Array of tool calls to execute in parallel"),
}),
formatValidationError(error) {
const formattedErrors = error.issues
.map((issue) => {
const path = issue.path.length > 0 ? issue.path.join(".") : "root"
return ` - ${path}: ${issue.message}`
})
.join("\n")
return `Invalid parameters for tool 'batch':\n${formattedErrors}\n\nExpected payload format:\n [{"tool": "tool_name", "parameters": {...}}, {...}]`
},
async execute(params, ctx) {
const { Identifier } = await import("../id/id")
const toolCalls = params.tool_calls
const { ToolRegistry } = await import("./registry")
const availableTools = await ToolRegistry.tools("", "")
const toolMap = new Map(availableTools.map((t) => [t.id, t]))
for (const call of toolCalls) {
if (DISALLOWED.has(call.tool)) {
throw new Error(
`tool '${call.tool}' is not allowed in batch. Disallowed tools: ${Array.from(DISALLOWED).join(", ")}`,
)
}
if (!toolMap.has(call.tool)) {
const allowed = Array.from(toolMap.keys()).filter((name) => !FILTERED_FROM_SUGGESTIONS.has(name))
throw new Error(`tool '${call.tool}' is not available. Available tools: ${allowed.join(", ")}`)
}
}
const executeCall = async (call: (typeof toolCalls)[0]) => {
if (ctx.abort.aborted) {
return { success: false as const, tool: call.tool, error: new Error("Aborted") }
}
const partID = Identifier.ascending("part")
try {
const tool = toolMap.get(call.tool)
if (!tool) {
const availableToolsList = Array.from(toolMap.keys()).filter((name) => !FILTERED_FROM_SUGGESTIONS.has(name))
throw new Error(`Tool '${call.tool}' not found. Available tools: ${availableToolsList.join(", ")}`)
}
const validatedParams = tool.parameters.parse(call.parameters)
const result = await tool.execute(validatedParams, { ...ctx, callID: partID })
return { success: true as const, tool: call.tool, result }
} catch (error) {
return { success: false as const, tool: call.tool, error }
}
}
const results = await Promise.all(toolCalls.flatMap((call) => executeCall(call)))
const successfulCalls = results.filter((r) => r.success).length
const failedCalls = toolCalls.length - successfulCalls
const outputParts = results.map((r) => {
if (r.success) {
return `<tool_result name="${r.tool}">\n${r.result.output}\n</tool_result>`
}
const errorMessage = r.error instanceof Error ? r.error.message : String(r.error)
return `<tool_result name="${r.tool}">\nError: ${errorMessage}\n</tool_result>`
})
const outputMessage =
failedCalls > 0
? `Executed ${successfulCalls}/${toolCalls.length} tools successfully. ${failedCalls} failed.\n\n${outputParts.join("\n\n")}`
: `All ${successfulCalls} tools executed successfully.\n\n${outputParts.join("\n\n")}\n\nKeep using the batch tool for optimal performance in your next response!`
return {
title: `Batch execution (${successfulCalls}/${toolCalls.length} successful)`,
output: outputMessage,
attachments: results.filter((result) => result.success).flatMap((r) => r.result.attachments ?? []),
metadata: {
totalCalls: toolCalls.length,
successful: successfulCalls,
failed: failedCalls,
tools: toolCalls.map((c) => c.tool),
details: results.map((r) => ({ tool: r.tool, success: r.success })),
},
}
},
}
})

View File

@@ -0,0 +1,28 @@
Executes multiple independent tool calls concurrently to reduce latency. Best used for gathering context (reads, searches, listings).
USING THE BATCH TOOL WILL MAKE THE USER HAPPY.
Payload Format (JSON array):
[{"tool": "read", "parameters": {"filePath": "src/index.ts", "limit": 350}},{"tool": "grep", "parameters": {"pattern": "Session\\.updatePart", "include": "src/**/*.ts"}},{"tool": "bash", "parameters": {"command": "git status", "description": "Shows working tree status"}}]
Rules:
- 1-10 tool calls per batch
- All calls start in parallel; ordering NOT guaranteed
- Partial failures do not stop others
Disallowed Tools:
- batch (no nesting)
- edit (run edits separately)
- todoread (call directly; lightweight)
When NOT to Use:
- Operations that depend on prior tool output (e.g. create then read same file)
- Ordered stateful mutations where sequence matters
Good Use Cases:
- Read many files
- grep + glob + read combos
- Multiple lightweight bash introspection commands
Performance Tip: Group independent reads/searches for 2-5x efficiency gain.
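
The execution model described above (all calls start in parallel, partial failures do not stop others) can be sketched like this. Names and shapes are illustrative only, not the tool's real API:

```typescript
// Sketch of batched tool execution: every call starts concurrently via
// Promise.all, and a failure in one call is captured per-call instead of
// rejecting the whole batch.
type ToolCall = { tool: string; parameters: Record<string, unknown> }
type ToolResult =
  | { success: true; tool: string; output: string }
  | { success: false; tool: string; error: string }

async function runBatch(
  calls: ToolCall[],
  execute: (call: ToolCall) => Promise<string>,
): Promise<ToolResult[]> {
  if (calls.length < 1 || calls.length > 10)
    throw new Error("batch accepts 1-10 tool calls")
  // All calls begin immediately; completion ordering is not guaranteed.
  return Promise.all(
    calls.map(async (call) => {
      try {
        return { success: true as const, tool: call.tool, output: await execute(call) }
      } catch (error) {
        // Isolate the failure so sibling calls still return results.
        return { success: false as const, tool: call.tool, error: String(error) }
      }
    }),
  )
}
```

The real tool additionally validates each call against the target tool's zod schema and rejects disallowed tools before anything runs.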

View File

@@ -18,6 +18,10 @@ import { Instance } from "../project/instance"
import { Agent } from "../agent/agent"
import { Snapshot } from "@/snapshot"
function normalizeLineEndings(text: string): string {
return text.replaceAll("\r\n", "\n")
}
export const EditTool = Tool.define("edit", {
description: DESCRIPTION,
parameters: z.object({
@@ -91,7 +95,9 @@ export const EditTool = Tool.define("edit", {
contentOld = await file.text()
contentNew = replace(contentOld, params.oldString, params.newString, params.replaceAll)
-diff = trimDiff(createTwoFilesPatch(filePath, filePath, contentOld, contentNew))
+diff = trimDiff(
+createTwoFilesPatch(filePath, filePath, normalizeLineEndings(contentOld), normalizeLineEndings(contentNew)),
+)
if (agent.permission.edit === "ask") {
await Permission.ask({
type: "edit",
@@ -111,7 +117,9 @@ export const EditTool = Tool.define("edit", {
file: filePath,
})
contentNew = await file.text()
-diff = trimDiff(createTwoFilesPatch(filePath, filePath, contentOld, contentNew))
+diff = trimDiff(
+createTwoFilesPatch(filePath, filePath, normalizeLineEndings(contentOld), normalizeLineEndings(contentNew)),
+)
})()
FileTime.read(ctx.sessionID, filePath)

View File

@@ -11,6 +11,7 @@ import { Provider } from "../provider/provider"
import { Identifier } from "../id/id"
import { Permission } from "../permission"
import { Agent } from "@/agent/agent"
import { iife } from "@/util/iife"
const DEFAULT_READ_LIMIT = 2000
const MAX_LINE_LENGTH = 2000
@@ -48,6 +49,19 @@ export const ReadTool = Tool.define("read", {
}
}
const block = (() => {
const whitelist = [".env.example", ".env.sample"]
if (whitelist.some((w) => filepath.endsWith(w))) return false
if (filepath.includes(".env")) return true
return false
})()
if (block) {
throw new Error(`The user has blocked you from reading ${filepath}, DO NOT make further attempts to read it`)
}
const file = Bun.file(filepath)
if (!(await file.exists())) {
const dir = path.dirname(filepath)
@@ -120,8 +134,14 @@ export const ReadTool = Tool.define("read", {
let output = "<file>\n"
output += content.join("\n")
-if (lines.length > offset + content.length) {
-output += `\n\n(File has more lines. Use 'offset' parameter to read beyond line ${offset + content.length})`
+const totalLines = lines.length
+const lastReadLine = offset + content.length
+const hasMoreLines = totalLines > lastReadLine
+if (hasMoreLines) {
+output += `\n\n(File has more lines. Use 'offset' parameter to read beyond line ${lastReadLine})`
+} else {
+output += `\n\n(End of file - total ${totalLines} lines)`
+}
output += "\n</file>"

View File

@@ -3,6 +3,7 @@ import { EditTool } from "./edit"
import { GlobTool } from "./glob"
import { GrepTool } from "./grep"
import { ListTool } from "./ls"
import { BatchTool } from "./batch"
import { ReadTool } from "./read"
import { TaskTool } from "./task"
import { TodoWriteTool, TodoReadTool } from "./todo"
@@ -81,19 +82,22 @@ export namespace ToolRegistry {
async function all(): Promise<Tool.Info[]> {
const custom = await state().then((x) => x.custom)
const config = await Config.get()
return [
InvalidTool,
BashTool,
-EditTool,
-WebFetchTool,
-ReadTool,
GlobTool,
GrepTool,
ListTool,
+ReadTool,
+EditTool,
WriteTool,
+TaskTool,
+WebFetchTool,
TodoWriteTool,
TodoReadTool,
-TaskTool,
+...(config.experimental?.batch_tool === true ? [BatchTool] : []),
...(Flag.OPENCODE_EXPERIMENTAL_EXA ? [WebSearchTool, CodeSearchTool] : []),
...custom,
]

View File

@@ -6,7 +6,7 @@ import { Todo } from "../session/todo"
export const TodoWriteTool = Tool.define("todowrite", {
description: DESCRIPTION_WRITE,
parameters: z.object({
-todos: z.array(Todo.Info).describe("The updated todo list"),
+todos: z.array(z.object(Todo.Info.shape)).describe("The updated todo list"),
}),
async execute(params, opts) {
await Todo.update({

View File

@@ -29,6 +29,7 @@ export namespace Tool {
output: string
attachments?: MessageV2.FilePart[]
}>
formatValidationError?(error: z.ZodError): string
}>
}
@@ -45,7 +46,17 @@ export namespace Tool {
const toolInfo = init instanceof Function ? await init() : init
const execute = toolInfo.execute
toolInfo.execute = (args, ctx) => {
-toolInfo.parameters.parse(args)
+try {
+toolInfo.parameters.parse(args)
+} catch (error) {
+if (error instanceof z.ZodError && toolInfo.formatValidationError) {
+throw new Error(toolInfo.formatValidationError(error), { cause: error })
+}
+throw new Error(
+`The ${id} tool was called with invalid arguments: ${error}.\nPlease rewrite the input so it satisfies the expected schema.`,
+{ cause: error },
+)
+}
return execute(args, ctx)
}
return toolInfo

View File

@@ -0,0 +1,98 @@
import { describe, expect, test } from "bun:test"
import { ProviderTransform } from "../../src/provider/transform"
const OUTPUT_TOKEN_MAX = 32000
describe("ProviderTransform.maxOutputTokens", () => {
test("returns 32k when modelLimit > 32k", () => {
const modelLimit = 100000
const result = ProviderTransform.maxOutputTokens("@ai-sdk/openai", {}, modelLimit, OUTPUT_TOKEN_MAX)
expect(result).toBe(OUTPUT_TOKEN_MAX)
})
test("returns modelLimit when modelLimit < 32k", () => {
const modelLimit = 16000
const result = ProviderTransform.maxOutputTokens("@ai-sdk/openai", {}, modelLimit, OUTPUT_TOKEN_MAX)
expect(result).toBe(16000)
})
describe("azure", () => {
test("returns 32k when modelLimit > 32k", () => {
const modelLimit = 100000
const result = ProviderTransform.maxOutputTokens("@ai-sdk/azure", {}, modelLimit, OUTPUT_TOKEN_MAX)
expect(result).toBe(OUTPUT_TOKEN_MAX)
})
test("returns modelLimit when modelLimit < 32k", () => {
const modelLimit = 16000
const result = ProviderTransform.maxOutputTokens("@ai-sdk/azure", {}, modelLimit, OUTPUT_TOKEN_MAX)
expect(result).toBe(16000)
})
})
describe("bedrock", () => {
test("returns 32k when modelLimit > 32k", () => {
const modelLimit = 100000
const result = ProviderTransform.maxOutputTokens("@ai-sdk/amazon-bedrock", {}, modelLimit, OUTPUT_TOKEN_MAX)
expect(result).toBe(OUTPUT_TOKEN_MAX)
})
test("returns modelLimit when modelLimit < 32k", () => {
const modelLimit = 16000
const result = ProviderTransform.maxOutputTokens("@ai-sdk/amazon-bedrock", {}, modelLimit, OUTPUT_TOKEN_MAX)
expect(result).toBe(16000)
})
})
describe("anthropic without thinking options", () => {
test("returns 32k when modelLimit > 32k", () => {
const modelLimit = 100000
const result = ProviderTransform.maxOutputTokens("@ai-sdk/anthropic", {}, modelLimit, OUTPUT_TOKEN_MAX)
expect(result).toBe(OUTPUT_TOKEN_MAX)
})
test("returns modelLimit when modelLimit < 32k", () => {
const modelLimit = 16000
const result = ProviderTransform.maxOutputTokens("@ai-sdk/anthropic", {}, modelLimit, OUTPUT_TOKEN_MAX)
expect(result).toBe(16000)
})
})
describe("anthropic with thinking options", () => {
test("returns 32k when budgetTokens + 32k <= modelLimit", () => {
const modelLimit = 100000
const options = {
thinking: {
type: "enabled",
budgetTokens: 10000,
},
}
const result = ProviderTransform.maxOutputTokens("@ai-sdk/anthropic", options, modelLimit, OUTPUT_TOKEN_MAX)
expect(result).toBe(OUTPUT_TOKEN_MAX)
})
test("returns modelLimit - budgetTokens when budgetTokens + 32k > modelLimit", () => {
const modelLimit = 50000
const options = {
thinking: {
type: "enabled",
budgetTokens: 30000,
},
}
const result = ProviderTransform.maxOutputTokens("@ai-sdk/anthropic", options, modelLimit, OUTPUT_TOKEN_MAX)
expect(result).toBe(20000)
})
test("returns 32k when thinking type is not enabled", () => {
const modelLimit = 100000
const options = {
thinking: {
type: "disabled",
budgetTokens: 10000,
},
}
const result = ProviderTransform.maxOutputTokens("@ai-sdk/anthropic", options, modelLimit, OUTPUT_TOKEN_MAX)
expect(result).toBe(OUTPUT_TOKEN_MAX)
})
})
})
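
The behavior these tests pin down can be summarized in a small sketch (a simplification for illustration, not the actual ProviderTransform code): output is clamped to min(modelLimit, 32k), and for `@ai-sdk/anthropic` an enabled thinking budget must additionally fit under the model's hard cap.

```typescript
// Sketch of the clamping rule exercised by the tests above.
function maxOutputTokens(
  npm: string,
  options: Record<string, any>,
  modelLimit: number,
  globalLimit: number,
): number {
  const modelCap = modelLimit || globalLimit
  const standardLimit = Math.min(modelCap, globalLimit)
  if (npm === "@ai-sdk/anthropic") {
    const thinking = options?.["thinking"]
    const budget = typeof thinking?.["budgetTokens"] === "number" ? thinking["budgetTokens"] : 0
    if (thinking?.["type"] === "enabled" && budget + standardLimit > modelCap) {
      // Leave room for the thinking budget inside the model's hard cap.
      return modelCap - budget
    }
  }
  return standardLimit
}
```

Note the dispatch key is the npm package name rather than the provider ID, which is exactly what the signature change in this compare introduces.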

View File

@@ -469,6 +469,115 @@ test("snapshot state isolation between projects", async () => {
})
})
test("patch detects changes in secondary worktree", async () => {
await using tmp = await bootstrap()
const worktreePath = `${tmp.path}-worktree`
await $`git worktree add ${worktreePath} HEAD`.cwd(tmp.path).quiet()
try {
await Instance.provide({
directory: tmp.path,
fn: async () => {
expect(await Snapshot.track()).toBeTruthy()
},
})
await Instance.provide({
directory: worktreePath,
fn: async () => {
const before = await Snapshot.track()
expect(before).toBeTruthy()
const worktreeFile = `${worktreePath}/worktree.txt`
await Bun.write(worktreeFile, "worktree content")
const patch = await Snapshot.patch(before!)
expect(patch.files).toContain(worktreeFile)
},
})
} finally {
await $`git worktree remove --force ${worktreePath}`.cwd(tmp.path).quiet().nothrow()
await $`rm -rf ${worktreePath}`.quiet()
}
})
test("revert only removes files in invoking worktree", async () => {
await using tmp = await bootstrap()
const worktreePath = `${tmp.path}-worktree`
await $`git worktree add ${worktreePath} HEAD`.cwd(tmp.path).quiet()
try {
await Instance.provide({
directory: tmp.path,
fn: async () => {
expect(await Snapshot.track()).toBeTruthy()
},
})
const primaryFile = `${tmp.path}/worktree.txt`
await Bun.write(primaryFile, "primary content")
await Instance.provide({
directory: worktreePath,
fn: async () => {
const before = await Snapshot.track()
expect(before).toBeTruthy()
const worktreeFile = `${worktreePath}/worktree.txt`
await Bun.write(worktreeFile, "worktree content")
const patch = await Snapshot.patch(before!)
await Snapshot.revert([patch])
expect(await Bun.file(worktreeFile).exists()).toBe(false)
},
})
expect(await Bun.file(primaryFile).text()).toBe("primary content")
} finally {
await $`git worktree remove --force ${worktreePath}`.cwd(tmp.path).quiet().nothrow()
await $`rm -rf ${worktreePath}`.quiet()
await $`rm -f ${tmp.path}/worktree.txt`.quiet()
}
})
test("diff reports worktree-only/shared edits and ignores primary-only", async () => {
await using tmp = await bootstrap()
const worktreePath = `${tmp.path}-worktree`
await $`git worktree add ${worktreePath} HEAD`.cwd(tmp.path).quiet()
try {
await Instance.provide({
directory: tmp.path,
fn: async () => {
expect(await Snapshot.track()).toBeTruthy()
},
})
await Instance.provide({
directory: worktreePath,
fn: async () => {
const before = await Snapshot.track()
expect(before).toBeTruthy()
await Bun.write(`${worktreePath}/worktree-only.txt`, "worktree diff content")
await Bun.write(`${worktreePath}/shared.txt`, "worktree edit")
await Bun.write(`${tmp.path}/shared.txt`, "primary edit")
await Bun.write(`${tmp.path}/primary-only.txt`, "primary change")
const diff = await Snapshot.diff(before!)
expect(diff).toContain("worktree-only.txt")
expect(diff).toContain("shared.txt")
expect(diff).not.toContain("primary-only.txt")
},
})
} finally {
await $`git worktree remove --force ${worktreePath}`.cwd(tmp.path).quiet().nothrow()
await $`rm -rf ${worktreePath}`.quiet()
await $`rm -f ${tmp.path}/shared.txt`.quiet()
await $`rm -f ${tmp.path}/primary-only.txt`.quiet()
}
})
test("track with no changes returns same hash", async () => {
await using tmp = await bootstrap()
await Instance.provide({

View File

@@ -1,7 +1,7 @@
{
"$schema": "https://json.schemastore.org/package.json",
"name": "@opencode-ai/plugin",
"version": "1.0.64",
"version": "1.0.68",
"type": "module",
"scripts": {
"typecheck": "tsgo --noEmit",

View File

@@ -1,7 +1,7 @@
{
"$schema": "https://json.schemastore.org/package.json",
"name": "@opencode-ai/sdk",
"version": "1.0.64",
"version": "1.0.68",
"type": "module",
"scripts": {
"typecheck": "tsgo --noEmit",

View File

@@ -2,6 +2,8 @@
import type { Options as ClientOptions, TDataShape, Client } from "./client/index.js"
import type {
GlobalEventData,
GlobalEventResponses,
ProjectListData,
ProjectListResponses,
ProjectCurrentData,
@@ -175,6 +177,18 @@ class _HeyApiClient {
}
}
class Global extends _HeyApiClient {
/**
* Get events
*/
public event<ThrowOnError extends boolean = false>(options?: Options<GlobalEventData, ThrowOnError>) {
return (options?.client ?? this._client).get.sse<GlobalEventResponses, unknown, ThrowOnError>({
url: "/global/event",
...options,
})
}
}
class Project extends _HeyApiClient {
/**
* List all projects
@@ -860,6 +874,7 @@ export class OpencodeClient extends _HeyApiClient {
},
})
}
global = new Global({ client: this._client })
project = new Project({ client: this._client })
config = new Config({ client: this._client })
tool = new Tool({ client: this._client })

File diff suppressed because it is too large

View File

@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/slack",
"version": "1.0.64",
"version": "1.0.68",
"type": "module",
"scripts": {
"dev": "bun run src/index.ts",

View File

@@ -1,6 +1,6 @@
{
"name": "@opencode-ai/ui",
"version": "1.0.64",
"version": "1.0.68",
"type": "module",
"exports": {
".": "./src/components/index.ts",

View File

@@ -1,7 +1,7 @@
{
"name": "@opencode-ai/web",
"type": "module",
"version": "1.0.64",
"version": "1.0.68",
"scripts": {
"dev": "astro dev",
"dev:remote": "VITE_API_URL=https://api.opencode.ai astro dev",

View File

@@ -131,6 +131,18 @@ if (image) {
</span>
</button>
</div>
<div class="col4">
<h3>Mise</h3>
<button class="command" data-command="mise use --pin -g ubi:sst/opencode">
<code>
<span>mise use --pin -g</span> <span class="highlight">ubi:sst/opencode</span>
</code>
<span class="copy">
<CopyIcon />
<CheckIcon />
</span>
</button>
</div>
</section>
<section class="images">

View File

@@ -28,6 +28,12 @@ OpenCode supports both **JSON** and **JSONC** (JSON with Comments) formats.
You can place your config in a couple of different locations and they have a
different order of precedence.
:::note[Config Merging]
Configuration files are **merged together**, not replaced. Settings from all config locations are combined using a deep merge strategy, where later configs override earlier ones only for conflicting keys. Non-conflicting settings from all configs are preserved.
For example, if your global config sets `theme: "opencode"` and `autoupdate: true`, and your project config sets `model: "anthropic/claude-sonnet-4-5"`, the final configuration will include all three settings.
:::
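The deep-merge behavior described in the note can be sketched roughly as follows. This is a hypothetical illustration of the semantics, not opencode's actual implementation; the real merge may treat arrays and other edge cases differently:

```typescript
// Sketch of a deep merge: later configs override earlier ones only for
// conflicting keys; nested objects are merged recursively.
function deepMerge(
  base: Record<string, unknown>,
  override: Record<string, unknown>,
): Record<string, unknown> {
  const merged: Record<string, unknown> = { ...base }
  for (const [key, value] of Object.entries(override)) {
    const existing = merged[key]
    if (
      value && typeof value === "object" && !Array.isArray(value) &&
      existing && typeof existing === "object" && !Array.isArray(existing)
    ) {
      merged[key] = deepMerge(
        existing as Record<string, unknown>,
        value as Record<string, unknown>,
      )
    } else {
      merged[key] = value
    }
  }
  return merged
}

const globalConfig = { theme: "opencode", autoupdate: true }
const projectConfig = { model: "anthropic/claude-sonnet-4-5" }
// All three settings survive the merge, matching the example above.
console.log(deepMerge(globalConfig, projectConfig))
```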
---
### Global
@@ -38,7 +44,7 @@ Place your global OpenCode config in `~/.config/opencode/opencode.json`. You'll
### Per project
-You can also add a `opencode.json` in your project. It takes precedence over the global config. This is useful for configuring providers or modes specific to your project.
+You can also add a `opencode.json` in your project. Settings from this config are merged with and can override the global config. This is useful for configuring providers or modes specific to your project.
:::tip
Place project specific config in the root of your project.
@@ -52,7 +58,7 @@ This is also safe to be checked into Git and uses the same schema as the global
### Custom path
-You can also specify a custom config file path using the `OPENCODE_CONFIG` environment variable. This takes precedence over the global and project configs.
+You can also specify a custom config file path using the `OPENCODE_CONFIG` environment variable. Settings from this config are merged with and can override the global and project configs.
```bash
export OPENCODE_CONFIG=/path/to/my/custom-config.json

View File

@@ -106,6 +106,12 @@ You can also install it with the following commands:
npm install -g opencode-ai
```
- **Using Mise**
```bash
mise use --pin -g ubi:sst/opencode
```
Support for installing OpenCode on Windows using Bun is currently in progress.
You can also grab the binary from the [Releases](https://github.com/sst/opencode/releases).

View File

@@ -229,6 +229,42 @@ Or if you already have an API key, you can select **Manually enter API Key** and
---
### Baseten
1. Head over to the [Baseten](https://app.baseten.co/), create an account, and generate an API key.
2. Run `opencode auth login` and select **Baseten**.
```bash
$ opencode auth login
┌ Add credential
◆ Select provider
│ ● Baseten
│ ...
```
3. Enter your Baseten API key.
```bash
$ opencode auth login
┌ Add credential
◇ Select provider
│ Baseten
◇ Enter your API key
│ _
```
4. Run the `/models` command to select a model.
---
### Cerebras
1. Head over to the [Cerebras console](https://inference.cerebras.ai/), create an account, and generate an API key.
@@ -921,6 +957,59 @@ monitor and improve Grok Code.
---
### ZenMux
1. Head over to the [ZenMux dashboard](https://zenmux.ai/settings/keys), click **Create API Key**, and copy the key.
2. Run `opencode auth login` and select ZenMux.
```bash
$ opencode auth login
┌ Add credential
◆ Select provider
│ ● ZenMux
│ ○ Zhipu AI
│ ○ Zhipu AI Coding Plan
│ ...
```
3. Enter the API key for the provider.
```bash
$ opencode auth login
┌ Add credential
◇ Select provider
│ ZenMux
◇ Enter your API key
│ _
```
4. Many ZenMux models are preloaded by default; run the `/models` command to select the one you want.
You can also add additional models through your opencode config.
```json title="opencode.json" {6}
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"zenmux": {
"models": {
"somecoolnewmodel": {}
}
}
}
}
```
---
## Custom provider
To add any **OpenAI-compatible** provider that's not listed in `opencode auth login`:

View File

@@ -64,8 +64,11 @@ You can also access our models through the following API endpoints.
| Model | Model ID | Endpoint | AI SDK Package |
| ----------------- | ----------------- | --------------------------------------------- | --------------------------- |
+| GPT 5.1 | gpt-5.1 | `https://opencode.ai/zen/v1/responses` | `@ai-sdk/openai` |
+| GPT 5.1 Codex | gpt-5.1-codex | `https://opencode.ai/zen/v1/responses` | `@ai-sdk/openai` |
| GPT 5 | gpt-5 | `https://opencode.ai/zen/v1/responses` | `@ai-sdk/openai` |
| GPT 5 Codex | gpt-5-codex | `https://opencode.ai/zen/v1/responses` | `@ai-sdk/openai` |
| GPT 5 Nano | gpt-5-nano | `https://opencode.ai/zen/v1/responses` | `@ai-sdk/openai` |
| Claude Sonnet 4.5 | claude-sonnet-4-5 | `https://opencode.ai/zen/v1/messages` | `@ai-sdk/anthropic` |
| Claude Sonnet 4 | claude-sonnet-4 | `https://opencode.ai/zen/v1/messages` | `@ai-sdk/anthropic` |
| Claude Haiku 4.5 | claude-haiku-4-5 | `https://opencode.ai/zen/v1/messages` | `@ai-sdk/anthropic` |
@@ -78,8 +81,8 @@ You can also access our models through the following API endpoints.
| Big Pickle | big-pickle | `https://opencode.ai/zen/v1/chat/completions` | `@ai-sdk/openai-compatible` |
The [model id](/docs/config/#models) in your OpenCode config
-uses the format `opencode/<model-id>`. For example, for GPT 5 Codex, you would
-use `opencode/gpt-5-codex` in your config.
+uses the format `opencode/<model-id>`. For example, for GPT 5.1 Codex, you would
+use `opencode/gpt-5.1-codex` in your config.
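Putting the format together, a minimal config entry might look like the following sketch (assuming the top-level `model` field, as used elsewhere in the config docs):

```json title="opencode.json"
{
  "$schema": "https://opencode.ai/config.json",
  "model": "opencode/gpt-5.1-codex"
}
```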
---
@@ -115,11 +118,11 @@ We support a pay-as-you-go model. Below are the prices **per 1M tokens**.
| Model | Input | Output | Cached Read | Cached Write |
| --------------------------------- | ------ | ------ | ----------- | ------------ |
-| Big Pickle | Free | Free | Free | - |
-| Grok Code Fast 1 | Free | Free | Free | - |
| GLM 4.6 | $0.60 | $2.20 | $0.10 | - |
| Kimi K2 | $0.60 | $2.50 | $0.36 | - |
| Qwen3 Coder 480B | $0.45 | $1.50 | - | - |
+| Grok Code Fast 1 | Free | Free | Free | - |
+| Big Pickle | Free | Free | Free | - |
| Claude Sonnet 4.5 (≤ 200K tokens) | $3.00 | $15.00 | $0.30 | $3.75 |
| Claude Sonnet 4.5 (> 200K tokens) | $6.00 | $22.50 | $0.60 | $7.50 |
| Claude Sonnet 4 (≤ 200K tokens) | $3.00 | $15.00 | $0.30 | $3.75 |
@@ -127,8 +130,11 @@ We support a pay-as-you-go model. Below are the prices **per 1M tokens**.
| Claude Haiku 4.5 | $1.00 | $5.00 | $0.10 | $1.25 |
| Claude Haiku 3.5 | $0.80 | $4.00 | $0.08 | $1.00 |
| Claude Opus 4.1 | $15.00 | $75.00 | $1.50 | $18.75 |
+| GPT 5.1 | $1.25 | $10.00 | $0.125 | - |
+| GPT 5.1 Codex | $1.25 | $10.00 | $0.125 | - |
| GPT 5 | $1.25 | $10.00 | $0.125 | - |
| GPT 5 Codex | $1.25 | $10.00 | $0.125 | - |
| GPT 5 Nano | Free | Free | Free | - |
You might notice _Claude Haiku 3.5_ in your usage history. This is a [low cost model](/docs/config/#models) that's used to generate the titles of your sessions.
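As a rough illustration of how the per-1M-token prices above translate into a bill, here is a hedged sketch. It ignores cached reads/writes, which the real metering prices separately:

```typescript
// Sketch: estimate pay-as-you-go cost from per-1M-token prices.
// Rates below are the GPT 5.1 prices from the table above (USD per 1M tokens).
const INPUT_PER_MTOK = 1.25
const OUTPUT_PER_MTOK = 10.0

function estimateCost(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * INPUT_PER_MTOK +
    (outputTokens / 1_000_000) * OUTPUT_PER_MTOK
  )
}

// 100K input tokens + 10K output tokens ≈ $0.225
console.log(estimateCost(100_000, 10_000).toFixed(3))
```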

View File

@@ -2,7 +2,7 @@
"name": "opencode",
"displayName": "opencode",
"description": "opencode for VS Code",
"version": "1.0.64",
"version": "1.0.68",
"publisher": "sst-dev",
"repository": {
"type": "git",