Compare commits

...

30 Commits

Author SHA1 Message Date
Isaac Scarrott
98e2910e82 feat: Add support for OpenRouter (#92)
* Add support for OpenRouter as a new model provider

- Introduced `ProviderOpenRouter` in the `models` package.
- Added OpenRouter-specific models, including `GPT41`, `GPT41Mini`, `GPT4o`, and others, with their configurations and costs.
- Updated `generateSchema` to include OpenRouter as a provider.
- Added OpenRouter-specific environment variable handling (`OPENROUTER_API_KEY`) in `config.go`.
- Implemented default model settings for OpenRouter agents in `setDefaultModelForAgent`.
- Updated `getProviderAPIKey` to retrieve the OpenRouter API key.
- Extended `SupportedModels` to include OpenRouter models.
- Added OpenRouter client initialization in the `provider` package.
- Modified `processGeneration` to handle `FinishReasonUnknown` in addition to `FinishReasonToolUse`.

* [feature/openrouter-provider] Add new models and provider to schema

- Added "deepseek-chat-free" and "deepseek-r1-free" to the list of supported models in `opencode-schema.json`.

* [feature/openrouter-provider] Add OpenRouter provider support and integrate new models

- Updated README.md to include OpenRouter as a supported provider and its configuration details.
- Added `OPENROUTER_API_KEY` to environment variable configuration.
- Introduced OpenRouter-specific models in `internal/llm/models/openrouter.go` with mappings to existing cost and token configurations.
- Updated `internal/config/config.go` to set default models for OpenRouter agents.
- Extended `opencode-schema.json` to include OpenRouter models in the schema definitions.
- Refactored model IDs and names to align with OpenRouter naming conventions.

* [feature/openrouter-provider] Refactor finish reason handling and tool call logic in agent and OpenAI provider

- Simplified finish reason check in `agent.go` by removing redundant variable assignment.
- Updated `openai.go` to override the finish reason to `FinishReasonToolUse` when tool calls are present.
- Ensured consistent finish reason handling in both `send` and `stream` methods of the OpenAI provider.
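The override described in this commit can be sketched as follows; the constant and type names are taken from the commit message, while the function shape and everything else is an illustrative assumption, not the actual opencode implementation:

```go
package main

import "fmt"

// FinishReason mirrors the provider-level finish reasons named in the commit
// message; the exact constant values are assumptions for illustration.
type FinishReason string

const (
	FinishReasonUnknown FinishReason = "unknown"
	FinishReasonToolUse FinishReason = "tool_use"
)

type ToolCall struct{ Name string }

// normalizeFinishReason overrides the finish reason to FinishReasonToolUse
// whenever tool calls are present, so both the send and stream paths see a
// consistent value regardless of what the upstream API reported.
func normalizeFinishReason(reason FinishReason, toolCalls []ToolCall) FinishReason {
	if len(toolCalls) > 0 {
		return FinishReasonToolUse
	}
	return reason
}

func main() {
	fmt.Println(normalizeFinishReason(FinishReasonUnknown, []ToolCall{{Name: "bash"}}))
}
```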

* [feature/openrouter-provider] Add support for custom headers in OpenAI client configuration

- Introduced a new `extraHeaders` field in the `openaiOptions` struct to allow specifying additional HTTP headers.
- Added logic in `newOpenAIClient` to apply `extraHeaders` to the OpenAI client configuration.
- Implemented a new option function `WithOpenAIExtraHeaders` to set custom headers in `openaiOptions`.
- Updated the OpenRouter provider configuration in `NewProvider` to include default headers (`HTTP-Referer` and `X-Title`) for OpenRouter API requests.
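The `extraHeaders` change follows the functional-options pattern. A minimal sketch, assuming the field and option names from the commit message (the surrounding types are simplified stand-ins for the real client options):

```go
package main

import "fmt"

// openaiOptions is a simplified stand-in for the real options struct; only
// the extraHeaders field and the option function name come from the commit.
type openaiOptions struct {
	extraHeaders map[string]string
}

type OpenAIOption func(*openaiOptions)

// WithOpenAIExtraHeaders sets additional HTTP headers to apply to every
// request the client makes.
func WithOpenAIExtraHeaders(headers map[string]string) OpenAIOption {
	return func(o *openaiOptions) {
		o.extraHeaders = headers
	}
}

// newOpenAIOptions applies each option in order to a zero-valued struct.
func newOpenAIOptions(opts ...OpenAIOption) openaiOptions {
	var o openaiOptions
	for _, opt := range opts {
		opt(&o)
	}
	return o
}

func main() {
	// OpenRouter's default headers from the commit message.
	o := newOpenAIOptions(WithOpenAIExtraHeaders(map[string]string{
		"HTTP-Referer": "https://opencode.ai",
		"X-Title":      "OpenCode",
	}))
	fmt.Println(o.extraHeaders["X-Title"])
}
```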

* Update OpenRouter model config and remove unsupported models

* [feature/openrouter-provider] Update OpenRouter models and default configurations

- Added new OpenRouter models: `claude-3.5-sonnet`, `claude-3-haiku`, `claude-3.7-sonnet`, `claude-3.5-haiku`, and `claude-3-opus` in `openrouter.go`.
- Updated default agent models in `config.go`:
  - `agents.coder.model` now uses `claude-3.7-sonnet`.
  - `agents.task.model` now uses `claude-3.7-sonnet`.
  - `agents.title.model` now uses `claude-3.5-haiku`.
- Updated `opencode-schema.json` to include the new models in the allowed list for schema validation.
- Adjusted logic in `setDefaultModelForAgent` to reflect the new default models.
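The per-agent defaults above can be pictured as a simple mapping; the model IDs come from the commit message, but the switch shape is only a guess at how `setDefaultModelForAgent` is organized:

```go
package main

import "fmt"

// defaultOpenRouterModel returns the default model ID for an agent, using
// the OpenRouter defaults listed in the commit message. This is a sketch,
// not the actual config.go logic.
func defaultOpenRouterModel(agent string) string {
	switch agent {
	case "coder", "task":
		return "claude-3.7-sonnet"
	case "title":
		return "claude-3.5-haiku"
	default:
		return "claude-3.7-sonnet"
	}
}

func main() {
	fmt.Println(defaultOpenRouterModel("title"))
}
```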

* [feature/openrouter-provider] Remove unused ProviderEvent emission in stream function

The changes remove the emission of a `ProviderEvent` with type `EventContentStop` in the `stream` function of the `openaiClient` implementation. This event was sent upon successful stream completion but is no longer used.
2025-04-29 13:56:49 +02:00
Kujtim Hoxha
2941137416 fix diagnostics for deleted files 2025-04-28 19:37:42 +02:00
Aiden Cline
b3c0285db3 feat: model selection for given provider (#57)
* feat: model selection for given provider

* tweak: adjust cfg validation func, remove duplicated logic, consolidate agent updating into agent.go

* tweak: make the model dialog scrollable, adjust padding slightly for modal

* feat: add provider selection, add hints, simplify some logic, add horizontal scrolling support, additional scroll indicators

* remove nav help

* update docs

* increase number of visible models, make horizontal scroll "wrap"

* add provider popularity rankings
2025-04-28 19:25:06 +02:00
YJG
805aeff83c feat: add azure openai models (#74) 2025-04-28 15:42:57 +02:00
Kujtim Hoxha
bce2ec5c10 fix duplicate context 2025-04-27 20:43:27 +02:00
Kujtim Hoxha
292e9d90ca remove unnecessary var 2025-04-27 20:34:20 +02:00
Kujtim Hoxha
2b4441a0d1 fix context 2025-04-27 20:31:53 +02:00
Garrett Ladley
8f3a94df92 feat: configure context paths (#86) 2025-04-27 20:11:09 +02:00
Kujtim Hoxha
4415220555 fix minor issue 2025-04-27 19:24:46 +02:00
Kujtim Hoxha
a3a04d8a54 fix gemini provider 2025-04-27 19:12:02 +02:00
Lukáš Loukota
792e2b164b fix: gemini tool calling 2025-04-27 19:12:02 +02:00
Kujtim Hoxha
5859dcdc00 small glob fixes 2025-04-27 18:01:31 +02:00
isaac-scarrott
3c2b0f4dd0 [feature/ripgrep-glob] Add ripgrep-based file globbing to improve performance
- Introduced `globWithRipgrep` function to perform file globbing using the `rg` (ripgrep) command.
- Updated `globFiles` to prioritize ripgrep-based globbing and fall back to doublestar-based globbing if ripgrep fails.
- Added logic to handle ripgrep command execution, output parsing, and filtering of hidden files.
- Ensured results are sorted by path length and limited to the specified maximum number of matches.
- Modified imports to include `os/exec` and `bytes` for ripgrep integration.
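The approach above can be sketched like this. The commit only says the `rg` command is used, so the flags, the helper split, and the function signatures here are assumptions; on any error the caller is expected to fall back to doublestar globbing:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"sort"
	"strings"
)

// parseRipgrepOutput turns rg's newline-separated file list into a slice of
// matches: hidden files are skipped, results are sorted by path length, and
// the slice is capped at limit entries.
func parseRipgrepOutput(out []byte, limit int) []string {
	var matches []string
	for _, line := range bytes.Split(out, []byte("\n")) {
		p := string(bytes.TrimSpace(line))
		if p == "" || strings.HasPrefix(p, ".") {
			continue // blank trailing lines and hidden files
		}
		matches = append(matches, p)
	}
	sort.Slice(matches, func(i, j int) bool { return len(matches[i]) < len(matches[j]) })
	if limit > 0 && len(matches) > limit {
		matches = matches[:limit]
	}
	return matches
}

// globWithRipgrep shells out to rg and parses its output. The flag choices
// are illustrative; an error here signals the caller to fall back.
func globWithRipgrep(pattern, dir string, limit int) ([]string, error) {
	cmd := exec.Command("rg", "--files", "--glob", pattern)
	cmd.Dir = dir
	out, err := cmd.Output()
	if err != nil {
		return nil, err
	}
	return parseRipgrepOutput(out, limit), nil
}

func main() {
	// Demonstrate the parsing step on sample rg output.
	out := []byte("internal/llm/models/openrouter.go\nmain.go\n.gitignore\n")
	fmt.Println(parseRipgrepOutput(out, 10))
}
```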
2025-04-27 18:01:31 +02:00
Kujtim Hoxha
9738886620 fix provider config 2025-04-27 14:44:40 +02:00
Sam Ottenhoff
f3dccad54b Handle new Cursor rules format
1. Check if a path ends with a slash (/)
2. If it does, treat it as a directory and read all files within it
3. For directories like .cursor/rules/, it will scan all files and include their content in the prompt
4. Each file from a directory will be prefixed with "# From filename" for clarity
2025-04-27 14:17:06 +02:00
Kujtim Hoxha
b3a8dbd0d9 fix retry warning 2025-04-27 14:08:09 +02:00
Garrett Mitchell Ladley
d93694a979 feat: simpler diff implementation 2025-04-27 13:56:57 +02:00
Fuad
8a4d4152ce use workingDir when shellInstance is nil, otherwise use cwd 2025-04-27 13:46:59 +02:00
Fuad
f12386e558 use provided working dir 2025-04-27 13:46:59 +02:00
Fuad
94aeb7b7fe Fix nil pointer dereference in GetPersistentShell
Added nil check in GetPersistentShell before accessing
shellInstance.isAlive to prevent panic when newPersistentShell
returns nil due to shell startup errors. This resolves the
"invalid memory address or nil pointer dereference" error that
was occurring in the shell tool.
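The guard described in this commit can be sketched as below. Only the nil check before `shellInstance.isAlive` comes from the commit message; the types and the startup-failure simulation are illustrative stand-ins:

```go
package main

import "fmt"

// persistentShell stands in for the real shell type.
type persistentShell struct{ isAlive bool }

var shellInstance *persistentShell

// newPersistentShell may return nil when the underlying shell fails to
// start; an empty workingDir simulates that failure here.
func newPersistentShell(workingDir string) *persistentShell {
	if workingDir == "" {
		return nil
	}
	return &persistentShell{isAlive: true}
}

// GetPersistentShell checks for nil before touching shellInstance.isAlive,
// avoiding the nil pointer dereference the commit fixes.
func GetPersistentShell(workingDir string) *persistentShell {
	if shellInstance == nil || !shellInstance.isAlive {
		shellInstance = newPersistentShell(workingDir)
	}
	return shellInstance
}

func main() {
	// Startup failure is now handled without a panic; the caller just
	// receives nil and can report the error.
	fmt.Println(GetPersistentShell("") == nil)
	fmt.Println(GetPersistentShell("/tmp") == nil)
}
```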
2025-04-27 13:46:59 +02:00
Kujtim Hoxha
a35466cdb3 fix acc error 2025-04-25 21:58:14 +02:00
Kujtim Hoxha
170c7ad67a small fixes 2025-04-25 14:42:47 +02:00
Hunter Casten
7a62ab7675 feat(groq): add support for Groq using the OpenAI provider 2025-04-25 11:11:52 +02:00
Kujtim Hoxha
1586d757dc remove tool timeout 2025-04-24 22:35:17 +02:00
Dax Raad
d043526200 add more installation options 2025-04-24 16:34:57 -04:00
Kujtim Hoxha
aaf0bc14ba try fix 2025-04-24 22:27:51 +02:00
Kujtim Hoxha
f2d9bb7ee3 try fix 2025-04-24 22:27:51 +02:00
Kujtim Hoxha
de41703e20 change db driver 2025-04-24 22:27:51 +02:00
Kujtim Hoxha
2c24bfb7b3 fix kitty issues 2025-04-24 19:57:04 +02:00
Dax Raad
47a37b7dd6 back to disabling cgo 2025-04-24 12:46:11 -04:00
33 changed files with 1943 additions and 670 deletions

View File

@@ -4,7 +4,7 @@ before:
hooks:
builds:
- env:
- CGO_ENABLED=1
- CGO_ENABLED=0
goos:
- linux
- darwin
@@ -17,7 +17,6 @@ builds:
archives:
- format: tar.gz
# this name template makes the OS and Arch compatible with the results of uname.
name_template: >-
opencode-
{{- if eq .Os "darwin" }}mac-
@@ -27,7 +26,6 @@ archives:
{{- else if eq .Arch "386" }}i386
{{- else }}{{ .Arch }}{{ end }}
{{- if .Arm }}v{{ .Arm }}{{ end }}
# use zip for windows archives
format_overrides:
- goos: windows
format: zip

View File

@@ -11,7 +11,7 @@ OpenCode is a Go-based CLI application that brings AI assistance to your termina
## Features
- **Interactive TUI**: Built with [Bubble Tea](https://github.com/charmbracelet/bubbletea) for a smooth terminal experience
- **Multiple AI Providers**: Support for OpenAI, Anthropic Claude, Google Gemini, AWS Bedrock, and Groq
- **Multiple AI Providers**: Support for OpenAI, Anthropic Claude, Google Gemini, AWS Bedrock, Groq, Azure OpenAI, and OpenRouter
- **Session Management**: Save and manage multiple conversation sessions
- **Tool Integration**: AI can execute commands, search files, and modify code
- **Vim-like Editor**: Integrated editor with text input capabilities
@@ -22,8 +22,35 @@ OpenCode is a Go-based CLI application that brings AI assistance to your termina
## Installation
### Using the Install Script
```bash
# Install the latest version
curl -fsSL https://opencode.ai/install | bash
# Install a specific version
curl -fsSL https://opencode.ai/install | VERSION=0.1.0 bash
```
### Using Homebrew (macOS and Linux)
```bash
brew install opencode-ai/tap/opencode
```
### Using AUR (Arch Linux)
```bash
# Using yay
yay -S opencode-bin
# Using paru
paru -S opencode-bin
```
### Using Go
```bash
# Coming soon
go install github.com/opencode-ai/opencode@latest
```
@@ -39,15 +66,19 @@ OpenCode looks for configuration in the following locations:
You can configure OpenCode using environment variables:
| Environment Variable | Purpose |
| ----------------------- | ------------------------ |
| `ANTHROPIC_API_KEY` | For Claude models |
| `OPENAI_API_KEY` | For OpenAI models |
| `GEMINI_API_KEY` | For Google Gemini models |
| `GROQ_API_KEY` | For Groq models |
| `AWS_ACCESS_KEY_ID` | For AWS Bedrock (Claude) |
| `AWS_SECRET_ACCESS_KEY` | For AWS Bedrock (Claude) |
| `AWS_REGION` | For AWS Bedrock (Claude) |
| Environment Variable | Purpose |
|----------------------------|--------------------------------------------------------|
| `ANTHROPIC_API_KEY` | For Claude models |
| `OPENAI_API_KEY` | For OpenAI models |
| `GEMINI_API_KEY` | For Google Gemini models |
| `GROQ_API_KEY` | For Groq models |
| `AWS_ACCESS_KEY_ID` | For AWS Bedrock (Claude) |
| `AWS_SECRET_ACCESS_KEY` | For AWS Bedrock (Claude) |
| `AWS_REGION` | For AWS Bedrock (Claude) |
| `AZURE_OPENAI_ENDPOINT` | For Azure OpenAI models |
| `AZURE_OPENAI_API_KEY` | For Azure OpenAI models (optional when using Entra ID) |
| `AZURE_OPENAI_API_VERSION` | For Azure OpenAI models |
### Configuration File Structure
@@ -64,6 +95,14 @@ You can configure OpenCode using environment variables:
"anthropic": {
"apiKey": "your-api-key",
"disabled": false
},
"groq": {
"apiKey": "your-api-key",
"disabled": false
},
"openrouter": {
"apiKey": "your-api-key",
"disabled": false
}
},
"agents": {
@@ -131,6 +170,23 @@ OpenCode supports a variety of AI models from different providers:
- Claude 3.7 Sonnet
### Groq
- Llama 4 Maverick (17b-128e-instruct)
- Llama 4 Scout (17b-16e-instruct)
- QWEN QWQ-32b
- Deepseek R1 distill Llama 70b
- Llama 3.3 70b Versatile
### Azure OpenAI
- GPT-4.1 family (gpt-4.1, gpt-4.1-mini, gpt-4.1-nano)
- GPT-4.5 Preview
- GPT-4o family (gpt-4o, gpt-4o-mini)
- O1 family (o1, o1-mini)
- O3 family (o3, o3-mini)
- O4 Mini
## Usage
```bash
@@ -164,6 +220,7 @@ opencode -c /path/to/project
| `Ctrl+L` | View logs |
| `Ctrl+A` | Switch session |
| `Ctrl+K` | Command dialog |
| `Ctrl+O` | Toggle model selection dialog |
| `Esc` | Close current overlay/dialog or return to previous mode |
### Chat Page Shortcuts
@@ -193,6 +250,16 @@ opencode -c /path/to/project
| `Enter` | Select session |
| `Esc` | Close dialog |
### Model Dialog Shortcuts
| Shortcut | Action |
| ---------- | ----------------- |
| `↑` or `k` | Move up |
| `↓` or `j` | Move down |
| `←` or `h` | Previous provider |
| `→` or `l` | Next provider |
| `Esc` | Close dialog |
### Permission Dialog Shortcuts
| Shortcut | Action |

View File

@@ -77,6 +77,27 @@ func generateSchema() map[string]any {
"default": false,
}
schema["properties"].(map[string]any)["contextPaths"] = map[string]any{
"type": "array",
"description": "Context paths for the application",
"items": map[string]any{
"type": "string",
},
"default": []string{
".github/copilot-instructions.md",
".cursorrules",
".cursor/rules/",
"CLAUDE.md",
"CLAUDE.local.md",
"opencode.md",
"opencode.local.md",
"OpenCode.md",
"OpenCode.local.md",
"OPENCODE.md",
"OPENCODE.local.md",
},
}
// Add MCP servers
schema["properties"].(map[string]any)["mcpServers"] = map[string]any{
"type": "object",
@@ -152,7 +173,9 @@ func generateSchema() map[string]any {
string(models.ProviderOpenAI),
string(models.ProviderGemini),
string(models.ProviderGROQ),
string(models.ProviderOpenRouter),
string(models.ProviderBedrock),
string(models.ProviderAzure),
}
providerSchema["additionalProperties"].(map[string]any)["properties"].(map[string]any)["provider"] = map[string]any{
@@ -259,4 +282,3 @@ func generateSchema() map[string]any {
return schema
}

55
go.mod
View File

@@ -5,10 +5,12 @@ go 1.24.0
toolchain go1.24.2
require (
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.7.0
github.com/JohannesKaufmann/html-to-markdown v1.6.0
github.com/PuerkitoBio/goquery v1.9.2
github.com/alecthomas/chroma/v2 v2.15.0
github.com/anthropics/anthropic-sdk-go v0.2.0-beta.2
github.com/aymanbagabas/go-udiff v0.2.0
github.com/bmatcuk/doublestar/v4 v4.8.1
github.com/catppuccin/go v0.3.0
github.com/charmbracelet/bubbles v0.20.0
@@ -18,19 +20,17 @@ require (
github.com/charmbracelet/lipgloss v1.1.0
github.com/charmbracelet/x/ansi v0.8.0
github.com/fsnotify/fsnotify v1.8.0
github.com/go-git/go-git/v5 v5.15.0
github.com/go-logfmt/logfmt v0.6.0
github.com/golang-migrate/migrate/v4 v4.18.2
github.com/google/generative-ai-go v0.19.0
github.com/google/uuid v1.6.0
github.com/lrstanley/bubblezone v0.0.0-20250315020633-c249a3fe1231
github.com/mark3labs/mcp-go v0.17.0
github.com/mattn/go-runewidth v0.0.16
github.com/mattn/go-sqlite3 v1.14.24
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6
github.com/muesli/reflow v0.3.0
github.com/muesli/termenv v0.16.0
github.com/ncruces/go-sqlite3 v0.25.0
github.com/openai/openai-go v0.1.0-beta.2
github.com/pressly/goose/v3 v3.24.2
github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3
github.com/spf13/cobra v1.9.1
github.com/spf13/viper v1.20.0
@@ -45,9 +45,9 @@ require (
cloud.google.com/go/auth/oauth2adapt v0.2.6 // indirect
cloud.google.com/go/compute/metadata v0.6.0 // indirect
cloud.google.com/go/longrunning v0.5.7 // indirect
dario.cat/mergo v1.0.0 // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/ProtonMail/go-crypto v1.1.6 // indirect
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.17.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2 // indirect
github.com/andybalholm/cascadia v1.3.2 // indirect
github.com/atotto/clipboard v0.1.4 // indirect
github.com/aws/aws-sdk-go-v2 v1.30.3 // indirect
@@ -70,62 +70,58 @@ require (
github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd // indirect
github.com/charmbracelet/x/exp/strings v0.0.0-20240722160745-212f7b056ed0 // indirect
github.com/charmbracelet/x/term v0.2.1 // indirect
github.com/cloudflare/circl v1.6.1 // indirect
github.com/cyphar/filepath-securejoin v0.4.1 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/dlclark/regexp2 v1.11.4 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/emirpasic/gods v1.18.1 // indirect
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 // indirect
github.com/go-git/go-billy/v5 v5.6.2 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-viper/mapstructure/v2 v2.2.1 // indirect
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 // indirect
github.com/golang-jwt/jwt/v5 v5.2.2 // indirect
github.com/google/s2a-go v0.1.8 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.4 // indirect
github.com/googleapis/gax-go/v2 v2.14.1 // indirect
github.com/gorilla/css v1.0.1 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 // indirect
github.com/kevinburke/ssh_config v1.2.0 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-localereader v0.0.1 // indirect
github.com/mattn/go-runewidth v0.0.16 // indirect
github.com/mfridman/interpolate v0.0.2 // indirect
github.com/microcosm-cc/bluemonday v1.0.27 // indirect
github.com/mitchellh/hashstructure/v2 v2.0.2 // indirect
github.com/muesli/cancelreader v0.2.2 // indirect
github.com/ncruces/julianday v1.0.0 // indirect
github.com/pelletier/go-toml/v2 v2.2.3 // indirect
github.com/pjbgf/sha1cd v0.3.2 // indirect
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/rogpeppe/go-internal v1.14.1 // indirect
github.com/sagikazarmark/locafero v0.7.0 // indirect
github.com/skeema/knownhosts v1.3.1 // indirect
github.com/sethvargo/go-retry v0.3.0 // indirect
github.com/sourcegraph/conc v0.3.0 // indirect
github.com/spf13/afero v1.12.0 // indirect
github.com/spf13/cast v1.7.1 // indirect
github.com/spf13/pflag v1.0.6 // indirect
github.com/subosito/gotenv v1.6.0 // indirect
github.com/tetratelabs/wazero v1.9.0 // indirect
github.com/tidwall/gjson v1.18.0 // indirect
github.com/tidwall/match v1.1.1 // indirect
github.com/tidwall/pretty v1.2.1 // indirect
github.com/tidwall/sjson v1.2.5 // indirect
github.com/xanzy/ssh-agent v0.3.3 // indirect
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
github.com/yosida95/uritemplate/v3 v3.0.2 // indirect
github.com/yuin/goldmark v1.7.8 // indirect
github.com/yuin/goldmark-emoji v1.0.5 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.54.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.54.0 // indirect
go.opentelemetry.io/otel v1.29.0 // indirect
go.opentelemetry.io/otel/metric v1.29.0 // indirect
go.opentelemetry.io/otel/trace v1.29.0 // indirect
go.uber.org/atomic v1.9.0 // indirect
go.uber.org/multierr v1.9.0 // indirect
go.opentelemetry.io/otel v1.35.0 // indirect
go.opentelemetry.io/otel/metric v1.35.0 // indirect
go.opentelemetry.io/otel/trace v1.35.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/crypto v0.37.0 // indirect
golang.org/x/net v0.39.0 // indirect
golang.org/x/oauth2 v0.25.0 // indirect
@@ -134,10 +130,9 @@ require (
golang.org/x/term v0.31.0 // indirect
golang.org/x/text v0.24.0 // indirect
golang.org/x/time v0.8.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20241209162323-e6fa225c2576 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20241223144023-3abc09e42ca8 // indirect
google.golang.org/grpc v1.67.3 // indirect
google.golang.org/protobuf v1.36.1 // indirect
gopkg.in/warnings.v0 v0.1.2 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250106144421-5f5ef82da422 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250324211829-b45e905df463 // indirect
google.golang.org/grpc v1.71.0 // indirect
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

152
go.sum
View File

@@ -10,17 +10,18 @@ cloud.google.com/go/compute/metadata v0.6.0 h1:A6hENjEsCDtC1k8byVsgwvVcioamEHvZ4
cloud.google.com/go/compute/metadata v0.6.0/go.mod h1:FjyFAW1MW0C203CEOMDTu3Dk1FlqW3Rga40jzHL4hfg=
cloud.google.com/go/longrunning v0.5.7 h1:WLbHekDbjK1fVFD3ibpFFVoyizlLRl73I7YKuAKilhU=
cloud.google.com/go/longrunning v0.5.7/go.mod h1:8GClkudohy1Fxm3owmBGid8W0pSgodEMwEAztp38Xng=
dario.cat/mergo v1.0.0 h1:AGCNq9Evsj31mOgNPcLyXc+4PNABt905YmuqPYYpBWk=
dario.cat/mergo v1.0.0/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.17.0 h1:g0EZJwz7xkXQiZAI5xi9f3WWFYBlX1CPTrR+NDToRkQ=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.17.0/go.mod h1:XCW7KnZet0Opnr7HccfUw1PLc4CjHqpcaxW8DHklNkQ=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.7.0 h1:tfLQ34V6F7tVSwoTf/4lH5sE0o6eCJuNDTmH09nDpbc=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.7.0/go.mod h1:9kIvujWAA58nmPmWB1m23fyWic1kYZMxD9CxaWn4Qpg=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 h1:ywEEhmNahHBihViHepv3xPBn1663uRv2t2q/ESv9seY=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0/go.mod h1:iZDifYGJTIgIIkYRNWPENUnqx6bJ2xnSDFI2tjwZNuY=
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2 h1:XHOnouVk1mxXfQidrMEnLlPk9UMeRtyBTnEFtxkV0kU=
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
github.com/JohannesKaufmann/html-to-markdown v1.6.0 h1:04VXMiE50YYfCfLboJCLcgqF5x+rHJnb1ssNmqpLH/k=
github.com/JohannesKaufmann/html-to-markdown v1.6.0/go.mod h1:NUI78lGg/a7vpEJTz/0uOcYMaibytE4BUOQS8k78yPQ=
github.com/MakeNowJust/heredoc v1.0.0 h1:cXCdzVdstXyiTqTvfqk9SDHpKNjxuom+DOlyEeQ4pzQ=
github.com/MakeNowJust/heredoc v1.0.0/go.mod h1:mG5amYoWBHf8vpLOuehzbGGw0EHxpZZ6lCpQ4fNJ8LE=
github.com/Microsoft/go-winio v0.5.2/go.mod h1:WpS1mjBmmwHBEWmogvA2mj8546UReBk4v8QkMxJ6pZY=
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
github.com/ProtonMail/go-crypto v1.1.6 h1:ZcV+Ropw6Qn0AX9brlQLAUXfqLBc7Bl+f/DmNxpLfdw=
github.com/ProtonMail/go-crypto v1.1.6/go.mod h1:rA3QumHc/FZ8pAHreoekgiAbzpNsfQAosU5td4SnOrE=
github.com/PuerkitoBio/goquery v1.9.2 h1:4/wZksC3KgkQw7SQgkKotmKljk0M6V8TUvA8Wb4yPeE=
github.com/PuerkitoBio/goquery v1.9.2/go.mod h1:GHPCaP0ODyyxqcNoFGYlAprUFH81NuRPd0GX3Zu2Mvk=
github.com/alecthomas/assert/v2 v2.11.0 h1:2Q9r3ki8+JYXvGsDyBXwH3LcJ+WK5D0gc5E8vS6K3D0=
@@ -31,12 +32,8 @@ github.com/alecthomas/repr v0.4.0 h1:GhI2A8MACjfegCPVq9f1FLvIBS+DrQ2KQBFZP1iFzXc
github.com/alecthomas/repr v0.4.0/go.mod h1:Fr0507jx4eOXV7AlPV6AVZLYrLIuIeSOWtW57eE/O/4=
github.com/andybalholm/cascadia v1.3.2 h1:3Xi6Dw5lHF15JtdcmAHD3i1+T8plmv7BQ/nsViSLyss=
github.com/andybalholm/cascadia v1.3.2/go.mod h1:7gtRlve5FxPPgIgX36uWBX58OdBsSS6lUvCFb+h7KvU=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be h1:9AeTilPcZAjCFIImctFaOjnTIavg87rW78vTPkQqLI8=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be/go.mod h1:ySMOLuWl6zY27l47sB3qLNK6tF2fkHG55UZxx8oIVo4=
github.com/anthropics/anthropic-sdk-go v0.2.0-beta.2 h1:h7qxtumNjKPWFv1QM/HJy60MteeW23iKeEtBoY7bYZk=
github.com/anthropics/anthropic-sdk-go v0.2.0-beta.2/go.mod h1:AapDW22irxK2PSumZiQXYUFvsdQgkwIWlpESweWZI/c=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs=
github.com/atotto/clipboard v0.1.4 h1:EH0zSVneZPSuFR11BlR9YppQTVDbh5+16AmcJi4g1z4=
github.com/atotto/clipboard v0.1.4/go.mod h1:ZY9tmq7sm5xIbd9bOK4onWV4S6X0u6GY7Vn0Yu86PYI=
github.com/aws/aws-sdk-go-v2 v1.30.3 h1:jUeBtG0Ih+ZIFH0F4UkmL9w3cSpaMv9tYYDbzILP8dY=
@@ -99,11 +96,7 @@ github.com/charmbracelet/x/exp/strings v0.0.0-20240722160745-212f7b056ed0 h1:qko
github.com/charmbracelet/x/exp/strings v0.0.0-20240722160745-212f7b056ed0/go.mod h1:pBhA0ybfXv6hDjQUZ7hk1lVxBiUbupdw5R31yPUViVQ=
github.com/charmbracelet/x/term v0.2.1 h1:AQeHeLZ1OqSXhrAWpYUtZyX1T3zVxfpZuEQMIQaGIAQ=
github.com/charmbracelet/x/term v0.2.1/go.mod h1:oQ4enTYFV7QN4m0i9mzHrViD7TQKvNEEkHUMCmsxdUg=
github.com/cloudflare/circl v1.6.1 h1:zqIqSPIndyBh1bjLVVDHMPpVKqp8Su/V+6MeDzzQBQ0=
github.com/cloudflare/circl v1.6.1/go.mod h1:uddAzsPgqdMAYatqJ0lsjX1oECcQLIlRpzZh3pJrofs=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/cyphar/filepath-securejoin v0.4.1 h1:JyxxyPEaktOD+GAnqIqTf9A8tHyAG22rowi7HkoSU1s=
github.com/cyphar/filepath-securejoin v0.4.1/go.mod h1:Sdj7gXlvMcPZsbhwhQ33GguGLDGQL7h7bg04C/+u9jI=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -111,10 +104,6 @@ github.com/dlclark/regexp2 v1.11.4 h1:rPYF9/LECdNymJufQKmri9gV604RvvABwgOA8un7yA
github.com/dlclark/regexp2 v1.11.4/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/elazarl/goproxy v1.7.2 h1:Y2o6urb7Eule09PjlhQRGNsqRfPmYI3KKQLFpCAV3+o=
github.com/elazarl/goproxy v1.7.2/go.mod h1:82vkLNir0ALaW14Rc399OTTjyNREgmdL2cVoIbS6XaE=
github.com/emirpasic/gods v1.18.1 h1:FXtiHYKDGKCW2KzwZKx0iC0PQmdlorYgdFG9jPXJ1Bc=
github.com/emirpasic/gods v1.18.1/go.mod h1:8tpGGwCnJ5H4r6BWwaV6OrWmMoPhUl5jm/FMNAnJvWQ=
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f h1:Y/CXytFA4m6baUTXGLOoWe4PQhGxaX0KpnayAqC48p4=
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f/go.mod h1:vw97MGsxSvLiUE2X8qFplwetxpGLQrlU1Q9AUEIzCaM=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
@@ -123,16 +112,6 @@ github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHk
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fsnotify/fsnotify v1.8.0 h1:dAwr6QBTBZIkG8roQaJjGof0pp0EeF+tNV7YBP3F/8M=
github.com/fsnotify/fsnotify v1.8.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/gliderlabs/ssh v0.3.8 h1:a4YXD1V7xMF9g5nTkdfnja3Sxy1PVDCj1Zg4Wb8vY6c=
github.com/gliderlabs/ssh v0.3.8/go.mod h1:xYoytBv1sV0aL3CavoDuJIQNURXkkfPA/wxQ1pL1fAU=
github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 h1:+zs/tPmkDkHx3U66DAb0lQFJrpS6731Oaa12ikc+DiI=
github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376/go.mod h1:an3vInlBmSxCcxctByoQdvwPiA7DTK7jaaFDBTtu0ic=
github.com/go-git/go-billy/v5 v5.6.2 h1:6Q86EsPXMa7c3YZ3aLAQsMA0VlWmy43r6FHqa/UNbRM=
github.com/go-git/go-billy/v5 v5.6.2/go.mod h1:rcFC2rAsp/erv7CMz9GczHcuD0D32fWzH+MJAU+jaUU=
github.com/go-git/go-git-fixtures/v4 v4.3.2-0.20231010084843-55a94097c399 h1:eMje31YglSBqCdIqdhKBW8lokaMrL3uTkpGYlE2OOT4=
github.com/go-git/go-git-fixtures/v4 v4.3.2-0.20231010084843-55a94097c399/go.mod h1:1OCfN199q1Jm3HZlxleg+Dw/mwps2Wbk9frAWm+4FII=
github.com/go-git/go-git/v5 v5.15.0 h1:f5Qn0W0F7ry1iN0ZwIU5m/n7/BKB4hiZfc+zlZx7ly0=
github.com/go-git/go-git/v5 v5.15.0/go.mod h1:4Ge4alE/5gPs30F2H1esi2gPd69R0C39lolkucHBOp8=
github.com/go-logfmt/logfmt v0.6.0 h1:wGYYu3uicYdqXVgoYbvnkrPVXkuLM1p1ifugDMEdRi4=
github.com/go-logfmt/logfmt v0.6.0/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
@@ -142,10 +121,10 @@ github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-viper/mapstructure/v2 v2.2.1 h1:ZAaOCxANMuZx5RCeg0mBdEZk7DZasvvZIxtHqx8aGss=
github.com/go-viper/mapstructure/v2 v2.2.1/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
github.com/golang-migrate/migrate/v4 v4.18.2 h1:2VSCMz7x7mjyTXx3m2zPokOY82LTRgxK1yQYKo6wWQ8=
github.com/golang-migrate/migrate/v4 v4.18.2/go.mod h1:2CM6tJvn2kqPXwnXO/d3rAQYiyoIm180VsO8PRX6Rpk=
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 h1:f+oWsMOmNPc8JmEHVZIycC7hBoQxHH9pNKQORJNozsQ=
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8/go.mod h1:wcDNUvekVysuuOpQKo3191zZyTpiI6se1N1ULghS0sw=
github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=
github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/generative-ai-go v0.19.0 h1:R71szggh8wHMCUlEMsW2A/3T+5LdEIkiaHSYgSpUgdg=
github.com/google/generative-ai-go v0.19.0/go.mod h1:JYolL13VG7j79kM5BtHz4qwONHkeJQzOCkKXnpqtS/E=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
@@ -160,19 +139,10 @@ github.com/googleapis/gax-go/v2 v2.14.1 h1:hb0FFeiPaQskmvakKu5EbCbpntQn48jyHuvrk
github.com/googleapis/gax-go/v2 v2.14.1/go.mod h1:Hb/NubMaVM88SrNkvl8X/o8XWwDJEPqouaLeN2IUxoA=
github.com/gorilla/css v1.0.1 h1:ntNaBIghp6JmvWnxbZKANoLyuXTPZ4cAMlo6RyhlbO8=
github.com/gorilla/css v1.0.1/go.mod h1:BvnYkspnSzMmwRK+b8/xgNPLiIuNZr6vbZBTPQ2A3b0=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hexops/gotextdiff v1.0.3 h1:gitA9+qJrrTCsiCl7+kh75nPqQt1cx4ZkudSTLoUqJM=
github.com/hexops/gotextdiff v1.0.3/go.mod h1:pSWU5MAI3yDq+fZBTazCSJysOMbxWL1BSow5/V2vxeg=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 h1:BQSFePA1RWJOlocH6Fxy8MmwDt+yVQYULKfN0RoTN8A=
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99/go.mod h1:1lJo3i6rXxKeerYnT8Nvf0QmHCRC1n8sfWVwXF2Frvo=
github.com/kevinburke/ssh_config v1.2.0 h1:x584FjTGwHzMwvHx18PXxbBVzfnxogHaAReU4gf13a4=
github.com/kevinburke/ssh_config v1.2.0/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
@@ -180,8 +150,8 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/lrstanley/bubblezone v0.0.0-20250315020633-c249a3fe1231 h1:9rjt7AfnrXKNSZhp36A3/4QAZAwGGCGD/p8Bse26zms=
github.com/lrstanley/bubblezone v0.0.0-20250315020633-c249a3fe1231/go.mod h1:S5etECMx+sZnW0Gm100Ma9J1PgVCTgNyFaqGu2b08b4=
github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69Aj6K7nkY=
@@ -195,8 +165,8 @@ github.com/mattn/go-localereader v0.0.1/go.mod h1:8fBrzywKY7BI3czFoHkuzRoWE9C+Ei
github.com/mattn/go-runewidth v0.0.12/go.mod h1:RAqKPSqVFrSLVXbA8x7dzmKdmGzieGRCM46jaSJTDAk=
github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc=
github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/mattn/go-sqlite3 v1.14.24 h1:tpSp2G2KyMnnQu99ngJ47EIkWVmliIizyZBfPrBWDRM=
github.com/mattn/go-sqlite3 v1.14.24/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/mfridman/interpolate v0.0.2 h1:pnuTK7MQIxxFz1Gr+rjSIx9u7qVjf5VOoM/u6BbAxPY=
github.com/mfridman/interpolate v0.0.2/go.mod h1:p+7uk6oE07mpE/Ik1b8EckO0O4ZXiGAfshKBWLUM9Xg=
github.com/microcosm-cc/bluemonday v1.0.27 h1:MpEUotklkwCSLeH+Qdx1VJgNqLlpY2KXwXFM08ygZfk=
github.com/microcosm-cc/bluemonday v1.0.27/go.mod h1:jFi9vgW+H7c3V0lb6nR74Ib/DIB5OBs92Dimizgw2cA=
github.com/mitchellh/hashstructure/v2 v2.0.2 h1:vGKWl0YJqUNxE8d+h8f6NJLcCJrgbhC4NcD46KavDd4=
@@ -209,19 +179,25 @@ github.com/muesli/reflow v0.3.0 h1:IFsN6K9NfGtjeggFP+68I4chLZV2yIKsXJFNZ+eWh6s=
github.com/muesli/reflow v0.3.0/go.mod h1:pbwTDkVPibjO2kyvBQRBxTWEEGDGq0FlB1BIKtnHY/8=
github.com/muesli/termenv v0.16.0 h1:S5AlUN9dENB57rsbnkPyfdGuWIlkmzJjbFf0Tf5FWUc=
github.com/muesli/termenv v0.16.0/go.mod h1:ZRfOIKPFDYQoDFF4Olj7/QJbW60Ol/kL1pU3VfY/Cnk=
github.com/onsi/gomega v1.34.1 h1:EUMJIKUjM8sKjYbtxQI9A4z2o+rruxnzNvpknOXie6k=
github.com/onsi/gomega v1.34.1/go.mod h1:kU1QgUvBDLXBJq618Xvm2LUX6rSAfRaFRTcdOeDLwwY=
github.com/ncruces/go-sqlite3 v0.25.0 h1:trugKUs98Zwy9KwRr/EUxZHL92LYt7UqcKqAfpGpK+I=
github.com/ncruces/go-sqlite3 v0.25.0/go.mod h1:n6Z7036yFilJx04yV0mi5JWaF66rUmXn1It9Ux8dx68=
github.com/ncruces/go-strftime v0.1.9 h1:bY0MQC28UADQmHmaF5dgpLmImcShSi2kHU9XLdhx/f4=
github.com/ncruces/go-strftime v0.1.9/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/ncruces/julianday v1.0.0 h1:fH0OKwa7NWvniGQtxdJRxAgkBMolni2BjDHaWTxqt7M=
github.com/ncruces/julianday v1.0.0/go.mod h1:Dusn2KvZrrovOMJuOt0TNXL6tB7U2E8kvza5fFc9G7g=
github.com/openai/openai-go v0.1.0-beta.2 h1:Ra5nCFkbEl9w+UJwAciC4kqnIBUCcJazhmMA0/YN894=
github.com/openai/openai-go v0.1.0-beta.2/go.mod h1:g461MYGXEXBVdV5SaR/5tNzNbSfwTBBefwc+LlDCK0Y=
github.com/pelletier/go-toml/v2 v2.2.3 h1:YmeHyLY8mFWbdkNWwpr+qIL2bEqT0o95WSdkNHvL12M=
github.com/pelletier/go-toml/v2 v2.2.3/go.mod h1:MfCQTFTvCcUyyvvwm1+G6H/jORL20Xlb6rzQu9GuUkc=
github.com/pjbgf/sha1cd v0.3.2 h1:a9wb0bp1oC2TGwStyn0Umc/IGKQnEgF0vVaZ8QF8eo4=
github.com/pjbgf/sha1cd v0.3.2/go.mod h1:zQWigSxVmsHEZow5qaLtPYxpcKMMQpa09ixqBxuCS6A=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pressly/goose/v3 v3.24.2 h1:c/ie0Gm8rnIVKvnDQ/scHErv46jrDv9b4I0WRcFJzYU=
github.com/pressly/goose/v3 v3.24.2/go.mod h1:kjefwFB0eR4w30Td2Gj2Mznyw94vSP+2jJYkOVNbD1k=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/rivo/uniseg v0.1.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
@@ -237,9 +213,8 @@ github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAm
github.com/sergi/go-diff v1.3.1/go.mod h1:aMJSSKb2lpPvRNec0+w3fl7LP9IOFzdc9Pa4NFbPK1I=
github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3 h1:n661drycOFuPLCN3Uc8sB6B/s6Z4t2xvBgU1htSHuq8=
github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3/go.mod h1:A0bzQcvG0E7Rwjx0REVgAGH58e96+X0MeOfepqsbeW4=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/skeema/knownhosts v1.3.1 h1:X2osQ+RAjK76shCbvhHHHVl3ZlgDm8apHEHFqRjnBY8=
github.com/skeema/knownhosts v1.3.1/go.mod h1:r7KTdC8l4uxWRyK2TpQZ/1o5HaSzh06ePQNxPwTcfiY=
github.com/sethvargo/go-retry v0.3.0 h1:EEt31A35QhrcRZtrYFDTBg91cqZVnFL2navjDrah2SE=
github.com/sethvargo/go-retry v0.3.0/go.mod h1:mNX17F0C/HguQMyMyJxcnU471gOZGxCLyYaFyAZraas=
github.com/sourcegraph/conc v0.3.0 h1:OQTbbt6P72L20UqAkXXuLOj79LfEanQ+YQFNpLA9ySo=
github.com/sourcegraph/conc v0.3.0/go.mod h1:Sdozi7LEKbFPqYX2/J+iBAM6HpqSLTASQIKqDmF7Mt0=
github.com/spf13/afero v1.12.0 h1:UcOPyRBYczmFn6yvphxkn9ZEOY65cpwGKb5mL36mrqs=
@@ -253,13 +228,14 @@ github.com/spf13/pflag v1.0.6/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An
github.com/spf13/viper v1.20.0 h1:zrxIyR3RQIOsarIrgL8+sAvALXul9jeEPa06Y0Ph6vY=
github.com/spf13/viper v1.20.0/go.mod h1:P9Mdzt1zoHIG8m2eZQinpiBjo6kCmZSKBClNNqjJvu4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/tetratelabs/wazero v1.9.0 h1:IcZ56OuxrtaEz8UYNRHBrUa9bYeX9oVY93KspZZBf/I=
github.com/tetratelabs/wazero v1.9.0/go.mod h1:TSbcXCfFP0L2FGkRPxHphadXPjo1T6W+CseNNY7EkjM=
github.com/tidwall/gjson v1.14.2/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/gjson v1.18.0 h1:FIDeeyB800efLX89e5a8Y0BNH+LOngJyGrIWxG2FKQY=
github.com/tidwall/gjson v1.18.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
@@ -270,8 +246,6 @@ github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4=
github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY=
github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28=
github.com/xanzy/ssh-agent v0.3.3 h1:+/15pJfg/RsTxqYcX6fHqOXZwwMP+2VyYWJeWM2qQFM=
github.com/xanzy/ssh-agent v0.3.3/go.mod h1:6dzNDKs0J9rVPHPhaGCukekBHKqfl+L3KghI1Bc68Uw=
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e h1:JVG44RsyaB9T2KIHavMF/ppJZNG9ZpyihvCd0w101no=
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e/go.mod h1:RbqR21r5mrJuqunuUZ/Dhy/avygyECGrLceyNeo4LiM=
github.com/yosida95/uritemplate/v3 v3.0.2 h1:Ed3Oyj9yrmi9087+NczuL5BwkIc4wvTb5zIM+UJPGz4=
@@ -282,35 +256,37 @@ github.com/yuin/goldmark v1.7.8 h1:iERMLn0/QJeHFhxSt3p6PeN9mGnvIKSpG9YYorDMnic=
github.com/yuin/goldmark v1.7.8/go.mod h1:uzxRWxtg69N339t3louHJ7+O03ezfj6PlliRlaOzY1E=
github.com/yuin/goldmark-emoji v1.0.5 h1:EMVWyCGPlXJfUXBXpuMu+ii3TIaxbVBnEX9uaDC4cIk=
github.com/yuin/goldmark-emoji v1.0.5/go.mod h1:tTkZEbwu5wkPmgTcitqddVxY9osFZiavD+r4AzQrh1U=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.54.0 h1:r6I7RJCN86bpD/FQwedZ0vSixDpwuWREjW9oRMsmqDc=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.54.0/go.mod h1:B9yO6b04uB80CzjedvewuqDhxJxi11s7/GtiGa8bAjI=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.54.0 h1:TT4fX+nBOA/+LUkobKGW1ydGcn+G3vRw9+g5HwCphpk=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.54.0/go.mod h1:L7UH0GbB0p47T4Rri3uHjbpCFYrVrwc1I25QhNPiGK8=
go.opentelemetry.io/otel v1.29.0 h1:PdomN/Al4q/lN6iBJEN3AwPvUiHPMlt93c8bqTG5Llw=
go.opentelemetry.io/otel v1.29.0/go.mod h1:N/WtXPs1CNCUEx+Agz5uouwCba+i+bJGFicT8SR4NP8=
go.opentelemetry.io/otel/metric v1.29.0 h1:vPf/HFWTNkPu1aYeIsc98l4ktOQaL6LeSoeV2g+8YLc=
go.opentelemetry.io/otel/metric v1.29.0/go.mod h1:auu/QWieFVWx+DmQOUMgj0F8LHWdgalxXqvp7BII/W8=
go.opentelemetry.io/otel/trace v1.29.0 h1:J/8ZNK4XgR7a21DZUAsbF8pZ5Jcw1VhACmnYt39JTi4=
go.opentelemetry.io/otel/trace v1.29.0/go.mod h1:eHl3w0sp3paPkYstJOmAimxhiFXPg+MMTlEh3nsQgWQ=
go.uber.org/atomic v1.9.0 h1:ECmE8Bn/WFTYwEW/bpKD3M8VtR/zQVbavAoalC1PYyE=
go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/multierr v1.9.0 h1:7fIwc/ZtS0q++VgcfqFDxSBZVv/Xo49/SYnDFupUwlI=
go.uber.org/multierr v1.9.0/go.mod h1:X2jQV1h+kxSjClGpnseKVIxpmcjrj7MNnI0bnlfKTVQ=
go.opentelemetry.io/otel v1.35.0 h1:xKWKPxrxB6OtMCbmMY021CqC45J+3Onta9MqjhnusiQ=
go.opentelemetry.io/otel v1.35.0/go.mod h1:UEqy8Zp11hpkUrL73gSlELM0DupHoiq72dR+Zqel/+Y=
go.opentelemetry.io/otel/metric v1.35.0 h1:0znxYu2SNyuMSQT4Y9WDWej0VpcsxkuklLa4/siN90M=
go.opentelemetry.io/otel/metric v1.35.0/go.mod h1:nKVFgxBZ2fReX6IlyW28MgZojkoAkJGaE8CpgeAU3oE=
go.opentelemetry.io/otel/sdk v1.34.0 h1:95zS4k/2GOy069d321O8jWgYsW3MzVV+KuSPKp7Wr1A=
go.opentelemetry.io/otel/sdk v1.34.0/go.mod h1:0e/pNiaMAqaykJGKbi+tSjWfNNHMTxoC9qANsCzbyxU=
go.opentelemetry.io/otel/sdk/metric v1.34.0 h1:5CeK9ujjbFVL5c1PhLuStg1wxA7vQv7ce1EK0Gyvahk=
go.opentelemetry.io/otel/sdk/metric v1.34.0/go.mod h1:jQ/r8Ze28zRKoNRdkjCZxfs6YvBTG1+YIqyFVFYec5w=
go.opentelemetry.io/otel/trace v1.35.0 h1:dPpEfJu1sDIqruz7BHFG3c7528f6ddfSWfFDVt/xgMs=
go.opentelemetry.io/otel/trace v1.35.0/go.mod h1:WUk7DtFp1Aw2MkvqGdwiXYDZZNvA/1J8o6xRXLrIkyc=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.22.0/go.mod h1:vr6Su+7cTlO45qkww3VDJlzDn0ctJvRgYbC2NvXHt+M=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/crypto v0.37.0 h1:kJNSjF/Xp7kU0iB2Z+9viTPMW4EqqsrywMXLJOOsXSE=
golang.org/x/crypto v0.37.0/go.mod h1:vg+k43peMZ0pUMhYmVAWysMK35e6ioLh3wB8ZCAfbVc=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 h1:2dVuKD2vS7b0QIHQbpyTISPd0LeHDbnYEryqj5Q1ug8=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56/go.mod h1:M4RDyNAINzryxdtnbRXRL/OHtkFuWGRjvuhBJpk2IlY=
golang.org/x/exp v0.0.0-20250305212735-054e65f0b394 h1:nDVHiLt8aIbd/VzvPWN6kSOPE7+F/fNFDSXLVYkE/Iw=
golang.org/x/exp v0.0.0-20250305212735-054e65f0b394/go.mod h1:sIifuuw/Yco/y6yb6+bDNfyeQ/MdPUy/hKEMYQV17cM=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.9.0/go.mod h1:d48xBJpPfHeWQsugry2m+kC02ZBRGRgulfHnEXEuWns=
@@ -328,15 +304,12 @@ golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.13.0 h1:AauUjRAJ9OSnvULf/ARrrVywoJDy0YS2AwQ98I37610=
golang.org/x/sync v0.13.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -358,7 +331,6 @@ golang.org/x/term v0.31.0 h1:erwDkOK1Msy6offm1mOgvspSkslFnIGsFnxOKoufg3o=
golang.org/x/term v0.31.0/go.mod h1:R4BeIy7D95HzImkxGkTW1UQTtP54tio2RyHz7PwK0aw=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
@@ -375,22 +347,28 @@ golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.215.0 h1:jdYF4qnyczlEz2ReWIsosNLDuzXyvFHJtI5gcr0J7t0=
google.golang.org/api v0.215.0/go.mod h1:fta3CVtuJYOEdugLNWm6WodzOS8KdFckABwN4I40hzY=
google.golang.org/genproto/googleapis/api v0.0.0-20241209162323-e6fa225c2576 h1:CkkIfIt50+lT6NHAVoRYEyAvQGFM7xEwXUUywFvEb3Q=
google.golang.org/genproto/googleapis/api v0.0.0-20241209162323-e6fa225c2576/go.mod h1:1R3kvZ1dtP3+4p4d3G8uJ8rFk/fWlScl38vanWACI08=
google.golang.org/genproto/googleapis/rpc v0.0.0-20241223144023-3abc09e42ca8 h1:TqExAhdPaB60Ux47Cn0oLV07rGnxZzIsaRhQaqS666A=
google.golang.org/genproto/googleapis/rpc v0.0.0-20241223144023-3abc09e42ca8/go.mod h1:lcTa1sDdWEIHMWlITnIczmw5w60CF9ffkb8Z+DVmmjA=
google.golang.org/grpc v1.67.3 h1:OgPcDAFKHnH8X3O4WcO4XUc8GRDeKsKReqbQtiCj7N8=
google.golang.org/grpc v1.67.3/go.mod h1:YGaHCc6Oap+FzBJTZLBzkGSYt/cvGPFTPxkn7QfSU8s=
google.golang.org/protobuf v1.36.1 h1:yBPeRvTftaleIgM3PZ/WBIZ7XM/eEYAaEyCwvyjq/gk=
google.golang.org/protobuf v1.36.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
google.golang.org/genproto/googleapis/api v0.0.0-20250106144421-5f5ef82da422 h1:GVIKPyP/kLIyVOgOnTwFOrvQaQUzOzGMCxgFUOEmm24=
google.golang.org/genproto/googleapis/api v0.0.0-20250106144421-5f5ef82da422/go.mod h1:b6h1vNKhxaSoEI+5jc3PJUCustfli/mRab7295pY7rw=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250324211829-b45e905df463 h1:e0AIkUUhxyBKh6ssZNrAMeqhA7RKUj42346d1y02i2g=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250324211829-b45e905df463/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A=
google.golang.org/grpc v1.71.0 h1:kF77BGdPTQ4/JZWMlb9VpJ5pa25aqvVqogsxNHHdeBg=
google.golang.org/grpc v1.71.0/go.mod h1:H0GRtasmQOh9LkFoCPDu3ZrwUtD1YGE+b2vYBYd/8Ec=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/warnings.v0 v0.1.2 h1:wFXVbFY8DY5/xOe1ECiWdKCzZlxgshcYVNkBHstARME=
gopkg.in/warnings.v0 v0.1.2/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
modernc.org/libc v1.61.13 h1:3LRd6ZO1ezsFiX1y+bHd1ipyEHIJKvuprv0sLTBwLW8=
modernc.org/libc v1.61.13/go.mod h1:8F/uJWL/3nNil0Lgt1Dpz+GgkApWh04N3el3hxJcA6E=
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
modernc.org/memory v1.9.1 h1:V/Z1solwAVmMW1yttq3nDdZPJqV1rM05Ccq6KMSZ34g=
modernc.org/memory v1.9.1/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
modernc.org/sqlite v1.36.2 h1:vjcSazuoFve9Wm0IVNHgmJECoOXLZM1KfMXbcX2axHA=
modernc.org/sqlite v1.36.2/go.mod h1:ADySlx7K4FdY5MaJcEv86hTJ0PjedAloTUuif0YS3ws=


@@ -73,6 +73,7 @@ func New(ctx context.Context, conn *sql.DB) (*App, error) {
return app, nil
}
// Shutdown performs a clean shutdown of the application
func (app *App) Shutdown() {
// Cancel all watcher goroutines


@@ -67,14 +67,15 @@ type LSPConfig struct {
// Config is the main configuration structure for the application.
type Config struct {
Data Data `json:"data"`
WorkingDir string `json:"wd,omitempty"`
MCPServers map[string]MCPServer `json:"mcpServers,omitempty"`
Providers map[models.ModelProvider]Provider `json:"providers,omitempty"`
LSP map[string]LSPConfig `json:"lsp,omitempty"`
Agents map[AgentName]Agent `json:"agents"`
Debug bool `json:"debug,omitempty"`
DebugLSP bool `json:"debugLSP,omitempty"`
Data Data `json:"data"`
WorkingDir string `json:"wd,omitempty"`
MCPServers map[string]MCPServer `json:"mcpServers,omitempty"`
Providers map[models.ModelProvider]Provider `json:"providers,omitempty"`
LSP map[string]LSPConfig `json:"lsp,omitempty"`
Agents map[AgentName]Agent `json:"agents"`
Debug bool `json:"debug,omitempty"`
DebugLSP bool `json:"debugLSP,omitempty"`
ContextPaths []string `json:"contextPaths,omitempty"`
}
// Application constants
@@ -82,8 +83,24 @@ const (
defaultDataDirectory = ".opencode"
defaultLogLevel = "info"
appName = "opencode"
MaxTokensFallbackDefault = 4096
)
var defaultContextPaths = []string{
".github/copilot-instructions.md",
".cursorrules",
".cursor/rules/",
"CLAUDE.md",
"CLAUDE.local.md",
"opencode.md",
"opencode.local.md",
"OpenCode.md",
"OpenCode.local.md",
"OPENCODE.md",
"OPENCODE.local.md",
}
// Global configuration instance
var cfg *Config
@@ -185,6 +202,7 @@ func configureViper() {
// setDefaults configures default values for configuration options.
func setDefaults(debug bool) {
viper.SetDefault("data.directory", defaultDataDirectory)
viper.SetDefault("contextPaths", defaultContextPaths)
if debug {
viper.SetDefault("debug", true)
@@ -196,16 +214,29 @@ func setDefaults(debug bool) {
}
// setProviderDefaults configures LLM provider defaults based on environment variables.
// The default model priority is:
// 1. Anthropic
// 2. OpenAI
// 3. Google Gemini
// 4. Groq
// 5. AWS Bedrock
func setProviderDefaults() {
// Anthropic configuration
// Set all API keys we can find in the environment
if apiKey := os.Getenv("ANTHROPIC_API_KEY"); apiKey != "" {
viper.SetDefault("providers.anthropic.apiKey", apiKey)
}
if apiKey := os.Getenv("OPENAI_API_KEY"); apiKey != "" {
viper.SetDefault("providers.openai.apiKey", apiKey)
}
if apiKey := os.Getenv("GEMINI_API_KEY"); apiKey != "" {
viper.SetDefault("providers.gemini.apiKey", apiKey)
}
if apiKey := os.Getenv("GROQ_API_KEY"); apiKey != "" {
viper.SetDefault("providers.groq.apiKey", apiKey)
}
// Use this order to set the default models
// 1. Anthropic
// 2. OpenAI
// 3. Google Gemini
// 4. Groq
// 5. AWS Bedrock
// Anthropic configuration
if apiKey := os.Getenv("ANTHROPIC_API_KEY"); apiKey != "" {
viper.SetDefault("agents.coder.model", models.Claude37Sonnet)
viper.SetDefault("agents.task.model", models.Claude37Sonnet)
viper.SetDefault("agents.title.model", models.Claude37Sonnet)
@@ -214,7 +245,6 @@ func setProviderDefaults() {
// OpenAI configuration
if apiKey := os.Getenv("OPENAI_API_KEY"); apiKey != "" {
viper.SetDefault("providers.openai.apiKey", apiKey)
viper.SetDefault("agents.coder.model", models.GPT41)
viper.SetDefault("agents.task.model", models.GPT41Mini)
viper.SetDefault("agents.title.model", models.GPT41Mini)
@@ -223,7 +253,6 @@ func setProviderDefaults() {
// Google Gemini configuration
if apiKey := os.Getenv("GEMINI_API_KEY"); apiKey != "" {
viper.SetDefault("providers.gemini.apiKey", apiKey)
viper.SetDefault("agents.coder.model", models.Gemini25)
viper.SetDefault("agents.task.model", models.Gemini25Flash)
viper.SetDefault("agents.title.model", models.Gemini25Flash)
@@ -232,13 +261,21 @@ func setProviderDefaults() {
// Groq configuration
if apiKey := os.Getenv("GROQ_API_KEY"); apiKey != "" {
viper.SetDefault("providers.groq.apiKey", apiKey)
viper.SetDefault("agents.coder.model", models.QWENQwq)
viper.SetDefault("agents.task.model", models.QWENQwq)
viper.SetDefault("agents.title.model", models.QWENQwq)
return
}
// OpenRouter configuration
if apiKey := os.Getenv("OPENROUTER_API_KEY"); apiKey != "" {
viper.SetDefault("providers.openrouter.apiKey", apiKey)
viper.SetDefault("agents.coder.model", models.OpenRouterClaude37Sonnet)
viper.SetDefault("agents.task.model", models.OpenRouterClaude37Sonnet)
viper.SetDefault("agents.title.model", models.OpenRouterClaude35Haiku)
return
}
// AWS Bedrock configuration
if hasAWSCredentials() {
viper.SetDefault("agents.coder.model", models.BedrockClaude37Sonnet)
@@ -246,6 +283,15 @@ func setProviderDefaults() {
viper.SetDefault("agents.title.model", models.BedrockClaude37Sonnet)
return
}
if os.Getenv("AZURE_OPENAI_ENDPOINT") != "" {
// api-key may be empty when using Entra ID credentials; that's okay
viper.SetDefault("providers.azure.apiKey", os.Getenv("AZURE_OPENAI_API_KEY"))
viper.SetDefault("agents.coder.model", models.AzureGPT41)
viper.SetDefault("agents.task.model", models.AzureGPT41Mini)
viper.SetDefault("agents.title.model", models.AzureGPT41Mini)
return
}
}
// hasAWSCredentials checks if AWS credentials are available in the environment.
@@ -312,60 +358,33 @@ func applyDefaultValues() {
}
}
// Validate checks if the configuration is valid and applies defaults where needed.
// It validates model IDs and providers, ensuring they are supported.
func Validate() error {
if cfg == nil {
return fmt.Errorf("config not loaded")
func validateAgent(cfg *Config, name AgentName, agent Agent) error {
// Check if model exists
model, modelExists := models.SupportedModels[agent.Model]
if !modelExists {
logging.Warn("unsupported model configured, reverting to default",
"agent", name,
"configured_model", agent.Model)
// Set default model based on available providers
if setDefaultModelForAgent(name) {
logging.Info("set default model for agent", "agent", name, "model", cfg.Agents[name].Model)
} else {
return fmt.Errorf("no valid provider available for agent %s", name)
}
return nil
}
// Validate agent models
for name, agent := range cfg.Agents {
// Check if model exists
model, modelExists := models.SupportedModels[agent.Model]
if !modelExists {
logging.Warn("unsupported model configured, reverting to default",
"agent", name,
"configured_model", agent.Model)
// Check if provider for the model is configured
provider := model.Provider
providerCfg, providerExists := cfg.Providers[provider]
// Set default model based on available providers
if setDefaultModelForAgent(name) {
logging.Info("set default model for agent", "agent", name, "model", cfg.Agents[name].Model)
} else {
return fmt.Errorf("no valid provider available for agent %s", name)
}
continue
}
// Check if provider for the model is configured
provider := model.Provider
providerCfg, providerExists := cfg.Providers[provider]
if !providerExists {
// Provider not configured, check if we have environment variables
apiKey := getProviderAPIKey(provider)
if apiKey == "" {
logging.Warn("provider not configured for model, reverting to default",
"agent", name,
"model", agent.Model,
"provider", provider)
// Set default model based on available providers
if setDefaultModelForAgent(name) {
logging.Info("set default model for agent", "agent", name, "model", cfg.Agents[name].Model)
} else {
return fmt.Errorf("no valid provider available for agent %s", name)
}
} else {
// Add provider with API key from environment
cfg.Providers[provider] = Provider{
APIKey: apiKey,
}
logging.Info("added provider from environment", "provider", provider)
}
} else if providerCfg.Disabled || providerCfg.APIKey == "" {
// Provider is disabled or has no API key
logging.Warn("provider is disabled or has no API key, reverting to default",
if !providerExists {
// Provider not configured, check if we have environment variables
apiKey := getProviderAPIKey(provider)
if apiKey == "" {
logging.Warn("provider not configured for model, reverting to default",
"agent", name,
"model", agent.Model,
"provider", provider)
@@ -376,75 +395,110 @@ func Validate() error {
} else {
return fmt.Errorf("no valid provider available for agent %s", name)
}
}
// Validate max tokens
if agent.MaxTokens <= 0 {
logging.Warn("invalid max tokens, setting to default",
"agent", name,
"model", agent.Model,
"max_tokens", agent.MaxTokens)
// Update the agent with default max tokens
updatedAgent := cfg.Agents[name]
if model.DefaultMaxTokens > 0 {
updatedAgent.MaxTokens = model.DefaultMaxTokens
} else {
updatedAgent.MaxTokens = 4096 // Fallback default
} else {
// Add provider with API key from environment
cfg.Providers[provider] = Provider{
APIKey: apiKey,
}
cfg.Agents[name] = updatedAgent
} else if model.ContextWindow > 0 && agent.MaxTokens > model.ContextWindow/2 {
// Ensure max tokens doesn't exceed half the context window (reasonable limit)
logging.Warn("max tokens exceeds half the context window, adjusting",
"agent", name,
"model", agent.Model,
"max_tokens", agent.MaxTokens,
"context_window", model.ContextWindow)
// Update the agent with adjusted max tokens
updatedAgent := cfg.Agents[name]
updatedAgent.MaxTokens = model.ContextWindow / 2
cfg.Agents[name] = updatedAgent
logging.Info("added provider from environment", "provider", provider)
}
} else if providerCfg.Disabled || providerCfg.APIKey == "" {
// Provider is disabled or has no API key
logging.Warn("provider is disabled or has no API key, reverting to default",
"agent", name,
"model", agent.Model,
"provider", provider)
// Validate reasoning effort for models that support reasoning
if model.CanReason && provider == models.ProviderOpenAI {
if agent.ReasoningEffort == "" {
// Set default reasoning effort for models that support it
logging.Info("setting default reasoning effort for model that supports reasoning",
// Set default model based on available providers
if setDefaultModelForAgent(name) {
logging.Info("set default model for agent", "agent", name, "model", cfg.Agents[name].Model)
} else {
return fmt.Errorf("no valid provider available for agent %s", name)
}
}
// Validate max tokens
if agent.MaxTokens <= 0 {
logging.Warn("invalid max tokens, setting to default",
"agent", name,
"model", agent.Model,
"max_tokens", agent.MaxTokens)
// Update the agent with default max tokens
updatedAgent := cfg.Agents[name]
if model.DefaultMaxTokens > 0 {
updatedAgent.MaxTokens = model.DefaultMaxTokens
} else {
updatedAgent.MaxTokens = MaxTokensFallbackDefault
}
cfg.Agents[name] = updatedAgent
} else if model.ContextWindow > 0 && agent.MaxTokens > model.ContextWindow/2 {
// Ensure max tokens doesn't exceed half the context window (reasonable limit)
logging.Warn("max tokens exceeds half the context window, adjusting",
"agent", name,
"model", agent.Model,
"max_tokens", agent.MaxTokens,
"context_window", model.ContextWindow)
// Update the agent with adjusted max tokens
updatedAgent := cfg.Agents[name]
updatedAgent.MaxTokens = model.ContextWindow / 2
cfg.Agents[name] = updatedAgent
}
// Validate reasoning effort for models that support reasoning
if model.CanReason && provider == models.ProviderOpenAI {
if agent.ReasoningEffort == "" {
// Set default reasoning effort for models that support it
logging.Info("setting default reasoning effort for model that supports reasoning",
"agent", name,
"model", agent.Model)
// Update the agent with default reasoning effort
updatedAgent := cfg.Agents[name]
updatedAgent.ReasoningEffort = "medium"
cfg.Agents[name] = updatedAgent
} else {
// Check if reasoning effort is valid (low, medium, high)
effort := strings.ToLower(agent.ReasoningEffort)
if effort != "low" && effort != "medium" && effort != "high" {
logging.Warn("invalid reasoning effort, setting to medium",
"agent", name,
"model", agent.Model)
"model", agent.Model,
"reasoning_effort", agent.ReasoningEffort)
// Update the agent with default reasoning effort
// Update the agent with valid reasoning effort
updatedAgent := cfg.Agents[name]
updatedAgent.ReasoningEffort = "medium"
cfg.Agents[name] = updatedAgent
} else {
// Check if reasoning effort is valid (low, medium, high)
effort := strings.ToLower(agent.ReasoningEffort)
if effort != "low" && effort != "medium" && effort != "high" {
logging.Warn("invalid reasoning effort, setting to medium",
"agent", name,
"model", agent.Model,
"reasoning_effort", agent.ReasoningEffort)
// Update the agent with valid reasoning effort
updatedAgent := cfg.Agents[name]
updatedAgent.ReasoningEffort = "medium"
cfg.Agents[name] = updatedAgent
}
}
} else if !model.CanReason && agent.ReasoningEffort != "" {
// Model doesn't support reasoning but reasoning effort is set
logging.Warn("model doesn't support reasoning but reasoning effort is set, ignoring",
"agent", name,
"model", agent.Model,
"reasoning_effort", agent.ReasoningEffort)
}
} else if !model.CanReason && agent.ReasoningEffort != "" {
// Model doesn't support reasoning but reasoning effort is set
logging.Warn("model doesn't support reasoning but reasoning effort is set, ignoring",
"agent", name,
"model", agent.Model,
"reasoning_effort", agent.ReasoningEffort)
// Update the agent to remove reasoning effort
updatedAgent := cfg.Agents[name]
updatedAgent.ReasoningEffort = ""
cfg.Agents[name] = updatedAgent
// Update the agent to remove reasoning effort
updatedAgent := cfg.Agents[name]
updatedAgent.ReasoningEffort = ""
cfg.Agents[name] = updatedAgent
}
return nil
}
// Validate checks if the configuration is valid and applies defaults where needed.
func Validate() error {
if cfg == nil {
return fmt.Errorf("config not loaded")
}
// Validate agent models
for name, agent := range cfg.Agents {
if err := validateAgent(cfg, name, agent); err != nil {
return err
}
}
@@ -480,6 +534,10 @@ func getProviderAPIKey(provider models.ModelProvider) string {
return os.Getenv("GEMINI_API_KEY")
case models.ProviderGROQ:
return os.Getenv("GROQ_API_KEY")
case models.ProviderAzure:
return os.Getenv("AZURE_OPENAI_API_KEY")
case models.ProviderOpenRouter:
return os.Getenv("OPENROUTER_API_KEY")
case models.ProviderBedrock:
if hasAWSCredentials() {
return "aws-credentials-available"
@@ -531,6 +589,34 @@ func setDefaultModelForAgent(agent AgentName) bool {
return true
}
if apiKey := os.Getenv("OPENROUTER_API_KEY"); apiKey != "" {
var model models.ModelID
maxTokens := int64(5000)
reasoningEffort := ""
switch agent {
case AgentTitle:
model = models.OpenRouterClaude35Haiku
maxTokens = 80
case AgentTask:
model = models.OpenRouterClaude37Sonnet
default:
model = models.OpenRouterClaude37Sonnet
}
// Check if model supports reasoning
if modelInfo, ok := models.SupportedModels[model]; ok && modelInfo.CanReason {
reasoningEffort = "medium"
}
cfg.Agents[agent] = Agent{
Model: model,
MaxTokens: maxTokens,
ReasoningEffort: reasoningEffort,
}
return true
}
if apiKey := os.Getenv("GEMINI_API_KEY"); apiKey != "" {
var model models.ModelID
maxTokens := int64(5000)
@@ -592,3 +678,36 @@ func WorkingDirectory() string {
}
return cfg.WorkingDir
}
func UpdateAgentModel(agentName AgentName, modelID models.ModelID) error {
if cfg == nil {
panic("config not loaded")
}
existingAgentCfg := cfg.Agents[agentName]
model, ok := models.SupportedModels[modelID]
if !ok {
return fmt.Errorf("model %s not supported", modelID)
}
maxTokens := existingAgentCfg.MaxTokens
if model.DefaultMaxTokens > 0 {
maxTokens = model.DefaultMaxTokens
}
newAgentCfg := Agent{
Model: modelID,
MaxTokens: maxTokens,
ReasoningEffort: existingAgentCfg.ReasoningEffort,
}
cfg.Agents[agentName] = newAgentCfg
if err := validateAgent(cfg, agentName, newAgentCfg); err != nil {
// revert config update on failure
cfg.Agents[agentName] = existingAgentCfg
return fmt.Errorf("failed to update agent model: %w", err)
}
return nil
}

View File

@@ -6,14 +6,13 @@ import (
"os"
"path/filepath"
_ "github.com/ncruces/go-sqlite3/driver"
_ "github.com/ncruces/go-sqlite3/embed"
"github.com/opencode-ai/opencode/internal/config"
"github.com/opencode-ai/opencode/internal/logging"
"github.com/pressly/goose/v3"
)
func Connect() (*sql.DB, error) {
@@ -54,38 +53,16 @@ func Connect() (*sql.DB, error) {
}
}
// Initialize schema from embedded file
goose.SetBaseFS(FS)
if err := goose.SetDialect("sqlite3"); err != nil {
logging.Error("Failed to set dialect", "error", err)
return nil, fmt.Errorf("failed to set dialect: %w", err)
}
if err := goose.Up(db, "migrations"); err != nil {
logging.Error("Failed to apply migrations", "error", err)
return nil, fmt.Errorf("failed to apply migrations: %w", err)
}
return db, nil
}

View File

@@ -1,10 +0,0 @@
DROP TRIGGER IF EXISTS update_sessions_updated_at;
DROP TRIGGER IF EXISTS update_messages_updated_at;
DROP TRIGGER IF EXISTS update_files_updated_at;
DROP TRIGGER IF EXISTS update_session_message_count_on_delete;
DROP TRIGGER IF EXISTS update_session_message_count_on_insert;
DROP TABLE IF EXISTS sessions;
DROP TABLE IF EXISTS messages;
DROP TABLE IF EXISTS files;

View File

@@ -1,3 +1,5 @@
-- +goose Up
-- +goose StatementBegin
-- Sessions
CREATE TABLE IF NOT EXISTS sessions (
id TEXT PRIMARY KEY,
@@ -78,3 +80,19 @@ UPDATE sessions SET
message_count = message_count - 1
WHERE id = old.session_id;
END;
-- +goose StatementEnd
-- +goose Down
-- +goose StatementBegin
DROP TRIGGER IF EXISTS update_sessions_updated_at;
DROP TRIGGER IF EXISTS update_messages_updated_at;
DROP TRIGGER IF EXISTS update_files_updated_at;
DROP TRIGGER IF EXISTS update_session_message_count_on_delete;
DROP TRIGGER IF EXISTS update_session_message_count_on_insert;
DROP TABLE IF EXISTS sessions;
DROP TABLE IF EXISTS messages;
DROP TABLE IF EXISTS files;
-- +goose StatementEnd

View File

@@ -4,23 +4,18 @@ import (
"bytes"
"fmt"
"io"
"os"
"path/filepath"
"regexp"
"strconv"
"strings"
"time"
"github.com/alecthomas/chroma/v2"
"github.com/alecthomas/chroma/v2/formatters"
"github.com/alecthomas/chroma/v2/lexers"
"github.com/alecthomas/chroma/v2/styles"
"github.com/aymanbagabas/go-udiff"
"github.com/charmbracelet/lipgloss"
"github.com/charmbracelet/x/ansi"
"github.com/opencode-ai/opencode/internal/config"
"github.com/opencode-ai/opencode/internal/logging"
"github.com/sergi/go-diff/diffmatchpatch"
)
@@ -942,106 +937,21 @@ func GenerateDiff(beforeContent, afterContent, fileName string) (string, int, in
cwd := config.WorkingDirectory()
fileName = strings.TrimPrefix(fileName, cwd)
fileName = strings.TrimPrefix(fileName, "/")
var (
unified = udiff.Unified("a/"+fileName, "b/"+fileName, beforeContent, afterContent)
additions = 0
removals = 0
)
lines := strings.Split(unified, "\n")
for _, line := range lines {
if strings.HasPrefix(line, "+") && !strings.HasPrefix(line, "+++") {
additions++
} else if strings.HasPrefix(line, "-") && !strings.HasPrefix(line, "---") {
removals++
}
}
return unified, additions, removals
}

View File

@@ -42,6 +42,7 @@ type Service interface {
Cancel(sessionID string)
IsSessionBusy(sessionID string) bool
IsBusy() bool
Update(agentName config.AgentName, modelID models.ModelID) (models.Model, error)
}
type agent struct {
@@ -436,6 +437,25 @@ func (a *agent) TrackUsage(ctx context.Context, sessionID string, model models.M
return nil
}
func (a *agent) Update(agentName config.AgentName, modelID models.ModelID) (models.Model, error) {
if a.IsBusy() {
return models.Model{}, fmt.Errorf("cannot change model while processing requests")
}
if err := config.UpdateAgentModel(agentName, modelID); err != nil {
return models.Model{}, fmt.Errorf("failed to update config: %w", err)
}
provider, err := createAgentProvider(agentName)
if err != nil {
return models.Model{}, fmt.Errorf("failed to create provider for model %s: %w", modelID, err)
}
a.provider = provider
return a.provider.Model(), nil
}
func createAgentProvider(agentName config.AgentName) (provider.Provider, error) {
cfg := config.Get()
agentConfig, ok := cfg.Agents[agentName]

View File

@@ -58,7 +58,7 @@ func runTool(ctx context.Context, c MCPClient, toolName string, input string) (t
toolRequest := mcp.CallToolRequest{}
toolRequest.Params.Name = toolName
var args map[string]any
if err = json.Unmarshal([]byte(input), &args); err != nil {
return tools.NewTextErrorResponse(fmt.Sprintf("error parsing parameters: %s", err)), nil
}
toolRequest.Params.Arguments = args

View File

@@ -11,8 +11,8 @@ const (
Claude3Opus ModelID = "claude-3-opus"
)
// https://docs.anthropic.com/en/docs/about-claude/models/all-models
var AnthropicModels = map[ModelID]Model{
// Anthropic
Claude35Sonnet: {
ID: Claude35Sonnet,
Name: "Claude 3.5 Sonnet",
@@ -29,13 +29,13 @@ var AnthropicModels = map[ModelID]Model{
ID: Claude3Haiku,
Name: "Claude 3 Haiku",
Provider: ProviderAnthropic,
APIModel: "claude-3-haiku-20240307", // doesn't support "-latest"
CostPer1MIn: 0.25,
CostPer1MInCached: 0.30,
CostPer1MOutCached: 0.03,
CostPer1MOut: 1.25,
ContextWindow: 200000,
DefaultMaxTokens: 4096,
},
Claude37Sonnet: {
ID: Claude37Sonnet,

View File

@@ -0,0 +1,157 @@
package models
const ProviderAzure ModelProvider = "azure"
const (
AzureGPT41 ModelID = "azure.gpt-4.1"
AzureGPT41Mini ModelID = "azure.gpt-4.1-mini"
AzureGPT41Nano ModelID = "azure.gpt-4.1-nano"
AzureGPT45Preview ModelID = "azure.gpt-4.5-preview"
AzureGPT4o ModelID = "azure.gpt-4o"
AzureGPT4oMini ModelID = "azure.gpt-4o-mini"
AzureO1 ModelID = "azure.o1"
AzureO1Mini ModelID = "azure.o1-mini"
AzureO3 ModelID = "azure.o3"
AzureO3Mini ModelID = "azure.o3-mini"
AzureO4Mini ModelID = "azure.o4-mini"
)
var AzureModels = map[ModelID]Model{
AzureGPT41: {
ID: AzureGPT41,
Name: "Azure OpenAI GPT 4.1",
Provider: ProviderAzure,
APIModel: "gpt-4.1",
CostPer1MIn: OpenAIModels[GPT41].CostPer1MIn,
CostPer1MInCached: OpenAIModels[GPT41].CostPer1MInCached,
CostPer1MOut: OpenAIModels[GPT41].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[GPT41].CostPer1MOutCached,
ContextWindow: OpenAIModels[GPT41].ContextWindow,
DefaultMaxTokens: OpenAIModels[GPT41].DefaultMaxTokens,
},
AzureGPT41Mini: {
ID: AzureGPT41Mini,
Name: "Azure OpenAI GPT 4.1 mini",
Provider: ProviderAzure,
APIModel: "gpt-4.1-mini",
CostPer1MIn: OpenAIModels[GPT41Mini].CostPer1MIn,
CostPer1MInCached: OpenAIModels[GPT41Mini].CostPer1MInCached,
CostPer1MOut: OpenAIModels[GPT41Mini].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[GPT41Mini].CostPer1MOutCached,
ContextWindow: OpenAIModels[GPT41Mini].ContextWindow,
DefaultMaxTokens: OpenAIModels[GPT41Mini].DefaultMaxTokens,
},
AzureGPT41Nano: {
ID: AzureGPT41Nano,
Name: "Azure OpenAI GPT 4.1 nano",
Provider: ProviderAzure,
APIModel: "gpt-4.1-nano",
CostPer1MIn: OpenAIModels[GPT41Nano].CostPer1MIn,
CostPer1MInCached: OpenAIModels[GPT41Nano].CostPer1MInCached,
CostPer1MOut: OpenAIModels[GPT41Nano].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[GPT41Nano].CostPer1MOutCached,
ContextWindow: OpenAIModels[GPT41Nano].ContextWindow,
DefaultMaxTokens: OpenAIModels[GPT41Nano].DefaultMaxTokens,
},
AzureGPT45Preview: {
ID: AzureGPT45Preview,
Name: "Azure OpenAI GPT 4.5 preview",
Provider: ProviderAzure,
APIModel: "gpt-4.5-preview",
CostPer1MIn: OpenAIModels[GPT45Preview].CostPer1MIn,
CostPer1MInCached: OpenAIModels[GPT45Preview].CostPer1MInCached,
CostPer1MOut: OpenAIModels[GPT45Preview].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[GPT45Preview].CostPer1MOutCached,
ContextWindow: OpenAIModels[GPT45Preview].ContextWindow,
DefaultMaxTokens: OpenAIModels[GPT45Preview].DefaultMaxTokens,
},
AzureGPT4o: {
ID: AzureGPT4o,
Name: "Azure OpenAI GPT-4o",
Provider: ProviderAzure,
APIModel: "gpt-4o",
CostPer1MIn: OpenAIModels[GPT4o].CostPer1MIn,
CostPer1MInCached: OpenAIModels[GPT4o].CostPer1MInCached,
CostPer1MOut: OpenAIModels[GPT4o].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[GPT4o].CostPer1MOutCached,
ContextWindow: OpenAIModels[GPT4o].ContextWindow,
DefaultMaxTokens: OpenAIModels[GPT4o].DefaultMaxTokens,
},
AzureGPT4oMini: {
ID: AzureGPT4oMini,
Name: "Azure OpenAI GPT-4o mini",
Provider: ProviderAzure,
APIModel: "gpt-4o-mini",
CostPer1MIn: OpenAIModels[GPT4oMini].CostPer1MIn,
CostPer1MInCached: OpenAIModels[GPT4oMini].CostPer1MInCached,
CostPer1MOut: OpenAIModels[GPT4oMini].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[GPT4oMini].CostPer1MOutCached,
ContextWindow: OpenAIModels[GPT4oMini].ContextWindow,
DefaultMaxTokens: OpenAIModels[GPT4oMini].DefaultMaxTokens,
},
AzureO1: {
ID: AzureO1,
Name: "Azure OpenAI O1",
Provider: ProviderAzure,
APIModel: "o1",
CostPer1MIn: OpenAIModels[O1].CostPer1MIn,
CostPer1MInCached: OpenAIModels[O1].CostPer1MInCached,
CostPer1MOut: OpenAIModels[O1].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[O1].CostPer1MOutCached,
ContextWindow: OpenAIModels[O1].ContextWindow,
DefaultMaxTokens: OpenAIModels[O1].DefaultMaxTokens,
CanReason: OpenAIModels[O1].CanReason,
},
AzureO1Mini: {
ID: AzureO1Mini,
Name: "Azure OpenAI O1 mini",
Provider: ProviderAzure,
APIModel: "o1-mini",
CostPer1MIn: OpenAIModels[O1Mini].CostPer1MIn,
CostPer1MInCached: OpenAIModels[O1Mini].CostPer1MInCached,
CostPer1MOut: OpenAIModels[O1Mini].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[O1Mini].CostPer1MOutCached,
ContextWindow: OpenAIModels[O1Mini].ContextWindow,
DefaultMaxTokens: OpenAIModels[O1Mini].DefaultMaxTokens,
CanReason: OpenAIModels[O1Mini].CanReason,
},
AzureO3: {
ID: AzureO3,
Name: "Azure OpenAI O3",
Provider: ProviderAzure,
APIModel: "o3",
CostPer1MIn: OpenAIModels[O3].CostPer1MIn,
CostPer1MInCached: OpenAIModels[O3].CostPer1MInCached,
CostPer1MOut: OpenAIModels[O3].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[O3].CostPer1MOutCached,
ContextWindow: OpenAIModels[O3].ContextWindow,
DefaultMaxTokens: OpenAIModels[O3].DefaultMaxTokens,
CanReason: OpenAIModels[O3].CanReason,
},
AzureO3Mini: {
ID: AzureO3Mini,
Name: "Azure OpenAI O3 mini",
Provider: ProviderAzure,
APIModel: "o3-mini",
CostPer1MIn: OpenAIModels[O3Mini].CostPer1MIn,
CostPer1MInCached: OpenAIModels[O3Mini].CostPer1MInCached,
CostPer1MOut: OpenAIModels[O3Mini].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[O3Mini].CostPer1MOutCached,
ContextWindow: OpenAIModels[O3Mini].ContextWindow,
DefaultMaxTokens: OpenAIModels[O3Mini].DefaultMaxTokens,
CanReason: OpenAIModels[O3Mini].CanReason,
},
AzureO4Mini: {
ID: AzureO4Mini,
Name: "Azure OpenAI O4 mini",
Provider: ProviderAzure,
APIModel: "o4-mini",
CostPer1MIn: OpenAIModels[O4Mini].CostPer1MIn,
CostPer1MInCached: OpenAIModels[O4Mini].CostPer1MInCached,
CostPer1MOut: OpenAIModels[O4Mini].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[O4Mini].CostPer1MOutCached,
ContextWindow: OpenAIModels[O4Mini].ContextWindow,
DefaultMaxTokens: OpenAIModels[O4Mini].DefaultMaxTokens,
CanReason: OpenAIModels[O4Mini].CanReason,
},
}

View File

@@ -0,0 +1,82 @@
package models
const (
ProviderGROQ ModelProvider = "groq"
// GROQ
QWENQwq ModelID = "qwen-qwq"
// GROQ preview models
Llama4Scout ModelID = "meta-llama/llama-4-scout-17b-16e-instruct"
Llama4Maverick ModelID = "meta-llama/llama-4-maverick-17b-128e-instruct"
Llama3_3_70BVersatile ModelID = "llama-3.3-70b-versatile"
DeepseekR1DistillLlama70b ModelID = "deepseek-r1-distill-llama-70b"
)
var GroqModels = map[ModelID]Model{
//
// GROQ
QWENQwq: {
ID: QWENQwq,
Name: "Qwen Qwq",
Provider: ProviderGROQ,
APIModel: "qwen-qwq-32b",
CostPer1MIn: 0.29,
CostPer1MInCached: 0.275,
CostPer1MOutCached: 0.0,
CostPer1MOut: 0.39,
ContextWindow: 128_000,
DefaultMaxTokens: 50000,
// for some reason, the groq api doesn't like the reasoningEffort parameter
CanReason: false,
},
Llama4Scout: {
ID: Llama4Scout,
Name: "Llama4Scout",
Provider: ProviderGROQ,
APIModel: "meta-llama/llama-4-scout-17b-16e-instruct",
CostPer1MIn: 0.11,
CostPer1MInCached: 0,
CostPer1MOutCached: 0,
CostPer1MOut: 0.34,
ContextWindow: 128_000, // 10M when?
},
Llama4Maverick: {
ID: Llama4Maverick,
Name: "Llama4Maverick",
Provider: ProviderGROQ,
APIModel: "meta-llama/llama-4-maverick-17b-128e-instruct",
CostPer1MIn: 0.20,
CostPer1MInCached: 0,
CostPer1MOutCached: 0,
CostPer1MOut: 0.20,
ContextWindow: 128_000,
},
Llama3_3_70BVersatile: {
ID: Llama3_3_70BVersatile,
Name: "Llama3_3_70BVersatile",
Provider: ProviderGROQ,
APIModel: "llama-3.3-70b-versatile",
CostPer1MIn: 0.59,
CostPer1MInCached: 0,
CostPer1MOutCached: 0,
CostPer1MOut: 0.79,
ContextWindow: 128_000,
},
DeepseekR1DistillLlama70b: {
ID: DeepseekR1DistillLlama70b,
Name: "DeepseekR1DistillLlama70b",
Provider: ProviderGROQ,
APIModel: "deepseek-r1-distill-llama-70b",
CostPer1MIn: 0.75,
CostPer1MInCached: 0,
CostPer1MOutCached: 0,
CostPer1MOut: 0.99,
ContextWindow: 128_000,
CanReason: true,
},
}

View File

@@ -23,21 +23,25 @@ type Model struct {
// Model IDs
const ( // GEMINI
// Bedrock
BedrockClaude37Sonnet ModelID = "bedrock.claude-3.7-sonnet"
)
const (
ProviderBedrock ModelProvider = "bedrock"
// ForTests
ProviderMock ModelProvider = "__mock"
)
// Providers in order of popularity
var ProviderPopularity = map[ModelProvider]int{
ProviderAnthropic: 1,
ProviderOpenAI: 2,
ProviderGemini: 3,
ProviderGROQ: 4,
ProviderBedrock: 5,
}
var SupportedModels = map[ModelID]Model{
//
// // GEMINI
@@ -63,18 +67,6 @@ var SupportedModels = map[ModelID]Model{
// CostPer1MOut: 0.4,
// },
//
// // Bedrock
BedrockClaude37Sonnet: {
ID: BedrockClaude37Sonnet,
@@ -92,4 +84,7 @@ func init() {
maps.Copy(SupportedModels, AnthropicModels)
maps.Copy(SupportedModels, OpenAIModels)
maps.Copy(SupportedModels, GeminiModels)
maps.Copy(SupportedModels, GroqModels)
maps.Copy(SupportedModels, AzureModels)
maps.Copy(SupportedModels, OpenRouterModels)
}

View File

@@ -0,0 +1,262 @@
package models
const (
ProviderOpenRouter ModelProvider = "openrouter"
OpenRouterGPT41 ModelID = "openrouter.gpt-4.1"
OpenRouterGPT41Mini ModelID = "openrouter.gpt-4.1-mini"
OpenRouterGPT41Nano ModelID = "openrouter.gpt-4.1-nano"
OpenRouterGPT45Preview ModelID = "openrouter.gpt-4.5-preview"
OpenRouterGPT4o ModelID = "openrouter.gpt-4o"
OpenRouterGPT4oMini ModelID = "openrouter.gpt-4o-mini"
OpenRouterO1 ModelID = "openrouter.o1"
OpenRouterO1Pro ModelID = "openrouter.o1-pro"
OpenRouterO1Mini ModelID = "openrouter.o1-mini"
OpenRouterO3 ModelID = "openrouter.o3"
OpenRouterO3Mini ModelID = "openrouter.o3-mini"
OpenRouterO4Mini ModelID = "openrouter.o4-mini"
OpenRouterGemini25Flash ModelID = "openrouter.gemini-2.5-flash"
OpenRouterGemini25 ModelID = "openrouter.gemini-2.5"
OpenRouterClaude35Sonnet ModelID = "openrouter.claude-3.5-sonnet"
OpenRouterClaude3Haiku ModelID = "openrouter.claude-3-haiku"
OpenRouterClaude37Sonnet ModelID = "openrouter.claude-3.7-sonnet"
OpenRouterClaude35Haiku ModelID = "openrouter.claude-3.5-haiku"
OpenRouterClaude3Opus ModelID = "openrouter.claude-3-opus"
)
var OpenRouterModels = map[ModelID]Model{
OpenRouterGPT41: {
ID: OpenRouterGPT41,
Name: "OpenRouter GPT 4.1",
Provider: ProviderOpenRouter,
APIModel: "openai/gpt-4.1",
CostPer1MIn: OpenAIModels[GPT41].CostPer1MIn,
CostPer1MInCached: OpenAIModels[GPT41].CostPer1MInCached,
CostPer1MOut: OpenAIModels[GPT41].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[GPT41].CostPer1MOutCached,
ContextWindow: OpenAIModels[GPT41].ContextWindow,
DefaultMaxTokens: OpenAIModels[GPT41].DefaultMaxTokens,
},
OpenRouterGPT41Mini: {
ID: OpenRouterGPT41Mini,
Name: "OpenRouter GPT 4.1 mini",
Provider: ProviderOpenRouter,
APIModel: "openai/gpt-4.1-mini",
CostPer1MIn: OpenAIModels[GPT41Mini].CostPer1MIn,
CostPer1MInCached: OpenAIModels[GPT41Mini].CostPer1MInCached,
CostPer1MOut: OpenAIModels[GPT41Mini].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[GPT41Mini].CostPer1MOutCached,
ContextWindow: OpenAIModels[GPT41Mini].ContextWindow,
DefaultMaxTokens: OpenAIModels[GPT41Mini].DefaultMaxTokens,
},
OpenRouterGPT41Nano: {
ID: OpenRouterGPT41Nano,
Name: "OpenRouter GPT 4.1 nano",
Provider: ProviderOpenRouter,
APIModel: "openai/gpt-4.1-nano",
CostPer1MIn: OpenAIModels[GPT41Nano].CostPer1MIn,
CostPer1MInCached: OpenAIModels[GPT41Nano].CostPer1MInCached,
CostPer1MOut: OpenAIModels[GPT41Nano].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[GPT41Nano].CostPer1MOutCached,
ContextWindow: OpenAIModels[GPT41Nano].ContextWindow,
DefaultMaxTokens: OpenAIModels[GPT41Nano].DefaultMaxTokens,
},
OpenRouterGPT45Preview: {
ID: OpenRouterGPT45Preview,
Name: "OpenRouter GPT 4.5 preview",
Provider: ProviderOpenRouter,
APIModel: "openai/gpt-4.5-preview",
CostPer1MIn: OpenAIModels[GPT45Preview].CostPer1MIn,
CostPer1MInCached: OpenAIModels[GPT45Preview].CostPer1MInCached,
CostPer1MOut: OpenAIModels[GPT45Preview].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[GPT45Preview].CostPer1MOutCached,
ContextWindow: OpenAIModels[GPT45Preview].ContextWindow,
DefaultMaxTokens: OpenAIModels[GPT45Preview].DefaultMaxTokens,
},
OpenRouterGPT4o: {
ID: OpenRouterGPT4o,
Name: "OpenRouter GPT 4o",
Provider: ProviderOpenRouter,
APIModel: "openai/gpt-4o",
CostPer1MIn: OpenAIModels[GPT4o].CostPer1MIn,
CostPer1MInCached: OpenAIModels[GPT4o].CostPer1MInCached,
CostPer1MOut: OpenAIModels[GPT4o].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[GPT4o].CostPer1MOutCached,
ContextWindow: OpenAIModels[GPT4o].ContextWindow,
DefaultMaxTokens: OpenAIModels[GPT4o].DefaultMaxTokens,
},
OpenRouterGPT4oMini: {
ID: OpenRouterGPT4oMini,
Name: "OpenRouter GPT 4o mini",
Provider: ProviderOpenRouter,
APIModel: "openai/gpt-4o-mini",
CostPer1MIn: OpenAIModels[GPT4oMini].CostPer1MIn,
CostPer1MInCached: OpenAIModels[GPT4oMini].CostPer1MInCached,
CostPer1MOut: OpenAIModels[GPT4oMini].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[GPT4oMini].CostPer1MOutCached,
ContextWindow: OpenAIModels[GPT4oMini].ContextWindow,
DefaultMaxTokens: OpenAIModels[GPT4oMini].DefaultMaxTokens,
},
OpenRouterO1: {
ID: OpenRouterO1,
Name: "OpenRouter O1",
Provider: ProviderOpenRouter,
APIModel: "openai/o1",
CostPer1MIn: OpenAIModels[O1].CostPer1MIn,
CostPer1MInCached: OpenAIModels[O1].CostPer1MInCached,
CostPer1MOut: OpenAIModels[O1].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[O1].CostPer1MOutCached,
ContextWindow: OpenAIModels[O1].ContextWindow,
DefaultMaxTokens: OpenAIModels[O1].DefaultMaxTokens,
CanReason: OpenAIModels[O1].CanReason,
},
OpenRouterO1Pro: {
ID: OpenRouterO1Pro,
Name: "OpenRouter o1 pro",
Provider: ProviderOpenRouter,
APIModel: "openai/o1-pro",
CostPer1MIn: OpenAIModels[O1Pro].CostPer1MIn,
CostPer1MInCached: OpenAIModels[O1Pro].CostPer1MInCached,
CostPer1MOut: OpenAIModels[O1Pro].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[O1Pro].CostPer1MOutCached,
ContextWindow: OpenAIModels[O1Pro].ContextWindow,
DefaultMaxTokens: OpenAIModels[O1Pro].DefaultMaxTokens,
CanReason: OpenAIModels[O1Pro].CanReason,
},
OpenRouterO1Mini: {
ID: OpenRouterO1Mini,
Name: "OpenRouter o1 mini",
Provider: ProviderOpenRouter,
APIModel: "openai/o1-mini",
CostPer1MIn: OpenAIModels[O1Mini].CostPer1MIn,
CostPer1MInCached: OpenAIModels[O1Mini].CostPer1MInCached,
CostPer1MOut: OpenAIModels[O1Mini].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[O1Mini].CostPer1MOutCached,
ContextWindow: OpenAIModels[O1Mini].ContextWindow,
DefaultMaxTokens: OpenAIModels[O1Mini].DefaultMaxTokens,
CanReason: OpenAIModels[O1Mini].CanReason,
},
OpenRouterO3: {
ID: OpenRouterO3,
Name: "OpenRouter o3",
Provider: ProviderOpenRouter,
APIModel: "openai/o3",
CostPer1MIn: OpenAIModels[O3].CostPer1MIn,
CostPer1MInCached: OpenAIModels[O3].CostPer1MInCached,
CostPer1MOut: OpenAIModels[O3].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[O3].CostPer1MOutCached,
ContextWindow: OpenAIModels[O3].ContextWindow,
DefaultMaxTokens: OpenAIModels[O3].DefaultMaxTokens,
CanReason: OpenAIModels[O3].CanReason,
},
OpenRouterO3Mini: {
ID: OpenRouterO3Mini,
Name: "OpenRouter o3 mini",
Provider: ProviderOpenRouter,
APIModel: "openai/o3-mini-high",
CostPer1MIn: OpenAIModels[O3Mini].CostPer1MIn,
CostPer1MInCached: OpenAIModels[O3Mini].CostPer1MInCached,
CostPer1MOut: OpenAIModels[O3Mini].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[O3Mini].CostPer1MOutCached,
ContextWindow: OpenAIModels[O3Mini].ContextWindow,
DefaultMaxTokens: OpenAIModels[O3Mini].DefaultMaxTokens,
CanReason: OpenAIModels[O3Mini].CanReason,
},
OpenRouterO4Mini: {
ID: OpenRouterO4Mini,
Name: "OpenRouter o4 mini",
Provider: ProviderOpenRouter,
APIModel: "openai/o4-mini-high",
CostPer1MIn: OpenAIModels[O4Mini].CostPer1MIn,
CostPer1MInCached: OpenAIModels[O4Mini].CostPer1MInCached,
CostPer1MOut: OpenAIModels[O4Mini].CostPer1MOut,
CostPer1MOutCached: OpenAIModels[O4Mini].CostPer1MOutCached,
ContextWindow: OpenAIModels[O4Mini].ContextWindow,
DefaultMaxTokens: OpenAIModels[O4Mini].DefaultMaxTokens,
CanReason: OpenAIModels[O4Mini].CanReason,
},
OpenRouterGemini25Flash: {
ID: OpenRouterGemini25Flash,
Name: "OpenRouter Gemini 2.5 Flash",
Provider: ProviderOpenRouter,
APIModel: "google/gemini-2.5-flash-preview:thinking",
CostPer1MIn: GeminiModels[Gemini25Flash].CostPer1MIn,
CostPer1MInCached: GeminiModels[Gemini25Flash].CostPer1MInCached,
CostPer1MOut: GeminiModels[Gemini25Flash].CostPer1MOut,
CostPer1MOutCached: GeminiModels[Gemini25Flash].CostPer1MOutCached,
ContextWindow: GeminiModels[Gemini25Flash].ContextWindow,
DefaultMaxTokens: GeminiModels[Gemini25Flash].DefaultMaxTokens,
},
OpenRouterGemini25: {
ID: OpenRouterGemini25,
Name: "OpenRouter Gemini 2.5 Pro",
Provider: ProviderOpenRouter,
APIModel: "google/gemini-2.5-pro-preview-03-25",
CostPer1MIn: GeminiModels[Gemini25].CostPer1MIn,
CostPer1MInCached: GeminiModels[Gemini25].CostPer1MInCached,
CostPer1MOut: GeminiModels[Gemini25].CostPer1MOut,
CostPer1MOutCached: GeminiModels[Gemini25].CostPer1MOutCached,
ContextWindow: GeminiModels[Gemini25].ContextWindow,
DefaultMaxTokens: GeminiModels[Gemini25].DefaultMaxTokens,
},
OpenRouterClaude35Sonnet: {
ID: OpenRouterClaude35Sonnet,
Name: "OpenRouter Claude 3.5 Sonnet",
Provider: ProviderOpenRouter,
APIModel: "anthropic/claude-3.5-sonnet",
CostPer1MIn: AnthropicModels[Claude35Sonnet].CostPer1MIn,
CostPer1MInCached: AnthropicModels[Claude35Sonnet].CostPer1MInCached,
CostPer1MOut: AnthropicModels[Claude35Sonnet].CostPer1MOut,
CostPer1MOutCached: AnthropicModels[Claude35Sonnet].CostPer1MOutCached,
ContextWindow: AnthropicModels[Claude35Sonnet].ContextWindow,
DefaultMaxTokens: AnthropicModels[Claude35Sonnet].DefaultMaxTokens,
},
OpenRouterClaude3Haiku: {
ID: OpenRouterClaude3Haiku,
Name: "OpenRouter Claude 3 Haiku",
Provider: ProviderOpenRouter,
APIModel: "anthropic/claude-3-haiku",
CostPer1MIn: AnthropicModels[Claude3Haiku].CostPer1MIn,
CostPer1MInCached: AnthropicModels[Claude3Haiku].CostPer1MInCached,
CostPer1MOut: AnthropicModels[Claude3Haiku].CostPer1MOut,
CostPer1MOutCached: AnthropicModels[Claude3Haiku].CostPer1MOutCached,
ContextWindow: AnthropicModels[Claude3Haiku].ContextWindow,
DefaultMaxTokens: AnthropicModels[Claude3Haiku].DefaultMaxTokens,
},
OpenRouterClaude37Sonnet: {
ID: OpenRouterClaude37Sonnet,
Name: "OpenRouter Claude 3.7 Sonnet",
Provider: ProviderOpenRouter,
APIModel: "anthropic/claude-3.7-sonnet",
CostPer1MIn: AnthropicModels[Claude37Sonnet].CostPer1MIn,
CostPer1MInCached: AnthropicModels[Claude37Sonnet].CostPer1MInCached,
CostPer1MOut: AnthropicModels[Claude37Sonnet].CostPer1MOut,
CostPer1MOutCached: AnthropicModels[Claude37Sonnet].CostPer1MOutCached,
ContextWindow: AnthropicModels[Claude37Sonnet].ContextWindow,
DefaultMaxTokens: AnthropicModels[Claude37Sonnet].DefaultMaxTokens,
CanReason: AnthropicModels[Claude37Sonnet].CanReason,
},
OpenRouterClaude35Haiku: {
ID: OpenRouterClaude35Haiku,
Name: "OpenRouter Claude 3.5 Haiku",
Provider: ProviderOpenRouter,
APIModel: "anthropic/claude-3.5-haiku",
CostPer1MIn: AnthropicModels[Claude35Haiku].CostPer1MIn,
CostPer1MInCached: AnthropicModels[Claude35Haiku].CostPer1MInCached,
CostPer1MOut: AnthropicModels[Claude35Haiku].CostPer1MOut,
CostPer1MOutCached: AnthropicModels[Claude35Haiku].CostPer1MOutCached,
ContextWindow: AnthropicModels[Claude35Haiku].ContextWindow,
DefaultMaxTokens: AnthropicModels[Claude35Haiku].DefaultMaxTokens,
},
OpenRouterClaude3Opus: {
ID: OpenRouterClaude3Opus,
Name: "OpenRouter Claude 3 Opus",
Provider: ProviderOpenRouter,
APIModel: "anthropic/claude-3-opus",
CostPer1MIn: AnthropicModels[Claude3Opus].CostPer1MIn,
CostPer1MInCached: AnthropicModels[Claude3Opus].CostPer1MInCached,
CostPer1MOut: AnthropicModels[Claude3Opus].CostPer1MOut,
CostPer1MOutCached: AnthropicModels[Claude3Opus].CostPer1MOutCached,
ContextWindow: AnthropicModels[Claude3Opus].ContextWindow,
DefaultMaxTokens: AnthropicModels[Claude3Opus].DefaultMaxTokens,
},
}

View File

@@ -4,25 +4,14 @@ import (
"fmt"
"os"
"path/filepath"
"strings"
"sync"
"github.com/opencode-ai/opencode/internal/config"
"github.com/opencode-ai/opencode/internal/llm/models"
"github.com/opencode-ai/opencode/internal/logging"
)
func GetAgentPrompt(agentName config.AgentName, provider models.ModelProvider) string {
basePrompt := ""
switch agentName {
@@ -38,26 +27,109 @@ func GetAgentPrompt(agentName config.AgentName, provider models.ModelProvider) s
if agentName == config.AgentCoder || agentName == config.AgentTask {
// Add context from project-specific instruction files if they exist
contextContent := getContextFromPaths()
logging.Debug("Context content", "Context", contextContent)
if contextContent != "" {
return fmt.Sprintf("%s\n\n# Project-Specific Context\n Make sure to follow the instructions in the context below\n%s", basePrompt, contextContent)
}
}
return basePrompt
}
var (
onceContext sync.Once
contextContent string
)
func getContextFromPaths() string {
onceContext.Do(func() {
var (
cfg = config.Get()
workDir = cfg.WorkingDir
contextPaths = cfg.ContextPaths
)
contextContent = processContextPaths(workDir, contextPaths)
})
return contextContent
}
func processContextPaths(workDir string, paths []string) string {
var (
wg sync.WaitGroup
resultCh = make(chan string)
)
// Track processed files to avoid duplicates
processedFiles := make(map[string]bool)
var processedMutex sync.Mutex
for _, path := range paths {
wg.Add(1)
go func(p string) {
defer wg.Done()
if strings.HasSuffix(p, "/") {
filepath.WalkDir(filepath.Join(workDir, p), func(path string, d os.DirEntry, err error) error {
if err != nil {
return err
}
if !d.IsDir() {
// Check if we've already processed this file (case-insensitive)
processedMutex.Lock()
lowerPath := strings.ToLower(path)
if !processedFiles[lowerPath] {
processedFiles[lowerPath] = true
processedMutex.Unlock()
if result := processFile(path); result != "" {
resultCh <- result
}
} else {
processedMutex.Unlock()
}
}
return nil
})
} else {
fullPath := filepath.Join(workDir, p)
// Check if we've already processed this file (case-insensitive)
processedMutex.Lock()
lowerPath := strings.ToLower(fullPath)
if !processedFiles[lowerPath] {
processedFiles[lowerPath] = true
processedMutex.Unlock()
result := processFile(fullPath)
if result != "" {
resultCh <- result
}
} else {
processedMutex.Unlock()
}
}
}(path)
}
go func() {
wg.Wait()
close(resultCh)
}()
results := make([]string, 0)
for result := range resultCh {
results = append(results, result)
}
return strings.Join(results, "\n")
}
func processFile(filePath string) string {
content, err := os.ReadFile(filePath)
if err != nil {
return ""
}
return "# From:" + filePath + "\n" + string(content)
}
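The concurrency shape in `processContextPaths` above — one goroutine per path, a mutex-guarded map for case-insensitive de-duplication, and a closer goroutine that ends the fan-in loop — can be sketched in isolation. The helper name `dedupeConcurrent` is illustrative, not part of the codebase:

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// dedupeConcurrent mirrors processContextPaths: fan out one worker per path,
// de-duplicate case-insensitively under a mutex, and close the results
// channel only after every worker has finished.
func dedupeConcurrent(paths []string) []string {
	var (
		wg       sync.WaitGroup
		resultCh = make(chan string)
		mu       sync.Mutex
		seen     = make(map[string]bool)
	)
	for _, p := range paths {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			mu.Lock()
			lower := strings.ToLower(p)
			if seen[lower] {
				mu.Unlock()
				return
			}
			seen[lower] = true
			mu.Unlock()
			resultCh <- p
		}(p)
	}
	// Close the channel once all workers are done so the range below terminates.
	go func() {
		wg.Wait()
		close(resultCh)
	}()
	var out []string
	for r := range resultCh {
		out = append(out, r)
	}
	return out
}

func main() {
	out := dedupeConcurrent([]string{"OpenCode.md", "opencode.md", "CLAUDE.md"})
	fmt.Println(len(out)) // prints 2: the two case-variant names collapse to one
}
```

Which of two case-variant paths survives is racy, but the count is deterministic — the same property the original relies on to avoid duplicated context content.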

View File

@@ -213,7 +213,7 @@ func (a *anthropicClient) send(ctx context.Context, messages []message.Message,
return nil, retryErr
}
if retry {
logging.WarnPersist("Retrying due to rate limit... attempt %d of %d", logging.PersistTimeArg, time.Millisecond*time.Duration(after+100))
logging.WarnPersist(fmt.Sprintf("Retrying due to rate limit... attempt %d of %d", attempts, maxRetries), logging.PersistTimeArg, time.Millisecond*time.Duration(after+100))
select {
case <-ctx.Done():
return nil, ctx.Err()
@@ -262,7 +262,7 @@ func (a *anthropicClient) stream(ctx context.Context, messages []message.Message
event := anthropicStream.Current()
err := accumulatedMessage.Accumulate(event)
if err != nil {
eventChan <- ProviderEvent{Type: EventError, Error: err}
logging.Warn("Error accumulating message", "error", err)
continue
}
@@ -351,7 +351,7 @@ func (a *anthropicClient) stream(ctx context.Context, messages []message.Message
return
}
if retry {
logging.WarnPersist("Retrying due to rate limit... attempt %d of %d", logging.PersistTimeArg, time.Millisecond*time.Duration(after+100))
logging.WarnPersist(fmt.Sprintf("Retrying due to rate limit... attempt %d of %d", attempts, maxRetries), logging.PersistTimeArg, time.Millisecond*time.Duration(after+100))
select {
case <-ctx.Done():
// context cancelled

View File

@@ -0,0 +1,47 @@
package provider
import (
"os"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/openai/openai-go"
"github.com/openai/openai-go/azure"
"github.com/openai/openai-go/option"
)
type azureClient struct {
*openaiClient
}
type AzureClient ProviderClient
func newAzureClient(opts providerClientOptions) AzureClient {
endpoint := os.Getenv("AZURE_OPENAI_ENDPOINT") // ex: https://foo.openai.azure.com
apiVersion := os.Getenv("AZURE_OPENAI_API_VERSION") // ex: 2025-04-01-preview
if endpoint == "" || apiVersion == "" {
return &azureClient{openaiClient: newOpenAIClient(opts).(*openaiClient)}
}
reqOpts := []option.RequestOption{
azure.WithEndpoint(endpoint, apiVersion),
}
if opts.apiKey != "" || os.Getenv("AZURE_OPENAI_API_KEY") != "" {
key := opts.apiKey
if key == "" {
key = os.Getenv("AZURE_OPENAI_API_KEY")
}
reqOpts = append(reqOpts, azure.WithAPIKey(key))
} else if cred, err := azidentity.NewDefaultAzureCredential(nil); err == nil {
reqOpts = append(reqOpts, azure.WithTokenCredential(cred))
}
base := &openaiClient{
providerOptions: opts,
client: openai.NewClient(reqOpts...),
}
return &azureClient{openaiClient: base}
}
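The credential selection in `newAzureClient` follows a fixed precedence: no endpoint means plain OpenAI fallback, then an explicit or environment API key, then `DefaultAzureCredential`. A hypothetical distillation (the function `azureAuthMode` and its return strings are illustrative, not real API):

```go
package main

import "fmt"

// azureAuthMode captures newAzureClient's decision order: missing endpoint
// falls back to the plain OpenAI client; otherwise prefer an explicit key,
// then AZURE_OPENAI_API_KEY, then a default Azure token credential.
func azureAuthMode(endpoint, optKey, envKey string, haveCred bool) string {
	if endpoint == "" {
		return "openai-fallback"
	}
	if optKey != "" || envKey != "" {
		return "api-key"
	}
	if haveCred {
		return "token-credential"
	}
	return "anonymous"
}

func main() {
	fmt.Println(azureAuthMode("https://foo.openai.azure.com", "", "secret", true)) // prints api-key
}
```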

View File

@@ -54,19 +54,6 @@ func newGeminiClient(opts providerClientOptions) GeminiClient {
func (g *geminiClient) convertMessages(messages []message.Message) []*genai.Content {
var history []*genai.Content
// Add system message first
history = append(history, &genai.Content{
Parts: []genai.Part{genai.Text(g.providerOptions.systemMessage)},
Role: "user",
})
// Add a system response to acknowledge the system message
history = append(history, &genai.Content{
Parts: []genai.Part{genai.Text("I'll help you with that.")},
Role: "model",
})
for _, msg := range messages {
switch msg.Role {
case message.User:
@@ -132,7 +119,8 @@ func (g *geminiClient) convertMessages(messages []message.Message) []*genai.Cont
}
func (g *geminiClient) convertTools(tools []tools.BaseTool) []*genai.Tool {
geminiTools := make([]*genai.Tool, 0, len(tools))
geminiTool := &genai.Tool{}
geminiTool.FunctionDeclarations = make([]*genai.FunctionDeclaration, 0, len(tools))
for _, tool := range tools {
info := tool.Info()
@@ -146,23 +134,18 @@ func (g *geminiClient) convertTools(tools []tools.BaseTool) []*genai.Tool {
},
}
geminiTools = append(geminiTools, &genai.Tool{
FunctionDeclarations: []*genai.FunctionDeclaration{declaration},
})
geminiTool.FunctionDeclarations = append(geminiTool.FunctionDeclarations, declaration)
}
return geminiTools
return []*genai.Tool{geminiTool}
}
func (g *geminiClient) finishReason(reason genai.FinishReason) message.FinishReason {
reasonStr := reason.String()
switch {
case reasonStr == "STOP":
case reason == genai.FinishReasonStop:
return message.FinishReasonEndTurn
case reasonStr == "MAX_TOKENS":
case reason == genai.FinishReasonMaxTokens:
return message.FinishReasonMaxTokens
case strings.Contains(reasonStr, "FUNCTION") || strings.Contains(reasonStr, "TOOL"):
return message.FinishReasonToolUse
default:
return message.FinishReasonUnknown
}
@@ -171,7 +154,11 @@ func (g *geminiClient) finishReason(reason genai.FinishReason) message.FinishRea
func (g *geminiClient) send(ctx context.Context, messages []message.Message, tools []tools.BaseTool) (*ProviderResponse, error) {
model := g.client.GenerativeModel(g.providerOptions.model.APIModel)
model.SetMaxOutputTokens(int32(g.providerOptions.maxTokens))
model.SystemInstruction = &genai.Content{
Parts: []genai.Part{
genai.Text(g.providerOptions.systemMessage),
},
}
// Convert tools
if len(tools) > 0 {
model.Tools = g.convertTools(tools)
@@ -189,19 +176,13 @@ func (g *geminiClient) send(ctx context.Context, messages []message.Message, too
attempts := 0
for {
attempts++
var toolCalls []message.ToolCall
chat := model.StartChat()
chat.History = geminiMessages[:len(geminiMessages)-1] // All but last message
lastMsg := geminiMessages[len(geminiMessages)-1]
var lastText string
for _, part := range lastMsg.Parts {
if text, ok := part.(genai.Text); ok {
lastText = string(text)
break
}
}
resp, err := chat.SendMessage(ctx, genai.Text(lastText))
resp, err := chat.SendMessage(ctx, lastMsg.Parts...)
// If there is an error we are going to see if we can retry the call
if err != nil {
retry, after, retryErr := g.shouldRetry(attempts, err)
@@ -209,7 +190,7 @@ func (g *geminiClient) send(ctx context.Context, messages []message.Message, too
return nil, retryErr
}
if retry {
logging.WarnPersist("Retrying due to rate limit... attempt %d of %d", logging.PersistTimeArg, time.Millisecond*time.Duration(after+100))
logging.WarnPersist(fmt.Sprintf("Retrying due to rate limit... attempt %d of %d", attempts, maxRetries), logging.PersistTimeArg, time.Millisecond*time.Duration(after+100))
select {
case <-ctx.Done():
return nil, ctx.Err()
@@ -221,7 +202,6 @@ func (g *geminiClient) send(ctx context.Context, messages []message.Message, too
}
content := ""
var toolCalls []message.ToolCall
if len(resp.Candidates) > 0 && resp.Candidates[0].Content != nil {
for _, part := range resp.Candidates[0].Content.Parts {
@@ -232,20 +212,28 @@ func (g *geminiClient) send(ctx context.Context, messages []message.Message, too
id := "call_" + uuid.New().String()
args, _ := json.Marshal(p.Args)
toolCalls = append(toolCalls, message.ToolCall{
ID: id,
Name: p.Name,
Input: string(args),
Type: "function",
ID: id,
Name: p.Name,
Input: string(args),
Type: "function",
Finished: true,
})
}
}
}
finishReason := message.FinishReasonEndTurn
if len(resp.Candidates) > 0 {
finishReason = g.finishReason(resp.Candidates[0].FinishReason)
}
if len(toolCalls) > 0 {
finishReason = message.FinishReasonToolUse
}
return &ProviderResponse{
Content: content,
ToolCalls: toolCalls,
Usage: g.usage(resp),
FinishReason: g.finishReason(resp.Candidates[0].FinishReason),
FinishReason: finishReason,
}, nil
}
}
@@ -253,7 +241,11 @@ func (g *geminiClient) send(ctx context.Context, messages []message.Message, too
func (g *geminiClient) stream(ctx context.Context, messages []message.Message, tools []tools.BaseTool) <-chan ProviderEvent {
model := g.client.GenerativeModel(g.providerOptions.model.APIModel)
model.SetMaxOutputTokens(int32(g.providerOptions.maxTokens))
model.SystemInstruction = &genai.Content{
Parts: []genai.Part{
genai.Text(g.providerOptions.systemMessage),
},
}
// Convert tools
if len(tools) > 0 {
model.Tools = g.convertTools(tools)
@@ -277,18 +269,10 @@ func (g *geminiClient) stream(ctx context.Context, messages []message.Message, t
for {
attempts++
chat := model.StartChat()
chat.History = geminiMessages[:len(geminiMessages)-1] // All but last message
chat.History = geminiMessages[:len(geminiMessages)-1]
lastMsg := geminiMessages[len(geminiMessages)-1]
var lastText string
for _, part := range lastMsg.Parts {
if text, ok := part.(genai.Text); ok {
lastText = string(text)
break
}
}
iter := chat.SendMessageStream(ctx, genai.Text(lastText))
iter := chat.SendMessageStream(ctx, lastMsg.Parts...)
currentContent := ""
toolCalls := []message.ToolCall{}
@@ -308,7 +292,7 @@ func (g *geminiClient) stream(ctx context.Context, messages []message.Message, t
return
}
if retry {
logging.WarnPersist("Retrying due to rate limit... attempt %d of %d", logging.PersistTimeArg, time.Millisecond*time.Duration(after+100))
logging.WarnPersist(fmt.Sprintf("Retrying due to rate limit... attempt %d of %d", attempts, maxRetries), logging.PersistTimeArg, time.Millisecond*time.Duration(after+100))
select {
case <-ctx.Done():
if ctx.Err() != nil {
@@ -331,23 +315,23 @@ func (g *geminiClient) stream(ctx context.Context, messages []message.Message, t
for _, part := range resp.Candidates[0].Content.Parts {
switch p := part.(type) {
case genai.Text:
newText := string(p)
delta := newText[len(currentContent):]
delta := string(p)
if delta != "" {
eventChan <- ProviderEvent{
Type: EventContentDelta,
Content: delta,
}
currentContent = newText
currentContent += delta
}
case genai.FunctionCall:
id := "call_" + uuid.New().String()
args, _ := json.Marshal(p.Args)
newCall := message.ToolCall{
ID: id,
Name: p.Name,
Input: string(args),
Type: "function",
ID: id,
Name: p.Name,
Input: string(args),
Type: "function",
Finished: true,
}
isNew := true
@@ -369,37 +353,26 @@ func (g *geminiClient) stream(ctx context.Context, messages []message.Message, t
eventChan <- ProviderEvent{Type: EventContentStop}
if finalResp != nil {
finishReason := message.FinishReasonEndTurn
if len(finalResp.Candidates) > 0 {
finishReason = g.finishReason(finalResp.Candidates[0].FinishReason)
}
if len(toolCalls) > 0 {
finishReason = message.FinishReasonToolUse
}
eventChan <- ProviderEvent{
Type: EventComplete,
Response: &ProviderResponse{
Content: currentContent,
ToolCalls: toolCalls,
Usage: g.usage(finalResp),
FinishReason: g.finishReason(finalResp.Candidates[0].FinishReason),
FinishReason: finishReason,
},
}
return
}
// If we get here, we need to retry
if attempts > maxRetries {
eventChan <- ProviderEvent{
Type: EventError,
Error: fmt.Errorf("maximum retry attempts reached: %d retries", maxRetries),
}
return
}
// Wait before retrying
select {
case <-ctx.Done():
if ctx.Err() != nil {
eventChan <- ProviderEvent{Type: EventError, Error: ctx.Err()}
}
return
case <-time.After(time.Duration(2000*(1<<(attempts-1))) * time.Millisecond):
continue
}
}
}()

View File

@@ -8,19 +8,20 @@ import (
"io"
"time"
"github.com/openai/openai-go"
"github.com/openai/openai-go/option"
"github.com/openai/openai-go/shared"
"github.com/opencode-ai/opencode/internal/config"
"github.com/opencode-ai/opencode/internal/llm/tools"
"github.com/opencode-ai/opencode/internal/logging"
"github.com/opencode-ai/opencode/internal/message"
"github.com/openai/openai-go"
"github.com/openai/openai-go/option"
"github.com/openai/openai-go/shared"
)
type openaiOptions struct {
baseURL string
disableCache bool
reasoningEffort string
extraHeaders map[string]string
}
type OpenAIOption func(*openaiOptions)
@@ -49,6 +50,12 @@ func newOpenAIClient(opts providerClientOptions) OpenAIClient {
openaiClientOptions = append(openaiClientOptions, option.WithBaseURL(openaiOpts.baseURL))
}
if openaiOpts.extraHeaders != nil {
for key, value := range openaiOpts.extraHeaders {
openaiClientOptions = append(openaiClientOptions, option.WithHeader(key, value))
}
}
client := openai.NewClient(openaiClientOptions...)
return &openaiClient{
providerOptions: opts,
@@ -188,7 +195,7 @@ func (o *openaiClient) send(ctx context.Context, messages []message.Message, too
return nil, retryErr
}
if retry {
logging.WarnPersist("Retrying due to rate limit... attempt %d of %d", logging.PersistTimeArg, time.Millisecond*time.Duration(after+100))
logging.WarnPersist(fmt.Sprintf("Retrying due to rate limit... attempt %d of %d", attempts, maxRetries), logging.PersistTimeArg, time.Millisecond*time.Duration(after+100))
select {
case <-ctx.Done():
return nil, ctx.Err()
@@ -204,11 +211,18 @@ func (o *openaiClient) send(ctx context.Context, messages []message.Message, too
content = openaiResponse.Choices[0].Message.Content
}
toolCalls := o.toolCalls(*openaiResponse)
finishReason := o.finishReason(string(openaiResponse.Choices[0].FinishReason))
if len(toolCalls) > 0 {
finishReason = message.FinishReasonToolUse
}
return &ProviderResponse{
Content: content,
ToolCalls: o.toolCalls(*openaiResponse),
ToolCalls: toolCalls,
Usage: o.usage(*openaiResponse),
FinishReason: o.finishReason(string(openaiResponse.Choices[0].FinishReason)),
FinishReason: finishReason,
}, nil
}
}
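The hunk above introduces the finish-reason override this PR applies in both `send` and `stream`: when tool calls are present, report tool use regardless of what the API returned (handling providers, such as OpenRouter models, that report an unknown finish reason alongside tool calls). A minimal sketch with placeholder strings standing in for the `message.FinishReason` constants:

```go
package main

import "fmt"

// resolveFinishReason mirrors the override: any tool call forces the
// tool-use finish reason; otherwise the API-reported reason passes through.
func resolveFinishReason(apiReason string, toolCallCount int) string {
	if toolCallCount > 0 {
		return "tool_use" // stands in for message.FinishReasonToolUse
	}
	return apiReason
}

func main() {
	fmt.Println(resolveFinishReason("unknown", 2)) // prints tool_use
}
```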
@@ -267,13 +281,19 @@ func (o *openaiClient) stream(ctx context.Context, messages []message.Message, t
err := openaiStream.Err()
if err == nil || errors.Is(err, io.EOF) {
// Stream completed successfully
finishReason := o.finishReason(string(acc.ChatCompletion.Choices[0].FinishReason))
if len(toolCalls) > 0 {
finishReason = message.FinishReasonToolUse
}
eventChan <- ProviderEvent{
Type: EventComplete,
Response: &ProviderResponse{
Content: currentContent,
ToolCalls: toolCalls,
Usage: o.usage(acc.ChatCompletion),
FinishReason: o.finishReason(string(acc.ChatCompletion.Choices[0].FinishReason)),
FinishReason: finishReason,
},
}
close(eventChan)
@@ -288,7 +308,7 @@ func (o *openaiClient) stream(ctx context.Context, messages []message.Message, t
return
}
if retry {
logging.WarnPersist("Retrying due to rate limit... attempt %d of %d", logging.PersistTimeArg, time.Millisecond*time.Duration(after+100))
logging.WarnPersist(fmt.Sprintf("Retrying due to rate limit... attempt %d of %d", attempts, maxRetries), logging.PersistTimeArg, time.Millisecond*time.Duration(after+100))
select {
case <-ctx.Done():
// context cancelled
@@ -375,6 +395,12 @@ func WithOpenAIBaseURL(baseURL string) OpenAIOption {
}
}
func WithOpenAIExtraHeaders(headers map[string]string) OpenAIOption {
return func(options *openaiOptions) {
options.extraHeaders = headers
}
}
func WithOpenAIDisableCache() OpenAIOption {
return func(options *openaiOptions) {
options.disableCache = true

View File

@@ -107,6 +107,31 @@ func NewProvider(providerName models.ModelProvider, opts ...ProviderClientOption
options: clientOptions,
client: newBedrockClient(clientOptions),
}, nil
case models.ProviderGROQ:
clientOptions.openaiOptions = append(clientOptions.openaiOptions,
WithOpenAIBaseURL("https://api.groq.com/openai/v1"),
)
return &baseProvider[OpenAIClient]{
options: clientOptions,
client: newOpenAIClient(clientOptions),
}, nil
case models.ProviderAzure:
return &baseProvider[AzureClient]{
options: clientOptions,
client: newAzureClient(clientOptions),
}, nil
case models.ProviderOpenRouter:
clientOptions.openaiOptions = append(clientOptions.openaiOptions,
WithOpenAIBaseURL("https://openrouter.ai/api/v1"),
WithOpenAIExtraHeaders(map[string]string{
"HTTP-Referer": "opencode.ai",
"X-Title": "OpenCode",
}),
)
return &baseProvider[OpenAIClient]{
options: clientOptions,
client: newOpenAIClient(clientOptions),
}, nil
case models.ProviderMock:
// TODO: implement mock client for test
panic("not implemented")
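OpenRouter is wired up as an OpenAI-compatible endpoint: the provider case above only swaps the base URL and attaches OpenRouter's attribution headers. The values it passes can be isolated like so (the `openRouterOptions` struct is illustrative; the real code appends `OpenAIOption`s):

```go
package main

import "fmt"

// openRouterOptions collects the values provider.go hands to the OpenAI
// client when models.ProviderOpenRouter is selected.
type openRouterOptions struct {
	BaseURL string
	Headers map[string]string
}

func newOpenRouterOptions() openRouterOptions {
	return openRouterOptions{
		BaseURL: "https://openrouter.ai/api/v1",
		Headers: map[string]string{
			"HTTP-Referer": "opencode.ai",
			"X-Title":      "OpenCode",
		},
	}
}

func main() {
	o := newOpenRouterOptions()
	fmt.Println(o.BaseURL, o.Headers["X-Title"])
}
```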

View File

@@ -1,11 +1,13 @@
package tools
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io/fs"
"os"
"os/exec"
"path/filepath"
"sort"
"strings"
@@ -132,14 +134,73 @@ func (g *globTool) Run(ctx context.Context, call ToolCall) (ToolResponse, error)
}
func globFiles(pattern, searchPath string, limit int) ([]string, bool, error) {
if !strings.HasPrefix(pattern, "/") && !strings.HasPrefix(pattern, searchPath) {
if !strings.HasSuffix(searchPath, "/") {
searchPath += "/"
}
pattern = searchPath + pattern
matches, err := globWithRipgrep(pattern, searchPath, limit)
if err == nil {
return matches, len(matches) >= limit, nil
}
fsys := os.DirFS("/")
return globWithDoublestar(pattern, searchPath, limit)
}
func globWithRipgrep(
pattern, searchRoot string,
limit int,
) ([]string, error) {
if searchRoot == "" {
searchRoot = "."
}
rgBin, err := exec.LookPath("rg")
if err != nil {
return nil, fmt.Errorf("ripgrep not found in $PATH: %w", err)
}
if !filepath.IsAbs(pattern) && !strings.HasPrefix(pattern, "/") {
pattern = "/" + pattern
}
args := []string{
"--files",
"--null",
"--glob", pattern,
"-L",
}
cmd := exec.Command(rgBin, args...)
cmd.Dir = searchRoot
out, err := cmd.CombinedOutput()
if err != nil {
if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
return nil, nil
}
return nil, fmt.Errorf("ripgrep: %w\n%s", err, out)
}
var matches []string
for _, p := range bytes.Split(out, []byte{0}) {
if len(p) == 0 {
continue
}
abs := filepath.Join(searchRoot, string(p))
if skipHidden(abs) {
continue
}
matches = append(matches, abs)
}
sort.SliceStable(matches, func(i, j int) bool {
return len(matches[i]) < len(matches[j])
})
if len(matches) > limit {
matches = matches[:limit]
}
return matches, nil
}
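`globWithRipgrep` reads `rg --files --null` output, which is NUL-separated with a trailing terminator that splits into an empty record. The parsing step can be sketched on its own (the helper name `parseNullSeparated` is illustrative):

```go
package main

import (
	"bytes"
	"fmt"
)

// parseNullSeparated splits `rg --files --null` output on NUL bytes and
// drops the empty record produced by the trailing terminator.
func parseNullSeparated(out []byte) []string {
	var files []string
	for _, p := range bytes.Split(out, []byte{0}) {
		if len(p) == 0 {
			continue
		}
		files = append(files, string(p))
	}
	return files
}

func main() {
	fmt.Println(parseNullSeparated([]byte("a.go\x00sub/b.go\x00"))) // prints [a.go sub/b.go]
}
```

Using NUL rather than newline separators keeps filenames containing newlines unambiguous, which is why the tool invocation passes `--null`.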
func globWithDoublestar(pattern, searchPath string, limit int) ([]string, bool, error) {
fsys := os.DirFS(searchPath)
relPattern := strings.TrimPrefix(pattern, "/")
@@ -158,7 +219,11 @@ func globFiles(pattern, searchPath string, limit int) ([]string, bool, error) {
return nil // Skip files we can't access
}
absPath := "/" + path // Restore absolute path
absPath := path // WalkDir paths from os.DirFS(searchPath) are relative to searchPath
if !strings.HasPrefix(absPath, searchPath) {
absPath = filepath.Join(searchPath, absPath)
}
matches = append(matches, fileInfo{
path: absPath,
modTime: info.ModTime(),

View File

@@ -47,7 +47,9 @@ func GetPersistentShell(workingDir string) *PersistentShell {
shellInstance = newPersistentShell(workingDir)
})
if !shellInstance.isAlive {
if shellInstance == nil {
shellInstance = newPersistentShell(workingDir)
} else if !shellInstance.isAlive {
shellInstance = newPersistentShell(shellInstance.cwd)
}

View File

@@ -389,7 +389,7 @@ func (c *Client) openKeyConfigFiles(ctx context.Context) {
filepath.Join(workDir, "package.json"),
filepath.Join(workDir, "jsconfig.json"),
}
// Also find and open a few TypeScript files to help the server initialize
c.openTypeScriptFiles(ctx, workDir)
case ServerTypeGo:
@@ -547,12 +547,12 @@ func (c *Client) openTypeScriptFiles(ctx context.Context, workDir string) {
// shouldSkipDir returns true if the directory should be skipped during file search
func shouldSkipDir(path string) bool {
dirName := filepath.Base(path)
// Skip hidden directories
if strings.HasPrefix(dirName, ".") {
return true
}
// Skip common directories that won't contain relevant source files
skipDirs := map[string]bool{
"node_modules": true,
@@ -562,7 +562,7 @@ func shouldSkipDir(path string) bool {
"vendor": true,
"target": true,
}
return skipDirs[dirName]
}
@@ -776,3 +776,10 @@ func (c *Client) GetDiagnosticsForFile(ctx context.Context, filepath string) ([]
return diagnostics, nil
}
// ClearDiagnosticsForURI removes diagnostics for a specific URI from the cache
func (c *Client) ClearDiagnosticsForURI(uri protocol.DocumentUri) {
c.diagnosticsMu.Lock()
defer c.diagnosticsMu.Unlock()
delete(c.diagnostics, uri)
}

View File

@@ -643,7 +643,9 @@ func (w *WorkspaceWatcher) debounceHandleFileEvent(ctx context.Context, uri stri
func (w *WorkspaceWatcher) handleFileEvent(ctx context.Context, uri string, changeType protocol.FileChangeType) {
// Deleted files drop their cached diagnostics; for change events on open files, use a didChange notification
filePath := uri[7:] // Remove "file://" prefix
if changeType == protocol.FileChangeType(protocol.Changed) && w.client.IsFileOpen(filePath) {
if changeType == protocol.FileChangeType(protocol.Deleted) {
w.client.ClearDiagnosticsForURI(protocol.DocumentUri(uri))
} else if changeType == protocol.FileChangeType(protocol.Changed) && w.client.IsFileOpen(filePath) {
err := w.client.NotifyChange(ctx, filePath)
if err != nil {
logging.Error("Error notifying change", "error", err)

View File

@@ -5,7 +5,6 @@ import (
"path/filepath"
"slices"
"sync"
"time"
"github.com/google/uuid"
"github.com/opencode-ai/opencode/internal/config"
@@ -104,12 +103,8 @@ func (s *permissionService) Request(opts CreatePermissionRequest) bool {
s.Publish(pubsub.CreatedEvent, permission)
// Wait for the response with a timeout
select {
case resp := <-respCh:
return resp
case <-time.After(10 * time.Minute):
return false
}
resp := <-respCh
return resp
}
func (s *permissionService) AutoApproveSession(sessionID string) {

View File

@@ -0,0 +1,363 @@
package dialog
import (
"fmt"
"slices"
"strings"
"github.com/charmbracelet/bubbles/key"
tea "github.com/charmbracelet/bubbletea"
"github.com/charmbracelet/lipgloss"
"github.com/opencode-ai/opencode/internal/config"
"github.com/opencode-ai/opencode/internal/llm/models"
"github.com/opencode-ai/opencode/internal/tui/layout"
"github.com/opencode-ai/opencode/internal/tui/styles"
"github.com/opencode-ai/opencode/internal/tui/util"
)
const (
numVisibleModels = 10
maxDialogWidth = 40
)
// ModelSelectedMsg is sent when a model is selected
type ModelSelectedMsg struct {
Model models.Model
}
// CloseModelDialogMsg is sent when the dialog should be closed
type CloseModelDialogMsg struct{}
// ModelDialog interface for the model selection dialog
type ModelDialog interface {
tea.Model
layout.Bindings
}
type modelDialogCmp struct {
models []models.Model
provider models.ModelProvider
availableProviders []models.ModelProvider
selectedIdx int
width int
height int
scrollOffset int
hScrollOffset int
hScrollPossible bool
}
type modelKeyMap struct {
Up key.Binding
Down key.Binding
Left key.Binding
Right key.Binding
Enter key.Binding
Escape key.Binding
J key.Binding
K key.Binding
H key.Binding
L key.Binding
}
var modelKeys = modelKeyMap{
Up: key.NewBinding(
key.WithKeys("up"),
key.WithHelp("↑", "previous model"),
),
Down: key.NewBinding(
key.WithKeys("down"),
key.WithHelp("↓", "next model"),
),
Left: key.NewBinding(
key.WithKeys("left"),
key.WithHelp("←", "scroll left"),
),
Right: key.NewBinding(
key.WithKeys("right"),
key.WithHelp("→", "scroll right"),
),
Enter: key.NewBinding(
key.WithKeys("enter"),
key.WithHelp("enter", "select model"),
),
Escape: key.NewBinding(
key.WithKeys("esc"),
key.WithHelp("esc", "close"),
),
J: key.NewBinding(
key.WithKeys("j"),
key.WithHelp("j", "next model"),
),
K: key.NewBinding(
key.WithKeys("k"),
key.WithHelp("k", "previous model"),
),
H: key.NewBinding(
key.WithKeys("h"),
key.WithHelp("h", "scroll left"),
),
L: key.NewBinding(
key.WithKeys("l"),
key.WithHelp("l", "scroll right"),
),
}
func (m *modelDialogCmp) Init() tea.Cmd {
m.setupModels()
return nil
}
func (m *modelDialogCmp) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
switch msg := msg.(type) {
case tea.KeyMsg:
switch {
case key.Matches(msg, modelKeys.Up) || key.Matches(msg, modelKeys.K):
m.moveSelectionUp()
case key.Matches(msg, modelKeys.Down) || key.Matches(msg, modelKeys.J):
m.moveSelectionDown()
case key.Matches(msg, modelKeys.Left) || key.Matches(msg, modelKeys.H):
if m.hScrollPossible {
m.switchProvider(-1)
}
case key.Matches(msg, modelKeys.Right) || key.Matches(msg, modelKeys.L):
if m.hScrollPossible {
m.switchProvider(1)
}
case key.Matches(msg, modelKeys.Enter):
util.ReportInfo(fmt.Sprintf("selected model: %s", m.models[m.selectedIdx].Name))
return m, util.CmdHandler(ModelSelectedMsg{Model: m.models[m.selectedIdx]})
case key.Matches(msg, modelKeys.Escape):
return m, util.CmdHandler(CloseModelDialogMsg{})
}
case tea.WindowSizeMsg:
m.width = msg.Width
m.height = msg.Height
}
return m, nil
}
// moveSelectionUp moves the selection up or wraps to bottom
func (m *modelDialogCmp) moveSelectionUp() {
if m.selectedIdx > 0 {
m.selectedIdx--
} else {
m.selectedIdx = len(m.models) - 1
m.scrollOffset = max(0, len(m.models)-numVisibleModels)
}
// Keep selection visible
if m.selectedIdx < m.scrollOffset {
m.scrollOffset = m.selectedIdx
}
}
// moveSelectionDown moves the selection down or wraps to top
func (m *modelDialogCmp) moveSelectionDown() {
if m.selectedIdx < len(m.models)-1 {
m.selectedIdx++
} else {
m.selectedIdx = 0
m.scrollOffset = 0
}
// Keep selection visible
if m.selectedIdx >= m.scrollOffset+numVisibleModels {
m.scrollOffset = m.selectedIdx - (numVisibleModels - 1)
}
}
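The selection logic above pairs wrap-around with a scroll clamp so the highlighted row never leaves the ten-row window. A standalone sketch of `moveSelectionDown` as a pure function (the signature is illustrative; the real method mutates the dialog struct):

```go
package main

import "fmt"

// moveDown reproduces moveSelectionDown: advance the selection, wrap to the
// top past the last model, and pull the scroll offset along so the selected
// row stays within the visible window.
func moveDown(selected, scroll, total, visible int) (int, int) {
	if selected < total-1 {
		selected++
	} else {
		selected, scroll = 0, 0
	}
	if selected >= scroll+visible {
		scroll = selected - (visible - 1)
	}
	return selected, scroll
}

func main() {
	sel, off := moveDown(10, 1, 11, 10) // at the last of 11 models, 10 visible
	fmt.Println(sel, off)               // prints 0 0: wraps to the first model
}
```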
func (m *modelDialogCmp) switchProvider(offset int) {
newOffset := m.hScrollOffset + offset
// Ensure we stay within bounds
if newOffset < 0 {
newOffset = len(m.availableProviders) - 1
}
if newOffset >= len(m.availableProviders) {
newOffset = 0
}
m.hScrollOffset = newOffset
m.provider = m.availableProviders[m.hScrollOffset]
m.setupModelsForProvider(m.provider)
}
func (m *modelDialogCmp) View() string {
// Capitalize first letter of provider name
providerName := strings.ToUpper(string(m.provider)[:1]) + string(m.provider[1:])
title := styles.BaseStyle.
Foreground(styles.PrimaryColor).
Bold(true).
Width(maxDialogWidth).
Padding(0, 0, 1).
Render(fmt.Sprintf("Select %s Model", providerName))
// Render visible models
endIdx := min(m.scrollOffset+numVisibleModels, len(m.models))
modelItems := make([]string, 0, endIdx-m.scrollOffset)
for i := m.scrollOffset; i < endIdx; i++ {
itemStyle := styles.BaseStyle.Width(maxDialogWidth)
if i == m.selectedIdx {
itemStyle = itemStyle.Background(styles.PrimaryColor).
Foreground(styles.Background).Bold(true)
}
modelItems = append(modelItems, itemStyle.Render(m.models[i].Name))
}
scrollIndicator := m.getScrollIndicators(maxDialogWidth)
content := lipgloss.JoinVertical(
lipgloss.Left,
title,
styles.BaseStyle.Width(maxDialogWidth).Render(lipgloss.JoinVertical(lipgloss.Left, modelItems...)),
scrollIndicator,
)
return styles.BaseStyle.Padding(1, 2).
Border(lipgloss.RoundedBorder()).
BorderBackground(styles.Background).
BorderForeground(styles.ForgroundDim).
Width(lipgloss.Width(content) + 4).
Render(content)
}
func (m *modelDialogCmp) getScrollIndicators(maxWidth int) string {
var indicator string
if len(m.models) > numVisibleModels {
if m.scrollOffset > 0 {
indicator += "↑ "
}
if m.scrollOffset+numVisibleModels < len(m.models) {
indicator += "↓ "
}
}
if m.hScrollPossible {
if m.hScrollOffset > 0 {
indicator = "← " + indicator
}
if m.hScrollOffset < len(m.availableProviders)-1 {
indicator += "→"
}
}
if indicator == "" {
return ""
}
return styles.BaseStyle.
Foreground(styles.PrimaryColor).
Width(maxWidth).
Align(lipgloss.Right).
Bold(true).
Render(indicator)
}
func (m *modelDialogCmp) BindingKeys() []key.Binding {
return layout.KeyMapToSlice(modelKeys)
}
func (m *modelDialogCmp) setupModels() {
cfg := config.Get()
m.availableProviders = getEnabledProviders(cfg)
m.hScrollPossible = len(m.availableProviders) > 1
agentCfg := cfg.Agents[config.AgentCoder]
selectedModelId := agentCfg.Model
modelInfo := models.SupportedModels[selectedModelId]
m.provider = modelInfo.Provider
m.hScrollOffset = findProviderIndex(m.availableProviders, m.provider)
m.setupModelsForProvider(m.provider)
}
func getEnabledProviders(cfg *config.Config) []models.ModelProvider {
var providers []models.ModelProvider
for providerId, provider := range cfg.Providers {
if !provider.Disabled {
providers = append(providers, providerId)
}
}
// Sort by provider popularity
slices.SortFunc(providers, func(a, b models.ModelProvider) int {
rA := models.ProviderPopularity[a]
rB := models.ProviderPopularity[b]
// models not included in popularity ranking default to last
if rA == 0 {
rA = 999
}
if rB == 0 {
rB = 999
}
return rA - rB
})
return providers
}
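The popularity sort above exploits Go's zero value: a provider absent from `ProviderPopularity` gets rank 0, which is substituted with a large sentinel so unranked providers sink to the end. Sketched with plain strings in place of `models.ModelProvider`:

```go
package main

import (
	"fmt"
	"slices"
)

// sortByPopularity mirrors getEnabledProviders: rank 0 means "not ranked"
// and is replaced with a 999 sentinel so those providers sort last.
func sortByPopularity(providers []string, popularity map[string]int) {
	slices.SortFunc(providers, func(a, b string) int {
		rA, rB := popularity[a], popularity[b]
		if rA == 0 {
			rA = 999
		}
		if rB == 0 {
			rB = 999
		}
		return rA - rB
	})
}

func main() {
	ps := []string{"openrouter", "anthropic", "openai"}
	sortByPopularity(ps, map[string]int{"anthropic": 1, "openai": 2})
	fmt.Println(ps) // prints [anthropic openai openrouter]
}
```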
// findProviderIndex returns the index of the provider in the list, or -1 if not found
func findProviderIndex(providers []models.ModelProvider, provider models.ModelProvider) int {
for i, p := range providers {
if p == provider {
return i
}
}
return -1
}
func (m *modelDialogCmp) setupModelsForProvider(provider models.ModelProvider) {
cfg := config.Get()
agentCfg := cfg.Agents[config.AgentCoder]
selectedModelId := agentCfg.Model
m.provider = provider
m.models = getModelsForProvider(provider)
m.selectedIdx = 0
m.scrollOffset = 0
// Try to select the current model if it belongs to this provider
if provider == models.SupportedModels[selectedModelId].Provider {
for i, model := range m.models {
if model.ID == selectedModelId {
m.selectedIdx = i
// Adjust scroll position to keep selected model visible
if m.selectedIdx >= numVisibleModels {
m.scrollOffset = m.selectedIdx - (numVisibleModels - 1)
}
break
}
}
}
}
func getModelsForProvider(provider models.ModelProvider) []models.Model {
var providerModels []models.Model
for _, model := range models.SupportedModels {
if model.Provider == provider {
providerModels = append(providerModels, model)
}
}
// Reverse alphabetical order (if LLM naming were consistent, the latest models would appear first)
slices.SortFunc(providerModels, func(a, b models.Model) int {
if a.Name > b.Name {
return -1
} else if a.Name < b.Name {
return 1
}
return 0
})
return providerModels
}
func NewModelDialogCmp() ModelDialog {
return &modelDialogCmp{}
}

View File

@@ -1,16 +1,15 @@
package layout
import (
"bytes"
"strings"
"github.com/charmbracelet/lipgloss"
"github.com/opencode-ai/opencode/internal/tui/styles"
"github.com/opencode-ai/opencode/internal/tui/util"
"github.com/mattn/go-runewidth"
chAnsi "github.com/charmbracelet/x/ansi"
"github.com/muesli/ansi"
"github.com/muesli/reflow/truncate"
"github.com/muesli/termenv"
"github.com/opencode-ai/opencode/internal/tui/styles"
"github.com/opencode-ai/opencode/internal/tui/util"
)
// Most of this code is borrowed from
@@ -117,42 +116,7 @@ func PlaceOverlay(
// cutLeft cuts printable characters from the left.
// This function is heavily based on muesli's ansi and truncate packages.
func cutLeft(s string, cutWidth int) string {
var (
pos int
isAnsi bool
ab bytes.Buffer
b bytes.Buffer
)
for _, c := range s {
var w int
if c == ansi.Marker || isAnsi {
isAnsi = true
ab.WriteRune(c)
if ansi.IsTerminator(c) {
isAnsi = false
if bytes.HasSuffix(ab.Bytes(), []byte("[0m")) {
ab.Reset()
}
}
} else {
w = runewidth.RuneWidth(c)
}
if pos >= cutWidth {
if b.Len() == 0 {
if ab.Len() > 0 {
b.Write(ab.Bytes())
}
if pos-cutWidth > 1 {
b.WriteByte(' ')
continue
}
}
b.WriteRune(c)
}
pos += w
}
return b.String()
return chAnsi.Cut(s, cutWidth, lipgloss.Width(s))
}
func max(a, b int) int {

View File

@@ -2,6 +2,7 @@ package tui
import (
"context"
"fmt"
"github.com/charmbracelet/bubbles/key"
tea "github.com/charmbracelet/bubbletea"
@@ -25,6 +26,7 @@ type keyMap struct {
Help key.Binding
SwitchSession key.Binding
Commands key.Binding
Models key.Binding
}
var keys = keyMap{
@@ -51,6 +53,11 @@ var keys = keyMap{
key.WithKeys("ctrl+k"),
key.WithHelp("ctrl+k", "commands"),
),
Models: key.NewBinding(
key.WithKeys("ctrl+o"),
key.WithHelp("ctrl+o", "model selection"),
),
}
var helpEsc = key.NewBinding(
@@ -93,6 +100,9 @@ type appModel struct {
commandDialog dialog.CommandDialog
commands []dialog.Command
showModelDialog bool
modelDialog dialog.ModelDialog
showInitDialog bool
initDialog dialog.InitDialogCmp
}
@@ -112,6 +122,8 @@ func (a appModel) Init() tea.Cmd {
cmds = append(cmds, cmd)
cmd = a.commandDialog.Init()
cmds = append(cmds, cmd)
cmd = a.modelDialog.Init()
cmds = append(cmds, cmd)
cmd = a.initDialog.Init()
cmds = append(cmds, cmd)
@@ -243,6 +255,20 @@ func (a appModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
a.showCommandDialog = false
return a, nil
case dialog.CloseModelDialogMsg:
a.showModelDialog = false
return a, nil
case dialog.ModelSelectedMsg:
a.showModelDialog = false
model, err := a.app.CoderAgent.Update(config.AgentCoder, msg.Model.ID)
if err != nil {
return a, util.ReportError(err)
}
return a, util.ReportInfo(fmt.Sprintf("Model changed to %s", model.Name))
case dialog.ShowInitDialogMsg:
a.showInitDialog = msg.Show
return a, nil
@@ -298,6 +324,9 @@ func (a appModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
if a.showCommandDialog {
a.showCommandDialog = false
}
if a.showModelDialog {
a.showModelDialog = false
}
return a, nil
case key.Matches(msg, keys.SwitchSession):
if a.currentPage == page.ChatPage && !a.showQuit && !a.showPermissions && !a.showCommandDialog {
@@ -325,6 +354,17 @@ func (a appModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
return a, nil
}
return a, nil
case key.Matches(msg, keys.Models):
if a.showModelDialog {
a.showModelDialog = false
return a, nil
}
if a.currentPage == page.ChatPage && !a.showQuit && !a.showPermissions && !a.showSessionDialog && !a.showCommandDialog {
a.showModelDialog = true
return a, nil
}
return a, nil
case key.Matches(msg, logsKeyReturnKey):
if a.currentPage == page.LogsPage {
return a, a.moveToPage(page.ChatPage)
@@ -405,6 +445,16 @@ func (a appModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
}
}
if a.showModelDialog {
d, modelCmd := a.modelDialog.Update(msg)
a.modelDialog = d.(dialog.ModelDialog)
cmds = append(cmds, modelCmd)
// Only block key messages; send all other messages down
if _, ok := msg.(tea.KeyMsg); ok {
return a, tea.Batch(cmds...)
}
}
if a.showInitDialog {
d, initCmd := a.initDialog.Update(msg)
a.initDialog = d.(dialog.InitDialogCmp)
@@ -538,6 +588,21 @@ func (a appModel) View() string {
)
}
if a.showModelDialog {
overlay := a.modelDialog.View()
row := lipgloss.Height(appView) / 2
row -= lipgloss.Height(overlay) / 2
col := lipgloss.Width(appView) / 2
col -= lipgloss.Width(overlay) / 2
appView = layout.PlaceOverlay(
col,
row,
overlay,
appView,
true,
)
}
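Each dialog overlay is centered with the same arithmetic: halve the backdrop's dimensions, then subtract half the overlay's. A small sketch of that calculation in isolation (the 80x24 and 40x10 sizes are hypothetical):

```go
package main

import "fmt"

// centerOffset returns the top-left offset that centers an overlay of the
// given size inside a backdrop of the given size, using the same integer
// division as the dialog-placement code.
func centerOffset(backdrop, overlay int) int {
	return backdrop/2 - overlay/2
}

func main() {
	col := centerOffset(80, 40) // backdrop width, overlay width
	row := centerOffset(24, 10) // backdrop height, overlay height
	fmt.Println(col, row)       // → 20 7
}
```

With odd differences the integer division rounds toward the top-left by one cell, which is imperceptible in a terminal UI.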
if a.showCommandDialog {
overlay := a.commandDialog.View()
row := lipgloss.Height(appView) / 2
@@ -577,6 +642,7 @@ func New(app *app.App) tea.Model {
quit: dialog.NewQuitCmp(),
sessionDialog: dialog.NewSessionDialogCmp(),
commandDialog: dialog.NewCommandDialogCmp(),
modelDialog: dialog.NewModelDialogCmp(),
permissions: dialog.NewPermissionDialogCmp(),
initDialog: dialog.NewInitDialogCmp(),
app: app,


@@ -12,44 +12,75 @@
"model": {
"description": "Model ID for the agent",
"enum": [
"gemini-2.0-flash",
"bedrock.claude-3.7-sonnet",
"claude-3-opus",
"claude-3.5-sonnet",
"gpt-4o-mini",
"o1",
"o3-mini",
"claude-3-haiku",
"claude-3.7-sonnet",
"claude-3.5-haiku",
"o3",
"azure.o3",
"gpt-4.5-preview",
"azure.gpt-4.5-preview",
"o1-pro",
"o4-mini",
"claude-3-haiku",
"gpt-4o",
"o3",
"gpt-4.1-mini",
"gpt-4.5-preview",
"gemini-2.5-flash",
"claude-3.5-haiku",
"azure.o4-mini",
"gpt-4.1",
"gemini-2.0-flash-lite",
"claude-3.7-sonnet",
"o1-mini",
"azure.gpt-4.1",
"o3-mini",
"azure.o3-mini",
"gpt-4.1-nano",
"gemini-2.5"
"azure.gpt-4.1-nano",
"gpt-4o-mini",
"azure.gpt-4o-mini",
"o1",
"azure.o1",
"gemini-2.5-flash",
"qwen-qwq",
"meta-llama/llama-4-maverick-17b-128e-instruct",
"claude-3-opus",
"gpt-4o",
"azure.gpt-4o",
"gemini-2.0-flash-lite",
"gemini-2.0-flash",
"deepseek-r1-distill-llama-70b",
"llama-3.3-70b-versatile",
"claude-3.5-sonnet",
"o1-mini",
"azure.o1-mini",
"gpt-4.1-mini",
"azure.gpt-4.1-mini",
"gemini-2.5",
"meta-llama/llama-4-scout-17b-16e-instruct",
"openrouter.deepseek-chat-free",
"openrouter.deepseek-r1-free",
"openrouter.gpt-4.1",
"openrouter.gpt-4.1-mini",
"openrouter.gpt-4.1-nano",
"openrouter.gpt-4.5-preview",
"openrouter.gpt-4o",
"openrouter.gpt-4o-mini",
"openrouter.o1",
"openrouter.o1-pro",
"openrouter.o1-mini",
"openrouter.o3",
"openrouter.o3-mini",
"openrouter.o4-mini",
"openrouter.gemini-2.5-flash",
"openrouter.gemini-2.5",
"openrouter.claude-3.5-sonnet",
"openrouter.claude-3-haiku",
"openrouter.claude-3.7-sonnet",
"openrouter.claude-3.5-haiku",
"openrouter.claude-3-opus"
],
"type": "string"
},
"reasoningEffort": {
"description": "Reasoning effort for models that support it (OpenAI, Anthropic)",
"enum": [
"low",
"medium",
"high"
],
"enum": ["low", "medium", "high"],
"type": "string"
}
},
"required": [
"model"
],
"required": ["model"],
"type": "object"
}
},
@@ -67,44 +98,75 @@
"model": {
"description": "Model ID for the agent",
"enum": [
"gemini-2.0-flash",
"bedrock.claude-3.7-sonnet",
"claude-3-opus",
"claude-3.5-sonnet",
"gpt-4o-mini",
"o1",
"o3-mini",
"claude-3-haiku",
"claude-3.7-sonnet",
"claude-3.5-haiku",
"o3",
"azure.o3",
"gpt-4.5-preview",
"azure.gpt-4.5-preview",
"o1-pro",
"o4-mini",
"claude-3-haiku",
"gpt-4o",
"o3",
"gpt-4.1-mini",
"gpt-4.5-preview",
"gemini-2.5-flash",
"claude-3.5-haiku",
"azure.o4-mini",
"gpt-4.1",
"gemini-2.0-flash-lite",
"claude-3.7-sonnet",
"o1-mini",
"azure.gpt-4.1",
"o3-mini",
"azure.o3-mini",
"gpt-4.1-nano",
"gemini-2.5"
"azure.gpt-4.1-nano",
"gpt-4o-mini",
"azure.gpt-4o-mini",
"o1",
"azure.o1",
"gemini-2.5-flash",
"qwen-qwq",
"meta-llama/llama-4-maverick-17b-128e-instruct",
"claude-3-opus",
"gpt-4o",
"azure.gpt-4o",
"gemini-2.0-flash-lite",
"gemini-2.0-flash",
"deepseek-r1-distill-llama-70b",
"llama-3.3-70b-versatile",
"claude-3.5-sonnet",
"o1-mini",
"azure.o1-mini",
"gpt-4.1-mini",
"azure.gpt-4.1-mini",
"gemini-2.5",
"meta-llama/llama-4-scout-17b-16e-instruct",
"openrouter.deepseek-chat-free",
"openrouter.deepseek-r1-free",
"openrouter.gpt-4.1",
"openrouter.gpt-4.1-mini",
"openrouter.gpt-4.1-nano",
"openrouter.gpt-4.5-preview",
"openrouter.gpt-4o",
"openrouter.gpt-4o-mini",
"openrouter.o1",
"openrouter.o1-pro",
"openrouter.o1-mini",
"openrouter.o3",
"openrouter.o3-mini",
"openrouter.o4-mini",
"openrouter.gemini-2.5-flash",
"openrouter.gemini-2.5",
"openrouter.claude-3.5-sonnet",
"openrouter.claude-3-haiku",
"openrouter.claude-3.7-sonnet",
"openrouter.claude-3.5-haiku",
"openrouter.claude-3-opus"
],
"type": "string"
},
"reasoningEffort": {
"description": "Reasoning effort for models that support it (OpenAI, Anthropic)",
"enum": [
"low",
"medium",
"high"
],
"enum": ["low", "medium", "high"],
"type": "string"
}
},
"required": [
"model"
],
"required": ["model"],
"type": "object"
},
"description": "Agent configurations",
@@ -121,6 +183,26 @@
},
"type": "object"
},
"contextPaths": {
"default": [
".github/copilot-instructions.md",
".cursorrules",
".cursor/rules/",
"CLAUDE.md",
"CLAUDE.local.md",
"opencode.md",
"opencode.local.md",
"OpenCode.md",
"OpenCode.local.md",
"OPENCODE.md",
"OPENCODE.local.md"
],
"description": "Context paths for the application",
"items": {
"type": "string"
},
"type": "array"
},
"data": {
"description": "Storage configuration",
"properties": {
@@ -130,9 +212,7 @@
"type": "string"
}
},
"required": [
"directory"
],
"required": ["directory"],
"type": "object"
},
"debug": {
@@ -170,9 +250,7 @@
"type": "object"
}
},
"required": [
"command"
],
"required": ["command"],
"type": "object"
},
"description": "Language Server Protocol configurations",
@@ -210,10 +288,7 @@
"type": {
"default": "stdio",
"description": "Type of MCP server",
"enum": [
"stdio",
"sse"
],
"enum": ["stdio", "sse"],
"type": "string"
},
"url": {
@@ -221,9 +296,7 @@
"type": "string"
}
},
"required": [
"command"
],
"required": ["command"],
"type": "object"
},
"description": "Model Context Protocol server configurations",
@@ -249,7 +322,9 @@
"openai",
"gemini",
"groq",
"bedrock"
"bedrock",
"azure",
"openrouter"
],
"type": "string"
}