mirror of
https://mirror.skon.top/github.com/code-yeongyu/oh-my-opencode
synced 2026-04-20 15:40:12 +08:00
docs: bump claude-opus-4-6 to claude-opus-4-7 across docs, examples, and AGENTS.md
Syncs the README translations, CONTRIBUTING, docs/reference, docs/guide, docs/examples JSONC configs, and the hierarchical src/**/AGENTS.md files with the model version bump already landed in the source and migration commits.
@@ -183,7 +183,7 @@ import type { AgentConfig } from "./types";

 export const myAgent: AgentConfig = {
   name: "my-agent",
-  model: "anthropic/claude-opus-4-6",
+  model: "anthropic/claude-opus-4-7",
   description: "Description of what this agent does",
   prompt: `Your agent's system prompt here`,
   temperature: 0.1,
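For readers skimming the hunk above, a self-contained sketch of the shape being edited may help. The `AgentConfig` interface below is a local stand-in inferred from the documented example fields, not the project's actual `./types` export, and `splitModel` is an illustrative helper for the `provider/model-id` naming convention used throughout these docs:

```typescript
// Local stand-in for the project's AgentConfig type; the field set is
// inferred from the documented example, not from the real "./types" module.
interface AgentConfig {
  name: string;
  model: string; // "provider/model-id" convention, e.g. "anthropic/claude-opus-4-7"
  description: string;
  prompt: string;
  temperature?: number;
}

const myAgent: AgentConfig = {
  name: "my-agent",
  model: "anthropic/claude-opus-4-7",
  description: "Description of what this agent does",
  prompt: `Your agent's system prompt here`,
  temperature: 0.1,
};

// Illustrative helper: split "provider/model-id" at the first slash.
function splitModel(model: string): { provider: string; id: string } {
  const slash = model.indexOf("/");
  return { provider: model.slice(0, slash), id: model.slice(slash + 1) };
}
```

Model IDs themselves may contain hyphens and dots, so only the first `/` separates provider from model.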
@@ -170,11 +170,11 @@ Read this and tell me why it's not just another boilerplate: https://raw.githubu
 <td align="center"><img src=".github/assets/hephaestus.png" height="300" /></td>
 </tr></table>

-**Sisyphus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**) is your main orchestrator. He plans, delegates to specialists, and drives tasks to completion with aggressive parallel execution. He never abandons work midway.
+**Sisyphus** (`claude-opus-4-7` / **`kimi-k2.5`** / **`glm-5`**) is your main orchestrator. He plans, delegates to specialists, and drives tasks to completion with aggressive parallel execution. He never abandons work midway.

 **Hephaestus** (`gpt-5.4`) is your autonomous deep worker. Give him a goal, not a recipe. He explores the codebase, researches patterns, and executes end to end without being walked through every step. *The Legitimate Craftsman.*

-**Prometheus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**) is your strategic planner. He operates in interview mode, asking questions to pin down scope and building a detailed plan before touching any code.
+**Prometheus** (`claude-opus-4-7` / **`kimi-k2.5`** / **`glm-5`**) is your strategic planner. He operates in interview mode, asking questions to pin down scope and building a detailed plan before touching any code.

 Every agent is tuned to the strengths of its model. There is no need to switch models manually. [Learn more →](docs/guide/overview.md)
@@ -164,11 +164,11 @@ Read this and tell me why it's not just another boilerplate: https://raw.githubu
 <td align="center"><img src=".github/assets/hephaestus.png" height="300" /></td>
 </tr></table>

-**Sisyphus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**) is your main orchestrator. He plans, delegates to specialists, and pushes tasks through to completion with aggressive parallel execution. He never gives up halfway.
+**Sisyphus** (`claude-opus-4-7` / **`kimi-k2.5`** / **`glm-5`**) is your main orchestrator. He plans, delegates to specialists, and pushes tasks through to completion with aggressive parallel execution. He never gives up halfway.

 **Hephaestus** (`gpt-5.4`) is your autonomous deep worker. Give him a goal, not a recipe. Without a babysitter, he explores the codebase, studies patterns, and gets everything done end to end on his own. *The Legitimate Craftsman.*

-**Prometheus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**) is your strategic planner. He works in interview mode: before touching a single line of code, he asks questions to pin down scope and builds a detailed plan first.
+**Prometheus** (`claude-opus-4-7` / **`kimi-k2.5`** / **`glm-5`**) is your strategic planner. He works in interview mode: before touching a single line of code, he asks questions to pin down scope and builds a detailed plan first.

 Every agent is tuned to the particular strengths of its model. Don't waste time manually juggling models. [Learn more →](docs/guide/overview.md)
@@ -166,11 +166,11 @@ Even only with following subscriptions, ultrawork will work well (this project i
 <td align="center"><img src=".github/assets/hephaestus.png" height="300" /></td>
 </tr></table>

-**Sisyphus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**) is your main orchestrator. He plans, delegates to specialists, and drives tasks to completion with aggressive parallel execution. He does not stop halfway.
+**Sisyphus** (`claude-opus-4-7` / **`kimi-k2.5`** / **`glm-5`**) is your main orchestrator. He plans, delegates to specialists, and drives tasks to completion with aggressive parallel execution. He does not stop halfway.

 **Hephaestus** (`gpt-5.4`) is your autonomous deep worker. Give him a goal, not a recipe. He explores the codebase, researches patterns, and executes end-to-end without hand-holding. *The Legitimate Craftsman.*

-**Prometheus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**) is your strategic planner. Interview mode: it questions, identifies scope, and builds a detailed plan before a single line of code is touched.
+**Prometheus** (`claude-opus-4-7` / **`kimi-k2.5`** / **`glm-5`**) is your strategic planner. Interview mode: it questions, identifies scope, and builds a detailed plan before a single line of code is touched.

 Every agent is tuned to its model's specific strengths. No manual model-juggling. [Learn more →](docs/guide/overview.md)
@@ -154,11 +154,11 @@ Read this and tell me why it's not just another boilerplate: https://raw.githubu

 <table><tr> <td align="center"><img src=".github/assets/sisyphus.png" height="300" /></td> <td align="center"><img src=".github/assets/hephaestus.png" height="300" /></td> </tr></table>

-**Sisyphus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**) is the main orchestrator. He plans, delegates tasks to specialists, and drives them to completion with aggressive parallel execution. He does not stop halfway.
+**Sisyphus** (`claude-opus-4-7` / **`kimi-k2.5`** / **`glm-5`**) is the main orchestrator. He plans, delegates tasks to specialists, and drives them to completion with aggressive parallel execution. He does not stop halfway.

 **Hephaestus** (`gpt-5.4`) is the autonomous deep worker. Give him a goal, not a recipe. He explores the codebase, studies patterns, and carries tasks through end to end without extra prompting. *The Legitimate Craftsman.*

-**Prometheus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**) is the strategic planner. Interview mode: he asks questions, defines the scope of work, and shapes a detailed plan before a single line of code is written.
+**Prometheus** (`claude-opus-4-7` / **`kimi-k2.5`** / **`glm-5`**) is the strategic planner. Interview mode: he asks questions, defines the scope of work, and shapes a detailed plan before a single line of code is written.

 Every agent is tuned to the strengths of its model. No manual switching between models. Learn more →
@@ -171,11 +171,11 @@ Read this and tell me why it's not just another boilerplate: https://raw.githubu
 <td align="center"><img src=".github/assets/hephaestus.png" height="300" /></td>
 </tr></table>

-**Sisyphus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**) is your chief commander. He draws up the plan, hands tasks out to a team of specialists, and drives the work to completion with an extremely aggressive parallel strategy. He never stops halfway.
+**Sisyphus** (`claude-opus-4-7` / **`kimi-k2.5`** / **`glm-5`**) is your chief commander. He draws up the plan, hands tasks out to a team of specialists, and drives the work to completion with an extremely aggressive parallel strategy. He never stops halfway.

 **Hephaestus** (`gpt-5.4`) is your autonomous deep worker. Just give him the goal, not the how. He explores the codebase and its patterns on his own and executes the task end to end, never asking you to babysit him. *A craftsman worthy of the name.*

-**Prometheus** (`claude-opus-4-6` / **`kimi-k2.5`** / **`glm-5`**) is your strategic planner. Working in interview mode, he asks questions to pin down scope and builds a thorough execution plan before touching a single line of code.
+**Prometheus** (`claude-opus-4-7` / **`kimi-k2.5`** / **`glm-5`**) is your strategic planner. Working in interview mode, he asks questions to pin down scope and builds a thorough execution plan before touching a single line of code.

 Every agent is specifically tuned to the characteristics of its underlying model. No need to switch models back and forth by hand. [Read the lore to learn more →](docs/guide/overview.md)
@@ -8,7 +8,7 @@
 // Primary orchestrator: aggressive parallel delegation
 "sisyphus": {
   "model": "kimi-for-coding/k2p5",
-  "ultrawork": { "model": "anthropic/claude-opus-4-6", "variant": "max" },
+  "ultrawork": { "model": "anthropic/claude-opus-4-7", "variant": "max" },
   "prompt_append": "Delegate heavily to hephaestus for implementation. Parallelize exploration.",
 },
@@ -7,8 +7,8 @@
 "agents": {
   // Main orchestrator: handles delegation and drives tasks to completion
   "sisyphus": {
-    "model": "anthropic/claude-opus-4-6",
-    "ultrawork": { "model": "anthropic/claude-opus-4-6", "variant": "max" },
+    "model": "anthropic/claude-opus-4-7",
+    "ultrawork": { "model": "anthropic/claude-opus-4-7", "variant": "max" },
   },

   // Deep autonomous worker: end-to-end implementation
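The split between a base `"model"` and an `"ultrawork"` override in the config above can be read as: the agent runs on its base model normally and switches to the ultrawork model (with its variant) when ultrawork mode is triggered. A minimal sketch under that assumption — `resolveModel` and the field names of `ModelRef` are hypothetical, not the project's real API:

```typescript
interface ModelRef {
  model: string;
  variant?: string;
}

interface AgentModelConfig {
  model: string;        // base model used for normal requests
  ultrawork?: ModelRef; // optional override used when ultrawork mode is active
}

// Hypothetical resolver: prefer the ultrawork override when that mode is on.
function resolveModel(cfg: AgentModelConfig, ultraworkActive: boolean): ModelRef {
  if (ultraworkActive && cfg.ultrawork) return cfg.ultrawork;
  return { model: cfg.model };
}

const sisyphus: AgentModelConfig = {
  model: "anthropic/claude-opus-4-7",
  ultrawork: { model: "anthropic/claude-opus-4-7", variant: "max" },
};
```

Under this reading, bumping both fields in the diff keeps normal and ultrawork runs on the same model family while only ultrawork requests the `max` variant.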
@@ -50,7 +50,7 @@
 "categories": {
   "quick": { "model": "opencode/gpt-5-nano" },
   "unspecified-low": { "model": "anthropic/claude-sonnet-4-6" },
-  "unspecified-high": { "model": "anthropic/claude-opus-4-6", "variant": "max" },
+  "unspecified-high": { "model": "anthropic/claude-opus-4-7", "variant": "max" },
   "writing": { "model": "google/gemini-3-flash" },
   "visual-engineering": { "model": "google/gemini-3.1-pro", "variant": "high" },
   "deep": { "model": "openai/gpt-5.4" },
@@ -7,8 +7,8 @@
 "agents": {
   // Orchestrator: delegates to planning agents first
   "sisyphus": {
-    "model": "anthropic/claude-opus-4-6",
-    "ultrawork": { "model": "anthropic/claude-opus-4-6", "variant": "max" },
+    "model": "anthropic/claude-opus-4-7",
+    "ultrawork": { "model": "anthropic/claude-opus-4-7", "variant": "max" },
     "prompt_append": "Always consult prometheus and atlas for planning. Never rush to implementation.",
   },
@@ -20,7 +20,7 @@

 // Primary planner: deep interview mode
 "prometheus": {
-  "model": "anthropic/claude-opus-4-6",
+  "model": "anthropic/claude-opus-4-7",
   "thinking": { "type": "enabled", "budgetTokens": 160000 },
   "prompt_append": "Interview extensively. Question assumptions. Build exhaustive plans with milestones, risks, and contingencies. Use deep & quick agents heavily in parallel for research.",
 },
@@ -43,7 +43,7 @@

 // Plan review and refinement: heavily utilized
 "metis": {
-  "model": "anthropic/claude-opus-4-6",
+  "model": "anthropic/claude-opus-4-7",
   "prompt_append": "Critically evaluate plans. Identify gaps, risks, and improvements. Be thorough.",
 },
@@ -98,7 +98,7 @@
     "openai": 3,
   },
   "modelConcurrency": {
-    "anthropic/claude-opus-4-6": 2,
+    "anthropic/claude-opus-4-7": 2,
     "openai/gpt-5.4": 2,
   },
 },
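The `modelConcurrency` block above caps in-flight requests per model ID. A minimal sketch of such a cap — a counting semaphore keyed by model, with unlisted models uncapped; the class and method names here are illustrative, not the project's implementation:

```typescript
// Illustrative per-model concurrency cap, mirroring a modelConcurrency map.
class ModelLimiter {
  private inFlight = new Map<string, number>();

  constructor(private limits: Record<string, number>) {}

  // Try to reserve a slot for the model; false when the configured cap is hit.
  acquire(model: string): boolean {
    const limit = this.limits[model] ?? Infinity; // unlisted models: no cap
    const used = this.inFlight.get(model) ?? 0;
    if (used >= limit) return false;
    this.inFlight.set(model, used + 1);
    return true;
  }

  // Return a slot once the request completes.
  release(model: string): void {
    const used = this.inFlight.get(model) ?? 0;
    this.inFlight.set(model, Math.max(0, used - 1));
  }
}

const limiter = new ModelLimiter({
  "anthropic/claude-opus-4-7": 2,
  "openai/gpt-5.4": 2,
});
```

With the values from the diff, a third concurrent Opus request would be rejected (or queued, in a real scheduler) until one of the first two releases its slot.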
@@ -64,8 +64,8 @@ These agents have Claude-optimized prompts — long, detailed, mechanics-driven.

 | Agent | Role | Fallback Chain | Notes |
 | ------------ | ----------------- | -------------------------------------- | ------------------------------------------------------------------------------------------------- |
-| **Sisyphus** | Main orchestrator | anthropic\|github-copilot\|opencode\|vercel/claude-opus-4-6 (max) → opencode-go\|vercel/kimi-k2.5 → kimi-for-coding/k2p5 → opencode\|moonshotai\|moonshotai-cn\|firmware\|ollama-cloud\|aihubmix\|vercel/kimi-k2.5 → openai\|github-copilot\|opencode\|vercel/gpt-5.4 (medium) → zai-coding-plan\|opencode\|vercel/glm-5 → opencode/big-pickle | Exact runtime chain from `src/shared/model-requirements.ts`. |
-| **Metis** | Plan gap analyzer | anthropic\|github-copilot\|opencode\|vercel/claude-opus-4-6 (max) → openai\|github-copilot\|opencode\|vercel/gpt-5.4 (high) → opencode-go\|vercel/glm-5 → kimi-for-coding/k2p5 | Exact runtime chain from `src/shared/model-requirements.ts`. |
+| **Sisyphus** | Main orchestrator | anthropic\|github-copilot\|opencode\|vercel/claude-opus-4-7 (max) → opencode-go\|vercel/kimi-k2.5 → kimi-for-coding/k2p5 → opencode\|moonshotai\|moonshotai-cn\|firmware\|ollama-cloud\|aihubmix\|vercel/kimi-k2.5 → openai\|github-copilot\|opencode\|vercel/gpt-5.4 (medium) → zai-coding-plan\|opencode\|vercel/glm-5 → opencode/big-pickle | Exact runtime chain from `src/shared/model-requirements.ts`. |
+| **Metis** | Plan gap analyzer | anthropic\|github-copilot\|opencode\|vercel/claude-opus-4-7 (max) → openai\|github-copilot\|opencode\|vercel/gpt-5.4 (high) → opencode-go\|vercel/glm-5 → kimi-for-coding/k2p5 | Exact runtime chain from `src/shared/model-requirements.ts`. |
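In the chains above, an entry like `anthropic|github-copilot|opencode|vercel/claude-opus-4-7` lists interchangeable providers for one model, and `→` separates successive fallbacks. A hedged sketch of how such a chain might be walked — first entry whose provider is available wins; `resolveChain` is an illustrative helper, not the project's resolver:

```typescript
// One chain entry: "anthropic|github-copilot|opencode/claude-opus-4-7"
// means: any of those providers, serving claude-opus-4-7.
function resolveChain(chain: string[], available: Set<string>): string | undefined {
  for (const entry of chain) {
    const slash = entry.indexOf("/"); // providers never contain "/"
    const providers = entry.slice(0, slash).split("|");
    const model = entry.slice(slash + 1);
    const provider = providers.find((p) => available.has(p));
    if (provider) return `${provider}/${model}`;
  }
  return undefined; // no provider in the chain is available
}

// Abbreviated version of the Sisyphus chain from the table above.
const sisyphusChain = [
  "anthropic|github-copilot|opencode|vercel/claude-opus-4-7",
  "opencode-go|vercel/kimi-k2.5",
  "kimi-for-coding/k2p5",
];
```

So a user with only `opencode` configured still lands on `claude-opus-4-7`, while someone with only `kimi-for-coding` falls through to `k2p5`.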
### Dual-Prompt Agents → Claude preferred, GPT supported

@@ -73,7 +73,7 @@ These agents ship separate prompts for Claude and GPT families. They auto-detect

 | Agent | Role | Fallback Chain | Notes |
 | -------------- | ----------------- | -------------------------------------- | -------------------------------------------------------------------- |
-| **Prometheus** | Strategic planner | anthropic\|github-copilot\|opencode\|vercel/claude-opus-4-6 (max) → openai\|github-copilot\|opencode\|vercel/gpt-5.4 (high) → opencode-go\|vercel/glm-5 → google\|github-copilot\|opencode\|vercel/gemini-3.1-pro | Exact runtime chain from `src/shared/model-requirements.ts`. |
+| **Prometheus** | Strategic planner | anthropic\|github-copilot\|opencode\|vercel/claude-opus-4-7 (max) → openai\|github-copilot\|opencode\|vercel/gpt-5.4 (high) → opencode-go\|vercel/glm-5 → google\|github-copilot\|opencode\|vercel/gemini-3.1-pro | Exact runtime chain from `src/shared/model-requirements.ts`. |
 | **Atlas** | Todo orchestrator | anthropic\|github-copilot\|opencode\|vercel/claude-sonnet-4-6 → opencode-go\|vercel/kimi-k2.5 → openai\|github-copilot\|opencode\|vercel/gpt-5.4 (medium) → opencode-go\|vercel/minimax-m2.7 | Exact runtime chain from `src/shared/model-requirements.ts`. |
### Deep Specialists → GPT

@@ -83,8 +83,8 @@ These agents are built for GPT's principle-driven style. Their prompts assume au

 | Agent | Role | Fallback Chain | Notes |
 | -------------- | ----------------------- | -------------------------------------- | ------------------------------------------------ |
 | **Hephaestus** | Autonomous deep worker | openai\|github-copilot\|venice\|opencode\|vercel/gpt-5.4 (medium) | Single-entry chain. Requires one of those providers. The craftsman. |
-| **Oracle** | Architecture consultant | openai\|github-copilot\|opencode\|vercel/gpt-5.4 (high) → google\|github-copilot\|opencode\|vercel/gemini-3.1-pro (high) → anthropic\|github-copilot\|opencode\|vercel/claude-opus-4-6 (max) → opencode-go\|vercel/glm-5 | Exact runtime chain from `src/shared/model-requirements.ts`. |
-| **Momus** | Ruthless reviewer | openai\|github-copilot\|opencode\|vercel/gpt-5.4 (xhigh) → anthropic\|github-copilot\|opencode\|vercel/claude-opus-4-6 (max) → google\|github-copilot\|opencode\|vercel/gemini-3.1-pro (high) → opencode-go\|vercel/glm-5 | Exact runtime chain from `src/shared/model-requirements.ts`. |
+| **Oracle** | Architecture consultant | openai\|github-copilot\|opencode\|vercel/gpt-5.4 (high) → google\|github-copilot\|opencode\|vercel/gemini-3.1-pro (high) → anthropic\|github-copilot\|opencode\|vercel/claude-opus-4-7 (max) → opencode-go\|vercel/glm-5 | Exact runtime chain from `src/shared/model-requirements.ts`. |
+| **Momus** | Ruthless reviewer | openai\|github-copilot\|opencode\|vercel/gpt-5.4 (xhigh) → anthropic\|github-copilot\|opencode\|vercel/claude-opus-4-7 (max) → google\|github-copilot\|opencode\|vercel/gemini-3.1-pro (high) → opencode-go\|vercel/glm-5 | Exact runtime chain from `src/shared/model-requirements.ts`. |

### Utility Runners → Speed over Intelligence
@@ -169,12 +169,12 @@ When agents delegate work, they don't pick a model name — they pick a **catego

 | Category | When Used | Fallback Chain |
 | -------------------- | -------------------------- | -------------------------------------------- |
-| `visual-engineering` | Frontend, UI, CSS, design | google\|github-copilot\|opencode\|vercel/gemini-3.1-pro (high) → zai-coding-plan\|opencode\|vercel/glm-5 → anthropic\|github-copilot\|opencode\|vercel/claude-opus-4-6 (max) → opencode-go\|vercel/glm-5 → kimi-for-coding/k2p5 |
-| `ultrabrain` | Maximum reasoning needed | openai\|opencode\|vercel/gpt-5.4 (xhigh) → google\|github-copilot\|opencode\|vercel/gemini-3.1-pro (high) → anthropic\|github-copilot\|opencode\|vercel/claude-opus-4-6 (max) → opencode-go\|vercel/glm-5 |
-| `deep` | Deep coding, complex logic | openai\|github-copilot\|venice\|opencode\|vercel/gpt-5.4 (medium) → anthropic\|github-copilot\|opencode\|vercel/claude-opus-4-6 (max) → google\|github-copilot\|opencode\|vercel/gemini-3.1-pro (high) |
-| `artistry` | Creative, novel approaches | google\|github-copilot\|opencode\|vercel/gemini-3.1-pro (high) → anthropic\|github-copilot\|opencode\|vercel/claude-opus-4-6 (max) → openai\|github-copilot\|opencode\|vercel/gpt-5.4 |
+| `visual-engineering` | Frontend, UI, CSS, design | google\|github-copilot\|opencode\|vercel/gemini-3.1-pro (high) → zai-coding-plan\|opencode\|vercel/glm-5 → anthropic\|github-copilot\|opencode\|vercel/claude-opus-4-7 (max) → opencode-go\|vercel/glm-5 → kimi-for-coding/k2p5 |
+| `ultrabrain` | Maximum reasoning needed | openai\|opencode\|vercel/gpt-5.4 (xhigh) → google\|github-copilot\|opencode\|vercel/gemini-3.1-pro (high) → anthropic\|github-copilot\|opencode\|vercel/claude-opus-4-7 (max) → opencode-go\|vercel/glm-5 |
+| `deep` | Deep coding, complex logic | openai\|github-copilot\|venice\|opencode\|vercel/gpt-5.4 (medium) → anthropic\|github-copilot\|opencode\|vercel/claude-opus-4-7 (max) → google\|github-copilot\|opencode\|vercel/gemini-3.1-pro (high) |
+| `artistry` | Creative, novel approaches | google\|github-copilot\|opencode\|vercel/gemini-3.1-pro (high) → anthropic\|github-copilot\|opencode\|vercel/claude-opus-4-7 (max) → openai\|github-copilot\|opencode\|vercel/gpt-5.4 |
 | `quick` | Simple, fast tasks | openai\|github-copilot\|opencode\|vercel/gpt-5.4-mini → anthropic\|github-copilot\|opencode\|vercel/claude-haiku-4-5 → google\|github-copilot\|opencode\|vercel/gemini-3-flash → opencode-go\|vercel/minimax-m2.7 → opencode\|vercel/gpt-5-nano |
-| `unspecified-high` | General complex work | anthropic\|github-copilot\|opencode\|vercel/claude-opus-4-6 (max) → openai\|github-copilot\|opencode\|vercel/gpt-5.4 (high) → zai-coding-plan\|opencode\|vercel/glm-5 → kimi-for-coding/k2p5 → opencode-go\|vercel/glm-5 → opencode\|vercel/kimi-k2.5 → opencode\|moonshotai\|moonshotai-cn\|firmware\|ollama-cloud\|aihubmix\|vercel/kimi-k2.5 |
+| `unspecified-high` | General complex work | anthropic\|github-copilot\|opencode\|vercel/claude-opus-4-7 (max) → openai\|github-copilot\|opencode\|vercel/gpt-5.4 (high) → zai-coding-plan\|opencode\|vercel/glm-5 → kimi-for-coding/k2p5 → opencode-go\|vercel/glm-5 → opencode\|vercel/kimi-k2.5 → opencode\|moonshotai\|moonshotai-cn\|firmware\|ollama-cloud\|aihubmix\|vercel/kimi-k2.5 |
 | `unspecified-low` | General standard work | anthropic\|github-copilot\|opencode\|vercel/claude-sonnet-4-6 → openai\|opencode\|vercel/gpt-5.3-codex (medium) → opencode-go\|vercel/kimi-k2.5 → google\|github-copilot\|opencode\|vercel/gemini-3-flash → opencode-go\|vercel/minimax-m2.7 |
 | `writing` | Text, docs, prose | google\|github-copilot\|opencode\|vercel/gemini-3-flash → opencode-go\|vercel/kimi-k2.5 → anthropic\|github-copilot\|opencode\|vercel/claude-sonnet-4-6 → opencode-go\|vercel/minimax-m2.7 |
@@ -198,7 +198,7 @@ See the [Orchestration System Guide](./orchestration.md) for how agents dispatch
 // Main orchestrator: Claude Opus or Kimi K2.5 work best
 "sisyphus": {
   "model": "kimi-for-coding/k2p5",
-  "ultrawork": { "model": "anthropic/claude-opus-4-6", "variant": "max" },
+  "ultrawork": { "model": "anthropic/claude-opus-4-7", "variant": "max" },
 },

 // Research agents: cheaper models are fine
@@ -217,7 +217,7 @@ See the [Orchestration System Guide](./orchestration.md) for how agents dispatch
 "categories": {
   "quick": { "model": "opencode/gpt-5-nano" },
   "unspecified-low": { "model": "anthropic/claude-sonnet-4-6" },
-  "unspecified-high": { "model": "anthropic/claude-opus-4-6", "variant": "max" },
+  "unspecified-high": { "model": "anthropic/claude-opus-4-7", "variant": "max" },
   "visual-engineering": {
     "model": "google/gemini-3.1-pro",
     "variant": "high",
@@ -234,7 +234,7 @@ See the [Orchestration System Guide](./orchestration.md) for how agents dispatch
     "zai-coding-plan": 10,
   },
   "modelConcurrency": {
-    "anthropic/claude-opus-4-6": 2,
+    "anthropic/claude-opus-4-7": 2,
     "opencode/gpt-5-nano": 20,
   },
 },
@@ -225,7 +225,7 @@ When GitHub Copilot is the best available provider, install-time defaults are ag

 | Agent | Model |
 | ------------- | ---------------------------------- |
-| **Sisyphus** | `github-copilot/claude-opus-4.6` |
+| **Sisyphus** | `github-copilot/claude-opus-4.7` |
 | **Oracle** | `github-copilot/gpt-5.4` |
 | **Explore** | `github-copilot/grok-code-fast-1` |
 | **Atlas** | `github-copilot/claude-sonnet-4.6` |
@@ -247,13 +247,13 @@ If Z.ai is your main provider, the most important fallbacks are:

 #### OpenCode Zen

-OpenCode Zen provides access to `opencode/` prefixed models including `opencode/claude-opus-4-6`, `opencode/gpt-5.4`, `opencode/gpt-5.3-codex`, `opencode/gpt-5-nano`, `opencode/glm-5`, `opencode/big-pickle`, `opencode/minimax-m2.7`, and `opencode/minimax-m2.7-highspeed`.
+OpenCode Zen provides access to `opencode/` prefixed models including `opencode/claude-opus-4-7`, `opencode/gpt-5.4`, `opencode/gpt-5.3-codex`, `opencode/gpt-5-nano`, `opencode/glm-5`, `opencode/big-pickle`, `opencode/minimax-m2.7`, and `opencode/minimax-m2.7-highspeed`.

 When OpenCode Zen is the best available provider, these are the most relevant source-backed examples:

 | Agent | Model |
 | ------------- | ---------------------------------------------------- |
-| **Sisyphus** | `opencode/claude-opus-4-6` |
+| **Sisyphus** | `opencode/claude-opus-4-7` |
 | **Oracle** | `opencode/gpt-5.4` |
 | **Explore** | `opencode/minimax-m2.7` |
@@ -330,8 +330,8 @@ Based on your subscriptions, here's how the agents were configured:

 | Agent | Role | Default Chain | What It Does |
 | ------------ | ---------------- | ----------------------------------------------- | ---------------------------------------------------------------------------------------- |
-| **Sisyphus** | Main ultraworker | anthropic\|github-copilot\|opencode/claude-opus-4-6 (max) → opencode-go/kimi-k2.5 → kimi-for-coding/k2p5 → opencode\|moonshotai\|moonshotai-cn\|firmware\|ollama-cloud\|aihubmix/kimi-k2.5 → openai\|github-copilot\|opencode/gpt-5.4 (medium) → zai-coding-plan\|opencode/glm-5 → opencode/big-pickle | Primary coding agent. Exact runtime chain from `src/shared/model-requirements.ts`. |
-| **Metis** | Plan review | anthropic\|github-copilot\|opencode/claude-opus-4-6 (max) → openai\|github-copilot\|opencode/gpt-5.4 (high) → opencode-go/glm-5 → kimi-for-coding/k2p5 | Reviews Prometheus plans for gaps. Exact runtime chain from `src/shared/model-requirements.ts`. |
+| **Sisyphus** | Main ultraworker | anthropic\|github-copilot\|opencode/claude-opus-4-7 (max) → opencode-go/kimi-k2.5 → kimi-for-coding/k2p5 → opencode\|moonshotai\|moonshotai-cn\|firmware\|ollama-cloud\|aihubmix/kimi-k2.5 → openai\|github-copilot\|opencode/gpt-5.4 (medium) → zai-coding-plan\|opencode/glm-5 → opencode/big-pickle | Primary coding agent. Exact runtime chain from `src/shared/model-requirements.ts`. |
+| **Metis** | Plan review | anthropic\|github-copilot\|opencode/claude-opus-4-7 (max) → openai\|github-copilot\|opencode/gpt-5.4 (high) → opencode-go/glm-5 → kimi-for-coding/k2p5 | Reviews Prometheus plans for gaps. Exact runtime chain from `src/shared/model-requirements.ts`. |

 **Dual-Prompt Agents** (auto-switch between Claude and GPT prompts):
@@ -341,7 +341,7 @@ Priority: **Claude > GPT > Claude-like models**

 | Agent | Role | Default Chain | GPT Prompt? |
 | -------------- | ----------------- | ---------------------------------------------------------- | ---------------------------------------------------------------- |
-| **Prometheus** | Strategic planner | anthropic\|github-copilot\|opencode/claude-opus-4-6 (max) → openai\|github-copilot\|opencode/gpt-5.4 (high) → opencode-go/glm-5 → google\|github-copilot\|opencode/gemini-3.1-pro | Yes — XML-tagged, principle-driven (~300 lines vs ~1,100 Claude) |
+| **Prometheus** | Strategic planner | anthropic\|github-copilot\|opencode/claude-opus-4-7 (max) → openai\|github-copilot\|opencode/gpt-5.4 (high) → opencode-go/glm-5 → google\|github-copilot\|opencode/gemini-3.1-pro | Yes — XML-tagged, principle-driven (~300 lines vs ~1,100 Claude) |
 | **Atlas** | Todo orchestrator | anthropic\|github-copilot\|opencode/claude-sonnet-4-6 → opencode-go/kimi-k2.5 → openai\|github-copilot\|opencode/gpt-5.4 (medium) → opencode-go/minimax-m2.7 | Yes - GPT-optimized todo management |

 **GPT-Native Agents** (built for GPT, don't override to Claude):
@@ -349,8 +349,8 @@ Priority: **Claude > GPT > Claude-like models**
 | Agent | Role | Default Chain | Notes |
 | -------------- | ---------------------- | -------------------------------------- | ------------------------------------------------------ |
 | **Hephaestus** | Deep autonomous worker | GPT-5.4 (medium) only | "Codex on steroids." No fallback. Requires GPT access. |
-| **Oracle** | Architecture/debugging | openai\|github-copilot\|opencode/gpt-5.4 (high) → google\|github-copilot\|opencode/gemini-3.1-pro (high) → anthropic\|github-copilot\|opencode/claude-opus-4-6 (max) → opencode-go/glm-5 | High-IQ strategic backup. GPT preferred. |
-| **Momus** | High-accuracy reviewer | openai\|github-copilot\|opencode/gpt-5.4 (xhigh) → anthropic\|github-copilot\|opencode/claude-opus-4-6 (max) → google\|github-copilot\|opencode/gemini-3.1-pro (high) → opencode-go/glm-5 | Verification agent. GPT preferred. |
+| **Oracle** | Architecture/debugging | openai\|github-copilot\|opencode/gpt-5.4 (high) → google\|github-copilot\|opencode/gemini-3.1-pro (high) → anthropic\|github-copilot\|opencode/claude-opus-4-7 (max) → opencode-go/glm-5 | High-IQ strategic backup. GPT preferred. |
+| **Momus** | High-accuracy reviewer | openai\|github-copilot\|opencode/gpt-5.4 (xhigh) → anthropic\|github-copilot\|opencode/claude-opus-4-7 (max) → google\|github-copilot\|opencode/gemini-3.1-pro (high) → opencode-go/glm-5 | Verification agent. GPT preferred. |

 **Utility Agents** (speed over intelligence):
@@ -35,9 +35,9 @@ The orchestration system uses a three-layer architecture that solves context ove
 flowchart TB
     subgraph Planning["Planning Layer (Human + Prometheus)"]
         User[(" User")]
-        Prometheus[" Prometheus<br/>(Planner)<br/>claude-opus-4-6 / gpt-5.4 / glm-5"]
-        Metis[" Metis<br/>(Consultant)<br/>claude-opus-4-6 / gpt-5.4 / glm-5"]
-        Momus[" Momus<br/>(Reviewer)<br/>gpt-5.4 / claude-opus-4-6 / gemini-3.1-pro / glm-5"]
+        Prometheus[" Prometheus<br/>(Planner)<br/>claude-opus-4-7 / gpt-5.4 / glm-5"]
+        Metis[" Metis<br/>(Consultant)<br/>claude-opus-4-7 / gpt-5.4 / glm-5"]
+        Momus[" Momus<br/>(Reviewer)<br/>gpt-5.4 / claude-opus-4-7 / gemini-3.1-pro / glm-5"]
     end

     subgraph Execution["Execution Layer (Orchestrator)"]
@@ -46,10 +46,10 @@ flowchart TB

     subgraph Workers["Worker Layer (Specialized Agents)"]
         Junior[" Sisyphus-Junior<br/>(Task Executor)<br/>claude-sonnet-4-6 / kimi-k2.5 / gpt-5.4 / minimax-m2.7"]
-        Oracle[" Oracle<br/>(Architecture)<br/>gpt-5.4 / gemini-3.1-pro / claude-opus-4-6 / glm-5"]
+        Oracle[" Oracle<br/>(Architecture)<br/>gpt-5.4 / gemini-3.1-pro / claude-opus-4-7 / glm-5"]
         Explore[" Explore<br/>(Codebase Grep)<br/>grok-code-fast-1 / minimax-m2.7-highspeed / claude-haiku-4-5"]
         Librarian[" Librarian<br/>(Docs/OSS)<br/>minimax-m2.7 / minimax-m2.7-highspeed / claude-haiku-4-5"]
-        Frontend[" visual-engineering<br/>(category + frontend-ui-ux)<br/>gemini-3.1-pro / glm-5 / claude-opus-4-6"]
+        Frontend[" visual-engineering<br/>(category + frontend-ui-ux)<br/>gemini-3.1-pro / glm-5 / claude-opus-4-7"]
     end

     User -->|"Describe work"| Prometheus
@@ -282,7 +282,7 @@ This "boulder pushing" mechanism is why the system is named after Sisyphus.
 ```typescript
 // OLD: Model name creates distributional bias
 task({ agent: "gpt-5.4", prompt: "..." }); // Model knows its limitations
-task({ agent: "claude-opus-4-6", prompt: "..." }); // Different self-perception
+task({ agent: "claude-opus-4-7", prompt: "..." }); // Different self-perception
 ```

 **The Solution: Semantic Categories:**
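The category indirection described here can be sketched as a lookup from a semantic category to a configured model, so call sites never name a model at all. In this sketch `dispatch` is a hypothetical stand-in for the real `task` tool, and the category map is abbreviated to three entries from the docs:

```typescript
// Abbreviated category → default-model map (illustrative subset).
const categories: Record<string, string> = {
  quick: "opencode/gpt-5-nano",
  deep: "openai/gpt-5.4",
  "unspecified-high": "anthropic/claude-opus-4-7",
};

// Hypothetical stand-in for the task tool: routes by category, not model name,
// and returns the routing decision instead of starting a subagent session.
function dispatch(req: { category: string; prompt: string }): string {
  const model = categories[req.category];
  if (!model) throw new Error(`unknown category: ${req.category}`);
  return model;
}
```

The point of the indirection is that the prompt never mentions a model name, so the delegated model cannot condition on its own identity; swapping the map entry (as this commit does for `unspecified-high`) changes routing without touching any call site.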
@@ -298,13 +298,13 @@ task({ category: "quick", prompt: "..." }); // "Just get it done fast"

 | Category | Default config | Runtime fallback order | When to Use |
 | -------------------- | ------------------------------- | -------------------------------------------------------------------------------------- | ----------------------------------------------------------- |
-| `visual-engineering` | `google/gemini-3.1-pro high` | `gemini-3.1-pro` → `glm-5` → `claude-opus-4-6` → `glm-5` → `k2p5` | Frontend, UI/UX, design, styling, animation |
-| `ultrabrain` | `openai/gpt-5.4 xhigh` | `gpt-5.4` → `gemini-3.1-pro` → `claude-opus-4-6` → `glm-5` | Deep logical reasoning, complex architecture decisions |
-| `deep` | `openai/gpt-5.4 medium` | `gpt-5.4` → `claude-opus-4-6` → `gemini-3.1-pro` | Goal-oriented autonomous problem-solving, thorough research |
-| `artistry` | `google/gemini-3.1-pro high` | `gemini-3.1-pro` → `claude-opus-4-6` → `gpt-5.4` | Highly creative or artistic tasks, novel ideas |
+| `visual-engineering` | `google/gemini-3.1-pro high` | `gemini-3.1-pro` → `glm-5` → `claude-opus-4-7` → `glm-5` → `k2p5` | Frontend, UI/UX, design, styling, animation |
+| `ultrabrain` | `openai/gpt-5.4 xhigh` | `gpt-5.4` → `gemini-3.1-pro` → `claude-opus-4-7` → `glm-5` | Deep logical reasoning, complex architecture decisions |
+| `deep` | `openai/gpt-5.4 medium` | `gpt-5.4` → `claude-opus-4-7` → `gemini-3.1-pro` | Goal-oriented autonomous problem-solving, thorough research |
+| `artistry` | `google/gemini-3.1-pro high` | `gemini-3.1-pro` → `claude-opus-4-7` → `gpt-5.4` | Highly creative or artistic tasks, novel ideas |
 | `quick` | `openai/gpt-5.4-mini` | `gpt-5.4-mini` → `claude-haiku-4-5` → `gemini-3-flash` → `minimax-m2.7` → `gpt-5-nano` | Trivial tasks, single file changes, typo fixes |
 | `unspecified-low` | `anthropic/claude-sonnet-4-6` | `claude-sonnet-4-6` → `gpt-5.3-codex` → `kimi-k2.5` → `gemini-3-flash` → `minimax-m2.7` | Tasks that don't fit other categories, low effort |
-| `unspecified-high` | `anthropic/claude-opus-4-6 max` | `claude-opus-4-6` → `gpt-5.4` → `glm-5` → `k2p5` → `kimi-k2.5` | Tasks that don't fit other categories, high effort |
+| `unspecified-high` | `anthropic/claude-opus-4-7 max` | `claude-opus-4-7` → `gpt-5.4` → `glm-5` → `k2p5` → `kimi-k2.5` | Tasks that don't fit other categories, high effort |
 | `writing` | `kimi-for-coding/k2p5` | `gemini-3-flash` → `kimi-k2.5` → `claude-sonnet-4-6` → `minimax-m2.7` | Documentation, prose, technical writing |

### Skills: Domain-Specific Instructions
@@ -423,7 +423,7 @@ Atlas is automatically activated when you run `/start-work`. You don't need to m

 | Aspect | Hephaestus | Sisyphus + `ulw` / `ultrawork` |
 | --------------- | ------------------------------------------ | ---------------------------------------------------- |
-| **Model** | `gpt-5.4` (`medium`) | `claude-opus-4-6` / `kimi-k2.5` / `gpt-5.4` / `glm-5` depending on setup |
+| **Model** | `gpt-5.4` (`medium`) | `claude-opus-4-7` / `kimi-k2.5` / `gpt-5.4` / `glm-5` depending on setup |
 | **Approach** | Autonomous deep worker | Keyword-activated ultrawork mode |
 | **Best For** | Complex architectural work, deep reasoning | General complex tasks, "just do it" scenarios |
 | **Planning** | Self-plans during execution | Uses Prometheus plans if available |
@@ -173,7 +173,7 @@ You can override specific agents or categories in your config:
// Main orchestrator: Claude Opus or Kimi K2.5 work best
"sisyphus": {
"model": "kimi-for-coding/k2p5",
-"ultrawork": { "model": "anthropic/claude-opus-4-6", "variant": "max" },
+"ultrawork": { "model": "anthropic/claude-opus-4-7", "variant": "max" },
},

// Research agents: cheaper models are fine
@@ -207,10 +207,10 @@ You can override specific agents or categories in your config:
"unspecified-low": { "model": "openai/gpt-5.4-mini" },

// High-effort fallback: best available
-"unspecified-high": { "model": "anthropic/claude-opus-4-6", "variant": "max" },
+"unspecified-high": { "model": "anthropic/claude-opus-4-7", "variant": "max" },

// Prose and documentation
-"writing": { "model": "anthropic/claude-opus-4-6", "variant": "high" },
+"writing": { "model": "anthropic/claude-opus-4-7", "variant": "high" },
},
}
```

@@ -78,7 +78,7 @@ Here's a practical starting configuration:
// Main orchestrator: Claude Opus or Kimi K2.5 work best
"sisyphus": {
"model": "kimi-for-coding/k2p5",
-"ultrawork": { "model": "anthropic/claude-opus-4-6", "variant": "max" },
+"ultrawork": { "model": "anthropic/claude-opus-4-7", "variant": "max" },
},

// Research agents: cheap fast models are fine
@@ -102,7 +102,7 @@ Here's a practical starting configuration:
"unspecified-low": { "model": "anthropic/claude-sonnet-4-6" },

// unspecified-high - complex work
-"unspecified-high": { "model": "anthropic/claude-opus-4-6", "variant": "max" },
+"unspecified-high": { "model": "anthropic/claude-opus-4-7", "variant": "max" },

// writing - docs/prose
"writing": { "model": "google/gemini-3-flash" },
@@ -130,7 +130,7 @@ Here's a practical starting configuration:
"zai-coding-plan": 10,
},
"modelConcurrency": {
-"anthropic/claude-opus-4-6": 2,
+"anthropic/claude-opus-4-7": 2,
"opencode/gpt-5-nano": 20,
},
},
@@ -229,7 +229,7 @@ Control what tools an agent can use:
{
"agents": {
"sisyphus": {
-"model": "anthropic/claude-opus-4-6",
+"model": "anthropic/claude-opus-4-7",
"fallback_models": [
// Simple string fallback
"openai/gpt-5.4",
@@ -293,7 +293,7 @@ Domain-specific model delegation used by the `task()` tool. When Sisyphus delega
| `artistry` | `google/gemini-3.1-pro` (high) | Creative/unconventional approaches |
| `quick` | `openai/gpt-5.4-mini` | Trivial tasks, typo fixes, single-file changes |
| `unspecified-low` | `anthropic/claude-sonnet-4-6` | General tasks, low effort |
-| `unspecified-high` | `anthropic/claude-opus-4-6` (max) | General tasks, high effort |
+| `unspecified-high` | `anthropic/claude-opus-4-7` (max) | General tasks, high effort |
| `writing` | `google/gemini-3-flash` | Documentation, prose, technical writing |

> **Note**: Built-in defaults only apply if the category is present in your config. Otherwise the system default model is used.
@@ -355,28 +355,28 @@ Capability data comes from provider runtime metadata first. OmO also ships bundl

| Agent | Default Model | Provider Priority |
| --------------------- | ------------------- | ---------------------------------------------------------------------------- |
-| **Sisyphus** | `claude-opus-4-6` | `anthropic\|github-copilot\|opencode/claude-opus-4-6 (max)` → `opencode-go/kimi-k2.5` → `kimi-for-coding/k2p5` → `opencode\|moonshotai\|moonshotai-cn\|firmware\|ollama-cloud\|aihubmix/kimi-k2.5` → `openai\|github-copilot\|opencode/gpt-5.4 (medium)` → `zai-coding-plan\|opencode/glm-5` → `opencode/big-pickle` |
+| **Sisyphus** | `claude-opus-4-7` | `anthropic\|github-copilot\|opencode/claude-opus-4-7 (max)` → `opencode-go/kimi-k2.5` → `kimi-for-coding/k2p5` → `opencode\|moonshotai\|moonshotai-cn\|firmware\|ollama-cloud\|aihubmix/kimi-k2.5` → `openai\|github-copilot\|opencode/gpt-5.4 (medium)` → `zai-coding-plan\|opencode/glm-5` → `opencode/big-pickle` |
| **Hephaestus** | `gpt-5.4` | `gpt-5.4 (medium)` |
-| **oracle** | `gpt-5.4` | `openai\|github-copilot\|opencode/gpt-5.4 (high)` → `google\|github-copilot\|opencode/gemini-3.1-pro (high)` → `anthropic\|github-copilot\|opencode/claude-opus-4-6 (max)` → `opencode-go/glm-5` |
+| **oracle** | `gpt-5.4` | `openai\|github-copilot\|opencode/gpt-5.4 (high)` → `google\|github-copilot\|opencode/gemini-3.1-pro (high)` → `anthropic\|github-copilot\|opencode/claude-opus-4-7 (max)` → `opencode-go/glm-5` |
| **librarian** | `minimax-m2.7` | `opencode-go/minimax-m2.7` → `opencode/minimax-m2.7-highspeed` → `anthropic\|opencode/claude-haiku-4-5` → `opencode/gpt-5-nano` |
| **explore** | `grok-code-fast-1` | `github-copilot\|xai/grok-code-fast-1` → `opencode-go/minimax-m2.7-highspeed` → `opencode/minimax-m2.7` → `anthropic\|opencode/claude-haiku-4-5` → `opencode/gpt-5-nano` |
| **multimodal-looker** | `gpt-5.4` | `openai\|opencode/gpt-5.4 (medium)` → `opencode-go/kimi-k2.5` → `zai-coding-plan/glm-4.6v` → `openai\|github-copilot\|opencode/gpt-5-nano` |
-| **Prometheus** | `claude-opus-4-6` | `anthropic\|github-copilot\|opencode/claude-opus-4-6 (max)` → `openai\|github-copilot\|opencode/gpt-5.4 (high)` → `opencode-go/glm-5` → `google\|github-copilot\|opencode/gemini-3.1-pro` |
-| **Metis** | `claude-opus-4-6` | `anthropic\|github-copilot\|opencode/claude-opus-4-6 (max)` → `openai\|github-copilot\|opencode/gpt-5.4 (high)` → `opencode-go/glm-5` → `kimi-for-coding/k2p5` |
-| **Momus** | `gpt-5.4` | `openai\|github-copilot\|opencode/gpt-5.4 (xhigh)` → `anthropic\|github-copilot\|opencode/claude-opus-4-6 (max)` → `google\|github-copilot\|opencode/gemini-3.1-pro (high)` → `opencode-go/glm-5` |
+| **Prometheus** | `claude-opus-4-7` | `anthropic\|github-copilot\|opencode/claude-opus-4-7 (max)` → `openai\|github-copilot\|opencode/gpt-5.4 (high)` → `opencode-go/glm-5` → `google\|github-copilot\|opencode/gemini-3.1-pro` |
+| **Metis** | `claude-opus-4-7` | `anthropic\|github-copilot\|opencode/claude-opus-4-7 (max)` → `openai\|github-copilot\|opencode/gpt-5.4 (high)` → `opencode-go/glm-5` → `kimi-for-coding/k2p5` |
+| **Momus** | `gpt-5.4` | `openai\|github-copilot\|opencode/gpt-5.4 (xhigh)` → `anthropic\|github-copilot\|opencode/claude-opus-4-7 (max)` → `google\|github-copilot\|opencode/gemini-3.1-pro (high)` → `opencode-go/glm-5` |
| **Atlas** | `claude-sonnet-4-6` | `anthropic\|github-copilot\|opencode/claude-sonnet-4-6` → `opencode-go/kimi-k2.5` → `openai\|github-copilot\|opencode/gpt-5.4 (medium)` → `opencode-go/minimax-m2.7` |

#### Category Provider Chains

| Category | Default Model | Provider Priority |
| ---------------------- | ------------------- | -------------------------------------------------------------- |
-| **visual-engineering** | `gemini-3.1-pro` | `google\|github-copilot\|opencode/gemini-3.1-pro (high)` → `zai-coding-plan\|opencode/glm-5` → `anthropic\|github-copilot\|opencode/claude-opus-4-6 (max)` → `opencode-go/glm-5` → `kimi-for-coding/k2p5` |
-| **ultrabrain** | `gpt-5.4` | `openai\|opencode/gpt-5.4 (xhigh)` → `google\|github-copilot\|opencode/gemini-3.1-pro (high)` → `anthropic\|github-copilot\|opencode/claude-opus-4-6 (max)` → `opencode-go/glm-5` |
-| **deep** | `gpt-5.4` | `openai\|github-copilot\|venice\|opencode/gpt-5.4 (medium)` → `anthropic\|github-copilot\|opencode/claude-opus-4-6 (max)` → `google\|github-copilot\|opencode/gemini-3.1-pro (high)` |
-| **artistry** | `gemini-3.1-pro` | `google\|github-copilot\|opencode/gemini-3.1-pro (high)` → `anthropic\|github-copilot\|opencode/claude-opus-4-6 (max)` → `openai\|github-copilot\|opencode/gpt-5.4` |
+| **visual-engineering** | `gemini-3.1-pro` | `google\|github-copilot\|opencode/gemini-3.1-pro (high)` → `zai-coding-plan\|opencode/glm-5` → `anthropic\|github-copilot\|opencode/claude-opus-4-7 (max)` → `opencode-go/glm-5` → `kimi-for-coding/k2p5` |
+| **ultrabrain** | `gpt-5.4` | `openai\|opencode/gpt-5.4 (xhigh)` → `google\|github-copilot\|opencode/gemini-3.1-pro (high)` → `anthropic\|github-copilot\|opencode/claude-opus-4-7 (max)` → `opencode-go/glm-5` |
+| **deep** | `gpt-5.4` | `openai\|github-copilot\|venice\|opencode/gpt-5.4 (medium)` → `anthropic\|github-copilot\|opencode/claude-opus-4-7 (max)` → `google\|github-copilot\|opencode/gemini-3.1-pro (high)` |
+| **artistry** | `gemini-3.1-pro` | `google\|github-copilot\|opencode/gemini-3.1-pro (high)` → `anthropic\|github-copilot\|opencode/claude-opus-4-7 (max)` → `openai\|github-copilot\|opencode/gpt-5.4` |
| **quick** | `gpt-5.4-mini` | `openai\|github-copilot\|opencode/gpt-5.4-mini` → `anthropic\|github-copilot\|opencode/claude-haiku-4-5` → `google\|github-copilot\|opencode/gemini-3-flash` → `opencode-go/minimax-m2.7` → `opencode/gpt-5-nano` |
| **unspecified-low** | `claude-sonnet-4-6` | `anthropic\|github-copilot\|opencode/claude-sonnet-4-6` → `openai\|opencode/gpt-5.3-codex (medium)` → `opencode-go/kimi-k2.5` → `google\|github-copilot\|opencode/gemini-3-flash` → `opencode-go/minimax-m2.7` |
-| **unspecified-high** | `claude-opus-4-6` | `anthropic\|github-copilot\|opencode/claude-opus-4-6 (max)` → `openai\|github-copilot\|opencode/gpt-5.4 (high)` → `zai-coding-plan\|opencode/glm-5` → `kimi-for-coding/k2p5` → `opencode-go/glm-5` → `opencode/kimi-k2.5` → `opencode\|moonshotai\|moonshotai-cn\|firmware\|ollama-cloud\|aihubmix/kimi-k2.5` |
+| **unspecified-high** | `claude-opus-4-7` | `anthropic\|github-copilot\|opencode/claude-opus-4-7 (max)` → `openai\|github-copilot\|opencode/gpt-5.4 (high)` → `zai-coding-plan\|opencode/glm-5` → `kimi-for-coding/k2p5` → `opencode-go/glm-5` → `opencode/kimi-k2.5` → `opencode\|moonshotai\|moonshotai-cn\|firmware\|ollama-cloud\|aihubmix/kimi-k2.5` |
| **writing** | `gemini-3-flash` | `google\|github-copilot\|opencode/gemini-3-flash` → `opencode-go/kimi-k2.5` → `anthropic\|github-copilot\|opencode/claude-sonnet-4-6` → `opencode-go/minimax-m2.7` |

Run `bunx oh-my-opencode doctor --verbose` to see effective model resolution for your config.
@@ -395,7 +395,7 @@ Control parallel agent execution and concurrency limits.
"defaultConcurrency": 5,
"staleTimeoutMs": 180000,
"providerConcurrency": { "anthropic": 3, "openai": 5, "google": 10 },
-"modelConcurrency": { "anthropic/claude-opus-4-6": 2 }
+"modelConcurrency": { "anthropic/claude-opus-4-7": 2 }
}
}
```
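The `modelConcurrency` caps above can be pictured as a per-key counter with a FIFO wait queue, keyed by `{providerID}/{modelID}`. A minimal sketch under assumed names — `ModelLimiter`, `acquire`, and `release` are hypothetical and not OmO's actual API:

```typescript
// Sketch of per-model concurrency limiting with a FIFO wait queue.
// The key format and the default limit of 5 follow the docs; everything
// else here is illustrative, not the real implementation.
class ModelLimiter {
  private active = new Map<string, number>();
  private waiters = new Map<string, Array<() => void>>();

  constructor(
    private limits: Record<string, number>,
    private defaultLimit = 5,
  ) {}

  private limitFor(key: string): number {
    return this.limits[key] ?? this.defaultLimit;
  }

  // Resolves once a slot for `key` (e.g. "anthropic/claude-opus-4-7") is free.
  acquire(key: string): Promise<void> {
    const running = this.active.get(key) ?? 0;
    if (running < this.limitFor(key)) {
      this.active.set(key, running + 1);
      return Promise.resolve();
    }
    return new Promise((resolve) => {
      const queue = this.waiters.get(key) ?? [];
      queue.push(resolve); // FIFO: waiters run in arrival order
      this.waiters.set(key, queue);
    });
  }

  // Called on completion, error, or cancellation.
  release(key: string): void {
    const next = this.waiters.get(key)?.shift();
    if (next) next(); // hand the slot straight to the oldest waiter
    else this.active.set(key, (this.active.get(key) ?? 0) - 1);
  }
}
```

With `{ "anthropic/claude-opus-4-7": 2 }`, a third task on that key waits until one of the first two releases its slot.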
@@ -678,7 +678,7 @@ Define `fallback_models` per agent or category:
{
"agents": {
"sisyphus": {
-"model": "anthropic/claude-opus-4-6",
+"model": "anthropic/claude-opus-4-7",
"fallback_models": [
"openai/gpt-5.4",
{
@@ -697,7 +697,7 @@ Define `fallback_models` per agent or category:
{
"agents": {
"sisyphus": {
-"model": "anthropic/claude-opus-4-6",
+"model": "anthropic/claude-opus-4-7",
"fallback_models": [
"openai/gpt-5.4",
{
@@ -798,7 +798,7 @@ Mix string entries and object entries when only some fallback models need specia
{
"agents": {
"sisyphus": {
-"model": "anthropic/claude-opus-4-6",
+"model": "anthropic/claude-opus-4-7",
"fallback_models": [
"openai/gpt-5.4",
{
@@ -832,7 +832,7 @@ Mix string entries and object entries when only some fallback models need specia
"maxTokens": 12000
},
{
-"model": "anthropic/claude-opus-4-6",
+"model": "anthropic/claude-opus-4-7",
"variant": "max",
"temperature": 0.2
},

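The mixed string/object `fallback_models` entries in the hunks above can be normalized to one shape and then walked in order. A sketch, assuming hypothetical names (`resolveModel`, the `available` predicate) rather than OmO's real resolver:

```typescript
// Hypothetical shapes: a fallback entry is either a bare model ID or an
// object carrying per-model overrides such as variant/temperature.
type FallbackEntry =
  | string
  | { model: string; variant?: string; temperature?: number };

interface ResolvedModel {
  model: string;
  variant?: string;
  temperature?: number;
}

// Normalize string entries so the resolver deals with a single shape.
function normalize(entry: FallbackEntry): ResolvedModel {
  return typeof entry === "string" ? { model: entry } : entry;
}

// Try the primary model, then each fallback in order, returning the first
// candidate the `available` predicate accepts (e.g. provider is configured).
function resolveModel(
  primary: ResolvedModel,
  fallbacks: FallbackEntry[],
  available: (model: string) => boolean,
): ResolvedModel | undefined {
  for (const candidate of [primary, ...fallbacks.map(normalize)]) {
    if (available(candidate.model)) return candidate;
  }
  return undefined;
}
```

The object form lets a single fallback override variant or temperature while the rest stay plain strings.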
@@ -10,9 +10,9 @@ Core-agent tab cycling is deterministic via injected runtime order field. The fi

| Agent | Model | Purpose |
| --------------------- | ------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| **Sisyphus** | `claude-opus-4-6` | The default orchestrator. Plans, delegates, and executes complex tasks using specialized subagents with aggressive parallel execution. Todo-driven workflow with extended thinking (32k budget). Fallback: `opencode-go/kimi-k2.5` → `kimi-for-coding/k2p5` → `opencode\|moonshotai\|moonshotai-cn\|firmware\|ollama-cloud\|aihubmix/kimi-k2.5` → `openai\|github-copilot\|opencode/gpt-5.4 (medium)` → `zai-coding-plan\|opencode/glm-5` → `opencode/big-pickle`. |
+| **Sisyphus** | `claude-opus-4-7` | The default orchestrator. Plans, delegates, and executes complex tasks using specialized subagents with aggressive parallel execution. Todo-driven workflow with extended thinking (32k budget). Fallback: `opencode-go/kimi-k2.5` → `kimi-for-coding/k2p5` → `opencode\|moonshotai\|moonshotai-cn\|firmware\|ollama-cloud\|aihubmix/kimi-k2.5` → `openai\|github-copilot\|opencode/gpt-5.4 (medium)` → `zai-coding-plan\|opencode/glm-5` → `opencode/big-pickle`. |
| **Hephaestus** | `gpt-5.4` | The Legitimate Craftsman. Autonomous deep worker inspired by AmpCode's deep mode. Goal-oriented execution with thorough research before action. Explores codebase patterns, completes tasks end-to-end without premature stopping. Named after the Greek god of forge and craftsmanship. Requires a GPT-capable provider. |
-| **Oracle** | `gpt-5.4` | Architecture decisions, code review, debugging. Read-only consultation with stellar logical reasoning and deep analysis. Inspired by AmpCode. Fallback: `google\|github-copilot\|opencode/gemini-3.1-pro (high)` → `anthropic\|github-copilot\|opencode/claude-opus-4-6 (max)` → `opencode-go/glm-5`. |
+| **Oracle** | `gpt-5.4` | Architecture decisions, code review, debugging. Read-only consultation with stellar logical reasoning and deep analysis. Inspired by AmpCode. Fallback: `google\|github-copilot\|opencode/gemini-3.1-pro (high)` → `anthropic\|github-copilot\|opencode/claude-opus-4-7 (max)` → `opencode-go/glm-5`. |
| **Librarian** | `minimax-m2.7` | Multi-repo analysis, documentation lookup, OSS implementation examples. Deep codebase understanding with evidence-based answers. Fallback: `opencode/minimax-m2.7-highspeed` → `anthropic\|opencode/claude-haiku-4-5` → `opencode/gpt-5-nano`. |
| **Explore** | `grok-code-fast-1` | Fast codebase exploration and contextual grep. Fallback: `opencode-go/minimax-m2.7-highspeed` → `opencode/minimax-m2.7` → `anthropic\|opencode/claude-haiku-4-5` → `opencode/gpt-5-nano`. |
| **Multimodal-Looker** | `gpt-5.4` | Visual content specialist. Analyzes PDFs, images, diagrams to extract information. Fallback: `opencode-go/kimi-k2.5` → `zai-coding-plan/glm-4.6v` → `openai\|github-copilot\|opencode/gpt-5-nano`. |
@@ -20,9 +20,9 @@ Core-agent tab cycling is deterministic via injected runtime order field. The fi

| Agent | Model | Purpose |
| -------------- | ----------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
-| **Prometheus** | `claude-opus-4-6` | Strategic planner with interview mode. Creates detailed work plans through iterative questioning. Fallback: `openai\|github-copilot\|opencode/gpt-5.4 (high)` → `opencode-go/glm-5` → `google\|github-copilot\|opencode/gemini-3.1-pro`. |
-| **Metis** | `claude-opus-4-6` | Plan consultant — pre-planning analysis. Identifies hidden intentions, ambiguities, and AI failure points. Fallback: `openai\|github-copilot\|opencode/gpt-5.4 (high)` → `opencode-go/glm-5` → `kimi-for-coding/k2p5`. |
-| **Momus** | `gpt-5.4` | Plan reviewer — validates plans against clarity, verifiability, and completeness standards. Fallback: `anthropic\|github-copilot\|opencode/claude-opus-4-6 (max)` → `google\|github-copilot\|opencode/gemini-3.1-pro (high)` → `opencode-go/glm-5`. |
+| **Prometheus** | `claude-opus-4-7` | Strategic planner with interview mode. Creates detailed work plans through iterative questioning. Fallback: `openai\|github-copilot\|opencode/gpt-5.4 (high)` → `opencode-go/glm-5` → `google\|github-copilot\|opencode/gemini-3.1-pro`. |
+| **Metis** | `claude-opus-4-7` | Plan consultant — pre-planning analysis. Identifies hidden intentions, ambiguities, and AI failure points. Fallback: `openai\|github-copilot\|opencode/gpt-5.4 (high)` → `opencode-go/glm-5` → `kimi-for-coding/k2p5`. |
+| **Momus** | `gpt-5.4` | Plan reviewer — validates plans against clarity, verifiability, and completeness standards. Fallback: `anthropic\|github-copilot\|opencode/claude-opus-4-7 (max)` → `google\|github-copilot\|opencode/gemini-3.1-pro (high)` → `opencode-go/glm-5`. |

### Orchestration Agents

@@ -115,7 +115,7 @@ By combining these two concepts, you can generate optimal agents through `task`.
| `artistry` | `google/gemini-3.1-pro` (high) | Highly creative/artistic tasks, novel ideas |
| `quick` | `openai/gpt-5.4-mini` | Trivial tasks - single file changes, typo fixes, simple modifications |
| `unspecified-low` | `anthropic/claude-sonnet-4-6` | Tasks that don't fit other categories, low effort required |
-| `unspecified-high` | `anthropic/claude-opus-4-6` (max) | Tasks that don't fit other categories, high effort required |
+| `unspecified-high` | `anthropic/claude-opus-4-7` (max) | Tasks that don't fit other categories, high effort required |
| `writing` | `google/gemini-3-flash` | Documentation, prose, technical writing |

### Usage
@@ -138,7 +138,7 @@ You can define custom categories in your plugin config file. During the rename t
| Field | Type | Description |
| ------------------- | ------- | --------------------------------------------------------------------------- |
| `description` | string | Human-readable description of the category's purpose. Shown in task prompt. |
-| `model` | string | AI model ID to use (e.g., `anthropic/claude-opus-4-6`) |
+| `model` | string | AI model ID to use (e.g., `anthropic/claude-opus-4-7`) |
| `variant` | string | Model variant (e.g., `max`, `xhigh`) |
| `temperature` | number | Creativity level (0.0 ~ 2.0). Lower is more deterministic. |
| `top_p` | number | Nucleus sampling parameter (0.0 ~ 1.0) |
@@ -170,7 +170,7 @@ You can define custom categories in your plugin config file. During the rename t

// 3. Configure thinking model and restrict tools
"deep-reasoning": {
-"model": "anthropic/claude-opus-4-6",
+"model": "anthropic/claude-opus-4-7",
"thinking": {
"type": "enabled",
"budgetTokens": 32000,

@@ -10,16 +10,16 @@ Agent factories following `createXXXAgent(model) → AgentConfig` pattern. Each

| Agent | Model | Temp | Mode | Fallback Chain | Purpose |
|-------|-------|------|------|----------------|---------|
-| **Sisyphus** | claude-opus-4-6 max | 0.1 | all | k2p5 -> kimi-k2.5 -> gpt-5.4 medium -> glm-5 -> big-pickle | Main orchestrator, plans + delegates |
+| **Sisyphus** | claude-opus-4-7 max | 0.1 | all | k2p5 -> kimi-k2.5 -> gpt-5.4 medium -> glm-5 -> big-pickle | Main orchestrator, plans + delegates |
| **Hephaestus** | gpt-5.4 medium | 0.1 | all | — | Autonomous deep worker |
-| **Oracle** | gpt-5.4 high | 0.1 | subagent | gemini-3.1-pro high -> claude-opus-4-6 max | Read-only consultation |
+| **Oracle** | gpt-5.4 high | 0.1 | subagent | gemini-3.1-pro high -> claude-opus-4-7 max | Read-only consultation |
| **Librarian** | minimax-m2.7 | 0.1 | subagent | minimax-m2.7-highspeed -> claude-haiku-4-5 -> gpt-5-nano | External docs/code search |
| **Explore** | grok-code-fast-1 | 0.1 | subagent | minimax-m2.7-highspeed -> minimax-m2.7 -> claude-haiku-4-5 -> gpt-5-nano | Contextual grep |
| **Multimodal-Looker** | gpt-5.3-codex medium | 0.1 | subagent | k2p5 -> gemini-3-flash -> glm-4.6v -> gpt-5-nano | PDF/image analysis |
-| **Metis** | claude-opus-4-6 max | **0.3** | subagent | gpt-5.4 high -> gemini-3.1-pro high | Pre-planning consultant |
-| **Momus** | gpt-5.4 xhigh | 0.1 | subagent | claude-opus-4-6 max -> gemini-3.1-pro high | Plan reviewer |
+| **Metis** | claude-opus-4-7 max | **0.3** | subagent | gpt-5.4 high -> gemini-3.1-pro high | Pre-planning consultant |
+| **Momus** | gpt-5.4 xhigh | 0.1 | subagent | claude-opus-4-7 max -> gemini-3.1-pro high | Plan reviewer |
| **Atlas** | claude-sonnet-4-6 | 0.1 | primary | gpt-5.4 medium | Todo-list orchestrator |
-| **Prometheus** | claude-opus-4-6 max | 0.1 | — | internal planner | Strategic planner (internal) |
+| **Prometheus** | claude-opus-4-7 max | 0.1 | — | internal planner | Strategic planner (internal) |
| **Sisyphus-Junior** | claude-sonnet-4-6 | 0.1 | all | user-configurable | Category-spawned executor |

## TOOL RESTRICTIONS

@@ -44,7 +44,7 @@ Both must agree before marking a task complete. Prevents premature completion on

## CONCURRENCY MODEL

-- Key format: `{providerID}/{modelID}` (e.g., `anthropic/claude-opus-4-6`)
+- Key format: `{providerID}/{modelID}` (e.g., `anthropic/claude-opus-4-7`)
- Default limit: 5 concurrent per key (configurable via `background_task` config)
- FIFO queue: tasks wait in order when slots full
- Slot released on: completion, error, cancellation

@@ -97,7 +97,7 @@
| artistry | gemini-3.1-pro high | Creative approaches |
| quick | gpt-5.4-mini | Trivial tasks |
| unspecified-low | claude-sonnet-4-6 | Moderate effort |
-| unspecified-high | claude-opus-4-6 max | High effort |
+| unspecified-high | claude-opus-4-7 max | High effort |
| writing | gemini-3-flash | Documentation |

## HOW TO ADD A TOOL