Stop building agents, start harnessing Goose
There's a disconnect in the AI Engineering space right now, and I think the open source community has already risen to the occasion to bridge the gap, but I don't see any signal that this is well understood or widely adopted. The industry is overwhelmingly focused on building agents from scratch via custom frameworks, bespoke orchestration layers, and hand-rolled tool-calling loops, even though many of the hard problems in that layer of the stack have already been solved. The building block exists. It's open source. It's called goose.
I think for over 90% of use cases, if you're spending your time implementing an agent from scratch, you're already behind, or have potentially already lost the race. My hypothesis is that Goose is the building block. It's the small, composable thing that becomes powerful when you wrap it in what the industry is rapidly agreeing to call the Harness.
The composable agent you didn't know you needed
Most people hear "goose" and think either "another AI coding assistant" or "another AI chatbot" (depending on how they came across goose and how they use it). That misunderstanding is the problem. Goose is not a coding assistant. It is not a chatbot. It is not a Claude Code competitor, though it can be configured to act as all of those things. At its core, goose is a small, configurable agent runtime with an extension-based architecture that can be composed into virtually anything.
It operates on three components:
Interface: Desktop app or CLI/TUI that collects user input and displays output.
Agent: The core logic engine that manages the interactive loop: sending requests to LLM providers, orchestrating tool calls, and handling context revision.
Extensions: Pluggable components built on the Model Context Protocol (MCP) that provide specific tools and capabilities.
A small core with a lot of power delivered through native extensions, external plugins, and configuration options. The agent core itself is minimal: an interactive loop plus context management. That's it. All capabilities come through the extension system.
You can strip goose down to nothing. No external capabilities. No tool calling. No skills. No plugins. You can even configure it so it cannot access the internet, only the inference service to talk to the model (which can be local). At that point, it's a plain chatbot with no agency whatsoever.
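As an illustration of what that stripped-down setup might look like, here's a sketch of a minimal goose config.yaml. The GOOSE_PROVIDER and GOOSE_MODEL keys come from goose's documented configuration format at the time of writing, but treat the exact values and the empty extensions map as illustrative:

```yaml
# ~/.config/goose/config.yaml -- a deliberately minimal setup (sketch)
GOOSE_PROVIDER: ollama   # local inference only; no internet access required
GOOSE_MODEL: llama3.2    # illustrative model name
extensions: {}           # nothing enabled: no tools, no skills, no agency
```

With nothing enabled, every reply is pure model output: a plain chatbot, exactly as described above.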
Or you can go the other direction entirely.
From zero to everything
Configure goose with the Developer extension, Computer Controller, Memory, and a handful of MCP servers and you have a working replacement for Claude Code, Codex, Gemini CLI, OpenCode, or any other similar tool. Same capabilities, no vendor lock-in, and you choose your own inference provider from over 25 options (at the time of this writing), including Anthropic, OpenAI, Google Gemini, Groq, Mistral, and more. You can run fully local inference via goose's native inference provider, or offload to Ollama, Ramalama, LM Studio, or Docker Model Runner. The full list of providers is in the goose documentation.
Put this together and you're well on your way to unlocking goose's full potential, but you're just getting started.
Recipes: reproducible, composable workflows
Where goose gets interesting is its composition model. Goose Recipes are reusable, shareable workflow definitions that package together instructions, extensions, parameters, provider settings, retry logic, and structured response schemas. A recipe can be as simple as a single prompt with a specific extension configuration. Alternatively, it can be sophisticated, composed of subrecipes where each subrecipe is effectively another goose agent with its own configuration: its own extensions, plugins, inference provider, system prompt, and skills.
Subrecipes run in isolated sessions with no shared conversation history, memory, or state. The main recipe's agent decides when to invoke them, can run them sequentially or in parallel, and chains their outputs through conversation context. Compositional agent orchestration without writing a single line of framework code.
You're not writing an orchestration layer. You're not building a DAG executor. You're not implementing tool-calling logic. You're writing YAML that describes what you want done and goose handles the how.
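To make that concrete, here's a sketch of what such a recipe might look like. The top-level fields follow goose's recipe format as documented at the time of writing, but treat the specific schema details, subrecipe paths, and parameter names as illustrative rather than a definitive reference:

```yaml
# dependency-audit.yaml -- a recipe sketch with two subrecipes
version: 1.0.0
title: dependency-audit
description: Audit a repository's dependencies and draft upgrade notes
parameters:
  - key: repo_path
    input_type: string
    requirement: required
    description: Path to the repository to audit
extensions:
  - type: builtin
    name: developer          # shell + file tools for the main agent
sub_recipes:
  - name: scan
    path: ./subrecipes/scan-dependencies.yaml    # hypothetical; runs in its own session
  - name: summarize
    path: ./subrecipes/write-upgrade-notes.yaml  # hypothetical; can use a different provider
prompt: |
  Audit the dependencies in {{ repo_path }}. Use the scan subrecipe to find
  outdated packages, then the summarize subrecipe to draft upgrade notes.
```

Each subrecipe file is itself a full recipe, which is what makes the composition recursive: agents configuring and invoking agents, all in YAML.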
Goosetown: multi-agent orchestration, no framework required
If you want to take this all the way to the extreme, consider the fully autonomous software factory Steve Yegge outlines in his now-infamous blog post, "Welcome to Gas Town," and implements via his Gastown project. Gastown is a multi-agent workspace manager for orchestrating Claude Code, GitHub Copilot, Codex, Gemini, and other AI agents with persistent work tracking. It's a Go application with concepts like Mayors, Rigs, Polecats, Hooks, Convoys, and Beads. It's a real engineering effort to coordinate 20-30 agents on a codebase.
You can do exactly that by using goose as the building block. The open source community already did: they looked at Gastown and re-implemented its core concepts using goose's native capabilities. The result is Goosetown, a multi-agent coordination system that orchestrates "flocks" of AI agents (researchers, writers, workers, reviewers) to decompose and execute complex tasks. Goosetown uses goose's subagent delegation, its skills system for role-based specialization, inter-agent communication via a broadcast channel called the "Town Wall," and multi-model support for adversarial cross-reviews where different LLMs review each other's work.
If you look at the code, it's just a few flat files, some shell scripts, some skills markdown, and some agent definitions.
All of this built on top of goose. Not alongside it. Not wrapping it. On it. Using the primitives goose already provides: skills, subagents, extensions, and recipes.
Goose as a service
Goose also runs as a daemon, exposing itself to other applications via the Agent Client Protocol (ACP), a standardized JSON-RPC protocol developed by Zed Industries. ACP does for AI agents what LSP did for language servers: it decouples agents from editors and frontends, so goose can be embedded directly into Zed, JetBrains, Neovim, or any ACP-compatible environment.
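As an illustration, wiring goose into an ACP-capable editor can be as small as pointing the editor at the goose binary. The snippet below sketches what that looks like in Zed's settings.json; the exact key names and the acp subcommand are assumptions based on the goose and Zed documentation at the time of writing:

```json
{
  "agent_servers": {
    "Goose": {
      "command": "goose",
      "args": ["acp"]
    }
  }
}
```

From there, the editor speaks JSON-RPC to goose over stdio, and goose brings its own extensions, recipes, and provider configuration along with it.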
The composability runs both directions. Goose can also consume other ACP agents as providers, routing its LLM calls through Claude Code, Codex, or Gemini while keeping its own extension ecosystem and UI. As Adrian Cole wrote in his blog post "How to Break Up with Your Agent":
"Pick the UI you like. Pick the agent you like. They don't have to be the same thing."
This bidirectional composability — goose as a component and goose as an orchestrator — is what separates it from other agent tools.
Open governance, no vendor lock-in
Goose is fully open source under the leadership of the Agentic AI Foundation (AAIF), which provides vendor-neutral governance under the umbrella of the Linux Foundation. AAIF also hosts the Model Context Protocol (MCP) itself, so the standards goose builds on are governed with the same neutrality.
This matters. When you build your workflows on goose, you're building on a foundation governed by a neutral body with a Governing Board, a Technical Committee, and a transparent contribution model. This is the same open, collaborative, and neutral model that made Linux and Kubernetes into reliable core components of the entire software industry, and it's the same reason I think it's worth investing time and energy into.
It's no secret I'm an open source nerd, and goose checks all the boxes.
The harness is the thing
We've collectively been on a journey. First it was Prompt Engineering, crafting the right words to get the right output. Then it was Context Engineering, making sure the model has the right information at the right time. Now, it seems we've arrived at the next turn in this adventure we all find ourselves in: Harness Engineering.
Ralph Bean nails this in his blog post "What Even Is the Harness?". The harness is the enablement layer. It's everything you add to the agent runtime that gives you control over your outcomes:
"Harness — the enablement layer. AGENTS.md files, skills, custom tools, hand-crafted linters, system prompts for task-oriented agents. These are the things you engineer, iteratively, to increase the chances the agent gets things right. This is what Birgitta Böckeler calls the user harness and is where Mitchell Hashimoto's attention lives."
—Ralph Bean
Read that again. The harness is not the agent. The harness is what you add to the agent. The AGENTS.md files. The skills. The custom MCP tools. The hand-crafted linters. The system prompts. The recipes and subrecipes. The extension configurations. The provider choices. The permission policies.
This is where your engineering effort belongs. Not in building the interactive loop, or implementing tool-calling JSON parsing, or writing context window management, or building MCP client libraries. Goose already does all of that and does so with the full backing of the AAIF, the Linux Foundation, and a vibrant open source community.
In most cases, and I'd argue almost all cases, your job is to build the harness.
The 90% argument
I think for over 90% of use cases where someone is building an agent today, goose is a better starting point than a blank text editor or a vibe coding session (are we calling it Agentic Engineering yet?).
If you need a coding assistant, goose does that.
If you need a research agent, configure goose with web scraping extensions and a research-focused recipe or skill.
If you need a CI/CD bot, run goose in daemon mode with ACP or orchestrate it with scripts/recipes in your CI job runner of choice.
If you need multi-agent orchestration, compose goose instances with subrecipes or build a Goosetown-style flock.
If you need local-only, air-gapped inference, point goose at Ollama, Ramalama, LM Studio, or its native inference provider.
If you need to integrate with your existing editor, goose speaks ACP natively, or you can set GOOSE_PROMPT_EDITOR and run the whole flow from inside your editor of choice.
If you need vendor-neutral governance, goose sits under the Linux Foundation umbrella via the AAIF.
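For the CI/CD case, a headless invocation can be a one-liner in your job definition. The goose run command and its --recipe flag exist at the time of writing, but treat the recipe filename and parameter below as hypothetical placeholders:

```shell
# Run a recipe non-interactively inside a CI job (sketch).
# pr-review.yaml and pr_number are hypothetical examples.
goose run --recipe pr-review.yaml --params pr_number=123
```

The recipe carries the extensions, provider, and instructions, so the CI script stays a thin trigger rather than an orchestration layer.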
The remaining 10%? Those are the genuinely novel agent architectures, the research projects pushing boundaries, the use cases where you do need to control every byte of the agent loop. For those, build from scratch. For everything else, build the harness. I'm not saying you can't build agents from scratch. I'm simply suggesting that you probably don't need to.
A call to action
If you're a professional technologist or an aspiring AI Engineer, I'd encourage you to shift your mental model. Stop thinking about building agents. Start thinking about harnessing them. At this point in the AI hype cycle, the agent is mature enough to be the commodity. The harness is your competitive advantage.
Install goose. Strip it down to nothing and build it back up. Write a recipe. Compose some subrecipes. Add skills. Configure extensions. Point it at different providers. Run it as a daemon. Embed it in your editor. Build a flock. Engineer the harness.
Go forth and harness your agents.
Happy hacking. <3