<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="../assets/xml/rss.xsl" media="all"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>adam.yml (Posts about Harness Engineering)</title><link>https://maxamillion.sh/</link><description></description><atom:link href="https://maxamillion.sh/categories/harness-engineering.xml" rel="self" type="application/rss+xml"></atom:link><language>en</language><lastBuildDate>Wed, 15 Apr 2026 19:17:59 GMT</lastBuildDate><generator>Nikola (getnikola.com)</generator><docs>http://blogs.law.harvard.edu/tech/rss</docs><item><title>Stop building agents, start harnessing Goose</title><link>https://maxamillion.sh/blog/stop-building-agents-start-harnessing-goose/</link><dc:creator>Adam John Miller</dc:creator><description>&lt;section id="stop-building-agents-start-harnessing-goose"&gt;
&lt;h2&gt;Stop building agents, start harnessing Goose&lt;/h2&gt;
&lt;p&gt;There's a disconnect in the AI Engineering space right now, and I think the
open source community has already risen to the occasion to bridge the gap, but
I don't see any signal that this is well understood or widely adopted.
The industry is overwhelmingly focused on building agents from
scratch via custom frameworks, bespoke orchestration layers, hand-rolled
tool-calling loops, etc. when many of the hard problems have already been solved
in that layer of the stack. The building block exists. It's open source. It's called
&lt;a class="reference external" href="https://github.com/aaif-goose/goose"&gt;goose&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I think that for over 90% of use cases, if you're spending your time implementing an
agent from scratch, you're already behind, and you may have already lost the race.
My hypothesis is that Goose is the building block. It's the small, composable
thing that becomes powerful when you wrap it in what the industry is rapidly agreeing
is called &lt;em&gt;the Harness&lt;/em&gt;.&lt;/p&gt;
&lt;/section&gt;
&lt;section id="the-composable-agent-you-didn-t-know-you-needed"&gt;
&lt;h2&gt;The composable agent you didn't know you needed&lt;/h2&gt;
&lt;p&gt;Most people hear "goose" and think either "another AI coding assistant" or "another
AI chatbot" (depending on how they came across goose and how they use it). That
misunderstanding is the problem. Goose is not a coding assistant. It is not a
chatbot. It is not a Claude Code competitor, though it can be configured to act
as all of those things. At its core, goose is &lt;strong&gt;a small, configurable agent
runtime with an extension-based architecture that can be composed into virtually
anything&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;It operates on three components:&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Interface&lt;/strong&gt;: Desktop app or CLI/TUI that collects user input and displays
output.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Agent&lt;/strong&gt;: The core logic engine that manages the interactive loop: sending
requests to LLM providers, orchestrating tool calls, and handling context
revision.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Extensions&lt;/strong&gt;: Pluggable components built on the &lt;a class="reference external" href="https://modelcontextprotocol.io/"&gt;Model Context Protocol
(MCP)&lt;/a&gt; that provide specific tools and
capabilities.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A small core with a lot of power delivered through native extensions, external
plugins, and configuration options. The agent core itself is minimal: an
interactive loop plus context management. That's it. &lt;em&gt;All&lt;/em&gt; capabilities come
through the extension system.&lt;/p&gt;
&lt;p&gt;You can strip goose down to nothing. No external capabilities. No tool calling.
No &lt;a class="reference external" href="https://agentskills.io/home"&gt;skills&lt;/a&gt;. No plugins. You can even configure it so it cannot access the
internet, only the inference service to talk to the model (which can be local).
At that point, it's a plain chatbot with no agency whatsoever.&lt;/p&gt;
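&lt;p&gt;As a rough sketch, a stripped-down &lt;code class="docutils literal"&gt;~/.config/goose/config.yaml&lt;/code&gt; might look something like this (the keys shown are my reading of the documented config conventions, so verify them against the docs for your goose version):&lt;/p&gt;
&lt;pre class="literal-block"&gt;# Minimal goose configuration: one local model, no extensions at all.
GOOSE_PROVIDER: ollama   # local inference only; no external API calls
GOOSE_MODEL: qwen2.5     # whatever model your local runtime serves
extensions: {}           # empty map: no tools, no skills, no agency
&lt;/pre&gt;
&lt;p&gt;With no extensions enabled, every turn is a plain request/response exchange with the model and nothing more.&lt;/p&gt;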
&lt;p&gt;Or you can go the other direction entirely.&lt;/p&gt;
&lt;/section&gt;
&lt;section id="from-zero-to-everything"&gt;
&lt;h2&gt;From zero to everything&lt;/h2&gt;
&lt;p&gt;Configure goose with the Developer extension, Computer Controller, Memory,
and a handful of MCP servers and you have a working replacement for
&lt;a class="reference external" href="https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview"&gt;Claude Code&lt;/a&gt;,
&lt;a class="reference external" href="https://github.com/openai/codex"&gt;Codex&lt;/a&gt;,
&lt;a class="reference external" href="https://github.com/google-gemini/gemini-cli"&gt;Gemini CLI&lt;/a&gt;,
&lt;a class="reference external" href="https://github.com/nicosalm/opencode"&gt;OpenCode&lt;/a&gt;,
or any other similar tool. Same capabilities, no vendor lock-in, and you choose
your own inference provider from over 25 options (at the time of this writing), including
&lt;a class="reference external" href="https://www.anthropic.com/"&gt;Anthropic&lt;/a&gt;,
&lt;a class="reference external" href="https://openai.com/"&gt;OpenAI&lt;/a&gt;,
&lt;a class="reference external" href="https://ai.google.dev/"&gt;Google Gemini&lt;/a&gt;,
&lt;a class="reference external" href="https://groq.com/"&gt;Groq&lt;/a&gt;,
&lt;a class="reference external" href="https://mistral.ai/"&gt;Mistral&lt;/a&gt;,
and more. You can run fully local inference via goose's native inference
provider, or offload to &lt;a class="reference external" href="https://ollama.com/"&gt;Ollama&lt;/a&gt;, &lt;a class="reference external" href="https://ramalama.ai/"&gt;Ramalama&lt;/a&gt;,
&lt;a class="reference external" href="https://lmstudio.ai/"&gt;LM Studio&lt;/a&gt;, or
&lt;a class="reference external" href="https://docs.docker.com/model-runner/"&gt;Docker Model Runner&lt;/a&gt;. The full list
of providers is in the
&lt;a class="reference external" href="https://goose-docs.ai/docs/getting-started/providers/"&gt;goose documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Put this together and you're well on your way to unlocking goose's full
potential, but you're just getting started.&lt;/p&gt;
&lt;/section&gt;
&lt;section id="recipes-reproducible-composable-workflows"&gt;
&lt;h2&gt;Recipes: reproducible, composable workflows&lt;/h2&gt;
&lt;p&gt;Where goose gets interesting is its composition model.
&lt;a class="reference external" href="https://goose-docs.ai/docs/guides/recipes/"&gt;Goose Recipes&lt;/a&gt; are reusable,
shareable workflow definitions that package together instructions, extensions,
parameters, provider settings, retry logic, and structured response schemas. A
recipe can be as simple as a single prompt with a specific extension configuration.
Alternatively it can be sophisticated, composed of subrecipes where each subrecipe is
effectively another goose agent with its own configuration: its own extensions,
plugins, inference provider, system prompt, and skills.&lt;/p&gt;
&lt;p&gt;Subrecipes run in isolated sessions with no shared conversation history, memory,
or state. The main recipe's agent decides when to invoke them, can run them
sequentially or in parallel, and chains their outputs through conversation
context. Compositional agent orchestration without writing a single line of
framework code.&lt;/p&gt;
&lt;p&gt;You're not writing an orchestration layer. You're not building a DAG executor.
You're not implementing tool-calling logic. You're writing YAML that describes
&lt;em&gt;what you want done&lt;/em&gt; and goose handles the &lt;em&gt;how&lt;/em&gt;.&lt;/p&gt;
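&lt;p&gt;That YAML can be sketched as follows. This is an illustrative example, not a canonical one: the parameter and sub-recipe fields mirror my reading of the recipe reference docs, the &lt;code class="docutils literal"&gt;researcher.yaml&lt;/code&gt; file is hypothetical, and you should confirm the schema against the docs for your goose version:&lt;/p&gt;
&lt;pre class="literal-block"&gt;version: 1.0.0
title: research-and-summarize
description: Research a topic, then summarize the findings
parameters:
  - key: topic
    input_type: string
    requirement: required
    description: The topic to research
instructions: |
  Research {{ topic }} using the researcher sub-recipe,
  then produce a short summary of what it returns.
sub_recipes:
  - name: researcher          # runs in its own isolated session
    path: ./researcher.yaml   # hypothetical sub-recipe file
extensions:
  - type: builtin
    name: developer
&lt;/pre&gt;
&lt;p&gt;A recipe like this can then be invoked with something like &lt;code class="docutils literal"&gt;goose run --recipe research-and-summarize.yaml&lt;/code&gt;.&lt;/p&gt;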
&lt;/section&gt;
&lt;section id="goosetown-multi-agent-orchestration-no-framework-required"&gt;
&lt;h2&gt;Goosetown: multi-agent orchestration, no framework required&lt;/h2&gt;
&lt;p&gt;Suppose you want to take this all the way to the extreme of a fully autonomous
software factory like the one Steve Yegge outlines in his now-infamous blog post,
"&lt;a class="reference external" href="https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16dd04"&gt;Welcome to Gas Town&lt;/a&gt;",
and implements via his &lt;a class="reference external" href="https://github.com/gastownhall/gastown"&gt;Gastown&lt;/a&gt; project.
Gastown is a multi-agent workspace
manager for orchestrating Claude Code, GitHub Copilot, Codex, Gemini, and other
AI agents with persistent work tracking. It's a Go application with concepts
like Mayors, Rigs, Polecats, Hooks, Convoys, and Beads. It's a real engineering
effort to coordinate 20-30 agents on a codebase.&lt;/p&gt;
&lt;p&gt;You can do exactly that by using goose as the building block. The open source
community did it. They looked at Gastown and re-implemented its core concepts using goose's
native capabilities. The result is
&lt;a class="reference external" href="https://github.com/aaif-goose/goosetown"&gt;Goosetown&lt;/a&gt;. Goosetown is a multi-agent
coordination system that orchestrates "flocks" of AI agents (researchers,
writers, workers, reviewers) to decompose and execute complex tasks. Goosetown
uses goose's subagent delegation, skills system for role-based specialization,
inter-agent communication via a broadcast channel called the "Town Wall," and
multi-model support for adversarial cross-reviews where different LLMs review
each other's work.&lt;/p&gt;
&lt;p&gt;If you look at the code, it's just a few flat files, some shell scripts,
some skills markdown, and some agent definitions.&lt;/p&gt;
&lt;p&gt;All of this built on top of goose. Not alongside it. Not wrapping it. &lt;em&gt;On&lt;/em&gt; it.
Using the primitives goose already provides: skills, subagents, extensions, and
recipes.&lt;/p&gt;
&lt;/section&gt;
&lt;section id="goose-as-a-service"&gt;
&lt;h2&gt;Goose as a service&lt;/h2&gt;
&lt;p&gt;Goose also runs as a daemon, exposing itself to other applications via the
&lt;a class="reference external" href="https://agentclientprotocol.com/get-started/introduction"&gt;Agent Client Protocol (ACP)&lt;/a&gt;
(a standardized JSON-RPC protocol developed by &lt;a class="reference external" href="https://zed.dev/"&gt;Zed Industries&lt;/a&gt;).
ACP does for AI agents what LSP did for language tooling: it decouples agents
from editors and frontends, so goose can be embedded directly into Zed, JetBrains, Neovim, or
any ACP-compatible environment.&lt;/p&gt;
&lt;p&gt;The composability runs both directions. Goose can also &lt;em&gt;consume&lt;/em&gt; other ACP
agents as providers, routing its LLM calls through Claude Code, Codex, or
Gemini while keeping its own extension ecosystem and UI. As Adrian Cole wrote
in his blog post
&lt;a class="reference external" href="https://goose-docs.ai/blog/2026/04/08/how-to-break-up-with-your-agent/"&gt;"How to Break Up with Your Agent"&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"Pick the UI you like. Pick the agent you like. They don't have to be the
same thing."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This bidirectional composability — goose as a component &lt;em&gt;and&lt;/em&gt; goose as an
orchestrator — is what separates it from other agent tools.&lt;/p&gt;
&lt;/section&gt;
&lt;section id="open-governance-no-vendor-lock-in"&gt;
&lt;h2&gt;Open governance, no vendor lock-in&lt;/h2&gt;
&lt;p&gt;Goose is fully open source under the leadership of the
&lt;a class="reference external" href="https://aaif.io/"&gt;Agentic AI Foundation (AAIF)&lt;/a&gt;, which provides
vendor-neutral governance under the umbrella of the
&lt;a class="reference external" href="https://www.linuxfoundation.org/"&gt;Linux Foundation&lt;/a&gt;. AAIF also hosts the
&lt;a class="reference external" href="https://modelcontextprotocol.io/"&gt;Model Context Protocol (MCP)&lt;/a&gt; itself, so
the standards goose builds on are governed with the same neutrality.&lt;/p&gt;
&lt;p&gt;This matters. When you build your workflows on goose, you're building on a
foundation governed by a neutral body with a Governing Board, a Technical
Committee, and a transparent contribution model. This is the same open,
collaborative, and neutral model that made Linux and Kubernetes into reliable
core components of the entire software industry, and it's the same reason I
think it's worth investing time and energy into.&lt;/p&gt;
&lt;p&gt;It's no secret I'm an open source nerd, and goose checks all the boxes.&lt;/p&gt;
&lt;/section&gt;
&lt;section id="the-harness-is-the-thing"&gt;
&lt;h2&gt;The harness is the thing&lt;/h2&gt;
&lt;p&gt;We've collectively been on a journey. First it was Prompt Engineering, crafting the right
words to get the right output. Then it was Context Engineering, making sure the
model has the right information at the right time. Now, it seems we've arrived
at the next turn in this adventure we all find ourselves in: &lt;strong&gt;Harness Engineering&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Ralph Bean nails this in his blog post
&lt;a class="reference external" href="https://medium.com/@rbean_3467/what-even-is-the-harness-f21336768a80"&gt;"What Even Is the Harness?"&lt;/a&gt;.
The harness is the enablement layer. It's everything you add to the agent runtime
that gives you control over your outcomes:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"Harness — the enablement layer. AGENTS.md files, skills, custom tools,
hand-crafted linters, system prompts for task-oriented agents. These are the
things you engineer, iteratively, to increase the chances the agent gets
things right. This is what Birgitta Böckeler calls the user harness and is
where Mitchell Hashimoto's attention lives."&lt;/em&gt;&lt;/p&gt;
&lt;p class="attribution"&gt;—Ralph Bean&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Read that again. The harness is not the agent. The harness is what you add to
the agent. The &lt;code class="docutils literal"&gt;AGENTS.md&lt;/code&gt; files. The skills. The custom MCP tools. The
hand-crafted linters. The system prompts. The recipes and subrecipes. The
extension configurations. The provider choices. The permission policies.&lt;/p&gt;
&lt;p&gt;This is where your engineering effort belongs. Not in building the interactive
loop, or implementing tool-calling JSON parsing, or writing context window
management, or building MCP client libraries. Goose already does all of that and
does so with the full backing of the AAIF, the Linux Foundation, and a vibrant
open source community.&lt;/p&gt;
&lt;p&gt;In most cases, and I'd argue almost all cases, your job is to build the harness.&lt;/p&gt;
&lt;/section&gt;
&lt;section id="the-90-argument"&gt;
&lt;h2&gt;The 90% argument&lt;/h2&gt;
&lt;p&gt;I think for over 90% of use cases where someone is building an
agent today, goose is a better starting point than a blank text editor or a vibe
coding session (are we calling it Agentic Engineering yet?).&lt;/p&gt;
&lt;p&gt;If you need a coding assistant, goose does that. If you need a research agent,
configure goose with web scraping extensions and a research-focused recipe or skill.
If you need a CI/CD bot, run goose in daemon mode with ACP or orchestrate it with
scripts/recipes in your CI job runner of choice. If you need multi-agent
orchestration, compose goose instances with subrecipes or build a
Goosetown-style flock. If you need local-only, air-gapped inference, point
goose at Ollama, Ramalama, LM Studio, or its native inference provider. If you
need to integrate with your existing editor, goose speaks ACP natively or you
can set &lt;a class="reference external" href="https://goose-docs.ai/docs/guides/environment-variables/"&gt;GOOSE_PROMPT_EDITOR&lt;/a&gt;
and run the whole flow from inside your editor of choice. If you need vendor-neutral
governance, it's under the Linux Foundation umbrella via AAIF.&lt;/p&gt;
&lt;p&gt;The remaining 10%? Those are the genuinely novel agent architectures, the
research projects pushing boundaries, the use cases where you do need to control
every byte of the agent loop. For those, build from scratch. For everything else,
build the harness. I'm not saying you can't build agents from scratch. I'm simply
suggesting that you probably don't need to.&lt;/p&gt;
&lt;/section&gt;
&lt;section id="a-call-to-action"&gt;
&lt;h2&gt;A call to action&lt;/h2&gt;
&lt;p&gt;If you're a professional technologist or an aspiring AI Engineer, I'd encourage
you to shift your mental model. Stop thinking about building agents. Start
thinking about &lt;em&gt;harnessing&lt;/em&gt; them. At this point in the AI hype cycle, the agent
is mature enough to be the commodity. The harness is your competitive advantage.&lt;/p&gt;
&lt;p&gt;Install &lt;a class="reference external" href="https://github.com/aaif-goose/goose"&gt;goose&lt;/a&gt;. Strip it down to
nothing and build it back up. Write a recipe. Compose some subrecipes. Add
skills. Configure extensions. Point it at different providers. Run it as a
daemon. Embed it in your editor. Build a flock. Engineer the harness.&lt;/p&gt;
&lt;p&gt;Go forth and harness your agents.&lt;/p&gt;
&lt;p&gt;Happy hacking. &amp;lt;3&lt;/p&gt;
&lt;/section&gt;</description><category>Agents</category><category>AI</category><category>Goose</category><category>Harness Engineering</category><category>Open Source</category><guid>https://maxamillion.sh/blog/stop-building-agents-start-harnessing-goose/</guid><pubDate>Wed, 15 Apr 2026 14:00:00 GMT</pubDate></item></channel></rss>