# Your First Agent

Build, describe, and run a metadata agent end-to-end.
This walkthrough builds the same `ask` agent that ships with `examples/hello-world`. By the end you'll have a typed agent, a `describe` view of its schema, and a working `run` invocation with both mock and real models.
## 1. Create the workspace

A Fabric Harness workspace is any directory with a `.fabricharness/` folder.

```shell
mkdir my-first-agent
cd my-first-agent
mkdir -p .fabricharness/agents
```

Initialize a `package.json` and depend on the SDK (since there's no npm publish yet, link against the workspace inside the monorepo):
```json
{
  "name": "my-first-agent",
  "type": "module",
  "private": true,
  "dependencies": {
    "@fabric-harness/sdk": "workspace:*"
  }
}
```

## 2. Write the agent

Create `.fabricharness/agents/ask.ts`:
```typescript
import { agent, schema } from '@fabric-harness/sdk';

export default agent({
  name: 'ask',
  description: 'Answers a question using the configured model.',
  input: schema.object({
    question: schema.string().describe('Question to answer'),
  }),
  output: schema.string(),
  model: process.env.FABRIC_MODEL ?? 'mock/test-model',
  triggers: { webhook: true },
  run: async ({ init, input }) => {
    const fabricAgent = await init();
    const session = await fabricAgent.session();
    return await session.prompt(input.question);
  },
});
```

Two things matter here:

- `agent({...})` registers a metadata agent — its input/output schema is discoverable via `fh describe`, and the framework validates the payload before and after `run`.
- `triggers` declares how the agent can be invoked once deployed. `webhook: true` is enough for `POST /agents/:agent/:id` on the Node server.
## 3. List and describe

From the workspace root:

```shell
fh agents
fh describe ask
```

`describe` prints the input/output schema, declared model, default target, and any examples. Use `--json` if you need machine-readable output.
## 4. Run it

```shell
fh run ask --question "What is Temporal?"
```

Behind the scenes the CLI:

- Discovers `.fabricharness/agents/ask.ts`.
- Loads workspace config (`.fabricharness/config.ts`, optional).
- Picks a model — CLI flag → `FABRIC_MODEL` env → config → agent default.
- Validates the input against the declared Fabric schema.
- Calls `run({ init, input, payload })`.
- Validates the output against the declared Fabric schema.
- Persists the session under `.fabricharness/sessions/`.
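The model-selection precedence above can be sketched as a pure function. This is only an illustration of the order described in this doc — `resolveModel` and its option names are hypothetical, not real SDK exports:

```typescript
// Illustrative sketch of the resolution order: CLI flag wins, then the
// FABRIC_MODEL env var, then workspace config, then the agent's default.
// All names here are made up for illustration.
function resolveModel(opts: {
  cliFlag?: string; // --model on the command line
  env?: string; // process.env.FABRIC_MODEL
  config?: string; // model set in .fabricharness/config.ts
  agentDefault: string; // the agent's own `model` field
}): string {
  return opts.cliFlag ?? opts.env ?? opts.config ?? opts.agentDefault;
}

console.log(resolveModel({ env: 'openai/gpt-5.5', agentDefault: 'mock/test-model' }));
// → openai/gpt-5.5
```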
Other ways to pass the payload:

```shell
fh run ask --payload '{"question":"What is Temporal?"}'
fh run ask question="What is Temporal?"
fh run ask --payload-file input.json
echo '{"question":"hi"}' | fh run ask --stdin
```

## 5. Use a real model
Put provider keys once in the repo-level `.env.local`; Fabric Harness auto-loads repo- and workspace-level `.env` and `.env.local` files, and shell environment variables still take precedence.

```shell
cp .env.example .env.local
# edit .env.local and set OPENAI_API_KEY=...
fh run ask --model openai/gpt-5.5 --question "What is Temporal?"
```

For repeated use, set the model in `.fabricharness/config.ts` so you don't need `--model` either.

Never paste API keys into source files or session artifacts. Use `.env.local`, a secret store, or shell environment variables.
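As a rough sketch of that config-level default, assuming `.fabricharness/config.ts` exports a plain config object with a `model` field (the exact shape isn't documented here — check the Configuration page for the real one):

```typescript
// .fabricharness/config.ts — hypothetical sketch; the field name `model`
// is an assumption, not a confirmed part of the config schema.
export default {
  // Used when neither --model nor FABRIC_MODEL is set.
  model: 'openai/gpt-5.5',
};
```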
## 6. Inspect what happened

```shell
fh sessions
fh inspect <session-id>
fh logs <session-id>
fh metrics <session-id>
```

## Next steps
- Workspace layout — the rest of `.fabricharness/`.
- Configuration — `config.ts`, env precedence, model defaults.
- Building agents — skills, roles, tools, sandboxes.
- CLI reference — every command.