Fabric Harness
Getting Started

Your First Agent

Build, describe, and run a metadata agent end-to-end.

This walkthrough builds the same ask agent that ships with examples/hello-world. By the end you'll have a typed agent, a describe view of its schema, and a working run invocation with both mock and real models.

1. Create the workspace

A Fabric Harness workspace is any directory with a .fabricharness/ folder.

mkdir my-first-agent
cd my-first-agent
mkdir -p .fabricharness/agents

Initialize a package.json and depend on the SDK (since there's no npm publish yet, link against the workspace inside the monorepo):

{
  "name": "my-first-agent",
  "type": "module",
  "private": true,
  "dependencies": {
    "@fabric-harness/sdk": "workspace:*"
  }
}

2. Write the agent

Create .fabricharness/agents/ask.ts:

import { agent, schema } from '@fabric-harness/sdk';

export default agent({
  name: 'ask',
  description: 'Answers a question using the configured model.',
  input: schema.object({
    question: schema.string().describe('Question to answer'),
  }),
  output: schema.string(),
  model: process.env.FABRIC_MODEL ?? 'mock/test-model',
  triggers: { webhook: true },
  run: async ({ init, input }) => {
    const fabricAgent = await init();
    const session = await fabricAgent.session();
    return await session.prompt(input.question);
  },
});

Two things matter here:

  • agent({...}) registers a metadata agent — its input/output schema is discoverable via fh describe, and the framework validates the payload before and after run.
  • triggers declares how the agent can be invoked once deployed. webhook: true is enough for POST /agents/:agent/:id on the Node server.
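The validation mentioned in the first bullet can be illustrated with a toy validator. This is a sketch of the idea, not the SDK's implementation — the `Schema` type and `validate` function below are hypothetical names invented for this example:

```typescript
// Toy sketch of schema validation, NOT the real SDK code.
// A schema is either a string or an object with named fields.
type Schema =
  | { kind: 'string' }
  | { kind: 'object'; fields: Record<string, Schema> };

// Returns a list of validation errors; an empty list means the value conforms.
function validate(schema: Schema, value: unknown): string[] {
  if (schema.kind === 'string') {
    return typeof value === 'string' ? [] : ['expected a string'];
  }
  if (typeof value !== 'object' || value === null) {
    return ['expected an object'];
  }
  const errors: string[] = [];
  for (const [key, fieldSchema] of Object.entries(schema.fields)) {
    const fieldValue = (value as Record<string, unknown>)[key];
    for (const err of validate(fieldSchema, fieldValue)) {
      errors.push(`${key}: ${err}`);
    }
  }
  return errors;
}

// Mirrors the ask agent's declared input: { question: string }.
const askInput: Schema = {
  kind: 'object',
  fields: { question: { kind: 'string' } },
};

console.log(validate(askInput, { question: 'What is Temporal?' })); // []
console.log(validate(askInput, {})); // [ 'question: expected a string' ]
```

The same check runs twice per invocation: once on the payload before run, and once on the return value against the output schema.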

3. List and describe

From the workspace root:

fh agents
fh describe ask

describe prints the input/output schema, declared model, default target, and any examples. Use --json if you need machine-readable output.

4. Run it

fh run ask --question "What is Temporal?"

Behind the scenes the CLI:

  1. Discovers .fabricharness/agents/ask.ts.
  2. Loads workspace config (.fabricharness/config.ts, optional).
  3. Picks a model — CLI flag → FABRIC_MODEL env → config → agent default.
  4. Validates the input against the declared Fabric schema.
  5. Calls run({ init, input, payload }).
  6. Validates the output against the declared Fabric schema.
  7. Persists the session under .fabricharness/sessions/.
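The model selection in step 3 is a first-defined-wins lookup. A minimal sketch of that precedence — the function name and argument shape here are illustrative, not the CLI's internals:

```typescript
// First-defined-wins model resolution, mirroring step 3 above.
// Names and the final fallback are assumptions for this sketch.
function resolveModel(opts: {
  cliFlag?: string;      // --model on the command line
  env?: string;          // FABRIC_MODEL environment variable
  config?: string;       // .fabricharness/config.ts
  agentDefault?: string; // `model` field in the agent definition
}): string {
  return (
    opts.cliFlag ??
    opts.env ??
    opts.config ??
    opts.agentDefault ??
    'mock/test-model'
  );
}

console.log(resolveModel({ env: 'openai/gpt-5.5', agentDefault: 'mock/test-model' }));
// → 'openai/gpt-5.5' (env beats the agent default when no flag is passed)
```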

Other ways to pass payload:

fh run ask --payload '{"question":"What is Temporal?"}'
fh run ask question="What is Temporal?"
fh run ask --payload-file input.json
echo '{"question":"hi"}' | fh run ask --stdin
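The `question="..."` form suggests the CLI assembles a payload from bare key=value arguments. A toy version of that parsing (hypothetical — the real CLI may coerce types; this sketch keeps every value as a string):

```typescript
// Toy key=value argument parsing into a JSON payload.
// Splits on the FIRST '=' so values may themselves contain '='.
function payloadFromArgs(args: string[]): Record<string, string> {
  const payload: Record<string, string> = {};
  for (const arg of args) {
    const eq = arg.indexOf('=');
    if (eq === -1) {
      throw new Error(`expected key=value, got: ${arg}`);
    }
    payload[arg.slice(0, eq)] = arg.slice(eq + 1);
  }
  return payload;
}

console.log(payloadFromArgs(['question=What is Temporal?']));
// { question: 'What is Temporal?' }
```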

5. Use a real model

Put provider keys in the repo-level .env.local once. Fabric Harness auto-loads both repo- and workspace-level .env and .env.local files, and variables set in the shell environment still take precedence over either file.
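That load order amounts to a merge where later sources override earlier ones and the shell environment merges last. A small sketch of the idea (file contents are inlined here for illustration):

```typescript
// Later sources override earlier ones; the shell environment merges
// last, so it always wins over values loaded from .env files.
function mergeEnv(...sources: Record<string, string>[]): Record<string, string> {
  return Object.assign({}, ...sources);
}

const dotEnv = { OPENAI_API_KEY: 'from-.env', FABRIC_MODEL: 'mock/test-model' };
const dotEnvLocal = { OPENAI_API_KEY: 'from-.env.local' };
const shell = { FABRIC_MODEL: 'openai/gpt-5.5' };

console.log(mergeEnv(dotEnv, dotEnvLocal, shell));
// { OPENAI_API_KEY: 'from-.env.local', FABRIC_MODEL: 'openai/gpt-5.5' }
```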

cp .env.example .env.local
# edit .env.local and set OPENAI_API_KEY=...
fh run ask --model openai/gpt-5.5 --question "What is Temporal?"

For repeated use, put the model in .fabricharness/config.ts so you do not need --model either.
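A plausible shape for that file, shown only as a sketch — the actual config schema is defined by the SDK, so treat the exported field names here as assumptions:

```typescript
// .fabricharness/config.ts — hypothetical shape; the real config
// schema may differ, so check the SDK docs before copying this.
export default {
  model: 'openai/gpt-5.5',
};
```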

Never paste API keys into source files or session artifacts. Use .env.local, a secret store, or shell environment variables.

6. Inspect what happened

fh sessions
fh inspect <session-id>
fh logs <session-id>
fh metrics <session-id>

Next steps