Fabric Harness CLI

fh run

Execute a workspace agent against a local Node runtime or a Temporal worker.

fabric-harness run <agent> [options]

Runs the named agent. The CLI loads the agent module, validates the input payload, calls run({ init, input, payload }), and validates the output.

Options

--id <id>
    Run/agent identifier. If omitted, a session id is generated using
    ${idPrefix}-${uuid}.

--target <node|temporal-worker>
    Where the agent runs. Defaults to node. temporal-worker switches
    --runtime to temporal.

--runtime <inline|temporal>
    Equivalent lower-level flag. Usually set via --target.

--model <provider/model-id>
    Override the agent's default model, e.g. openai/gpt-5.5.

--cwd <dir>
    Default sandbox/session working directory for this run. Relative
    paths stay scoped inside the sandbox workspace.

--prompt <text>
    Direct prompt text for --runtime temporal mode.

--payload '<json>'
    Pass an inline JSON payload.

--payload-file <path>
    Read JSON payload from a file.

--stdin
    Read JSON payload from stdin.

--set key=value
    Set a payload field. May be repeated.

--<field> <value>
    Shortcut for setting a payload field, e.g. --question "...".

--env <file>
    Load .env-style variables. Repeatable; shell env wins.
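To make the --id default concrete, here is a minimal Python sketch of the ${idPrefix}-${uuid} scheme. The make_session_id helper is illustrative only, not a Fabric Harness internal:

```python
import uuid

def make_session_id(id_prefix: str, explicit_id=None) -> str:
    """Return the explicit --id if given, else generate ${idPrefix}-${uuid}."""
    if explicit_id:
        return explicit_id
    return f"{id_prefix}-{uuid.uuid4()}"

print(make_session_id("ask", "ask-001"))  # explicit --id is used as-is
print(make_session_id("ask"))             # generated, e.g. ask-9b1deb4d-...
```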

Payload precedence

When more than one source is given, fields merge in this order (later wins):

--payload  →  --payload-file  →  --stdin  →  --set / --<field> / key=value
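The merge order above can be sketched in a few lines of Python. merge_payload is a hypothetical helper that mimics the documented precedence, not part of the CLI:

```python
import json

def merge_payload(payload=None, payload_file=None, stdin=None, sets=None):
    """Merge payload sources in documented order; later sources win per field."""
    merged = {}
    for source in (payload, payload_file, stdin):  # earlier sources first
        if source:
            merged.update(json.loads(source))
    merged.update(sets or {})                      # --set / --<field> win last
    return merged

# --payload sets question; --set overrides just that field:
result = merge_payload(payload='{"question": "original", "lang": "en"}',
                       sets={"question": "override"})
print(result)  # {'question': 'override', 'lang': 'en'}
```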

Examples

Inline JSON

fh run ask --payload '{"question":"What is Temporal?"}'

Field shortcut

fh run ask --question "What is Temporal?"
fh run ask question="What is Temporal?"
fh run ask --set question="What is Temporal?"

Working directory

fh run code --cwd /workspace/project --prompt "Run tests and summarize failures"
fh run code --cwd packages/core --payload '{"prompt":"Inspect this package"}'

--cwd is a sandbox cwd, not a host directory switch. Use it to make file/shell tools default to a repository subdirectory.
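One way to picture the scoping rule: relative --cwd values resolve under the sandbox root rather than the host filesystem. A rough Python sketch of that behavior, assuming a POSIX-style sandbox root (resolve_sandbox_cwd is hypothetical, not the actual resolution logic):

```python
from pathlib import PurePosixPath

def resolve_sandbox_cwd(sandbox_root: str, cwd: str) -> str:
    """Resolve a --cwd value relative to the sandbox workspace root."""
    target = PurePosixPath(cwd)
    if target.is_absolute():
        # Treated as a path within the sandbox, not a host directory switch.
        return str(target)
    return str(PurePosixPath(sandbox_root) / target)

print(resolve_sandbox_cwd("/workspace", "packages/core"))      # /workspace/packages/core
print(resolve_sandbox_cwd("/workspace", "/workspace/project")) # /workspace/project
```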

From a file or stdin

fh run ask --payload-file input.json
echo '{"question":"hi"}' | fh run ask --stdin

Real model

Put provider keys in a repo or workspace .env.local once; Fabric Harness auto-loads it, and variables already set in the shell environment still take precedence.

cp .env.example .env.local
# edit .env.local and set OPENAI_API_KEY=...
fh run ask --model openai/gpt-5.5 --question "What is Temporal?"

Use explicit --env <file> only for test/CI overrides.
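The "shell env wins" rule can be illustrated with a small Python sketch of .env-style loading. load_env_file is a hypothetical helper written for this illustration; the environ argument stands in for the current shell environment:

```python
def load_env_file(lines, environ):
    """Apply .env-style KEY=VALUE lines; values already in environ win."""
    loaded = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, value = line.split("=", 1)
        loaded[key] = environ.get(key, value)  # shell value takes precedence
    return loaded

env = load_env_file(["OPENAI_API_KEY=file-key", "# comment"],
                    environ={"OPENAI_API_KEY": "shell-key"})
print(env)  # {'OPENAI_API_KEY': 'shell-key'}
```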

Temporal worker target

# In a separate terminal:
fh temporal-worker

# Then:
fh run ask --target temporal-worker --id ask-001 --prompt "What is Temporal?"

What happens during run

  1. Resolve workspace root by walking up to find .fabricharness/.
  2. Load .fabricharness/config.ts and merge with env and CLI flags.
  3. Resolve agent path, target, runtime, model, id, and sandbox cwd.
  4. Apply env model provider (e.g. set provider credentials).
  5. Validate input payload against the agent's input schema (metadata agents only).
  6. Call the agent's run({ init, input, payload }).
  7. Validate output against the output schema (metadata agents only).
  8. Persist session and emit a session id.
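Steps 5-7 above (validate input, call run, validate output) can be sketched as a Python stub. The run_agent helper and EchoAgent class are purely illustrative stand-ins for the real loader and a metadata agent:

```python
def run_agent(agent, payload):
    """Validate the payload, call run({init, input, payload}), validate the result."""
    if not agent.input_schema(payload):
        raise ValueError("payload failed input schema validation")
    # The sketch passes the same dict as input and payload for simplicity.
    output = agent.run({"init": {}, "input": payload, "payload": payload})
    if not agent.output_schema(output):
        raise ValueError("result failed output schema validation")
    return output

class EchoAgent:
    """Toy metadata agent with trivial schema checks."""
    input_schema = staticmethod(lambda p: "question" in p)
    output_schema = staticmethod(lambda o: "answer" in o)

    def run(self, ctx):
        return {"answer": f"You asked: {ctx['input']['question']}"}

print(run_agent(EchoAgent(), {"question": "What is Temporal?"}))
```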

See also