Fabric Harness
Building Agents

Model Providers

Configure and select LLM providers per agent, session, or call.

Fabric Harness uses an explicit provider/model-id reference everywhere a model is selected. There is no implicit "default OpenAI" — you opt into a provider by configuring credentials and naming the model.

Reference format

provider/model-id

Examples:

openai/gpt-5.5
mock/test-model
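
A reference splits at the first slash into a provider name and a model id. A minimal parser sketch (the `parseModelRef` helper is hypothetical, not part of Fabric Harness):

```typescript
// Split a "provider/model-id" reference at the FIRST slash, so model ids
// that themselves contain slashes remain intact.
function parseModelRef(ref: string): { provider: string; modelId: string } {
  const slash = ref.indexOf('/');
  if (slash <= 0 || slash === ref.length - 1) {
    throw new Error(`invalid model reference: ${ref}`);
  }
  return { provider: ref.slice(0, slash), modelId: ref.slice(slash + 1) };
}
```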

Setting credentials

Put provider keys once in a repo- or workspace-level .env.local. Fabric Harness auto-loads .env and .env.local files; variables already set in the shell environment still win:

cp .env.example .env.local
# OPENAI_API_KEY=...
# ANTHROPIC_API_KEY=...
# AZURE_OPENAI_ENDPOINT=https://....openai.azure.com
# AZURE_OPENAI_API_KEY=...

Use explicit --env <file> only for overrides. Never paste API keys into source files or session artifacts.
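
The load order behaves like a plain object merge: shell variables override .env.local, which overrides .env. A sketch of that precedence (`mergeEnv` is a hypothetical helper, not the real loader):

```typescript
// Later spreads win, so shell values take precedence over .env.local,
// which takes precedence over .env.
function mergeEnv(
  shell: Record<string, string>,
  envLocal: Record<string, string>,
  envFile: Record<string, string>,
): Record<string, string> {
  return { ...envFile, ...envLocal, ...shell };
}
```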

Selecting the model

The first non-empty value wins, checked in this order:

  1. CLI flag: fh run ask --model openai/gpt-5.5
  2. Environment: FABRIC_MODEL=openai/gpt-5.5
  3. Project config: run.model or agent.model in .fabricharness/config.ts
  4. Agent-declared default: agent({ model: 'openai/gpt-5.5' })

Per-call override:

await session.prompt('Summarize', { model: 'openai/gpt-5.5' });
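
The selection order amounts to a first-non-empty scan over the four sources. A sketch of that logic (`resolveModel` is a hypothetical helper, not Fabric Harness internals):

```typescript
// Return the first source that is set and non-empty, in the documented
// order: CLI flag, FABRIC_MODEL, project config, agent default.
function resolveModel(opts: {
  cliFlag?: string;       // --model
  env?: string;           // FABRIC_MODEL
  config?: string;        // run.model / agent.model in .fabricharness/config.ts
  agentDefault?: string;  // agent({ model: ... })
}): string | undefined {
  return [opts.cliFlag, opts.env, opts.config, opts.agentDefault]
    .find((v) => v !== undefined && v !== '');
}
```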

Mock provider

For local development and tests, mock/test-model returns deterministic stub responses. It honors the typed-result schema where possible.

export default agent({
  // ...
  model: process.env.FABRIC_MODEL ?? 'mock/test-model',
});
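
The fallback in that snippet is just a nullish-coalescing default, isolated here for testing (`pickModel` is a hypothetical name for illustration):

```typescript
// Prefer FABRIC_MODEL when set; otherwise fall back to the deterministic
// mock provider.
function pickModel(env: Record<string, string | undefined>): string {
  return env.FABRIC_MODEL ?? 'mock/test-model';
}
```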

Provider env names

Fabric Harness recognizes the standard credential environment variables for common providers:

  • OPENAI_API_KEY
  • ANTHROPIC_API_KEY
  • OPENROUTER_API_KEY
  • GEMINI_API_KEY
  • GOOGLE_API_KEY
  • AZURE_OPENAI_API_KEY + AZURE_OPENAI_ENDPOINT
  • GROQ_API_KEY
  • MISTRAL_API_KEY
  • COHERE_API_KEY
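
For pre-flight checks, the list above can be encoded as a lookup table. The variable names come from the list; the mapping shape and the `missingCredentials` helper are assumptions for illustration, not Fabric Harness API:

```typescript
// Standard env var(s) per provider, per the list above. Azure needs both
// the key and the endpoint; Google accepts either of two names.
const PROVIDER_ENV_VARS: Record<string, string[]> = {
  openai: ['OPENAI_API_KEY'],
  anthropic: ['ANTHROPIC_API_KEY'],
  openrouter: ['OPENROUTER_API_KEY'],
  google: ['GEMINI_API_KEY', 'GOOGLE_API_KEY'],
  azure: ['AZURE_OPENAI_API_KEY', 'AZURE_OPENAI_ENDPOINT'],
  groq: ['GROQ_API_KEY'],
  mistral: ['MISTRAL_API_KEY'],
  cohere: ['COHERE_API_KEY'],
};

// Report which of a provider's expected variables are absent or empty.
function missingCredentials(
  provider: string,
  env: Record<string, string | undefined>,
): string[] {
  return (PROVIDER_ENV_VARS[provider] ?? []).filter((name) => !env[name]);
}
```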

Roadmap

  • Foundry-runtime model routing for hosted-agent deployments.
  • Per-call cost telemetry (today: token usage in fh metrics).