Model Providers
Configure and select LLM providers per agent, session, or call.
Fabric Harness uses an explicit provider/model-id reference everywhere a model is selected. There is no implicit "default OpenAI" — you opt into a provider by configuring credentials and naming the model.
Reference format
```
provider/model-id
```

Examples:

```
openai/gpt-5.5
mock/test-model
```

Setting credentials
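As a rough illustration, a reference of this shape splits on its first slash. The parser below is a hypothetical sketch, not Fabric Harness's actual implementation — only the `provider/model-id` format itself is documented:

```typescript
// Hypothetical sketch of parsing a provider/model-id reference.
// Not Fabric Harness's real parser; shown only to illustrate the format.
function parseModelRef(ref: string): { provider: string; modelId: string } {
  const slash = ref.indexOf('/');
  if (slash <= 0 || slash === ref.length - 1) {
    throw new Error(`Invalid model reference: ${ref}`);
  }
  // Everything after the first slash is the model id, so ids that
  // themselves contain slashes would survive intact.
  return { provider: ref.slice(0, slash), modelId: ref.slice(slash + 1) };
}
```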
Set provider keys once in a repo- or workspace-level .env.local. Fabric Harness auto-loads .env and .env.local files; variables already set in the shell environment still win:
```
cp .env.example .env.local
# OPENAI_API_KEY=...
# ANTHROPIC_API_KEY=...
# AZURE_OPENAI_ENDPOINT=https://....openai.azure.com
# AZURE_OPENAI_API_KEY=...
```

Use an explicit --env <file> only for overrides. Never paste API keys into source files or session artifacts.
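The load order can be pictured with a small sketch. The assumed semantics here are: .env merged first, then .env.local, with the shell environment overriding both; `mergeEnv` is a hypothetical helper, not part of Fabric Harness's API:

```typescript
// Hypothetical sketch of the documented precedence: dotenv files are
// merged in order (.env, then .env.local), and any variable already set
// in the shell environment overrides both files.
function mergeEnv(
  shellEnv: Record<string, string>,
  ...dotenvFiles: Array<Record<string, string>>
): Record<string, string> {
  const fromFiles: Record<string, string> = {};
  for (const file of dotenvFiles) {
    Object.assign(fromFiles, file); // later files (.env.local) win over .env
  }
  return { ...fromFiles, ...shellEnv }; // shell environment always wins
}
```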
Selecting the model
The first non-empty value wins, in this order:
- CLI flag:
```
fh run ask --model openai/gpt-5.5
```
- Environment:
```
FABRIC_MODEL=openai/gpt-5.5
```
- Config file: .fabricharness/config.ts → run.model or agent.model
- Agent-declared default:
```
agent({ model: 'openai/gpt-5.5' })
```
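The first-non-empty rule can be sketched as follows. `resolveModel` and its parameter names are hypothetical; the real resolution lives inside Fabric Harness:

```typescript
// Hypothetical sketch of first-non-empty model resolution.
// The candidates mirror the documented order: CLI flag, FABRIC_MODEL,
// config-file value, then the agent's declared default.
function resolveModel(
  cliFlag: string | undefined,
  env: Record<string, string | undefined>,
  configModel: string | undefined,
  agentDefault: string | undefined,
): string | undefined {
  const candidates = [cliFlag, env.FABRIC_MODEL, configModel, agentDefault];
  return candidates.find((c) => c !== undefined && c !== '');
}
```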
Per-call override:
```
await session.prompt('Summarize', { model: 'openai/gpt-5.5' });
```

Mock provider
For local development and tests, mock/test-model returns deterministic stub responses. It honors the typed-result schema where possible.
```
export default agent({
  // ...
  model: process.env.FABRIC_MODEL ?? 'mock/test-model',
});
```

Provider env names
Fabric Harness knows the standard env names for common providers:
- OPENAI_API_KEY
- ANTHROPIC_API_KEY
- OPENROUTER_API_KEY
- GEMINI_API_KEY / GOOGLE_API_KEY
- AZURE_OPENAI_API_KEY + AZURE_OPENAI_ENDPOINT
- GROQ_API_KEY
- MISTRAL_API_KEY
- COHERE_API_KEY
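For illustration only, the credential check might be modeled as a table from provider prefix to required variables. The provider keys and the any-of/all-of grouping below (e.g. Gemini accepting either key, Azure needing both key and endpoint) are assumptions, not Fabric Harness's documented internals:

```typescript
// Hypothetical table: which env vars each provider prefix is assumed to need.
// The grouping is an assumption based on the names listed above.
const requiredEnv: Record<string, { anyOf?: string[]; allOf?: string[] }> = {
  openai: { allOf: ['OPENAI_API_KEY'] },
  anthropic: { allOf: ['ANTHROPIC_API_KEY'] },
  openrouter: { allOf: ['OPENROUTER_API_KEY'] },
  gemini: { anyOf: ['GEMINI_API_KEY', 'GOOGLE_API_KEY'] },
  azure: { allOf: ['AZURE_OPENAI_API_KEY', 'AZURE_OPENAI_ENDPOINT'] },
  groq: { allOf: ['GROQ_API_KEY'] },
  mistral: { allOf: ['MISTRAL_API_KEY'] },
  cohere: { allOf: ['COHERE_API_KEY'] },
};

// Returns true when the given environment satisfies the assumed requirements.
function hasCredentials(
  provider: string,
  env: Record<string, string | undefined>,
): boolean {
  const req = requiredEnv[provider];
  if (!req) return false;
  const anyOk = !req.anyOf || req.anyOf.some((k) => !!env[k]);
  const allOk = !req.allOf || req.allOf.every((k) => !!env[k]);
  return anyOk && allOk;
}
```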
Roadmap
- Foundry-runtime model routing for hosted-agent deployments.
- Per-call cost telemetry (today: token usage in fh metrics).