One API Call. Complete Visibility.
Send your AI usage to Fenra after each provider call. We handle the rest:
- Automatic cost calculation: No spreadsheets, no manual lookups. We know every provider’s pricing.
- Feature-level breakdown: See exactly which features, users, and environments drive your costs.
- Multi-provider support: OpenAI, Anthropic, Google, Bedrock, xAI, DeepSeek. One dashboard for all of them.
- Privacy by design: We never store your prompts or outputs. Only usage context.
Quickstart
Send your first transaction in under 5 minutes.
API Reference
Full documentation for the Fenra API.
How It Works
After your AI provider returns a response, extract the usage data and send it to Fenra. The ingest endpoint is designed for hot paths: it returns a 202 Accepted in 20-40ms and processes asynchronously. No blocking, no performance impact on your application.
No prompts or outputs are ever stored. We only track:
- Token counts (input, output, cached, reasoning)
- Model and provider
- Your custom context (feature, environment, user)
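As a minimal sketch of that flow in Python: build a payload from the provider's usage data and POST it after the call returns. The endpoint URL, header names, and payload field names below are illustrative assumptions, not the documented Fenra schema — see the API Reference for the real one.

```python
import json
import urllib.request


def build_transaction(usage, model, provider, context):
    """Build a Fenra-style transaction payload from provider usage data.

    Field names are assumptions for illustration. Note that only token
    counts and context go in -- never prompts or outputs.
    """
    return {
        "provider": provider,
        "model": model,
        "input_tokens": usage.get("input_tokens", 0),
        "output_tokens": usage.get("output_tokens", 0),
        "cached_tokens": usage.get("cached_tokens", 0),
        "context": context,  # e.g. {"feature": "chat", "environment": "prod"}
    }


def send_transaction(payload, api_key):
    """POST the payload to a hypothetical ingest URL.

    The endpoint responds with 202 Accepted and processes asynchronously,
    so this stays cheap on your hot path.
    """
    req = urllib.request.Request(
        "https://api.fenra.example/v1/transactions",  # hypothetical URL
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    return urllib.request.urlopen(req, timeout=2)
```

In practice you would call `build_transaction` with the `usage` object your provider's SDK returns, then fire `send_transaction` without awaiting anything beyond the 202.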
Supported Providers
OpenAI
All models including gpt-4o, o1, o3, dall-e, whisper
Anthropic
All Claude models including claude-3.5-sonnet, claude-3-opus
Google
All Gemini models including gemini-2.0-flash, gemini-1.5-pro
AWS Bedrock
All Bedrock-hosted models
xAI
All xAI models including grok-beta
DeepSeek
All DeepSeek models including deepseek-chat, deepseek-reasoner
Custom
Don’t see your provider? Add a custom provider and define your own pricing.
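As a sketch of how per-token pricing turns usage into cost (the rates and model below are made up for illustration, not any provider’s real pricing):

```python
def usage_cost(input_tokens, output_tokens, input_price_per_m, output_price_per_m):
    """Cost in USD given per-million-token rates for a custom model."""
    return (
        input_tokens * input_price_per_m + output_tokens * output_price_per_m
    ) / 1_000_000


# Hypothetical custom model priced at $3/M input, $15/M output:
cost = usage_cost(10_000, 2_000, 3.0, 15.0)
# 10,000 * $3/M + 2,000 * $15/M = $0.03 + $0.03 = $0.06
```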