# Cloudshrink Documentation
Cloudshrink attributes every LLM dollar to a project, actor, and model. This documentation covers how to instrument your code, what metrics we track, and how to troubleshoot common issues.
## Required Fields
| Field | Type | Required | Description |
|---|---|---|---|
| provider | string | Yes | LLM provider (OpenAI, Anthropic, Google, Mistral) |
| model | string | Yes | Model name (e.g., GPT-4o, Claude Sonnet 4) |
| input_tokens | number | Yes | Number of input/prompt tokens |
| output_tokens | number | Yes | Number of output/completion tokens |
| cached_tokens | number | No | Number of cached/prompt-cache tokens (default: 0) |
| project | string | No | Project identifier for attribution |
| actor | string | No | User or service identifier |
| env | string | No | Environment (production, staging, development) |
| cost_usd | number | No | Cost in USD (auto-calculated if omitted) |
| request_id | string | No | Unique request identifier (auto-generated if omitted) |
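To make the table concrete, here is a sketch of a usage event built from the fields above, together with an illustrative cost auto-calculation. The per-token rates and the assumption that cached tokens are billed at a discount and subtracted from `input_tokens` are placeholders, not Cloudshrink's actual pricing logic.

```python
# A hypothetical usage event using the fields documented above.
event = {
    "provider": "OpenAI",        # required
    "model": "gpt-4o",           # required
    "input_tokens": 1200,        # required
    "output_tokens": 350,        # required
    "cached_tokens": 400,        # optional, defaults to 0
    "project": "checkout-bot",   # optional attribution
    "actor": "svc-payments",     # optional attribution
    "env": "production",         # optional
}

# Made-up per-token rates in USD, for illustration only.
RATES = {"gpt-4o": {"input": 2.50e-6, "output": 10.00e-6, "cached": 1.25e-6}}

def estimate_cost_usd(ev):
    """Illustrative cost formula: cached tokens are assumed to be
    billed at a discounted rate instead of the full input rate."""
    r = RATES[ev["model"]]
    cached = ev.get("cached_tokens", 0)
    billable_input = ev["input_tokens"] - cached
    return (billable_input * r["input"]
            + cached * r["cached"]
            + ev["output_tokens"] * r["output"])
```

If `cost_usd` is supplied in the event, it takes precedence; otherwise a calculation along these lines fills it in.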
## Quick Start
- Install the SDK: `pip install cloudshrink` or `npm i @cloudshrink/sdk`
- Wrap your LLM client with `wrap(client, project="...", actor="...")`
- View your dashboard at `/dashboard`; everything is tracked automatically
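To show what wrapping a client conceptually does, here is a minimal sketch assuming an OpenAI-style client whose responses expose `usage.prompt_tokens` and `usage.completion_tokens`. The real SDK's internals may differ; the `record` callback is a hypothetical stand-in for Cloudshrink's ingestion call.

```python
class _TrackedCompletions:
    """Proxy that forwards create() calls and records token usage."""

    def __init__(self, inner, record, project, actor):
        self._inner = inner
        self._record = record
        self._project = project
        self._actor = actor

    def create(self, **kwargs):
        resp = self._inner.create(**kwargs)
        # Emit a usage event with the fields from the Required Fields table.
        self._record({
            "provider": "OpenAI",
            "model": kwargs.get("model"),
            "input_tokens": resp.usage.prompt_tokens,
            "output_tokens": resp.usage.completion_tokens,
            "project": self._project,
            "actor": self._actor,
        })
        return resp

def wrap(client, project, actor, record=print):
    """Replace the completions endpoint with a tracking proxy.
    `record` is a hypothetical hook; the real SDK sends events itself."""
    client.chat.completions = _TrackedCompletions(
        client.chat.completions, record, project, actor)
    return client
```

After wrapping, existing call sites keep working unchanged; every `create()` call additionally produces one attributed usage event.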