Cloudshrink Documentation

Cloudshrink attributes every LLM dollar to a project, actor, and model. This documentation covers how to instrument your code, what metrics we track, and how to troubleshoot common issues.

Required Fields

Field          Type    Required  Description
provider       string  Yes       LLM provider (OpenAI, Anthropic, Google, Mistral)
model          string  Yes       Model name (e.g., GPT-4o, Claude Sonnet 4)
input_tokens   number  Yes       Number of input/prompt tokens
output_tokens  number  Yes       Number of output/completion tokens
cached_tokens  number  No        Number of cached/prompt-cache tokens (default: 0)
project        string  No        Project identifier for attribution
actor          string  No        User or service identifier
env            string  No        Environment (production, staging, development)
cost_usd       number  No        Cost in USD (auto-calculated if omitted)
request_id     string  No        Unique request identifier (auto-generated if omitted)
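To make the schema above concrete, here is a minimal sketch in Python that checks a usage event against the required fields and fills the documented defaults. The `validate_event` helper, `REQUIRED`, and `OPTIONAL_DEFAULTS` names are illustrative assumptions, not part of the Cloudshrink SDK, which performs this validation for you:

```python
import uuid

# Required fields and their types, per the table above (hypothetical helper).
REQUIRED = {"provider": str, "model": str, "input_tokens": int, "output_tokens": int}
# Optional fields with documented defaults.
OPTIONAL_DEFAULTS = {"cached_tokens": 0}

def validate_event(event: dict) -> dict:
    """Reject events missing required fields; fill documented defaults."""
    for field, ftype in REQUIRED.items():
        if field not in event:
            raise ValueError(f"missing required field: {field}")
        if not isinstance(event[field], ftype):
            raise TypeError(f"{field} must be a {ftype.__name__}")
    out = {**OPTIONAL_DEFAULTS, **event}
    # request_id is auto-generated if omitted, per the field table.
    out.setdefault("request_id", str(uuid.uuid4()))
    return out

event = validate_event({
    "provider": "OpenAI",
    "model": "GPT-4o",
    "input_tokens": 1200,
    "output_tokens": 350,
    "project": "checkout-bot",
})
```

Note that `cost_usd` is intentionally absent here: when omitted, it is auto-calculated from the token counts and model.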

Quick Start

  1. Install the SDK: pip install cloudshrink or npm i @cloudshrink/sdk
  2. Wrap your LLM client with wrap(client, project="...", actor="...")
  3. View your dashboard at /dashboard—everything is tracked automatically
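The `wrap()` call in step 2 can be thought of as a thin proxy around the client's completion method that records usage after each call. A minimal sketch of that pattern follows; the `FakeClient` class, the `events` list, and this `wrap` body are illustrative assumptions, not the actual SDK internals:

```python
class FakeClient:
    """Stand-in for a real LLM client; returns canned usage numbers."""
    def complete(self, prompt):
        return {"text": "ok", "usage": {"input_tokens": 5, "output_tokens": 2}}

events = []  # in the real SDK, events are sent to the Cloudshrink backend

def wrap(client, project, actor):
    """Illustrative sketch: intercept completions and record attribution."""
    original = client.complete
    def complete(prompt):
        response = original(prompt)
        # Attach project/actor attribution to the provider-reported usage.
        events.append({"project": project, "actor": actor, **response["usage"]})
        return response
    client.complete = complete
    return client

client = wrap(FakeClient(), project="checkout-bot", actor="svc-api")
client.complete("hello")
```

Because the wrapper only observes the response, the call site is unchanged: your code keeps calling `client.complete(...)` exactly as before.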