LLMModelUsage: {
    inputAudioTokens: number;
    inputCachedAudioTokens: number;
    inputCachedImageTokens: number;
    inputCachedTextTokens: number;
    inputCachedTokens: number;
    inputImageTokens: number;
    inputTextTokens: number;
    inputTokens: number;
    model: string;
    outputAudioTokens: number;
    outputTextTokens: number;
    outputTokens: number;
    provider: string;
    sessionDurationMs: number;
    type: "llm_usage";
}

Type declaration

  • inputAudioTokens: number

    Input audio tokens (for multimodal models).

  • inputCachedAudioTokens: number

    Cached input audio tokens.

  • inputCachedImageTokens: number

    Cached input image tokens.

  • inputCachedTextTokens: number

    Cached input text tokens.

  • inputCachedTokens: number

    Input tokens served from cache.

  • inputImageTokens: number

    Input image tokens (for multimodal models).

  • inputTextTokens: number

    Input text tokens.

  • inputTokens: number

    Total input tokens.

  • model: string

    The model name (e.g., 'gpt-4o', 'claude-3-5-sonnet').

  • outputAudioTokens: number

    Output audio tokens (for multimodal models).

  • outputTextTokens: number

    Output text tokens.

  • outputTokens: number

    Total output tokens.

  • provider: string

    The provider name (e.g., 'openai', 'anthropic').

  • sessionDurationMs: number

    Total session connection duration in milliseconds (for session-based billing like xAI).

  • type: "llm_usage"

    Discriminant identifying this record as an LLM usage event.
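
The declaration above can be consumed as a plain TypeScript interface. Below is a sketch (not from the source docs) that restates the shape and derives a cache hit rate from the token counts; the helper name `cacheHitRate` and the sample numbers are hypothetical.

```typescript
// The LLMModelUsage shape, restated as a TypeScript interface.
interface LLMModelUsage {
  inputAudioTokens: number;
  inputCachedAudioTokens: number;
  inputCachedImageTokens: number;
  inputCachedTextTokens: number;
  inputCachedTokens: number;
  inputImageTokens: number;
  inputTextTokens: number;
  inputTokens: number;
  model: string;
  outputAudioTokens: number;
  outputTextTokens: number;
  outputTokens: number;
  provider: string;
  sessionDurationMs: number;
  type: "llm_usage";
}

// Hypothetical helper: fraction of input tokens served from cache
// (0 when there was no input at all).
function cacheHitRate(u: LLMModelUsage): number {
  return u.inputTokens === 0 ? 0 : u.inputCachedTokens / u.inputTokens;
}

// Example record with made-up numbers for illustration.
const usage: LLMModelUsage = {
  type: "llm_usage",
  provider: "openai",
  model: "gpt-4o",
  inputTokens: 1000,
  inputTextTokens: 900,
  inputImageTokens: 100,
  inputAudioTokens: 0,
  inputCachedTokens: 250,
  inputCachedTextTokens: 250,
  inputCachedImageTokens: 0,
  inputCachedAudioTokens: 0,
  outputTokens: 400,
  outputTextTokens: 400,
  outputAudioTokens: 0,
  sessionDurationMs: 12500,
};

console.log(cacheHitRate(usage)); // → 0.25
```

Note that `inputTokens` and `outputTokens` are the totals; the per-modality fields (`inputTextTokens`, `inputImageTokens`, `inputAudioTokens`) break those totals down, so they should not be added on top of the totals when computing cost.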