# AI Tracing

Use Blyp AI tracing when you want LLM calls to emit normalized `ai_trace` records with usage, finish reason, timing, and request-scoped correlation.
## Pick the SDK guide you actually use

Blyp splits AI tracing across dedicated entrypoints, so each SDK now has its own setup page:

- Better Agent for `@blyp/core/ai/better-agent`
- Vercel AI SDK for `@blyp/core/ai/vercel`
- OpenAI SDK for `@blyp/core/ai/openai`, and OpenRouter through the OpenAI-compatible client path
- Anthropic SDK for `@blyp/core/ai/anthropic`
- AI Privacy & Capture for the shared `capture`, `exclude`, and `limits` controls
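All of these entrypoints follow the same wrap-the-client idea: intercept an SDK call, time it, and emit a normalized trace record. As a rough illustration only (every name below is hypothetical; the real setup for each SDK lives in its dedicated guide), the pattern looks like:

```typescript
// Hypothetical sketch of the wrap-the-method pattern the entrypoints use.
// None of these names are the actual Blyp API; see the per-SDK guides.
type TraceRecord = {
  provider: string;   // which SDK the call went through
  durationMs: number; // wall-clock latency of the call
};

// Wrap any async LLM call so it emits a trace record on completion.
function withTracing<A extends unknown[], R>(
  provider: string,
  call: (...args: A) => Promise<R>,
  emit: (trace: TraceRecord) => void,
): (...args: A) => Promise<R> {
  return async (...args: A) => {
    const start = Date.now();
    const result = await call(...args);
    emit({ provider, durationMs: Date.now() - start });
    return result;
  };
}

// Stub standing in for a real SDK method such as chat.completions.create().
async function fakeCompletion(prompt: string): Promise<string> {
  return `echo: ${prompt}`;
}

const traces: TraceRecord[] = [];
const traced = withTracing("openai", fakeCompletion, (t) => traces.push(t));
```

The wrapped function behaves exactly like the original call; tracing is purely a side effect, which is why the entrypoints can wrap each SDK's methods without changing call sites.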
## Provider matrix
| Entry | Wrapped methods |
|---|---|
| `@blyp/core/ai/better-agent` | Better Agent app/plugin runs and manual tracker flows, emitted as one run-level trace per Better Agent run |
| `@blyp/core/ai/vercel` | Vercel AI SDK `generateText()` and `streamText()` |
| `@blyp/core/ai/openai` | OpenAI `responses.create()` and `chat.completions.create()`, plus OpenRouter through the OpenAI-compatible client path |
| `@blyp/core/ai/anthropic` | Anthropic `messages.create()` |
| `@blyp/core/ai/fetch` | Low-level `fetch` transport tracing for status, latency, request IDs, and optional JSON body inspection |
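Whichever entry wraps the call, the intro's promise is the same: a normalized record carrying usage, finish reason, timing, and a request-scoped correlation ID. As an illustration of what such a record could hold (the field names here are assumptions for this sketch, not the documented `BlypLLMTrace` type):

```typescript
// Illustrative shape only; the real exported type is BlypLLMTrace from
// @blyp/core/ai/shared, and its actual fields may differ.
interface AiTraceRecord {
  requestId: string;   // request-scoped correlation ID
  provider: "better-agent" | "vercel" | "openai" | "anthropic" | "fetch";
  startedAt: number;   // epoch milliseconds
  durationMs: number;  // wall-clock latency
  finishReason?: string; // e.g. "stop" or "length"
  usage?: { inputTokens: number; outputTokens: number };
}

// A sample record, as a sink or exporter might receive it.
const example: AiTraceRecord = {
  requestId: "req_123",
  provider: "openai",
  startedAt: Date.now(),
  durationMs: 420,
  finishReason: "stop",
  usage: { inputTokens: 12, outputTokens: 48 },
};
```

Because every provider normalizes into one shape, downstream consumers can aggregate latency or token usage without branching on which SDK produced the trace.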
## Advanced API surface
Additional public APIs:
- `blypPlugin`
- `createBetterAgentTracker`
- `createOpenAITracker`
- `BlypProviderOptions`
- `BlypBetterAgentOptions`
- `BlypBetterAgentRunResolver`
- `BlypBetterAgentTracker`
- `BlypMiddlewareContext`
- `BlypSDKContext`
- `BlypLLMTrace`
- `BlypTraceEvent`
Shared provider-level trace types such as `BlypAIProvider`, `BlypCaptureOptions`, `BlypExcludeOptions`, `BlypLimitOptions`, `BlypLLMTrace`, `BlypProviderOptions`, `BlypSDKContext`, and `BlypTraceEvent` are exported from `@blyp/core/ai/shared`. `BlypMiddlewareContext` remains on `@blyp/core/ai/vercel`.
Use AI Privacy & Capture for the capture defaults and the exact `capture`, `exclude`, and `limits` controls.
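To make the division of labor between those three controls concrete, here is a hedged sketch (the option shapes below are invented for illustration, not Blyp's actual `BlypCaptureOptions`, `BlypExcludeOptions`, or `BlypLimitOptions`) of how capture, exclude, and limits could interact when a trace body is finalized:

```typescript
// Hypothetical option shapes; consult AI Privacy & Capture for the real
// semantics of capture, exclude, and limits.
interface PrivacyOptions {
  capture: boolean;             // record request/response bodies at all?
  exclude: string[];            // field names to drop from captured bodies
  limits: { maxChars: number }; // truncate captured string values
}

// Apply the three controls in order: capture gates everything,
// exclude drops fields, limits truncates what remains.
function applyPrivacy(
  body: Record<string, string>,
  opts: PrivacyOptions,
): Record<string, string> | undefined {
  if (!opts.capture) return undefined; // capture=false suppresses the body
  const out: Record<string, string> = {};
  for (const [key, value] of Object.entries(body)) {
    if (opts.exclude.includes(key)) continue;        // excluded field
    out[key] = value.slice(0, opts.limits.maxChars); // length limit
  }
  return out;
}
```

Ordering matters in a design like this: a global capture switch is checked first so sensitive bodies are never materialized, and limits run last so truncation applies only to fields that survived exclusion.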