Blyp Docs

AI Tracing

Use Blyp AI tracing when you want LLM calls to emit normalized ai_trace records with usage, finish reason, timing, and request-scoped correlation.
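The fields above can be pictured as a record shaped roughly like the following. This is a sketch only: every field name here is an illustrative assumption, not the published `ai_trace` schema.

```typescript
// Hypothetical sketch of a normalized ai_trace record.
// Field names are illustrative assumptions, not the published schema.
interface AITraceRecord {
  traceId: string;      // request-scoped correlation ID
  provider: string;     // e.g. "openai", "anthropic"
  model: string;
  usage: { inputTokens: number; outputTokens: number };
  finishReason: string; // e.g. "stop", "length"
  startedAt: number;    // epoch milliseconds
  durationMs: number;
}

const example: AITraceRecord = {
  traceId: "req_123",
  provider: "openai",
  model: "gpt-4o",
  usage: { inputTokens: 42, outputTokens: 7 },
  finishReason: "stop",
  startedAt: Date.now(),
  durationMs: 180,
};
```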

Pick the SDK guide you actually use

Blyp splits AI tracing across dedicated entry points, so each SDK now has its own setup page:

Provider matrix

| Entry | Wrapped methods |
| --- | --- |
| `@blyp/core/ai/better-agent` | Better Agent app/plugin runs and manual tracker flows, emitted as one run-level trace per Better Agent run |
| `@blyp/core/ai/vercel` | Vercel AI SDK `generateText()` and `streamText()` |
| `@blyp/core/ai/openai` | OpenAI `responses.create()` and `chat.completions.create()`, plus OpenRouter through the OpenAI-compatible client path |
| `@blyp/core/ai/anthropic` | Anthropic `messages.create()` |
| `@blyp/core/ai/fetch` | Low-level `fetch` transport tracing for status, latency, request IDs, and optional JSON body inspection |
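All of the entry points above follow the same wrap-and-record pattern: intercept a call, time it, and emit a trace on success or failure. The sketch below shows that pattern in a self-contained form; `withTrace` and the mock `generate` call are hypothetical stand-ins for illustration, not Blyp APIs.

```typescript
// Illustrative wrap-and-record pattern. `withTrace` is a hypothetical
// stand-in for what the per-SDK Blyp entry points do, not a Blyp API.
type TraceEntry = { name: string; durationMs: number; ok: boolean };
const traces: TraceEntry[] = [];

function withTrace<A extends unknown[], R>(
  name: string,
  fn: (...args: A) => Promise<R>,
): (...args: A) => Promise<R> {
  return async (...args: A) => {
    const start = Date.now();
    try {
      const result = await fn(...args);
      traces.push({ name, durationMs: Date.now() - start, ok: true });
      return result;
    } catch (err) {
      // Failed calls still emit a trace before the error propagates.
      traces.push({ name, durationMs: Date.now() - start, ok: false });
      throw err;
    }
  };
}

// Mock model call standing in for e.g. generateText() or messages.create().
const generate = withTrace("mock.generate", async (prompt: string) => {
  return `echo: ${prompt}`;
});
```

The real entry points presumably also extract provider-specific fields (usage, finish reason, request IDs) before emitting the normalized record.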

Advanced API surface

Additional public APIs:

- Shared provider-level trace types such as `BlypAIProvider`, `BlypCaptureOptions`, `BlypExcludeOptions`, `BlypLimitOptions`, `BlypLLMTrace`, `BlypProviderOptions`, `BlypSDKContext`, and `BlypTraceEvent` are exported from `@blyp/core/ai/shared`.
- `BlypMiddlewareContext` remains on `@blyp/core/ai/vercel`.

See AI Privacy & Capture for the capture defaults and the exact capture, exclude, and limits controls.
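As a rough illustration of how those three controls might fit together in a `BlypCaptureOptions`-style object: all key names and values below are assumptions for the sake of the example, not the documented option names, so check AI Privacy & Capture for the real shape and defaults.

```typescript
// Hypothetical capture configuration; key names are illustrative
// assumptions, not the documented Blyp option names.
const captureOptions = {
  capture: {
    input: true,   // record prompt/input bodies
    output: false, // skip completion bodies
  },
  exclude: {
    headers: ["authorization", "cookie"], // never record credentials
  },
  limits: {
    maxBodyBytes: 16_384, // truncate large payloads before emitting
  },
};
```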