Database
Use database mode when Blyp cannot safely rely on local file persistence, especially in serverless or short-lived runtimes.
In this mode, Blyp replaces file logging as the primary persistence layer and writes normalized log rows into your application database. Connectors such as Better Stack, PostHog, Sentry, Databuddy, and OTLP still work independently.
Why the schema matters
Database mode is not just a config flag. It depends on a specific schema contract.
If the table, model, columns, indexes, or adapter wiring do not match what Blyp expects, database logging can become partially broken or fully unusable. Typical failure modes include:
- database logging being disabled during config resolution
- adapter startup failures because the expected model or table is missing
- failed inserts because required fields or JSON columns are missing
- degraded Studio queries and slower inspection when expected indexes are missing
- incorrect assumptions about what Blyp persists
Schema is required: Treat the generated Blyp schema as a compatibility contract. If you rename fields, remove indexes, or point the adapter at the wrong model or table, Blyp database logging may stop working correctly.
Supported setups
- Prisma + Postgres
- Prisma + MySQL
- Drizzle + Postgres
- Drizzle + MySQL
Required setup sequence
- Choose your adapter: Prisma or Drizzle.
- Choose your dialect: Postgres or MySQL.
- Create the required Blyp schema contract.
- Run migrations so the database actually matches that contract.
- Wire blyp.config.ts to the correct adapter runtime object.
- Run blyp db:generate if you are using Prisma.
- Emit a test log and confirm the row is inserted.
Required storage contract
The current CLI-generated contract is:
- SQL table:
blyp_logs - Prisma model:
BlypLog - Prisma adapter delegate name:
blypLog - Drizzle exported table symbol:
blypLogs
For the full column and index contract, see Schema Contract.
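As an orientation aid, the naming contract above could be expressed as a Drizzle schema roughly like the following sketch. The table and export names come from the contract; the specific columns and index shown here are illustrative assumptions, and the authoritative list lives in Schema Contract.

```typescript
// Sketch of a Drizzle Postgres schema matching the Blyp naming contract.
// Column set and index are ASSUMPTIONS for illustration only — use the
// CLI-generated schema from Schema Contract as the source of truth.
import { pgTable, serial, text, jsonb, timestamp, index } from "drizzle-orm/pg-core";

export const blypLogs = pgTable(
  "blyp_logs", // SQL table name required by the contract
  {
    id: serial("id").primaryKey(),
    level: text("level").notNull(),          // assumed column
    message: text("message").notNull(),      // assumed column
    meta: jsonb("meta"),                     // assumed JSON column
    traceId: text("trace_id"),               // assumed column
    createdAt: timestamp("created_at").defaultNow().notNull(),
  },
  (t) => [index("blyp_logs_created_at_idx").on(t.createdAt)] // assumed index
);
```

Keeping the exported symbol named blypLogs matters because the Drizzle adapter example below passes it directly as the table reference.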
Adapter guides
blyp.config.ts requirement
Database mode requires an executable config file such as blyp.config.ts, blyp.config.mts, blyp.config.js, or blyp.config.cjs.
blyp.config.json is not enough because Prisma and Drizzle adapters are runtime objects, not plain JSON values.
Prisma example
import { PrismaClient } from "@prisma/client";
import { createPrismaDatabaseAdapter } from "@blyp/core/database";
const prisma = new PrismaClient();
export default {
destination: "database",
database: {
dialect: "postgres",
adapter: createPrismaDatabaseAdapter({
client: prisma,
model: "blypLog",
}),
},
};

Drizzle example
import { createDrizzleDatabaseAdapter } from "@blyp/core/database";
import { db } from "./db";
import { blypLogs } from "./db/schema/blyp";
export default {
destination: "database",
database: {
dialect: "mysql",
adapter: createDrizzleDatabaseAdapter({
db,
table: blypLogs,
}),
},
};

Delivery behavior
Database delivery supports immediate writes and batched writes.
Default values:
- strategy: "immediate"
- batchSize: 1
- flushIntervalMs: 250
- maxQueueSize: 1000
- overflowStrategy: "drop-oldest"
- flushTimeoutMs: 5000
- retry.maxRetries: 1
- retry.backoffMs: 100
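To make the batching semantics concrete, here is a minimal, self-contained sketch of a queue that flushes when batchSize is reached and drops the oldest row on overflow. This is an illustration of the documented options, not Blyp's actual implementation, and it omits the timer-driven flushIntervalMs path.

```typescript
// Illustration only: batchSize-triggered flushing with "drop-oldest" overflow.
type Row = { message: string };

class BatchQueue {
  private queue: Row[] = [];
  flushed: Row[][] = []; // batches that would be handed to the database adapter

  constructor(private batchSize: number, private maxQueueSize: number) {}

  enqueue(row: Row): void {
    if (this.queue.length >= this.maxQueueSize) {
      this.queue.shift(); // overflowStrategy: "drop-oldest"
    }
    this.queue.push(row);
    if (this.queue.length >= this.batchSize) {
      this.flush(); // a real adapter also flushes on flushIntervalMs
    }
  }

  flush(): void {
    if (this.queue.length === 0) return;
    this.flushed.push(this.queue.splice(0)); // hand off everything queued
  }
}

const q = new BatchQueue(2, 3);
q.enqueue({ message: "a" });
q.enqueue({ message: "b" }); // hits batchSize → one batch written
console.log(q.flushed.length); // 1
```

With strategy: "immediate" and batchSize: 1 (the defaults), every log row is written as its own batch; raising batchSize trades write latency for fewer round trips.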
export default {
destination: "database",
database: {
dialect: "postgres",
adapter,
delivery: {
strategy: "batch",
batchSize: 50,
flushIntervalMs: 1000,
},
},
};

Flushing and shutdown
All Blyp loggers expose:
await logger.flush();
await logger.shutdown();

Promise-based and hook-driven integrations such as Elysia, Hono, Next.js, React Router, Astro, Nitro, Nuxt, SolidStart, SvelteKit, and TanStack Start flush database writes before the request finishes.
For callback-style servers such as Express, Fastify, and NestJS, call await logger.flush() at your own boundary when you need a hard durability point.
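An Express boundary might look like the following sketch. The ./blyp module and the logger.info call are assumptions standing in for however your app constructs its Blyp logger; the pattern being shown is only the explicit flush before the handler completes.

```typescript
// Sketch: a hard durability point in a callback-style server (Express).
// "./blyp" and logger.info are HYPOTHETICAL placeholders for your own setup.
import express from "express";
import { logger } from "./blyp";

const app = express();

app.post("/orders", async (_req, res) => {
  logger.info("order received"); // assumed logger method
  res.status(202).end();
  // Express does not await the handler's promise for you, so flush
  // explicitly: queued database writes are persisted once this resolves.
  await logger.flush();
});

app.listen(3000);
```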
CLI workflow
The recommended guided flow is:
blyp db:init
blyp db:migrate
blyp db:generate

db:generate is Prisma-only.
Without a global install:
bunx @blyp/cli db:init
bunx @blyp/cli db:migrate
bunx @blyp/cli db:generate

The detailed command behavior is documented in Migrations.
traceId in database rows
As of @blyp/[email protected], request trace IDs are persisted in database records. Use this together with Request Tracing when you want request logs, browser logs, AI traces, connector-forwarded logs, and database rows to share the same correlation ID.
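For example, correlating every persisted row for one request could look like this sketch using the Prisma delegate name from the contract. The traceId and createdAt field names are assumptions for illustration; confirm them against Schema Contract.

```typescript
// Sketch: query all database log rows sharing one request's correlation ID.
// Field names traceId / createdAt are ASSUMED — verify in Schema Contract.
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function logsForTrace(traceId: string) {
  // blypLog is the Prisma adapter delegate name from the storage contract.
  return prisma.blypLog.findMany({
    where: { traceId },
    orderBy: { createdAt: "asc" },
  });
}
```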