Production
This guide covers Blyp behavior in production deployments: where logs are persisted, what logger.flush() actually guarantees, how connectors behave under sustained load, and what changes in serverless and Workers runtimes.
Use this page together with Configuration, File Logging, Database Logging, Connectors, Next.js App Router, and Cloudflare Workers.
File Logging In Production
What file mode is today
File mode is Blyp's default destination. Blyp writes NDJSON synchronously to local disk, using:
- `logs/log.ndjson` for the active combined stream
- `logs/log.error.ndjson` for the active error and critical stream
error and critical records are written to both streams. Rotation is size-based, not time-based.
Rotation thresholds and archive strategy
Current defaults:
- rotation threshold: 10 MB
- retained archives: 5 per stream
- compression: gzip enabled
Archived files are written under `logs/archive/` with UTC timestamps such as `log.20260309T121530Z.ndjson.gz`. If a timestamp collision happens, Blyp appends `-1`, `-2`, and so on to keep the archive name unique.
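For illustration, the suffixing rule can be sketched like this (a hypothetical helper, not Blyp's internal code):

```ts
import { existsSync } from "node:fs";
import { join } from "node:path";

// Hypothetical sketch of the collision rule described above, not Blyp internals:
// try log.<stamp>.ndjson.gz first, then append -1, -2, ... until the name is free.
function archiveName(dir: string, stamp: string, suffix = 0): string {
  const base =
    suffix === 0
      ? `log.${stamp}.ndjson.gz`
      : `log.${stamp}-${suffix}.ndjson.gz`;
  const candidate = join(dir, base);
  return existsSync(candidate) ? archiveName(dir, stamp, suffix + 1) : candidate;
}

// archiveName("logs/archive", "20260309T121530Z")
// -> "logs/archive/log.20260309T121530Z.ndjson.gz", or "...-1.ndjson.gz" on collision
```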
Rotation happens before appending a new line when the next write would exceed `maxSizeBytes`. On startup, Blyp also rotates an existing active file if it is already larger than the configured threshold. Archive pruning is based on archive file modification time.
```ts
export default {
file: {
rotation: {
maxSizeBytes: 10 * 1024 * 1024,
maxArchives: 5,
compress: true,
},
},
};
```

Durability and flush guarantees in file mode
File writes use synchronous filesystem appends. In current @blyp/core, the file primary sink flush() is a no-op, so await logger.flush() does not add an extra durability guarantee in file mode.
The practical persistence boundary is the local synchronous append itself, subject to normal OS and filesystem behavior. await logger.shutdown() also does not add anything beyond normal write completion for file persistence. This is local synchronous append behavior, not an fsync-level or crash-proof durability guarantee.
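In practice that means flush and shutdown calls are safe but redundant in file mode. A minimal sketch, assuming a standalone logger with the default file destination:

```ts
import { createStandaloneLogger } from "@blyp/core";

// Default destination is file mode; the synchronous append at each log call
// is already the persistence boundary.
const logger = createStandaloneLogger({});

logger.error("persisted by the synchronous append itself");
await logger.flush(); // no-op for the file primary sink in current @blyp/core
```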
Disk space considerations
For capacity planning:
- active footprint is roughly `2 * maxSizeBytes` because the combined and error streams can both be active
- archive footprint depends on traffic mix, error rate, and gzip compression ratio
- error-heavy services consume more space because error and critical records are duplicated into the error stream
- Blyp does not enforce a total disk budget across active and archived files
Operators should provision disk with room for both active files and retained archives, or pair Blyp with external retention and shipping controls at the platform level.
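As a worked example under the defaults above (illustrative arithmetic only, ignoring filesystem overhead):

```ts
const maxSizeBytes = 10 * 1024 * 1024; // 10 MB rotation threshold
const maxArchives = 5;                 // retained archives per stream
const streams = 2;                     // combined + error streams

// Active files: both streams can sit just under the threshold at once.
const activeBytes = streams * maxSizeBytes; // ~20 MB

// Archives are gzipped, so the real footprint depends on compression ratio;
// this is the pre-compression upper bound.
const archiveBytesUpperBound = streams * maxArchives * maxSizeBytes; // ~100 MB
```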
Practical recommendations:
- increase `maxArchives` only when local retention is intentionally part of your ops model
- prefer Database Logging or external shipping when container-local disk is ephemeral or tightly capped
Reading rotated archives
readLogFile() reads both active .ndjson files and rotated .ndjson.gz archives. Rotated archives stay newline-delimited JSON after decompression and live under logs/archive/.
```ts
import { readLogFile } from "@blyp/core";
const active = await readLogFile("logs/log.ndjson", {
format: "json",
limit: 100,
});
```

```ts
import { readLogFile } from "@blyp/core";
const archived = await readLogFile("logs/archive/log.20260309T121530Z.ndjson.gz", {
format: "json",
limit: 100,
});
```

Database Mode In Production
When to use database mode vs file mode
| Deployment shape | Recommended primary sink | Why |
|---|---|---|
| Long-lived Node/Bun server with reliable writable disk | file | Local persistence is simple and fast when disk is stable |
| Serverless function with ephemeral or read-only filesystem | database | Local file persistence is not reliable enough |
| Container with tightly capped local storage | database | Avoid relying on local archive retention |
| Service that still uses external connectors | either | Connectors stay independent from the primary destination |
destination: "database" is usually the better fit when local persistence is unreliable, ephemeral, or read-only.
What database mode changes
Database mode replaces only the primary file sink. Connectors such as Better Stack, PostHog, Sentry, and OTLP continue to run independently.
Blyp keeps an in-memory queue for database writes. Current default delivery settings are:
strategy: "immediate"batchSize: 1flushIntervalMs: 250maxQueueSize: 1000overflowStrategy: "drop-oldest"flushTimeoutMs: 5000retry.maxRetries: 1retry.backoffMs: 100
Queue, overflow, retry, and flush behavior
Current database delivery semantics:
- immediate mode still queues internally and drains one record at a time
- batch mode drains up to `batchSize` records per insert
- if the queue exceeds `maxQueueSize`, Blyp drops records according to `overflowStrategy`
- the default overflow strategy is `drop-oldest`
- retry applies only to database insertion failures, not connector delivery
- `logger.flush()` waits for the database queue to drain or until `flushTimeoutMs` is reached
- on flush timeout, Blyp warns and rejects the flush promise, so treat `flush()` as fallible (see the sketch after this list)
- after a terminal database insert failure, future flushes reject with that terminal error
- `shutdown()` stops accepting new writes, then flushes what is already queued
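Because flush can reject on timeout or after a terminal insert failure, shutdown paths should treat it as fallible. A minimal sketch:

```ts
// Minimal sketch: database-mode flush() rejects when flushTimeoutMs elapses
// before the queue drains, and after a terminal database insert failure.
async function flushSafely(logger: { flush(): Promise<void> }): Promise<void> {
  try {
    await logger.flush();
  } catch (error) {
    // Reached on flush timeout or a terminal insert failure; records may be lost.
    console.warn("blyp flush did not complete", error);
  }
}
```

A batch-mode delivery configuration looks like this: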
```ts
export default {
destination: "database",
database: {
dialect: "postgres",
adapter,
delivery: {
strategy: "batch",
batchSize: 50,
flushIntervalMs: 1000,
maxQueueSize: 1000,
overflowStrategy: "drop-oldest",
flushTimeoutMs: 5000,
retry: {
maxRetries: 1,
backoffMs: 100,
},
},
},
};
```

Prisma and Drizzle connection pooling behavior
Blyp does not create or manage its own connection pool. It uses the Prisma client or Drizzle database instance that you pass into the adapter, so pooling behavior comes from your Prisma or Drizzle setup and the driver or proxy behind it.
Current adapter behavior:
- Prisma uses `createMany` when available for multi-row inserts
- if `createMany` is unavailable or unsupported, Prisma falls back to `$transaction([...create])` or serial `create()` calls
- Drizzle uses `db.insert(table).values(rows)`
Recommended production setup:
- use a shared singleton Prisma or Drizzle client per process or runtime instance (see the sketch after this list)
- in serverless SQL deployments, pair Blyp with the same pooling or proxy strategy you already use for application queries
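For Drizzle, a shared client can look like the sketch below. This is a generic `drizzle-orm` + `pg` setup shown for illustration; pooling comes entirely from your driver, and the actual adapter wiring lives in Database Logging:

```ts
// lib/db.ts -- one shared Drizzle instance per process; Blyp reuses whatever
// pooling the underlying pg Pool provides.
import { drizzle } from "drizzle-orm/node-postgres";
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
export const db = drizzle(pool);
```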
See Database Logging and Configuration for adapter setup.
Flush guarantees by runtime style
Promise-based adapters auto-flush in database mode before request completion. Current auto-flushing integrations are:
- Hono
- Elysia
- Next.js
- React Router
- Astro
- Nitro
- Nuxt
- SolidStart
- SvelteKit
- TanStack Start
Callback-style servers do not auto-flush to the same hard persistence boundary. For Express, Fastify, NestJS, standalone handlers, and custom serverless wrappers, call await logger.flush() where you need the database persistence boundary.
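For example, an Express handler in database mode (a sketch; the `database` block is elided, see the configuration examples above):

```ts
import express from "express";
import { createStandaloneLogger } from "@blyp/core";

const logger = createStandaloneLogger({
  destination: "database",
  // database: { dialect, adapter, ... } as in the examples above
});

const app = express();

app.post("/orders", async (_req, res) => {
  logger.info("creating order");
  await logger.flush(); // explicit persistence boundary; Express does not auto-flush
  res.json({ ok: true });
});
```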
Connector Delivery At Scale
Durable connector delivery model
As of `@blyp/core@0.1.22`, server-side connector forwarding can opt into a Blyp-managed delivery queue through `connectors.delivery`.
Current behavior:
- an in-memory hot buffer accepts connector work first
- retryable connector failures can spill into a Blyp-managed SQLite queue
- the default queue path is `.blyp/connectors.sqlite`
- `.blyp/` should be ignored in git
- queueing applies to server connector forwarding only, not the primary `file` or `database` destination
- Blyp uses at-least-once style delivery semantics for queued connector work
- older runtimes without built-in SQLite support fall back to memory-only retries with a warning
Connector-by-connector behavior
| Connector | Current Blyp behavior |
|---|---|
| Better Stack | Uses @logtail/node for log delivery and can enqueue retryable failures through Blyp's connector queue |
| PostHog | Uses OTLP HTTP for logs plus posthog-node for exceptions, with retryable delivery eligible for Blyp queueing |
| Sentry | Uses the Sentry SDK directly and can queue retryable log delivery work through Blyp |
| OTLP | Uses OpenTelemetry log exporters per target and can persist retryable delivery work through Blyp's queue |
Batch delivery behavior
Blyp batches queued connector work at the delivery-manager level through memoryBatchSize, sqliteWriteBatchSize, and sqliteReadBatchSize. Connector SDKs may still batch internally as well.
That means effective behavior is a combination of Blyp queue settings and connector SDK behavior. Do not assume identical batching semantics across Better Stack, PostHog, Sentry, and OTLP.
What happens when a connector is slow
Current behavior at sustained load:
- the primary sink continues independently
- retryable connector failures can spill from memory into SQLite when durable delivery is enabled
- queued work is retried with configurable backoff, concurrency, polling, and overflow behavior
- exhausted jobs move to dead letters instead of disappearing silently
- connector failures do not stop the primary sink from writing file or database logs
- file and database persistence stay separate from connector forwarding work
logger.flush() first flushes the primary sink, then waits for connector delivery work, including durable staged writes and replayed queue jobs. This is a stronger boundary than the pre-0.1.22 best-effort connector flush behavior, but it still does not turn vendor-side availability into a guaranteed success outcome.
Rate limits per connector
Better Stack, PostHog, Sentry, and OTLP collectors all have vendor-specific ingestion limits. Blyp does not publish per-connector rate ceilings because those limits still depend on vendor APIs and SDKs.
In practice:
- use the vendor docs from each connector page for current ingestion limits
- treat sustained throttling or `429` responses as vendor or SDK operational concerns
- size your connector usage and alerting around both vendor ingestion health and Blyp queue status when durable delivery is enabled
Serverless-Specific Guidance
Core rules
For serverless deployments:
- file logging is not appropriate on read-only or ephemeral filesystems
- database mode is the recommended primary sink for serverless Node runtimes
- call `await logger.flush()` before the handler returns when you are outside an auto-flushing promise-based Blyp adapter
- Cloudflare Workers use a separate API and do not use file or database destinations or `blyp.config.*`
Recommended config matrix
| Runtime | Recommended primary sink | Config surface | Flush guidance |
|---|---|---|---|
| Vercel Serverless Functions / Next.js Node runtime | database | blyp.config.ts | automatic in the Next.js adapter for database mode, manual in standalone or custom wrappers |
| Vercel Edge | none of the Node/Bun primary sinks | no current dedicated Edge-only Blyp primary sink guidance | do not assume file or database mode; keep usage conservative and avoid Node/Bun persistence assumptions |
| Cloudflare Workers | console-based Workers logger | @blyp/core/workers | no logger.flush() path; emit request logs explicitly with emit() |
| AWS Lambda | database | executable config plus shared DB client | call await logger.flush() before returning in standalone or custom handling |
If you are deploying to Vercel Edge, Blyp does not currently document a dedicated Edge runtime API the way it documents Cloudflare Workers. Keep the recommendation conservative and avoid treating Edge like a writable Node runtime.
Vercel / Next.js App Router example
Use database mode with a shared Prisma client outside the route handler. In current @blyp/core, the Next.js adapter auto-flushes database writes before the response completes.
```ts
// lib/prisma.ts
import { PrismaClient } from "@prisma/client";
const globalForPrisma = globalThis as typeof globalThis & {
prisma?: PrismaClient;
};
export const prisma = globalForPrisma.prisma ?? new PrismaClient();
if (process.env.NODE_ENV !== "production") {
globalForPrisma.prisma = prisma;
}
```

```ts
// blyp.config.ts
import { createPrismaDatabaseAdapter } from "@blyp/core/database";
import { prisma } from "./lib/prisma";
export default {
destination: "database",
database: {
dialect: "postgres",
adapter: createPrismaDatabaseAdapter({
client: prisma,
model: "blypLog",
}),
},
};
```

```ts
// app/api/orders/route.ts
import { createLogger } from "@blyp/core/nextjs";
const nextLogger = createLogger();
export const POST = nextLogger.withLogger(async (_request, _context, { log }) => {
log.info("creating order");
return Response.json({ ok: true });
});
```

Cloudflare Workers example
Cloudflare Workers use @blyp/core/workers. This is a console-based API with explicit request emission. There is no file sink, no database primary sink, no blyp.config.*, and no logger.flush() path.
```ts
import { initWorkersLogger, createWorkersLogger } from "@blyp/core/workers";
initWorkersLogger({
env: { service: "edge-api" },
});
export default {
async fetch(request: Request): Promise<Response> {
const log = createWorkersLogger(request);
log.info("worker request started");
const response = Response.json({ ok: true });
log.emit({ response });
return response;
},
};
```

AWS Lambda example
In standalone or custom Lambda handling, keep the database client outside the handler and flush before returning.
```ts
import { PrismaClient } from "@prisma/client";
import { createStandaloneLogger } from "@blyp/core";
import { createPrismaDatabaseAdapter } from "@blyp/core/database";
const prisma = new PrismaClient();
const logger = createStandaloneLogger({
destination: "database",
database: {
dialect: "postgres",
adapter: createPrismaDatabaseAdapter({
client: prisma,
model: "blypLog",
}),
},
});
export async function handler() {
logger.info("lambda invocation started");
await logger.flush();
return {
statusCode: 200,
body: JSON.stringify({ ok: true }),
};
}
```

Cold start impact
Blyp itself adds relatively little startup work; in practice, cold start cost is dominated by your database client, connector SDKs, and the runtime environment.
Recommended guidance:
- keep Prisma or Drizzle client initialization outside the handler when the platform can reuse it
- initialize connector SDKs through normal application startup paths, not per-request if you can avoid it
- expect Workers to avoid file and database sink setup because they use the separate console-based API
Outages And Monitoring
Current behavior during connector outages
During connector outages or sustained connector failures:
- the primary sink continues independently
- retryable connector work can remain in memory or spill into the durable SQLite queue
- Blyp tracks delivery status such as pending counts, last success, last failure, and last error per connector target
- exhausted jobs are moved to dead letters for inspection and replay
- database mode and connector delivery still remain separate systems with separate queues and failure boundaries
Queue depth monitoring
Current monitoring guidance:
- use `getConnectorDeliveryStatusSummary()` or Studio tooling to inspect queue state
- use `listConnectorDeadLetters()` to inspect exhausted jobs
- use `retryConnectorDeadLetters()` or `clearConnectorDeadLetters()` as part of recovery tooling (a sketch follows this list)
- for database mode, monitor application warnings about overflow and track downstream insert latency and errors externally
- for connectors, monitor both vendor ingestion health and Blyp queue status when durable delivery is enabled
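A minimal inspection sketch built from the functions named above (the import path and return shapes are assumptions; check the Connectors reference):

```ts
import {
  getConnectorDeliveryStatusSummary,
  listConnectorDeadLetters,
  retryConnectorDeadLetters,
} from "@blyp/core"; // assumed import path

// Surface queue state (pending counts, last success/failure per target).
const summary = await getConnectorDeliveryStatusSummary();
console.log("connector delivery status", summary);

// Inspect exhausted jobs, then replay once the vendor-side issue is resolved.
const deadLetters = await listConnectorDeadLetters();
if (deadLetters.length > 0) {
  await retryConnectorDeadLetters();
}
```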