Blyp Docs

Production

This guide covers Blyp behavior in production deployments: where logs are persisted, what logger.flush() actually guarantees, how connectors behave under sustained load, and what changes in serverless and Workers runtimes.

Use this page together with Configuration, File Logging, Database Logging, Connectors, Next.js App Router, and Cloudflare Workers.

File Logging In Production

What file mode is today

File mode is Blyp's default destination. Blyp writes NDJSON synchronously to local disk, split across a main log stream and a separate error stream; error and critical records are written to both streams. Rotation is size-based, not time-based.

Rotation thresholds and archive strategy

Current defaults are a maxSizeBytes threshold of 10 MiB, 5 retained archives (maxArchives), and gzip compression enabled, as shown in the configuration below.

Archived files are written under logs/archive/ with UTC timestamps such as log.20260309T121530Z.ndjson.gz. If a timestamp collision happens, Blyp appends -1, -2, and so on to keep the archive name unique.

Rotation happens before appending a new line when the next write would exceed maxSizeBytes. On startup, Blyp also rotates an existing active file if it is already larger than the configured threshold. Archive pruning is based on archive file modification time.

// blyp.config.ts
export default {
  file: {
    rotation: {
      maxSizeBytes: 10 * 1024 * 1024,
      maxArchives: 5,
      compress: true,
    },
  },
};
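The rotate-before-append check and the archive-name collision handling described above can be sketched as follows. shouldRotate and uniqueArchiveName are illustrative names, not Blyp APIs, and the exact placement of the numeric suffix is an assumption:

```typescript
// Sketch of the size-based rotation check: rotate before appending a
// line when the next write would push the active file past maxSizeBytes.
function shouldRotate(
  currentSizeBytes: number,
  nextLineBytes: number,
  maxSizeBytes: number,
): boolean {
  return currentSizeBytes + nextLineBytes > maxSizeBytes;
}

// Sketch of archive-name collision handling: append -1, -2, ... before
// the extension until the candidate name is unused. Suffix placement is
// an assumption; Blyp only documents that a numeric suffix is appended.
function uniqueArchiveName(base: string, existing: Set<string>): string {
  if (!existing.has(base)) return base;
  for (let n = 1; ; n++) {
    const candidate = base.replace(/\.ndjson\.gz$/, `-${n}.ndjson.gz`);
    if (!existing.has(candidate)) return candidate;
  }
}
```
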

Durability and flush guarantees in file mode

File writes use synchronous filesystem appends. In current @blyp/core, the file primary sink flush() is a no-op, so await logger.flush() does not add an extra durability guarantee in file mode.

The practical persistence boundary is the local synchronous append itself, subject to normal OS and filesystem behavior; await logger.shutdown() likewise adds nothing beyond normal write completion for file persistence. This is local synchronous append behavior, not an fsync-level or crash-proof durability guarantee.

Disk space considerations

For capacity planning, provision disk with room for both active files and retained archives, or pair Blyp with external retention and shipping controls at the platform level.
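With the default rotation settings (10 MiB active file, 5 retained archives, compression on), a simple worst-case bound ignores compression entirely. worstCaseDiskBytes is an illustrative helper, not a Blyp API:

```typescript
// Worst-case on-disk budget for file mode: one active file plus the
// retained archives, pessimistically assuming compression saves nothing.
function worstCaseDiskBytes(maxSizeBytes: number, maxArchives: number): number {
  return maxSizeBytes * (1 + maxArchives);
}

// With the documented defaults: 10 MiB * (1 + 5) = 60 MiB.
const budget = worstCaseDiskBytes(10 * 1024 * 1024, 5);
```

In practice gzip compresses NDJSON well, so real archive usage is usually far below this bound; treat it as the safe provisioning floor.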

Reading rotated archives

readLogFile() reads both active .ndjson files and rotated .ndjson.gz archives. Rotated archives stay newline-delimited JSON after decompression and live under logs/archive/.

import { readLogFile } from "@blyp/core";

const active = await readLogFile("logs/log.ndjson", {
  format: "json",
  limit: 100,
});

const archived = await readLogFile("logs/archive/log.20260309T121530Z.ndjson.gz", {
  format: "json",
  limit: 100,
});

Database Mode In Production

When to use database mode vs file mode

| Deployment shape | Recommended primary sink | Why |
| --- | --- | --- |
| Long-lived Node/Bun server with reliable writable disk | file | Local persistence is simple and fast when disk is stable |
| Serverless function with ephemeral or read-only filesystem | database | Local file persistence is not reliable enough |
| Container with tightly capped local storage | database | Avoid relying on local archive retention |
| Service that still uses external connectors | either | Connectors stay independent from the primary destination |

destination: "database" is usually the better fit when local persistence is unreliable, ephemeral, or read-only.

What database mode changes

Database mode replaces only the primary file sink. Connectors such as Better Stack, PostHog, Sentry, and OTLP continue to run independently.

Blyp keeps an in-memory queue for database writes; the current default delivery settings match the delivery block in the example configuration below.

Queue, overflow, retry, and flush behavior

The configuration below spells out the current defaults for batching, queue size, overflow handling, flush timeout, and retry:

export default {
  destination: "database",
  database: {
    dialect: "postgres",
    adapter,
    delivery: {
      strategy: "batch",
      batchSize: 50,
      flushIntervalMs: 1000,
      maxQueueSize: 1000,
      overflowStrategy: "drop-oldest",
      flushTimeoutMs: 5000,
      retry: {
        maxRetries: 1,
        backoffMs: 100,
      },
    },
  },
};

Prisma and Drizzle connection pooling behavior

Blyp does not create or manage its own connection pool. It uses the Prisma client or Drizzle database instance that you pass into the adapter, so pooling behavior comes from your Prisma or Drizzle setup and the driver or proxy behind it.

See Database Logging and Configuration for current adapter behavior, adapter setup, and recommended production settings.

Flush guarantees by runtime style

Promise-based adapters auto-flush in database mode before request completion; in current Blyp, the Next.js App Router adapter is the documented example of such an integration.

Callback-style servers do not auto-flush to the same hard persistence boundary. For Express, Fastify, NestJS, standalone handlers, and custom serverless wrappers, call await logger.flush() where you need the database persistence boundary.
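One way to make that manual flush hard to forget in callback-style servers is a small wrapper that flushes in a finally block. This is a sketch against a minimal assumed logger interface, not a Blyp API:

```typescript
// Minimal logger surface assumed by this sketch.
interface FlushableLogger {
  info(message: string): void;
  flush(): Promise<void>;
}

// Wrap an async handler so logger.flush() runs before the response
// path completes, even when the handler throws.
function withFlush<Args extends unknown[], R>(
  logger: FlushableLogger,
  handler: (...args: Args) => Promise<R>,
): (...args: Args) => Promise<R> {
  return async (...args: Args) => {
    try {
      return await handler(...args);
    } finally {
      await logger.flush();
    }
  };
}
```

The same pattern works for Express route handlers or custom serverless wrappers: wrap the handler once and the database persistence boundary is always reached, even on errors.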

Connector Delivery At Scale

Durable connector delivery model

As of @blyp/core 0.1.22, server-side connector forwarding can opt into a Blyp-managed delivery queue through connectors.delivery. Retryable delivery failures can be staged in that queue and replayed later, independently of the primary sink.

Connector-by-connector behavior

| Connector | Current Blyp behavior |
| --- | --- |
| Better Stack | Uses @logtail/node for log delivery and can enqueue retryable failures through Blyp's connector queue |
| PostHog | Uses OTLP HTTP for logs plus posthog-node for exceptions, with retryable delivery eligible for Blyp queueing |
| Sentry | Uses the Sentry SDK directly and can queue retryable log delivery work through Blyp |
| OTLP | Uses OpenTelemetry log exporters per target and can persist retryable delivery work through Blyp's queue |

Batch delivery behavior

Blyp batches queued connector work at the delivery-manager level through memoryBatchSize, sqliteWriteBatchSize, and sqliteReadBatchSize. Connector SDKs may still batch internally as well.

That means effective behavior is a combination of Blyp queue settings and connector SDK behavior. Do not assume identical batching semantics across Better Stack, PostHog, Sentry, and OTLP.
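The Blyp-level side of that combination is plain chunking of queued work into bounded batches; chunk below is an illustrative sketch, not a Blyp export:

```typescript
// Split queued connector work into delivery batches of at most
// batchSize entries, preserving insertion order.
function chunk<T>(items: T[], batchSize: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}
```

Whatever the connector SDK then does with each batch (re-batching, buffering, compression) is vendor-specific and layered on top of this.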

What happens when a connector is slow

When a connector cannot keep up under sustained load, queued delivery work accumulates and is drained in batches according to the delivery-manager settings described above.

logger.flush() first flushes the primary sink, then waits for connector delivery work, including durable staged writes and replayed queue jobs. This is a stronger boundary than the pre-0.1.22 best-effort connector flush behavior, but it still does not turn vendor-side availability into a guaranteed success outcome.

Rate limits per connector

Better Stack, PostHog, Sentry, and OTLP collectors all have vendor-specific ingestion limits. Blyp does not publish per-connector rate ceilings because those limits depend on the vendor APIs and SDKs involved. In practice, consult each vendor's documented ingestion limits and size your expected log volume against them.

Serverless-Specific Guidance

Core rules

For serverless deployments:

| Runtime | Recommended primary sink | Config surface | Flush guidance |
| --- | --- | --- | --- |
| Vercel Serverless Functions / Next.js Node runtime | database | blyp.config.ts | Automatic in the Next.js adapter for database mode; manual in standalone or custom wrappers |
| Vercel Edge | None of the Node/Bun primary sinks | No current dedicated Edge-only Blyp primary sink guidance | Do not assume file or database mode; keep usage conservative and avoid Node/Bun persistence assumptions |
| Cloudflare Workers | Console-based Workers logger | @blyp/core/workers | No logger.flush() path; emit request logs explicitly with emit() |
| AWS Lambda | database | Executable config plus shared DB client | Call await logger.flush() before returning in standalone or custom handling |

If you are deploying to Vercel Edge, Blyp does not currently document a dedicated Edge runtime API the way it documents Cloudflare Workers. Keep the recommendation conservative and avoid treating Edge like a writable Node runtime.

Vercel / Next.js App Router example

Use database mode with a shared Prisma client outside the route handler. In current @blyp/core, the Next.js adapter auto-flushes database writes before the response completes.

// lib/prisma.ts
import { PrismaClient } from "@prisma/client";

const globalForPrisma = globalThis as typeof globalThis & {
  prisma?: PrismaClient;
};

export const prisma = globalForPrisma.prisma ?? new PrismaClient();

if (process.env.NODE_ENV !== "production") {
  globalForPrisma.prisma = prisma;
}
// blyp.config.ts
import { createPrismaDatabaseAdapter } from "@blyp/core/database";
import { prisma } from "./lib/prisma";

export default {
  destination: "database",
  database: {
    dialect: "postgres",
    adapter: createPrismaDatabaseAdapter({
      client: prisma,
      model: "blypLog",
    }),
  },
};
// app/api/orders/route.ts
import { createLogger } from "@blyp/core/nextjs";

const nextLogger = createLogger();

export const POST = nextLogger.withLogger(async (_request, _context, { log }) => {
  log.info("creating order");
  return Response.json({ ok: true });
});

Cloudflare Workers example

Cloudflare Workers use @blyp/core/workers. This is a console-based API with explicit request emission. There is no file sink, no database primary sink, no blyp.config.*, and no logger.flush() path.

import { initWorkersLogger, createWorkersLogger } from "@blyp/core/workers";

initWorkersLogger({
  env: { service: "edge-api" },
});

export default {
  async fetch(request: Request): Promise<Response> {
    const log = createWorkersLogger(request);

    log.info("worker request started");

    const response = Response.json({ ok: true });
    log.emit({ response });

    return response;
  },
};

AWS Lambda example

In standalone or custom Lambda handling, keep the database client outside the handler and flush before returning.

import { PrismaClient } from "@prisma/client";
import { createStandaloneLogger } from "@blyp/core";
import { createPrismaDatabaseAdapter } from "@blyp/core/database";

const prisma = new PrismaClient();

const logger = createStandaloneLogger({
  destination: "database",
  database: {
    dialect: "postgres",
    adapter: createPrismaDatabaseAdapter({
      client: prisma,
      model: "blypLog",
    }),
  },
});

export async function handler() {
  logger.info("lambda invocation started");

  await logger.flush();

  return {
    statusCode: 200,
    body: JSON.stringify({ ok: true }),
  };
}

Cold start impact

Blyp itself adds relatively little startup work compared with database client and connector SDK initialization. In practice, cold start cost is dominated by your DB client, vendor SDKs, and the runtime environment.

Recommended guidance: initialize database clients, connector SDKs, and the logger once at module scope, outside the request handler, and reuse them across warm invocations.
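Expensive clients are best created once per runtime instance and reused across warm invocations. A generic memoized-initializer sketch of that pattern (lazyOnce is an illustrative helper, not a Blyp API):

```typescript
// Memoize an expensive initializer so it runs at most once per
// runtime instance (e.g. per warm Lambda container), deferring the
// cost until the first invocation that actually needs the client.
function lazyOnce<T>(init: () => T): () => T {
  let value: T | undefined;
  let initialized = false;
  return () => {
    if (!initialized) {
      value = init();
      initialized = true;
    }
    return value as T;
  };
}
```

Cold starts still pay the initialization cost once, but every warm invocation after that reuses the same instance.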

Outages And Monitoring

Current behavior during connector outages

During connector outages or sustained connector failures, the primary sink keeps operating independently, and retryable connector delivery work can be staged in Blyp's queue and replayed once the vendor recovers.

Queue depth monitoring

Monitor the depth of Blyp's connector delivery queue over time: sustained growth usually indicates a slow or failing connector, while a queue that repeatedly drains back toward zero reflects normal bursts.
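If you can sample the connector queue depth, one lightweight heuristic flags sustained growth across consecutive samples rather than alerting on momentary bursts. sustainedGrowth is an illustrative sketch, not a Blyp API:

```typescript
// Flag sustained queue growth: alert only when depth has strictly
// increased for `window` consecutive samples, which usually indicates
// a slow or failing connector rather than a momentary burst.
function sustainedGrowth(samples: number[], window: number): boolean {
  if (samples.length < window + 1) return false;
  const recent = samples.slice(-(window + 1));
  for (let i = 1; i < recent.length; i++) {
    if (recent[i] <= recent[i - 1]) return false;
  }
  return true;
}
```

Feed this from whatever scheduler your platform offers (a timer, a metrics scraper, a health endpoint) and page when it returns true.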