---
url: 'https://loglayer.dev/introduction.md'
description: Learn more about LogLayer and how it unifies your logging experience
---

# Introduction

Most logging libraries offer the usual methods like `info`, `warn`, and `error`, but vary significantly in how they handle structured metadata and `Error` objects. This inconsistency leads to ad-hoc solutions and code that's tightly coupled to a specific logger.

LogLayer solves this by providing a fluent, expressive API that routes logs to any logging library, cloud provider, files, or OpenTelemetry through its transport system.

```typescript
log
  .withMetadata({ userId: '1234' })
  .withError(new Error('Something went wrong'))
  .error('User action completed')
```

```json
{
  "msg": "User action completed",
  "userId": "1234",
  "err": {
    "message": "Something went wrong",
    "stack": "Error: Something went wrong\n at ..."
  }
}
```

## Multi-Platform Support

LogLayer works seamlessly across server-side and browser environments, and supports multiple JavaScript runtimes including Node.js, Deno, and Bun.

*Individual transports and plugins may have specific environment requirements, which are noted on their respective pages.*

See the [getting started guide](/getting-started) for setup instructions.

## Bring Your Own Logger

LogLayer is designed to sit on top of your logging library (or libraries) of choice, such as `pino`, `winston`, `bunyan`, and more.

Learn more about logging [transports](/transports/).

## Consistent API

No need to remember different parameter orders or method names between logging libraries:

```typescript
// With loglayer - consistent API regardless of logging library
log.withMetadata({ some: 'data' }).info('my message')

// Without loglayer - different APIs for different libraries
winston.info('my message', { some: 'data' }) // winston
bunyan.info({ some: 'data' }, 'my message') // bunyan
```

Start with [basic logging](/logging-api/basic-logging).
## Separation of Errors, Context, and Metadata LogLayer distinguishes between three types of structured data, each serving a specific purpose: | Type | Method | Scope | Purpose | |------|--------|-------|---------| | **Context** | `withContext()` | Persistent across all logs | Request IDs, user info, session data | | **Metadata** | `withMetadata()` | Single log entry only | Event-specific details like durations, counts | | **Errors** | `withError()` | Single log entry only | Error objects with stack traces | This separation provides several benefits: * **Clarity**: Each piece of data has a clear purpose and appropriate scope * **No pollution**: Per-log metadata doesn't accidentally persist to future logs * **Flexible output**: Configure where each type appears in the final log (root level or dedicated fields) * **Better debugging**: Errors are handled consistently with proper serialization ```typescript log .withContext({ requestId: 'abc-123' }) // Persists for all future logs .withMetadata({ duration: 150 }) // Only for this log entry .withError(new Error('Timeout')) // Only for this log entry .error('Request failed') ``` ```json { "msg": "Request failed", "requestId": "abc-123", "duration": 150, "err": { "message": "Timeout", "stack": "Error: Timeout\n at ..." } } ``` *Context, metadata, and errors can be placed in dedicated fields via [configuration](/configuration).* See the dedicated pages for [context](/logging-api/context), [metadata](/logging-api/metadata), and [errors](/logging-api/error-handling). ## Battle Tested LogLayer has been in production use for at least four years at [Airtop.ai](https://airtop.ai) (formerly Switchboard) in multiple backend and frontend systems. *LogLayer is not affiliated with Airtop.* ## Tiny and Tree-Shakable * `loglayer` standalone is 5kB gzipped. * Most logging-based LogLayer transports are < 1kB gzipped. * All LogLayer packages are tree-shakable. 
## Powerful Plugin System Extend functionality with plugins: ```typescript const log = new LogLayer({ plugins: [{ onBeforeDataOut: (params) => { // Redact sensitive information before logging if (params.data?.password) { params.data.password = '***' } return params.data } }] }) ``` See more about using and creating [plugins](/plugins/). ## Multiple Logger Support Send your logs to multiple destinations simultaneously: ```typescript import { LogLayer } from 'loglayer' import { PinoTransport } from "@loglayer/transport-pino" import { DatadogBrowserLogsTransport } from "@loglayer/transport-datadog-browser-logs" import { datadogLogs } from '@datadog/browser-logs' import pino from 'pino' // Initialize Datadog datadogLogs.init({ clientToken: '', site: '', forwardErrorsToLogs: true, }) const log = new LogLayer({ transport: [ new PinoTransport({ logger: pino() }), new DatadogBrowserLogsTransport({ id: "datadog", logger: datadogLogs }) ] }) // Logs will be sent to both Pino and Datadog log.info('User logged in successfully') ``` See more about [multi-transport support](/transports/multiple-transports). ## HTTP Logging Send logs directly to any HTTP endpoint without a third-party logging library. Supports batching, retries, and custom headers. See the [HTTP transport](/transports/http) for more details. ## File Logging Write logs directly to files with support for rotation based on time or size, optional compression, and batching. See the [Log File Rotation transport](/transports/log-file-rotation) for more details. ## OpenTelemetry Send logs to OpenTelemetry collectors with the [OpenTelemetry transport](/transports/opentelemetry), or enrich logs with trace context using the [OpenTelemetry plugin](/plugins/opentelemetry). ## StatsD Support Extend LogLayer with mixins to add observability capabilities beyond logging. 
Use the [hot-shots mixin](/mixins/hot-shots) to send StatsD metrics alongside your logs: ```typescript import { LogLayer, useLogLayerMixin, ConsoleTransport } from 'loglayer'; import { StatsD } from 'hot-shots'; import { hotshotsMixin } from '@loglayer/mixin-hot-shots'; // Create and configure your StatsD client const statsd = new StatsD({ host: 'localhost', port: 8125 }); // Register the mixin (must be called before creating LogLayer instances) useLogLayerMixin(hotshotsMixin(statsd)); const log = new LogLayer({ transport: new ConsoleTransport({ logger: console }) }); // Send metrics and logs together log.stats.increment('request.count').send(); log.withMetadata({ reqId: '1234' }).info('Request received'); log.stats.timing('request.duration', 150).send(); log.info('Request processed'); ``` See more about [mixins](/mixins/). ## Easy Testing Built-in mocks make testing a breeze: ```typescript import { MockLogLayer } from 'loglayer' // Use MockLogLayer in your tests - no real logging will occur const log = new MockLogLayer() ``` See more about [testing](/logging-api/unit-testing). --- --- url: 'https://loglayer.dev/logging-api/adjusting-log-levels.md' description: Learn how to adjust and control log levels in LogLayer. --- # Adjusting Log Levels Globally While certain transports and logging libraries may allow you to adjust log levels at an individual level, you can adjust log levels in LogLayer globally across all transports. ::: warning Global vs Transport Log Levels The log level methods described here set the global log level for LogLayer. However, individual transports and logging libraries may have their own log level settings that also apply. When both are set, the most restrictive level takes effect. For example, if LogLayer's global level is set to `debug`, but a transport or logging library has its level set to `error`, the transport will only send out `error` and `fatal` messages, even though the global level allows `debug` messages. 
:::

## Log Level Hierarchy

Log levels follow a hierarchy, with higher numeric values indicating higher severity:

| Level | Value |
|-------|-------|
| `trace` | 10 |
| `debug` | 20 |
| `info` | 30 |
| `warn` | 40 |
| `error` | 50 |
| `fatal` | 60 |

When using `setLevel()`, all levels at or above the set level are enabled. For example, if you set the log level to `warn`:

* lower-severity `trace`, `debug`, and `info` messages will be ignored.
* equal- and higher-severity `warn`, `error`, and `fatal` messages will be logged.

## Enabling/Disabling Logging

All of these methods can be used during runtime to dynamically adjust log levels without restarting your application. You can control whether logs are output using these methods:

### Set Log Level

All levels equal to and above the set level are enabled.

```typescript
import { LogLevel } from 'loglayer'

// Enable warn, error, and fatal (disable trace, debug, info)
log.setLevel(LogLevel.warn)
```

### Enable / Disable All Logging

```typescript
log.disableLogging()
```

```typescript
log.enableLogging()
```

### Individual Log Levels

You can ignore the hierarchy by using the `enableIndividualLevel()` and `disableIndividualLevel()` methods to enable or disable specific log levels.

```typescript
import { LogLevel } from 'loglayer'

log.enableIndividualLevel(LogLevel.debug) // Enable only debug logs
```

```typescript
import { LogLevel } from 'loglayer'

log.disableIndividualLevel(LogLevel.debug) // Disable only debug logs
```

## Checking if a Log Level is Enabled

You can check if a specific log level is enabled using the `isLevelEnabled` method:

```typescript
import { LogLevel } from 'loglayer'

if (log.isLevelEnabled(LogLevel.debug)) {
  log.debug('Debugging is enabled')
} else {
  log.info('Debugging is disabled')
}
```

## Log Level Managers

*New in LogLayer v8*.

Log level managers control how log levels are inherited and propagated between parent and child loggers.
By default, LogLayer uses the [**Default Log Level Manager**](/log-level-managers/default), which provides independent log level management for each logger instance. With the default log level manager, child loggers inherit the log level from their parent when created, but subsequent changes to the parent's log level do not affect existing children: ```typescript import { LogLayer, ConsoleTransport, LogLevel } from "loglayer"; const parentLog = new LogLayer({ transport: new ConsoleTransport({ logger: console }) }); parentLog.setLevel(LogLevel.warn); const childLog = parentLog.child(); // Child inherits parent's log level at creation childLog.isLevelEnabled(LogLevel.warn); // true childLog.isLevelEnabled(LogLevel.info); // false // Parent change does not affect child parentLog.setLevel(LogLevel.debug); childLog.isLevelEnabled(LogLevel.debug); // false (child not affected) ``` For more information about log level managers and available options, see the [Log Level Managers documentation](/log-level-managers/). --- --- url: 'https://loglayer.dev/transports/aws-cloudwatch-logs.md' description: Send logs to Amazon CloudWatch platform with the LogLayer logging library --- # Amazon CloudWatch Logs Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-aws-cloudwatch-logs)](https://www.npmjs.com/package/@loglayer/transport-aws-cloudwatch-logs) [Transport Source](https://github.com/loglayer/loglayer/blob/master/packages/transports/aws-cloudwatch-logs) The Amazon CloudWatch Logs transport allows you to send logs to [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/), a service to monitor and manage resources in AWS. It uses the [AWS SDK for JavaScript CloudWatchLogs](https://www.npmjs.com/package/@aws-sdk/client-cloudwatch-logs). 
## Installation

::: code-group

```sh [npm]
npm install loglayer @loglayer/transport-aws-cloudwatch-logs @aws-sdk/client-cloudwatch-logs serialize-error
```

```sh [pnpm]
pnpm add loglayer @loglayer/transport-aws-cloudwatch-logs @aws-sdk/client-cloudwatch-logs serialize-error
```

```sh [yarn]
yarn add loglayer @loglayer/transport-aws-cloudwatch-logs @aws-sdk/client-cloudwatch-logs serialize-error
```

:::

## Permissions

The transport requires specific AWS permissions to send logs to CloudWatch Logs. The required permissions depend on your configuration:

### Required Permissions

**Always required:**

* `logs:PutLogEvents` - Send log events to CloudWatch Logs

**Required when using `createIfNotExists: true`:**

The included processing strategies have an option to create the groups and streams if they do not exist.

* `logs:DescribeLogGroups` - Check if log group exists
* `logs:DescribeLogStreams` - Check if log stream exists
* `logs:CreateLogGroup` - Create log group if it doesn't exist
* `logs:CreateLogStream` - Create log stream if it doesn't exist

### IAM Policy Example

Here's a minimal IAM policy that grants the necessary permissions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:your-log-group-name"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:CreateLogGroup",
        "logs:CreateLogStream"
      ],
      "Resource": "*"
    }
  ]
}
```

For more details, see the [CloudWatch Logs permissions reference](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/permissions-reference-cwl.html).
## Usage

```typescript
import { LogLayer } from 'loglayer';
import { CloudWatchLogsTransport } from "@loglayer/transport-aws-cloudwatch-logs";
import { serializeError } from "serialize-error";

// Create LogLayer instance with CloudWatch Logs transport
const log = new LogLayer({
  errorSerializer: serializeError,
  transport: new CloudWatchLogsTransport({
    groupName: "/loglayer/group",
    streamName: "loglayer-stream-name",
  }),
});

// Use LogLayer as normal
log.withMetadata({ customField: 'value' }).info('Hello from Lambda!');
```

When no processing strategy is explicitly defined, the transport uses the [default strategy](#default-strategy) with your default AWS profile or environment variables (such as `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_REGION`).

## Configuration Options

### Required Parameters

| Name | Type | Description |
| ------------ | ------------------------------------------------------- | ---------------------- |
| `groupName` | `string \| (params: LogLayerTransportParams) => string` | Target log group name. |
| `streamName` | `string \| (params: LogLayerTransportParams) => string` | Target stream name. |

### Optional Parameters

| Name | Type | Default | Description |
| ------------------- | ---------------------------------------------------------------- | ----------------------------- | --------------------------------------------------------------------- |
| `strategy` | `BaseStrategy` | `DefaultCloudWatchStrategy()` | Strategy object that handles the log events. |
| `payloadTemplate` | `(params: LogLayerTransportParams, timestamp: number) => string` | - | Builds the log message to be sent to CloudWatch.
|
| `onError` | `(error: Error) => void` | - | Callback for error handling |
| `enabled` | `boolean` | `true` | If false, the transport will not send logs to the logger |
| `consoleDebug` | `boolean` | `false` | If true, the transport will log to the console for debugging purposes |
| `id` | `string` | - | A user-defined identifier for the transport |

## Log Format

Each log entry is written as an [InputLogEvent](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_InputLogEvent.html) object with the following format:

```json5
{
  "message": "{\"level\":\"info\",\"timestamp\":1641013456789,\"message\":\"Log message\"}",
  "timestamp": 1641013456789,
}
```

The message field contains a JSON stringified object with:

* `level`: The log level (e.g., "info", "error", "debug")
* `timestamp`: The timestamp when the log was created (in milliseconds)
* `message`: The joined message string
* Additional data fields (only included when present)

Then, the message is sent to CloudWatch Logs using the [PutLogEvents](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutLogEvents.html) action.
### Customizing log entries If you want to customize the log format, use the `payloadTemplate` option as follows: ```typescript import { LogLayer } from 'loglayer'; import { CloudWatchLogsTransport } from "@loglayer/transport-aws-cloudwatch-logs"; const log = new LogLayer({ transport: new CloudWatchLogsTransport({ groupName: "/loglayer/group", streamName: "loglayer-stream-name", payloadTemplate: (params, timestamp) => { const isoDate = new Date(timestamp).toISOString(); const msg = params.messages.map((msg) => String(msg)).join(" "); return `${isoDate} [${params.logLevel}] ${msg}`; }, }) }) ``` The previous code will produce a log entry with the following format: ```json { "message": "2022-01-01T05:04:16.789Z [info] Log message", "timestamp": 1641013456789 } ``` #### PayloadTemplate Parameters The `payloadTemplate` function receives two parameters: | Parameter | Type | Description | |-----------|------|-------------| | `params` | `LogLayerTransportParams` | The log entry data containing all the information about the log message | | `timestamp` | `number` | The timestamp when the log was created (in milliseconds) | #### LogLayerTransportParams Properties The `params` object contains the following properties: | Property | Type | Description | |----------|------|-------------| | `logLevel` | `LogLevelType` | The log level of the message (e.g., "info", "error", "debug") | | `messages` | `any[]` | The parameters that were passed to the log message method | | `data` | `LogLayerData` | Combined object data containing the metadata, context, and/or error data | | `hasData` | `boolean` | If true, the data object is included in the message parameters | | `metadata` | `LogLayerMetadata` | Individual metadata object passed to the log message method | | `error` | `any` | Error passed to the log message method | | `context` | `LogLayerContext` | Context data that is included with each log entry | ## Error Handling The transport provides error handling through the `onError` 
callback:

```typescript
import { LogLayer } from 'loglayer';
import { CloudWatchLogsTransport, DefaultCloudWatchStrategy } from "@loglayer/transport-aws-cloudwatch-logs";

const logger = new LogLayer({
  transport: new CloudWatchLogsTransport({
    groupName: "/loglayer/group",
    streamName: "loglayer-stream-name",
    strategy: new DefaultCloudWatchStrategy(),
    onError: (error) => {
      // Custom error handling
      console.error("Failed to send log to CloudWatch:", error);
    },
  }),
});
```

## Processing Strategies

The transport uses a strategy-based architecture to handle log events. It includes two built-in processing strategies: a [default strategy](#default-strategy) and a [worker queue strategy](#worker-queue-strategy).

### Default Strategy

The default strategy is used when no strategy is specified in the transport. It sends each log event immediately in a single request. It's the most straightforward approach and is suitable for most use cases.

```typescript
import { LogLayer } from 'loglayer';
import { CloudWatchLogsTransport } from "@loglayer/transport-aws-cloudwatch-logs";

// Simple usage with default AWS configuration
const log = new LogLayer({
  transport: new CloudWatchLogsTransport({
    groupName: "/loglayer/group",
    streamName: "loglayer-stream-name",
  }),
});
```

Or with custom AWS client configuration:

```typescript
import { LogLayer } from 'loglayer';
import { CloudWatchLogsTransport, DefaultCloudWatchStrategy } from "@loglayer/transport-aws-cloudwatch-logs";

const log = new LogLayer({
  transport: new CloudWatchLogsTransport({
    groupName: "/loglayer/group",
    streamName: "loglayer-stream-name",
    strategy: new DefaultCloudWatchStrategy({
      clientConfig: {
        region: "us-east-1",
      },
    }),
  }),
});
```

Or with automatic log group and stream creation:

```typescript
import { LogLayer } from 'loglayer';
import { CloudWatchLogsTransport, DefaultCloudWatchStrategy } from "@loglayer/transport-aws-cloudwatch-logs";

const log = new LogLayer({
  transport: new CloudWatchLogsTransport({
    groupName: "/loglayer/group",
    streamName: "loglayer-stream-name",
    strategy: new DefaultCloudWatchStrategy({
      createIfNotExists: true,
    }),
}), }); ``` #### Default Strategy Options | Name | Type | Default | Description | | ------------------- | --------------------------- | ------- | --------------------------------------------------------------------- | | `clientConfig` | `CloudWatchLogsClientConfig` | - | AWS SDK client configuration. | | `createIfNotExists` | `boolean` | `false` | Try to create the log group and log stream if they don't exist yet. | ### Worker Queue Strategy If you're sending a lot of logs, you may prefer to use the worker queue strategy to improve performance. It uses a worker thread and allows you to send your logs in batches. ```typescript import { LogLayer } from 'loglayer'; import { CloudWatchLogsTransport, WorkerQueueStrategy } from "@loglayer/transport-aws-cloudwatch-logs"; const log = new LogLayer({ transport: new CloudWatchLogsTransport({ groupName: "/loglayer/group", streamName: "loglayer-stream-name", strategy: new WorkerQueueStrategy({ batchSize: 1000, delay: 5000, }), }), }); ``` Or with automatic log group and stream creation: ```typescript import { LogLayer } from 'loglayer'; import { CloudWatchLogsTransport, WorkerQueueStrategy } from "@loglayer/transport-aws-cloudwatch-logs"; const log = new LogLayer({ transport: new CloudWatchLogsTransport({ groupName: "/loglayer/group", streamName: "loglayer-stream-name", strategy: new WorkerQueueStrategy({ batchSize: 1000, delay: 5000, createIfNotExists: true, }), }), }); ``` #### Worker Queue Strategy Options | Name | Type | Default | Description | | ------------------- | -------- | ------- | --------------------------------------------------------------------- | | `batchSize` | `number` | 10000 | The maximum number of messages to send in one request. | | `delay` | `number` | 6000 | The amount of time to wait before sending logs in ms. | | `clientConfig` | `CloudWatchLogsClientConfig` | - | AWS SDK client configuration. 
|
| `createIfNotExists` | `boolean` | `false` | Try to create the log group and log stream if they don't exist yet. |

## Creating Custom Strategies

The strategy-based architecture allows you to create custom strategies that implement your own logging behavior. This is useful when you need specialized functionality like custom batching, retry logic, or integration with other services.

### Strategy Interface

All strategies must extend the `BaseStrategy` class and implement the required methods. The `BaseStrategy` class provides the foundation for all custom strategies and includes several protected properties and methods that you can use in your implementations.

#### BaseStrategy Properties

The `BaseStrategy` class provides these protected properties that are automatically configured by the transport:

| Property | Type | Description |
|----------|------|-------------|
| `onError` | `ErrorHandler \| undefined` | Error handler callback function. Set by the transport's `onError` option. |

#### BaseStrategy Methods

| Method | Type | Description |
|--------|------|-------------|
| `sendEvent(params)` | `(params: SendEventParams) => Promise<void> \| void` | **Abstract method** - Must be implemented. Handles sending log events to CloudWatch Logs. |
| `cleanup()` | `() => Promise<void> \| void` | **Optional override** - Called when the transport is disposed. Use this to clean up resources.
|

#### Basic Strategy Template

```typescript
import { BaseStrategy } from "@loglayer/transport-aws-cloudwatch-logs";
import type { SendEventParams } from "@loglayer/transport-aws-cloudwatch-logs";

class MyCustomStrategy extends BaseStrategy {
  // Optional: Add your own properties
  private myProperty: string;

  constructor(myProperty: string) {
    super();
    this.myProperty = myProperty;
  }

  // Required: Implement the sendEvent method
  async sendEvent({ event, logGroupName, logStreamName }: SendEventParams): Promise<void> {
    // Your custom implementation here
    // You can access this.onError
  }

  // Optional: Override cleanup for resource management
  cleanup(): void {
    // Clean up any resources (timers, connections, etc.)
  }
}
```

#### SendEventParams Interface

The `sendEvent` method receives a `SendEventParams` object with these properties:

| Property | Type | Description |
|----------|------|-------------|
| `event` | `InputLogEvent` | The log event to send, containing `timestamp` and `message` |
| `logGroupName` | `string` | The CloudWatch Logs group name |
| `logStreamName` | `string` | The CloudWatch Logs stream name |

### Basic Custom Strategy

Here's a simple example that adds custom retry logic:

```typescript
import { LogLayer } from 'loglayer';
import { CloudWatchLogsTransport, BaseStrategy } from "@loglayer/transport-aws-cloudwatch-logs";
import type { SendEventParams } from "@loglayer/transport-aws-cloudwatch-logs";
import { CloudWatchLogsClient, PutLogEventsCommand } from "@aws-sdk/client-cloudwatch-logs";

class RetryStrategy extends BaseStrategy {
  private client: CloudWatchLogsClient;
  private maxRetries: number;

  constructor(maxRetries = 3) {
    super();
    this.client = new CloudWatchLogsClient({});
    this.maxRetries = maxRetries;
  }

  async sendEvent({ event, logGroupName, logStreamName }: SendEventParams): Promise<void> {
    let lastError: Error | undefined;

    for (let attempt = 1; attempt <= this.maxRetries; attempt++) {
      try {
        const command = new PutLogEventsCommand({
          logEvents: [event],
          logGroupName,
          logStreamName,
        });

        await
this.client.send(command);
        return; // Success, exit retry loop
      } catch (error) {
        lastError = error as Error;

        if (attempt === this.maxRetries) {
          this.onError?.(lastError);
          throw lastError;
        }

        // Wait before retry (exponential backoff)
        await new Promise(resolve => setTimeout(resolve, Math.pow(2, attempt) * 1000));
      }
    }
  }
}

const log = new LogLayer({
  transport: new CloudWatchLogsTransport({
    groupName: "/loglayer/group",
    streamName: "loglayer-stream-name",
    strategy: new RetryStrategy(5), // 5 retry attempts
  }),
});
```

## Changelog

View the changelog [here](./changelogs/aws-cloudwatch-logs-changelog.md).

---

---
url: 'https://loglayer.dev/transports.md'
description: Logging libraries that you can use with LogLayer
---

# Transports

Transports are the way LogLayer sends logs to a logging library.

## Available Transports

### Built-in Transports

| Name | Description |
|------|-------------|
| [Console](/transports/console) | Simple console-based logging for development |
| [Blank Transport](/transports/blank-transport) | For quickly creating / prototyping new transports |

### Logging Libraries

| Name | Package | Changelog | Description |
|------|---------|-----------|-------------|
| [Bunyan](/transports/bunyan) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-bunyan)](https://www.npmjs.com/package/@loglayer/transport-bunyan) | [Changelog](/transports/changelogs/bunyan-changelog.md) | JSON logging library for Node.js |
| [Consola](/transports/consola) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-consola)](https://www.npmjs.com/package/@loglayer/transport-consola) | [Changelog](/transports/changelogs/consola-changelog.md) | Elegant console logger for Node.js and browser |
| [Electron-log](/transports/electron-log) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-electron-log)](https://www.npmjs.com/package/@loglayer/transport-electron-log) |
[Changelog](/transports/changelogs/electron-log-changelog.md) | Logging library for Electron applications | | [Log4js](/transports/log4js) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-log4js)](https://www.npmjs.com/package/@loglayer/transport-log4js) | [Changelog](/transports/changelogs/log4js-node-changelog.md) | Port of Log4j framework to Node.js | | [loglevel](/transports/loglevel) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-loglevel)](https://www.npmjs.com/package/@loglayer/transport-loglevel) | [Changelog](/transports/changelogs/loglevel-changelog.md) | Minimal lightweight logging for JavaScript | | [LogTape](/transports/logtape) | [![npm](https://img.shields.io/npm/v/%40loglayer%2Ftransport-logtape)](https://www.npmjs.com/package/@loglayer/transport-logtape) | [Changelog](/transports/changelogs/logtape-changelog.md) | Modern, structured logging library for TypeScript and JavaScript | | [Pino](/transports/pino) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-pino)](https://www.npmjs.com/package/@loglayer/transport-pino) | [Changelog](/transports/changelogs/pino-changelog.md) | Very low overhead Node.js logger | | [Roarr](/transports/roarr) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-roarr)](https://www.npmjs.com/package/@loglayer/transport-roarr) | [Changelog](/transports/changelogs/roarr-changelog.md) | JSON logger for Node.js and browser | | [Signale](/transports/signale) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-signale)](https://www.npmjs.com/package/@loglayer/transport-signale) | [Changelog](/transports/changelogs/signale-changelog.md) | Highly configurable CLI logger | | [tslog](/transports/tslog) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-tslog)](https://www.npmjs.com/package/@loglayer/transport-tslog) | [Changelog](/transports/changelogs/tslog-changelog.md) | Powerful, fast and expressive logging for TypeScript and JavaScript | | [Tracer](/transports/tracer) | 
[![npm](https://img.shields.io/npm/v/@loglayer/transport-tracer)](https://www.npmjs.com/package/@loglayer/transport-tracer) | [Changelog](/transports/changelogs/tracer-changelog.md) | Tracer logging library for Node.js | | [Winston](/transports/winston) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-winston)](https://www.npmjs.com/package/@loglayer/transport-winston) | [Changelog](/transports/changelogs/winston-changelog.md) | A logger for just about everything | ### Cloud Providers | Name | Package | Changelog | Description | |------|---------|-----------|-----------------------------------------------------------------------------------------------------------------------------------------------------------| | [Amazon CloudWatch Logs](/transports/aws-cloudwatch-logs) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-aws-cloudwatch-logs)](https://www.npmjs.com/package/@loglayer/transport-aws-cloudwatch-logs) | [Changelog](/transports/changelogs/aws-cloudwatch-logs-changelog.md) | Logging for Amazon CloudWatch Logs | | [AWS Lambda Powertools Logger](/transports/aws-lambda-powertools) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-aws-lambda-powertools)](https://www.npmjs.com/package/@loglayer/transport-aws-lambda-powertools) | [Changelog](/transports/changelogs/aws-lambda-powertools-changelog.md) | Logging for AWS Lambdas | | [Axiom](/transports/axiom) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-axiom)](https://www.npmjs.com/package/@loglayer/transport-axiom) | [Changelog](/transports/changelogs/axiom-changelog.md) | Send logs to Axiom cloud logging platform | | [Better Stack](/transports/betterstack) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-betterstack)](https://www.npmjs.com/package/@loglayer/transport-betterstack) | [Changelog](/transports/changelogs/betterstack-changelog.md) | Send logs to Better Stack log management platform | | [Datadog](/transports/datadog) | 
[![npm](https://img.shields.io/npm/v/@loglayer/transport-datadog)](https://www.npmjs.com/package/@loglayer/transport-datadog) | [Changelog](/transports/changelogs/datadog-changelog.md) | Server-side logging for Datadog | | [Datadog Browser Logs](/transports/datadog-browser-logs) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-datadog-browser-logs)](https://www.npmjs.com/package/@loglayer/transport-datadog-browser-logs) | [Changelog](/transports/changelogs/datadog-browser-logs-changelog.md) | Browser-side logging for Datadog | | [Dynatrace](/transports/dynatrace) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-dynatrace)](https://www.npmjs.com/package/@loglayer/transport-dynatrace) | [Changelog](/transports/changelogs/dynatrace-changelog.md) | Server-side logging for Dynatrace | | [Google Cloud Logging](/transports/google-cloud-logging) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-google-cloud-logging)](https://www.npmjs.com/package/@loglayer/transport-google-cloud-logging) | [Changelog](/transports/changelogs/google-cloud-logging-changelog.md) | Server-side logging for Google Cloud Platform | | [New Relic](/transports/new-relic) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-new-relic)](https://www.npmjs.com/package/@loglayer/transport-new-relic) | [Changelog](/transports/changelogs/new-relic-changelog.md) | Server-side logging for New Relic | | [Sentry](/transports/sentry) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-sentry)](https://www.npmjs.com/package/@loglayer/transport-sentry) | [Changelog](/transports/changelogs/sentry-changelog.md) | Send structured logs to Sentry using the Sentry SDK logger API | | [Logflare](/transports/logflare) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-logflare)](https://www.npmjs.com/package/@loglayer/transport-logflare) | [Changelog](/transports/changelogs/logflare-changelog.md) | Send logs to [Logflare](https://logflare.app) log ingestion and querying 
engine | | [Sumo Logic](/transports/sumo-logic) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-sumo-logic)](https://www.npmjs.com/package/@loglayer/transport-sumo-logic) | [Changelog](/transports/changelogs/sumo-logic-changelog.md) | Send logs to Sumo Logic via HTTP Source | | [VictoriaLogs](/transports/victoria-logs) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-victoria-logs)](https://www.npmjs.com/package/@loglayer/transport-victoria-logs) | [Changelog](/transports/changelogs/victoria-logs-changelog.md) | Send logs to [VictoriaLogs](https://victoriametrics.com/products/victorialogs/) by [Victoria Metrics](https://victoriametrics.com/) using JSON stream API | ### Other Transports | Name | Package | Changelog | Description | |------|---------------------------------------------------------------------------------------------------------------------------------------------------|-----------|-------------------------------------------------------------------------------| | [HTTP](/transports/http) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-http)](https://www.npmjs.com/package/@loglayer/transport-http) | [Changelog](/transports/changelogs/http-changelog.md) | Generic HTTP transport with batching, compression, and retry support | | [Log File Rotation](/transports/log-file-rotation) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-log-file-rotation)](https://www.npmjs.com/package/@loglayer/transport-log-file-rotation) | [Changelog](/transports/changelogs/log-file-rotation-changelog.md) | Write logs to local files with rotation support | | [OpenTelemetry](/transports/opentelemetry) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-opentelemetry)](https://www.npmjs.com/package/@loglayer/transport-opentelemetry) | [Changelog](/transports/changelogs/opentelemetry-changelog.md) | Send logs using the OpenTelemetry Logs SDK | | [Pretty Terminal](/transports/pretty-terminal) | 
[![npm](https://img.shields.io/npm/v/@loglayer/transport-pretty-terminal)](https://www.npmjs.com/package/@loglayer/transport-pretty-terminal) | [Changelog](/transports/changelogs/pretty-terminal-changelog.md) | Pretty prints logs in the terminal with text search / advanced interactivity. | | [Simple Pretty Terminal](/transports/simple-pretty-terminal) | [![npm](https://img.shields.io/npm/v/@loglayer/transport-simple-pretty-terminal)](https://www.npmjs.com/package/@loglayer/transport-simple-pretty-terminal) | [Changelog](/transports/changelogs/simple-pretty-terminal-changelog.md) | Pretty prints logs in the browser / terminal / Next.js. | --- --- url: 'https://loglayer.dev/transports/aws-lambda-powertools.md' description: Logging for AWS Lambdas with the LogLayer logging library --- # AWS Lambda Powertools Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-aws-lambda-powertools)](https://www.npmjs.com/package/@loglayer/transport-aws-lambda-powertools) A LogLayer transport for [AWS Lambda Powertools Logger](https://docs.powertools.aws.dev/lambda/typescript/latest/core/logger/). [Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/aws-lambda-powertools) ## Installation Install the required packages: ::: code-group ```sh [npm] npm i loglayer @loglayer/transport-aws-lambda-powertools @aws-lambda-powertools/logger ``` ```sh [pnpm] pnpm add loglayer @loglayer/transport-aws-lambda-powertools @aws-lambda-powertools/logger ``` ```sh [yarn] yarn add loglayer @loglayer/transport-aws-lambda-powertools @aws-lambda-powertools/logger ``` ::: ## Setup ::: warning The Logger utility from `@aws-lambda-powertools/logger` must always be instantiated outside the Lambda handler. 
::: ```typescript import { Logger } from '@aws-lambda-powertools/logger'; import { LogLayer } from 'loglayer'; import { PowertoolsTransport } from '@loglayer/transport-aws-lambda-powertools'; // Create a new Powertools logger instance const powertoolsLogger = new Logger({ serviceName: 'my-service', logLevel: 'INFO' }); // Create LogLayer instance with Powertools transport const log = new LogLayer({ transport: new PowertoolsTransport({ logger: powertoolsLogger }) }); // Use LogLayer as normal log.withMetadata({ customField: 'value' }).info('Hello from Lambda!'); ``` ## Log Level Mapping | LogLayer | Powertools | |----------|------------| | trace | DEBUG | | debug | DEBUG | | info | INFO | | warn | WARN | | error | ERROR | | fatal | ERROR | ## Changelog View the changelog [here](./changelogs/aws-lambda-powertools-changelog.md). --- --- url: 'https://loglayer.dev/transports/axiom.md' description: Send logs to Axiom cloud logging platform with the LogLayer logging library --- # Axiom Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-axiom)](https://www.npmjs.com/package/@loglayer/transport-axiom) [Transport Source](https://github.com/loglayer/loglayer/blob/master/packages/transports/axiom) The Axiom transport allows you to send logs to [Axiom.co](https://axiom.co), a cloud-native logging and observability platform. It uses the [Axiom JavaScript SDK](https://github.com/axiomhq/axiom-js). 
## Installation ::: code-group ```sh [npm] npm install @loglayer/transport-axiom @axiomhq/js serialize-error loglayer ``` ```sh [pnpm] pnpm add @loglayer/transport-axiom @axiomhq/js serialize-error loglayer ``` ```sh [yarn] yarn add @loglayer/transport-axiom @axiomhq/js serialize-error loglayer ``` ::: ## Usage ```typescript import { LogLayer } from "loglayer"; import { AxiomTransport } from "@loglayer/transport-axiom"; import { serializeError } from "serialize-error"; import { Axiom } from "@axiomhq/js"; // Create the Axiom client const axiom = new Axiom({ token: process.env.AXIOM_TOKEN, // Optional: other Axiom client options // orgId: 'your-org-id', // url: 'https://cloud.axiom.co', }); // Create the LogLayer instance with AxiomTransport const logger = new LogLayer({ errorSerializer: serializeError, transport: new AxiomTransport({ logger: axiom, dataset: "your-dataset", }), }); // Start logging logger.info("Hello from LogLayer!"); ``` ## Configuration Options ### Required Parameters | Name | Type | Description | |------|------|-------------| | `logger` | `Axiom` | Instance of the Axiom client | | `dataset` | `string` | The Axiom dataset name to send logs to | ### Optional Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `fieldNames` | `AxiomFieldNames` | - | Custom field names for log entries. 
See [Field Names](#field-names) | | `timestampFn` | `() => string \| number` | `() => new Date().toISOString()` | Function to generate timestamps | | `onError` | `(error: Error) => void` | - | Callback for error handling | | `level` | `"trace" \| "debug" \| "info" \| "warn" \| "error" \| "fatal"` | `"trace"` | Minimum log level to process | | `levelMap` | `AxiomLevelMap` | - | Custom mapping for log levels | | `enabled` | `boolean` | `true` | If false, the transport will not send logs to the logger | | `consoleDebug` | `boolean` | `false` | If true, the transport will log to the console for debugging purposes | | `id` | `string` | - | A user-defined identifier for the transport | ### Field Names The `fieldNames` object allows you to customize the field names in the log entry JSON: | Field | Type | Description | Default | |-------|------|-------------|---------| | `level` | `string` | Field name for the log level | `"level"` | | `message` | `string` | Field name for the log message | `"message"` | | `timestamp` | `string` | Field name for the timestamp | `"timestamp"` | ### Level Mapping The `levelMap` object allows you to map each log level to either a string or number: | Level | Type | Example (Numeric) | Example (String) | |-------|------|------------------|------------------| | `debug` | `string \| number` | 20 | `"DEBUG"` | | `error` | `string \| number` | 50 | `"ERROR"` | | `fatal` | `string \| number` | 60 | `"FATAL"` | | `info` | `string \| number` | 30 | `"INFO"` | | `trace` | `string \| number` | 10 | `"TRACE"` | | `warn` | `string \| number` | 40 | `"WARNING"` | ## Log Format Each log entry is written as a JSON object with the following format: ```json5 { "level": "info", "message": "Log message", "timestamp": "2024-01-17T12:34:56.789Z", // metadata / context / error data will depend on your LogLayer configuration "userId": "123", "requestId": "abc-123" } ``` ## Log Level Filtering You can set a minimum log level to filter out less important logs: 
```typescript const logger = new LogLayer({ transport: new AxiomTransport({ logger: axiom, dataset: "your-dataset", level: "warn", // Only process warn, error, and fatal logs }), }); logger.debug("This won't be sent"); // Filtered out logger.info("This won't be sent"); // Filtered out logger.warn("This will be sent"); // Included logger.error("This will be sent"); // Included ``` ## Error Handling The transport provides error handling through the `onError` callback: ```typescript const logger = new LogLayer({ transport: new AxiomTransport({ logger: axiom, dataset: "your-dataset", onError: (error) => { // Custom error handling console.error("Failed to send log to Axiom:", error); }, }), }); ``` ## Changelog View the changelog [here](./changelogs/axiom-changelog.md). --- --- url: 'https://loglayer.dev/transports/betterstack.md' description: >- Send logs to Better Stack log management platform with the LogLayer logging library --- # Better Stack Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-betterstack)](https://www.npmjs.com/package/@loglayer/transport-betterstack) [Transport Source](https://github.com/loglayer/loglayer/blob/master/packages/transports/betterstack) The Better Stack transport allows you to send logs to [Better Stack's log management platform](https://betterstack.com/log-management) using their HTTP API. It provides a simple and efficient way to ship logs to Better Stack for centralized logging and analysis. ## Installation ::: code-group ```sh [npm] npm install @loglayer/transport-betterstack loglayer ``` ```sh [pnpm] pnpm add @loglayer/transport-betterstack loglayer ``` ```sh [yarn] yarn add @loglayer/transport-betterstack loglayer ``` ::: ## Usage * Create a "Javascript / Node.js" log source in your Better Stack account. * In the "Data ingestion" tab of your source, find your `source token` and the `ingesting host`. * Add `https://` in front of the ingesting host for the `url` parameter. 
```typescript import { LogLayer } from "loglayer"; import { BetterStackTransport } from "@loglayer/transport-betterstack"; const logger = new LogLayer({ transport: new BetterStackTransport({ sourceToken: "", url: "https://", onError: (err) => { console.error('Failed to send logs to Better Stack:', err); }, onDebug: (entry) => { console.log('Log entry being sent to Better Stack:', entry); }, onDebugReqRes: ({ req, res }) => { console.log("=== HTTP Request ==="); console.log("URL:", req.url); console.log("Method:", req.method); console.log("Headers:", JSON.stringify(req.headers, null, 2)); console.log("Body:", typeof req.body === "string" ? req.body : `[Uint8Array: ${req.body.length} bytes]`); console.log("=== HTTP Response ==="); console.log("Status:", res.status, res.statusText); console.log("Headers:", JSON.stringify(res.headers, null, 2)); console.log("Body:", res.body); console.log("==================="); }, }), }); // Start logging logger.info("Hello from LogLayer!"); logger.withMetadata({ userId: "123" }).info("User logged in"); ``` ## Configuration Options ### Required Parameters | Name | Type | Description | |------|------|-------------| | `sourceToken` | `string` | Your Better Stack source token for authentication (found in the "Data ingestion" tab of your "Javascript / Node.js" source) | | `url` | `string` | Better Stack ingestion host URL (add "https://" in front of the ingestion host from the "Data ingestion" tab of your "Javascript / Node.js" source) | ### Optional Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `includeTimestamp` | `boolean` | `true` | Whether to include timestamp in the log payload | | `timestampField` | `string` | `"dt"` | Custom field name for the timestamp | | `onError` | `(error: Error) => void` | - | Callback for error handling | | `onDebug` | `(entry: Record<string, any>) => void` | - | Callback for debugging log entries | | `level` | `"trace" \| "debug" \| "info" \| "warn" \| "error" \| "fatal"` 
| `"trace"` | Minimum log level to process | | `enabled` | `boolean` | `true` | If false, the transport will not send logs to the logger | | `id` | `string` | - | A user-defined identifier for the transport | ### HTTP Transport Optional Parameters #### General Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `enabled` | `boolean` | `true` | Whether the transport is enabled | | `level` | `"trace" \| "debug" \| "info" \| "warn" \| "error" \| "fatal"` | `"trace"` | Minimum log level to process. Logs below this level will be filtered out | | `method` | `string` | `"POST"` | HTTP method to use for requests | | `headers` | `Record<string, string> \| (() => Record<string, string>)` | `{}` | Headers to include in the request. Can be an object or a function that returns headers | | `contentType` | `string` | `"application/json"` | Content type for single log requests. User-specified headers take precedence | | `compression` | `boolean` | `false` | Whether to use gzip compression | | `maxRetries` | `number` | `3` | Number of retry attempts before giving up | | `retryDelay` | `number` | `1000` | Base delay between retries in milliseconds | | `respectRateLimit` | `boolean` | `true` | Whether to respect rate limiting by waiting when a 429 response is received | | `maxLogSize` | `number` | `1048576` | Maximum size of a single log entry in bytes (1MB) | | `maxPayloadSize` | `number` | `5242880` | Maximum size of the payload (uncompressed) in bytes (5MB) | | `enableNextJsEdgeCompat` | `boolean` | `false` | Whether to enable Next.js Edge compatibility | #### Debug Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `onError` | `(err: Error) => void` | - | Error handling callback | | `onDebug` | `(entry: Record<string, any>) => void` | - | Debug callback for inspecting log entries before they are sent | | `onDebugReqRes` | `(reqRes: { req: { url: string; method: string; headers: Record<string, string>; body: string \| Uint8Array }; res: { status: number; 
statusText: string; headers: Record<string, string>; body: string } }) => void` | - | Debug callback for inspecting HTTP requests and responses. Provides complete request/response details including headers and body content | #### Batch Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `batchContentType` | `string` | `"application/json"` | Content type for batch log requests. User-specified headers take precedence | | `enableBatchSend` | `boolean` | `true` | Whether to enable batch sending | | `batchSize` | `number` | `100` | Number of log entries to batch before sending | | `batchSendTimeout` | `number` | `5000` | Timeout in milliseconds for sending batches regardless of size | | `batchSendDelimiter` | `string` | `"\n"` | Delimiter to use between log entries in batch mode | | `batchMode` | `"delimiter" \| "field" \| "array"` | `"delimiter"` | Batch mode for sending multiple log entries. "delimiter" joins entries with a delimiter, "field" wraps an array of entries in an object with a field name, "array" sends entries as a plain JSON array of objects | | `batchFieldName` | `string` | - | Field name to wrap batch entries in when batchMode is "field" | For more details on these options, see the [HTTP transport documentation](/transports/http#configuration). ## Changelog View the changelog [here](./changelogs/betterstack-changelog.md). --- --- url: 'https://loglayer.dev/transports/blank-transport.md' description: Create custom transports quickly with LogLayer's BlankTransport --- # Blank Transport The built-in `BlankTransport` allows you to quickly create custom transports by providing your own `shipToLogger` function. This is perfect for simple custom logging logic, prototyping new transport ideas, or quick integrations with custom services. 
[Transport Source](https://github.com/loglayer/loglayer/blob/master/packages/core/loglayer/src/transports/BlankTransport.ts) ::: tip If you want to create more advanced / complex transports, it is recommended that you read the [Creating Transports](./creating-transports.md) guide. ::: ## Installation No additional packages needed beyond the core `loglayer` package: ::: code-group ```sh [npm] npm i loglayer ``` ```sh [pnpm] pnpm add loglayer ``` ```sh [yarn] yarn add loglayer ``` ::: ## Setup ```typescript import { LogLayer, BlankTransport } from 'loglayer' const log = new LogLayer({ transport: new BlankTransport({ shipToLogger: ({ logLevel, messages, data, hasData }) => { // Your custom logging logic here console.log(`[${logLevel}]`, ...messages, data && hasData ? data : ''); // Return value is used for debugging when consoleDebug is enabled return messages; } }) }) ``` ## Configuration Options ### `shipToLogger` (Required) The function that will be called to handle log shipping. This is the only required parameter for creating a custom transport. * Type: `(params: LogLayerTransportParams) => any[]` * Required: `true` **Return Value**: The function must return an array (`any[]`). This return value is used for debugging purposes when `consoleDebug` is enabled; it will be logged to the console using the appropriate console method based on the log level. The function receives a `LogLayerTransportParams` object with these fields: ```typescript interface LogLayerTransportParams { /** * The log level of the message */ logLevel: LogLevel; /** * The parameters that were passed to the log message method (eg: info / warn / debug / error) */ messages: any[]; /** * Combined object data containing the metadata, context, and / or error data in a * structured format configured by the user. */ data?: Record<string, any>; /** * If true, the data object is included in the message parameters */ hasData?: boolean; /** * Individual metadata object passed to the log message method. 
*/ metadata?: Record<string, any>; /** * Error passed to the log message method. */ error?: any; /** * Context data that is included with each log entry. */ context?: Record<string, any>; } ``` ::: tip Message Parameters The `messages` parameter is an array because LogLayer supports multiple parameters for formatting. See the [Basic Logging](/logging-api/basic-logging.html#message-parameters) section for more details. ::: For example, if a user does the following: ```typescript logger.withMetadata({foo: 'bar'}).info('hello world', 'foo'); ``` The parameters passed to `shipToLogger` would be: ```typescript { logLevel: 'info', messages: ['hello world', 'foo'], data: {foo: 'bar'}, hasData: true } ``` ### `level` Sets the minimum log level to process. Messages with a lower priority level will be ignored. * Type: `"trace" | "debug" | "info" | "warn" | "error" | "fatal"` * Default: `"trace"` (processes all log levels) ### `enabled` If false, the transport will not send logs to the logger. * Type: `boolean` * Default: `true` ### `consoleDebug` If true, the transport will log to the console for debugging purposes. * Type: `boolean` * Default: `false` When `consoleDebug` is enabled, the return value from your `shipToLogger` function will be logged to the console using the appropriate console method based on the log level (e.g., `console.info()` for info logs, `console.error()` for error logs, etc.). This is useful for debugging your custom transport logic and seeing exactly what data is being processed. ## Error Serialization When using the BlankTransport, it's recommended to configure LogLayer with an `errorSerializer` to ensure errors are properly serialized before being passed to your `shipToLogger` function. The [`serialize-error`](https://www.npmjs.com/package/serialize-error) package is the recommended choice for consistent error serialization. 
### Installation ::: code-group ```sh [npm] npm install serialize-error ``` ```sh [yarn] yarn add serialize-error ``` ```sh [pnpm] pnpm add serialize-error ``` ::: ### Usage ```typescript import { LogLayer, BlankTransport } from 'loglayer' import { serializeError } from 'serialize-error' const log = new LogLayer({ errorSerializer: serializeError, transport: new BlankTransport({ shipToLogger: ({ logLevel, messages, data, hasData }) => { console.log(`[${logLevel}]`, ...messages, data && hasData ? data : ''); return messages; } }) }) ``` ### Before and After Example ::: tip Error Field Name The error appears in `data.err` by default, but this field name can be customized using the `errorFieldName` configuration option. See the [Error Handling Configuration](/configuration.html#error-handling) section for more details. ::: **Without errorSerializer:** ```typescript const log = new LogLayer({ transport: new BlankTransport({ shipToLogger: ({ logLevel, messages, data, hasData }) => { console.log(`[${logLevel}]`, ...messages, data && hasData ? data : ''); return messages; } }) }) log.withError(new Error('Database connection failed')).error('Failed to connect'); // Output: [error] Failed to connect { err: [Error: Database connection failed] } ``` **With errorSerializer:** ```typescript const log = new LogLayer({ errorSerializer: serializeError, transport: new BlankTransport({ shipToLogger: ({ logLevel, messages, data, hasData }) => { console.log(`[${logLevel}]`, ...messages, data && hasData ? data : ''); return messages; } }) }) log.withError(new Error('Database connection failed')).error('Failed to connect'); // Output: [error] Failed to connect { // err: { // name: 'Error', // message: 'Database connection failed', // stack: 'Error: Database connection failed\n at ... 
// } // } ``` ## Examples ### Simple Console Logging ```typescript import { LogLayer, BlankTransport } from 'loglayer' const log = new LogLayer({ transport: new BlankTransport({ shipToLogger: ({ logLevel, messages, data, hasData }) => { const timestamp = new Date().toISOString(); const message = messages.join(" "); const dataStr = data && hasData ? ` | ${JSON.stringify(data)}` : ''; console.log(`[${timestamp}] [${logLevel.toUpperCase()}] ${message}${dataStr}`); // Return value is used for debugging when consoleDebug is enabled return messages; } }) }) ``` ```typescript log.withMetadata({ user: 'john' }).info('User logged in'); // Output: [2023-12-01T10:30:00.000Z] [INFO] User logged in | {"user":"john"} ``` ### Custom API Integration ```typescript import { LogLayer, BlankTransport } from 'loglayer' const log = new LogLayer({ transport: new BlankTransport({ shipToLogger: ({ logLevel, messages, data, hasData }) => { const payload = { level: logLevel, message: messages.join(" "), timestamp: new Date().toISOString(), ...(data && hasData ? data : {}) }; // Send to your custom API fetch('/api/logs', { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify(payload) }).catch(err => { console.error('Failed to send log to API:', err); }); // Return value is used for debugging when consoleDebug is enabled return messages; } }) }) ``` ### Debug Mode ```typescript import { LogLayer, BlankTransport } from 'loglayer' const log = new LogLayer({ transport: new BlankTransport({ consoleDebug: true, // This will also log to console for debugging shipToLogger: ({ logLevel, messages, data, hasData }) => { // Your custom logic here const payload = { level: logLevel, message: messages.join(" "), ...(data && hasData ? 
data : {}) }; // Send to external service fetch('/api/logs', { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify(payload) }); // Return value is used for debugging when consoleDebug is enabled return messages; }, }) }) ``` --- --- url: 'https://loglayer.dev/example-integrations/bun.md' description: Using LogLayer with Bun runtime --- # Bun Integration LogLayer has support for the [Bun](https://bun.sh/) runtime. ::: warning Bun Compatibility Not all transports and plugins are compatible with Bun. Some items that rely on Node.js-specific APIs (like file system operations or native modules) may not work in Bun. Items that have been tested with Bun are marked with a badge. Not all items have been tested with Bun; a lack of a badge does not imply a lack of support. Please let us know if you find that an untested transport / plugin works with Bun. ::: ## Installation ### Using npm packages Bun has excellent npm compatibility, so you can install LogLayer packages using bun: ```bash bun add loglayer bun add @loglayer/transport-simple-pretty-terminal ``` ### Import statements ```typescript import { LogLayer, ConsoleTransport } from "loglayer"; import { getSimplePrettyTerminal } from "@loglayer/transport-simple-pretty-terminal"; ``` ## Basic Setup with Console Transport The [Console Transport](/transports/console) is built into LogLayer and works perfectly in Bun: ```typescript import { LogLayer, ConsoleTransport } from "loglayer"; const log = new LogLayer({ transport: new ConsoleTransport({ logger: console }) }); log.info("Hello from Bun with LogLayer!"); ``` ## Enhanced Setup with Simple Pretty Terminal For more visually appealing output, use the [Simple Pretty Terminal Transport](/transports/simple-pretty-terminal): ```typescript import { LogLayer } from "loglayer"; import { getSimplePrettyTerminal } from "@loglayer/transport-simple-pretty-terminal"; const log = new LogLayer({ transport: getSimplePrettyTerminal({ runtime: "node", viewMode: "inline" 
}) }); // Pretty formatted logging log.info("This is a pretty formatted log message"); log.withMetadata({ userId: 12345, action: "login", timestamp: new Date().toISOString() }).info("User performed action"); ``` --- --- url: 'https://loglayer.dev/transports/bunyan.md' description: Send logs to Bunyan with the LogLayer logging library --- # Bunyan Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-bunyan)](https://www.npmjs.com/package/@loglayer/transport-bunyan) [Bunyan](https://github.com/trentm/node-bunyan) is a JSON logging library for Node.js services. [Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/bunyan) ## Installation Install the required packages: ::: code-group ```sh [npm] npm i loglayer @loglayer/transport-bunyan bunyan ``` ```sh [pnpm] pnpm add loglayer @loglayer/transport-bunyan bunyan ``` ```sh [yarn] yarn add loglayer @loglayer/transport-bunyan bunyan ``` ::: ## Setup ```typescript import bunyan from 'bunyan' import { LogLayer } from 'loglayer' import { BunyanTransport } from "@loglayer/transport-bunyan" const b = bunyan.createLogger({ name: "my-logger", level: "trace", // Show all log levels serializers: { err: bunyan.stdSerializers.err // Use Bunyan's error serializer } }) const log = new LogLayer({ errorFieldName: "err", // Match Bunyan's error field name transport: new BunyanTransport({ logger: b }) }) ``` ## Configuration Options ### Required Parameters None - all parameters are optional. ### Optional Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `level` | `"trace" \| "debug" \| "info" \| "warn" \| "error" \| "fatal"` | `"trace"` | Minimum log level to process. 
Messages with a lower priority level will be ignored | | `enabled` | `boolean` | `true` | If false, the transport will not send any logs to the logger | | `consoleDebug` | `boolean` | `false` | If true, the transport will also log messages to the console for debugging | | `id` | `string` | - | A unique identifier for the transport | ## Log Level Mapping | LogLayer | Bunyan | |----------|---------| | trace | trace | | debug | debug | | info | info | | warn | warn | | error | error | | fatal | fatal | ## Changelog View the changelog [here](./changelogs/bunyan-changelog.md). --- --- url: 'https://loglayer.dev/logging-api/child-loggers.md' description: Learn how to create child loggers in LogLayer --- # Child Loggers Child loggers allow you to create new logger instances that inherit configuration, context, and plugins from their parent logger. This is particularly useful for creating loggers with additional context for specific components or modules while maintaining the base configuration. ## Creating Child Loggers Use the `child()` method to create a child logger: ```typescript const parentLog = new LogLayer({ transport: new ConsoleTransport({ logger: console }) }) const childLog = parentLog.child() ``` ## Inheritance Behavior Child loggers inherit: 1. Configuration from the parent 2. Context data (as a shallow copy by default, or shared reference if configured) 3. Plugins ### Configuration Inheritance All configuration options are inherited from the parent: ```typescript const parentLog = new LogLayer({ transport: new ConsoleTransport({ logger: console }), contextFieldName: 'context', metadataFieldName: 'metadata', errorFieldName: 'error' }) // Child inherits all configuration const childLog = parentLog.child() ``` ### Context Inheritance Context inheritance behavior depends on the [Context Manager](/context-managers/) being used. By default, the [Default Context Manager](/context-managers/default) is used. 
When creating child loggers, the Default Context Manager will: 1. Copy the parent's context to the child logger at creation time 2. Maintain independent context after creation ```typescript parentLogger.withContext({ requestId: "123" }); const childLogger = parentLogger.child(); // Child inherits parent's context at creation via shallow-copy childLogger.info("Initial log"); // Includes requestId: "123" // Child can modify its context independently childLogger.withContext({ userId: "456" }); childLogger.info("User action"); // Includes requestId: "123" and userId: "456" // Parent's context remains unchanged parentLogger.info("Parent log"); // Only includes requestId: "123" ``` --- --- url: 'https://loglayer.dev/transports/configuration.md' description: Learn how to configure LogLayer transports --- # Transport Configuration All LogLayer transports share a common set of configuration options that control their behavior. These options are passed to the transport constructor when creating a new transport instance. ## Common Configuration Options ### Required Parameters None - all parameters are optional. ### Optional Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `id` | `string` | - | A unique identifier for the transport. If not provided, a random ID will be generated. This is used if you need to call getLoggerInstance() on the LogLayer instance | | `enabled` | `boolean` | `true` | If false, the transport will not send any logs to the logger. Useful for temporarily disabling a transport | | `consoleDebug` | `boolean` | `false` | If true, the transport will also log messages to the console. Useful for debugging transport behavior | | `level` | `"trace" \| "debug" \| "info" \| "warn" \| "error" \| "fatal"` | `"trace"` | Minimum log level to process. Messages below this level will be ignored. 
See [Log Level Hierarchy](/logging-api/adjusting-log-levels#log-level-hierarchy) | ## Example Usage Here's an example of configuring a transport with common options: ```typescript import { LogLayer } from 'loglayer' import { PinoTransport } from "@loglayer/transport-pino" import pino from 'pino' const pinoLogger = pino() const transport = new PinoTransport({ // Custom identifier for the transport id: 'main-pino-transport', // Your configured logger instance logger: pinoLogger, // Disable the transport temporarily enabled: process.env.NODE_ENV !== 'test', // Enable console debugging consoleDebug: process.env.DEBUG === 'true', // Set minimum log level (only process info and above) level: 'info' }) const log = new LogLayer({ transport }) ``` --- --- url: 'https://loglayer.dev/configuration.md' description: Learn how to configure LogLayer to customize its behavior --- # Configuration LogLayer can be configured with various options to customize its behavior. Here's a comprehensive guide to all available configuration options. ## Basic Configuration When creating a new LogLayer instance, you can pass a configuration object: ```typescript import { LogLayer, ConsoleTransport } from 'loglayer' const log = new LogLayer({ transport: new ConsoleTransport({ logger: console, }), // ... other options }) ``` ## Configuration Options ### Transport Configuration The `transport` option is the only required configuration. It specifies which logging library to use: ```typescript { // Can be a single transport or an array of transports transport: new ConsoleTransport({ logger: console, }) } ``` You can also pass an array of transports to the `transport` option. This is useful if you want to send logs to multiple destinations. ```typescript { transport: [ new ConsoleTransport({ logger: console }), new DatadogBrowserLogsTransport({ logger: datadogBrowserLogs })], } ``` For more transport options, see the [Transport Configuration](./transports/configuration) section. 
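When multiple transports are configured, each log call is delivered to every enabled transport. The fan-out can be sketched as follows; this is an illustration only, not LogLayer's internal code, and `SketchTransport` / `fanOut` are hypothetical names used for the sketch:

```typescript
// Illustrative sketch only -- not LogLayer's internal implementation.
type LogLevel = "trace" | "debug" | "info" | "warn" | "error" | "fatal";

// Hypothetical minimal transport shape for this sketch.
interface SketchTransport {
  enabled: boolean;
  ship: (level: LogLevel, message: string) => void;
}

// Deliver one log entry to every enabled transport, mirroring how an
// array passed to the `transport` option sends logs to multiple destinations.
function fanOut(
  transports: SketchTransport[],
  level: LogLevel,
  message: string
): number {
  let delivered = 0;
  for (const t of transports) {
    if (!t.enabled) continue; // transports with `enabled: false` are skipped
    t.ship(level, message);
    delivered++;
  }
  return delivered;
}

// Usage: the same entry reaches both a console sink and an in-memory sink.
const memorySink: string[] = [];
fanOut(
  [
    { enabled: true, ship: (lvl, msg) => console.log(`[${lvl}] ${msg}`) },
    { enabled: true, ship: (_lvl, msg) => { memorySink.push(msg); } },
    { enabled: false, ship: () => { throw new Error("disabled, never called"); } },
  ],
  "info",
  "Hello from LogLayer!"
);
```

Combined with the per-transport `enabled` and `level` options described above, this is how a single `log.info()` call can reach the console, a file, and a cloud provider at once.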
### Message Prefixing You can add a prefix to all log messages: ```typescript { // Will prepend "[MyApp]" to all log messages prefix: '[MyApp]' } ``` ### Logging Control Control whether logging is enabled: ```typescript { // Set to false to disable all logging (default: true) enabled: true } ``` See the [Enabling/Disabling Logging](./logging-api/basic-logging#enabling-disabling-logging) section for more details. ### Debugging If you're implementing a transport, you can set the `consoleDebug` option to `true` to output to the console before sending to the logging library: ```typescript { // Useful for debugging - will output to console before sending to logging library consoleDebug: true } ``` This is useful when: * Debugging why logs aren't appearing in your logging library * Verifying the data being sent to the logging library * Testing log formatting ### Error Handling Configuration Configure how errors are handled and serialized: ```typescript { // Function to transform Error objects (useful if logging library doesn't handle errors well) errorSerializer: (err) => ({ message: err.message, stack: err.stack }), // Field name for errors (default: 'err') errorFieldName: 'err', // Copy error.message to log message when using errorOnly() (default: false) copyMsgOnOnlyError: true, // Include error in metadata instead of root level (default: false) errorFieldInMetadata: false } ``` #### Recommended Error Serializer For production applications, we recommend using the [`serialize-error`](https://www.npmjs.com/package/serialize-error) package as your error serializer. This package properly serializes Error objects including nested errors, circular references, and non-enumerable properties. 
**Installation:** ::: code-group ```sh [npm] npm install serialize-error ``` ```sh [yarn] yarn add serialize-error ``` ```sh [pnpm] pnpm add serialize-error ``` ::: **Usage:** ```typescript import { serializeError } from 'serialize-error' const log = new LogLayer({ errorSerializer: serializeError, transport: new ConsoleTransport({ logger: console }), }) ``` ### Data Structure Configuration ::: tip See [error handling configuration](#error-handling-configuration) for configuring the error field name and placement. ::: Control how context and metadata are structured in log output: ```typescript { // Put context data in a specific field (default: flattened) contextFieldName: 'context', // Put metadata in a specific field (default: flattened) metadataFieldName: 'metadata', // Disable context/metadata in log output muteContext: false, muteMetadata: false } ``` Example output with field names configured: ```json { "level": 30, "time": 1638138422796, "msg": "User logged in", "context": { "requestId": "123" }, "metadata": { "userId": "456" } } ``` Example output with flattened fields (default): ```json { "level": 30, "time": 1638138422796, "msg": "User logged in", "requestId": "123", "userId": "456" } ``` ### Plugin System Plugins are used to modify logging behavior. See the [Plugins](./plugins/index) section for more information. ## Retrieving Configuration You can retrieve the current configuration using the `getConfig()` method: ```typescript const log = new LogLayer({ transport: new ConsoleTransport({ logger: console }), prefix: '[MyApp]', enabled: true, }) const config = log.getConfig() // Returns the configuration object used to initialize the logger ``` This method returns the complete configuration object, including any default values that were applied during initialization. 
## Complete Configuration Example Here's an example showing all configuration options: ```typescript const log = new LogLayer({ // Required: Transport configuration transport: new ConsoleTransport({ logger: console, }), // Optional configurations prefix: '[MyApp]', enabled: true, consoleDebug: false, // Error handling errorSerializer: (err) => ({ message: err.message, stack: err.stack }), errorFieldName: 'error', copyMsgOnOnlyError: true, errorFieldInMetadata: false, // Data structure contextFieldName: 'context', metadataFieldName: 'metadata', muteContext: false, muteMetadata: false, // Plugins plugins: [ { id: 'timestamp-plugin', onBeforeDataOut: ({ data }) => { if (data) { data.timestamp = Date.now() } return data } } ] }) ``` --- --- url: 'https://loglayer.dev/transports/consola.md' description: Send logs to Consola with the LogLayer logging library --- # Consola Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-consola)](https://www.npmjs.com/package/@loglayer/transport-consola) [Consola](https://github.com/unjs/consola) is an elegant and configurable console logger for Node.js and browser. 
[Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/consola) ## Important Notes * The default log level is `3` which excludes `debug` and `trace` * Set level to `5` to enable all log levels * Consola provides additional methods not available through LogLayer ## Installation Install the required packages: ::: code-group ```sh [npm] npm i loglayer @loglayer/transport-consola consola ``` ```sh [pnpm] pnpm add loglayer @loglayer/transport-consola consola ``` ```sh [yarn] yarn add loglayer @loglayer/transport-consola consola ``` ::: ## Setup ```typescript import { createConsola } from 'consola' import { LogLayer } from 'loglayer' import { ConsolaTransport } from "@loglayer/transport-consola" const log = new LogLayer({ transport: new ConsolaTransport({ logger: createConsola({ level: 5 // Enable all log levels }) }) }) ``` ## Log Level Mapping | LogLayer | Consola | |----------|---------| | trace | trace | | debug | debug | | info | info | | warn | warn | | error | error | | fatal | fatal | ## Changelog View the changelog [here](./changelogs/consola-changelog.md). --- --- url: 'https://loglayer.dev/transports/console.md' description: Send logs to the console with the LogLayer logging library --- # Console Transport The simplest integration is with the built-in `console` object, which is available in both Node.js and browser environments. 
[Transport Source](https://github.com/loglayer/loglayer/blob/master/packages/core/loglayer/src/transports/ConsoleTransport.ts) ## Installation No additional packages needed beyond the core `loglayer` package: ::: code-group ```sh [npm] npm i loglayer ``` ```sh [pnpm] pnpm add loglayer ``` ```sh [yarn] yarn add loglayer ``` ::: ## Setup ```typescript import { LogLayer, ConsoleTransport } from 'loglayer' const log = new LogLayer({ transport: new ConsoleTransport({ logger: console, // Optional: control where object data appears in log messages appendObjectData: false // default: false - object data appears first }) }) ``` ## Configuration Options ### Required Parameters None - all parameters are optional. ### Optional Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `level` | `"trace" \| "debug" \| "info" \| "warn" \| "error" \| "fatal"` | `"trace"` | Sets the minimum log level to process. Messages with a lower priority level will be ignored | | `appendObjectData` | `boolean` | `false` | Controls where object data (metadata, context, errors) appears in the log messages. `false`: Object data appears as the first parameter. `true`: Object data appears as the last parameter. Has no effect if `messageField` is defined | | `messageField` | `string` | - | If defined, places the message into the specified field in the log object, joins multi-parameter messages with a space (use the sprintf plugin for formatted messages), and only logs the object to the console | | `dateField` | `string` | - | If defined, populates the field with the ISO date and adds it as an additional parameter to the console call. If `dateFn` is defined, will call `dateFn` to derive the date | | `levelField` | `string` | - | If defined, populates the field with the log level and adds it as an additional parameter to the console call. 
If `levelFn` is defined, will call `levelFn` to derive the level | | `dateFn` | `() => string \| number` | - | If defined, a function that returns a string or number for the value to be used for the `dateField` | | `levelFn` | `(logLevel: LogLevelType) => string \| number` | - | If defined, a function that returns a string or number for a given log level. The input should be the logLevel | | `stringify` | `boolean` | `false` | If true, applies JSON.stringify to the structured log output when messageField, dateField, or levelField is defined | | `messageFn` | `(params: LogLayerTransportParams) => string` | - | Custom function to format the log message output. Receives log level, messages, and data; returns the formatted string | ### Examples #### Level Configuration ```typescript const log = new LogLayer({ transport: new ConsoleTransport({ logger: console, level: "info" // Will only process info, warn, error, and fatal logs }) }); log.debug('This message will be ignored'); log.info('This message will be logged'); ``` #### Object Data Positioning ```typescript // appendObjectData: false (default) - data appears first const log = new LogLayer({ transport: new ConsoleTransport({ logger: console, appendObjectData: false }) }); log.withMetadata({ user: 'john' }).info('User logged in'); // console.info({ user: 'john' }, 'User logged in') ``` ```typescript // appendObjectData: true - data appears last const log = new LogLayer({ transport: new ConsoleTransport({ logger: console, appendObjectData: true }) }); log.withMetadata({ user: 'john' }).info('User logged in'); // console.info('User logged in', { user: 'john' }) ``` #### Message Field ```typescript const log = new LogLayer({ transport: new ConsoleTransport({ logger: console, messageField: 'msg' }) }); log.withMetadata({ user: 'john' }).info('User logged in', 'successfully'); // console.info({ user: 'john', msg: 'User logged in successfully' }) ``` #### Date Field ```typescript const log = new LogLayer({ transport: new 
ConsoleTransport({ logger: console, dateField: 'timestamp' }) }); log.info('User logged in'); // console.info('User logged in', { timestamp: '2023-12-01T10:30:00.000Z' }) ``` #### Level Field ```typescript const log = new LogLayer({ transport: new ConsoleTransport({ logger: console, levelField: 'level' }) }); log.warn('User session expired'); // console.warn('User session expired', { level: 'warn' }) ``` #### Custom Date Function ```typescript const log = new LogLayer({ transport: new ConsoleTransport({ logger: console, dateField: 'timestamp', dateFn: () => Date.now() // Returns Unix timestamp }) }); log.info('User logged in'); // console.info('User logged in', { timestamp: 1701437400000 }) ``` #### Custom Level Function ```typescript const log = new LogLayer({ transport: new ConsoleTransport({ logger: console, levelField: 'level', levelFn: (level) => level.toUpperCase() }) }); log.warn('User session expired'); // console.warn('User session expired', { level: 'WARN' }) ``` #### Numeric Level Mapping ```typescript const log = new LogLayer({ transport: new ConsoleTransport({ logger: console, levelField: 'level', levelFn: (level) => { const levels = { trace: 10, debug: 20, info: 30, warn: 40, error: 50, fatal: 60 }; return levels[level as keyof typeof levels] || 0; } }) }); log.error('Database connection failed'); // console.error('Database connection failed', { level: 50 }) ``` #### Stringify Output ```typescript const log = new LogLayer({ transport: new ConsoleTransport({ logger: console, messageField: 'msg', dateField: 'timestamp', levelField: 'level', stringify: true }) }); log.withMetadata({ user: 'john' }).info('User logged in'); // console.info('{"user":"john","msg":"User logged in","timestamp":"2023-12-01T10:30:00.000Z","level":"info"}') ``` #### Custom Message Formatting ```typescript import { LogLayer, ConsoleTransport } from 'loglayer'; import type { LogLayerTransportParams } from 'loglayer'; const log = new LogLayer({ transport: new ConsoleTransport({ 
logger: console, messageFn: ({ logLevel, messages }: LogLayerTransportParams) => { return `[${logLevel.toUpperCase()}] ${messages.join(' ')}`; } }) }); log.info('User logged in'); // console.info('[INFO] User logged in') log.warn('Connection unstable'); // console.warn('[WARN] Connection unstable') ``` ::: tip Prefix behavior If you use `withPrefix()`, the prefix is applied to the messages before they reach `messageFn`. For example, `log.withPrefix('[MyApp]').info('Hello')` would pass `messages: ['[MyApp] Hello']` to your `messageFn`. ::: ## Structured Logging You can combine multiple fields to create structured log objects: ```typescript const log = new LogLayer({ transport: new ConsoleTransport({ logger: console, messageField: 'msg', dateField: 'timestamp', levelField: 'level' }) }); log.withMetadata({ user: 'john' }).info('User logged in'); // console.info({ // user: 'john', // msg: 'User logged in', // timestamp: '2023-12-01T10:30:00.000Z', // level: 'info' // }) ``` ## Log Level Mapping | LogLayer | Console | |----------|-----------| | trace | debug | | debug | debug | | info | info | | warn | warn | | error | error | | fatal | error | --- --- url: 'https://loglayer.dev/logging-api/context.md' description: Learn how to create logs with context data in LogLayer --- # Logging with Context Context allows you to add persistent data that will be included with every log message. This is particularly useful for adding request IDs, user information, or any other data that should be present across multiple log entries. ::: info Message field name The output examples use `msg` as the message field. The name of this field may vary depending on the logging library you are using. In the `console` logger, this field does not exist by default, and the message is printed directly. However, you can configure the console transport to use a message field - see the [Console Transport](/transports/console) documentation for more details. 
::: ## Adding Context Use the `withContext` method to add context data: ```typescript log.withContext({ requestId: '123', userId: 'user_456' }) // Context will be included in all subsequent log messages log.info('Processing request') log.warn('User quota exceeded') ``` By default, context data is flattened into the root of the log object: ```json { "msg": "Processing request", "requestId": "123", "userId": "user_456" } ``` ::: warning Clearing context Passing an empty value (`null`, `undefined`, or an empty object) to `withContext` will *not* clear the context; it does nothing. Use `clearContext()` to remove all context data, or `clearContext(keys)` to remove specific keys. ::: ## Context Behavior / Management How the context behaves between a parent and child logger is defined by the [Context Manager](/context-managers/) being used. By default, the [Default Context Manager](/context-managers/default) is used for managing context when creating an instance of LogLayer. When creating child loggers, the Default Context Manager will: 1. Copy the parent's context to the child logger at creation time 2. Maintain independent context after creation ```typescript parentLogger.withContext({ requestId: "123" }); const childLogger = parentLogger.child(); // Child inherits parent's context at creation via shallow-copy childLogger.info("Initial log"); // Includes requestId: "123" // Child can modify its context independently childLogger.withContext({ userId: "456" }); childLogger.info("User action"); // Includes requestId: "123" and userId: "456" // Parent's context remains unchanged parentLogger.info("Parent log"); // Only includes requestId: "123" ``` ::: tip Altering context behavior You can create custom context managers to define how context data should be stored and retrieved, and how it behaves between parent and child loggers. This allows you to implement custom logic for context propagation, isolation, or any other specific requirements you may have. 
See [Creating Context Managers](/context-managers/creating-context-managers) for more details. ::: ## Structuring Context ### Using a Dedicated Context Field You can configure LogLayer to place context data in a dedicated field by setting the `contextFieldName` option: ```typescript const log = new LogLayer({ contextFieldName: 'context' }) log.withContext({ requestId: '123', userId: 'user_456' }).info('Processing request') ``` This produces: ```json { "msg": "Processing request", "context": { "requestId": "123", "userId": "user_456" } } ``` ### Combining Context and Metadata Fields If you set the same field name for both context and metadata, they will be merged: ```typescript const log = new LogLayer({ contextFieldName: 'data', metadataFieldName: 'data', }) log.withContext({ requestId: '123' }) .withMetadata({ duration: 1500 }) .info('Request completed') ``` This produces: ```json { "msg": "Request completed", "data": { "requestId": "123", "duration": 1500 } } ``` ## Managing Context ### Getting Current Context You can retrieve the current context data: ```typescript log.withContext({ requestId: '123' }) const context = log.getContext() // Returns: { requestId: '123' } ``` ### Clearing Context You can clear all context data: ```typescript log.clearContext() ``` Or clear specific keys by passing a string or an array of strings: ```typescript log.withContext({ requestId: '123', userId: 'user_456', sessionId: 'sess_789' }) // Clear a single key log.clearContext('userId') // Context now: { requestId: '123', sessionId: 'sess_789' } // Clear multiple keys log.clearContext(['requestId', 'sessionId']) // Context now: {} ``` The method supports chaining: ```typescript log.withContext({ a: 1, b: 2, c: 3 }) .clearContext('a') .info('Only b and c in context') ``` ### Muting Context You can temporarily disable context logging: ```typescript // Via configuration const log = new LogLayer({ muteContext: true, }) // Or via methods log.muteContext() // Disable context 
log.unMuteContext() // Re-enable context ``` This is useful for development or troubleshooting when you want to reduce log verbosity. ## Combining Context with Other Features ### With Errors Context data is included when logging errors: ```typescript log.withContext({ requestId: '123' }) .withError(new Error('Not found')) .error('Failed to fetch user') ``` ### With Metadata Context can be combined with per-message metadata: ```typescript log.withContext({ requestId: '123' }) .withMetadata({ userId: 'user_456' }) .info('User logged in') ``` --- --- url: 'https://loglayer.dev/context-managers.md' description: Learn how to create and use context managers with LogLayer --- # Context Managers *New in LogLayer v6*. Context managers in LogLayer are responsible for managing contextual data that gets included with log entries. They provide a way to store and retrieve context data that will be automatically included with every log message. ::: tip Do you need to specify a context manager? Context managers are an advanced feature of LogLayer. Unless you need to manage context data in a specific way, you can use the default context manager, which is already automatically used when creating a new LogLayer instance. 
::: ### Available Context Managers | Name | Package | Description | |------|---------|-------------------------------------------------------------------------------------------------| | [Default](/context-managers/default) | [![npm](https://img.shields.io/npm/v/@loglayer/context-manager)](https://www.npmjs.com/package/@loglayer/context-manager) | Default built-in context manager that copies context from parent to child on child log creation | | [Isolated](/context-managers/isolated) | [![npm](https://img.shields.io/npm/v/@loglayer/context-manager-isolated)](https://www.npmjs.com/package/@loglayer/context-manager-isolated) | Context manager that does not copy context from parent to child on child log creation | | [Linked](/context-managers/linked) | [![npm](https://img.shields.io/npm/v/@loglayer/context-manager-linked)](https://www.npmjs.com/package/@loglayer/context-manager-linked) | Context manager that keeps context synchronized between parent and all children | ## Context Manager Management ### Using a custom context manager You can set a custom context manager using the `withContextManager()` method. Example usage: ```typescript import { MyCustomContextManager } from './MyCustomContextManager'; const logger = new LogLayer() .withContextManager(new MyCustomContextManager()); ``` ::: tip Use the `withContextManager()` method right after creating the LogLayer instance. Using it after the context has already been set will drop the existing context data. 
::: ### Obtaining the current context manager You can get the current context manager instance using the `getContextManager()` method: ```typescript const contextManager = logger.getContextManager(); ``` You can also type the return value when getting a specific context manager implementation: ```typescript const linkedContextManager = logger.getContextManager<LinkedContextManager>(); ``` --- --- url: 'https://loglayer.dev/context-managers/creating-context-managers.md' description: Learn how to create a custom context manager for LogLayer --- # Creating Context Managers ::: warning Using async libraries LogLayer is a synchronous library, so context managers must perform synchronous operations only. Integrations that use promises, callbacks, or other asynchronous patterns to set and fetch context data are not supported or recommended unless you are making those calls out-of-band for other reasons. ::: ## The IContextManager Interface To create a custom context manager, you'll first need to install the base package: ::: code-group ```bash [npm] npm install @loglayer/context-manager ``` ```bash [yarn] yarn add @loglayer/context-manager ``` ```bash [pnpm] pnpm add @loglayer/context-manager ``` ::: Then implement the `IContextManager` interface: ```typescript import type { IContextManager, ILogLayer } from '@loglayer/context-manager'; interface OnChildLoggerCreatedParams { /** * The parent logger instance */ parentLogger: ILogLayer; /** * The child logger instance */ childLogger: ILogLayer; /** * The parent logger's context manager */ parentContextManager: IContextManager; /** * The child logger's context manager */ childContextManager: IContextManager; } interface IContextManager { // Sets the context data. Set to undefined to clear the context.
setContext(context?: Record<string, any>): void; // Appends context data to existing context appendContext(context: Record<string, any>): void; // Returns the current context data getContext(): Record<string, any>; // Returns true if there is context data present hasContextData(): boolean; // Clears context data. If keys provided, only those keys are removed. clearContext(keys?: string | string[]): void; // Called when a child logger is created onChildLoggerCreated(params: OnChildLoggerCreatedParams): void; // Creates a new instance with the same context data clone(): IContextManager; } ``` ## Context Manager Lifecycle When using a context manager with a LogLayer logger instance: * When the logger is first created, the [Default Context Manager](/context-managers/default) is automatically attached to it * The context manager is attached to a logger using [`withContextManager()`](/context-managers/#using-a-custom-context-manager) * If the existing context manager implements [`Disposable`](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-5-2.html#using-declarations-and-explicit-resource-management), its `[Symbol.dispose]()` method will be called to clean up resources * When `withContext()` is called on the logger it calls `appendContext()` on the context manager * When `clearContext()` is called on the logger it calls `clearContext()` on the context manager, optionally with keys to remove * When a child logger is created: * `clone()` is called on the parent's context manager and the cloned context manager is attached to the child logger * `onChildLoggerCreated()` is called on the parent's context manager * When LogLayer needs to obtain context data, it first calls `hasContextData()` to check if context is present, then calls `getContext()` to get the context data if it is.
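Putting the interface and lifecycle together, here is a minimal in-memory sketch. It mirrors the default copy-on-child behavior but is illustrative only, not the actual Default Context Manager; the `implements IContextManager` clause and the package's parameter types are omitted so the snippet stays self-contained.

```typescript
// Minimal illustrative context manager; not the actual Default Context Manager.
type Context = Record<string, any>;

class InMemoryContextManager {
  private context: Context = {};
  private hasContext = false;

  // Replaces the context entirely; undefined clears it
  setContext(context?: Context): void {
    this.context = context ? { ...context } : {};
    this.hasContext = Object.keys(this.context).length > 0;
  }

  // Called when withContext() is invoked on the logger
  appendContext(context: Context): void {
    this.context = { ...this.context, ...context };
    this.hasContext = true;
  }

  getContext(): Context {
    return this.context;
  }

  hasContextData(): boolean {
    return this.hasContext;
  }

  // Called when clearContext() is invoked; removes given keys or everything
  clearContext(keys?: string | string[]): void {
    if (keys === undefined) {
      this.context = {};
    } else {
      for (const key of Array.isArray(keys) ? keys : [keys]) {
        delete this.context[key];
      }
    }
    this.hasContext = Object.keys(this.context).length > 0;
  }

  // clone() already copied the context to the child, so Default-style
  // inheritance needs nothing extra here
  onChildLoggerCreated(): void {}

  // Called when a child logger is created, before onChildLoggerCreated()
  clone(): InMemoryContextManager {
    const copy = new InMemoryContextManager();
    copy.setContext({ ...this.context });
    return copy;
  }
}
```

Because `clone()` performs a shallow copy, the child's later `appendContext()` calls do not leak back into the parent, matching the Default Context Manager behavior described earlier.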
## Resource Cleanup with Disposable If your context manager needs to clean up resources (like file handles, memory, or external connections), you can implement the [`Disposable`](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-5-2.html#using-declarations-and-explicit-resource-management) interface. LogLayer will automatically call the dispose method, if defined, when the context manager is replaced via `withContextManager()`. ### Implementing Disposable To make your context manager disposable: 1. Add `Disposable` to your class implementation 2. Implement the `[Symbol.dispose]()` method 3. Add a flag to track the disposed state 4. Guard your methods against calls after disposal Here's an example: ```typescript export class MyContextManager implements IContextManager, Disposable { private isDisposed = false; private hasContext = false; private context: Record<string, any> = {}; private someResource: any; // ... other methods ... hasContextData(): boolean { if (this.isDisposed) return false; return this.hasContext; } setContext(context?: Record<string, any>): void { if (this.isDisposed) return; // Implementation } getContext(): Record<string, any> { if (this.isDisposed) return {}; return this.context; } [Symbol.dispose](): void { if (this.isDisposed) return; // Clean up resources this.someResource?.close(); this.context = {}; this.isDisposed = true; } } ``` :::tip Always implement `Disposable` if your context manager holds onto resources that need cleanup. This ensures proper resource management and prevents memory leaks. ::: ## Example Implementation Here's an example of a simple file-based context manager that saves context to a file. ::: warning Don't try this at home This example is for educational purposes only; it has a significant performance impact and possible race conditions in actual usage.
::: ```typescript import { openSync, closeSync, readSync, writeSync, fstatSync } from 'node:fs'; import type { IContextManager, OnChildLoggerCreatedParams } from '@loglayer/context-manager'; /** * Example context manager that persists context to a file using a file handle. * Implements both IContextManager for context management and Disposable for cleanup. * * This example demonstrates proper resource cleanup by maintaining an open file handle * that needs to be properly closed when the context manager is disposed. */ export class FileContextManager implements IContextManager, Disposable { // In-memory storage of context data private context: Record<string, any> = {}; // Flag to track if we have any context data private hasContext = false; // Path to the file where context is persisted private filePath: string; // File handle for persistent storage private fileHandle: number | null = null; // Flag to track if this manager has been disposed private isDisposed = false; constructor(filePath: string) { this.filePath = filePath; // Open file handle in read/write mode, create if doesn't exist try { this.fileHandle = openSync(filePath, 'a+'); this.loadContext(); } catch (err) { // Handle error gracefully - continue with empty context this.context = {}; this.hasContext = false; } } /** * Loads context from the file system into memory using the file handle. * Called during initialization and after file changes.
*/ private loadContext() { if (this.isDisposed || this.fileHandle === null) return; try { // Get file size const stats = fstatSync(this.fileHandle); if (stats.size === 0) { this.context = {}; this.hasContext = false; return; } // Read entire file content const buffer = Buffer.alloc(stats.size); readSync(this.fileHandle, buffer, 0, stats.size, 0); // Parse content const data = buffer.toString('utf8'); this.context = JSON.parse(data); this.hasContext = Object.keys(this.context).length > 0; } catch (err) { // Handle error gracefully - initialize empty context this.context = {}; this.hasContext = false; } } /** * Persists the current in-memory context to the file system using the file handle. * Called after any context modifications. */ private saveContext() { if (this.isDisposed || this.fileHandle === null) return; try { const data = JSON.stringify(this.context); const buffer = Buffer.from(data); // Write from the start of the file (note: without truncating first, shorter data leaves stale bytes behind) writeSync(this.fileHandle, buffer, 0, buffer.length, 0); } catch (err) { // Handle error gracefully - continue with in-memory context } } /** * Sets the entire context, replacing any existing context. * Passing undefined clears the context. */ setContext(context?: Record<string, any>): void { if (this.isDisposed) return; if (!context) { this.context = {}; this.hasContext = false; } else { this.context = { ...context }; this.hasContext = true; } this.saveContext(); } /** * Merges new context data with existing context. * Any matching keys will be overwritten with new values. */ appendContext(context: Record<string, any>): void { if (this.isDisposed) return; this.context = { ...this.context, ...context }; this.hasContext = true; this.saveContext(); } /** * Returns the current context data. * Returns empty object if disposed. */ getContext(): Record<string, any> { if (this.isDisposed) return {}; return this.context; } /** * Checks if there is any context data present. * Returns false if disposed.
*/ hasContextData(): boolean { if (this.isDisposed) return false; return this.hasContext; } /** * Clears context data. If keys are provided, only those keys are removed. * If no keys are provided, all context is cleared. */ clearContext(keys?: string | string[]): void { if (this.isDisposed) return; if (keys === undefined) { this.context = {}; this.hasContext = false; } else { const keysToRemove = Array.isArray(keys) ? keys : [keys]; for (const key of keysToRemove) { delete this.context[key]; } this.hasContext = Object.keys(this.context).length > 0; } this.saveContext(); } /** * Called when a child logger is created to handle context inheritance. * Copies parent context to child if parent has context data. */ onChildLoggerCreated({ parentContextManager, childContextManager }: OnChildLoggerCreatedParams): void { if (this.isDisposed) return; // Copy parent context to child if parent has context if (parentContextManager.hasContextData()) { const parentContext = parentContextManager.getContext(); childContextManager.setContext({ ...parentContext }); } } /** * Creates a new instance with a copy of the current context. * Note: This implementation most likely has issues since the same file is being manipulated. * This could potentially introduce a race condition when this method is called via child() */ clone(): IContextManager { return new FileContextManager(this.filePath); } /** * Implements the Disposable interface for cleanup. * Properly closes the file handle and cleans up memory resources. * This is critical to prevent file handle leaks in the operating system. 
*/ [Symbol.dispose](): void { if (this.isDisposed) return; // Clean up in-memory resources this.context = {}; this.hasContext = false; // Close the file handle if it's open if (this.fileHandle !== null) { try { closeSync(this.fileHandle); } catch (err) { // Handle cleanup error gracefully } this.fileHandle = null; } this.isDisposed = true; } } ``` You can use this context manager like this: ```typescript import { LogLayer } from 'loglayer'; import { FileContextManager } from './FileContextManager'; // The context manager will maintain an open file handle until disposed const logger = new LogLayer() .withContextManager(new FileContextManager('./context.json')); logger.withContext({ user: 'alice' }); logger.info('User logged in'); // Will include { user: 'alice' } in context ``` ## Best Practices When implementing a context manager: * Make all operations synchronous * Handle errors gracefully without throwing exceptions * Implement proper cleanup in stateful context managers with `Disposable` --- --- url: 'https://loglayer.dev/log-level-managers/creating-log-level-managers.md' description: Learn how to create a custom log level manager for LogLayer --- # Creating Log Level Managers ## Installation To create a custom log level manager, you'll first need to install the base package: ::: code-group ```bash [npm] npm install @loglayer/log-level-manager ``` ```bash [yarn] yarn add @loglayer/log-level-manager ``` ```bash [pnpm] pnpm add @loglayer/log-level-manager ``` ::: ## Understanding Log Level Hierarchy ::: warning Log Level Hierarchy Log level managers must follow the [log level hierarchy](/logging-api/adjusting-log-levels#log-level-hierarchy) described in the Adjusting Log Levels documentation. The hierarchy defines the priority of log levels, with higher numeric values indicating higher severity. You can import `LogLevel`, `LogLevelPriority`, and `LogLevelPriorityToNames` from `@loglayer/log-level-manager` to work with the hierarchy in your implementation. 
::: ### Log Level Priority Table The following table shows the log level hierarchy with their priority values: | Log Level | Priority Value | Description | |-----------|----------------|-------------| | `trace` | 10 | Lowest priority, most verbose | | `debug` | 20 | Debug information | | `info` | 30 | Informational messages | | `warn` | 40 | Warning messages | | `error` | 50 | Error messages | | `fatal` | 60 | Highest priority, most critical | ::: tip Understanding Priority Values When setting a log level, all levels with priority **greater than or equal to** the set level are enabled. For example, if you set the level to `warn` (priority 40), all levels with priority >= 40 (warn, error, fatal) will be enabled, and all levels with priority < 40 (trace, debug, info) will be disabled. ::: ### Log Level Priority Mappings The `LogLevelPriority` mapping provides numeric values for each log level. Note that in the code, higher numeric values indicate higher priority (fatal=60 is highest, trace=10 is lowest): ```typescript import { LogLevelPriority } from '@loglayer/log-level-manager'; // LogLevelPriority structure: { [LogLevel.trace]: 10, [LogLevel.debug]: 20, [LogLevel.info]: 30, [LogLevel.warn]: 40, [LogLevel.error]: 50, [LogLevel.fatal]: 60, } ``` The `LogLevelPriorityToNames` mapping provides the reverse lookup, mapping numeric values back to log level names: ```typescript import { LogLevelPriorityToNames } from '@loglayer/log-level-manager'; // LogLevelPriorityToNames structure: { 10: LogLevel.trace, 20: LogLevel.debug, 30: LogLevel.info, 40: LogLevel.warn, 50: LogLevel.error, 60: LogLevel.fatal, } ``` When implementing `setLevel()`, you'll typically use `LogLevelPriority` to determine which levels should be enabled based on the hierarchy. For example, if you set the level to `warn` (priority 40), all levels with priority >= 40 (warn, error, fatal) should be enabled, and all levels with priority < 40 (trace, debug, info) should be disabled. 
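The "priority >= minimum" rule can be seen in isolation with a small standalone function. This is an illustration, not the library's internal implementation; the priority map below simply mirrors the documented `LogLevelPriority` values:

```typescript
// Priority values mirroring the documented LogLevelPriority table
const priority: Record<string, number> = {
  trace: 10,
  debug: 20,
  info: 30,
  warn: 40,
  error: 50,
  fatal: 60,
};

// A level is enabled when its priority is >= the configured minimum level's priority
function isEnabledFor(minLevel: string, level: string): boolean {
  return priority[level] >= priority[minLevel];
}

console.log(isEnabledFor("warn", "error")); // true  (50 >= 40)
console.log(isEnabledFor("warn", "info"));  // false (30 < 40)
```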
## The ILogLevelManager Interface Then implement the `ILogLevelManager` interface: ```typescript import type { ILogLevelManager, ILogLayer, LogLevelType, OnChildLogLevelManagerCreatedParams } from '@loglayer/log-level-manager'; import { LogLevel, LogLevelPriority } from '@loglayer/log-level-manager'; interface ILogLevelManager { /** * Sets the minimum log level to be used by the logger. * * **When triggered:** Called when `logger.setLevel()` is invoked on a LogLayer instance. */ setLevel(logLevel: LogLevelType): void; /** * Enables a specific log level. * * **When triggered:** Called when `logger.enableIndividualLevel()` is invoked on a LogLayer instance. */ enableIndividualLevel(logLevel: LogLevelType): void; /** * Disables a specific log level. * * **When triggered:** Called when `logger.disableIndividualLevel()` is invoked on a LogLayer instance. */ disableIndividualLevel(logLevel: LogLevelType): void; /** * Checks if a specific log level is enabled. * * **When triggered:** Called before every log method execution (e.g., `info()`, `warn()`, `error()`, `debug()`, `trace()`, `fatal()`, `raw()`, `metadataOnly()`, `errorOnly()`) to determine if the log should be processed. Also called when `logger.isLevelEnabled()` is invoked directly. */ isLevelEnabled(logLevel: LogLevelType): boolean; /** * Enable sending logs to the logging library. * * **When triggered:** Called when `logger.enableLogging()` is invoked on a LogLayer instance. */ enableLogging(): void; /** * All logging inputs are dropped and stops sending logs to the logging library. * * **When triggered:** Called when `logger.disableLogging()` is invoked on a LogLayer instance, or when a LogLayer instance is created with `enabled: false` in the configuration. */ disableLogging(): void; /** * Called when a child logger is created. Use to manipulate log level settings between parent and child. 
* * **When triggered:** Called automatically when `logger.child()` is invoked, after the child logger is created and the parent's log level manager has been cloned. This allows the manager to establish relationships between parent and child loggers. */ onChildLoggerCreated(params: OnChildLogLevelManagerCreatedParams): void; /** * Creates a new instance of the log level manager with the same log level settings. * * **When triggered:** Called automatically when `logger.child()` is invoked to create a new log level manager instance for the child logger. The cloned instance should have the same initial log level state as the parent, but can be modified independently (unless the manager implements shared state behavior). */ clone(): ILogLevelManager; } ``` ## Log Level Manager Lifecycle When using a log level manager with a LogLayer logger instance: * The log level manager is initialized when the logger is created (or when `withLogLevelManager()` is called) * If the existing log level manager implements [`Disposable`](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-5-2.html#using-declarations-and-explicit-resource-management), its `[Symbol.dispose]()` method will be called to clean up resources * When a child logger is created via `child()`, the parent's log level manager is cloned * The `onChildLoggerCreated()` method is called to allow the manager to set up relationships between parent and child * Log level changes are managed through the manager's methods ## Resource Cleanup with Disposable If your log level manager needs to clean up resources (like parent-child references, memory, or external connections), you can implement the [`Disposable`](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-5-2.html#using-declarations-and-explicit-resource-management) interface. If the dispose method is defined, LogLayer will automatically call it when the log level manager is replaced using `withLogLevelManager()`.
### Implementing Disposable To make your log level manager disposable: 1. Add `Disposable` to your class implementation 2. Implement the `[Symbol.dispose]()` method 3. Add a flag to track the disposed state 4. Guard your methods against calls after disposal Here's an example: ```typescript export class MyLogLevelManager implements ILogLevelManager, Disposable { private isDisposed = false; private logLevelEnabledStatus: LogLevelEnabledStatus = { info: true, warn: true, error: true, debug: true, trace: true, fatal: true, }; private parentManager: WeakRef<MyLogLevelManager> | null = null; private childManagers: Set<WeakRef<MyLogLevelManager>> = new Set(); // ... other methods ... setLevel(logLevel: LogLevelType): void { if (this.isDisposed) return; // Implementation } isLevelEnabled(logLevel: LogLevelType): boolean { if (this.isDisposed) return false; // Implementation } [Symbol.dispose](): void { if (this.isDisposed) return; // Clean up resources this.parentManager = null; this.childManagers.clear(); this.isDisposed = true; } } ``` :::tip Always implement `Disposable` if your log level manager holds onto resources that need cleanup (like parent-child references). This ensures proper resource management and prevents memory leaks. ::: ## Example Implementation Here's an example of an isolated log level manager where children do not inherit log levels from their parent: ```typescript import type { ILogLevelManager, LogLevelType, OnChildLogLevelManagerCreatedParams } from '@loglayer/log-level-manager'; import { LogLevel, LogLevelPriority } from '@loglayer/log-level-manager'; interface LogLevelEnabledStatus { info: boolean; warn: boolean; error: boolean; debug: boolean; trace: boolean; fatal: boolean; } export class IsolatedLogLevelManager implements ILogLevelManager { // Track which log levels are enabled for this logger instance private logLevelEnabledStatus: LogLevelEnabledStatus = { info: true, warn: true, error: true, debug: true, trace: true, fatal: true, }; /** * Sets the minimum log level.
All levels at or above this level will be enabled, * and all levels below will be disabled. * * For example, setting to LogLevel.warn will enable warn, error, and fatal, * but disable trace, debug, and info. */ setLevel(logLevel: LogLevelType): void { // Get the numeric priority value for the specified log level // Higher values = higher severity (trace=10, debug=20, info=30, warn=40, error=50, fatal=60) const minLogValue = LogLevelPriority[logLevel as LogLevel]; // Iterate through all log levels and enable/disable based on hierarchy for (const level of Object.values(LogLevel)) { const levelKey = level as keyof LogLevelEnabledStatus; const levelValue = LogLevelPriority[level]; // Enable if the level's severity is >= the minimum (higher or equal severity) // Disable if the level's severity is < the minimum (lower severity) this.logLevelEnabledStatus[levelKey] = levelValue >= minLogValue; } } /** * Enables a specific log level, regardless of the hierarchy. * This allows fine-grained control over individual levels. */ enableIndividualLevel(logLevel: LogLevelType): void { const level = logLevel as keyof LogLevelEnabledStatus; if (level in this.logLevelEnabledStatus) { this.logLevelEnabledStatus[level] = true; } } /** * Disables a specific log level, regardless of the hierarchy. * This allows fine-grained control over individual levels. */ disableIndividualLevel(logLevel: LogLevelType): void { const level = logLevel as keyof LogLevelEnabledStatus; if (level in this.logLevelEnabledStatus) { this.logLevelEnabledStatus[level] = false; } } /** * Checks if a specific log level is currently enabled. * This is called before every log operation to determine if the log should be processed. */ isLevelEnabled(logLevel: LogLevelType): boolean { const level = logLevel as keyof LogLevelEnabledStatus; return this.logLevelEnabledStatus[level]; } /** * Enables all log levels. This allows all logs to be processed regardless of level. 
*/ enableLogging(): void { for (const level of Object.keys(this.logLevelEnabledStatus)) { this.logLevelEnabledStatus[level as keyof LogLevelEnabledStatus] = true; } } /** * Disables all log levels. This prevents all logs from being processed. */ disableLogging(): void { for (const level of Object.keys(this.logLevelEnabledStatus)) { this.logLevelEnabledStatus[level as keyof LogLevelEnabledStatus] = false; } } /** * Called when a child logger is created. * * For an isolated manager, we intentionally do NOT copy the parent's state. * This means children start with their own default state (all levels enabled) * and are completely independent from their parent. * * Compare this to DefaultLogLevelManager, which copies the parent's state * in this method to allow inheritance at creation time. */ onChildLoggerCreated(_params: OnChildLogLevelManagerCreatedParams): void { // Intentionally do nothing - children do not inherit from parent // The child will have its own default state (all levels enabled) // This is what makes this manager "isolated" } /** * Creates a new instance of the log level manager. * * For an isolated manager, we create a fresh instance with default state * (all levels enabled) rather than copying the current instance's state. * * This ensures that when a child logger is created, it starts with * all levels enabled, independent of the parent's configuration. 
*/ clone(): ILogLevelManager { // Create a new instance with default state (all levels enabled) // Children do not inherit the parent's log level state // This is the key difference from DefaultLogLevelManager return new IsolatedLogLevelManager(); } } ``` ## Using Your Custom Log Level Manager ```typescript import { LogLayer, ConsoleTransport, LogLevel } from "loglayer"; import { IsolatedLogLevelManager } from './IsolatedLogLevelManager'; const parentLog = new LogLayer({ transport: new ConsoleTransport({ logger: console }) }).withLogLevelManager(new IsolatedLogLevelManager()); // Set log level on parent parentLog.setLevel(LogLevel.warn); // Create child - it will NOT inherit parent's log level const childLog = parentLog.child(); // Child has all levels enabled by default (not inherited from parent) childLog.isLevelEnabled(LogLevel.info); // true (not inherited from parent) childLog.isLevelEnabled(LogLevel.warn); // true ``` --- --- url: 'https://loglayer.dev/transports/creating-transports.md' description: Learn how to create custom transports for LogLayer --- # Creating Transports To integrate a logging library with LogLayer, you must create a transport. A transport is a class that translates LogLayer's standardized logging format into the format expected by your target logging library or service. ## Quick Start: Blank Transport For most users who want to quickly create / prototype a transport, the [Blank Transport](/transports/blank-transport) is the easiest option. It's built into the core `loglayer` package and doesn't require any additional installation. The `BlankTransport` allows you to create a transport by simply providing a `shipToLogger` function: ```typescript import { LogLayer, BlankTransport } from 'loglayer' const log = new LogLayer({ transport: new BlankTransport({ shipToLogger: ({ logLevel, messages, data, hasData }) => { // Your custom logging logic here console.log(`[${logLevel}]`, ...messages, data && hasData ? 
data : ''); // Return value is used for debugging when consoleDebug is enabled return messages; } }) }) ``` The `BlankTransport` is perfect for: * Simple custom logging logic * Prototyping new transport ideas * Quick integrations with custom services * Testing and debugging For detailed documentation and examples, see the [Blank Transport documentation](/transports/blank-transport). ## Creating Full Transport Classes If you need to create a reusable transport library or require more complex functionality, you can create a full transport class by extending `BaseTransport` or `LoggerlessTransport`. ### Installation To implement a full transport class, you must install the `@loglayer/transport` package: ::: code-group ```bash [npm] npm install @loglayer/transport ``` ```bash [yarn] yarn add @loglayer/transport ``` ```bash [pnpm] pnpm add @loglayer/transport ``` ::: ### Implementing a Transport The key requirement for any transport is extending the `BaseTransport` or `LoggerlessTransport` class and implementing the `shipToLogger` method. This method is called by LogLayer whenever a log needs to be sent, and it's where you transform LogLayer's standardized format into the format your target library or service expects. The method must return an array (`any[]`) which is used for debugging purposes when `consoleDebug` is enabled. ## Types of Transports LogLayer supports three types of transports: ### Logger-Based Transports For libraries that follow a common logging interface with methods like `info()`, `warn()`, `error()`, `debug()`, etc., extend the `BaseTransport` class. The `BaseTransport` class provides a `logger` property where users pass in their logging library instance. 
It also supports level filtering to control which log levels are processed: ```typescript import { BaseTransport, LogLevel, type LogLayerTransportConfig, type LogLayerTransportParams, } from "@loglayer/transport"; export interface CustomLoggerTransportConfig extends LogLayerTransportConfig { // Add configuration options here if necessary } export class CustomLoggerTransport extends BaseTransport { constructor(config: CustomLoggerTransportConfig) { super(config); } shipToLogger({ logLevel, messages, data, hasData }: LogLayerTransportParams) { if (data && hasData) { // Most logging libraries expect data as first or last parameter messages.unshift(data); // or messages.push(data); } switch (logLevel) { case LogLevel.info: this.logger.info(...messages); break; case LogLevel.warn: this.logger.warn(...messages); break; case LogLevel.error: this.logger.error(...messages); break; // ... handle other log levels } // Return value is used for debugging when consoleDebug is enabled return messages; } } ``` To use this transport, you must provide a logger instance when creating it: ```typescript import { LogLayer } from 'loglayer'; import { YourLogger } from 'your-logger-library'; // Initialize your logging library const loggerInstance = new YourLogger(); // Create LogLayer instance with the transport const log = new LogLayer({ transport: new CustomLoggerTransport({ logger: loggerInstance, // Required: the logger instance is passed here level: 'info' // Optional: set minimum log level to process }) }); ``` ::: info All BaseTransport-based transports support an optional `level` parameter for filtering logs. This is handled automatically by the `BaseTransport` class - you don't need to implement any level filtering logic in your transport. ::: ### HTTP / Cloud Service Transports For services that have an HTTP API to ship logs to and do not provide an SDK, you can extend the `HTTPTransport` class using the [HTTP Transport](http.md).
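The logger-based dispatch shown above can be exercised without any logging library by stubbing a logger object that has the common interface. All names in this sketch are illustrative, and the `shipToLogger` logic is simplified to a plain function:

```typescript
// Record calls made to a stand-in logger with the common logging interface
const calls: string[] = [];
const stubLogger = {
  info: (...args: unknown[]) => calls.push(`info ${args.join(" ")}`),
  warn: (...args: unknown[]) => calls.push(`warn ${args.join(" ")}`),
  error: (...args: unknown[]) => calls.push(`error ${args.join(" ")}`),
};

// Simplified version of the shipToLogger dispatch from the transport example
function ship(
  logLevel: "info" | "warn" | "error",
  messages: string[],
  data?: Record<string, unknown>,
  hasData?: boolean,
): string[] {
  if (data && hasData) {
    messages.unshift(JSON.stringify(data)); // put data as the first parameter
  }
  stubLogger[logLevel](...messages);
  return messages; // return value is used for consoleDebug
}

ship("warn", ["disk almost full"], { freeMb: 12 }, true);
console.log(calls[0]); // warn {"freeMb":12} disk almost full
```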
### Loggerless Transports For services or libraries that don't follow the common logging interface (e.g., analytics services, monitoring tools), extend the `LoggerlessTransport` class. Unlike `BaseTransport`, `LoggerlessTransport` doesn't provide a `logger` property since these services typically don't require a logger instance. Instead, you'll usually initialize your service in the constructor: ::: info All loggerless transports have an optional `level` input as part of configuration. This is used by the `LoggerlessTransport` class to filter out logs that are below the specified level. You do not need to do any work around filtering based on level. ::: ```typescript import { LoggerlessTransport, type LogLayerTransportParams, type LoggerlessTransportConfig } from "@loglayer/transport"; export interface CustomServiceTransportConfig extends LoggerlessTransportConfig { // Add configuration options here if necessary } export class CustomServiceTransport extends LoggerlessTransport { private service: YourServiceType; constructor(config: CustomServiceTransportConfig) { super(config); this.service = new YourServiceType(config); } shipToLogger({ logLevel, messages, data, hasData }: LogLayerTransportParams) { const payload = { level: logLevel, message: messages.join(" "), timestamp: new Date().toISOString(), ...(data && hasData ? 
data : {}) }; // Send to your service this.service.send(payload); // Return value is used for debugging when consoleDebug is enabled return messages; } } ``` To use this transport, you only need to provide the configuration for your service: ```typescript import { LogLayer } from 'loglayer'; // Create LogLayer instance with the transport const log = new LogLayer({ transport: new CustomServiceTransport({ // No logger property needed, just your service configuration apiKey: 'your-api-key', endpoint: 'https://api.yourservice.com/logs' }) }); ``` ## `shipToLogger` Parameters LogLayer calls the `shipToLogger` method of a transport at the end of log processing to send the log to the target logging library. **Return Value**: The `shipToLogger` method must return an array (`any[]`). This return value is used for debugging purposes when `consoleDebug` is enabled - it will be logged to the console using the appropriate console method based on the log level. It receives a `LogLayerTransportParams` object with these fields: ```typescript interface LogLayerTransportParams { /** * The log level of the message */ logLevel: LogLevel; /** * The parameters that were passed to the log message method (eg: info / warn / debug / error) */ messages: any[]; /** * Combined object data containing the metadata, context, and / or error data in a * structured format configured by the user. */ data?: Record<string, any>; /** * If true, the data object is included in the message parameters */ hasData?: boolean; /** * Individual metadata object passed to the log message method. */ metadata?: Record<string, any>; /** * Error passed to the log message method. */ error?: any; /** * Context data that is included with each log entry. */ context?: Record<string, any>; } ``` ::: tip Message Parameters The `messages` parameter is an array because LogLayer supports multiple parameters for formatting. See the [Basic Logging](/logging-api/basic-logging.html#message-parameters) section for more details.
::: For example, if a user does the following: ```typescript logger.withMetadata({foo: 'bar'}).info('hello world', 'foo'); ``` The parameters passed to `shipToLogger` would be: ```typescript { logLevel: 'info', messages: ['hello world', 'foo'], data: {foo: 'bar'}, hasData: true } ``` ## Resource Cleanup with Disposable If your transport needs to clean up resources (like network connections, file handles, or external service connections), you can implement the [`Disposable`](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-5-2.html#using-declarations-and-explicit-resource-management) interface. LogLayer will automatically call the dispose method when: * The transport is removed using `removeTransport()` * The transport is replaced by another transport with the same ID using `addTransport()` * All transports are replaced using `withFreshTransports()` ### Implementing Disposable To make your transport disposable: 1. Add `Disposable` to your class implementation 2. Implement the `[Symbol.dispose]()` method 3. Add a flag to track the disposed state 4. Guard your methods against calls after disposal Here's an example: ```typescript export class MyTransport extends LoggerlessTransport implements Disposable { private isDisposed = false; private client: ExternalServiceClient; constructor(config: MyTransportConfig) { super(config); this.client = new ExternalServiceClient(config); } shipToLogger({ logLevel, messages, data, hasData }: LogLayerTransportParams) { if (this.isDisposed) return messages; // Implementation this.client.send({ level: logLevel, message: messages.join(" "), ...(data && hasData ? data : {}) }); // Return value is used for debugging when consoleDebug is enabled return messages; } [Symbol.dispose](): void { if (this.isDisposed) return; // Clean up resources this.client?.close(); this.isDisposed = true; } } ``` :::tip Always implement `Disposable` if your transport maintains connections or holds onto resources that need cleanup. 
This ensures proper resource management and prevents memory leaks. ::: For a real-world example of a transport that implements `Disposable`, see the [Log Rotation Transport Source](/transports/log-file-rotation) implementation which properly manages file handles. ## Examples ### Logger-Based Example: Console Transport ```typescript import { BaseTransport, LogLevel, LogLayerTransportParams } from "@loglayer/transport"; export class ConsoleTransport extends BaseTransport { shipToLogger({ logLevel, messages, data, hasData }: LogLayerTransportParams) { if (data && hasData) { // put object data as the first parameter messages.unshift(data); } switch (logLevel) { case LogLevel.info: this.logger.info(...messages); break; case LogLevel.warn: this.logger.warn(...messages); break; case LogLevel.error: this.logger.error(...messages); break; case LogLevel.trace: this.logger.trace(...messages); break; case LogLevel.debug: this.logger.debug(...messages); break; case LogLevel.fatal: this.logger.error(...messages); break; } // Return value is used for debugging when consoleDebug is enabled return messages; } } ``` ### Loggerless Example: DataDog Transport For an example of a loggerless transport that sends logs to a third-party service, see the [Datadog Transport](https://github.com/loglayer/loglayer/blob/master/packages/transports/datadog/src/DataDogTransport.ts) implementation. ### HTTP Transport Example For an example of a transport that wraps the [HTTP Transport](http.md), see the source code for the [VictoriaLogs Transport](victoria-logs.md). 
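The payload assembly used in the loggerless transport example can likewise be reduced to a pure function for experimentation. This is a sketch, not library code; the timestamp is omitted here to keep the output deterministic, and all names are illustrative:

```typescript
// Pure version of the payload construction from the loggerless transport example
function buildPayload(
  logLevel: string,
  messages: string[],
  data?: Record<string, unknown>,
  hasData?: boolean,
): Record<string, unknown> {
  return {
    level: logLevel,
    message: messages.join(" "),
    // Merge structured data into the payload only when it is present
    ...(data && hasData ? data : {}),
  };
}

const payload = buildPayload("error", ["request", "failed"], { statusCode: 500 }, true);
console.log(payload); // { level: 'error', message: 'request failed', statusCode: 500 }
```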
## Boilerplate / Template Code A sample project that you can use as a template is provided here: [GitHub Boilerplate Template](https://github.com/loglayer/loglayer-transport-boilerplate) --- --- url: 'https://loglayer.dev/mixins/creating-mixins.md' description: Learn how to create custom mixins for LogLayer --- # Creating Mixins ## Overview According to [patterns.dev](https://www.patterns.dev/vanilla/mixin-pattern/), a mixin's sole purpose is to add functionality to objects or classes *without inheritance*. In LogLayer, mixins allow you to extend the `LogLayer` or `LogBuilder` class prototypes with custom methods and functionality without having to extend the classes. Unlike plugins (which intercept and modify log processing) or transports (which send logs to destinations), mixins add new methods directly to the LogLayer API. Mixins are useful when you want to: * Add domain-specific methods to LogLayer (e.g., metrics, tracing) * Integrate third-party libraries directly into the logging API * Extend LogLayer with capabilities beyond logging (e.g., StatsD metrics) *Mixin functionality and types are provided directly by the `loglayer` package; no other external packages are required.* ## Anatomy of a Mixin in LogLayer A mixin consists of several key components: 1. TypeScript type declarations for your new methods 2. The mixin implementation that augments the prototype of the class you want to add to and its corresponding mock 3. A registration function that users call to register the mixin 4. Optional plugins that work with the mixin to modify logging data ### TypeScript Type Declarations All mixins must use TypeScript declaration merging to add type definitions for their methods. Create a **generic** mixin interface and augment the `loglayer` module. 
The generic type parameter allows you to use the same interface definition for both `LogLayer` and `MockLogLayer` (or `LogBuilder` and `MockLogBuilder`) without duplicating the method definitions: ```typescript // types.ts // Rename the type parameter to _T if none of your interface members use it export interface ICustomMixin<T> { /** * Your method documentation */ customMethod(param: string): T; } // Augment the loglayer module declare module 'loglayer' { interface LogLayer extends ICustomMixin<LogLayer> {} interface MockLogLayer extends ICustomMixin<MockLogLayer> {} interface ILogLayer extends ICustomMixin<ILogLayer> {} } ``` **Module augmentation explanation:** The `loglayer` module augmentation extends: * The concrete `LogLayer` and `MockLogLayer` classes for runtime prototype augmentation (necessary because your mixin implementation adds methods to these class prototypes at runtime) * The generic `ILogLayer` interface so that mixin methods are automatically available on the return types of methods like `withContext()`, `child()`, etc. This preserves mixin types through method chaining and enables the generic template system. By parameterizing the return type with the generic `T`, you define the mixin methods once and reuse them for both classes, with each class getting methods that return the correct type (`LogLayer` or `MockLogLayer`).
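The generic-return-type pattern can be demonstrated with a self-contained toy interface (all names here are hypothetical) to show how one definition serves both a real class and its mock, with chaining preserving the correct type:

```typescript
// One generic interface shared by a real class and its mock
interface ITagMixin<T> {
  withTag(tag: string): T; // returns the implementing class for chaining
}

class RealLogger implements ITagMixin<RealLogger> {
  tags: string[] = [];
  withTag(tag: string): RealLogger {
    this.tags.push(tag);
    return this; // chaining preserves the RealLogger type
  }
}

class MockLogger implements ITagMixin<MockLogger> {
  withTag(_tag: string): MockLogger {
    return this; // no-op for tests, but same signature
  }
}

const real = new RealLogger().withTag("http").withTag("db");
console.log(real.tags.join(",")); // http,db
```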
#### Augmenting ILogBuilder Mixins can also augment `ILogBuilder` to add methods available during the builder phase (after calling `withMetadata()` or `withError()`): ```typescript // types.ts export interface ICustomBuilderMixin<T> { customBuilderMethod(param: string): T; } // Augment the loglayer module declare module 'loglayer' { interface LogBuilder extends ICustomBuilderMixin<LogBuilder> {} interface MockLogBuilder extends ICustomBuilderMixin<MockLogBuilder> {} interface ILogBuilder extends ICustomBuilderMixin<ILogBuilder> {} } ``` Usage: ```typescript logger .withMetadata({ foo: 'bar' }) // Returns ILogBuilder .customBuilderMethod('test') // Mixin method available on builder .withError(error) // Builder method .customBuilderMethod('test2') // Still available through chaining .info('Message'); ``` #### Augmenting Both ILogLayer and ILogBuilder Many mixins need to work in both the logger and builder phases. Here's an example of a mixin that adds methods to both interfaces: ```typescript // types.ts export interface IPerfTimingMixin<T> { withPerfStart(id: string): T; withPerfEnd(id: string): T; } // Augment the loglayer module with all interfaces declare module 'loglayer' { interface LogLayer extends IPerfTimingMixin<LogLayer> {} interface LogBuilder extends IPerfTimingMixin<LogBuilder> {} interface MockLogLayer extends IPerfTimingMixin<MockLogLayer> {} interface MockLogBuilder extends IPerfTimingMixin<MockLogBuilder> {} interface ILogLayer extends IPerfTimingMixin<ILogLayer> {} interface ILogBuilder extends IPerfTimingMixin<ILogBuilder> {} } ``` This allows the mixin methods to work in both phases: ```typescript // Works on LogLayer logger.withPerfStart('operation').info('Started'); // Also works on LogBuilder (after withMetadata/withError) logger.withMetadata({ step: 1 }) .withPerfStart('operation') .info('Started'); ``` ### Mixin Implementation A mixin is an object that implements either `LogLayerMixin` or `LogBuilderMixin`: ```typescript import type { LogLayerMixin, LogLayer, MockLogLayer, LogLayerConfig } from 'loglayer'; import { LogLayerMixinAugmentType } from 'loglayer'; const
customMixinImplementation: LogLayerMixin = { augmentationType: LogLayerMixinAugmentType.LogLayer, // Optional: Called when each LogLayer instance is constructed onConstruct: (instance: LogLayer, config: LogLayerConfig) => { // The LogLayer instance is passed as the first parameter // Initialize per-instance state here if needed }, // Required: Augments the prototype with new methods augment: (prototype) => { // When assigning methods to the prototype, use regular functions (not arrow functions) // to preserve proper `this` context prototype.customMethod = function (this: LogLayer, param: string): LogLayer { // Your implementation return this; // Return this for method chaining }; }, // Required: Augments the MockLogLayer prototype with the same methods // This ensures mock classes have the same functionality for testing augmentMock: (prototype) => { // Implement the same methods as no-ops for the mock class prototype.customMethod = function (this: MockLogLayer, param: string): MockLogLayer { // Mock implementation - typically just returns this without side effects return this; }; } }; ``` #### Mock Class Augmentation All mixins must implement `augmentMock` because **most users will use `MockLogLayer` or `MockLogBuilder` in their unit tests**. Without it, mixin methods won't be available on mock classes, causing TypeScript errors and runtime failures. 
The `augmentMock` implementation should be a no-op version with the same method signatures that returns `this` for chaining without performing any work: ```typescript import type { ILogLayer } from 'loglayer'; import { describe, it, expect } from 'vitest'; import { MockLogLayer } from 'loglayer'; // Service class that uses mixin methods class MyService { constructor(private log: ILogLayer) {} processRequest() { // Uses mixin method - works with both LogLayer and MockLogLayer this.log.customMethod('request').info('Processing request'); } } describe('MyService', () => { it('should work with MockLogLayer', () => { // We don't want to print logs during testing, so use the // MockLogLayer instead of LogLayer const mockLog = new MockLogLayer(); const service = new MyService(mockLog); // Without augmentMock, this would fail at compile time and runtime service.processRequest(); // MockLogLayer has customMethod thanks to augmentMock }); }); ``` ### Optional Plugins Mixins can optionally include [plugins](/plugins/) that work alongside the mixin to modify logging data. This is useful when: * You want to automatically enrich log data based on mixin state * You need to transform or filter logs based on how mixin methods are used * The mixin needs to interact with the logging pipeline The key insight is that **plugins receive the LogLayer instance as a parameter**, allowing them to access any state or methods that your mixin has added to the LogLayer instance. This creates a powerful integration where mixin methods can set state, and plugins can automatically include that state in every log entry. Here's a complete example showing how a mixin and plugin work together: ```typescript import type { LogLayerPlugin, PluginBeforeDataOutParams, LogLayer, MockLogLayer, LogLayerMixin, LogLayerMixinRegistration } from 'loglayer'; import { LogLayerMixinAugmentType } from 'loglayer'; // 1.
Declare the mixin method that tracks request context export interface IRequestTrackingMixin<T> { /** * Sets the current request ID for correlation tracking */ setRequestId(requestId: string): T; /** * Gets the current request ID */ getRequestId(): string | undefined; } // Augment the loglayer module declare module 'loglayer' { interface LogLayer extends IRequestTrackingMixin<LogLayer> {} interface MockLogLayer extends IRequestTrackingMixin<MockLogLayer> {} interface ILogLayer extends IRequestTrackingMixin<ILogLayer> {} } // 2. Mixin implementation that stores request ID on each LogLayer instance // Use a shared Symbol so both onConstruct and augment methods can access the same property const REQUEST_ID_KEY = Symbol('requestId'); const requestTrackingMixinImpl: LogLayerMixin = { augmentationType: LogLayerMixinAugmentType.LogLayer, // Initialize per-instance state when LogLayer is constructed onConstruct: (instance: LogLayer) => { // Initialize the request ID storage on this instance (instance as any)[REQUEST_ID_KEY] = undefined; }, augment: (prototype) => { prototype.setRequestId = function (this: LogLayer, requestId: string): LogLayer { // Store the request ID on this instance (this as any)[REQUEST_ID_KEY] = requestId; return this; }; prototype.getRequestId = function (this: LogLayer): string | undefined { return (this as any)[REQUEST_ID_KEY]; }; }, augmentMock: (prototype) => { prototype.setRequestId = function (this: MockLogLayer, _requestId: string): MockLogLayer { return this; // No-op for mocks }; prototype.getRequestId = function (this: MockLogLayer): string | undefined { return undefined; // No-op for mocks }; } }; // 3.
// Plugin that automatically adds request ID to every log entry
const requestIdPlugin: LogLayerPlugin = {
  onBeforeDataOut: (params: PluginBeforeDataOutParams, loglayer: LogLayer) => {
    // Access the mixin's state via the LogLayer instance
    const requestId = loglayer.getRequestId();

    // Automatically enrich all log entries with request ID if present
    if (requestId) {
      return {
        ...(params.data || {}),
        requestId: requestId
      };
    }

    // Return original data if no request ID is set
    return params.data;
  }
};

// 4. Registration function that includes both mixin and plugin
export function requestTrackingMixin(): LogLayerMixinRegistration {
  return {
    mixinsToAdd: [requestTrackingMixinImpl],
    pluginsToAdd: [requestIdPlugin] // Plugin automatically enriches logs
  };
}
```

**Usage example:**

```typescript
import { useLogLayerMixin, LogLayer, ConsoleTransport } from 'loglayer';
import type { ILogLayer } from 'loglayer';
import { requestTrackingMixin } from '@your-package/request-tracking';

// Register the mixin (includes the plugin automatically)
useLogLayerMixin(requestTrackingMixin());

// Create LogLayer instance
// ILogLayer automatically includes the mixin methods via the module augmentation
const log: ILogLayer = new LogLayer({
  transport: new ConsoleTransport({
    logger: console
  })
});

// Use the mixin method to set request context
log.setRequestId('req-123');

// All subsequent logs automatically include the request ID via the plugin
log.info('Processing user request');
// Output: { requestId: 'req-123', message: 'Processing user request', ... }

// Mixin methods are preserved through method chaining
log.withMetadata({ userId: 456 }).setRequestId('req-456').info('User action');
// Output: { requestId: 'req-456', userId: 456, message: 'User action', ... }

// Clear or change the request ID
log.setRequestId('req-789');
log.info('New request');
// Output: { requestId: 'req-789', message: 'New request', ...
// }
```

This pattern demonstrates the key interaction: the mixin provides methods to manage state on the LogLayer instance, and the plugin automatically reads that state and enriches the log data without requiring manual intervention in your logging code.

Plugins registered through `pluginsToAdd` are automatically added to all LogLayer instances created **after** the mixin is registered, just like the mixin methods themselves.

### Registration Function

Create a registration function that returns a `LogLayerMixinRegistration`. Users of your mixin pass its result to `useLogLayerMixin()` to register the mixin before creating any LogLayer instances.

The registration function:

* Might take in optional configuration parameters
* Can initialize shared state based on the configuration
* Returns a `LogLayerMixinRegistration` object containing the mixins (and optionally plugins) to register

```typescript
import type { LogLayerMixinRegistration } from 'loglayer';

// The registration function is what users will import and call
export function customMixin(config?: CustomMixinConfig): LogLayerMixinRegistration {
  // Optional: Initialize shared state based on config
  if (config) {
    // Initialize any shared state, validate config, etc.
  }

  // Reference the mixin implementation created earlier
  return {
    mixinsToAdd: [customMixinImplementation],
    pluginsToAdd: [/* optional plugins */] // See the "Optional Plugins" section above
  };
}
```

**Users of your mixin will register it like this:**

```typescript
import { useLogLayerMixin } from 'loglayer';
import { customMixin } from '@your-package/mixin';

// Register a single mixin (must be called before creating LogLayer instances)
useLogLayerMixin(customMixin({ /* optional config */ }));

// Or register multiple mixins at once
useLogLayerMixin([
  customMixin({ /* optional config */ }),
  // otherMixin(),
]);

// Now all LogLayer instances will have your mixin methods
const log = new LogLayer({ transport: ...
}); log.customMethod(); // Your mixin method is available ``` ## Mixin Reference ### Mixin Types LogLayer supports two types of mixins: **LogLayer Mixins** extend the `LogLayer` class prototype. Methods are available directly on LogLayer instances: ```typescript const log = new LogLayer({ transport: ... }); log.customMethod(); // Your mixin method ``` **LogBuilder Mixins** extend the `LogBuilder` class prototype. Certain methods in the `LogLayer` class will return an instance of the `LogBuilder`. ```typescript const log = new LogLayer({ transport: ... }); log.withMetadata({}).customBuilderMethod(); ``` ### Interface Definitions #### LogLayerMixin ```typescript interface LogLayerMixin { /** * Specifies that this mixin augments the main LogLayer class. */ augmentationType: LogLayerMixinAugmentType.LogLayer; /** * Called at the end of the LogLayer construct() method. * The LogLayer instance is passed as the first parameter. */ onConstruct?: (instance: LogLayer, config: LogLayerConfig) => void; /** * Function that performs the augmentation of the LogLayer prototype. */ augment: (prototype: typeof LogLayer.prototype) => void; /** * Function that performs the augmentation of the MockLogLayer prototype. * This is called to ensure the mock class has the same functionality as the real class. * Mock implementations should typically be no-ops that return the instance for chaining. */ augmentMock: (prototype: typeof MockLogLayer.prototype) => void; } ``` #### LogBuilderMixin ```typescript interface LogBuilderMixin { /** * Specifies that this mixin augments the main LogBuilder class. */ augmentationType: LogLayerMixinAugmentType.LogBuilder; /** * Called at the end of the LogBuilder construct() method. * The LogBuilder instance is passed as the first parameter. */ onConstruct?: (instance: LogBuilder, logger: LogLayer) => void; /** * Function that performs the augmentation of the LogBuilder prototype. 
*/
  augment: (prototype: typeof LogBuilder.prototype) => void;

  /**
   * Function that performs the augmentation of the MockLogBuilder prototype.
   * This is called to ensure the mock class has the same functionality as the real class.
   * Mock implementations should typically be no-ops that return the instance for chaining.
   */
  augmentMock: (prototype: typeof MockLogBuilder.prototype) => void;
}
```

#### LogLayerMixinRegistration

```typescript
interface LogLayerMixinRegistration {
  /**
   * Array of mixins to add to LogLayer.
   */
  mixinsToAdd: LogLayerMixinType[];

  /**
   * Optional array of plugins to add to LogLayer.
   * Plugins registered here are automatically added to all LogLayer instances
   * created after the mixin is registered.
   */
  pluginsToAdd?: LogLayerPlugin[];
}
```

## Creating In-Project

If you want to quickly add a mixin for your own project:

```typescript
import { LogLayer, useLogLayerMixin, ConsoleTransport, LogLayerMixinAugmentType } from 'loglayer';
import type { LogLayerMixin, ILogLayer, MockLogLayer } from 'loglayer';

// 1. Define TypeScript declarations using a generic interface
export interface IMetricsMixin<T> {
  /**
   * Records a custom metric
   */
  recordMetric(name: string, value: number): T;
}

// Augment the loglayer module
declare module 'loglayer' {
  interface LogLayer extends IMetricsMixin<LogLayer> {}
  interface MockLogLayer extends IMetricsMixin<MockLogLayer> {}
  interface ILogLayer extends IMetricsMixin<ILogLayer> {}
}

// 2. Create the mixin
const metricsMixin: LogLayerMixin = {
  augmentationType: LogLayerMixinAugmentType.LogLayer,

  augment: (prototype) => {
    prototype.recordMetric = function (this: LogLayer, name: string, value: number): LogLayer {
      console.log(`Metric: ${name} = ${value}`);
      return this;
    };
  },

  augmentMock: (prototype) => {
    prototype.recordMetric = function (this: MockLogLayer, name: string, value: number): MockLogLayer {
      // Mock implementation - no-op for testing
      return this;
    };
  }
};

// 3.
// Register the mixin (must be called before creating LogLayer instances)
// You can register a single mixin:
useLogLayerMixin({
  mixinsToAdd: [metricsMixin]
});

// Or register multiple mixins at once:
// useLogLayerMixin([
//   { mixinsToAdd: [metricsMixin] },
//   // other mixin registrations...
// ]);

// 4. Create LogLayer instance and use your custom method
// ILogLayer automatically includes the mixin methods via the module augmentation
const log: ILogLayer = new LogLayer({
  transport: new ConsoleTransport({
    logger: console
  })
});

// Mixin methods are available directly
log.recordMetric('requests', 1).info('Request received');

// Mixin methods are preserved through method chaining
log.withContext({ userId: 123 }).recordMetric('requests', 1).info('Request received');
```

## As an NPM Package

When creating a reusable mixin package:

### TypeScript Setup

To use your mixin with TypeScript, users must register the types by adding your mixin package to their `tsconfig.json` includes:

```json
{
  "include": [
    "./node_modules/@your-package/mixin-name"
  ]
}
```

This ensures TypeScript recognizes the mixin methods on LogLayer instances.

### Package.json

Your package needs `loglayer` for its types.

**Peer Dependencies:** Since mixins are registered before LogLayer is used, `loglayer` should be installed as a **peer dependency**, as end users will have their own version of `loglayer` installed.

**Important:** Specify the minimum version of `loglayer` required for your mixin using the `>=` version range. For example, if your mixin requires features introduced in LogLayer v7.0.2:

```json
{
  "peerDependencies": {
    "loglayer": ">=7.0.2"
  }
}
```

This ensures that users have at least version 7.0.2 of `loglayer` installed, while allowing them to use any newer compatible version (7.1.0, 8.0.0, etc.).

## Testing Your Mixin

Testing mixins is crucial to ensure they work correctly with both `LogLayer` and `MockLogLayer`.
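As a minimal sketch of what such a test checks, the no-op contract of an `augmentMock` implementation can be verified in isolation. The stand-in prototype below is hypothetical so the snippet stays self-contained; in a real test you would register the mixin and assert against loglayer's `MockLogLayer`:

```typescript
// Hypothetical stand-in for a mock prototype; real tests use MockLogLayer.
interface MetricsCapable {
  recordMetric?: (name: string, value: number) => MetricsCapable;
}

// Same shape as the augmentMock implementations shown earlier:
// a chainable no-op that performs no logging.
const augmentMock = (prototype: MetricsCapable) => {
  prototype.recordMetric = function (this: MetricsCapable, _name: string, _value: number) {
    return this;
  };
};

const mockInstance: MetricsCapable = {};
augmentMock(mockInstance);

// The method exists and chaining returns the same instance
const chained = mockInstance.recordMetric!('requests', 1);
console.log(chained === mockInstance); // true
```

The two properties worth asserting are that the augmented method exists and that it returns the instance, since that is what keeps chained calls from throwing in tests.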
For comprehensive testing guidance including unit testing, integration testing, and using LogLayer's testing utilities, see the [Testing Mixins](/mixins/testing-mixins) guide. ## Important Considerations ### Avoiding Arrow Functions When Assigning Methods When assigning methods to the prototype, always use regular functions (not arrow functions). Arrow functions don't have their own `this` binding and may override the context. However, you can use an arrow function for the `augment` method itself: ```typescript // Correct: Arrow function for augment is fine augment: (prototype) => { // Wrong: Arrow function for the method itself prototype.myMethod = () => { /* `this` may be wrong */ }; // Correct: Regular function for the method prototype.myMethod = function (this: LogLayer) { // `this` is correctly bound to the instance return this; }; } ``` ## Boilerplate / Template Code A sample project that you can use as a template is provided here: [GitHub Boilerplate Template](https://github.com/loglayer/loglayer-mixin-boilerplate) --- --- url: 'https://loglayer.dev/plugins/creating-plugins.md' description: Learn how to create custom plugins for LogLayer --- # Creating Plugins ## Overview A plugin is a plain object that implements the `LogLayerPlugin` interface from `@loglayer/plugin` or `loglayer`: ```typescript interface LogLayerPlugin { /** * Unique identifier for the plugin. Used for selectively disabling / enabling * and removing the plugin. */ id?: string; /** * If true, the plugin will skip execution */ disabled?: boolean; /** * Called after onBeforeDataOut and onBeforeMessageOut but before shouldSendToLogger to transform the log level. */ transformLogLevel?(params: PluginTransformLogLevelParams, loglayer: ILogLayer): LogLevelType | null | undefined | false; /** * Called after the assembly of the data object that contains * metadata / context / error data before being sent to the logging library. 
*/
  onBeforeDataOut?(params: PluginBeforeDataOutParams, loglayer: ILogLayer): Record<string, any> | null | undefined;

  /**
   * Called to modify message data before it is sent to the logging library.
   */
  onBeforeMessageOut?(params: PluginBeforeMessageOutParams, loglayer: ILogLayer): MessageDataType[];

  /**
   * Controls whether the log entry should be sent to the logging library.
   */
  shouldSendToLogger?(params: PluginShouldSendToLoggerParams, loglayer: ILogLayer): boolean;

  /**
   * Called when withMetadata() or metadataOnly() is called.
   */
  onMetadataCalled?(metadata: Record<string, any>, loglayer: ILogLayer): Record<string, any> | null | undefined;

  /**
   * Called when withContext() is called.
   */
  onContextCalled?(context: Record<string, any>, loglayer: ILogLayer): Record<string, any> | null | undefined;
}
```

## In-Project

If you want to quickly write a plugin for your own project, you can use the `loglayer` package to get the TypeScript types for the plugin interface.

### Example

```typescript
import { LogLayer, ConsoleTransport } from 'loglayer'
import type { LogLayerPlugin, PluginBeforeMessageOutParams, ILogLayer } from 'loglayer'

// Create a timestamp plugin
const timestampPlugin: LogLayerPlugin = {
  onBeforeMessageOut: ({ messages }: PluginBeforeMessageOutParams, loglayer: ILogLayer): string[] => {
    // Add timestamp prefix to each message
    return messages.map(msg => `[${new Date().toISOString()}] ${msg}`)
  }
}

// Create LogLayer instance with console transport and timestamp plugin
const log = new LogLayer({
  transport: new ConsoleTransport({
    logger: console
  }),
  plugins: [timestampPlugin]
})

// Usage example
log.info('Hello world!')
// Output: [2024-01-17T12:34:56.789Z] Hello world!
```

## As an NPM Package

If you're creating an npm package, you should use the `@loglayer/plugin` package to get the TypeScript types for the plugin interface instead of making `loglayer` a dependency.

::: info
We recommend it as a `dependency` and not a `devDependency` as `@loglayer/plugin` may not be types-only in the future.
:::

### Installation

::: code-group

```sh [npm]
npm install @loglayer/plugin
```

```sh [pnpm]
pnpm add @loglayer/plugin
```

```sh [yarn]
yarn add @loglayer/plugin
```

:::

### Example

```typescript
import type {
  LogLayerPlugin,
  PluginBeforeMessageOutParams,
  LogLayerPluginParams,
  ILogLayer
} from '@loglayer/plugin'

// LogLayerPluginParams provides the common options for the plugin
export interface TimestampPluginOptions extends LogLayerPluginParams {
  /**
   * Format of the timestamp. If not provided, uses ISO string
   */
  format?: 'iso' | 'locale'
}

export const createTimestampPlugin = (options: TimestampPluginOptions = {}): LogLayerPlugin => {
  return {
    // Copy over the common options
    id: options.id,
    disabled: options.disabled,
    // Implement the onBeforeMessageOut lifecycle method
    onBeforeMessageOut: ({ messages }: PluginBeforeMessageOutParams, loglayer: ILogLayer): string[] => {
      const timestamp = options.format === 'locale'
        ? new Date().toLocaleString()
        : new Date().toISOString()

      return messages.map(msg => `[${timestamp}] ${msg}`)
    }
  }
}
```

## Plugin Lifecycle Methods

### transformLogLevel

Allows you to transform the log level after `onBeforeDataOut` and `onBeforeMessageOut` have processed the data, but before `shouldSendToLogger` is called. This is useful for dynamically adjusting log levels based on the processed log data, metadata, context, or error information.

This callback runs after `onBeforeDataOut` and `onBeforeMessageOut`, so the `data` parameter will contain any modifications made by `onBeforeDataOut` plugins, and the `messages` parameter will contain any modifications made by `onBeforeMessageOut` plugins.

The transformed log level will be used by `shouldSendToLogger` and when sending to transports. If multiple plugins define `transformLogLevel`, the last one that returns a valid log level (not null, undefined, or false) will be used.
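The last-valid-result rule can be illustrated with a self-contained sketch. The `resolveLevel` reducer below is illustrative only; LogLayer performs this resolution internally:

```typescript
type Level = 'trace' | 'debug' | 'info' | 'warn' | 'error' | 'fatal';
type Transform = (level: Level) => Level | null | undefined | false;

// Three hypothetical plugins' transformLogLevel callbacks
const transforms: Transform[] = [
  (level) => (level === 'debug' ? 'info' : undefined),  // upgrades debug logs
  () => undefined,                                      // always declines to transform
  (level) => (level === 'error' ? 'fatal' : false),     // upgrades error logs
];

// The last plugin that returns a valid level wins;
// if none do, the original level is kept.
function resolveLevel(original: Level): Level {
  let result = original;
  for (const transform of transforms) {
    const next = transform(original);
    if (next) {
      result = next;
    }
  }
  return result;
}

console.log(resolveLevel('debug')); // 'info'  — only the first plugin matched
console.log(resolveLevel('error')); // 'fatal' — the third plugin matched last
console.log(resolveLevel('warn'));  // 'warn'  — no plugin matched, original kept
```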
**Method Signature:**

```typescript
transformLogLevel?(params: PluginTransformLogLevelParams, loglayer: ILogLayer): LogLevelType | null | undefined | false
```

**Parameters:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `logLevel` | `LogLevel` | Log level of the data |
| `messages` | `any[]` | Message data that is copied from the original |
| `data` | `Record<string, any>` (optional) | Combined object data containing the metadata, context, and / or error data in a structured format configured by the user |
| `metadata` | `Record<string, any>` (optional) | Individual metadata object passed to the log message method |
| `error` | `any` (optional) | Error passed to the log message method |
| `context` | `Record<string, any>` (optional) | Context data that is included with each log entry |

**Return Value:**

* Returns a `LogLevelType` (or equivalent string) to use the transformed log level
* Returns `null`, `undefined`, or `false` to use the log level originally specified

**Example:**

```typescript
const logLevelTransformerPlugin = {
  transformLogLevel: ({ logLevel, error, metadata, messages }: PluginTransformLogLevelParams, loglayer: ILogLayer) => {
    // Upgrade errors to fatal if they have a specific flag
    if (logLevel === 'error' && metadata?.critical) {
      return 'fatal'
    }

    // Downgrade debug logs in production
    if (logLevel === 'debug' && process.env.NODE_ENV === 'production') {
      return 'info'
    }

    // Upgrade to error if message contains "CRITICAL"
    if (messages.some(msg => String(msg).includes('CRITICAL'))) {
      return 'error'
    }

    // Use original log level if no transformation needed
    return undefined
  }
}
```

**Example:**

```typescript
const errorLevelUpgradePlugin = {
  transformLogLevel: ({ logLevel, error }: PluginTransformLogLevelParams, loglayer: ILogLayer) => {
    // Upgrade all errors with stack traces to fatal
    if (logLevel === 'error' && error?.stack) {
      return 'fatal'
    }

    // Use original log level
    return undefined
  }
}
```

### onBeforeDataOut

Allows you to modify or transform the data object containing
metadata, context, and error information before it's sent to the logging library. This is useful for adding additional fields, transforming data formats, or filtering sensitive information.

**Method Signature:**

```typescript
onBeforeDataOut?(params: PluginBeforeDataOutParams, loglayer: ILogLayer): Record<string, any> | null | undefined
```

**Parameters:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `logLevel` | `LogLevel` | Log level of the data |
| `data` | `Record<string, any>` (optional) | Combined object data containing the metadata, context, and / or error data in a structured format configured by the user |
| `metadata` | `Record<string, any>` (optional) | Individual metadata object passed to the log message method |
| `error` | `any` (optional) | Error passed to the log message method |
| `context` | `Record<string, any>` (optional) | Context data that is included with each log entry |

**Example:**

```typescript
const dataEnrichmentPlugin = {
  onBeforeDataOut: ({ data, logLevel, metadata, error, context }: PluginBeforeDataOutParams, loglayer: ILogLayer) => {
    return {
      ...(data || {}),
      environment: process.env.NODE_ENV,
      timestamp: new Date().toISOString(),
      logLevel // Note: This adds logLevel as a field in the data object, but does not modify the actual log level
    }
  }
}
```

::: info Changing the log level
Including `logLevel` in the returned data object (as shown in the example above) only adds it as a field in the data object sent to the logging library. It does **not** modify the actual log level used by LogLayer. If you need to transform the log level itself, use the [`transformLogLevel`](#transformloglevel) callback instead.
:::

### onBeforeMessageOut

Allows you to modify or transform the message content before it's sent to the logging library. This is useful for adding prefixes, formatting messages, or transforming message content.
**Method Signature:**

```typescript
onBeforeMessageOut?(params: PluginBeforeMessageOutParams, loglayer: ILogLayer): MessageDataType[]
```

**Parameters:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `messages` | `any[]` | Message data that is copied from the original |
| `logLevel` | `LogLevel` | Log level of the message |

**Example:**

```typescript
const messageFormatterPlugin = {
  onBeforeMessageOut: ({ messages, logLevel }: PluginBeforeMessageOutParams, loglayer: ILogLayer) => {
    return messages.map(msg => `[${logLevel.toUpperCase()}][${new Date().toISOString()}] ${msg}`)
  }
}
```

### shouldSendToLogger

Controls whether a log entry should be sent to the logging library. This is useful for implementing log filtering, rate limiting, or environment-specific logging.

**Method Signature:**

```typescript
shouldSendToLogger?(params: PluginShouldSendToLoggerParams, loglayer: ILogLayer): boolean
```

**Parameters:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `messages` | `any[]` | Message data that is copied from the original |
| `logLevel` | `LogLevel` | Log level of the message |
| `transportId` | `string` (optional) | ID of the transport that will send the log |
| `data` | `Record<string, any>` (optional) | Combined object data containing the metadata, context, and / or error data in a structured format configured by the user |
| `metadata` | `Record<string, any>` (optional) | Individual metadata object passed to the log message method |
| `error` | `any` (optional) | Error passed to the log message method |
| `context` | `Record<string, any>` (optional) | Context data that is included with each log entry |

**Example:**

```typescript
const productionFilterPlugin = {
  shouldSendToLogger: ({ logLevel, data, metadata, error, context }: PluginShouldSendToLoggerParams, loglayer: ILogLayer) => {
    // Filter out debug logs in production
    if (process.env.NODE_ENV === 'production') {
      return logLevel !== 'debug'
    }

    // Rate limit error logs
    if (logLevel === 'error') {
      return !isRateLimited('error-logs')
    }

    return true
  }
}
```

**Example:**

```typescript
const transportFilterPlugin = {
  shouldSendToLogger: ({ transportId, logLevel, data, metadata, error, context }: PluginShouldSendToLoggerParams, loglayer: ILogLayer) => {
    // don't send logs if the transportId is 'console'
    if (transportId === 'console') {
      return false
    }

    return true
  }
}
```

### onMetadataCalled

Intercepts and modifies metadata when `withMetadata()` or `metadataOnly()` is called. This is useful for transforming or enriching metadata before it's attached to logs. Returning `null` or `undefined` will prevent the metadata from being added to the log.

**Method Signature:**

```typescript
onMetadataCalled?(metadata: Record<string, any>, loglayer: ILogLayer): Record<string, any> | null | undefined
```

**Parameters:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `metadata` | `Record<string, any>` | The metadata object being added |
| `loglayer` | `ILogLayer` | The LogLayer instance |

**Example:**

```typescript
const metadataEnrichmentPlugin = {
  onMetadataCalled: (metadata: Record<string, any>, loglayer: ILogLayer) => {
    return {
      ...metadata,
      enrichedAt: new Date().toISOString(),
      userId: getCurrentUser()?.id
    }
  }
}
```

### onContextCalled

Intercepts and modifies context when `withContext()` is called. This is useful for transforming or enriching context data before it's used in logs. Returning `null` or `undefined` will prevent the context from being added to the log.
**Method Signature:**

```typescript
onContextCalled?(context: Record<string, any>, loglayer: ILogLayer): Record<string, any> | null | undefined
```

**Parameters:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `context` | `Record<string, any>` | The context object being added |
| `loglayer` | `ILogLayer` | The LogLayer instance |

**Example:**

```typescript
const contextEnrichmentPlugin = {
  onContextCalled: (context: Record<string, any>, loglayer: ILogLayer) => {
    return {
      ...context,
      environment: process.env.NODE_ENV,
      processId: process.pid,
      timestamp: new Date().toISOString()
    }
  }
}
```

## Boilerplate / Template Code

A sample project that you can use as a template is provided here: [GitHub Boilerplate Template](https://github.com/loglayer/loglayer-plugin-boilerplate)

---

---
url: 'https://loglayer.dev/example-integrations/nextjs.md'
description: Learn how to implement LogLayer with Next.js
---

# Custom logging in Next.js

This guide shows you how to integrate the [LogLayer](/introduction) logging library to replace Next.js' default logging behavior, using popular logging libraries like `pino` and `winston` with seamless integration into log collection platforms like DataDog.

## Installation

This guide assumes you already have [Next.js](https://nextjs.org/) set up.

First, install the required packages.
We'll use [Pino](/transports/pino) for production and [Simple Pretty Terminal](/transports/simple-pretty-terminal) for development:

::: code-group

```sh [npm]
npm i loglayer @loglayer/transport-pino @loglayer/transport-simple-pretty-terminal pino serialize-error
```

```sh [pnpm]
pnpm add loglayer @loglayer/transport-pino @loglayer/transport-simple-pretty-terminal pino serialize-error
```

```sh [yarn]
yarn add loglayer @loglayer/transport-pino @loglayer/transport-simple-pretty-terminal pino serialize-error
```

:::

## Setup

```typescript
// logger.ts
import { LogLayer } from 'loglayer'
import type { PluginBeforeMessageOutParams } from 'loglayer'
import { PinoTransport } from '@loglayer/transport-pino'
import { getSimplePrettyTerminal } from '@loglayer/transport-simple-pretty-terminal'
import { serializeError } from 'serialize-error'
import { pino } from 'pino'

// Detect if we're on the server or client
const isServer = typeof window === 'undefined'

// Create a Pino instance (only needs to be done once)
const pinoLogger = pino({
  level: 'trace' // Set to desired log level
})

const log = new LogLayer({
  errorSerializer: serializeError,
  transport: [
    // Simple Pretty Terminal for development
    getSimplePrettyTerminal({
      enabled: process.env.NODE_ENV === 'development',
      runtime: isServer ? 'node' : 'browser',
      viewMode: 'inline',
    }),
    // Pino for production (both server and client)
    new PinoTransport({
      enabled: process.env.NODE_ENV === 'production',
      logger: pinoLogger
    })
  ],
  plugins: [
    {
      // Add a plugin to label the log entry as coming from the server or client
      onBeforeMessageOut(params: PluginBeforeMessageOutParams) {
        const tag = isServer ?
"Server" : "Client"; if (params.messages && params.messages.length > 0) { if (typeof params.messages[0] === "string") { params.messages[0] = `[${tag}] ${params.messages[0]}`; } } return params.messages; }, }, ] }) // Add server/client context to all log entries log.withContext({ isServer }) export function getLogger() { return log; } ``` We expose a function called `getLogger()` to get the logger instance. We do this in the event that you want to mock the logger in your tests, where you can override `getLogger()` to return the LogLayer mock, [MockLogLayer](/logging-api/unit-testing). At this point you should be able to call `getLogger()` anywhere in your Next.js app to get the logger instance and write logs. ```typescript // pages.tsx import { getLogger } from './logger' export default function Page() { const log = getLogger() log.withMetadata({ some: "data" }).info('Hello, world!') return
<div>Hello, world!</div>
} ``` ### Using environment-specific transports If you use transports that are only client-side or server-side (such as the [DataDog](/transports/datadog) and [DataDog Browser](/transports/datadog-browser-logs) Transports), you can conditionally enable them based on the environment by adding them to the transport array with appropriate `enabled` conditions. ## Handling server-side uncaught exceptions and rejections Next.js [does not](https://github.com/vercel/next.js/discussions/63787) have a way to use a custom logger for server-side uncaught exceptions and rejections. To use LogLayer for this, you will need to create an [instrumentation file](https://nextjs.org/docs/app/building-your-application/optimizing/instrumentation) in the root of your project. Here's an example using the [Simple Pretty Terminal](/transports/simple-pretty-terminal), [Pino](/transports/pino), and [DataDog](/transports/datadog) transports: ```typescript // instrumentation.ts import { LogLayer, type ILogLayer } from 'loglayer'; import { DataDogTransport } from "@loglayer/transport-datadog"; import { PinoTransport } from "@loglayer/transport-pino"; import { getSimplePrettyTerminal } from '@loglayer/transport-simple-pretty-terminal'; import pino from "pino"; import { serializeError } from "serialize-error"; /** * Strip ANSI codes from a string, which is something Next.js likes to inject. 
*/
function stripAnsiCodes(str: string): string {
  return str.replace(
    /[\u001b\u009b][[()#;?]*(?:[0-9]{1,4}(?:;[0-9]{0,4})*)?[0-9A-ORZcf-nqry=><]/g,
    "",
  );
}

/**
 * Create a console method that logs to LogLayer
 */
function createConsoleMethod(log: ILogLayer, method: "error" | "info" | "warn" | "debug" | "log") {
  let mappedMethod: "error" | "info" | "warn" | "debug";

  if (method === "log") {
    mappedMethod = "info";
  } else {
    mappedMethod = method;
  }

  return (...args: unknown[]) => {
    const data: Record<string, unknown> = {};
    let hasData = false;
    let error: Error | null = null;
    const messages: string[] = [];

    for (const arg of args) {
      if (arg instanceof Error) {
        error = arg;
        continue;
      }

      if (typeof arg === "object" && arg !== null) {
        Object.assign(data, arg);
        hasData = true;
        continue;
      }

      if (typeof arg === "string") {
        messages.push(arg);
      }
    }

    let finalMessage = stripAnsiCodes(messages.join(" ")).trim();

    // next.js uses an "x" for the error message when it's an error object
    if (finalMessage === "⨯" && error) {
      finalMessage = error?.message || "";
    }

    if (error && hasData && messages.length > 0) {
      log.withError(error).withMetadata(data)[mappedMethod](finalMessage);
    } else if (error && messages.length > 0) {
      log.withError(error)[mappedMethod](finalMessage);
    } else if (hasData && messages.length > 0) {
      log.withMetadata(data)[mappedMethod](finalMessage);
    } else if (error && hasData && messages.length === 0) {
      log.withError(error).withMetadata(data)[mappedMethod]("");
    } else if (error && messages.length === 0) {
      log.errorOnly(error);
    } else if (hasData && messages.length === 0) {
      log.metadataOnly(data);
    } else {
      log[mappedMethod](finalMessage);
    }
  };
}

export async function register() {
  const logger = new LogLayer({
    errorSerializer: serializeError,
    transport: [
      // Simple Pretty Terminal for development
      getSimplePrettyTerminal({
        enabled: process.env.NODE_ENV === 'development',
        runtime: 'node', // Server-side only in instrumentation
        viewMode: 'inline',
      }),
      // Pino for production
      new PinoTransport({
enabled: process.env.NODE_ENV === 'production', logger: pino(), }), new DataDogTransport({ enabled: process.env.NODE_ENV === 'production' ... }), ] }) if (process.env.NEXT_RUNTIME === "nodejs") { console.error = createConsoleMethod(logger, "error"); console.log = createConsoleMethod(logger, "log"); console.info = createConsoleMethod(logger, "info"); console.warn = createConsoleMethod(logger, "warn"); console.debug = createConsoleMethod(logger, "debug"); } } ``` If you threw an error from `page.tsx` that is uncaught, you should see this in the terminal: ```json lines {"err":{"type":"Object","message":"test","stack":"Error: test\n at Page (webpack-internal:///(rsc)/./src/app/page.tsx:12:11)","digest":"699232626","name":"Error"},"msg":"test"} ``` --- --- url: 'https://loglayer.dev/plugins/datadog-apm-trace-injector.md' --- # Datadog APM Trace Injector Plugin [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Fplugin-datadog-apm-trace-injector)](https://www.npmjs.com/package/@loglayer/plugin-datadog-apm-trace-injector) [![Source](https://img.shields.io/badge/source-GitHub-blue)](https://github.com/loglayer/loglayer/tree/master/packages/plugins/datadog-apm-trace-injector) The Datadog APM Trace Injector Plugin automatically injects Datadog APM trace context into your LogLayer logs, enabling correlation between your application logs and distributed traces in Datadog. ## Installation This plugin requires the [`dd-trace`](https://github.com/DataDog/dd-trace-js) library to be installed in your project. 
::: code-group

```bash [npm]
npm install @loglayer/plugin-datadog-apm-trace-injector dd-trace
```

```bash [yarn]
yarn add @loglayer/plugin-datadog-apm-trace-injector dd-trace
```

```bash [pnpm]
pnpm add @loglayer/plugin-datadog-apm-trace-injector dd-trace
```

:::

## Configuration

### Required Parameters

| Name | Type | Description |
|------|------|-------------|
| `tracerInstance` | `Tracer` | The `dd-trace` tracer instance |

### Optional Parameters

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `id` | `string` | - | Unique identifier for the plugin |
| `disabled` | `boolean` | `false` | Disable the plugin |
| `onError` | `(error: Error, data?: Record<string, any>) => void` | - | Error handler for tracer operation failures |

## Usage

```typescript
// dd-trace generally needs to be the first import of any project
// as it needs to patch node_module packages before they are imported
import tracer from 'dd-trace';
import { LogLayer } from 'loglayer';
import { datadogTraceInjectorPlugin } from '@loglayer/plugin-datadog-apm-trace-injector';

tracer.init();

// Create the plugin
const traceInjector = datadogTraceInjectorPlugin({
  tracerInstance: tracer,
  // Enable the plugin only if the Datadog API key is set
  disabled: !process.env.DD_API_KEY
});

// Add to your LogLayer instance
const log = new LogLayer({
  plugins: [traceInjector],
});

// Your logs will now automatically include trace context
log.info('User action completed');
```

### With Error Handling

```typescript
const traceInjectorWithErrorHandling = datadogTraceInjectorPlugin({
  tracerInstance: tracer,
  onError: (error, data) => {
    console.error('Datadog trace injection failed:', error.message, data);
  },
});

const log = new LogLayer({
  plugins: [traceInjectorWithErrorHandling],
});
```

## Express example

```typescript
import tracer from 'dd-trace';
import express from 'express';
import { LogLayer, ConsoleTransport } from 'loglayer';
import { datadogTraceInjectorPlugin } from
'@loglayer/plugin-datadog-apm-trace-injector';

// Initialize dd-trace
tracer.init();

const app = express();

const log = new LogLayer({
  transport: new ConsoleTransport({
    messageField: 'msg',
    logger: console,
  }),
  plugins: [
    datadogTraceInjectorPlugin({
      tracerInstance: tracer,
    }),
  ],
});

app.get('/', (req, res) => {
  // This log will automatically include trace context
  log.info('Fetching users from database');
  // Your API logic here
  res.json({ users: [] });
});

app.listen(3004, () => {
  console.log('Server listening on Port 3004');
});
```

Visiting `/` outputs the following:

```text
{
  dd: {
    trace_id: '689cd152000000002bf3186dadf7c91a',
    span_id: '6991062753777198294',
    service: 'test-service',
    version: '0.0.1'
  },
  msg: 'Fetching users from database'
}
```

## How It Works

The plugin hooks into LogLayer's `onBeforeDataOut` lifecycle and:

1. **Retrieves Active Span**: Gets the currently active span from the dd-trace tracer
2. **Injects Trace Context**: Uses `tracer.inject()` to add trace and span IDs to the log data
3. **Preserves Existing Data**: Maintains all existing log data while adding trace context

The injected trace context follows Datadog's [log correlation format](https://docs.datadoghq.com/tracing/other_telemetry/connect_logs_and_traces/nodejs/), allowing you to:

* Correlate logs with traces in the Datadog UI
* Filter logs by trace ID or span ID
* View logs alongside trace spans in distributed tracing views

## Trace Context Fields

When a trace is active, the following fields are automatically added to your logs:

* `dd.trace_id`: The current trace ID
* `dd.span_id`: The current span ID
* `dd.service`: The service name (if configured in dd-trace)
* `dd.version`: The service version (if configured in dd-trace)

## Changelog

View the changelog [here](./changelogs/datadog-apm-trace-injector-changelog.md).
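The three steps above can be sketched as a plain `onBeforeDataOut` hook. Everything here is illustrative: the tracer is a stand-in for `dd-trace`, and the hook's parameter shape is an assumption rather than the plugin's actual source.

```typescript
// Illustrative sketch only: a stand-in tracer and a minimal onBeforeDataOut hook
// mirroring the three steps above. Field names and hook shape are assumptions.
type LogData = Record<string, unknown>;

const fakeTracer = {
  // Stand-in for tracer.scope().active() in dd-trace
  scope: () => ({
    active: () => ({
      context: () => ({ toTraceId: () => "1234", toSpanId: () => "5678" }),
    }),
  }),
  // Stand-in for tracer.inject(): writes trace/span IDs onto the carrier object
  inject(
    spanContext: { toTraceId(): string; toSpanId(): string },
    _format: string,
    carrier: LogData,
  ) {
    carrier["dd.trace_id"] = spanContext.toTraceId();
    carrier["dd.span_id"] = spanContext.toSpanId();
  },
};

const hook = {
  onBeforeDataOut({ data }: { data?: LogData }): LogData | undefined {
    const span = fakeTracer.scope().active(); // 1. retrieve the active span
    if (span && data) {
      fakeTracer.inject(span.context(), "log", data); // 2. inject trace context
    }
    return data; // 3. existing fields are preserved
  },
};

const out = hook.onBeforeDataOut({ data: { msg: "Fetching users from database" } });
console.log(out);
```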

---

---
url: 'https://loglayer.dev/transports/datadog-browser-logs.md'
description: Send logs using the DataDog Browser Logs library with LogLayer
---

# DataDog Browser Logs Transport

[![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-datadog-browser-logs)](https://www.npmjs.com/package/@loglayer/transport-datadog-browser-logs)

[@datadog/browser-logs](https://docs.datadoghq.com/logs/log_collection/javascript/) is Datadog's official browser-side logging library.

[Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/datadog-browser-logs)

## Important Notes

* Only works in browser environments (not in Node.js)
* For server-side logging, use the [`@loglayer/transport-datadog`](/transports/datadog.html) package
* You will not get any console output since this sends directly to DataDog. Use the `onDebug` option to log out messages.

## Installation

Install the required packages:

::: code-group

```sh [npm]
npm i loglayer @loglayer/transport-datadog-browser-logs @datadog/browser-logs
```

```sh [pnpm]
pnpm add loglayer @loglayer/transport-datadog-browser-logs @datadog/browser-logs
```

```sh [yarn]
yarn add loglayer @loglayer/transport-datadog-browser-logs @datadog/browser-logs
```

:::

## Setup

::: tip DataDog Error Tracking
To use [DataDog's Error Tracking](https://docs.datadoghq.com/logs/error_tracking/) feature, configure the `errorFieldName` as `error` in your LogLayer configuration. Alternatively, you can use DataDog's [Pipeline Processing to remap](https://docs.datadoghq.com/logs/error_tracking/backend/?tab=serilog#setup) the default `err` field to `error`.

You can also remap attributes via DataDog `Logs` > `Configuration` > `Standard Attributes`: edit the `error.message` / `error.stack` attributes (or create them if they do not exist) and map them to the fields you've configured LogLayer to use.
For example, attribute `error.message` might remap to `err.message,metadata.err.message`, depending on how LogLayer is configured.
:::

```typescript
import { datadogLogs } from '@datadog/browser-logs'
import { LogLayer } from 'loglayer'
import { DataDogBrowserLogsTransport } from "@loglayer/transport-datadog-browser-logs"

// Initialize Datadog
datadogLogs.init({
  clientToken: '',
  site: '',
  forwardErrorsToLogs: true,
  sampleRate: 100
})

// Basic setup
const log = new LogLayer({
  errorFieldName: "error",
  transport: new DataDogBrowserLogsTransport({
    logger: datadogLogs.logger
  })
})

// Or with a custom logger instance
const customLogger = datadogLogs.createLogger('my-logger')

const logWithCustomLogger = new LogLayer({
  errorFieldName: "error",
  transport: new DataDogBrowserLogsTransport({
    logger: customLogger
  })
})
```

## Configuration Options

### Required Parameters

| Name | Type | Description |
|------|------|-------------|
| `logger` | `datadogLogs.logger` | The DataDog browser logs instance |

### Optional Parameters

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `enabled` | `boolean` | `true` | Whether the transport is enabled |
| `level` | `"trace" \| "debug" \| "info" \| "warn" \| "error" \| "fatal"` | `"trace"` | Minimum log level to process. Logs below this level will be filtered out |

## Log Level Mapping

| LogLayer | Datadog |
|----------|---------|
| trace | debug |
| debug | debug |
| info | info |
| warn | warn |
| error | error |
| fatal | error |

## Changelog

View the changelog [here](./changelogs/datadog-browser-logs-changelog.md).
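The level mapping above can be expressed as a simple lookup. This is an illustrative sketch, not the transport's internal code:

```typescript
// Sketch of the LogLayer → Datadog level mapping from the table above
// (illustrative only, not the transport's source).
type LogLayerLevel = "trace" | "debug" | "info" | "warn" | "error" | "fatal";
type DatadogLevel = "debug" | "info" | "warn" | "error";

const levelMap: Record<LogLayerLevel, DatadogLevel> = {
  trace: "debug", // Datadog's browser logger has no trace level
  debug: "debug",
  info: "info",
  warn: "warn",
  error: "error",
  fatal: "error", // nor a fatal level
};

console.log(levelMap.trace, levelMap.fatal); // "debug" "error"
```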

---

---
url: 'https://loglayer.dev/transports/datadog.md'
description: Send logs to DataDog with the LogLayer logging library
---

# DataDog Transport

[![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-datadog)](https://www.npmjs.com/package/@loglayer/transport-datadog)

Ships logs server-side to Datadog using the [datadog-transport-common](https://www.npmjs.com/package/datadog-transport-common) library.

[Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/datadog)

## Important Notes

* Only works server-side (not in browsers)
* For browser-side logging, use the [`@loglayer/transport-datadog-browser-logs`](/transports/datadog-browser-logs) package
* You will not get any console output since this sends directly to DataDog. Use the `onDebug` option to log out messages.

## Installation

Install the required packages (`datadog-transport-common` is installed as part of `@loglayer/transport-datadog`):

::: code-group

```sh [npm]
npm i loglayer @loglayer/transport-datadog serialize-error
```

```sh [pnpm]
pnpm add loglayer @loglayer/transport-datadog serialize-error
```

```sh [yarn]
yarn add loglayer @loglayer/transport-datadog serialize-error
```

:::

## Usage Example

::: tip DataDog Error Tracking
To use [DataDog's Error Tracking](https://docs.datadoghq.com/logs/error_tracking/) feature, configure the `errorFieldName` as `error` in your LogLayer configuration. Alternatively, you can use DataDog's [Pipeline Processing to remap](https://docs.datadoghq.com/logs/error_tracking/backend/?tab=serilog#setup) the default `err` field to `error`.

You can also remap attributes via DataDog `Logs` > `Configuration` > `Standard Attributes`: edit the `error.message` / `error.stack` attributes (or create them if they do not exist) and map them to the fields you've configured LogLayer to use.

For example, attribute `error.message` might remap to `err.message,metadata.err.message`, depending on how LogLayer is configured.
::: ```typescript import { LogLayer } from 'loglayer' import { DataDogTransport } from "@loglayer/transport-datadog" import { serializeError } from "serialize-error"; const log = new LogLayer({ errorSerializer: serializeError, errorFieldName: "error", transport: new DataDogTransport({ options: { ddClientConf: { authMethods: { apiKeyAuth: "YOUR_API_KEY", }, }, ddServerConf: { // Note: This must match the site you use for your DataDog login - See below for more info site: "datadoghq.eu" }, onDebug: (msg) => { console.log(msg); }, onError: (err, logs) => { console.error(err, logs); }, }, }) }) ``` ## Transport Configuration ### Required Parameters | Name | Type | Description | |------|---------------------------------------------------------------------------------------------------------------------------------------------|-------------| | `options` | [`DDTransportOptions`](https://github.com/theogravity/datadog-transports/tree/main/packages/datadog-transport-common#configuration-options) | The options for the transport | ### Optional Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `enabled` | `boolean` | `true` | Whether the transport is enabled | | `level` | `LogLevel` or `"trace" \| "debug" \| "info" \| "warn" \| "error" \| "fatal"` | - | Minimum log level to send to DataDog. Logs below this level will be filtered out. See [Log Level Hierarchy](/logging-api/adjusting-log-levels#log-level-hierarchy) for more details | | `messageField` | `string` | `"message"` | The field name to use for the message | | `levelField` | `string` | `"level"` | The field name to use for the log level | | `timestampField` | `string` | `"time"` | The field name to use for the timestamp | | `timestampFunction` | `() => any` | - | A custom function to stamp the timestamp. The default timestamp uses the ISO 8601 format | ## Changelog View the changelog [here](./changelogs/datadog-changelog.md). 
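As a hypothetical example of the timestamp options from the table above, the fragment below emits epoch milliseconds under a `timestamp` field. The field name choice and `Date.now()` are illustrative assumptions, and the rest of the transport configuration is elided:

```typescript
// Illustrative fragment: override the default ISO 8601 timestamp.
// The keys come from the options table above; the values are assumptions.
const timestampOptions = {
  timestampField: "timestamp",         // emit under "timestamp" instead of "time"
  timestampFunction: () => Date.now(), // epoch milliseconds instead of ISO 8601
};

console.log(typeof timestampOptions.timestampFunction()); // "number"
```

These options would be spread into the `DataDogTransport` constructor alongside the required `options` object.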
--- --- url: 'https://loglayer.dev/context-managers/default.md' description: The default context manager used in LogLayer. --- # Default Context Manager [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Fcontext-manager)](https://www.npmjs.com/package/@loglayer/context-manager) [Context Manager Source](https://github.com/loglayer/loglayer/tree/master/packages/core/context-manager) The Default Context Manager is the base context manager used by LogLayer. It provides a simple key-value store for managing context data with independent context for each logger instance. ::: info Batteries included This context manager is automatically used when creating a new LogLayer instance. You should not need to use this context manager directly. ::: ## Installation This package is included with the `loglayer` package, so you don't need to install it separately. It is, however, available as a standalone package: ::: code-group ```bash [npm] npm install @loglayer/context-manager ``` ```bash [yarn] yarn add @loglayer/context-manager ``` ```bash [pnpm] pnpm add @loglayer/context-manager ``` ::: ## Usage ### Basic Usage ```typescript import { LogLayer, ConsoleTransport } from "loglayer"; import { DefaultContextManager } from "@loglayer/context-manager"; const logger = new LogLayer({ transport: new ConsoleTransport({ logger: console }) // NOTE: This is redundant and unnecessary since DefaultContextManager is already // the default context manager when LogLayer is created. }).withContextManager(new DefaultContextManager()); // Set context logger.setContext({ requestId: "123", userId: "456" }); // Log with context logger.info("User action"); // Will include requestId and userId in the log entry ``` ### Child Loggers When creating child loggers, the Default Context Manager will: 1. Copy the parent's context to the child logger at creation time 2. 
Maintain independent context after creation ```typescript parentLogger.withContext({ requestId: "123" }); const childLogger = parentLogger.child(); // Child inherits parent's context at creation via shallow-copy childLogger.info("Initial log"); // Includes requestId: "123" // Child can modify its context independently childLogger.withContext({ userId: "456" }); childLogger.info("User action"); // Includes requestId: "123" and userId: "456" // Parent's context remains unchanged parentLogger.info("Parent log"); // Only includes requestId: "123" ``` --- --- url: 'https://loglayer.dev/log-level-managers/default.md' description: The default log level manager used in LogLayer. --- # Default Log Level Manager [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Flog-level-manager)](https://www.npmjs.com/package/@loglayer/log-level-manager) [Log Level Manager Source](https://github.com/loglayer/loglayer/tree/master/packages/core/log-level-manager) The Default Log Level Manager is the base log level manager used by LogLayer. It provides independent log level management for each logger instance, with children inheriting the initial log level from their parent. ::: info Batteries included This log level manager is automatically used when creating a new LogLayer instance. You should not need to use this log level manager directly. ::: ## Installation This package is included with the `loglayer` package, so you don't need to install it separately. 
It is, however, available as a standalone package: ::: code-group ```bash [npm] npm install @loglayer/log-level-manager ``` ```bash [yarn] yarn add @loglayer/log-level-manager ``` ```bash [pnpm] pnpm add @loglayer/log-level-manager ``` ::: ## Usage ### Basic Usage ```typescript import { LogLayer, ConsoleTransport, LogLevel } from "loglayer"; import { DefaultLogLevelManager } from "@loglayer/log-level-manager"; const logger = new LogLayer({ transport: new ConsoleTransport({ logger: console }) // NOTE: This is redundant and unnecessary since DefaultLogLevelManager is already // the default log level manager when LogLayer is created. }).withLogLevelManager(new DefaultLogLevelManager()); // Set log level logger.setLevel(LogLevel.warn); // Log with different levels logger.info('This will not be logged'); // Not logged logger.warn('This will be logged'); // Logged ``` ### Child Loggers With the Default Log Level Manager, child loggers inherit the log level from their parent when created, but subsequent changes to the parent's log level do not affect existing children: ```typescript import { LogLayer, ConsoleTransport, LogLevel } from "loglayer"; const parentLog = new LogLayer({ transport: new ConsoleTransport({ logger: console }) }); // Set parent log level parentLog.setLevel(LogLevel.warn); // Create child - inherits parent's log level (warn) const childLog = parentLog.child(); // Change parent log level parentLog.setLevel(LogLevel.debug); // Child is not affected - still at warn level childLog.info('This will not be logged'); // Not logged (child still at warn) parentLog.info('This will be logged'); // Logged (parent changed to debug) ``` ## Behavior * **Initial Inheritance**: When a child logger is created, it inherits the current log level from its parent * **Independent Changes**: After creation, parent and child loggers maintain independent log level settings * **No Propagation**: Changes to the parent's log level do not propagate to existing children --- --- url: 
'https://loglayer.dev/example-integrations/deno.md'
description: Using LogLayer with Deno runtime
---

# Deno Integration

LogLayer has support for the [Deno](https://deno.land/) runtime.

::: warning Deno Compatibility
Not all transports and plugins are compatible with Deno. Some items that rely on Node.js-specific APIs (like file system operations or native modules) may not work in Deno.

Items that have been tested with Deno are marked with a badge. Not all items have been tested with Deno, so a missing badge does not imply a lack of support. Please let us know if you find that an unbadged transport or plugin works in Deno.
:::

## Installation

### Using npm: Specifiers

The recommended way to use LogLayer with Deno is through npm: specifiers:

```typescript
import { LogLayer, ConsoleTransport } from "npm:loglayer@latest";
import { getSimplePrettyTerminal } from "npm:@loglayer/transport-simple-pretty-terminal@latest";
```

### Using Import Maps

For better dependency management, use an import map:

**deno.json**

```json
{
  "imports": {
    "loglayer": "npm:loglayer@latest",
    "@loglayer/transport-simple-pretty-terminal": "npm:@loglayer/transport-simple-pretty-terminal@latest"
  }
}
```

**main.ts**

```typescript
import { LogLayer, ConsoleTransport } from "loglayer";
import { getSimplePrettyTerminal } from "@loglayer/transport-simple-pretty-terminal";
```

## Basic Setup with Console Transport

The [Console Transport](/transports/console) is built into LogLayer and works perfectly in Deno:

```typescript
import { LogLayer, ConsoleTransport } from "npm:loglayer@latest";

const log = new LogLayer({
  transport: new ConsoleTransport({
    logger: console
  })
});

log.info("Hello from Deno with LogLayer!");
```

## Enhanced Setup with Simple Pretty Terminal

For more visually appealing output, use the [Simple Pretty Terminal Transport](/transports/simple-pretty-terminal):

```typescript
import { LogLayer } from "npm:loglayer@latest";
import { getSimplePrettyTerminal } from
"npm:@loglayer/transport-simple-pretty-terminal@latest"; const log = new LogLayer({ transport: getSimplePrettyTerminal({ runtime: "node", // Use "node" for Deno viewMode: "inline" }) }); // Pretty formatted logging log.info("This is a pretty formatted log message"); log.withMetadata({ userId: 12345, action: "login", timestamp: new Date().toISOString() }).info("User performed action"); ``` --- --- url: 'https://loglayer.dev/transports/dynatrace.md' description: Send logs to Dynatrace with the LogLayer logging library --- # Dynatrace Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-dynatrace)](https://www.npmjs.com/package/@loglayer/transport-dynatrace) [Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/dynatrace) The Dynatrace transport sends logs to Dynatrace using their [Log Monitoring API v2](https://docs.dynatrace.com/docs/discover-dynatrace/references/dynatrace-api/environment-api/log-monitoring-v2/post-ingest-logs). ::: warning See the "Limitations" section of the documentation for limits. This transport does not do any checks on limitations, so it's up to you to ensure you're not exceeding them. Although the limitations are pretty generous, it is advised to define the `onError` callback to handle any errors that may occur. ::: ## Installation ::: code-group ```bash [npm] npm install loglayer @loglayer/transport-dynatrace serialize-error ``` ```bash [yarn] yarn add loglayer @loglayer/transport-dynatrace serialize-error ``` ```bash [pnpm] pnpm add loglayer @loglayer/transport-dynatrace serialize-error ``` ::: ## Usage You will need an access token with the `logs.ingest` scope. See [access token documentation](https://docs.dynatrace.com/docs/discover-dynatrace/references/dynatrace-api/basics/dynatrace-api-authentication) for more details. 
```typescript
import { LogLayer } from 'loglayer'
import { serializeError } from 'serialize-error'
import { DynatraceTransport } from "@loglayer/transport-dynatrace"

const log = new LogLayer({
  errorSerializer: serializeError,
  transport: new DynatraceTransport({
    url: "https://your-environment-id.live.dynatrace.com/api/v2/logs/ingest",
    ingestToken: "your-api-token",
    onError: (error) => {
      console.error('Failed to send log to Dynatrace:', error)
    }
  })
})

log.info('Hello world')
```

## Configuration

The transport accepts the following configuration options:

### Required Parameters

| Name | Type | Description |
|------|------|-------------|
| `url` | `string` | The URL to post logs to. Should be in one of these formats: `https://{your-environment-id}.live.dynatrace.com/api/v2/logs/ingest` or `https://{your-activegate-domain}:9999/e/{your-environment-id}/api/v2/logs/ingest` |
| `ingestToken` | `string` | An API token with the `logs.ingest` scope |

### Optional Parameters

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `onError` | `(error: Error) => void` | - | A callback function that will be called when there's an error sending logs to Dynatrace |
| `enabled` | `boolean` | `true` | If set to `false`, the transport will not send any logs |
| `consoleDebug` | `boolean` | `false` | If set to `true`, logs will also be output to the console |
| `level` | `"trace" \| "debug" \| "info" \| "warn" \| "error" \| "fatal"` | `"trace"` | Minimum log level to process. Logs below this level will be filtered out |

## Log Format

The transport sends logs to Dynatrace in the following format:

```json
{
  "content": "Your log message",
  "severity": "info|warn|error|debug",
  "timestamp": "2024-01-01T00:00:00.000Z",
  // Any additional metadata fields
}
```

## Changelog

View the changelog [here](./changelogs/dynatrace-changelog.md).
--- --- url: 'https://loglayer.dev/transports/electron-log.md' description: Send logs to electron-log with the LogLayer logging library --- # Electron-log Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-electron-log)](https://www.npmjs.com/package/@loglayer/transport-electron-log) [Electron-log](https://github.com/megahertz/electron-log) is a logging library designed specifically for Electron applications. [Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/electron-log) ## Installation Install the required packages: ::: code-group ```sh [npm] npm i loglayer @loglayer/transport-electron-log electron-log ``` ```sh [pnpm] pnpm add loglayer @loglayer/transport-electron-log electron-log ``` ```sh [yarn] yarn add loglayer @loglayer/transport-electron-log electron-log ``` ::: ## Setup ```typescript // Main process logger import log from 'electron-log/src/main' // Or for Renderer process // import log from 'electron-log/src/renderer' import { LogLayer } from 'loglayer' import { ElectronLogTransport } from "@loglayer/transport-electron-log" const logger = new LogLayer({ transport: new ElectronLogTransport({ logger: log }) }) ``` ## Log Level Mapping | LogLayer | Electron-log | |----------|--------------| | trace | silly | | debug | debug | | info | info | | warn | warn | | error | error | | fatal | error | ## Changelog View the changelog [here](./changelogs/electron-log-changelog.md). --- --- url: 'https://loglayer.dev/logging-api/error-handling.md' description: Learn how to pass errors to LogLayer for logging --- # Error Handling LogLayer provides robust error handling capabilities with flexible configuration options for how errors are logged and serialized. 
## Basic Error Logging ### With a Message The most common way to log an error is using the `withError` method along with a message: ```typescript const error = new Error('Database connection failed') log.withError(error).error('Failed to process request') ``` You can use any log level with error logging: ```typescript // Log error with warning level log.withError(error).warn('Database connection unstable') // Log error with info level log.withError(error).info('Retrying connection') ``` ### Error-Only Logging When you just want to log an error without an additional message: ```typescript // Default log level is 'error' log.errorOnly(new Error('Database connection failed')) // With custom log level log.errorOnly(new Error('Connection timeout'), { logLevel: LogLevel.warn }) ``` ## Error Configuration ### Error Field Name By default, errors are logged under the `err` field. You can customize this: ```typescript const log = new LogLayer({ errorFieldName: 'error', // Default is 'err' }) log.errorOnly(new Error('test')) // Output: { "error": { "message": "test", "stack": "..." } } ``` ### Error Serialization Some logging libraries don't handle Error objects well. You can provide a custom error serializer: ```typescript const log = new LogLayer({ errorSerializer: (err) => ({ message: err.message, stack: err.stack, code: err.code }), }) ``` For libraries like `roarr` that require error serialization, you can use a package like `serialize-error`: ```typescript import { serializeError } from 'serialize-error' const log = new LogLayer({ errorSerializer: serializeError, transport: new RoarrTransport({ logger: roarr }) }) ``` ::: tip Use serialize-error We strongly recommend the use of `serialize-error` for error serialization. 
::: ### Error Message Copying You can configure LogLayer to automatically copy the error's message as the log message: ```typescript const log = new LogLayer({ copyMsgOnOnlyError: true, }) // Will include error.message as the log message log.errorOnly(new Error('Connection failed')) ``` You can override this behavior per call: ```typescript // Disable message copying for this call log.errorOnly(new Error('test'), { copyMsg: false }) // Enable message copying for this call even if disabled globally log.errorOnly(new Error('test'), { copyMsg: true }) ``` ### Error in Metadata You can configure errors to be included in the metadata field instead of at the root level: ```typescript const log = new LogLayer({ errorFieldInMetadata: true, metadataFieldName: 'metadata', }) log.errorOnly(new Error('test')) // Output: { "metadata": { "err": { "message": "test", "stack": "..." } } } ``` ## Combining Errors with Other Data ### With Metadata You can combine errors with metadata: ```typescript log.withError(new Error('Query failed')) .withMetadata({ query: 'SELECT * FROM users', duration: 1500 }) .error('Database error') ``` ### With Context Errors can be combined with context data: ```typescript log.withContext({ requestId: '123' }) .withError(new Error('Not found')) .error('Resource not found') ``` --- --- url: 'https://loglayer.dev/plugins/filter.md' description: 'Filter logs using string patterns, regular expressions, or JSON Queries' --- # Filter Plugin [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Fplugin-filter)](https://www.npmjs.com/package/@loglayer/plugin-filter) [Plugin Source](https://github.com/loglayer/loglayer/tree/master/packages/plugins/filter) A plugin that filters log messages. You can filter logs using string patterns, regular expressions, or [JSON Queries](https://jsonquerylang.org/). 
## Installation

::: code-group

```bash [npm]
npm install @loglayer/plugin-filter
```

```bash [yarn]
yarn add @loglayer/plugin-filter
```

```bash [pnpm]
pnpm add @loglayer/plugin-filter
```

:::

## Usage

```typescript
import { filterPlugin } from '@loglayer/plugin-filter';

// Create a filter that only allows error messages
const filter = filterPlugin({
  // checks the assembled message using an includes()
  messages: ['error'],
});

// Checks the level of the log
const levelFilter = filterPlugin({
  queries: ['.level == "error" or .level == "warn"'],
});
```

### Configuration

The plugin accepts the following configuration options:

| Option | Type | Description |
|--------|------|-------------|
| `messages` | `Array<string \| RegExp>` | Optional. Array of string patterns or regular expressions to match against log messages |
| `queries` | `string[]` | Optional. Array of JSON queries to filter logs. A JSON Query `filter()` is applied, with each item being part of an OR condition |
| `debug` | `boolean` | Optional. Enable debug mode for troubleshooting |
| `disabled` | `boolean` | Optional. Disable the plugin |

## Message Pattern Matching

You can filter logs using string patterns or regular expressions:

```typescript
// Using string patterns
const filter = filterPlugin({
  messages: ['error', 'warning'],
});

// Using regular expressions
const regexFilter = filterPlugin({
  messages: [/error/i, /warning\d+/],
});

// Mixed patterns
const mixedFilter = filterPlugin({
  messages: ['error', /warning\d+/],
});
```

## Query-Based Filtering

You can use [JSON Queries](https://jsonquerylang.org/) to filter logs based on any field.
### Usage

```typescript
const filter = filterPlugin({
  // each item is used as an OR condition
  queries: [
    // Filter by log level
    '.level == "error"',
    // Filter by data properties
    '.data.userId == 123',
    // Complex conditions
    '(.level == "error") and (.data.retryCount > 3)',
  ],
});
```

::: tip
For joining conditions, wrap them in parentheses.
:::

This would translate in JSON Query to:

```text
filter((.level == "error") or (.data.userId == 123) or ((.level == "error") and (.data.retryCount > 3)))
```

::: info
* `filter()` is added around the queries by the plugin.
* Single-quotes are converted to double-quotes.
:::

### Query Context

The queries are executed against an array containing an object that is defined as the following:

```typescript
[{
  level: string;   // Log level
  message: string; // Combined log message
  data: object;    // Additional log data, which includes error data, context data, and metadata
}]
```

If you did the following:

```typescript
log.withMetadata({ userId: '123' }).error('Failed to process request');
```

Then the query context would be:

```typescript
{
  level: 'error',
  message: 'Failed to process request',
  data: {
    userId: '123'
  }
}
```

### Example Queries

```text
// Filter by log level
[".level == 'error'"]

// Filter by message content
// see: https://github.com/jsonquerylang/jsonquery/blob/main/reference/functions.md#regex
["regex(.message, 'test', 'i')"]

// Filter by data properties
[".data.user.age == 25"]

// Complex conditions
["(.level == 'error') and (.data.retryCount > 3)"]
```

## Debug Mode

Enable debug mode to see detailed information about the filtering process:

```typescript
const filter = filterPlugin({
  messages: ['error'],
  queries: ['.level == "error"'],
  debug: true,
});
```

## Filter Logic

The plugin follows this logic when filtering logs:

1. If no filters are defined (no messages and no queries), allow all logs
2. If message patterns are defined, check them first
   * If any pattern matches, allow the log
3.
If no message patterns match (or none defined) and queries are defined: * Execute queries * If any query matches, allow the log 4. If no patterns or queries match, filter out the log ## Changelog View the changelog [here](./changelogs/filter-changelog.md). --- --- url: 'https://loglayer.dev/getting-started.md' description: Learn how to install and use LogLayer in your project --- # Getting Started *LogLayer is designed to work seamlessly across both server-side and browser environments. However, individual transports and plugins may have specific environment requirements, which is indicated on their respective page.* ## Installation ### Node.js ::: code-group ```sh [npm] npm install loglayer ``` ```sh [pnpm] pnpm add loglayer ``` ```sh [yarn] yarn add loglayer ``` ::: ### Deno For Deno, you can use npm: specifiers or import maps: **Using npm: specifiers:** ```typescript import { LogLayer } from "npm:loglayer@latest"; ``` **Using import maps (recommended):** ```json // deno.json { "imports": { "loglayer": "npm:loglayer@latest" } } ``` ```typescript // main.ts import { LogLayer } from "loglayer"; ``` For detailed Deno setup and examples, see the [Deno integration guide](/example-integrations/deno). ### Bun For Bun, you can install LogLayer using bun's package manager: ```sh bun add loglayer ``` For detailed Bun setup and examples, see the [Bun integration guide](/example-integrations/bun). 
## Basic Usage with Console Transport The simplest way to get started is to use the built-in console transport, which uses the standard `console` object for logging: ```typescript import { LogLayer, ConsoleTransport } from 'loglayer' const log = new LogLayer({ transport: new ConsoleTransport({ logger: console, }), }) // Basic logging log.info('Hello world!') // Logging with metadata log.withMetadata({ user: 'john' }).info('User logged in') // Logging with context (persists across log calls) log.withContext({ requestId: '123' }) log.info('Processing request') // Will include requestId // Logging errors log.withError(new Error('Something went wrong')).error('Failed to process request') ``` ::: tip Structured logging If you want to use the Console Transport as a structured logger (JSON output), see the [Console Transport structured logging section](/transports/console#structured-logging). ::: ## Using an Error Serializer When logging errors, JavaScript `Error` objects don't serialize to JSON well by default. We recommend using an error serializer like `serialize-error` to ensure error details are properly captured: ::: code-group ```sh [npm] npm install serialize-error ``` ```sh [pnpm] pnpm add serialize-error ``` ```sh [yarn] yarn add serialize-error ``` ::: ```typescript import { LogLayer, ConsoleTransport } from 'loglayer' import { serializeError } from 'serialize-error' const log = new LogLayer({ errorSerializer: serializeError, transport: new ConsoleTransport({ logger: console, }), }) // Error details will be properly serialized log.withError(new Error('Something went wrong')).error('Failed to process request') ``` For more error handling options, see the [Error Handling documentation](/logging-api/error-handling#error-serialization). ## Next steps * Optionally [configure](/configuration) LogLayer to further customize logging behavior. * See the [Console Transport](/transports/console) documentation for more configuration options. 
* Start exploring the [Logging API](/logging-api/basic-logging) section for more advanced logging features. * See the [Transports](/transports/) section for more ways to ship logs to different destinations. --- --- url: 'https://loglayer.dev/log-level-managers/global.md' description: Apply log level changes to all loggers globally in LogLayer. --- # Global Log Level Manager [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Flog-level-manager-global)](https://www.npmjs.com/package/@loglayer/log-level-manager-global) [Log Level Manager Source](https://github.com/loglayer/loglayer/tree/master/packages/log-level-managers/global) A log level manager that applies log level changes to all loggers globally, regardless of whether they are parent or child loggers. ## Installation ::: code-group ```bash [npm] npm install @loglayer/log-level-manager-global ``` ```bash [yarn] yarn add @loglayer/log-level-manager-global ``` ```bash [pnpm] pnpm add @loglayer/log-level-manager-global ``` ::: ## Usage ```typescript import { LogLayer, ConsoleTransport, LogLevel } from "loglayer"; import { GlobalLogLevelManager } from '@loglayer/log-level-manager-global'; const logger1 = new LogLayer({ transport: new ConsoleTransport({ logger: console }), }).withLogLevelManager(new GlobalLogLevelManager()); const logger2 = new LogLayer({ transport: new ConsoleTransport({ logger: console }), }).withLogLevelManager(new GlobalLogLevelManager()); // Changing log level on logger1 affects logger2 as well logger1.setLevel(LogLevel.warn); logger1.info('This will not be logged'); // Not logged logger2.info('This will also not be logged'); // Not logged (affected by logger1's change) // Changing log level on logger2 also affects logger1 logger2.setLevel(LogLevel.debug); logger1.debug('This will be logged'); // Logged (affected by logger2's change) logger2.debug('This will be logged'); // Logged ``` ### With Child Loggers ```typescript import { LogLayer, ConsoleTransport, LogLevel } from "loglayer"; 
import { GlobalLogLevelManager } from '@loglayer/log-level-manager-global'; const parentLog = new LogLayer({ transport: new ConsoleTransport({ logger: console }), }).withLogLevelManager(new GlobalLogLevelManager()); const childLog = parentLog.child(); // Changing log level on parent affects all loggers globally parentLog.setLevel(LogLevel.warn); // Both parent and child are affected parentLog.info('This will not be logged'); // Not logged childLog.info('This will not be logged'); // Not logged // Changing log level on child also affects all loggers globally childLog.setLevel(LogLevel.debug); // Both parent and child are affected parentLog.debug('This will be logged'); // Logged childLog.debug('This will be logged'); // Logged ``` ## Behavior * **Global State**: All loggers using `GlobalLogLevelManager` share the same global log level state * **Bidirectional**: Changes to any logger affect all other loggers using the same manager * **No Isolation**: There is no isolation between different logger instances ## Use Cases * **Application-wide Log Level Control**: When you want to control log levels across your entire application from a single point * **Dynamic Log Level Adjustment**: When you need to change log levels globally at runtime (e.g., based on environment or configuration) --- --- url: 'https://loglayer.dev/transports/google-cloud-logging.md' description: Send logs to Google Cloud Logging with the LogLayer logging library --- # Google Cloud Logging Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-google-cloud-logging)](https://www.npmjs.com/package/@loglayer/transport-google-cloud-logging) [Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/google-cloud-logging) Implements the [Google Cloud Logging library](https://www.npmjs.com/package/@google-cloud/logging). This transport sends logs to [Google Cloud Logging](https://cloud.google.com/logging) (formerly known as Stackdriver Logging). 
## Configuration Options

### Required Parameters

| Name | Type | Description |
|------|------|-------------|
| `logger` | `Log` | The Google Cloud Logging instance |

### Optional Parameters

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `level` | `"trace" \| "debug" \| "info" \| "warn" \| "error" \| "fatal"` | `"trace"` | The minimum log level to process. Logs below this level will be filtered out |
| `rootLevelData` | `Record<string, any>` | - | Data to be included in the metadata portion of the log entry |
| `rootLevelMetadataFields` | `Array<string>` | `[]` | List of LogLayer metadata fields to merge into `rootLevelData` |
| `onError` | `(error: Error) => void` | - | Error handling callback |
| `enabled` | `boolean` | `true` | If false, the transport will not send logs to the logger |
| `consoleDebug` | `boolean` | `false` | If true, the transport will log to the console for debugging purposes |
| `id` | `string` | - | A user-defined identifier for the transport |

## Installation

::: code-group

```bash [npm]
npm install @loglayer/transport-google-cloud-logging @google-cloud/logging serialize-error
```

```bash [yarn]
yarn add @loglayer/transport-google-cloud-logging @google-cloud/logging serialize-error
```

```bash [pnpm]
pnpm add @loglayer/transport-google-cloud-logging @google-cloud/logging serialize-error
```

:::

## Usage

::: info
This transport uses `log.entry(metadata, data)` as described in the library documentation.

* The `metadata` portion is not the data from `withMetadata()` or `withContext()`. See the `rootLevelData` and `rootLevelMetadataFields` options for this transport on how to modify this value.
* The `data` portion maps to the `jsonPayload` field, which is what the transport uses for all LogLayer data.
* The message data is stored in `jsonPayload.message` For more information, see [Structured Logging](https://cloud.google.com/logging/docs/structured-logging), specifically [LogEntry](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry). ::: ```typescript import { LogLayer } from "loglayer"; import { GoogleCloudLoggingTransport } from "@loglayer/transport-google-cloud-logging"; import { Logging } from '@google-cloud/logging'; import { serializeError } from "serialize-error"; // Create the logging client const logging = new Logging({ projectId: "GOOGLE_CLOUD_PLATFORM_PROJECT_ID" }); const log = logging.log('my-log'); // Create LogLayer instance with the transport const logger = new LogLayer({ errorSerializer: serializeError, transport: new GoogleCloudLoggingTransport({ logger: log, }) }); // The logs will include the default metadata logger.info("Hello from Cloud Run!"); ``` ## Configuration ### `rootLevelData` The root level data to include for all log entries. This is not the same as using `withContext()`, which would be included as part of the `jsonPayload`. The `rootLevelData` option accepts any valid [Google Cloud LogEntry](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry) fields except for `severity`, `timestamp`, and `jsonPayload` which are managed by the transport. ```typescript const logger = new LogLayer({ transport: new GoogleCloudLoggingTransport({ logger: log, rootLevelData: { resource: { type: "cloud_run_revision", labels: { project_id: "my-project", service_name: "my-service", revision_name: "my-revision", }, }, labels: { environment: "production", version: "1.0.0", }, }, }), }); ``` ### `rootLevelMetadataFields` By default, `withMetadata()` fields are forwarded as part of `jsonPayload` of the [LogEntry](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry). The `rootLevelMetadataFields` option accepts an array of field names to pluck from metadata and shallow merge with `rootLevelData`. 
This allows you to dynamically specify the metadata portion of a log entry. ```typescript const logger = new LogLayer({ transport: new GoogleCloudLoggingTransport({ logger: log, rootLevelMetadataFields: ["labels"], rootLevelData: { labels: { environment: "production", location: "west", }, }, }), }); // This will overwrite `labels` in root level data. // `customField` is still sent as part of `jsonPayload`. logger .withMetadata({ labels: { location: "east" }, customField: "example" }) .info("example") ``` To allow mapping to every supported `LogEntry` metadata field, the following list is recommended: ```typescript const logger = new LogLayer({ transport: new GoogleCloudLoggingTransport({ logger: log, rootLevelMetadataFields: [ "logName", "resource", "insertId", "httpRequest", "labels", "operation", "trace", "spanId", "traceSampled", "sourceLocation", "split", ], }), }); ``` ## Log Level Mapping LogLayer log levels are mapped to Google Cloud Logging severity levels as follows: | LogLayer Level | Google Cloud Logging Severity | |---------------|------------------------------| | `fatal` | `CRITICAL` | | `error` | `ERROR` | | `warn` | `WARNING` | | `info` | `INFO` | | `debug` | `DEBUG` | | `trace` | `DEBUG` | ## Changelog View the changelog [here](./changelogs/google-cloud-logging-changelog.md). --- --- url: 'https://loglayer.dev/mixins/hot-shots.md' description: Add StatsD metrics functionality to LogLayer using hot-shots --- # Hot-Shots (StatsD) Mixin [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Fmixin-hot-shots)](https://www.npmjs.com/package/@loglayer/mixin-hot-shots) [![Source](https://img.shields.io/badge/source-GitHub-blue)](https://github.com/loglayer/loglayer/tree/master/packages/mixins/hot-shots) Adds StatsD metrics functionality to the [LogLayer](https://loglayer.dev) logging library using [hot-shots](https://github.com/bdeitte/hot-shots). 
The mixin provides a fluent builder API for sending metrics to StatsD, DogStatsD, and Telegraf through a `stats` property on LogLayer instances. ## Installation This mixin requires the [`hot-shots`](https://github.com/bdeitte/hot-shots) library to be installed in your project. ::: code-group ```bash [npm] npm install @loglayer/mixin-hot-shots hot-shots ``` ```bash [yarn] yarn add @loglayer/mixin-hot-shots hot-shots ``` ```bash [pnpm] pnpm add @loglayer/mixin-hot-shots hot-shots ``` ::: ## TypeScript Setup To use this mixin with TypeScript, you must register the types by adding the mixin package to your `tsconfig.json` includes: ```json { "include": [ "./node_modules/@loglayer/mixin-hot-shots" ] } ``` This ensures TypeScript recognizes the mixin methods on your LogLayer instances. ## Usage First, create and configure your StatsD client, then register the mixin **before** creating any LogLayer instances: ```typescript import { LogLayer, useLogLayerMixin, ConsoleTransport } from 'loglayer'; import { StatsD } from 'hot-shots'; import { hotshotsMixin } from '@loglayer/mixin-hot-shots'; // Create and configure your StatsD client const statsd = new StatsD({ host: 'localhost', port: 8125 }); // Register the mixin (must be called before creating LogLayer instances) useLogLayerMixin(hotshotsMixin(statsd)); // Create LogLayer instance const log = new LogLayer({ transport: new ConsoleTransport({ logger: console }) }); // Use StatsD methods through the stats property log.stats.increment('request.count').send(); log.stats.decrement('request.count').send(); log.stats.timing('request.duration', 150).send(); log.stats.gauge('active.connections', 42).send(); ``` ### Builder Pattern All stats methods use a fluent builder pattern. You can chain configuration methods before calling `send()`. 
Since `send()` returns `void`, stats calls should be placed at the end of a method chain: ```typescript // Increment with value, tags, and sample rate log.stats.increment('request.count') .withValue(5) .withTags(['env:production', 'service:api']) .withSampleRate(0.5) .withCallback((error, bytes) => { if (error) { log.withError(error).error('Error sending metric'); } else { log.info(`Sent ${bytes} bytes`); } }) .send(); // Chain LogLayer methods BEFORE stats log .withContext({ userId: '123' }) .withPrefix('API') .stats.increment('user.login').send(); // Chain ends here // Stats methods are available on child loggers const childLogger = log.child(); childLogger.stats.timing('operation.duration', 150).send(); // To continue logging after stats, start a new statement log.stats.increment('login').send(); log.info('User logged in'); ``` ### Testing with MockLogLayer The hot-shots mixin is fully compatible with `MockLogLayer` for unit testing: ```typescript import { MockLogLayer } from 'loglayer'; import { useLogLayerMixin } from 'loglayer'; import { hotshotsMixin, MockStatsAPI } from '@loglayer/mixin-hot-shots'; // Register the mixin with a null client for testing useLogLayerMixin(hotshotsMixin(null)); const mockLogger = new MockLogLayer(); // All stats methods are available on MockLogLayer mockLogger.stats.increment('counter').send(); mockLogger.stats.gauge('gauge', 100).send(); mockLogger.stats.timing('timer', 500).send(); // The stats property will be a MockStatsAPI when using null client // No actual metrics are sent, making it safe for unit tests ``` For more information on testing with MockLogLayer, see the [Unit Testing documentation](/logging-api/unit-testing). ## Migration from v2 to v3 In v3, the mixin API has been refactored to use a builder pattern with a `stats` property instead of direct methods on LogLayer. 
### API Changes **Before (v2):** ```typescript // Mixin registration useLogLayerMixin(hotshotsMixin(statsd)); // Usage log.statsIncrement('request.count').info('Request received'); log.statsDecrement('request.count'); log.statsTiming('request.duration', 150).info('Request processed'); log.statsGauge('active.connections', 42).info('Connection established'); ``` **After (v3):** ```typescript // Mixin registration (same API, but now uses stats property) useLogLayerMixin(hotshotsMixin(statsd)); // Usage (now uses stats property with builder pattern) log.stats.increment('request.count').send(); log.stats.decrement('request.count').send(); log.stats.timing('request.duration', 150).send(); log.stats.gauge('active.connections', 42).send(); ``` ### Migration Steps 1. **Update method calls**: Replace all `stats*` methods with the `stats` property (mixin registration stays the same: `hotshotsMixin(statsd)`): * `log.statsIncrement(...)` → `log.stats.increment(...).send()` * `log.statsDecrement(...)` → `log.stats.decrement(...).send()` * `log.statsTiming(...)` → `log.stats.timing(...).send()` * `log.statsGauge(...)` → `log.stats.gauge(...).send()` * And so on for all stats methods 2. **Remove LogLayer chaining**: The stats methods no longer return LogLayer instances. If you were chaining logging methods after stats calls, you'll need to separate them: ```typescript // Before log.statsIncrement('request.count').info('Request received'); // After log.stats.increment('request.count').send(); log.info('Request received'); ``` 3. **Update method parameters**: Some methods now use the builder pattern for optional parameters: ```typescript // Before log.statsIncrement('counter', 5, 0.5, ['tag:value']); // After log.stats.increment('counter') .withValue(5) .withSampleRate(0.5) .withTags(['tag:value']) .send(); ``` ## Configuration The `hotshotsMixin` function requires a configured `StatsD` client instance from the `hot-shots` library. 
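As a sketch of what "a configured `StatsD` client" can look like: the options used below (`host`, `port`, `prefix`, `globalTags`, `errorHandler`) are standard hot-shots client options, and the values are purely illustrative.

```typescript
import { useLogLayerMixin } from 'loglayer';
import { StatsD } from 'hot-shots';
import { hotshotsMixin } from '@loglayer/mixin-hot-shots';

// Illustrative hot-shots client configuration
const statsd = new StatsD({
  host: 'localhost',              // StatsD server host
  port: 8125,                     // StatsD server port
  prefix: 'myapp.',               // prepended to every stat name
  globalTags: ['env:production'], // attached to every metric
  errorHandler: (error) => {
    // hot-shots sends over UDP; surface socket errors somewhere visible
    console.error('StatsD error:', error);
  },
});

// Register once, before creating any LogLayer instances
useLogLayerMixin(hotshotsMixin(statsd));
```

hot-shots also supports a `mock: true` option that buffers metrics in memory instead of sending them, which can be handy in tests.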
### Required Parameters | Name | Type | Description | |------|------|-------------| | `client` | `StatsD` | A configured `StatsD` client instance from the `hot-shots` library | For detailed information about configuring the `StatsD` client, see the [hot-shots documentation](https://github.com/bdeitte/hot-shots). ## Available Methods All stats methods are accessed through the `stats` property on LogLayer instances. Each method returns a builder that supports chaining configuration methods before calling `send()` or `create()`. For detailed usage information, parameters, and examples, refer to the [hot-shots documentation](https://github.com/bdeitte/hot-shots). ### Accessing the StatsD Client | Method | Returns | Description | |--------|---------|-------------| | `getClient()` | `StatsD` | Returns the underlying hot-shots StatsD client instance that was configured when the mixin was registered. Useful for accessing advanced features that aren't directly exposed through the mixin API. | ### Counters | Method | Returns | Description | |--------|---------|-------------| | `stats.increment(stat)` | `IIncrementDecrementBuilder` | Increments a counter stat. Use `withValue()` to specify the increment amount (defaults to 1) | | `stats.decrement(stat)` | `IIncrementDecrementBuilder` | Decrements a counter stat. Use `withValue()` to specify the decrement amount (defaults to 1) | **Builder methods**: `withValue(value)`, `withTags(tags)`, `withSampleRate(rate)`, `withCallback(callback)`, `send()` ### Timings | Method | Returns | Description | |--------|---------|-------------| | `stats.timing(stat, value)` | `IStatsBuilder` | Sends a timing command. The value can be milliseconds (number) or a Date object | | `stats.timer(func, stat)` | `ITimerBuilder` | Wraps a synchronous function to automatically time its execution. Returns a builder that supports chaining with `withTags()`, `withSampleRate()`, and `withCallback()`. 
Call `create()` to get the wrapped function | | `stats.asyncTimer(func, stat)` | `IAsyncTimerBuilder` | Wraps an async function to automatically time its execution. Returns a builder that supports chaining with `withTags()`, `withSampleRate()`, and `withCallback()`. Call `create()` to get the wrapped function | | `stats.asyncDistTimer(func, stat)` | `IAsyncDistTimerBuilder` | Wraps an async function to automatically time its execution as a distribution metric (DogStatsD only). Returns a builder that supports chaining with `withTags()`, `withSampleRate()`, and `withCallback()`. Call `create()` to get the wrapped function | **Builder methods for `timing`**: `withTags(tags)`, `withSampleRate(rate)`, `withCallback(callback)`, `send()` **Builder methods for `timer`, `asyncTimer`, and `asyncDistTimer`**: `withTags(tags)`, `withSampleRate(rate)`, `withCallback(callback)`, `create()` ```typescript // Wrap a synchronous function to automatically time it const processData = (data: string) => { // Your synchronous logic return data.toUpperCase(); }; const timedProcess = log.stats.timer(processData, 'data.process.duration').create(); const result = timedProcess('hello'); // Automatically records timing // Wrap an async function to automatically time it const fetchData = async (url: string) => { const response = await fetch(url); return response.json(); }; // Simple usage const timedFetch = log.stats.asyncTimer(fetchData, 'api.fetch.duration').create(); const data = await timedFetch('https://api.example.com/data'); // With tags and sample rate const timedFetchWithOptions = log.stats .asyncTimer(fetchData, 'api.fetch.duration') .withTags(['env:production', 'service:api']) .withSampleRate(0.5) .create(); const data2 = await timedFetchWithOptions('https://api.example.com/data'); ``` ### Histograms and Distributions | Method | Returns | Description | |--------|---------|-------------| | `stats.histogram(stat, value)` | `IStatsBuilder` | Records a value in a histogram. 
Histograms track the statistical distribution of values (DogStatsD/Telegraf only) | | `stats.distribution(stat, value)` | `IStatsBuilder` | Tracks the statistical distribution of a set of values (DogStatsD v6). Similar to histogram but optimized for distributions | **Builder methods**: `withTags(tags)`, `withSampleRate(rate)`, `withCallback(callback)`, `send()` ### Gauges | Method | Returns | Description | |--------|---------|-------------| | `stats.gauge(stat, value)` | `IStatsBuilder` | Sets or changes a gauge stat to the specified value | | `stats.gaugeDelta(stat, delta)` | `IStatsBuilder` | Changes a gauge stat by a specified amount rather than setting it to a value | **Builder methods**: `withTags(tags)`, `withSampleRate(rate)`, `withCallback(callback)`, `send()` ### Sets | Method | Returns | Description | |--------|---------|-------------| | `stats.set(stat, value)` | `IStatsBuilder` | Counts unique occurrences of a stat. Records how many unique elements were tracked | | `stats.unique(stat, value)` | `IStatsBuilder` | Alias for `set`. Counts unique occurrences of a stat | **Builder methods**: `withTags(tags)`, `withSampleRate(rate)`, `withCallback(callback)`, `send()` ### Service Checks (DogStatsD only) | Method | Returns | Description | |--------|---------|-------------| | `stats.check(name, status)` | `ICheckBuilder` | Sends a service check status. Status values: `0` = OK, `1` = WARNING, `2` = CRITICAL, `3` = UNKNOWN | **Builder methods**: `withOptions(options)`, `withTags(tags)`, `withCallback(callback)`, `send()` **Note**: `withSampleRate()` is not supported for service checks and is a no-op. ### Events (DogStatsD only) | Method | Returns | Description | |--------|---------|-------------| | `stats.event(title)` | `IEventBuilder` | Sends an event. Use `withText()` to set the event description | **Builder methods**: `withText(text)`, `withTags(tags)`, `withCallback(callback)`, `send()` **Note**: `withSampleRate()` is not supported for events and is a no-op. 
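The set, histogram, distribution, and gauge-delta methods described in the tables above use the same builder shape as counters and timings; here is a brief sketch. The stat names, values, and tags are illustrative, and the setup mirrors the Usage section:

```typescript
import { LogLayer, useLogLayerMixin, ConsoleTransport } from 'loglayer';
import { StatsD } from 'hot-shots';
import { hotshotsMixin } from '@loglayer/mixin-hot-shots';

const statsd = new StatsD({ host: 'localhost', port: 8125 });
useLogLayerMixin(hotshotsMixin(statsd));

const log = new LogLayer({
  transport: new ConsoleTransport({ logger: console })
});

// Sets count unique occurrences of a value
log.stats.set('visitors.unique', 'user-1234').send();

// `unique` is an alias for `set`
log.stats.unique('visitors.unique', 'user-5678').send();

// Histogram of response sizes (DogStatsD/Telegraf only)
log.stats.histogram('response.size_bytes', 2048)
  .withTags(['route:/api/items'])
  .send();

// Distribution variant (DogStatsD v6)
log.stats.distribution('response.size_bytes', 2048).send();

// Change a gauge by a delta rather than setting it outright
log.stats.gaugeDelta('active.connections', -1).send();
```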
## Builder Methods All stats methods return a builder that supports the following configuration methods: ### Common Builder Methods | Method | Type | Description | |--------|------|-------------| | `withTags(tags)` | `StatsTags` | Add tags to the metric. Can be an array of strings (`['env:prod', 'service:api']`) or an object (`{ env: 'prod', service: 'api' }`) | | `withSampleRate(rate)` | `number` | Set the sample rate for the metric (0.0 to 1.0). Not supported for events and service checks | | `withCallback(callback)` | `StatsCallback` | Add a callback function to be called after sending the metric. Receives `(error?: Error, bytes?: number)` | | `send()` | `void` | Send the metric with the configured options | ### Increment/Decrement Builder Methods | Method | Type | Description | |--------|------|-------------| | `withValue(value)` | `number` | Set the increment/decrement value (defaults to 1 if not specified) | ### Event Builder Methods | Method | Type | Description | |--------|------|-------------| | `withText(text)` | `string` | Set the event text/description | ### Check Builder Methods | Method | Type | Description | |--------|------|-------------| | `withOptions(options)` | `CheckOptions` | Set service check options (hostname, timestamp, message, etc.) 
|

## Examples

### Basic Usage

```typescript
import { LogLayer, useLogLayerMixin, ConsoleTransport } from 'loglayer';
import { StatsD } from 'hot-shots';
import { hotshotsMixin } from '@loglayer/mixin-hot-shots';

const statsd = new StatsD({ host: 'localhost', port: 8125 });

useLogLayerMixin(hotshotsMixin(statsd));

const log = new LogLayer({
  transport: new ConsoleTransport({ logger: console })
});

// Simple increment
log.stats.increment('page.views').send();

// Increment by specific value
log.stats.increment('items.sold').withValue(10).send();

// Timing with tags
log.stats.timing('request.duration', 150)
  .withTags(['method:GET', 'status:200'])
  .send();

// Gauge with object tags
log.stats.gauge('active.users', 42)
  .withTags({ env: 'production', region: 'us-east-1' })
  .send();
```

### Advanced Usage

```typescript
// Increment with all options
log.stats.increment('api.requests')
  .withValue(1)
  .withTags(['env:production', 'service:api'])
  .withSampleRate(0.1) // Sample 10% of requests
  .withCallback((error, bytes) => {
    if (error) {
      log.withError(error).error('Failed to send metric');
    }
  })
  .send();

// Service check (DogStatsD)
import { StatsD } from 'hot-shots';

const statsd = new StatsD({ host: 'localhost', port: 8125 });
const CHECKS = statsd.CHECKS;

log.stats.check('database.health', CHECKS.OK)
  .withOptions({ hostname: 'db-server-1', message: 'Database is healthy' })
  .withTags(['service:database', 'env:production'])
  .send();

// Event (DogStatsD)
log.stats.event('Deployment completed')
  .withText('Version 1.2.3 deployed to production')
  .withTags(['env:production', 'version:1.2.3'])
  .send();
```

### Multiple Stats

All methods support arrays of stat names:

```typescript
// Increment multiple counters at once
log.stats.increment(['requests.total', 'requests.api']).send();

// Set multiple gauges
log.stats.gauge(['cpu.usage', 'memory.usage'], 75).send();
```

---

---
url: 'https://loglayer.dev/transports/http.md'
description: Send logs to any HTTP endpoint with the
LogLayer logging library --- # HTTP Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-http)](https://www.npmjs.com/package/@loglayer/transport-http) Ships logs to any HTTP endpoint with support for batching, compression, retries, and rate limiting. Features include: * Configurable HTTP method and headers * Custom payload template function * Gzip compression support * Retry logic with exponential backoff * Rate limiting support * Batch sending with configurable size and timeout * Error and debug callbacks * Log size validation and payload size tracking * Support for Next.js edge deployments [Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/http) This transport was 99% vibe-coded, with manual testing against [VictoriaLogs](victoria-logs.md) and [Logflare](logflare.md). [Vibe Code Prompts](https://github.com/loglayer/loglayer/tree/master/packages/transports/http/PROMPTS.md) ## Installation ::: code-group ```bash [npm] npm install loglayer @loglayer/transport-http serialize-error ``` ```bash [pnpm] pnpm add loglayer @loglayer/transport-http serialize-error ``` ```bash [yarn] yarn add loglayer @loglayer/transport-http serialize-error ``` ::: ## Basic Usage ```typescript import { LogLayer } from 'loglayer' import { HttpTransport } from "@loglayer/transport-http" import { serializeError } from "serialize-error"; const log = new LogLayer({ errorSerializer: serializeError, transport: new HttpTransport({ url: "https://api.example.com/logs", method: "POST", // optional, defaults to POST headers: { "Authorization": "Bearer YOUR_API_KEY", "Content-Type": "application/json" }, payloadTemplate: ({ logLevel, message, data }) => JSON.stringify({ timestamp: new Date().toISOString(), level: logLevel, message, metadata: data, }), compression: true, // optional, defaults to false maxRetries: 3, // optional, defaults to 3 retryDelay: 1000, // optional, defaults to 1000 respectRateLimit: true, // optional, defaults to 
true enableBatchSend: true, // optional, defaults to true batchSize: 100, // optional, defaults to 100 batchSendTimeout: 5000, // optional, defaults to 5000ms batchSendDelimiter: "\n", // optional, defaults to "\n" maxLogSize: 1048576, // optional, defaults to 1MB maxPayloadSize: 5242880, // optional, defaults to 5MB enableNextJsEdgeCompat: false, // optional, defaults to false onError: (err) => { console.error('Failed to send logs:', err); }, onDebug: (entry) => { console.log('Log entry being sent:', entry); }, onDebugReqRes: ({ req, res }) => { console.log('HTTP Request:', { url: req.url, method: req.method, headers: req.headers, body: req.body }); console.log('HTTP Response:', { status: res.status, statusText: res.statusText, headers: res.headers, body: res.body }); }, }) }) // Use the logger log.info("This is a test message"); log.withMetadata({ userId: "123" }).error("User not found"); ``` ## Configuration ### Required Parameters | Name | Type | Description | |------|------|-------------| | `url` | `string` | The URL to send logs to | | `payloadTemplate` | `(data: { logLevel: string; message: string; data?: Record }) => string` | Function to transform log data into the payload format | ### HTTP Transport Optional Parameters #### General Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `enabled` | `boolean` | `true` | Whether the transport is enabled | | `level` | `"trace" \| "debug" \| "info" \| "warn" \| "error" \| "fatal"` | `"trace"` | Minimum log level to process. Logs below this level will be filtered out | | `method` | `string` | `"POST"` | HTTP method to use for requests | | `headers` | `Record \| (() => Record)` | `{}` | Headers to include in the request. Can be an object or a function that returns headers | | `contentType` | `string` | `"application/json"` | Content type for single log requests. 
User-specified headers take precedence | | `compression` | `boolean` | `false` | Whether to use gzip compression | | `maxRetries` | `number` | `3` | Number of retry attempts before giving up | | `retryDelay` | `number` | `1000` | Base delay between retries in milliseconds | | `respectRateLimit` | `boolean` | `true` | Whether to respect rate limiting by waiting when a 429 response is received | | `maxLogSize` | `number` | `1048576` | Maximum size of a single log entry in bytes (1MB) | | `maxPayloadSize` | `number` | `5242880` | Maximum size of the payload (uncompressed) in bytes (5MB) | | `enableNextJsEdgeCompat` | `boolean` | `false` | Whether to enable Next.js Edge compatibility | #### Debug Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `onError` | `(err: Error) => void` | - | Error handling callback | | `onDebug` | `(entry: Record) => void` | - | Debug callback for inspecting log entries before they are sent | | `onDebugReqRes` | `(reqRes: { req: { url: string; method: string; headers: Record; body: string \| Uint8Array }; res: { status: number; statusText: string; headers: Record; body: string } }) => void` | - | Debug callback for inspecting HTTP requests and responses. Provides complete request/response details including headers and body content | #### Batch Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `batchContentType` | `string` | `"application/json"` | Content type for batch log requests. 
User-specified headers take precedence | | `enableBatchSend` | `boolean` | `true` | Whether to enable batch sending | | `batchSize` | `number` | `100` | Number of log entries to batch before sending | | `batchSendTimeout` | `number` | `5000` | Timeout in milliseconds for sending batches regardless of size | | `batchSendDelimiter` | `string` | `"\n"` | Delimiter to use between log entries in batch mode | | `batchMode` | `"delimiter" \| "field" \| "array"` | `"delimiter"` | Batch mode for sending multiple log entries. "delimiter" joins entries with a delimiter, "field" wraps an array of entries in an object with a field name, "array" sends entries as a plain JSON array of objects | | `batchFieldName` | `string` | - | Field name to wrap batch entries in when batchMode is "field" | ## Features ### Custom Payload Templates The transport requires you to provide a `payloadTemplate` function that transforms your log data into a string format expected by your HTTP endpoint: ```typescript // Simple JSON format payloadTemplate: ({ logLevel, message, data }) => JSON.stringify({ timestamp: new Date().toISOString(), level: logLevel, message, ...data, }) // Custom format for specific APIs payloadTemplate: ({ logLevel, message, data }) => JSON.stringify({ event_type: "log_entry", severity: logLevel.toUpperCase(), text: message, service_name: "my-app", environment: process.env.NODE_ENV, ...data, }) // Plain text format payloadTemplate: ({ logLevel, message, data }) => `[${logLevel.toUpperCase()}] ${message} ${data ? JSON.stringify(data) : ''}` // XML format payloadTemplate: ({ logLevel, message, data }) => ` ${message} ${data ? 
`${JSON.stringify(data)}` : ''} ` ``` ### Content Type Configuration The transport provides separate content type configuration for single and batch requests: ```typescript new HttpTransport({ url: "https://api.example.com/logs", payloadTemplate: ({ logLevel, message, data }) => JSON.stringify({ level: logLevel, message, metadata: data, }), contentType: "application/json", // For single log requests batchContentType: "application/x-ndjson", // For batch requests }) ``` **Important**: User-specified headers take precedence over the `contentType` and `batchContentType` parameters. If you include `"content-type"` in your `headers` object or function, it will override both parameters: ```typescript // This will use "application/xml" for both single and batch requests new HttpTransport({ url: "https://api.example.com/logs", headers: { "content-type": "application/xml", // Takes precedence }, contentType: "application/json", // Ignored batchContentType: "application/x-ndjson", // Ignored payloadTemplate: ({ logLevel, message, data }) => JSON.stringify({ level: logLevel, message, metadata: data, }), }) ``` **See also**: [Content Type Configuration for Non-JSON Payloads](#content-type-configuration-for-non-json-payloads) for examples of configuring content types for different payload formats. ### Content Type Configuration for Non-JSON Payloads When using non-JSON payload formats, make sure to configure the appropriate content types: ```typescript // Plain text format new HttpTransport({ url: "https://api.example.com/logs", contentType: "text/plain", // For single requests batchContentType: "text/plain", // For batch requests payloadTemplate: ({ logLevel, message, data }) => `[${logLevel.toUpperCase()}] ${message} ${data ? 
JSON.stringify(data) : ''}`,
})

// XML format (element names are illustrative)
new HttpTransport({
  url: "https://api.example.com/logs",
  contentType: "application/xml", // For single requests
  batchContentType: "application/xml", // For batch requests
  payloadTemplate: ({ logLevel, message, data }) => `<log level="${logLevel}">
  <message>${message}</message>
  ${data ? `<data>${JSON.stringify(data)}</data>` : ''}
</log>`,
})

// Newline-delimited JSON (NDJSON) for batch processing
new HttpTransport({
  url: "https://api.example.com/logs",
  contentType: "application/json", // For single requests
  batchContentType: "application/x-ndjson", // For batch requests
  payloadTemplate: ({ logLevel, message, data }) => JSON.stringify({
    level: logLevel,
    message,
    metadata: data,
  }),
})
```

**Note**: If you specify `"content-type"` in your `headers` object or function, it will override both the `contentType` and `batchContentType` parameters.

### Next.js Edge Runtime

When using the HTTP transport with the Next.js Edge Runtime, you need to enable compatibility mode:

```typescript
// Next.js Edge Runtime configuration
new HttpTransport({
  url: "https://api.example.com/logs",
  payloadTemplate: ({ logLevel, message, data }) => JSON.stringify({
    level: logLevel,
    message,
    metadata: data,
  }),
  enableNextJsEdgeCompat: true, // Required for Edge Runtime
  compression: false, // Compression is not available in Edge Runtime
})
```

**Important Edge Runtime Limitations:**

* **TextEncoder is disabled**: The transport automatically uses `Buffer.byteLength()` for size calculations
* **Compression is disabled**: Gzip compression is not available in Edge Runtime
* **Automatic detection**: Falls back to Edge-compatible methods when TextEncoder is not available

**Example for Next.js API Route:**

```typescript
// app/api/logs/route.ts
import { LogLayer } from 'loglayer'
import { HttpTransport } from "@loglayer/transport-http"
import { serializeError } from "serialize-error";

// This allows you to use compression and alternative size comparison if not running on edge
const isServer = typeof window === 'undefined'

const
log = new LogLayer({ errorSerializer: serializeError, transport: new HttpTransport({ url: "https://your-logging-service.com/logs", headers: { "Authorization": `Bearer ${process.env.LOGGING_API_KEY}`, }, payloadTemplate: ({ logLevel, message, data }) => JSON.stringify({ timestamp: new Date().toISOString(), level: logLevel, message, metadata: data, service: "my-nextjs-app", environment: process.env.NODE_ENV, }), enableNextJsEdgeCompat: isServer, // Required for Edge Runtime enableBatchSend: false, // Recommended for Edge Runtime (no background processing) maxRetries: 2, // Lower retry count for Edge Runtime retryDelay: 500, // Faster retry for Edge Runtime onError: (err) => { // Log errors to console in Edge Runtime console.error('HTTP transport error:', err.message); }, }) }) export async function POST(request: Request) { try { const body = await request.json() log.info("API request received", { body }) return Response.json({ success: true }) } catch (error) { log.withError(error as Error).error("API request failed") return Response.json({ error: "Internal server error" }, { status: 500 }) } } ``` **Edge Runtime Best Practices:** 1. **Disable batching**: Set `enableBatchSend: false` to avoid background processing 2. **Lower retry counts**: Use fewer retries since Edge Runtime has time limits 3. **Faster retries**: Use shorter retry delays 4. **Error handling**: Always provide an `onError` callback 5. 
**Size limits**: Be conservative with `maxLogSize` and `maxPayloadSize` ### Dynamic Headers You can provide headers as either a static object or a function that returns headers dynamically: ```typescript // Static headers headers: { "Authorization": "Bearer YOUR_API_KEY", "Content-Type": "application/json", "X-Service-Name": "my-app" } // Dynamic headers headers: () => ({ "Authorization": `Bearer ${getApiKey()}`, "Content-Type": "application/json", "X-Request-ID": generateRequestId(), "X-Timestamp": new Date().toISOString() }) ``` ### Batching The transport supports batching to improve performance and reduce HTTP requests: ```typescript new HttpTransport({ url: "https://api.example.com/logs", payloadTemplate: ({ logLevel, message, data }) => JSON.stringify({ level: logLevel, message, metadata: data, }), enableBatchSend: true, // Enable batching (default) batchSize: 50, // Send when 50 logs are queued batchSendTimeout: 3000, // Or send after 3 seconds batchSendDelimiter: "\n", // Separate entries with newlines }) ``` When batching is enabled: * Logs are queued until `batchSize` is reached OR `batchSendTimeout` expires * Multiple log entries are joined according to the `batchMode` setting * Each entry is already a string from the payloadTemplate * **Payload size tracking**: The transport keeps a running tally of uncompressed payload size * **Automatic sending**: If adding a new log entry would exceed 90% of `maxPayloadSize`, the batch is sent immediately ### Batch Modes The HTTP transport supports three different batch modes to format multiple log entries: #### Delimiter Mode (Default) The default mode joins log entries with a delimiter: ```typescript new HttpTransport({ url: "https://api.example.com/logs", payloadTemplate: ({ logLevel, message, data }) => JSON.stringify({ level: logLevel, message, metadata: data, }), batchMode: "delimiter", // Default batchSendDelimiter: "\n", // Default delimiter }) ``` **Output format:** ``` {"level":"info","message":"First 
log","metadata":{}} {"level":"error","message":"Second log","metadata":{"userId":"123"}} {"level":"warn","message":"Third log","metadata":{}} ``` #### Array Mode Sends log entries as a plain JSON array: ```typescript new HttpTransport({ url: "https://api.example.com/logs", payloadTemplate: ({ logLevel, message, data }) => JSON.stringify({ level: logLevel, message, metadata: data, }), batchMode: "array", }) ``` **Output format:** ```json [ {"level":"info","message":"First log","metadata":{}}, {"level":"error","message":"Second log","metadata":{"userId":"123"}}, {"level":"warn","message":"Third log","metadata":{}} ] ``` #### Field Mode Wraps log entries in an object with a specified field name (useful for APIs like Logflare): ```typescript new HttpTransport({ url: "https://api.example.com/logs", payloadTemplate: ({ logLevel, message, data }) => JSON.stringify({ level: logLevel, message, metadata: data, }), batchMode: "field", batchFieldName: "logs", // Required when using field mode }) ``` **Output format:** ```json { "logs": [ {"level":"info","message":"First log","metadata":{}}, {"level":"error","message":"Second log","metadata":{"userId":"123"}}, {"level":"warn","message":"Third log","metadata":{}} ] } ``` **Important**: When using `batchMode: "field"`, you must provide the `batchFieldName` parameter. The transport will throw an error if this is missing. 
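Since `payloadTemplate` has already turned each entry into a string, the three batch modes differ only in how those strings are combined into the request body. A minimal sketch of that assembly step (illustrative only, not the transport's actual internals):

```typescript
type BatchMode = "delimiter" | "field" | "array";

// Illustrative sketch: combine entries that payloadTemplate has already
// serialized to strings, according to the configured batch mode.
function assembleBatch(
  entries: string[],
  mode: BatchMode,
  delimiter = "\n",
  fieldName?: string,
): string {
  if (mode === "delimiter") return entries.join(delimiter); // NDJSON when delimiter is "\n"
  if (mode === "array") return `[${entries.join(",")}]`; // plain JSON array
  if (!fieldName) throw new Error('batchFieldName is required when batchMode is "field"');
  return `{"${fieldName}":[${entries.join(",")}]}`; // wrapped in an object field
}

const entries = [
  '{"level":"info","message":"First log"}',
  '{"level":"error","message":"Second log"}',
];

console.log(assembleBatch(entries, "delimiter"));
console.log(assembleBatch(entries, "field", "\n", "logs"));
```

Note how delimiter mode with `"\n"` produces NDJSON, which pairs naturally with `batchContentType: "application/x-ndjson"`.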
### Log Size Validation The transport validates individual log entry sizes to prevent oversized payloads: ```typescript new HttpTransport({ url: "https://api.example.com/logs", payloadTemplate: ({ logLevel, message, data }) => JSON.stringify({ level: logLevel, message, metadata: data, }), maxLogSize: 2097152, // 2MB limit onError: (err) => { if (err.name === "LogSizeError") { console.error("Log entry too large:", err.message); console.log("Log entry:", err.logEntry); console.log("Size:", err.size, "bytes"); console.log("Limit:", err.limit, "bytes"); } } }) ``` When a log entry exceeds `maxLogSize`: * The entry is not sent * A `LogSizeError` is passed to the `onError` callback * The error includes the log entry, actual size, and size limit ### Compression Enable gzip compression to reduce bandwidth usage: ```typescript new HttpTransport({ url: "https://api.example.com/logs", payloadTemplate: ({ logLevel, message, data }) => JSON.stringify({ level: logLevel, message, metadata: data, }), compression: true, // Enable gzip compression }) ``` When compression is enabled: * The `Content-Encoding: gzip` header is automatically added * Payload is compressed using the CompressionStream API (browsers) or zlib (Node.js) * Falls back to uncompressed if compression fails ### Retry Logic The transport includes retry logic with exponential backoff: ```typescript new HttpTransport({ url: "https://api.example.com/logs", payloadTemplate: ({ logLevel, message, data }) => JSON.stringify({ level: logLevel, message, metadata: data, }), maxRetries: 5, // Retry up to 5 times retryDelay: 2000, // Start with 2 second delay }) ``` The actual delay between retries is calculated using: ``` delay = baseDelay * (2 ^ attemptNumber) + random(0-200)ms ``` ### Rate Limiting The transport handles rate limiting (HTTP 429 responses): ```typescript new HttpTransport({ url: "https://api.example.com/logs", payloadTemplate: ({ logLevel, message, data }) => JSON.stringify({ level: logLevel, message, 
metadata: data, }), respectRateLimit: true, // Wait for Retry-After header (default) // or respectRateLimit: false, // Fail immediately on rate limit }) ``` When `respectRateLimit` is enabled: * Waits for the duration specified in the `Retry-After` header * Uses the `retryDelay` value if no header is present * Rate limit retries don't count against `maxRetries` ### Debugging The HTTP transport provides several debugging callbacks to help you monitor and troubleshoot log transmission: #### onError Callback The `onError` callback is triggered when any error occurs during log transmission: ```typescript new HttpTransport({ url: "https://api.example.com/logs", payloadTemplate: ({ logLevel, message, data }) => JSON.stringify({ level: logLevel, message, metadata: data, }), onError: (err) => { console.error('HTTP transport error:', err); } }) ``` #### onDebug Callback The `onDebug` callback provides visibility into individual log entries being processed: ```typescript new HttpTransport({ url: "https://api.example.com/logs", payloadTemplate: ({ logLevel, message, data }) => JSON.stringify({ level: logLevel, message, metadata: data, }), onDebug: (entry) => { console.log('Processing log entry:', { logLevel: entry.logLevel, message: entry.message, data: entry.data }); } }) ``` The `entry` object contains: * `logLevel`: The log level (info, error, etc.) 
* `message`: The log message * `data`: The metadata/context data #### onDebugReqRes Callback The `onDebugReqRes` callback provides detailed information about HTTP requests and responses for deeper troubleshooting: ```typescript new HttpTransport({ url: "https://api.example.com/logs", payloadTemplate: ({ logLevel, message, data }) => JSON.stringify({ level: logLevel, message, metadata: data, }), onDebugReqRes: ({ req, res }) => { console.log('HTTP Request:', { url: req.url, method: req.method, headers: req.headers, body: req.body }); console.log('HTTP Response:', { status: res.status, statusText: res.statusText, headers: res.headers, body: res.body }); } }) ``` The request object (`req`) contains: * `url`: The request URL * `method`: HTTP method (POST, PUT, etc.) * `headers`: Request headers * `body`: Request body content (string or Uint8Array) The response object (`res`) contains: * `status`: HTTP status code * `statusText`: HTTP status text * `headers`: Response headers * `body`: Response body content (string) ### Implementation Examples * [Logflare Transport](logflare.md) - Built on top of the HTTP transport for Logflare integration * [VictoriaLogs Transport](victoria-logs.md) - Wraps around this transport to add support for [VictoriaLogs](https://victoriametrics.com/products/victorialogs/) using their [JSON Stream API](https://docs.victoriametrics.com/victorialogs/data-ingestion/#json-stream-api) --- --- url: 'https://loglayer.dev/context-managers/isolated.md' description: Maintain isolated context for each logger instance in LogLayer. --- # Isolated Context Manager [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Fcontext-manager-isolated)](https://www.npmjs.com/package/@loglayer/context-manager-isolated) [Context Manager Source](https://github.com/loglayer/loglayer/tree/master/packages/context-managers/isolated) A context manager that maintains isolated context for each logger instance. 
When a child logger is created, it starts with no context data - it does not inherit from the parent. This is useful when you want complete isolation between parent and child loggers, ensuring that context changes in one logger don't affect any other loggers. This Context Manager was 99% vibe-coded. [Vibe Code Prompts](https://github.com/loglayer/loglayer/tree/master/packages/context-managers/isolated/PROMPTS.md) ## Installation ::: code-group ```bash [npm] npm install @loglayer/context-manager-isolated ``` ```bash [yarn] yarn add @loglayer/context-manager-isolated ``` ```bash [pnpm] pnpm add @loglayer/context-manager-isolated ``` ::: ## Usage ```typescript import { LogLayer, ConsoleTransport } from "loglayer"; import { IsolatedContextManager } from '@loglayer/context-manager-isolated'; const parentLog = new LogLayer({ transport: new ConsoleTransport({ logger: console }), }).withContextManager(new IsolatedContextManager()); // Set context on parent logger parentLog.withContext({ userId: "123", requestId: "abc" }); // Create child logger const childLog = parentLog.child(); // Parent has context parentLog.info('Parent log message'); // Output: { userId: "123", requestId: "abc" } Parent log message // Child starts with NO context (isolation) childLog.info('Child log message'); // Output: Child log message (no context data) // Adding context to child doesn't affect parent childLog.withContext({ module: 'auth', action: 'login' }); parentLog.info('Another parent message'); // Output: { userId: "123", requestId: "abc" } Another parent message childLog.info('Another child message'); // Output: { module: 'auth', action: 'login' } Another child message ``` ## Changelog View the changelog [here](./changelogs/isolated-changelog.md). --- --- url: 'https://loglayer.dev/context-managers/linked.md' description: Share context between parent and child logs in LogLayer. 
--- # Linked Context Manager [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Fcontext-manager-linked)](https://www.npmjs.com/package/@loglayer/context-manager-linked) [Context Manager Source](https://github.com/loglayer/loglayer/tree/master/packages/context-managers/linked) A context manager that keeps context linked between parent and child loggers. This means that changes to the context in the parent / child / child of child loggers will affect all loggers. ## Installation ::: code-group ```bash [npm] npm install @loglayer/context-manager-linked ``` ```bash [yarn] yarn add @loglayer/context-manager-linked ``` ```bash [pnpm] pnpm add @loglayer/context-manager-linked ``` ::: ## Usage ```typescript import { LogLayer, ConsoleTransport } from "loglayer"; import { LinkedContextManager } from '@loglayer/context-manager-linked'; const parentLog = new LogLayer({ transport: new ConsoleTransport({ logger: console }), }).withContextManager(new LinkedContextManager()); const childLog = parentLog.child(); childLog.withContext({ module: 'users' }); parentLog.withContext({ app: 'myapp' }); parentLog.info('Parent log'); childLog.info('Child log'); // Output includes: { module: 'users', app: 'myapp' } // for both parentLog and childLog ``` ## Changelog View the changelog [here](./changelogs/linked-changelog.md). --- --- url: 'https://loglayer.dev/log-level-managers/linked.md' description: >- Synchronize log levels bidirectionally between parent and child loggers in LogLayer. --- # Linked Log Level Manager [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Flog-level-manager-linked)](https://www.npmjs.com/package/@loglayer/log-level-manager-linked) [Log Level Manager Source](https://github.com/loglayer/loglayer/tree/master/packages/log-level-managers/linked) A log level manager that keeps log levels synchronized between parent and children. Parent and child changes affect each other bidirectionally. 
Changes only apply to a parent and their children (not separate instances). ## Installation ::: code-group ```bash [npm] npm install @loglayer/log-level-manager-linked ``` ```bash [yarn] yarn add @loglayer/log-level-manager-linked ``` ```bash [pnpm] pnpm add @loglayer/log-level-manager-linked ``` ::: ## Usage ```typescript import { LogLayer, ConsoleTransport, LogLevel } from "loglayer"; import { LinkedLogLevelManager } from '@loglayer/log-level-manager-linked'; const parentLog = new LogLayer({ transport: new ConsoleTransport({ logger: console }), }).withLogLevelManager(new LinkedLogLevelManager()); const childLog = parentLog.child(); // Parent changes affect children parentLog.setLevel(LogLevel.warn); parentLog.info('This will not be logged'); // Not logged childLog.info('This will not be logged'); // Not logged (affected by parent) // Child changes also affect parent childLog.setLevel(LogLevel.debug); parentLog.debug('This will be logged'); // Logged (affected by child) childLog.debug('This will be logged'); // Logged (child changed to debug) ``` ### Nested Children ```typescript import { LogLayer, ConsoleTransport, LogLevel } from "loglayer"; import { LinkedLogLevelManager } from '@loglayer/log-level-manager-linked'; const parentLog = new LogLayer({ transport: new ConsoleTransport({ logger: console }), }).withLogLevelManager(new LinkedLogLevelManager()); const childLog = parentLog.child(); const grandchildLog = childLog.child(); // Parent changes affect all descendants parentLog.setLevel(LogLevel.error); parentLog.warn('This will not be logged'); // Not logged childLog.warn('This will not be logged'); // Not logged (affected by parent) grandchildLog.warn('This will not be logged'); // Not logged (affected by parent) // Grandchild changes affect all ancestors grandchildLog.setLevel(LogLevel.debug); parentLog.debug('This will be logged'); // Logged (affected by grandchild) childLog.debug('This will be logged'); // Logged (affected by grandchild) 
grandchildLog.debug('This will be logged'); // Logged (grandchild changed to debug) ``` ## Behavior * **Bidirectional Propagation**: Parent changes propagate to all children, and child changes propagate to the parent and all siblings * **Hierarchical**: Changes apply only within a parent-child hierarchy, not across separate logger instances * **Shared Container**: All linked managers in a hierarchy share the same log level container, ensuring immediate synchronization ## Use Cases * **Tightly Coupled Modules**: When you want log levels to be synchronized across a module hierarchy where any change should affect all related loggers * **Shared Configuration**: When you need a single source of truth for log levels across a parent-child relationship * **Dynamic Log Level Control**: When you want to be able to change log levels from any logger in the hierarchy and have it affect all others --- --- url: 'https://loglayer.dev/transports/log-file-rotation.md' description: >- Write logs to files with automatic rotation based on size or time with the LogLayer logging library --- # Log File Rotation Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-log-file-rotation)](https://www.npmjs.com/package/@loglayer/transport-log-file-rotation) [Transport Source](https://github.com/loglayer/loglayer/blob/master/packages/transports/log-file-rotation) The Log File Rotation transport writes logs to files with automatic rotation based on size or time. This transport is built on top of [`file-stream-rotator`](https://github.com/rogerc/file-stream-rotator/), a library for handling log file rotation in Node.js applications. 
## Features * Automatic log file rotation based on time (hourly, daily) * Support for date patterns in filenames using numerical values * Size-based rotation with support for KB, MB, and GB units * Compression of rotated log files * Maximum file count or age-based retention * Automatic cleanup of old log files * Batch processing of logs for improved performance (must be enabled) ## Installation ::: code-group ```sh [npm] npm i loglayer @loglayer/transport-log-file-rotation serialize-error ``` ```sh [pnpm] pnpm add loglayer @loglayer/transport-log-file-rotation serialize-error ``` ```sh [yarn] yarn add loglayer @loglayer/transport-log-file-rotation serialize-error ``` ::: ## Usage ```typescript import { LogLayer } from "loglayer"; import { LogFileRotationTransport } from "@loglayer/transport-log-file-rotation"; import { serializeError } from "serialize-error"; const logger = new LogLayer({ errorSerializer: serializeError, transport: [ new LogFileRotationTransport({ filename: "./logs/app.log" }), ], }); logger.info("Application started"); ``` ::: warning Filename Uniqueness Each instance of `LogFileRotationTransport` must have a unique filename to prevent possible race conditions. If you try to create multiple transport instances with the same filename, an error will be thrown. If you need multiple loggers to write to the same file, they should share the same transport instance: ```typescript // Create a single transport instance const fileTransport = new LogFileRotationTransport({ filename: "./logs/app-%DATE%.log", dateFormat: "YMD", frequency: "daily" }); // Share it between multiple loggers const logger1 = new LogLayer({ transport: [fileTransport] }); const logger2 = new LogLayer({ transport: [fileTransport] }); ``` Child loggers do not have this problem as they inherit the transport instance from their parent logger. 
::: ## Configuration Options ### Required Parameters | Name | Type | Description | |------|------|-------------| | `filename` | `string` | The filename pattern to use for the log files. Supports date format using numerical values (e.g., `"./logs/application-%DATE%.log"`) | ### Optional Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `auditFile` | `string` | - | Location to store the log audit file | | `auditHashType` | `"md5" \| "sha256"` | `"md5"` | Hashing algorithm for audit file. Use 'sha256' for FIPS compliance | | `batch` | `object` | - | Batch processing configuration. See [Batch Configuration](#batch-configuration) for details | | `callbacks` | `object` | - | Event callbacks for various file stream events. See [Callbacks](#callbacks) for details | | `compressOnRotate` | `boolean` | `false` | Whether to compress rotated log files using gzip | | `createSymlink` | `boolean` | `false` | Create a tailable symlink to the current active log file | | `dateFormat` | `string` | `"YMD"` | The date format to use in the filename. Uses single characters: 'Y' (full year), 'M' (month), 'D' (day), 'H' (hour), 'm' (minutes), 's' (seconds) | | `delimiter` | `string` | `"\n"` | Delimiter between log entries | | `extension` | `string` | - | File extension to be appended to the filename | | `fieldNames` | `object` | - | Custom field names for the log entry JSON. See [Field Names](#field-names) for details | | `fileMode` | `number` | `0o640` | File mode (permissions) to be used when creating log files | | `fileOptions` | `object` | `{ flags: 'a' }` | Options passed to fs.createWriteStream | | `frequency` | `string` | - | The frequency of rotation. Can be 'daily', 'date', '\[1-30]m' for minutes, or '\[1-12]h' for hours | | `levelMap` | `object` | - | Custom mapping for log levels. See [Level Mapping](#level-mapping) for details | | `maxLogs` | `string \| number` | - | Maximum number of logs to keep. 
Can be a number of files or days (e.g., "10d" for 10 days) |
| `size` | `string` | - | The size at which to rotate. Must include a unit suffix: "k"/"K" for kilobytes, "m"/"M" for megabytes, "g"/"G" for gigabytes (e.g., "10M", "100K") |
| `staticData` | `(() => Record<string, any>) \| Record<string, any>` | - | Static data to be included in every log entry. Can be either a function that returns an object, or a direct object. If it's a function, it's called for each log entry |
| `symlinkName` | `string` | `"current.log"` | Name to use when creating the symbolic link |
| `timestampFn` | `() => string \| number` | `() => new Date().toISOString()` | Custom function to generate timestamps |
| `utc` | `boolean` | `false` | Use UTC time for date in filename |
| `verbose` | `boolean` | `false` | Whether to enable verbose mode in the underlying file-stream-rotator. See [Verbose Mode](#verbose-mode) for details |

### Field Names

The `fieldNames` object allows you to customize the field names in the log entry JSON:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `level` | `string` | No | Field name for the log level. Default: "level" |
| `message` | `string` | No | Field name for the log message. Default: "message" |
| `timestamp` | `string` | No | Field name for the timestamp. Default: "timestamp" |

### Callbacks

The `callbacks` object supports the following event handlers:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `onClose` | `() => void` | No | Called when a log file is closed |
| `onError` | `(error: Error) => void` | No | Called when an error occurs |
| `onFinish` | `() => void` | No | Called when the stream is finished |
| `onLogRemoved` | `(info: { date: number; name: string; hash: string }) => void` | No | Called when a log file is removed due to retention policy |
| `onNew` | `(newFile: string) => void` | No | Called when a new log file is created |
| `onOpen` | `() => void` | No | Called when a log file is opened |
| `onRotate` | `(oldFile: string, newFile: string) => void` | No | Called when a log file is rotated |

### Level Mapping

The `levelMap` object allows you to map each log level to either a string or number:

| Level | Type | Example (Numeric) | Example (String) |
|-------|------|------------------|------------------|
| `debug` | `string \| number` | 20 | `"DEBUG"` |
| `error` | `string \| number` | 50 | `"ERROR"` |
| `fatal` | `string \| number` | 60 | `"FATAL"` |
| `info` | `string \| number` | 30 | `"INFO"` |
| `trace` | `string \| number` | 10 | `"TRACE"` |
| `warn` | `string \| number` | 40 | `"WARNING"` |

### Batch Configuration

The `batch` option enables batching of log entries to improve performance by reducing disk writes. When enabled, logs are queued in memory and written to disk in batches.
The configuration accepts the following options: | Option | Type | Description | Default | |--------|------|-------------|---------| | `size` | `number` | Maximum number of log entries to queue before writing | `1000` | | `timeout` | `number` | Maximum time in milliseconds to wait before writing queued logs | `5000` | Queued logs are automatically flushed in the following situations: * When the batch size is reached * When the batch timeout is reached * When the transport is disposed * When the process exits (including SIGINT and SIGTERM signals) Example usage: ```typescript new LogFileRotationTransport({ filename: "./logs/app", frequency: "daily", dateFormat: "YMD", extension: ".log", batch: { size: 1000, // Write after 1000 logs are queued timeout: 5000 // Or after 5 seconds, whichever comes first } }); ``` ::: tip Performance Tuning For high-throughput applications, you might want to adjust the batch settings based on your needs: * Increase `batch.size` for better throughput at the cost of higher memory usage * Decrease `batch.timeout` to reduce the risk of losing logs in case of crashes ::: ## Verbose Mode It is recommended to enable `verbose` when configuring log rotation rules. This option allows troubleshooting and debugging of the rotation settings. Once properly configured, it can be removed / disabled. ```typescript new LogFileRotationTransport({ filename: "./logs/daily/test-%DATE%.log", frequency: "1h", dateFormat: "YMD", // using hourly frequency, but missing the "m" part verbose: true, }); ``` If there is something wrong with the configuration, you will get something like: ```bash [FileStreamRotator] Date format not suitable for X hours rotation. 
Changing date format to 'YMDHm'
```

Which will help you identify the issue and correct it:

```typescript
new LogFileRotationTransport({
  filename: "./logs/daily/test-%DATE%.log",
  frequency: "1h",
  dateFormat: "YMDHm",
});
```

## Log Format

Each log entry is written as a JSON object with the following format:

```json5
{
  "level": "info",
  "message": "Log message",
  "timestamp": "2024-01-17T12:34:56.789Z",
  // metadata / context / error data will depend on your LogLayer configuration
  "userId": "123",
  "requestId": "abc-123"
}
```

## Adding Static Data to Every Log Entry

```typescript
import { hostname } from "node:os";

// Using a function
new LogFileRotationTransport({
  filename: "./logs/app-%DATE%.log",
  frequency: "daily",
  dateFormat: "YMD",
  staticData: () => ({
    hostname: hostname(), // Add the server's hostname
    pid: process.pid, // Add the process ID
    environment: process.env.NODE_ENV || "development"
  })
});

// Using a direct object
new LogFileRotationTransport({
  filename: "./logs/app-%DATE%.log",
  frequency: "daily",
  dateFormat: "YMD",
  staticData: {
    hostname: hostname(),
    pid: process.pid,
    environment: process.env.NODE_ENV || "development"
  }
});
```

This will add the hostname, process ID, and environment to every log entry:

```json
{
  "level": "info",
  "message": "Application started",
  "timestamp": "2024-01-17T12:34:56.789Z",
  "hostname": "my-server",
  "pid": 12345,
  "environment": "production"
}
```

::: tip Static Data Performance
When using static values that don't change during the lifetime of your application (like hostname and process ID), it's better to use a direct object instead of a function:

```typescript
// Better performance: object is created once
new LogFileRotationTransport({
  filename: "./logs/app-%DATE%.log",
  staticData: {
    hostname: hostname(),
    pid: process.pid,
    environment: process.env.NODE_ENV || "development"
  }
});

// Use a function only if you need dynamic values
new LogFileRotationTransport({
  filename: "./logs/app-%DATE%.log",
  staticData: () => ({
    timestamp:
Date.now(), // Dynamic value that changes hostname: hostname(), // Static value pid: process.pid // Static value }) }); ``` ::: ## Rotation Examples ::: tip Date Format Requirements The transport requires specific date formats based on the rotation frequency: * For daily rotation: use `dateFormat: "YMD"` * For hourly rotation: use `dateFormat: "YMDHm"` * For minute rotation: use `dateFormat: "YMDHm"` These formats ensure proper rotation timing and file naming. ::: ### Daily Rotation ```typescript new LogFileRotationTransport({ filename: "./logs/daily/test-%DATE%.log", frequency: "daily", dateFormat: "YMD", // Required for daily rotation }); ``` ### Hourly Rotation ```typescript new LogFileRotationTransport({ filename: "./logs/hourly/test-%DATE%.log", frequency: "1h", dateFormat: "YMDHm", // Required for hourly rotation }); ``` ### Minute-based Rotation ```typescript new LogFileRotationTransport({ filename: "./logs/minutes/test-%DATE%.log", frequency: "5m", dateFormat: "YMDHm", // Required for minute rotation }); ``` ### Size-based Rotation ```typescript new LogFileRotationTransport({ filename: "./logs/size/app.log", size: "50k", maxLogs: 5, }); ``` ## Changelog View the changelog [here](./changelogs/log-file-rotation-changelog.md). --- --- url: 'https://loglayer.dev/log-level-managers.md' description: Learn how to create and use log level managers with LogLayer --- # Log Level Managers *New in LogLayer v8*. Log level managers in LogLayer are responsible for managing log level settings across logger instances. They provide a way to control how log levels are inherited and propagated between parent and child loggers. ::: tip Do you need to specify a log level manager? Log level managers are an advanced feature of LogLayer. Unless you need to manage log levels in a specific way, you can use the default log level manager, which is already automatically used when creating a new LogLayer instance. 
::: ### Available Log Level Managers | Name | Package | Description | |------|---------|-------------| | [Default](/log-level-managers/default) | [![npm](https://img.shields.io/npm/v/%40loglayer%2Flog-level-manager)](https://www.npmjs.com/package/@loglayer/log-level-manager) | Children inherit log level from parent, but changes from parent do not propagate down | | [Global](/log-level-managers/global) | [![npm](https://img.shields.io/npm/v/%40loglayer%2Flog-level-manager-global)](https://www.npmjs.com/package/@loglayer/log-level-manager-global) | Changes apply to all loggers globally | | [One Way](/log-level-managers/one-way) | [![npm](https://img.shields.io/npm/v/%40loglayer%2Flog-level-manager-one-way)](https://www.npmjs.com/package/@loglayer/log-level-manager-one-way) | Parent changes affect children, but child changes do not affect parents | | [Linked](/log-level-managers/linked) | [![npm](https://img.shields.io/npm/v/%40loglayer%2Flog-level-manager-linked)](https://www.npmjs.com/package/@loglayer/log-level-manager-linked) | Parent and child changes affect each other bidirectionally | ## Log Level Manager Management ### Using a custom log level manager You can set a custom log level manager using the `withLogLevelManager()` method. Example usage: ```typescript import { GlobalLogLevelManager } from '@loglayer/log-level-manager-global'; const logger = new LogLayer({ transport: new ConsoleTransport({ logger: console }) }).withLogLevelManager(new GlobalLogLevelManager()); ``` ::: tip Use the `withLogLevelManager()` method right after creating the LogLayer instance. Using it after log levels have already been set may result in unexpected behavior. 
:::

### Obtaining the current log level manager

You can get the current log level manager instance using the `getLogLevelManager()` method:

```typescript
const logLevelManager = logger.getLogLevelManager();
```

You can also type the return value when getting a specific log level manager implementation:

```typescript
import type { GlobalLogLevelManager } from '@loglayer/log-level-manager-global';

const globalLogLevelManager = logger.getLogLevelManager() as GlobalLogLevelManager;
```

---

---
url: 'https://loglayer.dev/transports/log4js.md'
description: Send logs to Log4js with the LogLayer logging library
---

# Log4js Transport

[![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-log4js)](https://www.npmjs.com/package/@loglayer/transport-log4js)

[Log4js-node](https://log4js-node.github.io/log4js-node/) is a conversion of the Log4j framework to Node.js.

[Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/log4js-node)

## Important Notes

* Log4js only works in Node.js environments (not in browsers)
* By default, logging is disabled and must be configured via `level` or advanced configuration
* Consider using Winston as an alternative if Log4js configuration is too complex

## Installation

Install the required packages:

::: code-group

```sh [npm]
npm i loglayer @loglayer/transport-log4js log4js
```

```sh [pnpm]
pnpm add loglayer @loglayer/transport-log4js log4js
```

```sh [yarn]
yarn add loglayer @loglayer/transport-log4js log4js
```

:::

## Setup

```typescript
import log4js from 'log4js'
import { LogLayer } from 'loglayer'
import { Log4JsTransport } from "@loglayer/transport-log4js"

const logger = log4js.getLogger()

// Enable logging output
logger.level = "trace"

const log = new LogLayer({
  transport: new Log4JsTransport({
    logger
  })
})
```

## Log Level Mapping

| LogLayer | Log4js |
|----------|---------|
| trace | trace |
| debug | debug |
| info | info |
| warn | warn |
| error | error |
| fatal | fatal |

## Changelog

View the changelog [here](./changelogs/log4js-node-changelog.md).
--- --- url: 'https://loglayer.dev/transports/logflare.md' description: Send logs to Logflare with the LogLayer logging library --- # Logflare Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-logflare)](https://www.npmjs.com/package/@loglayer/transport-logflare) Ships logs to [Logflare](https://logflare.app) using the HTTP transport with Logflare-specific configuration. Features include: * Automatic Logflare JSON format * Built on top of the robust HTTP transport * Retry logic with exponential backoff * Rate limiting support * Batch sending with configurable size and timeout * Error and debug callbacks * Support for self-hosted Logflare instances [Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/logflare) ## Installation ::: code-group ```bash [npm] npm install loglayer @loglayer/transport-logflare serialize-error ``` ```bash [pnpm] pnpm add loglayer @loglayer/transport-logflare serialize-error ``` ```bash [yarn] yarn add loglayer @loglayer/transport-logflare serialize-error ``` ::: ## Basic Usage ::: warning Logflare free tier issues The free tier has very low rate limits which can be *easily* exceeded. You may find yourself getting `503: Service Unavailable` codes if you exceed the rate limit. Make sure to define the `onError()` callback to catch this and adjust batch timings accordingly. 
::: ```typescript import { LogLayer } from 'loglayer' import { LogflareTransport } from "@loglayer/transport-logflare" import { serializeError } from "serialize-error"; const log = new LogLayer({ errorSerializer: serializeError, contextFieldName: null, // recommended based on testing metadataFieldName: null, // recommended based on testing transport: new LogflareTransport({ sourceId: "YOUR-SOURCE-ID", apiKey: "YOUR-API-KEY", onError: (err) => { console.error('Failed to send logs to Logflare:', err); }, onDebug: (entry) => { console.log('Log entry being sent to Logflare:', entry); }, onDebugReqRes: ({ req, res }) => { console.log("=== HTTP Request ==="); console.log("URL:", req.url); console.log("Method:", req.method); console.log("Headers:", JSON.stringify(req.headers, null, 2)); console.log("Body:", typeof req.body === "string" ? req.body : `[Uint8Array: ${req.body.length} bytes]`); console.log("=== HTTP Response ==="); console.log("Status:", res.status, res.statusText); console.log("Headers:", JSON.stringify(res.headers, null, 2)); console.log("Body:", res.body); console.log("==================="); }, }) }) // Use the logger log.info("This is a test message"); log.withMetadata({ userId: "123" }).error("User not found"); ``` ## Configuration ### Required Parameters | Name | Type | Description | |------|------|-------------| | `sourceId` | `string` | Your Logflare source ID | | `apiKey` | `string` | Your Logflare API key | ### Optional Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `url` | `string` | `"https://api.logflare.app"` | Custom Logflare API endpoint (for self-hosted instances) | | `enabled` | `boolean` | `true` | Whether the transport is enabled | | `level` | `"trace" \| "debug" \| "info" \| "warn" \| "error" \| "fatal"` | `"trace"` | Minimum log level to process. 
Logs below this level will be filtered out |

### HTTP Transport Optional Parameters

#### General Parameters

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `enabled` | `boolean` | `true` | Whether the transport is enabled |
| `level` | `"trace" \| "debug" \| "info" \| "warn" \| "error" \| "fatal"` | `"trace"` | Minimum log level to process. Logs below this level will be filtered out |
| `method` | `string` | `"POST"` | HTTP method to use for requests |
| `headers` | `Record<string, string> \| (() => Record<string, string>)` | `{}` | Headers to include in the request. Can be an object or a function that returns headers |
| `contentType` | `string` | `"application/json"` | Content type for single log requests. User-specified headers take precedence |
| `compression` | `boolean` | `false` | Whether to use gzip compression |
| `maxRetries` | `number` | `3` | Number of retry attempts before giving up |
| `retryDelay` | `number` | `1000` | Base delay between retries in milliseconds |
| `respectRateLimit` | `boolean` | `true` | Whether to respect rate limiting by waiting when a 429 response is received |
| `maxLogSize` | `number` | `1048576` | Maximum size of a single log entry in bytes (1MB) |
| `maxPayloadSize` | `number` | `5242880` | Maximum size of the payload (uncompressed) in bytes (5MB) |
| `enableNextJsEdgeCompat` | `boolean` | `false` | Whether to enable Next.js Edge compatibility |

#### Debug Parameters

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `onError` | `(err: Error) => void` | - | Error handling callback |
| `onDebug` | `(entry: Record<string, any>) => void` | - | Debug callback for inspecting log entries before they are sent |
| `onDebugReqRes` | `(reqRes: { req: { url: string; method: string; headers: Record<string, string>; body: string \| Uint8Array }; res: { status: number; statusText: string; headers: Record<string, string>; body: string } }) => void` | - | Debug callback for inspecting HTTP requests and responses.
Provides complete request/response details including headers and body content | #### Batch Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `batchContentType` | `string` | `"application/json"` | Content type for batch log requests. User-specified headers take precedence | | `enableBatchSend` | `boolean` | `true` | Whether to enable batch sending | | `batchSize` | `number` | `100` | Number of log entries to batch before sending | | `batchSendTimeout` | `number` | `5000` | Timeout in milliseconds for sending batches regardless of size | | `batchSendDelimiter` | `string` | `"\n"` | Delimiter to use between log entries in batch mode | | `batchMode` | `"delimiter" \| "field" \| "array"` | `"delimiter"` | Batch mode for sending multiple log entries. "delimiter" joins entries with a delimiter, "field" wraps an array of entries in an object with a field name, "array" sends entries as a plain JSON array of objects | | `batchFieldName` | `string` | - | Field name to wrap batch entries in when batchMode is "field" | For more details on these options, see the [HTTP transport documentation](/transports/http#configuration). --- --- url: 'https://loglayer.dev/logging-api/basic-logging.md' description: Learn how to log messages at different severity levels with LogLayer --- # Basic Logging LogLayer provides a simple and consistent API for logging messages at different severity levels. This guide covers the basics of logging messages. 
## Basic Message Logging The simplest way to log a message is to use one of the log level methods: ```typescript // Basic info message log.info('User logged in successfully') // Warning message log.warn('API rate limit approaching') // Error message log.error('Failed to connect to database') // Debug message log.debug('Processing request payload') // Trace message (detailed debugging) log.trace('Entering authentication function') // Fatal message (critical errors) log.fatal('System out of memory') ``` ## Message Parameters All log methods accept multiple parameters, which can be strings, booleans, numbers, null, or undefined: ```typescript // Multiple parameters log.info('User', 123, 'logged in') // With string formatting log.info('User %s logged in from %s', 'john', 'localhost') ``` ::: tip sprintf-style formatting The logging library you use may or may not support sprintf-style string formatting. If it does not, you can use the [sprintf plugin](/plugins/sprintf) to enable support. ::: ## Message Prefixing You can add a prefix to all log messages either through configuration or using the `withPrefix` method: ```typescript // Via configuration const log = new LogLayer({ prefix: '[MyApp]', transport: new ConsoleTransport({ logger: console }) }) // Via method const prefixedLogger = log.withPrefix('[MyApp]') // Output: "[MyApp] User logged in" prefixedLogger.info('User logged in') ``` ## Raw Logging The `raw(logEntry: RawLogEntry)` method allows you to bypass the normal LogLayer API and directly specify all aspects of a log entry. This is useful for scenarios where you need to log structured data that doesn't fit the standard LogLayer patterns, or when integrating with external logging systems that provide pre-formatted log entries. The raw entry will still go through all LogLayer processing like the log level methods. 
```typescript
import { LogLevel } from 'loglayer'

// Basic raw logging with just a message
log.raw({
  logLevel: LogLevel.info,
  messages: ['User action completed', { userId: 123 }]
})
```

### Raw Logging Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------------------------------------|
| `logLevel` | `LogLevelType` | Yes | The log level for this entry |
| `messages` | `MessageDataType[]` | No | Array of message parameters |
| `metadata` | `Record<string, any>` | No | Additional metadata to include |
| `error` | `any` | No | Error object to include |
| `context` | `Record<string, any>` | No | Context data to include (see notes below) |

### Context Behavior

The context behavior of raw logging depends on whether you provide the `context` parameter:

* If you provide a `context` in the raw entry, that context data will be used instead of the context manager for that specific log entry.
* If you do not provide a `context`, the context manager data will be used (like normal logging).
* Passing an empty object `{}` as `context` will result in no context data being included for that log entry.
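These rules can be condensed into a tiny self-contained sketch. The `resolveRawContext` helper below is hypothetical (it is not part of the LogLayer API); it only mirrors the documented resolution rule:

```typescript
type RawContext = Record<string, any>;

// Hypothetical helper, for illustration only: an explicitly provided
// `context` -- even an empty object -- replaces the stored context
// manager data for that single log entry.
function resolveRawContext(stored: RawContext, explicit?: RawContext): RawContext {
  return explicit !== undefined ? explicit : stored;
}

resolveRawContext({ userId: 123 });                   // stored context is used
resolveRawContext({ userId: 123 }, { adminId: 456 }); // explicit context wins
resolveRawContext({ userId: 123 }, {});               // no context data at all
```

The key point is that `undefined` (omitting the parameter) and `{}` (an empty object) behave differently: only the former falls back to the context manager.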
### Examples ```typescript import { LogLayer, ConsoleTransport, LogLevel } from 'loglayer' const log = new LogLayer({ transport: new ConsoleTransport({ logger: console, messageField: 'msg' }), // Configure custom field names for better organization contextFieldName: 'ctx', metadataFieldName: 'meta', errorFieldName: 'err' }) // Set some stored context log.withContext({ userId: 123, sessionId: 'abc' }) // This will use the stored context log.raw({ logLevel: LogLevel.info, messages: ['User action'] }) // Output: { "level": "info", "msg": "User action", "ctx": { "userId": 123, "sessionId": "abc" } } // This will override the stored context for this entry only log.raw({ logLevel: LogLevel.info, messages: ['Admin action'], context: { adminId: 456, action: 'override' } }) // Output: { "level": "info", "msg": "Admin action", "ctx": { "adminId": 456, "action": "override" } } // This will use the stored context again (userId: 123, sessionId: 'abc') log.raw({ logLevel: LogLevel.info, messages: ['Another user action'] }) // Output: { "level": "info", "msg": "Another user action", "ctx": { "userId": 123, "sessionId": "abc" } } // This will override with empty context, resulting in no context data log.raw({ logLevel: LogLevel.info, messages: ['System action'], context: {} // Empty context overrides stored context }) // Output: { "level": "info", "msg": "System action" } // Example with metadata and error log.raw({ logLevel: LogLevel.error, messages: ['Database operation failed'], metadata: { operation: 'insert', table: 'users' }, error: new Error('Connection timeout'), context: { requestId: 'req-789' } }) // Output: { "level": "error", "msg": "Database operation failed", "ctx": { "requestId": "req-789" }, "meta": { "operation": "insert", "table": "users" }, "err": { "type": "Error", "message": "Connection timeout", "stack": "Error: Connection timeout\n at ..." 
} }
```

---

---
url: 'https://loglayer.dev/logging-api/metadata.md'
description: Learn how to log structured metadata with your log messages in LogLayer
---

# Logging with Metadata

Metadata allows you to add structured data to your log messages. LogLayer provides several ways to include metadata in your logs.

::: info Message field name
The output examples use `msg` as the message field. The name of this field may vary depending on the logging library you are using. In the `console` logger, this field does not exist by default, and the message is printed directly. However, you can configure the console transport to use a message field - see the [Console Transport](/transports/console) documentation for more details.
:::

## Adding Metadata to Messages

The most common way to add metadata is using the `withMetadata` method:

```typescript
log.withMetadata({
  userId: '123',
  action: 'login',
  browser: 'Chrome'
}).info('User logged in')
```

By default, this produces a flattened log entry:

```json
{
  "msg": "User logged in",
  "userId": "123",
  "action": "login",
  "browser": "Chrome"
}
```

::: info Passing empty metadata
Passing an empty value (`null`, `undefined`, or an empty object) to `withMetadata` will not add any metadata or call related plugins.
:::

## Logging Metadata Only

Sometimes you want to log metadata without a message. Use `metadataOnly` for this:

```typescript
// Default log level is 'info'
log.metadataOnly({
  status: 'healthy',
  memory: '512MB',
  cpu: '45%'
})

// Or specify a different log level
log.metadataOnly({
  status: 'warning',
  memory: '1024MB',
  cpu: '90%'
}, LogLevel.warn)
```

::: info Passing empty metadata
Passing an empty value (`null`, `undefined`, or an empty object) to `metadataOnly` will not add any metadata or call related plugins.
:::

## Structuring Metadata

### Using a Dedicated Metadata Field

By default, metadata is flattened into the root of the log object.
You can configure LogLayer to place metadata in a dedicated field by setting the `metadataFieldName` option: ```typescript const log = new LogLayer({ metadataFieldName: 'metadata' }) log.withMetadata({ userId: '123', action: 'login' }).info('User logged in') ``` This produces: ```json { "msg": "User logged in", "metadata": { "userId": "123", "action": "login" } } ``` ## Combining Metadata with Other Data ### With Context Metadata can be combined with context data: ```typescript log.withContext({ requestId: 'abc' }) .withMetadata({ userId: '123' }) .info('Processing request') ``` If using field names: ```json { "msg": "Processing request", "context": { "requestId": "abc" }, "metadata": { "userId": "123" } } ``` ### With Errors Metadata can be combined with error logging: ```typescript log.withError(new Error('Database connection failed')) .withMetadata({ dbHost: 'localhost', retryCount: 3 }) .error('Failed to connect') ``` ## Controlling Metadata Output ### Muting Metadata You can temporarily disable metadata output: ```typescript // Via configuration const log = new LogLayer({ muteMetadata: true, transport: new ConsoleTransport({ logger: console }) }) // Or via methods log.muteMetadata() // Disable metadata log.unMuteMetadata() // Re-enable metadata ``` --- --- url: 'https://loglayer.dev/logging-api/typescript.md' description: Notes on using LogLayer with Typescript --- # Typescript Tips ## Use `ILogLayer` if you need to type your logger `ILogLayer` is the interface implemented by `LogLayer`. By using this interface, you will also be able to use the mock `MockLogLayer` class for unit testing. 
```typescript import type { ILogLayer } from 'loglayer' const logger: ILogLayer = new LogLayer() ``` ## Use `LogLevelType` if you need to type your log level when creating a logger ```typescript import type { LogLevelType } from 'loglayer' const logger = new LogLayer({ transport: new ConsoleTransport({ level: process.env.LOG_LEVEL as LogLevelType }) }) ``` ## Use `LogLayerTransport` if you need to type an array of transports ```typescript import type { LogLayerTransport } from 'loglayer' const transports: LogLayerTransport[] = [ new ConsoleTransport({ level: process.env.LOG_LEVEL as LogLevelType }), new FileTransport({ level: process.env.LOG_LEVEL as LogLevelType }) ] const logger = new LogLayer({ transport: transports, }) ``` ## Override types for custom IntelliSense You can extend LogLayer's base types to provide custom IntelliSense for your specific use case. This is particularly useful when you have consistent fields across your application. Create a `loglayer.d.ts` (or any `d.ts`) file in your project source: ```typescript // loglayer.d.ts declare module "loglayer" { /** * Defines the structure for context data that persists across multiple log entries * within the same context scope. This is set using log.withContext(). */ interface LogLayerContext { userId?: string; sessionId?: string; requestId?: string; siteName?: string; [key: string]: any; // Allow any other properties } /** * Defines the structure for metadata that can be attached to individual log entries. * This is set using log.withMetadata() / log.metadataOnly(). 
*/ interface LogLayerMetadata { operation?: string; duration?: number; ipAddress?: string; userAgent?: string; [key: string]: any; } } ``` With the type overrides in place, you'll get full IntelliSense support: ```typescript import { LogLayer } from 'loglayer'; import { ConsoleTransport } from '@loglayer/transport-console'; // Optional import if you need to define constants or types import type { LogLayerContext, LogLayerMetadata } from 'loglayer'; const log = new LogLayer({ transport: new ConsoleTransport() }); // Set persistent context - IntelliSense will suggest userId, sessionId, requestId, siteName log.withContext({ userId: "user123", sessionId: "sess456", requestId: "req789", siteName: "myapp.com" }); ``` --- --- url: 'https://loglayer.dev/transports/loglevel.md' description: Send logs to loglevel with the LogLayer logging library --- # loglevel Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-loglevel)](https://www.npmjs.com/package/@loglayer/transport-loglevel) [loglevel](https://github.com/pimterry/loglevel) is a minimal lightweight logging library for JavaScript. It provides a simple logging API that works in both Node.js and browser environments. 
[Transport Source](https://github.com/loglayer/loglayer/blob/master/packages/transports/loglevel) ## Installation ::: code-group ```sh [npm] npm i @loglayer/transport-loglevel loglevel ``` ```sh [pnpm] pnpm add @loglayer/transport-loglevel loglevel ``` ```sh [yarn] yarn add @loglayer/transport-loglevel loglevel ``` ::: ## Setup ```typescript import { LogLayer } from 'loglayer'; import { LogLevelTransport } from '@loglayer/transport-loglevel'; import log from 'loglevel'; const logger = log.getLogger('myapp'); logger.setLevel('trace'); // Enable all log levels const loglayer = new LogLayer({ transport: new LogLevelTransport({ logger, // Optional: control where object data appears in log messages appendObjectData: false // default: false - object data appears first }) }); ``` ## Configuration Options ### Required Parameters None - all parameters are optional. ### Optional Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `level` | `"trace" \| "debug" \| "info" \| "warn" \| "error" \| "fatal"` | `"trace"` | Minimum log level to process. Messages with a lower priority level will be ignored | | `enabled` | `boolean` | `true` | If false, the transport will not send any logs to the logger | | `consoleDebug` | `boolean` | `false` | If true, the transport will also log messages to the console for debugging | | `id` | `string` | - | A unique identifier for the transport | | `appendObjectData` | `boolean` | `false` | Controls where object data (metadata, context, errors) appears in the log messages. `false`: Object data appears as the first parameter. 
`true`: Object data appears as the last parameter | ### Examples #### appendObjectData: false (default) ```typescript loglayer.withMetadata({ user: 'john' }).info('User logged in'); // logger.info({ user: 'john' }, 'User logged in') ``` #### appendObjectData: true ```typescript loglayer.withMetadata({ user: 'john' }).info('User logged in'); // logger.info('User logged in', { user: 'john' }) ``` ## Log Level Mapping | LogLayer | loglevel | |----------|----------| | trace | trace | | debug | debug | | info | info | | warn | warn | | error | error | | fatal | error | ## Changelog View the changelog [here](./changelogs/loglevel-changelog.md). --- --- url: 'https://loglayer.dev/transports/logtape.md' description: Send logs to LogTape with the LogLayer logging library --- # LogTape Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-logtape)](https://www.npmjs.com/package/@loglayer/transport-logtape) [LogTape](https://logtape.org) is a modern, structured logging library for TypeScript and JavaScript with support for multiple sinks, filters, and adapters. 
[Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/logtape) ## Installation Install the required packages: ::: code-group ```sh [npm] npm i loglayer @loglayer/transport-logtape @logtape/logtape ``` ```sh [pnpm] pnpm add loglayer @loglayer/transport-logtape @logtape/logtape ``` ```sh [yarn] yarn add loglayer @loglayer/transport-logtape @logtape/logtape ``` ::: ## Setup ```typescript import { configure, getConsoleSink, getLogger } from '@logtape/logtape' import { LogLayer } from 'loglayer' import { LogTapeTransport } from "@loglayer/transport-logtape" // Configure LogTape await configure({ sinks: { console: getConsoleSink() }, loggers: [ { category: "my-app", lowestLevel: "debug", sinks: ["console"] } ] }) // Get a LogTape logger instance const logtapeLogger = getLogger(["my-app", "my-module"]) const log = new LogLayer({ transport: new LogTapeTransport({ logger: logtapeLogger }) }) ``` ## Configuration Options ### Required Parameters | Name | Type | Description | |------|------|-------------| | `logger` | `LogTapeLogger` | A configured LogTape logger instance | ### Optional Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `level` | `"trace" \| "debug" \| "info" \| "warn" \| "error" \| "fatal"` | `"trace"` | Minimum log level to process. Messages with a lower priority level will be ignored | | `enabled` | `boolean` | `true` | If false, the transport will not send any logs to the logger | | `consoleDebug` | `boolean` | `false` | If true, the transport will also log messages to the console for debugging | | `id` | `string` | - | A unique identifier for the transport | ## Changelog View the changelog [here](./changelogs/logtape-changelog.md). --- --- url: 'https://loglayer.dev/mixins.md' description: Learn how to create and use mixins with LogLayer --- # Mixins LogLayer's mixin system allows you to extend the `LogLayer` and `LogBuilder` prototypes with custom methods and functionality. 
Unlike plugins (which intercept and modify log processing) or transports (which send logs to destinations), mixins add new methods directly to the LogLayer API. Mixins are useful when you want to: * Add domain-specific methods to LogLayer (e.g., metrics, tracing) * Integrate third-party libraries directly into the logging API * Extend LogLayer with capabilities beyond logging ### Available Mixins | Name | Package | Description | |------|---------|-------------| | [Hot-Shots (StatsD)](/mixins/hot-shots) | [![npm](https://img.shields.io/npm/v/@loglayer/mixin-hot-shots)](https://www.npmjs.com/package/@loglayer/mixin-hot-shots) | Adds the [`hot-shots`](https://github.com/bdeitte/hot-shots) API to LogLayer | ## Using Mixins ### TypeScript Setup To use mixins with TypeScript, you must register the types by adding each mixin package to your `tsconfig.json` includes: ```json { "include": [ "./node_modules/@loglayer/mixin-hot-shots" ] } ``` ### Registering Mixins Mixins must be registered **before** creating LogLayer instances. 
You can register a single mixin or multiple mixins at once: ```typescript import { LogLayer, useLogLayerMixin, ConsoleTransport } from 'loglayer'; import { hotshotsMixin } from '@loglayer/mixin-hot-shots'; import { StatsD } from 'hot-shots'; // Create and configure your third-party library const statsd = new StatsD({ host: 'localhost', port: 8125 }); // Register a single mixin (must be before creating LogLayer instances) useLogLayerMixin(hotshotsMixin(statsd)); // Or register multiple mixins at once useLogLayerMixin([ hotshotsMixin(statsd), // otherMixin(), ]); // Now create LogLayer instances with the mixin functionality const log = new LogLayer({ transport: new ConsoleTransport({ logger: console }) }); // Use the mixin methods through the stats property log.stats.increment('request.count').send(); log.stats.timing('request.duration', 150).send(); ``` ## Creating Mixins To learn how to create your own mixins, see the [Creating Mixins](/mixins/creating-mixins) guide. --- --- url: 'https://loglayer.dev/transports/new-relic.md' description: Send logs to New Relic with the LogLayer logging library --- # New Relic Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-new-relic)](https://www.npmjs.com/package/@loglayer/transport-new-relic) The New Relic transport allows you to send logs directly to New Relic's [Log API](https://docs.newrelic.com/docs/logs/log-api/introduction-log-api/). It provides robust features including compression, retry logic, rate limiting support, and validation of New Relic's API constraints. 
[Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/new-relic) ## Installation ::: code-group ```bash [npm] npm install loglayer @loglayer/transport-new-relic serialize-error ``` ```bash [pnpm] pnpm add loglayer @loglayer/transport-new-relic serialize-error ``` ```bash [yarn] yarn add loglayer @loglayer/transport-new-relic serialize-error ``` ::: ## Basic Usage ```typescript import { LogLayer } from 'loglayer' import { NewRelicTransport } from "@loglayer/transport-new-relic" import { serializeError } from "serialize-error"; const log = new LogLayer({ errorSerializer: serializeError, transport: new NewRelicTransport({ apiKey: "YOUR_NEW_RELIC_API_KEY", endpoint: "https://log-api.newrelic.com/log/v1", // optional, this is the default useCompression: true, // optional, defaults to true maxRetries: 3, // optional, defaults to 3 retryDelay: 1000, // optional, base delay in ms, defaults to 1000 respectRateLimit: true, // optional, defaults to true onError: (err) => { console.error('Failed to send logs to New Relic:', err); }, onDebug: (entry) => { console.log('Log entry being sent:', entry); }, }) }) // Use the logger log.info("This is a test message"); log.withMetadata({ userId: "123" }).error("User not found"); ``` ## Configuration Options ### Required Parameters | Name | Type | Description | |------|------|-------------| | `apiKey` | `string` | Your New Relic API key | ### Optional Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `endpoint` | `string` | `"https://log-api.newrelic.com/log/v1"` | The New Relic Log API endpoint | | `useCompression` | `boolean` | `true` | Whether to use gzip compression | | `maxRetries` | `number` | `3` | Maximum number of retry attempts | | `retryDelay` | `number` | `1000` | Base delay between retries (ms) | | `respectRateLimit` | `boolean` | `true` | Whether to respect rate limiting | | `onError` | `(err: Error) => void` | - | Error handling callback | | 
`onDebug` | `(entry: Record<string, any>) => void` | - | Debug callback for inspecting log entries |
| `enabled` | `boolean` | `true` | Whether the transport is enabled |
| `level` | `"trace" \| "debug" \| "info" \| "warn" \| "error" \| "fatal"` | `"trace"` | Minimum log level to process. Logs below this level will be filtered out |

## Features

### Compression

The transport uses gzip compression by default to reduce bandwidth usage. You can disable this if needed:

```typescript
new NewRelicTransport({
  apiKey: "YOUR_API_KEY",
  useCompression: false
})
```

### Retry Logic

The transport includes a retry mechanism with exponential backoff and jitter:

```typescript
new NewRelicTransport({
  apiKey: "YOUR_API_KEY",
  maxRetries: 5,   // Increase max retries
  retryDelay: 2000 // Increase base delay to 2 seconds
})
```

The actual delay between retries is calculated using:

```
delay = baseDelay * (2 ^ attemptNumber) + random(0-200)ms
```

### Rate Limiting

The transport handles New Relic's rate limiting in two ways:

1. **Respect Rate Limits (Default)**

   ```typescript
   new NewRelicTransport({
     apiKey: "YOUR_API_KEY",
     respectRateLimit: true // This is the default
   })
   ```

   * Waits for the duration specified in the `Retry-After` header
   * Rate limit retries don't count against `maxRetries`
   * Uses 60 seconds as default wait time if no header is present

2.
**Ignore Rate Limits** ```typescript new NewRelicTransport({ apiKey: "YOUR_API_KEY", respectRateLimit: false, onError: (err) => { if (err.name === "RateLimitError") { // Handle rate limit error } } }) ``` * Fails immediately when rate limited * Calls `onError` with a `RateLimitError` ### Validation The transport automatically validates logs against New Relic's constraints: ```typescript // This will be validated: log.withMetadata({ veryLongKey: "x".repeat(300), // Will throw ValidationError (name too long) normalKey: "x".repeat(5000) // Will be truncated to 4094 characters }).info("Test message") ``` Validation includes: * Maximum payload size of 1MB (before and after compression) * Maximum of 255 attributes per log entry * Maximum attribute name length of 255 characters * Automatic truncation of attribute values longer than 4094 characters ### Error Handling The transport provides detailed error information through the `onError` callback: ```typescript new NewRelicTransport({ apiKey: "YOUR_API_KEY", onError: (err) => { switch (err.name) { case "ValidationError": // Handle validation errors (payload size, attribute limits) break; case "RateLimitError": // Handle rate limiting errors const rateLimitErr = err as RateLimitError; console.log(`Rate limited. Retry after: ${rateLimitErr.retryAfter}s`); break; default: // Handle other errors (network, API errors) console.error("Failed to send logs:", err.message); } } }) ``` ### Debug Callback The transport includes a debug callback that allows you to inspect log entries before they are sent to New Relic: ```typescript new NewRelicTransport({ apiKey: "YOUR_API_KEY", onDebug: (entry) => { // Log the entry being sent console.log('Sending log entry:', JSON.stringify(entry, null, 2)); } }) ``` ## Best Practices 1. **Error Handling**: Always provide an `onError` callback to handle failures gracefully. 2. **Compression**: Keep compression enabled unless you have a specific reason to disable it. 3. 
**Rate Limiting**: Use the default rate limit handling unless you have a custom rate limiting strategy. 4. **Retry Configuration**: Adjust `maxRetries` and `retryDelay` based on your application's needs: * Increase for critical logs that must be delivered * Decrease for high-volume, less critical logs 5. **Validation**: Be aware of the attribute limits when adding metadata to avoid validation errors. ## TypeScript Support The transport is written in TypeScript and provides full type definitions: ```typescript import type { NewRelicTransportConfig } from "@loglayer/transport-new-relic" const config: NewRelicTransportConfig = { apiKey: "YOUR_API_KEY", // TypeScript will enforce correct options } ``` ## Changelog View the changelog [here](./changelogs/new-relic-changelog.md). --- --- url: 'https://loglayer.dev/log-level-managers/one-way.md' description: Synchronize log levels between parent and child loggers in LogLayer. --- # One Way Log Level Manager [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Flog-level-manager-one-way)](https://www.npmjs.com/package/@loglayer/log-level-manager-one-way) [Log Level Manager Source](https://github.com/loglayer/loglayer/tree/master/packages/log-level-managers/one-way) A log level manager that keeps log levels synchronized between parent and child loggers. Parent changes affect children, but child changes do not affect parents. Changes only apply to a parent and its children (not separate instances). 
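The propagation rule can be sketched in a few lines. This is an illustrative model only, not the actual implementation: the numeric levels and the `enabled` check are assumptions made for the sketch.

```typescript
// Illustrative sketch of one-way propagation (not the real implementation).
// Levels are modeled numerically: 0=trace, 1=debug, 2=info, 3=warn, 4=error, 5=fatal.
class OneWaySketch {
  private level = 0;
  private children: OneWaySketch[] = [];

  // A child starts with a copy of the parent's level at creation...
  child(): OneWaySketch {
    const c = new OneWaySketch();
    c.level = this.level;
    this.children.push(c);
    return c;
  }

  // ...and setLevel pushes changes down to descendants, never up.
  setLevel(level: number): void {
    this.level = level;
    for (const c of this.children) c.setLevel(level);
  }

  enabled(msgLevel: number): boolean {
    return msgLevel >= this.level;
  }
}

const root = new OneWaySketch();
const kid = root.child();
root.setLevel(3);             // warn
console.log(kid.enabled(2));  // info on the child → false (inherited from parent)
kid.setLevel(1);              // debug
console.log(root.enabled(1)); // → false (parent unaffected by child change)
console.log(kid.enabled(1));  // → true
```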
## Installation ::: code-group ```bash [npm] npm install @loglayer/log-level-manager-one-way ``` ```bash [yarn] yarn add @loglayer/log-level-manager-one-way ``` ```bash [pnpm] pnpm add @loglayer/log-level-manager-one-way ``` ::: ## Usage ```typescript import { LogLayer, ConsoleTransport, LogLevel } from "loglayer"; import { OneWayLogLevelManager } from '@loglayer/log-level-manager-one-way'; const parentLog = new LogLayer({ transport: new ConsoleTransport({ logger: console }), }).withLogLevelManager(new OneWayLogLevelManager()); const childLog = parentLog.child(); // Parent changes affect children parentLog.setLevel(LogLevel.warn); parentLog.info('This will not be logged'); // Not logged childLog.info('This will not be logged'); // Not logged (affected by parent) // Child changes do not affect parent childLog.setLevel(LogLevel.debug); parentLog.debug('This will not be logged'); // Not logged (parent still at warn) childLog.debug('This will be logged'); // Logged (child changed to debug) ``` ### Nested Children ```typescript import { LogLayer, ConsoleTransport, LogLevel } from "loglayer"; import { OneWayLogLevelManager } from '@loglayer/log-level-manager-one-way'; const parentLog = new LogLayer({ transport: new ConsoleTransport({ logger: console }), }).withLogLevelManager(new OneWayLogLevelManager()); const childLog = parentLog.child(); const grandchildLog = childLog.child(); // Parent changes affect all descendants parentLog.setLevel(LogLevel.error); parentLog.warn('This will not be logged'); // Not logged childLog.warn('This will not be logged'); // Not logged (affected by parent) grandchildLog.warn('This will not be logged'); // Not logged (affected by parent) // Child changes only affect that child and its descendants childLog.setLevel(LogLevel.debug); parentLog.warn('This will not be logged'); // Not logged (parent still at error) childLog.debug('This will be logged'); // Logged (child changed to debug) grandchildLog.debug('This will be logged'); // Logged 
(affected by child) ``` ## Behavior * **One-way Propagation**: Parent changes propagate to all children, but child changes do not affect parents * **Hierarchical**: Changes apply only within a parent-child hierarchy, not across separate logger instances * **Independent Child Containers**: Each child has its own log level container, initialized from the parent but independent after creation ## Use Cases * **Module-level Log Level Control**: When you want to control log levels for a specific module and all its sub-modules * **Hierarchical Logging**: When you need different log levels for different parts of your application hierarchy --- --- url: 'https://loglayer.dev/plugins/opentelemetry.md' description: Add OpenTelemetry trace context to logs using LogLayer --- # OpenTelemetry Plugin [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Fplugin-opentelemetry)](https://www.npmjs.com/package/@loglayer/plugin-opentelemetry) [Plugin Source](https://github.com/loglayer/loglayer/tree/master/packages/plugins/opentelemetry) The OpenTelemetry plugin for [LogLayer](https://loglayer.dev) uses [`@opentelemetry/api`](https://www.npmjs.com/package/@opentelemetry/api) to store the following in the log context: * `trace_id` * `span_id` * `trace_flags` ::: info If you are using OpenTelemetry with log processors, use the [OpenTelemetry Transport](/transports/opentelemetry). If you don't know what that is, then you'll want to use this plugin instead of the transport. ::: ## Installation ::: code-group ```bash [npm] npm install loglayer @loglayer/plugin-opentelemetry ``` ```bash [yarn] yarn add loglayer @loglayer/plugin-opentelemetry ``` ```bash [pnpm] pnpm add loglayer @loglayer/plugin-opentelemetry ``` ::: ## Usage Follow the [OpenTelemetry Getting Started Guide](https://opentelemetry.io/docs/languages/js/getting-started/nodejs/) to set up OpenTelemetry in your application. 
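Conceptually, the plugin copies the active span's context into the log context; in the real plugin those values come from `@opentelemetry/api` (the currently active span). Below is a dependency-free sketch of the mapping, where `SpanContextLike` and `traceFields` are hypothetical names introduced only for illustration:

```typescript
// Dependency-free sketch of the field mapping this plugin performs.
// In the real plugin the span context comes from @opentelemetry/api;
// the shape below mirrors the fields of an OpenTelemetry SpanContext.
interface SpanContextLike {
  traceId: string;
  spanId: string;
  traceFlags: number;
}

function traceFields(span?: SpanContextLike): Record<string, string> {
  if (!span) return {}; // no active span: nothing is added to the log
  return {
    trace_id: span.traceId,
    span_id: span.spanId,
    // trace flags render as a two-digit hex string, e.g. 1 -> "01",
    // matching the sample output shown later on this page
    trace_flags: span.traceFlags.toString(16).padStart(2, "0"),
  };
}

// Produces { trace_id, span_id, trace_flags } ready to merge into the log context
const fields = traceFields({ traceId: "8de71f...", spanId: "349623...", traceFlags: 1 });
```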
```typescript import { LogLayer, ConsoleTransport } from 'loglayer' import { openTelemetryPlugin } from '@loglayer/plugin-opentelemetry' const logger = new LogLayer({ transport: [ new ConsoleTransport({ logger: console }), ], plugins: [ openTelemetryPlugin() ] }); ``` ## Configuration The plugin accepts the following configuration options: ### Required Parameters None - all parameters are optional. ### Optional Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `traceFieldName` | `string` | - | If specified, all trace fields will be nested under this key | | `traceIdFieldName` | `string` | `'trace_id'` | Field name for the trace ID | | `spanIdFieldName` | `string` | `'span_id'` | Field name for the span ID | | `traceFlagsFieldName` | `string` | `'trace_flags'` | Field name for the trace flags | | `disabled` | `boolean` | `false` | Whether the plugin is disabled | ### Example with Custom Configuration ```typescript const logger = new LogLayer({ transport: [ new ConsoleTransport({ logger: console }), ], plugins: [ openTelemetryPlugin({ // Nest all trace fields under 'trace' traceFieldName: 'trace', // Custom field names traceIdFieldName: 'traceId', spanIdFieldName: 'spanId', traceFlagsFieldName: 'flags' }) ] }); ``` This would output logs with the following structure: ```json { "trace": { "traceId": "8de71fcab951aad172f1148c74d0877e", "spanId": "349623465c6dfc1b", "flags": "01" } } ``` ## Example with Express This example has been tested to work with the plugin. * It sets up `express`-based instrumentation using OpenTelemetry * Going to the root endpoint will log a message with the request context ### Installation ::: info This setup assumes you have TypeScript configured and have `tsx` installed as a dev dependency. 
::: ::: code-group ```bash [npm] npm install express loglayer @loglayer/plugin-opentelemetry serialize-error \ @opentelemetry/instrumentation-express @opentelemetry/instrumentation-http \ @opentelemetry/sdk-node ``` ```bash [yarn] yarn add express loglayer @loglayer/plugin-opentelemetry serialize-error \ @opentelemetry/instrumentation-express @opentelemetry/instrumentation-http \ @opentelemetry/sdk-node ``` ```bash [pnpm] pnpm add express loglayer @loglayer/plugin-opentelemetry serialize-error \ @opentelemetry/instrumentation-express @opentelemetry/instrumentation-http \ @opentelemetry/sdk-node ``` ::: ### Files #### instrumentation.ts ```typescript // instrumentation.ts import { ExpressInstrumentation } from "@opentelemetry/instrumentation-express"; import { HttpInstrumentation } from "@opentelemetry/instrumentation-http"; import { NodeSDK } from "@opentelemetry/sdk-node"; const sdk = new NodeSDK({ instrumentations: [ // Express instrumentation expects HTTP layer to be instrumented new HttpInstrumentation(), new ExpressInstrumentation(), ], }); sdk.start(); ``` #### app.ts ```typescript // app.ts import express from "express"; import { type ILogLayer, LogLayer } from "loglayer"; import { serializeError } from "serialize-error"; import { openTelemetryPlugin } from "@loglayer/plugin-opentelemetry"; import { ConsoleTransport } from "loglayer"; const app = express(); // Add types for the req.log property declare global { namespace Express { interface Request { log: ILogLayer; } } } // Define logging middleware app.use((req, res, next) => { // Create a new LogLayer instance for each request req.log = new LogLayer({ transport: new ConsoleTransport({ logger: console, }), errorSerializer: serializeError, plugins: [openTelemetryPlugin()], }).withContext({ reqId: crypto.randomUUID(), // Add unique request ID method: req.method, path: req.path, }); next(); }); function sayHelloWorld(req: express.Request) { req.log.info("Printing hello world"); return "Hello world!"; } // Use 
the logger in your routes app.get("/", (req, res) => { req.log.info("Processing request to root endpoint"); // Add additional context for specific logs req.log.withContext({ query: req.query }).info("Request includes query parameters"); res.send(sayHelloWorld(req)); }); // Error handling middleware app.use((err: Error, req: express.Request, res: express.Response, next: express.NextFunction) => { req.log.withError(err).error("An error occurred while processing the request"); res.status(500).send("Internal Server Error"); }); app.listen(3000, () => { console.log("Server started on http://localhost:3000"); }); ``` ### Running the Example ```bash npx tsx --import ./instrumentation.ts ./app.ts ``` Then visit `http://localhost:3000` in your browser. ### Sample Output Output might look like this: ```text { reqId: 'c34ab246-fc51-4b69-9ba6-5e0dfa150e5a', method: 'GET', path: '/', query: {}, trace_id: '8de71fcab951aad172f1148c74d0877e', span_id: '349623465c6dfc1b', trace_flags: '01' } Printing hello world ``` ## Changelog View the changelog [here](./changelogs/opentelemetry-changelog.md). --- --- url: 'https://loglayer.dev/transports/opentelemetry.md' description: Send logs to OpenTelemetry with the LogLayer logging library --- # OpenTelemetry Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-opentelemetry)](https://www.npmjs.com/package/@loglayer/transport-opentelemetry) [Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/opentelemetry) The OpenTelemetry transport sends logs using the [OpenTelemetry Logs SDK](https://www.npmjs.com/package/@opentelemetry/sdk-logs). This allows you to integrate logs with OpenTelemetry's observability ecosystem. Compatible with OpenTelemetry JS API and SDK `1.0+`. ::: info In most cases, you should use the [OpenTelemetry Plugin](/plugins/opentelemetry) instead as it stamps logs with trace context. 
Use this transport if you are using OpenTelemetry log processors, where the log processors do the actual shipping of logs. ::: ### Acknowledgements A lot of the code is based on the [@opentelemetry/winston-transport](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/packages/winston-transport) code, which is licensed under Apache 2.0. ## Installation ::: code-group ```bash [npm] npm install loglayer @loglayer/transport-opentelemetry serialize-error ``` ```bash [yarn] yarn add loglayer @loglayer/transport-opentelemetry serialize-error ``` ```bash [pnpm] pnpm add loglayer @loglayer/transport-opentelemetry serialize-error ``` ::: ## Usage Follow the [OpenTelemetry Getting Started Guide](https://opentelemetry.io/docs/languages/js/getting-started/nodejs/) to set up OpenTelemetry in your application. ```typescript import { LogLayer } from 'loglayer' import { OpenTelemetryTransport } from '@loglayer/transport-opentelemetry' import { serializeError } from 'serialize-error' const logger = new LogLayer({ errorSerializer: serializeError, // This will send logs to the OpenTelemetry SDK // Where it sends to depends on the configured logRecordProcessors in the SDK transport: [ new OpenTelemetryTransport({ // Optional: provide a custom error handler onError: (error) => console.error('OpenTelemetry logging error:', error), // Optional: disable the transport enabled: process.env.NODE_ENV !== 'test', // Optional: enable console debugging consoleDebug: process.env.DEBUG === 'true' }), ], }); ``` ## Configuration Options ### `level` Minimum log level to process. Logs below this level will be filtered out. 
* Type: `'trace' | 'debug' | 'info' | 'warn' | 'error' | 'fatal'` * Default: `'trace'` (allows all logs) Example: ```typescript // Only process info logs and above (info, warn, error, fatal) new OpenTelemetryTransport({ level: 'info' }) // Process all logs (default behavior) new OpenTelemetryTransport() ``` ### `onError` Callback to handle errors that occur when logging. * Type: `(error: any) => void` * Default: `undefined` ### `enabled` Enable or disable the transport. * Type: `boolean` * Default: `true` ### `consoleDebug` Enable console debugging for the transport. * Type: `boolean` * Default: `false` ## Example with Express This example has been tested to work with the `OpenTelemetryTransport`. * It uses the OpenTelemetry SDK to send logs to the console (via `logRecordProcessors`) * It sets up `express`-based instrumentation using OpenTelemetry * Going to the root endpoint will log a message with the request context ### Installation ::: info This setup assumes you have TypeScript configured and have `tsx` installed as a dev dependency. 
::: ::: code-group ```bash [npm] npm install express loglayer @loglayer/transport-opentelemetry serialize-error \ @opentelemetry/instrumentation-express @opentelemetry/instrumentation-http \ @opentelemetry/sdk-logs @opentelemetry/sdk-node \ @opentelemetry/sdk-trace-node @opentelemetry/semantic-conventions ``` ```bash [yarn] yarn add express loglayer @loglayer/transport-opentelemetry serialize-error \ @opentelemetry/instrumentation-express @opentelemetry/instrumentation-http \ @opentelemetry/sdk-logs @opentelemetry/sdk-node \ @opentelemetry/sdk-trace-node @opentelemetry/semantic-conventions ``` ```bash [pnpm] pnpm add express loglayer @loglayer/transport-opentelemetry serialize-error \ @opentelemetry/instrumentation-express @opentelemetry/instrumentation-http \ @opentelemetry/sdk-logs @opentelemetry/sdk-node \ @opentelemetry/sdk-trace-node @opentelemetry/semantic-conventions ``` ::: ### Files #### instrumentation.ts ```typescript // instrumentation.ts import { ExpressInstrumentation } from "@opentelemetry/instrumentation-express"; import { HttpInstrumentation } from "@opentelemetry/instrumentation-http"; import { ConsoleLogRecordExporter, SimpleLogRecordProcessor } from "@opentelemetry/sdk-logs"; import { NodeSDK } from "@opentelemetry/sdk-node"; import { ConsoleSpanExporter } from "@opentelemetry/sdk-trace-node"; import { ATTR_SERVICE_NAME, ATTR_SERVICE_VERSION } from "@opentelemetry/semantic-conventions"; const sdk = new NodeSDK({ traceExporter: new ConsoleSpanExporter(), logRecordProcessors: [new SimpleLogRecordProcessor(new ConsoleLogRecordExporter())], instrumentations: [ // Express instrumentation expects HTTP layer to be instrumented new HttpInstrumentation(), new ExpressInstrumentation(), ], }); sdk.start(); ``` #### app.ts ```typescript // app.ts import express from "express"; import { type ILogLayer, LogLayer } from "loglayer"; import { serializeError } from "serialize-error"; import { OpenTelemetryTransport } from "@loglayer/transport-opentelemetry"; 
const app = express(); // Add types for the req.log property declare global { namespace Express { interface Request { log: ILogLayer; } } } // Define logging middleware app.use((req, res, next) => { // Create a new LogLayer instance for each request req.log = new LogLayer({ transport: [new OpenTelemetryTransport({ // Optional: provide a custom error handler onError: (error) => console.error('OpenTelemetry logging error:', error), // Optional: provide a custom ID id: 'otel-transport', // Optional: disable the transport enabled: process.env.NODE_ENV !== 'test', // Optional: enable console debugging consoleDebug: process.env.DEBUG === 'true' })], errorSerializer: serializeError, }).withContext({ reqId: crypto.randomUUID(), // Add unique request ID method: req.method, path: req.path, }); next(); }); function sayHelloWorld(req: express.Request) { req.log.info("Printing hello world"); return "Hello world!"; } // Use the logger in your routes app.get("/", (req, res) => { req.log.info("Processing request to root endpoint"); // Add additional context for specific logs req.log.withContext({ query: req.query }).info("Request includes query parameters"); res.send(sayHelloWorld(req)); }); // Error handling middleware app.use((err: Error, req: express.Request, res: express.Response, next: express.NextFunction) => { req.log.withError(err).error("An error occurred while processing the request"); res.status(500).send("Internal Server Error"); }); app.listen(3000, () => { console.log("Server started on http://localhost:3000"); }); ``` ### Running the Example ```bash npx tsx --import ./instrumentation.ts ./app.ts ``` Then visit `http://localhost:3000` in your browser. 
### Sample Output Output might look like this: ```json { "resource": { "attributes": { "service.name": "yourServiceName", "telemetry.sdk.language": "nodejs", "telemetry.sdk.name": "opentelemetry", "telemetry.sdk.version": "1.30.0", "service.version": "1.0" } }, "instrumentationScope": { "name": "loglayer", "version": "5.1.1", "schemaUrl": "undefined" }, "timestamp": 1736730221608000, "traceId": "c738a5f750b89f988d679235405e1b3b", "spanId": "676f11075b9785d9", "traceFlags": 1, "severityText": "info", "severityNumber": 9, "body": "Printing hello world", "attributes": { "reqId": "3164c21c-d195-49be-b757-3e056881f3d6", "method": "GET", "path": "/" } } ``` --- --- url: 'https://loglayer.dev/transports/pino.md' description: Send logs to Pino with the LogLayer logging library --- # Pino Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-pino)](https://www.npmjs.com/package/@loglayer/transport-pino) [Pino](https://github.com/pinojs/pino) is a very low overhead Node.js logger, focused on performance. [Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/pino) ## Installation Install the required packages: ::: code-group ```sh [npm] npm i loglayer @loglayer/transport-pino pino ``` ```sh [pnpm] pnpm add loglayer @loglayer/transport-pino pino ``` ```sh [yarn] yarn add loglayer @loglayer/transport-pino pino ``` ::: ## Setup ```typescript import { pino } from 'pino' import { LogLayer } from 'loglayer' import { PinoTransport } from "@loglayer/transport-pino" const p = pino({ level: 'trace' // Enable all log levels }) const log = new LogLayer({ transport: new PinoTransport({ logger: p }) }) ``` ## Configuration Options ### Required Parameters None - all parameters are optional. ### Optional Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `level` | `"trace" \| "debug" \| "info" \| "warn" \| "error" \| "fatal"` | `"trace"` | Minimum log level to process. 
Messages with a lower priority level will be ignored | | `enabled` | `boolean` | `true` | If false, the transport will not send any logs to the logger | | `consoleDebug` | `boolean` | `false` | If true, the transport will also log messages to the console for debugging | | `id` | `string` | - | A unique identifier for the transport | ## Log Level Mapping | LogLayer | Pino | |----------|---------| | trace | trace | | debug | debug | | info | info | | warn | warn | | error | error | | fatal | fatal | ## Changelog View the changelog [here](./changelogs/pino-changelog.md). --- --- url: 'https://loglayer.dev/plugins.md' description: Learn how to create and use plugins with LogLayer --- # Plugins LogLayer's plugin system allows you to extend and modify logging behavior at various points in the log lifecycle. Plugins can modify data and messages before they're sent to the logging library, control whether logs should be sent, and intercept metadata calls. ### Available Plugins | Name | Package | Changelog | Description | |------|---------|-----------|-------------------------------------------------------------------------------------------------------| | [Datadog APM Trace Injector](/plugins/datadog-apm-trace-injector) | [![npm](https://img.shields.io/npm/v/@loglayer/plugin-datadog-apm-trace-injector)](https://www.npmjs.com/package/@loglayer/plugin-datadog-apm-trace-injector) | [Changelog](/plugins/changelogs/datadog-apm-trace-injector-changelog.md) | Automatically inject Datadog APM trace context into logs for correlation | | [Filter](/plugins/filter) | [![npm](https://img.shields.io/npm/v/@loglayer/plugin-filter)](https://www.npmjs.com/package/@loglayer/plugin-filter) | [Changelog](/plugins/changelogs/filter-changelog.md) | Filter logs using string patterns, regular expressions, or [JSON Queries](https://jsonquerylang.org/) | | [OpenTelemetry](/plugins/opentelemetry) | 
[![npm](https://img.shields.io/npm/v/@loglayer/plugin-opentelemetry)](https://www.npmjs.com/package/@loglayer/plugin-opentelemetry) | [Changelog](/plugins/changelogs/opentelemetry-changelog.md) | Add OpenTelemetry trace context to logs | | [Redaction](/plugins/redaction) | [![npm](https://img.shields.io/npm/v/@loglayer/plugin-redaction)](https://www.npmjs.com/package/@loglayer/plugin-redaction) | [Changelog](/plugins/changelogs/redaction-changelog.md) | Redact sensitive information from logs | | [Sprintf](/plugins/sprintf) | [![npm](https://img.shields.io/npm/v/@loglayer/plugin-sprintf)](https://www.npmjs.com/package/@loglayer/plugin-sprintf) | [Changelog](/plugins/changelogs/sprintf-changelog.md) | Printf-style string formatting support | ## Plugin Management ### Adding Plugins You can add plugins when creating the LogLayer instance: ```typescript const log = new LogLayer({ transport: new ConsoleTransport({ logger: console }), plugins: [ timestampPlugin(), { // id is optional id: 'sensitive-data-filter', onBeforeDataOut(params) { // a simple plugin that does something return params.data } } ] }) ``` Or add them later: ```typescript log.addPlugins([timestampPlugin()]) log.addPlugins([{ onBeforeDataOut(params) { // a simple plugin that does something return params.data } }]) ``` ### Enabling/Disabling Plugins Plugins can be enabled or disabled at runtime using their ID (if defined): ```typescript // Disable a plugin log.disablePlugin('sensitive-data-filter') // Enable a plugin log.enablePlugin('sensitive-data-filter') ``` ### Removing Plugins Remove a plugin using its ID (if defined): ```typescript log.removePlugin('sensitive-data-filter') ``` ### Replacing All Plugins Replace all existing plugins with new ones: ```typescript log.withFreshPlugins([ timestampPlugin(), { onBeforeDataOut(params) { // do something return params.data } } ]) ``` When used with child loggers, this only affects the current logger instance and does not modify the parent's plugins. 
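The `timestampPlugin()` used in the examples above is illustrative and not a published package; a plugin of that shape might look like the following sketch. The hook signature here mirrors the inline `onBeforeDataOut(params)` examples on this page and should be treated as an assumption:

```typescript
// Hypothetical plugin matching the shape used in the examples above:
// a plain object (here built by a factory) with an optional id and hooks.
function timestampPlugin() {
  return {
    id: "timestamp",
    // onBeforeDataOut runs before data is handed to the transport;
    // whatever it returns becomes the log entry's data.
    onBeforeDataOut(params: { data?: Record<string, unknown> }) {
      return { ...params.data, timestamp: new Date().toISOString() };
    },
  };
}

const plugin = timestampPlugin();
// The returned data now carries the original fields plus an ISO-8601 timestamp
const out = plugin.onBeforeDataOut({ data: { userId: "1234" } });
```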
::: warning Potential Performance Impact Replacing plugins at runtime may have a performance impact if you are frequently creating new plugins. It is recommended to re-use the same plugin instance(s) where possible. ::: --- --- url: 'https://loglayer.dev/transports/pretty-terminal.md' description: Interact with pretty printed logs in the terminal --- # Pretty Terminal Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-pretty-terminal)](https://www.npmjs.com/package/@loglayer/transport-pretty-terminal) [Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/pretty-terminal) The Pretty Terminal Transport provides interactivity and pretty printing for your logs in the terminal. It offers interactive browsing, text search, detailed viewing for large logs, and themes. ::: warning Using Next.js or LogLayer in a browser? This transport has dependencies that are not supported in Next.js or a browser. Use the [Simple Pretty Terminal](/transports/simple-pretty-terminal) instead. ::: ::: warning Running multiple applications concurrently? This transport has interactive features that are designed for a single app. If you are running multiple apps concurrently in the same terminal, then it is recommended you use the [Simple Pretty Terminal](/transports/simple-pretty-terminal) instead. ::: ## Features * 🔍 **Interactive Selection Mode** - Browse and inspect logs in a full-screen interactive view * 📝 **Detailed Log Inspection** - Examine individual log entries with formatted data and context * 🔎 **Search/Filter Functionality** - Find specific logs with powerful filtering capabilities * 💅 **JSON Pretty Printing** - Beautifully formatted structured data with syntax highlighting * 🎭 **Configurable Themes** - Choose from pre-built themes or customize your own colors ## Installation ::: warning Compatibility Note Pretty Terminal has only been tested on macOS with the native Terminal app and [Warp](https://www.warp.dev/). 
It may not work as expected in other terminal emulators or operating systems. ::: ::: code-group ```bash [npm] npm install loglayer @loglayer/transport-pretty-terminal serialize-error ``` ```bash [pnpm] pnpm add loglayer @loglayer/transport-pretty-terminal serialize-error ``` ```bash [yarn] yarn add loglayer @loglayer/transport-pretty-terminal serialize-error ``` ::: ## Basic Usage ::: warning Development Only Pretty Terminal is designed to work in a terminal only for local development. It should not be used for production environments. It is recommended that you disable other transports when using Pretty Terminal to avoid duplicate log output. ::: ```typescript import { LogLayer, ConsoleTransport } from 'loglayer'; import { getPrettyTerminal } from '@loglayer/transport-pretty-terminal'; import { serializeError } from "serialize-error"; // Create LogLayer instance with the transport const log = new LogLayer({ errorSerializer: serializeError, transport: [ new ConsoleTransport({ // Example of how to enable a transport for non-development environments enabled: process.env.NODE_ENV !== 'development', }), getPrettyTerminal({ // Only enable Pretty Terminal in development enabled: process.env.NODE_ENV === 'development', }) ], }); // Start logging! log.withMetadata({ foo: 'bar' }).info('Hello from Pretty Terminal!'); ``` ::: warning Single-Instance Only Because Pretty Terminal is an interactive transport, it may not work well if you run multiple applications in the same terminal window that share the same output stream. If you need to run multiple applications that use Pretty Terminal in the same terminal window, you can: 1. Use the `disableInteractiveMode` option to disable keyboard input and navigation features 2. Keep interactive mode enabled in only one application and disable it in others 3. Use the [Simple Pretty Terminal](/transports/simple-pretty-terminal) instead. The transport is designed to work as a single interactive instance. 
`getPrettyTerminal()` can be safely used multiple times in the same application as it uses the same transport reference. ::: ::: warning Performance Note Logs are stored using an in-memory SQLite database by default. For long-running applications or large log volumes, consider using a persistent storage file using the `logFile` option to avoid out of memory issues. ::: ## Keyboard Controls The Pretty Terminal Transport provides an interactive interface with three main modes: ### Simple View (Default) ![Simple View](/images/pretty-terminal/simple-view.webp) The default view shows real-time log output with the following controls: * `P`: Toggle pause/resume of log output * `C`: Cycle through view modes (full → truncated → condensed) * `↑/↓`: Enter selection mode When paused, new logs are buffered and a counter shows how many logs are waiting. Resuming will display all buffered logs. The view has three modes: * **Full View** (default): Shows all information with complete data structures (no truncation) * **Truncated View**: Shows complete log information including timestamp, ID, level, message, with data structures truncated based on `maxInlineDepth` and `maxInlineLength` settings * **Condensed View**: Shows only the timestamp, log level and message for a cleaner output (no data shown) When entering selection mode while paused: * Only logs that were visible before pause are shown initially * Buffered logs from pause are tracked as new logs * The notification shows how many new logs are available * Pressing ↓ at the bottom will reveal new logs ### Selection Mode ![Selection Mode](/images/pretty-terminal/selection-mode.webp) An interactive mode for browsing and filtering logs: * `↑/↓`: Navigate through logs * `ENTER`: View detailed log information (preserves current filter) * `TAB`: Return to simple view * Type to filter logs (searches through all log content) * `BACKSPACE`: Edit/clear filter text When filtering is active: * Only matching logs are displayed * The filter 
persists when entering detail view * Navigation (↑/↓) only moves through filtered results * New logs that match the filter are automatically included Each log entry in selection mode shows: * Timestamp and log ID * Log level with color coding * Complete message * Full structured data inline (like simple view's full mode) * Selected entry is highlighted with `►` ### Detail View ![Detail View](/images/pretty-terminal/detail-view.webp) A full-screen view showing comprehensive log information: * `↑/↓`: Scroll through log content line by line * `Q/W`: Page up/down through content * `←/→`: Navigate to previous/next log entry (respects active filter) * `A/S`: Jump to first/last log entry * `C`: Toggle array collapse in JSON data * `J`: Toggle raw JSON view (for easy copying) * `TAB`: Return to selection view (or return to detail view from JSON view) Features in Detail View: * Shows full timestamp and log level * Displays complete structured data with syntax highlighting * Shows context (previous and next log entries) * Shows active filter in header when filtering is enabled * Auto-updates when viewing latest log (respects current filter) * Pretty-prints JSON data with color coding * Collapsible arrays for better readability * Raw JSON view for easy copying ## Configuration The Pretty Terminal Transport can be customized with various options: ```typescript import { getPrettyTerminal, moonlight } from '@loglayer/transport-pretty-terminal'; const transport = getPrettyTerminal({ // Maximum depth for inline data display in truncated mode maxInlineDepth: 4, // Maximum length for inline data in truncated mode maxInlineLength: 120, // Custom theme configuration (default is moonlight) theme: moonlight, // Optional path to SQLite file for persistent storage logFile: 'path/to/logs.sqlite', // Enable/disable the transport (defaults to true) enabled: process.env.NODE_ENV === 'development', // Disable interactive mode for multi-app terminal output (defaults to false) 
disableInteractiveMode: false, }); ``` ### Configuration Options ### Required Parameters None - all parameters are optional. ### Optional Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `maxInlineDepth` | `number` | `4` | Maximum depth for displaying nested data inline. Only applies in truncated view mode. Selection mode and detail view always show full depth | | `maxInlineLength` | `number` | `120` | Maximum length for inline data before truncating. Only applies in truncated view mode. Selection mode and detail view always show full content | | `theme` | `PrettyTerminalTheme` | `moonlight` | Theme configuration for colors and styling | | `logFile` | `string` | `:memory:` | Path to SQLite file for persistent storage. Relative paths are resolved from the current working directory. If not provided, uses in-memory database | | `enabled` | `boolean` | `true` | Whether the transport is enabled. If false, all operations will no-op | | `disableInteractiveMode` | `boolean` | `false` | Whether to disable interactive mode (keyboard input and navigation). Useful when multiple applications need to print to the same terminal | ::: warning Security Note If using the `logFile` option, be aware that: 1. All logs will be stored in the specified SQLite database file. 2. The file will be purged of any existing data when the transport initializes 3. Relative paths (e.g., "logs/app.db") are resolved from the current working directory 4. It is recommended to add the `logFile` path to your `.gitignore` file to avoid committing sensitive log data 5. Do not use the same logfile path in another application (as in two separate applications running the transport against the same file) to avoid data corruption. If you do have sensitive data that shouldn't be logged in general, use the [Redaction Plugin](/plugins/redaction) to filter out sensitive information before logging. 
::: ## Themes The transport comes with several built-in themes to match your terminal style: ### Moonlight Theme (Default) A dark theme with cool blue tones, perfect for night-time coding sessions and modern IDEs. ![Moonlight Theme](/images/pretty-terminal/moonlight.webp) ```typescript import { getPrettyTerminal, moonlight } from '@loglayer/transport-pretty-terminal'; const transport = getPrettyTerminal({ theme: moonlight, }); ``` ### Sunlight Theme A light theme with warm tones, ideal for daytime use, high-glare environments, and printed documentation. ![Sunlight Theme](/images/pretty-terminal/sunlight.webp) ```typescript import { getPrettyTerminal, sunlight } from '@loglayer/transport-pretty-terminal'; const transport = getPrettyTerminal({ theme: sunlight, }); ``` ### Neon Theme A vibrant, cyberpunk-inspired theme with electric colors and high contrast, perfect for modern tech-focused applications. ![Neon Theme](/images/pretty-terminal/neon.webp) ```typescript import { getPrettyTerminal, neon } from '@loglayer/transport-pretty-terminal'; const transport = getPrettyTerminal({ theme: neon, }); ``` ### Nature Theme A light theme with organic, earthy colors inspired by forest landscapes. Great for nature-inspired interfaces and applications focusing on readability. ![Nature Theme](/images/pretty-terminal/nature.webp) ```typescript import { getPrettyTerminal, nature } from '@loglayer/transport-pretty-terminal'; const transport = getPrettyTerminal({ theme: nature, }); ``` ### Pastel Theme A soft, calming theme with gentle colors inspired by watercolor paintings. Perfect for long coding sessions and reduced visual stress. 
![Pastel Theme](/images/pretty-terminal/pastel.webp) ```typescript import { getPrettyTerminal, pastel } from '@loglayer/transport-pretty-terminal'; const transport = getPrettyTerminal({ theme: pastel, }); ``` ## Custom Themes You can create your own theme by implementing the `PrettyTerminalTheme` interface, which uses [`chalk`](https://github.com/chalk/chalk) for color styling: ```typescript import { getPrettyTerminal, chalk } from '@loglayer/transport-pretty-terminal'; const myCustomTheme = { // Configuration for the default log view shown in real-time simpleView: { // Color configuration for different log levels colors: { trace: chalk.gray, // Style for trace level logs debug: chalk.blue, // Style for debug level logs info: chalk.green, // Style for info level logs warn: chalk.yellow, // Style for warning level logs error: chalk.red, // Style for error level logs fatal: chalk.bgRed.white, // Style for fatal level logs - background red with white text }, logIdColor: chalk.dim, // Style for the unique log identifier dataValueColor: chalk.white, // Style for the actual values in structured data dataKeyColor: chalk.dim, // Style for the keys/property names in structured data selectorColor: chalk.cyan, // Style for the selection indicator (►) in selection mode }, detailedView: { // Inherits all options from simpleView, plus additional detailed view options colors: { trace: chalk.gray, debug: chalk.blue, info: chalk.green, warn: chalk.yellow, error: chalk.red, fatal: chalk.bgRed.white, }, logIdColor: chalk.dim, dataValueColor: chalk.white, dataKeyColor: chalk.dim, // Additional detailed view specific options headerColor: chalk.bold.cyan, // Style for section headers labelColor: chalk.bold, // Style for field labels (e.g., "Timestamp:", "Level:") separatorColor: chalk.dim, // Style for visual separators // Configuration for JSON pretty printing jsonColors: { keysColor: chalk.dim, // Style for JSON property names dashColor: chalk.dim, // Style for array item dashes 
numberColor: chalk.yellow, // Style for numeric values stringColor: chalk.green, // Style for string values multilineStringColor: chalk.green, // Style for multiline strings positiveNumberColor: chalk.yellow, // Style for positive numbers negativeNumberColor: chalk.red, // Style for negative numbers booleanColor: chalk.cyan, // Style for boolean values nullUndefinedColor: chalk.gray, // Style for null/undefined values dateColor: chalk.magenta, // Style for date values }, }, }; const transport = getPrettyTerminal({ theme: myCustomTheme, }); ``` --- --- url: 'https://loglayer.dev/plugins/redaction.md' description: Learn how to use the redaction plugin to protect sensitive data in your logs --- # Redaction Plugin [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Fplugin-redaction)](https://www.npmjs.com/package/@loglayer/plugin-redaction) [Plugin Source](https://github.com/loglayer/loglayer/tree/master/packages/plugins/redaction) The redaction plugin for LogLayer provides a simple way to redact sensitive information from your logs using [fast-redact](https://www.npmjs.com/package/fast-redact). It currently only performs redaction on metadata. ## Installation ::: code-group ```sh [npm] npm install @loglayer/plugin-redaction ``` ```sh [pnpm] pnpm add @loglayer/plugin-redaction ``` ```sh [yarn] yarn add @loglayer/plugin-redaction ``` ::: ## Basic Usage ```typescript import { LogLayer, ConsoleTransport } from 'loglayer' import { redactionPlugin } from '@loglayer/plugin-redaction' const log = new LogLayer({ transport: new ConsoleTransport({ logger: console, }), plugins: [ redactionPlugin({ paths: ["password"], }), ], }) // The password will be redacted in the output log.metadataOnly({ password: "123456", }) ``` ## Configuration Options ### Required Parameters None - all parameters are optional. ### Optional Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `id` | `string` | - | Unique identifier for the plugin. 
Used for selectively disabling / enabling and removing the plugin |
| `disabled` | `boolean` | `false` | If true, the plugin will skip execution |
| `paths` | `string[]` | - | An array of strings describing the nested location of a key in an object. See [fast-redact](https://www.npmjs.com/package/fast-redact) for path syntax |
| `censor` | `string \| ((v: any) => any)` | `"[REDACTED]"` | The value that overwrites redacted properties |
| `remove` | `boolean` | `false` | When set to true, causes keys to be removed from the serialized output |
| `strict` | `boolean` | `false` | When set to true, causes the redactor function to throw if it encounters a primitive where an object was expected |

## Examples

### Basic Path Redaction

```typescript
const log = new LogLayer({
  transport: new ConsoleTransport({
    logger: console,
  }),
  plugins: [
    redactionPlugin({
      paths: ["password", "creditCard", "ssn"],
    }),
  ],
})

log.metadataOnly({
  user: "john",
  password: "secret123",
  creditCard: "4111111111111111",
  ssn: "123-45-6789"
})

// Output:
// {
//   "user": "john",
//   "password": "[REDACTED]",
//   "creditCard": "[REDACTED]",
//   "ssn": "[REDACTED]"
// }
```

### Nested Path Redaction

```typescript
const log = new LogLayer({
  transport: new ConsoleTransport({
    logger: console,
  }),
  plugins: [
    redactionPlugin({
      paths: ["user.password", "payment.*.number"],
    }),
  ],
})

log.metadataOnly({
  user: {
    name: "john",
    password: "secret123"
  },
  payment: {
    credit: {
      number: "4111111111111111",
      expiry: "12/24"
    },
    debit: {
      number: "4222222222222222",
      expiry: "01/25"
    }
  }
})

// Output:
// {
//   "user": {
//     "name": "john",
//     "password": "[REDACTED]"
//   },
//   "payment": {
//     "credit": {
//       "number": "[REDACTED]",
//       "expiry": "12/24"
//     },
//     "debit": {
//       "number": "[REDACTED]",
//       "expiry": "01/25"
//     }
//   }
// }
```

### Custom Censor Value

```typescript
const log = new LogLayer({
  transport: new ConsoleTransport({
    logger: console,
  }),
  plugins: [
    redactionPlugin({
      paths: ["password"],
      censor: "***",
    }),
  ],
})
log.metadataOnly({ user: "john", password: "secret123" }) // Output: // { // "user": "john", // "password": "***" // } ``` ### Remove Instead of Redact ```typescript const log = new LogLayer({ transport: new ConsoleTransport({ logger: console, }), plugins: [ redactionPlugin({ paths: ["password"], remove: true, }), ], }) log.metadataOnly({ user: "john", password: "secret123" }) // Output: // { // "user": "john" // } ``` ### Custom Censor Function ```typescript const log = new LogLayer({ transport: new ConsoleTransport({ logger: console, }), plugins: [ redactionPlugin({ paths: ["creditCard"], censor: (value) => { if (typeof value === 'string') { return value.slice(-4).padStart(value.length, '*') } return '[REDACTED]' }, }), ], }) log.metadataOnly({ user: "john", creditCard: "4111111111111111" }) // Output: // { // "user": "john", // "creditCard": "************1111" // } ``` ## Changelog View the changelog [here](./changelogs/redaction-changelog.md). --- --- url: 'https://loglayer.dev/transports/roarr.md' description: Send logs to Roarr with the LogLayer logging library --- # Roarr Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-roarr)](https://www.npmjs.com/package/@loglayer/transport-roarr) [Roarr](https://github.com/gajus/roarr) is a JSON logger for Node.js and browser environments. 
[Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/roarr) ## Installation Install the required packages: ::: code-group ```sh [npm] npm i loglayer @loglayer/transport-roarr roarr serialize-error ``` ```sh [pnpm] pnpm add loglayer @loglayer/transport-roarr roarr serialize-error ``` ```sh [yarn] yarn add loglayer @loglayer/transport-roarr roarr serialize-error ``` ::: ## Setup Roarr requires environment configuration to enable logging: ### Node.js ```bash ROARR_LOG=true node your-app.js ``` ### Browser ```typescript window.ROARR = { enabled: true } ``` ### Implementation ```typescript import { Roarr as r } from 'roarr' import { LogLayer } from 'loglayer' import { RoarrTransport } from "@loglayer/transport-roarr" import { serializeError } from 'serialize-error' const log = new LogLayer({ transport: new RoarrTransport({ logger: r }), errorSerializer: serializeError // Roarr requires error serialization }) ``` ## Changelog View the changelog [here](./changelogs/roarr-changelog.md). --- --- url: 'https://loglayer.dev/transports/sentry.md' --- # Sentry Transport [![npm version](https://img.shields.io/npm/v/@loglayer/transport-sentry.svg)](https://www.npmjs.com/package/@loglayer/transport-sentry) [Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/sentry) The Sentry transport for LogLayer sends structured logs to Sentry using the Sentry SDK's logger API. This transport integrates seamlessly with Sentry's structured logging features and supports all Sentry log levels. 
## Installation

```bash
npm install @loglayer/transport-sentry serialize-error
```

```bash
yarn add @loglayer/transport-sentry serialize-error
```

```bash
pnpm add @loglayer/transport-sentry serialize-error
```

Also install the appropriate Sentry SDK for your platform:

* **Browser**: [@sentry/browser](https://docs.sentry.io/platforms/javascript/logs/)
* **Next.js**: [@sentry/nextjs](https://docs.sentry.io/platforms/javascript/guides/nextjs/logs/)
* **Bun**: [@sentry/bun](https://docs.sentry.io/platforms/javascript/guides/bun/logs/)
* **Deno**: [@sentry/deno](https://docs.sentry.io/platforms/javascript/guides/deno/)
* **Node.js**: [@sentry/node](https://docs.sentry.io/platforms/javascript/guides/node/)

## Setup

First, initialize Sentry with structured logging enabled for your platform:

```typescript
// Node.js example, but most of the JS-based SDKs follow this pattern.
// In most cases, you'll want to initialize at the top-most entrypoint
// of your app (such as the index.ts file) so Sentry can instrument your code.
import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: "YOUR_SENTRY_DSN",
  enableLogs: true,
});
```

Then configure LogLayer with the Sentry transport:

```typescript
// logger.ts
import * as Sentry from "@sentry/node";
import { LogLayer } from "loglayer";
import { SentryTransport } from "@loglayer/transport-sentry";
import { serializeError } from "serialize-error";

const log = new LogLayer({
  errorSerializer: serializeError,
  transport: [
    new SentryTransport({
      logger: Sentry.logger,
    }),
  ],
});
```

## Configuration

### Required Parameters

| Name | Type | Description |
|------|------|-------------|
| `logger` | `SentryLogger` | The Sentry logger instance to use for logging |

### Optional Parameters

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `id` | `string` | Auto-generated | An identifier for the transport |
| `enabled` | `boolean` | `true` | If false, the transport will not send logs to the logger |
| `level` | `"trace" \|
"debug" \| "info" \| "warn" \| "error" \| "fatal"` | `"trace"` | Minimum log level to process | | `consoleDebug` | `boolean` | `false` | If true, the transport will log to the console for debugging purposes | ## Changelog View the changelog [here](./changelogs/sentry-changelog.md). --- --- url: 'https://loglayer.dev/transports/signale.md' description: Send logs to Signale with the LogLayer logging library --- # Signale Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-signale)](https://www.npmjs.com/package/@loglayer/transport-signale) [Signale](https://github.com/klaussinani/signale) is a highly configurable logging utility designed for CLI applications. [Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/signale) ## Important Notes * Signale only works in Node.js environments (not in browsers) * It is primarily designed for CLI applications * LogLayer only integrates with standard log levels (not CLI-specific levels like `success`, `await`, etc.) ## Installation Install the required packages: ::: code-group ```sh [npm] npm i loglayer @loglayer/transport-signale signale ``` ```sh [pnpm] pnpm add loglayer @loglayer/transport-signale signale ``` ```sh [yarn] yarn add loglayer @loglayer/transport-signale signale ``` ::: ## Setup ```typescript import { Signale } from 'signale' import { LogLayer } from 'loglayer' import { SignaleTransport } from "@loglayer/transport-signale" const signale = new Signale() const log = new LogLayer({ transport: new SignaleTransport({ logger: signale }) }) ``` ## Log Level Mapping | LogLayer | Signale | |----------|---------| | trace | debug | | debug | debug | | info | info | | warn | warn | | error | error | | fatal | error | ## Changelog View the changelog [here](./changelogs/signale-changelog.md). 
---

---
url: 'https://loglayer.dev/transports/simple-pretty-terminal.md'
description: >-
  Simple, pretty log output for LogLayer in the terminal (no interactive
  features)
---

# Simple Pretty Terminal Transport

[![NPM Version](https://img.shields.io/npm/v/@loglayer/transport-simple-pretty-terminal)](https://www.npmjs.com/package/@loglayer/transport-simple-pretty-terminal)

[Source on GitHub](https://github.com/loglayer/loglayer/tree/master/packages/transports/simple-pretty-terminal)

![Inline mode](/images/simple-pretty-terminal/terminal-inline.webp)

The Simple Pretty Terminal Transport provides beautiful, themed log output in your console, with no interactive features. It supports printing in both Node.js and browser environments, as well as Next.js (client- and server-side).

::: tip Looking for a powerful alternative?
The Simple Pretty Terminal does not support keyboard navigation, search / filtering, or other interactive features. For more advanced console printing in a single application that isn't browser-based or built on Next.js, use the [Pretty Terminal Transport](/transports/pretty-terminal).
:::

## Installation

::: code-group

```bash [npm]
npm install loglayer @loglayer/transport-simple-pretty-terminal
```

```bash [pnpm]
pnpm add loglayer @loglayer/transport-simple-pretty-terminal
```

```bash [yarn]
yarn add loglayer @loglayer/transport-simple-pretty-terminal
```

:::

## Basic Usage

::: warning Pair with another logger for production
Simple Pretty Terminal is primarily intended for local development. Although there's nothing wrong with running it in production, the log output is not designed to be ingested by third-party log collection systems. It is recommended that you disable other transports when using Simple Pretty Terminal to avoid duplicate log output.
:::

::: warning Required Runtime Configuration
You **must** specify the `runtime` option when creating the transport.

* `runtime: "node"` — Use in Node.js environments. Logs are written using `process.stdout.write`.
* `runtime: "browser"` — Use in browser environments. Logs are written using the browser `console` methods.
:::

```typescript
import { LogLayer, ConsoleTransport } from "loglayer";
import { getSimplePrettyTerminal, moonlight } from "@loglayer/transport-simple-pretty-terminal";

const log = new LogLayer({
  transport: [
    new ConsoleTransport({
      logger: console,
      // Example of how to enable a transport for non-development environments
      enabled: process.env.NODE_ENV !== 'development',
    }),
    getSimplePrettyTerminal({
      enabled: process.env.NODE_ENV === 'development',
      runtime: "node", // Required: "node" or "browser"
      viewMode: "expanded", // "inline" | "message-only" | "expanded"
      theme: moonlight
    }),
  ],
});

log.withMetadata({ foo: "bar" }).info("Hello from Simple Pretty Terminal!");
```

## Configuration Options

### Required Parameters

| Name | Type | Description |
|------|------|-------------|
| `runtime` | `string` | Runtime environment: "node" or "browser" |

### Optional Parameters

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `enabled` | `boolean` | `true` | Enable/disable the transport |
| `viewMode` | `string` | `"inline"` | Log view: "inline", "message-only", or "expanded" |
| `theme` | `object` | `moonlight` | Theme for log output (see built-in themes) |
| `maxInlineDepth` | `number` | `4` | Max depth for inline data in inline mode |
| `showLogId` | `boolean` | `false` | Whether to show log IDs in the output |
| `timestampFormat` | `string \| function` | `"HH:mm:ss.SSS"` | Custom timestamp format ([date-fns format string](https://date-fns.org/docs/format) or function) |
| `collapseArrays` | `boolean` | `true` | Whether to collapse arrays in expanded mode for cleaner output |
| `flattenNestedObjects` | `boolean` | `true` | Whether to flatten nested objects with dot notation in inline mode |
| `includeDataInBrowserConsole` | `boolean` | `false` | When enabled, also passes the data object as a second argument to browser console methods for easier inspection.
Recommended with `viewMode: "inline"` or `"message-only"` | | `enableSprintf` | `boolean` | `false` | Enable sprintf-style message formatting with placeholders like `%s`, `%d`, `%f`, `%j` | ## Runtime Environments The transport supports two runtime environments: ### Node.js Runtime Use `runtime: "node"` for Node.js applications: ```typescript const transport = getSimplePrettyTerminal({ runtime: "node", viewMode: "inline", theme: moonlight, }); ``` In Node.js runtime, logs are written using `process.stdout.write` for optimal terminal output. ### Browser Runtime Use `runtime: "browser"` for browser applications: ```typescript const transport = getSimplePrettyTerminal({ runtime: "browser", viewMode: "inline", theme: moonlight, }); ``` In browser runtime, logs are written using appropriate console methods based on log level: * `trace` and `debug` levels → `console.debug()` * `info` level → `console.info()` * `warn` level → `console.warn()` * `error` and `fatal` levels → `console.error()` This ensures proper log level filtering and styling in browser developer tools. ## Next.js usage To configure with Next.js, use the following code to use both server and client-side rendering: ```typescript const isServer = typeof window === "undefined"; const transport = getSimplePrettyTerminal({ runtime: isServer ? "node" : "browser", viewMode: "inline", theme: moonlight, }); ``` For full integration instructions, see the [Next.js integration guide](/example-integrations/nextjs.md) ## View Modes The transport supports three view modes: ### Inline (default) ![Inline mode](/images/simple-pretty-terminal/terminal-inline.webp) * `viewMode: 'inline'` Shows all information with complete data structures inline using key=value format. Nested objects are flattened with dot notation (e.g., `user.profile.name=John`). Arrays can be controlled with `collapseArrays` - when `true` they show as `[...]`, when `false` they show as full JSON. Complex objects are shown as JSON when expanded. 
### Expanded ![Expanded mode](/images/simple-pretty-terminal/terminal-expanded.webp) * `viewMode: 'expanded'` Shows timestamp, level, and message on first line, with data on indented separate lines for better readability. ### Message only ![Message only mode](/images/simple-pretty-terminal/terminal-message-only.webp) * `viewMode: 'message-only'` Shows only the timestamp, log level and message for a cleaner output. ## Themes The transport comes with several built-in themes: * `moonlight` (default) * `sunlight` * `neon` * `nature` * `pastel` For more information on themes, see the [Pretty Terminal Themes](/transports/pretty-terminal.md#themes) ## Creating Custom Themes You can create your own custom themes by implementing the `SimplePrettyTerminalTheme` interface. This gives you complete control over the colors and styling of your log output. ### Theme Structure A theme consists of the following properties: ```typescript interface SimplePrettyTerminalTheme { colors?: { trace?: ChalkInstance; debug?: ChalkInstance; info?: ChalkInstance; warn?: ChalkInstance; error?: ChalkInstance; fatal?: ChalkInstance; }; logIdColor?: ChalkInstance; dataValueColor?: ChalkInstance; dataKeyColor?: ChalkInstance; } ``` ### Using Chalk for Colors The transport uses the `chalk` library for styling. 
You can import it from the transport package:

```typescript
import { chalk } from "@loglayer/transport-simple-pretty-terminal";
```

### Basic Custom Theme Example

Here's a simple custom theme with a dark blue color scheme:

```typescript
import { chalk } from "@loglayer/transport-simple-pretty-terminal";

const darkBlueTheme = {
  colors: {
    trace: chalk.gray,
    debug: chalk.blue,
    info: chalk.cyan,
    warn: chalk.yellow,
    error: chalk.red,
    fatal: chalk.bgRed.white,
  },
  logIdColor: chalk.gray,
  dataValueColor: chalk.white,
  dataKeyColor: chalk.blue,
};

const transport = getSimplePrettyTerminal({
  runtime: "node",
  theme: darkBlueTheme,
});
```

## Custom Timestamp Formatting

You can customize timestamp formatting using [date-fns format strings](https://date-fns.org/docs/format) or custom functions:

```typescript
// Using date-fns format strings (use one; alternatives shown as comments)
const transport1 = getSimplePrettyTerminal({
  runtime: "node",
  timestampFormat: "yyyy-MM-dd HH:mm:ss", // 2024-01-15 14:30:25
  // timestampFormat: "MMM dd, yyyy 'at' HH:mm", // Jan 15, 2024 at 14:30
  // timestampFormat: "HH:mm:ss.SSS", // 14:30:25.123 (default)
});

// Using a custom function
const transport2 = getSimplePrettyTerminal({
  runtime: "node",
  timestampFormat: (timestamp: number) => {
    const date = new Date(timestamp);
    return `${date.getHours()}:${date.getMinutes()}:${date.getSeconds()}`;
  },
});
```

## Browser Console Data Inspection

### `includeDataInBrowserConsole` Option

When running in the browser, you can enable the `includeDataInBrowserConsole` option to pass the log data object as a second argument to the browser's console methods (e.g., `console.info(message, data)` instead of `console.info(message)`). This allows you to expand and inspect the data object directly in your browser's developer tools, making debugging much easier.

**Recommended:** Use this option with `viewMode: "message-only"` to avoid redundant data printing (otherwise, the data will be printed both inline and as an expandable object).
```typescript
const transport = getSimplePrettyTerminal({
  runtime: "browser",
  viewMode: "message-only",
  includeDataInBrowserConsole: true,
});
```

**Example output in browser devtools:**

```js
INFO [12:34:56.789] ▶ INFO User logged in
{ user: { id: 123, name: "Alice" } }
```

You can expand the object in the console for deeper inspection.

## Sprintf Message Formatting

The transport supports sprintf-style message formatting when `enableSprintf` is set to `true`, powered by the [sprintf-js](https://github.com/alexei/sprintf.js) package. This allows you to use format specifiers like `%s`, `%d`, `%f`, and `%j` in your log messages.

```typescript
const transport = getSimplePrettyTerminal({
  runtime: "node",
  enableSprintf: true,
});

const log = new LogLayer({ transport });

// String substitution
log.info("User %s logged in from %s", "Alice", "192.168.1.100");
// Output: User Alice logged in from 192.168.1.100

// Integer substitution
log.info("Processing %d items", 42);
// Output: Processing 42 items

// Float with precision
log.info("Completed in %.2f seconds", 3.14159);
// Output: Completed in 3.14 seconds

// JSON substitution
log.info("Request body: %j", { name: "John", age: 30 });
// Output: Request body: {"name":"John","age":30}

// Multiple substitutions
log.warn("Memory usage at %d%% - threshold is %d%%", 85, 90);
// Output: Memory usage at 85% - threshold is 90%
```

### Supported Format Specifiers

| Specifier | Description |
|-----------|-------------|
| `%s` | String |
| `%d` or `%i` | Integer |
| `%f` | Floating point number |
| `%.Nf` | Floating point with N decimal places (e.g., `%.2f`) |
| `%j` | JSON (serializes objects) |
| `%%` | Literal percent sign |

For the full list of format specifiers and advanced formatting options, see the [sprintf-js documentation](https://github.com/alexei/sprintf.js#format-specification).

::: tip
If sprintf formatting fails (e.g., invalid format specifier), the transport will gracefully fall back to joining the messages with spaces.
::: --- --- url: 'https://loglayer.dev/plugins/sprintf.md' description: Printf-style string formatting support for LogLayer --- # Sprintf Plugin [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Fplugin-sprintf)](https://www.npmjs.com/package/@loglayer/plugin-sprintf) [Plugin Source](https://github.com/loglayer/loglayer/tree/master/packages/plugins/sprintf) The sprintf plugin provides printf-style string formatting support using [sprintf-js](https://www.npmjs.com/package/sprintf-js). It allows you to format your log messages using familiar printf-style placeholders if a logging library does not support this behavior. ::: warning LogLayer does not allow passing items that are not strings, booleans, or numbers into message methods like `info`, `error`, etc. **It is recommended to only use string / boolean / number specifiers in your format strings.** ::: ## Installation ::: code-group ```bash [npm] npm install @loglayer/plugin-sprintf ``` ```bash [yarn] yarn add @loglayer/plugin-sprintf ``` ```bash [pnpm] pnpm add @loglayer/plugin-sprintf ``` ::: ## Usage ```typescript import { LogLayer, ConsoleTransport } from 'loglayer' import { sprintfPlugin } from '@loglayer/plugin-sprintf' const log = new LogLayer({ transport: new ConsoleTransport({ logger: console }), plugins: [ sprintfPlugin() ] }) // Example usage log.info("Hello %s!", "world") // Output: Hello world! log.info("Number: %d", 42) // Output: Number: 42 ``` ## Changelog View the changelog [here](./changelogs/sprintf-changelog.md). 
--- --- url: 'https://loglayer.dev/transports/sumo-logic.md' description: Send logs to Sumo Logic with the LogLayer logging library --- # Sumo Logic Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-sumo-logic)](https://www.npmjs.com/package/@loglayer/transport-sumo-logic) [Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/sumo-logic) The Sumo Logic transport sends logs to [Sumo Logic](https://www.sumologic.com/) via their [HTTP Source](https://help.sumologic.com/docs/send-data/hosted-collectors/http-source/logs-metrics/upload-logs/). ## Installation Using npm: ```bash npm install @loglayer/transport-sumo-logic serialize-error ``` Using yarn: ```bash yarn add @loglayer/transport-sumo-logic serialize-error ``` Using pnpm: ```bash pnpm add @loglayer/transport-sumo-logic serialize-error ``` ## Usage First, you'll need to [create an HTTP Source in Sumo Logic](https://help.sumologic.com/docs/send-data/hosted-collectors/http-source/logs-metrics/#configure-an-httplogs-and-metrics-source). 
Once you have the URL, you can configure the transport: ```typescript import { LogLayer } from "loglayer"; import { SumoLogicTransport } from "@loglayer/transport-sumo-logic"; import { serializeError } from "serialize-error"; const transport = new SumoLogicTransport({ url: "YOUR_SUMO_LOGIC_HTTP_SOURCE_URL", }); const logger = new LogLayer({ errorSerializer: serializeError, // Important for proper error serialization transport }); logger.info("Hello from LogLayer!"); ``` ## Configuration Options ### Required Parameters | Name | Type | Description | |------|------|-------------| | `url` | `string` | The URL of your HTTP Source endpoint | ### Optional Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `useCompression` | `boolean` | `true` | Whether to use gzip compression | | `sourceCategory` | `string` | - | Source category to assign to the logs | | `sourceName` | `string` | - | Source name to assign to the logs | | `sourceHost` | `string` | - | Source host to assign to the logs | | `fields` | `Record` | `{}` | Fields to be added as X-Sumo-Fields header | | `headers` | `Record` | `{}` | Custom headers to be added to the request | | `messageField` | `string` | `"message"` | Field name to use for the log message | | `onError` | `(error: Error \| string) => void` | - | Callback for error handling | | `level` | `"trace" \| "debug" \| "info" \| "warn" \| "error" \| "fatal"` | `"trace"` | Minimum log level to process. 
Logs below this level will be filtered out | | `retryConfig.maxRetries` | `number` | `3` | Maximum number of retry attempts | | `retryConfig.initialRetryMs` | `number` | `1000` | Initial retry delay in milliseconds | ## Examples ### With Source Information ```typescript const transport = new SumoLogicTransport({ url: "YOUR_SUMO_LOGIC_HTTP_SOURCE_URL", sourceCategory: "backend", sourceName: "api-server", sourceHost: "prod-api-1" }); ``` ### With Custom Fields ```typescript const transport = new SumoLogicTransport({ url: "YOUR_SUMO_LOGIC_HTTP_SOURCE_URL", fields: { environment: "production", team: "platform", region: "us-west-2" } }); ``` ### With Custom Message Field ```typescript const transport = new SumoLogicTransport({ url: "YOUR_SUMO_LOGIC_HTTP_SOURCE_URL", messageField: "log_message" // Messages will be sent as { log_message: "..." } }); ``` ### With Retry Configuration ```typescript const transport = new SumoLogicTransport({ url: "YOUR_SUMO_LOGIC_HTTP_SOURCE_URL", retryConfig: { maxRetries: 5, initialRetryMs: 500 } }); ``` ### With Custom Headers ```typescript const transport = new SumoLogicTransport({ url: "YOUR_SUMO_LOGIC_HTTP_SOURCE_URL", headers: { "X-Custom-Header": "value" } }); ``` ## Log Format The transport sends logs to Sumo Logic in the following format: ```typescript { message?: string; // Present only if there are string messages severity: string; // The log level (e.g., "INFO", "ERROR") timestamp: string; // ISO 8601 timestamp ...metadata // Any additional metadata passed to the logger } ``` ### Custom Fields Custom fields specified in the `fields` option are sent as an `X-Sumo-Fields` header in the format: ``` X-Sumo-Fields: key1=value1,key2=value2 ``` This allows for better indexing and searching in Sumo Logic. ## Size Limits The transport enforces Sumo Logic's 1MB payload size limit. If a payload exceeds this limit: 1. The transport will not send the log 2. The `onError` callback will be called with an error message 3. 
The error will include the actual size that exceeded the limit This applies to both raw and compressed payloads. ## Changelog View the changelog [here](./changelogs/sumo-logic-changelog.md). --- --- url: 'https://loglayer.dev/plugins/testing-plugins.md' description: Learn how to write tests for your LogLayer plugins --- # Testing Plugins LogLayer provides a `TestTransport` and `TestLoggingLibrary` that make it easy to test your plugins. Here's an example of how to test a plugin that adds a timestamp to metadata: ```typescript import { LogLayer, TestLoggingLibrary, TestTransport } from "loglayer"; import { describe, expect, it } from "vitest"; describe("timestamp plugin", () => { it("should add timestamp to metadata", () => { // Create a test logger to capture output const logger = new TestLoggingLibrary(); // Create the timestamp plugin const timestampPlugin = { id: "timestamp", onMetadataCalled: (metadata) => ({ ...metadata, timestamp: "2024-01-01T00:00:00.000Z" }) }; // Create LogLayer instance with the plugin const log = new LogLayer({ transport: new TestTransport({ logger, }), plugins: [timestampPlugin], }); // Test the plugin by adding some metadata log.metadataOnly({ message: "test message" }); // Get the logged line and verify the timestamp was added const line = logger.popLine(); expect(line.data[0].timestamp).toBe("2024-01-01T00:00:00.000Z"); expect(line.data[0].message).toBe("test message"); }); it("should handle empty metadata", () => { const logger = new TestLoggingLibrary(); const timestampPlugin = { id: "timestamp", onMetadataCalled: (metadata) => ({ ...metadata, timestamp: "2024-01-01T00:00:00.000Z" }) }; const log = new LogLayer({ transport: new TestTransport({ logger, }), plugins: [timestampPlugin], }); log.metadataOnly({}); const line = logger.popLine(); expect(line.data[0].timestamp).toBe("2024-01-01T00:00:00.000Z"); }); }); ``` ## TestLoggingLibrary API The `TestLoggingLibrary` provides several methods and properties to help you test your 
plugins: ### Properties * `lines`: An array containing all logged lines. Each line has a `level` (`LogLevel`) and `data` (array of parameters passed to the log method). ### Methods * `getLastLine()`: Returns the most recent log line without removing it. Returns null if no lines exist. * `popLine()`: Returns and removes the most recent log line. Returns null if no lines exist. * `clearLines()`: Removes all logged lines, resetting the library to its initial state. Each logged line has the following structure: ```typescript { level: LogLevel; // The log level (info, warn, error, etc.) data: any[]; // Array of parameters passed to the log method } ``` --- --- url: 'https://loglayer.dev/mixins/testing-mixins.md' description: Learn how to write tests for your LogLayer mixins --- # Testing Mixins LogLayer provides testing utilities to help you test your mixins. Since mixins add methods directly to LogLayer instances, testing focuses on verifying that your mixin methods work correctly and integrate properly with the LogLayer API. There are two approaches to testing mixins: * **Unit Testing**: Uses `TestTransport` and `TestLoggingLibrary` for fast, isolated tests with assertions * **Live Testing**: Uses real transports to verify actual output and real-world behavior ## Example Mixin We'll use the performance timing mixin from the [Creating Mixins](/mixins/creating-mixins) guide as our example. This mixin adds `withPerfStart()` and `withPerfEnd()` methods to both `LogLayer` and `LogBuilder`, and includes a plugin that automatically adds performance timing data to log metadata. 
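At its core, the mixin's timing logic is just two `Map`s keyed by timer ID. Here's a stripped-down, LogLayer-free sketch of that state management (the standalone function names are illustrative, not part of the LogLayer API):

```typescript
// Standalone sketch of the timing state the mixin maintains (no LogLayer)
const perfStartTimes = new Map<string, number>();
const perfDurations = new Map<string, number>();

// Record when a timer started
function perfStart(id: string): void {
  perfStartTimes.set(id, Date.now());
}

// Compute the duration and discard the start time
function perfEnd(id: string): void {
  const start = perfStartTimes.get(id);
  if (start !== undefined) {
    perfDurations.set(id, Date.now() - start);
    perfStartTimes.delete(id);
  }
}

// Drain accumulated durations into a metadata object, as the
// mixin's plugin does in its onBeforeDataOut hook
function collectPerfTimings(): Record<string, number> {
  const timings: Record<string, number> = {};
  for (const [id, duration] of perfDurations.entries()) {
    timings[id] = duration;
    perfDurations.delete(id);
  }
  return timings;
}
```

Because `collectPerfTimings()` clears the durations map, a given timing appears in exactly one log entry — the property the unit tests later in this page verify.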
First, here's the complete mixin implementation:

```typescript
// perf-timing-mixin.ts
import type {
  LogLayerMixin,
  LogBuilderMixin,
  LogLayer,
  LogLayerMixinRegistration,
  LogLayerPlugin,
  MockLogLayer,
  PluginBeforeDataOutParams,
} from 'loglayer';
import { LogLayerMixinAugmentType, LogBuilder, MockLogBuilder } from 'loglayer';

// TypeScript declarations
export interface IPerfTimingMixin<T> {
  withPerfStart(id: string): T;
  withPerfEnd(id: string): T;
}

// Augment the loglayer module
declare module 'loglayer' {
  interface LogLayer extends IPerfTimingMixin<LogBuilder> {}
  interface LogBuilder extends IPerfTimingMixin<LogBuilder> {}
  interface MockLogLayer extends IPerfTimingMixin<MockLogBuilder> {}
  interface MockLogBuilder extends IPerfTimingMixin<MockLogBuilder> {}
  interface ILogLayer extends IPerfTimingMixin<ILogBuilder> {}
  interface ILogBuilder extends IPerfTimingMixin<ILogBuilder> {}
}

// Module-level storage for performance timing state
const perfStartTimes = new Map<string, number>();
const perfDurations = new Map<string, number>();

// LogLayer mixin - methods return LogBuilder
const logLayerPerfMixin: LogLayerMixin = {
  augmentationType: LogLayerMixinAugmentType.LogLayer,
  augment: (prototype) => {
    prototype.withPerfStart = function (this: LogLayer, id: string): LogBuilder {
      return new LogBuilder(this).withPerfStart(id);
    };
    prototype.withPerfEnd = function (this: LogLayer, id: string): LogBuilder {
      return new LogBuilder(this).withPerfEnd(id);
    };
  },
  augmentMock: (prototype) => {
    prototype.withPerfStart = function (this: MockLogLayer, id: string): any {
      return new MockLogBuilder(this).withPerfStart(id);
    };
    prototype.withPerfEnd = function (this: MockLogLayer, id: string): any {
      return new MockLogBuilder(this).withPerfEnd(id);
    };
  }
};

// LogBuilder mixin - actual implementation
const logBuilderPerfMixin: LogBuilderMixin = {
  augmentationType: LogLayerMixinAugmentType.LogBuilder,
  augment: (prototype) => {
    prototype.withPerfStart = function (this: LogBuilder, id: string): LogBuilder {
      perfStartTimes.set(id, Date.now());
      return this;
    };
    prototype.withPerfEnd = function (this: LogBuilder, id: string): LogBuilder {
      const startTime = perfStartTimes.get(id);
      if (startTime !== undefined) {
        const duration = Date.now() - startTime;
        perfDurations.set(id, duration);
        perfStartTimes.delete(id);
      }
      return this;
    };
  },
  augmentMock: (prototype) => {
    prototype.withPerfStart = function (this: MockLogBuilder, id: string): MockLogBuilder {
      // Mock implementation - no-op for testing
      return this;
    };
    prototype.withPerfEnd = function (this: MockLogBuilder, id: string): MockLogBuilder {
      // Mock implementation - no-op for testing
      return this;
    };
  }
};

// Plugin that adds performance timings to log metadata
const perfPlugin: LogLayerPlugin = {
  onBeforeDataOut: (params: PluginBeforeDataOutParams) => {
    const perfTimings: Record<string, number> = {};

    // Collect all durations and clear them
    for (const [id, duration] of perfDurations.entries()) {
      perfTimings[id] = duration;
      perfDurations.delete(id);
    }

    if (Object.keys(perfTimings).length > 0) {
      return { ...params.data, perfTimings };
    }

    return params.data;
  }
};

// Registration function
export function perfTimingMixin(): LogLayerMixinRegistration {
  return {
    mixinsToAdd: [logLayerPerfMixin, logBuilderPerfMixin],
    pluginsToAdd: [perfPlugin]
  };
}
```

## Unit Testing

Unit tests use `TestTransport` and `TestLoggingLibrary` to verify mixin methods and plugin integration without requiring external dependencies. These are fast, isolated tests that use assertions to verify behavior.
### TestLoggingLibrary API The `TestLoggingLibrary` provides several methods to help you test your mixins: * `lines`: Array of all logged lines (each line has `level` and `data`) * `getLastLine()`: Get the last line that was logged * `popLine()`: Get and remove the last line that was logged * `clearLines()`: Clear all logged lines Each line in `lines` has the following structure: ```typescript { level: 'info' | 'warn' | 'error' | 'debug' | 'trace' | 'fatal'; data: any[]; // Array of data passed to the logger } ``` The `data` array contains the actual log data, with metadata typically in the first element: ```typescript const line = logger.popLine(); const logData = line.data[0]; // Usually contains metadata and message expect(logData.perfTimings).toBeDefined(); ``` ### Example Unit Test ```typescript import { LogLayer, useLogLayerMixin, TestLoggingLibrary, TestTransport } from 'loglayer'; import { describe, expect, it, beforeEach } from 'vitest'; import { perfTimingMixin } from './perf-timing-mixin.js'; describe('perfTimingMixin', () => { let log: LogLayer; let logger: TestLoggingLibrary; beforeEach(() => { logger = new TestLoggingLibrary(); useLogLayerMixin(perfTimingMixin()); log = new LogLayer({ transport: new TestTransport({ logger }), }); }); it('should add mixin methods and plugin-enriched data via TestLoggingLibrary', async () => { // Mixin methods are available expect(typeof log.withPerfStart).toBe('function'); expect(typeof log.withPerfEnd).toBe('function'); // Use mixin methods to track timing log.withPerfStart('api-call'); await new Promise(resolve => setTimeout(resolve, 20)); log.withPerfEnd('api-call').info('API call completed'); // TestLoggingLibrary captures plugin-modified data const line = logger.popLine(); expect(line?.data[0]).toHaveProperty('perfTimings'); expect(line?.data[0].perfTimings['api-call']).toBeGreaterThanOrEqual(20); }); it('should verify plugin state management via TestLoggingLibrary', async () => { log.withPerfStart('timer'); await 
new Promise(resolve => setTimeout(resolve, 10)); log.withPerfEnd('timer').info('First log'); const firstLine = logger.popLine(); expect(firstLine?.data[0].perfTimings).toHaveProperty('timer'); // Plugin cleared state, so second log has no perfTimings log.info('Second log'); const secondLine = logger.popLine(); expect(secondLine?.data[0]).not.toHaveProperty('perfTimings'); }); }); ``` ## Live Testing Live tests verify that your mixin works correctly with real transports and outputs. Live tests use actual transports (like `ConsoleTransport`) to see the real output and verify end-to-end behavior. For comprehensive testing, you should test your mixin with both `LogLayer` and `MockLogLayer` to verify that your `augmentMock` implementation works correctly. This ensures that mock implementations behave correctly in test environments. Here's a complete live test example for the performance timing mixin that tests both implementations: ```typescript // livetest.ts import { LogLayer, MockLogLayer, useLogLayerMixin, ConsoleTransport } from 'loglayer'; import { perfTimingMixin } from './perf-timing-mixin.js'; // Register the mixin before creating LogLayer instances useLogLayerMixin(perfTimingMixin()); // Test helper function to run tests with either LogLayer or MockLogLayer async function runTests(log: LogLayer | MockLogLayer, testName: string) { console.log(`\n===== ${testName} =====\n`); // Test mixin methods on LogLayer console.log('===== withPerfStart/End on LogLayer ====='); log.withPerfStart('api-call').info('API call started'); await new Promise(resolve => setTimeout(resolve, 50)); log.withPerfEnd('api-call').info('API call completed'); // Test mixin methods on LogBuilder console.log('\n===== withPerfStart/End on LogBuilder ====='); log.withMetadata({ userId: 123 }) .withPerfStart('db-query') .info('Database query started'); await new Promise(resolve => setTimeout(resolve, 30)); log.withPerfEnd('db-query') .withMetadata({ result: 'success' }) .info('Database query 
completed'); // Test multiple timers console.log('\n===== Multiple Timers ====='); log.withPerfStart('timer-1'); log.withPerfStart('timer-2'); await new Promise(resolve => setTimeout(resolve, 20)); log.withPerfEnd('timer-1'); log.withPerfEnd('timer-2'); log.info('Both timers ended'); // Test method chaining console.log('\n===== Method Chaining ====='); log.withPerfStart('chained-timer') .withMetadata({ step: 1 }) .info('Process started'); await new Promise(resolve => setTimeout(resolve, 25)); log.withPerfEnd('chained-timer') .withMetadata({ step: 2 }) .warn('Process completed'); } console.log('\n===== Start Livetest for: perfTimingMixin =====\n'); // Test with LogLayer (real implementation) const log = new LogLayer({ transport: new ConsoleTransport({ logger: console, }), }); await runTests(log, 'Testing with LogLayer'); // Test with MockLogLayer (mock implementation) const mockLog = new MockLogLayer(); await runTests(mockLog, 'Testing with MockLogLayer'); console.log('\n===== End Livetest for: perfTimingMixin =====\n'); ``` This two-round testing pattern ensures: * Your mixin methods work correctly with the real `LogLayer` implementation * Your `augmentMock` implementation provides the same API surface for `MockLogLayer` Run the live test with: ```bash pnpm run livetest # or npx tsx livetest.ts ``` ## Implementing augmentMock for Wrapper Methods If your mixin includes methods that wrap functions (like timer methods), your `augmentMock` implementation should return the function itself so it can still be executed, just without the side effects. 
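The contrast can be illustrated without any LogLayer types — a hypothetical wrapper pair where the real version records a duration and the mock simply passes the function through (names here are for illustration only):

```typescript
type AnyFn = (...args: any[]) => any;

// Real wrapper: adds a side effect (recording a duration) around the call
function realTimer<T extends AnyFn>(fn: T, record: (ms: number) => void): T {
  return ((...args: any[]) => {
    const start = Date.now();
    const result = fn(...args);
    record(Date.now() - start);
    return result;
  }) as T;
}

// Mock wrapper: returns the function unchanged - callers can still
// invoke it, but no timing side effect occurs
function mockTimer<T extends AnyFn>(fn: T): T {
  return fn;
}
```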
For example, if you have a timer method that wraps a function:

```typescript
// Real implementation
augment: (prototype) => {
  prototype.statsTimer = function(this: LogLayer, fn: Function, stat: string) {
    // Wrap function with timing logic (`client` is the wrapped metrics client instance)
    return client.timer(fn, stat);
  };
},

// Mock implementation - return the function so it can still be executed
augmentMock: (prototype) => {
  prototype.statsTimer = function(this: MockLogLayer, fn: Function, _stat: string) {
    // Return the function itself - no-op for testing
    return fn;
  };
}
```

This ensures that when testing with `MockLogLayer`, the wrapped functions still execute correctly, just without the timing/metrics being sent.

---

---
url: 'https://loglayer.dev/transports/testing-transports.md'
description: Learn how to test transports for LogLayer
---

# Testing Transports

## Unit testing

Unfortunately, there is no prescribed way to unit test transport implementations, because each implementation is highly dependent on the target logging library. At a minimum, you will want to test that calls to LogLayer reach the intended method of the destination logger.

* Some loggers allow you to specify an output stream, which you can usually use to end-to-end test the logger output.
* Many loggers support this; check out the unit tests for the LogLayer transports for examples.
* If a logger doesn't support an output stream, replace it with a mock and test that the mock is called with the correct parameters.

## Live testing

### With `testTransportOutput`

Live testing verifies that the transport actually works with the target logger. The `@loglayer/transport` library exports a `testTransportOutput(label: string, loglayer: LogLayer)` function for this purpose. It calls the commonly used methods on the `loglayer` instance and prints the results to the console.
Lots of transports have a `src/__tests__/livetest.ts` file that you can look at to see how to use it. Here is an example of how to use it from the bunyan transport: ```typescript // livetest.ts import { testTransportOutput } from "@loglayer/transport"; import bunyan from "bunyan"; import { LogLayer } from "loglayer"; import { BunyanTransport } from "../BunyanTransport.js"; const b = bunyan.createLogger({ name: "my-logger", level: "trace", // Show all log levels serializers: { err: bunyan.stdSerializers.err, // Use Bunyan's error serializer }, }); const log = new LogLayer({ errorFieldName: "err", // Match Bunyan's error field name transport: new BunyanTransport({ logger: b, }), }); testTransportOutput("Bunyan logger", log); ``` Then you can use `pnpm run livetest` / `npx tsx livetest.ts` to run the test. ### For cloud providers For cloud provider-sent logs, you'll have to use the cloud provider's log console to verify that the logs are being sent correctly. This was done for the DataDog transports, where the logs were sent to DataDog and verified in the DataDog console. --- --- url: 'https://loglayer.dev/transports/tracer.md' description: Send logs to Tracer with the LogLayer logging library --- # Tracer Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-tracer)](https://www.npmjs.com/package/@loglayer/transport-tracer) [Tracer](https://www.npmjs.com/package/tracer) is a powerful and customizable logging library for Node.js. 
[Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/tracer)

## Installation

Install the required packages:

::: code-group

```sh [npm]
npm i loglayer @loglayer/transport-tracer tracer
```

```sh [pnpm]
pnpm add loglayer @loglayer/transport-tracer tracer
```

```sh [yarn]
yarn add loglayer @loglayer/transport-tracer tracer
```

:::

## Setup

```typescript
import { LogLayer } from 'loglayer'
import { TracerTransport } from '@loglayer/transport-tracer'
import tracer from 'tracer'

// Create a tracer logger instance
const logger = tracer.console()

const log = new LogLayer({
  transport: new TracerTransport({
    id: 'tracer',
    logger
  })
})
```

## Log Level Mapping

| LogLayer | Tracer |
|----------|--------|
| trace | trace |
| debug | debug |
| info | info |
| warn | warn |
| error | error |
| fatal | error |

## Changelog

View the changelog [here](./changelogs/tracer-changelog.md).

---

---
url: 'https://loglayer.dev/logging-api/transport-management.md'
description: How to manage transports in LogLayer
---

# Transport Management

LogLayer provides methods to dynamically add, remove, and replace transports at runtime.

::: tip Parent-Child Isolation
Transport changes only affect the current logger instance. Child loggers created before the change will retain their original transports, and parent loggers are not affected when a child modifies its transports.
:::

::: warning Potential Performance Impact
Modifying transports at runtime may have a performance impact if you are frequently creating / removing transports. It is recommended to re-use the same transport instance(s) where possible.
:::

## Adding Transports

`addTransport(transports: LogLayerTransport | Array<LogLayerTransport>): ILogLayer`

Adds one or more transports to the existing transports. If a transport with the same ID already exists, it will be replaced and its `[Symbol.dispose]()` method will be called if implemented.
```typescript
// Add a single transport
logger.addTransport(new PinoTransport({ logger: pino(), id: 'pino' }))

// Add multiple transports at once
logger.addTransport([
  new ConsoleTransport({ logger: console, id: 'console' }),
  new PinoTransport({ logger: pino(), id: 'pino' })
])

// Replace an existing transport by using the same ID
logger.addTransport(new PinoTransport({
  logger: pino({ level: 'debug' }),
  id: 'pino' // This will replace the existing 'pino' transport
}))
```

## Removing Transports

`removeTransport(id: string): boolean`

Removes a transport by its ID. Returns `true` if the transport was found and removed, `false` otherwise. If the transport implements the [Disposable](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-5-2.html#using-declarations-and-explicit-resource-management) interface, its `[Symbol.dispose]()` method will be called automatically when removed.

```typescript
const log = new LogLayer({
  transport: [
    new ConsoleTransport({ logger: console, id: 'console' }),
    new PinoTransport({ logger: pino(), id: 'pino' })
  ]
})

// Remove a specific transport
const wasRemoved = log.removeTransport('console') // true

// Trying to remove a non-existent transport returns false
const notFound = log.removeTransport('nonexistent') // false
```

## Replacing All Transports

`withFreshTransports(transports: LogLayerTransport | Array<LogLayerTransport>): ILogLayer`

Replaces all existing transports with the provided transport(s). This is useful when you want to completely change the logging destinations. All existing transports will have their `[Symbol.dispose]()` method called if implemented.
```typescript // Replace with a single transport logger.withFreshTransports(new PinoTransport({ logger: pino() })) // Replace with multiple transports logger.withFreshTransports([ new ConsoleTransport({ logger: console }), new PinoTransport({ logger: pino() }) ]) ``` ## Obtaining the underlying logger instance You can get the underlying logger for a transport if you've assigned an ID to it: ```typescript const log = new LogLayer({ transport: new ConsoleTransport({ logger: console, id: 'console' }) }) const consoleLogger = log.getLoggerInstance('console') ``` ```typescript import { type P, pino } from "pino"; import { PinoTransport } from "@loglayer/transport-pino"; const log = new LogLayer({ transport: new PinoTransport({ logger: pino(), id: 'pino' }) }) const pinoLogger = log.getLoggerInstance('pino') ``` ::: info Not all transports have a logger instance attached to them. In those cases, you will get a null result. You can identify if a transport can return such an instance if it takes in a `logger` parameter in its constructor. ::: --- --- url: 'https://loglayer.dev/transports/tslog.md' description: Send logs to TsLog with the LogLayer logging library --- # tslog Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-tslog)](https://www.npmjs.com/package/@loglayer/transport-tslog) [tslog](https://tslog.js.org/) is a powerful TypeScript logging library that provides beautiful logging with full TypeScript support. 
[Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/tslog) ## Installation Install the required packages: ::: code-group ```sh [npm] npm i loglayer @loglayer/transport-tslog tslog ``` ```sh [pnpm] pnpm add loglayer @loglayer/transport-tslog tslog ``` ```sh [yarn] yarn add loglayer @loglayer/transport-tslog tslog ``` ::: ## Setup ```typescript import { Logger } from "tslog" import { LogLayer } from 'loglayer' import { TsLogTransport } from "@loglayer/transport-tslog" const tslog = new Logger() const log = new LogLayer({ transport: new TsLogTransport({ logger: tslog }) }) log.info("Hello from tslog transport!") ``` ```bash 2025-10-07 04:01:14.302 INFO logger.ts:15 Hello from tslog transport! ``` ::: info Callsite information Because tslog is being used as part of LogLayer, LogLayer modifies the tslog instance's private property `stackDepthLevel` to a value of `9` so tslog can output the proper filename in the log output (as in the example above, it shows `logger.ts:15` instead of something like `LogLayer.ts:123`). This is also exposed as an optional configuration parameter in the transport if you need to modify it. ::: ## Configuration Options ### Required Parameters | Name | Type | Description | |------|------|-------------| | `logger` | `Logger` | The tslog Logger instance to use for logging. | ### Optional Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `stackDepthLevel` | `number` | `9` | The stack depth level to use for logging. This is useful for getting accurate file and line number information in the logs. You may need to adjust this value based on how many layers of abstraction are between your logging calls and the transport. | ## Log Level Mapping | LogLayer | tslog | |----------|--------| | trace | trace | | debug | debug | | info | info | | warn | warn | | error | error | | fatal | fatal | ## Changelog View the changelog [here](./changelogs/tslog-changelog.md). 
--- --- url: 'https://loglayer.dev/mixins/_partials/mixin-list.md' --- ### Available Mixins | Name | Package | Description | |------|---------|-------------| | [Hot-Shots (StatsD)](/mixins/hot-shots) | [![npm](https://img.shields.io/npm/v/@loglayer/mixin-hot-shots)](https://www.npmjs.com/package/@loglayer/mixin-hot-shots) | Adds the [`hot-shots`](https://github.com/bdeitte/hot-shots) API to LogLayer | --- --- url: 'https://loglayer.dev/example-integrations/async-context.md' description: Learn how to implement LogLayer across async contexts --- # Asynchronous context tracking with LogLayer This document will explain how to use LogLayer across async contexts using [`AsyncLocalStorage`](https://nodejs.org/api/async_context.html#class-asynclocalstorage). ## Why use `AsyncLocalStorage`? Trevor Lasn in his article, [AsyncLocalStorage: Simplify Context Management in Node.js ](https://www.trevorlasn.com/blog/node-async-local-storage), says it best: *AsyncLocalStorage gives you a way to maintain context across your async operations without manually passing data through every function. 
Think of it like having a secret storage box that follows your request around, carrying important information that any part of your code can access.* ### Addresses context tracking hell Like how promises addressed [callback hell](https://medium.com/@raihan_tazdid/callback-hell-in-javascript-all-you-need-to-know-296f7f5d3c1), `AsyncLocalStorage` addresses the problem of context tracking hell: ```typescript // An example of context management hell using express async function myExternalFunction(log: ILogLayer) { log.info('Doing something') // need to pass that logger down await someNestedFunction(log) } async function someNestedFunction(log: ILogLayer) { log.info('Doing something else') } // Define logging middleware app.use((req, res, next) => { // Create a new LogLayer instance for each request req.log = new LogLayer() next() }) // Use the logger in your routes app.get('/', async (req, res) => { req.log.info('Processing request to root endpoint') // You have to pass in the logger here await myExternalFunction(req.log) res.send('Hello World!') }) ``` * In the above example, we have to pass the `log` (or a context) object to every function that needs it. This can lead to a lot of boilerplate code. * Using `AsyncLocalStorage`, we can avoid this and make our code cleaner. ## Do not use async hooks You may have heard of async hooks for addressing this problem, but it has been superseded by async local storage. The documentation for [async hooks](https://nodejs.org/api/async_hooks.html) has it in an experimental state for years, citing that it has "usability issues, safety risks, and performance implications", and to instead use `AsyncLocalStorage`. ## Integration with `AsyncLocalStorage` The following example has been tested to work. It uses the [`express`](./express) framework, but you can use the `async-local-storage.ts` and `logger.ts` code for any other framework. 
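If you're new to `AsyncLocalStorage`, this minimal, framework-free sketch (no LogLayer involved) shows the core mechanic the integration relies on: a value set by `run()` is visible anywhere in the async call chain without being passed as a parameter.

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// A store holding per-request data; in the integration that follows,
// the store holds the request-scoped logger instead of a plain object
const als = new AsyncLocalStorage<{ requestId: string }>();

function currentRequestId(): string {
  // Any function in the async call chain can read the store -
  // no parameter passing required
  return als.getStore()?.requestId ?? "no-context";
}

async function deeplyNested(): Promise<string> {
  // The store survives await boundaries and timers
  await new Promise((resolve) => setTimeout(resolve, 5));
  return currentRequestId();
}

function handleRequest(requestId: string): Promise<string> {
  // Everything invoked (sync or async) inside run() sees this store
  return als.run({ requestId }, () => deeplyNested());
}
```

Awaiting `handleRequest('req-1')` resolves to `'req-1'`, while calling `currentRequestId()` outside of `run()` falls back to `'no-context'` — the same fallback pattern the `getLogger()` helper in the integration uses to return a default logger outside of a request.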
### Create a file for the `AsyncLocalStorage` instance ```typescript // async-local-storage.ts import { AsyncLocalStorage } from "node:async_hooks"; import type { ILogLayer } from "loglayer"; export const asyncLocalStorage = new AsyncLocalStorage<{ logger: ILogLayer }>(); ``` ### Create a file to get the logger instance from the storage ```typescript // logger.ts import { asyncLocalStorage } from "./async-local-storage"; import { ConsoleTransport, LogLayer } from "loglayer"; import type { ILogLayer } from "loglayer"; export function createLogger() { return new LogLayer({ transport: new ConsoleTransport({ logger: console, }), }) } // Create a default logger for non-request contexts const defaultLogger = createLogger(); export function getLogger(): ILogLayer { const store = asyncLocalStorage.getStore(); if (!store) { // Use non-request specific logger // Remove these console logs once you're convinced it works console.log("using non-async local storage logger"); return defaultLogger; } console.log("Using async local storage logger"); return store.logger; } ``` ### Register the logger per-request to the storage ```typescript // app.ts import express from 'express'; import { asyncLocalStorage } from "./async-local-storage"; import { getLogger, createLogger } from "./logger"; import type { ILogLayer } from "loglayer"; // Extend Express Request type to include log property declare global { namespace Express { interface Request { log: ILogLayer; } } } // Initialize Express app const app = express(); // no need to pass in the logger now that we can use async local storage async function myExternalFunction() { // Will use the request-specific logger if being called // from the context of a request getLogger().info('Doing something') await someNestedFunction() } async function someNestedFunction() { getLogger().info('Doing something else') } // Define logging middleware app.use((req, res, next) => { const logger = createLogger() req.log = logger; // Stores the 
request-specific logger into storage asyncLocalStorage.run({ logger }, next); }) // Use the logger in your routes app.get('/', async (req, res) => { // You can also use getLogger() instead req.log.info('Processing request to root endpoint') await myExternalFunction() res.send('Hello World!') }) // Start the server app.listen(3000, () => { console.log('Server is running on port 3000'); }); ``` ### Output ``` Processing request to root endpoint Using async local storage logger Doing something Using async local storage logger Doing something else ``` --- --- url: 'https://loglayer.dev/example-integrations/express.md' description: Learn how to implement LogLayer with Express --- # LogLayer with Express LogLayer can be easily integrated with Express as middleware to provide request-scoped logging via `req.log`. This guide will show you how to set it up. ## Installation First, install the required packages. You can use any transport you prefer - we'll use Pino in this example: ::: code-group ```sh [npm] npm i loglayer @loglayer/transport-pino pino express serialize-error ``` ```sh [pnpm] pnpm add loglayer @loglayer/transport-pino pino express serialize-error ``` ```sh [yarn] yarn add loglayer @loglayer/transport-pino pino express serialize-error ``` ::: ## Example ```typescript import express from 'express' import { pino } from 'pino' import { ILogLayer, LogLayer } from 'loglayer' import { PinoTransport } from '@loglayer/transport-pino' import { serializeError } from 'serialize-error'; // Create a Pino instance (only needs to be done once) const pinoLogger = pino({ level: 'trace' // Set to desired log level }) const app = express() // Add types for the req.log property declare global { namespace Express { interface Request { log: ILogLayer } } } // Define logging middleware app.use((req, res, next) => { // Create a new LogLayer instance for each request req.log = new LogLayer({ transport: new PinoTransport({ logger: pinoLogger }), errorSerializer: serializeError, }).withContext({ reqId:
crypto.randomUUID(), // Add unique request ID method: req.method, path: req.path, ip: req.ip }) next() }) // Use the logger in your routes app.get('/', (req, res) => { req.log.info('Processing request to root endpoint') // Add additional context for specific logs req.log .withContext({ query: req.query }) .info('Request includes query parameters') res.send('Hello World!') }) // Error handling middleware app.use((err: Error, req: express.Request, res: express.Response, next: express.NextFunction) => { req.log.withError(err).error('An error occurred while processing the request') res.status(500).send('Internal Server Error') }) app.listen(3000, () => { console.log('Server started on port 3000') }) ``` ## Using Async Local Storage You will most likely want to use async local storage to avoid passing the logger around in your code. See an example of how to do this [here](./async-context). --- --- url: 'https://loglayer.dev/example-integrations/fastify.md' description: Learn how to implement LogLayer with Fastify --- # LogLayer with Fastify ## Installation First, install the required packages. 
Pino is the default logger for Fastify, so we'll use it in this example:

::: code-group

```sh [npm]
npm i loglayer @loglayer/transport-pino pino fastify serialize-error
```

```sh [pnpm]
pnpm add loglayer @loglayer/transport-pino pino fastify serialize-error
```

```sh [yarn]
yarn add loglayer @loglayer/transport-pino pino fastify serialize-error
```

:::

## Example

```typescript
import Fastify from 'fastify'
import { type P, pino } from "pino";
import { ILogLayer, LogLayer } from 'loglayer'
import { PinoTransport } from '@loglayer/transport-pino'
import { serializeError } from 'serialize-error';

declare module 'fastify' {
  interface FastifyBaseLogger extends ILogLayer {}
}

const p: P.Logger = pino();

const logger = new LogLayer({
  transport: new PinoTransport({
    logger: p,
  }),
  errorSerializer: serializeError,
});

const fastify = Fastify({
  // @ts-ignore LogLayer doesn't implement some of Fastify's logger interface
  // but we've found this hasn't been an issue in production usage
  loggerInstance: logger,
  // Fastify's built-in request logging is extremely verbose,
  // so it's disabled here; re-enable it only when debugging
  disableRequestLogging: true,
})

// Add request path to logs
fastify.addHook('onRequest', async (request, reply) => {
  // @ts-ignore LogLayer doesn't implement some of Fastify's logger interface
  request.log = request.log.withContext({
    path: request.url
  });
});

// Declare a route
fastify.get('/', function (request, reply) {
  request.log.info('hello world')
  reply.send({ hello: 'world' })
})

// Run the server!
fastify.listen({ port: 3000 }, function (err, address) {
  if (err) {
    fastify.log.withError(err).error("error starting server")
    process.exit(1)
  }
  // Server is now listening on ${address}
})
```

## Example repo

You can find a complete example of using LogLayer with Fastify in the [fastify-starter-turbo-monorepo](https://github.com/theogravity/fastify-starter-turbo-monorepo) example. It uses [`AsyncLocalStorage`](./async-context) to store the logger for use in request contexts.
--- --- url: 'https://loglayer.dev/example-integrations/hono.md' description: Learn how to implement LogLayer with Hono --- # LogLayer with Hono LogLayer can be easily integrated with [Hono](https://hono.dev/) using its context system to provide request-scoped logging with full type safety. This guide will show you how to set it up. This guide assumes you already have Hono installed with a project created. ## Installation First, install the required packages. You can use any transport you prefer - we'll use Pino in this example: ::: code-group ```sh [npm] npm i loglayer @loglayer/transport-pino pino serialize-error ``` ```sh [pnpm] pnpm add loglayer @loglayer/transport-pino pino serialize-error ``` ```sh [yarn] yarn add loglayer @loglayer/transport-pino pino serialize-error ``` ::: ## Example Create a `logger.ts` file: ```typescript // logger.ts import { LogLayer } from 'loglayer' import { PinoTransport } from '@loglayer/transport-pino' import { serializeError } from 'serialize-error' import { pino } from 'pino' // Create a Pino instance (only needs to be done once) const pinoLogger = pino({ level: 'trace' // Set to desired log level }) const log = new LogLayer({ errorSerializer: serializeError, transport: [ new PinoTransport({ logger: pinoLogger }) ] }) export function getLogger() { return log; } ``` Then in your application: ```typescript // index.ts import { Hono } from 'hono' import { serve } from "@hono/node-server"; import { createMiddleware } from 'hono/factory' import { type ILogLayer } from 'loglayer' import { getLogger } from "./logger.js"; // Define the context variables for type safety type Variables = { log: ILogLayer } const app = new Hono<{ Variables: Variables }>() const loggerMiddleware = createMiddleware(async (c, next) => { // Create a new LogLayer instance for each request const log = getLogger().child().withContext({ reqId: crypto.randomUUID(), // Add unique request ID method: c.req.method, path: c.req.path, }) // Set the logger in the context 
// for type safety
  c.set('log', log)

  await next()
})

app.use('*', loggerMiddleware)

// Use the logger in your routes
app.get('/', (c) => {
  const log = c.get('log') // Fully typed!

  log.info('Processing request to root endpoint')

  // Add additional context for specific logs
  log
    .withContext({ query: c.req.query() })
    .info('Request includes query parameters')

  return c.text('Hello World!')
})

// Example with error handling
app.get('/api/users/:id', async (c) => {
  const log = c.get('log')
  const userId = c.req.param('id')

  try {
    log.withContext({ userId }).info('Fetching user data')

    // Simulate some async operation
    const user = await fetchUser(userId)

    log.withContext({ userId }).info('User data retrieved successfully')

    return c.json(user)
  } catch (error) {
    log.withError(error).withMetadata({ userId }).error('Failed to fetch user data')
    return c.json({ error: 'User not found' }, 404)
  }
})

// Error handling middleware
app.onError((err, c) => {
  const log = c.get('log')
  log.withError(err).error('An error occurred while processing the request')
  return c.json({ error: 'Internal Server Error' }, 500)
})

// Helper function for demonstration
async function fetchUser(id: string) {
  // Simulate database lookup
  if (id === '123') {
    return { id: '123', name: 'John Doe' }
  }
  throw new Error('User not found')
}

serve({ fetch: app.fetch, port: 3000 }, (info) => {
  getLogger().info(`Server is running on http://localhost:${info.port}`)
})

export default app
```

## Using Async Local Storage

You will most likely want to use async local storage to avoid passing the logger around in your code. See an example of how to do this [here](./async-context).
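The core idea behind that pattern can be sketched with Node's `AsyncLocalStorage` alone. This is a minimal, self-contained sketch; the names `getRequestLogger` and `handleRequest` are illustrative and not part of Hono or LogLayer (see the linked guide for the full pattern):

```typescript
import { AsyncLocalStorage } from 'node:async_hooks'

// Minimal logger shape for the sketch; in a real app this would be an ILogLayer.
interface Logger { info(msg: string): void }

const storage = new AsyncLocalStorage<Logger>()

// Instead of c.get('log'), business logic reads the request-scoped logger from storage.
function getRequestLogger(): Logger {
  const log = storage.getStore()
  if (!log) throw new Error('No logger bound to this async context')
  return log
}

// Middleware would wrap each request in storage.run(); simulated here:
const lines: string[] = []
function handleRequest(reqId: string) {
  const log: Logger = { info: (msg) => lines.push(`[${reqId}] ${msg}`) }
  storage.run(log, () => {
    // Deep inside handlers, no logger argument is passed around:
    getRequestLogger().info('processing')
  })
}

handleRequest('req-1')
handleRequest('req-2')
console.log(lines.join('\n'))
// [req-1] processing
// [req-2] processing
```

Each call to `storage.run()` binds its own logger for the duration of that async call chain, which is exactly what makes per-request context work without threading a `log` parameter through every function.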
---

---
url: 'https://loglayer.dev/logging-api/unit-testing.md'
description: Learn how to silence logging during unit tests using MockLogLayer
---

# Working with LogLayer in Testing

## No-op Mock LogLayer for Unit Testing

LogLayer provides a `MockLogLayer` class that implements the same `ILogLayer` interface as `LogLayer`, but with every method as a no-op (it does nothing). This is useful for silencing log output when testing services that log.

The following example tests such a service:

```typescript
import { describe, it, expect } from 'vitest'
import { MockLogLayer, ILogLayer } from 'loglayer'

// Example service that uses logging
class UserService {
  private logger: ILogLayer

  constructor(logger: ILogLayer) {
    this.logger = logger
  }

  async createUser(username: string, email: string) {
    try {
      // Simulate user creation
      this.logger.withMetadata({ username, email }).info('Creating new user')

      if (!email.includes('@')) {
        const error = new Error('Invalid email format')
        this.logger.withError(error).error('Failed to create user')
        throw error
      }

      // Simulate successful creation
      this.logger.withContext({ userId: '123' }).info('User created successfully')

      return { id: '123', username, email }
    } catch (error) {
      this.logger.errorOnly(error)
      throw error
    }
  }
}

describe('UserService', () => {
  it('should create a user successfully', async () => {
    // Create a mock logger
    const mockLogger = new MockLogLayer()
    const userService = new UserService(mockLogger)

    const result = await userService.createUser('testuser', 'test@example.com')

    expect(result).toEqual({ id: '123', username: 'testuser', email: 'test@example.com' })
  })

  it('should throw error for invalid email', async () => {
    const mockLogger = new MockLogLayer()
    const userService = new UserService(mockLogger)

    await expect(
      userService.createUser('testuser', 'invalid-email')
    ).rejects.toThrow('Invalid email format')
  })

  // Example showing that the mock logger implements all methods but doesn't actually log
it('should handle all logging methods without throwing errors', () => { const mockLogger = new MockLogLayer() // All these calls should work without throwing errors mockLogger.info('test message') mockLogger.error('error message') mockLogger.warn('warning message') mockLogger.debug('debug message') mockLogger.trace('trace message') mockLogger.fatal('fatal message') // Method chaining should work mockLogger .withContext({ userId: '123' }) .withMetadata({ action: 'test' }) .info('test with context and metadata') // Error logging should work mockLogger.withError(new Error('test error')).error('error occurred') mockLogger.errorOnly(new Error('standalone error')) // All these calls should complete without throwing errors expect(true).toBe(true) }) }) ``` ## Writing Tests Against LogLayer Directly When a new instance of `MockLogLayer` is created, it also internally creates a new instance of a [`MockLogBuilder`](https://github.com/loglayer/loglayer/blob/master/packages/core/loglayer/src/MockLogBuilder.ts), which is used when chaining methods like `withMetadata`, `withError`, etc. `MockLogLayer` and `MockLogBuilder` both implement their respective interfaces with generic type parameters: * `MockLogLayer` implements `ILogLayer` and `ILogBuilder` * `MockLogBuilder` implements `ILogBuilder` This allows proper type preservation through method chaining and mixin support in tests. `MockLogLayer` has three methods to help with directly testing the logger itself: * `getMockLogBuilder(): ILogBuilder`: Returns the underlying `MockLogBuilder` instance. * `resetMockLogBuilder()`: Tells `MockLogLayer` to create a new internal instance of the `MockLogBuilder`. * `setMockLogBuilder(builder: ILogBuilder)`: Sets the mock log builder instance to be used if you do not want to use the internal instance. The following example shows how you can use these methods to write tests against the logger directly. 
```typescript import { describe, expect, it, vi } from "vitest"; import { MockLogLayer, MockLogBuilder } from "loglayer"; describe("MockLogLayer tests", () => { it("should be able to mock a log message method", () => { const logger = new MockLogLayer(); logger.info = vi.fn(); logger.info("testing"); expect(logger.info).toBeCalledWith("testing"); }); it("should be able to spy on a log message method", () => { const logger = new MockLogLayer(); const infoSpy = vi.spyOn(logger, "info"); logger.info("testing"); expect(infoSpy).toBeCalledWith("testing"); }); it("should be able to spy on a chained log message method", () => { const logger = new MockLogLayer(); // Get the mock builder instance const builder = logger.getMockLogBuilder(); const infoSpy = vi.spyOn(builder, "info"); logger.withMetadata({ test: "test" }).info("testing"); expect(infoSpy).toBeCalledWith("testing"); }); it("should be able to mock a log message method when using withMetadata", () => { const logger = new MockLogLayer(); const builder = logger.getMockLogBuilder(); // to be able to chain withMetadata with info, we need to // make sure the withMetadata method returns the builder builder.withMetadata = vi.fn().mockReturnValue(builder); builder.info = vi.fn(); logger.withMetadata({ test: "test" }).info("testing"); expect(builder.withMetadata).toBeCalledWith({ test: "test" }); expect(builder.info).toBeCalledWith("testing"); }); it("should be able to spy on a log message method when using withMetadata", () => { const logger = new MockLogLayer(); const builder = logger.getMockLogBuilder(); // to be able to chain withMetadata with info, we need to // make sure the withMetadata method returns the builder const metadataSpy = vi.spyOn(builder, "withMetadata"); const infoSpy = vi.spyOn(builder, "info"); logger.withMetadata({ test: "test" }).info("testing"); expect(metadataSpy).toBeCalledWith({ test: "test" }); expect(infoSpy).toBeCalledWith("testing"); }); it('should be able to spy on a multi-chained log 
message method', () => { const logger = new MockLogLayer(); const builder = logger.getMockLogBuilder(); const error = new Error('test error'); const metadataSpy = vi.spyOn(builder, 'withMetadata'); const errorSpy = vi.spyOn(builder, 'withError'); const infoSpy = vi.spyOn(builder, 'info'); logger .withMetadata({ test: 'test' }) .withError(error) .info('testing'); expect(metadataSpy).toBeCalledWith({ test: 'test' }); expect(errorSpy).toBeCalledWith(error); expect(infoSpy).toBeCalledWith('testing'); }); it("should use a custom MockLogBuilder", () => { const builder = new MockLogBuilder(); const logger = new MockLogLayer(); // Get the mock builder instance logger.setMockLogBuilder(builder); builder.withMetadata = vi.fn().mockReturnValue(builder); builder.info = vi.fn(); logger.withMetadata({ test: "test" }).info("testing"); expect(builder.withMetadata).toBeCalledWith({ test: "test" }); expect(builder.info).toBeCalledWith("testing"); }); it("should be able to mock errorOnly", () => { const error = new Error("testing"); const logger = new MockLogLayer(); logger.errorOnly = vi.fn(); logger.errorOnly(error); expect(logger.errorOnly).toBeCalledWith(error); }); }); ``` ## References * [MockLogLayer](https://github.com/loglayer/loglayer/blob/master/packages/core/loglayer/src/MockLogLayer.ts) * [MockLogBuilder](https://github.com/loglayer/loglayer/blob/master/packages/core/loglayer/src/MockLogBuilder.ts) --- --- url: 'https://loglayer.dev/transports/multiple-transports.md' description: Learn how to use multiple transports with LogLayer --- # Multiple Transports You can use multiple logging libraries simultaneously: ```typescript import { LogLayer } from 'loglayer' import { PinoTransport } from "@loglayer/transport-pino" import { WinstonTransport } from "@loglayer/transport-winston" const log = new LogLayer({ transport: [ new PinoTransport({ logger: pinoLogger }), new WinstonTransport({ logger: winstonLogger }) ] }) ``` --- --- url: 
'https://loglayer.dev/transports/victoria-logs.md' description: Send logs to Victoria Metrics' VictoriaLogs with the LogLayer logging library --- # VictoriaLogs Transport [![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-victoria-logs)](https://www.npmjs.com/package/@loglayer/transport-victoria-logs) This transport adds support for [Victoria Metrics](https://victoriametrics.com/)' [VictoriaLogs](https://victoriametrics.com/products/victorialogs/) and is a wrapper around the [HTTP transport](https://loglayer.dev/transports/http) using the [VictoriaLogs JSON stream API](https://docs.victoriametrics.com/victorialogs/data-ingestion/#json-stream-api). [Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/victoria-logs) [Vibe Code Prompts](https://github.com/loglayer/loglayer/tree/master/packages/transports/victoria-logs/PROMPTS.md) *The code has been manually tested against a local VictoriaLogs instance.* ## Installation ::: code-group ```bash [npm] npm install loglayer @loglayer/transport-victoria-logs serialize-error ``` ```bash [pnpm] pnpm add loglayer @loglayer/transport-victoria-logs serialize-error ``` ```bash [yarn] yarn add loglayer @loglayer/transport-victoria-logs serialize-error ``` ::: ## Basic Usage ```typescript import { LogLayer } from 'loglayer' import { VictoriaLogsTransport } from "@loglayer/transport-victoria-logs" import { serializeError } from "serialize-error"; const log = new LogLayer({ errorSerializer: serializeError, transport: new VictoriaLogsTransport({ url: "http://localhost:9428", // optional, defaults to http://localhost:9428 // Configure stream-level fields for better performance streamFields: () => ({ service: "my-app", environment: process.env.NODE_ENV || "development", instance: process.env.HOSTNAME || "unknown", }), // Custom timestamp function (optional) timestamp: () => new Date().toISOString(), // Custom HTTP parameters for VictoriaLogs ingestion httpParameters: { _time_field: 
"_time",
      _msg_field: "_msg",
    },
    // All other HttpTransport options are available and optional
    compression: false,        // optional, defaults to false
    maxRetries: 3,             // optional, defaults to 3
    retryDelay: 1000,          // optional, defaults to 1000
    respectRateLimit: true,    // optional, defaults to true
    enableBatchSend: true,     // optional, defaults to true
    batchSize: 100,            // optional, defaults to 100
    batchSendTimeout: 5000,    // optional, defaults to 5000ms
    onError: (err) => {
      console.error('Failed to send logs to VictoriaLogs:', err);
    },
    onDebug: (entry) => {
      console.log('Log entry being sent to VictoriaLogs:', entry);
    },
  })
})

// Use the logger
log.info("This is a test message");
log.withMetadata({ userId: "123" }).error("User not found");
```

## Configuration

The VictoriaLogs transport extends the [HTTP transport configuration](/transports/http#configuration) with VictoriaLogs specific defaults:

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `url` | `string` | `"http://localhost:9428"` | The VictoriaLogs host URL. The `/insert/jsonline` path is automatically appended |
| `method` | `string` | `"POST"` | HTTP method to use for requests |
| `headers` | `Record<string, string> \| (() => Record<string, string>)` | `{ "Content-Type": "application/stream+json" }` | Headers to include in the request |
| `contentType` | `string` | `"application/stream+json"` | Content type for single log requests |
| `batchContentType` | `string` | `"application/stream+json"` | Content type for batch log requests |
| `streamFields` | `() => Record<string, any>` | `() => ({})` | Function to generate stream-level fields for VictoriaLogs. The keys of the returned object are automatically used as the values for the `_stream_fields` parameter. See [stream fields documentation](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) |
| `timestamp` | `() => string` | `() => new Date().toISOString()` | Function to generate the timestamp for the `_time` field |
| `httpParameters` | `Record<string, string>` | `{}` | Custom HTTP query parameters for VictoriaLogs ingestion. See [HTTP parameters documentation](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) |
| `payloadTemplate` | `(data: { logLevel: string; message: string; data?: Record<string, any> }) => string` | VictoriaLogs format | Pre-configured payload template for VictoriaLogs |

### HTTP Transport Optional Parameters

#### General Parameters

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `enabled` | `boolean` | `true` | Whether the transport is enabled |
| `level` | `"trace" \| "debug" \| "info" \| "warn" \| "error" \| "fatal"` | `"trace"` | Minimum log level to process. Logs below this level will be filtered out |
| `method` | `string` | `"POST"` | HTTP method to use for requests |
| `headers` | `Record<string, string> \| (() => Record<string, string>)` | `{}` | Headers to include in the request. Can be an object or a function that returns headers |
| `contentType` | `string` | `"application/json"` | Content type for single log requests. User-specified headers take precedence |
| `compression` | `boolean` | `false` | Whether to use gzip compression |
| `maxRetries` | `number` | `3` | Number of retry attempts before giving up |
| `retryDelay` | `number` | `1000` | Base delay between retries in milliseconds |
| `respectRateLimit` | `boolean` | `true` | Whether to respect rate limiting by waiting when a 429 response is received |
| `maxLogSize` | `number` | `1048576` | Maximum size of a single log entry in bytes (1MB) |
| `maxPayloadSize` | `number` | `5242880` | Maximum size of the payload (uncompressed) in bytes (5MB) |
| `enableNextJsEdgeCompat` | `boolean` | `false` | Whether to enable Next.js Edge compatibility |

#### Debug Parameters

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `onError` | `(err: Error) => void` | - | Error handling callback |
| `onDebug` | `(entry: Record<string, any>) => void` | - | Debug callback for inspecting log entries before they are sent |
| `onDebugReqRes` | `(reqRes: { req: { url: string; method: string; headers: Record<string, string>; body: string \| Uint8Array }; res: { status: number; statusText: string; headers: Record<string, string>; body: string } }) => void` | - | Debug callback for inspecting HTTP requests and responses. Provides complete request/response details including headers and body content |

#### Batch Parameters

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `batchContentType` | `string` | `"application/json"` | Content type for batch log requests. User-specified headers take precedence |
| `enableBatchSend` | `boolean` | `true` | Whether to enable batch sending |
| `batchSize` | `number` | `100` | Number of log entries to batch before sending |
| `batchSendTimeout` | `number` | `5000` | Timeout in milliseconds for sending batches regardless of size |
| `batchSendDelimiter` | `string` | `"\n"` | Delimiter to use between log entries in batch mode |
| `batchMode` | `"delimiter" \| "field" \| "array"` | `"delimiter"` | Batch mode for sending multiple log entries. "delimiter" joins entries with a delimiter, "field" wraps an array of entries in an object with a field name, "array" sends entries as a plain JSON array of objects |
| `batchFieldName` | `string` | - | Field name to wrap batch entries in when batchMode is "field" |

## VictoriaLogs Specific Features

### Pre-configured Payload Template

The transport comes with a pre-configured payload template that formats logs for VictoriaLogs according to the [data model](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model):

```typescript
payloadTemplate: ({ logLevel, message, data }) => {
  const streamFieldsData = streamFields();
  const timeValue = timestamp();

  // Determine field names based on HTTP parameters
  const msgField = httpParameters._msg_field || "_msg";
  const timeField = httpParameters._time_field || "_time";

  return JSON.stringify({
    [msgField]: message || "(no message)",
    [timeField]: timeValue,
    level: logLevel,
    ...streamFieldsData,
    ...data,
  });
}
```

**Note**: The payload template automatically adapts to your HTTP parameters. For example, if you set `_msg_field: "message"` in your HTTP parameters, the transport will use `message` as the field name instead of `_msg`.

### Stream Fields Configuration

Configure stream-level fields to optimize VictoriaLogs performance.
Stream fields should contain fields that uniquely identify your application instance and remain constant during its lifetime: ```typescript new VictoriaLogsTransport({ url: "http://localhost:9428", streamFields: () => ({ service: "my-app", environment: process.env.NODE_ENV || "development", instance: process.env.HOSTNAME || "unknown", // Add other constant fields that identify your application instance // Avoid high-cardinality fields like user_id, ip, trace_id, etc. }), }) ``` **Important**: * Never add high-cardinality fields (like `user_id`, `ip`, `trace_id`) to stream fields as this can cause performance issues. Only include fields that remain constant during your application instance's lifetime. * The keys of the object returned by `streamFields()` are automatically used as the values for the `_stream_fields` HTTP parameter. For example, if `streamFields()` returns `{ service: "my-app", environment: "prod" }`, the transport will automatically add `_stream_fields=service,environment` to the HTTP query parameters. For more information about stream fields and their importance for performance, see the [VictoriaLogs stream fields documentation](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields). ### HTTP Parameters Configure custom HTTP query parameters for VictoriaLogs ingestion. 
This allows you to specify how VictoriaLogs should process your logs: ```typescript new VictoriaLogsTransport({ url: "http://localhost:9428", httpParameters: { _time_field: "_time", // Specify the timestamp field name _msg_field: "_msg", // Specify the message field name // Add other VictoriaLogs HTTP parameters as needed }, }) ``` Common HTTP parameters include: * `_stream_fields`: Comma-separated list of fields to use for stream identification (automatically set from `streamFields()` keys) * `_time_field`: Name of the timestamp field in your logs * `_msg_field`: Name of the message field in your logs * `_default_msg_value`: Default message value when the message field is empty **Important**: The payload template automatically adapts to your HTTP parameters. For example: * If you set `_msg_field: "message"`, the transport will use `message` as the field name instead of `_msg` * If you set `_time_field: "timestamp"`, the transport will use `timestamp` as the field name instead of `_time` For a complete list of available HTTP parameters, see the [VictoriaLogs HTTP parameters documentation](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters). ### Automatic URL Construction The transport automatically appends the `/insert/jsonline` path to your VictoriaLogs host URL: ```typescript // This URL: "http://localhost:9428" // Becomes: "http://localhost:9428/insert/jsonline" new VictoriaLogsTransport({ url: "http://localhost:9428" }) ``` ### VictoriaLogs JSON Stream API This transport uses the VictoriaLogs JSON stream API, which supports: * Unlimited number of log lines in a single request * Automatic timestamp handling when `_time` is set to `"0"` * Stream-based processing for high throughput * Support for custom fields and metadata For more information about the VictoriaLogs JSON stream API, see the [official documentation](https://docs.victoriametrics.com/victorialogs/data-ingestion/#json-stream-api). 
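To make the field-name adaptation concrete, the payload-template behavior described above can be sketched as plain functions. This is an illustrative, self-contained sketch of the documented logic, not the transport's actual internals; the fixed `timestamp` value exists only to make the output reproducible:

```typescript
// Assumed configuration, mirroring the options documented above:
const httpParameters: Record<string, string> = { _msg_field: 'message', _time_field: 'timestamp' }
const streamFields = () => ({ service: 'my-app', environment: 'production' })
const timestamp = () => '2024-01-01T00:00:00.000Z' // fixed value for a reproducible example

// Sketch of the pre-configured payload template: field names adapt to httpParameters.
function payloadTemplate({ logLevel, message, data }: {
  logLevel: string; message: string; data?: Record<string, unknown>
}): string {
  const msgField = httpParameters._msg_field || '_msg'
  const timeField = httpParameters._time_field || '_time'
  return JSON.stringify({
    [msgField]: message || '(no message)',
    [timeField]: timestamp(),
    level: logLevel,
    ...streamFields(),
    ...data,
  })
}

console.log(payloadTemplate({ logLevel: 'error', message: 'User not found', data: { userId: '123' } }))
// {"message":"User not found","timestamp":"2024-01-01T00:00:00.000Z","level":"error","service":"my-app","environment":"production","userId":"123"}
```

Because `_msg_field` and `_time_field` are set, the emitted JSON line uses `message` and `timestamp` as field names rather than the VictoriaLogs defaults `_msg` and `_time`.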
## Customization

Since this transport extends the HTTP transport, you can override any HTTP transport option:

```typescript
new VictoriaLogsTransport({
  url: "http://my-victoria-logs:9428",
  // Override the default payload template
  payloadTemplate: ({ logLevel, message, data }) =>
    JSON.stringify({
      _msg: message,
      _time: new Date().toISOString(),
      level: logLevel,
      custom_field: "custom_value",
      ...data,
    }),
  // Override other HTTP transport options
  compression: true,
  batchSize: 50,
  maxRetries: 5,
})
```

## Related

* [HTTP Transport](/transports/http) - The underlying HTTP transport
* [VictoriaLogs Documentation](https://docs.victoriametrics.com/victorialogs/) - Official VictoriaLogs documentation
* [VictoriaLogs JSON Stream API](https://docs.victoriametrics.com/victorialogs/data-ingestion/#json-stream-api) - API documentation for the JSON stream endpoint
* [VictoriaLogs HTTP Parameters](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) - HTTP query parameters documentation
* [VictoriaLogs Stream Fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) - Stream fields documentation

---

---
url: 'https://loglayer.dev/transports/winston.md'
description: Send logs to Winston with the LogLayer logging library
---

# Winston Transport

[![NPM Version](https://img.shields.io/npm/v/%40loglayer%2Ftransport-winston)](https://www.npmjs.com/package/@loglayer/transport-winston)

[Winston](https://github.com/winstonjs/winston): a logger for just about everything.
[Transport Source](https://github.com/loglayer/loglayer/tree/master/packages/transports/winston) ## Installation Install the required packages: ::: code-group ```sh [npm] npm i loglayer @loglayer/transport-winston winston ``` ```sh [pnpm] pnpm add loglayer @loglayer/transport-winston winston ``` ```sh [yarn] yarn add loglayer @loglayer/transport-winston winston ``` ::: ## Setup ```typescript import winston from 'winston' import { LogLayer } from 'loglayer' import { WinstonTransport } from "@loglayer/transport-winston" const w = winston.createLogger({}) const log = new LogLayer({ transport: new WinstonTransport({ logger: w }) }) ``` ## Configuration Options ### Required Parameters None - all parameters are optional. ### Optional Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `level` | `"trace" \| "debug" \| "info" \| "warn" \| "error" \| "fatal"` | `"trace"` | Minimum log level to process. Messages with a lower priority level will be ignored | | `enabled` | `boolean` | `true` | If false, the transport will not send any logs to the logger | | `consoleDebug` | `boolean` | `false` | If true, the transport will also log messages to the console for debugging | | `id` | `string` | - | A unique identifier for the transport | ## Log Level Mapping | LogLayer | Winston | |----------|---------| | trace | silly | | debug | debug | | info | info | | warn | warn | | error | error | | fatal | error | ## Changelog View the changelog [here](./changelogs/winston-changelog.md).
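The level mapping table above can be expressed as a simple lookup. This is an illustrative sketch for reference, not the transport's actual source:

```typescript
// LogLayer -> Winston level mapping, per the table above.
const toWinstonLevel: Record<string, string> = {
  trace: 'silly',
  debug: 'debug',
  info: 'info',
  warn: 'warn',
  error: 'error',
  fatal: 'error', // Winston has no "fatal" level, so it is logged as "error"
}

console.log(toWinstonLevel['trace'], toWinstonLevel['fatal'])
// silly error
```

In practice this means `log.fatal(...)` and `log.error(...)` are indistinguishable on the Winston side, while `log.trace(...)` appears at Winston's `silly` level.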