stream

The 'node:stream' module provides the API for working with streaming data in Node.js, including readable, writable, duplex, and transform streams.

Streams are event emitters that process data in chunks, offering memory-efficient handling of large data flows, such as file reading/writing and network communication.

Works in Bun

Fully implemented.

    • function finished(
      stream: ReadableStream | WritableStream | ReadableStream<any> | WritableStream<any>,
      options: FinishedOptions,
      callback: (err?: null | ErrnoException) => void
      ): () => void;

      A readable and/or writable stream/webstream.

      A function to get notified when a stream is no longer readable, writable or has experienced an error or a premature close event.

      import { finished } from 'node:stream';
      import fs from 'node:fs';
      
      const rs = fs.createReadStream('archive.tar');
      
      finished(rs, (err) => {
        if (err) {
          console.error('Stream failed.', err);
        } else {
          console.log('Stream is done reading.');
        }
      });
      
      rs.resume(); // Drain the stream.
      

      Especially useful in error handling scenarios where a stream is destroyed prematurely (like an aborted HTTP request), and will not emit 'end' or 'finish'.

      The finished API also provides a promise version.
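
      For reference, a minimal sketch of the promise-based variant from 'node:stream/promises' (the file name is only illustrative):

      import { finished } from 'node:stream/promises';
      import fs from 'node:fs';

      const rs = fs.createReadStream('archive.tar');
      rs.resume(); // Drain the stream.
      await finished(rs);
      console.log('Stream is done reading.');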

      stream.finished() leaves dangling event listeners (in particular 'error', 'end', 'finish' and 'close') after callback has been invoked. The reason for this is so that unexpected 'error' events (due to incorrect stream implementations) do not cause unexpected crashes. If this is unwanted behavior then the returned cleanup function needs to be invoked in the callback:

      const cleanup = finished(rs, (err) => {
        cleanup();
        // ...
      });
      
      @param stream

      A readable and/or writable stream.

      @param callback

      A callback function that takes an optional error argument.

      @returns

      A cleanup function which removes all registered listeners.

      function finished(
      stream: ReadableStream | WritableStream | ReadableStream<any> | WritableStream<any>,
      callback: (err?: null | ErrnoException) => void
      ): () => void;

      A readable and/or writable stream/webstream.

      A function to get notified when a stream is no longer readable, writable or has experienced an error or a premature close event.

      import { finished } from 'node:stream';
      import fs from 'node:fs';
      
      const rs = fs.createReadStream('archive.tar');
      
      finished(rs, (err) => {
        if (err) {
          console.error('Stream failed.', err);
        } else {
          console.log('Stream is done reading.');
        }
      });
      
      rs.resume(); // Drain the stream.
      

      Especially useful in error handling scenarios where a stream is destroyed prematurely (like an aborted HTTP request), and will not emit 'end' or 'finish'.

      The finished API also provides a promise version.

      stream.finished() leaves dangling event listeners (in particular 'error', 'end', 'finish' and 'close') after callback has been invoked. The reason for this is so that unexpected 'error' events (due to incorrect stream implementations) do not cause unexpected crashes. If this is unwanted behavior then the returned cleanup function needs to be invoked in the callback:

      const cleanup = finished(rs, (err) => {
        cleanup();
        // ...
      });
      
      @param stream

      A readable and/or writable stream.

      @param callback

      A callback function that takes an optional error argument.

      @returns

      A cleanup function which removes all registered listeners.

      namespace finished

    • function pipeline<S extends PipelineSource<any>, D extends WritableStream | WritableStream<any> | TransformStream<any, any> | WritableStream<string | Buffer<ArrayBufferLike>> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineDestinationFunction<ReadableStream, any> | PipelineDestinationFunction<ReadableStream<any>, any> | PipelineDestinationFunction<TransformStream<any, any>, any> | PipelineDestinationFunction<Iterable<any, any, any>, any> | PipelineDestinationFunction<AsyncIterable<any, any, any>, any> | PipelineDestinationFunction<PipelineSourceFunction<any>, any>>(
      source: S,
      destination: D,
      callback: PipelineCallback<D>

      A module method to pipe between streams and generators, forwarding errors, cleaning up properly, and providing a callback when the pipeline is complete.

      import { pipeline } from 'node:stream';
      import fs from 'node:fs';
      import zlib from 'node:zlib';
      
      // Use the pipeline API to easily pipe a series of streams
      // together and get notified when the pipeline is fully done.
      
      // A pipeline to gzip a potentially huge tar file efficiently:
      
      pipeline(
        fs.createReadStream('archive.tar'),
        zlib.createGzip(),
        fs.createWriteStream('archive.tar.gz'),
        (err) => {
          if (err) {
            console.error('Pipeline failed.', err);
          } else {
            console.log('Pipeline succeeded.');
          }
        },
      );
      

      The pipeline API provides a promise version.
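
      For reference, a minimal sketch of the same gzip pipeline using the promise-based variant from 'node:stream/promises':

      import { pipeline } from 'node:stream/promises';
      import fs from 'node:fs';
      import zlib from 'node:zlib';

      await pipeline(
        fs.createReadStream('archive.tar'),
        zlib.createGzip(),
        fs.createWriteStream('archive.tar.gz'),
      );
      console.log('Pipeline succeeded.');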

      stream.pipeline() will call stream.destroy(err) on all streams except:

      • Readable streams which have emitted 'end' or 'close'.
      • Writable streams which have emitted 'finish' or 'close'.

      stream.pipeline() leaves dangling event listeners on the streams after the callback has been invoked. In the case of reuse of streams after failure, this can cause event listener leaks and swallowed errors. If the last stream is readable, dangling event listeners will be removed so that the last stream can be consumed later.

      stream.pipeline() closes all the streams when an error is raised. Using an IncomingRequest with pipeline can therefore lead to unexpected behavior, because pipeline destroys the socket without sending the expected response. See the example below:

      import fs from 'node:fs';
      import http from 'node:http';
      import { pipeline } from 'node:stream';
      
      const server = http.createServer((req, res) => {
        const fileStream = fs.createReadStream('./fileNotExist.txt');
        pipeline(fileStream, res, (err) => {
          if (err) {
            console.log(err); // No such file
            // this message can't be sent once `pipeline` already destroyed the socket
            return res.end('error!!!');
          }
        });
      });
      
      @param callback

      Called when the pipeline is fully done.

      function pipeline<S extends PipelineSource<any>, T extends ReadWriteStream | TransformStream<any, any> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineTransformGenerator<ReadableStream, any> | PipelineTransformGenerator<ReadableStream<any>, any> | PipelineTransformGenerator<TransformStream<any, any>, any> | PipelineTransformGenerator<Iterable<any, any, any>, any> | PipelineTransformGenerator<AsyncIterable<any, any, any>, any> | PipelineTransformGenerator<PipelineSourceFunction<any>, any>, D extends WritableStream | WritableStream<any> | TransformStream<any, any> | WritableStream<string | Buffer<ArrayBufferLike>> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineDestinationFunction<TransformStream<any, any>, any> | PipelineDestinationFunction<ReadWriteStream, any> | PipelineDestinationFunction<TransformStream<string | Buffer<ArrayBufferLike>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<ReadableStream, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<ReadableStream<any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<TransformStream<any, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<Iterable<any, any, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<AsyncIterable<any, any, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineSourceFunction<any>, any>, any>>(
      source: S,
      transform: T,
      destination: D,
      callback: PipelineCallback<D>

      A module method to pipe between streams and generators, forwarding errors, cleaning up properly, and providing a callback when the pipeline is complete.

      import { pipeline } from 'node:stream';
      import fs from 'node:fs';
      import zlib from 'node:zlib';
      
      // Use the pipeline API to easily pipe a series of streams
      // together and get notified when the pipeline is fully done.
      
      // A pipeline to gzip a potentially huge tar file efficiently:
      
      pipeline(
        fs.createReadStream('archive.tar'),
        zlib.createGzip(),
        fs.createWriteStream('archive.tar.gz'),
        (err) => {
          if (err) {
            console.error('Pipeline failed.', err);
          } else {
            console.log('Pipeline succeeded.');
          }
        },
      );
      

      The pipeline API provides a promise version.

      stream.pipeline() will call stream.destroy(err) on all streams except:

      • Readable streams which have emitted 'end' or 'close'.
      • Writable streams which have emitted 'finish' or 'close'.

      stream.pipeline() leaves dangling event listeners on the streams after the callback has been invoked. In the case of reuse of streams after failure, this can cause event listener leaks and swallowed errors. If the last stream is readable, dangling event listeners will be removed so that the last stream can be consumed later.

      stream.pipeline() closes all the streams when an error is raised. Using an IncomingRequest with pipeline can therefore lead to unexpected behavior, because pipeline destroys the socket without sending the expected response. See the example below:

      import fs from 'node:fs';
      import http from 'node:http';
      import { pipeline } from 'node:stream';
      
      const server = http.createServer((req, res) => {
        const fileStream = fs.createReadStream('./fileNotExist.txt');
        pipeline(fileStream, res, (err) => {
          if (err) {
            console.log(err); // No such file
            // this message can't be sent once `pipeline` already destroyed the socket
            return res.end('error!!!');
          }
        });
      });
      
      @param callback

      Called when the pipeline is fully done.

      function pipeline<S extends PipelineSource<any>, T1 extends ReadWriteStream | TransformStream<any, any> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineTransformGenerator<ReadableStream, any> | PipelineTransformGenerator<ReadableStream<any>, any> | PipelineTransformGenerator<TransformStream<any, any>, any> | PipelineTransformGenerator<Iterable<any, any, any>, any> | PipelineTransformGenerator<AsyncIterable<any, any, any>, any> | PipelineTransformGenerator<PipelineSourceFunction<any>, any>, T2 extends ReadWriteStream | TransformStream<any, any> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineTransformGenerator<TransformStream<any, any>, any> | PipelineTransformGenerator<ReadWriteStream, any> | PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream<any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<Iterable<any, any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<AsyncIterable<any, any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineSourceFunction<any>, any>, any>, D extends WritableStream | WritableStream<any> | TransformStream<any, any> | WritableStream<string | Buffer<ArrayBufferLike>> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineDestinationFunction<TransformStream<any, any>, any> | PipelineDestinationFunction<ReadWriteStream, any> | PipelineDestinationFunction<TransformStream<string | Buffer<ArrayBufferLike>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<TransformStream<any, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<ReadWriteStream, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream<any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<Iterable<any, any, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<AsyncIterable<any, any, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineSourceFunction<any>, any>, any>, any>>(
      source: S,
      transform1: T1,
      transform2: T2,
      destination: D,
      callback: PipelineCallback<D>

      A module method to pipe between streams and generators, forwarding errors, cleaning up properly, and providing a callback when the pipeline is complete.

      import { pipeline } from 'node:stream';
      import fs from 'node:fs';
      import zlib from 'node:zlib';
      
      // Use the pipeline API to easily pipe a series of streams
      // together and get notified when the pipeline is fully done.
      
      // A pipeline to gzip a potentially huge tar file efficiently:
      
      pipeline(
        fs.createReadStream('archive.tar'),
        zlib.createGzip(),
        fs.createWriteStream('archive.tar.gz'),
        (err) => {
          if (err) {
            console.error('Pipeline failed.', err);
          } else {
            console.log('Pipeline succeeded.');
          }
        },
      );
      

      The pipeline API provides a promise version.

      stream.pipeline() will call stream.destroy(err) on all streams except:

      • Readable streams which have emitted 'end' or 'close'.
      • Writable streams which have emitted 'finish' or 'close'.

      stream.pipeline() leaves dangling event listeners on the streams after the callback has been invoked. In the case of reuse of streams after failure, this can cause event listener leaks and swallowed errors. If the last stream is readable, dangling event listeners will be removed so that the last stream can be consumed later.

      stream.pipeline() closes all the streams when an error is raised. Using an IncomingRequest with pipeline can therefore lead to unexpected behavior, because pipeline destroys the socket without sending the expected response. See the example below:

      import fs from 'node:fs';
      import http from 'node:http';
      import { pipeline } from 'node:stream';
      
      const server = http.createServer((req, res) => {
        const fileStream = fs.createReadStream('./fileNotExist.txt');
        pipeline(fileStream, res, (err) => {
          if (err) {
            console.log(err); // No such file
            // this message can't be sent once `pipeline` already destroyed the socket
            return res.end('error!!!');
          }
        });
      });
      
      @param callback

      Called when the pipeline is fully done.

      function pipeline<S extends PipelineSource<any>, T1 extends ReadWriteStream | TransformStream<any, any> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineTransformGenerator<ReadableStream, any> | PipelineTransformGenerator<ReadableStream<any>, any> | PipelineTransformGenerator<TransformStream<any, any>, any> | PipelineTransformGenerator<Iterable<any, any, any>, any> | PipelineTransformGenerator<AsyncIterable<any, any, any>, any> | PipelineTransformGenerator<PipelineSourceFunction<any>, any>, T2 extends ReadWriteStream | TransformStream<any, any> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineTransformGenerator<TransformStream<any, any>, any> | PipelineTransformGenerator<ReadWriteStream, any> | PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream<any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<Iterable<any, any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<AsyncIterable<any, any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineSourceFunction<any>, any>, any>, T3 extends ReadWriteStream | TransformStream<any, any> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineTransformGenerator<TransformStream<any, any>, any> | PipelineTransformGenerator<ReadWriteStream, any> | PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<ReadWriteStream, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream<any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<Iterable<any, any, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<AsyncIterable<any, any, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineSourceFunction<any>, any>, any>, any>, D extends WritableStream | WritableStream<any> | TransformStream<any, any> | WritableStream<string | Buffer<ArrayBufferLike>> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineDestinationFunction<TransformStream<any, any>, any> | PipelineDestinationFunction<ReadWriteStream, any> | PipelineDestinationFunction<TransformStream<string | Buffer<ArrayBufferLike>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<TransformStream<any, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<ReadWriteStream, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, 
any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<ReadWriteStream, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream<any>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<Iterable<any, any, any>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<AsyncIterable<any, any, any>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineSourceFunction<any>, any>, any>, any>, any>>(
      source: S,
      transform1: T1,
      transform2: T2,
      transform3: T3,
      destination: D,
      callback: PipelineCallback<D>

      A module method to pipe between streams and generators, forwarding errors, cleaning up properly, and providing a callback when the pipeline is complete.

      import { pipeline } from 'node:stream';
      import fs from 'node:fs';
      import zlib from 'node:zlib';
      
      // Use the pipeline API to easily pipe a series of streams
      // together and get notified when the pipeline is fully done.
      
      // A pipeline to gzip a potentially huge tar file efficiently:
      
      pipeline(
        fs.createReadStream('archive.tar'),
        zlib.createGzip(),
        fs.createWriteStream('archive.tar.gz'),
        (err) => {
          if (err) {
            console.error('Pipeline failed.', err);
          } else {
            console.log('Pipeline succeeded.');
          }
        },
      );
      

      The pipeline API provides a promise version.

      stream.pipeline() will call stream.destroy(err) on all streams except:

      • Readable streams which have emitted 'end' or 'close'.
      • Writable streams which have emitted 'finish' or 'close'.

      stream.pipeline() leaves dangling event listeners on the streams after the callback has been invoked. In the case of reuse of streams after failure, this can cause event listener leaks and swallowed errors. If the last stream is readable, dangling event listeners will be removed so that the last stream can be consumed later.

      stream.pipeline() closes all the streams when an error is raised. Using an IncomingRequest with pipeline can therefore lead to unexpected behavior, because pipeline destroys the socket without sending the expected response. See the example below:

      import fs from 'node:fs';
      import http from 'node:http';
      import { pipeline } from 'node:stream';
      
      const server = http.createServer((req, res) => {
        const fileStream = fs.createReadStream('./fileNotExist.txt');
        pipeline(fileStream, res, (err) => {
          if (err) {
            console.log(err); // No such file
            // this message can't be sent once `pipeline` already destroyed the socket
            return res.end('error!!!');
          }
        });
      });
      
      @param callback

      Called when the pipeline is fully done.

      function pipeline<S extends PipelineSource<any>, T1 extends ReadWriteStream | TransformStream<any, any> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineTransformGenerator<ReadableStream, any> | PipelineTransformGenerator<ReadableStream<any>, any> | PipelineTransformGenerator<TransformStream<any, any>, any> | PipelineTransformGenerator<Iterable<any, any, any>, any> | PipelineTransformGenerator<AsyncIterable<any, any, any>, any> | PipelineTransformGenerator<PipelineSourceFunction<any>, any>, T2 extends ReadWriteStream | TransformStream<any, any> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineTransformGenerator<TransformStream<any, any>, any> | PipelineTransformGenerator<ReadWriteStream, any> | PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream<any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<Iterable<any, any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<AsyncIterable<any, any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineSourceFunction<any>, any>, any>, T3 extends ReadWriteStream | TransformStream<any, any> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineTransformGenerator<TransformStream<any, any>, any> | PipelineTransformGenerator<ReadWriteStream, any> | PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<ReadWriteStream, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream<any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<Iterable<any, any, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<AsyncIterable<any, any, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineSourceFunction<any>, any>, any>, any>, T4 extends ReadWriteStream | TransformStream<any, any> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineTransformGenerator<TransformStream<any, any>, any> | PipelineTransformGenerator<ReadWriteStream, any> | PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<ReadWriteStream, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any>, any> | 
PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadWriteStream, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream<any>, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<Iterable<any, any, any>, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<AsyncIterable<any, any, any>, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineSourceFunction<any>, any>, any>, any>, any>, D extends WritableStream | WritableStream<any> | TransformStream<any, any> | WritableStream<string | Buffer<ArrayBufferLike>> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineDestinationFunction<TransformStream<any, any>, any> | PipelineDestinationFunction<ReadWriteStream, any> | PipelineDestinationFunction<TransformStream<string | Buffer<ArrayBufferLike>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<TransformStream<any, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<ReadWriteStream, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<ReadWriteStream, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadWriteStream, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream, any>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream<any>, any>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<Iterable<any, any, any>, any>, any>, 
any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<AsyncIterable<any, any, any>, any>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineSourceFunction<any>, any>, any>, any>, any>, any>>(
      source: S,
      transform1: T1,
      transform2: T2,
      transform3: T3,
      transform4: T4,
      destination: D,
      callback: PipelineCallback<D>

      A module method to pipe between streams and generators, forwarding errors, cleaning up properly, and providing a callback when the pipeline is complete.

      import { pipeline } from 'node:stream';
      import fs from 'node:fs';
      import zlib from 'node:zlib';
      
      // Use the pipeline API to easily pipe a series of streams
      // together and get notified when the pipeline is fully done.
      
      // A pipeline to gzip a potentially huge tar file efficiently:
      
      pipeline(
        fs.createReadStream('archive.tar'),
        zlib.createGzip(),
        fs.createWriteStream('archive.tar.gz'),
        (err) => {
          if (err) {
            console.error('Pipeline failed.', err);
          } else {
            console.log('Pipeline succeeded.');
          }
        },
      );
      

      The pipeline API provides a promise version.

      stream.pipeline() will call stream.destroy(err) on all streams except:

      • Readable streams which have emitted 'end' or 'close'.
      • Writable streams which have emitted 'finish' or 'close'.

      stream.pipeline() leaves dangling event listeners on the streams after the callback has been invoked. In the case of reuse of streams after failure, this can cause event listener leaks and swallowed errors. If the last stream is readable, dangling event listeners will be removed so that the last stream can be consumed later.

      stream.pipeline() closes all the streams when an error is raised. Using an IncomingRequest with pipeline can therefore lead to unexpected behavior, because pipeline destroys the socket without sending the expected response. See the example below:

      import fs from 'node:fs';
      import http from 'node:http';
      import { pipeline } from 'node:stream';
      
      const server = http.createServer((req, res) => {
        const fileStream = fs.createReadStream('./fileNotExist.txt');
        pipeline(fileStream, res, (err) => {
          if (err) {
            console.log(err); // No such file
            // this message can't be sent once `pipeline` already destroyed the socket
            return res.end('error!!!');
          }
        });
      });
      
      @param callback

      Called when the pipeline is fully done.

      function pipeline(
      streams: readonly (WritableStream | PipelineSource<any> | PipelineTransformStreams<unknown, any> | PipelineTransformGenerator<any, any> | WritableStream<unknown> | PipelineDestinationFunction<any, any>)[],
      callback: (err: null | ErrnoException) => void
      ): WritableStream;

      A module method to pipe between streams and generators, forwarding errors, cleaning up properly, and providing a callback when the pipeline is complete.

      import { pipeline } from 'node:stream';
      import fs from 'node:fs';
      import zlib from 'node:zlib';
      
      // Use the pipeline API to easily pipe a series of streams
      // together and get notified when the pipeline is fully done.
      
      // A pipeline to gzip a potentially huge tar file efficiently:
      
      pipeline(
        fs.createReadStream('archive.tar'),
        zlib.createGzip(),
        fs.createWriteStream('archive.tar.gz'),
        (err) => {
          if (err) {
            console.error('Pipeline failed.', err);
          } else {
            console.log('Pipeline succeeded.');
          }
        },
      );
      

      The pipeline API provides a promise version.

      stream.pipeline() will call stream.destroy(err) on all streams except:

      • Readable streams which have emitted 'end' or 'close'.
      • Writable streams which have emitted 'finish' or 'close'.

      stream.pipeline() leaves dangling event listeners on the streams after the callback has been invoked. In the case of reuse of streams after failure, this can cause event listener leaks and swallowed errors. If the last stream is readable, dangling event listeners will be removed so that the last stream can be consumed later.

      stream.pipeline() closes all the streams when an error is raised. Using an IncomingRequest with pipeline can therefore lead to unexpected behavior, because pipeline destroys the socket without sending the expected response. See the example below:

      import fs from 'node:fs';
      import http from 'node:http';
      import { pipeline } from 'node:stream';
      
      const server = http.createServer((req, res) => {
        const fileStream = fs.createReadStream('./fileNotExist.txt');
        pipeline(fileStream, res, (err) => {
          if (err) {
            console.log(err); // No such file
            // this message can't be sent once `pipeline` already destroyed the socket
            return res.end('error!!!');
          }
        });
      });
      
      @param callback

      Called when the pipeline is fully done.

      function pipeline(
      ...streams: [PipelineSource<any>, ...(PipelineTransformStreams<unknown, any> | PipelineTransformGenerator<any, any>)[], WritableStream | TransformStream<unknown, any> | WritableStream<unknown> | PipelineDestinationFunction<any, any>, callback: (err: null | ErrnoException) => void]
      ): WritableStream;

      A module method to pipe between streams and generators, forwarding errors, cleaning up properly, and providing a callback when the pipeline is complete.

      import { pipeline } from 'node:stream';
      import fs from 'node:fs';
      import zlib from 'node:zlib';
      
      // Use the pipeline API to easily pipe a series of streams
      // together and get notified when the pipeline is fully done.
      
      // A pipeline to gzip a potentially huge tar file efficiently:
      
      pipeline(
        fs.createReadStream('archive.tar'),
        zlib.createGzip(),
        fs.createWriteStream('archive.tar.gz'),
        (err) => {
          if (err) {
            console.error('Pipeline failed.', err);
          } else {
            console.log('Pipeline succeeded.');
          }
        },
      );
      

      The pipeline API provides a promise version.

      stream.pipeline() will call stream.destroy(err) on all streams except:

      • Readable streams which have emitted 'end' or 'close'.
      • Writable streams which have emitted 'finish' or 'close'.

      stream.pipeline() leaves dangling event listeners on the streams after the callback has been invoked. In the case of reuse of streams after failure, this can cause event listener leaks and swallowed errors. If the last stream is readable, dangling event listeners will be removed so that the last stream can be consumed later.

      stream.pipeline() closes all the streams when an error is raised. Using an IncomingRequest with pipeline can therefore lead to unexpected behavior, because pipeline destroys the socket without sending the expected response. See the example below:

      import fs from 'node:fs';
      import http from 'node:http';
      import { pipeline } from 'node:stream';
      
      const server = http.createServer((req, res) => {
        const fileStream = fs.createReadStream('./fileNotExist.txt');
        pipeline(fileStream, res, (err) => {
          if (err) {
            console.log(err); // No such file
            // this message can't be sent once `pipeline` already destroyed the socket
            return res.end('error!!!');
          }
        });
      });
      

      namespace pipeline

    • class Duplex

      Duplex streams are streams that implement both the Readable and Writable interfaces.
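
      As an illustration only (the class name and behavior are made up for this sketch), a minimal custom Duplex whose readable and writable sides are independent:

      import { Duplex } from 'node:stream';

      class GreeterDuplex extends Duplex {
        _read(size) {
          this.push('hello from the readable side\n');
          this.push(null); // no more data to read
        }

        _write(chunk, encoding, callback) {
          console.log('writable side received:', chunk.toString());
          callback();
        }
      }

      const duplex = new GreeterDuplex();
      duplex.on('data', (chunk) => console.log('read:', chunk.toString()));
      duplex.write('ping');
      duplex.end();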

      Examples of Duplex streams include:

      • TCP sockets
      • zlib streams
      • crypto streams
      • allowHalfOpen: boolean

        If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.

        This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.

      • readonly closed: boolean

        Is true after 'close' has been emitted.

      • destroyed: boolean

        Is true after readable.destroy() has been called.

      • readonly errored: null | Error

        Returns error if the stream has been destroyed with an error.

      • readable: boolean

        Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

      • readonly readableAborted: boolean

        Returns whether the stream was destroyed or errored before emitting 'end'.

      • readonly readableDidRead: boolean

        Returns whether 'data' has been emitted.

      • readonly readableEncoding: null | BufferEncoding

        Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

      • readonly readableEnded: boolean

        Becomes true when 'end' event is emitted.

      • readableFlowing: null | boolean

        This property reflects the current state of a Readable stream as described in the Three states section.

      • readonly readableHighWaterMark: number

        Returns the value of highWaterMark passed when creating this Readable.

      • readonly readableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

      • readonly readableObjectMode: boolean

        Getter for the property objectMode of a given Readable stream.

      • writable: boolean

        Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

      • readonly writableAborted: boolean

        Returns whether the stream was destroyed or errored before emitting 'finish'.

      • readonly writableCorked: number

        Number of times writable.uncork() needs to be called in order to fully uncork the stream.

      • readonly writableEnded: boolean

        Is true after writable.end() has been called. This property does not indicate whether the data has been flushed, for this use writable.writableFinished instead.

      • readonly writableFinished: boolean

        Is set to true immediately before the 'finish' event is emitted.

      • readonly writableHighWaterMark: number

        Return the value of highWaterMark passed when creating this Writable.

      • readonly writableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

      • readonly writableNeedDrain: boolean

        Is true if the stream's buffer has been full and stream will emit 'drain'.

      • readonly writableObjectMode: boolean

        Getter for the property objectMode of a given Writable stream.

      • _construct(
        callback: (error?: null | Error) => void
        ): void;
      • _destroy(
        error: null | Error,
        callback: (error?: null | Error) => void
        ): void;
      • _final(
        callback: (error?: null | Error) => void
        ): void;
      • _read(
        size: number
        ): void;
      • _write(
        chunk: any,
        encoding: BufferEncoding,
        callback: (error?: null | Error) => void
        ): void;
      • _writev(
        chunks: { chunk: any; encoding: BufferEncoding }[],
        callback: (error?: null | Error) => void
        ): void;
      • [Symbol.asyncDispose](): Promise<void>;

        Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

      • [Symbol.asyncIterator](): AsyncIterator<any>;
        @returns

        AsyncIterator to fully consume the stream.

      • [captureRejectionSymbol](
        error: Error,
        event: string | symbol,
        ...args: any[]
        ): void;

        The Symbol.for('nodejs.rejection') method is called in case a promise rejection happens when emitting an event and captureRejections is enabled on the emitter. It is possible to use events.captureRejectionSymbol in place of Symbol.for('nodejs.rejection').

        import { EventEmitter, captureRejectionSymbol } from 'node:events';
        
        class MyClass extends EventEmitter {
          constructor() {
            super({ captureRejections: true });
          }
        
          [captureRejectionSymbol](err, event, ...args) {
            console.log('rejection happened for', event, 'with', err, ...args);
            this.destroy(err);
          }
        
          destroy(err) {
            // Tear the resource down here.
          }
        }
        
      • addListener<E extends keyof DuplexEventMap>(
        eventName: E,
        listener: (...args: DuplexEventMap[E]) => void
        ): this;

        Alias for emitter.on(eventName, listener).

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • compose(
        stream: WritableStream | WritableStream<any> | TransformStream<any, any> | ((source: any) => void),
        options?: Abortable
        ): Duplex;
        import { Readable } from 'node:stream';
        
        async function* splitToWords(source) {
          for await (const chunk of source) {
            const words = String(chunk).split(' ');
        
            for (const word of words) {
              yield word;
            }
          }
        }
        
        const wordsStream = Readable.from(['this is', 'compose as operator']).compose(splitToWords);
        const words = await wordsStream.toArray();
        
        console.log(words); // prints ['this', 'is', 'compose', 'as', 'operator']
        

        See stream.compose for more information.

        @returns

        a stream composed with the stream stream.

      • cork(): void;

        The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

        The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

        See also: writable.uncork(), writable._writev().
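
        A quick sketch of the corking pattern described above (the file name is only illustrative):

        import fs from 'node:fs';

        const ws = fs.createWriteStream('out.txt');
        ws.cork();
        ws.write('some ');
        ws.write('data ');
        // Both chunks are flushed together once uncork() is called.
        process.nextTick(() => ws.uncork());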

      • destroy(
        error?: Error
        ): this;

        Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

        Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

        Implementors should not override this method, but instead implement readable._destroy().

        @param error

        Error which will be passed as payload in 'error' event

      • drop(
        limit: number,
        options?: Abortable

        This method returns a new stream with the first limit chunks dropped from the start.

        @param limit

        the number of chunks to drop from the readable.

        @returns

        a stream with limit chunks dropped from the start.
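
        A quick sketch, assuming the experimental stream operator helpers (Readable.from, drop, toArray) are available:

        import { Readable } from 'node:stream';

        const remaining = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
        console.log(remaining); // [3, 4]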

      • emit<E extends keyof DuplexEventMap>(
        eventName: E,
        ...args: DuplexEventMap[E]
        ): boolean;

        Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

        Returns true if the event had listeners, false otherwise.

        import { EventEmitter } from 'node:events';
        const myEmitter = new EventEmitter();
        
        // First listener
        myEmitter.on('event', function firstListener() {
          console.log('Helloooo! first listener');
        });
        // Second listener
        myEmitter.on('event', function secondListener(arg1, arg2) {
          console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
        });
        // Third listener
        myEmitter.on('event', function thirdListener(...args) {
          const parameters = args.join(', ');
          console.log(`event with parameters ${parameters} in third listener`);
        });
        
        console.log(myEmitter.listeners('event'));
        
        myEmitter.emit('event', 1, 2, 3, 4, 5);
        
        // Prints:
        // [
        //   [Function: firstListener],
        //   [Function: secondListener],
        //   [Function: thirdListener]
        // ]
        // Helloooo! first listener
        // event with parameters 1, 2 in second listener
        // event with parameters 1, 2, 3, 4, 5 in third listener
        
        eventName: string | symbol,
        ...args: any[]
        ): boolean;
      • end(
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        chunk: any,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        chunk: any,
        encoding: BufferEncoding,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding if chunk is a string

      • eventNames(): (string | symbol)[];

        Returns an array listing the events for which the emitter has registered listeners.

        import { EventEmitter } from 'node:events';
        
        const myEE = new EventEmitter();
        myEE.on('foo', () => {});
        myEE.on('bar', () => {});
        
        const sym = Symbol('symbol');
        myEE.on(sym, () => {});
        
        console.log(myEE.eventNames());
        // Prints: [ 'foo', 'bar', Symbol(symbol) ]
        
      • every(
        fn: (data: any, options?: Abortable) => boolean | Promise<boolean>,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>
        ): Promise<boolean>;

        This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check whether every awaited return value is truthy. As soon as an fn call on a chunk returns or resolves to a falsy value, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for every one of the chunks.
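
        A quick sketch using the experimental stream operator helpers:

        import { Readable } from 'node:stream';

        const allPositive = await Readable.from([1, 2, 3]).every((x) => x > 0);
        console.log(allPositive); // true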

      • filter(
        fn: (data: any, options?: Abortable) => boolean | Promise<boolean>,

        This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.

        @param fn

        a function to filter chunks from the stream. Async or not.

        @returns

        a stream filtered with the predicate fn.
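
        A quick sketch using the experimental stream operator helpers:

        import { Readable } from 'node:stream';

        const evens = await Readable.from([1, 2, 3, 4])
          .filter((x) => x % 2 === 0)
          .toArray();
        console.log(evens); // [2, 4]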

      • find<T>(
        fn: (data: any, options?: Abortable) => data is T,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>
        ): Promise<undefined | T>;

        This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

        fn: (data: any, options?: Abortable) => boolean | Promise<boolean>,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>
        ): Promise<any>;

        This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
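
        A quick sketch using the experimental stream operator helpers:

        import { Readable } from 'node:stream';

        const firstMatch = await Readable.from([1, 2, 3]).find((x) => x > 1);
        console.log(firstMatch); // 2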

      • flatMap(
        fn: (data: any, options?: Abortable) => any,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>

        This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

        It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

        @param fn

        a function to map over every chunk in the stream. May be async. May be a stream or generator.

        @returns

        a stream flat-mapped with the function fn.
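
        A quick sketch using the experimental stream operator helpers:

        import { Readable } from 'node:stream';

        const words = await Readable.from(['this is', 'a sketch'])
          .flatMap((line) => line.split(' '))
          .toArray();
        console.log(words); // ['this', 'is', 'a', 'sketch']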

      • forEach(
        fn: (data: any, options?: Abortable) => void | Promise<void>,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>
        ): Promise<void>;

        This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.

        This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.

        This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise for when the stream has finished.
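
        A quick sketch using the experimental stream operator helpers:

        import { Readable } from 'node:stream';

        // Process up to two chunks concurrently; the promise settles when the stream ends.
        await Readable.from([1, 2, 3, 4]).forEach(async (x) => {
          console.log(x);
        }, { concurrency: 2 });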

      • getMaxListeners(): number;

        Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to events.defaultMaxListeners.

      • isPaused(): boolean;

        The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

        const readable = new stream.Readable();
        
        readable.isPaused(); // === false
        readable.pause();
        readable.isPaused(); // === true
        readable.resume();
        readable.isPaused(); // === false
        
      • iterator(
        options?: { destroyOnReturn?: boolean }
        ): AsyncIterator<any>;

        The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration.

      • listenerCount<E extends keyof DuplexEventMap>(
        eventName: E,
        listener?: (...args: DuplexEventMap[E]) => void
        ): number;

        Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

        @param eventName

        The name of the event being listened for

        @param listener

        The event handler function

        eventName: string | symbol,
        listener?: (...args: any[]) => void
        ): number;
      • listeners<E extends keyof DuplexEventMap>(
        eventName: E
        ): ((...args: DuplexEventMap[E]) => void)[];

        Returns a copy of the array of listeners for the event named eventName.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        console.log(util.inspect(server.listeners('connection')));
        // Prints: [ [Function] ]
        
        eventName: string | symbol
        ): (...args: any[]) => void[];
      • map(
        fn: (data: any, options?: Abortable) => any,

        This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.

        @param fn

        a function to map over every chunk in the stream. Async or not.

        @returns

        a stream mapped with the function fn.
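
        A minimal sketch of mapping with an async function (the numbers are illustrative):

        import { Readable } from 'node:stream';

        // Each chunk is transformed (and awaited, since fn is async) before it
        // reaches the result stream.
        const doubled = Readable.from([1, 2, 3]).map(async (n) => n * 2);

        for await (const value of doubled) {
          console.log(value); // 2, 4, 6
        }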

      • off<E extends keyof DuplexEventMap>(
        eventName: E,
        listener: (...args: DuplexEventMap[E]) => void
        ): this;

        Alias for emitter.removeListener().

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • on<E extends keyof DuplexEventMap>(
        eventName: E,
        listener: (...args: DuplexEventMap[E]) => void
        ): this;

        Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.on('foo', () => console.log('a'));
        myEE.prependListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param eventName

        The name of the event.

        @param listener

        The callback function

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • once<E extends keyof DuplexEventMap>(
        eventName: E,
        listener: (...args: DuplexEventMap[E]) => void
        ): this;

        Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

        server.once('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.once('foo', () => console.log('a'));
        myEE.prependOnceListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param eventName

        The name of the event.

        @param listener

        The callback function

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • pause(): this;

        The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

        const readable = getReadableStreamSomehow();
        readable.on('data', (chunk) => {
          console.log(`Received ${chunk.length} bytes of data.`);
          readable.pause();
          console.log('There will be no additional data for 1 second.');
          setTimeout(() => {
            console.log('Now data will start flowing again.');
            readable.resume();
          }, 1000);
        });
        

        The readable.pause() method has no effect if there is a 'readable' event listener.

      • pipe<T extends WritableStream>(
        destination: T,
        options?: PipeOptions
        ): T;
      • prependListener<E extends keyof DuplexEventMap>(
        eventName: E,
        listener: (...args: DuplexEventMap[E]) => void
        ): this;

        Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.prependListener('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param eventName

        The name of the event.

        @param listener

        The callback function

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • prependOnceListener<E extends keyof DuplexEventMap>(
        eventName: E,
        listener: (...args: DuplexEventMap[E]) => void
        ): this;

        Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

        server.prependOnceListener('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param eventName

        The name of the event.

        @param listener

        The callback function

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • push(
        chunk: any,
        encoding?: BufferEncoding
        ): boolean;
      • rawListeners<E extends keyof DuplexEventMap>(
        eventName: E
        ): (...args: DuplexEventMap[E]) => void[];

        Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

        import { EventEmitter } from 'node:events';
        const emitter = new EventEmitter();
        emitter.once('log', () => console.log('log once'));
        
        // Returns a new Array with a function `onceWrapper` which has a property
        // `listener` which contains the original listener bound above
        const listeners = emitter.rawListeners('log');
        const logFnWrapper = listeners[0];
        
        // Logs "log once" to the console and does not unbind the `once` event
        logFnWrapper.listener();
        
        // Logs "log once" to the console and removes the listener
        logFnWrapper();
        
        emitter.on('log', () => console.log('log persistently'));
        // Will return a new Array with a single function bound by `.on()` above
        const newListeners = emitter.rawListeners('log');
        
        // Logs "log persistently" twice
        newListeners[0]();
        emitter.emit('log');
        
        eventName: string | symbol
        ): (...args: any[]) => void[];
      • read(
        size?: number
        ): any;

        The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.

        The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

        If the size argument is not specified, all of the data contained in the internal buffer will be returned.

        The size argument must be less than or equal to 1 GiB.

        The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

        const readable = getReadableStreamSomehow();
        
        // 'readable' may be triggered multiple times as data is buffered in
        readable.on('readable', () => {
          let chunk;
          console.log('Stream is readable (new data received in buffer)');
          // Use a loop to make sure we read all currently available data
          while (null !== (chunk = readable.read())) {
            console.log(`Read ${chunk.length} bytes of data...`);
          }
        });
        
        // 'end' will be triggered once when there is no more data available
        readable.on('end', () => {
          console.log('Reached end of stream.');
        });
        

        Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.

        Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

        const chunks = [];
        
        readable.on('readable', () => {
          let chunk;
          while (null !== (chunk = readable.read())) {
            chunks.push(chunk);
          }
        });
        
        readable.on('end', () => {
          const content = chunks.join('');
        });
        

        A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.

        If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.

        Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.

        @param size

        Optional argument to specify how much data to read.

      • reduce<T>(
        fn: (previous: any, data: any, options?: Abortable) => T
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

        The reducer function iterates the stream element by element, which means there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can move the async work into a readable.map call and reduce over its output.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @returns

        a promise for the final value of the reduction.

        reduce<T>(
        fn: (previous: T, data: any, options?: Abortable) => T,
        initial: T,
        options?: Abortable
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

        The reducer function iterates the stream element by element, which means there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can move the async work into a readable.map call and reduce over its output.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @param initial

        the initial value to use in the reduction.

        @returns

        a promise for the final value of the reduction.
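
        A minimal sketch of a reduction with an explicit initial value (the numbers are illustrative):

        import { Readable } from 'node:stream';

        // Sum all chunks, starting from 0.
        const total = await Readable.from([1, 2, 3, 4])
          .reduce((sum, n) => sum + n, 0);

        console.log(total); // 10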

      • removeAllListeners<E extends keyof DuplexEventMap>(
        eventName?: E
        ): this;

        Removes all listeners, or those of the specified eventName.

        It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

        Returns a reference to the EventEmitter, so that calls can be chained.

        eventName?: string | symbol
        ): this;
      • removeListener<E extends keyof DuplexEventMap>(
        eventName: E,
        listener: (...args: DuplexEventMap[E]) => void
        ): this;

        Removes the specified listener from the listener array for the event named eventName.

        const callback = (stream) => {
          console.log('someone connected!');
        };
        server.on('connection', callback);
        // ...
        server.removeListener('connection', callback);
        

        removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

        Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

        import { EventEmitter } from 'node:events';
        class MyEmitter extends EventEmitter {}
        const myEmitter = new MyEmitter();
        
        const callbackA = () => {
          console.log('A');
          myEmitter.removeListener('event', callbackB);
        };
        
        const callbackB = () => {
          console.log('B');
        };
        
        myEmitter.on('event', callbackA);
        
        myEmitter.on('event', callbackB);
        
        // callbackA removes listener callbackB but it will still be called.
        // Internal listener array at time of emit [callbackA, callbackB]
        myEmitter.emit('event');
        // Prints:
        //   A
        //   B
        
        // callbackB is now removed.
        // Internal listener array [callbackA]
        myEmitter.emit('event');
        // Prints:
        //   A
        

        Because listeners are managed using an internal array, calling this will change the position indexes of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

        When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

        import { EventEmitter } from 'node:events';
        const ee = new EventEmitter();
        
        function pong() {
          console.log('pong');
        }
        
        ee.on('ping', pong);
        ee.once('ping', pong);
        ee.removeListener('ping', pong);
        
        ee.emit('ping');
        ee.emit('ping');
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • resume(): this;

        The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

        The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

        getReadableStreamSomehow()
          .resume()
          .on('end', () => {
            console.log('Reached the end, but did not read anything.');
          });
        

        The readable.resume() method has no effect if there is a 'readable' event listener.

      • setDefaultEncoding(
        encoding: BufferEncoding
        ): this;

        The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

        @param encoding

        The new default encoding

      • setEncoding(
        encoding: BufferEncoding
        ): this;

        The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

        By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

        The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

        const readable = getReadableStreamSomehow();
        readable.setEncoding('utf8');
        readable.on('data', (chunk) => {
          assert.equal(typeof chunk, 'string');
          console.log('Got %d characters of string data:', chunk.length);
        });
        
        @param encoding

        The encoding to use.

      • setMaxListeners(
        n: number
        ): this;

        By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps find memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

        Returns a reference to the EventEmitter, so that calls can be chained.

      • some(
        fn: (data: any, options?: Abortable) => boolean | Promise<boolean>,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>
        ): Promise<boolean>;

        This method is similar to Array.prototype.some and calls fn on each chunk in the stream until an awaited return value is truthy. Once an fn call's awaited return value for a chunk is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
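
        A minimal sketch of the short-circuiting behavior described above (the numbers are illustrative):

        import { Readable } from 'node:stream';

        // Resolves as soon as one chunk matches; the stream is then destroyed.
        const hasLarge = await Readable.from([1, 2, 30, 4]).some((n) => n > 10);
        console.log(hasLarge); // true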

      • take(
        limit: number,
        options?: Abortable

        This method returns a new stream with the first limit chunks.

        @param limit

        the number of chunks to take from the readable.

        @returns

        a stream with limit chunks taken.
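
        A minimal sketch (the numbers are illustrative):

        import { Readable } from 'node:stream';

        // Keep only the first two chunks of the stream.
        const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
        console.log(firstTwo); // [1, 2]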

      • toArray(
        options?: Abortable
        ): Promise<any[]>;

        This method allows easily obtaining the contents of a stream.

        As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

        @returns

        a promise containing an array with the contents of the stream.
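
        A minimal sketch (the sample data is illustrative):

        import { Readable } from 'node:stream';

        // The entire stream is buffered in memory, so keep this to small streams.
        const chunks = await Readable.from(['a', 'b', 'c']).toArray();
        console.log(chunks); // ['a', 'b', 'c']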

      • uncork(): void;

        The writable.uncork() method flushes all data buffered since cork was called.

        When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

        stream.cork();
        stream.write('some ');
        stream.write('data ');
        process.nextTick(() => stream.uncork());
        

        If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

        stream.cork();
        stream.write('some ');
        stream.cork();
        stream.write('data ');
        process.nextTick(() => {
          stream.uncork();
          // The data will not be flushed until uncork() is called a second time.
          stream.uncork();
        });
        

        See also: writable.cork().

      • unpipe(
        destination?: WritableStream
        ): this;

        The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

        If the destination is not specified, then all pipes are detached.

        If the destination is specified, but no pipe is set up for it, then the method does nothing.

        import fs from 'node:fs';
        const readable = getReadableStreamSomehow();
        const writable = fs.createWriteStream('file.txt');
        // All the data from readable goes into 'file.txt',
        // but only for the first second.
        readable.pipe(writable);
        setTimeout(() => {
          console.log('Stop writing to file.txt.');
          readable.unpipe(writable);
          console.log('Manually close the file stream.');
          writable.end();
        }, 1000);
        
        @param destination

        Optional specific stream to unpipe

      • unshift(
        chunk: any,
        encoding?: BufferEncoding
        ): void;

        Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

        The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

        The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

        Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

        // Pull off a header delimited by \n\n.
        // Use unshift() if we get too much.
        // Call the callback with (error, header, stream).
        import { StringDecoder } from 'node:string_decoder';
        function parseHeader(stream, callback) {
          stream.on('error', callback);
          stream.on('readable', onReadable);
          const decoder = new StringDecoder('utf8');
          let header = '';
          function onReadable() {
            let chunk;
            while (null !== (chunk = stream.read())) {
              const str = decoder.write(chunk);
              if (str.includes('\n\n')) {
                // Found the header boundary.
                const split = str.split(/\n\n/);
                header += split.shift();
                const remaining = split.join('\n\n');
                const buf = Buffer.from(remaining, 'utf8');
                stream.removeListener('error', callback);
                // Remove the 'readable' listener before unshifting.
                stream.removeListener('readable', onReadable);
                if (buf.length)
                  stream.unshift(buf);
                // Now the body of the message can be read from the stream.
                callback(null, header, stream);
                return;
              }
              // Still reading the header.
              header += str;
            }
          }
        }
        

        Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.

        @param chunk

        Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.

        @param encoding

        Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.

      • wrap(
        stream: ReadableStream
        ): this;

        Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

        When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

        It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

        import { OldReader } from './old-api-module.js';
        import { Readable } from 'node:stream';
        const oreader = new OldReader();
        const myReader = new Readable().wrap(oreader);
        
        myReader.on('readable', () => {
          myReader.read(); // etc.
        });
        
        @param stream

        An "old style" readable stream

      • write(
        chunk: any,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

        chunk: any,
        encoding: BufferEncoding,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding, if chunk is a string.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

      • static from(
        src: string | Blob | Promise<any> | ReadableStream | WritableStream | Iterable<any, any, any> | AsyncIterable<any, any, any> | (source: AsyncIterable<any>) => AsyncIterable<any> | (source: AsyncIterable<any>) => Promise<void> | ReadableWritablePair<any, any> | ReadableStream<any> | WritableStream<any>
        ): Duplex;

        A utility method for creating duplex streams.

        • Stream converts writable stream into writable Duplex and readable stream to Duplex.
        • Blob converts into readable Duplex.
        • string converts into readable Duplex.
        • ArrayBuffer converts into readable Duplex.
        • AsyncIterable converts into a readable Duplex. Cannot yield null.
        • AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
        • AsyncFunction converts into a writable Duplex. Must return either null or undefined
        • Object ({ writable, readable }) converts readable and writable into Stream and then combines them into Duplex where the Duplex will write to the writable and read from the readable.
        • Promise converts into readable Duplex. Value null is ignored.
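
        A minimal sketch using the AsyncGeneratorFunction case from the list above (the sample data is illustrative):

        import { Duplex } from 'node:stream';

        // An async generator function becomes a transform-like Duplex: it consumes
        // its source (the writable side) and yields chunks for the readable side.
        const upper = Duplex.from(async function* (source) {
          for await (const chunk of source) {
            yield String(chunk).toUpperCase();
          }
        });

        upper.write('hello');
        upper.end();

        for await (const chunk of upper) {
          console.log(chunk); // HELLO
        }
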
      • static fromWeb(
        duplexStream: ReadableWritablePair,
        options?: Pick<DuplexOptions<Duplex>, 'signal' | 'allowHalfOpen' | 'decodeStrings' | 'encoding' | 'highWaterMark' | 'objectMode'>
        ): Duplex;

        A utility method for creating a Duplex from a web ReadableStream and WritableStream.
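
        A minimal sketch, assuming a simple in-memory ReadableStream/WritableStream pair stands in for a real source and sink:

        import { Duplex } from 'node:stream';

        const readable = new ReadableStream({
          start(controller) {
            controller.enqueue('hello');
            controller.close();
          },
        });
        const writable = new WritableStream({
          write(chunk) {
            console.log('sink received:', chunk);
          },
        });

        const duplex = Duplex.fromWeb({ readable, writable }, { encoding: 'utf8' });
        duplex.write('world'); // forwarded to the web WritableStream
        duplex.on('data', (chunk) => {
          console.log('read from web side:', chunk); // 'hello'
        });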

      • static toWeb(
        streamDuplex: ReadWriteStream

        A utility method for creating a web ReadableStream and WritableStream from a Duplex.
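
        A minimal sketch going the other direction, assuming a trivial Duplex whose read/write sides only log and echo:

        import { Duplex } from 'node:stream';

        const nodeDuplex = new Duplex({
          read() {
            this.push('from node');
            this.push(null);
          },
          write(chunk, encoding, callback) {
            console.log('node side received:', chunk.toString());
            callback();
          },
        });

        const { readable, writable } = Duplex.toWeb(nodeDuplex);

        const writer = writable.getWriter();
        await writer.write('to node');

        const reader = readable.getReader();
        const { value } = await reader.read();
        console.log('web side read:', Buffer.from(value).toString()); // 'from node'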

    • class PassThrough

      The stream.PassThrough class is a trivial implementation of a Transform stream that simply passes the input bytes across to the output. Its purpose is primarily for examples and testing, but there are some use cases where stream.PassThrough is useful as a building block for novel sorts of streams.

      • allowHalfOpen: boolean

        If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.

        This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.

      • readonly closed: boolean

        Is true after 'close' has been emitted.

      • destroyed: boolean

        Is true after readable.destroy() has been called.

      • readonly errored: null | Error

        Returns error if the stream has been destroyed with an error.

      • readable: boolean

        Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

      • readonly readableAborted: boolean

        Returns whether the stream was destroyed or errored before emitting 'end'.

      • readonly readableDidRead: boolean

        Returns whether 'data' has been emitted.

      • readonly readableEncoding: null | BufferEncoding

        Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

      • readonly readableEnded: boolean

        Becomes true when 'end' event is emitted.

      • readableFlowing: null | boolean

        This property reflects the current state of a Readable stream as described in the Three states section.

      • readonly readableHighWaterMark: number

        Returns the value of highWaterMark passed when creating this Readable.

      • readonly readableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

      • readonly readableObjectMode: boolean

        Getter for the property objectMode of a given Readable stream.

      • writable: boolean

        Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

      • readonly writableAborted: boolean

        Returns whether the stream was destroyed or errored before emitting 'finish'.

      • readonly writableCorked: number

        Number of times writable.uncork() needs to be called in order to fully uncork the stream.

      • readonly writableEnded: boolean

        Is true after writable.end() has been called. This property does not indicate whether the data has been flushed, for this use writable.writableFinished instead.

      • readonly writableFinished: boolean

        Is set to true immediately before the 'finish' event is emitted.

      • readonly writableHighWaterMark: number

        Returns the value of highWaterMark passed when creating this Writable.

      • readonly writableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

      • readonly writableNeedDrain: boolean

        Is true if the stream's buffer has been full and the stream will emit 'drain'.

      • readonly writableObjectMode: boolean

        Getter for the property objectMode of a given Writable stream.

      • callback: (error?: null | Error) => void
        ): void;
      • error: null | Error,
        callback: (error?: null | Error) => void
        ): void;
      • callback: (error?: null | Error) => void
        ): void;
      • ): void;
      • size: number
        ): void;
      • chunk: any,
        encoding: BufferEncoding,
        ): void;
      • chunk: any,
        encoding: BufferEncoding,
        callback: (error?: null | Error) => void
        ): void;
      • chunks: { chunk: any; encoding: BufferEncoding }[],
        callback: (error?: null | Error) => void
        ): void;
      • [Symbol.asyncDispose](): Promise<void>;

        Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

      • [Symbol.asyncIterator](): AsyncIterator<any>;
        @returns

        AsyncIterator to fully consume the stream.

      • [Symbol.for('nodejs.rejection')](
        error: Error,
        event: string | symbol,
        ...args: any[]
        ): void;

        The Symbol.for('nodejs.rejection') method is called in case a promise rejection happens when emitting an event and captureRejections is enabled on the emitter. It is possible to use events.captureRejectionSymbol in place of Symbol.for('nodejs.rejection').

        import { EventEmitter, captureRejectionSymbol } from 'node:events';
        
        class MyClass extends EventEmitter {
          constructor() {
            super({ captureRejections: true });
          }
        
          [captureRejectionSymbol](err, event, ...args) {
            console.log('rejection happened for', event, 'with', err, ...args);
            this.destroy(err);
          }
        
          destroy(err) {
            // Tear the resource down here.
          }
        }
        
      • addListener<E extends keyof DuplexEventMap>(
        eventName: E,
        listener: (...args: DuplexEventMap[E]) => void
        ): this;

        Alias for emitter.on(eventName, listener).

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • compose(
        stream: WritableStream | WritableStream<any> | TransformStream<any, any> | (source: any) => void,
        options?: Abortable
        ): Duplex;
        import { Readable } from 'node:stream';
        
        async function* splitToWords(source) {
          for await (const chunk of source) {
            const words = String(chunk).split(' ');
        
            for (const word of words) {
              yield word;
            }
          }
        }
        
        const wordsStream = Readable.from(['this is', 'compose as operator']).compose(splitToWords);
        const words = await wordsStream.toArray();
        
        console.log(words); // prints ['this', 'is', 'compose', 'as', 'operator']
        

        See stream.compose for more information.

        @returns

        a stream composed with the stream stream.

      • cork(): void;

        The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

        The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

        See also: writable.uncork(), writable._writev().

      • destroy(
        error?: Error
        ): this;

        Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

        Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

        Implementors should not override this method, but instead implement readable._destroy().

        @param error

        Error which will be passed as payload in 'error' event

      • drop(
        limit: number,
        options?: Abortable

        This method returns a new stream with the first limit chunks dropped from the start.

        @param limit

        the number of chunks to drop from the readable.

        @returns

        a stream with limit chunks dropped from the start.
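
        A minimal sketch (the numbers are illustrative):

        import { Readable } from 'node:stream';

        // Skip the first two chunks and keep the rest.
        const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
        console.log(rest); // [3, 4]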

      • emit<E extends keyof DuplexEventMap>(
        eventName: E,
        ...args: DuplexEventMap[E]
        ): boolean;

        Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

        Returns true if the event had listeners, false otherwise.

        import { EventEmitter } from 'node:events';
        const myEmitter = new EventEmitter();
        
        // First listener
        myEmitter.on('event', function firstListener() {
          console.log('Helloooo! first listener');
        });
        // Second listener
        myEmitter.on('event', function secondListener(arg1, arg2) {
          console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
        });
        // Third listener
        myEmitter.on('event', function thirdListener(...args) {
          const parameters = args.join(', ');
          console.log(`event with parameters ${parameters} in third listener`);
        });
        
        console.log(myEmitter.listeners('event'));
        
        myEmitter.emit('event', 1, 2, 3, 4, 5);
        
        // Prints:
        // [
        //   [Function: firstListener],
        //   [Function: secondListener],
        //   [Function: thirdListener]
        // ]
        // Helloooo! first listener
        // event with parameters 1, 2 in second listener
        // event with parameters 1, 2, 3, 4, 5 in third listener
        
        eventName: string | symbol,
        ...args: any[]
        ): boolean;
      • end(
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        chunk: any,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        chunk: any,
        encoding: BufferEncoding,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding if chunk is a string

      • eventNames(): string | symbol[];

        Returns an array listing the events for which the emitter has registered listeners.

        import { EventEmitter } from 'node:events';
        
        const myEE = new EventEmitter();
        myEE.on('foo', () => {});
        myEE.on('bar', () => {});
        
        const sym = Symbol('symbol');
        myEE.on(sym, () => {});
        
        console.log(myEE.eventNames());
        // Prints: [ 'foo', 'bar', Symbol(symbol) ]
        
      • every(
        fn: (data: any, options?: Abortable) => boolean | Promise<boolean>,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>
        ): Promise<boolean>;

        This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check whether all awaited return values are truthy. Once an fn call's awaited return value for a chunk is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for every one of the chunks.
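
        A minimal sketch (the numbers are illustrative):

        import { Readable } from 'node:stream';

        // Resolves to true only if every chunk satisfies the predicate.
        const allPositive = await Readable.from([1, 2, 3]).every((n) => n > 0);
        console.log(allPositive); // true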

      • filter(
        fn: (data: any, options?: Abortable) => boolean | Promise<boolean>,

        This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.

        @param fn

        a function to filter chunks from the stream. Async or not.

        @returns

        a stream filtered with the predicate fn.
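
        A minimal sketch (the numbers are illustrative):

        import { Readable } from 'node:stream';

        // Only chunks for which the predicate returns a truthy value pass through.
        const evens = await Readable.from([1, 2, 3, 4])
          .filter((n) => n % 2 === 0)
          .toArray();
        console.log(evens); // [2, 4]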

      • find<T>(
        fn: (data: any, options?: Abortable) => data is T,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>
        ): Promise<undefined | T>;

        This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

        fn: (data: any, options?: Abortable) => boolean | Promise<boolean>,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>
        ): Promise<any>;

        This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
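
        A minimal sketch of the short-circuiting lookup described above (the numbers are illustrative):

        import { Readable } from 'node:stream';

        // Resolves with the first matching chunk; the stream is then destroyed.
        const firstLarge = await Readable.from([1, 2, 30, 4]).find((n) => n > 10);
        console.log(firstLarge); // 30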

      • flatMap(
        fn: (data: any, options?: Abortable) => any,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>

        This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

        It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

        @param fn

        a function to map over every chunk in the stream. May be async. May be a stream or generator.

        @returns

        a stream flat-mapped with the function fn.

      • forEach(
        fn: (data: any, options?: Abortable) => void | Promise<void>,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>
        ): Promise<void>;

        This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.

        This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.

        This method is different from listening to the 'data' event in that it uses the 'readable' event in the underlying machinery and can limit the number of concurrent fn calls.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise for when the stream has finished.

      • getMaxListeners(): number;

        Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to events.defaultMaxListeners.

      • isPaused(): boolean;

        The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

        const readable = new stream.Readable();
        
        readable.isPaused(); // === false
        readable.pause();
        readable.isPaused(); // === true
        readable.resume();
        readable.isPaused(); // === false
        
      • iterator(
        options?: { destroyOnReturn?: boolean }
        ): AsyncIterator<any>;

        The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration.

      • listenerCount<E extends keyof DuplexEventMap>(
        eventName: E,
        listener?: (...args: DuplexEventMap[E]) => void
        ): number;

        Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

        @param eventName

        The name of the event being listened for

        @param listener

        The event handler function

        eventName: string | symbol,
        listener?: (...args: any[]) => void
        ): number;
      • listeners<E extends keyof DuplexEventMap>(
        eventName: E
        ): (...args: DuplexEventMap[E]) => void[];

        Returns a copy of the array of listeners for the event named eventName.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        console.log(util.inspect(server.listeners('connection')));
        // Prints: [ [Function] ]
        
        eventName: string | symbol
        ): (...args: any[]) => void[];
      • map(
        fn: (data: any, options?: Abortable) => any,

        This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.

        @param fn

        a function to map over every chunk in the stream. Async or not.

        @returns

        a stream mapped with the function fn.

      • off<E extends keyof DuplexEventMap>(
        eventName: E,
        listener: (...args: DuplexEventMap[E]) => void
        ): this;

        Alias for emitter.removeListener().

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • on<E extends keyof DuplexEventMap>(
        eventName: E,
        listener: (...args: DuplexEventMap[E]) => void
        ): this;

        Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.on('foo', () => console.log('a'));
        myEE.prependListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param eventName

        The name of the event.

        @param listener

        The callback function

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • once<E extends keyof DuplexEventMap>(
        eventName: E,
        listener: (...args: DuplexEventMap[E]) => void
        ): this;

        Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

        server.once('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.once('foo', () => console.log('a'));
        myEE.prependOnceListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param eventName

        The name of the event.

        @param listener

        The callback function

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • pause(): this;

        The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

        const readable = getReadableStreamSomehow();
        readable.on('data', (chunk) => {
          console.log(`Received ${chunk.length} bytes of data.`);
          readable.pause();
          console.log('There will be no additional data for 1 second.');
          setTimeout(() => {
            console.log('Now data will start flowing again.');
            readable.resume();
          }, 1000);
        });
        

        The readable.pause() method has no effect if there is a 'readable' event listener.

      • pipe<T extends WritableStream>(
        destination: T,
        options?: PipeOptions
        ): T;
• prependListener<E extends keyof DuplexEventMap>(
eventName: E,
        listener: (...args: DuplexEventMap[E]) => void
        ): this;

        Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.prependListener('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param eventName

        The name of the event.

        @param listener

        The callback function

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
• prependOnceListener<E extends keyof DuplexEventMap>(
eventName: E,
        listener: (...args: DuplexEventMap[E]) => void
        ): this;

        Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

        server.prependOnceListener('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param eventName

        The name of the event.

        @param listener

        The callback function

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
• push(
chunk: any,
        encoding?: BufferEncoding
        ): boolean;
      • rawListeners<E extends keyof DuplexEventMap>(
        eventName: E
        ): (...args: DuplexEventMap[E]) => void[];

        Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

        import { EventEmitter } from 'node:events';
        const emitter = new EventEmitter();
        emitter.once('log', () => console.log('log once'));
        
        // Returns a new Array with a function `onceWrapper` which has a property
        // `listener` which contains the original listener bound above
        const listeners = emitter.rawListeners('log');
        const logFnWrapper = listeners[0];
        
        // Logs "log once" to the console and does not unbind the `once` event
        logFnWrapper.listener();
        
        // Logs "log once" to the console and removes the listener
        logFnWrapper();
        
        emitter.on('log', () => console.log('log persistently'));
        // Will return a new Array with a single function bound by `.on()` above
        const newListeners = emitter.rawListeners('log');
        
        // Logs "log persistently" twice
        newListeners[0]();
        emitter.emit('log');
        
        eventName: string | symbol
        ): (...args: any[]) => void[];
• read(
size?: number
        ): any;

        The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.

        The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

        If the size argument is not specified, all of the data contained in the internal buffer will be returned.

        The size argument must be less than or equal to 1 GiB.

        The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

        const readable = getReadableStreamSomehow();
        
        // 'readable' may be triggered multiple times as data is buffered in
        readable.on('readable', () => {
          let chunk;
          console.log('Stream is readable (new data received in buffer)');
          // Use a loop to make sure we read all currently available data
          while (null !== (chunk = readable.read())) {
            console.log(`Read ${chunk.length} bytes of data...`);
          }
        });
        
        // 'end' will be triggered once when there is no more data available
        readable.on('end', () => {
          console.log('Reached end of stream.');
        });
        

Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file, .read() may return null even though more data is still to come, because everything buffered so far has been consumed and the rest has not yet been buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally, the 'end' event will be emitted when there is no more data to come.

        Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

        const chunks = [];
        
        readable.on('readable', () => {
          let chunk;
          while (null !== (chunk = readable.read())) {
            chunks.push(chunk);
          }
        });
        
        readable.on('end', () => {
          const content = chunks.join('');
        });
        

        A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.

        If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.

        Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.

        @param size

        Optional argument to specify how much data to read.

      • reduce<T>(
        fn: (previous: any, data: any, options?: Abortable) => T
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and pass it to the readable.map method instead.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @returns

        a promise for the final value of the reduction.

        reduce<T>(
        fn: (previous: T, data: any, options?: Abortable) => T,
        initial: T,
        options?: Abortable
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and pass it to the readable.map method instead.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @param initial

        the initial value to use in the reduction.

        @returns

        a promise for the final value of the reduction.
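
For illustration, a minimal sketch (using Readable.from to build an object-mode stream) of a reduction with an explicit initial value:

import { Readable } from 'node:stream';

// Sum the chunks; the initial value 0 seeds the accumulator, so an empty
// stream would resolve to 0 instead of rejecting.
const total = await Readable.from([1, 2, 3, 4]).reduce((sum, n) => sum + n, 0);
console.log(total); // 10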

• removeAllListeners<E extends keyof DuplexEventMap>(
eventName?: E
        ): this;

        Removes all listeners, or those of the specified eventName.

        It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

        Returns a reference to the EventEmitter, so that calls can be chained.

        eventName?: string | symbol
        ): this;
      • removeListener<E extends keyof DuplexEventMap>(
        eventName: E,
        listener: (...args: DuplexEventMap[E]) => void
        ): this;

        Removes the specified listener from the listener array for the event named eventName.

        const callback = (stream) => {
          console.log('someone connected!');
        };
        server.on('connection', callback);
        // ...
        server.removeListener('connection', callback);
        

        removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

        Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

        import { EventEmitter } from 'node:events';
        class MyEmitter extends EventEmitter {}
        const myEmitter = new MyEmitter();
        
        const callbackA = () => {
          console.log('A');
          myEmitter.removeListener('event', callbackB);
        };
        
        const callbackB = () => {
          console.log('B');
        };
        
        myEmitter.on('event', callbackA);
        
        myEmitter.on('event', callbackB);
        
        // callbackA removes listener callbackB but it will still be called.
        // Internal listener array at time of emit [callbackA, callbackB]
        myEmitter.emit('event');
        // Prints:
        //   A
        //   B
        
        // callbackB is now removed.
        // Internal listener array [callbackA]
        myEmitter.emit('event');
        // Prints:
        //   A
        

        Because listeners are managed using an internal array, calling this will change the position indexes of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

        When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

        import { EventEmitter } from 'node:events';
        const ee = new EventEmitter();
        
        function pong() {
          console.log('pong');
        }
        
        ee.on('ping', pong);
        ee.once('ping', pong);
        ee.removeListener('ping', pong);
        
        ee.emit('ping');
        ee.emit('ping');
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • resume(): this;

        The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

        The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

        getReadableStreamSomehow()
          .resume()
          .on('end', () => {
            console.log('Reached the end, but did not read anything.');
          });
        

        The readable.resume() method has no effect if there is a 'readable' event listener.

• setDefaultEncoding(
encoding: BufferEncoding
        ): this;

        The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

        @param encoding

        The new default encoding

• setEncoding(
encoding: BufferEncoding
        ): this;

        The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

        By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

        The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

        const readable = getReadableStreamSomehow();
        readable.setEncoding('utf8');
        readable.on('data', (chunk) => {
          assert.equal(typeof chunk, 'string');
          console.log('Got %d characters of string data:', chunk.length);
        });
        
        @param encoding

        The encoding to use.

• setMaxListeners(
n: number
        ): this;

        By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

        Returns a reference to the EventEmitter, so that calls can be chained.

• some(
fn: (data: any, options?: Abortable) => boolean | Promise<boolean>,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>
        ): Promise<boolean>;

This method is similar to Array.prototype.some and calls fn on each chunk in the stream until an awaited return value is truthy. Once an fn call on a chunk resolves to a truthy value, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
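
A minimal sketch (object-mode stream via Readable.from; a Duplex exposes the same operator):

import { Readable } from 'node:stream';

// Resolves to true as soon as one chunk satisfies the predicate;
// the rest of the stream is then destroyed.
const hasLarge = await Readable.from([1, 2, 3, 100]).some((n) => n > 50);
console.log(hasLarge); // true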

• take(
limit: number,
        options?: Abortable

        This method returns a new stream with the first limit chunks.

        @param limit

        the number of chunks to take from the readable.

        @returns

        a stream with limit chunks taken.
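
A minimal sketch, pairing take with toArray to collect the kept chunks:

import { Readable } from 'node:stream';

// Only the first two chunks pass through; the rest of the source is discarded.
const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
console.log(firstTwo); // [1, 2]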

• toArray(
options?: Abortable
        ): Promise<any[]>;

        This method allows easily obtaining the contents of a stream.

        As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

        @returns

        a promise containing an array with the contents of the stream.
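
A minimal sketch for a small stream where buffering everything in memory is acceptable:

import { Readable } from 'node:stream';

// Collect every chunk into a single array.
const all = await Readable.from([1, 2, 3]).toArray();
console.log(all); // [1, 2, 3]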

      • uncork(): void;

        The writable.uncork() method flushes all data buffered since cork was called.

        When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

        stream.cork();
        stream.write('some ');
        stream.write('data ');
        process.nextTick(() => stream.uncork());
        

        If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

        stream.cork();
        stream.write('some ');
        stream.cork();
        stream.write('data ');
        process.nextTick(() => {
          stream.uncork();
          // The data will not be flushed until uncork() is called a second time.
          stream.uncork();
        });
        

        See also: writable.cork().

• unpipe(
destination?: WritableStream
        ): this;

        The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

        If the destination is not specified, then all pipes are detached.

        If the destination is specified, but no pipe is set up for it, then the method does nothing.

        import fs from 'node:fs';
        const readable = getReadableStreamSomehow();
        const writable = fs.createWriteStream('file.txt');
        // All the data from readable goes into 'file.txt',
        // but only for the first second.
        readable.pipe(writable);
        setTimeout(() => {
          console.log('Stop writing to file.txt.');
          readable.unpipe(writable);
          console.log('Manually close the file stream.');
          writable.end();
        }, 1000);
        
        @param destination

        Optional specific stream to unpipe

• unshift(
chunk: any,
        encoding?: BufferEncoding
        ): void;

        Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

        The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

        The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

        Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

        // Pull off a header delimited by \n\n.
        // Use unshift() if we get too much.
        // Call the callback with (error, header, stream).
        import { StringDecoder } from 'node:string_decoder';
        function parseHeader(stream, callback) {
          stream.on('error', callback);
          stream.on('readable', onReadable);
          const decoder = new StringDecoder('utf8');
          let header = '';
          function onReadable() {
            let chunk;
            while (null !== (chunk = stream.read())) {
              const str = decoder.write(chunk);
              if (str.includes('\n\n')) {
                // Found the header boundary.
                const split = str.split(/\n\n/);
                header += split.shift();
                const remaining = split.join('\n\n');
                const buf = Buffer.from(remaining, 'utf8');
                stream.removeListener('error', callback);
                // Remove the 'readable' listener before unshifting.
                stream.removeListener('readable', onReadable);
                if (buf.length)
                  stream.unshift(buf);
                // Now the body of the message can be read from the stream.
                callback(null, header, stream);
                return;
              }
              // Still reading the header.
              header += str;
            }
          }
        }
        

        Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.

        @param chunk

        Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.

        @param encoding

        Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.

• wrap(
stream: ReadableStream
        ): this;

        Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

        When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

        It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

        import { OldReader } from './old-api-module.js';
        import { Readable } from 'node:stream';
        const oreader = new OldReader();
        const myReader = new Readable().wrap(oreader);
        
        myReader.on('readable', () => {
          myReader.read(); // etc.
        });
        
        @param stream

        An "old style" readable stream

• write(
chunk: any,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

        chunk: any,
        encoding: BufferEncoding,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding, if chunk is a string.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

      • static from(
        src: string | Blob | Promise<any> | ReadableStream | WritableStream | Iterable<any, any, any> | AsyncIterable<any, any, any> | (source: AsyncIterable<any>) => AsyncIterable<any> | (source: AsyncIterable<any>) => Promise<void> | ReadableWritablePair<any, any> | ReadableStream<any> | WritableStream<any>
        ): Duplex;

        A utility method for creating duplex streams.

        • Stream converts writable stream into writable Duplex and readable stream to Duplex.
        • Blob converts into readable Duplex.
        • string converts into readable Duplex.
        • ArrayBuffer converts into readable Duplex.
        • AsyncIterable converts into a readable Duplex. Cannot yield null.
        • AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
• AsyncFunction converts into a writable Duplex. Must return either null or undefined.
        • Object ({ writable, readable }) converts readable and writable into Stream and then combines them into Duplex where the Duplex will write to the writable and read from the readable.
        • Promise converts into readable Duplex. Value null is ignored.
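
As a sketch of the AsyncGeneratorFunction case (the generator receives the written chunks as source, and its yielded values become the readable side):

import { Duplex } from 'node:stream';

// A transform-like Duplex built from an async generator function.
const upperCaser = Duplex.from(async function* (source) {
  for await (const chunk of source) {
    yield String(chunk).toUpperCase();
  }
});

upperCaser.on('data', (chunk) => console.log(chunk)); // 'HELLO'
upperCaser.write('hello');
upperCaser.end();
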
      • static fromWeb(
        duplexStream: ReadableWritablePair,
        options?: Pick<DuplexOptions<Duplex>, 'signal' | 'allowHalfOpen' | 'decodeStrings' | 'encoding' | 'highWaterMark' | 'objectMode'>
        ): Duplex;

        A utility method for creating a Duplex from a web ReadableStream and WritableStream.
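
A minimal sketch, assuming the web-stream constructors from 'node:stream/web' and objectMode so that string chunks pass through unchanged:

import { Duplex } from 'node:stream';
import { ReadableStream, WritableStream } from 'node:stream/web';

const readable = new ReadableStream({
  start(controller) {
    controller.enqueue('hello');
    controller.close();
  },
});

const writable = new WritableStream({
  write(chunk) {
    console.log('written:', chunk);
  },
});

// The pair is wrapped in a single Node.js Duplex.
const duplex = Duplex.fromWeb({ readable, writable }, { objectMode: true });
duplex.on('data', (chunk) => console.log('read:', chunk)); // read: hello
duplex.write('world'); // written: world
duplex.end();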

      • static toWeb(
        streamDuplex: ReadWriteStream

        A utility method for creating a web ReadableStream and WritableStream from a Duplex.
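
A minimal sketch going the other way, assuming an object-mode Duplex whose read side pushes a single chunk:

import { Duplex } from 'node:stream';

const duplex = new Duplex({
  objectMode: true,
  read() {
    this.push('world');
    this.push(null);
  },
  write(chunk, encoding, callback) {
    console.log('written:', chunk);
    callback();
  },
});

// toWeb returns the web ReadableStream/WritableStream counterparts.
const { readable, writable } = Duplex.toWeb(duplex);
writable.getWriter().write('hello'); // written: hello
const { value } = await readable.getReader().read();
console.log('read:', value); // read: world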

    • class Readable

      • readonly closed: boolean

        Is true after 'close' has been emitted.

      • destroyed: boolean

        Is true after readable.destroy() has been called.

      • readonly errored: null | Error

        Returns error if the stream has been destroyed with an error.

      • readable: boolean

        Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

      • readonly readableAborted: boolean

        Returns whether the stream was destroyed or errored before emitting 'end'.

      • readonly readableDidRead: boolean

        Returns whether 'data' has been emitted.

      • readonly readableEncoding: null | BufferEncoding

        Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

      • readonly readableEnded: boolean

        Becomes true when 'end' event is emitted.

      • readableFlowing: null | boolean

        This property reflects the current state of a Readable stream as described in the Three states section.

      • readonly readableHighWaterMark: number

        Returns the value of highWaterMark passed when creating this Readable.

      • readonly readableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

      • readonly readableObjectMode: boolean

        Getter for the property objectMode of a given Readable stream.

• _construct(
callback: (error?: null | Error) => void
        ): void;
• _destroy(
error: null | Error,
        callback: (error?: null | Error) => void
        ): void;
• _read(
size: number
        ): void;
      • [Symbol.asyncDispose](): Promise<void>;

        Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

      • [Symbol.asyncIterator](): AsyncIterator<any>;
        @returns

        AsyncIterator to fully consume the stream.
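
For example, consuming a stream with for await...of, which uses this iterator under the hood:

import { Readable } from 'node:stream';

const readable = Readable.from(['one', 'two', 'three']);

// Each iteration yields the next chunk; exiting the loop destroys the stream.
for await (const chunk of readable) {
  console.log(chunk);
}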

• [captureRejectionSymbol](
error: Error,
        event: string | symbol,
        ...args: any[]
        ): void;

        The Symbol.for('nodejs.rejection') method is called in case a promise rejection happens when emitting an event and captureRejections is enabled on the emitter. It is possible to use events.captureRejectionSymbol in place of Symbol.for('nodejs.rejection').

        import { EventEmitter, captureRejectionSymbol } from 'node:events';
        
        class MyClass extends EventEmitter {
          constructor() {
            super({ captureRejections: true });
          }
        
          [captureRejectionSymbol](err, event, ...args) {
            console.log('rejection happened for', event, 'with', err, ...args);
            this.destroy(err);
          }
        
          destroy(err) {
            // Tear the resource down here.
          }
        }
        
      • addListener<E extends keyof ReadableEventMap>(
        eventName: E,
        listener: (...args: ReadableEventMap[E]) => void
        ): this;

        Alias for emitter.on(eventName, listener).

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
• compose(
stream: WritableStream | WritableStream<any> | TransformStream<any, any> | (source: any) => void,
        options?: Abortable
        ): Duplex;
        import { Readable } from 'node:stream';
        
        async function* splitToWords(source) {
          for await (const chunk of source) {
            const words = String(chunk).split(' ');
        
            for (const word of words) {
              yield word;
            }
          }
        }
        
        const wordsStream = Readable.from(['this is', 'compose as operator']).compose(splitToWords);
        const words = await wordsStream.toArray();
        
        console.log(words); // prints ['this', 'is', 'compose', 'as', 'operator']
        

        See stream.compose for more information.

        @returns

        a stream composed with the stream stream.

• destroy(
error?: Error
        ): this;

        Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

        Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

        Implementors should not override this method, but instead implement readable._destroy().

        @param error

        Error which will be passed as payload in 'error' event

• drop(
limit: number,
        options?: Abortable

        This method returns a new stream with the first limit chunks dropped from the start.

        @param limit

        the number of chunks to drop from the readable.

        @returns

        a stream with limit chunks dropped from the start.
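
A minimal sketch, combining drop with toArray:

import { Readable } from 'node:stream';

// The first two chunks are discarded; the remainder flows through.
const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
console.log(rest); // [3, 4]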

      • emit<E extends keyof ReadableEventMap>(
        eventName: E,
        ...args: ReadableEventMap[E]
        ): boolean;

        Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

        Returns true if the event had listeners, false otherwise.

        import { EventEmitter } from 'node:events';
        const myEmitter = new EventEmitter();
        
        // First listener
        myEmitter.on('event', function firstListener() {
          console.log('Helloooo! first listener');
        });
        // Second listener
        myEmitter.on('event', function secondListener(arg1, arg2) {
          console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
        });
        // Third listener
        myEmitter.on('event', function thirdListener(...args) {
          const parameters = args.join(', ');
          console.log(`event with parameters ${parameters} in third listener`);
        });
        
        console.log(myEmitter.listeners('event'));
        
        myEmitter.emit('event', 1, 2, 3, 4, 5);
        
        // Prints:
        // [
        //   [Function: firstListener],
        //   [Function: secondListener],
        //   [Function: thirdListener]
        // ]
        // Helloooo! first listener
        // event with parameters 1, 2 in second listener
        // event with parameters 1, 2, 3, 4, 5 in third listener
        
        eventName: string | symbol,
        ...args: any[]
        ): boolean;
      • eventNames(): string | symbol[];

        Returns an array listing the events for which the emitter has registered listeners.

        import { EventEmitter } from 'node:events';
        
        const myEE = new EventEmitter();
        myEE.on('foo', () => {});
        myEE.on('bar', () => {});
        
        const sym = Symbol('symbol');
        myEE.on(sym, () => {});
        
        console.log(myEE.eventNames());
        // Prints: [ 'foo', 'bar', Symbol(symbol) ]
        
• every(
fn: (data: any, options?: Abortable) => boolean | Promise<boolean>,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>
        ): Promise<boolean>;

This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check whether every awaited return value is truthy. Once an fn call on a chunk resolves to a falsy value, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for every one of the chunks.
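
A minimal sketch:

import { Readable } from 'node:stream';

// Resolves to false as soon as one chunk fails the predicate.
const allPositive = await Readable.from([1, 2, -3, 4]).every((n) => n > 0);
console.log(allPositive); // false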

• filter(
fn: (data: any, options?: Abortable) => boolean | Promise<boolean>,

        This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.

        @param fn

        a function to filter chunks from the stream. Async or not.

        @returns

        a stream filtered with the predicate fn.
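
A minimal sketch, collecting the surviving chunks with toArray:

import { Readable } from 'node:stream';

// Only chunks for which the predicate returns a truthy value are kept.
const evens = await Readable.from([1, 2, 3, 4]).filter((n) => n % 2 === 0).toArray();
console.log(evens); // [2, 4]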

      • find<T>(
        fn: (data: any, options?: Abortable) => data is T,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>
        ): Promise<undefined | T>;

This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

        fn: (data: any, options?: Abortable) => boolean | Promise<boolean>,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>
        ): Promise<any>;

This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
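
A minimal sketch:

import { Readable } from 'node:stream';

// Resolves with the first matching chunk and destroys the rest of the stream.
const firstLarge = await Readable.from([1, 2, 30, 4]).find((n) => n > 10);
console.log(firstLarge); // 30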

• flatMap(
fn: (data: any, options?: Abortable) => any,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>

        This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

        It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

        @param fn

        a function to map over every chunk in the stream. May be async. May be a stream or generator.

        @returns

        a stream flat-mapped with the function fn.
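
A minimal sketch where each chunk maps to an iterable that is flattened into the result stream:

import { Readable } from 'node:stream';

// 'ab' -> ['a', 'b'], 'cd' -> ['c', 'd'], flattened into one stream.
const letters = await Readable.from(['ab', 'cd'])
  .flatMap((chunk) => chunk.split(''))
  .toArray();
console.log(letters); // ['a', 'b', 'c', 'd']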

• forEach(
fn: (data: any, options?: Abortable) => void | Promise<void>,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>
        ): Promise<void>;

        This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.

        This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.

This method is different from listening to the 'data' event in that it uses the 'readable' event in the underlying machinery and can limit the number of concurrent fn calls.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise for when the stream has finished.
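
A minimal sketch with an async callback and a concurrency limit:

import { Readable } from 'node:stream';

// Up to two fn calls run at a time; the returned promise settles when the
// stream has been fully consumed.
await Readable.from([1, 2, 3, 4]).forEach(
  async (n) => {
    console.log(n);
  },
  { concurrency: 2 },
);
console.log('done');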

      • getMaxListeners(): number;

        Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to events.defaultMaxListeners.

      • isPaused(): boolean;

        The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

        const readable = new stream.Readable();
        
        readable.isPaused(); // === false
        readable.pause();
        readable.isPaused(); // === true
        readable.resume();
        readable.isPaused(); // === false
        
• iterator(
options?: { destroyOnReturn?: boolean }
): AsyncIterator<any>;

The iterator created by this method gives users the option to cancel the destruction of the stream when the for await...of loop is exited by return, break, or throw. The stream will still be destroyed if it emitted an error during iteration.

• listenerCount<E extends keyof ReadableEventMap>(
eventName: E,
        listener?: (...args: ReadableEventMap[E]) => void
        ): number;

        Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

        @param eventName

        The name of the event being listened for

        @param listener

        The event handler function

        eventName: string | symbol,
        listener?: (...args: any[]) => void
        ): number;
      • listeners<E extends keyof ReadableEventMap>(
        eventName: E
        ): (...args: ReadableEventMap[E]) => void[];

        Returns a copy of the array of listeners for the event named eventName.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        console.log(util.inspect(server.listeners('connection')));
        // Prints: [ [Function] ]
        
        eventName: string | symbol
        ): (...args: any[]) => void[];
• map(
fn: (data: any, options?: Abortable) => any,

        This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.

        @param fn

        a function to map over every chunk in the stream. Async or not.

        @returns

        a stream mapped with the function fn.

      • off<E extends keyof ReadableEventMap>(
        eventName: E,
        listener: (...args: ReadableEventMap[E]) => void
        ): this;

        Alias for emitter.removeListener().

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • on<E extends keyof ReadableEventMap>(
        eventName: E,
        listener: (...args: ReadableEventMap[E]) => void
        ): this;

        Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.on('foo', () => console.log('a'));
        myEE.prependListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param eventName

        The name of the event.

        @param listener

        The callback function

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • once<E extends keyof ReadableEventMap>(
        eventName: E,
        listener: (...args: ReadableEventMap[E]) => void
        ): this;

        Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

        server.once('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.once('foo', () => console.log('a'));
        myEE.prependOnceListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param eventName

        The name of the event.

        @param listener

        The callback function

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • pause(): this;

        The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

        const readable = getReadableStreamSomehow();
        readable.on('data', (chunk) => {
          console.log(`Received ${chunk.length} bytes of data.`);
          readable.pause();
          console.log('There will be no additional data for 1 second.');
          setTimeout(() => {
            console.log('Now data will start flowing again.');
            readable.resume();
          }, 1000);
        });
        

        The readable.pause() method has no effect if there is a 'readable' event listener.

      • pipe<T extends WritableStream>(
        destination: T,
        options?: PipeOptions
        ): T;
• prependListener<E extends keyof ReadableEventMap>(
eventName: E,
        listener: (...args: ReadableEventMap[E]) => void
        ): this;

        Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.prependListener('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param eventName

        The name of the event.

        @param listener

        The callback function

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
• prependOnceListener<E extends keyof ReadableEventMap>(
eventName: E,
        listener: (...args: ReadableEventMap[E]) => void
        ): this;

        Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

        server.prependOnceListener('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param eventName

        The name of the event.

        @param listener

        The callback function

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
• push(
chunk: any,
        encoding?: BufferEncoding
        ): boolean;
      • rawListeners<E extends keyof ReadableEventMap>(
        eventName: E
        ): (...args: ReadableEventMap[E]) => void[];

        Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

        import { EventEmitter } from 'node:events';
        const emitter = new EventEmitter();
        emitter.once('log', () => console.log('log once'));
        
        // Returns a new Array with a function `onceWrapper` which has a property
        // `listener` which contains the original listener bound above
        const listeners = emitter.rawListeners('log');
        const logFnWrapper = listeners[0];
        
        // Logs "log once" to the console and does not unbind the `once` event
        logFnWrapper.listener();
        
        // Logs "log once" to the console and removes the listener
        logFnWrapper();
        
        emitter.on('log', () => console.log('log persistently'));
        // Will return a new Array with a single function bound by `.on()` above
        const newListeners = emitter.rawListeners('log');
        
        // Logs "log persistently" twice
        newListeners[0]();
        emitter.emit('log');
        
        eventName: string | symbol
        ): (...args: any[]) => void[];
• read(
size?: number
        ): any;

        The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.

        The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

        If the size argument is not specified, all of the data contained in the internal buffer will be returned.

        The size argument must be less than or equal to 1 GiB.

        The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

        const readable = getReadableStreamSomehow();
        
        // 'readable' may be triggered multiple times as data is buffered in
        readable.on('readable', () => {
          let chunk;
          console.log('Stream is readable (new data received in buffer)');
          // Use a loop to make sure we read all currently available data
          while (null !== (chunk = readable.read())) {
            console.log(`Read ${chunk.length} bytes of data...`);
          }
        });
        
        // 'end' will be triggered once when there is no more data available
        readable.on('end', () => {
          console.log('Reached end of stream.');
        });
        

Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file, .read() may return null even though more data is still to come, because everything buffered so far has been consumed and the rest has not yet been buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally, the 'end' event will be emitted when there is no more data to come.

        Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

        const chunks = [];
        
        readable.on('readable', () => {
          let chunk;
          while (null !== (chunk = readable.read())) {
            chunks.push(chunk);
          }
        });
        
        readable.on('end', () => {
          const content = chunks.join('');
        });
        

        A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.

        If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.

        Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.

        @param size

        Optional argument to specify how much data to read.

      • reduce<T>(
        fn: (previous: any, data: any, options?: Abortable) => T
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and pass it to the readable.map method instead.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @returns

        a promise for the final value of the reduction.

        reduce<T>(
        fn: (previous: T, data: any, options?: Abortable) => T,
        initial: T,
        options?: Abortable
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and pass it to the readable.map method instead.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @param initial

        the initial value to use in the reduction.

        @returns

        a promise for the final value of the reduction.

• removeAllListeners<E extends keyof ReadableEventMap>(
eventName?: E
        ): this;

        Removes all listeners, or those of the specified eventName.

        It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

        Returns a reference to the EventEmitter, so that calls can be chained.

        eventName?: string | symbol
        ): this;
• removeListener<E extends keyof ReadableEventMap>(
eventName: E,
        listener: (...args: ReadableEventMap[E]) => void
        ): this;

        Removes the specified listener from the listener array for the event named eventName.

        const callback = (stream) => {
          console.log('someone connected!');
        };
        server.on('connection', callback);
        // ...
        server.removeListener('connection', callback);
        

        removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

        Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

        import { EventEmitter } from 'node:events';
        class MyEmitter extends EventEmitter {}
        const myEmitter = new MyEmitter();
        
        const callbackA = () => {
          console.log('A');
          myEmitter.removeListener('event', callbackB);
        };
        
        const callbackB = () => {
          console.log('B');
        };
        
        myEmitter.on('event', callbackA);
        
        myEmitter.on('event', callbackB);
        
        // callbackA removes listener callbackB but it will still be called.
        // Internal listener array at time of emit [callbackA, callbackB]
        myEmitter.emit('event');
        // Prints:
        //   A
        //   B
        
        // callbackB is now removed.
        // Internal listener array [callbackA]
        myEmitter.emit('event');
        // Prints:
        //   A
        

        Because listeners are managed using an internal array, calling this will change the position indexes of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

        When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

        import { EventEmitter } from 'node:events';
        const ee = new EventEmitter();
        
        function pong() {
          console.log('pong');
        }
        
        ee.on('ping', pong);
        ee.once('ping', pong);
        ee.removeListener('ping', pong);
        
        ee.emit('ping');
        ee.emit('ping');
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • resume(): this;

        The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

        The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

        getReadableStreamSomehow()
          .resume()
          .on('end', () => {
            console.log('Reached the end, but did not read anything.');
          });
        

        The readable.resume() method has no effect if there is a 'readable' event listener.

      • setEncoding(
        encoding: BufferEncoding
        ): this;

        The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

        By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

        The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

        const readable = getReadableStreamSomehow();
        readable.setEncoding('utf8');
        readable.on('data', (chunk) => {
          assert.equal(typeof chunk, 'string');
          console.log('Got %d characters of string data:', chunk.length);
        });
        
        @param encoding

        The encoding to use.

      • setMaxListeners(
        n: number
        ): this;

        By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

        Returns a reference to the EventEmitter, so that calls can be chained.

      • some(
        fn: (data: any, options?: Abortable) => boolean | Promise<boolean>,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>
        ): Promise<boolean>;

        This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is truthy. Once an fn call's awaited return value for a chunk is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
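
        A minimal usage sketch (illustrative values):

        import { Readable } from 'node:stream';

        const hasLarge = await Readable.from([1, 2, 3, 4]).some((n) => n > 3);
        console.log(hasLarge); // true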

      • take(
        limit: number,
        options?: Abortable

        This method returns a new stream with the first limit chunks.

        @param limit

        the number of chunks to take from the readable.

        @returns

        a stream with limit chunks taken.
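
        A minimal usage sketch (illustrative values):

        import { Readable } from 'node:stream';

        const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
        console.log(firstTwo); // [ 1, 2 ]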

      • toArray(
        options?: Abortable
        ): Promise<any[]>;

        This method allows easily obtaining the contents of a stream.

        As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

        @returns

        a promise containing an array with the contents of the stream.
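
        A minimal usage sketch (illustrative values):

        import { Readable } from 'node:stream';

        const chunks = await Readable.from([1, 2, 3]).toArray();
        console.log(chunks); // [ 1, 2, 3 ]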

      • unpipe(
        destination?: WritableStream
        ): this;

        The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

        If the destination is not specified, then all pipes are detached.

        If the destination is specified, but no pipe is set up for it, then the method does nothing.

        import fs from 'node:fs';
        const readable = getReadableStreamSomehow();
        const writable = fs.createWriteStream('file.txt');
        // All the data from readable goes into 'file.txt',
        // but only for the first second.
        readable.pipe(writable);
        setTimeout(() => {
          console.log('Stop writing to file.txt.');
          readable.unpipe(writable);
          console.log('Manually close the file stream.');
          writable.end();
        }, 1000);
        
        @param destination

        Optional specific stream to unpipe

      • unshift(
        chunk: any,
        encoding?: BufferEncoding
        ): void;

        Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

        The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

        The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

        Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

        // Pull off a header delimited by \n\n.
        // Use unshift() if we get too much.
        // Call the callback with (error, header, stream).
        import { StringDecoder } from 'node:string_decoder';
        function parseHeader(stream, callback) {
          stream.on('error', callback);
          stream.on('readable', onReadable);
          const decoder = new StringDecoder('utf8');
          let header = '';
          function onReadable() {
            let chunk;
            while (null !== (chunk = stream.read())) {
              const str = decoder.write(chunk);
              if (str.includes('\n\n')) {
                // Found the header boundary.
                const split = str.split(/\n\n/);
                header += split.shift();
                const remaining = split.join('\n\n');
                const buf = Buffer.from(remaining, 'utf8');
                stream.removeListener('error', callback);
                // Remove the 'readable' listener before unshifting.
                stream.removeListener('readable', onReadable);
                if (buf.length)
                  stream.unshift(buf);
                // Now the body of the message can be read from the stream.
                callback(null, header, stream);
                return;
              }
              // Still reading the header.
              header += str;
            }
          }
        }
        

        Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.

        @param chunk

        Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.

        @param encoding

        Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.

      • wrap(
        stream: ReadableStream
        ): this;

        Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

        When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

        It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

        import { OldReader } from './old-api-module.js';
        import { Readable } from 'node:stream';
        const oreader = new OldReader();
        const myReader = new Readable().wrap(oreader);
        
        myReader.on('readable', () => {
          myReader.read(); // etc.
        });
        
        @param stream

        An "old style" readable stream

      • static from(
        iterable: Iterable<any, any, any> | AsyncIterable<any, any, any>,

        A utility method for creating Readable Streams out of iterators.

        @param iterable

        Object implementing the Symbol.asyncIterator or Symbol.iterator iterable protocol. Emits an 'error' event if a null value is passed.

        @param options

        Options provided to new stream.Readable([options]). By default, Readable.from() will set options.objectMode to true, unless this is explicitly opted out by setting options.objectMode to false.
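
        A minimal usage sketch (the generator here is illustrative):

        import { Readable } from 'node:stream';

        async function* generate() {
          yield 'hello';
          yield 'streams';
        }

        const readable = Readable.from(generate());
        readable.on('data', (chunk) => {
          console.log(chunk);
        });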

      • static fromWeb(
        readableStream: ReadableStream,
        options?: Pick<ReadableOptions<Readable>, 'signal' | 'encoding' | 'highWaterMark' | 'objectMode'>

        A utility method for creating a Readable from a web ReadableStream.
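
        A minimal usage sketch (the web ReadableStream here is illustrative):

        import { Readable } from 'node:stream';

        const webStream = new ReadableStream({
          start(controller) {
            controller.enqueue(new TextEncoder().encode('hello'));
            controller.close();
          },
        });

        const nodeReadable = Readable.fromWeb(webStream);
        nodeReadable.on('data', (chunk) => {
          console.log(chunk.toString()); // 'hello'
        });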

      • static isDisturbed(
        stream: ReadableStream | ReadableStream<any>
        ): boolean;

        Returns whether the stream has been read from or cancelled.

      • static toWeb(
        streamReadable: ReadableStream,

        A utility method for creating a web ReadableStream from a Readable.
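
        A minimal usage sketch (illustrative values):

        import { Readable } from 'node:stream';

        const webStream = Readable.toWeb(Readable.from(['a', 'b', 'c']));
        const reader = webStream.getReader();

        console.log(await reader.read()); // { value: 'a', done: false }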

    • class Transform

      Transform streams are Duplex streams where the output is in some way related to the input. Like all Duplex streams, Transform streams implement both the Readable and Writable interfaces.

      Examples of Transform streams include:

      • zlib streams
      • crypto streams
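
      As an illustrative sketch (not part of the reference above), a Transform that upper-cases incoming text chunks:

      import { Transform } from 'node:stream';

      const upperCase = new Transform({
        transform(chunk, encoding, callback) {
          // Pass the transformed chunk on to the readable side.
          callback(null, chunk.toString().toUpperCase());
        },
      });

      process.stdin.pipe(upperCase).pipe(process.stdout);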
      • allowHalfOpen: boolean

        If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.

        This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.

      • readonly closed: boolean

        Is true after 'close' has been emitted.

      • destroyed: boolean

        Is true after readable.destroy() has been called.

      • readonly errored: null | Error

        Returns error if the stream has been destroyed with an error.

      • readable: boolean

        Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

      • readonly readableAborted: boolean

        Returns whether the stream was destroyed or errored before emitting 'end'.

      • readonly readableDidRead: boolean

        Returns whether 'data' has been emitted.

      • readonly readableEncoding: null | BufferEncoding

        Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

      • readonly readableEnded: boolean

        Becomes true when 'end' event is emitted.

      • readableFlowing: null | boolean

        This property reflects the current state of a Readable stream as described in the Three states section.

      • readonly readableHighWaterMark: number

        Returns the value of highWaterMark passed when creating this Readable.

      • readonly readableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

      • readonly readableObjectMode: boolean

        Getter for the property objectMode of a given Readable stream.

      • writable: boolean

        Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

      • readonly writableAborted: boolean

        Returns whether the stream was destroyed or errored before emitting 'finish'.

      • readonly writableCorked: number

        Number of times writable.uncork() needs to be called in order to fully uncork the stream.

      • readonly writableEnded: boolean

        Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for that, use writable.writableFinished instead.

      • readonly writableFinished: boolean

        Is set to true immediately before the 'finish' event is emitted.

      • readonly writableHighWaterMark: number

        Return the value of highWaterMark passed when creating this Writable.

      • readonly writableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

      • readonly writableNeedDrain: boolean

        Is true if the stream's buffer has been full and the stream will emit 'drain'.

      • readonly writableObjectMode: boolean

        Getter for the property objectMode of a given Writable stream.

      • callback: (error?: null | Error) => void
        ): void;
      • error: null | Error,
        callback: (error?: null | Error) => void
        ): void;
      • callback: (error?: null | Error) => void
        ): void;
      • ): void;
      • size: number
        ): void;
      • chunk: any,
        encoding: BufferEncoding,
        ): void;
      • chunk: any,
        encoding: BufferEncoding,
        callback: (error?: null | Error) => void
        ): void;
      • chunks: { chunk: any; encoding: BufferEncoding }[],
        callback: (error?: null | Error) => void
        ): void;
      • [Symbol.asyncDispose](): Promise<void>;

        Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

      • [Symbol.asyncIterator](): AsyncIterator<any>;
        @returns

        AsyncIterator to fully consume the stream.
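
        A minimal usage sketch (illustrative values):

        import { Readable } from 'node:stream';

        for await (const chunk of Readable.from(['a', 'b', 'c'])) {
          console.log(chunk);
        }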

      • error: Error,
        event: string | symbol,
        ...args: any[]
        ): void;

        The Symbol.for('nodejs.rejection') method is called in case a promise rejection happens when emitting an event and captureRejections is enabled on the emitter. It is possible to use events.captureRejectionSymbol in place of Symbol.for('nodejs.rejection').

        import { EventEmitter, captureRejectionSymbol } from 'node:events';
        
        class MyClass extends EventEmitter {
          constructor() {
            super({ captureRejections: true });
          }
        
          [captureRejectionSymbol](err, event, ...args) {
            console.log('rejection happened for', event, 'with', err, ...args);
            this.destroy(err);
          }
        
          destroy(err) {
            // Tear the resource down here.
          }
        }
        
      • addListener<E extends keyof DuplexEventMap>(
        eventName: E,
        listener: (...args: DuplexEventMap[E]) => void
        ): this;

        Alias for emitter.on(eventName, listener).

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • compose(
        stream: WritableStream | WritableStream<any> | TransformStream<any, any> | ((source: any) => void),
        options?: Abortable
        ): Duplex;
        import { Readable } from 'node:stream';
        
        async function* splitToWords(source) {
          for await (const chunk of source) {
            const words = String(chunk).split(' ');
        
            for (const word of words) {
              yield word;
            }
          }
        }
        
        const wordsStream = Readable.from(['this is', 'compose as operator']).compose(splitToWords);
        const words = await wordsStream.toArray();
        
        console.log(words); // prints ['this', 'is', 'compose', 'as', 'operator']
        

        See stream.compose for more information.

        @returns

        a stream composed with the stream stream.

      • cork(): void;

        The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

        The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

        See also: writable.uncork(), writable._writev().

      • destroy(
        error?: Error
        ): this;

        Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

        Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

        Implementors should not override this method, but instead implement readable._destroy().

        @param error

        Error which will be passed as payload in 'error' event
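
        A minimal usage sketch (the error here is illustrative):

        import { Readable } from 'node:stream';

        const readable = Readable.from(['a', 'b']);
        readable.on('error', (err) => {
          console.error('destroyed with:', err.message);
        });
        readable.destroy(new Error('boom'));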

      • drop(
        limit: number,
        options?: Abortable

        This method returns a new stream with the first limit chunks dropped from the start.

        @param limit

        the number of chunks to drop from the readable.

        @returns

        a stream with limit chunks dropped from the start.
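
        A minimal usage sketch (illustrative values):

        import { Readable } from 'node:stream';

        const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
        console.log(rest); // [ 3, 4 ]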

      • emit<E extends keyof DuplexEventMap>(
        eventName: E,
        ...args: DuplexEventMap[E]
        ): boolean;

        Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

        Returns true if the event had listeners, false otherwise.

        import { EventEmitter } from 'node:events';
        const myEmitter = new EventEmitter();
        
        // First listener
        myEmitter.on('event', function firstListener() {
          console.log('Helloooo! first listener');
        });
        // Second listener
        myEmitter.on('event', function secondListener(arg1, arg2) {
          console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
        });
        // Third listener
        myEmitter.on('event', function thirdListener(...args) {
          const parameters = args.join(', ');
          console.log(`event with parameters ${parameters} in third listener`);
        });
        
        console.log(myEmitter.listeners('event'));
        
        myEmitter.emit('event', 1, 2, 3, 4, 5);
        
        // Prints:
        // [
        //   [Function: firstListener],
        //   [Function: secondListener],
        //   [Function: thirdListener]
        // ]
        // Helloooo! first listener
        // event with parameters 1, 2 in second listener
        // event with parameters 1, 2, 3, 4, 5 in third listener
        
        eventName: string | symbol,
        ...args: any[]
        ): boolean;
      • end(
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        chunk: any,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        chunk: any,
        encoding: BufferEncoding,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding if chunk is a string

      • eventNames(): (string | symbol)[];

        Returns an array listing the events for which the emitter has registered listeners.

        import { EventEmitter } from 'node:events';
        
        const myEE = new EventEmitter();
        myEE.on('foo', () => {});
        myEE.on('bar', () => {});
        
        const sym = Symbol('symbol');
        myEE.on(sym, () => {});
        
        console.log(myEE.eventNames());
        // Prints: [ 'foo', 'bar', Symbol(symbol) ]
        
      • every(
        fn: (data: any, options?: Abortable) => boolean | Promise<boolean>,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>
        ): Promise<boolean>;

        This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check whether all awaited return values are truthy for fn. Once an fn call's awaited return value for a chunk is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for every one of the chunks.
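
        A minimal usage sketch (illustrative values):

        import { Readable } from 'node:stream';

        const allPositive = await Readable.from([1, 2, 3]).every((n) => n > 0);
        console.log(allPositive); // true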

      • filter(
        fn: (data: any, options?: Abortable) => boolean | Promise<boolean>,

        This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.

        @param fn

        a function to filter chunks from the stream. Async or not.

        @returns

        a stream filtered with the predicate fn.
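
        A minimal usage sketch (illustrative values):

        import { Readable } from 'node:stream';

        const evens = await Readable.from([1, 2, 3, 4])
          .filter((n) => n % 2 === 0)
          .toArray();

        console.log(evens); // [ 2, 4 ]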

      • find<T>(
        fn: (data: any, options?: Abortable) => data is T,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>
        ): Promise<undefined | T>;

        This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

        fn: (data: any, options?: Abortable) => boolean | Promise<boolean>,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>
        ): Promise<any>;

        This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
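
        A minimal usage sketch (illustrative values):

        import { Readable } from 'node:stream';

        const firstLarge = await Readable.from([1, 2, 3, 4]).find((n) => n > 2);
        console.log(firstLarge); // 3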

      • flatMap(
        fn: (data: any, options?: Abortable) => any,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>

        This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

        It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

        @param fn

        a function to map over every chunk in the stream. May be async. May be a stream or generator.

        @returns

        a stream flat-mapped with the function fn.
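
        A minimal usage sketch (illustrative values; each chunk is split into an iterable that gets flattened):

        import { Readable } from 'node:stream';

        const letters = await Readable.from(['ab', 'cd'])
          .flatMap((chunk) => chunk.split(''))
          .toArray();

        console.log(letters); // [ 'a', 'b', 'c', 'd' ]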

      • forEach(
        fn: (data: any, options?: Abortable) => void | Promise<void>,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>
        ): Promise<void>;

        This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.

        This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.

        This method is different from listening to the 'data' event in that it uses the 'readable' event in the underlying machinery and can limit the number of concurrent fn calls.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise for when the stream has finished.
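
        A minimal usage sketch (illustrative values):

        import { Readable } from 'node:stream';

        await Readable.from([1, 2, 3]).forEach((n) => {
          console.log(n); // 1, then 2, then 3
        });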

      • getMaxListeners(): number;

        Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to events.defaultMaxListeners.

      • isPaused(): boolean;

        The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

        const readable = new stream.Readable();
        
        readable.isPaused(); // === false
        readable.pause();
        readable.isPaused(); // === true
        readable.resume();
        readable.isPaused(); // === false
        
      • ): AsyncIterator<any>;

        The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration.

      • listenerCount<E extends keyof DuplexEventMap>(
        eventName: E,
        listener?: (...args: DuplexEventMap[E]) => void
        ): number;

        Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

        @param eventName

        The name of the event being listened for

        @param listener

        The event handler function

        eventName: string | symbol,
        listener?: (...args: any[]) => void
        ): number;
      • listeners<E extends keyof DuplexEventMap>(
        eventName: E
        ): ((...args: DuplexEventMap[E]) => void)[];

        Returns a copy of the array of listeners for the event named eventName.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        console.log(util.inspect(server.listeners('connection')));
        // Prints: [ [Function] ]
        
        eventName: string | symbol
        ): ((...args: any[]) => void)[];
      • map(
        fn: (data: any, options?: Abortable) => any,

        This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.

        @param fn

        a function to map over every chunk in the stream. Async or not.

        @returns

        a stream mapped with the function fn.
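
        A minimal usage sketch (illustrative values):

        import { Readable } from 'node:stream';

        const doubled = await Readable.from([1, 2, 3])
          .map((n) => n * 2)
          .toArray();

        console.log(doubled); // [ 2, 4, 6 ]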

      • off<E extends keyof DuplexEventMap>(
        eventName: E,
        listener: (...args: DuplexEventMap[E]) => void
        ): this;

        Alias for emitter.removeListener().

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • on<E extends keyof DuplexEventMap>(
        eventName: E,
        listener: (...args: DuplexEventMap[E]) => void
        ): this;

        Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.on('foo', () => console.log('a'));
        myEE.prependListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param eventName

        The name of the event.

        @param listener

        The callback function

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • once<E extends keyof DuplexEventMap>(
        eventName: E,
        listener: (...args: DuplexEventMap[E]) => void
        ): this;

        Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

        server.once('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.once('foo', () => console.log('a'));
        myEE.prependOnceListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param eventName

        The name of the event.

        @param listener

        The callback function

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • pause(): this;

        The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

        const readable = getReadableStreamSomehow();
        readable.on('data', (chunk) => {
          console.log(`Received ${chunk.length} bytes of data.`);
          readable.pause();
          console.log('There will be no additional data for 1 second.');
          setTimeout(() => {
            console.log('Now data will start flowing again.');
            readable.resume();
          }, 1000);
        });
        

        The readable.pause() method has no effect if there is a 'readable' event listener.

      • pipe<T extends WritableStream>(
        destination: T,
        options?: PipeOptions
        ): T;
      • prependListener<E extends keyof DuplexEventMap>(
        eventName: E,
        listener: (...args: DuplexEventMap[E]) => void
        ): this;

        Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.prependListener('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param eventName

        The name of the event.

        @param listener

        The callback function

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • prependOnceListener<E extends keyof DuplexEventMap>(
        eventName: E,
        listener: (...args: DuplexEventMap[E]) => void
        ): this;

        Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

        server.prependOnceListener('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param eventName

        The name of the event.

        @param listener

        The callback function

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • push(
        chunk: any,
        encoding?: BufferEncoding
        ): boolean;
      • rawListeners<E extends keyof DuplexEventMap>(
        eventName: E
        ): ((...args: DuplexEventMap[E]) => void)[];

        Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

        import { EventEmitter } from 'node:events';
        const emitter = new EventEmitter();
        emitter.once('log', () => console.log('log once'));
        
        // Returns a new Array with a function `onceWrapper` which has a property
        // `listener` which contains the original listener bound above
        const listeners = emitter.rawListeners('log');
        const logFnWrapper = listeners[0];
        
        // Logs "log once" to the console and does not unbind the `once` event
        logFnWrapper.listener();
        
        // Logs "log once" to the console and removes the listener
        logFnWrapper();
        
        emitter.on('log', () => console.log('log persistently'));
        // Will return a new Array with a single function bound by `.on()` above
        const newListeners = emitter.rawListeners('log');
        
        // Logs "log persistently" twice
        newListeners[0]();
        emitter.emit('log');
        
        eventName: string | symbol
        ): ((...args: any[]) => void)[];
      • read(
        size?: number
        ): any;

        The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.

        The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

        If the size argument is not specified, all of the data contained in the internal buffer will be returned.

        The size argument must be less than or equal to 1 GiB.

        The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

        const readable = getReadableStreamSomehow();
        
        // 'readable' may be triggered multiple times as data is buffered in
        readable.on('readable', () => {
          let chunk;
          console.log('Stream is readable (new data received in buffer)');
          // Use a loop to make sure we read all currently available data
          while (null !== (chunk = readable.read())) {
            console.log(`Read ${chunk.length} bytes of data...`);
          }
        });
        
        // 'end' will be triggered once when there is no more data available
        readable.on('end', () => {
          console.log('Reached end of stream.');
        });
        

        Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.

        Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

        const chunks = [];
        
        readable.on('readable', () => {
          let chunk;
          while (null !== (chunk = readable.read())) {
            chunks.push(chunk);
          }
        });
        
        readable.on('end', () => {
          const content = chunks.join('');
        });
        

        A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.

        If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.

        Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.

        @param size

        Optional argument to specify how much data to read.

      • reduce<T>(
        fn: (previous: any, data: any, options?: Abortable) => T
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

        The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @returns

        a promise for the final value of the reduction.

        reduce<T>(
        fn: (previous: T, data: any, options?: Abortable) => T,
        initial: T,
        options?: Abortable
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

        The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @param initial

        the initial value to use in the reduction.

        @returns

        a promise for the final value of the reduction.

      • removeAllListeners<E extends keyof DuplexEventMap>(
        eventName?: E
        ): this;

        Removes all listeners, or those of the specified eventName.

        It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

        Returns a reference to the EventEmitter, so that calls can be chained.

        eventName?: string | symbol
        ): this;
      • removeListener<E extends keyof DuplexEventMap>(
        eventName: E,
        listener: (...args: DuplexEventMap[E]) => void
        ): this;

        Removes the specified listener from the listener array for the event named eventName.

        const callback = (stream) => {
          console.log('someone connected!');
        };
        server.on('connection', callback);
        // ...
        server.removeListener('connection', callback);
        

        removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

        Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

        import { EventEmitter } from 'node:events';
        class MyEmitter extends EventEmitter {}
        const myEmitter = new MyEmitter();
        
        const callbackA = () => {
          console.log('A');
          myEmitter.removeListener('event', callbackB);
        };
        
        const callbackB = () => {
          console.log('B');
        };
        
        myEmitter.on('event', callbackA);
        
        myEmitter.on('event', callbackB);
        
        // callbackA removes listener callbackB but it will still be called.
        // Internal listener array at time of emit [callbackA, callbackB]
        myEmitter.emit('event');
        // Prints:
        //   A
        //   B
        
        // callbackB is now removed.
        // Internal listener array [callbackA]
        myEmitter.emit('event');
        // Prints:
        //   A
        

        Because listeners are managed using an internal array, calling this will change the position indexes of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

        When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

        import { EventEmitter } from 'node:events';
        const ee = new EventEmitter();
        
        function pong() {
          console.log('pong');
        }
        
        ee.on('ping', pong);
        ee.once('ping', pong);
        ee.removeListener('ping', pong);
        
        ee.emit('ping');
        ee.emit('ping');
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • resume(): this;

        The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

        The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

        getReadableStreamSomehow()
          .resume()
          .on('end', () => {
            console.log('Reached the end, but did not read anything.');
          });
        

        The readable.resume() method has no effect if there is a 'readable' event listener.

      • setDefaultEncoding(
        encoding: BufferEncoding
        ): this;

        The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

        @param encoding

        The new default encoding

      • setEncoding(
        encoding: BufferEncoding
        ): this;

        The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

        By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

        The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

        const readable = getReadableStreamSomehow();
        readable.setEncoding('utf8');
        readable.on('data', (chunk) => {
          assert.equal(typeof chunk, 'string');
          console.log('Got %d characters of string data:', chunk.length);
        });
        
        @param encoding

        The encoding to use.

      • setMaxListeners(
        n: number
        ): this;

        By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

        Returns a reference to the EventEmitter, so that calls can be chained.

      • some(
        fn: (data: any, options?: Abortable) => boolean | Promise<boolean>,
        options?: Pick<ReadableOperatorOptions, 'signal' | 'concurrency'>
        ): Promise<boolean>;

        This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is truthy. Once an fn call's awaited return value for a chunk is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for at least one of the chunks.

      • take(
        limit: number,
        options?: Abortable

        This method returns a new stream with the first limit chunks.

        @param limit

        the number of chunks to take from the readable.

        @returns

        a stream with limit chunks taken.

      • toArray(
        options?: Abortable
        ): Promise<any[]>;

        This method allows easily obtaining the contents of a stream.

        As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

        @returns

        a promise containing an array with the contents of the stream.

      • uncork(): void;

        The writable.uncork() method flushes all data buffered since cork was called.

        When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

        stream.cork();
        stream.write('some ');
        stream.write('data ');
        process.nextTick(() => stream.uncork());
        

        If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

        stream.cork();
        stream.write('some ');
        stream.cork();
        stream.write('data ');
        process.nextTick(() => {
          stream.uncork();
          // The data will not be flushed until uncork() is called a second time.
          stream.uncork();
        });
        

        See also: writable.cork().

      • unpipe(
        destination?: WritableStream
        ): this;

        The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

        If the destination is not specified, then all pipes are detached.

        If the destination is specified, but no pipe is set up for it, then the method does nothing.

        import fs from 'node:fs';
        const readable = getReadableStreamSomehow();
        const writable = fs.createWriteStream('file.txt');
        // All the data from readable goes into 'file.txt',
        // but only for the first second.
        readable.pipe(writable);
        setTimeout(() => {
          console.log('Stop writing to file.txt.');
          readable.unpipe(writable);
          console.log('Manually close the file stream.');
          writable.end();
        }, 1000);
        
        @param destination

        Optional specific stream to unpipe

      • unshift(
        chunk: any,
        encoding?: BufferEncoding
        ): void;

        Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

        The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

        The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

        Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

        // Pull off a header delimited by \n\n.
        // Use unshift() if we get too much.
        // Call the callback with (error, header, stream).
        import { StringDecoder } from 'node:string_decoder';
        function parseHeader(stream, callback) {
          stream.on('error', callback);
          stream.on('readable', onReadable);
          const decoder = new StringDecoder('utf8');
          let header = '';
          function onReadable() {
            let chunk;
            while (null !== (chunk = stream.read())) {
              const str = decoder.write(chunk);
              if (str.includes('\n\n')) {
                // Found the header boundary.
                const split = str.split(/\n\n/);
                header += split.shift();
                const remaining = split.join('\n\n');
                const buf = Buffer.from(remaining, 'utf8');
                stream.removeListener('error', callback);
                // Remove the 'readable' listener before unshifting.
                stream.removeListener('readable', onReadable);
                if (buf.length)
                  stream.unshift(buf);
                // Now the body of the message can be read from the stream.
                callback(null, header, stream);
                return;
              }
              // Still reading the header.
              header += str;
            }
          }
        }
        

        Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.

        @param chunk

        Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.

        @param encoding

        Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.

      • wrap(
        stream: ReadableStream
        ): this;

        Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

        When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

        It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

        import { OldReader } from './old-api-module.js';
        import { Readable } from 'node:stream';
        const oreader = new OldReader();
        const myReader = new Readable().wrap(oreader);
        
        myReader.on('readable', () => {
          myReader.read(); // etc.
        });
        
        @param stream

        An "old style" readable stream

      • write(
        chunk: any,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

        chunk: any,
        encoding: BufferEncoding,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding, if chunk is a string.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

      • static from(
        src: string | Blob | Promise<any> | ReadableStream | WritableStream | Iterable<any, any, any> | AsyncIterable<any, any, any> | (source: AsyncIterable<any>) => AsyncIterable<any> | (source: AsyncIterable<any>) => Promise<void> | ReadableWritablePair<any, any> | ReadableStream<any> | WritableStream<any>
        ): Duplex;

        A utility method for creating duplex streams.

        • Stream converts a writable stream into a writable Duplex and a readable stream into a readable Duplex.
        • Blob converts into readable Duplex.
        • string converts into readable Duplex.
        • ArrayBuffer converts into readable Duplex.
        • AsyncIterable converts into a readable Duplex. Cannot yield null.
        • AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
        • AsyncFunction converts into a writable Duplex. Must return either null or undefined.
        • Object ({ writable, readable }) converts readable and writable into Stream and then combines them into Duplex where the Duplex will write to the writable and read from the readable.
        • Promise converts into readable Duplex. Value null is ignored.
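
        As a minimal sketch (not taken from the upstream docs), a readable Duplex can be created from an async generator; iterating the result yields the generated chunks:

        import { Duplex } from 'node:stream';
        
        // Readable-only Duplex backed by an async generator (hypothetical data source).
        const duplex = Duplex.from(async function* () {
          yield 'hello';
          yield 'world';
        }());
        
        for await (const chunk of duplex) {
          console.log(chunk); // 'hello', then 'world'
        }
        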
      • static fromWeb(
        duplexStream: ReadableWritablePair,
        options?: Pick<DuplexOptions<Duplex>, 'signal' | 'allowHalfOpen' | 'decodeStrings' | 'encoding' | 'highWaterMark' | 'objectMode'>
        ): Duplex;

        A utility method for creating a Duplex from a web ReadableStream and WritableStream.

      • static toWeb(
        streamDuplex: ReadWriteStream

        A utility method for creating a web ReadableStream and WritableStream from a Duplex.
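
        A minimal round-trip sketch (the duplex below is a hypothetical stand-in): toWeb exposes a Duplex as a { readable, writable } pair of web streams, and fromWeb wraps such a pair back into a Duplex.

        import { Duplex } from 'node:stream';
        
        const duplex = new Duplex({
          read() {},
          write(chunk, encoding, callback) {
            console.log('wrote', chunk.toString());
            callback();
          },
        });
        
        // Duplex -> web stream pair -> Duplex again.
        const { readable, writable } = Duplex.toWeb(duplex);
        const roundTripped = Duplex.fromWeb({ readable, writable });
        
        roundTripped.write('ping'); // ends up in the original duplex's write()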

    • class Writable

      • readonly closed: boolean

        Is true after 'close' has been emitted.

      • destroyed: boolean

        Is true after writable.destroy() has been called.

      • readonly errored: null | Error

        Returns error if the stream has been destroyed with an error.

      • writable: boolean

        Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

      • readonly writableAborted: boolean

        Returns whether the stream was destroyed or errored before emitting 'finish'.

      • readonly writableCorked: number

        Number of times writable.uncork() needs to be called in order to fully uncork the stream.

      • readonly writableEnded: boolean

        Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for that, use writable.writableFinished instead.

      • readonly writableFinished: boolean

        Is set to true immediately before the 'finish' event is emitted.
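
        To illustrate the difference between writableEnded and writableFinished, a small sketch (the asynchronous sink is hypothetical):

        import { Writable } from 'node:stream';
        
        const w = new Writable({
          write(chunk, encoding, callback) {
            setImmediate(callback); // flush asynchronously
          },
        });
        
        w.write('data');
        w.end();
        console.log(w.writableEnded);    // true: end() has been called
        console.log(w.writableFinished); // false: the data has not been flushed yet
        w.on('finish', () => console.log(w.writableFinished)); // true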

      • readonly writableHighWaterMark: number

        Return the value of highWaterMark passed when creating this Writable.

      • readonly writableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

      • readonly writableNeedDrain: boolean

        Is true if the stream's buffer has been full and stream will emit 'drain'.

      • readonly writableObjectMode: boolean

        Getter for the property objectMode of a given Writable stream.

      • _construct(
        callback: (error?: null | Error) => void
        ): void;
      • _destroy(
        error: null | Error,
        callback: (error?: null | Error) => void
        ): void;
      • _final(
        callback: (error?: null | Error) => void
        ): void;
      • _write(
        chunk: any,
        encoding: BufferEncoding,
        callback: (error?: null | Error) => void
        ): void;
      • _writev(
        chunks: { chunk: any; encoding: BufferEncoding }[],
        callback: (error?: null | Error) => void
        ): void;
      • [Symbol.asyncDispose](): Promise<void>;

        Calls writable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

      • [Symbol.for('nodejs.rejection')](
        error: Error,
        event: string | symbol,
        ...args: any[]
        ): void;

        The Symbol.for('nodejs.rejection') method is called in case a promise rejection happens when emitting an event and captureRejections is enabled on the emitter. It is possible to use events.captureRejectionSymbol in place of Symbol.for('nodejs.rejection').

        import { EventEmitter, captureRejectionSymbol } from 'node:events';
        
        class MyClass extends EventEmitter {
          constructor() {
            super({ captureRejections: true });
          }
        
          [captureRejectionSymbol](err, event, ...args) {
            console.log('rejection happened for', event, 'with', err, ...args);
            this.destroy(err);
          }
        
          destroy(err) {
            // Tear the resource down here.
          }
        }
        
      • addListener<E extends keyof WritableEventMap>(
        eventName: E,
        listener: (...args: WritableEventMap[E]) => void
        ): this;

        Alias for emitter.on(eventName, listener).

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • cork(): void;

        The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

        The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

        See also: writable.uncork(), writable._writev().
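
        As a rough sketch (the sink below is hypothetical), a writable that implements _writev via the writev constructor option receives all corked chunks as a single batch once uncork() runs:

        import { Writable } from 'node:stream';
        
        const sink = new Writable({
          writev(chunks, callback) {
            console.log(`flushing ${chunks.length} chunks at once`); // 3
            callback();
          },
        });
        
        sink.cork();
        sink.write('a');
        sink.write('b');
        sink.write('c');
        process.nextTick(() => sink.uncork());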

      • destroy(
        error?: Error
        ): this;

        Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the writable stream has ended and subsequent calls to write() or end() will result in an ERR_STREAM_DESTROYED error. This is a destructive and immediate way to destroy a stream. Previous calls to write() may not have drained, and may trigger an ERR_STREAM_DESTROYED error. Use end() instead of destroy if data should flush before close, or wait for the 'drain' event before destroying the stream.

        Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

        Implementors should not override this method, but instead implement writable._destroy().

        @param error

        Optional, an error to emit with 'error' event.
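
        A short sketch (hypothetical writable) showing that destroy(err) surfaces the error via the 'error' event and that later writes fail:

        import { Writable } from 'node:stream';
        
        const w = new Writable({
          write(chunk, encoding, callback) { callback(); },
        });
        
        w.on('error', (err) => console.error('destroyed with:', err.message)); // 'boom'
        w.destroy(new Error('boom'));
        
        w.write('late', (err) => {
          console.error(err.code); // 'ERR_STREAM_DESTROYED'
        });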

      • emit<E extends keyof WritableEventMap>(
        eventName: E,
        ...args: WritableEventMap[E]
        ): boolean;

        Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

        Returns true if the event had listeners, false otherwise.

        import { EventEmitter } from 'node:events';
        const myEmitter = new EventEmitter();
        
        // First listener
        myEmitter.on('event', function firstListener() {
          console.log('Helloooo! first listener');
        });
        // Second listener
        myEmitter.on('event', function secondListener(arg1, arg2) {
          console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
        });
        // Third listener
        myEmitter.on('event', function thirdListener(...args) {
          const parameters = args.join(', ');
          console.log(`event with parameters ${parameters} in third listener`);
        });
        
        console.log(myEmitter.listeners('event'));
        
        myEmitter.emit('event', 1, 2, 3, 4, 5);
        
        // Prints:
        // [
        //   [Function: firstListener],
        //   [Function: secondListener],
        //   [Function: thirdListener]
        // ]
        // Helloooo! first listener
        // event with parameters 1, 2 in second listener
        // event with parameters 1, 2, 3, 4, 5 in third listener
        
        eventName: string | symbol,
        ...args: any[]
        ): boolean;
      • end(
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        chunk: any,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        chunk: any,
        encoding: BufferEncoding,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding if chunk is a string

      • eventNames(): string | symbol[];

        Returns an array listing the events for which the emitter has registered listeners.

        import { EventEmitter } from 'node:events';
        
        const myEE = new EventEmitter();
        myEE.on('foo', () => {});
        myEE.on('bar', () => {});
        
        const sym = Symbol('symbol');
        myEE.on(sym, () => {});
        
        console.log(myEE.eventNames());
        // Prints: [ 'foo', 'bar', Symbol(symbol) ]
        
      • getMaxListeners(): number;

        Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to events.defaultMaxListeners.

      • listenerCount<E extends keyof WritableEventMap>(
        eventName: E,
        listener?: (...args: WritableEventMap[E]) => void
        ): number;

        Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

        @param eventName

        The name of the event being listened for

        @param listener

        The event handler function

        eventName: string | symbol,
        listener?: (...args: any[]) => void
        ): number;
      • listeners<E extends keyof WritableEventMap>(
        eventName: E
        ): (...args: WritableEventMap[E]) => void[];

        Returns a copy of the array of listeners for the event named eventName.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        console.log(util.inspect(server.listeners('connection')));
        // Prints: [ [Function] ]
        
        eventName: string | symbol
        ): (...args: any[]) => void[];
      • off<E extends keyof WritableEventMap>(
        eventName: E,
        listener: (...args: WritableEventMap[E]) => void
        ): this;

        Alias for emitter.removeListener().

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • on<E extends keyof WritableEventMap>(
        eventName: E,
        listener: (...args: WritableEventMap[E]) => void
        ): this;

        Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.on('foo', () => console.log('a'));
        myEE.prependListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param eventName

        The name of the event.

        @param listener

        The callback function

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • once<E extends keyof WritableEventMap>(
        eventName: E,
        listener: (...args: WritableEventMap[E]) => void
        ): this;

        Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

        server.once('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.once('foo', () => console.log('a'));
        myEE.prependOnceListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param eventName

        The name of the event.

        @param listener

        The callback function

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • pipe<T extends WritableStream>(
        destination: T,
        options?: PipeOptions
        ): T;
      • prependListener<E extends keyof WritableEventMap>(
        eventName: E,
        listener: (...args: WritableEventMap[E]) => void
        ): this;

        Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.prependListener('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param eventName

        The name of the event.

        @param listener

        The callback function

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • prependOnceListener<E extends keyof WritableEventMap>(
        eventName: E,
        listener: (...args: WritableEventMap[E]) => void
        ): this;

        Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

        server.prependOnceListener('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param eventName

        The name of the event.

        @param listener

        The callback function

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • rawListeners<E extends keyof WritableEventMap>(
        eventName: E
        ): (...args: WritableEventMap[E]) => void[];

        Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

        import { EventEmitter } from 'node:events';
        const emitter = new EventEmitter();
        emitter.once('log', () => console.log('log once'));
        
        // Returns a new Array with a function `onceWrapper` which has a property
        // `listener` which contains the original listener bound above
        const listeners = emitter.rawListeners('log');
        const logFnWrapper = listeners[0];
        
        // Logs "log once" to the console and does not unbind the `once` event
        logFnWrapper.listener();
        
        // Logs "log once" to the console and removes the listener
        logFnWrapper();
        
        emitter.on('log', () => console.log('log persistently'));
        // Will return a new Array with a single function bound by `.on()` above
        const newListeners = emitter.rawListeners('log');
        
        // Logs "log persistently" twice
        newListeners[0]();
        emitter.emit('log');
        
        eventName: string | symbol
        ): (...args: any[]) => void[];
      • removeAllListeners<E extends keyof WritableEventMap>(
        eventName?: E
        ): this;

        Removes all listeners, or those of the specified eventName.

        It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

        Returns a reference to the EventEmitter, so that calls can be chained.

        eventName?: string | symbol
        ): this;
      • removeListener<E extends keyof WritableEventMap>(
        eventName: E,
        listener: (...args: WritableEventMap[E]) => void
        ): this;

        Removes the specified listener from the listener array for the event named eventName.

        const callback = (stream) => {
          console.log('someone connected!');
        };
        server.on('connection', callback);
        // ...
        server.removeListener('connection', callback);
        

        removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

        Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

        import { EventEmitter } from 'node:events';
        class MyEmitter extends EventEmitter {}
        const myEmitter = new MyEmitter();
        
        const callbackA = () => {
          console.log('A');
          myEmitter.removeListener('event', callbackB);
        };
        
        const callbackB = () => {
          console.log('B');
        };
        
        myEmitter.on('event', callbackA);
        
        myEmitter.on('event', callbackB);
        
        // callbackA removes listener callbackB but it will still be called.
        // Internal listener array at time of emit [callbackA, callbackB]
        myEmitter.emit('event');
        // Prints:
        //   A
        //   B
        
        // callbackB is now removed.
        // Internal listener array [callbackA]
        myEmitter.emit('event');
        // Prints:
        //   A
        

        Because listeners are managed using an internal array, calling this will change the position indexes of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

        When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

        import { EventEmitter } from 'node:events';
        const ee = new EventEmitter();
        
        function pong() {
          console.log('pong');
        }
        
        ee.on('ping', pong);
        ee.once('ping', pong);
        ee.removeListener('ping', pong);
        
        ee.emit('ping');
        ee.emit('ping');
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • setDefaultEncoding(
        encoding: BufferEncoding
        ): this;

        The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

        @param encoding

        The new default encoding
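
        A small sketch (hypothetical sink): once a default encoding is set, string chunks written without an explicit encoding are decoded with it:

        import { Writable } from 'node:stream';
        
        const sink = new Writable({
          write(chunk, encoding, callback) {
            console.log(chunk.toString()); // 'hello'
            callback();
          },
        });
        
        sink.setDefaultEncoding('hex');
        sink.write('68656c6c6f'); // interpreted using the default 'hex' encoding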

      • setMaxListeners(
        n: number
        ): this;

        By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

        Returns a reference to the EventEmitter, so that calls can be chained.
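
        For example (a plain EventEmitter is shown; a Writable behaves the same way):

        import { EventEmitter } from 'node:events';
        
        const emitter = new EventEmitter();
        emitter.setMaxListeners(20); // raise the warning threshold for this emitter only
        console.log(emitter.getMaxListeners()); // 20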

      • uncork(): void;

        The writable.uncork() method flushes all data buffered since cork was called.

        When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

        stream.cork();
        stream.write('some ');
        stream.write('data ');
        process.nextTick(() => stream.uncork());
        

        If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

        stream.cork();
        stream.write('some ');
        stream.cork();
        stream.write('data ');
        process.nextTick(() => {
          stream.uncork();
          // The data will not be flushed until uncork() is called a second time.
          stream.uncork();
        });
        

        See also: writable.cork().

      • write(
        chunk: any,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

        chunk: any,
        encoding: BufferEncoding,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding, if chunk is a string.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

      • static fromWeb(
        writableStream: WritableStream,
        options?: Pick<WritableOptions<Writable>, 'signal' | 'decodeStrings' | 'highWaterMark' | 'objectMode'>
        ): Writable;

        A utility method for creating a Writable from a web WritableStream.

      • static toWeb(
        streamWritable: WritableStream

        A utility method for creating a web WritableStream from a Writable.
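
        A small round-trip sketch (hypothetical sink) combining both helpers: a Node.js Writable is exposed as a web WritableStream and then wrapped back:

        import { Writable } from 'node:stream';
        
        const nodeWritable = new Writable({
          write(chunk, encoding, callback) {
            console.log(chunk.toString()); // 'hello'
            callback();
          },
        });
        
        // Node.js Writable -> web WritableStream -> Node.js Writable again.
        const webWritable = Writable.toWeb(nodeWritable);
        const wrapped = Writable.fromWeb(webWritable);
        wrapped.write('hello');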

    • interface DuplexEventMap

    • interface DuplexOptions<T extends Duplex = Duplex>

    • interface FinishedOptions

    • interface PipeOptions

      • end?: boolean

        End the writer when the reader ends.
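
        For instance (file names are placeholders), passing end: false keeps the destination open so a second source can be piped into it afterwards:

        import fs from 'node:fs';
        
        const writer = fs.createWriteStream('combined.txt');
        const first = fs.createReadStream('a.txt');
        
        first.pipe(writer, { end: false }); // do not end the writer when 'a.txt' is done
        first.on('end', () => {
          fs.createReadStream('b.txt').pipe(writer); // this second pipe ends the writer
        });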

    • interface ReadableEventMap

    • interface ReadableIteratorOptions

      • destroyOnReturn?: boolean

        When set to false, calling return on the async iterator, or exiting a for await...of iteration using a break, return, or throw will not destroy the stream.
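
        A brief sketch: with destroyOnReturn: false, breaking out of a for await...of loop leaves the stream usable afterwards:

        import { Readable } from 'node:stream';
        
        const readable = Readable.from(['a', 'b', 'c']);
        
        for await (const chunk of readable.iterator({ destroyOnReturn: false })) {
          console.log(chunk); // 'a'
          break; // exiting early does not destroy the stream
        }
        
        console.log(readable.destroyed); // false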

    • interface ReadableOperatorOptions

      • concurrency?: number

        The maximum concurrent invocations of fn to call on the stream at once.

      • highWaterMark?: number

        How many items to buffer while waiting for user consumption of the output.
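
        As an illustration (the URLs are placeholders), these options are passed to operators such as readable.map to bound parallelism and output buffering:

        import { Readable } from 'node:stream';
        
        const urls = Readable.from([
          'https://example.com/a',
          'https://example.com/b',
        ]);
        
        // At most 2 fetches in flight; buffer at most 8 results for the consumer.
        const bodies = urls.map(
          (url) => fetch(url).then((res) => res.text()),
          { concurrency: 2, highWaterMark: 8 },
        );
        
        for await (const body of bodies) {
          console.log(body.length);
        }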

    • interface ReadableOptions<T extends Readable = Readable>

    • interface StreamOptions<T extends Stream>

    • interface TransformOptions<T extends Transform = Transform>

    • interface WritableEventMap

    • interface WritableOptions<T extends Writable = Writable>

    • type ComposeDestination<S extends ComposeTransformSource<any>> = S extends ComposeTransformSource<infer I> ? NodeJS.WritableStream | web.WritableStream<I> | web.TransformStream<I, any> | (source: AsyncIterable<I>) => void : never
    • type ComposeSource<O> = NodeJS.ReadableStream | web.ReadableStream<O> | Iterable<O> | AsyncIterable<O> | () => AsyncIterable<O>
    • type ComposeTransform<S extends ComposeTransformSource<any>, O> = S extends ComposeSource<infer I> | ComposeTransformStreams<any, infer I> | ComposeTransformGenerator<any, infer I> ? ComposeTransformStreams<I, O> | ComposeTransformGenerator<I, O> : never
    • type ComposeTransformGenerator<I, O> = (source: AsyncIterable<I>) => AsyncIterable<O>
    • type ComposeTransformStreams<I, O> = NodeJS.ReadWriteStream | web.TransformStream<I, O>
    • type PipelineCallback<S extends PipelineDestination<any, any>> = (err: NodeJS.ErrnoException | null, value: S extends (...args: any[]) => PromiseLike<infer R> ? R : undefined) => void
    • type PipelineDestination<S extends PipelineTransformSource<any>, R> = S extends PipelineSource<infer I> | PipelineTransform<any, infer I> ? NodeJS.WritableStream | web.WritableStream<I> | web.TransformStream<I, any> | PipelineDestinationFunction<S, R> : never
    • type PipelineDestinationFunction<S extends PipelineTransformSource<any>, R> = (source: PipelineSourceArgument<S>, options?: Abortable) => R
    • type PipelineResult<S extends PipelineDestination<any, any>> = S extends NodeJS.WritableStream ? S : Duplex
    • type PipelineSource<O> = NodeJS.ReadableStream | web.ReadableStream<O> | web.TransformStream<any, O> | Iterable<O> | AsyncIterable<O> | PipelineSourceFunction<O>
    • type PipelineSourceArgument<T> = T extends (...args: any[]) => infer R ? R : T extends infer S ? S extends web.TransformStream<any, infer O> ? web.ReadableStream<O> : S : never
    • type PipelineSourceFunction<O> = (options?: Abortable) => Iterable<O> | AsyncIterable<O>
    • type PipelineTransform<S extends PipelineTransformSource<any>, O> = S extends PipelineSource<infer I> | PipelineTransformStreams<any, infer I> | (...args: any[]) => infer I ? PipelineTransformStreams<I, O> | PipelineTransformGenerator<S, O> : never
    • type PipelineTransformGenerator<S extends PipelineTransformSource<any>, O> = (source: PipelineSourceArgument<S>, options?: Abortable) => AsyncIterable<O>
    • type PipelineTransformStreams<I, O> = NodeJS.ReadWriteStream | web.TransformStream<I, O>
    • type TransformCallback = (error?: Error | null, data?: any) => void
    • function addAbortSignal<T extends ReadableStream | WritableStream | ReadableStream<any> | WritableStream<any>>(
      signal: AbortSignal,
      stream: T
      ): T;

      A stream to attach a signal to.

      Attaches an AbortSignal to a readable or writable stream. This lets code control stream destruction using an AbortController.

      Calling abort on the AbortController corresponding to the passed AbortSignal will behave the same way as calling .destroy(new AbortError()) on the stream, and controller.error(new AbortError()) for webstreams.

      import { addAbortSignal } from 'node:stream';
      import fs from 'node:fs';
      
      const controller = new AbortController();
      const read = addAbortSignal(
        controller.signal,
        fs.createReadStream('object.json'),
      );
      // Later, abort the operation closing the stream
      controller.abort();
      

      Or using an AbortSignal with a readable stream as an async iterable:

      const controller = new AbortController();
      setTimeout(() => controller.abort(), 10_000); // set a timeout
      const stream = addAbortSignal(
        controller.signal,
        fs.createReadStream('object.json'),
      );
      (async () => {
        try {
          for await (const chunk of stream) {
            await process(chunk);
          }
        } catch (e) {
          if (e.name === 'AbortError') {
            // The operation was cancelled
          } else {
            throw e;
          }
        }
      })();
      

      Or using an AbortSignal with a ReadableStream:

      const controller = new AbortController();
      const rs = new ReadableStream({
        start(controller) {
          controller.enqueue('hello');
          controller.enqueue('world');
          controller.close();
        },
      });
      
      addAbortSignal(controller.signal, rs);
      
      finished(rs, (err) => {
        if (err) {
          if (err.name === 'AbortError') {
            // The operation was cancelled
          }
        }
      });
      
      const reader = rs.getReader();
      
      reader.read().then(({ value, done }) => {
        console.log(value); // hello
        console.log(done); // false
        controller.abort();
      });
      
      @param signal

      A signal representing possible cancellation

      @param stream

      A stream to attach a signal to.

    • function compose(
      stream: WritableStream | TransformStream<unknown, any> | WritableStream<unknown> | ComposeSource<any> | (source: AsyncIterable<unknown>) => void
      ): Duplex;

      Combines two or more streams into a Duplex stream that writes to the first stream and reads from the last. Each provided stream is piped into the next, using stream.pipeline. If any of the streams error then all are destroyed, including the outer Duplex stream.

      Because stream.compose returns a new stream that in turn can (and should) be piped into other streams, it enables composition. In contrast, when passing streams to stream.pipeline, typically the first stream is a readable stream and the last a writable stream, forming a closed circuit.

      If passed a Function it must be a factory method taking a source Iterable.

      import { compose, Transform } from 'node:stream';
      
      const removeSpaces = new Transform({
        transform(chunk, encoding, callback) {
          callback(null, String(chunk).replace(' ', ''));
        },
      });
      
      async function* toUpper(source) {
        for await (const chunk of source) {
          yield String(chunk).toUpperCase();
        }
      }
      
      let res = '';
      for await (const buf of compose(removeSpaces, toUpper).end('hello world')) {
        res += buf;
      }
      
      console.log(res); // prints 'HELLOWORLD'
      

      stream.compose can be used to convert async iterables, generators and functions into streams.

      • AsyncIterable converts into a readable Duplex. Cannot yield null.
      • AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
      • AsyncFunction converts into a writable Duplex. Must return either null or undefined.
      import { compose } from 'node:stream';
      import { finished } from 'node:stream/promises';
      
      // Convert AsyncIterable into readable Duplex.
      const s1 = compose(async function*() {
        yield 'Hello';
        yield 'World';
      }());
      
      // Convert AsyncGenerator into transform Duplex.
      const s2 = compose(async function*(source) {
        for await (const chunk of source) {
          yield String(chunk).toUpperCase();
        }
      });
      
      let res = '';
      
      // Convert AsyncFunction into writable Duplex.
      const s3 = compose(async function(source) {
        for await (const chunk of source) {
          res += chunk;
        }
      });
      
      await finished(compose(s1, s2, s3));
      
      console.log(res); // prints 'HELLOWORLD'
      

      See readable.compose(stream) for stream.compose as operator.

      function compose<S extends ComposeSource<any> | ComposeTransformStreams<unknown, any> | ComposeTransformGenerator<unknown, any>, D extends WritableStream | WritableStream<any> | WritableStream<string | Buffer<ArrayBufferLike>> | TransformStream<string | Buffer<ArrayBufferLike>, any> | ComposeTransformGenerator<string | Buffer<ArrayBufferLike>, any> | ComposeTransformStreams<any, any> | ComposeTransformGenerator<any, any> | (source: AsyncIterable<string | Buffer<ArrayBufferLike>>) => void | (source: AsyncIterable<any>) => void>(
      source: S,
      destination: D
      ): Duplex;

      Combines two or more streams into a Duplex stream that writes to the first stream and reads from the last. Each provided stream is piped into the next, using stream.pipeline. If any of the streams error then all are destroyed, including the outer Duplex stream.

      Because stream.compose returns a new stream that in turn can (and should) be piped into other streams, it enables composition. In contrast, when passing streams to stream.pipeline, typically the first stream is a readable stream and the last a writable stream, forming a closed circuit.

      If passed a Function it must be a factory method taking a source Iterable.

      import { compose, Transform } from 'node:stream';
      
      const removeSpaces = new Transform({
        transform(chunk, encoding, callback) {
          callback(null, String(chunk).replace(' ', ''));
        },
      });
      
      async function* toUpper(source) {
        for await (const chunk of source) {
          yield String(chunk).toUpperCase();
        }
      }
      
      let res = '';
      for await (const buf of compose(removeSpaces, toUpper).end('hello world')) {
        res += buf;
      }
      
      console.log(res); // prints 'HELLOWORLD'
      

      stream.compose can be used to convert async iterables, generators and functions into streams.

      • AsyncIterable converts into a readable Duplex. Cannot yield null.
      • AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
      • AsyncFunction converts into a writable Duplex. Must return either null or undefined.
      import { compose } from 'node:stream';
      import { finished } from 'node:stream/promises';
      
      // Convert AsyncIterable into readable Duplex.
      const s1 = compose(async function*() {
        yield 'Hello';
        yield 'World';
      }());
      
      // Convert AsyncGenerator into transform Duplex.
      const s2 = compose(async function*(source) {
        for await (const chunk of source) {
          yield String(chunk).toUpperCase();
        }
      });
      
      let res = '';
      
      // Convert AsyncFunction into writable Duplex.
      const s3 = compose(async function(source) {
        for await (const chunk of source) {
          res += chunk;
        }
      });
      
      await finished(compose(s1, s2, s3));
      
      console.log(res); // prints 'HELLOWORLD'
      

      See readable.compose(stream) for stream.compose as operator.

      function compose<S extends ComposeSource<any> | ComposeTransformStreams<unknown, any> | ComposeTransformGenerator<unknown, any>, T extends TransformStream<string | Buffer<ArrayBufferLike>, any> | ComposeTransformGenerator<string | Buffer<ArrayBufferLike>, any> | ComposeTransformStreams<any, any> | ComposeTransformGenerator<any, any>, D extends WritableStream | WritableStream<any> | WritableStream<string | Buffer<ArrayBufferLike>> | TransformStream<string | Buffer<ArrayBufferLike>, any> | ComposeTransformGenerator<string | Buffer<ArrayBufferLike>, any> | ComposeTransformStreams<any, any> | ComposeTransformGenerator<any, any> | (source: AsyncIterable<string | Buffer<ArrayBufferLike>>) => void | (source: AsyncIterable<any>) => void>(
      source: S,
      transform: T,
      destination: D
      ): Duplex;

      Combines two or more streams into a Duplex stream that writes to the first stream and reads from the last. Each provided stream is piped into the next, using stream.pipeline. If any of the streams error then all are destroyed, including the outer Duplex stream.

      Because stream.compose returns a new stream that in turn can (and should) be piped into other streams, it enables composition. In contrast, when passing streams to stream.pipeline, typically the first stream is a readable stream and the last a writable stream, forming a closed circuit.

      If passed a Function it must be a factory method taking a source Iterable.

      import { compose, Transform } from 'node:stream';
      
      const removeSpaces = new Transform({
        transform(chunk, encoding, callback) {
          callback(null, String(chunk).replace(' ', ''));
        },
      });
      
      async function* toUpper(source) {
        for await (const chunk of source) {
          yield String(chunk).toUpperCase();
        }
      }
      
      let res = '';
      for await (const buf of compose(removeSpaces, toUpper).end('hello world')) {
        res += buf;
      }
      
      console.log(res); // prints 'HELLOWORLD'
      

      stream.compose can be used to convert async iterables, generators and functions into streams.

      • AsyncIterable converts into a readable Duplex. Cannot yield null.
      • AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
      • AsyncFunction converts into a writable Duplex. Must return either null or undefined.
      import { compose } from 'node:stream';
      import { finished } from 'node:stream/promises';
      
      // Convert AsyncIterable into readable Duplex.
      const s1 = compose(async function*() {
        yield 'Hello';
        yield 'World';
      }());
      
      // Convert AsyncGenerator into transform Duplex.
      const s2 = compose(async function*(source) {
        for await (const chunk of source) {
          yield String(chunk).toUpperCase();
        }
      });
      
      let res = '';
      
      // Convert AsyncFunction into writable Duplex.
      const s3 = compose(async function(source) {
        for await (const chunk of source) {
          res += chunk;
        }
      });
      
      await finished(compose(s1, s2, s3));
      
      console.log(res); // prints 'HELLOWORLD'
      

      See readable.compose(stream) for stream.compose as operator.

      function compose<S extends ComposeSource<any> | ComposeTransformStreams<unknown, any> | ComposeTransformGenerator<unknown, any>, T1 extends TransformStream<string | Buffer<ArrayBufferLike>, any> | ComposeTransformGenerator<string | Buffer<ArrayBufferLike>, any> | ComposeTransformStreams<any, any> | ComposeTransformGenerator<any, any>, T2 extends TransformStream<string | Buffer<ArrayBufferLike>, any> | ComposeTransformGenerator<string | Buffer<ArrayBufferLike>, any> | ComposeTransformStreams<any, any> | ComposeTransformGenerator<any, any>, D extends WritableStream | WritableStream<any> | WritableStream<string | Buffer<ArrayBufferLike>> | TransformStream<string | Buffer<ArrayBufferLike>, any> | ComposeTransformGenerator<string | Buffer<ArrayBufferLike>, any> | ComposeTransformStreams<any, any> | ComposeTransformGenerator<any, any> | (source: AsyncIterable<string | Buffer<ArrayBufferLike>>) => void | (source: AsyncIterable<any>) => void>(
      source: S,
      transform1: T1,
      transform2: T2,
      destination: D
      ): Duplex;

      Combines two or more streams into a Duplex stream that writes to the first stream and reads from the last. Each provided stream is piped into the next, using stream.pipeline. If any of the streams error then all are destroyed, including the outer Duplex stream.

      Because stream.compose returns a new stream that in turn can (and should) be piped into other streams, it enables composition. In contrast, when passing streams to stream.pipeline, typically the first stream is a readable stream and the last a writable stream, forming a closed circuit.

      If passed a Function it must be a factory method taking a source Iterable.

      import { compose, Transform } from 'node:stream';
      
      const removeSpaces = new Transform({
        transform(chunk, encoding, callback) {
          callback(null, String(chunk).replace(' ', ''));
        },
      });
      
      async function* toUpper(source) {
        for await (const chunk of source) {
          yield String(chunk).toUpperCase();
        }
      }
      
      let res = '';
      for await (const buf of compose(removeSpaces, toUpper).end('hello world')) {
        res += buf;
      }
      
      console.log(res); // prints 'HELLOWORLD'
      

      stream.compose can be used to convert async iterables, generators and functions into streams.

      • AsyncIterable converts into a readable Duplex. Cannot yield null.
      • AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
      • AsyncFunction converts into a writable Duplex. Must return either null or undefined.
      import { compose } from 'node:stream';
      import { finished } from 'node:stream/promises';
      
      // Convert AsyncIterable into readable Duplex.
      const s1 = compose(async function*() {
        yield 'Hello';
        yield 'World';
      }());
      
      // Convert AsyncGenerator into transform Duplex.
      const s2 = compose(async function*(source) {
        for await (const chunk of source) {
          yield String(chunk).toUpperCase();
        }
      });
      
      let res = '';
      
      // Convert AsyncFunction into writable Duplex.
      const s3 = compose(async function(source) {
        for await (const chunk of source) {
          res += chunk;
        }
      });
      
      await finished(compose(s1, s2, s3));
      
      console.log(res); // prints 'HELLOWORLD'
      

      See readable.compose(stream) for stream.compose as operator.

      function compose<S extends ComposeSource<any> | ComposeTransformStreams<unknown, any> | ComposeTransformGenerator<unknown, any>, T1 extends TransformStream<string | Buffer<ArrayBufferLike>, any> | ComposeTransformGenerator<string | Buffer<ArrayBufferLike>, any> | ComposeTransformStreams<any, any> | ComposeTransformGenerator<any, any>, T2 extends TransformStream<string | Buffer<ArrayBufferLike>, any> | ComposeTransformGenerator<string | Buffer<ArrayBufferLike>, any> | ComposeTransformStreams<any, any> | ComposeTransformGenerator<any, any>, T3 extends TransformStream<string | Buffer<ArrayBufferLike>, any> | ComposeTransformGenerator<string | Buffer<ArrayBufferLike>, any> | ComposeTransformStreams<any, any> | ComposeTransformGenerator<any, any>, D extends WritableStream | WritableStream<any> | WritableStream<string | Buffer<ArrayBufferLike>> | TransformStream<string | Buffer<ArrayBufferLike>, any> | ComposeTransformGenerator<string | Buffer<ArrayBufferLike>, any> | ComposeTransformStreams<any, any> | ComposeTransformGenerator<any, any> | (source: AsyncIterable<string | Buffer<ArrayBufferLike>>) => void | (source: AsyncIterable<any>) => void>(
      source: S,
      transform1: T1,
      transform2: T2,
      transform3: T3,
      destination: D
      ): Duplex;

      Combines two or more streams into a Duplex stream that writes to the first stream and reads from the last. Each provided stream is piped into the next, using stream.pipeline. If any of the streams error then all are destroyed, including the outer Duplex stream.

      Because stream.compose returns a new stream that in turn can (and should) be piped into other streams, it enables composition. In contrast, when passing streams to stream.pipeline, typically the first stream is a readable stream and the last a writable stream, forming a closed circuit.

      If passed a Function it must be a factory method taking a source Iterable.

      import { compose, Transform } from 'node:stream';
      
      const removeSpaces = new Transform({
        transform(chunk, encoding, callback) {
          callback(null, String(chunk).replace(' ', ''));
        },
      });
      
      async function* toUpper(source) {
        for await (const chunk of source) {
          yield String(chunk).toUpperCase();
        }
      }
      
      let res = '';
      for await (const buf of compose(removeSpaces, toUpper).end('hello world')) {
        res += buf;
      }
      
      console.log(res); // prints 'HELLOWORLD'
      

      stream.compose can be used to convert async iterables, generators and functions into streams.

      • AsyncIterable converts into a readable Duplex. Cannot yield null.
      • AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
      • AsyncFunction converts into a writable Duplex. Must return either null or undefined.
      import { compose } from 'node:stream';
      import { finished } from 'node:stream/promises';
      
      // Convert AsyncIterable into readable Duplex.
      const s1 = compose(async function*() {
        yield 'Hello';
        yield 'World';
      }());
      
      // Convert AsyncGenerator into transform Duplex.
      const s2 = compose(async function*(source) {
        for await (const chunk of source) {
          yield String(chunk).toUpperCase();
        }
      });
      
      let res = '';
      
      // Convert AsyncFunction into writable Duplex.
      const s3 = compose(async function(source) {
        for await (const chunk of source) {
          res += chunk;
        }
      });
      
      await finished(compose(s1, s2, s3));
      
      console.log(res); // prints 'HELLOWORLD'
      

      See readable.compose(stream) for stream.compose as operator.

      function compose<S extends ComposeSource<any> | ComposeTransformStreams<unknown, any> | ComposeTransformGenerator<unknown, any>, T1 extends TransformStream<string | Buffer<ArrayBufferLike>, any> | ComposeTransformGenerator<string | Buffer<ArrayBufferLike>, any> | ComposeTransformStreams<any, any> | ComposeTransformGenerator<any, any>, T2 extends TransformStream<string | Buffer<ArrayBufferLike>, any> | ComposeTransformGenerator<string | Buffer<ArrayBufferLike>, any> | ComposeTransformStreams<any, any> | ComposeTransformGenerator<any, any>, T3 extends TransformStream<string | Buffer<ArrayBufferLike>, any> | ComposeTransformGenerator<string | Buffer<ArrayBufferLike>, any> | ComposeTransformStreams<any, any> | ComposeTransformGenerator<any, any>, T4 extends TransformStream<string | Buffer<ArrayBufferLike>, any> | ComposeTransformGenerator<string | Buffer<ArrayBufferLike>, any> | ComposeTransformStreams<any, any> | ComposeTransformGenerator<any, any>, D extends WritableStream | WritableStream<any> | WritableStream<string | Buffer<ArrayBufferLike>> | TransformStream<string | Buffer<ArrayBufferLike>, any> | ComposeTransformGenerator<string | Buffer<ArrayBufferLike>, any> | ComposeTransformStreams<any, any> | ComposeTransformGenerator<any, any> | (source: AsyncIterable<string | Buffer<ArrayBufferLike>>) => void | (source: AsyncIterable<any>) => void>(
      source: S,
      transform1: T1,
      transform2: T2,
      transform3: T3,
      transform4: T4,
      destination: D
      ): Duplex;

      Combines two or more streams into a Duplex stream that writes to the first stream and reads from the last. Each provided stream is piped into the next, using stream.pipeline. If any of the streams error then all are destroyed, including the outer Duplex stream.

      Because stream.compose returns a new stream that in turn can (and should) be piped into other streams, it enables composition. In contrast, when passing streams to stream.pipeline, typically the first stream is a readable stream and the last a writable stream, forming a closed circuit.

      If passed a Function it must be a factory method taking a source Iterable.

      import { compose, Transform } from 'node:stream';
      
      const removeSpaces = new Transform({
        transform(chunk, encoding, callback) {
          callback(null, String(chunk).replace(' ', ''));
        },
      });
      
      async function* toUpper(source) {
        for await (const chunk of source) {
          yield String(chunk).toUpperCase();
        }
      }
      
      let res = '';
      for await (const buf of compose(removeSpaces, toUpper).end('hello world')) {
        res += buf;
      }
      
      console.log(res); // prints 'HELLOWORLD'
      

      stream.compose can be used to convert async iterables, generators and functions into streams.

      • AsyncIterable converts into a readable Duplex. Cannot yield null.
      • AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
      • AsyncFunction converts into a writable Duplex. Must return either null or undefined.
      import { compose } from 'node:stream';
      import { finished } from 'node:stream/promises';
      
      // Convert AsyncIterable into readable Duplex.
      const s1 = compose(async function*() {
        yield 'Hello';
        yield 'World';
      }());
      
      // Convert AsyncGenerator into transform Duplex.
      const s2 = compose(async function*(source) {
        for await (const chunk of source) {
          yield String(chunk).toUpperCase();
        }
      });
      
      let res = '';
      
      // Convert AsyncFunction into writable Duplex.
      const s3 = compose(async function(source) {
        for await (const chunk of source) {
          res += chunk;
        }
      });
      
      await finished(compose(s1, s2, s3));
      
      console.log(res); // prints 'HELLOWORLD'
      

      See readable.compose(stream) for stream.compose as operator.

      function compose(
      ...streams: [ComposeSource<any>, ...(ComposeTransformStreams<unknown, any> | ComposeTransformGenerator<unknown, any>)[], WritableStream | TransformStream<unknown, any> | WritableStream<unknown> | ((source: AsyncIterable<unknown>) => void)]
      ): Duplex;

      Combines two or more streams into a Duplex stream that writes to the first stream and reads from the last. Each provided stream is piped into the next, using stream.pipeline. If any of the streams error then all are destroyed, including the outer Duplex stream.

      Because stream.compose returns a new stream that in turn can (and should) be piped into other streams, it enables composition. In contrast, when passing streams to stream.pipeline, typically the first stream is a readable stream and the last a writable stream, forming a closed circuit.

      If passed a Function it must be a factory method taking a source Iterable.

      import { compose, Transform } from 'node:stream';
      
      const removeSpaces = new Transform({
        transform(chunk, encoding, callback) {
          callback(null, String(chunk).replace(' ', ''));
        },
      });
      
      async function* toUpper(source) {
        for await (const chunk of source) {
          yield String(chunk).toUpperCase();
        }
      }
      
      let res = '';
      for await (const buf of compose(removeSpaces, toUpper).end('hello world')) {
        res += buf;
      }
      
      console.log(res); // prints 'HELLOWORLD'
      

      stream.compose can be used to convert async iterables, generators and functions into streams.

      • AsyncIterable converts into a readable Duplex. Cannot yield null.
      • AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
      • AsyncFunction converts into a writable Duplex. Must return either null or undefined.
      import { compose } from 'node:stream';
      import { finished } from 'node:stream/promises';
      
      // Convert AsyncIterable into readable Duplex.
      const s1 = compose(async function*() {
        yield 'Hello';
        yield 'World';
      }());
      
      // Convert AsyncGenerator into transform Duplex.
      const s2 = compose(async function*(source) {
        for await (const chunk of source) {
          yield String(chunk).toUpperCase();
        }
      });
      
      let res = '';
      
      // Convert AsyncFunction into writable Duplex.
      const s3 = compose(async function(source) {
        for await (const chunk of source) {
          res += chunk;
        }
      });
      
      await finished(compose(s1, s2, s3));
      
      console.log(res); // prints 'HELLOWORLD'
      

      See readable.compose(stream) for stream.compose as operator.

    • function duplexPair(
      options?: DuplexOptions
      ): [Duplex, Duplex];

      The utility function duplexPair returns an Array with two items, each being a Duplex stream connected to the other side:

      const [ sideA, sideB ] = duplexPair();
      

      Whatever is written to one stream is made readable on the other. It provides behavior analogous to a network connection, where the data written by the client becomes readable by the server, and vice-versa.

      The Duplex streams are symmetrical; one or the other may be used without any difference in behavior.

      @param options

      A value to pass to both Duplex constructors, to set options such as buffering.

    • function finished(
      stream: ReadableStream | WritableStream | ReadableStream<any> | WritableStream<any>,
      options: FinishedOptions,
      callback: (err?: null | ErrnoException) => void
      ): () => void;

      A readable and/or writable stream/webstream.

      A function to get notified when a stream is no longer readable, writable or has experienced an error or a premature close event.

      import { finished } from 'node:stream';
      import fs from 'node:fs';
      
      const rs = fs.createReadStream('archive.tar');
      
      finished(rs, (err) => {
        if (err) {
          console.error('Stream failed.', err);
        } else {
          console.log('Stream is done reading.');
        }
      });
      
      rs.resume(); // Drain the stream.
      

      Especially useful in error handling scenarios where a stream is destroyed prematurely (like an aborted HTTP request), and will not emit 'end' or 'finish'.

      The finished API provides a promise version.

      stream.finished() leaves dangling event listeners (in particular 'error', 'end', 'finish' and 'close') after callback has been invoked. The reason for this is so that unexpected 'error' events (due to incorrect stream implementations) do not cause unexpected crashes. If this is unwanted behavior then the returned cleanup function needs to be invoked in the callback:

      const cleanup = finished(rs, (err) => {
        cleanup();
        // ...
      });
      
      @param stream

      A readable and/or writable stream.

      @param callback

      A callback function that takes an optional error argument.

      @returns

      A cleanup function which removes all registered listeners.

      function finished(
      stream: ReadableStream | WritableStream | ReadableStream<any> | WritableStream<any>,
      callback: (err?: null | ErrnoException) => void
      ): () => void;

      A readable and/or writable stream/webstream.

      A function to get notified when a stream is no longer readable, writable or has experienced an error or a premature close event.

      import { finished } from 'node:stream';
      import fs from 'node:fs';
      
      const rs = fs.createReadStream('archive.tar');
      
      finished(rs, (err) => {
        if (err) {
          console.error('Stream failed.', err);
        } else {
          console.log('Stream is done reading.');
        }
      });
      
      rs.resume(); // Drain the stream.
      

      Especially useful in error handling scenarios where a stream is destroyed prematurely (like an aborted HTTP request), and will not emit 'end' or 'finish'.

      The finished API provides a promise version.

      stream.finished() leaves dangling event listeners (in particular 'error', 'end', 'finish' and 'close') after callback has been invoked. The reason for this is so that unexpected 'error' events (due to incorrect stream implementations) do not cause unexpected crashes. If this is unwanted behavior then the returned cleanup function needs to be invoked in the callback:

      const cleanup = finished(rs, (err) => {
        cleanup();
        // ...
      });
      
      @param stream

      A readable and/or writable stream.

      @param callback

      A callback function that takes an optional error argument.

      @returns

      A cleanup function which removes all registered listeners.

    • function getDefaultHighWaterMark(
      objectMode: boolean
      ): number;

      Returns the default highWaterMark used by streams. Defaults to 65536 (64 KiB), or 16 for objectMode.

    • function isErrored(
      stream: ReadableStream | WritableStream | ReadableStream<any> | WritableStream<any>
      ): boolean;

      Returns whether the stream has encountered an error.

    • function isReadable(
      stream: ReadableStream | ReadableStream<any>
      ): null | boolean;

      Returns whether the stream is readable.

      @returns

      Only returns null if stream is not a valid Readable, Duplex or ReadableStream.

    • function isWritable(
      stream: WritableStream | WritableStream<any>
      ): null | boolean;

      Returns whether the stream is writable.

      @returns

      Only returns null if stream is not a valid Writable, Duplex or WritableStream.

    • function pipeline<S extends PipelineSource<any>, D extends WritableStream | WritableStream<any> | TransformStream<any, any> | WritableStream<string | Buffer<ArrayBufferLike>> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineDestinationFunction<ReadableStream, any> | PipelineDestinationFunction<ReadableStream<any>, any> | PipelineDestinationFunction<TransformStream<any, any>, any> | PipelineDestinationFunction<Iterable<any, any, any>, any> | PipelineDestinationFunction<AsyncIterable<any, any, any>, any> | PipelineDestinationFunction<PipelineSourceFunction<any>, any>>(
      source: S,
      destination: D,
      callback: PipelineCallback<D>

      A module method to pipe between streams and generators, forwarding errors, properly cleaning up, and providing a callback when the pipeline is complete.

      import { pipeline } from 'node:stream';
      import fs from 'node:fs';
      import zlib from 'node:zlib';
      
      // Use the pipeline API to easily pipe a series of streams
      // together and get notified when the pipeline is fully done.
      
      // A pipeline to gzip a potentially huge tar file efficiently:
      
      pipeline(
        fs.createReadStream('archive.tar'),
        zlib.createGzip(),
        fs.createWriteStream('archive.tar.gz'),
        (err) => {
          if (err) {
            console.error('Pipeline failed.', err);
          } else {
            console.log('Pipeline succeeded.');
          }
        },
      );
      

      The pipeline API provides a promise version.

      stream.pipeline() will call stream.destroy(err) on all streams except:

      • Readable streams which have emitted 'end' or 'close'.
      • Writable streams which have emitted 'finish' or 'close'.

      stream.pipeline() leaves dangling event listeners on the streams after the callback has been invoked. In the case of reuse of streams after failure, this can cause event listener leaks and swallowed errors. If the last stream is readable, dangling event listeners will be removed so that the last stream can be consumed later.

      stream.pipeline() closes all the streams when an error is raised. Using an IncomingRequest with pipeline could lead to unexpected behavior, because it would destroy the socket without sending the expected response. See the example below:

      import fs from 'node:fs';
      import http from 'node:http';
      import { pipeline } from 'node:stream';
      
      const server = http.createServer((req, res) => {
        const fileStream = fs.createReadStream('./fileNotExist.txt');
        pipeline(fileStream, res, (err) => {
          if (err) {
            console.log(err); // No such file
            // this message can't be sent once `pipeline` already destroyed the socket
            return res.end('error!!!');
          }
        });
      });
      
      @param callback

      Called when the pipeline is fully done.

      function pipeline<S extends PipelineSource<any>, T extends ReadWriteStream | TransformStream<any, any> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineTransformGenerator<ReadableStream, any> | PipelineTransformGenerator<ReadableStream<any>, any> | PipelineTransformGenerator<TransformStream<any, any>, any> | PipelineTransformGenerator<Iterable<any, any, any>, any> | PipelineTransformGenerator<AsyncIterable<any, any, any>, any> | PipelineTransformGenerator<PipelineSourceFunction<any>, any>, D extends WritableStream | WritableStream<any> | TransformStream<any, any> | WritableStream<string | Buffer<ArrayBufferLike>> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineDestinationFunction<TransformStream<any, any>, any> | PipelineDestinationFunction<ReadWriteStream, any> | PipelineDestinationFunction<TransformStream<string | Buffer<ArrayBufferLike>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<ReadableStream, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<ReadableStream<any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<TransformStream<any, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<Iterable<any, any, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<AsyncIterable<any, any, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineSourceFunction<any>, any>, any>>(
      source: S,
      transform: T,
      destination: D,
      callback: PipelineCallback<D>

      A module method to pipe between streams and generators, forwarding errors, properly cleaning up, and providing a callback when the pipeline is complete.

      import { pipeline } from 'node:stream';
      import fs from 'node:fs';
      import zlib from 'node:zlib';
      
      // Use the pipeline API to easily pipe a series of streams
      // together and get notified when the pipeline is fully done.
      
      // A pipeline to gzip a potentially huge tar file efficiently:
      
      pipeline(
        fs.createReadStream('archive.tar'),
        zlib.createGzip(),
        fs.createWriteStream('archive.tar.gz'),
        (err) => {
          if (err) {
            console.error('Pipeline failed.', err);
          } else {
            console.log('Pipeline succeeded.');
          }
        },
      );
      

      The pipeline API provides a promise version.

      stream.pipeline() will call stream.destroy(err) on all streams except:

      • Readable streams which have emitted 'end' or 'close'.
      • Writable streams which have emitted 'finish' or 'close'.

      stream.pipeline() leaves dangling event listeners on the streams after the callback has been invoked. In the case of reuse of streams after failure, this can cause event listener leaks and swallowed errors. If the last stream is readable, dangling event listeners will be removed so that the last stream can be consumed later.

      stream.pipeline() closes all the streams when an error is raised. Using an IncomingRequest with pipeline could lead to unexpected behavior, because it would destroy the socket without sending the expected response. See the example below:

      import fs from 'node:fs';
      import http from 'node:http';
      import { pipeline } from 'node:stream';
      
      const server = http.createServer((req, res) => {
        const fileStream = fs.createReadStream('./fileNotExist.txt');
        pipeline(fileStream, res, (err) => {
          if (err) {
            console.log(err); // No such file
            // this message can't be sent once `pipeline` already destroyed the socket
            return res.end('error!!!');
          }
        });
      });
      
      @param callback

      Called when the pipeline is fully done.

      function pipeline<S extends PipelineSource<any>, T1 extends ReadWriteStream | TransformStream<any, any> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineTransformGenerator<ReadableStream, any> | PipelineTransformGenerator<ReadableStream<any>, any> | PipelineTransformGenerator<TransformStream<any, any>, any> | PipelineTransformGenerator<Iterable<any, any, any>, any> | PipelineTransformGenerator<AsyncIterable<any, any, any>, any> | PipelineTransformGenerator<PipelineSourceFunction<any>, any>, T2 extends ReadWriteStream | TransformStream<any, any> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineTransformGenerator<TransformStream<any, any>, any> | PipelineTransformGenerator<ReadWriteStream, any> | PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream<any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<Iterable<any, any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<AsyncIterable<any, any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineSourceFunction<any>, any>, any>, D extends WritableStream | WritableStream<any> | TransformStream<any, any> | WritableStream<string | Buffer<ArrayBufferLike>> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineDestinationFunction<TransformStream<any, any>, any> | PipelineDestinationFunction<ReadWriteStream, any> | PipelineDestinationFunction<TransformStream<string | Buffer<ArrayBufferLike>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<TransformStream<any, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<ReadWriteStream, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream<any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<Iterable<any, any, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<AsyncIterable<any, any, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineSourceFunction<any>, any>, any>, any>>(
      source: S,
      transform1: T1,
      transform2: T2,
      destination: D,
      callback: PipelineCallback<D>

      A module method to pipe between streams and generators, forwarding errors, properly cleaning up, and providing a callback when the pipeline is complete.

      import { pipeline } from 'node:stream';
      import fs from 'node:fs';
      import zlib from 'node:zlib';
      
      // Use the pipeline API to easily pipe a series of streams
      // together and get notified when the pipeline is fully done.
      
      // A pipeline to gzip a potentially huge tar file efficiently:
      
      pipeline(
        fs.createReadStream('archive.tar'),
        zlib.createGzip(),
        fs.createWriteStream('archive.tar.gz'),
        (err) => {
          if (err) {
            console.error('Pipeline failed.', err);
          } else {
            console.log('Pipeline succeeded.');
          }
        },
      );
      

      The pipeline API provides a promise version.

      stream.pipeline() will call stream.destroy(err) on all streams except:

      • Readable streams which have emitted 'end' or 'close'.
      • Writable streams which have emitted 'finish' or 'close'.

      stream.pipeline() leaves dangling event listeners on the streams after the callback has been invoked. In the case of reuse of streams after failure, this can cause event listener leaks and swallowed errors. If the last stream is readable, dangling event listeners will be removed so that the last stream can be consumed later.

      stream.pipeline() closes all the streams when an error is raised. Using an IncomingRequest with pipeline could lead to unexpected behavior, because it would destroy the socket without sending the expected response. See the example below:

      import fs from 'node:fs';
      import http from 'node:http';
      import { pipeline } from 'node:stream';
      
      const server = http.createServer((req, res) => {
        const fileStream = fs.createReadStream('./fileNotExist.txt');
        pipeline(fileStream, res, (err) => {
          if (err) {
            console.log(err); // No such file
            // this message can't be sent once `pipeline` already destroyed the socket
            return res.end('error!!!');
          }
        });
      });
      
      @param callback

      Called when the pipeline is fully done.

      function pipeline<S extends PipelineSource<any>, T1 extends ReadWriteStream | TransformStream<any, any> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineTransformGenerator<ReadableStream, any> | PipelineTransformGenerator<ReadableStream<any>, any> | PipelineTransformGenerator<TransformStream<any, any>, any> | PipelineTransformGenerator<Iterable<any, any, any>, any> | PipelineTransformGenerator<AsyncIterable<any, any, any>, any> | PipelineTransformGenerator<PipelineSourceFunction<any>, any>, T2 extends ReadWriteStream | TransformStream<any, any> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineTransformGenerator<TransformStream<any, any>, any> | PipelineTransformGenerator<ReadWriteStream, any> | PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream<any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<Iterable<any, any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<AsyncIterable<any, any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineSourceFunction<any>, any>, any>, T3 extends ReadWriteStream | TransformStream<any, any> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineTransformGenerator<TransformStream<any, any>, any> | PipelineTransformGenerator<ReadWriteStream, any> | PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<ReadWriteStream, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream<any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<Iterable<any, any, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<AsyncIterable<any, any, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineSourceFunction<any>, any>, any>, any>, D extends WritableStream | WritableStream<any> | TransformStream<any, any> | WritableStream<string | Buffer<ArrayBufferLike>> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineDestinationFunction<TransformStream<any, any>, any> | PipelineDestinationFunction<ReadWriteStream, any> | PipelineDestinationFunction<TransformStream<string | Buffer<ArrayBufferLike>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<TransformStream<any, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<ReadWriteStream, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, 
any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<ReadWriteStream, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream<any>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<Iterable<any, any, any>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<AsyncIterable<any, any, any>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineSourceFunction<any>, any>, any>, any>, any>>(
      source: S,
      transform1: T1,
      transform2: T2,
      transform3: T3,
      destination: D,
      callback: PipelineCallback<D>

      A module method to pipe between streams and generators, forwarding errors, properly cleaning up, and providing a callback when the pipeline is complete.

      import { pipeline } from 'node:stream';
      import fs from 'node:fs';
      import zlib from 'node:zlib';
      
      // Use the pipeline API to easily pipe a series of streams
      // together and get notified when the pipeline is fully done.
      
      // A pipeline to gzip a potentially huge tar file efficiently:
      
      pipeline(
        fs.createReadStream('archive.tar'),
        zlib.createGzip(),
        fs.createWriteStream('archive.tar.gz'),
        (err) => {
          if (err) {
            console.error('Pipeline failed.', err);
          } else {
            console.log('Pipeline succeeded.');
          }
        },
      );
      

      The pipeline API provides a promise version.

      stream.pipeline() will call stream.destroy(err) on all streams except:

      • Readable streams which have emitted 'end' or 'close'.
      • Writable streams which have emitted 'finish' or 'close'.

      stream.pipeline() leaves dangling event listeners on the streams after the callback has been invoked. In the case of reuse of streams after failure, this can cause event listener leaks and swallowed errors. If the last stream is readable, dangling event listeners will be removed so that the last stream can be consumed later.

      stream.pipeline() closes all the streams when an error is raised. Using an IncomingRequest with pipeline could lead to unexpected behavior, because it would destroy the socket without sending the expected response. See the example below:

      import fs from 'node:fs';
      import http from 'node:http';
      import { pipeline } from 'node:stream';
      
      const server = http.createServer((req, res) => {
        const fileStream = fs.createReadStream('./fileNotExist.txt');
        pipeline(fileStream, res, (err) => {
          if (err) {
            console.log(err); // No such file
            // this message can't be sent once `pipeline` already destroyed the socket
            return res.end('error!!!');
          }
        });
      });
      
      @param callback

      Called when the pipeline is fully done.

      function pipeline<S extends PipelineSource<any>, T1 extends ReadWriteStream | TransformStream<any, any> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineTransformGenerator<ReadableStream, any> | PipelineTransformGenerator<ReadableStream<any>, any> | PipelineTransformGenerator<TransformStream<any, any>, any> | PipelineTransformGenerator<Iterable<any, any, any>, any> | PipelineTransformGenerator<AsyncIterable<any, any, any>, any> | PipelineTransformGenerator<PipelineSourceFunction<any>, any>, T2 extends ReadWriteStream | TransformStream<any, any> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineTransformGenerator<TransformStream<any, any>, any> | PipelineTransformGenerator<ReadWriteStream, any> | PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream<any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<Iterable<any, any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<AsyncIterable<any, any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineSourceFunction<any>, any>, any>, T3 extends ReadWriteStream | TransformStream<any, any> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineTransformGenerator<TransformStream<any, any>, any> | PipelineTransformGenerator<ReadWriteStream, any> | PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<ReadWriteStream, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream<any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<Iterable<any, any, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<AsyncIterable<any, any, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineSourceFunction<any>, any>, any>, any>, T4 extends ReadWriteStream | TransformStream<any, any> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineTransformGenerator<TransformStream<any, any>, any> | PipelineTransformGenerator<ReadWriteStream, any> | PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<ReadWriteStream, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any>, any> | 
PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadWriteStream, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream<any>, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<Iterable<any, any, any>, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<AsyncIterable<any, any, any>, any>, any>, any>, any> | PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineSourceFunction<any>, any>, any>, any>, any>, D extends WritableStream | WritableStream<any> | TransformStream<any, any> | WritableStream<string | Buffer<ArrayBufferLike>> | TransformStream<string | Buffer<ArrayBufferLike>, any> | PipelineDestinationFunction<TransformStream<any, any>, any> | PipelineDestinationFunction<ReadWriteStream, any> | PipelineDestinationFunction<TransformStream<string | Buffer<ArrayBufferLike>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<TransformStream<any, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<ReadWriteStream, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<ReadWriteStream, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadWriteStream, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<string | Buffer<ArrayBufferLike>, any>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream, any>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<ReadableStream<any>, any>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<TransformStream<any, any>, any>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<Iterable<any, any, any>, any>, any>, 
any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<AsyncIterable<any, any, any>, any>, any>, any>, any>, any> | PipelineDestinationFunction<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineTransformGenerator<PipelineSourceFunction<any>, any>, any>, any>, any>, any>>(
      source: S,
      transform1: T1,
      transform2: T2,
      transform3: T3,
      transform4: T4,
      destination: D,
      callback: PipelineCallback<D>

      A module method to pipe between streams and generators, forwarding errors, properly cleaning up, and providing a callback when the pipeline is complete.

      import { pipeline } from 'node:stream';
      import fs from 'node:fs';
      import zlib from 'node:zlib';
      
      // Use the pipeline API to easily pipe a series of streams
      // together and get notified when the pipeline is fully done.
      
      // A pipeline to gzip a potentially huge tar file efficiently:
      
      pipeline(
        fs.createReadStream('archive.tar'),
        zlib.createGzip(),
        fs.createWriteStream('archive.tar.gz'),
        (err) => {
          if (err) {
            console.error('Pipeline failed.', err);
          } else {
            console.log('Pipeline succeeded.');
          }
        },
      );
      

      The pipeline API provides a promise version.

      stream.pipeline() will call stream.destroy(err) on all streams except:

      • Readable streams which have emitted 'end' or 'close'.
      • Writable streams which have emitted 'finish' or 'close'.

      stream.pipeline() leaves dangling event listeners on the streams after the callback has been invoked. In the case of reuse of streams after failure, this can cause event listener leaks and swallowed errors. If the last stream is readable, dangling event listeners will be removed so that the last stream can be consumed later.

      stream.pipeline() closes all the streams when an error is raised. Using an IncomingRequest with pipeline could lead to unexpected behavior, because it would destroy the socket without sending the expected response. See the example below:

      import fs from 'node:fs';
      import http from 'node:http';
      import { pipeline } from 'node:stream';
      
      const server = http.createServer((req, res) => {
        const fileStream = fs.createReadStream('./fileNotExist.txt');
        pipeline(fileStream, res, (err) => {
          if (err) {
            console.log(err); // No such file
            // this message can't be sent once `pipeline` already destroyed the socket
            return res.end('error!!!');
          }
        });
      });
      
      @param callback

      Called when the pipeline is fully done.

      function pipeline(
      streams: readonly (WritableStream | PipelineSource<any> | PipelineTransformStreams<unknown, any> | PipelineTransformGenerator<any, any> | WritableStream<unknown> | PipelineDestinationFunction<any, any>)[],
      callback: (err: null | ErrnoException) => void
      ): WritableStream;

      A module method to pipe between streams and generators, forwarding errors, properly cleaning up, and providing a callback when the pipeline is complete.

      import { pipeline } from 'node:stream';
      import fs from 'node:fs';
      import zlib from 'node:zlib';
      
      // Use the pipeline API to easily pipe a series of streams
      // together and get notified when the pipeline is fully done.
      
      // A pipeline to gzip a potentially huge tar file efficiently:
      
      pipeline(
        fs.createReadStream('archive.tar'),
        zlib.createGzip(),
        fs.createWriteStream('archive.tar.gz'),
        (err) => {
          if (err) {
            console.error('Pipeline failed.', err);
          } else {
            console.log('Pipeline succeeded.');
          }
        },
      );
      

      The pipeline API provides a promise version.

      stream.pipeline() will call stream.destroy(err) on all streams except:

      • Readable streams which have emitted 'end' or 'close'.
      • Writable streams which have emitted 'finish' or 'close'.

      stream.pipeline() leaves dangling event listeners on the streams after the callback has been invoked. In the case of reuse of streams after failure, this can cause event listener leaks and swallowed errors. If the last stream is readable, dangling event listeners will be removed so that the last stream can be consumed later.

      stream.pipeline() closes all the streams when an error is raised. Using an IncomingRequest with pipeline could lead to unexpected behavior, because it would destroy the socket without sending the expected response. See the example below:

      import fs from 'node:fs';
      import http from 'node:http';
      import { pipeline } from 'node:stream';
      
      const server = http.createServer((req, res) => {
        const fileStream = fs.createReadStream('./fileNotExist.txt');
        pipeline(fileStream, res, (err) => {
          if (err) {
            console.log(err); // No such file
            // this message can't be sent once `pipeline` already destroyed the socket
            return res.end('error!!!');
          }
        });
      });
      
      @param callback

      Called when the pipeline is fully done.

      function pipeline(
      ...streams: [PipelineSource<any>, ...(PipelineTransformStreams<unknown, any> | PipelineTransformGenerator<any, any>)[], WritableStream | TransformStream<unknown, any> | WritableStream<unknown> | PipelineDestinationFunction<any, any>, callback: (err: null | ErrnoException) => void]
      ): WritableStream;

      A module method to pipe between streams and generators, forwarding errors, properly cleaning up, and providing a callback when the pipeline is complete.

      import { pipeline } from 'node:stream';
      import fs from 'node:fs';
      import zlib from 'node:zlib';
      
      // Use the pipeline API to easily pipe a series of streams
      // together and get notified when the pipeline is fully done.
      
      // A pipeline to gzip a potentially huge tar file efficiently:
      
      pipeline(
        fs.createReadStream('archive.tar'),
        zlib.createGzip(),
        fs.createWriteStream('archive.tar.gz'),
        (err) => {
          if (err) {
            console.error('Pipeline failed.', err);
          } else {
            console.log('Pipeline succeeded.');
          }
        },
      );
      

      The pipeline API provides a promise version.

      stream.pipeline() will call stream.destroy(err) on all streams except:

      • Readable streams which have emitted 'end' or 'close'.
      • Writable streams which have emitted 'finish' or 'close'.

      stream.pipeline() leaves dangling event listeners on the streams after the callback has been invoked. In the case of reuse of streams after failure, this can cause event listener leaks and swallowed errors. If the last stream is readable, dangling event listeners will be removed so that the last stream can be consumed later.

      stream.pipeline() closes all the streams when an error is raised. Using an IncomingRequest with pipeline could lead to unexpected behavior, because it would destroy the socket without sending the expected response. See the example below:

      import fs from 'node:fs';
      import http from 'node:http';
      import { pipeline } from 'node:stream';
      
      const server = http.createServer((req, res) => {
        const fileStream = fs.createReadStream('./fileNotExist.txt');
        pipeline(fileStream, res, (err) => {
          if (err) {
            console.log(err); // No such file
            // this message can't be sent once `pipeline` already destroyed the socket
            return res.end('error!!!');
          }
        });
      });
      
    • function setDefaultHighWaterMark(
      objectMode: boolean,
      value: number
      ): void;

      Sets the default highWaterMark used by streams.

      @param value

      highWaterMark value

  • class default

    The EventEmitter class is defined and exposed by the node:events module:

    import { EventEmitter } from 'node:events';
    

    All EventEmitters emit the event 'newListener' when new listeners are added and 'removeListener' when existing listeners are removed.

    It supports the captureRejections option (boolean, default false), which enables automatic capturing of promise rejections.

    • [captureRejectionSymbol](
      error: Error,
      event: string | symbol,
      ...args: any[]
      ): void;

      The Symbol.for('nodejs.rejection') method is called in case a promise rejection happens when emitting an event and captureRejections is enabled on the emitter. It is possible to use events.captureRejectionSymbol in place of Symbol.for('nodejs.rejection').

      import { EventEmitter, captureRejectionSymbol } from 'node:events';
      
      class MyClass extends EventEmitter {
        constructor() {
          super({ captureRejections: true });
        }
      
        [captureRejectionSymbol](err, event, ...args) {
          console.log('rejection happened for', event, 'with', err, ...args);
          this.destroy(err);
        }
      
        destroy(err) {
          // Tear the resource down here.
        }
      }
      
    • addListener<E extends string | symbol>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Alias for emitter.on(eventName, listener).

    • emit<E extends string | symbol>(
      eventName: string | symbol,
      ...args: any[]
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
    • eventNames(): (string | symbol)[];

      Returns an array listing the events for which the emitter has registered listeners.

      import { EventEmitter } from 'node:events';
      
      const myEE = new EventEmitter();
      myEE.on('foo', () => {});
      myEE.on('bar', () => {});
      
      const sym = Symbol('symbol');
      myEE.on(sym, () => {});
      
      console.log(myEE.eventNames());
      // Prints: [ 'foo', 'bar', Symbol(symbol) ]
      
    • getMaxListeners(): number;

      Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to events.defaultMaxListeners.

    • listenerCount<E extends string | symbol>(
      eventName: string | symbol,
      listener?: (...args: any[]) => void
      ): number;

      Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

      @param eventName

      The name of the event being listened for

      @param listener

      The event handler function

    • listeners<E extends string | symbol>(
      eventName: string | symbol
      ): ((...args: any[]) => void)[];

      Returns a copy of the array of listeners for the event named eventName.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      console.log(util.inspect(server.listeners('connection')));
      // Prints: [ [Function] ]
      
    • off<E extends string | symbol>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Alias for emitter.removeListener().

    • on<E extends string | symbol>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param eventName

      The name of the event.

      @param listener

      The callback function

    • once<E extends string | symbol>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param eventName

      The name of the event.

      @param listener

      The callback function

    • pipe<T extends WritableStream>(
      destination: T,
      options?: PipeOptions
      ): T;
    • prependListener<E extends string | symbol>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param eventName

      The name of the event.

      @param listener

      The callback function

    • prependOnceListener<E extends string | symbol>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param eventName

      The name of the event.

      @param listener

      The callback function

    • rawListeners<E extends string | symbol>(
      eventName: string | symbol
      ): ((...args: any[]) => void)[];

      Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.once('log', () => console.log('log once'));
      
      // Returns a new Array with a function `onceWrapper` which has a property
      // `listener` which contains the original listener bound above
      const listeners = emitter.rawListeners('log');
      const logFnWrapper = listeners[0];
      
      // Logs "log once" to the console and does not unbind the `once` event
      logFnWrapper.listener();
      
      // Logs "log once" to the console and removes the listener
      logFnWrapper();
      
      emitter.on('log', () => console.log('log persistently'));
      // Will return a new Array with a single function bound by `.on()` above
      const newListeners = emitter.rawListeners('log');
      
      // Logs "log persistently" twice
      newListeners[0]();
      emitter.emit('log');
      
    • removeAllListeners<E extends string | symbol>(
      eventName?: string | symbol
      ): this;

      Removes all listeners, or those of the specified eventName.

      It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

      Returns a reference to the EventEmitter, so that calls can be chained.
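
      As a short illustrative sketch (the event names are arbitrary), passing an eventName removes only that event's listeners, while calling the method with no argument removes every listener on the emitter:

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      
      emitter.on('data', () => console.log('data'));
      emitter.on('close', () => console.log('close'));
      
      // Remove only the 'data' listeners.
      emitter.removeAllListeners('data');
      console.log(emitter.listenerCount('data'));  // 0
      console.log(emitter.listenerCount('close')); // 1
      
      // With no argument, every remaining listener is removed.
      emitter.removeAllListeners();
      console.log(emitter.listenerCount('close')); // 0
      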

    • removeListener<E extends string | symbol>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Removes the specified listener from the listener array for the event named eventName.

      const callback = (stream) => {
        console.log('someone connected!');
      };
      server.on('connection', callback);
      // ...
      server.removeListener('connection', callback);
      

      removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

      Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls made after emitting but before the last listener finishes execution will not remove those listeners from the emit() already in progress. Subsequent events behave as expected.

      import { EventEmitter } from 'node:events';
      class MyEmitter extends EventEmitter {}
      const myEmitter = new MyEmitter();
      
      const callbackA = () => {
        console.log('A');
        myEmitter.removeListener('event', callbackB);
      };
      
      const callbackB = () => {
        console.log('B');
      };
      
      myEmitter.on('event', callbackA);
      
      myEmitter.on('event', callbackB);
      
      // callbackA removes listener callbackB but it will still be called.
      // Internal listener array at time of emit [callbackA, callbackB]
      myEmitter.emit('event');
      // Prints:
      //   A
      //   B
      
      // callbackB is now removed.
      // Internal listener array [callbackA]
      myEmitter.emit('event');
      // Prints:
      //   A
      

      Because listeners are managed using an internal array, calling this will change the position indexes of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

      When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

      import { EventEmitter } from 'node:events';
      const ee = new EventEmitter();
      
      function pong() {
        console.log('pong');
      }
      
      ee.on('ping', pong);
      ee.once('ping', pong);
      ee.removeListener('ping', pong);
      
      ee.emit('ping');
      ee.emit('ping');
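      // Prints (the remaining on('ping') listener fires on each emit):
      //   pong
      //   pong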
      

      Returns a reference to the EventEmitter, so that calls can be chained.

    • setMaxListeners(
      n: number
      ): this;

      By default, EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps find memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

      Returns a reference to the EventEmitter, so that calls can be chained.
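
      A short sketch of adjusting the limit (the event name and count are arbitrary); raising it before attaching many listeners avoids the MaxListenersExceededWarning:

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      
      // Allow up to 20 listeners per event on this emitter.
      emitter.setMaxListeners(20);
      
      for (let i = 0; i < 15; i++) {
        emitter.on('tick', () => {});
      }
      // No warning is printed, since the limit is now 20 instead of the default 10.
      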