
News

pgflow 0.13.3: Authentication Fix for ensure_workers and EdgeWorker

This release fixes a critical authentication issue where ensure_workers() fails to start Edge Workers due to key format mismatches, resulting in 401 Unauthorized errors. This resolves GitHub issue #603.

Supabase stores the service role key in different formats depending on where you access it:

  • In Vault (used by ensure_workers): JWT format starting with eyJhbG...
  • In Edge Functions (SUPABASE_SERVICE_ROLE_KEY env var): Internal format starting with sb_secret_...

When ensure_workers sends an HTTP request to trigger a worker, it includes the JWT-format key from vault in the Authorization header. However, the Edge Worker validates this against SUPABASE_SERVICE_ROLE_KEY, which contains the sb_secret_... format. Since these strings don’t match, authentication fails with a 401 error.

We’ve introduced PGFLOW_AUTH_SECRET - a user-controlled authentication secret that bypasses the format mismatch entirely:

  1. Store your custom secret in vault as pgflow_auth_secret
  2. Set the same value as PGFLOW_AUTH_SECRET in your Edge Function secrets
  3. Both sides now use the identical string for authentication

Update your pgflow installation and configure the new authentication secret:

  1. Update your packages and migrations following the update guide
  2. Set pgflow_auth_secret in vault (same value you’ll use in step 3):
    SELECT vault.create_secret('your-secret-value', 'pgflow_auth_secret');
  3. Set PGFLOW_AUTH_SECRET in your Edge Function secrets with the same value (see the example below)
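
For step 3, if you deploy with the Supabase CLI, one way to set the function secret is shown below; the value is a placeholder and must match what you stored in Vault in step 2:

supabase secrets set PGFLOW_AUTH_SECRET=your-secret-value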

See the Configure Secrets documentation for detailed setup instructions.

For backwards compatibility, pgflow falls back to the legacy secrets if PGFLOW_AUTH_SECRET is not configured:

  • Database side: Falls back to supabase_service_role_key in vault
  • Edge Function side: Falls back to SUPABASE_SERVICE_ROLE_KEY env var

Existing deployments will continue to work without immediate changes, though we strongly recommend migrating to PGFLOW_AUTH_SECRET to avoid the format mismatch issues.

pgflow 0.13.2: Stalled Task Recovery and Config Fixes

This patch release fixes two issues: tasks getting stuck when workers crash, and the maxPgConnections config being ignored.

When a worker crashes or is terminated unexpectedly, tasks can get stuck in started status indefinitely. These “stalled” tasks never complete and block flow progress. This was reported in #586.

A new pgflow.requeue_stalled_tasks() function automatically detects and recovers stalled tasks:

  • Runs via cron job every 15 seconds
  • Identifies tasks stuck in started status beyond their timeout + 30s buffer
  • Requeues them back to queued status (up to 3 times)
  • After 3 requeue attempts, archives the message and marks task with permanently_stalled_at timestamp for manual investigation

The cron job is set up automatically via migration. For more details, see the Troubleshooting Stalled Tasks guide.
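
The recovery function that the cron job calls can also be invoked manually, for example while investigating a stuck run:

SELECT pgflow.requeue_stalled_tasks();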

Default visibility timeout increased from 2s to 5s to reduce the likelihood of tasks appearing stalled during normal processing delays.

Run this query to check if you had stalled tasks before upgrading:

SELECT count(*)
FROM pgflow.step_tasks
WHERE status = 'started'
AND started_at < now() - interval '5 minutes';

If you see results, those tasks will now be automatically recovered.
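
To watch the recovery happen after upgrading, you can track the requeue counter on those tasks (the columns are the same ones used in the query further below):

SELECT run_id, step_slug, requeued_count
FROM pgflow.step_tasks
WHERE requeued_count > 0
  AND permanently_stalled_at IS NULL;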

To find tasks that exceeded the max requeue limit (for manual investigation):

SELECT r.run_id, r.flow_slug, st.step_slug, st.requeued_count, st.permanently_stalled_at
FROM pgflow.step_tasks st
JOIN pgflow.runs r ON r.run_id = st.run_id
WHERE st.permanently_stalled_at IS NOT NULL;

The maxPgConnections configuration option was being ignored when passed to createFlowWorker().

The config is now properly passed through the connection chain with a default of 4 connections.
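
If you rely on a specific pool size, here is a minimal sketch of setting it explicitly - assuming the option is accepted in the EdgeWorker.start config alongside the other worker options, and using a placeholder flow import:

import { EdgeWorker } from '@pgflow/edge-worker';
import { MyFlow } from '../../flows/my-flow.ts'; // placeholder path

// Cap the worker's Postgres connection pool (0.13.2 restores the default of 4)
EdgeWorker.start(MyFlow, { maxPgConnections: 4 });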

Thanks to matz for reporting issue #586!

pgflow 0.13.1: CLI Fix + Step Output Storage for Conditional Execution

pgflow 0.13.1 fixes a compatibility issue with recent Supabase CLI versions and introduces atomic step output storage for 2x faster Map chains.

If local development stopped working after a Supabase CLI update, this release fixes it.

pgflow now detects local development by checking the SUPABASE_URL environment variable. When running locally, Supabase CLI sets this to http://kong:8000 (the Docker-internal API gateway). This is more reliable than the previous key-based detection, which broke when Supabase CLI transitioned to new opaque API keys.
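
A sketch of the detection idea (not the actual implementation): the worker can simply compare the environment variable against the known local gateway URL:

// Local Supabase sets SUPABASE_URL to the Docker-internal Kong gateway
const isLocalDev = Deno.env.get('SUPABASE_URL') === 'http://kong:8000';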

Step outputs are now stored in step_states.output when a step completes, rather than being aggregated on-demand.

Previously, every time a downstream step needed its dependency’s output, pgflow ran an aggregation query:

SELECT jsonb_agg(output ORDER BY index)
FROM pgflow.step_tasks
WHERE run_id = $1 AND step_slug = $2

For a Map step with 500 items, this query scanned 500 rows. When another Map step depended on it with 500 tasks, that’s 500 tasks x 500 rows = 250,000 row scans.

With outputs pre-stored, downstream tasks now read a single column instead of aggregating:

Benchmark: start 500 tasks, each reading a 500-element output

  • Before: 189s
  • After: 87s
  • Improvement: 2.17x faster

The complexity dropped from O(N^2) to O(N) for Map-to-Map chains.

Querying step outputs is now straightforward:

-- Get output for a specific step in a run
SELECT output
FROM pgflow.step_states
WHERE run_id = 'run-abc' AND step_slug = 'process';
-- Get all step outputs for a run
SELECT step_slug, output
FROM pgflow.step_states
WHERE run_id = 'run-abc';

No need to aggregate from step_tasks - the output is ready to use.

This release includes a data migration that backfills step_states.output for existing completed steps.

  1. Test locally with production data

    Download a production backup and restore it locally to test the migration. See Restoring a downloaded backup in the Supabase docs, then run npx supabase db push to test the migration.

  2. Record enabled worker functions

    Before disabling, note which functions are currently enabled:

    SELECT function_name FROM pgflow.worker_functions WHERE enabled = true;

    Save this list - you’ll need it in step 7.

  3. Disable all worker functions

    UPDATE pgflow.worker_functions SET enabled = false;
  4. Deprecate running workers

    UPDATE pgflow.workers
    SET deprecated_at = NOW()
    WHERE deprecated_at IS NULL
    AND stopped_at IS NULL;
  5. Wait for all workers to stop

    SELECT COUNT(*) FROM pgflow.workers WHERE stopped_at IS NULL;

    Wait until this returns 0.

  6. Apply database migration

    npx supabase db push
  7. Deploy workers and re-enable

    Deploy your workers and re-enable only the functions that were enabled before:

    npx supabase functions deploy
    UPDATE pgflow.worker_functions
    SET enabled = true
    WHERE function_name IN ('worker-1', 'worker-2'); -- from step 2

This change stores step outputs in a queryable location - a prerequisite for conditional execution. Future releases will use these stored outputs to evaluate conditions and skip steps dynamically.


Questions or issues? Join the Discord community or open a GitHub issue.

pgflow 0.12.0: Simpler Handler Signatures for Flow Composition

pgflow 0.12.0 introduces asymmetric handler signatures - a breaking change that removes the run wrapper from step inputs, enabling cleaner functional composition and preparing the foundation for subflows.

Handler signatures are now asymmetric. The input.run pattern no longer exists.

// Root steps
// before: (input) => input.run.xxx
// after:  (flowInput) => flowInput.xxx

// Dependent steps
// before: (input) => input.dep.xxx
// after:  (deps) => deps.dep.xxx

// Dependent steps needing flowInput
// before: (input) => input.run.xxx
// after:  async (deps, ctx) => (await ctx.flowInput).xxx

The previous run wrapper blocked functional composition:

// OLD: The run wrapper created type mismatches
// Root steps received: { run: flowInput }
// Dependent steps received: { run: flowInput, dep1: output1 }
// This meant subflows couldn't compose cleanly:
const ChildFlow = new Flow<{ data: string }>()
  .step({ slug: "process" }, (input) => {
    // Expected: input.data
    // Received: { run: parentInput, prep: { data: "..." } }
    // TYPE MISMATCH!
  });

By removing the wrapper, outputs from one flow can become inputs to another without transformation.

Apply these patterns to update your handlers.

// BEFORE
.step({ slug: 'init' }, (input) => {
  return { userId: input.run.userId };
})

// AFTER
.step({ slug: 'init' }, (flowInput) => {
  return { userId: flowInput.userId };
})

// BEFORE
.step({ slug: 'process', dependsOn: ['init'] }, (input) => {
  const config = input.run.config;
  const data = input.init.data;
  return combine(data, config);
})

// AFTER (must be async)
.step({ slug: 'process', dependsOn: ['init'] }, async (deps, ctx) => {
  const flowInput = await ctx.flowInput;
  const config = flowInput.config;
  const data = deps.init.data;
  return combine(data, config);
})

Dependent Steps - Not Needing flowInput (Common Case)

// BEFORE
.step({ slug: 'save', dependsOn: ['process'] }, (input) => {
  return saveToDb(input.process.result);
})

// AFTER
.step({ slug: 'save', dependsOn: ['process'] }, (deps) => {
  return saveToDb(deps.process.result);
})

Map Steps (no change needed if only using item)


Most map steps just use item and need no changes. Only update if you need flowInput:

// BEFORE (accessing flowInput)
.map({ slug: 'transform', array: 'items' }, (item) => {
  return process(item);
})

// AFTER (must be async to access flowInput)
.map({ slug: 'transform', array: 'items' }, async (item, ctx) => {
  const flowInput = await ctx.flowInput;
  return process(item, flowInput.options);
})

The upgrade requires careful coordination to avoid running old code against the new SQL schema.

  1. Update handlers locally and test

    Update all your flow handlers to the new signatures. Test locally:

    npx supabase functions serve my-worker
  2. Disable worker functions

    Prevent cron from starting new workers with old code:

    UPDATE pgflow.worker_functions
    SET enabled = false
    WHERE function_name = 'my-worker';
  3. Deprecate existing workers

    UPDATE pgflow.workers
    SET deprecated_at = NOW()
    WHERE function_name = 'my-worker'
    AND deprecated_at IS NULL;

    Deprecated workers finish their current task but won’t call start_tasks again - so they won’t be affected by the SQL changes.

  4. Wait for workers to stop

    Monitor workers until all have exited:

    SELECT COUNT(*) FROM pgflow.workers
    WHERE function_name = 'my-worker'
    AND stopped_at IS NULL;

    Wait until this returns 0 before proceeding.

  5. Apply database migration

    npx supabase db push
  6. Deploy new workers

    npx supabase functions deploy my-worker
  7. Enable worker functions

    UPDATE pgflow.worker_functions
    SET enabled = true
    WHERE function_name = 'my-worker';

    The pgflow cron automatically starts new workers within seconds.

  • Fixed CONNECT_TIMEOUT errors on Lovable.dev by switching to jsr:@oscar6echo/postgres fork
  • Fixed setTimeout context binding issue in @pgflow/client for browser compatibility

Questions or issues? Join the Discord community or open a GitHub issue.

pgflow 0.11.0: Compilation Configuration Overhaul

pgflow 0.11.0 introduces a redesigned compilation configuration system. The new compilation option provides clearer semantics and adds support for rapid iteration platforms like Lovable through the allowDataLoss flag.

  • New compilation config - Grouped compilation options with clearer semantics
  • allowDataLoss option - Enable destructive recompilation in production for rapid iteration
  • Breaking change - ensureCompiledOnStartup removed in favor of compilation

The compilation behavior is now controlled by a single compilation option:

// Default: auto-detect environment
EdgeWorker.start(MyFlow);
// Skip compilation (flows pre-compiled via CLI)
EdgeWorker.start(MyFlow, { compilation: false });
// Allow destructive recompile in production
EdgeWorker.start(MyFlow, { compilation: { allowDataLoss: true } });
  • (not set) - Standard development and production; auto-detects environment
  • compilation: false - CI/CD pipelines with pre-compiled flows
  • compilation: { allowDataLoss: true } - Rapid iteration platforms (Lovable)

The new allowDataLoss flag enables local-development behavior in production environments. When a flow shape mismatch is detected:

  1. Deletes the existing flow and all its run data
  2. Compiles the new flow from the TypeScript definition
  3. Worker continues with the new schema

Target Use Case: Lovable and Similar Platforms


The allowDataLoss option was designed for platforms like Lovable where:

  • Flow definitions change frequently during development
  • Execution history doesn’t need to persist between iterations
  • Speed of iteration matters more than data durability
  • Users expect “just works” behavior without manual intervention

The ensureCompiledOnStartup option has been replaced by the new compilation config.

Before (0.10.x):

// Skip compilation
EdgeWorker.start(MyFlow, { ensureCompiledOnStartup: false });
// Enable compilation (default)
EdgeWorker.start(MyFlow, { ensureCompiledOnStartup: true });

After (0.11.0):

// Skip compilation
EdgeWorker.start(MyFlow, { compilation: false });
// Enable compilation (default) - just don't set the option
EdgeWorker.start(MyFlow);

If you were using ensureCompiledOnStartup:

  • ensureCompiledOnStartup: true → remove the option (or use compilation: {})
  • ensureCompiledOnStartup: false → compilation: false

Follow the standard Update pgflow guide to update packages and apply migrations.

  1. Update your packages:

    npx pgflow@latest install
  2. Find and replace in your worker files:

    • ensureCompiledOnStartup: false → compilation: false
    • ensureCompiledOnStartup: true → remove the option entirely
  3. Apply database migrations:

    supabase db push

pgflow 0.10.0: Auto-Compilation and Worker Management

pgflow 0.10.0 is a developer experience milestone. This release eliminates two of the biggest friction points in pgflow development: manual flow compilation and worker lifecycle management.

  • Auto-compilation at startup - Workers verify and compile flows automatically
  • Intelligent worker management - Database-driven cron keeps workers running reliably
  • Beautiful local logging - Colored output with retry information and exponential backoff display

Auto-Compilation: Never Compile Manually Again


Previously, every flow change required running pgflow compile, dropping old flow data with pgflow.delete_flow_and_data(), and migrating the database before you could test your flow. This release introduces automatic flow compilation, including deletion of the previous version, when workers start.

When a worker starts, it calls the new ensure_flow_compiled() function which:

  1. Checks if the flow exists - If not, compiles it immediately
  2. Compares flow shapes - Verifies the TypeScript definition matches the database
  3. Auto-recompiles in development - Detects local Supabase, deletes the previous version, and compiles the new one
  4. Fails safely in production - Returns an error instead of deleting/recompiling

This achieves full watch-mode in local development:

  • Change your flow definition in TypeScript
  • Supabase detects the change and starts new function
  • Cron polls the function, starting new worker automatically
  • Flow is compiled and ready - no manual step required

And it enables a convenient “deploy the updated flow, no migrations needed” workflow in production:

  • Production workers only compile new flows
  • Mismatches are caught immediately with clear error messages; the worker refuses to start

Auto-compilation is enabled by default. To opt out:

EdgeWorker.start(MyFlow, {
  ensureCompiledOnStartup: false,
});

Managing worker lifecycles on hosted Supabase has been a pain point. Edge Functions have CPU time limits, so workers constantly respawn themselves to keep running, and making this reliable required setting up a manual cron schedule.

pgflow 0.10.0 introduces built-in worker management that handles all of this automatically.

Each worker function (an edge function that calls EdgeWorker.start()) is tracked in the new pgflow.worker_functions table.

Installing the 0.10.0 migrations sets up a new pgflow_ensure_workers cron job that runs every second and:

  1. Tracks registered workers from the worker_functions table
  2. Detects dead workers - Uses heartbeats and stopped_at timestamps
  3. Invokes workers when needed - Pings edge functions via HTTP to spawn new workers
  4. Debounces intelligently - Prevents spawning too many workers simultaneously
  5. Minimizes HTTP traffic - Sends requests only when needed, saving on invocations and egress (you can inspect the installed job with the query below)
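
The job is visible in pg_cron's catalog; to confirm it was installed and see its schedule:

-- Inspect the ensure-workers cron job created by the migration
SELECT jobid, jobname, schedule, active
FROM cron.job
WHERE jobname = 'pgflow_ensure_workers';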

Before 0.10.0:

  • Set up manual pg_cron jobs with watchdog logic
  • Workers died and stayed dead until manually restarted
  • Risk of spawning too many workers under load

After 0.10.0:

  • Workers register themselves automatically
  • Dead workers are detected and replaced within seconds
  • Debouncing prevents worker storms
  • Graceful shutdown signals prevent false-positive restarts

The worker management system uses Vault secrets to invoke edge functions:

  • supabase_service_role_key - For authentication
  • supabase_project_id - To build the function URL
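
If either secret is missing in your project, here is a sketch of creating them in Vault manually (the values are placeholders):

-- Secrets used by the ensure-workers cron to invoke edge functions
SELECT vault.create_secret('eyJhbG...your-service-role-key', 'supabase_service_role_key');
SELECT vault.create_secret('your-project-ref', 'supabase_project_id');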

The new logging system adapts to your environment automatically.

[Screenshot: logger output showing colored status icons, worker prefixes, and retry information]

In local development, the fancy format provides:

  • Colored icons (green checkmarks, red X, yellow retry arrows)
  • Worker-prefixed lines for multi-worker clarity
  • Flow/step paths for context
  • Retry countdown with exponential backoff calculation

In production, logs switch to a structured key=value format:
[INFO] worker=analyze-website queue=pgflow_tasks flow=analyze_website status=verified worker_id=abc123
[VERBOSE] worker=analyze-website flow=analyze_website step=scrape status=completed duration_ms=127
[VERBOSE] worker=analyze-website flow=analyze_website step=analyze status=failed error="Rate limit" retry=1/3 retry_delay_s=5

Features:

  • Structured key=value format for log aggregators
  • Appropriate log levels (INFO, VERBOSE, DEBUG)
  • All context included for filtering and alerting

The format is auto-detected based on environment, but you can override:

# Force fancy format in production (not recommended)
EDGE_WORKER_LOG_FORMAT=fancy
# Set log level explicitly
EDGE_WORKER_LOG_LEVEL=debug # error < warn < info < verbose < debug
# Disable colors (respects NO_COLOR standard)
NO_COLOR=1

Breaking changes: none. All new features are additive and existing workflows continue to work unchanged.

Follow the standard Update pgflow guide to update packages and apply migrations.

pgflow now sets up the pgflow_ensure_workers cron automatically during migration. If you previously set up manual cron jobs to keep workers running (from older documentation), remove them:

-- Check for old watchdog jobs
SELECT jobid, jobname, schedule, command
FROM cron.job
WHERE jobname LIKE '%watchdog%'
OR command LIKE '%invoke%function%';
-- Remove old jobs (replace ID with actual jobid)
SELECT cron.unschedule(jobid) FROM cron.job WHERE jobname LIKE '%watchdog%';

Questions or issues? Open a GitHub issue or join the discussion on Discord.

pgflow 0.9.1: Unified Connection Configuration

This release unifies how the edge worker handles database connections - zero-config local development, a clear priority chain, and proper SSL support.

  • connectionString now works (#469, #424) - The connectionString config option was being ignored. Now it works, enabling patterns like falling back to SUPABASE_DB_URL (see the sketch below).
  • Pass raw postgres.js connection - Use config.sql to pass your own postgres.js instance with custom options (SSL, connection pooling, etc.).
  • Zero-config local dev - When local environment is detected (via known local dev keys), the worker automatically connects to the local transaction pooler. No environment variables needed.
  • Cleaner install - pgflow install no longer writes EDGE_WORKER_DB_URL to .env since local dev works without it.
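
A minimal sketch of that fallback pattern, assuming the EdgeWorker.start config shape used elsewhere in these notes and a placeholder flow import:

import { EdgeWorker } from '@pgflow/edge-worker';
import { MyFlow } from '../flows/my-flow.ts'; // placeholder path

// Prefer EDGE_WORKER_DB_URL, fall back to SUPABASE_DB_URL
EdgeWorker.start(MyFlow, {
  connectionString:
    Deno.env.get('EDGE_WORKER_DB_URL') ?? Deno.env.get('SUPABASE_DB_URL'),
});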

The edge worker now resolves database connections in this order:

  1. config.sql - Full control with custom postgres instance
  2. config.connectionString - Explicit URL in code
  3. EDGE_WORKER_DB_URL - Environment variable
  4. Local fallback - Auto-detected local Supabase pooler

If nothing is configured and local Supabase is not detected, the worker throws a clear error explaining the options.
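
For full control (option 1 in the chain above), you can construct your own postgres.js instance and hand it to the worker via config.sql. A sketch, assuming the npm:postgres import specifier and placeholder connection details:

import postgres from 'npm:postgres';
import { EdgeWorker } from '@pgflow/edge-worker';
import { MyFlow } from '../flows/my-flow.ts'; // placeholder path

// Custom postgres.js instance: SSL, pool size, etc. are under your control
const sql = postgres(Deno.env.get('EDGE_WORKER_DB_URL')!, {
  ssl: 'require',
  max: 4,
});

EdgeWorker.start(MyFlow, { sql });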

The Supabase deployment guide has been revamped from a single page into a structured overview with dedicated pages for each topic.

Related issues and discussions:

  • #469 - connectionString config was being ignored
  • #424 - Simplified local development setup
  • Discussion #280 - Preview branch connection patterns

This release was shaped by community feedback. Thanks to Nciso, mikz, ddlaws0n, and PixelEcho (Discord) for reporting issues, suggesting improvements, and helping improve the documentation.

pgflow 0.9.0: Control Plane and HTTP-Based Compilation

pgflow 0.9.0 introduces the ControlPlane edge function, enabling HTTP-based flow compilation without requiring a local Deno installation.

The CLI no longer spawns Deno processes locally. Instead, compilation requests go through the ControlPlane edge function:

pgflow compile my_flow --> ControlPlane --> SQL migration

This simplifies the development setup and lays groundwork for future auto-compilation features where workers will verify and compile flows at startup without any CLI involvement.

The pgflow compile command now takes a flow slug instead of a file path:

Before (0.8.x):

pgflow compile path/to/my_flow.ts --deno-json supabase/functions/deno.json

After (0.9.0):

pgflow compile my_flow

The --deno-json flag has been removed.

The installer now scaffolds a complete working setup:

npx pgflow@0.9.0 install

This creates:

  • supabase/flows/ - Directory for flow definitions with namespace imports
  • supabase/flows/greet-user.ts - Example GreetUser flow
  • supabase/functions/pgflow/ - Control Plane for compilation
  • supabase/functions/greet-user-worker/ - Example worker ready to run

The installer shows a clear summary and asks for a single confirmation before making changes.

ControlPlane now supports namespace imports, the recommended pattern:

import { ControlPlane } from '@pgflow/edge-worker';
import * as flows from '../../flows/index.ts';
ControlPlane.serve(flows);

Add flows by exporting from flows/index.ts:

export { GreetUser } from './greet-user.ts';
export { MyOtherFlow } from './my-other-flow.ts';

Existing compiled flows continue to work - no recompilation needed.

To compile new flows with v0.9.0:

  1. Run npx pgflow@0.9.0 install to add the ControlPlane and supabase/flows/ directory
  2. Create new flow files in supabase/flows/
  3. Export them from supabase/flows/index.ts

The --deno-json flag has been removed. Compilation now uses the ControlPlane edge function instead of local Deno. If you used this flag in CI/CD scripts, update them to use the new compilation approach.

pgflow 0.8.0: Modernizing Dependencies - pgmq 1.5.0 and PostgreSQL 17


pgflow 0.8.0 requires pgmq 1.5.0 or higher and PostgreSQL 17. This release removes the pgmq compatibility layer and prepares the foundation for upcoming flow auto-compilation features.

pgflow 0.8.0 introduces breaking dependency changes:

  1. pgmq version: Now requires 1.5.0 or higher (previously supported 1.4.x)
  2. PostgreSQL version: Upgrades to PostgreSQL 17 (from 15)
  3. Supabase CLI: Requires version 2.50.3 or higher (includes pgmq 1.5.0+)
  4. Deno version: Now requires 2.1.x or higher (previously 1.45.x)

The migration will fail safely if these requirements are not met - no partial upgrades or corrupted state.

This release skips backward compatibility entirely - pgflow 0.8.0 requires pgmq 1.5.0 from the start. Maintaining compatibility layers for older versions would accumulate technical debt that becomes harder to remove as the project matures.

Additionally, upcoming features like flow auto-compilation depend on infrastructure available only in newer Supabase versions (automatic edge function respawning in local development). Moving to pgmq 1.5.0 and PostgreSQL 17 unblocks these improvements.

Run this query to verify your current pgmq version:

SELECT extversion FROM pg_extension WHERE extname = 'pgmq';

You need version 1.5.0 or higher to upgrade to pgflow 0.8.0.

Verify your Deno installation meets the minimum requirement:

deno --version

You need Deno 2.1.x or higher (previously 1.45.x). If you need to upgrade, see the Deno installation guide.

The migration includes a pre-check that inspects the pgmq schema before making any changes. If pgmq 1.5.0 is not detected, the migration aborts with this error:

ERROR: INCOMPATIBLE PGMQ VERSION DETECTED
This migration requires pgmq 1.5.0 or higher.
The pgmq.message_record type is missing the "headers" column,
which indicates you are running pgmq < 1.5.0.
Action required:
- Supabase: Ensure you are running a recent version that includes pgmq 1.5.0+
- Self-hosted: Upgrade pgmq to version 1.5.0 or higher before running this migration
Migration aborted to prevent runtime failures.

This safety check prevents partial migrations and data corruption.
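
If you want to perform a similar check manually before migrating (a sketch; the migration's actual pre-check may differ), you can look for the headers attribute on the pgmq.message_record composite type:

-- pgmq >= 1.5.0 adds a "headers" column to pgmq.message_record
SELECT attname
FROM pg_attribute
WHERE attrelid = 'pgmq.message_record'::regclass
  AND attname = 'headers';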

Supabase CLI version 2.50.3 or higher includes pgmq 1.5.0 by default.

  1. Update your Supabase CLI if needed:

    supabase -v # Check current version
    npm install -g supabase # Update if below 2.50.3
  2. Verify your pgmq version using the SQL query above

  3. Run your migrations:

    supabase db push

You must upgrade pgmq to 1.5.0+ before upgrading pgflow.

  1. Check your current pgmq version

  2. Upgrade pgmq to 1.5.0 or higher following the pgmq upgrade guide

  3. Upgrade PostgreSQL to version 17 if not already running it

  4. Run your pgflow migrations:

    supabase db push

For questions or issues with the upgrade, visit the pgflow Discord or open a GitHub issue.

pgflow 0.7.3: Improved Supabase Realtime Connection Reliability

pgflow 0.7.3 introduces a configurable realtimeStabilizationDelayMs option that addresses a known Supabase Realtime limitation where backend routing isn’t fully established when the SUBSCRIBED event is emitted.

The TypeScript client now includes a realtimeStabilizationDelayMs configuration option (default: 300ms) that adds a delay after subscribing to Realtime channels. This works around a known Supabase Realtime issue where messages sent immediately after subscription confirmation may be missed because backend routing takes additional time to fully establish.

When starting flows or retrieving runs, the client waits for this stabilization period after receiving the SUBSCRIBED event, ensuring that all subsequent realtime events are properly received.

The default 300ms delay works reliably in most environments. If you experience missed events or connection issues, increase the delay to 400-500ms:

import { PgflowClient } from '@pgflow/client';
// Increase stabilization delay for unreliable connections
const pgflow = new PgflowClient(supabase, {
  realtimeStabilizationDelayMs: 400,
});

You can also disable the delay by setting it to 0, though this may cause missed events in some environments:

// Disable delay (may cause missed events)
const pgflow = new PgflowClient(supabase, {
  realtimeStabilizationDelayMs: 0,
});

See the TypeScript Client documentation and PgflowClient API reference for complete details.

Follow the update guide to upgrade your @pgflow/client dependency to 0.7.3.


Questions or issues? Join our Discord community or open an issue on GitHub.
