Raw tables are an experimental feature. We’re actively seeking feedback on:
  • API design and developer experience
  • Additional features or optimizations needed
View the roadmap doc to see our latest thinking on the future of this feature, and join our Discord community to share your experience and get help.
By default, PowerSync uses a JSON-based view system where data is stored schemalessly in JSON format and then presented through SQLite views based on the client-side schema. Raw tables allow you to define native SQLite tables in the client-side schema, bypassing this. This eliminates overhead associated with extracting values from the JSON data and provides access to advanced SQLite features like foreign key constraints and custom indexes.
Availability: Features described on this page were introduced in the following versions of our client SDKs:
  • JavaScript (Node: 0.18.0, React-Native: 1.31.0, Web: 1.35.0)
  • Dart: Version 1.18.0 of package:powersync.
  • Kotlin: Version 1.11.0.
  • Swift: Version 1.12.0.
  • Rust: Version 0.0.4.
  • This feature is not yet available on our .NET SDK.

When to Use Raw Tables

Consider raw tables when you need:
  • Indexes - PowerSync’s default schema has basic support for indexes on columns, while raw tables give you complete control to create indexes on expressions, use GENERATED columns, etc.
  • Improved performance for complex queries (e.g., SELECT SUM(value) FROM transactions) - raw tables more efficiently get these values directly from the SQLite column, instead of extracting the value from the JSON object on every row.
  • Reduced storage overhead - eliminates the JSON object overhead for each row in the ps_data__<table> data column.
  • To manually create tables - Sometimes you need full control over table creation, for example when implementing custom triggers.
Advanced SQLite features like FOREIGN KEY and ON DELETE CASCADE constraints need special consideration.

How Raw Tables Work

Current JSON-Based System

Currently the sync system involves two general steps:
  1. Download bucket operations from the PowerSync Service.
  2. Once the client has a complete checkpoint and no pending local changes in the upload queue, sync the local database with the bucket operations.
The bucket operations use JSON to store the individual operation data. The local database uses tables with a simple schemaless ps_data__<table_name> structure containing only an id (TEXT) and data (JSON) column. PowerSync automatically creates views on that table that extract JSON fields to resemble standard tables reflecting your schema.
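For illustration, the JSON-based layout can be sketched like this. This is a simplified sketch for a hypothetical `customers` table with a single `name` column; the actual tables and views generated by PowerSync contain additional details:

```sql
-- Simplified sketch of the default JSON-based layout (the SQL PowerSync
-- actually generates contains additional details).
CREATE TABLE ps_data__customers (id TEXT PRIMARY KEY, data TEXT);

-- PowerSync generates a view that extracts JSON fields from the data column:
CREATE VIEW customers AS
   SELECT id, data ->> 'name' AS name FROM ps_data__customers;
```

Queries against `customers` therefore extract values from the JSON `data` column on every row, which is the overhead raw tables avoid.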

Raw Tables Approach

When opting in to raw tables, you are responsible for creating the tables before using them - PowerSync will no longer create them automatically. Because PowerSync doesn't manage raw tables, you need to manually:
  1. Define how PowerSync’s schemaless protocol maps to your raw tables — see Define sync mapping for raw tables
  2. Define triggers that capture local writes from raw tables — see Capture local writes with triggers
For the purpose of this example, consider a simple table like this:
CREATE TABLE todo_lists (
   id TEXT NOT NULL PRIMARY KEY,
   created_by TEXT NOT NULL,
   title TEXT NOT NULL,
   content TEXT
) STRICT;

Define sync mapping for raw tables

To sync into the raw todo_lists table instead of ps_data__, PowerSync needs SQL statements that extract columns from the untyped JSON protocol used during syncing. Internally, this involves two SQL statements:
  1. A put SQL statement for upserts, responsible for creating a todo_lists row or updating it based on its id and data columns.
  2. A delete SQL statement responsible for deletions.
The PowerSync client, as part of our SDKs, automatically runs these statements in response to sync lines sent from the PowerSync Service. In most cases, these statements can be inferred automatically. However, they can also be given explicitly if customization is needed.

Inferring sync statements

In most cases, the put and delete statements are obvious when looking at the structure of the table. With the todo_list example, a delete statement would DELETE FROM todo_lists WHERE id = $row_id_to_delete. Similarly, a put statement would use a straightforward upsert to create or update rows. When the SDK knows the name of the local table you’re inserting into, it can infer statements automatically by analyzing the CREATE TABLE structure. The name of raw tables can be provided with the RawTableSchema type:
// Raw tables are not included in the regular Schema() object.
// Instead, add them afterwards using withRawTables().
const mySchema = new Schema({
  // Define your PowerSync-managed schema here
  // ...
});
mySchema.withRawTables({
  todo_lists: {
    schema: {},
  }
});
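For the todo_lists table above, the inferred statements would roughly correspond to the following SQL. This is illustrative only; the exact statements generated by the SDK may differ:

```sql
-- put: create or update a row from synced values
INSERT OR REPLACE INTO todo_lists (id, created_by, title, content)
   VALUES (?, ?, ?, ?);

-- delete: remove a row by id
DELETE FROM todo_lists WHERE id = ?;
```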
When to use inferred statements: If you have a local table that directly corresponds to the schema of a synced output table, inferred statements greatly simplify the schema setup. You will need explicit sync statements if, for instance:
  • you want to apply transformations on synced values before inserting them into your local database.
  • you need custom default values for synced NULL values.
  • you’re using the rest column pattern to help with migrations.
  • you have a custom setup where a raw table stores data from multiple source tables.
If the name of the SQLite table and the name of the synced table aren’t the same, the inferred statements can be customized. For instance, say you had a local_users table in your SQLite database and want to sync rows from the users table in your backend. Here, the name of the raw table must be users to match PowerSync definitions, but the RawTableSchema type on every SDK has an optional tableName field that can be set to local_users in this case.
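For example, assuming tableName is set on the schema object (the same object-literal shape used for syncedColumns later on this page), the mapping from a synced users table to a local local_users table could look like this:

```javascript
mySchema.withRawTables({
  // `users` matches the table name from the backend source database,
  // while tableName points at the local SQLite table.
  users: {
    schema: {
      tableName: 'local_users',
    },
  },
});
```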

Explicit sync statements

To pass statements explicitly, use the put and delete parameters available in each SDK. A statement consists of two parts:
  1. An SQL string of the statement to run. It should use positional parameters (?) as placeholders for values from the synced row.
  2. An array describing how the positional parameters are instantiated. delete statements can reference the id of the affected row, while put statements can also reference individual column values. A rest parameter is also available; see migrations for details on how that can be useful.
Declaring these statements and parameters happens as part of the schema passed to PowerSync databases:
// Raw tables are not included in the regular Schema() object.
// Instead, add them afterwards using withRawTables().
// The values of parameters are described as a JSON array either containing:
//   - the string 'Id' to reference the id of the affected row.
//   - the object { Column: name } to reference the value of the column 'name'.
const mySchema = new Schema({
  // Define your PowerSync-managed schema here
  // ...
});
mySchema.withRawTables({
  // The name here doesn't have to match the name of the table in SQL. Instead, it's used to match
  // the table name from the backend source database as sent by the PowerSync Service.
  todo_lists: {
    put: {
      sql: 'INSERT OR REPLACE INTO todo_lists (id, created_by, title, content) VALUES (?, ?, ?, ?)',
      params: ['Id', { Column: 'created_by' }, { Column: 'title' }, { Column: 'content' }]
    },
    delete: {
      sql: 'DELETE FROM todo_lists WHERE id = ?',
      params: ['Id']
    }
  }
});
We may simplify this API after understanding the use cases for raw tables better.
After adding raw tables to the schema, you’re also responsible for creating them by executing the corresponding CREATE TABLE statement before connect()-ing the database.
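To make the parameter mapping concrete, here is a small sketch in plain JavaScript (not SDK code and not an actual SDK API) of how a params array resolves to the positional ? placeholders for a synced row:

```javascript
// Sketch (not SDK code): resolve a params array against a synced row.
// 'Id' maps to the row's id; { Column: name } maps to that column's value.
function resolveParams(params, id, row) {
  return params.map((p) => (p === 'Id' ? id : row[p.Column] ?? null));
}

const args = resolveParams(
  ['Id', { Column: 'created_by' }, { Column: 'title' }, { Column: 'content' }],
  'list-1',
  { created_by: 'User', title: 'Groceries', content: null }
);
// args is now ['list-1', 'User', 'Groceries', null]
```

The resulting array is bound to the put statement's placeholders in order, which is why the number of entries in params must match the number of ? markers in the SQL string.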

Capture local writes with triggers

PowerSync uses an internal SQLite table to collect local writes. For PowerSync-managed views, triggers for insertions, updates, and deletions automatically forward local mutations into this table. When using raw tables, defining those triggers is your responsibility. The PowerSync SQLite extension creates an insert-only virtual table named powersync_crud with these columns:
-- This table is part of the PowerSync SQLite core extension
CREATE VIRTUAL TABLE powersync_crud(
   -- The type of operation: 'PUT' or 'DELETE'
   op TEXT,
   -- The id of the affected row
   id TEXT,
   -- The name of the affected table
   type TEXT,
   -- optional (not set on deletes): The column values for the row
   data TEXT,
   -- optional: Previous column values to include in a CRUD entry
   old_values TEXT,
   -- optional: Metadata for the write to include in a CRUD entry
   metadata TEXT
);
The virtual table associates local mutations with the current transaction and ensures writes made during the sync process (applying server-side changes) don’t count as local writes. The role of triggers is to insert into powersync_crud to record writes on raw tables. Like with statements, these triggers can usually be inferred from the schema of the table.

Inferred triggers

The powersync_create_raw_table_crud_trigger SQL function is available in migrations to create triggers for raw tables. It takes three arguments:
  1. A JSON description of the raw table with options, which can be generated by PowerSync SDKs.
  2. The name of the trigger to create.
  3. The type of write for which to generate a trigger (INSERT, UPDATE or DELETE). Typically, you’d generate all three.
powersync_create_raw_table_crud_trigger parses the structure of tables from the database schema, so it must be called after the raw table has been created.
const table: RawTable = { name: 'todo_lists', schema: {} };
await database.execute("CREATE TABLE todo_lists (...)");

for (const write of ["INSERT", "UPDATE", "DELETE"]) {
  await database.execute(
    "SELECT powersync_create_raw_table_crud_trigger(?, ?, ?)",
    [JSON.stringify(Schema.rawTableToJson(table)), `todo_lists_${write}`, write],
  );
}
Note that these triggers are only created once! It is your responsibility to drop and re-create them after altering the table.
Regular JSON-based tables include advanced options. These are also available on raw tables, and they affect the generated trigger. You can track previous values, mark a raw table as insert-only, or configure the trigger to ignore empty updates by passing an options parameter (Rust, Swift, Dart, Kotlin) or by setting these options on the object literal when defining raw tables (JavaScript).

Explicit triggers

Triggers on raw tables can also be defined explicitly instead of using powersync_create_raw_table_crud_trigger. It is your responsibility to set up and migrate these triggers along with the table:
CREATE TRIGGER todo_lists_insert
   AFTER INSERT ON todo_lists
   FOR EACH ROW
   BEGIN
      INSERT INTO powersync_crud (op, id, type, data) VALUES ('PUT', NEW.id, 'todo_lists', json_object(
         'created_by', NEW.created_by,
         'title', NEW.title,
         'content', NEW.content
      ));
   END;

CREATE TRIGGER todo_lists_update
   AFTER UPDATE ON todo_lists
   FOR EACH ROW
   BEGIN
      SELECT CASE
         WHEN (OLD.id != NEW.id)
         THEN RAISE (FAIL, 'Cannot update id')
      END;

      -- TODO: You may want to replace the json_object with a powersync_diff call of the old and new values, or
      -- use your own diff logic to avoid marking unchanged columns as updated.
      INSERT INTO powersync_crud (op, id, type, data) VALUES ('PATCH', NEW.id, 'todo_lists', json_object(
         'created_by', NEW.created_by,
         'title', NEW.title,
         'content', NEW.content
      ));
   END;

CREATE TRIGGER todo_lists_delete
   AFTER DELETE ON todo_lists
   FOR EACH ROW
   BEGIN
      INSERT INTO powersync_crud (op, id, type) VALUES ('DELETE', OLD.id, 'todo_lists');
   END;

Using foreign keys

Raw tables support advanced table constraints, including foreign keys. When enabling foreign keys, however, you need to be aware of the following:
  1. While PowerSync always applies synced data in a transaction, there is no way to control the order in which rows are applied. For this reason, foreign keys need to be declared as DEFERRABLE INITIALLY DEFERRED.
  2. When using stream priorities, you need to ensure you don't have foreign keys from high-priority rows to lower-priority data. PowerSync applies data in one transaction per priority, so these foreign keys would not work.
  3. As usual when using foreign keys, note that they need to be explicitly enabled with PRAGMA foreign_keys = ON.
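For example, a hypothetical child todos table referencing todo_lists could declare its foreign key like this:

```sql
CREATE TABLE todos (
   id TEXT NOT NULL PRIMARY KEY,
   list_id TEXT NOT NULL
      REFERENCES todo_lists (id)
      ON DELETE CASCADE
      DEFERRABLE INITIALLY DEFERRED,
   description TEXT NOT NULL
) STRICT;

-- Foreign keys must be enabled explicitly on every connection:
PRAGMA foreign_keys = ON;
```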

Local-Only Columns

Raw tables allow you to add columns that exist only on the client and are never synced to the backend. This is useful for client-specific state like user preferences, local notes, or UI flags that should persist across app restarts but have no equivalent in the backend database.
Local-only columns are not supported with PowerSync’s default JSON-based view system. Raw tables are required for this functionality.
Building on the todo_lists example above, you can add local-only columns such as is_pinned and local_notes:
CREATE TABLE IF NOT EXISTS todo_lists (
   id TEXT NOT NULL PRIMARY KEY,
   -- Synced columns
   created_by TEXT NOT NULL,
   title TEXT NOT NULL,
   content TEXT,
   -- Local-only columns (not synced)
   is_pinned INTEGER NOT NULL DEFAULT 0,
   local_notes TEXT
) STRICT;

With inferred statements and triggers

Both the inferred put and delete statements as well as triggers generated by powersync_create_raw_table_crud_trigger support local-only columns. To configure this, include a syncedColumns array on the RawTableSchema:
const table: RawTable = {
  name: 'todo_lists',
  schema: {
    syncedColumns: ['created_by', 'title', 'content'],
  },
};

With explicit statements

The standard raw table setup requires modifications to support local-only columns:

Use upsert instead of INSERT OR REPLACE

The put statement must use INSERT ... ON CONFLICT(id) DO UPDATE SET instead of INSERT OR REPLACE. INSERT OR REPLACE deletes and re-inserts the row, which resets local-only columns to their defaults on every sync update. An upsert only updates the specified synced columns, leaving local-only columns intact. Only synced columns should be referenced in the put params. Local-only columns are omitted entirely:
schema.withRawTables({
  todo_lists: {
    put: {
      sql: `INSERT INTO todo_lists (id, created_by, title, content)
            VALUES (?, ?, ?, ?)
            ON CONFLICT(id) DO UPDATE SET
              created_by = excluded.created_by,
              title = excluded.title,
              content = excluded.content`,
      params: ['Id', { Column: 'created_by' }, { Column: 'title' }, { Column: 'content' }]
    },
    delete: {
      sql: 'DELETE FROM todo_lists WHERE id = ?',
      params: ['Id']
    }
  }
});

Exclude local-only columns from triggers

The json_object() in both the INSERT and UPDATE triggers should only reference synced columns. Local-only columns must not appear in the CRUD payload sent to the backend. Additionally, the UPDATE trigger needs a WHEN clause that checks only synced columns. Without it, changes to local-only columns would fire the trigger and produce unnecessary CRUD entries that get uploaded. The WHEN clause must use IS NOT instead of != for NULL-safe comparisons. NULL != NULL evaluates to NULL in SQLite, which would cause the trigger to skip legitimate changes to nullable synced columns.
CREATE TRIGGER todo_lists_insert
   AFTER INSERT ON todo_lists
   FOR EACH ROW
   BEGIN
      INSERT INTO powersync_crud (op, id, type, data) VALUES ('PUT', NEW.id, 'todo_lists', json_object(
         'created_by', NEW.created_by,
         'title', NEW.title,
         'content', NEW.content
      ));
   END;

-- WHEN clause ensures this only fires for synced column changes.
-- Uses IS NOT instead of != for correct NULL handling.
CREATE TRIGGER todo_lists_update
   AFTER UPDATE ON todo_lists
   FOR EACH ROW
   WHEN
      OLD.created_by IS NOT NEW.created_by
      OR OLD.title IS NOT NEW.title
      OR OLD.content IS NOT NEW.content
   BEGIN
      INSERT INTO powersync_crud (op, id, type, data) VALUES ('PATCH', NEW.id, 'todo_lists', json_object(
         'created_by', NEW.created_by,
         'title', NEW.title,
         'content', NEW.content
      ));
   END;

CREATE TRIGGER todo_lists_delete
   AFTER DELETE ON todo_lists
   FOR EACH ROW
   BEGIN
      INSERT INTO powersync_crud (op, id, type) VALUES ('DELETE', OLD.id, 'todo_lists');
   END;
With this setup, local-only columns can be queried and updated using standard SQL without affecting sync:
-- Updating a local-only column does not produce a CRUD entry
UPDATE todo_lists SET is_pinned = 1 WHERE id = '...';

-- Local-only columns can be used in queries and ordering
SELECT * FROM todo_lists ORDER BY is_pinned DESC, title ASC;

Migrations

In PowerSync’s JSON-based view system, the client-side schema is applied to the schemaless data, meaning no migrations are required. Raw tables, however, are excluded from this, so it is the developer’s responsibility to manage migrations for these tables.

Adding raw tables as a new table

When you’re adding new tables to your Sync Streams (or legacy Sync Rules), clients will start to sync data on those tables - even if the tables aren’t mentioned in the client’s schema yet. So at the time you’re introducing a new raw table to your app, it’s possible that PowerSync has already synced some data for that table, which would be stored in ps_untyped. When adding regular tables, PowerSync will automatically extract rows from ps_untyped. With raw tables, that step is your responsibility. To copy data, run these statements in a transaction after creating the table:
INSERT INTO my_table (id, my_column, ...)
   SELECT id, data ->> 'my_column' FROM ps_untyped WHERE type = 'my_table';
DELETE FROM ps_untyped WHERE type = 'my_table';
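Applied to the todo_lists table from earlier, the copy could look like this:

```sql
BEGIN;
INSERT INTO todo_lists (id, created_by, title, content)
   SELECT id, data ->> 'created_by', data ->> 'title', data ->> 'content'
   FROM ps_untyped WHERE type = 'todo_lists';
DELETE FROM ps_untyped WHERE type = 'todo_lists';
COMMIT;
```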
This does not apply if you’ve been using the raw table from the beginning (and never called connect() without them) - you only need this for raw tables you already had locally. Another workaround is to clear PowerSync data when changing raw tables and opt for a full resync.

Migrating to raw tables

To migrate from PowerSync-managed tables to raw tables, first:
  1. Open the database with the new schema mentioning raw tables. PowerSync will copy data from tables previously managed by PowerSync into ps_untyped.
  2. Create raw tables.
  3. Run the INSERT INTO ... SELECT statement shown above to copy rows from ps_untyped into your raw tables.

Migrations on raw tables

For JSON-based tables, migrations are trivial: all rows are stored as complete JSON objects, and adding or removing columns only affects the views over the unchanged JSON data, making the schema a stateless structure. For raw tables, the situation is different. When adding a new column, for instance, existing rows would not have a value for it even if one may already have been synced. Suppose a new column is added with a simple migration: ALTER TABLE todo_lists ADD COLUMN priority INTEGER. This adds the new column on the client, with NULL values for each existing row. If the client updates the schema before the server and then syncs the changes, every row is effectively re-synced and receives populated values for the new column, so clients observe a consistent state after the sync. If new values were synced before the client updated its schema, however, existing rows may not receive the new column until those rows are synced again! This is why special approaches are needed when migrating synced tables.

Deleting data on migrations

One option that makes migrations safe (with obvious downsides) is to simply reset the database before migrating: await db.disconnectAndClear(soft: true) deletes materialized sync rows while keeping the downloaded data. Afterwards, you can migrate the schema in any way before reconnecting. With a soft clear, data doesn’t have to be downloaded again in most cases. This reduces the downtime in which no data is available, but a network connection is still necessary for data to become available again.

Triggering resync on migrations

An alternative to deleting data is to trigger a re-sync without clearing tables. For example:
-- We need an (optimistic) default value for existing rows
ALTER TABLE todo_lists ADD COLUMN priority INTEGER DEFAULT 1 NOT NULL;
SELECT powersync_trigger_resync(TRUE);
The optimistic default value would be overridden on the next completed sync (depending on when the user is online again). This means that the app is still usable offline after an update, but having optimistic state on the client is a caveat because PowerSync normally has stronger consistency guarantees. There may be cases where the approach of deleting data is a safer choice.

The _extra column pattern

Another option to avoid data inconsistencies in migrations is to ensure the raw table stores a full row as expected by PowerSync. To do that, you can introduce an extra column on your table designed to hold values from the backend database that a client is not yet aware of:
CREATE TABLE todo_lists (
   id TEXT NOT NULL PRIMARY KEY,
   created_by TEXT NOT NULL,
   title TEXT NOT NULL,
   content TEXT,
   _extra TEXT
) STRICT;
The _extra column is not used in the app itself, but the put statement can populate it using the Rest column source:
mySchema.withRawTables({
  // The name here doesn't have to match the name of the table in SQL. Instead, it's used to match
  // the table name from the backend source database as sent by the PowerSync Service.
  todo_lists: {
    put: {
      sql: 'INSERT OR REPLACE INTO todo_lists (id, created_by, title, content, _extra) VALUES (?, ?, ?, ?, ?)',
      params: ['Id', { Column: 'created_by' }, { Column: 'title' }, { Column: 'content' }, 'Rest']
    },
    delete: ...
  }
});
If PowerSync then syncs a row like {"created_by": "User", "title": "title", "content": "content", "tags": "Important"}, this put statement would set _extra to {"tags":"Important"}, ensuring that the entire source row can be recovered from a row in the raw table. This then allows writing migrations:
  1. Adding new columns by using json_extract(_extra, '$.newColumnName') as a default value.
  2. Removing existing columns by updating _extra = json_set(_extra, '$.droppedColumnName', droppedColumnName) before dropping the column.
Don’t forget to drop the triggers before running these statements in migrations, since these updates shouldn’t result in ps_crud writes. Re-create the triggers afterwards.
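As a concrete sketch, a migration adding a hypothetical tags column backed by values previously kept in _extra could look like this:

```sql
-- Adding a column: pull existing values out of _extra.
ALTER TABLE todo_lists ADD COLUMN tags TEXT;
UPDATE todo_lists SET
   tags = json_extract(_extra, '$.tags'),
   _extra = json_remove(_extra, '$.tags');
```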

Deleting data and raw tables

APIs that clear an entire PowerSync database, such as disconnectAndClear(), don’t affect raw tables by default. You can use the clear parameter on the RawTable constructor to set an SQL statement to run when clearing the database. Typically, something like DELETE FROM $tableName is a reasonable statement to run. clear statements are not inferred automatically and must always be set explicitly. Raw tables themselves are not managed by PowerSync and need to be dropped explicitly to delete them.
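For example, assuming clear can be set alongside the other raw-table options in the JavaScript object literal, it could be configured like this:

```javascript
mySchema.withRawTables({
  todo_lists: {
    schema: {},
    // Runs when the database is cleared, e.g. via disconnectAndClear().
    clear: 'DELETE FROM todo_lists',
  },
});
```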