This page includes scripts that help with debugging and using PostgreSQL.
Migrate UPDATE and DELETE operations for non-primary key tables
For tables that don't have primary keys, Database Migration Service supports migration of the initial snapshot and INSERT statements during the change data capture (CDC) phase.
To migrate the missing UPDATE and DELETE operations, refer to the later sections of this document.
Detect missing data between the source and Cloud SQL destination instances
1. Identify which tables don't have primary keys:
select tab.table_schema,
tab.table_name
from information_schema.tables tab
left join information_schema.table_constraints tco
on tab.table_schema = tco.table_schema
and tab.table_name = tco.table_name
and tco.constraint_type = 'PRIMARY KEY'
where tab.table_type = 'BASE TABLE'
and tab.table_schema not in ('pg_catalog', 'information_schema', 'pglogical')
and tco.constraint_name is null
order by table_schema,
table_name;
2. Before starting the migration, for all tables without primary keys, check whether there are any updates or deletes using this query:
SELECT schemaname,
relname,
n_tup_ins,
n_tup_upd,
n_tup_del
FROM pg_stat_user_tables
WHERE schemaname NOT IN
('pglogical', 'pg_catalog', 'information_schema');
Where:
n_tup_ins: number of rows inserted
n_tup_upd: number of rows updated (includes HOT updated rows)
n_tup_del: number of rows deleted
3. Save these results in a separate table or in a file; a sketch of one way to do this follows this procedure.
4. After the migration setup is complete, run the query again.
5. Compare the results with the results from step 3.
If there are differences in the values of n_tup_upd or n_tup_del on the source during the migration, there might have been some updates or deletes on the source.
Note: You can manually reset pg_stat_user_tables using the pg_stat_reset() function. If this function is run on the source, the statistics might not reflect accurate information.
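For example, here is a minimal sketch of steps 3 through 5, assuming you save the baseline statistics in a scratch table with the hypothetical name tuple_stats_before:

-- Step 3: save a baseline of the tuple statistics before the migration starts.
CREATE TABLE tuple_stats_before AS
SELECT schemaname, relname, n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_user_tables
WHERE schemaname NOT IN ('pglogical', 'pg_catalog', 'information_schema');

-- Steps 4 and 5: after the migration setup is complete, compare the current
-- statistics with the baseline to find tables with missed updates or deletes.
SELECT cur.schemaname,
       cur.relname,
       cur.n_tup_upd - base.n_tup_upd AS updates_during_migration,
       cur.n_tup_del - base.n_tup_del AS deletes_during_migration
FROM pg_stat_user_tables cur
JOIN tuple_stats_before base
  ON cur.schemaname = base.schemaname
 AND cur.relname = base.relname
WHERE cur.n_tup_upd <> base.n_tup_upd
   OR cur.n_tup_del <> base.n_tup_del;

Any table returned by the second query is a candidate for the manual migration options described in the next section.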
Manually migrate data from the source to the Cloud SQL destination instances
If you detect discrepancies between the source and the Cloud SQL destination instance, you can migrate the data using one of the following options:
Option 1: Manually compare the data between the source and the Cloud SQL destination, and run the appropriate SQL queries to update only the data that differs between the source and the replica.
Option 2: Run pg_dump and pg_restore for the tables without primary keys. For more information, see Export data from an on-premises PostgreSQL server using pg_dump.
Option 3: Use the PostgreSQL COPY command to migrate your data; a sketch follows this list. For more information, see the PostgreSQL documentation. Note that this step requires that the replica VM can connect to the source VM.
For Option 2 and Option 3, if any data has already been migrated to the replica, those records might require cleanup before you run the pg_restore or COPY commands. Make changes accordingly.
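For example, here is a minimal sketch of Option 3 using psql's \copy meta-command, assuming a table with the hypothetical name my_table that has no primary key and a hypothetical export file path. Run the export against the source and the remaining commands against the Cloud SQL destination:

-- On the source instance (psql): export the table contents to a local CSV file
-- (the path /tmp/my_table.csv is only an example).
\copy my_table TO '/tmp/my_table.csv' WITH (FORMAT csv)

-- On the Cloud SQL destination instance (psql): remove rows that were already
-- migrated, if any, before re-importing the data.
TRUNCATE my_table;

-- Import the exported data into the destination table.
\copy my_table FROM '/tmp/my_table.csv' WITH (FORMAT csv)

Because \copy streams the data through the psql client, this approach also works when the destination instance has no direct filesystem access to the source.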
[[["Fácil de entender","easyToUnderstand","thumb-up"],["Meu problema foi resolvido","solvedMyProblem","thumb-up"],["Outro","otherUp","thumb-up"]],[["Difícil de entender","hardToUnderstand","thumb-down"],["Informações incorretas ou exemplo de código","incorrectInformationOrSampleCode","thumb-down"],["Não contém as informações/amostras de que eu preciso","missingTheInformationSamplesINeed","thumb-down"],["Problema na tradução","translationIssue","thumb-down"],["Outro","otherDown","thumb-down"]],["Última atualização 2025-08-21 UTC."],[[["\u003cp\u003eThis page provides debugging scripts and guidance for using PostgreSQL within the Database Migration Service.\u003c/p\u003e\n"],["\u003cp\u003eDatabase Migration Service supports the migration of initial snapshots and \u003ccode\u003eINSERT\u003c/code\u003e statements for tables without primary keys, but \u003ccode\u003eUPDATE\u003c/code\u003e and \u003ccode\u003eDELETE\u003c/code\u003e operations may need manual intervention.\u003c/p\u003e\n"],["\u003cp\u003eYou can identify tables lacking primary keys and check for any updates or deletes on the source during migration using the provided SQL queries against the \u003ccode\u003einformation_schema\u003c/code\u003e and \u003ccode\u003epg_stat_user_tables\u003c/code\u003e tables.\u003c/p\u003e\n"],["\u003cp\u003eIf data discrepancies are found between the source and the Cloud SQL destination instance, you can manually resolve them by comparing and updating data, or by utilizing \u003ccode\u003epg_dump\u003c/code\u003e/\u003ccode\u003epg_restore\u003c/code\u003e or the \u003ccode\u003eCOPY\u003c/code\u003e command for the affected tables.\u003c/p\u003e\n"],["\u003cp\u003eIt is important to potentially clean the replica before using \u003ccode\u003epg_restore\u003c/code\u003e or \u003ccode\u003eCOPY\u003c/code\u003e commands if there is data that was previously migrated.\u003c/p\u003e\n"]]],[],null,["# Debugging and other tools\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n[MySQL](/database-migration/docs/mysql/debugging-tools \"View this page for the MySQL version of Database Migration Service.\") \\| PostgreSQL \\| [PostgreSQL to AlloyDB](/database-migration/docs/postgresql-to-alloydb/debugging-tools \"View this page for the PostgreSQL to AlloyDB version of Database Migration Service.\")\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\nOverview\n--------\n\nThis page includes scripts that help with debugging and using PostgreSQL.\n\nMigrate UPDATE and DELETE operations for non-primary key tables\n---------------------------------------------------------------\n\nFor tables that don't have primary keys, Database Migration Service supports migration of the initial snapshot and `INSERT` statements during the change data capture (CDC) phase.\n\nTo update the missing `UPDATE` and `DELETE` processes, refer to the later sections of this document.\n\n### Detect missing data between the source and Cloud SQL destination instances\n\n1. Identify which tables don't have primary keys: \n\n ```\n select tab.table_schema,\n tab.table_name\n from information_schema.tables tab\n left join information_schema.table_constraints tco \n on tab.table_schema = tco.table_schema\n and tab.table_name = tco.table_name \n and tco.constraint_type = 'PRIMARY KEY'\n where tab.table_type = 'BASE TABLE'\n and tab.table_schema not in ('pg_catalog', 'information_schema', 'pglogical')\n and tco.constraint_name is null\n order by table_schema,\n table_name;\n ```\n2. 
Before starting the migration, for all tables without primary keys, check if there are any updates or deletes using this query: \n\n ```\n SELECT schemaname,\n relname,\n n_tup_ins,\n n_tup_upd,\n n_tup_del\n FROM pg_stat_user_tables\n WHERE schemaname NOT IN\n ('pglogical', 'pg_catalog', 'information_schema');\n ```\n Where:\n - `n_tup_ins`: number of rows inserted\n - `n_tup_upd`: number of rows updated (includes HOT updated rows)\n - `n_tup_del`: number of rows deleted\n3. Save these results either in a separate table or in a file.\n4. After the migration setup is complete, run the query again.\n5. Compare the results with the results from step 3.\n\nIf there are differences in values of `n_tup_upd` or `n_tup_del` on the source during migration, there might be some updates or deletes on the source.\n| **Note:** You can manually reset `pg_stat_user_tables` using the `pg_stat_reset()` function. If this function is run on the source, the statistics might not reflect accurate information.\n\n### Manually migrate data from the source to the Cloud SQL destination instances\n\nIf you detect there are some discrepancies between the source and Cloud SQL destination instance, you can migrate the data using one of the options:\n\n- **Option 1**: Manually compare the data between the source and Cloud SQL destination and execute appropriate SQL queries to update only the data that is different between the source and the replica.\n\n- **Option 2** : Run `pg_dump` and `pg_restore` for the tables without primary keys. See [Export data from an on-premises PostgreSQL server using pg_dump](/sql/docs/postgres/import-export/import-export-dmp#external-server) for more information.\n\n- **Option 3** : Use Postgres `COPY` command to migrate your data. See [PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-copy.html) for more information. Note that this step requires the replica VM to be able to connect to the source VM.\n\n| For **Option 2** and **Option 3** , if there is any data that is already migrated on to the replica, those records might require cleanup before running the `pg_restore` or `COPY` commands. Please make changes accordingly."]]