F.2. aqo — cost-based query optimization #
The aqo module is a Postgres Pro Standard extension for cost-based query optimization. Using a machine learning method, specifically a modification of the k-nearest neighbors (k-NN) algorithm, aqo improves cardinality estimation, which can lead to better execution plans and, consequently, faster query execution.
F.2.1. Description #
The aqo module can collect statistics on all the executed queries, excluding the queries that access system relations. The collected statistics is classified by query class: queries that differ only in their constants belong to the same class. For each query class, aqo stores the cardinality estimation quality, planning time, execution time, and execution statistics for machine learning. Based on this data, aqo builds a new query plan and uses it for the next query of the same class. aqo test runs have shown significant performance improvements for complex queries.
aqo saves all the learning data (aqo_data), queries (aqo_query_texts), query settings (aqo_queries), and query execution statistics (aqo_query_stat) to files. When aqo starts, it loads this data to shared memory. You can access aqo data through functions and views.
When aqo.advanced is on, and aqo is run in the intelligent or learn mode, a unique hash value, which is computed from the query tree, is assigned to each query class to identify it and separate the collected statistics. If aqo.advanced is off, the statistics for all untracked query classes is stored in a common query class with hash 0.
Each query class has an associated separate space, called feature space, in which the statistics for this query class is collected. This feature space is identified by a hash value (fs), called a basic hash, which is the same for queries that only differ in table names, so the learning data is aggregated for such queries. Each feature space has associated feature subspaces, where the information about selectivity and cardinality for each query plan node is collected. Each subspace is also identified by a hash value (fss).
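For illustration, a minimal sketch of inspecting the subspaces collected for one feature space through the aqo_data view (described in Section F.2.4.2.3); the fs value below is a hypothetical hash:
SELECT fs, fss, nfeatures, oids
FROM aqo_data
WHERE fs = 402936317627943257;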
Query-specific optimization settings are stored in the aqo_queries view.
F.2.1.1. Limitations #
aqo currently has the following limitations:
Query optimization using the aqo module is not supported on standby.
Query optimization with aqo does not work with queries that involve only temporary objects.
Query optimization with aqo does not work for queries that contain IMMUTABLE functions.
aqo does not collect statistics on replicas because replicas are read-only. However, aqo may use query execution statistics from the primary if the replica is physical.
The learn and intelligent modes are not supposed to work for a whole cluster with queries that have a dynamically generated structure, because these modes store all query class IDs, which are different for all queries in such a workload. Dynamically generated constants are supported, however.
F.2.2. Installation and Setup #
The aqo extension is included into Postgres Pro Standard. Once you have Postgres Pro Standard installed, complete the following steps to enable aqo:
Add aqo to the shared_preload_libraries parameter in the postgresql.conf file:
shared_preload_libraries = 'aqo'
The aqo library must be preloaded at server startup, since adaptive query optimization needs to be enabled per cluster.
Create the aqo extension using the following query:
CREATE EXTENSION aqo;
Once the extension is created, you can start optimizing queries.
To disable aqo in the current database, run:
DROP EXTENSION aqo;
To disable aqo at the cluster level, run:
ALTER SYSTEM SET aqo.enable = off;
SELECT pg_reload_conf();
To remove all the aqo data, including the collected statistics, call aqo_reset(). To remove the data from the current database, run:
SELECT aqo_reset();
To remove all the data from the aqo storage, run:
SELECT aqo_reset(NULL);
If you do not want aqo to be loaded at the server restart, remove the line
shared_preload_libraries = 'aqo'
from the postgresql.conf file.
Important
For physical replication to smoothly transfer aqo data from the primary to a replica, ensure that the same aqo version is installed on both. You can have different aqo versions installed, but in this case, set aqo.wal_rw to off on both servers and expect no replication of aqo data.
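For example, a possible postgresql.conf fragment for a setup where aqo data should not be replicated (the same lines would go on both the primary and the replica; the rest of the configuration is assumed to stay unchanged):
shared_preload_libraries = 'aqo'
aqo.wal_rw = off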
F.2.2.1. Configuration #
aqo behavior mainly depends on the aqo.enable, aqo.mode, and aqo.advanced configuration parameters. Their default values allow you to start learning in the basic aqo mode simply by setting aqo.enable to on.
To dynamically change any of these parameters, for example, mode, in your current session, run the following command:
SET aqo.mode = 'mode';
where mode is the name of the operation mode to use.
F.2.3. Usage #
F.2.3.1. Using aqo in a Basic Mode #
By default, aqo.advanced is off. This selects the recommended basic mode, where statistics is collected for plan nodes (identified by fss), and the collected machine learning data is used to correct the cardinality error for all queries whose plan contains a given plan node. Once you set aqo.enable to on, aqo starts learning. Execute the queries that you need to optimize several times until the plan is good enough, then change aqo.mode to frozen. To apply the machine learning data at the level of the server instance, run the following commands:
ALTER SYSTEM SET aqo.mode = frozen;
ALTER SYSTEM SET aqo.enable = on;
SELECT pg_reload_conf();
The machine learning data will be applied not only to the queries on which aqo learned, but to all the queries whose plan contains the nodes for which the statistics was collected. To keep the machine learning data from affecting other queries, set aqo.advanced to on and collect statistics for individual queries. See Section F.2.3.2 for details.
F.2.3.2. Choosing the Operation Mode for Advanced Query Optimization #
If you often run queries of the same class, for example, if your application limits the number of possible query classes, you can enable aqo.advanced and use the intelligent mode to improve planning for these queries. In this mode, aqo analyzes each query execution and stores statistics. Statistics on queries of different classes is stored separately. If performance is not improved after 50 iterations, the aqo extension falls back to the default query planner.
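For example, a minimal sketch of enabling this combination in the current session:
SET aqo.advanced = on;
SET aqo.mode = 'intelligent';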
Note
You can view the current query plan using the standard Postgres Pro EXPLAIN command with the ANALYZE option. For details, see Section 14.1.
Since the intelligent mode tries to learn separately for different query classes, aqo may fail to provide performance improvements if the workload contains queries of many different classes or if the query classes in the workload are constantly changing. For such workloads, switch the aqo extension to the controlled mode, or try turning off aqo.advanced.
When aqo.advanced is on, in the controlled mode, aqo does not collect statistics for new query classes, so they will not be optimized; for known query classes, however, aqo continues collecting statistics and using optimized planning algorithms. So use the controlled mode only after aqo has learned in the learn or intelligent mode.
After aqo has already learned, controlled is the mode recommended for production use. To make aqo run in this mode on your whole production cluster, run
ALTER SYSTEM SET aqo.mode = 'controlled';
SELECT pg_reload_conf();
When aqo.advanced is on, the learn mode collects statistics from all the executed queries and updates the data for query classes. This mode is similar to the intelligent mode, except that it does not provide intelligent tuning. Using this mode permanently for a whole cluster is not recommended because it tries aqo optimizations for every query class, even for those that do not need it, which may lead to unnecessary computational overhead and cause performance degradation.
Use the learn mode with aqo.advanced turned off to handle workloads with dynamically generated query structures. Overall performance improvement is not guaranteed. As this mode lacks intelligent tuning, the performance for some queries may even decrease, but this mode is good for a dynamic workload and consumes less memory than the intelligent mode.
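For example, a possible session-level setup for such a dynamic workload:
SET aqo.advanced = off;
SET aqo.mode = 'learn';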
When aqo.advanced is on, in the auto mode, aqo behaves as in the learn mode, but prevents memory overflow by implementing the least recently used (LRU) cache mechanism. This mechanism ensures that the aqo.dsm_size_max and aqo.fss_max_items limits are not exceeded when incoming queries are saved. When the aqo_query_texts_update function is called in this mode, aqo does not save query texts and does not free the dynamic shared memory used.
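For example, a sketch of switching to the auto mode and checking how much memory aqo currently uses (aqo_memory_usage is described in Section F.2.4.3.2):
SET aqo.mode = 'auto';
SELECT * FROM aqo_memory_usage();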
For data that may change significantly between queries, enabling the aqo.delta_rows parameter may be useful. With this setting, aqo makes predictions based on the planner's row count estimates. For example, if data is deleted from a table, the planner estimates fewer rows than before. aqo can then use this updated information to predict a lower row count without requiring additional learning steps.
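For example, assuming the parameter may be changed in the current session:
SET aqo.delta_rows = on;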
If you want to reduce the impact of aqo on query planning and execution, you can use it in the frozen mode. In this mode, aqo only reads the collected statistics, but does not collect any new data.
F.2.3.3. Fine-Tuning aqo #
You must have superuser rights to access aqo views and configure advanced query settings.
You can view all the processed query classes and their corresponding hash values in the aqo_query_texts view:
SELECT * FROM aqo_query_texts;
To find out the class, that is, the hash, of a query and the current aqo mode, enable the aqo.show_hash and aqo.show_details configuration parameters and execute the query. The output will contain something like this:
...
Planning Time: 23.538 ms
...
Execution Time: 249813.875 ms
...
Using aqo: true
AQO mode: LEARN
AQO advanced: OFF
...
Query hash: -2439501042637610315
Each query class has its own optimization settings. These settings are shown in the aqo_queries view:
SELECT * FROM aqo_queries;
You can manually change these settings to adjust the optimization for a particular query class. For example:
-- Add a new query class to the aqo_queries view:
SET aqo.enable='on';
SET aqo.advanced='on';
SET aqo.mode='intelligent';
SELECT * FROM a, b WHERE a.id=b.id;
SET aqo.mode='controlled';

-- Disable auto_tuning, enable both learn_aqo and use_aqo
-- for this query class:
SELECT count(*) FROM aqo_queries,
    LATERAL aqo_queries_update(fs, NULL, true, true, false)
WHERE fs = (SELECT fs FROM aqo_query_texts
            WHERE query_text LIKE 'SELECT * FROM a, b WHERE a.id=b.id;');

-- Run EXPLAIN ANALYZE while the plan changes:
EXPLAIN ANALYZE SELECT * FROM a, b WHERE a.id=b.id;
EXPLAIN ANALYZE SELECT * FROM a, b WHERE a.id=b.id;

-- Disable learning to stop statistics collection
-- and use the optimized plan:
SELECT count(*) FROM aqo_queries,
    LATERAL aqo_queries_update(fs, NULL, false, true, false)
WHERE fs = (SELECT fs FROM aqo_query_texts
            WHERE query_text LIKE 'SELECT * FROM a, b WHERE a.id=b.id;');
To stop intelligent tuning for a particular query class, disable the auto_tuning setting:
SELECT count(*) FROM aqo_queries, LATERAL aqo_queries_update(fs, NULL, NULL, NULL, false) WHERE fs = 'hash';
where hash is the hash value for this query class. As a result, aqo disables automatic change of the learn_aqo and use_aqo settings.
To disable further learning for a particular query class, use the following command:
SELECT count(*) FROM aqo_queries, LATERAL aqo_queries_update(fs, NULL, false, NULL, false) WHERE fs = 'hash';
where hash is the hash value for this query class.
To fully disable aqo for all queries and use the default Postgres Pro query planner, run:
SELECT count(*) FROM aqo_queries, LATERAL aqo_disable_class(fs, NULL) WHERE fs <> 0;
To disable aqo for all queries temporarily in the current session or for the whole cluster, but not remove or change the collected statistics and settings, turn off aqo.enable:
SET aqo.enable = 'off';
or
ALTER SYSTEM SET aqo.enable = 'off';
F.2.3.4. Sandbox Mode #
You can experiment with aqo without touching its main knowledge base. To do this, execute the command
SET aqo.sandbox = ON;
This turns on the sandbox mode, which means that aqo will work in an isolated environment. However, if you turn on aqo.sandbox in different SQL sessions, they will use the same data.
Data obtained in the sandbox mode does not get replicated, but the sandbox mode can be used on a standby. Moreover, when replication is turned on (that is, aqo.wal_rw is true), the only way to train aqo on a standby is to turn on the sandbox mode. Without the sandbox mode, aqo works on the standby as if aqo.mode = FROZEN, that is, it can use the existing knowledge base, but not update or extend it.
F.2.4. Reference #
F.2.4.1. Configuration Parameters #
aqo.enable (boolean) #
Defines the state of aqo. If set to off, aqo does not work except when aqo.force_collect_stat = on.
Default: off.

aqo.mode (text) #
Sets the aqo operation mode. Defines how aqo handles new queries. Possible values:
intelligent — saves new queries with auto_tuning enabled. See the description of the aqo_queries view for more details. May disable aqo for a query in the case of average performance reduction. Only works in this way if aqo.advanced = on; otherwise, this mode works exactly like learn.
learn — collects statistics on all the executed queries, learns and makes predictions based on these statistics.
controlled — only learns and makes predictions for known queries.
frozen — makes predictions for known queries, but does not learn from any queries.
auto — behaves like learn but prevents memory overflow by implementing the LRU cache mechanism.
Default: learn.

aqo.advanced (boolean) #
Enables the advanced learning routine, which saves separate learning statistics for each query class. Also allows fine-tuning the use_aqo and learn_aqo settings in the aqo_queries view. Fine-tuned query settings in the aqo_queries view continue to work if aqo.advanced is disabled.
Default: off.

aqo.force_collect_stat (boolean) #
Collects statistics on query executions in all aqo modes, even if aqo.enable is off.
Default: off.

aqo.show_details (boolean) #
Adds some details to the EXPLAIN output of a query, such as the prediction or feature-subspace hash, and shows some additional aqo-specific on-screen information.
Default: on.

aqo.show_hash (boolean) #
Shows a hash value that uniquely identifies the class of queries or class of plan nodes. aqo uses the native query ID to identify a query class for consistency with other extensions, such as pg_stat_statements. So, the query ID can be taken from the Query hash field in the EXPLAIN ANALYZE output of a query.
Default: on.

aqo.join_threshold (integer) #
Ignores queries that contain a smaller number of joins, which means that statistics for such queries is not collected.
Default: 0 (no queries are ignored).

aqo.learn_statement_timeout (boolean) #
Learns on a plan interrupted by the statement timeout.
Default: off.

aqo.statement_timeout (integer) #
Defines the initial value of the smart statement timeout, in milliseconds, which is needed to limit the execution time when manually training aqo on special queries with a poor cardinality forecast. aqo can dynamically change the value of the smart statement timeout during this training. When the cardinality estimation error on nodes exceeds 0.1, the value of aqo.statement_timeout is automatically incremented exponentially, but remains not greater than statement_timeout.
Default: 0.

aqo.wide_search (boolean) #
Enables searching neighbors with the same feature subspace among different query classes. Only has an effect if aqo.advanced = on.
Default: off.

aqo.min_neighbors_for_predicting (integer) #
Defines how many samples collected in previous executions of the query will be used to predict the cardinality next time. If there are fewer of them, aqo will not make any prediction. Too large a value may affect performance, while too small a value may reduce the prediction quality.
Default: 3.

aqo.predict_with_few_neighbors (boolean) #
Enables aqo to make predictions with fewer neighbors than specified by aqo.min_neighbors_for_predicting. When set to off, aqo learns, but does not make predictions until the execution count for the query with different constants reaches 3 (the default for aqo.min_neighbors_for_predicting).
Default: on.

aqo.fs_max_items (integer) #
Defines the maximum number of feature spaces that aqo can operate with. When this number is exceeded, learning on new query classes no longer occurs, and accordingly they do not appear in the views. This parameter can only be set at server start.
Default: 10000.

aqo.fss_max_items (integer) #
Defines the maximum number of feature subspaces that aqo can operate with. When this number is exceeded, the selectivity and cardinality for new query plan nodes are no longer collected, and accordingly new feature subspaces do not appear in the aqo_data view. This parameter can only be set at server start.
Default: 100000.

aqo.querytext_max_size (integer) #
Defines the maximum size of the query text stored in the aqo_query_texts view. This parameter can only be set at server start.
Default: 1000.

aqo.dsm_size_max (integer) #
Defines the maximum size of dynamic shared memory, in MB, that aqo can allocate to store learning data and query texts. If set to a number that is less than the size of the saved aqo data, the server will not start. This parameter can only be set at server start.
Default: 100.

aqo.wal_rw (boolean) #
Enables physical replication and allows complete aqo data recovery after a failure. When set to off on the primary, no data is transferred from it to a replica. When set to off on a replica, any data transferred from the primary is ignored. With this value, when the server fails, data can only be restored as of the last checkpoint. This parameter can only be set at server start.
Default: on.

aqo.sandbox (boolean) #
Enables reserving a separate memory area in shared memory to be used by a primary or standby node, which allows collecting and using statistics with the data in this memory area. If enabled on the primary, the extension uses the separate shared memory area that is not replicated to the standby. Changing the value of this parameter resets the aqo cache. Only superusers can change this setting.
Default: off.

aqo.delta_rows (boolean) #
Enables a learning mechanism where aqo adjusts the planner's row count estimates with its own predictions. If disabled, aqo uses its own predictions directly.
Default: off.
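As an illustration, a hypothetical postgresql.conf fragment setting some of the parameters that can only be changed at server start (the values shown are the defaults):
aqo.fs_max_items = 10000
aqo.fss_max_items = 100000
aqo.querytext_max_size = 1000
aqo.dsm_size_max = 100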
F.2.4.2. Views #
F.2.4.2.1. aqo_query_texts #
The aqo_query_texts view classifies all the query classes processed by aqo. For each query class, the view shows the text of the first analyzed query of this class.
Table F.1. aqo_query_texts View
| Column Name | Description |
|---|---|
| queryid | The unique identifier of the query. |
| dbid | The identifier of the database. |
| fs | The identifier of the feature space. |
| query_text | Text of the first analyzed query of the given class. The query text length is limited by aqo.querytext_max_size. |
F.2.4.2.2. aqo_queries #
The aqo_queries view shows optimization settings for different query classes. A query executed in two different databases is stored twice, even though its fs is the same.
Table F.2. aqo_queries View
| Setting | Description |
|---|---|
| fs | The identifier of the feature space. |
| dbid | The identifier of the database in which the query was executed. |
| learn_aqo | Shows whether statistics collection for this query class is enabled. |
| use_aqo | Shows whether the aqo cardinality prediction for the next execution of this query class is enabled. |
| auto_tuning | Shows whether aqo can dynamically change the learn_aqo and use_aqo settings for this query class. Intelligent tuning only takes effect in the intelligent mode; for queries with auto_tuning enabled, aqo may disable optimization in the case of average performance reduction. |
| smart_timeout | The value of the smart statement timeout for this query class. The initial value of the smart statement timeout for any query is defined by the aqo.statement_timeout configuration parameter. |
| count_increase_timeout | Shows how many times the smart statement timeout was increased for this query class. |
F.2.4.2.3. aqo_data #
The aqo_data view shows machine learning data for cardinality estimation refinement. The number of rows is limited by aqo.fss_max_items. To discard all the collected statistics for a particular query class, you can delete all rows from aqo_data with the corresponding fs.
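For example, one possible sketch of discarding the data for a single query class through the aqo_data_delete function (described in Section F.2.4.3.1; the fs value below is a hypothetical hash):
SELECT count(*)
FROM aqo_data, LATERAL aqo_data_delete(fs, fss, dbid, delta_rows)
WHERE fs = 402936317627943257;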
Table F.3. aqo_data View
| Data | Description |
|---|---|
| fs | The identifier (hash) of the feature space. |
| fss | The identifier (hash) of the feature subspace. |
| dbid | The identifier of the database. |
| delta_rows | Shows whether aqo makes predictions based on the planner's row count estimates (true) or not (false). |
| nfeatures | Feature-subspace size for the query plan node. |
| features | Logarithms of the selectivities on which the cardinality prediction is based. |
| targets | Logarithm of the cardinality for the query plan node. |
| reliability | Confidence level of the learning statistics. |
| oids | List of IDs of tables that were involved in the prediction for this node. |
| tmpoids | List of IDs of temporary tables that were involved in the prediction for this node. |
F.2.4.2.4. aqo_query_stat #
The aqo_query_stat view shows statistics on query execution, by query class. aqo uses this data when auto_tuning is enabled for a particular query class.
Table F.4. aqo_query_stat View
| Data | Description |
|---|---|
| fs | The identifier of the feature space. |
| dbid | The identifier of the database. |
| execution_time_with_aqo | Array of execution times for queries run with aqo enabled. |
| execution_time_without_aqo | Array of execution times for queries run with aqo disabled. |
| planning_time_with_aqo | Array of planning times for queries run with aqo enabled. |
| planning_time_without_aqo | Array of planning times for queries run with aqo disabled. |
| cardinality_error_with_aqo | Array of cardinality estimation errors in the selected query plans with aqo enabled. |
| cardinality_error_without_aqo | Array of cardinality estimation errors in the selected query plans with aqo disabled. |
| executions_with_aqo | Number of queries run with aqo enabled. |
| executions_without_aqo | Number of queries run with aqo disabled. |
F.2.4.3. Functions #
aqo adds several functions to the Postgres Pro catalog.
F.2.4.3.1. Storage Management Functions #
Important
Functions aqo_queries_update, aqo_query_texts_update, aqo_query_stat_update, aqo_data_update and aqo_data_delete modify data files underlying aqo views. Therefore, call these functions only if you understand the logic of adaptive query optimization.
aqo_cleanup() → setof integer
Removes data related to query classes that are linked (possibly partially) with removed relations. Returns the number of removed feature spaces (classes) and feature subspaces. Insensitive to removal of other objects.

aqo_enable_class(fs bigint, dbid oid) → void
Sets learn_aqo, use_aqo and auto_tuning (only in the intelligent mode) to true for the query class with the specified fs and dbid. You can set dbid to NULL instead of the ID of the current database.

aqo_disable_class(fs bigint, dbid oid) → void
Sets learn_aqo, use_aqo and auto_tuning to false for the query class with the specified fs and dbid. You can set dbid to NULL instead of the ID of the current database.

aqo_drop_class(fs bigint, dbid oid) → integer
Removes all data related to the specified query class and database from the aqo storage. You can set dbid to NULL instead of the ID of the current database. Returns the number of records removed from the aqo storage.

aqo_reset(dbid oid) → bigint
Removes records from the specified database: machine learning data, query texts, statistics and query class preferences. If dbid is omitted, removes the data from the current database. If dbid is NULL, removes all records from the aqo storage. Returns the number of records removed.

aqo_queries_update(fs bigint, dbid oid, learn_aqo boolean, use_aqo boolean, auto_tuning boolean) → boolean
Updates or inserts a record in a data file underlying the aqo_queries view for the specified fs and dbid. You can set dbid to NULL instead of the ID of the current database. NULL values for the parameters being set mean leave them as is. Note that records with a zero value of fs or dbid cannot be updated. Returns false in case of error, true otherwise.

aqo_query_texts_update(fs bigint, dbid oid, query_text text) → boolean
Updates or inserts a record in a data file underlying the aqo_query_texts view for the specified fs and dbid. You can set dbid to NULL instead of the ID of the current database. Note that records with a zero value of fs or dbid cannot be updated. Returns false in case of error, true otherwise.

aqo_query_stat_update(fs bigint, dbid oid, execution_time_with_aqo double precision[], execution_time_without_aqo double precision[], planning_time_with_aqo double precision[], planning_time_without_aqo double precision[], cardinality_error_with_aqo double precision[], cardinality_error_without_aqo double precision[], executions_with_aqo bigint[], executions_without_aqo bigint[]) → boolean
Updates or inserts a record in a data file underlying the aqo_query_stat view for the specified fs and dbid. You cannot update a record for a common feature space, that is, for fs or dbid equal to zero. You can set dbid to NULL instead of the ID of the current database. Returns false in case of error, true otherwise.

aqo_data_update(fs bigint, fss integer, dbid oid, delta_rows boolean, nfeatures integer, features double precision[][], targets double precision[], reliability double precision[], oids oid[], tmpoids oid[]) → boolean
Updates or inserts a record in a data file underlying the aqo_data view for the specified fs, fss and dbid. You can set dbid to NULL instead of the ID of the current database. If you set delta_rows to NULL, the value of aqo.delta_rows is used. Returns false in case of error, true otherwise.

aqo_data_delete(fs bigint, fss integer, dbid oid, delta_rows boolean) → boolean
Removes a record from a data file underlying the aqo_data view for the specified fs, fss and dbid. You can set dbid to NULL instead of the ID of the current database. If you omit delta_rows or set it to NULL, the value of aqo.delta_rows is used. Returns false in case of error, true otherwise.
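For example, a typical maintenance call to drop learning data that refers to removed relations:
SELECT * FROM aqo_cleanup();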
F.2.4.3.2. Memory Management Functions #
aqo_memory_usage() → setof record
Shows allocated and used sizes of aqo memory contexts and hash tables. Returns a table:
name: short description of the memory context or hash table
allocated_size: total size of the allocated memory
used_size: size of the currently used memory
F.2.4.3.3. Functions for Analytics #
aqo_cardinality_error(controlled boolean) → setof record
Shows the cardinality error for the last execution of queries. If controlled is true, shows queries executed with aqo enabled. If controlled is false, shows queries that were executed with aqo disabled, but that have collected aqo statistics. Returns a table:
num: sequential number
dbid: the identifier of the database
fs: the identifier of the feature space; can be zero or the basic hash
error: aqo error calculated on query plan nodes
nexecs: number of executions of queries associated with this fs

aqo_execution_time(controlled boolean) → setof record
Shows the execution time for queries. If controlled is true, shows the execution time of the last execution with aqo enabled. If controlled is false, returns the average execution time for all logged executions with aqo disabled. Execution time without aqo can be collected when aqo.mode = intelligent or aqo.force_collect_stat = on. Returns a table:
num: sequential number
dbid: the identifier of the database
fs: the identifier of the feature space; can be zero or the basic hash
exec_time: if controlled = true, last query execution time with aqo; otherwise, average execution time for all executions without aqo
nexecs: number of executions of queries associated with this fs
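For example, two sketches of reviewing the collected statistics with these functions:
-- Query classes with the largest cardinality error in their last aqo-enabled execution:
SELECT * FROM aqo_cardinality_error(true) ORDER BY error DESC;
-- Average execution time of query classes logged with aqo disabled:
SELECT * FROM aqo_execution_time(false);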
F.2.5. Examples #
Example F.1. Learning on a Query (Basic Mode)
Consider optimization of a query using aqo.
When the query is executed for the first time, it is missing from the tables underlying the aqo views. So aqo has no data to make a prediction for any plan node, and “AQO not used” lines appear in the EXPLAIN output:
demo=# EXPLAIN (ANALYZE, SUMMARY OFF, TIMING OFF)
SELECT bp.*
FROM ticket_flights tf
JOIN flights f ON f.flight_id = tf.flight_id
JOIN boarding_passes bp ON bp.ticket_no = tf.ticket_no AND bp.flight_id = tf.flight_id
WHERE f.scheduled_departure > '2017-08-15 15:00:00+00';
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
Hash Join (cost=63796.33..140286.73 rows=477728 width=25) (actual rows=2215 loops=1)
AQO not used, fss=8598194613120045129
Hash Cond: ((bp.flight_id = f.flight_id) AND (bp.ticket_no = tf.ticket_no))
-> Seq Scan on boarding_passes bp (cost=0.00..32894.95 rows=1894295 width=25) (actual rows=1894295 loops=1)
AQO not used, fss=1362775811343989307
-> Hash (cost=51379.43..51379.43 rows=595260 width=22) (actual rows=468255 loops=1)
Buckets: 131072 Batches: 8 Memory Usage: 4223kB
-> Hash Join (cost=1859.80..51379.43 rows=595260 width=22) (actual rows=468255 loops=1)
AQO not used, fss=-7651474063207585780
Hash Cond: (tf.flight_id = f.flight_id)
-> Seq Scan on ticket_flights tf (cost=0.00..43323.35 rows=2360335 width=18) (actual rows=2360335 loops=1)
AQO not used, fss=-6410966714754547713
-> Hash (cost=1652.80..1652.80 rows=16560 width=4) (actual rows=16340 loops=1)
Buckets: 32768 Batches: 1 Memory Usage: 831kB
-> Seq Scan on flights f (cost=0.00..1652.80 rows=16560 width=4) (actual rows=16340 loops=1)
AQO not used, fss=-1289471166524579716
Filter: (scheduled_departure > '2017-08-15 22:00:00+07'::timestamp with time zone)
Rows Removed by Filter: 49324
Using aqo: true
AQO mode: LEARN
AQO advanced: OFF
Query hash: 402936317627943257
JOINS: 2
(23 rows)
If there is no information on a certain node in the aqo_data view, aqo adds the appropriate record there for future learning and predictions, except for nodes with fss=0 in the EXPLAIN output. Since each value in the features and targets columns of the aqo_data view is a natural logarithm, raise e to this power to get the actual value. For example, exp(7.703007682479236) is approximately 2215, the actual row count of the topmost hash join node:
demo=# select * from aqo_data;
fs | fss | dbid | delta_rows | nfeatures | features | targets | reliability | oids | tmpoids
----+----------------------+-------+------------+-----------+---------------------------------------------+----------------------+-------------+---------------------+-------
0 | 1362775811343989307 | 16429 | f | 0 | | {14.454357295615447} | {1} | {16452} |
0 | -6410966714754547713 | 16429 | f | 0 | | {14.674314116080508} | {1} | {16479} |
0 | -1289471166524579716 | 16429 | f | 1 | {{-1.3775704575284085}} | {9.701371368413994} | {1} | {16458} |
0 | 8598194613120045129 | 16429 | f | 2 | {{-13.492416828684513,-1.3775704575284085}} | {7.703007682479236} | {1} | {16479,16458,16452} |
0 | -7651474063207585780 | 16429 | f | 2 | {{-11.092306109090387,-1.3775704575284085}} | {13.056768298305919} | {1} | {16479,16458} |
(5 rows)
When the query is executed for the second time, aqo recognizes the query and makes a prediction. Pay attention to the cardinality predicted by aqo and the value of aqo error (“error=0%”).
demo=# EXPLAIN (ANALYZE, SUMMARY OFF, TIMING OFF)
SELECT bp.*
FROM ticket_flights tf
JOIN flights f ON f.flight_id = tf.flight_id
JOIN boarding_passes bp ON bp.ticket_no = tf.ticket_no AND bp.flight_id = tf.flight_id
WHERE f.scheduled_departure > '2017-08-15 15:00:00+00';
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
Hash Join (cost=61144.50..136890.86 rows=2215 width=25) (actual rows=2215 loops=1)
AQO: rows=2215, error=0%, fss=8598194613120045129
Hash Cond: ((bp.flight_id = f.flight_id) AND (bp.ticket_no = tf.ticket_no))
-> Seq Scan on boarding_passes bp (cost=0.00..32894.95 rows=1894295 width=25) (actual rows=1894295 loops=1)
AQO: rows=1894295, error=0%, fss=1362775811343989307
-> Hash (cost=51376.68..51376.68 rows=468255 width=22) (actual rows=468255 loops=1)
Buckets: 131072 Batches: 4 Memory Usage: 7438kB
-> Hash Join (cost=1857.05..51376.68 rows=468255 width=22) (actual rows=468255 loops=1)
AQO: rows=468255, error=0%, fss=-7651474063207585780
Hash Cond: (tf.flight_id = f.flight_id)
-> Seq Scan on ticket_flights tf (cost=0.00..43323.35 rows=2360335 width=18) (actual rows=2360335 loops=1)
AQO: rows=2360335, error=0%, fss=-6410966714754547713
-> Hash (cost=1652.80..1652.80 rows=16340 width=4) (actual rows=16340 loops=1)
Buckets: 16384 Batches: 1 Memory Usage: 703kB
-> Seq Scan on flights f (cost=0.00..1652.80 rows=16340 width=4) (actual rows=16340 loops=1)
AQO: rows=16340, error=0%, fss=-1289471166524579716
Filter: (scheduled_departure > '2017-08-15 22:00:00+07'::timestamp with time zone)
Rows Removed by Filter: 49324
Using aqo: true
AQO mode: LEARN
AQO advanced: OFF
Query hash: 402936317627943257
JOINS: 2
(23 rows)
Let's change a constant in the query, and you will notice that the prediction is made with an error:
demo=# EXPLAIN (ANALYZE, SUMMARY OFF, TIMING OFF)
SELECT bp.*
FROM ticket_flights tf
JOIN flights f ON f.flight_id = tf.flight_id
JOIN boarding_passes bp ON bp.ticket_no = tf.ticket_no AND bp.flight_id = tf.flight_id
WHERE f.scheduled_departure > '2017-08-10 15:00:00+00';
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
Hash Join (cost=61144.50..136890.86 rows=2215 width=25) (actual rows=111397 loops=1)
AQO: rows=2215, error=-4929%, fss=8598194613120045129
Hash Cond: ((bp.flight_id = f.flight_id) AND (bp.ticket_no = tf.ticket_no))
-> Seq Scan on boarding_passes bp (cost=0.00..32894.95 rows=1894295 width=25) (actual rows=1894295 loops=1)
AQO: rows=1894295, error=0%, fss=1362775811343989307
-> Hash (cost=51376.68..51376.68 rows=468255 width=22) (actual rows=577437 loops=1)
Buckets: 131072 (originally 131072) Batches: 8 (originally 4) Memory Usage: 7169kB
-> Hash Join (cost=1857.05..51376.68 rows=468255 width=22) (actual rows=577437 loops=1)
AQO: rows=468255, error=-23%, fss=-7651474063207585780
Hash Cond: (tf.flight_id = f.flight_id)
-> Seq Scan on ticket_flights tf (cost=0.00..43323.35 rows=2360335 width=18) (actual rows=2360335 loops=1)
AQO: rows=2360335, error=0%, fss=-6410966714754547713
-> Hash (cost=1652.80..1652.80 rows=16340 width=4) (actual rows=19040 loops=1)
Buckets: 32768 (originally 16384) Batches: 1 (originally 1) Memory Usage: 926kB
-> Seq Scan on flights f (cost=0.00..1652.80 rows=16340 width=4) (actual rows=19040 loops=1)
AQO: rows=16340, error=-17%, fss=-1289471166524579716
Filter: (scheduled_departure > '2017-08-10 22:00:00+07'::timestamp with time zone)
Rows Removed by Filter: 46624
Using aqo: true
AQO mode: LEARN
AQO advanced: OFF
Query hash: 402936317627943257
JOINS: 2
(23 rows)
However, instead of recalculating features and targets, aqo added new values of selectivity and cardinality for this query to aqo_data:
demo=# select * from aqo_data;
fs | fss | dbid | delta_rows | nfeatures | features | targets | reliability | oids | tmpoids
----+----------------------+-------+------------+-----------+---------------------------------------------------------------------------------------+-----------------------------------------+-------------+---------------------+---------
0 | 1362775811343989307 | 16429 | f | 0 | | {14.454357295615447} | {1} | {16452} |
0 | -6410966714754547713 | 16429 | f | 0 | | {14.674314116080508} | {1} | {16479} |
0 | -1289471166524579716 | 16429 | f | 1 | {{-1.3775704575284085},{-1.2287385463463019}} | {9.701371368413994,9.854297308345357} | {1,1} | {16458} |
0 | 8598194613120045129 | 16429 | f | 2 | {{-13.492416828684513,-1.3775704575284085},{-13.492416828684513,-1.2287385463463019}} | {7.703007682479236,11.620855676130656} | {1,1} | {16479,16458,16452} |
0 | -7651474063207585780 | 16429 | f | 2 | {{-11.092306109090387,-1.3775704575284085},{-11.092306109090387,-1.2287385463463019}} | {13.056768298305919,13.266354624518149} | {1,1} | {16479,16458} |
(5 rows)
Now the prediction has a small error of about 3%, which can be explained by a calculation error:
demo=# EXPLAIN (ANALYZE, SUMMARY OFF, TIMING OFF)
SELECT bp.*
FROM ticket_flights tf
JOIN flights f ON f.flight_id = tf.flight_id
JOIN boarding_passes bp ON bp.ticket_no = tf.ticket_no AND bp.flight_id = tf.flight_id
WHERE f.scheduled_departure > '2017-08-10 15:00:00+00';
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
Hash Join (cost=63438.72..139820.11 rows=108541 width=25) (actual rows=111397 loops=1)
AQO: rows=108541, error=-3%, fss=8598194613120045129
Hash Cond: ((bp.flight_id = f.flight_id) AND (bp.ticket_no = tf.ticket_no))
-> Seq Scan on boarding_passes bp (cost=0.00..32894.95 rows=1894295 width=25) (actual rows=1894295 loops=1)
AQO: rows=1894295, error=0%, fss=1362775811343989307
-> Hash (cost=51410.19..51410.19 rows=576635 width=22) (actual rows=577437 loops=1)
Buckets: 131072 Batches: 8 Memory Usage: 4966kB
-> Hash Join (cost=1890.56..51410.19 rows=576635 width=22) (actual rows=577437 loops=1)
AQO: rows=576635, error=-0%, fss=-7651474063207585780
Hash Cond: (tf.flight_id = f.flight_id)
-> Seq Scan on ticket_flights tf (cost=0.00..43323.35 rows=2360335 width=18) (actual rows=2360335 loops=1)
AQO: rows=2360335, error=0%, fss=-6410966714754547713
-> Hash (cost=1652.80..1652.80 rows=19021 width=4) (actual rows=19040 loops=1)
Buckets: 32768 Batches: 1 Memory Usage: 926kB
-> Seq Scan on flights f (cost=0.00..1652.80 rows=19021 width=4) (actual rows=19040 loops=1)
AQO: rows=19021, error=-0%, fss=-1289471166524579716
Filter: (scheduled_departure > '2017-08-10 22:00:00+07'::timestamp with time zone)
Rows Removed by Filter: 46624
Using aqo: true
AQO mode: LEARN
AQO advanced: OFF
Query hash: 402936317627943257
JOINS: 2
(23 rows)
We can modify the query by adding another table to the JOIN list. In this case, aqo predicts the cardinality of the nodes on which it learned before.
demo=# EXPLAIN (ANALYZE, SUMMARY OFF, TIMING OFF)
SELECT t.*
FROM ticket_flights tf
JOIN flights f ON f.flight_id = tf.flight_id
JOIN boarding_passes bp ON bp.ticket_no = tf.ticket_no AND bp.flight_id = tf.flight_id
JOIN tickets t ON t.ticket_no = tf.ticket_no
WHERE f.scheduled_departure > '2017-08-15 15:00:00+00';
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
Nested Loop (cost=61158.91..134296.78 rows=2273 width=104) (actual rows=2215 loops=1)
AQO not used, fss=-8581941154270057231
-> Hash Join (cost=61158.48..133208.83 rows=2273 width=28) (actual rows=2215 loops=1)
AQO: rows=2273, error=3%, fss=8598194613120045129
Hash Cond: ((bp.flight_id = f.flight_id) AND (bp.ticket_no = tf.ticket_no))
-> Seq Scan on boarding_passes bp (cost=0.00..32894.95 rows=1894295 width=18) (actual rows=1894295 loops=1)
AQO: rows=1894295, error=0%, fss=1362775811343989307
-> Hash (cost=51376.89..51376.89 rows=468906 width=22) (actual rows=468255 loops=1)
Buckets: 131072 Batches: 4 Memory Usage: 7438kB
-> Hash Join (cost=1857.26..51376.89 rows=468906 width=22) (actual rows=468255 loops=1)
AQO: rows=468906, error=0%, fss=-7651474063207585780
Hash Cond: (tf.flight_id = f.flight_id)
-> Seq Scan on ticket_flights tf (cost=0.00..43323.35 rows=2360335 width=18) (actual rows=2360335 loops=1)
AQO: rows=2360335, error=0%, fss=-6410966714754547713
-> Hash (cost=1652.80..1652.80 rows=16357 width=4) (actual rows=16340 loops=1)
Buckets: 16384 Batches: 1 Memory Usage: 703kB
-> Seq Scan on flights f (cost=0.00..1652.80 rows=16357 width=4) (actual rows=16340 loops=1)
AQO: rows=16357, error=0%, fss=-1289471166524579716
Filter: (scheduled_departure > '2017-08-15 22:00:00+07'::timestamp with time zone)
Rows Removed by Filter: 49324
-> Index Scan using tickets_pkey on tickets t (cost=0.42..0.48 rows=1 width=104) (actual rows=1 loops=2215)
AQO not used, fss=2731022528523952664
Index Cond: (ticket_no = bp.ticket_no)
Using aqo: true
AQO mode: LEARN
AQO advanced: OFF
Query hash: 7809046947949890015
JOINS: 3
(28 rows)
Example F.2. Using the aqo_query_stat View
The aqo_query_stat view shows statistics on the query planning time, query execution time, and cardinality error. Based on this data, you can decide whether to use aqo predictions for different query classes.
Let's query the aqo_query_stats view:
demo=# SELECT * FROM aqo_query_stat \gx
-[ RECORD 1 ]-----------------+---------------------------------------------------------------
fs | 7809046947949890015
dbid | 16429
execution_time_with_aqo | {1.039218233,0.925258453,0.831166925,0.779602353}
execution_time_without_aqo | {1.022052611,0.936486619}
planning_time_with_aqo | {0.003305339,0.002129048,0.002538877,0.002142972}
planning_time_without_aqo | {0.000767553,0.000711208}
cardinality_error_with_aqo | {0.4854215265638894,0,1.1711726076352047,0.007732205169478082}
cardinality_error_without_aqo | {0.4854215265638894,1.571562511977072}
executions_with_aqo | 4
executions_without_aqo | 2
The retrieved data is for the query from Example F.1, which was executed once without aqo for each of the parameters f.scheduled_departure > '2017-08-10 15:00:00+00' and f.scheduled_departure > '2017-08-15 15:00:00+00' and twice with aqo for each of these parameters. It is clear that with aqo, the cardinality error decreases to 0.0077, while the minimum cardinality error without aqo is 0.4854. Besides, the execution time with aqo is lower than without it. So the conclusion is that aqo learns well on this query, and the prediction can be used for this query class.
Example F.3. Using aqo in the Advanced Mode
The advanced mode allows a more flexible control over aqo. When this mode is activated, that is,
demo=# SET aqo.advanced = on;
aqo will collect the machine learning data separately for each query executed. For example:
demo=# EXPLAIN (ANALYZE, SUMMARY OFF, TIMING OFF)
SELECT bp.*
FROM ticket_flights tf
JOIN flights f ON f.flight_id = tf.flight_id
JOIN boarding_passes bp ON bp.ticket_no = tf.ticket_no AND bp.flight_id = tf.flight_id
WHERE f.scheduled_departure > '2017-08-15 15:00:00+00';
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
Hash Join (cost=63796.33..140286.73 rows=477728 width=25) (actual rows=2215 loops=1)
AQO not used, fss=8598194613120045129
Hash Cond: ((bp.flight_id = f.flight_id) AND (bp.ticket_no = tf.ticket_no))
-> Seq Scan on boarding_passes bp (cost=0.00..32894.95 rows=1894295 width=25) (actual rows=1894295 loops=1)
AQO not used, fss=1362775811343989307
-> Hash (cost=51379.43..51379.43 rows=595260 width=22) (actual rows=468255 loops=1)
Buckets: 131072 Batches: 8 Memory Usage: 4223kB
-> Hash Join (cost=1859.80..51379.43 rows=595260 width=22) (actual rows=468255 loops=1)
AQO not used, fss=-7651474063207585780
Hash Cond: (tf.flight_id = f.flight_id)
-> Seq Scan on ticket_flights tf (cost=0.00..43323.35 rows=2360335 width=18) (actual rows=2360335 loops=1)
AQO not used, fss=-6410966714754547713
-> Hash (cost=1652.80..1652.80 rows=16560 width=4) (actual rows=16340 loops=1)
Buckets: 32768 Batches: 1 Memory Usage: 831kB
-> Seq Scan on flights f (cost=0.00..1652.80 rows=16560 width=4) (actual rows=16340 loops=1)
AQO not used, fss=-1289471166524579716
Filter: (scheduled_departure > '2017-08-15 22:00:00+07'::timestamp with time zone)
Rows Removed by Filter: 49324
Using aqo: true
AQO mode: LEARN
AQO advanced: ON
Query hash: 402936317627943257
JOINS: 2
(23 rows)
Now this query is stored in aqo_data with a non-zero fs (fs is equal to the query hash by default):
demo=# SELECT * FROM aqo_data;
fs | fss | dbid | delta_rows | nfeatures | features | targets | reliability | oids | tmpoids
--------------------+----------------------+-------+------------+-----------+---------------------------------------------+----------------------+-------------+---------------------+---------
402936317627943257 | -6410966714754547713 | 16429 | f | 0 | | {14.674314116080508} | {1} | {16479} |
402936317627943257 | 1362775811343989307 | 16429 | f | 0 | | {14.454357295615447} | {1} | {16452} |
402936317627943257 | -1289471166524579716 | 16429 | f | 1 | {{-1.3775704575284085}} | {9.701371368413994} | {1} | {16458} |
402936317627943257 | 8598194613120045129 | 16429 | f | 2 | {{-13.492416828684513,-1.3775704575284085}} | {7.703007682479236} | {1} | {16479,16458,16452} |
402936317627943257 | -7651474063207585780 | 16429 | f | 2 | {{-11.092306109090387,-1.3775704575284085}} | {13.056768298305919} | {1} | {16479,16458} |
(5 rows)
We can adjust a few settings individually for this query: the learn_aqo, use_aqo, and auto_tuning values in the aqo_queries view:
demo=# SELECT * FROM aqo_queries;
fs | dbid | learn_aqo | use_aqo | auto_tuning | smart_timeout | count_increase_timeout
--------------------+-------+-----------+---------+-------------+---------------+------------------------
402936317627943257 | 16429 | t | t | f | 0 | 0
0 | 0 | f | f | f | 0 | 0
(2 rows)
Let's set use_aqo to false:
demo=# SELECT aqo_queries_update(402936317627943257, NULL, NULL, false, NULL);
 aqo_queries_update
--------------------
 t
(1 row)
Now we change a constant in the query:
demo=# EXPLAIN (ANALYZE, SUMMARY OFF, TIMING OFF)
SELECT bp.*
FROM ticket_flights tf
JOIN flights f ON f.flight_id = tf.flight_id
JOIN boarding_passes bp ON bp.ticket_no = tf.ticket_no AND bp.flight_id = tf.flight_id
WHERE f.scheduled_departure > '2017-08-10 15:00:00+00';
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
Hash Join (cost=65822.16..142872.60 rows=554378 width=25) (actual rows=111397 loops=1)
AQO not used, fss=0
Hash Cond: ((bp.flight_id = f.flight_id) AND (bp.ticket_no = tf.ticket_no))
-> Seq Scan on boarding_passes bp (cost=0.00..32894.95 rows=1894295 width=25) (actual rows=1894295 loops=1)
AQO not used, fss=0
-> Hash (cost=51412.64..51412.64 rows=690768 width=22) (actual rows=577437 loops=1)
Buckets: 131072 Batches: 8 Memory Usage: 4966kB
-> Hash Join (cost=1893.01..51412.64 rows=690768 width=22) (actual rows=577437 loops=1)
AQO not used, fss=0
Hash Cond: (tf.flight_id = f.flight_id)
-> Seq Scan on ticket_flights tf (cost=0.00..43323.35 rows=2360335 width=18) (actual rows=2360335 loops=1)
AQO not used, fss=0
-> Hash (cost=1652.80..1652.80 rows=19217 width=4) (actual rows=19040 loops=1)
Buckets: 32768 Batches: 1 Memory Usage: 926kB
-> Seq Scan on flights f (cost=0.00..1652.80 rows=19217 width=4) (actual rows=19040 loops=1)
AQO not used, fss=0
Filter: (scheduled_departure > '2017-08-10 22:00:00+07'::timestamp with time zone)
Rows Removed by Filter: 46624
Using aqo: false
AQO mode: LEARN
AQO advanced: ON
Query hash: 402936317627943257
JOINS: 2
(23 rows)
aqo was not used for this query, but there is new data in the aqo_data view:
demo=# SELECT * FROM aqo_data;
fs | fss | dbid | delta_rows | nfeatures | features | targets | reliability | oids | tmpoids
--------------------+----------------------+-------+------------+-----------+---------------------------------------------------------------------------------------+-----------------------------------------+-------------+---------------------+---------
402936317627943257 | -6410966714754547713 | 16429 | f | 0 | | {14.674314116080508} | {1} | {16479} |
402936317627943257 | 1362775811343989307 | 16429 | f | 0 | | {14.454357295615447} | {1} | {16452} |
402936317627943257 | -1289471166524579716 | 16429 | f | 1 | {{-1.3775704575284085},{-1.2287385463463019}} | {9.701371368413994,9.854297308345357} | {1,1} | {16458} |
402936317627943257 | 8598194613120045129 | 16429 | f | 2 | {{-13.492416828684513,-1.3775704575284085},{-13.492416828684513,-1.2287385463463019}} | {7.703007682479236,11.620855676130656} | {1,1} | {16479,16458,16452} |
402936317627943257 | -7651474063207585780 | 16429 | f | 2 | {{-11.092306109090387,-1.3775704575284085},{-11.092306109090387,-1.2287385463463019}} | {13.056768298305919,13.266354624518149} | {1,1} | {16479,16458} |
(5 rows)
The use_aqo setting does not apply to other queries. After executing another query twice, we can see that aqo learns on it and makes a prediction for it:
EXPLAIN (ANALYZE, SUMMARY OFF, TIMING OFF)
SELECT t.*
FROM ticket_flights tf
JOIN flights f ON f.flight_id = tf.flight_id
JOIN boarding_passes bp ON bp.ticket_no = tf.ticket_no AND bp.flight_id = tf.flight_id
JOIN tickets t ON t.ticket_no = tf.ticket_no
WHERE f.scheduled_departure > '2017-08-15 15:00:00+00';
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
Nested Loop (cost=61144.93..134251.05 rows=2215 width=104) (actual rows=2215 loops=1)
AQO: rows=2215, error=0%, fss=-8581941154270057231
-> Hash Join (cost=61144.50..133190.86 rows=2215 width=28) (actual rows=2215 loops=1)
AQO: rows=2215, error=0%, fss=8598194613120045129
Hash Cond: ((bp.flight_id = f.flight_id) AND (bp.ticket_no = tf.ticket_no))
-> Seq Scan on boarding_passes bp (cost=0.00..32894.95 rows=1894295 width=18) (actual rows=1894295 loops=1)
AQO: rows=1894295, error=0%, fss=1362775811343989307
-> Hash (cost=51376.68..51376.68 rows=468255 width=22) (actual rows=468255 loops=1)
Buckets: 131072 Batches: 4 Memory Usage: 7438kB
-> Hash Join (cost=1857.05..51376.68 rows=468255 width=22) (actual rows=468255 loops=1)
AQO: rows=468255, error=0%, fss=-7651474063207585780
Hash Cond: (tf.flight_id = f.flight_id)
-> Seq Scan on ticket_flights tf (cost=0.00..43323.35 rows=2360335 width=18) (actual rows=2360335 loops=1)
AQO: rows=2360335, error=0%, fss=-6410966714754547713
-> Hash (cost=1652.80..1652.80 rows=16340 width=4) (actual rows=16340 loops=1)
Buckets: 16384 Batches: 1 Memory Usage: 703kB
-> Seq Scan on flights f (cost=0.00..1652.80 rows=16340 width=4) (actual rows=16340 loops=1)
AQO: rows=16340, error=0%, fss=-1289471166524579716
Filter: (scheduled_departure > '2017-08-15 22:00:00+07'::timestamp with time zone)
Rows Removed by Filter: 49324
-> Index Scan using tickets_pkey on tickets t (cost=0.42..0.48 rows=1 width=104) (actual rows=1 loops=2215)
AQO not used, fss=2731022528523952664
Index Cond: (ticket_no = bp.ticket_no)
Using aqo: true
AQO mode: LEARN
AQO advanced: ON
Query hash: 7809046947949890015
JOINS: 3
(28 rows)
Example F.4. Using the Sandbox Mode
SET aqo.sandbox = ON;
SET aqo.enable = ON;
SET aqo.advanced = OFF;

-- Clean up the sandbox knowledge base without touching the main data
SELECT aqo_reset();

EXPLAIN (ANALYZE, SUMMARY OFF, TIMING OFF)
SELECT t.*
FROM ticket_flights tf
JOIN flights f ON f.flight_id = tf.flight_id
JOIN boarding_passes bp ON bp.ticket_no = tf.ticket_no AND bp.flight_id = tf.flight_id
JOIN tickets t ON t.ticket_no = tf.ticket_no
WHERE f.scheduled_departure > '2017-08-15 15:00:00+00';

-- Keep executing the previous query until the plans stabilize
...

-- Copy the data obtained with aqo.advanced = OFF out of the sandbox
CREATE TABLE aqo_data_sandbox AS SELECT * FROM aqo_data;
SET aqo.sandbox = OFF;
SELECT aqo_data_update(fs, fss, dbid, delta_rows, nfeatures, features, targets, reliability, oids, tmpoids)
FROM aqo_data_sandbox
WHERE fs = 0;
DROP TABLE aqo_data_sandbox;
F.2.6. Author #
Oleg Ivanov