MySQL Shell's table export utility
      util.exportTable() exports a MySQL relational
      table into a data file, either on the local server or in an Oracle
      Cloud Infrastructure Object Storage bucket. The data can then be
      uploaded into a table on a target MySQL server using
      MySQL Shell's parallel table import utility
      util.importTable() (see
      Section 11.4, “Parallel Table Import Utility”), which
      uses parallel connections to provide rapid data import for large
      data files. The data file can also be used to import data to a
      different application, or as a lightweight logical backup for a
      single data table.
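
        As a minimal sketch of this round trip (the hr.employees
        table, the local file path, and the target schema here are
        illustrative assumptions, not fixed names), you might export
        on the source server and then import on the target in MySQL
        Shell's JavaScript mode:

shell-js> util.exportTable("hr.employees", "/tmp/employees.txt")
shell-js> util.importTable("/tmp/employees.txt", {schema: "hr", table: "employees"})

        The import call is issued while the MySQL Shell global session
        is connected to the target MySQL instance with a classic MySQL
        protocol connection, as required by the parallel table import
        utility.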
    
        By default, the table export utility produces a data file in the
        default format for MySQL Shell's parallel table import utility.
        Preset options are available to export CSV files for either DOS
        or UNIX systems, and TSV files. The table export utility cannot
        produce JSON data. You can also set field- and line-handling
        options as for the
        SELECT...INTO
        OUTFILE statement to create data files in arbitrary
        formats.
      
        util.exportTable() can be used with
        partitioned and subpartitioned tables, but does not perform any
        special handling of these. One file is always created per table
        by this utility, regardless of release version.
      
When choosing a destination for the table export file, note that for import into a MySQL HeatWave Service DB System, the MySQL Shell instance where you run the parallel table import utility must be installed on an Oracle Cloud Infrastructure Compute instance that has access to the MySQL HeatWave Service DB System. If you export the table to a file in an Object Storage bucket, you can access the Object Storage bucket from the Compute instance. If you create the table export file on your local system, you need to transfer it to the Oracle Cloud Infrastructure Compute instance using the copy utility of your choice, depending on the operating system you chose for your Compute instance.
The following requirements apply to exports using the table export utility:
- MySQL 5.7 or later is required for the source MySQL instance and the destination MySQL instance.

- The upload method used to transfer files to an Oracle Cloud Infrastructure Object Storage bucket has a file size limit of 1.2 TiB.

- The table export utility uses the MySQL Shell global session to obtain the connection details of the target MySQL server from which the export is carried out. You must open the global session (which can have an X Protocol connection or a classic MySQL protocol connection) before running the utility. The utility opens its own session for each thread, copying options such as connection compression and SSL options from the global session, and does not make any further use of the global session. You can limit the maximum rate of data transfer to balance the load on the network.
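
        For example (a sketch only, with a hypothetical account, host,
        and table), you could open the global session and then cap the
        read throughput of each thread using the maxRate option
        described later in this topic:

shell-js> shell.connect("hanna@db-host:3306")
shell-js> util.exportTable("hr.employees", "/tmp/employees.txt", {maxRate: "50M"})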
        In the MySQL Shell API, the table export utility is a function
        of the util global object, and has the following
        signature:
      
util.exportTable(table, outputUrl[, options])
        table is the name of the relational data
        table to be exported to the data file. The table name can be
        qualified with a valid schema name, and quoted with the backtick
        character if needed. If the schema is omitted, the active schema
        for the MySQL Shell global session is used.
      
        options is a dictionary of options that can
        be omitted if it is empty. The options are listed in the final
        section of this topic.
      
        If you are exporting the data to the local filesystem,
        outputUrl is a string specifying the path to
        the exported data file, and the file name itself, with an
        appropriate extension. You can specify an absolute path or a
        path relative to the current working directory. You can prefix a
        local directory path with the file:// scheme.
        In this example in MySQL Shell's JavaScript mode, the user
        exports the employees table from the
        hr schema using the default dialect. The file
        is written to the exports directory in the
        user's home directory, and is given a .txt
        extension that is appropriate for a file in this format:
      
shell-js> util.exportTable("hr.employees", "file:///home/hanna/exports/employees.txt")
        The target directory must exist before the export takes place,
        but it does not have to be empty. If the exported data file
        already exists there, it is overwritten. For an export to a
        local directory, the data file is created with the access
        permissions rw-r----- (on operating systems
        where these are supported). The owner of the file is the user
        account that is running MySQL Shell.
      
        If you are exporting the data to an Oracle Cloud Infrastructure Object Storage bucket,
        or to S3-compatible storage, outputUrl is the
        name for the data file in the bucket, including a suitable file
        extension. You can include directory separators to simulate a
        directory structure. Use the osBucketName
        option to provide the name of the Object Storage bucket, and the
        osNamespace option to identify the namespace
        for the bucket. In this example in MySQL Shell's Python mode,
        the user exports the employees table from the
        hr schema as a file in TSV format to the
        Object Storage bucket hanna-bucket:
      
shell-py> util.export_table("hr.employees", "dump/employees.tsv", {
        > "dialect": "tsv", "osBucketName": "hanna-bucket", "osNamespace": "idx28w1ckztq" })
        The namespace for an Object Storage bucket is displayed in the
        Bucket Information tab of the bucket
        details page in the Oracle Cloud Infrastructure console, or can be obtained using the
        Oracle Cloud Infrastructure command line interface. A connection is established to the
        Object Storage bucket using the default profile in the default
        Oracle Cloud Infrastructure CLI configuration file, or
        alternative details that you specify using the
        ociConfigFile and
        ociProfile options. For instructions to set
        up a CLI configuration file, see
        SDK
        and CLI Configuration File.
      
The following options are available with the table export utility:

- where: "string"
  A valid SQL condition expression used to filter the data being exported.
  Note: The SQL is validated only when it is executed. If you are exporting many tables, any SQL-syntax-related issues are only seen late in the process. As such, it is recommended that you test your SQL condition before using it in a long-running export process.
  In the following example, where exports only those rows of the table sakila.actor where the value of actor_id is greater than 150, to a file named dump.csv:
  util.exportTable("sakila.actor", "dump.csv", {"where" : "actor_id > 150"})
- partitions: ["string","string",...]
  A list of valid partition names which limits the export to the specified partitions.
  The following example exports the partitions p1 and p2 from schema.table to a file named dump.csv:
  util.exportTable("schema.table", "dump.csv", {"partitions" : ["p1", "p2"]})
- dialect: [default|csv|csv-unix|tsv]
  Specify a set of field- and line-handling options for the format of the exported data file. You can use the selected dialect as a base for further customization, by also specifying one or more of the linesTerminatedBy, fieldsTerminatedBy, fieldsEnclosedBy, fieldsOptionallyEnclosed, and fieldsEscapedBy options to change the settings. (A combined sketch using these options appears at the end of this topic.)
  The default dialect produces a data file matching what would be created using a SELECT...INTO OUTFILE statement with the default settings for that statement. .txt is an appropriate file extension to assign to these output files. Other dialects are available to export CSV files for either DOS or UNIX systems (.csv), and TSV files (.tsv).
  The settings applied for each dialect are as follows:
  Table 11.1 Dialect settings for table export utility

    dialect    linesTerminatedBy  fieldsTerminatedBy  fieldsEnclosedBy  fieldsOptionallyEnclosed  fieldsEscapedBy
    default    [LF]               [TAB]               [empty]           false                     \
    csv        [CR][LF]           ,                   "                 true                      \
    csv-unix   [LF]               ,                   "                 false                     \
    tsv        [CR][LF]           [TAB]               "                 true                      \
  Note: The carriage return and line feed values for the dialects are operating system independent.
  If you use the linesTerminatedBy, fieldsTerminatedBy, fieldsEnclosedBy, fieldsOptionallyEnclosed, and fieldsEscapedBy options, depending on the escaping conventions of your command interpreter, the backslash character (\) might need to be doubled if you use it in the option values.
  Like the MySQL server with the SELECT...INTO OUTFILE statement, MySQL Shell does not validate the field- and line-handling options that you specify. Inaccurate selections for these options can cause data to be exported partially or incorrectly. Always verify your settings before starting the export, and verify the results afterwards.
- linesTerminatedBy: "characters"
  One or more characters (or an empty string) with which the utility terminates each of the lines in the exported data file. The default is as for the specified dialect, or a linefeed character (\n) if the dialect option is omitted. This option is equivalent to the LINES TERMINATED BY option for the SELECT...INTO OUTFILE statement. Note that the utility does not provide an equivalent for the LINES STARTING BY option for the SELECT...INTO OUTFILE statement, which is set to the empty string.
- fieldsTerminatedBy: "characters"
  One or more characters (or an empty string) with which the utility terminates each of the fields in the exported data file. The default is as for the specified dialect, or a tab character (\t) if the dialect option is omitted. This option is equivalent to the FIELDS TERMINATED BY option for the SELECT...INTO OUTFILE statement.
- fieldsEnclosedBy: "character"
  A single character (or an empty string) with which the utility encloses each of the fields in the exported data file. The default is as for the specified dialect, or the empty string if the dialect option is omitted. This option is equivalent to the FIELDS ENCLOSED BY option for the SELECT...INTO OUTFILE statement.
- fieldsOptionallyEnclosed: [ true | false ]
  Whether the character given for fieldsEnclosedBy is to enclose all of the fields in the exported data file (false), or to enclose a field only if it has a string data type such as CHAR, BINARY, TEXT, or ENUM (true). The default is as for the specified dialect, or false if the dialect option is omitted. This option makes the fieldsEnclosedBy option equivalent to the FIELDS OPTIONALLY ENCLOSED BY option for the SELECT...INTO OUTFILE statement.
- fieldsEscapedBy: "character"
  The character that is to begin escape sequences in the exported data file. The default is as for the specified dialect, or a backslash (\) if the dialect option is omitted. This option is equivalent to the FIELDS ESCAPED BY option for the SELECT...INTO OUTFILE statement. If you set this option to the empty string, no characters are escaped, which is not recommended because special characters used by SELECT...INTO OUTFILE must be escaped.
- maxRate: "string"
  The maximum number of bytes per second per thread for data read throughput during the export. The unit suffixes k for kilobytes, M for megabytes, and G for gigabytes can be used (for example, setting 100M limits throughput to 100 megabytes per second per thread). Setting 0 (which is the default value), or setting the option to an empty string, means no limit is set.
- showProgress: [ true | false ]
  Display (true) or hide (false) progress information for the export. The default is true if stdout is a terminal (tty), such as when MySQL Shell is in interactive mode, and false otherwise. The progress information includes the estimated total number of rows to be exported, the number of rows exported so far, the percentage complete, and the throughput in rows and bytes per second.
- compression: "string;level=n"
  The compression type and level of compression to use when writing the exported data file. The following compression options are available:
  - none: Default. No compression is applied.
  - gzip: Uses the gzip compression library. The compression level can be set from 0 to 9. The default compression level is 1. For example: "compression": "gzip;level=4"
  - zstd: Uses the zstd compression library. The compression level can be set from 1 to 22. The default compression level is 1. For example: "compression": "zstd;level=15"
- defaultCharacterSet: "string"
  The character set to be used during the session connections that are opened by MySQL Shell to the server for the export. The default is utf8mb4. The session values of the system variables character_set_client, character_set_connection, and character_set_results are set to this value for each connection. The character set must be permitted by the character_set_client system variable and supported by the MySQL instance.
- osBucketName: "string"
  The name of the Oracle Cloud Infrastructure Object Storage bucket to which the exported data file is to be written. By default, the [DEFAULT] profile in the Oracle Cloud Infrastructure CLI configuration file located at ~/.oci/config is used to establish a connection to the bucket. You can substitute an alternative profile to be used for the connection with the ociConfigFile and ociProfile options. For instructions to set up a CLI configuration file, see SDK and CLI Configuration File.
- osNamespace: "string"
  The Oracle Cloud Infrastructure namespace where the Object Storage bucket named by osBucketName is located. The namespace for an Object Storage bucket is displayed in the Bucket Information tab of the bucket details page in the Oracle Cloud Infrastructure console, or can be obtained using the Oracle Cloud Infrastructure command line interface.
- ociConfigFile: "string"
  An Oracle Cloud Infrastructure CLI configuration file that contains the profile to use for the connection, instead of the one in the default location ~/.oci/config.
- ociProfile: "string"
  The profile name of the Oracle Cloud Infrastructure profile to use for the connection, instead of the [DEFAULT] profile in the Oracle Cloud Infrastructure CLI configuration file used for the connection.
MySQL Shell supports exporting tables to S3-compatible buckets, such as Amazon Web Services (AWS) S3.
MySQL Shell supports AWS S3 configuration in command line options, environment variables, and configuration files. Command line options override environment variables, configuration files, and default options.
For information on configuration requirements, see Section 4.7, “Cloud Service Configuration”.
- s3BucketName: "string"
  The name of the S3 bucket to which the export is to be written. By default, the default profile of the config and credentials files located at ~/.aws/ is used to establish a connection to the S3 bucket. You can substitute alternative configurations and credentials for the connection with the s3ConfigFile and s3CredentialsFile options. For instructions on installing and configuring the AWS CLI, see Getting started with the AWS CLI.
- s3CredentialsFile: "string"
  A credentials file that contains the user's credentials to use for the connection, instead of the one in the default location, ~/.aws/credentials. Typically, the credentials file contains the aws_access_key_id and aws_secret_access_key to use for the connection.
- s3ConfigFile: "string"
  A configuration file that contains the profile to use for the connection, instead of the one in the default location, such as ~/.aws/config. Typically, the config file contains the region and output type to use for the connection.
- s3Profile: "string"
  The profile name of the s3 CLI profile to use for the connection, instead of the default profile.
- s3Region: "string"
  The name of the region to use for the connection.
- s3EndpointOverride: "string"
  The URL of the endpoint to use instead of the default.
  When connecting to the Oracle Cloud Infrastructure S3 compatibility API, the endpoint takes the following format: https://namespace.compat.objectstorage.region.oraclecloud.com. Replace namespace with the Object Storage namespace and region with your region identifier. For example, the region identifier for the US East (Ashburn) region is us-ashburn-1. For a namespace named axaxnpcrorw5 in the US East (Ashburn) region: https://axaxnpcrorw5.compat.objectstorage.us-ashburn-1.oraclecloud.com.
MySQL Shell supports exporting to Microsoft Azure Blob Storage.
MySQL Shell supports Microsoft Azure Blob Storage configuration in command line options, environment variables, and configuration files. Command line options override environment variables and configuration files.
For information on configuration requirements and the order of precedence of the configuration types, see Section 4.7, “Cloud Service Configuration”.
- azureContainerName: "string"
  Mandatory. The name of the Azure container to which the export is to be written. The container must exist.
- azureConfigFile: "string"
  Optional. A configuration file that contains the storage connection parameters, instead of the one in the default location, such as ~/.azure/config. If this is not defined, the default configuration file is used. azureContainerName must be defined, and not be empty.
- azureStorageAccount: "string"
  Optional. The name of the Azure storage account to use for the operation.
- azureStorageSasToken: "string"
  Optional. Azure Shared Access Signature (SAS) token to be used for the authentication of the operation, instead of a key.
        In the following example, the configuration uses a connection
        string for the connection parameters, which means the
        exportTable command requires only the
        azureContainerName.
      
        Example config file:
      
        [cloud]
         name = AzureCloud
        [storage]
         connection_string=alphanumericConnectionString
        Example exportTable command, which exports
        the sakila.actor table, as a TSV file, to a
        container named mysqlshellazure:
      
        util.exportTable("sakila.actor", "actor.tsv", {dialect: "tsv", azureContainerName: "mysqlshellazure"})