Monday, 22 March 2021

Oracle 19c New Features: Data Pump Parameter MAX_DATAPUMP_JOBS_PER_PDB

 Description:-

o    It determines the maximum number of concurrent Oracle Data Pump jobs per PDB.

o    With this parameter, a DBA can more easily govern Oracle Data Pump resource utilization.

o    The default value will not work for all databases. Database administrators will have to determine if the default value works well for their database.

o    When this parameter has a value of AUTO, Oracle Data Pump derives its actual value to be 50% of the SESSIONS initialization parameter (a quick example follows this list).

o    A value that is too large could cause Oracle Data Pump to consume too many system resources, while a value that is too small could prevent users from performing their Oracle Data Pump tasks.

o    The main resource Oracle Data Pump uses is the shared pool in the System Global Area (SGA) of the database. Parallel jobs increase the number of sessions and, depending on the job, the number of PQ slaves used.
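A quick sketch of the AUTO setting (the SCOPE clause is optional and shown only for illustration); with AUTO, the effective limit is derived as 50% of the SESSIONS value reported by the first command:

SQL> show parameter sessions
SQL> alter system set max_datapump_jobs_per_pdb=AUTO scope=both;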

Demo:-

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 DBWRPDB                        READ WRITE NO

Parameter change (in a separate SYSDBA session):-


[oracle@dev19c ~]$ sqlplus / as sysdba 

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Sep 29 21:55:48 2019

Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Connected to:

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Version 19.3.0.0.0

SQL> show parameter MAX_DATAPUMP_JOBS_PER_PDB

NAME TYPE VALUE

------------------------------------ ----------- ------------------------------

max_datapump_jobs_per_pdb string 100

- for Multitenant, in CDB$ROOT:

SQL> alter system set MAX_DATAPUMP_JOBS_PER_PDB=2 container=all;

System altered.

- for non-Multitenant:

SQL> alter system set MAX_DATAPUMP_JOBS_PER_PDB=2;

System altered.

SQL> show parameter MAX_DATAPUMP_JOBS_PER_PDB

NAME TYPE VALUE

------------------------------------ ----------- ------------------------------

max_datapump_jobs_per_pdb string    2

Testing with Data Pump

- two Data Pump jobs are started and running:

1st session

[oracle@dev19c ~]$ export ORACLE_SID=dbwr

[oracle@dev19c ~]$ expdp system/oracle@dbwrpdb directory=DATA_PUMP_DIR dumpfile=uat1.dmp full=yes

Export: Release 19.0.0.0.0 - Production on Sun Sep 29 22:08:02 2019

Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Starting "SYSTEM"."SYS_EXPORT_FULL_01": system/********@dbwrpdb directory=DATA_PUMP_DIR dumpfile=uat1.dmp full=yes

Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA

-----> still running


2nd session

[oracle@dev19c ~]$ export ORACLE_SID=dbwr
[oracle@dev19c ~]$ expdp system/oracle@dbwrpdb directory=DATA_PUMP_DIR dumpfile=dev1.dmp full=yes

Export: Release 19.0.0.0.0 - Production on Sun Sep 29 22:08:05 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Starting "SYSTEM"."SYS_EXPORT_FULL_02": system/********@dbwrpdb directory=DATA_PUMP_DIR dumpfile=dev1.dmp full=yes
Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA

-----> still running

3rd session

[oracle@dev19c ~]$ export ORACLE_SID=dbwr
[oracle@dev19c ~]$ expdp system/oracle@dbwrpdb directory=DATA_PUMP_DIR dumpfile=test1.dmp full=yes

Export: Release 19.0.0.0.0 - Production on Sun Sep 29 22:08:07 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
ORA-31626: job does not exist
ORA-31637: cannot create job SYS_EXPORT_FULL_03 for user SYSTEM
ORA-06512: at "SYS.KUPV$FT", line 1142
ORA-06512: at "SYS.KUPV$FT", line 1744
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 1099
ORA-39391: maximum number of Data Pump jobs (2) exceeded
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.KUPV$FT_INT", line 969
ORA-06512: at "SYS.KUPV$FT", line 1646
ORA-06512: at "SYS.KUPV$FT", line 1103
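To see which jobs are currently counting against the limit, a query along these lines against DBA_DATAPUMP_JOBS (run from a suitably privileged session) can be used:

SQL> SELECT owner_name, job_name, operation, job_mode, state
  2  FROM   dba_datapump_jobs
  3  WHERE  state = 'EXECUTING';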

 

Overview of Oracle Data Pump

Oracle Data Pump technology enables very high-speed movement of data and metadata from one database to another.

An understanding of the topics covered in the sections below can help you to use Oracle Data Pump to its fullest advantage.


1.1 Data Pump Components

Oracle Data Pump is made up of three distinct components: the command-line clients, expdp and impdp; the DBMS_DATAPUMP PL/SQL package (also known as the Data Pump API); and the DBMS_METADATA PL/SQL package (also known as the Metadata API).

The Data Pump clients, expdp and impdp, start the Data Pump Export utility and Data Pump Import utility, respectively.

The expdp and impdp clients use the procedures provided in the DBMS_DATAPUMP PL/SQL package to execute export and import commands, using the parameters entered at the command line. These parameters enable the exporting and importing of data and metadata for a complete database or for subsets of a database.

When metadata is moved, Data Pump uses functionality provided by the DBMS_METADATA PL/SQL package. The DBMS_METADATA package provides a centralized facility for the extraction, manipulation, and re-creation of dictionary metadata.

The DBMS_DATAPUMP and DBMS_METADATA PL/SQL packages can be used independently of the Data Pump clients.

Note:

All Data Pump Export and Import processing, including the reading and writing of dump files, is done on the system (server) selected by the specified database connect string. This means that for unprivileged users, the database administrator (DBA) must create directory objects for the Data Pump files that are read and written on that server file system. (For security reasons, DBAs must ensure that only approved users are allowed access to directory objects.) For privileged users, a default directory object is available. See "Default Locations for Dump, Log, and SQL Files" for more information about directory objects.

See Also:

·         Oracle Database PL/SQL Packages and Types Reference for a description of the DBMS_DATAPUMP and the DBMS_METADATA packages

·         Oracle Database SecureFiles and Large Objects Developer's Guide for information about guidelines to consider when creating directory objects


1.2 How Does Data Pump Move Data?

This section describes the methods that Data Pump uses to move data in and out of databases, and when each of the methods is used.

Note:

Data Pump does not load tables with disabled unique indexes. To load data into the table, the indexes must be either dropped or reenabled.

  • Using Data File Copying to Move Data
    The fastest method of moving data is to copy the database data files to the target database without interpreting or altering the data. With this method, Data Pump Export is used to unload only structural information (metadata) into the dump file.
  • Using Direct Path to Move Data
    After data file copying, direct path is the fastest method of moving data. In this method, the SQL layer of the database is bypassed and rows are moved to and from the dump file with only minimal interpretation.
  • Using External Tables to Move Data
    When data file copying is not selected and the data cannot be moved using direct path, the external tables mechanism is used.
  • Using Conventional Path to Move Data
    Where there are conflicting table attributes, Data Pump uses conventional path to move data.
  • Using Network Link Import to Move Data
    When the Import NETWORK_LINK parameter is used to specify a network link for an import operation, the direct path method is used by default.


1.2.1 Using Data File Copying to Move Data

The fastest method of moving data is to copy the database data files to the target database without interpreting or altering the data. With this method, Data Pump Export is used to unload only structural information (metadata) into the dump file. Data file copying is used when one of the following parameter combinations is specified:

·         The TRANSPORT_TABLESPACES parameter is used to specify a transportable tablespace export. Only metadata for the specified tablespaces is exported.

·         The TRANSPORTABLE=ALWAYS parameter is supplied on a table mode export (specified with the TABLES parameter), a full mode export (specified with the FULL parameter), or a full mode network import (specified with the FULL and NETWORK_LINK parameters).

When an export operation uses data file copying, the corresponding import job always also uses data file copying. During the ensuing import operation, both the data files and the export dump file must be loaded.
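As a sketch of the TRANSPORT_TABLESPACES case described above, assuming a tablespace named users_ts and an existing dpump_dir1 directory object (both names are illustrative): the tablespace is first made read-only, only its metadata is unloaded, and the data files themselves are then copied to the target system along with the dump file.

SQL> ALTER TABLESPACE users_ts READ ONLY;

> expdp system TRANSPORT_TABLESPACES=users_ts DIRECTORY=dpump_dir1 DUMPFILE=users_tts.dmp LOGFILE=users_tts.log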

Note:

During transportable imports tablespaces are temporarily made read/write and then set back to read-only. This is new behavior introduced as of Oracle Database 12c Release 1 (12.1.0.2) to improve performance. However, you should be aware that this behavior also causes the SCNs of the import job's data files to change, which can cause issues during future transportable imports of those files.

For example, if a transportable tablespace import fails at any point after the tablespaces have been made read/write (even if they are now read-only again), then the data files become corrupt. They cannot be recovered.

Since transportable jobs are not restartable, the failed job needs to be restarted from the beginning. The corrupt datafiles must be deleted and fresh versions must be copied to the target destination.

When transportable jobs are performed, it is best practice to keep a copy of the data files on the source system until the import job has successfully completed on the target system. If the import job should fail for some reason, you will still have uncorrupted copies of the data files.

When data is moved by using data file copying, there are some limitations regarding character set compatibility between the source and target databases.

If the source platform and the target platform are of different endianness, then you must convert the data being transported so that it is in the format of the target platform. You can use the DBMS_FILE_TRANSFER PL/SQL package or the RMAN CONVERT command to convert the data.

See Also:

·         Oracle Database Backup and Recovery Reference for information about the RMAN CONVERT command

·         Oracle Database Administrator's Guide for a description and example (including how to convert the data) of transporting tablespaces between databases


1.2.2 Using Direct Path to Move Data

After data file copying, direct path is the fastest method of moving data. In this method, the SQL layer of the database is bypassed and rows are moved to and from the dump file with only minimal interpretation.

Data Pump automatically uses the direct path method for loading and unloading data unless the structure of a table does not allow it. For example, if a table contains a column of type BFILE, then direct path cannot be used to load that table and external tables is used instead.

The following sections describe situations in which direct path cannot be used for loading and unloading.

Situations in Which Direct Path Load Is Not Used

If any of the following conditions exist for a table, then Data Pump uses external tables rather than direct path to load the data for that table:

·         A domain index that is not a CONTEXT type index exists for a LOB column.

·         A global index on multipartition tables exists during a single-partition load. This includes object tables that are partitioned.

·         A table is in a cluster.

·         There is an active trigger on a preexisting table.

·         Fine-grained access control is enabled in insert mode on a preexisting table.

·         A table contains BFILE columns or columns of opaque types.

·         A referential integrity constraint is present on a preexisting table.

·         A table contains VARRAY columns with an embedded opaque type.

·         The table has encrypted columns.

·         The table into which data is being imported is a preexisting table and at least one of the following conditions exists:

·         There is an active trigger

·         The table is partitioned

·         Fine-grained access control is in insert mode

·         A referential integrity constraint exists

·         A unique index exists

·         Supplemental logging is enabled and the table has at least one LOB column.

·         The Data Pump command for the specified table used the QUERY, SAMPLE, or REMAP_DATA parameter.

·         A table contains a column (including a VARRAY column) with a TIMESTAMP WITH TIME ZONE data type and the version of the time zone data file is different between the export and import systems.

Situations in Which Direct Path Unload Is Not Used

If any of the following conditions exist for a table, then Data Pump uses external tables rather than direct path to unload the data:

·         Fine-grained access control for SELECT is enabled.

·         The table is a queue table.

·         The table contains one or more columns of type BFILE or opaque, or an object type containing opaque columns.

·         The table contains encrypted columns.

·         The table contains a column of an evolved type that needs upgrading.

·         The Data Pump command for the specified table used the QUERY, SAMPLE, or REMAP_DATA parameter.

·         Prior to the unload operation, the table was altered to contain a column that is NOT NULL and also has a default value specified.


1.2.3 Using External Tables to Move Data

When data file copying is not selected and the data cannot be moved using direct path, the external tables mechanism is used.

The external tables mechanism creates an external table that maps to the dump file data for the database table. The SQL engine is then used to move the data. If possible, the APPEND hint is used on import to speed the copying of the data into the database. The representation of data for direct path data and external table data is the same in a dump file. Therefore, Data Pump might use the direct path mechanism at export time, but use external tables when the data is imported into the target database. Similarly, Data Pump might use external tables for the export, but use direct path for the import.
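Although the choice of mechanism is normally automatic, recent releases of expdp and impdp expose an ACCESS_METHOD parameter that can request a particular method; forcing a method succeeds only if that method is actually usable for the table in question. A sketch, assuming the hr schema and an existing dpump_dir1 directory object:

> expdp hr TABLES=employees DIRECTORY=dpump_dir1 DUMPFILE=emp_ext.dmp ACCESS_METHOD=EXTERNAL_TABLE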

In particular, Data Pump uses external tables in the following situations:

·         Loading and unloading very large tables and partitions in situations where it is advantageous to use parallel SQL capabilities

·         Loading tables with global or domain indexes defined on them, including partitioned object tables

·         Loading tables with active triggers or clustered tables

·         Loading and unloading tables with encrypted columns

·         Loading tables with fine-grained access control enabled for inserts

·         Loading a table not created by the import operation (the table exists before the import starts)

Note:

When Data Pump uses external tables as the data access mechanism, it uses the ORACLE_DATAPUMP access driver. However, it is important to understand that the files that Data Pump creates when it uses external tables are not compatible with files created when you manually create an external table using the SQL CREATE TABLE ... ORGANIZATION EXTERNAL statement.

See Also:

·         The ORACLE_DATAPUMP Access Driver

·         Oracle Database SQL Language Reference for information about using the APPEND hint


1.2.4 Using Conventional Path to Move Data

Where there are conflicting table attributes, Data Pump uses conventional path to move data.

In situations where there are conflicting table attributes, Data Pump is not able to load data into a table using either direct path or external tables. In such cases, conventional path is used, which can affect performance.


1.2.5 Using Network Link Import to Move Data

When the Import NETWORK_LINK parameter is used to specify a network link for an import operation, the direct path method is used by default.

If direct path cannot be used (for example, because one of the columns is a BFILE), then SQL is used to move the data using an INSERT SELECT statement. (Prior to Oracle Database 12c Release 2 (12.2.0.1), the default was to use the INSERT SELECT statement.) The SELECT clause retrieves the data from the remote database over the network link. The INSERT clause uses SQL to insert the data into the target database. There are no dump files involved.

When the Export NETWORK_LINK parameter is used to specify a network link for an export operation, the data from the remote database is written to dump files on the target database. (Note that to export from a read-only database, the NETWORK_LINK parameter is required.)

Because the link can identify a remotely networked database, the terms database link and network link are used interchangeably.
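A minimal sketch of a network import, assuming a TNS alias source_tns for the remote (source) database and an existing dpump_dir1 directory object on the target for the log file; no dump files are created:

SQL> CREATE DATABASE LINK source_link CONNECT TO hr IDENTIFIED BY hr_password USING 'source_tns';

> impdp hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_link TABLES=employees LOGFILE=emp_netimp.log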

Supported Link Types

The following types of database links are supported for use with Data Pump Export and Import:

·         Public fixed user

·         Public connected user

·         Public shared user (only when used by link owner)

·         Private shared user (only when used by link owner)

·         Private fixed user (only when used by link owner)

Unsupported Link Types

The following types of database links are not supported for use with Data Pump Export and Import:

·         Private connected user

·         Current user

See Also:

·         The Export NETWORK_LINK parameter for information about performing exports over a database link

·         The Import NETWORK_LINK parameter for information about performing imports over a database link

·         Oracle Database Administrator's Guide for information about creating database links and the different types of links


1.3 Using Data Pump With CDBs

Data Pump can migrate all, or portions of, a database from a non-CDB into a PDB, between PDBs within the same or different CDBs, and from a PDB into a non-CDB.

A multitenant container database (CDB) is an Oracle database that includes zero, one, or many user-created pluggable databases (PDBs). A PDB is a portable set of schemas, schema objects, and nonschema objects that appear to an Oracle Net client as a non-CDB. A non-CDB is an Oracle database that is not a CDB.

You can use Data Pump to migrate all, or portions of, a database from a non-CDB into a PDB, between PDBs within the same or different CDBs, and from a PDB into a non-CDB. In general, using Data Pump with PDBs is identical to using Data Pump with a non-CDB.

Note:

Data Pump does not support any CDB-wide operations. Data Pump issues the following warning if you are connected to the root or seed database of a CDB:

ORA-39357: Warning: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.


1.3.1 Using Data Pump to Move Databases Into a CDB

After you create an empty PDB in the CDB, you can use an Oracle Data Pump full-mode export and import operation to move data into the PDB.

The job can be performed with or without the transportable option. If you use the transportable option on a full mode export or import, it is referred to as a full transportable export/import.

When the transportable option is used, export and import use both transportable tablespace data movement and conventional data movement; the latter for those tables that reside in non-transportable tablespaces such as SYSTEM and SYSAUX. Using the transportable option can reduce the export time and especially, the import time, because table data does not need to be unloaded and reloaded and index structures in user tablespaces do not need to be re-created.

To specify a particular PDB for the export/import operation, on the Data Pump command line supply a connect identifier in the connect string when you start Data Pump. For example, to import data to a PDB named pdb1, you could enter the following on the Data Pump command line:

impdp hr@pdb1 DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp TABLES=employees

Be aware of the following requirements when using Data Pump to move data into a CDB:

·         To administer a multitenant environment, you must have the CDB_DBA role.

·         Full database exports from Oracle Database 11.2.0.2 and earlier may be imported into Oracle Database 12c (CDB or non-CDB). However, Oracle recommends the source database first be upgraded to Oracle Database 11g release 2 (11.2.0.3 or later) so that information about registered options and components is included in the export.

·         When migrating Oracle Database 11g release 2 (11.2.0.3 or later) to a CDB (or to a non-CDB) using either full database export or full transportable database export, you must set the Data Pump Export parameter VERSION=12 in order to generate a dump file that is ready for import into Oracle Database 12c. If you do not set VERSION=12, then the export file that is generated will not contain complete information about registered database options and components.

·         Network-based full transportable imports require use of the FULL=YES, TRANSPORTABLE=ALWAYS, and TRANSPORT_DATAFILES=datafile_name parameters. When the source database is Oracle Database 11g release 11.2.0.3 or later, but earlier than Oracle Database 12c Release 1 (12.1), the VERSION=12 parameter is also required.

·         File-based full transportable imports only require use of the TRANSPORT_DATAFILES=datafile_name parameter. Data Pump Import infers the presence of the TRANSPORTABLE=ALWAYS and FULL=YES parameters.

·         As of Oracle Database 12c release 2 (12.2), in a multitenant container database (CDB) environment, the default Data Pump directory object, DATA_PUMP_DIR, is defined as a unique path for each PDB in the CDB, whether or not the PATH_PREFIX clause of the CREATE PLUGGABLE DATABASE statement is defined for relative paths.


1.3.2 Using Data Pump to Move PDBs Within Or Between CDBs

Data Pump export and import operations on PDBs are identical to those on non-CDBs with the exception of how common users are handled.

If you have created a common user in a CDB, then a full database or privileged schema export of that user from within any PDB in the CDB results in a standard CREATE USER C##common name DDL statement being performed upon import. The statement will fail because of the common user prefix C## on the user name. The following error message will be returned:

ORA-65094: invalid local user or role name

In the PDB being exported, if you have created local objects in that user's schema and you want to import them, then either make sure a common user of the same name already exists in the target CDB instance or use the Data Pump Import REMAP_SCHEMA parameter on the impdp command, as follows:

REMAP_SCHEMA=C##common name:local user name
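For example, a sketch assuming a common user C##APPOWNER whose schema was exported from a PDB and is being imported into a PDB that has only a local user named APPOWNER (all names and files here are illustrative):

> impdp system@pdb2 DIRECTORY=dpump_dir1 DUMPFILE=pdb1_export.dmp REMAP_SCHEMA=C##APPOWNER:APPOWNER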

See Also:

·         Oracle Database Concepts for more information about CDBs

·         Oracle Database Administrator's Guide for information about using Data Pump to move a non-CDB into a CDB

·         Oracle Database Security Guide for more information about privileges and roles in CDBs and PDBs

·         Using the Transportable Option During Full Mode Exports

·         Using the Transportable Option During Full Mode Imports

·         Network Considerations for more information about supplying a connect identifier on the command line


1.4 Required Roles for Data Pump Export and Import Operations

Many Data Pump Export and Import operations require the user to have the DATAPUMP_EXP_FULL_DATABASE role or the DATAPUMP_IMP_FULL_DATABASE role or both.

These roles are automatically defined for Oracle databases when you run the standard scripts that are part of database creation. (Note that although the names of these roles contain the word FULL, these roles actually apply to any privileged operations in any export or import mode, not only Full mode.)

The DATAPUMP_EXP_FULL_DATABASE role affects only export operations. The DATAPUMP_IMP_FULL_DATABASE role affects import operations and operations that use the Import SQLFILE parameter. These roles allow users performing exports and imports to do the following:

·         Perform the operation outside the scope of their schema

·         Monitor jobs that were initiated by another user

·         Export objects (such as tablespace definitions) and import objects (such as directory definitions) that unprivileged users cannot reference

These are powerful roles. Database administrators should use caution when granting these roles to users.

Although the SYS schema does not have either of these roles assigned to it, all security checks performed by Data Pump that require these roles also grant access to the SYS schema.

Note:

If you receive an ORA-39181: Only Partial Data Exported Due to Fine Grain Access Control error message, then see the My Oracle Support note 422480.1 at http://support.oracle.com for information about security during an export of table data with fine-grained access control policies enabled.

See Also:

Oracle Database Security Guide for more information about predefined roles in an Oracle Database installation


1.5 What Happens During Execution of a Data Pump Job?

Data Pump jobs use a master table, a master process, and worker processes to perform the work and keep track of progress.

  • Coordination of a Job
    A master process is created to coordinate every Data Pump Export and Data Pump Import job.
  • Tracking Progress Within a Job
    While the data and metadata are being transferred, a master table is used to track the progress within a job.
  • Filtering Data and Metadata During a Job
    You can use the EXCLUDE and INCLUDE parameters to filter the types of objects that are exported and imported.
  • Transforming Metadata During a Job
    Transformations on the metadata can be done using the Data Pump Import parameters REMAP_DATAFILE, REMAP_SCHEMA, REMAP_TABLE, REMAP_TABLESPACE, TRANSFORM, and PARTITION_OPTIONS.
  • Maximizing Job Performance
    Data Pump can employ multiple worker processes, running in parallel, to increase job performance.
  • Loading and Unloading of Data
    The worker processes unload and load metadata and table data. For export, all metadata and data are unloaded in parallel, with the exception of jobs that use transportable tablespace. For import, objects must be created in the correct dependency order.


1.5.1 Coordination of a Job

A master process is created to coordinate every Data Pump Export and Data Pump Import job.

The master process controls the entire job, including communicating with the clients, creating and controlling a pool of worker processes, and performing logging operations.


1.5.2 Tracking Progress Within a Job

While the data and metadata are being transferred, a master table is used to track the progress within a job.

The master table is implemented as a user table within the database. The specific function of the master table for export and import jobs is as follows:

·         For export jobs, the master table records the location of database objects within a dump file set. Export builds and maintains the master table for the duration of the job. At the end of an export job, the content of the master table is written to a file in the dump file set.

·         For import jobs, the master table is loaded from the dump file set and is used to control the sequence of operations for locating objects that need to be imported into the target database.

The master table is created in the schema of the current user performing the export or import operation. Therefore, that user must have the CREATE TABLE system privilege and a sufficient tablespace quota for creation of the master table. The name of the master table is the same as the name of the job that created it. Therefore, you cannot explicitly give a Data Pump job the same name as a preexisting table or view.
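For example, naming the job explicitly makes the master table easy to identify. In the following sketch (schema, directory, and file names are illustrative), the master table is created as HR.HR_SCHEMA_EXP for the duration of the job and dropped when the job completes successfully:

> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_schema.dmp JOB_NAME=hr_schema_exp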

For all operations, the information in the master table is used to restart a job. (Note that transportable jobs are not restartable.)

The master table is either retained or dropped, depending on the circumstances, as follows:

·         Upon successful job completion, the master table is dropped. You can override this by setting the Data Pump KEEP_MASTER=YES parameter for the job.

·         The master table is automatically retained for jobs that do not complete successfully.

·         If a job is stopped using the STOP_JOB interactive command, then the master table is retained for use in restarting the job.

·         If a job is killed using the KILL_JOB interactive command, then the master table is dropped and the job cannot be restarted.

·         If a job terminates unexpectedly, then the master table is retained. You can delete it if you do not intend to restart the job.

·         If a job stops before it starts running (that is, before any database objects have been copied), then the master table is dropped.

See Also:

JOB_NAME for more information about how job names are formed


1.5.3 Filtering Data and Metadata During a Job

You can use the EXCLUDE and INCLUDE parameters to filter the types of objects that are exported and imported.

Within the master table, specific objects are assigned attributes such as name or owning schema. Objects also belong to a class of objects (such as TABLE, INDEX, or DIRECTORY). The class of an object is called its object type. You can use the EXCLUDE and INCLUDE parameters to restrict the types of objects that are exported and imported. The objects can be based upon the name of the object or the name of the schema that owns the object. You can also specify data-specific filters to restrict the rows that are exported and imported.
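A sketch that combines an object-type filter with a data-specific filter, using a parameter file to avoid shell quoting issues (the schema, table, and predicate are illustrative). Contents of hr_filter.par:

SCHEMAS=hr
DIRECTORY=dpump_dir1
DUMPFILE=hr_filtered.dmp
EXCLUDE=INDEX
EXCLUDE=STATISTICS
QUERY=employees:"WHERE department_id = 50"

The job is then started with:

> expdp hr PARFILE=hr_filter.par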

See Also:

·         Filtering During Export Operations

·         Filtering During Import Operations


1.5.4 Transforming Metadata During a Job

Transformations on the metadata can be done using the Data Pump Import parameters REMAP_DATAFILE, REMAP_SCHEMA, REMAP_TABLE, REMAP_TABLESPACE, TRANSFORM, and PARTITION_OPTIONS.

When you are moving data from one database to another, it is often useful to perform transformations on the metadata for remapping storage between tablespaces or redefining the owner of a particular set of objects.
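For example, a sketch that remaps the hr schema and its tablespace to test equivalents and strips the source STORAGE clauses from the generated DDL (all target names are illustrative):

> impdp system DIRECTORY=dpump_dir1 DUMPFILE=hr_schema.dmp REMAP_SCHEMA=hr:hr_test REMAP_TABLESPACE=users:users_test TRANSFORM=STORAGE:N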


1.5.5 Maximizing Job Performance

Data Pump can employ multiple worker processes, running in parallel, to increase job performance.

Use the PARALLEL parameter to set a degree of parallelism that takes maximum advantage of current conditions. For example, to limit the effect of a job on a production system, the database administrator (DBA) might want to restrict the parallelism. The degree of parallelism can be reset at any time during a job. For example, PARALLEL could be set to 2 during production hours to restrict a particular job to only two degrees of parallelism, and during nonproduction hours it could be reset to 8. The parallelism setting is enforced by the master process, which allocates work to be executed to worker processes that perform the data and metadata processing within an operation. These worker processes operate in parallel. For recommendations on setting the degree of parallelism, see the Export PARALLEL and Import PARALLEL parameter descriptions.
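A sketch of that workflow (directory, file, and job names are illustrative): start the export with a modest degree of parallelism and a dump file template, then raise it later from interactive-command mode (press Ctrl+C in the client, or attach from another terminal with ATTACH=hr_par_exp):

> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_%U.dmp PARALLEL=2 JOB_NAME=hr_par_exp

Export> PARALLEL=8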

Note:

The ability to adjust the degree of parallelism is available only in the Enterprise Edition of Oracle Database.

See Also:

·         Using PARALLEL During An Export In An Oracle RAC Environment

·         Using PARALLEL During An Import In An Oracle RAC Environment


1.5.6 Loading and Unloading of Data

The worker processes unload and load metadata and table data. For export, all metadata and data are unloaded in parallel, with the exception of jobs that use transportable tablespace. For import, objects must be created in the correct dependency order.

If there are enough objects of the same type to make use of multiple workers, then the objects will be imported by multiple worker processes. Some metadata objects have interdependencies which require one worker process to create them serially to satisfy those dependencies. Worker processes are created as needed until the number of worker processes equals the value supplied for the PARALLEL command-line parameter. The number of active worker processes can be reset throughout the life of a job. Worker processes can be started on different nodes in an Oracle Real Application Clusters (Oracle RAC) environment.

Note:

The value of PARALLEL is restricted to 1 in the Standard Edition of Oracle Database.

When a worker process is assigned the task of loading or unloading a very large table or partition, it may choose to use the external tables access method to make maximum use of parallel execution. In such a case, the worker process becomes a parallel execution coordinator. The actual loading and unloading work is divided among some number of parallel I/O execution processes (sometimes called slaves) allocated from a pool of available processes in an Oracle RAC environment.

See Also:

·         The Export PARALLEL parameter

·         The Import PARALLEL parameter


1.6 Monitoring Job Status

The Data Pump Export and Import client utilities can attach to a job in either logging mode or interactive-command mode.

In logging mode, real-time detailed status about the job is automatically displayed during job execution. The information displayed can include the job and parameter descriptions, an estimate of the amount of data to be processed, a description of the current operation or item being processed, files used during the job, any errors encountered, and the final job state (Stopped or Completed).

In interactive-command mode, job status can be displayed on request. The information displayed can include the job description and state, a description of the current operation or item being processed, files being written, and a cumulative status.

A log file can also be optionally written during the execution of a job. The log file summarizes the progress of the job, lists any errors that were encountered during execution of the job, and records the completion status of the job.

An alternative way to determine job status or to get other information about Data Pump jobs is to query the DBA_DATAPUMP_JOBS, USER_DATAPUMP_JOBS, or DBA_DATAPUMP_SESSIONS views. See Oracle Database Reference for descriptions of these views.

  • Monitoring the Progress of Executing Jobs
    Data Pump operations that transfer table data (export and import) maintain an entry in the V$SESSION_LONGOPS dynamic performance view indicating the job progress (in megabytes of table data transferred). The entry contains the estimated transfer size and is periodically updated to reflect the actual amount of data transferred.

See Also:

·         The Export STATUS parameter for information about changing the frequency of the status display in command-line Export

·         The Import STATUS parameter for information about changing the frequency of the status display in command-line Import

·         The interactive Export STATUS command

·         The interactive Import STATUS command

·         The Export LOGFILE parameter for information on how to set the file specification for an export log file

·         The Import LOGFILE parameter for information on how to set the file specification for an import log file


1.6.1 Monitoring the Progress of Executing Jobs

Data Pump operations that transfer table data (export and import) maintain an entry in the V$SESSION_LONGOPS dynamic performance view indicating the job progress (in megabytes of table data transferred). The entry contains the estimated transfer size and is periodically updated to reflect the actual amount of data transferred.

Use of the COMPRESSION, ENCRYPTION, ENCRYPTION_ALGORITHM, ENCRYPTION_MODE, ENCRYPTION_PASSWORD, QUERY, and REMAP_DATA parameters is not reflected in the determination of estimate values.

The usefulness of the estimate value for export operations depends on the type of estimation requested when the operation was initiated, and it is updated as required if exceeded by the actual transfer amount. The estimate value for import operations is exact.

The V$SESSION_LONGOPS columns that are relevant to a Data Pump job are as follows:

·         USERNAME - job owner

·         OPNAME - job name

·         TARGET_DESC - job operation

·         SOFAR - megabytes transferred thus far during the job

·         TOTALWORK - estimated number of megabytes in the job

·         UNITS - megabytes (MB)

·         MESSAGE - a formatted status message of the form:

'job_name: operation_name : nnn out of mmm MB done'
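Putting these columns together, a query along these lines (using the job name from the demo earlier as the OPNAME filter) shows how far a running job has progressed:

SQL> SELECT opname, target_desc, sofar, totalwork, units, message
  2  FROM   v$session_longops
  3  WHERE  opname = 'SYS_EXPORT_FULL_01' AND sofar <> totalwork;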


1.7 File Allocation

Understanding how Data Pump allocates and handles files will help you to use Export and Import to their fullest advantage.

Data Pump jobs manage the following types of files:

·         Dump files to contain the data and metadata that is being moved.

·         Log files to record the messages associated with an operation.

·         SQL files to record the output of a SQLFILE operation. A SQLFILE operation is started using the Data Pump Import SQLFILE parameter, and results in all of the SQL DDL that Import would execute, based on other parameters, being written to a SQL file.

·         Files specified by the DATA_FILES parameter during a transportable import.

Note:

If your Data Pump job generates errors related to Network File Storage (NFS), then consult the installation guide for your platform to determine the correct NFS mount settings.

  • Specifying Files and Adding Additional Dump Files
    For export operations, you can specify dump files at the time the job is defined, and also at a later time during the operation.
  • Default Locations for Dump, Log, and SQL Files
    Data Pump is server-based rather than client-based. Dump files, log files, and SQL files are accessed relative to server-based directory paths.
  • Using Substitution Variables
    Instead of, or in addition to, listing specific file names, you can use the DUMPFILE parameter during export operations to specify multiple dump files, by using a substitution variable in the file name. This is called a dump file template.


1.7.1 Specifying Files and Adding Additional Dump Files

For export operations, you can specify dump files at the time the job is defined, and also at a later time during the operation.

If you discover that space is running low during an export operation, then you can add additional dump files by using the Data Pump Export ADD_FILE command in interactive mode.

For import operations, all dump files must be specified at the time the job is defined.

Log files and SQL files overwrite previously existing files. For dump files, you can use the Export REUSE_DUMPFILES parameter to specify whether to overwrite a preexisting dump file.


1.7.2 Default Locations for Dump, Log, and SQL Files

Data Pump is server-based rather than client-based. Dump files, log files, and SQL files are accessed relative to server-based directory paths.

Data Pump requires that directory paths be specified as directory objects. A directory object maps a name to a directory path on the file system. DBAs must ensure that only approved users are allowed access to the directory object associated with the directory path.

The following example shows a SQL statement that creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

The reason that a directory object is required is to ensure data security and integrity. For example:

·         If you were allowed to specify a directory path location for an input file, then you might be able to read data that the server has access to, but to which you should not.

·         If you were allowed to specify a directory path location for an output file, then the server might overwrite a file that you might not normally have privileges to delete.

On UNIX and Windows operating systems, a default directory object, DATA_PUMP_DIR, is created at database creation or whenever the database dictionary is upgraded. By default, it is available only to privileged users. (The user SYSTEM has read and write access to the DATA_PUMP_DIR directory, by default.) The definition of the DATA_PUMP_DIR directory may be changed by Oracle during upgrades or when patches are applied.

If you are not a privileged user, then before you can run Data Pump Export or Data Pump Import, a directory object must be created by a database administrator (DBA) or by any user with the CREATE ANY DIRECTORY privilege.

After a directory is created, the user creating the directory object must grant READ or WRITE permission on the directory to other users. For example, to allow the Oracle database to read and write files on behalf of user hr in the directory named by dpump_dir1, the DBA must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;

Note that READ or WRITE permission to a directory object only means that the Oracle database can read or write files in the corresponding directory on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories.

Data Pump Export and Import use the following order of precedence to determine a file's location:

1.      If a directory object is specified as part of the file specification, then the location specified by that directory object is used. (The directory object must be separated from the file name by a colon.)

2.      If a directory object is not specified as part of the file specification, then the directory object named by the DIRECTORY parameter is used.

3.      If a directory object is not specified as part of the file specification, and if no directory object is named by the DIRECTORY parameter, then the value of the environment variable, DATA_PUMP_DIR, is used. This environment variable is defined using operating system commands on the client system where the Data Pump Export and Import utilities are run. The value assigned to this client-based environment variable must be the name of a server-based directory object, which must first be created on the server system by a DBA. For example, the following SQL statement creates a directory object on the server system. The name of the directory object is DUMP_FILES1, and it is located at '/usr/apps/dumpfiles1'.

SQL> CREATE DIRECTORY DUMP_FILES1 AS '/usr/apps/dumpfiles1';

Then, a user on a UNIX-based client system using csh can assign the value DUMP_FILES1 to the environment variable DATA_PUMP_DIR. The DIRECTORY parameter can then be omitted from the command line. The dump file employees.dmp, and the log file export.log, are written to '/usr/apps/dumpfiles1'.

%setenv DATA_PUMP_DIR DUMP_FILES1
%expdp hr TABLES=employees DUMPFILE=employees.dmp

4.      If none of the previous three conditions yields a directory object and you are a privileged user, then Data Pump attempts to use the value of the default server-based directory object, DATA_PUMP_DIR. This directory object is automatically created at database creation or when the database dictionary is upgraded. You can use the following SQL query to see the path definition for DATA_PUMP_DIR:

SQL> SELECT directory_name, directory_path FROM dba_directories
  2  WHERE directory_name='DATA_PUMP_DIR';

If you are not a privileged user, then access to the DATA_PUMP_DIR directory object must have previously been granted to you by a DBA.

Do not confuse the default DATA_PUMP_DIR directory object with the client-based environment variable of the same name.


1.7.2.1 Oracle RAC Considerations

Considerations to keep in mind when working in an Oracle RAC environment.

·         To use Data Pump or external tables in an Oracle RAC configuration, you must ensure that the directory object path is on a cluster-wide file system.

The directory object must point to shared physical storage that is visible to, and accessible from, all instances where Data Pump and/or external tables processes may run.

·         The default Data Pump behavior is that worker processes can run on any instance in an Oracle RAC configuration. Therefore, workers on those Oracle RAC instances must have physical access to the location defined by the directory object, such as shared storage media. If the configuration does not have shared storage for this purpose, but you still require parallelism, then you can use the CLUSTER=NO parameter to constrain all worker processes to the instance where the Data Pump job was started.

·         Under certain circumstances, Data Pump uses parallel query slaves to load or unload data. In an Oracle RAC environment, Data Pump does not control where these slaves run, and they may run on other instances in the Oracle RAC, regardless of what is specified for CLUSTER and SERVICE_NAME for the Data Pump job. Controls for parallel query operations are independent of Data Pump. When parallel query slaves run on other instances as part of a Data Pump job, they also require access to the physical storage of the dump file set.


1.7.2.2 Using Directory Objects When Oracle Automatic Storage Management Is Enabled

You can use Data Pump Export or Import with Oracle Automatic Storage Management (Oracle ASM) enabled. You must define the directory object used for the dump file so that the Oracle ASM disk group name is used (instead of an operating system directory path).

A separate directory object, which points to an operating system directory path, should be used for the log file. For example, you would create a directory object for the Oracle ASM dump file as follows:

SQL> CREATE or REPLACE DIRECTORY dpump_dir as '+DATAFILES/';

Then you would create a separate directory object for the log file:

SQL> CREATE or REPLACE DIRECTORY dpump_log as '/homedir/user1/';

To enable user hr to have access to these directory objects, you would assign the necessary privileges, for example:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir TO hr;
SQL> GRANT READ, WRITE ON DIRECTORY dpump_log TO hr;

You would then use the following Data Pump Export command (you will be prompted for a password):

> expdp hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp LOGFILE=dpump_log:hr.log

Note:

If you simply want to copy Data Pump dump files between ASM and disk directories, you can use the DBMS_FILE_TRANSFER PL/SQL package.

See Also:

·         The Export DIRECTORY parameter

·         The Import DIRECTORY parameter

·         Oracle Database SQL Language Reference for information about the CREATE DIRECTORY command

·         Oracle Automatic Storage Management Administrator's Guide for more information about Oracle ASM

·         Oracle Database PL/SQL Packages and Types Reference for more information about the DBMS_FILE_TRANSFER PL/SQL package


1.7.2.3 The DATA_PUMP_DIR Directory Object and Pluggable Databases

The default Data Pump directory object, DATA_PUMP_DIR, is defined as a unique path for each PDB in the CDB.

As of Oracle Database 12c release 2 (12.2), in a multitenant container database (CDB) environment, the default Data Pump directory object, DATA_PUMP_DIR, is defined as a unique path for each PDB in the CDB, whether or not the PATH_PREFIX clause of the CREATE PLUGGABLE DATABASE statement is defined for relative paths.


1.7.3 Using Substitution Variables

Instead of, or in addition to, listing specific file names, you can use the DUMPFILE parameter during export operations to specify multiple dump files, by using a substitution variable in the file name. This is called a dump file template.

Note:

This section uses %U to explain how Data Pump uses substitution variables. For information about other available substitution variables, see the Data Pump Export DUMPFILE parameter and the Data Pump Import DUMPFILE parameter.

New dump files are created as they are needed. For example, if you are using the substitution variable %U, then new dump files are created as needed beginning with 01 for %U, then 02, 03, and so on. Enough dump files are created to allow all processes specified by the current setting of the PARALLEL parameter to be active. If one of the dump files becomes full because its size has reached the maximum size specified by the FILESIZE parameter, then it is closed and a new dump file (with a new generated name) is created to take its place.

If multiple dump file templates are provided, they are used to generate dump files in a round-robin fashion. For example, if expa%U, expb%U, and expc%U were all specified for a job having a parallelism of 6, then the initial dump files created would be expa01.dmp, expb01.dmp, expc01.dmp, expa02.dmp, expb02.dmp, and expc02.dmp.
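A sketch of a parallel export that uses two dump file templates and caps each file at 2 GB (directory and file names are illustrative); Data Pump generates expa01.dmp, expb01.dmp, expa02.dmp, and so on, as they are needed:

> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=expa%U.dmp,expb%U.dmp FILESIZE=2G PARALLEL=4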

For import and SQLFILE operations, if dump file specifications expa%U, expb%U, and expc%U are specified, then the operation begins by attempting to open the dump files expa01.dmp, expb01.dmp, and expc01.dmp. It is possible for the master table to span multiple dump files, so until all pieces of the master table are found, dump files continue to be opened by incrementing the substitution variable and looking up the new file names (for example, expa02.dmp, expb02.dmp, and expc02.dmp). If a dump file does not exist, then the operation stops incrementing the substitution variable for the dump file specification that was in error. For example, if expb01.dmp and expb02.dmp are found but expb03.dmp is not found, then no more files are searched for using the expb%U specification. Once the entire master table is found, it is used to determine whether all dump files in the dump file set have been located.


1.8 Exporting and Importing Between Different Database Releases

Data Pump can be used to migrate all or any portion of a database between different releases of the database software.

The Data Pump Export VERSION parameter is typically used to do this. This will generate a Data Pump dump file set compatible with the specified version.

The default value for VERSION is COMPATIBLE, indicating that exported database object definitions will be compatible with the release specified for the COMPATIBLE initialization parameter.

In an upgrade situation, when the target release of a Data Pump-based migration is higher than the source, the VERSION parameter typically does not have to be specified because all objects in the source database will be compatible with the higher target release. An exception is when an entire Oracle Database 11g (release 11.2.0.3 or higher) is exported in preparation for importing into Oracle Database 12c Release 1 (12.1.0.1) or later. In this case, explicitly specify VERSION=12 in conjunction with FULL=YES in order to include a complete set of Oracle internal component metadata.

In a downgrade situation, when the target release of a Data Pump-based migration is lower than the source, the VERSION parameter should be explicitly specified to be the same version as the target. An exception is when the target release version is the same as the value of the COMPATIBLE initialization parameter on the source system; then VERSION does not need to be specified. In general however, Data Pump import cannot read dump file sets created by an Oracle release that is newer than the current release unless the VERSION parameter is explicitly specified.
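For example, a sketch of a downgrade-style export from a 19c source that is intended for import into a 12.2 target (schema and file names are illustrative):

> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_for_122.dmp VERSION=12.2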

Keep the following information in mind when you are exporting and importing between different database releases:

·         On a Data Pump export, if you specify a database version that is older than the current database version, then a dump file set is created that you can import into that older version of the database. For example, if you are running Oracle Database 12c Release 1 (12.1.0.2) and specify VERSION=11.2 on an export, then the dump file set that is created can be imported into an Oracle 11.2 database.

Note:

Note the following about importing into earlier releases:

·         Database privileges that are valid only in Oracle Database 12c Release 1 (12.1.0.2) and later (for example, the READ privilege on tables, views, materialized views, and synonyms) cannot be imported into Oracle Database 12c Release 1 (12.1.0.1) or earlier. If an attempt is made to do so, then Import reports it as an error and continues the import operation.

·         When you export to a release earlier than Oracle Database 12c Release 2 (12.2.0.1), Data Pump does not filter out object names longer than 30 bytes. The objects are exported and at import time, an error is returned if you attempt to create an object with a name longer than 30 bytes.

·         If you specify a database release that is older than the current database release, then certain features and data types may be unavailable. For example, specifying VERSION=10.1 causes an error if data compression is also specified for the job because compression was not supported in Oracle Database 10g release 1 (10.1). Another example would be if a user-defined type or Oracle-supplied type in the source database is a later version than the type in the target database, then it will not be loaded because it does not match any version of the type in the target database.

·         Data Pump Import can always read Data Pump dump file sets created by older releases of the database.

·         When operating across a network link, Data Pump requires that the source and target databases differ by no more than two versions. For example, if one database is Oracle Database 12c, then the other database must be 12c, 11g, or 10g. Note that Data Pump checks only the major version number (for example, 10g, 11g, 12c), not specific release numbers (for example, 12.2, 12.1, 11.1, 11.2, 10.1, or 10.2).

·         Importing Oracle Database 11g dump files that contain table statistics into Oracle Database 12c Release 1 (12.1) or later may result in an Oracle ORA-39346 error. This is because Oracle Database 11g dump files contain table statistics as metadata, whereas Oracle Database 12c Release 1 (12.1) and later expect table statistics to be presented as table data. The workaround is to ignore the error and after the import operation completes, regather table statistics.

See Also:

·         The Export VERSION parameter

·         The Import VERSION parameter

·         Oracle Database Security Guide for more information about the READ and READ ANY TABLE privileges


1.9 SecureFiles LOB Considerations

When you use Data Pump Export to export SecureFiles LOBs, the resulting behavior depends on several things, including the value of the Export VERSION parameter, whether ContentType is present, and whether the LOB is archived and data is cached.

The following scenarios cover different combinations of these variables:

·         If a table contains SecureFiles LOBs with ContentType and the Export VERSION parameter is set to a value earlier than 11.2.0.0.0, then the ContentType is not exported.

·         If a table contains SecureFiles LOBs with ContentType and the Export VERSION parameter is set to a value of 11.2.0.0.0 or later, then the ContentType is exported and restored on a subsequent import.

·         If a table contains a SecureFiles LOB that is currently archived and the data is cached, and the Export VERSION parameter is set to a value earlier than 11.2.0.0.0, then the SecureFiles LOB data is exported and the archive metadata is dropped. In this scenario, if VERSION is set to 11.1 or later, then the SecureFiles LOB becomes a vanilla SecureFiles LOB. But if VERSION is set to a value earlier than 11.1, then the SecureFiles LOB becomes a BasicFiles LOB.

·         If a table contains a SecureFiles LOB that is currently archived but the data is not cached, and the Export VERSION parameter is set to a value earlier than 11.2.0.0.0, then an ORA-45001 error is returned.

·         If a table contains a SecureFiles LOB that is currently archived and the data is cached, and the Export VERSION parameter is set to a value of 11.2.0.0.0 or later, then both the cached data and the archive metadata are exported.
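As a rough illustration (the table name and dump file are placeholders), exporting with VERSION set to 11.2 or later should preserve the ContentType and archive metadata described above:

[oracle@dev19c ~]$ expdp system/oracle@dbwrpdb tables=scott.docs directory=DATA_PUMP_DIR dumpfile=docs.dmp version=11.2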

See Also:

Oracle Database SecureFiles and Large Objects Developer's Guide for more information about SecureFiles


1.10 Data Pump Exit Codes

Data Pump reports the results of export and import operations in a log file and in a process exit code.

Oracle Data Pump provides the results of export and import operations immediately upon completion. In addition to recording the results in a log file, Data Pump may also report the outcome in a process exit code. This allows you to check the outcome of a Data Pump job from the command line or a script.

Table 1-1 describes the Data Pump exit codes for Linux, Unix, and Windows operating systems.

Table 1-1 Data Pump Exit Codes

Exit Code: EX_SUCC (0)

Meaning: The export or import job completed successfully. No errors are displayed to the output device or recorded in the log file, if there is one.

Exit Code: EX_SUCC_ERR (5)

Meaning: The export or import job completed successfully, but errors were encountered during the job. The errors are displayed to the output device and recorded in the log file, if there is one.

Exit Code: EX_FAIL (1)

Meaning: The export or import job encountered one or more fatal errors, including the following:

·         Errors on the command line or in command syntax

·         Oracle database errors from which export or import cannot recover

·         Operating system errors (such as malloc failures)

·         Invalid parameter values that prevent the job from starting (for example, an invalid directory object specified in the DIRECTORY parameter)

A fatal error is displayed to the output device but may not be recorded in the log file. Whether it is recorded in the log file can depend on several factors, including:

·         Was a log file specified at the start of the job?

·         Did the processing of the job proceed far enough for a log file to be opened?
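Putting those exit codes to use from a script, here is a minimal shell sketch (the connect string and dump file reuse the demo values from earlier and are only examples):

expdp system/oracle@dbwrpdb directory=DATA_PUMP_DIR dumpfile=uat1.dmp full=yes
rc=$?
# 0 = EX_SUCC, 5 = EX_SUCC_ERR, anything else = EX_FAIL
if [ "$rc" -eq 0 ]; then
  echo "Export completed with no errors"
elif [ "$rc" -eq 5 ]; then
  echo "Export completed, but errors were logged"
else
  echo "Export failed with exit code $rc"
fi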


1.11 Auditing Data Pump Jobs

Perform auditing on Data Pump jobs to monitor and record specific user database actions.

You can perform auditing on Data Pump jobs to monitor and record specific user database actions. Data Pump uses unified auditing, in which all audit records are centralized in one place.

To set up unified auditing, you create a unified audit policy or alter an existing policy. An audit policy is a named group of audit settings that enables you to audit a particular aspect of user behavior in the database. To create the policy, use the SQL CREATE AUDIT POLICY statement.

After creating the audit policy, use the AUDIT and NOAUDIT SQL statements to, respectively, enable and disable the policy.
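For example, a minimal sketch (the policy name audit_dp_exp is made up) that audits Data Pump export operations run by SYSTEM:

SQL> CREATE AUDIT POLICY audit_dp_exp ACTIONS COMPONENT=DATAPUMP EXPORT;
SQL> AUDIT POLICY audit_dp_exp BY system;

SQL> -- to stop auditing later
SQL> NOAUDIT POLICY audit_dp_exp BY system;

The resulting audit records can then be reviewed in the UNIFIED_AUDIT_TRAIL view.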

See Also:

·         Oracle Database SQL Language Reference for more information about the SQL CREATE AUDIT POLICY, ALTER AUDIT POLICY, AUDIT, and NOAUDIT statements

·         Oracle Database Security Guide for more information about using auditing in an Oracle database


1.12 How Does Data Pump Handle Timestamp Data?

This section describes factors that can affect successful completion of export and import jobs that involve the timestamp data types TIMESTAMP WITH TIMEZONE and TIMESTAMP WITH LOCAL TIMEZONE.

Note:

The information in this section applies only to Oracle Data Pump running on Oracle Database 12c and later.


1.12.1 TIMESTAMP WITH TIME ZONE Restrictions

Export and import jobs that have TIMESTAMP WITH TIME ZONE data are restricted.

Successful job completion can depend on:

·         The version of the Oracle Database time zone files on the source and target databases.

·         The export/import mode and whether the Data Pump version being used supports TIMESTAMP WITH TIME ZONE data. (Data Pump 11.2.0.1 and later provide support for TIMESTAMP WITH TIME ZONE data.)

To identify the time zone file version of a database, you can execute the following SQL statement:

SQL> SELECT VERSION FROM V$TIMEZONE_FILE;

See Also:

Oracle Database Globalization Support Guide for more information about time zone files


1.12.1.1 Time Zone File Versions on the Source and Target

Successful job completion can depend on whether the source and target time zone file versions match.

·         If the Oracle Database time zone file version is the same on the source and target databases, then conversion of TIMESTAMP WITH TIME ZONE data is not necessary. The export/import job should complete successfully.

The exception to this is a transportable tablespace or transportable table export performed using a Data Pump release earlier than 11.2.0.1. In that case, tables in the dump file that have TIMESTAMP WITH TIME ZONE columns are not created on import even though the time zone file version is the same on the source and target.

·         If the source time zone file version is not available on the target database, then the job fails. This can happen when the source database has had its time zone file updated to a later version but the target database has not. For example, if the export is done on Oracle Database 11g release 2 (11.2.0.2) with time zone file version 17, and the import is done on 11.2.0.2 with only time zone file version 16 available, then the job fails.


1.12.1.2 Data Pump Support for TIMESTAMP WITH TIME ZONE Data

Data Pump supports TIMESTAMP WITH TIME ZONE data during different export and import modes, such as non-transportable mode, transportable tablespace and transportable table mode, and full transportable mode.

This section describes Data Pump support for TIMESTAMP WITH TIME ZONE data during different export and import modes when versions of the Oracle Database time zone file are different on the source and target databases.

Non-transportable Modes

·         If the dump file is created with a Data Pump version that supports TIMESTAMP WITH TIME ZONE data (11.2.0.1 or later), then the time zone file version of the export system is recorded in the dump file. Data Pump uses that information to determine whether data conversion is necessary. If the target database knows about the source time zone version, but is actually using a later version, then the data is converted to the later version. TIMESTAMP WITH TIME ZONE data cannot be downgraded, so if you attempt to import to a target that is using an earlier version of the time zone file than the source used, the import fails.

·         If the dump file is created with a Data Pump version prior to Oracle Database 11g release 2 (11.2.0.1), then TIMESTAMP WITH TIME ZONE data is not supported, so no conversion is done and corruption may occur.

Transportable Tablespace and Transportable Table Modes

·         In transportable tablespace and transportable table modes, if the source and target have different time zone file versions, then tables with TIMESTAMP WITH TIME ZONE columns are not created. A warning is displayed at the beginning of the job showing the source and target database time zone file versions. A message is also displayed for each table not created. This is true even if the Data Pump version used to create the dump file supports TIMESTAMP WITH TIME ZONE data. (Release 11.2.0.1 and later support TIMESTAMP WITH TIME ZONE data.)

·         If the source is earlier than Oracle Database 11g release 2 (11.2.0.1), then the time zone file version must be the same on the source and target database for all transportable jobs regardless of whether the transportable set uses TIMESTAMP WITH TIME ZONE columns.

Full Transportable Mode

Full transportable exports and imports are supported when the source database is at least Oracle Database 11g release 2 (11.2.0.3) and the target is Oracle Database 12c release 1 (12.1) or later.

Data Pump 11.2.0.1 and later provide support for TIMESTAMP WITH TIME ZONE data. Therefore, in full transportable operations, tables with TIMESTAMP WITH TIME ZONE columns are created. If the source and target database have different time zone file versions, then TIMESTAMP WITH TIME ZONE columns from the source are converted to the time zone file version of the target.
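As a rough sketch (dump file and log file names are placeholders), a full transportable export from an 11.2.0.3 source aimed at a 12c or later target could look like this; the VERSION=12 setting is what tells an 11g source to produce a full transportable dump for a 12c target:

[oracle@dev19c ~]$ expdp system/oracle full=y transportable=always version=12 \
directory=DATA_PUMP_DIR dumpfile=full_tts.dmp logfile=full_tts.log

On a 12c or later source, the VERSION parameter can normally be omitted.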

See Also:

·         Oracle Database Administrator's Guide for more information about transportable tablespaces

·         Using the Transportable Option During Full Mode Exports for more information about full transportable exports

·         Using the Transportable Option During Full Mode Imports for more information about full transportable imports


1.12.2 TIMESTAMP WITH LOCAL TIME ZONE Restrictions

Moving tables using a transportable mode is restricted.

If a table is moved using a transportable mode (transportable table, transportable tablespace, or full transportable), and the following conditions exist, then a warning is issued and the table is not created:

·         The source and target databases have different database time zones.

·         The table contains TIMESTAMP WITH LOCAL TIME ZONE data types.

To successfully move a table that was not created because of these conditions, use a non-transportable export and import mode.
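Before attempting the move, the database time zone of each side can be compared with a quick query, run on both the source and the target:

SQL> SELECT DBTIMEZONE FROM DUAL;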


1.13 Character Set and Globalization Support Considerations

Globalization support behavior of Data Pump Export and Import.

The following sections describe the globalization support behavior of Data Pump Export and Import with respect to character set conversion of user data and data definition language (DDL).


1.13.1 Data Definition Language (DDL)

The Export utility writes dump files using the database character set of the export system.

When the dump file is imported, a character set conversion is required for DDL only if the database character set of the import system is different from the database character set of the export system.

To minimize data loss due to character set conversions, ensure that the import database character set is a superset of the export database character set.
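One way to compare the two character sets is to query NLS_DATABASE_PARAMETERS on both the export and the import database:

SQL> SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';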


1.13.2 Single-Byte Character Sets and Export and Import

Ensure that the export database and the import database use the same character set.

If the system on which the import occurs uses a 7-bit character set, and you import an 8-bit character set dump file, then some 8-bit characters may be converted to 7-bit equivalents. An indication that this has happened is when accented characters lose the accent mark.

To avoid this unwanted conversion, ensure that the export database and the import database use the same character set.


1.13.3 Multibyte Character Sets and Export and Import

During character set conversion, any characters in the export file that have no equivalent in the import database character set are replaced with a default character. The import database character set defines the default character.

If the import system has to use replacement characters while converting DDL, then a warning message is displayed and the system attempts to load the converted DDL.

If the import system has to use replacement characters while converting user data, then the default behavior is to load the converted data. However, it is possible to instruct the import system to reject rows of user data that were converted using replacement characters. See the Import DATA_OPTIONS parameter for details.
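A hedged sketch of that option, assuming the REJECT_ROWS_WITH_REPL_CHAR value of DATA_OPTIONS and reusing the earlier demo directory and dump file names:

[oracle@dev19c ~]$ impdp system/oracle@dbwrpdb directory=DATA_PUMP_DIR dumpfile=uat1.dmp \
data_options=reject_rows_with_repl_char

Rows that would need a replacement character are then rejected instead of being loaded.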

To guarantee 100% conversion, the import database character set must be a superset (or equivalent) of the character set used to generate the export file.

Caution:

When the database character set of the export system differs from that of the import system, the import system displays informational messages at the start of the job that show what the database character set is.

When the import database character set is not a superset of the character set used to generate the export file, the import system displays a warning that possible data loss may occur due to character set conversions.


1.14 Oracle Data Pump Behavior with Data-Bound Collation

Oracle Data Pump supports data-bound collation (DBC).

Data Pump Export always includes all available collation metadata in the dump file it creates. This includes:

·         Current default collations of exported users' schemas

·         Current default collations of exported tables, views, materialized views and PL/SQL units (including user-defined types)

·         Declared collations of all table and cluster character data type columns

When importing a dump file exported from an Oracle Database 12c Release 2 (12.2) database, Data Pump Import's behavior depends on the effective value of the Data Pump VERSION parameter at the time of import and on whether the data-bound collation (DBC) feature is enabled in the target database. The effective value of the VERSION parameter is determined by how it is specified. The parameter may be specified as follows:

·         VERSION=n, which means the effective value is the specific version number n, for example, VERSION=12.2

·         VERSION=LATEST, which means the effective value is the currently running database version

·         VERSION=COMPATIBLE, which means the effective value is the same as the value of the database initialization parameter COMPATIBLE. This is also true if no value is specified for VERSION.

For the DBC feature to be enabled in a database, the initialization parameter COMPATIBLE must be set to 12.2 or higher and the initialization parameter MAX_STRING_SIZE must be set to EXTENDED.
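Both prerequisites can be checked on the target database with:

SQL> show parameter compatible
SQL> show parameter max_string_size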

If the effective value of the Data Pump Import VERSION parameter is 12.2 and DBC is enabled in the target database, then Data Pump Import generates DDL statements with collation clauses referencing collation metadata from the dump file. Exported objects are created with the original collation metadata that they had in the source database. 

No collation syntax is generated if DBC is disabled or if the Data Pump Import VERSION parameter is set to a value lower than 12.2.
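For example (the dump file name reuses the earlier demo value), forcing the effective value to 12.2 so that collation clauses are generated on import, assuming DBC is enabled in the target database:

[oracle@dev19c ~]$ impdp system/oracle@dbwrpdb directory=DATA_PUMP_DIR dumpfile=uat1.dmp version=12.2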

 
