This chapter describes how to use the Import utility, which reads an export file into an Oracle database.
Import reads only export files created by Export. For information on how to export a database, see Chapter 1, "Export". To load data from other operating system files, see the discussion of SQL*Loader in Part II of this manual.
This chapter discusses Import concepts and modes, how to invoke and use the utility, its parameters, table-level and partition-level import, example import sessions, and error handling.
Note: If you are working with the Advanced Replication Option, refer to Oracle8 Replication, Appendix B, "Migration and Compatibility." If you are using Trusted Oracle, see the Trusted Oracle documentation for information about using the Import utility in that environment.
The basic concept behind Import is simple. Import inserts the data objects extracted from one Oracle database by the Export utility (and stored in an Export dump file) into another Oracle database. An Export dump file can be read only by Import. See Chapter 1, "Export" for more information about Oracle's Export utility.
Import reads the object definitions and table data that the Export utility extracted from an Oracle database and stored in an Oracle binary-format Export dump file located typically on disk or tape.
Such files are typically transferred by FTP or physically transported (in the case of tape) to a different site. They can then be used with the Import utility to transfer data between databases on machines not connected through a network, or as supplemental backups in addition to normal backup procedures.
The Export and Import utilities can also facilitate certain aspects of Oracle Advanced Replication functionality like offline instantiation. See Oracle8 Replication for more information.
Note that Export dump files can be read only by the Oracle utility Import. If you need to load data from ASCII fixed-format or delimited files, see the discussion of SQL*Loader in Part II of this manual.
Figure 2-1 illustrates the process of importing from an Export dump file.
Table objects are imported as they are read from the export file, in the following order: first, new tables are created; then, data is imported and indexes are built; then, triggers are imported, integrity constraints are enabled on the new tables, and any bitmap indexes are built. This sequence prevents data from being rejected because of the order in which tables are imported. It also prevents redundant triggers from firing twice on the same data (once when the data is originally inserted and again during the import).
For example, if the EMP table has a referential integrity constraint on the DEPT table and the EMP table is imported first, all EMP rows that reference departments that have not yet been imported into DEPT would be rejected if the constraints were enabled.
When data is imported into existing tables, however, the order of import can still produce referential integrity failures. In the situation just given, if the EMP table already existed and referential integrity constraints were in force, many rows could be rejected.
A similar situation occurs when a referential integrity constraint on a table references itself. For example, if SCOTT's manager in the EMP table is DRAKE, and DRAKE's row has not yet been loaded, SCOTT's row will fail, even though it would be valid at the end of the import.
Suggestion: For the reasons mentioned previously, it is a good idea to disable referential constraints when importing into an existing table. You can then re-enable the constraints after the import is completed.
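For example, a referential constraint can be disabled before the import and re-enabled afterward. In this sketch, the constraint name emp_mgr_fk is hypothetical:

ALTER TABLE emp DISABLE CONSTRAINT emp_mgr_fk;

(perform the import)

ALTER TABLE emp ENABLE CONSTRAINT emp_mgr_fk;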
Import can read export files created by Export Version 5.1.22 and later.
The Import utility provides three modes of import: table mode, user mode, and full database mode. The objects that are imported depend on the Import mode you choose and the mode that was used during the export. All users can perform table mode and user mode imports; only a user with the IMP_FULL_DATABASE role (a privileged user) can also perform a full database mode import.
See "Import Parameters" on page 2-16 for information on specifying each mode.
A user with the IMP_FULL_DATABASE role must specify one of these options or specify an incremental import. Otherwise, an error results. If a user without the IMP_FULL_DATABASE role fails to specify one of these options, a user-level import is performed.
Table 1-1 on page 1-4 shows the objects that are exported and imported in each mode.
You can import tables and partitions at the table level or at the partition level, as described below.
You must set the parameter IGNORE = Y when loading data into an existing table. See "IGNORE" on page 2-23 for information on the parameter IGNORE.
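For example, the following command (with an illustrative file name) loads rows from an export file into the existing EMP table:

imp scott/tiger FILE=expdat.dmp TABLES=(emp) IGNORE=Y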
For each specified table, table-level Import imports all of the table's rows.
If the table does not exist, and if the exported table was partitioned, table-level Import creates a partitioned table. If the table creation is successful, table-level Import reads all of the source data from the export file into the target table. After Import, the target table contains the partition definitions of all of the partitions associated with the source table in the Export file. This operation ensures that the physical and logical attributes (including partition bounds) of the source partitions are maintained on Import.
Partition-level Import imports a set of partitions from a source table into a target table.
Partition-level Export and Import provide a way to merge partitions in the same table, even though SQL does not explicitly support merging partitions. A DBA can use partition-level Import to merge a table partition into the next highest partition on the same system. See "Example 2: Merging Partitions of a Table" on page 2-38 for an example of merging partitions.
Partition-level Export and Import do not provide for splitting a partition. For information on how to split a partition, refer to the Oracle8 Administrator's Guide. For information about Import, see "Using Table-Level and Partition-Level Export and Import" on page 2-30.
This section describes what you need to do before you begin importing and how to invoke and use the Import utility.
To use Import, you must run either the script CATEXP.SQL or CATALOG.SQL (which runs CATEXP.SQL) after the database has been created.
Additional Information: The actual names of the script files depend on your operating system. The script file names and the method for running them are described in your Oracle operating system-specific documentation.
CATEXP.SQL or CATALOG.SQL needs to be run only once on a database. You do not need to run either script again before performing future import operations. Both scripts perform the tasks necessary to prepare the database for Import.
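For example, on many platforms the scripts reside in the rdbms/admin directory under the Oracle home and are run from SQL*Plus or Server Manager by a user with DBA privileges; the path shown here is typical rather than universal:

SQL> @$ORACLE_HOME/rdbms/admin/catalog.sql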
You can invoke Import in three ways:
imp username/password PARFILE=filename
PARFILE is a file containing the Import parameters you typically use. If you use different parameters for different databases, you can have multiple parameter files. This is the recommended method. See "The Parameter File" on page 2-11 for information on how to use the parameter file.
imp username/password <parameters>
replacing <parameters> with various parameters you intend to use. Note that the number of parameters cannot exceed the maximum length of a command line on your operating system.
imp username/password

to begin an interactive session, and let Import prompt you for the information it needs. Note that the interactive method does not provide as much functionality as the parameter-driven method. It exists for backward compatibility.
You can use a combination of the first and second options. That is, you can list parameters both in the parameter file and on the command line. In fact, you can specify the same parameter in both places. The position of the PARFILE parameter and other parameters on the command line determines which parameters override others. For example, assume the parameter file params.dat contains the parameter INDEXES=Y and Import is invoked with the following line:

imp system/manager PARFILE=params.dat INDEXES=N

In this case, because INDEXES=N occurs after PARFILE=params.dat, INDEXES=N overrides the value of the INDEXES parameter in the parameter file.
You can specify the username and password in the parameter file, although, for security reasons, this is not recommended.
If you omit the username and password, Import prompts you for them.
See "Import Parameters" on page 2-16 for a description of each parameter.
Typically, you should not need to invoke Import as SYSDBA. However, if you are using Tablespace Point-in-Time Recovery (TSPITR), which enables you to quickly recover one or more tablespaces to a point in time different from that of the rest of the database, you need to know how to do so.
Attention: It is recommended that you read the information about TSPITR in the Oracle8 Backup and Recovery Guide, "POINT_IN_TIME_RECOVER" on page 2-26, and "RECOVERY_TABLESPACES" on page 1-19 before continuing with this section.
To invoke Import as SYSDBA, use the following syntax:
imp username/password AS SYSDBA
or, optionally
imp username/password@instance AS SYSDBA
Note: Because the string "AS SYSDBA" contains a blank, most operating systems require that the entire string 'username/password AS SYSDBA' be placed in quotation marks or marked as a literal by some method. Note that some operating systems also require that quotation marks on the command line be escaped. See your operating system-specific Oracle documentation for information about special and reserved characters on your system.
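For example, on many UNIX shells the entire string can be enclosed in single quotation marks so that the blanks are preserved (exact quoting rules vary by platform):

imp 'system/manager AS SYSDBA'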
Note that if either the username or password is omitted, Import will prompt you for it.
If you use the Import interactive mode, you will not be prompted to specify whether you want to connect as SYSDBA or @instance. You must specify "AS SYSDBA" and/or "@instance" with the username.
So the response to the Import interactive username prompt could be, for example:

username/password@instance as sysdba
username/password@instance
username/password as sysdba
username/password
username@instance as sysdba (prompts for password)
username@instance (prompts for password)
username (prompts for password)
username as sysdba (prompts for password)
/ as sysdba (no prompt for password, OS authentication is used)
/ (no prompt for password, OS authentication is used)
/@instance as sysdba (no prompt for password, OS authentication is used)
/@instance (no prompt for password, OS authentication is used)
Note: If you omit the password and allow Import to prompt you for it, you cannot specify the @instance string as well. You can specify @instance only with username.
Import provides online help. Enter imp help=y on the command line to see a help printout like the one shown below.
> imp help=y

Import: Release 8.0.4.0.0 - Production on Fri Nov 07 12:14:11 1997
(c) Copyright 1997 Oracle Corporation. All rights reserved.
You can let Import prompt you for parameters by entering the IMP command followed by your username/password:
Example: IMP SCOTT/TIGER
Or, you can control how Import runs by entering the IMP command followed by various arguments. To specify parameters, you use keywords:
Format:  IMP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: IMP SCOTT/TIGER IGNORE=Y TABLES=(EMP,DEPT) FULL=N
         or TABLES=(T1:P1,T1:P2), if T1 is partitioned table
USERID must be the first parameter on the command line.
Keyword     Description (Default)          Keyword       Description (Default)
--------------------------------------------------------------------------------
USERID      username/password              FULL          import entire file (N)
BUFFER      size of data buffer            FROMUSER      list of owner usernames
FILE        input file (EXPDAT.DMP)        TOUSER        list of usernames
SHOW        just list file contents (N)    TABLES        list of table names
IGNORE      ignore create errors (N)       RECORDLENGTH  length of IO record
GRANTS      import grants (Y)              INCTYPE       incremental import type
INDEXES     import indexes (Y)             COMMIT        commit array insert (N)
ROWS        import data rows (Y)           PARFILE       parameter filename
LOG         log file of screen output      DESTROY       overwrite tablespace data file (N)

INDEXFILE              write table/index info to specified file
CHARSET                character set of export file (NLS_LANG)
POINT_IN_TIME_RECOVER  Tablespace Point-in-time Recovery (N)
SKIP_UNUSABLE_INDEXES  skip maintenance of unusable indexes (N)
ANALYZE                execute ANALYZE statements in dump file (Y)
FEEDBACK               display progress every x rows (0)
VOLSIZE                number of bytes in file on each volume of a file on tape

Import terminated successfully without warnings.
The parameter file allows you to specify Import parameters in a file where they can be easily modified or reused. Create a parameter file using any flat file text editor. The command line option PARFILE=<filename> tells Import to read the parameters from the specified file rather than from the command line. For example:
imp parfile=filename
or
imp username/password parfile=filename
The syntax for parameter file specifications is one of the following:
KEYWORD=value
KEYWORD=(value)
KEYWORD=(value1, value2, ...)
You can add comments to the parameter file by preceding them with the pound (#) sign. All characters to the right of the pound (#) sign are ignored.
The following is an example of a partial parameter file listing:
FULL=Y
FILE=DBA.DMP
GRANTS=Y
INDEXES=Y # import all indexes
See "Import Parameters" on page 2-16 for a description of each parameter.
This section describes the privileges you need to use the Import utility and to import objects into your own and others' schemas.
To use Import, you need the privilege CREATE SESSION to log on to the Oracle8 server. This privilege belongs to the CONNECT role established during database creation.
You can do an import even if you did not create the export file. However, if the export file was created by someone using the EXP_FULL_DATABASE role, you can import that file only if you have the IMP_FULL_DATABASE role.
Table 2-1 lists the privileges required to import objects into your own schema. All of these privileges initially belong to the RESOURCE role.
To import the privileges that a user has granted to others, the user initiating the import must either own the objects or have object privileges with the WITH GRANT OPTION. Table 2-2 shows the required conditions for the authorizations to be valid on the target system.
Table 2-2 Privileges Required to Import Grants
To import objects into another user's schema, you must have the IMP_FULL_DATABASE role enabled.
To import system objects from a full database export file, the role IMP_FULL_DATABASE must be enabled. The parameter FULL specifies that these system objects are included in the import when the export file is a full export.
When user definitions are imported into an Oracle database, they are created with the CREATE USER command. So, when importing from export files created by previous versions of Export, users are not granted CREATE SESSION privileges automatically.
This section describes factors to take into account when you import data into existing tables.
When you choose to create tables manually before importing data into them from an export file, you should use either the same table definition previously used or a compatible format. For example, although you can increase the width of columns and change their order, you cannot make changes that leave the table definition incompatible with the data in the export file (such as changing a column to an incompatible datatype).
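As an illustration, suppose the export file contains a DEPT table with the columns DEPTNO NUMBER(2) and DNAME VARCHAR2(14). The following pre-created definition, which simply widens DNAME, remains compatible; this sketch is illustrative only:

CREATE TABLE dept
(deptno NUMBER(2),
 dname  VARCHAR2(30));

imp scott/tiger FILE=expdat.dmp TABLES=(dept) IGNORE=Y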
In the normal import order, referential constraints are imported only after all tables are imported. This sequence prevents errors that could occur if a referential integrity constraint existed for data that has not yet been imported.
These errors can still occur when data is loaded into existing tables, however. For example, if table EMP has a referential integrity constraint on the MGR column that verifies the manager number exists in EMP, a perfectly legitimate employee row might fail the referential integrity constraint if the manager's row has not yet been imported.
When such an error occurs, Import generates an error message, bypasses the failed row, and continues importing other rows in the table. You can disable constraints manually to avoid this.
Referential constraints between tables can also cause problems. For example, if the AEMP table appears before the BDEPT table in the export file, but a referential check exists from the AEMP table into the BDEPT table, some of the rows from the AEMP table may not be imported due to a referential constraint violation.
To prevent errors like these, you should disable referential integrity constraints when importing data into existing tables.
When the constraints are re-enabled after importing, the entire table is checked, which may take a long time for a large table. If the time required for that check is too long, it may be beneficial to order the import manually.
To do so, perform several imports from an export file instead of one. First, import the tables that are the targets of referential checks before importing the tables that reference them. This approach works if the tables do not reference each other in a circular fashion and if a table does not reference itself.
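For example, if EMP references DEPT, the following pair of commands (a sketch using the sample schema) imports the referenced table before the referencing one:

imp scott/tiger FILE=expdat.dmp TABLES=(dept) IGNORE=Y
imp scott/tiger FILE=expdat.dmp TABLES=(emp) IGNORE=Y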
This section describes the parameters that you can specify in the parameter file or on the command line.
The parameters are as follows:
ANALYZE
Default: Y
Specifies whether or not the Import utility executes SQL ANALYZE statements found in the export file.
BUFFER
Default: operating system-dependent
The buffer-size is the size, in bytes, of the buffer through which data rows are transferred.
The parameter BUFFER (buffer size) determines the number of rows in the array inserted by Import. The following formula gives an approximation of the buffer size that inserts a given array of rows:
buffer_size = rows_in_array * maximum_row_size
For tables containing LONG, LOB, BFILE, REF, ROWID or type columns, rows are inserted individually. The size of the buffer must be large enough to contain the entire row, except for LOB columns. If the buffer cannot hold the longest row in a table, Import attempts to allocate a larger buffer.
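For example, if the longest row in a table is approximately 2048 bytes and you want Import to insert rows in arrays of 100, the formula suggests a buffer of 100 * 2048 = 204800 bytes. The row size and array count here are illustrative only:

imp scott/tiger FILE=expdat.dmp TABLES=(emp) BUFFER=204800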
Additional Information: See your Oracle operating system-specific documentation to determine the default value for this parameter.
CHARSET
Default: none
Oracle Version 5 and 6 export files do not contain the NLS character set identifier. However, a Version 5 or 6 export file does indicate whether the user session character set was ASCII or EBCDIC.
Use this parameter to indicate the actual character set used at the time of export. The Import utility will verify whether the specified character set is ASCII or EBCDIC based on the character set in the export file.
If you do not specify a value for the CHARSET parameter, Import will verify that the user session character set is ASCII, if the export file is ASCII, or EBCDIC, if the export file is EBCDIC.
Use of this parameter is not recommended. It is provided only for compatibility with previous versions. Eventually, it will no longer be supported.
If you are using an Oracle7 or Oracle8 Export file, the character set is specified within the export file, and conversion to the current database's character set is automatic. Specification of this parameter serves only as a check to ensure that the export file's character set matches the expected value. If not, an error results.
COMMIT
Default: N
Specifies whether Import should commit after each array insert. By default, Import commits after loading each table, and Import performs a rollback when an error occurs, before continuing with the next object.
If a table has nested table columns or attributes, the contents of the nested tables are imported as separate tables. Therefore, the contents of the nested tables are always committed in a transaction distinct from the transaction used to commit the outer table.
If COMMIT=N and a table is partitioned, each partition in the Export file is imported in a separate transaction.
Specifying COMMIT=Y prevents rollback segments from growing inordinately large and improves the performance of large imports. Specifying COMMIT=Y is advisable if the table has a uniqueness constraint. If the import is restarted, any rows that have already been imported are rejected with a non-fatal error. Note that, if a table does not have a uniqueness constraint, and you specify COMMIT=Y, Import could produce duplicate rows when you re-import the data.
For tables containing LONG, LOB, BFILE, REF, ROWID or type columns, array inserts are not done. If COMMIT=Y, Import commits these tables after each row.
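For example, the following command (with an illustrative file name) commits after each array insert during a large full import:

imp system/manager FILE=full.dmp FULL=Y COMMIT=Y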
DESTROY
Default: N
Specifies whether or not the existing data files making up the database should be reused. That is, the DESTROY parameter specifies that Import should include the reuse option in the datafile clause of the CREATE TABLESPACE command.
The export file contains the datafile names used in each tablespace. If you attempt to create a second database on the same machine (for testing or other purposes), the Import utility overwrites the original database's data files when it creates the tablespace. This is undesirable. With this parameter set to N (the default), an error occurs if the data files already exist when the tablespace is created.
To eliminate this error when you import into a secondary database, pre-create the tablespace and specify its data files. (Specifying IGNORE=Y suppresses the object creation error that the tablespace already exists.)
To bypass the error when you import into the original database, specify IGNORE=Y to add to the existing data files without replacing them. To reuse the original database's data files after eliminating their contents, specify DESTROY=Y.
Note: If you have pre-created your tablespace, you must specify DESTROY=N or your pre-created tablespace will be lost.
Warning: If datafiles are stored on a raw device, DESTROY=N does not prevent files from being overwritten.
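For example, to import into a secondary database without disturbing the original database's files, you might pre-create the tablespace with its own datafile and then import with IGNORE=Y. The tablespace name, file name, and size below are illustrative only:

CREATE TABLESPACE users DATAFILE '/disk2/users01.dbf' SIZE 50M;

imp system/manager FILE=dba.dmp FULL=Y IGNORE=Y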
FEEDBACK
Default: 0 (zero)
Specifies that Import should display a progress meter in the form of a dot for each n rows imported. For example, if you specify FEEDBACK=10, Import displays a dot each time 10 rows have been imported. The FEEDBACK value applies to all tables being imported; it cannot be set on a per-table basis.
FILE
Default: expdat.dmp
The name of the export file to import. You do not have to be the Oracle user who exported the file. However, you do need to have current access to the file. The default extension is .dmp, but you can specify any extension.
FROMUSER
Default: none
A list of schemas containing objects to import. The default for users without the IMP_FULL_DATABASE role is a user mode import. That is, all objects for the current user are imported. (If the TABLES parameter is also specified, a table mode import is performed.)
When importing in user mode, all other objects in the export file are ignored. The effect is the same as if the export file had been created in user mode (or table mode). See Table 1-1 on page 1-4 for the list of objects that are imported in user mode and table mode.
For example, the following command treats the export file as though it were simply a user mode export of SCOTT's objects:
imp system/manager FROMUSER=scott
If user SCOTT does not exist in the current database, his objects are imported into the importer's schema - in this case, the system's. Otherwise, the objects are imported in SCOTT's schema. If a list of schemas is given, each schema can be specified only once. Duplicate schema names are ignored. The following example shows an import from two schemas:
imp system/manager FROMUSER=scott,blake
Note: Specifying FROMUSER=SYSTEM does not import system objects. It imports only those objects that belong to user SYSTEM.
When FROMUSER is specified and TOUSER is missing, the objects of FROMUSER are imported back to FROMUSER. However, if the schema specified in FROMUSER does not exist in the current database, the objects are imported into the importer's schema.
To import system objects (for example, user definitions and tablespaces), you must import from a full export file specifying FULL=Y.
FULL
Default: N
Specifies whether to import the entire export file.
GRANTS
Default: Y
Specifies whether to import object grants.
By default, the Import utility imports any object grants that were exported. If the export was a user-mode Export, the export file contains only first-level object grants (those granted by the owner). If the export was a full database mode Export, the export file contains all object grants, including lower-level grants (those granted by users given a privilege with the WITH GRANT OPTION). If you specify GRANTS=N, the Import utility does not import object grants. (Note that system grants are imported even if GRANTS=N.)
Note: For security reasons, Export does not export grants on data dictionary views. If such grants were exported, access privileges would be changed and the user would not be aware of this. Also, not forcing grants on import gives the user more flexibility to set up appropriate grants on import.
HELP
Default: N
Displays a description of the Import parameters.
IGNORE
Default: N
Specifies how object creation errors should be handled. If you specify IGNORE=Y, Import overlooks object creation errors when it attempts to create database objects and continues without reporting the error.
If you accept the default IGNORE=N, Import logs and/or displays the object creation error before continuing.
For tables, IGNORE=Y causes rows to be imported into existing tables. No message is given. IGNORE=N causes an error to be reported, and the table is skipped if it already exists.
Note that only object creation errors are ignored; other errors, such as operating system, database, and SQL errors, are not ignored and may cause processing to stop.
In situations where multiple refreshes from a single export file are done with IGNORE=Y, certain objects can be created multiple times (although they will have unique system-defined names). You can prevent this for certain objects (for example, constraints) by doing an export in table mode with the CONSTRAINTS=N parameter. Note that if you do a full export with the CONSTRAINTS parameter set to N, no constraints for any tables are exported.
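For example, a table mode export that omits constraints, so that repeated refreshes with IGNORE=Y do not re-create them, might look like this sketch (the file name is illustrative):

exp scott/tiger TABLES=(emp) CONSTRAINTS=N FILE=emp_nocons.dmp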
If you want to import data into tables that already exist - perhaps because you want to use new storage parameters, or because you have already created the table in a cluster - specify IGNORE=Y. The Import utility imports the rows of data into the existing table.
Warning: When you import into existing tables, if no column in the table is uniquely indexed, rows could be duplicated if they were already present in the table. (This warning applies to non-incremental imports only. Incremental imports replace the table from the last complete export and then rebuild it to its last backup state from a series of cumulative and incremental exports.)
INCTYPE
Default: undefined
Specifies the type of incremental import.
The options are SYSTEM, which imports the most recent version of the system objects, and RESTORE, which imports all user database objects and data contained in the export file.
See "Importing Incremental, Cumulative, and Complete Export Files" on page 2-43 for more information about the INCTYPE parameter.
INDEXES
Default: Y
Specifies whether or not to import indexes. System-generated indexes such as LOB indexes, OID indexes, or unique constraint indexes are re-created by Import regardless of the setting of this parameter.
If indexes for the target table already exist, Import performs index maintenance when data is inserted into the table.
You can postpone all user-generated index creation until after Import completes by specifying INDEXES = N.
INDEXFILE
Default: none
Specifies a file to receive index-creation commands.
When this parameter is specified, index-creation commands for the requested mode are extracted and written to the specified file, rather than used to create indexes in the database. Tables and other database objects are not imported.
The file can then be edited (for example, to change storage parameters) and used as a SQL script to create the indexes. To make it easier to identify the indexes defined in the file, the export file's CREATE TABLE statements and CREATE CLUSTER statements are included as comments.
Note: Since Release 7.1, the commented CREATE TABLE statement in the indexfile does not include primary/unique key clauses.
Perform the following steps to use this feature:
1. Import using the INDEXFILE parameter to write the index-creation commands to a file.
2. Edit the file as needed (for example, to change storage parameters).
3. Rerun Import, specifying INDEXES=N. (This step imports the database objects while preventing Import from using the index definitions stored in the export file.)
4. Execute the edited file as a SQL script to create the indexes.
The INDEXFILE parameter can be used only with the FULL=Y, FROMUSER, TOUSER, or TABLES parameters.
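For example, the following sketch (with illustrative file names) first extracts the index definitions and then imports the data without indexes; the edited file is run afterward to create the indexes:

imp scott/tiger FILE=expdat.dmp INDEXFILE=emp_indexes.sql
imp scott/tiger FILE=expdat.dmp INDEXES=N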
LOG
Default: none
Specifies a file to receive informational and error messages. If you specify a log file, the Import utility writes all information to the log in addition to the terminal display.
PARFILE
Default: undefined
Specifies a filename for a file that contains a list of Import parameters. For more information on using a parameter file, see "The Parameter File" on page 2-11.
POINT_IN_TIME_RECOVER
Default: N
Indicates whether or not Import recovers one or more tablespaces in an Oracle database to a prior point in time, without affecting the rest of the database. For more information, see the Oracle8 Backup and Recovery Guide.
RECORDLENGTH
Default: operating system-dependent
Specifies the length, in bytes, of the file record. The RECORDLENGTH parameter is necessary when you must transfer the export file to another operating system that uses a different default value.
If you do not define this parameter, it defaults to your platform-dependent value for BUFSIZ. For more information about the BUFSIZ default value, see your operating system-specific documentation.
You can set RECORDLENGTH to any value equal to or greater than your system's BUFSIZ. (The highest value is 64KB.) Changing the RECORDLENGTH parameter affects only the size of data that accumulates before writing to the disk. It does not affect the operating system file block size.
Note: You can also use this parameter to specify the size of the Import I/O buffer.
Additional Information: See your Oracle operating system-specific documentation to determine the proper value or to create a file with a different record size.
ROWS
Default: Y
Specifies whether or not to import the rows of table data.
SHOW
Default: N
When you specify SHOW=Y, the contents of the export file are listed to the display and not imported. The SQL statements contained in the export file are displayed in the order in which Import would execute them.
The SHOW parameter can be used only with the FULL=Y, FROMUSER, TOUSER, or TABLES parameters.
SKIP_UNUSABLE_INDEXES
Default: N
Specifies whether or not Import skips building indexes that were set to the Index Unusable state (set by either system or user). Refer to "ALTER SESSION SET SKIP_UNUSABLE_INDEXES=TRUE" in the Oracle8 SQL Reference manual for details. Other indexes (not previously set Index Unusable) continue to be updated as rows are inserted.
This parameter allows you to postpone index maintenance on selected index partitions until after row data has been inserted. You then have the responsibility to rebuild the affected index partitions after the Import. You can use the INDEXFILE parameter in conjunction with INDEXES = N to provide the SQL scripts for re-creating the index. Without this parameter, row insertions that attempt to update unusable indexes fail.
TABLES
Default: none
Specifies a list of table names to import. Use an asterisk (*) to indicate all tables. When specified, this parameter initiates a table mode import, which restricts the import to tables and their associated objects, as listed in Table 1-1 on page 1-4. The number of tables that can be specified at the same time depends on command-line limits.
Any table-level Import or partition-level Import attempts to create a partitioned table with the same partition names as the exported partitioned table, including names of the form SYS_Pnnn. If a table with the same name already exists, Import processing depends on the setting of the IGNORE parameter.
Unless SKIP_UNUSABLE_INDEXES=Y, inserting the exported data into the target table fails if Import cannot update a non-partitioned index or index partition that is marked Index Unusable or is otherwise not suitable.
Although you can qualify table names with schema names (as in SCOTT.EMP) when exporting, you cannot do so when importing. In the following example, the TABLES parameter is specified incorrectly:
imp system/manager TABLES=(jones.accts, scott.emp,scott.dept)
The valid specification to import these tables is:
imp system/manager FROMUSER=jones TABLES=(accts)
imp system/manager FROMUSER=scott TABLES=(emp,dept)
If TOUSER is specified, SCOTT's objects are stored in the schema specified by TOUSER. If user SCOTT does not exist in the current database, his tables are imported into the importer's schema - system in the previous example. Otherwise, the tables and associated objects are installed in SCOTT's schema.
Additional Information: Some operating systems, such as UNIX, require that you use escape characters before special characters, such as a parenthesis, so that the character is not treated as a special character. On UNIX, use a backslash (\) as the escape character, as shown in the following example:
TABLES=\(EMP,DEPT\)
Table names specified on the command line or in the parameter file cannot include a pound (#) sign, unless the table name is enclosed in quotation marks.
For example, if the parameter file contains the following line, Import interprets everything on the line after EMP# as a comment. As a result, DEPT and MYDATA are not imported.
TABLES=(EMP#, DEPT, MYDATA)
However, if the parameter file contains the following line, the Import utility imports all three tables:
TABLES=("EMP#", DEPT, MYDATA)
Attention: When you specify the table name in quotation marks, it is case sensitive. The name must exactly match the table name stored in the database. By default, database names are stored as uppercase.
Additional Information: Some operating systems require single quotes instead of double quotes. See your Oracle operating system-specific documentation.
TOUSER
Default: none
Specifies a list of usernames whose schemas will be imported. The IMP_FULL_DATABASE role is required to use this parameter.
To import to a different schema than the one that originally contained the object, specify TOUSER. For example:
imp system/manager FROMUSER=scott TOUSER=joe TABLES=emp
If multiple schemas are specified, the schema names are paired. The following example imports SCOTT's objects into JOE's schema, and FRED's objects into TED's schema:
imp system/manager FROMUSER=scott,fred TOUSER=joe,ted
Note: If the FROMUSER list is longer than the TOUSER list, you can use the following syntax to ensure that any extra objects go into the TOUSER schema:
imp system/manager FROMUSER=scott TOUSER=ted,ted
Note that user Ted is listed twice.
USERID
Default: undefined
Specifies the username/password (and optional connect string) of the user performing the import.
When using Tablespace Point-in-Time Recovery, USERID can also be:
username/password AS SYSDBA
or
username/password@instance AS SYSDBA
See "Invoking Import as SYSDBA" on page 2-8 for more information. Note also that your operating system may require you to treat AS SYSDBA as a special string requiring you to enclose the entire string in quotes as described on 2 - 8.
Optionally, you can specify the @connect_string clause for Net8. See the user's guide for your Net8 protocol for the exact syntax of @connect_string. See also Oracle8 Distributed Database Systems.
Both table-level Export and partition-level Export can migrate data across tables and partitions.
This section provides more detailed information about partition-level Import. For general information, see "Understanding Table-Level and Partition-Level Import" on page 2-6.
Partition-level Import cannot import a non-partitioned exported table. However, a partitioned table can be imported from a non-partitioned exported table using table-level Import. Partition-level Import is legal only if the source table (that is, the table called tablename at export time) was partitioned and exists in the Export file.
If ROWS = Y (default), and the table does not exist in the Import target system, all of the rows for the specified partition in the table are inserted into the same partition in the table in the Import target system.
If ROWS = Y (default), but the table already existed before Import, all the rows for the specified partition in the table are inserted into the table. The rows are stored according to the partitioning scheme of the target table. If the target table is partitioned, Import reports any rows that are rejected because they fall above the highest partition of the target table.
If ROWS = N, Import does not insert data into the target table and continues to process other objects associated with the specified table and partition in the file.
If the target table is non-partitioned, the partitions are imported into the table as a whole. Import requires IGNORE=Y to import one or more partitions from the Export file into a non-partitioned table on the import target system.
Specifying table-name:partition-name with the TABLES parameter causes Import to read from the Export file only the data rows of the specified source partition. If you do not specify a partition name, the entire table is used as the source.
Import issues a warning if the specified partition is not in the list of partitions in the exported table.
Data exported from one or more partitions can be imported into one or more partitions. Import inserts rows into partitions based on the partitioning criteria in the import database.
In the following example, the Import utility imports the row data from the source partition py of table scott.b into the py partition of the target table scott.b, after the table and its partitions are created:
imp system/manager FILE = export.dmp FROMUSER = scott TABLES=b:py
The following example causes the row data of partitions qc and qd of table scott.e to be imported into the table scott.e:
imp scott/tiger FILE = export.dmp TABLES = (e:qc, e:qd) IGNORE=y
If table "e"
does not exist on the Import target system, it is created and data is inserted into the same partitions. If table "e"
existed on the target system before Import, the row data is inserted into the partitions whose range allows insertion. The row data can end up in partitions of names other than qc
and qd
.
Note: With partition-level Import to an existing table, you must set up the target partitions properly and use IGNORE=Y.
Partition merging allows data from multiple partitions to be merged into one partition on the target system. Because partitions are re-created identical to those on the exported table when IGNORE=N, partition merging succeeds only when IGNORE=Y and a partitioned table with the proper partition bounds exists on the Import target system. The following example assumes the presence of an Export file (exp.dmp) containing a partitioned table "c" that has partitions qa, qb, and qc. Prior to Import, the target table "c" is created with partitions, including the qc partition. Partition qd is created with a partition range that can take row data from partitions qa and qb of the source table "c":
imp mary/lamb FILE = exp.dmp TABLES = (c:qa, c:qb) IGNORE = Y
This command line causes Import to import partitions qa and qb of table mary.c into partition qd of mary.c in the Import target system.
See "Example 2: Merging Partitions of a Table" on page 2-38 for an additional example of merging partitions.
You can use partition-level Export and Import to reconfigure partitions. For example, a user can move data that was previously stored in partitions P1 and P2 of table T into partitions with different ranges in table T on the same system. Assume that the source table T has other partitions besides P1 and P2, and that the target partitions are now called P1, P2, P3, and P4. The following steps can be used:
1. Export partitions P1 and P2 of table T (or export table T).
2. Issue the SQL statement ALTER TABLE T DROP PARTITION for P1 and P2.
3. Issue the SQL statement ALTER TABLE T ADD PARTITION for P1, P2, P3, and P4.
4. Import partitions P1 and P2 (or import table T) with IGNORE=Y.
If the target table is partitioned, Import rejects any rows that fall above the highest partition of the target table.
If you want to achieve some parallelism, export each partition of the original table at the same time into separate files. After you create the table again, issue multiple import commands to import each of the export files in parallel.
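For example, the partitions of a table might be exported concurrently into separate files and later imported in parallel sessions; the partition and file names here are illustrative:

exp scott/tiger TABLES=(emp:p1) FILE=emp_p1.dmp
exp scott/tiger TABLES=(emp:p2) FILE=emp_p2.dmp

imp scott/tiger TABLES=(emp:p1) FILE=emp_p1.dmp IGNORE=Y
imp scott/tiger TABLES=(emp:p2) FILE=emp_p2.dmp IGNORE=Y

Each command runs in its own session, so each pair can be run at the same time.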
This section gives some examples of import sessions that show you how to use the parameter file and command-line methods. The examples illustrate four scenarios: importing selected tables for a specific user, importing tables exported by another user, importing tables from one user into another user's schema, and using partition-level Import.
In this example, using a full database export file, an administrator imports the DEPT and EMP tables into the SCOTT schema. If the SCOTT schema does not exist, the tables are imported into the SYSTEM schema.
Parameter File Method:

> imp system/manager parfile=params.dat
The params.dat file contains the following information:
FILE=dba.dmp
SHOW=n
IGNORE=n
GRANTS=y
FROMUSER=scott
TABLES=(dept,emp)
Command-Line Method:

> imp system/manager file=dba.dmp fromuser=scott tables=(dept,emp)
Import Messages:

Import: Release 8.0.4.0.0 - Production on Mon Nov 7 10:22:12 1997
(c) Copyright 1997 Oracle Corporation. All rights reserved.
Connected to: Oracle8 Server Release 8.0.4.0.0 - Production
PL/SQL Release 8.0.4.0.0 - Production
Export file created by EXPORT:V08.00.03 via conventional path
. importing SCOTT's objects into SCOTT
. . importing table                       "DEPT"          4 rows imported
. . importing table                        "EMP"         14 rows imported
Import terminated successfully without warnings.
This example illustrates importing the UNIT and MANAGER tables from a file exported by BLAKE into the SCOTT schema.
Parameter File Method:

> imp scott/tiger parfile=params.dat
The params.dat file contains the following information:
FILE=blake.dmp
SHOW=n
IGNORE=n
GRANTS=y
ROWS=y
FROMUSER=blake
TOUSER=scott
TABLES=(unit,manager)
Command-Line Method:

> imp scott/tiger fromuser=blake touser=scott file=blake.dmp tables=(unit,manager)
Import Messages:

Import: Release 8.0.4.0.0 - Production on Fri Nov 7 14:55:17 1997
(c) Copyright 1997 Oracle Corporation. All rights reserved.
Connected to: Oracle8 Server Release 8.0.4.0.0 - Production
PL/SQL Release 8.0.4.0.0 - Production
Export file created by EXPORT:V08.00.04 via conventional path
Warning: the objects were exported by BLAKE, not by you
. importing BLAKE's objects into SCOTT
. . importing table                       "UNIT"          4 rows imported
. . importing table                    "MANAGER"          4 rows imported
Import terminated successfully without warnings.
In this example, a DBA imports all tables belonging to SCOTT into user BLAKE's account.
Parameter File Method:

> imp system/manager parfile=params.dat
The params.dat file contains the following information:
FILE=scott.dmp
FROMUSER=scott
TOUSER=blake
TABLES=(*)
Command-Line Method:

> imp system/manager file=scott.dmp fromuser=scott touser=blake tables=(*)
Import Messages:

Import: Release 8.0.4.0.0 - Production on Fri Nov 7 21.14.1 1997
(c) Copyright 1997 Oracle Corporation. All rights reserved.
Connected to: Oracle8 Server Release 8.0.4.0.0 - Production
PL/SQL Release 8.0.4.0.0 - Production
Export file created by EXPORT:V08.00.04 via conventional path
Warning: the objects were exported by SCOTT, not by you
. . importing table                      "BONUS"          0 rows imported
. . importing table                       "DEPT"          4 rows imported
. . importing table                        "EMP"         14 rows imported
. . importing table                   "SALGRADE"          5 rows imported
Import terminated successfully without warnings.
This section describes how to use partition-level Import to partition an unpartitioned table, merge partitions of a table, and repartition a table on a different column.
The examples in this section assume that the tablespaces tbs_e1, tbs_e2, tbs_e3, tbs_d1, tbs_d2, and tbs_d3 exist.
Perform the following steps to partition an unpartitioned table:
1. Export the table to save the data.
2. Drop the table from the database.
3. Re-create the table with partitions.
4. Import the table with IGNORE=Y.
The following example shows how to partition an unpartitioned table:
> exp scott/tiger tables=emp file=empexp.dmp
. . .
About to export specified tables via Conventional Path ...
. . exporting table                          EMP         14 rows exported
Export terminated successfully without warnings.
. . .
SQL> drop table emp cascade constraints;
Table dropped.
SQL> create table emp
  2  (
  3    empno    number(4) not null,
  4    ename    varchar2(10),
  5    job      varchar2(9),
  6    mgr      number(4),
  7    hiredate date,
  8    sal      number(7,2),
  9    comm     number(7,2),
 10    deptno   number(2)
 11  )
 12  partition by range (empno)
 13  (
 14    partition emp_low values less than (7600)
 15      tablespace tbs_e1,
 16    partition emp_mid values less than (7900)
 17      tablespace tbs_e2,
 18    partition emp_high values less than (8100)
 19      tablespace tbs_e3
 20  );
Table created.
SQL> exit

imp scott/tiger tables=emp file=empexp.dmp ignore=y
. . .
Export file created by EXPORT:V08.00.03 via conventional path
. importing SCOTT's objects into SCOTT
. . importing table                        "EMP"         14 rows imported
Import terminated successfully without warnings
The following SELECT statements show that the data is partitioned on the empno column:
SQL> select empno from emp partition (emp_low);

     EMPNO
----------
      7369
      7499
      7521
      7566

4 rows selected.

SQL> select empno from emp partition (emp_mid);

     EMPNO
----------
      7654
      7698
      7782
      7788
      7839
      7844
      7876

7 rows selected.

SQL> select empno from emp partition (emp_high);

     EMPNO
----------
      7900
      7902
      7934

3 rows selected.
This example assumes the EMP table has three partitions, based on the EMPNO column, as shown in Example 1.
Perform the following steps to merge partitions of a table:
1. Export the partition that is to be merged.
2. Drop the partition from the table.
3. Import the partition with IGNORE=Y; its rows are inserted into the partition whose range accepts them.
The following example shows how to merge partitions of a table:
exp scott/tiger tables=emp:emp_mid file=empprt.dmp
. . .
About to export specified tables via Conventional Path ...
. . exporting table                          EMP
. . exporting partition                  EMP_MID          7 rows exported
Export terminated successfully without warnings.
. . .
SQL> alter table emp drop partition emp_mid;
Table altered.
. . .
imp scott/tiger fromuser=scott tables=emp:emp_mid file=empprt.dmp ignore=y
. . .
Export file created by EXPORT:V08.00.04 via conventional path
. importing SCOTT's objects into SCOTT
. . importing partition              "EMP":"EMP_MID"          7 rows imported
Import terminated successfully without warnings.
The following SELECT statements show the data from the deleted EMP_MID partition now merged in the EMP_HIGH partition:
SQL> select empno from emp partition (emp_low);

     EMPNO
----------
      7369
      7499
      7521
      7566

4 rows selected.

SQL> select empno from emp partition (emp_high);

     EMPNO
----------
      7900
      7902
      7934
      7654
      7698
      7782
      7788
      7839
      7844
      7876

10 rows selected.
This example assumes the EMP table has two partitions, based on the EMPNO column, as shown in Example 2. It repartitions the EMP table on the DEPTNO column.
Perform the following steps to repartition a table on a different column:
1. Export the table to save the data.
2. Drop the table from the database.
3. Re-create the table partitioned by the new column.
4. Import the table with IGNORE=Y.
The following example shows how to repartition a table on a different column:
exp scott/tiger tables=emp file=empexp.dat
. . .
About to export specified tables via Conventional Path ...
. . exporting table                          EMP
. . exporting partition                  EMP_LOW          4 rows exported
. . exporting partition                 EMP_HIGH         10 rows exported
Export terminated successfully without warnings.
. . .
SQL> drop table emp cascade constraints;
Table dropped.
SQL> create table emp
  2  (
  3    empno    number(4) not null,
  4    ename    varchar2(10),
  5    job      varchar2(9),
  6    mgr      number(4),
  7    hiredate date,
  8    sal      number(7,2),
  9    comm     number(7,2),
 10    deptno   number(2)
 11  )
 12  partition by range (deptno)
 13  (
 14    partition dept_low values less than (15)
 15      tablespace tbs_d1,
 16    partition dept_mid values less than (25)
 17      tablespace tbs_d2,
 18    partition dept_high values less than (35)
 19      tablespace tbs_d3
 20  );
Table created.
SQL> exit

imp scott/tiger tables=emp file=empexp.dat ignore=y
. . .
Export file created by EXPORT:V08.00.04 via conventional path
. importing SCOTT's objects into SCOTT
. . importing partition              "EMP":"EMP_LOW"          4 rows imported
. . importing partition             "EMP":"EMP_HIGH"         10 rows imported
Import terminated successfully without warnings.
The following SELECT statements show that the data is partitioned on the DEPTNO column:
SQL> select empno, deptno from emp partition (dept_low);

     EMPNO     DEPTNO
---------- ----------
      7934         10
      7782         10
      7839         10

3 rows selected.

SQL> select empno, deptno from emp partition (dept_mid);

     EMPNO     DEPTNO
---------- ----------
      7369         20
      7566         20
      7902         20
      7788         20
      7876         20

5 rows selected.

SQL> select empno, deptno from emp partition (dept_high);

     EMPNO     DEPTNO
---------- ----------
      7499         30
      7521         30
      7900         30
      7654         30
      7698         30
      7844         30

6 rows selected.
Starting Import from the command line with no parameters initiates the interactive method. The interactive method does not provide prompts for all Import functionality. The interactive method is provided only for backward compatibility.
If you do not specify a username/password on the command line, the Import utility prompts you for this information. The following example shows the interactive method:
> imp system/manager

Import: Release 8.0.4.0.0 - Production on Fri Nov 7 10:11:37 1997
(c) Copyright 1997 Oracle Corporation. All rights reserved.
Connected to: Oracle8 Server Release 8.0.4.0.0 - Production
PL/SQL Release 8.0.4.0.0 - Production

Import file: expdat.dmp >
Enter insert buffer size (minimum is 4096) 30720>
Export file created by EXPORT:V08.00.04 via conventional path
Warning: the objects were exported by BLAKE, not by you
List contents of import file only (yes/no): no >
Ignore create error due to object existence (yes/no): no >
Import grants (yes/no): yes >
Import table data (yes/no): yes >
Import entire export file (yes/no): no > y
. importing BLAKE's objects into SYSTEM
. . importing table                       "DEPT"          4 rows imported
. . importing table                    "MANAGER"          3 rows imported
Import terminated successfully without warnings.
You may not see all the prompts in a given Import session because some prompts depend on your responses to other prompts. Some prompts show a default answer; if the default is acceptable, press [RETURN].
Note: If you answer no at the "Import entire export file (yes/no)" prompt, Import prompts you for a schema name and the table names you want to import for that schema:
Enter table(T) or partition(T:P) names. Null list means all tables for user
Entering a null table list causes all tables in the schema to be imported. You can only specify one schema at a time when you use the interactive method.
Because an incremental export extracts only tables that have changed since the last incremental, cumulative, or complete export, an import from an incremental export file imports the table's definition and all its data, not just the changed rows.
Because imports from incremental export files are dependent on the method used to export the data, you should also read "Incremental, Cumulative, and Complete Exports" on page 1-37.
Because importing an incremental export file imports new versions of existing objects, existing objects are dropped before new ones are imported. This behavior differs from a normal import. During a normal import, objects are not dropped and an error is usually generated if the object already exists.
The order in which incremental, cumulative, and complete exports are done is important. A set of objects cannot be restored until a complete export has been run on a database. Once that has been done, objects are restored as described in the following note.
Note: To restore a set of objects, you must first import the most recent incremental export file to import the system objects (that is, specify INCTYPE=SYSTEM for the import). Then, you must import the export files in chronological order, based on their export time (that is, specify INCTYPE=RESTORE for the import).
For example, if you have a complete export X1, followed by cumulative exports C1 and C2, followed by incremental exports I1, I2, and I3, you should import in the following order:
imp system/manager INCTYPE=SYSTEM FULL=Y FILE=I3
imp system/manager INCTYPE=RESTORE FULL=Y FILE=X1
imp system/manager INCTYPE=RESTORE FULL=Y FILE=C1
imp system/manager INCTYPE=RESTORE FULL=Y FILE=C2
imp system/manager INCTYPE=RESTORE FULL=Y FILE=I1
imp system/manager INCTYPE=RESTORE FULL=Y FILE=I2
imp system/manager INCTYPE=RESTORE FULL=Y FILE=I3
Notes:
For incremental imports only, object types and foreign function libraries are handled as system objects. That is, their definitions are only imported with the other system objects when INCTYPE = SYSTEM. This imports the most recent definition of the object type (including the object identifier) and the most recent definition of the library specification.
Then, as tables are imported from earlier incremental export files using INCTYPE=RESTORE, Import verifies that any object types needed by the table exist and have the same object identifier. If the object type does not exist, or if it exists but its object identifier does not match, the table is not imported.
Such a mismatch indicates that the object type was dropped or replaced after the incremental export, which would have required that all tables dependent on the object type also be dropped.
If a user had execute access to an object type and created a table containing data of that object type, but the execute privilege was later revoked, import of that table will fail. The user must be regranted the execute privilege before the table can be imported successfully.
This section describes the behavior of Import with respect to index creation and maintenance.
If SKIP_UNUSABLE_INDEXES=Y, the Import utility postpones maintenance on all indexes that were set to Index Unusable before Import. Other indexes (not previously set Index Unusable) continue to be updated as rows are inserted. This approach saves on index updates during Import and assumes that users can issue the appropriate ALTER INDEX statements for other indexes not covered in the exported list, before Import.
Delayed index maintenance may cause a violation of an existing unique integrity constraint supported by the index. The existence of a unique integrity constraint on a table does not prevent existence of duplicate keys in a table that was imported with INDEXES = N. The supporting index will be in UNUSABLE state until the duplicates are removed and the index is rebuilt.
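For example, after such an import, duplicate keys might be removed and the supporting index rebuilt as in the following sketch, in which the key column and index name are hypothetical:

SELECT empno FROM emp GROUP BY empno HAVING COUNT(*) > 1;

DELETE FROM emp WHERE ROWID NOT IN (SELECT MIN(ROWID) FROM emp GROUP BY empno);

ALTER INDEX emp_pk REBUILD;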
Table 2-3 is a summary of results for combinations of IGNORE and INDEXES parameters with partition-level Import.
Table 2-3 Partition-Level IGNORE and INDEXES Combinations

Import provides you with the capability of delaying index creation and maintenance services until after completion of the import and insertion of exported data. Performing index (re)creation or maintenance after Import completes is generally faster than updating the indexes for each row inserted by Import.
Index creation can be time consuming, and therefore can be done more efficiently after the Imports of all other objects have completed. You can postpone creation of global and local indexes until after the Import completes by specifying INDEXES = N (INDEXES = Y is the default) and INDEXFILE = filename. The index-creation commands that would otherwise be issued by Import are instead stored in the specified file.
After the Import is complete, you must create the indexes, typically by using the contents of the file (specified with INDEXFILE) as an SQL script.
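For example, if the index-creation commands were written to emp_indexes.sql (an illustrative name), they can be executed from SQL*Plus after the import completes:

SQL> @emp_indexes.sql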
If the total amount of index updates is smaller during data insertion than at index rebuild time after Import, you can choose to update those indexes at table data insertion time by setting INDEXES=Y.
For example, assume that a partitioned table t with partitions p1 and p2 exists on the Import target system, and that local index p1_ind exists on partition p1 and local index p2_ind exists on partition p2. Assume that partition p1 contains a much larger amount of data in the existing table t than the amount of data to be inserted from the Export file (expdat.dmp), and that the reverse is true for p2.

Consequently, performing index updates for p1_ind during table data insertion is more efficient than at partition index rebuild time. The opposite is true for p2_ind.
Users can postpone local index maintenance for p2_ind during Import by using the following steps:
1. Issue the following SQL statement before Import:

ALTER TABLE t MODIFY PARTITION p2 UNUSABLE LOCAL INDEXES;

2. Perform the Import. This example executes the ALTER SESSION SET SKIP_UNUSABLE_INDEXES=TRUE statement before performing the import.
3. Issue the following SQL statement after Import:

ALTER TABLE t MODIFY PARTITION p2 REBUILD UNUSABLE LOCAL INDEXES;

In this example, local index p1_ind on p1 is updated when table data is inserted into partition p1 during Import. Local index p2_ind on p2 is updated at index rebuild time, after Import.
A database with many non-contiguous, small blocks of free space is said to be fragmented. A fragmented database should be reorganized to make space available in contiguous, larger blocks. You can reduce fragmentation by performing a full database export and import as follows:
1. Do a full database export (FULL=Y) to back up the entire database.
2. Shut down Oracle after all users are logged off.
3. Delete the database. See your Oracle operating system-specific documentation for information on how to delete a database.
4. Re-create the database using the CREATE DATABASE command.
5. Do a full database import (FULL=Y) to restore the entire database.
See the Oracle8 Administrator's Guide for more information about creating databases.
By default, Import displays all error messages. If you specify a log file by using the LOG parameter, Import writes the error messages to the log file in addition to displaying them on the terminal. You should always specify a log file when you import. (You can redirect Import's output to a file on those systems that permit I/O redirection.)
Additional Information: For information on the LOG parameter, see "LOG" on page 2-26. Also see your operating system-specific documentation for information on redirecting output.
When an import completes without errors, the message "Import terminated successfully without warnings" is issued. If one or more non-fatal errors occur but Import is able to continue to completion, the message "Import terminated successfully with warnings" is issued. If a fatal error occurs, Import ends immediately with the message "Import terminated unsuccessfully."
Additional Information: Messages are documented in Oracle8 Messages and your operating system-specific documentation.
This section describes errors that can occur when you import database objects.
If a row is rejected due to an integrity constraint violation or invalid data, Import displays a warning message but continues processing the rest of the table. Some errors, such as "tablespace full," apply to all subsequent rows in the table. These errors cause Import to stop processing the current table and skip to the next table.
A row error is generated if a row violates one of the integrity constraints in force on your system, including:
- NOT NULL constraints
- uniqueness constraints
- primary key (not null and unique) constraints
- referential integrity constraints
- check constraints
See the Oracle8 Application Developer's Guide and Oracle8 Concepts for more information on integrity constraints.
Row errors can also occur when the column definition for a table in a database is different from the column definition in the export file. The error is caused by data that is too long to fit into a new table's columns, by invalid data types, and by any other INSERT error.
Errors can occur for many reasons when you import database objects, as described in this section. When such an error occurs, import of the current database object is discontinued. Import then attempts to continue with the next database object in the export file.
If a database object to be imported already exists in the database, an object creation error occurs. What happens next depends on the setting of the IGNORE parameter.
If IGNORE=N (the default), the error is reported, and Import continues with the next database object. The current database object is not replaced. For tables, this behavior means that rows contained in the export file are not imported.
If IGNORE=Y, object creation errors are not reported. Although the database object is not replaced, if the object is a table, rows are imported into it. Note that only object creation errors are ignored; all other errors (such as operating system, database, and SQL errors) are reported, and processing may stop.
Warning: Specifying IGNORE=Y can cause duplicate rows to be entered into a table unless one or more columns of the table are specified with the UNIQUE integrity constraint. This could occur, for example, if Import were run twice.
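As a sketch of that safeguard (the table and column names are hypothetical), pre-creating the table with a unique constraint causes duplicate rows from a repeated import to be rejected as row errors instead of being silently inserted:

CREATE TABLE dept (
  deptno NUMBER CONSTRAINT dept_deptno_uq UNIQUE,
  dname  VARCHAR2(14)
);

imp scott/tiger FILE=expdat.dmp TABLES=dept IGNORE=Y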
If sequence numbers need to be reset to the values in an export file as part of an import, you should drop the sequences before the import. Import does not drop and re-create a sequence that already exists, so a sequence that is not dropped before the import retains its current value rather than being set to the value captured in the export file. If the sequence already exists, the export file's CREATE SEQUENCE statement fails and the sequence is not imported.
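For example (the sequence name emp_seq is hypothetical), dropping the sequence first allows the CREATE SEQUENCE statement in the export file to succeed, so the sequence is re-created with the exported value:

DROP SEQUENCE emp_seq;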
Resource limitations can cause objects to be skipped. When you are importing tables, for example, resource errors can occur as a result of internal problems, or when a resource such as memory has been exhausted.
If a resource error occurs while you are importing a row, Import stops processing the current table and skips to the next table. If you have specified COMMIT=Y, Import commits the partial import of the current table. If not, a rollback of the current table occurs before Import continues. (See the description of "COMMIT" on page 2-20 for information about the COMMIT parameter.)
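For example, the following command (a sketch; the username, file, and table names are placeholders) commits after each array insert, so rows inserted before a resource error are preserved:

imp scott/tiger FILE=expdat.dmp TABLES=emp COMMIT=Y BUFFER=65536

BUFFER sets the size, in bytes, of the insert array; a larger buffer means fewer, larger commits when COMMIT=Y.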
When a fatal error occurs, Import terminates. For example, if you enter an invalid username/password combination or attempt to run Export or Import without having prepared the database by running the scripts CATEXP.SQL or CATALOG.SQL, a fatal error occurs and causes Import to terminate.
This section describes factors to take into account when using Export and Import across a network.
When transferring an export file across a network, be sure to transmit the file using a protocol that preserves the integrity of the file. For example, when using FTP or a similar file transfer protocol, transmit the file in binary mode. Transmitting export files in character mode causes errors when the file is imported.
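For example, a typical command-line FTP session would set binary mode before the transfer (the file name is illustrative):

ftp> binary
ftp> put expdat.dmp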
By eliminating the boundaries between different machines and operating systems on a network, Net8 provides a distributed processing environment for Oracle8 products. Net8 lets you export and import over a network.
For example, running Export locally, you can write data from a remote Oracle database into a local export file. Running Import locally, you can read data into a remote Oracle database.
To use Export or Import with Net8, you must include the connection qualifier string @connect_string when entering the username/password in the exp or imp command. For the exact syntax of this clause, see the user's guide for your Net8 protocol. For more information on Net8, see the Net8 Administrator's Guide. See also Oracle8 Distributed Database Systems. See also "Interactively Invoking Import as SYSDBA" on page 2-42 if you are using Tablespace Point-in-Time Recovery.
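For example, assuming a Net8 service name of inst1 (the service name and credentials are placeholders):

exp scott/tiger@inst1 TABLES=emp FILE=emp.dmp
imp scott/tiger@inst1 TABLES=emp FILE=emp.dmp

The first command writes data from the remote database into the local file emp.dmp; the second reads a local file and inserts the data into the remote database.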
The three interrelated objects in a snapshot system are the master table, optional snapshot log, and the snapshot itself. The tables (master table, snapshot log table definition, and snapshot tables) can be exported independently of one another. Snapshot logs can be exported only if you export the associated master table. You can export snapshots using full database or user-mode Export; you cannot use table-mode Export.
This section discusses how fast refreshes are affected when these objects are imported. Oracle8 Replication provides more information about snapshots and snapshot logs.
If the master table already exists in the database to which you are importing and that master table has a snapshot log, the imported data is recorded in the snapshot log.
When a snapshot log is exported, ROWIDs stored in the snapshot log have no meaning upon import. As a result, each ROWID snapshot's first attempt to do a fast refresh fails, generating an error indicating that a complete refresh is required.
To avoid the refresh error, do a complete refresh after importing a ROWID snapshot log. After you have done a complete refresh, subsequent fast refreshes will work properly.
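For example, the complete refresh can be requested through the DBMS_SNAPSHOT package (the snapshot name is hypothetical; the method 'C' requests a complete refresh):

EXECUTE DBMS_SNAPSHOT.REFRESH('scott.emp_snap', 'C');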
In contrast, when a snapshot log is exported, primary key values do retain their meaning upon Import. Therefore, primary key snapshots can do a fast refresh after the import. See Oracle8 Replication for information about primary key snapshots.
A snapshot that has been restored from an export file has "gone back in time" to a previous state. On import, the time of the last refresh is imported as part of the snapshot table definition. The function that calculates the next refresh time is also imported.
Each refresh leaves a signature. A fast refresh uses the log entries that date from the time of that signature to bring the snapshot up to date. When the fast refresh is complete, the signature is deleted and a new signature is created. Any log entries that are not needed to refresh other snapshots are also deleted (all log entries with times before the earliest remaining signature).
When you restore a snapshot from an export file, you may encounter a problem under certain circumstances.
Assume that a snapshot is refreshed at time A, exported at time B, and refreshed again at time C. Then, because of corruption or other problems, the snapshot needs to be restored by dropping the snapshot and importing it again. The newly imported version has the last refresh time recorded as time A. However, log entries needed for a fast refresh may no longer exist. If the log entries do exist (because they are needed for another snapshot that has yet to be refreshed), they are used, and the fast refresh completes successfully. Otherwise, the fast refresh fails, generating an error that says a complete refresh is required.
Snapshots, snapshot logs, and related items are exported with the schema name explicitly given in the DDL statements; therefore, snapshots and their related items cannot be imported into a different schema.
If you attempt to use FROMUSER/TOUSER to import snapshot data, an error will be written to the Import log file and the items will not be imported.
By default, a table is imported into its original tablespace.
If the tablespace no longer exists, or the user does not have sufficient quota in the tablespace, the system uses the default tablespace for that user, unless the table:
- is partitioned
- is a type table
- contains LOB columns
If the user does not have sufficient quota in the default tablespace, the user's tables are not imported. (See "Reorganizing Tablespaces" on page 2-54 to see how you can use this to your advantage.)
The storage parameter OPTIMAL for rollback segments is not preserved during export and import.
Tables are exported with their current storage parameters. For object tables, the OIDINDEX is created with its current storage parameters and name, if given. For tables that contain LOB columns, LOB data and LOB indexes are created with their current storage parameters. If users alter the storage parameters of existing tables prior to export, the tables are exported using those altered storage parameters. The storage parameters for LOB data and LOB indexes cannot be altered.
Note that LOB data and LOB indexes might not reside in the same tablespace as the containing table. The tablespace for that data must be read/write at the time of import or the table will not be imported.
If LOB data or LOB indexes reside in a tablespace that does not exist at the time of import or the user does not have the necessary quota in that tablespace, the table will not be imported. Because there can be multiple tablespace clauses, including one for the table, Import cannot determine which tablespace clause caused the error.
Export files include table storage parameters, so you may want to pre-create large tables with the different storage parameters before importing the data. If so, you must specify the following on the command line or in the parameter file:
IGNORE=Y
By default at export time, the storage parameters for large tables are adjusted to consolidate all data for the table into its initial extent. To preserve the original size of an initial extent, you must specify at export time that extents not be consolidated (by setting COMPRESS=N). See "COMPRESS" on page 1-13 for a description of the COMPRESS parameter.
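For example, the following pair of commands sketches the approach; user names, file names, and the table name are placeholders:

exp scott/tiger FILE=expdat.dmp TABLES=emp COMPRESS=N
imp scott/tiger FILE=expdat.dmp TABLES=emp IGNORE=Y

The export preserves the original extent definitions; the import with IGNORE=Y loads the rows into the pre-created table despite the object creation error.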
Read-only tablespaces can be exported. On import, if the tablespace does not already exist in the target database, the tablespace is created as a read/write tablespace. If you want read-only functionality, you must manually make the tablespace read-only after the import.
If the tablespace already exists in the target database and is read-only, you must make it read/write before the import.
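For example (the tablespace name hist_data is hypothetical), make the tablespace writable before the import and read-only again afterward:

ALTER TABLESPACE hist_data READ WRITE;

Run the import, then:

ALTER TABLESPACE hist_data READ ONLY;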
When you initialize a database, Oracle creates a single system rollback segment (named SYSTEM). Oracle uses this rollback segment only for transactions that manipulate objects in the SYSTEM tablespace. However, if you want to import into a different tablespace, you must create a new rollback segment. This restriction does not apply if you intend to import only into the SYSTEM tablespace. For details on creating rollback segments, see the Oracle8 Administrator's Guide.
You can drop a tablespace by redefining the objects to use different tablespaces before the import. You can then issue the import command and specify IGNORE=Y.
In many cases, you can drop a tablespace by doing a full database export, then creating a zero-block tablespace with the same name as the tablespace you want to drop (before logging off). During import, with IGNORE=Y, the relevant CREATE TABLESPACE statement will fail and prevent the creation of the unwanted tablespace.
All objects from that tablespace will be imported into their owner's default tablespace with the exception of partitioned tables, type tables, and tables that contain LOB columns. Import cannot determine which tablespace caused the error. Instead, the user must create the table and import the table again, specifying IGNORE=Y.
Objects are not imported into the default tablespace if the tablespace does not exist or the user does not have the necessary quotas for the default tablespace.
If a user's quotas allow it, the user's tables are imported into the same tablespace from which they were exported. However, if the tablespace no longer exists or the user does not have the necessary quota, the system uses the default tablespace for that user as long as the table is unpartitioned, contains no LOB columns, and the table is not a type table.
If the user is unable to access the default tablespace, the tables cannot be imported. You can use this scenario to move a user's tables from one tablespace to another.
For example, suppose that after a full database export you need to move JOE's tables from tablespace A to tablespace B. Follow these steps:
1. Revoke JOE's quota on tablespace A, and revoke any roles that grant such a quota, so that the tables cannot be re-created in tablespace A.
2. Drop JOE's tables from tablespace A.
3. Give JOE a quota on tablespace B and make it his default tablespace.
4. Import JOE's tables. (By default, Import puts JOE's tables into tablespace B.)
Note: Role revokes do not cascade. Therefore, users who were granted other roles by JOE will be unaffected.
Note: An index on the table is created in the same tablespace as the table itself, unless it already exists.
This section describes the behavior of Export and Import with respect to character sets and National Language Support (NLS).
Export writes export files using the character set specified for the user session, for example, 7-bit ASCII or IBM Code Page 500 (EBCDIC).
The import session and the target database character sets can differ from the source database character set. This circumstance requires one or more character set conversion operations. The export file identifies the character encoding scheme used for its character data.
If necessary, Import automatically translates the data to the character set of its host system. Import converts character data to the user-session character set if that character set is different from the one in the export file. See also "Character Set Conversion" on page 1-36 and page 1-45 for a description of how Export handles character set issues.
Import can convert data to the user-session character set only if the ratio of the width of the widest character in the import character set to the width of the smallest character in the export character set is 1.
If the export file character set is any single-byte character set (for example, EBCDIC or USASCII7), and if the character set used by the target database is also a single-byte character set, the data is converted automatically during import to the character encoding scheme specified for the user session by the NLS_LANG parameter. After the data is converted to the session character set, it is converted to the database character set.
Some 8-bit characters can be lost (that is, converted to 7-bit equivalents) when importing an 8-bit character set export file. This occurs if the machine on which the import occurs has a native 7-bit character set, or the NLS_LANG operating system environment variable is set to a 7-bit character set. Most often, this is seen when accented characters lose the accent mark.
To avoid this unwanted conversion, you can set the NLS_LANG operating system environment variable to be that of the export file character set.
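For example, on UNIX with a Bourne-style shell, if the export file was written in the WE8ISO8859P1 character set (the value shown is illustrative), you might set:

NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
export NLS_LANG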
When importing an Oracle Version 5 or 6 export file with a character set different from that of the native operating system or the setting for NLS_LANG, you must set the CHARSET import parameter to specify the character set of the export file.
Refer to the sections "Character Set Conversion" on page 1-45 and "Single-Byte Character Sets During Export and Import" on page 1-46 for additional information on single-byte character sets.
For multi-byte character sets, Import can convert data to the user-session character set only if the ratio of the width of the widest character in the import character set to the width of the smallest character in the export character set is 1. If the ratio is not 1, the user-session character set should be set to match the export character set, so that Import does no conversion.
During the conversion, any characters in the export file that have no equivalent in the target character set are replaced with a default character. (The default character is defined by the target character set.) To guarantee 100% conversion, the target character set must be a superset (or equivalent) of the source character set.
For more information, refer to the National Language Support section of the Oracle8 Reference.
Because character set conversion lengthens the processing time required for import, keep the number of character set conversions to a minimum.
In the ideal scenario, the import session and target database character sets are the same as the source database character set, requiring no conversion.
If the import session character set and the target database character set are the same, but differ from the source database character set, one character set conversion is required.
Oracle8 can export and import NCHAR datatypes. Import does no translation of NCHAR data, but, if needed, OCI automatically converts the data to the national character set of the Import server.
This section describes the behavior of various database objects during Import.
The Oracle8 server assigns object identifiers to uniquely identify object types, object tables, and rows in object tables. These object identifiers are preserved by import.
For object types, if IGNORE=Y and the object type already exists and the object identifiers match, no error is reported. If the object identifiers do not match, an error is reported and any tables using the object type are not imported.
For object types, if IGNORE=N and the object type already exists, an error is reported. If the object identifiers do not match, any tables using the object type are not imported.
For object tables, if IGNORE=Y and the table already exists and the object identifiers match, no error is reported. Rows are imported into the object table. If the object identifiers do not match, an error is reported and the table is not imported.
For object tables, if IGNORE = N and the table already exists, an error is reported and the table is not imported.
For object tables, if IGNORE=Y and if the table already exists and the table's object identifiers match, import of rows may fail if rows with the same object identifier already exist in the object table.
Because Import preserves object identifiers of object types and object tables, note the following considerations when importing objects from one schema into another schema, using the FROMUSER and TOUSER parameters:
Users frequently pre-create tables before import to reorganize tablespace usage or change a table's storage parameters. The tables must be created with the same definitions as were previously used or a compatible format (except for storage parameters). For object tables and tables that contain columns of object types, format compatibilities are more restrictive.
For object tables, the same object type must be specified, and that object type must have the same object identifier as the original object table's type.
For tables containing columns of object types, the same object type must be specified and that type must have the same object identifier as the original.
Export writes information about object types used by a table in the Export file, including object types from different schemas. Object types from different schemas used as top level columns are verified for matching name and object identifier at import time. Object types from different schemas that are nested within other object types are not verified. If the object type already exists, its object identifier is verified. Import retains information about what object types it has created, so that if an object type is used by multiple tables, it is created only once.
In all cases, the object type must be compatible in terms of the internal format used for storage. Import does not verify that the internal format of a type is compatible. If the exported data is not compatible, the results are unpredictable.
For nested tables, the storage information for the inner tables is exported with a DDL statement separate from the creation of the outer table. Import handles the multiple statements as one atomic operation. If the creation of the outer table fails, the inner table storage information is not executed. If the creation of the outer table succeeds, but the storage creation for any inner nested table fails, the entire table is dropped and table data is not imported. If the outer table already exists and IGNORE=Y, the inner table storage is not created.
Because inner nested tables are imported separately from the outer table, attempts to access data from them while importing may produce unexpected results. For example, if an outer row is accessed before its inner rows are imported, Import returns an incomplete row to the user.
Inner nested tables are exported separately from the outer table. Therefore, situations may arise in which data in an inner nested table might not be properly imported.
REF columns and attributes may contain a hidden ROWID that points to the referenced type instance. Import does not automatically recompute these ROWIDs for the target database. You should execute the following statement to reset the ROWIDs to their proper values:
ANALYZE TABLE [schema.]table VALIDATE REF UPDATE
See the Oracle8 SQL Reference manual for more information about the ANALYZE TABLE command.
When the Import utility processes array columns or attributes, it allocates buffers large enough to accommodate an array with the largest dimensions that could be expected for the column or attribute. If the maximum dimension of the array greatly exceeds the size actually used by each instance of the array, the import may fail due to memory exhaustion.
Export and Import do not copy data referenced by BFILE columns and attributes from the source database to the target database. Export and Import only propagate the names of the files and the directory aliases referenced by the BFILE columns. It is the responsibility of the DBA or user to move the actual files referenced through BFILE columns and attributes.
When you import table data that contains BFILE columns, the BFILE locator is imported with the directory alias and file name that was present at export time. Import does not verify that the directory alias or file exists. If the directory alias or file does not exist, an error occurs when the user accesses the BFILE data.
For operating system directory aliases, if the directory syntax used in the export system is not valid on the import system, no error is reported at import time. Subsequent access to the file data receives an error.
It is the responsibility of the DBA or user to ensure the directory alias is valid on the import system.
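For example, after moving the operating system files, the directory alias can be re-created on the import system to point to the new location (the alias name and path are hypothetical):

CREATE OR REPLACE DIRECTORY bfile_dir AS '/u01/app/bfiles';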
Note: Instances of BFILE columns and attributes contain text strings for the name of the DIRECTORY ALIAS and file name referenced by the instance. Export stores these strings in the character set of the exported database. Import converts the text used to store the directory alias name and the file name of a BFILE column from the character set of the export database to the character set of the import database.
Import does not verify that the location referenced by the foreign function library is correct. If the formats for directory and file names used in the library's specification on the export file are invalid on the import system, no error is reported at import time. Subsequent usage of the callout functions will receive an error.
It is the responsibility of the DBA or user to manually move the library and ensure the library's specification is valid on the import system.
When a local stored procedure, function, or package is imported, it retains its original specification timestamp. The procedure, function, or package is recompiled upon import. If the compilation is successful, it can be accessed by remote procedures without error.
Procedures are exported after tables, views, and synonyms; therefore, they usually compile successfully, because all of their dependencies already exist. However, procedures, functions, and packages are not exported in dependency order. If a procedure, function, or package depends on a procedure, function, or package that is stored later in the export dump file, it will not compile successfully. Later use of the procedure, function, or package will automatically cause a recompile and, if successful, will change the timestamp. This may cause errors in the remote procedures that call it.
Importing a queue also imports any underlying queue tables and the related dictionary tables. A queue can be imported only at the granularity level of the queue table. When a queue table is imported, export pre-table and post-table action procedures maintain the queue dictionary.
LONG columns can be up to 2 gigabytes in length. During import and export, LONG columns must fit into memory with the rest of each row's data. The memory used to store LONG columns, however, does not need to be contiguous, because LONG data is loaded in sections.
Views are exported in dependency order. In some cases, Export must determine the ordering, rather than obtaining the order from the server database. In doing so, Export may not always be able to duplicate the correct ordering, resulting in compilation warnings when a view is imported and the failure to import column comments on such views. In particular, if VIEWA uses the stored procedure PROCB and PROCB uses the view VIEWC, Export cannot determine the proper ordering of VIEWA and VIEWC. If VIEWA is exported before VIEWC and PROCB already exists on the import system, VIEWA receives compilation warnings at import time.
Grants on views are imported even if a view has compilation errors. A view could have compilation errors if an object it depends on, such as a table, procedure, or another view, does not exist when the view is created. If a base table does not exist, the server cannot validate that the grantor has the proper privileges on the base table with the GRANT OPTION.
Therefore, access violations could occur when the view is used, if the grantor does not have the proper privileges after the missing tables are created.
The Export parameter STATISTICS controls the generation of database optimizer statistics during import.
When you specify either the COMPUTE or ESTIMATE option of the STATISTICS parameter, all indexes, tables, and clusters that have had ANALYZE applied to them are exported with the commands necessary to generate the appropriate statistics (estimated or computed) on import. You can set the ANALYZE Import parameter to N to prevent Import from generating optimizer statistics.
Note: Generation of statistics is limited to those objects that already had them before export. Statistics are not automatically generated for every index, table, and cluster in the database as a result of this option.
If your installation generally uses either estimated or computed statistics, it is a good idea to include the STATISTICS parameter whenever you use Export. The cost during Export is negligible: statistics themselves are not recorded in the export file, only a command to generate them. The default is STATISTICS=ESTIMATE. See Oracle8 Concepts for more information about the optimizer.
By using the STATISTICS parameter during Export, you ensure that the appropriate statistics are gathered when the data is imported. If your export file was created without this parameter, or if you have changed your method of collecting statistics, use Import's INDEXFILE parameter to generate a list of imported objects. Then, edit that list to produce a series of ANALYZE commands on them and execute the resulting SQL script. (For more information, see "INDEXFILE" on page 2-25.)
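For example, the edited script might contain statements of the following form (the table and index names are placeholders):

ANALYZE TABLE emp ESTIMATE STATISTICS;
ANALYZE INDEX emp_ind COMPUTE STATISTICS;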
This section describes guidelines and restrictions that apply when you import data from an Oracle7 database into an Oracle8 server. Additional information may be found in Oracle8 Migration.
In Oracle8, check constraints on DATE columns must use the TO_DATE function to specify the format of the date. Because this function was not required in earlier Oracle versions, data imported from an earlier Oracle database might not have used the TO_DATE function. In such cases, the constraints are imported into the Oracle8 database, but they are flagged in the dictionary as invalid.
The catalog views DBA_CONSTRAINTS, USER_CONSTRAINTS, and ALL_CONSTRAINTS can be used to identify such constraints. Import issues a warning message if invalid date constraints are in the database.
This section describes guidelines and restrictions that apply when you import data from an Oracle Version 6 database into an Oracle8 server. Additional information may be found in the Oracle8 Migration manual.
Oracle Version 6 CHAR columns are automatically converted into the Oracle VARCHAR2 datatype.
Although the SQL syntax for integrity constraints in Oracle Version 6 is different from the Oracle7 and Oracle8 syntax, integrity constraints are correctly imported into Oracle8.
NOT NULL constraints are imported as ENABLED. All other constraints are imported as DISABLED.
A table with a default column value that is longer than the maximum size of that column generates the following error on import to Oracle8:
ORA-1401: inserted value too large for column
Oracle Version 6 did not check the columns in a CREATE TABLE statement to be sure they were long enough to hold their DEFAULT values, so such tables could be created in a Version 6 database. The Oracle8 server does perform this check, however. As a result, tables that could be imported into a Version 6 database may fail to import into Oracle8.
If the DEFAULT is a value returned by a function, the column must be large enough to hold the maximum value that can be returned by that function. Otherwise, the CREATE TABLE statement recorded in the export file produces an error on import.
Note: The maximum value of the USER function increased in Oracle7, so columns with a default of USER may not be long enough. To determine the maximum size that the USER function returns, execute the following SQL command:
DESCRIBE user_sys_privs
The length shown for the USERNAME column is the maximum length returned by the USER function.
Oracle8 Import reads Export dump files created by Oracle Version 5.1.22 and later. Keep in mind the following: