Considerations When Importing Database Objects - Oracle 8i

This section describes the behavior of various database objects during Import.

Importing Object Identifiers
The Oracle database server assigns object identifiers to uniquely identify object types, object tables, and rows in object tables. These object identifiers are preserved by Import.

When you import a table that references a type, but a type of that name already exists in the database, Import attempts to verify that the preexisting type is, in fact, the type used by the table (rather than a different type that just happens to have the same name).

To do this, Import compares the type's unique identifier (TOID) with the identifier stored in the export file. Import will not import the table rows if the TOIDs do not match.

In some situations, you may not want this validation to occur on specified types (for example, if the types were created by a cartridge installation). You can use the parameter TOID_NOVALIDATE to specify types to exclude from TOID comparison.

Caution: Be very careful about using TOID_NOVALIDATE, because type validation provides an important capability that helps avoid data corruption. Be sure you feel confident of your knowledge of type validation and how it works before attempting to import with this feature disabled.
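For example, types to be excluded from TOID validation can be listed, optionally schema-qualified, in the TOID_NOVALIDATE parameter. A minimal sketch of an Import parameter file, using hypothetical type, schema, and file names:

```
USERID=scott/tiger
FILE=expdat.dmp
TABLES=(foo)
TOID_NOVALIDATE=(cartridge_typ,cart_owner.address_typ)
```

The same parameters can also be given directly on the imp command line.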

Import uses the following criteria to decide how to handle object types, object tables, and rows in object tables:

  • For object types, if IGNORE=Y and the object type already exists and the object identifiers match, no error is reported. If the object identifiers do not match and the parameter TOID_NOVALIDATE has not been set to ignore the object type, an error is reported and any tables using the object type are not imported.
  • For object types, if IGNORE=N and the object type already exists, an error is reported. If the object identifiers do not match and the parameter TOID_NOVALIDATE has not been set to ignore the object type, any tables using the object type are not imported.
  • For object tables, if IGNORE=Y and the table already exists and the object identifiers match, no error is reported. Rows are imported into the object table. Import of rows may fail if rows with the same object identifier already exist in the object table. If the object identifiers do not match and the parameter TOID_NOVALIDATE has not been set to ignore the object type, an error is reported and the table is not imported.
  • For object tables, if IGNORE=N and the table already exists, an error is reported and the table is not imported.
Because Import preserves object identifiers of object types and object tables, note the following considerations when you import objects from one schema into another schema using the FROMUSER and TOUSER parameters:

  • If the FROMUSER schema’s object types and object tables already exist on the target system, errors occur because their object identifiers are already in use by the TOUSER schema’s object types and object tables. The preexisting object types and object tables must be dropped from the target system before the import is started.
  • If an object table was created using the OID AS option to assign it the same object identifier as another table, both tables cannot be imported. One may be imported, but the second receives an error because the object identifier is already in use.
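As a sketch of the FROMUSER/TOUSER case above, any copies of the exported object types and object tables that already exist in the target database must be dropped before the import runs, because their object identifiers would collide (schema and object names here are hypothetical):

```sql
-- Hypothetical names: drop preexisting copies whose object identifiers
-- would collide with those stored in the export file
DROP TABLE blake.customer_objtab;
DROP TYPE blake.customer_typ FORCE;
```

The import itself would then be run with FROMUSER=scott TOUSER=blake.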

Importing Existing Object Tables and Tables That Contain Object Types
Users frequently create tables before importing data to reorganize tablespace usage or change a table's storage parameters. The tables must be created with the same definitions as were previously used or a compatible format (except for storage parameters). For object tables and tables that contain columns of object types, format compatibilities are more restrictive.

For tables containing columns of object types, the same object type must be specified, and that type must have the same object identifier as the original. If the parameter TOID_NOVALIDATE has been set to ignore the object type, the object IDs do not need to match.

Export writes information about the object types used by a table to the export file, including object types from other schemas. At import time, object types from other schemas that are used as top-level columns are verified for matching name and object identifier. Object types from other schemas that are nested within other object types are not verified.

If the object type already exists, its object identifier is verified. If the parameter TOID_NOVALIDATE has been set to ignore the object type, the object IDs do not need to match. Import retains information about what object types it has created, so that if an object type is used by multiple tables, it is created only once.

Note: In all cases, the object type must be compatible in terms of the internal format used for storage. Import does not verify that the internal format of a type is compatible. If the exported data is not compatible, the results can be unpredictable.
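When pre-creating a table that contains an object type column, the type can be created with the same object identifier it had on the export system by using the OID clause of CREATE TYPE. A sketch under hypothetical names; the OID value shown is an illustrative placeholder (on the source database the value can be found in DBA_TYPES.TYPE_OID):

```sql
-- Hypothetical type and illustrative OID: pre-create the type with the
-- object identifier it had on the export system
CREATE TYPE address_typ OID '32A49D20B0BE34F5E030578CA8C022A8' AS OBJECT (
  street VARCHAR2(40),
  city   VARCHAR2(30)
);
/
```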

Importing Nested Tables
Inner nested tables are exported separately from the outer table. Therefore, situations may arise where data in an inner nested table might not be properly imported:

  • Suppose a table with an inner nested table is exported and then imported without dropping the table or removing rows from the table. If the IGNORE=Y parameter is used, there will be a constraint violation when inserting each row in the outer table. However, data in the inner nested table may be successfully imported, resulting in duplicate rows in the inner table.
  • If fatal errors occur inserting data in outer tables, the rest of the data in the outer table is skipped, but the corresponding inner table rows are not skipped. This may result in inner table rows not being referenced by any row in the outer table.
  • If an insert to an inner table fails after a nonfatal error, its outer table row will already have been inserted in the outer table and data will continue to be inserted in it and any other inner tables of the containing table. This circumstance results in a partial logical row.
  • If fatal errors occur inserting data in an inner table, Import skips the rest of that inner table’s data but does not skip the outer table or other nested tables.

You should always carefully examine the log file for errors in outer tables and inner tables. To restore consistency, you may need to modify or delete table data.

Because inner nested tables are imported separately from the outer table, attempts to access data from them while importing may produce unexpected results. For example, if an outer row is accessed before its inner rows are imported, an incomplete row may be returned to the user.
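One way to check for the duplicate inner rows described above is to query the nested table through a TABLE() expression and group on what should be the inner table's natural key. A sketch with hypothetical table, column, and attribute names:

```sql
-- Hypothetical names: report inner rows that appear more than once
-- per outer row, e.g. after a re-import with IGNORE=Y
SELECT p.po_number, l.item_no, COUNT(*)
FROM   purchase_orders p, TABLE(p.line_items) l
GROUP  BY p.po_number, l.item_no
HAVING COUNT(*) > 1;
```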

Importing REF Data
REF columns and attributes may contain a hidden ROWID that points to the referenced type instance. Import does not automatically recompute these ROWIDs for the target database. You should execute the following command to reset the ROWIDs to their proper values:

ANALYZE TABLE [schema.]table VALIDATE REF UPDATE

Importing BFILE Columns and Directory Aliases
Export and Import do not copy data referenced by BFILE columns and attributes from the source database to the target database. Export and Import only propagate the names of the files and the directory aliases referenced by the BFILE columns. It is the responsibility of the DBA or user to move the actual files referenced through BFILE columns and attributes.

When you import table data that contains BFILE columns, the BFILE locator is imported with the directory alias and file name that was present at export time. Import does not verify that the directory alias or file exists. If the directory alias or file does not exist, an error occurs when the user accesses the BFILE data.

For operating system directory aliases, if the directory syntax used in the export system is not valid on the import system, no error is reported at import time. Subsequent access to the file data receives an error.

It is the responsibility of the DBA or user to ensure the directory alias is valid on the import system.
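For example, after moving the operating system files, the directory alias can be recreated on the import system so that it points to the files' new location. A sketch with hypothetical alias, path, and user names:

```sql
-- Hypothetical alias and path: recreate the directory alias on the
-- import system and regrant read access
CREATE OR REPLACE DIRECTORY bfile_dir AS '/u01/app/oracle/bfiles';
GRANT READ ON DIRECTORY bfile_dir TO scott;
```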

Importing Foreign Function Libraries
Import does not verify that the location referenced by the foreign function library is correct. If the formats for directory and file names used in the library's specification on the export file are invalid on the import system, no error is reported at import time. Subsequent usage of the callout functions will receive an error.

It is the responsibility of the DBA or user to manually move the library and ensure the library's specification is valid on the import system.

Importing Stored Procedures, Functions, and Packages
When a local stored procedure, function, or package is imported, it retains its original specification timestamp. The procedure, function, or package is recompiled upon import. If the compilation is successful, it can be accessed by remote procedures without error.

Procedures are exported after tables, views, and synonyms; therefore, they usually compile successfully because all dependencies already exist. However, procedures, functions, and packages are not exported in dependency order. If a procedure, function, or package depends on a procedure, function, or package that is stored later in the Export dump file, it will not compile successfully. Later use of the procedure, function, or package will automatically cause a recompile and, if successful, will change the timestamp. This may cause errors in the remote procedures that call it.
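One way to settle the timestamps deliberately is to recompile the affected objects manually after the import, dependencies first, rather than letting the first remote call trigger the recompile. A sketch with hypothetical object names:

```sql
-- List objects left invalid by the import
SELECT object_name, object_type FROM user_objects WHERE status = 'INVALID';

-- Hypothetical names: recompile dependencies before their callers
ALTER PACKAGE scott.order_api COMPILE;
ALTER PROCEDURE scott.place_order COMPILE;
```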

Importing Java Objects
When a Java source or class is imported, it retains its original resolver (the list of schemas used to resolve Java full names). If the object is imported into a different schema, that resolver may no longer be valid. For example, the default resolver for a Java object in SCOTT’s schema is ((* SCOTT) (* PUBLIC)). If the object is imported into BLAKE’s schema, it may be necessary to alter the object so that the resolver references BLAKE’s schema.
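For example, a class imported into BLAKE's schema could be altered so that its resolver searches BLAKE's schema first (the class name here is hypothetical):

```sql
-- Hypothetical class name: rebuild the resolver for the new schema
ALTER JAVA CLASS "Agent" RESOLVER ((* BLAKE)(* PUBLIC)) RESOLVE;
```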

Importing Advanced Queue (AQ) Tables
Importing a queue also imports any underlying queue tables and the related dictionary tables. A queue can be imported only at the granularity level of the queue table. When a queue table is imported, the pre-table and post-table action procedures maintain the queue dictionary.

Importing LONG Columns
LONG columns can be up to 2 gigabytes in length. In importing and exporting, the LONG columns must fit into memory with the rest of each row’s data. The memory used to store LONG columns, however, does not need to be contiguous because LONG data is loaded in sections.

Importing Views
Views are exported in dependency order. In some cases, Export must determine the ordering itself, rather than obtaining it from the database server. When it does, Export may not always duplicate the correct ordering, resulting in compilation warnings when a view is imported, and in the failure to import column comments on such views.

In particular, if VIEWA uses the stored procedure PROCB and PROCB uses the view VIEWC, Export cannot determine the proper ordering of VIEWA and VIEWC. If VIEWA is exported before VIEWC and PROCB already exists on the import system, VIEWA receives compilation warnings at import time.

Grants on views are imported even if a view has compilation errors. A view could have compilation errors if an object it depends on, such as a table, procedure, or another view, does not exist when the view is created. If a base table does not exist, the server cannot validate that the grantor has the proper privileges on the base table with the GRANT OPTION.

Therefore, access violations could occur when the view is used, if the grantor does not have the proper privileges after the missing tables are created.

Importing views that contain references to tables in other schemas requires that the importer have SELECT ANY TABLE privilege. If the importer has not been granted this privilege, the views will be imported in an uncompiled state. Note that granting the privilege to a role is insufficient. For the view to be compiled, the privilege must be granted directly to the importer.
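For example, the privilege can be granted directly to the importing user, after which any views imported uncompiled can be recompiled (user and view names here are hypothetical):

```sql
-- The grant must go directly to the user; a grant to a role is not enough
GRANT SELECT ANY TABLE TO blake;

-- Hypothetical view: recompile views that were imported uncompiled
ALTER VIEW blake.emp_summary COMPILE;
```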

Importing Tables
Import attempts to create a partitioned table with the same partition or subpartition names as the exported partitioned table, including names of the form SYS_Pnnn. If a table with the same name already exists, Import processing depends on the value of the IGNORE parameter.

Unless SKIP_UNUSABLE_INDEXES=Y is specified, inserting the exported data into the target table fails if Import cannot update a nonpartitioned index or index partition that is marked Index Unusable or is otherwise not suitable.
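A sketch of a parameter file for importing into a pre-created table while skipping unusable indexes (file, user, and table names are hypothetical):

```
USERID=scott/tiger
FILE=expdat.dmp
TABLES=(sales)
IGNORE=y
SKIP_UNUSABLE_INDEXES=y
```

Indexes that were skipped remain unusable after the import and need to be rebuilt afterward, for example with ALTER INDEX ... REBUILD.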
