Oracle® Database Administrator's Guide
10g Release 1 (10.1)

Part Number B10739-01

14 Managing Tables

This chapter describes the various aspects of managing tables.

About Tables

Tables are the basic unit of data storage in an Oracle Database. Data is stored in rows and columns. You define a table with a table name, such as employees, and a set of columns. You give each column a column name, such as employee_id, last_name, and job_id; a datatype, such as VARCHAR2, DATE, or NUMBER; and a width. The width can be predetermined by the datatype, as in DATE. If columns are of the NUMBER datatype, define precision and scale instead of width. A row is a collection of column information corresponding to a single record.

You can specify rules for each column of a table. These rules are called integrity constraints. One example is a NOT NULL integrity constraint. This constraint forces the column to contain a value in every row.

After you create a table, insert rows of data using SQL statements. Table data can then be queried, deleted, or updated using SQL.



Guidelines for Managing Tables

This section describes guidelines to follow when managing tables. Following these guidelines can make the management of your tables easier and can improve performance when creating the table, as well as when loading, updating, and querying the table data.


Design Tables Before Creating Them

Usually, the application developer is responsible for designing the elements of an application, including the tables. Database administrators are responsible for establishing the attributes of the underlying tablespace that will hold the application tables. Either the DBA or the application developer, or both working jointly, can be responsible for the actual creation of the tables, depending on the practices at a site.

Working with the application developer, consider the following guidelines when designing tables:

  • Use descriptive names for tables, columns, indexes, and clusters.

  • Be consistent in abbreviations and in the use of singular and plural forms of table names and columns.

  • Document the meaning of each table and its columns with the COMMENT command.

  • Normalize each table.

  • Select the appropriate datatype for each column.

  • Define columns that allow nulls last, to conserve storage space.

  • Cluster tables whenever appropriate, to conserve storage space and optimize performance of SQL statements.

Before creating a table, you should also determine whether to use integrity constraints. Integrity constraints can be defined on the columns of a table to enforce the business rules of your database automatically.


See Also:

Oracle Database Application Developer's Guide - Fundamentals for information about designing tables

Consider Your Options for the Type of Table to Create

What types of tables can you create? Here are some choices:

  • Ordinary (heap-organized) table: This is the basic, general-purpose type of table and the primary subject of this chapter. Its data is stored as an unordered collection (heap).

  • Clustered table: A clustered table is a table that is part of a cluster. A cluster is a group of tables that share the same data blocks because they share common columns and are often used together. Clusters and clustered tables are discussed in Chapter 17, "Managing Clusters".

  • Index-organized table: Unlike an ordinary (heap-organized) table, data for an index-organized table is stored in a B-tree index structure, sorted by primary key. Besides storing the primary key column values of an index-organized table row, each index entry in the B-tree stores the nonkey column values as well. Index-organized tables are discussed in "Managing Index-Organized Tables".

  • Partitioned table: Partitioned tables allow your data to be broken down into smaller, more manageable pieces called partitions, or even subpartitions. Each partition can be managed individually, and can operate independently of the other partitions, thus providing a structure that can be better tuned for availability and performance. Partitioned tables are discussed in Chapter 16, "Managing Partitioned Tables and Indexes".


Specify How Data Block Space Is to Be Used

By specifying the PCTFREE and PCTUSED parameters during the creation of each table, you can affect the efficiency of space utilization and amount of space reserved for updates to the current data in the data blocks of a table data segment. The PCTFREE and PCTUSED parameters are discussed in "Managing Space in Data Blocks".
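
For example, the following sketch reserves 20% of each block for updates to existing rows and lets a block become available for new inserts again when it falls below 40% used (the table name and parameter values are illustrative only):

CREATE TABLE hr.admin_doc (
     doc_id     NUMBER(10),
     doc_text   VARCHAR2(200))
   PCTFREE 20
   PCTUSED 40;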


Note:

When you create a table in a locally managed tablespace for which automatic segment-space management is enabled, the need to specify the PCTUSED (or FREELISTS) parameter is eliminated, and if specified, is ignored. Automatic segment-space management is specified at the tablespace level. The Oracle Database server automatically and efficiently manages free and used space within objects created in such tablespaces.

Locally managed tablespaces and automatic segment space management are discussed in "Locally Managed Tablespaces".


Specify the Location of Each Table

It is advisable to specify the TABLESPACE clause in a CREATE TABLE statement to identify the tablespace that is to store the new table. Ensure that you have the appropriate privileges and quota on any tablespaces that you use. If you do not specify a tablespace in a CREATE TABLE statement, the table is created in your default tablespace.

When specifying the tablespace to contain a new table, ensure that you understand the implications of your selection. By properly specifying a tablespace during the creation of each table, you can increase the performance of the database system and decrease the time needed for database administration.

The following situations illustrate how not specifying a tablespace, or specifying an inappropriate one, can affect performance:

  • If users' objects are created in the SYSTEM tablespace, the performance of the database can suffer, since both data dictionary objects and user objects must contend for the same datafiles. Users' objects should not be stored in the SYSTEM tablespace. To avoid this, ensure that all users are assigned default tablespaces when they are created in the database.

  • If application-associated tables are arbitrarily stored in various tablespaces, the time necessary to complete administrative operations (such as backup and recovery) for the data of that application can be increased.

Consider Parallelizing Table Creation

You can utilize parallel execution when creating tables using a subquery (AS SELECT) in the CREATE TABLE statement. Because multiple processes work together to create the table, performance of the table creation operation is improved.

Parallelizing table creation is discussed in the section "Parallelizing Table Creation".

Consider Using NOLOGGING When Creating Tables

To create a table most efficiently, use the NOLOGGING clause in the CREATE TABLE ... AS SELECT statement. The NOLOGGING clause causes minimal redo information to be generated during the table creation. This has the following benefits:

  • Space is saved in the redo log files.

  • The time it takes to create the table is decreased.

  • Performance improves for parallel creation of large tables.

The NOLOGGING clause also specifies that subsequent direct loads using SQL*Loader and direct load INSERT operations are not logged. Subsequent DML statements (UPDATE, DELETE, and conventional path insert) are unaffected by the NOLOGGING attribute of the table and generate redo.

If you cannot afford to lose the table after you have created it (for example, you will no longer have access to the data used to create the table), you should take a backup immediately after the table is created. In some situations, such as for tables that are created for temporary use, this precaution may not be necessary.

In general, the relative performance improvement of specifying NOLOGGING is greater for larger tables than for smaller tables. For small tables, NOLOGGING has little effect on the time it takes to create a table. However, for larger tables the performance improvement can be significant, especially when you are also parallelizing the table creation.
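
For example, the following sketch creates a table with minimal redo generation (the new table name is illustrative; hr.employees is the sample schema table used elsewhere in this chapter):

CREATE TABLE hr.admin_emp_copy
   NOLOGGING
   AS SELECT * FROM hr.employees;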

Consider Using Table Compression when Creating Tables

The Oracle Database table compression feature compresses data by eliminating duplicate values in a database block. Duplicate values in all the rows and columns in a block are stored once at the beginning of the block, in what is called a symbol table for that block. All occurrences of such values are replaced with a short reference to the symbol table.

Using table compression reduces disk use and memory use in the buffer cache, often resulting in better scale-up for read-only operations. Table compression can also speed up query execution. There is, however, a slight cost in CPU overhead.

Compression occurs when data is inserted with a bulk (direct-path) insert operation. A table can transparently consist of both compressed and uncompressed blocks. Any DML operation can be applied to a table storing compressed blocks. However, records changed by conventional DML operations are stored uncompressed afterward, and those operations themselves are subject to a small performance overhead due to the nature of table compression.

Consider using table compression when your data is mostly read-only. Do not use table compression for tables that are updated frequently.
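
For example, a mostly read-only history table might be created in compressed form with a bulk load, as in the following sketch (the table name is illustrative):

CREATE TABLE hr.emp_history
   COMPRESS
   AS SELECT * FROM hr.employees;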

Estimate Table Size and Plan Accordingly

Estimate the sizes of tables before creating them, preferably as part of database planning. Knowing the sizes and uses of database tables is an important part of that planning.

You can use the combined estimated size of tables, along with estimates for indexes, undo space, and redo log files, to determine the amount of disk space that is required to hold an intended database. From these estimates, you can make correct hardware purchases.

You can use the estimated size and growth rate of an individual table to better determine the attributes of a tablespace and its underlying datafiles that are best suited for the table. This can enable you to more easily manage the table disk space and improve I/O performance of applications that use the table.

Restrictions to Consider When Creating Tables

Here are some restrictions that may affect your table planning and usage:

  • Tables containing object types cannot be imported into a pre-Oracle8 database.

  • You cannot merge an exported table into a preexisting table having the same name in a different schema.

  • You cannot move types and extent tables to a different schema when the original data still exists in the database.

  • Oracle Database has a limit on the total number of columns that a table can have (and on the total number of attributes that an object type can have). See Oracle Database Reference for this limit.

    Further, when you create a table that contains user-defined type data, the database maps columns of user-defined type to relational columns for storing the user-defined type data. This causes additional relational columns to be created. This results in "hidden" relational columns that are not visible in a DESCRIBE table statement and are not returned by a SELECT * statement. Therefore, when you create an object table, or a relational table with columns of REF, varray, nested table, or object type, be aware that the total number of columns that the database actually creates for the table can be more than those you specify.


    See Also:

    Oracle Database Application Developer's Guide - Object-Relational Features for more information about user-defined types

Creating Tables

To create a new table in your schema, you must have the CREATE TABLE system privilege. To create a table in another user's schema, you must have the CREATE ANY TABLE system privilege. Additionally, the owner of the table must have a quota for the tablespace that contains the table, or the UNLIMITED TABLESPACE system privilege.

Create tables using the SQL statement CREATE TABLE.
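
For example, a DBA might grant the privilege and quota needed by a user to create tables as follows (the user name, tablespace name, and quota amount are illustrative):

GRANT CREATE TABLE TO hr;
ALTER USER hr QUOTA 100M ON admin_tbs;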


Creating a Table

When you issue the following statement, you create a table named admin_emp in the hr schema and store it in the admin_tbs tablespace with an initial extent size of 50K:

CREATE TABLE        hr.admin_emp (
         empno      NUMBER(5) PRIMARY KEY,
         ename      VARCHAR2(15) NOT NULL,
         job        VARCHAR2(10),
         mgr        NUMBER(5),
         hiredate   DATE DEFAULT (sysdate),
         sal        NUMBER(7,2),
         comm       NUMBER(7,2),
         deptno     NUMBER(3) NOT NULL
                     CONSTRAINT admin_dept_fkey REFERENCES hr.departments
                     (department_id))
   TABLESPACE admin_tbs
   STORAGE ( INITIAL 50K);

In this CREATE TABLE statement, integrity constraints are defined on several columns of the table. Integrity constraints are discussed in "Managing Integrity Constraints".


See Also:

Oracle Database SQL Reference for a description of the datatypes that can be specified for columns

Creating a Temporary Table

It is also possible to create a temporary table. The definition of a temporary table is visible to all sessions, but the data in a temporary table is visible only to the session that inserts the data into the table. Use the CREATE GLOBAL TEMPORARY TABLE statement to create a temporary table. The ON COMMIT clause indicates whether the data in the table is transaction-specific (the default) or session-specific, the implications of which are as follows:

  • DELETE ROWS: This creates a temporary table that is transaction specific. A session becomes bound to the temporary table with a transaction's first insert into the table. The binding goes away at the end of the transaction. The database truncates the table (deletes all rows) after each commit.

  • PRESERVE ROWS: This creates a temporary table that is session specific. A session becomes bound to the temporary table with the first insert into the table in the session. This binding goes away at the end of the session or by issuing a TRUNCATE of the table in the session. The database truncates the table when you terminate the session.

Temporary tables are useful in applications where a result set is to be buffered, perhaps because it is constructed by running multiple DML operations. For example, consider the following:

A Web-based airlines reservations application allows a customer to create several optional itineraries. Each itinerary is represented by a row in a temporary table. The application updates the rows to reflect changes in the itineraries. When the customer decides which itinerary she wants to use, the application moves the row for that itinerary to a persistent table.

During the session, the itinerary data is private. At the end of the session, the optional itineraries are dropped.

This statement creates a temporary table that is transaction specific:

CREATE GLOBAL TEMPORARY TABLE admin_work_area
        (startdate DATE,
         enddate DATE,
         class CHAR(20))
      ON COMMIT DELETE ROWS;
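
By contrast, the following sketch creates a session-specific temporary table (the table name is illustrative):

CREATE GLOBAL TEMPORARY TABLE admin_session_area
        (startdate DATE,
         enddate DATE,
         class CHAR(20))
      ON COMMIT PRESERVE ROWS;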

Indexes can be created on temporary tables. They are also temporary, and the data in the index has the same session or transaction scope as the data in the underlying table.

Unlike permanent tables, temporary tables and their indexes do not automatically allocate a segment when they are created. Instead, segments are allocated when the first INSERT (or CREATE TABLE AS SELECT) is performed. This means that if a SELECT, UPDATE, or DELETE is performed before the first INSERT, the table appears to be empty.

DDL operations (except TRUNCATE) are allowed on an existing temporary table only if no session is currently bound to that temporary table.

If you roll back a transaction, the data you entered is lost, although the table definition persists.

A transaction-specific temporary table allows only one transaction at a time. If there are several autonomous transactions in a single transaction scope, each autonomous transaction can use the table only after the previous one commits.

Because the data in a temporary table is, by definition, temporary, backup and recovery of temporary table data is not available in the event of a system failure. To prepare for such a failure, you should develop alternative methods for preserving temporary table data.

Parallelizing Table Creation

When you specify the AS SELECT clause to create a table and populate it with data from another table, you can utilize parallel execution. The CREATE TABLE ... AS SELECT statement contains two parts: a CREATE part (DDL) and a SELECT part (query). Oracle Database can parallelize both parts of the statement. The CREATE part is parallelized if one of the following is true:

  • A PARALLEL clause is included in the CREATE TABLE ... AS SELECT statement

  • An ALTER SESSION FORCE PARALLEL DDL statement is specified

The query part is parallelized if all of the following are true:

  • The query includes a parallel hint specification (PARALLEL or PARALLEL_INDEX), the CREATE part includes the PARALLEL clause, or the schema objects referred to in the query have a PARALLEL declaration associated with them.

  • At least one of the tables specified in the query requires either a full table scan or an index range scan spanning multiple partitions.

If you parallelize the creation of a table, that table then has a parallel declaration (the PARALLEL clause) associated with it. Any subsequent DML or queries on the table, for which parallelization is possible, will attempt to use parallel execution.

The following simple statement parallelizes the creation of a table and stores the result in a compressed format, using table compression:

CREATE TABLE hr.admin_emp_dept
     PARALLEL COMPRESS
     AS SELECT * FROM hr.employees
     WHERE department_id = 10;

In this case, the PARALLEL clause tells the database to select an optimum number of parallel execution servers when creating the table.




Inserting Data Into Tables Using Direct-Path INSERT

There are several means of inserting or initially loading data into your tables. The most commonly used methods are the following:

  • SQL*Loader: This Oracle utility program loads data from external files into tables of an Oracle Database. For information about SQL*Loader, see Oracle Database Utilities.

  • CREATE TABLE ... AS SELECT statement (CTAS): Using this SQL statement you can create a table and populate it with data selected from another existing table.

  • INSERT statement: The INSERT statement enables you to add rows to a table, either by specifying the column values or by selecting data from another existing table.

  • MERGE statement: The MERGE statement enables you to insert rows into or update rows of a table, by selecting rows from another existing table. If a row in the new data corresponds to an item that already exists in the table, then an UPDATE is performed; otherwise, an INSERT is performed.

Oracle Database inserts data into a table in one of two ways:

  • During conventional INSERT operations, the database reuses free space in the table, interleaving newly inserted data with existing data, and maintains any referential integrity constraints.

  • During direct-path INSERT operations, the database appends the inserted data after the existing data in the table, without reusing existing free space.

Further, the data can be inserted either in serial mode, where one process executes the statement, or in parallel mode, where multiple processes work together simultaneously to run a single SQL statement. The latter is referred to as parallel execution.

This section discusses one aspect of inserting data into tables: using the direct-path form of the INSERT statement.

Advantages of Using Direct-Path INSERT

The following are performance benefits of direct-path INSERT:

  • During direct-path INSERT, you can disable the logging of redo and undo entries. Conventional insert operations, in contrast, must always log such entries, because those operations reuse free space and maintain referential integrity.

  • To create a new table with data from an existing table, you have the choice of creating the new table and then inserting into it, or executing a CREATE TABLE ... AS SELECT statement. By creating the table and then using direct-path INSERT operations, you update any indexes defined on the target table during the insert operation. The table resulting from a CREATE TABLE ... AS SELECT statement, in contrast, does not have any indexes defined on it; you must define them later.

  • Direct-path INSERT operations ensure atomicity of the transaction, even when run in parallel mode. Atomicity cannot be guaranteed during parallel direct-path loads (using SQL*Loader).

  • If errors occur during parallel direct-path loads, some indexes could be marked UNUSABLE at the end of the load. Parallel direct-path INSERT, in contrast, rolls back the statement if errors occur during index update.

  • Direct-path INSERT must be used if you want to store the data in compressed form using table compression.

Enabling Direct-Path INSERT

You can implement direct-path INSERT operations by using direct-path INSERT statements, by inserting data in parallel mode, or by using the Oracle SQL*Loader utility in direct-path mode. Direct-path inserts can be done in either serial or parallel mode.

To activate direct-path INSERT in serial mode, you must specify the APPEND hint in each INSERT statement, either immediately after the INSERT keyword, or immediately after the SELECT keyword in the subquery of the INSERT statement.
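
For example, assuming the hr.admin_emp_dept table created earlier in this chapter, a serial direct-path insert might look like the following sketch (the query predicate is illustrative):

INSERT /*+ APPEND */ INTO hr.admin_emp_dept
   SELECT * FROM hr.employees
   WHERE department_id = 20;
COMMIT;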

When you are inserting in parallel DML mode, direct-path INSERT is the default. In order to run in parallel DML mode, the following requirements must be met:

  • You must have Oracle Enterprise Edition installed.

  • You must enable parallel DML in your session. To do this, run the following statement:

    ALTER SESSION { ENABLE | FORCE } PARALLEL DML;
    
    
  • You must specify the parallel attribute for the target table, either at create time or subsequently, or you must specify the PARALLEL hint for each insert operation.

To disable direct-path INSERT, specify the NOAPPEND hint in each INSERT statement. Doing so overrides parallel DML mode.


Notes:

  • Direct-path INSERT supports only the subquery syntax of the INSERT statement, not the VALUES clause. For more information on the subquery syntax of INSERT statements, see Oracle Database SQL Reference.

  • There are some additional restrictions for using direct-path INSERT. These are listed in the Oracle Database SQL Reference.



See Also:

Oracle Database Performance Tuning Guide for more information on using hints

How Direct-Path INSERT Works

You can use direct-path INSERT on both partitioned and nonpartitioned tables.

Serial Direct-Path INSERT into Partitioned or Nonpartitioned Tables

The single process inserts data beyond the current high-water mark of the table segment or of each partition segment. (The high-water mark is the level at which blocks have never been formatted to receive data.) When a COMMIT runs, the high-water mark is updated to the new value, making the data visible to users.

Parallel Direct-Path INSERT into Partitioned Tables

This situation is analogous to serial direct-path INSERT. Each parallel execution server is assigned one or more partitions, with no more than one process working on a single partition. Each parallel execution server inserts data beyond the current high-water mark of its assigned partition segment(s). When a COMMIT runs, the high-water mark of each partition segment is updated to its new value, making the data visible to users.

Parallel Direct-Path INSERT into Nonpartitioned Tables

Each parallel execution server allocates a new temporary segment and inserts data into that temporary segment. When a COMMIT runs, the parallel execution coordinator merges the new temporary segments into the primary table segment, where it is visible to users.

Specifying the Logging Mode for Direct-Path INSERT

Direct-path INSERT lets you choose whether to log redo and undo information during the insert operation.

  • You can specify logging mode for a table, partition, index, or LOB storage at create time (in a CREATE statement) or subsequently (in an ALTER statement).

  • If you do not specify either LOGGING or NOLOGGING at these times:

    • The logging attribute of a partition defaults to the logging attribute of its table.

    • The logging attribute of a table or index defaults to the logging attribute of the tablespace in which it resides.

    • The logging attribute of LOB storage defaults to LOGGING if you specify CACHE for LOB storage. If you do not specify CACHE, then the logging attribute defaults to that of the tablespace in which the LOB values reside.

  • You set the logging attribute of a tablespace in a CREATE TABLESPACE or ALTER TABLESPACE statement.


    Note:

    If the database or tablespace is in FORCE LOGGING mode, then direct path INSERT always logs, regardless of the logging setting.
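
For example, the logging attribute can be specified at create time or changed later, as in the following illustrative statements (the table and tablespace names are assumptions):

CREATE TABLE hr.admin_emp_stage (empno NUMBER(5)) NOLOGGING;

ALTER TABLE hr.admin_emp_stage LOGGING;

ALTER TABLESPACE admin_tbs NOLOGGING;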

Direct-Path INSERT with Logging

In this mode, Oracle Database performs full redo logging for instance and media recovery. If the database is in ARCHIVELOG mode, then you can archive redo logs to tape. If the database is in NOARCHIVELOG mode, then you can recover instance crashes but not disk failures.

Direct-Path INSERT without Logging

In this mode, Oracle Database inserts data without redo or undo logging. (Some minimal logging is done to mark new extents invalid, and data dictionary changes are always logged.) This mode improves performance. However, if you subsequently must perform media recovery, the extent invalidation records mark a range of blocks as logically corrupt, because no redo data was logged for them. Therefore, it is important that you back up the data after such an insert operation.

Additional Considerations for Direct-Path INSERT

The following are some additional considerations when using direct-path INSERT.

Index Maintenance with Direct-Path INSERT

Oracle Database performs index maintenance at the end of direct-path INSERT operations on tables (partitioned or nonpartitioned) that have indexes. This index maintenance is performed by the parallel execution servers for parallel direct-path INSERT or by the single process for serial direct-path INSERT. You can avoid the performance impact of index maintenance by dropping the index before the INSERT operation and then rebuilding it afterward.

Space Considerations with Direct-Path INSERT

Direct-path INSERT requires more space than conventional-path INSERT, because direct-path INSERT does not use existing space in the free lists of the segment.

All serial direct-path INSERT operations, as well as parallel direct-path INSERT into partitioned tables, insert data above the high-water mark of the affected segment. This requires some additional space.

Parallel direct-path INSERT into nonpartitioned tables requires even more space, because it creates a temporary segment for each degree of parallelism. If the nonpartitioned table is not in a locally managed tablespace in automatic segment-space management mode, you can modify the values of the NEXT and PCTINCREASE storage parameters and the MINIMUM EXTENT tablespace parameter to provide sufficient (but not excess) storage for the temporary segments. Choose values for these parameters so that:

  • The size of each extent is not too small (no less than 1 MB). This setting affects the total number of extents in the object.

  • The size of each extent is not so large that the parallel INSERT results in wasted space on segments that are larger than necessary.

After the direct-path INSERT operation is complete, you can reset these parameters to settings more appropriate for serial operations.

Locking Considerations with Direct-Path INSERT

During direct-path INSERT, the database obtains exclusive locks on the table (or on all partitions of a partitioned table). As a result, users cannot perform any concurrent insert, update, or delete operations on the table, and concurrent index creation and build operations are not permitted. Concurrent queries, however, are supported; such queries return only the information that existed in the table before the insert operation began.

Automatically Collecting Statistics on Tables

The PL/SQL package DBMS_STATS lets you generate and manage statistics for cost-based optimization. You can use this package to gather, modify, view, export, import, and delete statistics. You can also use this package to identify or name statistics that have been gathered.

Formerly, you enabled DBMS_STATS to automatically gather statistics for a table by specifying the MONITORING keyword in the CREATE (or ALTER) TABLE statement. Starting with Oracle Database 10g, the MONITORING and NOMONITORING keywords have been deprecated and statistics are collected automatically. If you do specify these keywords, they are ignored.

Monitoring tracks the approximate number of INSERT, UPDATE, and DELETE operations for the table since the last time statistics were gathered. Information about how many rows are affected is maintained in the SGA, until periodically (about every three hours) SMON incorporates the data into the data dictionary. This data dictionary information is made visible through the DBA_TAB_MODIFICATIONS, ALL_TAB_MODIFICATIONS, or USER_TAB_MODIFICATIONS views. The database uses these views to identify tables with stale statistics.

To disable monitoring of a table, set the STATISTICS_LEVEL initialization parameter to BASIC. Its default is TYPICAL, which enables automatic statistics collection. Automatic statistics collection and the DBMS_STATS package enable the optimizer to generate accurate execution plans.
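
For example, you can still gather statistics for a single table manually, as in the following sketch (the schema and table names are those of the running example):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS('HR', 'ADMIN_EMP');
END;
/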




Altering Tables

You alter a table using the ALTER TABLE statement. To alter a table, the table must be contained in your schema, or you must have either the ALTER object privilege for the table or the ALTER ANY TABLE system privilege.

Many of the usages of the ALTER TABLE statement are presented in the following sections.

Reasons for Using the ALTER TABLE Statement

You can use the ALTER TABLE statement to perform any of the following actions that affect a table:

  • Modify physical characteristics (PCTFREE, PCTUSED, INITRANS, or storage parameters)

  • Move the table to a new segment or tablespace

  • Explicitly allocate an extent or deallocate unused space

  • Add, drop, or rename columns, or modify an existing column definition (datatype, length, default value, and NOT NULL integrity constraint)

  • Modify the logging attributes of the table

  • Modify the CACHE/NOCACHE attributes

  • Add, modify or drop integrity constraints associated with the table

  • Enable or disable integrity constraints or triggers associated with the table

  • Modify the degree of parallelism for the table

  • Rename a table

  • Add or modify index-organized table characteristics

  • Alter the characteristics of an external table

  • Add or modify LOB columns

  • Add or modify object type, nested table, or varray columns

Many of these operations are discussed in succeeding sections.

Altering Physical Attributes of a Table

When altering the data block space usage parameters (PCTFREE and PCTUSED) of a table, note that new settings apply to all data blocks used by the table, including blocks already allocated and blocks subsequently allocated for the table. However, the blocks already allocated for the table are not immediately reorganized when space usage parameters are altered; they are reorganized as necessary after the change. The data block storage parameters are described in "Managing Space in Data Blocks".

When altering the transaction entry setting INITRANS of a table, note that a new setting for INITRANS applies only to data blocks subsequently allocated for the table. To better understand this transaction entry setting parameter, see "Specifying the INITRANS Parameter".

The storage parameters INITIAL and MINEXTENTS cannot be altered. All new settings for the other storage parameters (for example, NEXT, PCTINCREASE) affect only extents subsequently allocated for the table. The size of the next extent allocated is determined by the current values of NEXT and PCTINCREASE, and is not based on previous values of these parameters. Storage parameters are discussed in "Managing Storage Parameters".
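
For example, the following sketch alters several of these parameters in one statement (the values are illustrative only):

ALTER TABLE hr.admin_emp
      PCTFREE 30
      PCTUSED 60
      INITRANS 2
      STORAGE (NEXT 100K PCTINCREASE 0);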

Moving a Table to a New Segment or Tablespace

The ALTER TABLE ... MOVE statement enables you to relocate data of a nonpartitioned table or of a partition of a partitioned table into a new segment, and optionally into a different tablespace for which you have quota. This statement also lets you modify any of the storage attributes of the table or partition, including those which cannot be modified using ALTER TABLE. You can also use the ALTER TABLE ... MOVE statement with the COMPRESS keyword to store the new segment using table compression.

The following statement moves the hr.admin_emp table to a new segment, specifying new storage parameters:

ALTER TABLE hr.admin_emp MOVE
      STORAGE ( INITIAL 20K
                NEXT 40K
                MINEXTENTS 2
                MAXEXTENTS 20
                PCTINCREASE 0 );

Moving a table changes the rowids of the rows in the table. This causes indexes on the table to be marked UNUSABLE, and DML accessing the table using these indexes will receive an ORA-01502 error. The indexes on the table must be dropped or rebuilt. Likewise, any statistics for the table become invalid and new statistics should be collected after moving the table.

If the table includes LOB columns, you can also use this statement to move the LOB data and LOB index segments associated with the table, provided that you explicitly specify them. If you do not specify them, the default is to not move the LOB data and LOB index segments.
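
As another illustrative sketch, the following statement moves the table into a different tablespace and stores the new segment in compressed form (the tablespace name is an assumption):

ALTER TABLE hr.admin_emp MOVE
      TABLESPACE admin_tbs2
      COMPRESS;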

Manually Allocating Storage for a Table

Oracle Database dynamically allocates additional extents for the data segment of a table, as required. However, perhaps you want to allocate an additional extent for a table explicitly. For example, in an Oracle Real Application Clusters environment, an extent of a table can be allocated explicitly for a specific instance.

A new extent can be allocated for a table using the ALTER TABLE ... ALLOCATE EXTENT clause.

You can also explicitly deallocate unused space using the DEALLOCATE UNUSED clause of ALTER TABLE. This is described in "Reclaiming Unused Space".
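
For example (the extent size, instance number, and KEEP amount are illustrative):

ALTER TABLE hr.admin_emp ALLOCATE EXTENT (SIZE 5K INSTANCE 4);

ALTER TABLE hr.admin_emp DEALLOCATE UNUSED KEEP 100K;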


See Also:

Oracle Real Application Clusters Administrator's Guide for information about using the ALLOCATE EXTENT clause in an Oracle Real Application Clusters environment

Modifying an Existing Column Definition

Use the ALTER TABLE ... MODIFY statement to modify an existing column definition. You can modify column datatype, default value, or column constraint.

You can increase the length of an existing column, or decrease it, if all existing data satisfies the new length. You can change a column from byte semantics to CHAR semantics or vice versa. You must set the initialization parameter BLANK_TRIMMING=TRUE to decrease the length of a nonempty CHAR column.

If you are modifying a table to increase the length of a column of datatype CHAR, realize that this can be a time consuming operation and can require substantial additional storage, especially if the table contains many rows. This is because the CHAR value in each row must be blank-padded to satisfy the new column length.
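
For example, the following sketch lengthens the job column of the hr.admin_emp table and gives it a default value (the new length and default are illustrative):

ALTER TABLE hr.admin_emp
      MODIFY (job VARCHAR2(20) DEFAULT 'CLERK');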


See Also:

Oracle Database SQL Reference for additional information about modifying table columns and additional restrictions

Adding Table Columns

To add a column to an existing table, use the ALTER TABLE ... ADD statement.

The following statement alters the hr.admin_emp table to add a new column named bonus:

ALTER TABLE hr.admin_emp
      ADD (bonus NUMBER (7,2));

If a new column is added to a table, the column is initially NULL unless you specify the DEFAULT clause. When you specify a default value, the database updates each row in the new column with the value specified. Specifying a DEFAULT value is not supported for tables using table compression.

You can add a column with a NOT NULL constraint to a table only if the table does not contain any rows, or you specify a default value.
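
For example, the following sketch adds a NOT NULL column by supplying a default value for the existing rows (the column name and default are illustrative):

ALTER TABLE hr.admin_emp
      ADD (status VARCHAR2(10) DEFAULT 'ACTIVE' NOT NULL);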


See Also:

Oracle Database SQL Reference for additional information about adding table columns and additional restrictions

Renaming Table Columns

Oracle Database lets you rename existing columns in a table. Use the RENAME COLUMN clause of the ALTER TABLE statement to rename a column. The new name must not conflict with the name of any existing column in the table. No other clauses are allowed in conjunction with the RENAME COLUMN clause.

The following statement renames the comm column of the hr.admin_emp table.

ALTER TABLE hr.admin_emp
      RENAME COLUMN comm TO commission;

As noted earlier, altering a table column can invalidate dependent objects. However, when you rename a column, the database updates associated data dictionary tables to ensure that function-based indexes and check constraints remain valid.

Oracle Database also lets you rename column constraints. This is discussed in "Renaming Constraints".


Note:

The RENAME TO clause of ALTER TABLE appears similar in syntax to the RENAME COLUMN clause, but is used for renaming the table itself.
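
For example, the following illustrative statement renames the table itself (it is not applied to the running example, which continues to use the name admin_emp):

ALTER TABLE hr.admin_emp RENAME TO admin_employees;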

Dropping Table Columns

You can drop columns that are no longer needed from a table, including an index-organized table. This provides a convenient means to free space in a database, and avoids your having to export/import data and then re-create indexes and constraints.

You cannot drop all columns from a table, nor can you drop columns from a table owned by SYS. Any attempt to do so results in an error.


See Also:

Oracle Database SQL Reference for information about additional restrictions and options for dropping columns from a table

Removing Columns from Tables

When you issue an ALTER TABLE ... DROP COLUMN statement, the column descriptor and the data associated with the target column are removed from each row in the table. You can drop multiple columns with one statement. The ALTER TABLE ... DROP COLUMN statement is not supported for tables using table compression.

The following statements are examples of dropping columns from the hr.admin_emp table. The first statement drops only the sal column:

ALTER TABLE hr.admin_emp DROP COLUMN sal;

The next statement drops both the bonus and commission columns:

ALTER TABLE hr.admin_emp DROP (bonus, commission);

Marking Columns Unused

If you are concerned about the length of time it could take to drop column data from all of the rows in a large table, you can use the ALTER TABLE ... SET UNUSED statement. This statement marks one or more columns as unused, but does not actually remove the target column data or restore the disk space occupied by these columns. However, a column that is marked as unused is not displayed in queries or data dictionary views, and its name is removed so that a new column can reuse that name. All constraints, indexes, and statistics defined on the column are also removed.

To mark the hiredate and mgr columns as unused, execute the following statement:

ALTER TABLE hr.admin_emp SET UNUSED (hiredate, mgr);

You can later remove columns that are marked as unused by issuing an ALTER TABLE ... DROP UNUSED COLUMNS statement. Unused columns are also removed from the target table whenever an explicit drop of any particular column or columns of the table is issued.

The data dictionary views USER_UNUSED_COL_TABS, ALL_UNUSED_COL_TABS, or DBA_UNUSED_COL_TABS can be used to list all tables containing unused columns. The COUNT field shows the number of unused columns in the table.

SELECT * FROM DBA_UNUSED_COL_TABS;

OWNER                       TABLE_NAME                  COUNT
--------------------------- --------------------------- -----
HR                          ADMIN_EMP                       2

Removing Unused Columns

The ALTER TABLE ... DROP UNUSED COLUMNS statement is the only action allowed on unused columns. It physically removes unused columns from the table and reclaims disk space.

In the ALTER TABLE statement that follows, the optional clause CHECKPOINT is specified. This clause causes a checkpoint to be applied after processing the specified number of rows, in this case 250. Checkpointing cuts down on the amount of undo logs accumulated during the drop column operation to avoid a potential exhaustion of undo space.

ALTER TABLE hr.admin_emp DROP UNUSED COLUMNS CHECKPOINT 250;

Redefining Tables Online

In highly available systems, it is occasionally necessary to redefine large "hot" tables to improve the performance of queries or DML performed against these tables. The database provides a mechanism to redefine tables online. This mechanism provides a significant increase in availability compared to traditional methods of redefining tables, which require tables to be taken offline.

When a table is redefined online, it is accessible to DML during much of the redefinition process. The table is locked in exclusive mode only during a very small window that is independent of the size of the table and the complexity of the redefinition.


Features of Online Table Redefinition

Online table redefinition enables you to:

  • Modify the storage parameters of the table

  • Move the table to a different tablespace in the same schema

  • Add support for parallel queries

  • Add or drop partitioning support

  • Re-create the table to reduce fragmentation

  • Change the organization of a normal table (heap organized) to an index-organized table and vice versa

  • Add or drop a column

The DBMS_REDEFINITION Package

The mechanism for performing online redefinition is the PL/SQL package DBMS_REDEFINITION. Execute privileges on this package are granted to EXECUTE_CATALOG_ROLE. In addition to having execute privileges on this package, you must be granted the following privileges:

  • CREATE ANY TABLE

  • ALTER ANY TABLE

  • DROP ANY TABLE

  • LOCK ANY TABLE

  • SELECT ANY TABLE

  • CREATE ANY TRIGGER

  • CREATE ANY INDEX


    See Also:

    PL/SQL Packages and Types Reference for a description of the DBMS_REDEFINITION package

Steps for Online Redefinition of Tables

To perform an online redefinition of a table, perform the following steps.

  1. Choose one of the following two methods of redefinition:

    • The first method of redefinition is to use the primary keys or pseudo-primary keys to perform the redefinition. Pseudo-primary keys are unique keys with all component columns having NOT NULL constraints. For this method, the versions of the tables before and after redefinition should have the same primary key columns. This is the preferred and default method of redefinition.

    • The second method of redefinition is to use rowids. For this method, the table to be redefined must not be an index-organized table. Also, in this method of redefinition, a hidden column named M_ROW$$ is added to the post-redefined version of the table, and it is recommended that this column be marked as unused or dropped after the redefinition is completed.

  2. Verify that the table can be online redefined by invoking the DBMS_REDEFINITION.CAN_REDEF_TABLE() procedure, using the OPTIONS_FLAG parameter to specify the method of redefinition to be used. If the table is not a candidate for online redefinition, then this procedure raises an error indicating why the table cannot be online redefined.

  3. Create an empty interim table (in the same schema as the table to be redefined) with all of the desired attributes. If columns are to be dropped, do not include them in the definition of the interim table. If a column is to be added, then add the column definition to the interim table.

    It is possible to perform table redefinition in parallel. If you specify a degree of parallelism on both of the tables and you ensure that parallel execution is enabled for the session, the database will use parallel execution whenever possible to perform the redefinition. You can use the PARALLEL clause of the ALTER SESSION statement to enable parallel execution.

  4. Start the redefinition process by calling DBMS_REDEFINITION.START_REDEF_TABLE(), providing the following:

    • The table to be redefined

    • The interim table name

    • The column mapping

    • The method of redefinition

    • Optionally, the columns to be used in ordering rows

    If the column mapping information is not supplied, then it is assumed that all the columns (with their names unchanged) are to be included in the interim table. If the column mapping is supplied, then only those columns specified explicitly in the column mapping are considered. If the method of redefinition is not specified, then the default method of redefinition using primary keys is assumed.

    You can optionally specify the ORDERBY_COLS parameter to specify how rows should be ordered during the initial instantiation of the interim table.

  5. You have two methods for creating (cloning) dependent objects such as triggers, indexes, grants, and constraints on the interim table. Method 1 is the most automatic and preferred method, but there may be times when you would choose to use method 2.

    • Method 1: Automatically Creating Dependent Objects

      Use the COPY_TABLE_DEPENDENTS procedure to automatically create dependent objects such as triggers, indexes, grants, and constraints on the interim table. This procedure also registers the dependent objects. Registering the dependent objects enables the identities of these objects and their cloned counterparts to be automatically swapped later as part of the redefinition completion process. The result is that when the redefinition is completed, the names of the dependent objects will be the same as the names of the original dependent objects.

      You can discover if errors occurred while copying dependent objects by checking the NUM_ERRORS output variable. If the IGNORE_ERRORS parameter is set to TRUE, the COPY_TABLE_DEPENDENTS procedure continues cloning dependent objects even if an error is encountered when creating an object. The errors can later be viewed by querying the DBA_REDEFINITION_ERRORS view. Reasons for errors include a lack of system resources or a change in the logical structure of the table.

      If IGNORE_ERRORS is set to FALSE, the COPY_TABLE_DEPENDENTS procedure stops cloning objects as soon as any error is encountered.

      After you correct any errors you can attempt again to clone the failing object or objects by reexecuting the COPY_TABLE_DEPENDENTS procedure. Optionally you can create the objects manually and then register them as explained in method 2.

      The COPY_TABLE_DEPENDENTS procedure can be used multiple times as necessary. If an object has already been successfully cloned, the procedure ignores the operation.

    • Method 2: Manually Creating Dependent Objects

      You can manually create dependent objects on the interim table.


      Note:

      In previous releases you were required to manually create the triggers, indexes, grants, and constraints on the interim table, and there may still be situations where you want to or must do so. In such cases, any referential constraints involving the interim table (that is, where the interim table is either a parent or a child table of the referential constraint) must be created disabled. Until the redefinition process is either completed or aborted, any trigger defined on the interim table will not execute.

      Use the REGISTER_DEPENDENT_OBJECT procedure after you create dependent objects manually. You can also use the COPY_TABLE_DEPENDENTS procedure to do the registration. Note that the COPY_TABLE_DEPENDENTS procedure does not clone objects that are registered manually.

      You would also use the REGISTER_DEPENDENT_OBJECT procedure if the COPY_TABLE_DEPENDENTS procedure failed to copy a dependent object and manual intervention is required.

      You can query the DBA_REDEFINITION_OBJECTS view to determine which dependent objects are registered. This view shows dependent objects that were registered explicitly with the REGISTER_DEPENDENT_OBJECT procedure or implicitly with the COPY_TABLE_DEPENDENTS procedure. Only current information is shown in the view.

      The UNREGISTER_DEPENDENT_OBJECT procedure can be used to unregister a dependent object on the table being redefined and on the interim table.

  6. Execute the DBMS_REDEFINITION.FINISH_REDEF_TABLE procedure to complete the redefinition of the table. During this procedure, the original table is locked in the exclusive mode for a very short time, independent of the amount of data in the original table. However, FINISH_REDEF_TABLE will wait for all pending DML that was initiated before it was invoked to commit before completing the redefinition.

    As a result of this procedure, the following occur:

    1. The original table is redefined such that it has all the attributes, indexes, constraints, grants, and triggers of the interim table.

    2. The referential constraints involving the interim table now involve the post-redefined table and are enabled.

    3. Dependent objects that were registered, either explicitly using REGISTER_DEPENDENT_OBJECT or implicitly using COPY_TABLE_DEPENDENTS, are renamed automatically.


      Note:

      If no registration is done or no automatic cloning is done, then you must manually rename the dependent objects.

  7. If the redefinition was done using rowids, the post-redefined table will have a hidden column (M_ROW$$) and it is recommended that the user set this hidden column to unused as follows:

    ALTER TABLE table_name SET UNUSED (M_ROW$$);
    
    

The following is the end result of the redefinition process:

  • The original table is redefined with the attributes and features of the interim table.

  • The triggers, grants, indexes and constraints defined on the interim table after START_REDEF_TABLE() and before FINISH_REDEF_TABLE() are now defined on the post-redefined table. Any referential constraints involving the interim table before the redefinition process was finished now involve the post-redefinition table and are enabled.

  • Any indexes, triggers, grants and constraints defined on the original table (prior to redefinition) are transferred to the interim table and are dropped when the user drops the interim table. Any referential constraints involving the original table before the redefinition now involve the interim table and are disabled.

  • Any PL/SQL procedures and cursors defined on the original table (prior to redefinition) are invalidated. They are automatically revalidated (this revalidation can fail if the shape of the table was changed as a result of the redefinition process) whenever they are used next.

Intermediate Synchronization

After the redefinition process has been started by calling START_REDEF_TABLE() and before FINISH_REDEF_TABLE() has been called, it is possible that a large number of DML statements have been executed on the original table. If you know this is the case, it is recommended that you periodically synchronize the interim table with the original table. This is done by calling the DBMS_REDEFINITION.SYNC_INTERIM_TABLE() procedure. Calling this procedure reduces the time taken by FINISH_REDEF_TABLE() to complete the redefinition process.

The small amount of time that the original table is locked during FINISH_REDEF_TABLE() is independent of whether SYNC_INTERIM_TABLE() has been called.

Terminate and Clean Up After Errors

In the event that an error is raised during the redefinition process, or if you choose to terminate the redefinition process, call DBMS_REDEFINITION.ABORT_REDEF_TABLE(). This procedure drops temporary logs and tables associated with the redefinition process. After this procedure is called, you can drop the interim table and its associated objects.
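
For example, the following sketch aborts a redefinition of the hr.admin_emp table, using the interim table from the example in the next section:

BEGIN
DBMS_REDEFINITION.ABORT_REDEF_TABLE('hr', 'admin_emp', 'int_admin_emp');
END;
/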

Example of Online Table Redefinition

This example illustrates online redefinition of the previously created table hr.admin_emp, which at this point contains only the columns empno, ename, job, and deptno. The table is redefined as follows:

  • New columns mgr, hiredate, sal, and bonus (these existed in the original table but were dropped in previous examples) are added.

  • The new column bonus is initialized to 0.

  • The column deptno has its value increased by 10.

  • The redefined table is partitioned by range on empno.

The steps in this redefinition are illustrated below.

  1. Verify that the table is a candidate for online redefinition. In this case you specify that the redefinition is to be done using primary keys or pseudo-primary keys.

    BEGIN
    DBMS_REDEFINITION.CAN_REDEF_TABLE('hr','admin_emp',
            dbms_redefinition.cons_use_pk);
    END;
    /
    
    
  2. Create an interim table hr.int_admin_emp.

    CREATE TABLE hr.int_admin_emp
           (empno      NUMBER(5) PRIMARY KEY,
             ename      VARCHAR2(15) NOT NULL,
             job        VARCHAR2(10),
             mgr        NUMBER(5),
             hiredate   DATE DEFAULT (sysdate),
             sal        NUMBER(7,2),
             deptno     NUMBER(3) NOT NULL,
             bonus      NUMBER (7,2) DEFAULT(1000))
         PARTITION BY RANGE(empno)
           (PARTITION emp1000 VALUES LESS THAN (1000) TABLESPACE admin_tbs,
            PARTITION emp2000 VALUES LESS THAN (2000) TABLESPACE admin_tbs2);
    
    
  3. Start the redefinition process.

    BEGIN
    DBMS_REDEFINITION.START_REDEF_TABLE('hr', 'admin_emp','int_admin_emp',
           'empno empno, ename ename, job job, deptno+10 deptno, 0 bonus',
            dbms_redefinition.cons_use_pk);
    END;
    /
    
    
  4. Automatically create any triggers, indexes and constraints on hr.int_admin_emp.

    DECLARE
    num_errors PLS_INTEGER;
    BEGIN
    DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('hr', 'admin_emp','int_admin_emp',
           DBMS_REDEFINITION.CONS_ORIG_PARAMS, TRUE, TRUE, TRUE, FALSE, num_errors);
    END;
    /
    
    
  5. Optionally, synchronize the interim table hr.int_admin_emp.

    BEGIN 
    DBMS_REDEFINITION.SYNC_INTERIM_TABLE('hr', 'admin_emp', 'int_admin_emp');
    END;
    /
    
    
  6. Complete the redefinition.

    BEGIN
    DBMS_REDEFINITION.FINISH_REDEF_TABLE('hr', 'admin_emp', 'int_admin_emp');
    END;
    /
    
    

    The table hr.admin_emp is locked in the exclusive mode only for a small window toward the end of this step. After this call the table hr.admin_emp is redefined such that it has all the attributes of the hr.int_admin_emp table.

  7. Drop the interim table.

Restrictions for Online Redefinition of Tables

The following restrictions apply to the online redefinition of tables:

  • If the table is to be redefined using primary key or pseudo-primary keys (unique keys or constraints with all component columns having NOT NULL constraints), then the versions of the table before and after redefinition must have the same primary key or pseudo-primary key columns. If the table is to be redefined using rowids, then the table must not be an index-organized table.

  • Tables that are replicated in an n-way master configuration can be redefined, but horizontal subsetting (subset of rows in the table), vertical subsetting (subset of columns in the table), and column transformations are not allowed.

  • The overflow table of an index-organized table cannot be online redefined.

  • Tables with user-defined types (objects, REFs, collections, typed tables) cannot be online redefined.

  • Tables with BFILE columns cannot be online redefined.

  • Tables with LONG columns can be online redefined, but those columns must be converted to CLOBs. Tables with LONG RAW columns must be converted to BLOBs. Tables with LOB columns are acceptable.

  • The table to be redefined cannot be part of a cluster.

  • Tables in the SYS and SYSTEM schema cannot be online redefined.

  • Temporary tables cannot be redefined.

  • A subset of rows in the table cannot be redefined.

  • Only simple deterministic expressions, sequences, and SYSDATE can be used when mapping the columns in the interim table to those of the original table. For example, subqueries are not allowed.

  • If new columns (which are not instantiated with existing data for the original table) are being added as part of the redefinition, then they must not be declared NOT NULL until the redefinition is complete.

  • There cannot be any referential constraints between the table being redefined and the interim table.

  • Table redefinition cannot be done NOLOGGING.

  • Tables with materialized view logs defined on them cannot be online redefined.

Auditing Table Changes Using Flashback Transaction Query


Note:

You must be using automatic undo management to use the Flashback Transaction Query feature. It is based on undo information stored in an undo tablespace.

To understand how to configure your database for the Flashback Transaction Query feature, see "Monitoring the Undo Tablespace".


You may discover that data in a table has somehow been inappropriately changed. To research this change, you can use multiple flashback queries to view row data at specific points in time. More efficiently, you can use the Flashback Version Query feature to view all changes to a row over a period of time. That feature lets you append a VERSIONS clause to a SELECT statement that specifies an SCN or timestamp range between which you want to view changes to row values. The query can also return associated metadata, such as the transaction responsible for the change.
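
For example, the following sketch of a Flashback Version Query (the employee_id value is hypothetical) returns all versions of a row over the past hour, along with the identifier of the transaction that created each version:

SELECT versions_xid, versions_starttime, salary
   FROM hr.employees
   VERSIONS BETWEEN TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR) AND SYSTIMESTAMP
   WHERE employee_id = 100;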

Further, once you identify an erroneous transaction, you can use the Flashback Transaction Query feature to identify other changes that were made by the transaction, and to request the undo SQL to reverse those changes. But using the undo SQL is only one means of recovering your data. You also have the option of using the Flashback Table feature, described in "Recovering Tables Using the Flashback Table Feature", to restore the table to a state before the changes were made.
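
For example, the following sketch queries the FLASHBACK_TRANSACTION_QUERY view for the operations performed by a transaction and the compensating undo SQL. The transaction identifier shown is hypothetical and would normally be obtained from a Flashback Version Query:

SELECT operation, table_name, undo_sql
   FROM flashback_transaction_query
   WHERE xid = HEXTORAW('000200030000002D');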


See Also:

Oracle Database Application Developer's Guide - Fundamentals for directions for using Oracle Flashback Query and Flashback Version Query.

Recovering Tables Using the Flashback Table Feature


Note:

You must be using automatic undo management to use the Flashback Table feature. It is based on undo information stored in an undo tablespace.

To understand how to configure your undo tablespace for the Flashback Table feature, see "Monitoring the Undo Tablespace".


The FLASHBACK TABLE statement enables users to recover a table to a previous point in time. It provides a fast, online solution for recovering a table that has been accidentally modified or deleted by a user or application. In many cases, this Flashback Table feature alleviates the need for you, as the administrator, to perform more complicated point in time recovery operations.
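
For example, the following hypothetical statements return the hr.admin_emp table to its state of 15 minutes earlier. (The FLASHBACK TABLE statement requires that row movement first be enabled on the table.)

ALTER TABLE hr.admin_emp ENABLE ROW MOVEMENT;

FLASHBACK TABLE hr.admin_emp
   TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);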

The full functionality of the Flashback Table feature, and the complete syntax of the FLASHBACK TABLE statement, are described in the following book:


See Also:

Oracle Database Backup and Recovery Advanced User's Guide for more information about the FLASHBACK TABLE statement.

Dropping Tables

To drop a table that you no longer need, use the DROP TABLE statement. The table must be contained in your schema or you must have the DROP ANY TABLE system privilege.


Caution:

Before dropping a table, familiarize yourself with the consequences of doing so:
  • Dropping a table removes the table definition from the data dictionary. All rows of the table become inaccessible.

  • All indexes and triggers associated with a table are dropped.

  • All views and PL/SQL program units dependent on a dropped table remain, yet become invalid (not usable). See "Managing Object Dependencies" for information about how the database manages dependencies.

  • All synonyms for a dropped table remain, but return an error when used.

  • All extents allocated for a table that is dropped are returned to the free space of the tablespace and can be used by any other object requiring new extents. All rows corresponding to a clustered table are deleted from the blocks of the cluster. Clustered tables are the subject of Chapter 17, "Managing Clusters".


The following statement drops the hr.int_admin_emp table:

DROP TABLE hr.int_admin_emp;

If the table to be dropped contains any primary or unique keys referenced by foreign keys of other tables and you intend to drop the FOREIGN KEY constraints of the child tables, then include the CASCADE clause in the DROP TABLE statement, as shown below:

DROP TABLE hr.admin_emp CASCADE CONSTRAINTS;

When you drop a table, the database normally does not immediately release the space associated with the table. Rather, the database renames the table and places it in a recycle bin, where it can later be recovered with the FLASHBACK TABLE statement if you find that you dropped the table in error. If you want to release the space associated with the table immediately, at the time you issue the DROP TABLE statement, include the PURGE clause as shown in the following statement:

DROP TABLE hr.admin_emp PURGE;

Perhaps instead of dropping a table, you want to truncate it. The TRUNCATE statement provides a fast, efficient method for deleting all rows from a table, but it does not affect any structures associated with the table being truncated (column definitions, constraints, triggers, and so forth) or authorizations. The TRUNCATE statement is discussed in "Truncating Tables and Clusters".
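
For example, the following statement deletes all rows from hr.admin_emp while leaving its column definitions, constraints, triggers, and grants in place:

TRUNCATE TABLE hr.admin_emp;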

Using Flashback Drop and Managing the Recycle Bin

When you drop a table, the database does not immediately remove the space associated with the table. The database renames the table and places it and any associated objects in a recycle bin, where, in case the table was dropped in error, it can be recovered at a later time. This feature is called Flashback Drop, and the FLASHBACK TABLE statement is used to restore the table. Before discussing the use of the FLASHBACK TABLE statement for this purpose, it is important to understand how the recycle bin works, and how you manage its contents.

This section contains the following topics:

What Is the Recycle Bin?

The recycle bin is actually a data dictionary table containing information about dropped objects. Dropped tables and any associated objects such as indexes, constraints, and nested tables are not removed and still occupy space. They continue to count against user space quotas until they are specifically purged from the recycle bin or, in the unlikely event of tablespace space constraints, until the database must purge them.

Each user can be thought of as having his own recycle bin, since unless a user has the SYSDBA privilege, the only objects that the user has access to in the recycle bin are those that the user owns. A user can view his objects in the recycle bin using the following statement:

SELECT * FROM RECYCLEBIN;

When you drop a tablespace including its contents, the objects in the tablespace are not placed in the recycle bin, and the database purges any entries in the recycle bin for objects located in the tablespace. The database also purges any recycle bin entries for objects in a tablespace when you drop the tablespace without the INCLUDING CONTENTS clause, which is possible only when the tablespace is otherwise empty. Likewise:

  • When you drop a user, any objects belonging to the user are not placed in the recycle bin and any objects in the recycle bin are purged.

  • When you drop a cluster, its member tables are not placed in the recycle bin and any former member tables in the recycle bin are purged.

  • When you drop a type, any dependent objects such as subtypes are not placed in the recycle bin and any former dependent objects in the recycle bin are purged.

Renaming Objects in the Recycle Bin

When a dropped table is moved to the recycle bin, the table and its associated objects are given system-generated names. This is necessary to avoid name conflicts that may arise if multiple tables have the same name. This could occur under the following circumstances:

  • A user drops a table, re-creates it with the same name, then drops it again.

  • Two users have tables with the same name, and both users drop their tables.

The renaming convention is as follows:

BIN$unique_id$version

where:

  • unique_id is a 26-character globally unique identifier for this object, which makes the recycle bin name unique across all databases

  • version is a version number assigned by the database

Viewing and Querying Objects in the Recycle Bin

Oracle Database provides two views for obtaining information about objects in the recycle bin:

View Description
USER_RECYCLEBIN This view can be used by users to see their own dropped objects in the recycle bin. It has a synonym RECYCLEBIN, for ease of use.
DBA_RECYCLEBIN This view gives administrators visibility to all dropped objects in the recycle bin.

One use for these views is to identify the name that the database has assigned to a dropped object, as shown in the following example:

SELECT object_name, original_name FROM dba_recyclebin
   WHERE owner = 'HR';

OBJECT_NAME                    ORIGINAL_NAME
------------------------------ --------------------------------
BIN$yrMKlZaLMhfgNAgAIMenRA==$0 EMPLOYEES

You can also view the contents of the recycle bin using the SQL*Plus command SHOW RECYCLEBIN.

SQL> show recyclebin

ORIGINAL NAME    RECYCLEBIN NAME                OBJECT TYPE  DROP TIME
---------------- ------------------------------ ------------ -------------------
EMPLOYEES        BIN$yrMKlZaVMhfgNAgAIMenRA==$0 TABLE        2003-10-27:14:00:19

You can query objects that are in the recycle bin, just as you can query other objects. However, you must specify the name of the object as it is identified in the recycle bin. For example:

SELECT * FROM "BIN$yrMKlZaVMhfgNAgAIMenRA==$0";

Purging Objects in the Recycle Bin

If you decide that you are never going to restore an item from the recycle bin, you can use the PURGE statement to remove the items and their associated objects from the recycle bin and release their storage space. You need the same privileges as if you were dropping the item.

When you use the PURGE statement to purge a table, you can use the name that the table is known by in the recycle bin or the original name of the table. The recycle bin name can be obtained from either the DBA_ or USER_RECYCLEBIN view as shown in "Viewing and Querying Objects in the Recycle Bin". Because the recycle bin name can contain characters that are not otherwise valid in identifiers, enclose it in double quotation marks. The following hypothetical example purges the table hr.int_admin_emp, which was renamed to BIN$jsleilx392mk2=293$0 when it was placed in the recycle bin:

PURGE TABLE "BIN$jsleilx392mk2=293$0";

You can achieve the same result with the following statement:

PURGE TABLE int_admin_emp;

You can use the PURGE statement to purge all the objects in the recycle bin that are from a specified tablespace or only the tablespace objects belonging to a specified user, as shown in the following examples:

PURGE TABLESPACE example;
PURGE TABLESPACE example USER oe;

Users can purge all of their own objects from the recycle bin, releasing the space associated with them, by using the following statement:

PURGE RECYCLEBIN;

If you have the SYSDBA privilege, then you can purge the entire recycle bin by specifying DBA_RECYCLEBIN instead of RECYCLEBIN in the previous statement.

You can also use the PURGE statement to purge an index from the recycle bin.
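
For example, the following statement purges a dropped index, using a hypothetical recycle bin name:

PURGE INDEX "BIN$abCdEfGhIjKlMnOpQrStUv==$0";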


See Also:

Oracle Database SQL Reference for more information on the PURGE statement

Restoring Tables from the Recycle Bin

Use the FLASHBACK TABLE ... TO BEFORE DROP statement to recover objects from the recycle bin. You can specify either the name of the table in the recycle bin or the original table name. An optional RENAME TO clause lets you rename the table as you recover it. The recycle bin name can be obtained from either the DBA_ or USER_RECYCLEBIN view as shown in "Viewing and Querying Objects in the Recycle Bin". To use the FLASHBACK TABLE ... TO BEFORE DROP statement, you need the same privileges you need to drop the table.

The following example restores the int_admin_emp table and assigns it a new name:

FLASHBACK TABLE int_admin_emp TO BEFORE DROP 
   RENAME TO int2_admin_emp;

The system-generated recycle bin name is very useful if you have dropped a table multiple times. For example, suppose you have three versions of the int2_admin_emp table in the recycle bin and you want to recover the second version. You can do this by issuing two FLASHBACK TABLE statements, or you can query the recycle bin and then flashback to the appropriate system-generated name, as shown in the following example:

SELECT object_name, original_name FROM recyclebin;    

OBJECT_NAME                    ORIGINAL_NAME
------------------------------ --------------------------------
BIN$yrMKlZaLMhfgNAgAIMenRA==$0 INT2_ADMIN_EMP
BIN$yrMKlZaVMhfgNAgAIMenRA==$0 INT2_ADMIN_EMP
BIN$yrMKlZaQMhfgNAgAIMenRA==$0 INT2_ADMIN_EMP

FLASHBACK TABLE BIN$yrMKlZaVMhfgNAgAIMenRA==$0 TO BEFORE DROP 
   RENAME TO int2_admin_emp;

Managing Index-Organized Tables

This section describes aspects of managing index-organized tables, and contains the following topics:

What Are Index-Organized Tables?

An index-organized table has a storage organization that is a variant of a primary B-tree. Unlike an ordinary (heap-organized) table whose data is stored as an unordered collection (heap), data for an index-organized table is stored in a B-tree index structure in a primary key sorted manner. Each leaf block in the index structure stores both the key and nonkey columns.

The structure of an index-organized table provides the following benefits:

  • Fast random access on the primary key because an index-only scan is sufficient. And, because there is no separate table storage area, changes to the table data (such as adding new rows, updating rows, or deleting rows) result only in updating the index structure.

  • Fast range access on the primary key because the rows are clustered in primary key order.

  • Lower storage requirements because duplication of primary keys is avoided. They are not stored both in the index and underlying table, as is true with heap-organized tables.

Index-organized tables have full table functionality. They support features such as constraints, triggers, LOB and object columns, partitioning, parallel operations, online reorganization, and replication. And, they offer these additional features:

  • Key compression

  • Overflow storage area and specific column placement

  • Secondary indexes, including bitmap indexes.

Index-organized tables are ideal for OLTP applications, which require fast primary key access and high availability. For example, queries and DML on an orders table used in electronic order processing are predominantly primary-key based, and the heavy volume of DML causes fragmentation, resulting in a frequent need to reorganize. Because an index-organized table can be reorganized online and without invalidating its secondary indexes, the window of unavailability is greatly reduced or eliminated.

Index-organized tables are also suitable for modeling application-specific index structures. For example, content-based information retrieval applications containing text, image, and audio data require inverted indexes that can be effectively modeled using index-organized tables; an inverted index is likewise a fundamental component of an internet search engine.

These are but a few of the applications for index-organized tables.


See Also:


Creating Index-Organized Tables

You use the CREATE TABLE statement to create index-organized tables, but you must provide additional information:

  • An ORGANIZATION INDEX qualifier, which indicates that this is an index-organized table

  • A primary key, specified through a column constraint clause (for a single column primary key) or a table constraint clause (for a multiple-column primary key).

Optionally, you can specify the following:

  • An OVERFLOW clause, which preserves dense clustering of the B-tree index by storing the row column values exceeding a specified threshold in a separate overflow data segment.

  • A PCTTHRESHOLD value, which defines the percentage of space reserved in the index block for an index-organized table. Any portion of the row that exceeds the specified threshold is stored in the overflow segment. In other words, the row is broken at a column boundary into two pieces, a head piece and tail piece. The head piece fits in the specified threshold and is stored along with the key in the index leaf block. The tail piece is stored in the overflow area as one or more row pieces. Thus, the index entry contains the key value, the nonkey column values that fit the specified threshold, and a pointer to the rest of the row.

  • An INCLUDING clause, which can be used to specify nonkey columns that are to be stored in the overflow data segment.

Creating an Index-Organized Table

The following statement creates an index-organized table:

CREATE TABLE admin_docindex(
        token CHAR(20), 
        doc_id NUMBER,
        token_frequency NUMBER,
        token_offsets VARCHAR2(512),
        CONSTRAINT pk_admin_docindex PRIMARY KEY (token, doc_id))
    ORGANIZATION INDEX 
    TABLESPACE admin_tbs
    PCTTHRESHOLD 20
    OVERFLOW TABLESPACE admin_tbs2;

Specifying ORGANIZATION INDEX causes the creation of an index-organized table, admin_docindex, where the key columns and nonkey columns reside in an index defined on the columns that designate the primary key for the table. In this case, the primary key columns are token and doc_id. An overflow segment is specified and is discussed in "Using the Overflow Clause".

Creating Index-Organized Tables that Contain Object Types

Index-organized tables can store object types. The following example creates object type admin_typ, then creates an index-organized table containing a column of object type admin_typ:

CREATE OR REPLACE TYPE admin_typ AS OBJECT
    (col1 NUMBER, col2 VARCHAR2(6));
/
CREATE TABLE admin_iot (c1 NUMBER primary key, c2 admin_typ)
    ORGANIZATION INDEX;

You can also create an index-organized table of object types. For example:

CREATE TABLE admin_iot2 OF admin_typ (col1 PRIMARY KEY)
    ORGANIZATION INDEX;

The following example shows that index-organized tables can store nested tables efficiently. For a nested table column, the database internally creates a storage table to hold all the nested table rows.

CREATE TYPE project_t AS OBJECT(pno NUMBER, pname VARCHAR2(80));
/
CREATE TYPE project_set AS TABLE OF project_t;
/
CREATE TABLE proj_tab (eno NUMBER, projects PROJECT_SET)
    NESTED TABLE projects STORE AS emp_project_tab
                ((PRIMARY KEY(nested_table_id, pno)) 
    ORGANIZATION INDEX)
    RETURN AS LOCATOR;

The rows belonging to a single nested table instance are identified by a nested_table_id column. If an ordinary table is used to store nested table columns, the nested table rows typically get de-clustered. But when you use an index-organized table, the nested table rows can be clustered based on the nested_table_id column.


See Also:


Using the Overflow Clause

The overflow clause specified in the statement shown in "Creating an Index-Organized Table" indicates that any nonkey columns of rows exceeding 20% of the block size are placed in a data segment stored in the admin_tbs2 tablespace. The key columns should fit the specified threshold.

If an update of a nonkey column causes the row to decrease in size, the database identifies the row piece (head or tail) to which the update is applicable and rewrites that piece.

If an update of a nonkey column causes the row to increase in size, the database identifies the piece (head or tail) to which the update is applicable and rewrites that row piece. If the target of the update turns out to be the head piece, note that this piece can again be broken into two to keep the row size below the specified threshold.

The nonkey columns that fit in the index leaf block are stored as a row head-piece that contains a rowid field linking it to the next row piece stored in the overflow data segment. The only columns that are stored in the overflow area are those that do not fit.

Choosing and Monitoring a Threshold Value

You should choose a threshold value that can accommodate your key columns, as well as the first few nonkey columns (if they are frequently accessed).

After choosing a threshold value, you can monitor tables to verify that the value you specified is appropriate. You can use the ANALYZE TABLE ... LIST CHAINED ROWS statement to determine the number and identity of rows exceeding the threshold value.
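
For example, the following sketch assumes a suitable chained rows table named chained_rows has already been created. The first statement lists the rows of admin_docindex that exceed the threshold; the second retrieves their logical rowids:

ANALYZE TABLE admin_docindex LIST CHAINED ROWS INTO chained_rows;

SELECT head_rowid FROM chained_rows
   WHERE table_name = 'ADMIN_DOCINDEX';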


See Also:


Using the INCLUDING Clause

In addition to specifying PCTTHRESHOLD, you can use the INCLUDING clause to control which nonkey columns are stored with the key columns. The database accommodates all nonkey columns up to the column specified in the INCLUDING clause in the index leaf block, provided it does not exceed the specified threshold. All nonkey columns beyond the column specified in the INCLUDING clause are stored in the overflow area.


Note:

Oracle Database moves all primary key columns of an index-organized table to the beginning of the table (in their key order), in order to provide efficient primary key based access. As an example:
CREATE TABLE admin_iot4(a INT, b INT, c INT, d INT, 
                primary key(c,b))
    ORGANIZATION INDEX;

The stored column order is: c b a d (instead of: a b c d). The last primary key column is b, based on the stored column order. The INCLUDING column can be the last primary key column (b in this example), or any nonkey column (that is, any column after b in the stored column order).


The following CREATE TABLE statement is similar to the one shown earlier in "Creating an Index-Organized Table" but is modified to create an index-organized table where the token_offsets column value is always stored in the overflow area:

CREATE TABLE admin_docindex2(
        token CHAR(20), 
        doc_id NUMBER,
        token_frequency NUMBER,
        token_offsets VARCHAR2(512),
        CONSTRAINT pk_admin_docindex2 PRIMARY KEY (token, doc_id))
    ORGANIZATION INDEX 
    TABLESPACE admin_tbs
    PCTTHRESHOLD 20
    INCLUDING token_frequency
    OVERFLOW TABLESPACE admin_tbs2;

Here, only the nonkey columns prior to token_offsets (in this case, only token_frequency) are stored with the key column values in the index leaf block.

Parallelizing Index-Organized Table Creation

The CREATE TABLE ... AS SELECT statement enables you to create an index-organized table and load data from an existing table into it. By including the PARALLEL clause, the load can be done in parallel.

The following statement creates an index-organized table in parallel by selecting rows from the conventional table hr.jobs:

CREATE TABLE admin_iot3(i PRIMARY KEY, j, k, l) 
     ORGANIZATION INDEX 
     PARALLEL
     AS SELECT * FROM hr.jobs;

This statement provides an alternative to parallel bulk-load using SQL*Loader.

Using Key Compression

Creating an index-organized table using key compression enables you to eliminate repeated occurrences of key column prefix values.

Key compression breaks an index key into a prefix and a suffix entry. Compression is achieved by sharing the prefix entries among all the suffix entries in an index block. This sharing can lead to huge savings in space, allowing you to store more keys in each index block while improving performance.

You can enable key compression using the COMPRESS clause while:

  • Creating an index-organized table

  • Moving an index-organized table

You can also specify the prefix length (as the number of key columns), which identifies how the key columns are broken into a prefix and suffix entry. For example:

CREATE TABLE admin_iot5(i INT, j INT, k INT, l INT, PRIMARY KEY (i, j, k)) 
    ORGANIZATION INDEX COMPRESS;

The preceding statement is equivalent to the following statement:

CREATE TABLE admin_iot6(i INT, j INT, k INT, l INT, PRIMARY KEY(i, j, k)) 
    ORGANIZATION INDEX COMPRESS 2;

For the list of values (1,2,3), (1,2,4), (1,2,7), (1,3,5), (1,3,4), (1,4,4) the repeated occurrences of (1,2), (1,3) are compressed away.

You can also override the default prefix length used for compression as follows:

CREATE TABLE admin_iot7(i INT, j INT, k INT, l INT, PRIMARY KEY (i, j, k)) 
    ORGANIZATION INDEX COMPRESS 1;

For the list of values (1,2,3), (1,2,4), (1,2,7), (1,3,5), (1,3,4), (1,4,4), the repeated occurrences of 1 are compressed away.

You can disable compression as follows:

ALTER TABLE admin_iot5 MOVE NOCOMPRESS;

One application of key compression is in a time-series application that uses a set of time-stamped rows belonging to a single item, such as a stock price. Index-organized tables are attractive for such applications because of the ability to cluster rows based on the primary key. By defining an index-organized table with primary key (stock symbol, time stamp), you can store and manipulate time-series data efficiently. You can achieve more storage savings by compressing repeated occurrences of the item identifier (for example, the stock symbol) in a time series by using an index-organized table with key compression.


See Also:

Oracle Database Concepts for more information about key compression

Maintaining Index-Organized Tables

Index-organized tables differ from ordinary tables only in physical organization. Logically, they are manipulated in the same manner as ordinary tables. You can specify an index-organized table just as you would specify a regular table in INSERT, SELECT, DELETE, and UPDATE statements.
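
For example, the following statements, using hypothetical values, operate on the admin_docindex table created earlier exactly as they would on a heap-organized table:

INSERT INTO admin_docindex VALUES ('rose', 100, 5, '0,17,31');

UPDATE admin_docindex SET token_frequency = 6
   WHERE token = 'rose' AND doc_id = 100;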

Altering Index-Organized Tables

All of the alter options available for ordinary tables are available for index-organized tables. This includes ADD, MODIFY, and DROP COLUMNS and CONSTRAINTS. However, the primary key constraint for an index-organized table cannot be dropped, deferred, or disabled.

You can use the ALTER TABLE statement to modify physical and storage attributes for both primary key index and overflow data segments. All the attributes specified prior to the OVERFLOW keyword are applicable to the primary key index segment. All attributes specified after the OVERFLOW keyword are applicable to the overflow data segment. For example, you can set the INITRANS of the primary key index segment to 4 and the INITRANS of the overflow data segment to 6 as follows:

ALTER TABLE admin_docindex INITRANS 4 OVERFLOW INITRANS 6;

You can also alter PCTTHRESHOLD and INCLUDING column values. A new setting is used to break the row into head and overflow tail pieces during subsequent operations. For example, the PCTTHRESHOLD and INCLUDING column values can be altered for the admin_docindex table as follows:

ALTER TABLE admin_docindex PCTTHRESHOLD 15 INCLUDING doc_id;

By setting the INCLUDING column to doc_id, all the nonkey columns that follow doc_id (token_frequency and token_offsets) are stored in the overflow data segment.

For index-organized tables created without an overflow data segment, you can add an overflow data segment by using the ADD OVERFLOW clause. For example, you can add an overflow segment to table admin_iot3 as follows:

ALTER TABLE admin_iot3 ADD OVERFLOW TABLESPACE admin_tbs2;

Moving (Rebuilding) Index-Organized Tables

Because index-organized tables are primarily stored in a B-tree index, you can encounter fragmentation as a consequence of incremental updates. However, you can use the ALTER TABLE ... MOVE statement to rebuild the index and reduce this fragmentation.

The following statement rebuilds the index-organized table admin_docindex:

ALTER TABLE admin_docindex MOVE;

You can rebuild index-organized tables online using the ONLINE keyword. The overflow data segment, if present, is rebuilt when the OVERFLOW keyword is specified. For example, to rebuild the admin_docindex table but not the overflow data segment, perform a move online as follows:

ALTER TABLE admin_docindex MOVE ONLINE;

To rebuild the admin_docindex table along with its overflow data segment perform the move operation as shown in the following statement. This statement also illustrates moving both the table and overflow data segment to new tablespaces.

ALTER TABLE admin_docindex MOVE TABLESPACE admin_tbs2 
    OVERFLOW TABLESPACE admin_tbs3;

In the following example, an index-organized table with a LOB column (a CLOB) is created. Later, the table is moved, with the LOB index and data segment being rebuilt and moved to a new tablespace.

CREATE TABLE admin_iot_lob
   (c1 number (6) primary key,
    admin_lob CLOB)
   ORGANIZATION INDEX
   LOB (admin_lob) STORE AS (TABLESPACE admin_tbs2);
.
.
.
ALTER TABLE admin_iot_lob MOVE LOB (admin_lob) STORE AS (TABLESPACE admin_tbs3); 

See Also:

Oracle Database Application Developer's Guide - Large Objects contains information about LOBs in index-organized tables

Creating Secondary Indexes on Index-Organized Tables

You can create secondary indexes on an index-organized table to provide multiple access paths. Secondary indexes on index-organized tables differ from indexes on ordinary tables in two ways:

  • They store logical rowids instead of physical rowids. This is necessary because the inherent movability of rows in a B-tree index results in the rows having no permanent physical addresses. If the physical location of a row changes, its logical rowid remains valid. One effect of this is that a table maintenance operation, such as ALTER TABLE ... MOVE, does not make the secondary index unusable.

  • The logical rowid also includes a physical guess which identifies the database block address at which the row is likely to be found. If the physical guess is correct, a secondary index scan would incur a single additional I/O once the secondary key is found. The performance would be similar to that of a secondary index-scan on an ordinary table.

Unique and nonunique secondary indexes, function-based secondary indexes, and bitmap indexes are supported as secondary indexes on index-organized tables.

Creating a Secondary Index on an Index-Organized Table

The following statement shows the creation of a secondary index on the docindex index-organized table where doc_id and token are the key columns:

CREATE INDEX Doc_id_index on Docindex(Doc_id, Token);

This secondary index allows the database to efficiently process a query, such as the following, that involves a predicate on doc_id:

SELECT Token FROM Docindex WHERE Doc_id = 1;

Maintaining Physical Guesses in Logical Rowids

A logical rowid can include a guess, which identifies the block location of a row at the time the guess is made. Instead of doing a full key search, the database uses the guess to search the block directly. However, as new rows are inserted, guesses can become stale. The indexes are still usable through the primary key-component of the logical rowid, but access to rows is slower.

Collect index statistics with the DBMS_STATS package to monitor the staleness of guesses. The database checks whether the existing guesses are still valid and records the percentage of rows with valid guesses in the data dictionary. This statistic is stored in the PCT_DIRECT_ACCESS column of the DBA_INDEXES view (and related views).
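
For example, a minimal sketch, assuming the Doc_id_index secondary index created earlier, gathers index statistics and then checks the percentage of rows with valid guesses:

EXECUTE DBMS_STATS.GATHER_INDEX_STATS('HR', 'DOC_ID_INDEX');

SELECT index_name, pct_direct_access
   FROM dba_indexes
   WHERE index_name = 'DOC_ID_INDEX';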

To obtain fresh guesses, you can rebuild the secondary index. Note that rebuilding a secondary index on an index-organized table involves reading the base table, unlike rebuilding an index on an ordinary table. A quicker, more lightweight means of fixing the guesses is to use the ALTER INDEX ... UPDATE BLOCK REFERENCES statement. This statement is performed online, while DML is still allowed on the underlying index-organized table.
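
For example, the following statement refreshes the stale guesses in the Doc_id_index secondary index without rebuilding it:

ALTER INDEX Doc_id_index UPDATE BLOCK REFERENCES;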

After you rebuild a secondary index, or otherwise update the block references in the guesses, collect index statistics again.

Bitmap Indexes

Bitmap indexes on index-organized tables are supported, provided the index-organized table is created with a mapping table. This is done by specifying the MAPPING TABLE clause in the CREATE TABLE statement that you use to create the index-organized table, or in an ALTER TABLE statement to add the mapping table later.
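
For example, the following hypothetical statements create an index-organized table with a mapping table, and then define a bitmap index on one of its nonkey columns:

CREATE TABLE admin_iot8(i INT, j INT, PRIMARY KEY (i))
    ORGANIZATION INDEX
    MAPPING TABLE;

CREATE BITMAP INDEX admin_iot8_bmx ON admin_iot8(j);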


See Also:

Oracle Database Concepts for a description of mapping tables

Analyzing Index-Organized Tables

Just like ordinary tables, index-organized tables are analyzed using the DBMS_STATS package, or the ANALYZE statement.

Collecting Optimizer Statistics for Index-Organized Tables

To collect optimizer statistics, use the DBMS_STATS package.

For example, the following statement gathers statistics for the index-organized countries table in the hr schema:

EXECUTE DBMS_STATS.GATHER_TABLE_STATS ('HR','COUNTRIES');

The DBMS_STATS package analyzes both the primary key index segment and the overflow data segment, and computes logical as well as physical statistics for the table.

  • The logical statistics can be queried using USER_TABLES, ALL_TABLES or DBA_TABLES.

  • You can query the physical statistics of the primary key index segment using USER_INDEXES, ALL_INDEXES or DBA_INDEXES (and using the primary key index name). For example, you can obtain the primary key index segment physical statistics for the table admin_docindex as follows:

    SELECT LAST_ANALYZED, BLEVEL, LEAF_BLOCKS, DISTINCT_KEYS 
       FROM DBA_INDEXES WHERE INDEX_NAME = 'PK_ADMIN_DOCINDEX';
    
    
  • You can query the physical statistics for the overflow data segment using the USER_TABLES, ALL_TABLES or DBA_TABLES. You can identify the overflow entry by searching for IOT_TYPE = 'IOT_OVERFLOW'. For example, you can obtain overflow data segment physical attributes associated with the admin_docindex table as follows:

    SELECT LAST_ANALYZED, NUM_ROWS, BLOCKS, EMPTY_BLOCKS 
       FROM DBA_TABLES WHERE IOT_TYPE = 'IOT_OVERFLOW' 
          AND IOT_NAME = 'ADMIN_DOCINDEX';
    

    See Also:


Validating the Structure of Index-Organized Tables

Use the ANALYZE statement if you want to validate the structure of your index-organized table or to list any chained rows. These operations are discussed in the following sections located elsewhere in this book:
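
For example, the following statement validates the structure of the admin_docindex table and, because of the CASCADE keyword, its associated objects as well:

ANALYZE TABLE admin_docindex VALIDATE STRUCTURE CASCADE;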

Using the ORDER BY Clause with Index-Organized Tables

If an ORDER BY clause only references the primary key column or a prefix of it, then the optimizer avoids the sorting overhead, as the rows are returned sorted on the primary key columns.

The following queries avoid sorting overhead because the data is already sorted on the primary key:

SELECT * FROM admin_docindex2 ORDER BY token, doc_id;
SELECT * FROM admin_docindex2 ORDER BY token;

If, however, you have an ORDER BY clause on a suffix of the primary key columns or on non-primary-key columns, additional sorting is required (assuming no other secondary indexes are defined), as in the following queries:

SELECT * FROM admin_docindex2 ORDER BY doc_id;
SELECT * FROM admin_docindex2 ORDER BY token_frequency;

Converting Index-Organized Tables to Regular Tables

You can convert index-organized tables to regular tables using the Oracle import or export utilities, or the CREATE TABLE ... AS SELECT statement.
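
For example, a minimal sketch of the CREATE TABLE ... AS SELECT approach, using a hypothetical target table name, creates a heap-organized copy of an index-organized table in a single statement:

CREATE TABLE admin_docindex_heap
    AS SELECT * FROM admin_docindex;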

To convert an index-organized table to a regular table:

  • Export the index-organized table data using conventional path.

  • Create a regular table with the same definition.

  • Import the index-organized table data, making sure IGNORE=y is set (this ensures that the "object already exists" error is ignored).


    Note:

    Before converting an index-organized table to a regular table, be aware that index-organized tables cannot be exported using pre-Oracle8 versions of the Export utility.


    See Also:

    Oracle Database Utilities for more details about using the IMPORT and EXPORT utilities

Managing External Tables

Oracle Database allows read-only access to data in external tables. External tables are defined as tables that do not reside in the database, and they can be in any format for which an access driver is provided. By providing the database with metadata describing an external table, the database is able to expose the data in the external table as if it were data residing in a regular database table. The external data can be queried directly and in parallel using SQL.

You can, for example, select, join, or sort external table data. You can also create views and synonyms for external tables. However, no DML operations (UPDATE, INSERT, or DELETE) are possible, and no indexes can be created, on external tables.

External tables also provide a framework to unload the result of an arbitrary SELECT statement into a platform-independent Oracle-proprietary format that can be used by Oracle Data Pump.


Note:

The DBMS_STATS package can be used for gathering statistics for external tables. The ANALYZE statement is not supported for gathering statistics for external tables.

For information about using the DBMS_STATS package, see Oracle Database Performance Tuning Guide


The means of defining the metadata for external tables is through the CREATE TABLE ... ORGANIZATION EXTERNAL statement. This external table definition can be thought of as a view that allows running any SQL query against external data without requiring that the external data first be loaded into the database. An access driver is the actual mechanism used to read the external data in the table. When you use external tables to unload data, the metadata is automatically created based on the datatypes in the SELECT statement (sometimes referred to as the shape of the query).

Oracle Database provides two access drivers for external tables. The default access driver is ORACLE_LOADER, which allows the reading of data from external files using the Oracle loader technology. The ORACLE_LOADER access driver provides data mapping capabilities that are a subset of the control file syntax of the SQL*Loader utility. The second access driver, ORACLE_DATAPUMP, lets you unload data (that is, read data from the database and insert it into an external table, represented by one or more external files) and then reload it into an Oracle Database.

The Oracle Database external tables feature provides a valuable means for performing basic extraction, transformation, and loading (ETL) tasks that are common for data warehousing.

The following sections discuss the DDL statements that are supported for external tables. Only the DDL statements discussed are supported, and not all clauses of these statements are supported.

Creating External Tables

You create external tables using the ORGANIZATION EXTERNAL clause of the CREATE TABLE statement. You are not in fact creating a table; that is, an external table does not have any extents associated with it. Rather, you are creating metadata in the data dictionary that enables you to access external data.

The following example creates an external table and then loads its data into a database table. Alternatively, you can unload data through the external table framework by specifying the AS subquery clause of the CREATE TABLE statement. External table data pump unload can use only the ORACLE_DATAPUMP access driver.
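
For example, a minimal unload sketch, assuming the admin_dat_dir directory object shown later in this section already exists, writes the result of a query to a dump file that can later be reloaded (the table and file names are hypothetical):

CREATE TABLE admin_ext_unload
    ORGANIZATION EXTERNAL
    (
      TYPE ORACLE_DATAPUMP
      DEFAULT DIRECTORY admin_dat_dir
      LOCATION ('emp_unload.dmp')
    )
    AS SELECT employee_id, last_name FROM hr.employees;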


EXAMPLE: Creating an External Table and Loading Data

The file empxt1.dat contains the following sample data:

360,Jane,Janus,ST_CLERK,121,17-MAY-2001,3000,0,50,jjanus
361,Mark,Jasper,SA_REP,145,17-MAY-2001,8000,.1,80,mjasper
362,Brenda,Starr,AD_ASST,200,17-MAY-2001,5500,0,10,bstarr
363,Alex,Alda,AC_MGR,145,17-MAY-2001,9000,.15,80,aalda

The file empxt2.dat contains the following sample data:

401,Jesse,Cromwell,HR_REP,203,17-MAY-2001,7000,0,40,jcromwel
402,Abby,Applegate,IT_PROG,103,17-MAY-2001,9000,.2,60,aapplega
403,Carol,Cousins,AD_VP,100,17-MAY-2001,27000,.3,90,ccousins
404,John,Richardson,AC_ACCOUNT,205,17-MAY-2001,5000,0,110,jrichard

The following hypothetical SQL statements create an external table in the hr schema named admin_ext_employees and load its data into the hr.employees table.

CONNECT  /  AS SYSDBA;
-- Set up directories and grant access to hr 
CREATE OR REPLACE DIRECTORY admin_dat_dir
    AS '/flatfiles/data'; 
CREATE OR REPLACE DIRECTORY admin_log_dir 
    AS '/flatfiles/log'; 
CREATE OR REPLACE DIRECTORY admin_bad_dir 
    AS '/flatfiles/bad'; 
GRANT READ ON DIRECTORY admin_dat_dir TO hr; 
GRANT WRITE ON DIRECTORY admin_log_dir TO hr; 
GRANT WRITE ON DIRECTORY admin_bad_dir TO hr;
-- hr connects 
CONNECT hr/hr
-- create the external table
CREATE TABLE admin_ext_employees
                   (employee_id       NUMBER(4), 
                    first_name        VARCHAR2(20),
                    last_name         VARCHAR2(25), 
                    job_id            VARCHAR2(10),
                    manager_id        NUMBER(4),
                    hire_date         DATE,
                    salary            NUMBER(8,2),
                    commission_pct    NUMBER(2,2),
                    department_id     NUMBER(4),
                    email             VARCHAR2(25) 
                   ) 
     ORGANIZATION EXTERNAL 
     ( 
       TYPE ORACLE_LOADER 
       DEFAULT DIRECTORY admin_dat_dir 
       ACCESS PARAMETERS 
       ( 
         records delimited by newline 
         badfile admin_bad_dir:'empxt%a_%p.bad' 
         logfile admin_log_dir:'empxt%a_%p.log' 
         fields terminated by ',' 
         missing field values are null 
         ( employee_id, first_name, last_name, job_id, manager_id, 
           hire_date char date_format date mask "dd-mon-yyyy", 
           salary, commission_pct, department_id, email 
         ) 
       ) 
       LOCATION ('empxt1.dat', 'empxt2.dat') 
     ) 
     PARALLEL 
     REJECT LIMIT UNLIMITED; 
-- enable parallel for loading (good if lots of data to load)
ALTER SESSION ENABLE PARALLEL DML;
-- load the data in hr employees table
INSERT INTO employees (employee_id, first_name, last_name, job_id, manager_id,
                       hire_date, salary, commission_pct, department_id, email) 
            SELECT * FROM admin_ext_employees;

The following paragraphs contain descriptive information about this example.

The first few statements in this example create the directory objects for the operating system directories that contain the data sources, and for the bad record and log files specified in the access parameters. You must also grant READ or WRITE directory object privileges, as appropriate.


Note:

When creating a directory object or BFILEs, ensure that the following conditions are met:
  • The operating system file must not be a symbolic or hard link.

  • The operating system directory path named in the Oracle Database directory object must be an existing OS directory path.

  • The operating system directory path named in the directory object should not contain any symbolic links in its components.


The TYPE specification indicates the access driver of the external table. The access driver is the API that interprets the external data for the database. Oracle Database provides two access drivers: ORACLE_LOADER and ORACLE_DATAPUMP. If you omit the TYPE specification, ORACLE_LOADER is the default access driver. You must specify the ORACLE_DATAPUMP access driver if you specify the AS subquery clause to unload data from one Oracle Database and reload it into the same or a different Oracle Database.

The access parameters, specified in the ACCESS PARAMETERS clause, are opaque to the database. These access parameters are defined by the access driver, and are provided to the access driver by the database when the external table is accessed. See Oracle Database Utilities for a description of the ORACLE_LOADER access parameters.

The PARALLEL clause enables parallel query on the data sources. The granule of parallelism is by default a data source, but parallel access within a data source is implemented whenever possible. For example, if PARALLEL=3 were specified, then more than one parallel execution server could be working on a data source. But, parallel access within a data source is provided by the access driver only if all of the following conditions are met:

  • The media allows random positioning within a data source

  • It is possible to find a record boundary from a random position

  • The data files are large enough to make it worthwhile to break up into multiple chunks


    Note:

    Specifying a PARALLEL clause is of value only when dealing with large amounts of data. Otherwise, it is not advisable to specify a PARALLEL clause, and doing so can be detrimental.

The REJECT LIMIT clause specifies that there is no limit on the number of errors that can occur during a query of the external data. For parallel access, this limit applies to each parallel execution server independently. For example, if REJECT LIMIT 10 is specified, each parallel query process is allowed 10 rejections. Hence, the only precisely enforced values for REJECT LIMIT on parallel query are 0 and UNLIMITED.

In this example, the INSERT INTO ... SELECT statement generates a dataflow from the external data source to the Oracle Database SQL engine, where the data is processed. As data is parsed by the access driver from the external table sources and provided to the external table interface, the external data is converted from its external representation to its Oracle Database internal datatype.


See Also:

Oracle Database SQL Reference provides details of the syntax of the CREATE TABLE statement for creating external tables and specifies restrictions on the use of clauses

Altering External Tables

You can use any of the ALTER TABLE clauses shown in Table 14-1 to change the characteristics of an external table. No other clauses are permitted.

Table 14-1 Altering an External Table

ALTER TABLE Clause Description Example
REJECT LIMIT Changes the reject limit
ALTER TABLE admin_ext_employees
   REJECT LIMIT 100;
PROJECT COLUMN Determines how the access driver validates rows in subsequent queries:
  • PROJECT COLUMN REFERENCED: the access driver processes only the columns in the select list of the query. This setting may not provide a consistent set of rows when querying a different column list from the same external table. This is the default.

  • PROJECT COLUMN ALL: the access driver processes all of the columns defined on the external table. This setting always provides a consistent set of rows when querying an external table.

ALTER TABLE admin_ext_employees
   PROJECT COLUMN REFERENCED;

ALTER TABLE admin_ext_employees
   PROJECT COLUMN ALL;
DEFAULT DIRECTORY Changes the default directory specification
ALTER TABLE admin_ext_employees 
    DEFAULT DIRECTORY admin_dat2_dir;
ACCESS PARAMETERS Allows access parameters to be changed without dropping and re-creating the external table metadata
ALTER TABLE admin_ext_employees
    ACCESS PARAMETERS
       (FIELDS TERMINATED BY ';');
LOCATION Allows data sources to be changed without dropping and re-creating the external table metadata
ALTER TABLE admin_ext_employees
   LOCATION ('empxt3.txt',
             'empxt4.txt');
PARALLEL No difference from regular tables. Allows degree of parallelism to be changed. No new syntax
ADD COLUMN No difference from regular tables. Allows a column to be added to an external table. No new syntax
MODIFY COLUMN No difference from regular tables. Allows an external table column to be modified. No new syntax
DROP COLUMN No difference from regular tables. Allows an external table column to be dropped. No new syntax
RENAME TO No difference from regular tables. Allows external table to be renamed. No new syntax

Dropping External Tables

For an external table, the DROP TABLE statement removes only the table metadata in the database. It has no effect on the actual data, which resides outside of the database.

System and Object Privileges for External Tables

System and object privileges for external tables are a subset of those for regular tables. Only the following system privileges are applicable to external tables:

  • CREATE ANY TABLE

  • ALTER ANY TABLE

  • DROP ANY TABLE

  • SELECT ANY TABLE

Only the following object privileges are applicable to external tables:

  • ALTER

  • SELECT

However, object privileges associated with a directory are:

  • READ

  • WRITE

For external tables, READ privileges are required on directory objects that contain data sources, while WRITE privileges are required for directory objects containing bad, log, or discard files.

Viewing Information About Tables

The following views allow you to access information about tables.

View Description
DBA_TABLES

ALL_TABLES

USER_TABLES

DBA view describes all relational tables in the database. ALL view describes all tables accessible to the user. USER view is restricted to tables owned by the user. Some columns in these views contain statistics that are generated by the DBMS_STATS package or ANALYZE statement.
DBA_TAB_COLUMNS

ALL_TAB_COLUMNS

USER_TAB_COLUMNS

These views describe the columns of tables, views, and clusters in the database. Some columns in these views contain statistics that are generated by the DBMS_STATS package or ANALYZE statement.
DBA_ALL_TABLES

ALL_ALL_TABLES

USER_ALL_TABLES

These views describe all relational and object tables in the database. Object tables are not specifically discussed in this book.
DBA_TAB_COMMENTS

ALL_TAB_COMMENTS

USER_TAB_COMMENTS

These views display comments for tables and views. Comments are entered using the COMMENT statement.
DBA_COL_COMMENTS

ALL_COL_COMMENTS

USER_COL_COMMENTS

These views display comments for table and view columns. Comments are entered using the COMMENT statement.
DBA_EXTERNAL_TABLES

ALL_EXTERNAL_TABLES

USER_EXTERNAL_TABLES

These views list the specific attributes of external tables in the database.
DBA_EXTERNAL_LOCATIONS

ALL_EXTERNAL_LOCATIONS

USER_EXTERNAL_LOCATIONS

These views list the data sources for external tables.
DBA_TAB_HISTOGRAMS

ALL_TAB_HISTOGRAMS

USER_TAB_HISTOGRAMS

These views describe histograms on tables and views.
DBA_TAB_COL_STATISTICS

ALL_TAB_COL_STATISTICS

USER_TAB_COL_STATISTICS

These views provide column statistics and histogram information extracted from the related TAB_COLUMNS views.
DBA_TAB_MODIFICATIONS

ALL_TAB_MODIFICATIONS

USER_TAB_MODIFICATIONS

These views describe tables that have been modified since the last time table statistics were gathered on them. They are not populated immediately, but after a time lapse (usually 3 hours).
DBA_UNUSED_COL_TABS

ALL_UNUSED_COL_TABS

USER_UNUSED_COL_TABS

These views list tables with unused columns, as marked by the ALTER TABLE ... SET UNUSED statement.
DBA_PARTIAL_DROP_TABS

ALL_PARTIAL_DROP_TABS

USER_PARTIAL_DROP_TABS

These views list tables that have partially completed DROP COLUMN operations. These operations could be incomplete because the operation was interrupted by the user or a system failure.


Example: Displaying Column Information

Column information, such as name, datatype, length, precision, scale, and default data values, can be listed using one of the views ending with the _COLUMNS suffix. For example, the following query lists selected column information for all tables owned by the user hr:

SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, DATA_LENGTH, LAST_ANALYZED
    FROM DBA_TAB_COLUMNS
    WHERE OWNER = 'HR'
    ORDER BY TABLE_NAME;
    

The following is the output from the query:

TABLE_NAME           COLUMN_NAME          DATA_TYPE   DATA_LENGTH LAST_ANALYZED
-------------------- -------------------- ---------- ------------ -------------
COUNTRIES            COUNTRY_ID           CHAR                  2 05-FEB-03
COUNTRIES            COUNTRY_NAME         VARCHAR2             40 05-FEB-03
COUNTRIES            REGION_ID            NUMBER               22 05-FEB-03
DEPARTMENTS          DEPARTMENT_ID        NUMBER               22 05-FEB-03
DEPARTMENTS          DEPARTMENT_NAME      VARCHAR2             30 05-FEB-03
DEPARTMENTS          MANAGER_ID           NUMBER               22 05-FEB-03
DEPARTMENTS          LOCATION_ID          NUMBER               22 05-FEB-03
EMPLOYEES            EMPLOYEE_ID          NUMBER               22 05-FEB-03
EMPLOYEES            FIRST_NAME           VARCHAR2             20 05-FEB-03
EMPLOYEES            LAST_NAME            VARCHAR2             25 05-FEB-03
EMPLOYEES            EMAIL                VARCHAR2             25 05-FEB-03
.
.
.
LOCATIONS            COUNTRY_ID           CHAR                  2 05-FEB-03
REGIONS              REGION_ID            NUMBER               22 05-FEB-03
REGIONS              REGION_NAME          VARCHAR2             25 05-FEB-03

51 rows selected.

See Also: