Database Administration New Features - Oracle 11g

Oracle Database 11g adds several important weapons to the DBA's toolkit, facilitating the day-to-day management of the database and providing additional object management capabilities. In the following sections, we discuss the most important of these enhancements.

Virtual Columns

Sometimes you might want to store data in a column based on the evaluation of an expression. Oracle Database 11g provides a new type of column you can include in a table, called a virtual column. Virtual columns are similar to normal table columns, with two major differences:

  • You can’t write to a virtual column.
  • A virtual column is populated by the evaluation of an expression.

You can derive a virtual column by evaluating an expression on columns of the same table, from constants, or as the result of evaluating a SQL or PL/SQL function. You'll see the values in a virtual column only when you query that column; the values aren't permanently stored on disk, as is the case for normal columns. The values in a virtual column are calculated only when the column is queried, by dynamically evaluating the function or column expression.

You can do the following things with a virtual column:

  • Use them in both DDL and DML statements.
  • Collect statistics on them.
  • Define indexes on them. The indexes you define are similar to function-based indexes you create on normal columns. Behind the scenes, Oracle creates a function-based index on the virtual column when you create an index on that column.

Creating a Table with a Virtual Column

To create a virtual column, you must use the clause generated always as after the virtual column name when you create a table. Here's an example showing how to incorporate a virtual column in a table:
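A sketch along these lines works; the column definitions other than sal and hrly_rate are illustrative:

SQL> create table emp (
  2    empno      number(6) primary key,
  3    ename      varchar2(30),
  4    deptno     number(4),
  5    sal        number(8,2),
  6    hrly_rate  number(10,2) generated always as (sal/2080));

Table created.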

Line 6 in the previous example creates the virtual column hrly_rate. If you want, you can also add the keyword virtual at the end of this line to make it syntactically complete, but the keyword is purely optional.

The following example shows how to use the optional keyword virtual as part of a table creation statement that also creates a check constraint on the virtual column:
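For instance, something like the following would work; the constraint name and the column definitions are again illustrative:

SQL> create table emp2 (
       empno      number(6) primary key,
       ename      varchar2(30),
       sal        number(8,2),
       hrly_rate  number(10,2) generated always as (sal/2080) virtual
                  constraint hrly_rate_ck check (hrly_rate > 0));

Table created.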

The column hrly_rate is a virtual column, since the column values are generated by evaluating the expression sal/2080 for each row in the table emp. The generated always as clause means the values of the virtual column are created on the fly in response to a query on the virtual column.

The values in the hrly_rate column for each employee are derived by dividing the employee's annual salary by the total number of hours the employee works during the year. The column expression can refer to a user-created function. A virtual column can use any scalar datatype or the XML datatype.

You can modify a table containing a virtual column just as you would a table with only normal columns by using the alter table statement.

Adding a Virtual Column to a Table

You can add a virtual column to an existing table just as you would a normal column. In the following example, we show how to add a new virtual column named income, which is derived by computing the product of the salary and commission_pct columns for each row:
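The statement might look like this, assuming the standard hr.employees sample table:

SQL> alter table employees add (income generated always as (salary*commission_pct) virtual);

Table altered.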

Note that in this alter table statement we haven't specified a data type for the new virtual column named income. Oracle automatically assigns a data type for the column, based on the data types of the two columns, salary and commission_pct, that make up the column expression defining the virtual column.

When you create a new virtual column, the values for that column are automatically computed based on the column expression. If you query the employees table now, you'll see values in the new virtual column income, as shown in the following example:
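A query along these lines displays the computed values (the column list is illustrative):

SQL> select last_name, salary, commission_pct, income
     from   employees
     where  commission_pct is not null;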

Although you can see the virtual column values when you query the table, the data isn’t stored on disk but is calculated from the virtual column expression each time you query it.

Virtual Columns Limitations

The following are restrictions on virtual columns:

    • You can create virtual columns only on ordinary (heap-organized) tables. You can't create virtual columns on an index-organized table, an external table, a temporary table, an object, or a cluster.
    • You can’t create a virtual column as a user-defined type, LOB, or RAW.
    • All columns in the column expression must belong to the same table.
    • The column expression must result in a scalar value.
    • The column expression in the generated always as clause can’t refer to another virtual column.
    • You can’t update a virtual column by using it in the set clause of an update statement.
    • You can’t perform a delete or an insert operation on a virtual column.

New Data Partitioning Schemes

In Oracle Database 11g, there are several new partitioning techniques that facilitate storing large amounts of data as well as increasing the performance of queries on partitioned tables. We list the new partitioning schemes here; they are covered in detail along with the other new data warehousing features of Oracle Database 11g.

    • Reference partitioning enables you to logically equipartition tables with parent-child relationships by inheriting the partitioning key from the parent table.
    • Interval partitioning lets you automate the creation of range partitions by creating partitions on demand.
    • System partitioning enables application-controlled partitioning.
    • New composite partitioning schemes let you perform logical range partitioning along two dimensions.
    • Virtual column partitioning lets you use partitioning key columns defined on a table's virtual columns (discussed earlier in this chapter), thus overcoming limitations where the partitioning needs aren't served by the currently defined table columns.

Allowing DDL Locks to Wait for DML Locks

In previous releases, by default, a DDL statement wouldn't wait for a DML lock; it would fail right away if it couldn't obtain the necessary lock. In Oracle Database 11g, you can use the new initialization parameter ddl_lock_timeout to specify the duration for which a DDL statement will wait for a DML lock.

The default value of the ddl_lock_timeout parameter is zero, meaning that DDL statements won't wait for a DML lock. The maximum value of 1,000,000 seconds means you can set the wait duration for as long as about 11.5 days. You can use the alter session statement to specify the duration of the wait for a DML lock, as shown here:
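For example, the following (illustrative) setting makes DDL statements in the session wait up to 30 seconds:

SQL> alter session set ddl_lock_timeout = 30;

Session altered.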

Explicit Locking of Tables

If you want to add a column to a table, Oracle requires that an exclusive lock be acquired on the table when you issue the DDL command to add the column. If Oracle fails to obtain an immediate lock on the table in question, the DDL command fails right away. Oracle Database 11g lets you allow a DDL command to wait for a specified length of time before it fails because of its inability to acquire the necessary DML lock on the table.

Oracle now allows new syntax with the lock table command that lets you specify the maximum time a statement should wait to obtain a DML lock on a table. This feature is handy for operations such as adding a column to a table that users are frequently updating. The new syntax of the lock table command is as follows:

LOCK TABLE ... IN lockmode MODE [NOWAIT | WAIT integer]

Here’s what the mode parameters nowait and wait mean:

    • If you want the database to return control to you immediately upon finding that a necessary table is already locked by other users, specify the nowait option.
    • You can use the wait parameter to specify the number of seconds the lock table statement can wait in order to acquire a DML lock. You can set the value of this parameter to any integer value you want; there's no limit.
    • If you don’t specify either wait or nowait, the database will wait until the locked table is available and then lock it before returning control to you.
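For example, to wait up to a minute for a DML lock on a (hypothetical) orders table before adding a column, you could issue something like this:

SQL> lock table orders in exclusive mode wait 60;

Table(s) Locked.

SQL> alter table orders add (order_status varchar2(20));

Table altered.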

Invisible Indexes

Suppose you have a situation where you have to create a huge index for some special queries but don't want the optimizer to change its query plans based on this index. Furthermore, the situation may call for dropping and re-creating the index often, which leads to problems because the index build takes considerable time.

In Oracle Database 11g, you can create an index as an invisible index, which really means the index won't be visible to the cost optimizer when it's figuring out execution plans.

You can use an invisible index only by specifying the index hint in your SQL statements. In sum, this means you can make selective use of an index, choosing to employ the index only when you want it, not when the optimizer thinks it's necessary. As far as the optimizer is concerned, the index simply doesn't exist unless you reveal its existence.

The following situations make the invisible index a handy feature:

    • You want to test the usefulness of a new index but don't want to change the optimizer's execution plans before you are sure about the usefulness of the new index. You can now make the index invisible and have the optimizer generate execution plans without that index. If the performance is the same or better, you may decide to drop that index for good. If, on the other hand, the performance is better with the index, you can make that index visible, that is, change it back to a normal index by changing its status to visible, as we show later.
    • You want to use an index for a temporary purpose, such as during an application upgrade,but don’t want the index to be considered by the optimizer subsequently.

Although an invisible index is unknown to the cost optimizer unless you cause the index to be considered by using the index hint, the index continues to be maintained just like any other normal index. This means that when you add new data to the table or modify existing data, the invisible index is updated accordingly by the database.

Creating an Invisible Index

By default, all indexes are visible to the optimizer. The new visible|invisible clause of the create index statement lets you designate the visibility of an index. If you don't use this clause, Oracle creates the index as a normal (visible) index. You can create an invisible index by simply appending the keyword invisible at the end of the normal create index statement, as shown here:
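For instance (the table and column names are illustrative):

SQL> create index name_idx1 on employees(last_name) invisible;

Index created.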

You can also specify a tablespace while creating the invisible index, as the following example shows:
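Something like the following works, again with illustrative table and column names:

SQL> create index name_idx2 on employees(first_name) tablespace example invisible;

Index created.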

The index name_idx2 is stored in the tablespace example, just like any other normal visible index. The database will update this index as necessary following all DML operations.

Making an Existing Index Invisible

Besides the ability to create a new index as invisible from its inception, you can also change the status of an existing index from a normal index to an invisible index. This gives you the ability to toggle the status of an index, sometimes making it available to the optimizer and at other times invisible to it. Here's the command to make an existing index invisible:
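Continuing with the illustrative index from the earlier examples:

SQL> alter index name_idx2 invisible;

Index altered.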

Once you make an existing index invisible as shown here, the optimizer won't be able to “see” the index any longer when it's figuring out the most efficient or optimal execution plan. You can use the alter index ... visible statement to make the invisible index visible to the optimizer again, as shown here:
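Again using the illustrative index from above:

SQL> alter index name_idx2 visible;

Index altered.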

Thus, once you create an invisible index, you can toggle its status between visible and invisible as you want.

As in the case of normal indexes, you can't create two indexes on the same column, even if one of them is an invisible index. If you try to create an invisible index on an already indexed column, you'll get this error:
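The attempt fails with something like the following (the statement itself is illustrative, and the exact error text may vary by release):

SQL> create index name_idx3 on employees(last_name) invisible;
create index name_idx3 on employees(last_name) invisible
*
ERROR at line 1:
ORA-01408: such column list already indexed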

As the preceding example hints, the care and feeding of an invisible index isn't really different from that of regular indexes.

How the Optimizer Treats the Invisible Index

When you execute a SQL query that involves an invisible index, the cost optimizer will disregard the index. You can make an invisible index visible to the cost optimizer in two ways. If you want the optimizer to take into account all invisible indexes in the database, you can do so by setting the new initialization parameter optimizer_use_invisible_indexes to true. When you do this, the optimizer will treat all invisible indexes as normal indexes.

The default value of this parameter is false, meaning that the use of invisible indexes is disabled by default. The database maintains the invisible indexes like any other normal index, but the optimizer will ignore them by default. You can enable the use of invisible indexes at the session or the system level, as shown here:

SQL> alter session set optimizer_use_invisible_indexes=true;
SQL> alter system set optimizer_use_invisible_indexes=true;

Setting the initialization parameter optimizer_use_invisible_indexes to true makes all invisible indexes visible to the optimizer. If you want the optimizer to take only a specific invisible index into account rather than all the invisible indexes in the database, you can do that as well by using the index hint in a query.

When the optimizer “sees” the index hint embedded in the select statement, it takes that index into account when figuring out an optimal execution plan for the query. If you remove the index hint from the query, the index remains invisible to the optimizer.

Let’s first look at the explain plan output for the following select statement without the index hint:
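The statement is the same query used with the hint later in this section, just without the hint:

SQL> select edba_document_id
     from   edba_documents
     where  edba_document_id=1280;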


Here's the explain plan output without the index hint:
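In outline, the plan looks something like this (abbreviated here, with the cost and row estimates omitted):

Execution Plan
----------------------------------------------
| Id | Operation          | Name             |
----------------------------------------------
|  0 | SELECT STATEMENT   |                  |
|  1 |  TABLE ACCESS FULL | EDBA_DOCUMENTS   |
----------------------------------------------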

You can clearly see that the optimizer ignores the invisible index and opts to perform a full table scan. If you want the optimizer to take the invisible index into account, you must use the index hint, as shown in the following example:

select /*+ index (edba_documents EDBA_DOCUMENTS_I10) */
       edba_document_id
from   edba_documents
where  edba_document_id=1280;

Elapsed: 00:00:00.00
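This time the plan (again abbreviated and shown only in outline) reports an index range scan:

Execution Plan
--------------------------------------------------
| Id | Operation         | Name                  |
--------------------------------------------------
|  0 | SELECT STATEMENT  |                       |
|  1 |  INDEX RANGE SCAN | EDBA_DOCUMENTS_I10    |
--------------------------------------------------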

The explain plan for the select statement with the index hint reveals that the optimizer performs an index range scan instead of a full table scan, because it now takes the invisible index into account.

Checking the Visibility Status of an Index

Oracle Database 11g has added a new column named visibility to the DBA_INDEXES view to enable the checking of an index's visibility status. Use the following command to check the visibility status of an index:
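For example, with the illustrative indexes created earlier (the output shown is likewise illustrative):

SQL> select index_name, visibility from dba_indexes where index_name like 'NAME_IDX%';

INDEX_NAME   VISIBILITY
----------   ----------
NAME_IDX1    INVISIBLE
NAME_IDX2    VISIBLE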

By default,all indexes are visible, unless you explicitly create them as invisible indexes.

Read-Only Tables

In earlier releases of the Oracle database, you could have read-only tablespaces but not read-only tables. One of the more useful but simple-to-implement new features in Oracle Database 11g is the read-only table feature. Simply specify the clause read only in the alter table statement, as shown in the following example, where we first create a normal read-write table and then convert it into a read-only table:
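Something along these lines (the column definition is illustrative):

SQL> create table test1 (name varchar2(30));

Table created.

SQL> alter table test1 read only;

Table altered.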

The new column READ_ONLY in the DBA_TABLES view shows whether a table has read-only status. You can return a read-only table to read-write status by using the read write clause in the alter table statement, as shown in the following example:

SQL> alter table test1 read write;

Table altered.

Once you put a table in read-only mode, you can't issue any DML statements such as update, insert, or delete, and you can't issue a select for update statement involving a read-only table. You can, however, issue DDL statements such as drop table and alter table on a read-only table. You can use the read-only feature to prevent changes to a table's data during maintenance operations, and you can perform maintenance operations on any of the indexes defined on the read-only table. You can, of course, also use this feature for security reasons, where you want to grant users the ability to read but not modify table data.

Temporary Tablespace Management

There are several improvements in the management of temporary tablespaces in Oracle Database 11g. Chief among these are the new capabilities to shrink temporary tablespaces and tempfiles and to specify a temporary tablespace during the creation of global temporary tables. We'll cover both of these temporary tablespace innovations in more detail in the following sections.

Shrinking Temporary Tablespaces and Tempfiles

Often you'll find you need to reduce the space you had allocated to the temporary tablespace. It's not uncommon to see huge tempfiles floating around on disk, even after a massive sort job has finished. You may have increased the allocation temporarily for a large job and want to reclaim the space after the job is completed.

In previous releases of the Oracle database, you just couldn't do this, because there was no way to reduce the size of the temporary tablespace. You could drop the large temporary tablespace and re-create it with a smaller size, but you had to ensure that users weren't using the temporary tablespace for sorting operations during that time.

In Oracle Database 11g, you can shrink the temporary tablespace by specifying the new clause shrink space in an alter tablespace statement. You can also shrink individual tempfiles using the new alter tablespace ... shrink tempfile clause, thus making the temporary tablespace smaller. Here are a couple of examples, the first one showing how to shrink a temporary tablespace and the second showing how to shrink a tempfile:
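The statements look like the following; the tempfile path here is purely illustrative:

SQL> alter tablespace temp shrink space;

Tablespace altered.

SQL> alter tablespace temp shrink tempfile '/u01/app/oracle/oradata/orcl/temp01.dbf';

Tablespace altered.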

If you have multiple tempfiles in the temporary tablespace, the shrink space clause will shrink all the tempfiles to a database-determined minimum size, which is about 1MB.

The minimum size to which a tablespace is shrunk takes into account the tablespace's storage requirements. If you want the temporary tablespace or tempfile to remain at least a certain size, you can use the keep clause to specify a lower bound for the tablespace or tempfile, as shown in the example here, which shrinks the temporary tablespace named temp.

SQL> alter tablespace temp shrink space keep 100m;

In the keep clause in this example, 100m is the value for the size subclause. Let's look at the logic Oracle uses to shrink the temp tablespace. In the following example, there are two 1GB tempfiles, and the command to shrink the temp tablespace to 1GB is issued immediately afterward:
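In outline, assuming the two 1GB tempfiles already exist:

SQL> alter tablespace temp shrink space keep 1000m;

Tablespace altered.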

Now, query the V$TEMPFILE view to see the amount of space remaining in the temporary tablespace:
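A query along these lines shows the resulting file sizes:

SQL> select file#, name, bytes/1024/1024 as mb from v$tempfile;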

Instead of evenly shrinking the two tempfiles to 500MB each, Oracle shrinks one tempfile all the way down to 1MB and the other down by just 1MB, leaving that tempfile at 999MB.

If you want to specify a particular tempfile when specifying a minimum space for it after shrinking, you can do so as well, as shown in the following example:
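For example (the tempfile path again being illustrative):

SQL> alter tablespace temp shrink tempfile '/u01/app/oracle/oradata/orcl/temp02.dbf' keep 100m;

Tablespace altered.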

The previous statement will not shrink the entire temp tablespace; it shrinks only the specified tempfile, and only down to the 100MB level you specified with the keep clause. All other tempfiles that are part of the temp tablespace will remain unaffected.

The new view DBA_TEMP_FREE_SPACE shows you temporary space usage information, as shown here:
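A simple query of the view looks like this:

SQL> select tablespace_name, tablespace_size, allocated_space, free_space
     from   dba_temp_free_space;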

The information in the DBA_TEMP_FREE_SPACE view is derived from existing views.

Specifying Temporary Tablespace for Global Temporary Tables

Prior to Oracle Database 11g, when you created a global temporary table, you couldn't specify the tablespace for that table.

The global temporary table would be created in your default temporary tablespace as a result. This meant the global temporary table was forced to use the extent size of the default temporary tablespace, no matter what its sort usage was.

In Oracle Database 11g, you can specify a tablespace clause when you create a global temporary table. Here's the new command:
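A sketch of the statement; the table, its columns, and the temp_small tablespace are illustrative:

SQL> create global temporary table sales_gtt (
       sale_id   number,
       sale_amt  number)
     on commit delete rows
     tablespace temp_small;

Table created.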

You can thus ensure that the database allocates extent sizes appropriate to your sort usage. This is especially handy when your database performs several types of temporary space usage with different sort size requirements.

Creating an Initialization Parameter File from Memory

In previous Oracle releases, you used the create pfile from spfile statement to create a text initialization parameter file (init.ora) from the server parameter file. In Oracle Database 11g, you can create the text initialization parameter file from the current system-wide parameter settings by specifying the new memory option when creating the pfile.

When you specify memory instead of spfile, the instance generates the initialization parameter values that are currently in force. You simply issue the following statement to create the init.ora file from the current system-wide parameter settings in use by the instance:
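The statement is simply:

SQL> create pfile from memory;

File created.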

You can also use the memory clause to create an spfile, as shown here:
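For example:

SQL> create spfile from memory;

File created.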

The ability to create the pfile or spfile entirely from memory is wonderful, since it provides a fallback option when you lose your initialization file or the server parameter file, or both, provided the instance is up. In the following example, an attempt to create a new init.ora file from the server parameter file fails because the server parameter file is lost:
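The failure looks something like this (the exact error text shown here is only illustrative and depends on your environment):

SQL> create pfile from spfile;
create pfile from spfile
*
ERROR at line 1:
ORA-01565: error in identifying file '?/dbs/spfile@.ora'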

In cases such as this, you simply use the create pfile from memory statement to re-create your init.ora file entirely from the instance memory. An interesting thing about the initialization parameter file you re-create this way is that the file will contain about 150 initialization parameters altogether, including many parameters you haven't explicitly set.

Oracle will simply take both the default values of the parameters you didn't set and the values of the explicitly set parameters and put them all in the new initialization file. Here's an example:
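You might name the file explicitly; an excerpt of a file generated this way could look like the following, with the specific parameters and values shown here being purely illustrative:

SQL> create pfile='/u01/app/oracle/dbs/init_from_mem.ora' from memory;

File created.

# excerpt of the generated file
*.compatible='11.1.0.0.0'
*.db_block_size=8192
*.db_name='orcl'
*.processes=150
*.undo_tablespace='UNDOTBS1'
...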

You’ll see all the parameters currently in force for the instance when you use the memory option to generate your parameter file.

Restore Point Enhancements

Oracle Database 10g introduced restore points, which are associated with the system change number (SCN) of the database at the time you create the restore point. Restore points serve as user-defined bookmarks to specific points in time when the database was in a good or normal state.

Restore points are highly useful when you're flashing back a database, because they eliminate the need to determine the SCN or the timestamp of the point in time to which you want to flash back the database. You simply specify the restore point instead.

Oracle Database 11g introduces significant enhancements to the restore point syntax, allowing you to create restore points for a particular SCN or point in time, as well as the capability to preserve restore points. In addition, you can use them to implement archival backups.

Creating Restore Points “as of” an SCN or a Timestamp

In earlier releases, when you issued the create restore point command, the database created a restore point associated with the SCN at the time of the restore point creation. You can now create a restore point for a specific SCN in the past or for a past point in time.

You use the as of construct to create a restore point corresponding to a previous SCN or timestamp. Of course, the SCN you specify must be a valid SCN from a past time period.

Let's see how you can use the as of clause to create a restore point with an older SCN or timestamp. First, check the current SCN of the database:
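The query and the value it returns look like this:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    2409686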

Next, create a restore point, restore2, as of the SCN 2400000, which is older than the current SCN of 2409686:

SQL> create restore point restore2 as of scn 2400000;

Restore point created.

Instead of an SCN, you can use a timestamp with the as of clause to specify a past point in time, as shown here:
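For example (the restore point name and the timestamp are illustrative):

SQL> create restore point restore3 as of timestamp
     to_timestamp('2008-06-01 09:00:00','YYYY-MM-DD HH24:MI:SS');

Restore point created.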

Of course, the database must be able to go back to the point in time you select with the timestamp clause.

Preserving Restore Points

Often you may create restore points to use with the flashback features. In such a case, you want to make sure the restore points aren't automatically deleted by the database, which can happen when it reaches the maximum number of restore points and drops the oldest restore point in the database.

In Oracle Database 11g, you can specify the preserve clause to indicate that the database can't automatically drop the restore point. Here's an example:
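For instance (the restore point name is illustrative):

SQL> create restore point restore4 preserve;

Restore point created.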

You can check whether a restore point is preserved by querying the V$RESTORE_POINT view in the following way:
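A query along these lines does the job:

SQL> select name, scn, time, preserved from v$restore_point;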

Note that a guaranteed restore point is always preserved, unlike normal restore points. A preserved restore point isn't the same as a guaranteed restore point, which enables you to flash back a database to the restore point regardless of the setting of the db_flashback_retention_target parameter. A preserved restore point is simply a normal restore point that you must explicitly delete. In Oracle Database 11g, you can use restore points during the implementation of archival backups.

Database Resident Connection Pooling

Traditionally, Oracle has offered two different types of server processes to handle the requests of user processes: the dedicated server process, which handles one user process at a time, and the shared server process, which serves multiple user processes simultaneously through dispatchers.

By default, an Oracle database always allows dedicated server processes, but the shared server architecture needs special configuration. Shared server processes offer an efficient way to scale up when faced with large numbers of user connections by eliminating the need to start a dedicated server for each user connection. It's common for web-based client applications that have similar database credentials and use an identical schema to share or reuse sessions. Huge numbers of client connections also mean that memory requirements become a problem and that the scalability of the middle tier and the database tier suffers. Web-based applications aren't always “active” throughout a session; rather, they show bursts of activity, and they don't maintain state through the time they are connected to the database.

You can use database connection pooling so a small number of database sessions can service a large number of end users, increasing both performance and scalability. However, technologies such as PHP can't take advantage of connection pooling via the application server, since they require a separate database connection for each web server process. Even if an application can use application-server-level connection pooling, it will still be stymied by a high number of database connections when dealing with large numbers of application servers.

In Oracle Database 11g, you can use scarce resources far more efficiently in a large-user environment with the new database resident connection pooling (DRCP) feature. DRCP employs a pool of dedicated servers for servicing large numbers of connections and is capable of handling tens of thousands of simultaneous connections to the database while using very little memory compared to both the dedicated server and the shared server approaches.

Applications that currently can't avail themselves of application server connection pooling, as well as applications that run on large application server farms, can take advantage of this feature to increase scalability. DRCP is especially designed to help architectures that use multiprocess, single-threaded application servers (PHP with Apache, for example) that are incapable of middle-tier connection pooling. Using DRCP, these applications can now effortlessly scale to tens of thousands of simultaneous connections.

DRCP is similar to the traditional dedicated server connection model, which is the default method of connecting to an Oracle database, but it doesn't need an exclusive dedicated server for each user connection throughout the lifetime of the connection.

The server in DRCP works just like a dedicated server while the user connection is actually working with it, but it is referred to as a pooled server.

A pooled server combines an Oracle server process and a user session, just as a dedicated server does. The server is called a pooled server rather than a dedicated server because each connection acquires a server for a temporary period from a pool of servers. When the user connection is done using the database, it relinquishes the server back to the server pool. In the dedicated server connection method, on the other hand, the server connection is terminated only after the client connection is terminated. So even if a client isn't actively using its dedicated server, the connection is maintained for the lifetime of the client connection, wasting crucial memory resources in the bargain. By enabling resource sharing among middle-tier client applications, you can now scale up to a large number of user connections with much smaller resource usage.

How DRCP Works

When you use a dedicated server, the instance creates a new server process and a new client session in response to each new client connection request. Under DRCP, a connection broker assigns the new client connection to a pooled server, if one is already available.

The client connection releases the pooled server back to the connection pool once its request has been serviced by the database. In both the dedicated server and shared server methods, by contrast, the client connection releases the memory and other resources back to the instance only after the termination of the session.

If the connection broker can't find a free pooled server to assign to the new client connection, it will create a new pooled server. However, if the number of pooled servers is already at the maximum level, the connection broker won't be able to create a new pooled server.

The client connection will then have to go into a wait queue until an existing pooled server is released back to the connection pool by one of the client connections currently using a pooled server. Both the dedicated server method and the DRCP method allocate memory from the program global area (PGA).

However, the big advantage of the DRCP method lies in the fact that the memory requirement is proportional to the number of pooled servers in the connection pool, not to the number of client connections, as is the case under the dedicated server approach. Here's an example that shows the tremendous gains in memory usage you can achieve by switching to DRCP from traditional dedicated server connections.

Let's say there are 10,000 client connections, with each client session requiring 200KB of memory and each server process requiring 5MB. Let's assume that the maximum number of pooled servers is 200. Here are the total memory requirements for the dedicated server, DRCP, and shared server approaches:

Dedicated server:                      10,000 x (200KB + 5MB) = 52GB
Database resident connection pooling:  200 x (200KB + 5MB) = 1.04GB
Shared server:                         10,000 x 200KB + 200 x 5MB = 3GB

You can see that DRCP requires only a little more than 1GB of memory for the 10,000 users, whereas the dedicated server approach requires a mammoth 52GB of memory!

Enabling and Disabling DRCP

You don't have to create a database resident connection pool, since the database already comes with the resident pool configured.

The default connection pool has the name SYS_DEFAULT_CONNECTION_POOL. All you have to do to enable DRCP in a database is to start the database resident connection pool. Use the new Oracle package DBMS_CONNECTION_POOL to do so, as shown here:
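The call is simply:

SQL> exec dbms_connection_pool.start_pool()

PL/SQL procedure successfully completed.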

You can specify a single parameter, pool_name, with the start_pool procedure, but currently only the default pool name, SYS_DEFAULT_CONNECTION_POOL, is supported. Check the status of the connection pool with the following query:
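A query along these lines shows the pool's status:

SQL> select connection_pool, status, minsize, maxsize from dba_cpool_info;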

You use the start_pool procedure only once to start the connection pool. The connection pool you start with the start_pool procedure survives instance shutdowns: when you restart an instance, the connection pool will be automatically restarted as well, provided you had enabled DRCP before the database shutdown. You can stop the connection pool, and thus disable DRCP, by using the stop_pool procedure, as shown here:
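The call mirrors start_pool:

SQL> exec dbms_connection_pool.stop_pool()

PL/SQL procedure successfully completed.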

The stop_pool procedure stops the connection pool and takes it offline. Note that when you restart the instance, the connection pool starts automatically; you don't have to use the start_pool procedure each time you restart the instance.

When you enable DRCP, clients connect to the new background process named connection monitor (CMON), which manages the server-side connection pool, instead of connecting to a dedicated server. A client that uses a common user name will use a previously allocated session.

When the application disconnects from the database, it returns the dedicated server process to CMON, which puts the process back in the connection pool to serve other client requests.

Applications can invoke the DRCP connection method in two ways: they can specify :POOLED in the EZ Connect string, or, if they are using tnsnames.ora, they can specify SERVER=POOLED in the TNS connect string. Here is an example showing how to leverage DRCP in a TNS connect string:
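An illustrative tnsnames.ora entry (the host and service names are placeholders) might look like this:

mydb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myhost.company.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = POOLED)
      (SERVICE_NAME = mydb.company.com)
    )
  )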

If you're using an EZ Connect string instead, specify DRCP in the following way:

myhost.company.com:1521/mydb.company.com:POOLED

Oracle has extended the OCISessionPool APIs so they work with DRCP. To maximize the gains from DRCP, Oracle recommends that applications using the OCISessionPool APIs specify the connection class attribute (to set a logical name for the application) when obtaining a session using OCISessionGet().

In addition, they must set the session purity attribute to SELF. By doing this, the application is set up to reuse a pooled session. When you specify the connection class and session purity as suggested here, the session will be obtained from and released back to the DRCP pool. In addition, the connections to the connection broker are cached, thus improving performance.

Configuring DRCP

You can configure the connection pool based on your database's usage requirements. Here are the main DRCP configuration parameters:

    • inactivity_timeout: The maximum idle time allowable to a pooled server before it is terminated
    • max_lifetime_per_session: Sets the time to live (TTL) duration for a pooled session
    • max_uses_per_session:The maximum number of times a pooled server can be released to the connection pool
    • max_size and min_size:The maximum and minimum number of pooled servers in the connection pool
    • max_think_time:The maximum time a client can remain inactive after obtaining a pooled server from the connection pool

If you want to adjust the values of several or all of the configuration parameters of the connection pool, you can do so by executing the configure_pool procedure of the DBMS_CONNECTION_POOL package.
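A sketch of such a call follows; the values are illustrative, and only a subset of the procedure's parameters is shown:

begin
  dbms_connection_pool.configure_pool(
    pool_name          => 'SYS_DEFAULT_CONNECTION_POOL',
    minsize            => 10,
    maxsize            => 100,
    inactivity_timeout => 1800,
    max_think_time     => 120);
end;
/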

If you want to adjust the value of a single connection pool configuration parameter, you can execute the alter_param procedure of the DBMS_CONNECTION_POOL package, as shown here:

SQL> exec dbms_connection_pool.alter_param(' ','INACTIVITY_TIMEOUT','3600')

In the example here, the alter_param procedure specifies a value of 3600 seconds (1 hour) for the inactivity_timeout parameter, meaning that a pooled server is allowed a maximum idle time of one hour before the server is terminated.

After making any configuration changes, you can always return to the original settings for the connection pool by executing the restore_defaults procedure, as shown here:

SQL> exec dbms_connection_pool.restore_defaults()

The restore_defaults procedure restores the default values for all connection pool configuration parameters.

Monitoring DRCP

You can use the following data dictionary views to monitor database resident connection pooling:

    DBA_CPOOL_INFO: Shows the name of the connection pool, its status, the maximum and minimum number of connections, and the timeout for idle sessions.
    V$CPOOL_STATS: Shows pool statistics such as the number of session requests and wait times for a session request.
    V$CPOOL_CC_STATS: Shows details about the connection class-level statistics.

Use DRCP to improve the scalability of your databases and applications and to share resources among multiple middle-tier client applications. If you want to support a large number of client connections with small memory requirements, or if you have client applications that can share or reuse sessions, DRCP will be very useful for you.

If you want to reduce memory usage, DRCP is the way to go.Also consider using DRCP if client applications use the database only intermittently and session affinity isn’t a requirement across client requests.

Comparing and Converging Database Objects

It's typical for replication environments to share database objects such as tables and indexes. These objects are known as shared database objects, since multiple databases share them. Shared database objects are commonly used by materialized views and Oracle Streams components, which maintain copies of the same tables and other objects in multiple databases.

Replication environments such as these strive to keep the common database objects synchronized at the multiple sites. However, it's not uncommon for shared database objects to become unsynchronized, with the result that a table will have a different number of rows and/or different data in its rows when compared to the same table in another database.

These data divergences, caused by network problems, user errors, configuration changes, materialized view refresh problems, and so on, may result in a failure to capture data changes on a database or to successfully transfer them to all databases in the configuration.

Oracle Database 11g provides the new DBMS_COMPARISON package, which lets you compare database objects in different databases. If the comparison process shows there are important differences in data between two databases, you can use the same package to converge the data in both databases so the two databases are consistent. You can compare and converge the following types of objects:

  • Tables
  • Views on single tables
  • Materialized views
  • Synonyms for the previous three types of objects

In an example a little later, we compare two tables on different databases that have the same name and the same columns. However, you can compare two tables that have different names, as well as tables that have different columns, as long as the columns share the same data type. You can also compare (and converge) a subset of columns and rows instead of an entire table or a materialized view.

Comparing Data

In the following example, we compare a simple shared database object, a table in the user HR's schema. To ensure that we can show how to converge data, we first change the data in three rows of the shared database object (the emp table) in the remote database.

We then use the DBMS_COMPARISON package to perform a comparison of the two tables on the two databases and then use the same package to merge the differences so the two tables are in sync once again.

The only requirement for using the DBMS_COMPARISON package is that the two tables we're comparing have at least one column that the package can identify as an index column.

This index column must uniquely identify each row that's part of the comparison, meaning the index has to be based on either a primary key constraint or a unique constraint on a non-null column. Otherwise, the package can't compare the two objects. The emp table has a primary key column in both databases, so we're OK here.

  1. Create a database link from the primary database (or11) to the secondary database (tenner). In the example, we use the user system as the owner of the database link to ensure that the user has the necessary privileges to execute the procedures in the DBMS_COMPARISON package and the privileges to access and modify tables in both databases. The remote database is named tenner, and so is our database link to that database from the primary database or11. (The full sequence of commands for these steps appears in the sketch after this list.)
  2. The next step is to create a divergence between the data in an identical table on the two databases.
  3. On the secondary database, make some changes in the scott.emp table so the data diverges from the hr.emp table on the primary database.
  4. Now that we have made sure the emp table in the two databases diverges, it's time to run the create_comparison procedure to trap the data divergence between the two tables.
  5. Create a comparison for the hr.emp table on the two databases by running the create_comparison procedure.
  6. Execute the compare function to see whether the create_comparison procedure has found any differences between the two tables.
  7. The compare function uses the scan ID 4 and prints the statement “Differences were found.”
  8. Since there are differences, you can run the following query, which uses the views DBA_COMPARISON and DBA_COMPARISON_SCAN_SUMMARY to tell us how many differences were found during the table comparison.
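Here is a sketch that strings the preceding steps together. The database link definition, the passwords, the comparison name (cmp_emp), and the specific rows changed are all illustrative:

-- Step 1: create the database link on the primary database (or11)
SQL> connect system/sammyy1@or11
SQL> create database link tenner connect to system identified by sammyy1 using 'tenner';

-- Steps 2 and 3: diverge the data on the secondary database (tenner)
SQL> connect system/sammyy1@tenner
SQL> update scott.emp set sal = sal + 500 where empno in (7369, 7499);
SQL> insert into scott.emp (empno, ename, sal) values (9999, 'TEST', 1000);
SQL> commit;

-- Steps 4 and 5: create the comparison from the primary database
SQL> connect system/sammyy1@or11
begin
  dbms_comparison.create_comparison(
    comparison_name    => 'cmp_emp',
    schema_name        => 'HR',
    object_name        => 'EMP',
    dblink_name        => 'TENNER',
    remote_schema_name => 'SCOTT',
    remote_object_name => 'EMP');
end;
/

-- Steps 6 and 7: run the comparison and report the scan ID
set serveroutput on
declare
  l_scan_info dbms_comparison.comparison_type;
  l_equal     boolean;
begin
  l_equal := dbms_comparison.compare(
               comparison_name => 'cmp_emp',
               scan_info       => l_scan_info,
               perform_row_dif => true);
  dbms_output.put_line('Scan ID: ' || l_scan_info.scan_id);
  if not l_equal then
    dbms_output.put_line('Differences were found.');
  end if;
end;
/

-- Step 8: check how many differences were found
SQL> select * from dba_comparison_scan_summary;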

The current_diff_count column from the DBA_COMPARISON_SCAN_SUMMARY view shows that there are three rows that are different between the emp table in the or11 database and the emp table in the tenner database.

The differences could arise because a row is present in one database but not the other, or because the row is present in both databases but with different data in the row.

Converging Data

Since we've discovered a data divergence between the local and the remote databases, we may want to synchronize the emp table in the two databases so they have identical data. You do this by using the converge procedure of the DBMS_COMPARISON package, as shown here:

  1. Connect to the local database as the user system, the owner of the database link that we created earlier between the two databases:

     $ sqlplus system/sammyy1@or11

  2. Execute the converge procedure of the DBMS_COMPARISON package to synchronize the data between the two databases:
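A sketch of the call, reusing the comparison created earlier and the scan ID (4) reported by the compare function; the dbms_output formatting is illustrative:

set serveroutput on
declare
  l_scan_info dbms_comparison.comparison_type;
begin
  dbms_comparison.converge(
    comparison_name  => 'cmp_emp',
    scan_id          => 4,
    scan_info        => l_scan_info,
    converge_options => dbms_comparison.cmp_converge_local_wins);
  dbms_output.put_line('Local rows merged:   ' || l_scan_info.loc_rows_merged);
  dbms_output.put_line('Remote rows merged:  ' || l_scan_info.rmt_rows_merged);
  dbms_output.put_line('Local rows deleted:  ' || l_scan_info.loc_rows_deleted);
  dbms_output.put_line('Remote rows deleted: ' || l_scan_info.rmt_rows_deleted);
end;
/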

In this example, we chose to replace the data in the emp table at the remote database with the data from the emp table on the local database. To do this, we use cmp_converge_local_wins as the converge option, meaning that the data from the local database trumps that in the remote database. However, we could also have chosen to do the reverse by specifying cmp_converge_remote_wins instead, which would have made the remote database table's data replace the local database table's data.

The converge procedure may modify or delete data in one of the databases to synchronize the data in both databases. The output printed after the converge procedure finishes execution shows that two rows in the remote database were merged, because they were different from the corresponding rows in the local database. Merging here means the local table's rows replace the rows in the remote table.

One row shows up in the Remote Rows Deleted column. This was a row that was found in the remote database but not in the local database. Since we chose to make the remote database's data conform to the local database's data, the converge procedure deletes that row from the remote database.

Assuming there were no further changes made during the data synchronization process,the two tables in the local and remote databases are now completely synchronized.

Note that you can also compare and converge different types of database objects at two databases.For example,you can compare and converge a table on one database and a materialized view on the other.

