# ASM Preferred Reads - Oracle 11g

In normal or high redundancy, ASM creates and maintains two or more copies of each ASM extent: a primary mirrored extent and a secondary (and, for high redundancy, tertiary) mirrored extent. In Oracle Database 10g, the ASM instance always reads the primary copy of the mirrored extent unless the primary cannot be accessed, in which case the secondary is read.

Oracle Database 11g introduces the ASM preferred mirror read, which gives the database the ability to read the secondary mirrored extent before the primary mirrored extent.

This feature provides the greatest benefit for extended (stretched) RAC cluster implementations. With stretched RAC clusters, diskgroups are configured with the primary copy in the local data center and a failure group at the remote data center.

The remote RAC instance is now able to read a local copy of the ASM extent, significantly reducing the network latency of reading from the remote data center and effectively increasing application performance. If that localized read fails, ASM will attempt to read the secondary mirrored extent from the remote failure group.

In Oracle Database 10g, the ASM instance always reads the primary copy of the mirrored extent, even if this means accessing the extent across the interconnect (in other words, remote extent access). If the interconnect latency is high, any remote access will impact database performance.

Oracle enables ASM preferred reads through the initialization parameter asm_preferred_read_failure_groups. This parameter specifies the preferred read failure group names and allows each ASM instance to read from the localized mirror copy.

This parameter accepts values of the form diskgroup_name.failure_group_name, with the diskgroup and failure group names delimited by a period. Setting it guarantees localized reads for the ASM instance. You can provide multiple values by separating them with commas.
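As a sketch of the format, assuming an ASM instance whose local failure groups are DATA.FG1 and FRA.FG1 (the FRA diskgroup and its failure group name are assumptions for illustration), the comma-delimited value looks like this:

```sql
-- Hedged sketch: each value is diskgroup_name.failure_group_name;
-- multiple values are separated by commas. FRA.FG1 is assumed here
-- purely for illustration.
alter system set asm_preferred_read_failure_groups = 'DATA.FG1,FRA.FG1';
```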

This parameter is dynamically adjustable and can be modified with the alter system command. Even though it is dynamically adjustable, it does not take effect until the diskgroup is remounted.

The alter system command updates the SGA structure of a disk in the RDBMS instance to reflect whether it is a preferred read disk. For normal and high redundancy diskgroups, there should be one failure group on each site of the extended cluster: two sites for normal redundancy diskgroups and, likewise, three sites for high redundancy diskgroups. If there are more failure groups in one site than another (in other words, four failure groups across a three-site extended RAC cluster), extents can end up mirrored to the same site, eliminating the high-availability benefit of setting up high redundancy.

Here’s the syntax to create a normal redundancy diskgroup so you can see how it applies to preferred mirror reads:

create diskgroup DATA normal redundancy
failgroup fg1 disk 'ORCL:CTCVOL1',
'ORCL:CTCVOL2',
'ORCL:CTCVOL3'
failgroup fg2 disk 'ORCL:LTCVOL1',
'ORCL:LTCVOL2',
'ORCL:LTCVOL3'
SQL> /

The fg1 failgroup is recognized as a localized read from RAC node #1 in SiteA, while it is recognized as a remote mirror read from RAC node #2 in SiteB. Likewise, the fg2 failgroup is recognized as a localized mirror read from RAC node #2 in SiteB and as a remote mirror read from RAC node #1 in SiteA. For the SiteA servers, the ASM instances should read from the SAN storage devices in the SiteA data center:
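A hedged sketch of the SiteA setting, assuming the SiteA ASM instance has the SID +ASM1 (the SID is an assumption; the diskgroup and failure group names come from the create diskgroup statement above):

```sql
-- Hedged sketch: +ASM1 is an assumed SID for the SiteA ASM instance.
-- FG1 holds the CTC (SiteA) volumes, so SiteA reads locally from fg1.
alter system set asm_preferred_read_failure_groups = 'DATA.FG1'
  sid = '+ASM1';
```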

Set the parameter for the database servers in SiteB to the following:
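As a hedged sketch (again assuming the SID +ASM2 for the SiteB ASM instance):

```sql
-- Hedged sketch: +ASM2 is an assumed SID for the SiteB ASM instance.
-- FG2 holds the LTC (SiteB) volumes, so SiteB reads locally from fg2.
alter system set asm_preferred_read_failure_groups = 'DATA.FG2'
  sid = '+ASM2';
```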

You can also configure the preferred read failure groups using Enterprise Manager Database Console. This option is the Preferred Read Failure Groups field on the Configuration tab of the ASM home page. You can specify a comma-delimited list of failure groups whose member disks will serve as the preferred read disks for this node.

Oracle Database 11g adds the new column PREFERRED_READ to the V$ASM_DISK dynamic view. This column holds a Y or an N value to designate whether the disk belongs to a preferred read (localized) failure group. You can also view the performance characteristics of the ASM preferred read failure groups by querying V$ASM_DISK_IOSTAT. This view provides I/O statistics at the ASM instance level:
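A quick way to check which disks are marked preferred on a node is to query the new column directly; this is a minimal sketch:

```sql
-- Hedged sketch: list each disk's failure group and whether it is a
-- preferred (localized) read disk for this ASM instance.
select name, failgroup, preferred_read
from v$asm_disk
order by failgroup, name;
```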

SQL> select instname, failgroup,
  2  read_time, write_time, bytes_read, bytes_written
  3* from v$asm_disk_iostat
SQL> /

INSTNAME FAILGROUP  READ_TIME  WRITE_TIME BYTES_READ BYTES_WRITTEN
-------- ---------- ---------- ---------- ---------- -------------
DBA11g1  DATA1       161904000    2641000   58054144        752128
DBA11g1  FRA1          1249000    2575000    1730560       3363840
DBA11g1  DATA2       188251000    3564000   61817344       3549696

ASM Restricted Mode

Oracle Database 11g ASM implements the restricted startup option for maintenance mode. While in restricted mode, only the starting ASM instance has exclusive access to the diskgroups; databases are not permitted to access the ASM instance. You can see that startup restrict mounts all the diskgroups in restricted mode:

SQL> startup restrict;
ASM instance started

Total System Global Area  138158080 bytes
Fixed Size                  1296012 bytes
Variable Size             111696244 bytes
ASM Cache                  25165824 bytes
ASM diskgroups mounted

SQL> select name, state from v$asm_diskgroup;

NAME             STATE
---------------- -----------
DATA             RESTRICTED
FRA              RESTRICTED

When you try to start a database while the ASM instance is in restricted mode, you will receive the following error message:

SQL> startup
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/ICEMAN/spfileICEMAN.ora'
ORA-17503: ksfdopn:2 Failed to open file +DATA/ICEMAN/spfileICEMAN.ora
ORA-17503: ksfdopn:DGOpenFile05 Failed to open file +DATA/iceman/spfileiceman.ora
ORA-17503: ksfdopn:2 Failed to open file +DATA/iceman/spfileiceman.ora
ORA-15236: diskgroup DATA mounted in restricted mode
ORA-06512: at line 4

The restricted option can be specified at the diskgroup level to provide a granular level of restriction.

Let’s look at how to enable restricted access at the diskgroup level:

SQL> alter diskgroup data mount restricted;
Diskgroup altered.
SQL> select name, state from v$asm_diskgroup;

NAME               STATE
------------------ -------------------
DATA               RESTRICTED
FRA                MOUNTED

The restricted option can be used in RAC or non-RAC environments to perform a maintenance task such as a rebalance operation. In restricted mode, a rebalance avoids ASM-to-ASM extent relocation messaging, making the rebalance operation much faster.

ASM Diskgroup Maintenance

Oracle Database 11g introduces the force option to the mount and drop diskgroup syntax. In Oracle Database 10g, to mount a diskgroup, DBAs would issue the following command:

SQL> alter diskgroup data mount;

In Oracle Database 11g, you can force the mount of the diskgroup even if not all the disks participating in the diskgroup are available:

SQL> alter diskgroup data mount force;

By default, the mount of the diskgroup is done with the noforce option, and all disks must be available for the mount to succeed. With the force option, ASM simply offlines any disks that are missing and mounts the diskgroup with the available disks.

This option can cause problems if used incorrectly. The issue could be something as simple as an incorrect asm_diskstring parameter, so the force option should be used cautiously. The force option can be leveraged with ASM redundancy diskgroups: ASM will force a mount of a diskgroup even with missing or damaged failure groups. In a RAC environment, only one RAC node may mount a diskgroup with the force option. Once the diskgroup is mounted, the remaining nodes will receive an alert when they try to mount the diskgroup with the force option.

In the world of ASM, it is common practice to use the dd utility to clear out the headers of disks. In Oracle Database 10g, in order to drop a diskgroup, it had to be mounted; if the diskgroup could not be mounted, you could clear the disk(s) using the dd command.
In Oracle Database 11g, ASM introduces the force option to the drop diskgroup command. The force option can be used if the target diskgroup cannot be mounted; it marks the headers of the disks with a “FORMER” status. Oracle Database 11g lifts the requirement that the diskgroup must be mounted to be dropped. It is still considered a best practice to mount the diskgroup to issue the drop diskgroup command; only when the diskgroup cannot be mounted should you use the force option. You can use the following syntax to drop the diskgroup:

SQL> drop diskgroup data force;

When using the force option, you can include the including contents option:

SQL> drop diskgroup data force including contents;

Similarly, you can perform a drop diskgroup operation using Enterprise Manager Database Console. From the ASM Instance main page, go to the Disk Groups tab, and you will see a Remove button. You can drop a particular diskgroup by selecting it from the list and clicking the Remove button. On this screen, you have the option to drop with force or without force, as displayed in the figure.

Figure ASM diskgroup drop

You will also notice that there is a drop-down list for the rebalance operation; you can choose your power limit for the rebalance after the drop operation.

ASM performs some minor checks prior to dropping a diskgroup. First, ASM checks whether the diskgroup is being used by another ASM instance in the same SAN storage subsystem. If so, ASM checks whether the ASM diskgroup is participating in the same cluster; if so, the command returns an error. Next, ASM checks whether the diskgroup is mounted by any other cluster; if so, the command returns an error. Please use extreme caution when dropping diskgroups with the force option. The effects are irreversible, and the damage can be severe; ASM does perform minor checks, but they are not definitive.
After a successful drop force of a diskgroup, you can view the ASM alert log for details:

SQL> alter diskgroup data dismount
NOTE: cache dismounting group 1/0x5E08702D (DATA)
NOTE: cache dismounted group 1/0x5E08702D (DATA)
NOTE: De-assigning number (1,0) from disk (ORCL:VOL1)
NOTE: De-assigning number (1,1) from disk (ORCL:VOL2)
SUCCESS: diskgroup DATA was dismounted
SQL> drop diskgroup data force including contents
NOTE: Assigning number (1,0) to disk (ORCL:VOL1)
NOTE: Assigning number (1,1) to disk (ORCL:VOL2)
Thu May 31 04:55:33 2007
NOTE: erasing header on grp 1 disk VOL1
NOTE: erasing header on grp 1 disk VOL2
NOTE: De-assigning number (1,0) from disk (ORCL:VOL1)
NOTE: De-assigning number (1,1) from disk (ORCL:VOL2)
SUCCESS: diskgroup DATA was force dropped

You can also perform ASM diskgroup mount and dismount operations using Enterprise Manager Database Console. The Mount/Dismount button is located on the Diskgroup main screen on the ASM Home ➤ Disk Groups tab. When you choose to mount a particular diskgroup by selecting it from the list and clicking the Mount button, you will be directed to the Diskgroup Mount page, as shown in the figure.

Figure ASM diskgroup mount

You will also notice that you can check a box to mount the diskgroup in restricted mode. On the other hand, you may have to dismount a diskgroup. You can dismount a particular diskgroup by selecting it from the list and clicking the Dismount button. On this screen, you have the option to dismount with force or without force, as displayed in the figure.

Figure ASM diskgroup dismount

Diskgroup Checks

Starting in Oracle Database 11g, you can validate the internal consistency of ASM diskgroup metadata using the alter diskgroup ... check command.
The check clause does the following:

• Checks the link between the alias metadata directory and the file directory
• Checks the alias directory tree links
• Checks the ASM metadata directories for unreachable allocated blocks
• Checks the consistency of the disk
• Checks the consistency of file extent maps and allocation tables

The alter diskgroup check command can be applied to a specific file in a diskgroup, to one or more disks in a diskgroup, or to specific failure groups in a diskgroup. This command also checks all the metadata directories. The following example checks the DATA diskgroup:

SQL> alter diskgroup data check;

A summary of errors is logged in the ASM alert log file. Here’s an excerpt from the alert log file:

SQL> alter diskgroup data check
WARNING: Deprecated privilege SYSDBA for command 'ALTER DISKGROUP CHECK'
NOTE: starting check of diskgroup DATA
kfdp_checkDsk(): 9
kfdp_checkDsk(): 10
Tue Sep 11 03:17:59 2007
SUCCESS: check of diskgroup DATA found no errors
SUCCESS: alter diskgroup data check

The default behavior of the check clause is to repair the discovered errors. However, you can choose the norepair option if you do not want ASM to resolve the errors automatically. The good news is that you will still receive alerts about inconsistencies:

SQL> alter diskgroup data check norepair;

In the previous release, you had the following options to the check command: all, disk, disks in failgroup, and file. These options are now deprecated. If they are specified, the commands will continue to work, but messages will be written to the alert log.

You can also perform diskgroup checks using Enterprise Manager Database Console. The diskgroup Check button is located on the Disk Groups tab of the ASM Home page, as shown in the figure.

Figure ASM diskgroup Check button

Clicking the Check button will direct you to the Check Diskgroup page. On this screen, you have the option to check the diskgroup with or without the repair option, as displayed in the figure.
Figure ASM diskgroup check

Diskgroup Attributes

Oracle Database 11g introduces a new concept called ASM attributes. Attributes provide a granular level of control for DBAs at the diskgroup level. Here are the attributes you can set:

• Allocation unit (AU) size. Starting in Oracle Database 11g, the AU size can be specified at diskgroup creation time and can be 1, 2, 4, 8, 16, 32, or 64MB.
• The compatible.rdbms attribute.
• The compatible.asm attribute.
• disk_repair_time, in units of minutes (M) or hours (H).
• The redundancy attribute for a specific template.
• The striping attribute for a specific template.

The attributes for a diskgroup can be established at create diskgroup time or modified later using the alter diskgroup command. All of the diskgroup attributes can be queried from the V$ASM_ATTRIBUTE view.

Now we’ll show how attributes can be set and modified for ASM diskgroups. First, we’ll create a diskgroup with 10.1 diskgroup compatibility and then advance it to 11.1 using the alter diskgroup command. You can use the compatible.asm attribute to advance this attribute to 11.1:

create diskgroup data
disk '/dev/raw/raw1',
'/dev/raw/raw2',
'/dev/raw/raw3'
attribute 'compatible.asm' = '10.1'
SQL> /

Alternatively, if you use ASMLIB, you can create the diskgroup using the ASMLIB disk names:

create diskgroup data
disk 'ORCL:VOL1',
'ORCL:VOL2',
'ORCL:VOL3'
attribute 'compatible.asm' = '10.1'
SQL> /

Now, we’ll show the syntax to advance the diskgroup ASM attribute to 11.1. Please remember that once you advance an attribute to a higher version, you cannot reverse this action. Once you set any attribute to 11.1, you cannot go back to 10.x.

SQL> alter diskgroup data set attribute 'compatible.asm' = '11.1.0.0.0';

You can also set ASM and RDBMS attributes for the diskgroup using Enterprise Manager Database Console. The Advanced Attributes section of the diskgroup is on the Diskgroup main screen on the ASM Home ➤ Disk Groups tab. Click the diskgroup, as shown in the figure.

Figure ASM diskgroup attributes Edit button

Figure ASM diskgroup attributes

On the successful advancement of the diskgroup to 11.1, the following message is listed in the ASM alert log file:

NOTE: Advancing ASM compatibility to 11.1.0.0.0 for grp 1
NOTE: initiating PST update: grp = 1
Wed May 30 19:48:02 2007
NOTE: Advancing compatible.asm on grp 1 disk VOL1
NOTE: Advancing compatible.asm on grp 1 disk VOL2
NOTE: PST update grp = 1 completed successfully
SUCCESS: Advanced compatible.asm to 11.1.0.0.0 for grp 1

Let’s query the V$ASM_ATTRIBUTE view to confirm that the compatibility is truly set:

select name, value from v$asm_attribute
where group_number=1
SQL> /
NAME VALUE
-------------------- --------------------
disk_repair_time 5H
au_size 1048576
compatible.asm 11.1.0.0.0
compatible.rdbms 10.1.0.0.0

Using the DATA diskgroup we created earlier, let’s change the compatible.rdbms attribute to 11.1:

SQL> alter diskgroup data set attribute 'compatible.rdbms' = '11.1';

By querying V$ASM_ATTRIBUTE, you can see that the compatibility for the RDBMS is set:

select name, value from v$asm_attribute
where group_number=1
SQL> /

NAME                 VALUE
-------------------- --------------------
disk_repair_time 5H
au_size 1048576
compatible.asm 11.1.0.0.0
compatible.rdbms 11.1.0.0.0

You can specify a combination of attributes at diskgroup creation time. We’ll show another example where the au_size and compatible.asm attributes are specified in a single create diskgroup command:

create diskgroup fra disk '/dev/raw/raw11',
'/dev/raw/raw12',
'/dev/raw/raw13'
attribute 'au_size' = '16M','compatible.asm' = '11.1'
SQL> /
Again, you can create this same FRA diskgroup with the ASMLIB syntax: create diskgroup fra disk 'ORCL:VOL3',
'ORCL:VOL4',
'ORCL:VOL5'
attribute 'au_size' = '16M', 'compatible.asm' = '11.1'
SQL> /

Allocation Unit (AU) Sizes

Prior to Oracle Database 11g, the AU size could not be specified at diskgroup creation time; all AUs were 1MB. There is a workaround, though not many DBAs are aware of it: you can use a nondefault AU size in Oracle Database 10g. Specifying the AU size in Oracle Database 10g means setting an underscore initialization parameter and then creating the diskgroup.

If you want a different AU size, you must set the initialization parameters and bounce the ASM instance with the new AU size. The following underscore initialization parameters allow a 16MB AU size and a 1MB stripe size. This is recommended only for VLDBs, and may also be suitable for databases that store large objects (BLOBs and CLOBs).

• _asm_ausize=16777216
• _asm_stripesize=1048576

In Oracle Database 11g, the au_size attribute can be specified only at diskgroup creation time. Because it involves storage characteristics, this attribute cannot be modified using the alter diskgroup command.

Starting in Oracle Database 11g, you can set the ASM allocation unit (AU) size from 1MB all the way to 64MB in powers of 2; the valid AU sizes are 1, 2, 4, 8, 16, 32, and 64MB.

Larger AUs can be beneficial for VLDBs or data warehouses that perform large sequential reads. In addition, organizations that store BLOBs or SecureFiles inside the database can benefit from larger AUs.

Variable-Size Extents

Variable-size extents provide support for larger ASM files. Moreover, this feature reduces the SGA requirements for managing the extent maps in the RDBMS instance. Setting the AU to a higher value also reduces metadata space usage, since it reduces the number of extent pointers associated with the metadata.

Moreover, variable-size extents can significantly improve database open time and reduce memory utilization in the shared pool. Variable-size extents allow you to support databases that are hundreds of terabytes and even several petabytes in size.

Variable-size extents dynamically change extent size depending on how many AUs have been allocated.The management of variable-size extents is automatic and does not require manual intervention.

The ASM variable extent feature kicks in and starts allocating 8 × AU extent sizes after 20,000 extents have been allocated. Variable-size extents are similar to uniform extent allocation in the database and are allocated in 1, 8, and 64 AU chunks.

This is handled automatically by ASM. Variable-size extents minimize the overhead associated with maintaining ASM metadata. After 40,000 extents, extents are allocated at 64 × AU size. With the default 1MB AU, the largest extent size supported in Oracle Database 11g is 64MB.

ASM files larger than 20GB and up to 128TB are great candidates for variable-size extents.

We’ll now take the default 1MB allocation unit and walk through an example of how this works. When a file hits 20GB (20,000 extents at 1MB), the extent size changes from 1MB to 8MB, since 8MB is 8 times the AU. At this point, ASM allocates 8MB extents until the file reaches 40,000 extents; those 20,000 additional extents at 8MB add 160GB. When the file reaches the 180GB threshold (20GB + 160GB), the extent size changes to 64MB (64 times the AU).

There is a rare case where large numbers of noncontiguous small extents are allocated and freed, which can leave no large contiguous space available. This causes fragmentation in the diskgroup.

The remedy is simple: rebalance the diskgroup to reclaim large contiguous space. If you do not, you may see a slight degradation in performance, since ASM automatically performs defragmentation when an extent allocation cannot find contiguous storage.
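A manual rebalance is a one-line command; as a hedged sketch (the power level of 4 is an arbitrary illustration; valid power values in this release range from 0 to 11):

```sql
-- Hedged sketch: rebalance the DATA diskgroup to reclaim contiguous
-- space. The power limit (4 here, chosen arbitrarily) controls how
-- aggressively ASM relocates extents.
alter diskgroup data rebalance power 4;
```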

Manually Allocate Larger Extents

Starting with Oracle Database 11g, you can allocate larger extents when you create a diskgroup. Instead of the 1MB-sized extent, you can create diskgroups with a larger AU. You can change the allocation unit size to 16MB for the DATA diskgroup by using the create diskgroup command and setting the appropriate attributes for the diskgroup, as displayed here:

SQL> create diskgroup data
disk 'ORCL:DISK1',
'ORCL:DISK2'
attribute 'au_size' = '16M',
'compatible.asm' = '11.1',
'compatible.rdbms' = '11.1';

RDBMS and ASM Compatibility

Two kinds of compatibility settings are relevant to ASM and the database. When DBAs think about compatibility, they think about the initialization parameter in the init.ora file or spfile that dictates which version of functionality will be available to the ASM or database instance.

The compatible parameter is one of the two kinds of compatibility settings.The compatible parameter can be set for either ASM or the database instance. Here are the valid compatible values for both the ASM and database instance:

• 10.1
• 10.2
• 11.1

Obviously, 10.1 is the lowest level of compatibility, since ASM was introduced as a feature in 10.1. Setting the initialization parameter to a value lower than the software release excludes the new features introduced in the newer release.

For example, if the compatible parameter is set to 10.2 for an 11.1 ASM instance, the new Oracle Database 11g features, such as drop diskgroup with the force option mentioned earlier, will not be available. Similarly, the compatible parameter for the database plays a key role in negotiating which features are supported.

The other compatibility setting applies to ASM diskgroups and the functionality provided at the diskgroup level.There are attribute settings at the diskgroup level that control what features are available to the ASM diskgroup and which capabilities are available at the database level.

These attributes are called ASM compatibility (compatible.asm) and RDBMS compatibility (compatible.rdbms). At each diskgroup level, you can adjust these two compatibility settings to meet business or technology requirements. The diskgroup compatibility information is stored in the diskgroup metadata and provides support for multiple versions of databases (10g Release 1, 10g Release 2, and 11g).

RDBMS Compatibility

The terms RDBMS compatibility and ASM compatibility refer exclusively to diskgroup-level compatibility. RDBMS compatibility is set with the compatible.rdbms attribute. Surprisingly, the compatible.rdbms attribute defaults to 10.1 in Oracle Database 11g. This attribute dictates the minimum compatible version setting of a database that is allowed to mount the diskgroup and controls the format of the messages exchanged between the ASM and RDBMS instances.

ASM Compatibility

ASM diskgroup compatibility is defined by the compatible.asm attribute and controls the persistent format of the on-disk ASM metadata structures. As with compatible.rdbms, ASM defaults the compatible.asm attribute to 10.1. The general rule to remember is that the compatible.asm attribute must always be greater than or equal to the RDBMS compatibility level.

The compatible attributes can be specified in either short form or long form, using from two fields up to five fields of the version number: for example, 11.1 or 11.1.0.0.0. Similarly, for Oracle Database 10g environments, the value can be set to 10.1 or 10.1.0.0.0.

The combination of the compatible.asm and compatible.rdbms attributes controls the persistent format of the on-disk ASM metadata structures. These attributes also influence whether a database instance can mount a given diskgroup. Three things determine whether a database instance is allowed to mount a diskgroup:

• The compatible initialization setting of the database
• The compatible.rdbms attribute of the diskgroup
• The software version of the database

Even after you upgrade an ASM instance to Oracle Database 11g, the compatible.asm and compatible.rdbms attributes for the diskgroups remain at 10.1, which is the default and the lowest attribute level for ASM. You can see here that the ASM compatibility and RDBMS compatibility are still at 10.1 for the FRA diskgroup:

select name, block_size,
allocation_unit_size au_size, state,
compatibility asm_comp,
database_compatibility db_comp
from v$asm_diskgroup
SQL> /

NAME  BLOCK_SIZE  AU_SIZE  STATE      ASM_COMP    DB_COMP
----- ---------- -------- ---------- ----------- ----------
DATA        4096  1048576 CONNECTED  11.1.0.0.0  11.1.0.0.0
FRA         4096  1048576 CONNECTED  10.1.0.0.0  10.1.0.0.0

ASM instances can support multiple databases running different versions, and each database can have a different compatibility setting. The key point to remember is that each database's compatible initialization parameter must be greater than or equal to the RDBMS compatibility of all diskgroups used by that database. As stated earlier, the ASM compatibility of a diskgroup can be set to 11.0 while its RDBMS compatibility is 10.1. This implies that the diskgroup can be managed only by ASM software version 11.0 or higher, while any database using it must be running software version 10.1 or higher. To determine the software version and compatibility setting of the database, you can query the V$ASM_CLIENT view, as displayed here:

select db_name,
status, software_version, compatible_version
from v$asm_client
SQL> /

DB_NAME    STATUS     SOFTWARE_VERSION  COMPATIBLE_VERSION
---------- ---------- ----------------- ------------------
DBA11g1    CONNECTED  11.1.0.6.0        11.1.0.0.0
DBA11g1    CONNECTED  11.1.0.6.0        11.1.0.0.0

Fast Mirror Resync

In Oracle Database 10g, when ASM is not able to write an extent to a disk, it offlines the disk, and shortly afterward the disk is dropped from the diskgroup. At this point, ASM performs a rebalance across the surviving disk members of the diskgroup using the mirror extent copies. This rebalance operation is extremely costly and can take hours. Even for nondisk problems such as bad cables or problems with HBAs or controllers, disks may get dropped from the diskgroup and rebalance activity may occur. ASM has no knowledge of what is causing the issue; it knows only that it is not able to complete a write operation.

In Oracle Database 11g, Oracle introduces a new feature called ASM fast mirror resync and no longer automatically drops a disk from the diskgroup after a write failure. Oracle assumes the content of the offlined disk is not damaged or modified and preserves its membership in the diskgroup. When a disk goes offline, Oracle now tracks all modified extents for a specified duration while keeping the disk's membership in the diskgroup intact. Once the disk is repaired or the temporary problem is resolved (that is, a cable issue, controller, HBA, and so on), ASM can resynchronize the tracked extents that were modified during the outage.

In Oracle Database 11g, the time to recover from a perceived disk failure is directly proportional to how many extents have changed during the outage. ASM can quickly resynchronize the changed extents on the failed disk from its surviving disks. The potential performance gain of the fast mirror resync feature is proportional to the number of changed allocation units: resync activities that take hours in Oracle Database 10g can conceivably come down to minutes in Oracle Database 11g.
Disk Repair Time

We’ll now talk about how the disk repair timer works. In Oracle Database 11g, the fast mirror resync feature is implemented using a grace period allotted to repair an offline disk. There is a new diskgroup attribute called disk_repair_time that allows you to specify the maximum amount of time before a failed disk is dropped from the diskgroup. The purpose of disk_repair_time is to prevent the disk from being dropped from the diskgroup, since a resynchronization operation is significantly less expensive than a rebalance operation. The default value of disk_repair_time is 3.6 hours, or 12,960 seconds; the maximum allowable value for this attribute is 136 years. To take advantage of the fast mirror resync feature, the compatibility attributes of the ASM diskgroup must be set to 11.1.0.0 or higher. You can query the compatibility level of the diskgroups:

SQL> select name, compatibility
  2* from v$asm_diskgroup
SQL> /

NAME       COMPATIBILITY
---------- -----------------------
DATA       10.1.0.0.0
FRA        10.1.0.0.0

By default, diskgroup compatibility is set to 10.1. You need to advance it to 11.1 to take advantage of disk_repair_time. You cannot set the disk_repair_time attribute with the create diskgroup syntax; it can be set only with the alter diskgroup command. In this example, we’ll advance the compatible.asm and compatible.rdbms attributes for the DATA diskgroup to 11.1 to take advantage of the repair timer:

SQL> alter diskgroup data set attribute 'compatible.asm' = '11.1';
Diskgroup altered.

SQL> alter diskgroup data set attribute 'compatible.rdbms' = '11.1';
Diskgroup altered.

Now you can query the V$ASM_ATTRIBUTE view to confirm the settings:

SQL> select name, value from v$asm_attribute;

NAME                 VALUE
-------------------- --------------------
disk_repair_time     3.6h
au_size              1048576
compatible.asm       11.1.0.0.0
compatible.rdbms     11.1

If you need to increase the duration of disk_repair_time, for example to five hours, you can change it with another alter diskgroup command. You can specify the disk_repair_time unit in minutes (M or m) or hours (H or h); if no unit is specified, the default is hours, and you can also set disk_repair_time at minute granularity.

You can query the REPAIR_TIMER column of the V$ASM_DISK or V$ASM_DISK_IOSTAT view to see the remaining time, in seconds, before ASM drops an offline disk. In addition, a disk resynchronization operation appears as a "SYNC" value in the OPERATION column of the V$ASM_OPERATION view.
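The attribute changes described above can be sketched as follows; a hedged illustration (the 5h value matches the five-hour example in the text, and 300m is an assumed minute-granularity equivalent):

```sql
-- Hedged sketch: raise the repair window to five hours.
alter diskgroup data set attribute 'disk_repair_time' = '5h';

-- Hedged sketch: the same window expressed at minute granularity
-- (300 minutes = 5 hours).
alter diskgroup data set attribute 'disk_repair_time' = '300m';
```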

Oracle’s background process, Diskgroup Monitor (GMON), wakes up every three minutes and checks all mounted diskgroups for offline disks. If GMON detects an offline disk, it notifies a slave process to increment the disk's timer value (by three minutes) and initiates a drop for any offline disk whose timer value exceeds its deadline. This timer is shown in the REPAIR_TIMER column of V$ASM_DISK.

Online and Offline Disks

Associated with this grace period attribute of the diskgroup, Oracle Database 11g ASM also provides the online option to the alter diskgroup command to initiate a disk resynchronization operation. This statement copies all the extents that are marked as stale from their redundant copies:

SQL> alter diskgroup data online disk data_0001;

Similar to onlining a disk, you can offline a disk with the alter diskgroup ... offline disk SQL command for preventive maintenance:

SQL> alter diskgroup data offline disk data_0000 drop after 20 m;

You can accomplish this same onlining and offlining of disks using Enterprise Manager Database Console. From the Diskgroup main page for the specific diskgroup, you will notice the Online and Offline buttons, as displayed in the figure.

Figure ASM diskgroup Offline disk button

You can pick the disk in question and click the Offline button. This directs you to the Disk Offline Confirmation page, where you can choose the appropriate disk repair time and offline the specified disk. The figure shows this example.

Figure ASM diskgroup offline disk

ASM will update the MOUNT_STATUS and MODE_STATUS columns to the MISSING and OFFLINE states. Also, the REPAIR_TIMER column will start counting down toward dropping the disk from the diskgroup. The following example queries the V$ASM_DISK view to check the status of the disk and the REPAIR_TIMER column to see the remaining time:

```
SQL> select name, header_status,
     mount_status, mode_status, state, repair_timer
     from v$asm_disk where group_number=1
SQL> /

NAME      HEADER_   MOUNT_  MODE_    STATE  REPAIR_
          STATUS    STATUS  STATUS          TIMER
--------- --------- ------- -------- ------ -------
DATA_0003 MEMBER    CACHED  ONLINE   NORMAL       0
DATA_0002 MEMBER    CACHED  ONLINE   NORMAL       0
DATA_0001 MEMBER    CACHED  ONLINE   NORMAL       0
DATA_0000 UNKNOWN   MISSING OFFLINE  NORMAL     840
```

Assuming the disk has not been dropped from the diskgroup, you can bring it back online with alter diskgroup ... online disk once the maintenance is complete. The following is an excerpt from the ASM alert.log file during the offline and online of the disk:

```
SUCCESS: Advanced compatible.asm to 11.1.0.0.0 for grp 1
SQL> ALTER DISKGROUP data OFFLINE DISK VOL1
NOTE: DRTimer CodCreate: of disk group 1 disks 0
WARNING: initiating offline of disk 0.3951611545 (VOL1) with mask 0x7e
NOTE: initiating PST update: grp = 1, dsk = 0, mode = 0x15
NOTE: group DATA: updated PST location: disk 0001 (PST copy 0)
NOTE: PST update grp = 1 completed successfully
NOTE: initiating PST update: grp = 1, dsk = 0, mode = 0x1
NOTE: group DATA: updated PST location: disk 0001 (PST copy 0)
NOTE: PST update grp = 1 completed successfully
Thu May 31 07:47:19 2007
NOTE: cache closing disk 0 of grp 1: VOL1
Thu May 31 07:47:31 2007
SQL> ALTER DISKGROUP data ONLINE DISK VOL1
Thu May 31 07:47:31 2007
NOTE: initiating online of disk group 1 disks 0
WARNING: initiating offline of disk 0.3951611545 (VOL1) with mask 0x7e
NOTE: initiating PST update: grp = 1, dsk = 0, mode = 0x1
NOTE: PST update grp = 1 completed successfully
NOTE: initiating PST update: grp = 1, dsk = 0, mode = 0x1
NOTE: PST update grp = 1 completed successfully
Thu May 31 07:47:31 2007
NOTE: cache closing disk 0 of grp 1: VOL1
NOTE: F1X0 copy 1 relocating from 0:2 to 0:4294967294
NOTE: F1X0 copy 2 relocating from 1:2 to 1:2
NOTE: F1X0 copy 3 relocating from 65534:4294967294 to 65534:4294967294
NOTE: initiating PST update: grp = 1, dsk = 0, mode = 0x19
Thu May 31 07:47:31 2007
NOTE: group DATA: updated PST location: disk 0001 (PST copy 0)
NOTE: PST update grp = 1 completed successfully
NOTE: requesting all-instance disk validation for group=1
Thu May 31 07:47:31 2007
NOTE: disk validation pending for group 1/0xeac83e65 (DATA)
WARNING: ignoring disk in deep discovery
NOTE: cache opening disk 0 of grp 1: VOL1 label:VOL1
SUCCESS: validated disks for 1/0xeac83e65 (DATA)
NOTE: initiating PST update: grp = 1, dsk = 0, mode = 0x5d
NOTE: group DATA: updated PST location: disk 0001 (PST copy 0)
NOTE: group DATA: updated PST location: disk 0000 (PST copy 1)
NOTE: PST update grp = 1 completed successfully
NOTE: initiating PST update: grp = 1, dsk = 0, mode = 0x7d
NOTE: PST update grp = 1 completed successfully
NOTE: F1X0 copy 1 relocating from 0:4294967294 to 0:2
NOTE: F1X0 copy 2 relocating from 1:2 to 1:2
NOTE: F1X0 copy 3 relocating from 65534:4294967294 to 65534:4294967294
NOTE: initiating PST update: grp = 1, dsk = 0, mode = 0x7f
NOTE: PST update grp = 1 completed successfully
NOTE: completed online of disk group 1 disks
```

Once the disk issue is resolved, you can bring the disk back online using the online option of the alter diskgroup command:

```
SQL> alter diskgroup data online disk data_0000;
```

You should now see that REPAIR_TIMER is back to 0 and that all the disks are back to the ONLINE status. You can query the V$ASM_DISK view to confirm that the disks are ONLINE, as displayed here:

```
SQL> select name, header_status, mount_status,
     mode_status, state, repair_timer
     from v$asm_disk
     where group_number=1
SQL> /

NAME      HEADER_  MOUNT_  MODE_   STATE    REPAIR_
          STATUS   STATUS  STATUS           TIMER
--------- -------- ------- ------- -------- -------
DATA_0003 MEMBER   CACHED  ONLINE  NORMAL         0
DATA_0002 MEMBER   CACHED  ONLINE  NORMAL         0
DATA_0001 MEMBER   CACHED  ONLINE  NORMAL         0
DATA_0000 MEMBER   CACHED  ONLINE  NORMAL         0
```
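While a resynchronization kicked off by an online operation is still running, it surfaces as a "SYNC" value in the OPERATION column of V$ASM_OPERATION, as noted earlier. A quick progress check might look like the following sketch:

```
SQL> select group_number, operation, state, power, est_minutes
     from v$asm_operation;
```

An empty result set indicates that the resynchronization has completed.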

SAN administrators and system administrators may receive alerts from the SAN monitoring tools warning that a particular disk is about to go bad. In such situations, the disk will have to be replaced, and measures must be taken so that the replacement does not become an outage.
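One sketch of such a proactive replacement (the device path '/dev/raw/raw7' and disk name data_0004 here are purely illustrative) is to add the new disk and drop the failing one in a single alter diskgroup statement:

```
SQL> alter diskgroup data
     add disk '/dev/raw/raw7' name data_0004
     drop disk data_0000
     rebalance power 4;
```

Combining the add and drop in one statement lets ASM relocate the extents in a single rebalance operation, and because the drop waits for the rebalance, the diskgroup remains online and redundant throughout.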