Friday, June 07, 2013
Ever wonder about Big Data and what exactly it means, especially if you are already an Oracle Database professional? Or do you get lost in the jargon warfare that spews out terms like Hadoop, Map/Reduce and HDFS? In this post I will attempt to explain these terms from the perspective of a traditional database practitioner getting wet in the Big Data waters by being thrown into them. I was inspired to write this by the recent incident of the NSA scooping up Verizon call data, invading the privacy of citizens.
Thursday, October 15, 2009
OOW09 Session#4 DBA 11g New Features
For all those who came to the last of my four sessions - 11g New Features for DBAs - I appreciate your taking the time. It was a pleasant surprise to see about 500 people show up in a lunchtime slot on the last day of the conference.
Here is the presentation link. I hope you enjoyed the session and found it useful.
Sunday, September 28, 2008
OOW'08 Oracle 11g New Features for DBAs
It was my second session at Open World this year. It was full, with 332 attendees and a whopping 277 more on the wait list! The room capacity was 397. Of course, the room did have some fragmentation, and not everyone could make it.
Here is the abstract:
There is a world outside the glittering marketing glitz surrounding Oracle 11g. In this session, a DBA and author of the popular 11g New Features series on OTN covers features that stand out in the real world and make your job easier, your actions more efficient and resilient, and so on. Learn the new features with working examples: how to use Database Replay and SQL Performance Analyzer to accurately predict the effect of changes and Recovery Manager (RMAN) Data Recovery Advisor to catch errors and corruption so new stats won't cause issues.
Thank you very much to those who decided to attend. I hope you found it useful. Here is the presentation. You can download it from the Open World site too. Please note, the companion site with all working examples and more detailed coverage is still my Oracle 11g New Features Series on Oracle Technology Network.
OOW'08 : Real Life Best Practices for DBAs
Once more, it was a full session, with 278 attendees and 251 waitlisted due to lack of space! I wish the organizers had simply moved it to a bigger room. Oh, well!
For those who attended, I truly wish to express my gratitude. As a speaker, I feel honored when people choose to attend my session over others. I hope you found something useful here.
If you haven't already downloaded it from Open World site, here is the presentation: http://www.proligence.com/OOW08_DBA.pdf.
Here is the abstract:
This session covers best practices for excelling at being a DBA in the field from someone who has been managing Oracle Database instances for 14 years; was named the DBA of the Year in 2003; and has been through it all, from modeling to performance tuning, disaster recovery, security, and beyond. Best practices should be justifiable and situation-aware, not just because someone said they were good. Hear several tips and tricks for surviving and succeeding in the part art/part wizardry of database administration. The session covers Oracle 11g.
I would highly appreciate it if you posted your comments.
OOW'08 Partitioning - Why, When, What and How
I had a full session (300+ in attendance with 120 wait-listed). There was a gentleman from the organizing team who was trying to let people flow into the room as unobtrusively as possible. He was the same one for my other two sessions (which happened to be in the same room - also full, with some 100+ wait-listed) and he was visibly flustered at having to conduct the same routine. "You must be quite popular," he exclaimed in exasperation! I took it as a compliment. For those who attended, thank you for taking the time to spend with me. I truly feel honored. For those who couldn't come in, I assume you found some other helpful session.
Here is the abstract:
How do you decide which partitioning schemes to use, which column to partition on, or whether or not to use more than one column for partitioning keys? These are questions you must answer to develop a good strategy. Too many architects and DBAs choose a wrong strategy, making partitioning a failure. In this session, you will *not* learn the syntax or types of partitioning but how to decide the best strategy, presented through real-life case studies and examples. This session covers Oracle 11g features in partitioning.
If you haven't already done so from the Oracle Open World site, you can download here: http://www.proligence.com/OOW08_Part_ppt.pdf
By the way, I have also written an article on the same presentation, which you can find here: http://www.proligence.com/OOW08_Part_doc.pdf
Please leave your comments on these two documents. As always, thank you for visiting and offering your feedback.
Wednesday, September 17, 2008
Speaking on Partitioning at NY Metro Oracle Day
I recently delivered a session at New York Metro Area Oracle Day, organized by the user groups in New York, New Jersey and Connecticut. If you attended, thank you for coming. There is nothing more satisfying than seeing people take time off from their daily grind to sit in a room and listen to someone they may or may not know, but trust they can learn something from. Thanks again to all who attended.
The session was about how to use partitioning, not what it is. So I didn't talk about syntax, but rather about how to exploit partitioning techniques to accomplish common objectives.
Here is the PowerPoint presentation deck. However, you may find the descriptive article more suitable for reading.
Congratulations to the lady who won my book RMAN Recipes for asking the best question. I apologize, I couldn't get your name. If you read this, please get back to me.
The question was: In a RAC environment, does partitioning help or hurt?
Answer: Partitioning helps in these ways:
(1) it allows several segments to be created for the same object, which reduces contention on the cache buffers chains latches and ultimately reduces the global cache related waits.
(2) it reduces the chance that hot rows will be concentrated in one block, which would otherwise cause block contention among instances; so it reduces gc buffer busy waits.
(3) with hash partitioned indexes, it spreads the activity over many index blocks instead of a few, leading to fewer gc current grant related waits. (A quick sketch of hash partitioning follows below.)
The downside? Well, not much. Partition operations (drop/add/split) are DDL, not DML, operations. So, when you perform one of these, the library cache must be synced up on all instances, which may create library cache lock/latch issues. So, as a rule, perform partition maintenance operations during off-hours or periods of light activity.
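To make those points concrete, here is a minimal, hypothetical sketch of hash partitioning; the table name, columns and partition count are purely illustrative, not from any real system:

create table sales_fact (
   sale_id   number,
   sale_dt   date,
   amount    number
)
partition by hash (sale_id)
partitions 16;

-- a hash partitioned global index spreads a hot, monotonically
-- increasing key over 16 index segments instead of one
create index in_sales_fact_id on sales_fact (sale_id)
   global partition by hash (sale_id) partitions 16;

With 16 segments instead of one, inserts arriving from different RAC instances are far less likely to collide on the same buffer or index blocks.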
Monday, September 01, 2008
Magic of Block Change Tracking
The other day, one of the instances of our main 3-node RAC database crashed due to an I/O error. Apparently there were many "dead paths" from the host to the SAN, which made some LUNs in the ASM diskgroup unrecognizable by the ASM instance. Why? Well, that's a topic for another blog. The point behind this one has nothing to do with the reason. All I want to convey is that there was a LUN problem on one of the nodes which brought the instance down. Being RAC, the database was serviced from the other two nodes - praise God for RAC! - and the users didn't notice it terribly (or, I would like to think that way).
After a few days we noticed the incremental RMAN backup taking a long time. This caused major issues - it took a long time and the I/O waits went through the roof. In fact it took increasingly longer every day after that unfortunate collapse of the node. Everyone was quite intrigued - what could be the connection between an instance crash and the incremental backups slowing down? All sorts of theories cropped up - from failed HBA cards to undiscovered RAC bugs.
This is where I got involved. The following chronicles the diagnosis of the issue and the resolution.
First, the increased length of time is obviously a result of the incremental backups doing more work, i.e. processing more changed blocks. What caused so many changed blocks? Interviews with stakeholders yielded no clear answer - there was absolutely no reason for increased activity. Since we were doing proper research, I decided to start with the facts. How many extra blocks were the incrementals processing?
I started with this simple query:
select completion_time, datafile_blocks, blocks_read, blocks
from v$backup_datafile
where file# = 1
order by 1
/
Output:
COMPLETIO DATAFILE_BLOCKS BLOCKS_READ     BLOCKS
--------- --------------- ----------- ----------
18-JUL-08          524288       32023      31713
19-JUL-08          524288       11652      10960
20-JUL-08          524288      524288      12764
21-JUL-08          524288      524288       5612
22-JUL-08          524288      524288      11089
The columns are:
DATAFILE_BLOCKS - the number of blocks in the datafile at that time
BLOCKS_READ - the exact number of blocks the RMAN incremental backup read
BLOCKS - the number of blocks it actually backed up
From the above output, a pattern emerges - until Jul 20th, the backup read only a few blocks; but on July 20th, it started scanning the entire file - all the blocks! I checked a few other datafiles and the story was the same everywhere. With a 4.5 TB database, if the incremental backup reads the datafiles in their entirety, then I/O would obviously go for a toss. That explains the I/O and time issue.
But why did RMAN switch from reading a few blocks to the whole file that day? The #1 suspect is Block Change Tracking. The 10g feature BCT allows RMAN to scan only the changed blocks and not the entire file. We use that. So, did something happen to make that disappear?
To answer that, I issued a modified query:
select completion_time, datafile_blocks, blocks_read, blocks, used_change_tracking
from v$backup_datafile
where file# = 1
order by 1
/
Output:
COMPLETIO DATAFILE_BLOCKS BLOCKS_READ     BLOCKS USE
--------- --------------- ----------- ---------- ---
18-JUL-08          524288       32023      31713 YES
19-JUL-08          524288       11652      10960 YES
20-JUL-08          524288      524288      12764 NO
21-JUL-08          524288      524288       5612 NO
22-JUL-08          524288      524288      11089 NO
Bingo! BCT use ceased on July 20th. That was what caused the whole file to be scanned. But why was it stopped? No one actually stopped it.
Investigating even further, I found from the alert log of Node 1:
Sun Jul 20 00:23:52 2008
CHANGE TRACKING ERROR in another instance, disabling change tracking
Block change tracking service stopping.
From Node 2:
Sun Jul 20 00:23:51 2008
CHANGE TRACKING ERROR in another instance, disabling change tracking
Block change tracking service stopping.
Alert log of Node 3 showed the issue:
Sun Jul 20 00:23:50 2008
Unexpected communication failure with ASM instance:
ORA-12549: TNS:operating system resource quota exceeded
CHANGE TRACKING ERROR 19755, disabling change tracking
Sun Jul 20 00:23:50 2008
Errors in file /xxx/oracle/admin/XXXX/bdump/xxx3_ctwr_20729.trc:
ORA-19755: could not open change tracking file
ORA-19750: change tracking file: '+DG1/change_tracking.dbf'
ORA-17503: ksfdopn:2 Failed to open file +DG1/change_tracking.dbf
ORA-12549: TNS:operating system resource quota exceeded
Block change tracking service stopping.
The last message shows the true error. The error was "operating system resource quota exceeded", which made the ASM diskgroup unavailable. Since the ASM diskgroup was down, all the files in it were also unavailable, including the BCT file. Surprisingly, Oracle decided to stop BCT altogether rather than report it as a problem and let the user decide the next steps. So block change tracking was silently disabled and the DBAs didn't get a hint of it. Ouch!
Resolution
Well, now that we had discovered the issue, we took the necessary steps to correct it. Because of the usual change control process, it took some time to have the change approved and put in place. We executed the following to recreate the BCT file:
alter database enable block change tracking using file '+DG1/change_tracking.dbf';
The entry in the alert log confirms it (on all nodes):
Block change tracking file is current.
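As an aside, and a quick way to verify it yourself, the current state of block change tracking can be confirmed with a query like this against V$BLOCK_CHANGE_TRACKING (available in 10g and later):

select status, filename, bytes
from v$block_change_tracking
/

STATUS shows ENABLED or DISABLED, and FILENAME shows where the tracking file lives.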
But this does not solve the issue completely. To use block change tracking, there has to be a baseline, which is generally a full backup. We never take a traditional full backup. We always take an incremental image copy and then merge it into a full backup copy in a separate location. So, the first order of business was to take a full backup. After that we immediately took an incremental. It took just about an hour, down from some 18+ hours earlier.
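For reference, the incrementally updated (merge) backup strategy mentioned above typically looks something like this in RMAN; the tag name is just an illustration, not our actual script:

run {
  # roll the previous level 1 incremental into the image copy (the "merge")
  recover copy of database with tag 'incr_merge';
  # take a new level 1 incremental; the very first run creates the level 0 image copy
  backup incremental level 1 for recover of copy with tag 'incr_merge' database;
}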
Here is some analysis. Looking at the backup of just one file - file#1, i.e. SYSTEM datafile:
select COMPLETION_TIME, USED_CHANGE_TRACKING, BLOCKS, BLOCKS_READ
from v$backup_datafile
where file# = 1
order by 1
/
The output:
COMPLETIO USE     BLOCKS BLOCKS_READ
--------- --- ---------- -----------
18-AUG-08 NO       31713      524288
18-AUG-08 NO       10960      524288
20-AUG-08 NO       12764      524288
21-AUG-08 NO        5612      524288
22-AUG-08 NO       11089      524288
23-AUG-08 NO        8217      524288
23-AUG-08 NO        8025      524288
25-AUG-08 NO        3230      524288
26-AUG-08 NO        6629      524288
27-AUG-08 NO       11094      524288 <= the filesize was increased
28-AUG-08 NO        3608      786432
29-AUG-08 NO        8199      786432
29-AUG-08 NO       12893      786432
31-AUG-08 YES       1798        6055
01-SEP-08 YES       7664       35411
Columns descriptions:
USE - was Block Change Tracking used?
BLOCKS - the number of blocks backed up
BLOCKS_READ - the number of blocks read by the backup
Note that when BCT was not used, the *entire* file - 524288 blocks - was being read every time. Of course, only a fraction of that was actually backed up, since only that fraction had changed; but the whole file was being checked. After BCT, note how the "blocks read" number dropped dramatically. That is the magic behind the reduced time.
I wanted to find out exactly how much I/O savings BCT was bringing us. A simple query would show that:
select sum(BLOCKS_READ)/sum(DATAFILE_BLOCKS)
from v$backup_datafile
where USED_CHANGE_TRACKING = 'YES'
/
The output:
.09581342
That's just 9.58%. After BCT, only 9.58% of the blocks of the datafiles were scanned! Consider the impact of that. Before BCT, the entire file was scanned for changed blocks. After BCT, only about 9.58% of the blocks were scanned for changed blocks. Just 9.58%. How sweet is that?!!!
Here are three representative files:
File#   Blocks Read  Actual # of blocks Pct Read
----- ------------- ------------------- --------
  985           109             1254400     .009
  986             1              786432     .000
  987             1             1048576     .000
Note that files 986 and 987 were virtually unread (only one block was read). Before BCT, all the blocks were read; after BCT only 1 was. This makes perfect sense. These files hold essentially older data, so nothing changes there. RMAN incremental is now blazing fast because it scans less than 10% of the blocks. The I/O problem disappeared too, making the database performance even better.
So, we started with a random I/O issue causing a node failure, which led to increased time for the incremental backups, which was tracked down to block change tracking being silently suspended by Oracle without raising an error.
Takeaways:
The single biggest takeaway is that just because block change tracking is defined, don't assume it is going to stay enabled. So, a periodic check for the use of the BCT file is a must. I will work on developing an automated tool to check for non-use of the BCT file. The tool will essentially issue:
SELECT count(1)
FROM v$backup_datafile
where USED_CHANGE_TRACKING = 'NO'
/
If the output is greater than zero, an alert should be issued - ideally restricted to recent backups, as in the sketch below. Material for the next blog. Thanks for reading.
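A slightly more defensive version of that check might restrict itself to the last day's backups, so that old pre-BCT history does not raise false alerts; the one-day window is just an assumption to adjust to your own backup schedule:

select count(*)
from v$backup_datafile
where used_change_tracking = 'NO'
and completion_time > sysdate - 1
/

If this returns anything greater than zero, recent backups ran without BCT and someone should look into why.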
Saturday, August 23, 2008
Diagnosing Library Cache Latch Contention: A Real Case Study
The other day, our data warehouse database was effectively down. The database was up and running; but all connection attempts to the database would just hang. Since connections just hung, the DBA couldn’t even get in to see whether the connected sessions were running normally or not. The ETL team reported that the processing had slowed down to a crawl. The DBA, as instructed, went on to check the wait events; but since she couldn’t even log in, she couldn’t do that. Interestingly, CPU was around 70%, normal for that time of the day; and I/O was about 90%, again normal.
So, in simple terms - she was stuck, and she applied the age-old technique: when in doubt, reboot. She asked the SA to reboot the server. It took about 30 minutes; and after it came back up and the database started, everything seemed normal for about 10 minutes. And then the same problem struck - the database became unresponsive.
At that point, I was called for help. In this blog I am sort of trying to paint a picture of what I did in the next half hour or so to resolve the issue. I hope by giving this running commentary of the events, facts and figures, you will follow my thoughts as I went about diagnosing the issue.
Symptoms
(1) Database connections were hanging
(2) Connections on the server using SQL*Plus as sysdba were also hanging; so we couldn’t even examine the issue.
(3) The system couldn’t be rebooted every time; every reboot meant more work for the ETL folks to clean up the failed sessions.
Initial Analysis
Obviously something was holding some resource back in the database; but what? Unless we connected to the database, we couldn’t check. And since we couldn’t check, we couldn’t remediate. It was a classic case of Catch-22.
Action
This is one of those cases where a backdoor entry to the database instance comes in really handy. Most people are not aware of the SQL*Plus option called “prelim”. This option attaches to the SGA but does not open a session.
(1) So, the first thing I did was to open a sessionless SQL*Plus session as the Oracle software owner:
$ sqlplus -prelim
SQL>
Note, it didn’t say anything familiar like “Connected to Oracle Database 10.2.0.3”, etc. All it showed was the “SQL>” prompt. That was because it didn’t actually connect to the database.
(2) Then I used the “oradebug” utility to analyze the SGA
SQL> oradebug setmypid
SQL> oradebug hanganalyze 12
This produced a tracefile in the user_dump_dest directory. The file wasn’t difficult to find, since it was the last file created. Even if I didn’t find the file, I could have used the process ID to find the file. The file would have been named crmprd1_ora_13392.trc, assuming 13392 was the process ID.
(3) Let’s examine the file. Here are the first few lines:
*** 2008-08-23 01:21:44.200
==============
HANG ANALYSIS:
==============
Found 163 objects waiting for
<0/226/17/0x1502dab8/16108/no>
Open chains found:
Chain 1 ::
<0/226/17/0x1502dab8/16108/no>
<0/146/1/0x1503e898/19923/latch:>
This tells me a lot. First, it shows that SID 146 Serial# 1 is waiting for a library cache latch. The blocking session is SID 226 Serial# 17. The latter is not waiting for anything of a blocking nature. I also noted the OS process IDs of these sessions – 16108 and 19923.
(4) Next I checked for two more tracefiles with these OS PIDs in their names.
crmprd1_ora_16108.trc
crmprd1_ora_19923.trc
(5) I opened the first one, the one that is the blocker. Here are the first few lines:
*** 2008-08-23 01:08:18.840
*** SERVICE NAME:(SYS$USERS) 2008-08-23 01:08:18.781
*** SESSION ID:(226.17) 2008-08-23 01:08:18.781
LIBRARY OBJECT HANDLE: handle=c0000008dc703810 mtx=c0000008dc703940(8000) cdp=32737
name=UPDATE DW_ETL.FRRS_PROFILER SET CONSUMER_LINK = :"SYS_B_0", ADDRESS_LINK = :"SYS_B_1", ADDRESS_MATCH = :"SYS_B_2", PROC
ESSED=:"SYS_B_3" WHERE RNUM = :"SYS_B_4"
hash=a029fce7bb89655493e7e51a544592a4 timestamp=08-23-2008 00:10:23
namespace=CRSR flags=RON/KGHP/TIM/OBS/PN0/MED/KST/DBN/MTX/[504100d0]
kkkk-dddd-llll=0000-0001-0001 lock=N pin=0 latch#=10 hpc=0058 hlc=0058
lwt=c0000008dc7038b8[c0000008dc7038b8,c0000008dc7038b8] ltm=c0000008dc7038c8[c0000008dc7038c8,c0000008dc7038c8]
pwt=c0000008dc703880[c0000008dc703880,c0000008dc703880] ptm=c0000008dc703890[c0000008dc703890,c0000008dc703890]
ref=c0000008dc7038e8[c0000008dc7038e8,c0000008dc7038e8] lnd=c0000008dc703900[c0000008dc703900,c0000008dc703900]
LOCK OWNERS:
lock user session count mode flags
---------------- ---------------- ---------------- ----- ---- ------------------------
c0000008d079f1b8 c0000006151744d8 c0000006151744d8 16 N [00]
c0000008d4e90c40 c0000006151bcb58 c0000006151bcb58 16 N [00]
c0000008d0812c40 c0000008151a0438 c0000008151a0438 16 N [00]
(6) This is a treasure trove of information for debugging. First it shows the SID and Serial# (226.17), which confirms the SID we identified earlier. It shows the exact SQL statement being used. Finally it shows all the locks. I didn’t particularly care about the specifics of locks; but it gave me enough information to prove that the SID 226 was causing a wait for a whole lot of other sessions.
(7) My investigation is not done; I need to find out the sessions waiting for this. So, I searched the file for a section called “PROCESS STATE”. Here is a snippet from the file:
PROCESS STATE
-------------
Process global information:
process: c00000081502dab8, call: c000000817167890, xact: 0000000000000000, curses: c00000081519ef88, usrses: c000000815
19ef88
----------------------------------------
SO: c00000081502dab8, type: 2, owner: 0000000000000000, flag: INIT/-/-/0x00
(process) Oracle pid=370, calls cur/top: c000000817167890/c000000817167890, flag: (0) -
int error: 0, call error: 0, sess error: 0, txn error 0
(post info) last post received: 115 0 4
last post received-location: kslpsr
last process to post me: c000000615002038 1 6
last post sent: 0 0 24
last post sent-location: ksasnd
last process posted by me: c000000615002038 1 6
(latch info) wait_event=0 bits=20
holding (efd=4) c0000008d7b69598 Child library cache level=5 child#=10
Location from where latch is held: kglhdgc: child:: latch
Context saved from call: 13
state=busy, wlstate=free
waiters [orapid (seconds since: put on list, posted, alive check)]:
291 (197, 1219468295, 197)
279 (197, 1219468295, 197)
374 (197, 1219468295, 197)
267 (197, 1219468295, 197)
372 (197, 1219468295, 197)
... several lines sniped ...
307 (15, 1219468295, 15)
181 (6, 1219468295, 6)
waiter count=58
Process Group: DEFAULT, pseudo proc: c0000008e03150d8
O/S info: user: oracrmp, term: UNKNOWN, ospid: 16108
OSD pid info: Unix process pid: 16108, image: oracle@sdwhpdb1
(8) This told me everything I needed to know. There were 58 sessions waiting for library cache latch held by SID 226. I also knew the OS Process ID and the SQL statement of the blocking session.
(9) At that time we engaged the application owner to explain what was going on. As he explained it, he issues the update statement in a loop. And that’s not all; he executes it in 8 different threads. No wonder we had library cache latch contention. So, we had to track 8 sessions, not just one. We trudged on. All the sessions had dumped their information, so I searched the directory for other files with the same issue:
$ grep "UPDATE DW_ETL" *.trc
(10) And I found 9 more sessions (or, rather, processes). Here is a snippet from another file:
350 (167, 1219470122, 167)
197 (167, 1219470122, 167)
waiter count=185
Process Group: DEFAULT, pseudo proc: c0000008e03150d8
O/S info: user: oracrmp, term: UNKNOWN, ospid: 16114
This process had 185 waiters! Ouch!
(11) Now comes a decision point. I knew who was blocking and who was being blocked, although I didn’t yet know exactly which latch was being contended for. I could have dumped the library cache latches to get that information; but the application owner volunteered to terminate the sessions. The application, fortunately, was restartable. So, we decided to kill all of these errant sessions at the unix prompt.
$ kill -9
(12) After killing a few processes, the database started responding. After killing all of them, the database wait events came back to completely normal. Connections were established and applications started behaving normally.
After step 11, I could have used the library cache dump to examine the exact library element in the center of the contention; but that’s a topic for another blog.
Takeaways
(1) When something seems to be "hung", don’t get hung up on that. A session almost always waits for something; rarely is it just "hung". Check what it is waiting for by selecting the EVENT column of V$SESSION (10g) or V$SESSION_WAIT (9i), as in the sketch after this list.
(2) When you can’t log on to the database to get that information, try using the oradebug command.
(3) To use oradebug, you need SQL*Plus. Since you can’t log in, use "sqlplus -prelim" to get the SQL prompt.
(4) Use oradebug setmypid to start the oradebug session, and then use oradebug hanganalyze to create a dump file of all hang-related issues.
(5) Use oradebug help to see all oradebug commands.
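To illustrate takeaway (1), here is the kind of query you might run when you can log in; the BLOCKING_SESSION and WAIT_CLASS columns are available from 10g onward:

select sid, serial#, username, event, blocking_session, seconds_in_wait
from v$session
where wait_class <> 'Idle'
order by seconds_in_wait desc
/

Sessions stuck on "latch: library cache" with a long SECONDS_IN_WAIT and a common BLOCKING_SESSION point to the same kind of culprit we found above through oradebug.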
Tuesday, August 19, 2008
Put External Procedures on a Different Listener
As a best practice, I recommend placing external procedures, if you use them, on a separate listener. Why so?
External Procedures are actually O/S level programs executed by the listener, as the user "oracle" (or whatever the username for Oracle software is).
Let me repeat that: external procs allow database sessions to execute O/S level programs with the privilege level of the Oracle user! Do you see a problem here? Since the programs run as "oracle", they can do anything the user can do at the command line: "ls -l", "rm listener.log", or even "rm datafiles". Do you want that? Of course not.
Unfortunately, several vulnerabilities exist that exploit this particular feature. The CPU patches address some of them; but the bad guys respond by exposing even more holes. In this cat-and-mouse game, the best thing, in my opinion, is to reduce or even eliminate the exposure if feasible. If you don't use external procs (most people don't), why keep them in the listener.ora file? Unfortunately, the default configuration includes the external procs entry; so you should remove it, and you can do that easily.
If you must use external procs, then I suggest using a different listener for them. Doing so allows you to shut down that functionality while still allowing normal database connectivity. If you perceive an imminent threat, you can take evasive action by shutting down the extproc listener. You can't do that if the external procedures are on the same listener as the database.
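For illustration only, a dedicated external-procedure listener in listener.ora might look something like this; the listener name, IPC key and ORACLE_HOME path are hypothetical and need to match your environment:

# dedicated listener carrying only the extproc entry
EXTPROC_LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
  )

SID_LIST_EXTPROC_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
      (PROGRAM = extproc)
    )
  )

The regular database listener then carries no extproc entries, and this one can be stopped independently with "lsnrctl stop extproc_listener" whenever you want to cut off external procedures.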
Again, consider this: what is the harm in doing that? Nothing. And what is the benefit? I just showed you, however small that may be. So, if you lose nothing and potentially gain something, why not do it?