For all those who came to the last of my four sessions - 11g New Features for DBAs - I appreciate your taking the time. It was a pleasant surprise to see about 500 people show up for a lunchtime slot on the last day of the conference.
Here is the presentation link. I hope you enjoyed the session and found it useful.
Thursday, October 15, 2009
OOW09 Session# 3
I just finished the third of my four presentations - SQL Plan Management. Considering it was at 9 AM on the last day of the conference, right after the big Aerosmith concert, I was expecting a much smaller crowd. But, to my pleasant surprise, about 150 brave souls turned up. Thank you all. I hope you found it useful.
Here is the presentation material. While you are there, feel free to browse around. And, of course, I would highly appreciate it if you could send me your comments, either here or via email - whichever you are comfortable with.
Wednesday, October 14, 2009
OOW09 My Session#2
Today I delivered the second of my four sessions - "Upgrade Case Study: Database Replay, Snapshot Standby and Plan Baselines".
For those of you who attended, I thank you very much. Here is the presentation.
Tuesday, October 13, 2009
OOW Day2
Why do you come to Open World? I'm sure we will get all kinds of reasons, as many as there are stars in the sky. Some predominant themes are getting to know more about Oracle (or related) technologies by attending sessions, reconnecting with old friends and building networks. Of course, getting freebies from the Exhibit Halls, I'm sure, can't be far behind as a motivator.
I come to OOW for all those reasons as well. But high up on my list is the visit to the Exhibit Halls. No, not for the tee-shirts that do not fit me and graphics I don't really dig. I visit the demogrounds and exhibit halls to learn about the products and tools that I should be aware of. Where else would you find 1000+ companies advertising their products in one place? Sure, I can call them and find out; but how do I find them in the first place? OOW exhibit halls are prime "hunting" grounds for new ideas and tools that I should be interested in, or at least be aware of. Not only can I look at the tools; I can actually get some relevant technical facts in 5 minutes that might otherwise take weeks of scheduling and hours of marketing talk. And, if I decide the product is not relevant, I can always walk away. I have the privilege of walking away; they don't. If I call them to my office, "they" have that option; not me :) If I find something attractive, I can always follow up and get to know more.
Oracle demogrounds are even better. Not only can I meet Oracle PMs there, but also the people who never come out into the public world - developers, development managers, architects and so on. These unsung heroes are largely the reason why Oracle is what it is now. I meet the known faces, get to know new ones and establish new relationships. They hear from me what customers want, and I learn the innards of some features I am curious about.
So, I spent almost the whole day yesterday navigating the demogrounds and exhibit halls. I could cover only a small fraction. In between, I had to attend some meetings at work. Going to OOW is never "going away". I wish it were.
Sunday, October 11, 2009
OOW09 - RAC Performance Tuning
For all those who came to my session - many, many thanks. There is no better sight for a presenter than a roomful of attendees, especially with people standing along the walls. The fire marshal was probably not amused; but I was grateful. The harrowing incident of a blue screen of death on my PC - not just once but twice - right before the presentation was about to start was enough to throw me into panic mode; but the third time was a charm. It worked. Phew!
You can download the presentation here. And while you are there, look around and download some more of my sessions as well.
Thanks a lot once again. I'm off to the keynote now.
ACE Directors Product Briefing '09
One of the most valuable benefits of being an Oracle ACE Director is the briefings by Oracle Product Managers at Oracle HQ. This year the briefing was on Friday, Oct 9th, at the Oracle Conference Center rather than the customary Hilton Hotel.
While I was a little disappointed at the coverage of the database topics, I quickly recovered from the alphabet soup that makes up the netherworld of middleware and tools. A surprise visit by Thomas Kurian to address questions from the audience about the various product roadmaps was a testament that Oracle is dead serious about the ACE Program. It proves the commitment Oracle has made to the user community - very heartening.
As always, Vikky Lira and Lillian Buziak did a wonderful job of organizing the event. Considering there were about 100 ACE Directors from 20+ countries, that is no small task. Perhaps the highlight of the organization was the detailed briefing sheet Lillian prepared for each person individually, down to which car service one takes and when - simply superb! No amount of thanks will be enough. From the bottom of my heart, thank you, Vikky and Lillian. And thank you, Justin Kestelyn - for kicking off and running the event year after year.
Open World 09 Starts
Oracle Open World 2009 has officially started with the User Group sessions today. I am presenting a session today. I started off by registering and getting my cool Blogger badge holder, hanging off the even cooler ACE Director lanyard.
I went off to the first session of the day in the IOUG track - Workload Management by Alex Gorbachev. Alex is one of those people who know their stuff, so there is always something to be learned from him. Alex successfully demonstrated the difference between Connection Load Balancing and Server-Side Listener Load Balancing, with a pmon trace to show how the sessions are balanced. It sheds light on the common question - why Oracle does not seem to be balancing the workload.
If you didn't attend this, you should definitely download the presentation and check it out later.
Thursday, September 03, 2009
ASM Dynamic Volume Manager and ASM Clustered File System
Two of the top features in 11gR2 are the ASM Dynamic Volume Manager (ADVM) and ASM Clustered File System (ACFS). What is the big deal about these two?
ADVM allows you to create a volume from an ASM diskgroup. Here is an example where we created a volume called asm_vol1 of 100 MB on a diskgroup called DATA:
ASMCMD [+] > volcreate -G DATA -s 100M asm_vol1
Internally it issues the command
alter diskgroup DATA add volume 'asm_vol1' size 100M;
Now you enable the volume you just created:
ASMCMD [+] > volenable -G DATA asm_vol1
Internally it issues:
alter diskgroup DATA enable volume 'asm_vol1';
You can perform other operations like resize, delete and disable as well; more on those later in a full-length article.
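In the meantime, here is a hedged sketch of those other volume commands as I recall them from 11.2 asmcmd - volinfo shows the volume state and device name, volresize changes the size, and voldisable/voldelete do what their names suggest. Verify the exact options against the documentation before relying on them:
ASMCMD [+] > volinfo -G DATA asm_vol1
ASMCMD [+] > volresize -G DATA -s 200M asm_vol1
ASMCMD [+] > voldisable -G DATA asm_vol1
ASMCMD [+] > voldelete -G DATA asm_vol1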
Now that the volume is created, what can you do with it? Well, like all volumes, you can create a filesystem on it. Here is an example of creating a filesystem called acfs1:
[root@oradba2 ~]# mkdir /acfs1
[root@oradba2 ~]# /sbin/mkfs -t acfs /dev/asm/asm_vol1-207
mkfs.acfs: version = 11.2.0.1.0.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume = /dev/asm/asm_vol1-207
mkfs.acfs: volume size = 268435456
Next, register the mount point with the ACFS registry:
[root@oradba2 ~]# /sbin/acfsutil registry -a -f /dev/asm/asm_vol1-207 /acfs1
acfsutil registry: mount point /acfs1 successfully added to Oracle Registry
If you get an error, use the force option:
[root@oradba2 /]# /sbin/mkfs.acfs -f /dev/asm/asm_vol1-207
mkfs.acfs: version = 11.2.0.1.0.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume = /dev/asm/asm_vol1-207
mkfs.acfs: volume size = 268435456
mkfs.acfs: Format complete.
Now you mount the filesystem:
[root@oradba2 /]# /bin/mount -t acfs /dev/asm/asm_vol1-207 /acfs1
Now if you check the filesystems, you will notice a new one - /acfs1:
[root@oradba2 /]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
149313708 22429752 119176924 16% /
/dev/sda1 101086 11765 84102 13% /boot
tmpfs 907628 591640 315988 66% /dev/shm
/dev/asm/asm_vol1-207
262144 37632 224512 15% /acfs1
This new filesystem is actually carved out of the ASM diskspace. This can be used as a regular filesystem:
[root@oradba2 /]# cd /acfs1
[root@oradba2 acfs1]# ls
lost+found
[root@oradba2 acfs1]# touch 1
So, what's the big deal about it? Plenty.
First, this is part of ASM management; so all the bells and whistles of ASM - such as async I/O - apply to this filesystem.
Second, this is now a "cluster" filesystem; it is visible across a cluster. So, now you have a fully functional generic filesystem visible across the cluster.
Third, this is now protected by the Grid infrastructure, if you have installed it. Remember from my earlier posts that in 11gR2 you can now have a restartable grid infrastructure even on a single instance.
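A related nicety: an ACFS filesystem can be resized online using acfsutil. A minimal sketch, assuming the syntax I remember from 11.2 (verify before use):
[root@oradba2 /]# /sbin/acfsutil size +100M /acfs1
The extra space should come straight out of the ASM diskgroup through the underlying ADVM volume.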
More on ASM Dynamic Volume Manager later in a full length article. But I hope this makes you really interested.
Wednesday, September 02, 2009
Oracle 11g R2 Features
Continuing on the previous posts, here is another gee-whiz feature of 11gR2 - the "deinstall" feature. Yes, that's right, the deinstall one. Sometimes installations fail; sometimes you have to deinstall something to clean out the server for other use. Sometimes - as I did - you have to clean out beta code to install production code. The deinstall utility stops all the processes, removes all the relevant software and components (such as diskgroups), updates all config files and makes all necessary modifications to the other files. All this is done without you ever having to worry about remnants that may cause issues later.
You have to download the deinstall software from the 11gR2 download page on OTN. Choose "see all" to get to that software.
Here is the demonstration of the deinstall utility:
[oracle@oradba2 deinstall]$ ./deinstall -home /opt/oracle/product/11.2/grid1
ORACLE_HOME = /opt/oracle/product/11.2/grid1
Location of logs /opt/oracle/oraInventory/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################## CHECK OPERATION START ########################
Install check configuration START
Checking for existence of the Oracle home location /opt/oracle/product/11.2/grid1
Oracle Home type selected for de-install is: SIHA
Oracle Base selected for de-install is: /opt/oracle
Checking for existence of central inventory location /opt/oracle/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /opt/oracle/product/11.2/grid1
Install check configuration END
Traces log file: /opt/oracle/oraInventory/logs//crsdc.log
Network Configuration check config START
Network de-configuration trace file location: /opt/oracle/oraInventory/logs/netdc_check22387.log
Specify all Oracle Restart enabled listeners that are to be de-configured [LISTENER]:
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /opt/oracle/oraInventory/logs/asmcadc_check22388.log
Automatic Storage Management (ASM) instance is detected in this Oracle home /opt/oracle/product/11.2/grid1.
ASM Diagnostic Destination : /opt/oracle
ASM Diskgroups : +DATA1,+FRA1
Diskgroups will be dropped
De-configuring ASM will drop all the diskgroups and it's contents at cleanup time. This will affect all of the databases and ACFS that use this ASM instance(s).
After some initial question and answer it shows a summary of activities and prompts you for a confirmation:
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /opt/oracle/product/11.2/grid1
The cluster node(s) on which the Oracle home exists are: (Please input nodes seperated by ",", eg: node1,node2,...)null
Oracle Home selected for de-install is: /opt/oracle/product/11.2/grid1
Inventory Location where the Oracle home registered is: /opt/oracle/oraInventory
Following Oracle Restart enabled listener(s) will be de-configured: LISTENER
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/opt/oracle/oraInventory/logs/deinstall_deconfig2009-09-02_02-12-22-PM.out'
Any error messages from this session will be written to: '/opt/oracle/oraInventory/logs/deinstall_deconfig2009-09-02_02-12-22-PM.err'
After you press "y", it starts the operation of a clean deinstallation. The output continues as shown below:
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /opt/oracle/oraInventory/logs/asmcadc_clean22389.log
ASM Clean Configuration START
ASM deletion in progress. This operation may take few minutes.
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /opt/oracle/oraInventory/logs/netdc_clean22390.log
De-configuring Oracle Restart enabled listener(s): LISTENER
De-configuring listener: LISTENER
Stopping listener: LISTENER
Listener stopped successfully.
Unregistering listener: LISTENER
Listener unregistered successfully.
Deleting listener: LISTENER
Listener deleted successfully.
Listener de-configured successfully.
De-configuring Listener configuration file...
Listener configuration file de-configured successfully.
De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
---------------------------------------->
At some point you will be asked to shut down cssd, etc., which needs root privileges. The deinstall utility shows a command string you can run as root to accomplish this task:
Run the following command as the root user or the administrator on node "oradba2".
/opt/oracle/software/11gR2/deinstall/perl/bin/perl -I/opt/oracle/software/11gR2/deinstall/perl/lib -I/opt/oracle/software/11gR2/deinstall/crs/install /opt/oracle/software/11gR2/deinstall/crs/install/roothas.pl -force -delete -paramfile /opt/oracle/software/11gR2/deinstall/response/deinstall_Ora11g_gridinfrahome1.rsp
Press Enter after you finish running the above commands
Running the command on a different terminal as root:
[root@oradba2 ~]# /opt/oracle/software/11gR2/deinstall/perl/bin/perl -I/opt/oracle/software/11gR2/deinstall/perl/lib -I/opt/oracle/software/11gR2/deinstall/crs/install /opt/oracle/software/11gR2/deinstall/crs/install/roothas.pl -force -delete -paramfile /opt/oracle/software/11gR2/deinstall/response/deinstall_Ora11g_gridinfrahome1.rsp
2009-09-02 14:20:57: Checking for super user privileges
2009-09-02 14:20:57: User has super user privileges
2009-09-02 14:20:57: Parsing the host name
Using configuration parameter file: /opt/oracle/software/11gR2/deinstall/response/deinstall_Ora11g_gridinfrahome1.rsp
CRS-2673: Attempting to stop 'ora.cssd' on 'oradba2'
CRS-2677: Stop of 'ora.cssd' on 'oradba2' succeeded
CRS-4549: Stopping resources.
CRS-2673: Attempting to stop 'ora.diskmon' on 'oradba2'
CRS-2677: Stop of 'ora.diskmon' on 'oradba2' succeeded
CRS-4133: Oracle High Availability Services has been stopped.
ACFS-9200: Supported
Successfully deconfigured Oracle Restart stack
Now going back to the original terminal where deinstall was called from, press Enter. The output continues:
Oracle Universal Installer clean START
Detach Oracle home '/opt/oracle/product/11.2/grid1' from the central inventory on the local node : Done
Delete directory '/opt/oracle/product/11.2/grid1' on the local node : Done
The Oracle Base directory '/opt/oracle' will not be removed on local node. The directory is in use by Oracle Home '/opt/oracle/product/10.2/db1'.
The Oracle Base directory '/opt/oracle' will not be removed on local node. The directory is in use by central inventory.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
Oracle install clean START
Clean install operation removing temporary directory '/tmp/install' on node 'oradba2'
Oracle install clean END
Moved default properties file /opt/oracle/software/11gR2/deinstall/response/deinstall_Ora11g_gridinfrahome1.rsp as /opt/oracle/software/11gR2/deinstall/response/deinstall_Ora11g_gridinfrahome1.rsp1
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Following Oracle Restart enabled listener(s) were de-configured successfully: LISTENER
Oracle Restart was already stopped and de-configured on node "oradba2"
Oracle Restart is stopped and de-configured successfully.
Successfully detached Oracle home '/opt/oracle/product/11.2/grid1' from the central inventory on the local node.
Successfully deleted directory '/opt/oracle/product/11.2/grid1' on the local node.
Oracle Universal Installer cleanup was successful.
Oracle install successfully cleaned up the temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
The components are cleanly deinstalled now. The directories have been cleaned up by this tool.
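If you are paranoid like me, you can verify the cleanup yourself. A hedged sketch, assuming the central inventory location shown in the output above:
[oracle@oradba2 ~]$ ls /opt/oracle/product/11.2/grid1
[oracle@oradba2 ~]$ grep -i grid1 /opt/oracle/oraInventory/ContentsXML/inventory.xml
The first should report that the directory no longer exists; the second should return no matching HOME entry.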
This was available in 11gR1 as well; but R2 just makes it very user friendly.
Tuesday, September 01, 2009
Oracle 11g Release 2 is Finally Out
Finally, it's that time again - the birth of a new version of Oracle - 11g Release 2. Being a Release 2, it does not have as many bells and whistles as 11g Release 1.
I downloaded it immediately and started installation. Some of the gee-whiz features of this release are:
(1) Editions
(2) ASM Filesystem
(3) Oracle Restart
(5) Columnar Compression
I have been beta testing this for some time, so I had seen previews of the release. Continuing the previous series, I will write a new features series for 11gR2 on OTN as well - it will be an 11-part series.
A little bit about Oracle Restart. It adds lightweight clusterware functionality to a single instance database. If the instance crashes, Oracle Restart brings it up, monitors it and so on. And by the way, this is called "Grid Infrastructure". So you have to install two Oracle Homes - one each for the grid and the RDBMS.
When there is Grid, there is srvctl, of course. The grid infrastructure comes with srvctl. Here is how you check what is running from a specific Oracle Home:
oracle@oradba1 ~# srvctl status home -o /opt/oracle/product/11gR2/db1 -s state.txt
Database d112d1 is running on node oradba1
The above command creates a file called state.txt.
oracle@oradba1 ~# cat state.txt
db-d112d1
It shows the database name - D112D1.
This is done on a single instance Oracle database; not a cluster. But the grid infrastructure looks and feels like a cluster. Here are some more commands to check status:
oracle@oradba1 ~# srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): oradba1
oracle@oradba1 ~# srvctl status asm -a
ASM is running on oradba1
ASM is enabled.
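Presumably you can also inspect what Oracle Restart knows about the database. A hedged example based on my recollection of the 11.2 srvctl syntax; it prints the Oracle home, spfile location and diskgroup dependencies:
oracle@oradba1 ~# srvctl config database -d d112d1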
A bunch of new processes support this grid infrastructure:
oracle 19046 1 0 18:13 ? 00:00:03 /opt/oracle/product/11gR2/grid1/bin/ohasd.bin reboot
oracle 19487 1 0 18:15 ? 00:01:14 /opt/oracle/product/11gR2/grid1/bin/oraagent.bin
oracle 19502 1 0 18:15 ? 00:00:01 /opt/oracle/product/11gR2/grid1/bin/tnslsnr LISTENER -inherit
oracle 19656 1 0 18:15 ? 00:00:01 /opt/oracle/product/11gR2/grid1/bin/cssdagent
oracle 19658 1 0 18:15 ? 00:00:02 /opt/oracle/product/11gR2/grid1/bin/orarootagent.bin
oracle 19674 1 0 18:15 ? 00:00:01 /opt/oracle/product/11gR2/grid1/bin/ocssd.bin
oracle 19687 1 0 18:15 ? 00:00:00 /opt/oracle/product/11gR2/grid1/bin/diskmon.bin -d -f
Let's see what happens when you kill the instance.
oracle@oradba1 ~# ps -aef|grep pmon
oracle 14225 13768 0 23:15 pts/7 00:00:00 grep pmon
oracle 19866 1 0 18:16 ? 00:00:00 asm_pmon_+ASM
oracle 26965 1 0 20:53 ? 00:00:00 ora_pmon_D112D1
oracle@oradba1 ~# kill -9 26965
This will, of course, crash the instance. Let's check after some time:
oracle@oradba1 ~# ps -aef|grep pmon
oracle 14315 1 0 23:15 ? 00:00:00 ora_pmon_D112D1
oracle 14686 11492 0 23:17 pts/2 00:00:00 grep pmon
oracle 19866 1 0 18:16 ? 00:00:00 asm_pmon_+ASM
Where did the pmon come from? Didn't the instance just crash?
The instance was restarted by Oracle Restart.
What if you want to keep the instance down, e.g. during maintenance? Well, just shut it down normally; the instance will stay down. When you are ready, start the instance using either SQL*Plus or srvctl:
oracle@oradba1 ~# srvctl start database -d d112d1
Remember, D112D1 is a single instance database.
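And if you want the database to stay down across server reboots for an extended maintenance window, something along these lines should work - a sketch, assuming the standard 11.2 srvctl options:
oracle@oradba1 ~# srvctl disable database -d d112d1
oracle@oradba1 ~# srvctl stop database -d d112d1
... do the maintenance ...
oracle@oradba1 ~# srvctl enable database -d d112d1
oracle@oradba1 ~# srvctl start database -d d112d1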
More on this later, on OTN.
Monday, May 11, 2009
Collaborate 09 Ends
I wrapped up a hectic three days at Collaborate this year. As if to echo the sentiments on the economy in general, and Oracle database technology in particular, attendance was way down this year. The vast expanse of the Orange County Convention Center didn't make it easy either; it made the crowd appear even smaller! Well, it was Orlando; the weather and traffic cooperated, and I'm pretty sure some attendees picked up a few well-deserved resting points, either alone or with visiting family.
I presented three sessions:
(1) RAC Performance Tuning
(2) Real World Best Practices for DBAs (with Webcast)
(3) All About Deadlocks
I also participated in the RAC Experts' Panel as a Panelist.
As always, I used the opportunity to meet old friends and acquaintances. Collaborate is all about knowledge sharing, and I didn't lose out when it came to getting some myself. I attended some highly useful sessions:
(1) Database Capacity Planning by Ashish Rege
(2) Partitioning for DW by Vincent
(3) Hints on Hints by Jonathan Lewis
(4) 11g Performance Tuning by Rich Niemic
(5) HA Directions from Oracle by Ashish Ray
(6) Storage Performance Diagnosis by Gaja
(7) Two sessions - on Database Replay and Breaking Oracle by Jeremiah Wilton
and some more. It has been a worthy conference.
Friday, March 13, 2009
Excellent Information on Exadata
If you are interested in knowing more about the technology (not the marketing hype) behind Exadata - the data warehouse appliance from Oracle - I just discovered a treasure trove of information in this blog post:
http://www.pythian.com/news/1267/interview-kevin-closson-on-the-oracle-exadata-storage-server#comment-350433
Kevin Closson, one of the lead architects of the Exadata machine, was interviewed by Christo Kutrovsky and Paul Vallée of Pythian. Kevin, in his usual detailed style, explains the innards of the beast, aided by Christo's excellent leading questions.
I believe this is the best writing I have seen on Exadata - all good stuff, no fluff. Thank you, Kevin.
Saturday, January 24, 2009
Ultra-Fast MV Alteration using Prebuilt Table Option
Here is an interesting question posed to me once, for which I had found a solution. After 9 years, I encountered the same question and was shocked to find that many people still don't know about a little trick that could avoid a potential problem later.

Someone asked me how to modify a column of a Materialized View, e.g. from varchar2(20) to varchar2(25), or something similar. Drop and recreate? Not an option. We are talking about a several-hundred-GB MV with a very complex query that would take days to complete.
Problem
When you want to add a column to a materialized view or modify a column definition, unfortunately there is no command functionally equivalent to ALTER MATERIALIZED VIEW ... ADD COLUMN. The only way to alter an MV is to completely drop and recreate it with the alteration. That approach may be acceptable for small MVs; but for larger MVs the cost of rebuilding can make the process quite infeasible. In addition to the time it takes to rebuild the entire MV (which could be days, depending on the size), the redo/undo generation and the surge in logical I/O due to the MV query may seriously affect the performance of the source database. In some cases, large MVs may even fail to be rebuilt, as the undo segments may no longer have the undo information for long-running queries - causing ORA-1555 errors.
So is there a better approach? Yes, there is. In this post I am going to explain a better approach for creating an MV that makes alterations possible without rebuilding the MV - a task accomplished in mere seconds as opposed to potentially days.
Concept of Segments
Segments are storage units in Oracle. A table has a segment; a view does not - since the contents of a view are not stored, only the view definition. A Materialized View, however, stores its contents, so it has a segment.
Actually, the concept of segment goes a little bit further. If the table is partitioned, then each partition is a different segment. So, the relationship between tables and segments is one-to-many.
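You can see this one-to-many relationship yourself with a quick query; SALES here is a hypothetical partitioned table:
SQL> select segment_name, partition_name, segment_type
  2  from user_segments
  3  where segment_name = 'SALES';
Each partition comes back as its own row, with SEGMENT_TYPE showing 'TABLE PARTITION' instead of 'TABLE'.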
When you create an object that needs storage, such as a table, an MV or an index, Oracle first creates the corresponding segment. Once that is complete, the segment is shrouded by the cover of the object. The segment still continues to exist, but is now connected to the object. Until the segment is completely created and populated, the object technically does not exist. The segment may, in some cases, have a different name from the object. If the segment creation (or population) fails, Oracle automatically cleans up the remnants of the failed segment; but sometimes it may not be able to, leaving behind shards that are eventually cleaned up by the SMON process.
MVs and Segments
Anyway, how is this discussion about segments relevant to our objective here - the fast alteration of MViews?
Plenty. Remember, MVs are nothing but tables behind the covers. Property-wise, MVs and tables are like sisters, not even cousins. You can think of MVs as regular tables with some built-in intelligence about how they were created (the defining query), how often they should be refreshed automatically by a job, and how queries should be transformed to take advantage of the presence of the MVs. But apart from that, there is not much difference. You can directly insert into an MV, create indexes on it and so on. As far as a segment is concerned, there is no difference between an MV and a table. In fact, Oracle stores the segment as a table:
SQL> select SEGMENT_TYPE
2 from user_segments
3 where SEGMENT_NAME = 'MV1';
SEGMENT_TYPE
------------------
TABLE
However, the biggest difference is the very issue we are discussing - you can't add or modify columns of an MV, while you can do that freely for a table. If I could attempt to logically represent tables and MVs, here is how it would look.

The segment is the same. If the object was created as an MV, the properties of the MV take over the segment. If it was created as a table, the properties of a table take control.
Prebuilt Table
Since under the covers the segment is the same for both an MV and a table, can't you take advantage of that fact? Suppose you have a table and you now want to convert it to an MV. In other words, you want to repoint the arrow, initially pointed at the table, to the MV properties:

Can you do it? Yes, of course you can. Since at the segment level it is the same, Oracle allows you to do it. When you create an MV, you can use a special clause, ON PREBUILT TABLE. First, here is how you create an MV the regular way:
create materialized view mv1
never refresh as
select cast(count (1) as number(10)) cnt from t1;
If you check the objects created:
SQL> select object_id, data_object_id, object_type
2 from user_objects
3 where object_name = 'MV1';
OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE
---------- -------------- -------------------
74842 74842 TABLE
74843 MATERIALIZED VIEW
So, it creates two objects – a table and an MV - anyway. Note a very important difference though: the DATA_OBJECT_ID for the MV object is null. If you drop the MV and check for the objects:
SQL> drop materialized view mv1;
Materialized View dropped.
SQL> select object_id, data_object_id, object_type
2 from user_objects
3 where object_name = 'MV1';
no rows selected
Even though there were two objects – a table and an MV, when you dropped the MV, both were dropped. The table object didn’t have an independent existence. Dropping the MV drops the table automatically.
Now, in the modified approach, you first create the table with the same name as the MV you are going to create:
SQL> create table mv1 (cnt number(10));
Next you create the MV by adding a new clause called ON PREBUILT TABLE shown below:
create materialized view mv1
on prebuilt table
never refresh
as
select cast(count (1) as number(10)) cnt from t1;
Now there will be two objects as well - one table and one MV. The MV simply took command of the segment; since the table already existed, Oracle did not recreate the table object. So there are still only two objects.
One concern: since you created the table manually, can you accidentally drop it? Let’s see:
SQL> drop table mv1;
drop table mv1
*
ERROR at line 1:
ORA-12083: must use DROP MATERIALIZED VIEW to drop "ARUP"."MV1"
That answers it. The table simply loses its independent existence. However, see what happens when you drop the MV:
SQL> DROP MATERIALIZED VIEW mv1;
Materialized view dropped.
Now check the segment:
SQL> select segment_type
2 from user_segments
3 where segment_name = 'MV1';
SEGMENT_TYPE
------------------
TABLE
The segment still exists! When you dropped the MV, the segment was not dropped; it simply reverted to being a table. You can confirm that by checking the objects view:
OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE
---------- -------------- -------------------
77432 77432 TABLE
Voila! The object still exists as a table. Previously you saw that dropping the MV removed all the objects and the segment. In this approach, however, the segment was preserved. Since it reverted to a table, you can do everything possible with a table - select from it, create indexes, and - most important - modify the column. You can alter the column to make it NUMBER(11):
SQL> alter table mv1 modify (cnt number(11));
Table altered.
Now, create the MV again:
create materialized view mv1
on prebuilt table
never refresh as
select cast(count (1) as number(11)) cnt from t1;
That's it. The MV is altered. The whole process took a few seconds, and since you didn't have to recreate the segment, you saved an enormous load on the database. Here is a schematic representation of what happened.
Now you know how powerful the prebuilt table option is. It only affects how you define the MV; nothing else. All other properties of the MV remain intact. The end users don't even know about the prebuilt table option; but for the DBA it remains a powerful tool in the arsenal. As a best practice I recommend creating any MV, regardless of size, with the ON PREBUILT TABLE clause. For small tables you probably won't see a huge advantage; but what if today's small table grows into a large one tomorrow? It's better to be safe than sorry.
Conversion to the New Approach
Now that you understand the power of the prebuilt option, you may be wondering how to convert existing MVs to the new clause. Unfortunately there is no conversion path; you have to drop and recreate the MVs. That is why this time - when we are moving MVs to new tablespaces - we have a golden opportunity.
One approach is to create new tables with new names and then rename them. Here are the steps:
1. Create a table, with the nologging clause, from the old MV:
create table new_mv1
nologging
as
select * from mv1;
2. Capture the MV definition from the data dictionary:
select dbms_metadata.get_ddl ('MATERIALIZED_VIEW','MV1')
from dual ;
DBMS_METADATA.GET_DDL('MATERIALIZED_VIEW','MV1')
------------------------------------------------
CREATE MATERIALIZED VIEW "ARUP"."MV1" ("CNT")
ORGANIZATION HEAP PCTFREE 10
… and so on …
3. Spool this to a file to be executed later.
4. Edit this file to add the ON PREBUILT TABLE clause:
CREATE MATERIALIZED VIEW "ARUP"."MV1" ("CNT")
ORGANIZATION HEAP ON PREBUILT TABLE PCTFREE 10
5. Take a Data Pump export with the CONTENT=METADATA_ONLY option. This captures all the relevant privileges in the export dump file. Keep it aside.
6. Drop the Materialized View MV1.
7. Rename table NEW_MV1 to MV1
8. Execute the script you created earlier to recreate the MV.
9. Import the export dump file. It will recreate all the privileges.
This is slow, but it is the best approach since it generates the minimum amount of redo and undo.
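Steps 6 through 8 sketched out in SQL, using the names from above (the script name mv1_prebuilt.sql is hypothetical - it stands for the spooled, edited DDL from steps 3 and 4):
SQL> drop materialized view mv1;
SQL> rename new_mv1 to mv1;
SQL> @mv1_prebuilt.sql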
Hope this is helpful. You may also look at an article I wrote: http://www.dbazine.com/oracle/or-articles/nanda2. It describes, with complete code, how to alter an MV where the refresh occurs across databases.
Tuesday, January 20, 2009
The other day, we had a serious issue in the ASM diskgroups - one diskgroup refused to come up because one disk was missing; but it was not clear from the message which of the 283 devices was the missing one. This underscores the difficulty of diagnosing ASM discovery related issues. In this post, I have tried to present a way to diagnose this sort of issue through a real example.
We had planned to move from one storage device to another (EMC DMX-3 to DMX-4) using SRDF/A technology. The new storage was attached to a new server. The idea was to replicate data at the storage level using SRDF/A. At the end of the replication process, we shut the database and ASM down and brought up the ASM instance on the newer storage on the new server. Since the copy was at the disk level, the disk signatures were intact and the ASM disks retained their identity from the older storage. So, when the ASM instance was started, it recognized all the disks and mounted all the diskgroups (10 of them) except one.
While bringing up a disk group called APP_DG3 on the new server, ASM complained that disk number "1" was missing; but it was not clear which particular disk it was. In this post I describe how the situation was diagnosed and resolved.
Note: the ASM disk paths changed on the new storage. This was not really a problem, since we could simply define a new asm_diskstring. Remember: the diskstring initialization parameter simply tells the ASM instance which disks should be looked at during discovery. Once those disks are identified, ASM reads its signature on the disk headers to check the properties - the disk number, the diskgroup it belongs to, the capacity, version compatibility and so on. So as long as the correct asm_diskstring init parameter is provided, ASM can readily discover the disks and get the correct names.
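As a hedged illustration - the device pattern below is specific to this environment - repointing ASM at the new paths is as simple as:
ASM> alter system set asm_diskstring = '/dev/rdsk/c110*';
ASM> select path from v$asm_disk;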
Diagnosis
This issue arises when ASM does not find all the disks required for that disk group. There could be several problems:
(i) the disk itself is physically not present
(ii) it’s present but not allowed to be read/write at the SAN level
(iii) it’s present but permissions not present in the OS
(iv) it’s present but the disk is not mapped properly, so the disk header shows something else. ASM knows the disk number, group, etc. from the disk header. If the disk header is not readable, or it is not an ASM disk, the header will not reveal anything to ASM and hence the group will not mount.
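For cases (ii) and (iii) above, a quick OS-level sanity check on a suspect device can save a round trip to the Storage Admin. A sketch, using one of the device names that appears later in this post:
$ ls -lL /dev/rdsk/c110t6d25
$ dd if=/dev/rdsk/c110t6d25 of=/dev/null bs=1024k count=1
If the permissions look right but dd cannot read even one block, the problem is below the OS - at the SAN or replication layer.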
If an ASM diskgroup is not mounted, the group_number for its disks shows “0”. If it’s mounted, the group number shows whatever the group number of the disk group is. Please note: group numbers are dynamic. So, APP_DG1 may have group number “1” now, but the number may change to “2” after the next recycle.
Since the issue involved APP_DG3, I checked the group number for the group APP_DG3 from the production ASM (the old SAN on the old server) by issuing the query:
ASM> select group_number, name
2 from v$asm_diskgroup
3 where name = 'APP_DG3'
4 /
GROUP_NUMBER NAME
------------ ------------------------------
4 APP_DG3
This shows that the group number is 4 for the APP_DG3 group. I will use this information later during the analysis.
On the current production server, I checked the devices and disk number of group number 4:
ASM> select disk_number, path
2 from v$asm_disk
3 where group_number = 4;
DISK_NUMBER PATH
----------- --------------------
0 /dev/rdsk/c83t7d0
1 /dev/rdsk/c83t13d5
2 /dev/rdsk/c83t7d2
… and so on …
53 /dev/rdsk/c83t7d1
54 rows selected.
On the new server, I listed out the disks not mounted by the disk groups. Knowing that disks belonging to an unmounted diskgroup show a group number 0, the following query pulls the information:
ASM> select disk_number, path
2 from v$asm_disk
3 where group_number = 0;
DISK_NUMBER PATH
----------- --------------------
0 /dev/rdsk/c110t1d1
2 /dev/rdsk/c110t1d2
3 /dev/rdsk/c110t1d3
… and so on …
254 /dev/rdsk/c110t6d7
54 rows selected.
Carefully study the output. The results did not show anything for disk number “1”. The disks were numbered 0 followed by 2, 3 and so on. The final disk was numbered “254”, instead of 54. So, the disk number “1” was not discovered by ASM.
From the output we know that production disk /dev/rdsk/c83t7d0 mapped to new server disk /dev/rdsk/c110t1d1, since they have the same disk# (“0”). For disk# 2, production disk /dev/rdsk/c83t7d2 is mapped to /dev/rdsk/c110t1d2 and so on. However, production disk /dev/rdsk/c83t13d5 is not mapped to anything on the new server, since there is no disk #1 on the new server.
Next I asked the Storage Admin what he mapped for disk /dev/rdsk/c83t13d5 from production. He mentioned a disk called c110t6d25.
I checked on the new server whether that disk was even visible:
ASM> select path
2 from v$asm_disk
3 where path = '/dev/rdsk/c110t6d25'
4 /
no rows selected
It confirmed my suspicion - ASM can't even read the disk. Again, the reasons could be any of the above-mentioned ones - the disk is not presented, does not have the correct permissions, etc.
In this case the physical disk was actually present and was owned by "oracle", but it was not accessible to ASM. The issue was with SRDF not making the disk read/write. It was still in sync mode, preventing the disk from being enabled for writing. ASM couldn't open the disk in read-write mode; so it rejected it as a member of any diskgroup and assigned it the default disk number 254.
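A hedged tip for spotting such disks quickly: v$asm_disk also exposes the HEADER_STATUS and MODE_STATUS columns, which indicate whether ASM could read the disk header and open the disk:
ASM> select path, header_status, mode_status
  2  from v$asm_disk
  3  where group_number = 0;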
After the Storage Admin fixed the issue by making the disk read/write, I re-issued the discovery:
ASM> select path
2 from v$asm_disk
3 where path = '/dev/rdsk/c110t6d25'
4 /
PATH
-------
/dev/rdsk/c110t6d25
It returned with a value. Now ASM can read it correctly. I mounted the disk group:
ASM> alter diskgroup APP_DG3 mount;
It mounted successfully, because ASM now had all the disks needed to make up the complete group.
After that the disk# 254 also went away. Now the disks showed 0, 1, 2, 3, … 53 for the group on both prod and the new server.
We had planned to move from one storage device to another (EMC DMX-3 to DMX-4) using SRDF/A technology. The new storage was attached to a new server. The idea was to replicate data at the storage level using SRDF/A. At the end of the replication process, we shut the database and ASM down and brought up the ASM instance in the newer storage on the new server. Since the copy was disk level, the disk signatures were intact and the ASM disks retained their identity from the older storage. So, when the ASM instance was started, it recognized all the disks and mounted all the diskgroups (10 of them) except one.
While bringing up a disk group called APP_DG3 on the new server it complained that disk number “1” is missing; but it was not clear which particular disk was. In this blog the situation was diagnosed and performed.
Note: the asm disk paths changed on the storage. This was not really a problem; since we could simply define a new asm_diskstring. Remember: the diskstring initialization parameter simply tells the ASM instance which disks should be looked at while discovering. Once those disks are identified, ASM looks at its signature on the disk headers to check the properties - the disk number, the diskgroup it belongs to, the capacity, version compatibilty and so on. So as long as the correct asm_diskstring init parameter is provided, ASM can readily discover the disks and get the correct names.
Diagnosis
This issue arises when ASM does not find all the disks required for that disk group. There could be several underlying problems:
(i) the disk itself is physically not present
(ii) it's present but not enabled for read/write at the SAN level
(iii) it's present but the OS permissions are wrong
(iv) it's present but not mapped properly, so the disk header shows something else. ASM learns the disk number, the group and so on from the disk header; if the header is not readable, or the disk is not an ASM disk, the header reveals nothing to ASM and the group will not mount.
A quick OS-level test, shown below, separates the first three cases before you even touch the ASM views.
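The device name here is just a placeholder for the disk you suspect:
$ ls -l /dev/rdsk/c110t6d25
$ dd if=/dev/rdsk/c110t6d25 of=/dev/null bs=8192 count=1
The ls output shows whether the device exists and whether the oracle software owner has the right ownership and permissions; an I/O error from the dd read points at the SAN layer. Bear in mind that dd proves readability only - a disk can be readable yet still write-disabled, which, as you will see, was exactly the case here.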
If an ASM diskgroup is not mounted, the group_number for its disks shows "0". If it's mounted, the disks show whatever the group number of that diskgroup is. Please note: group numbers are dynamic. So, APP_DG1 may have group number "1" now, but that may change to "2" after the next recycle of the instance.
Since the issue involved APP_DG3, I checked the group number for the group APP_DG3 from the production ASM (the old SAN on the old server) by issuing the query:
ASM> select group_number, name
2 from v$asm_diskgroup
3 where name = 'APP_DG3'
4 /
GROUP_NUMBER NAME
------------ ------------------------------
           4 APP_DG3
This shows that the group number for the APP_DG3 group is 4. I will use this information later during the analysis.
On the current production server, I checked the devices and disk numbers for group number 4:
ASM> select disk_number, path
2 from v$asm_disk
3 where group_number = 4;
DISK_NUMBER PATH
----------- --------------------
0 /dev/rdsk/c83t7d0
1 /dev/rdsk/c83t13d5
2 /dev/rdsk/c83t7d2
… and so on …
53 /dev/rdsk/c83t7d1
54 rows selected.
On the new server, I listed the disks that did not belong to any mounted diskgroup. Since disks of an unmounted diskgroup show group number 0, the following query pulls that information:
ASM> select disk_number, path
2 from v$asm_disk
3 where group_number = 0;
DISK_NUMBER PATH
----------- --------------------
0 /dev/rdsk/c110t1d1
2 /dev/rdsk/c110t1d2
3 /dev/rdsk/c110t1d3
… and so on …
254 /dev/rdsk/c110t6d7
54 rows selected.
Study the output carefully. The results show nothing for disk number "1": the disks are numbered 0, then 2, 3 and so on. And the final disk is numbered "254" instead of the expected "53". So disk number "1" was not discovered by ASM.
From the output we know that production disk /dev/rdsk/c83t7d0 mapped to new server disk /dev/rdsk/c110t1d1, since they have the same disk# (“0”). For disk# 2, production disk /dev/rdsk/c83t7d2 is mapped to /dev/rdsk/c110t1d2 and so on. However, production disk /dev/rdsk/c83t13d5 is not mapped to anything on the new server, since there is no disk #1 on the new server.
Next I asked the Storage Admin which disk he had mapped for production disk /dev/rdsk/c83t13d5. He mentioned a disk called c110t6d25.
I checked on the new server whether that disk was even visible:
ASM> select path
2 from v$asm_disk
3 where path = '/dev/rdsk/c110t6d25'
4 /
no rows selected
It confirmed my suspicion - ASM couldn't even read the disk. Again, the reason could be any of the ones mentioned above - the disk is not presented, it does not have the correct permissions, and so on.
In this case the physical disk was actually present and owned by "oracle", but it was not accessible to ASM. The issue was with SRDF not making the disk read/write. It was still in sync mode, which prevented the disk from being enabled for writing. ASM couldn't open the disk in read/write mode, so it rejected it as a member of any diskgroup and assigned it the default disk number 254.
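When a disk is discovered but rejected like this, the status columns of v$asm_disk can help tell why; a check along these lines (the predicate is only an example) is worth a look:
ASM> select path, header_status, mode_status, state
  2  from v$asm_disk
  3  where disk_number = 254
  4  /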
After the Storage Admin fixed the issue by making the disk read/write, I re-issued the discovery:
ASM> select path
2 from v$asm_disk
3 where path = '/dev/rdsk/c110t6d25'
4 /
PATH
-------
/dev/rdsk/c110t6d25
It returned a row. Now ASM could read the disk correctly. I mounted the diskgroup:
ASM> alter diskgroup APP_DG3 mount;
It mounted successfully, because ASM now had all the disks that make up the group.
After that, disk number 254 went away as well. The disks now showed 0, 1, 2, 3, … 53 for the group on both the production and the new server.
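As a final sanity check, a query like the following on each server should return the same complete set of disk numbers (remember, the group number itself may differ between the two instances, hence the subquery):
ASM> select disk_number, path
  2  from v$asm_disk
  3  where group_number = (select group_number
  4                        from v$asm_diskgroup
  5                        where name = 'APP_DG3')
  6  order by disk_number
  7  /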
Monday, January 19, 2009
Making a Shell Variable Read Only
Being inherently lazy, I am always a sucker for shortcuts: neat tricks that cut my work and, most importantly, keep me from doing the same thing again and again. Here is a tip I find useful.
Have you ever been frustrated to find that some line inside a shell script has changed an important shell variable such as ORACLE_BASE? The list of variables that are important to the safety and efficiency of your shell is a long one - PS1, ORACLE_BASE, PATH and so on. Using this little-known command, you can easily "protect" a variable. The trick is to make it readonly. First, set the variable:
# export ORACLE_BASE=/opt/oracle
Then make it readonly:
# readonly ORACLE_BASE
Now if you want to set it:
# export ORACLE_BASE=/opt/oracle1
-bash: ORACLE_BASE: readonly variable
You can't. You can't even unset the variable:
# unset ORACLE_BASE
-bash: unset: ORACLE_BASE: cannot unset: readonly variable
This is a cool way to protect important variables.
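To protect a whole set of variables in every session, put the same two steps near the top of the login profile, such as ~/.bash_profile; the values here are examples only:
export ORACLE_BASE=/opt/oracle
export PS1='\u@\h:\w\$ '
readonly ORACLE_BASE PS1
Note that readonly accepts several names at once.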
To get a list of variables that are readonly, use
# declare -r
declare -ar BASH_VERSINFO='([0]="3" [1]="00" [2]="15" [3]="1" [4]="release" [5]="i386-redhat-linux-gnu")'
declare -ir EUID="500"
declare -rx ORACLE_BASE="/opt/oracle"
declare -ir PPID="13204"
declare -r SHELLOPTS="braceexpand:emacs:hashall:histexpand:history:interactive-comments:monitor"
declare -ir UID="500"
Unfortunately, there is no command to make it read/write again; the attribute stays for the life of the shell.
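The only way out is a fresh shell; the readonly attribute, unlike the value of an exported variable, is not inherited by child shells:
# bash
# export ORACLE_BASE=/opt/oracle1
The assignment succeeds in the child shell, which also means the protection must be re-established in every new shell - hence the profile suggestion above.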
In the same way, you can also prevent a specific variable from being set at all. LD_LIBRARY_PATH, for instance, should not be set during some types of installation. To enforce that:
# export LD_LIBRARY_PATH
# readonly LD_LIBRARY_PATH
Now if you want to assign a value:
# export LD_LIBRARY_PATH=d
-bash: LD_LIBRARY_PATH: readonly variable
You will not be able to. You can also achieve the same goal in a single command:
# declare -r LD_LIBRARY_PATH=
The trailing "=" sets the variable to the empty string and makes it readonly at the same time. I hope you find this useful.
Friday, October 10, 2008
About Clear Communications
Like everyone else, I like to rant once in a while. I usually rant about the shortcomings in Oracle software and the tools and technologies I work with. But this time I want to rant about a decidedly non-technical topic - one that, arguably, everyone must have felt at least once. It's about communicating clearly. Why don't people do it? Why don't they articulate whatever they are trying to say? Instead they spit it out incoherently, with thoughts sloppily slapped together, expecting the other person to somehow piece it all together. I am not talking about children; these are responsible adults who supposedly set policies and act as thought leaders. Unclear thought and communication not only frustrate people; they are dangerous. They misdirect effort, leading to waste and often utter failure.
Today I was on the receiving end of such a travesty. Earlier, the manager of an application team had written me this email (reproduced verbatim) about a requirement:
We are having a shortage of capabilities on the servers. So we want to increase the capabilities somehow. What do you recommend?
I was scratching my head. How could I comment on, or influence, the capabilities of their applications? Perhaps they were asking about some limitations that might be solved by Oracle technology features. So, I called them for a quick chat. After half an hour I still wasn't clear about what limitations they were trying to solve.
And then, after an hour, I got it: they were talking about capacity, not capability! And not only that: it was about the database server, not the app server. [I was ready to pull my hair out at this point.]
My recommendation would have been to send them to an English school; but, being occasionally wise, I kept it to myself.
OK; let's move on. I promised to have a DBA look at the capacity issue.
And a DBA did. Some days went by and things apparently reached a boiling point. I was told nothing had been done by the DBA and, well, that was not acceptable. So, I intervened. I asked the DBA for her side of the story. It was pretty simple: CPU and I/O were all normal, way below full utilization. But the growth projections were eight times. Yes, eight times. So, the DBA made a request for the increase in capacity, and that's where the friction started. No one had anticipated an eight-fold increase, so there was simply no room. Stalemate!
As the head of database architecture, I question any growth projection, especially one that goes up eight times. And I did. Here was the response: "we are running on 2 legs and we will run on 16 legs in the near future".
2 legs?!!! What is that? What is a leg?
As it turns out, the application was running 2 Java virtual machines, and the app architects were recommending 16 JVMs to add redundancy and distribute local processing. From the database perspective, that means the 2 clients of today will become 16; but the overall processing will *not* go up.
Instead of saying that in plain English, the app manager coined the term "leg" to describe the situation in some apparently technical way. This was communicated up his management chain, which in turn interpreted it as an eight-fold increase in processing and demanded that we create 7 more databases. They approved the new "databases" and allocated the budget. But since the friction involved a member of my team, I became involved. As I always do, I questioned the decision, and that's how the truth came out - which also cleared up the mystery behind the "capability" story described above.
All this hoopla because people did not communicate clearly. "We are adding 16 more clients" would have been simple enough to convey even to the mail guy; calling them "legs" confused everyone, wasted time and increased frustration. If I hadn't questioned it, it would have been implemented, too - 7 more databases doing nothing to solve the actual issue.
Communication is all about the articulation and presentation of thoughts. The key is empathy - put yourself in the recipient's shoes and look at what you are saying from that perspective. It's not about English (or whatever language is used); it's about the clarity of the thought process. Granted, not everyone can present their thoughts equally coherently while speaking; but what about writing? There is no excuse for not writing coherently, is there? The only reason for being incoherent is an I-don't-care attitude. Of course, there are other factors - unfamiliarity with the language, lack of time, environment (typing on the BlackBerry with one finger while cooking with the other hand), state of mind (typing out the report while waiting for the third drink to arrive at the bar) - but most of the time it's just plain lack of empathy. If you don't have that, you just don't communicate; at least not effectively. And when you don't communicate, you fail, regardless of your professional stature.
As Karen Morton once said during a HotSos Symposium - a mantra I took to heart and live by every day:
Knowledge and Experience Makes you Credible
Ability to Communicate Effectively makes you Useful
Useful - that's what you and I want to be; not just credible. Thank you, Karen.
Sunday, October 05, 2008
Upgrade Experience for Patchkit 11.1.0.7
The first patchkit for 11g - 11.1.0.7 - came out a few weeks ago for Linux systems. Unlike most other patchkits, this one contains some new functionality. Oracle patchkits are usually only bugfixes, with all the patches regression tested collectively, not added features. This one does have some new (albeit minor) features.
The upgrade was relatively easy but took a long time - about 30 minutes. I encountered one issue and one roadblock. Here is the annoying roadblock first.
A Very Important Pre-req
Do not forget to check this pre-requisite: the time_zone check, as specified in the patchkit README file.
SQL> select version from v$timezone_file;
VERSION
----------
4
In my case it returned 4, which means no further action was necessary. Had it returned something else, I would have had to follow the instructions in MetaLink Note 568125.1.
Asking for Email for Security Notification
It was interesting to note that the installer asked me to enter my email address so it could send me updates on security. Funny; I get those already anyway. So I ignored it and clicked "Next". Nope; it popped up a little window asking me to confirm that I did not want the emails. Sure, I confirmed and pressed Next. Same thing - it asked me to confirm again, and again, and so on. A vicious circle!
There was a little checkbox below that said "track security via MetaLink" and asked for my MetaLink password (not my "id", which it presumed to be my email address). I checked that box, entered my email and password, and it allowed me to go further.
This is a little awkward. Not everyone will want to get the email. In any case, a mere email will explain little about the security issues. And why would anyone want to embed his or her MetaLink password in a response file?
Cool Feature: The Pre-upgrade Script
Oracle now provides a script, utlu111i.sql, in the $ORACLE_HOME/rdbms/admin directory. Run this script before you shut the database down for the upgrade. It reports a lot of the issues you may need to fix before attempting the upgrade.
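A typical invocation from SQL*Plus might look like this; spooling the output is just my habit, so there is a record to work through later:
SQL> spool preupgrade_check.log
SQL> @?/rdbms/admin/utlu111i.sql
SQL> spool off
The "?" is SQL*Plus shorthand for $ORACLE_HOME.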
The Issue
The issue I encountered was while running the catupgrd.sql script. After applying the patch, I started the database with the UPGRADE option and then executed catupgrd.sql, which promptly failed. From the error message (which, unfortunately, I couldn't capture), it was clear that the failure was due to the presence of the Data Vault option. I had apparently installed Data Vault while installing the software. The banner shows it clearly:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
Well, I needed to remove it before I could upgrade. That meant turning the option off in the Oracle software:
oracle@prolin1$ cd $ORACLE_HOME/rdbms/lib
oracle@prolin1$ make -f ins_rdbms.mk dv_off
Next, I relinked the Oracle software:
oracle@prolin1$ relink oracle
Now the catupgrd.sql script ran successfully as expected. It took about 1 hour to complete.
SQL> select * from v$version
2 /
BANNER
-----------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for Linux: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
Sunday, September 28, 2008
Most Striking Object at Open World 08
Hands down, it is the compostable drinking cup! Yes, compostable, not recyclable. It looks just like any other plastic (not paper) drinking cup you see at almost any public water dispenser. The difference? It's made of corn syrup, I was told, not plastic, and hence compostable. Wow!
I am not a green fanatic; but I consider myself a responsible adult, concerned about the environment and doing his share to reduce landfills, pollution and paper consumption. I do things that are practical: I don't print what I can read on the monitor; I project emails and PowerPoints on the screen while conferring with colleagues rather than printing them; I scribble on the backs of old printouts; I use 2-sided printing; I donate the kids' toys and clothes to charity rather than throwing them in the trash; and so on. But there are some things I just couldn't jettison, at least not yet. One of them was the ubiquitous plastic drinking cup and the bottled water. The convenience of the water bottle was just too much to ignore; my lazy bones reigned over my conscience and I always gravitated, albeit a little guiltily, to the water bottle.
Not any more. I hope this compostable corn-based polymer makes its way into all things plastic - bottles, cups, packaging and so on. The material is called polylactic acid (PLA), a polymer made from lactic acid derived from starchy produce like corn, wheat, potato and beet. However, due to its low melting point, it's not suitable for hot liquids, at least not yet. There is a compostable version for those - paper cups lined with PLA instead of petroleum-based products. But that's still paper, not 100% PLA.
According to a Smithsonian article, producing PLA requires 65% less energy and emits 68% fewer greenhouse gases. Wow! That's good enough for me.
But is it all rosy? Well, afraid not. The biggest caveat: PLA decomposes only in a controlled composting facility, not in a backyard composting bin. You need something of industrial strength - the sort used by municipalities and large industrial plants. Do they exist? Yes, for commercial use; but almost none for residential use. So, that's the catch: while the material is compostable, the facility to compost it is rarely available.
But I am not going to look at this glass as half empty. This is a major first step. Perhaps ecological and political pressure will force residential facilities to open up as well. Until then, may the power be with PLA.
OOW'08 Oracle 11g New Features for DBAs
It was my second session at Open World this year. It was full, with 332 attendees and a whopping 277 more on the wait list! The room capacity was 397. Of course, the room did have some fragmentation, and not everyone could make it in.
Here is the abstract:
There is a world outside the glittering marketing glitz surrounding Oracle 11g. In this session, a DBA and author of the popular 11g New Features series on OTN covers features that stand out in the real world and make your job easier, your actions more efficient and resilient, and so on. Learn the new features with working examples: how to use Database Replay and SQL Performance Analyzer to accurately predict the effect of changes and Recovery Manager (RMAN) Data Recovery Advisor to catch errors and corruption so new stats won't cause issues.
Thank you very much to those who decided to attend. I hope you found it useful. Here is the presentation; you can download it from the Open World site too. Please note: the companion site with all the working examples and more detailed coverage is still my Oracle 11g New Features series on Oracle Technology Network.
OOW'08: Real Life Best Practices for DBAs
Once more, it was a full session, with 278 attendees and 251 waitlisted due to lack of space! I wish the organizers had simply moved it to a bigger room. Oh, well!
For those who attended, I truly wish to express my gratitude. As a speaker, I feel honored when people choose to attend my session over others. I hope you found something useful here.
If you haven't already downloaded it from Open World site, here is the presentation: http://www.proligence.com/OOW08_DBA.pdf.
Here is the abstract:
This session covers best practices for excelling at being a DBA in the field from someone who has been managing Oracle Database instances for 14 years; was named the DBA of the Year in 2003; and has been through it all, from modeling to performance tuning, disaster recovery, security, and beyond. Best practices should be justifiable and situation-aware, not just because someone said they were good. Hear several tips and tricks for surviving and succeeding in the part art/part wizardry of database administration. The session covers Oracle 11g.
I will highly appreciate it if you post your comments.