Thank you to all those who came to my session at Sangam 16, the conference of the All India Oracle User Group, where I presented as part of the OTN ACE Directors APAC Tour.
You can download the materials here: Slides and Scripts used in the demo.
As always, I would highly appreciate your feedback, be it here as a comment, or via social media or email.
Email: arup@proligence.com
Twitter: @ArupNanda
Facebook.com/ArupKNanda
Google+: +ArupNanda
Monday, October 31, 2016
OTN ACE Director APAC Tour in Sydney Sessions
Thank you all for coming to my sessions at the Sydney conference of the Oracle Technology Network ACE Directors Tour in APAC. Please feel free to download the materials presented.
1. Secure your database in 1 day Presentation and Scripts
2. Prevent Bad SQLs in the Cloud Presentation
3. Becoming a Multitenant DBA Presentation
As always, I would appreciate hearing how you "really" felt about them--the good, the bad, and the ugly. You can post a comment on the blog right here, or reach me via:
Email: arup@proligence.com
Twitter: @ArupNanda
Google Plus: +Arup Nanda
Facebook.com/ArupKNanda
Monday, September 29, 2014
OOW14 Session: Cache Fusion Demystified
Thank you to all those who attended my session Cache Fusion Demystified.
As I mentioned, it was impossible to fit a demo into that compressed 45-minute slot. I have put the details of the demo in a separate file, with all the scripts and instructions, so that you can run the demos at your own site. Download the slide deck as well as the scripts. In addition, I have written a whitepaper that explains the concepts more clearly.
Download the slides, the paper, the demo scripts and instructions here.
I sincerely hope it demystified one of the most complex topics in Oracle RAC. As always, I would love to hear from you about your experience.
Saturday, September 27, 2014
My Small World at Oracle Open World 2014
Here is my speaking schedule at Oracle Open World #oow14. I have a whopping six sessions and one SIG. For a change, I don't have an analyst meeting; but yes, I do have a customer reference.
1. Sunday Sep 28th 11:00 Demystifying Cache Buffer Chains. Moscone South 309
2. Monday Sep 29th 13:30 Understanding Cache Fusion Moscone North 130
3. Tuesday Sep 30th 15:00 Exadata SIG Moscone South 208
4. Tuesday Sep 30th 16:00 SQL Tuning Without Trying Moscone South 104
5. Thursday Oct 2nd 9:30 RAC'ifying Multitenant Moscone South 102
6. Thursday Oct 2nd 13:15 Art and Craft of Tracing Moscone North 131
7. Thursday Oct 2nd 14:30 Near Zero Downtime Migration, Intercontinental Grand Ballroom B
I hope to meet and say hello to many of you at one of these. If you can't make it, the presentations, scripts and [possibly] whitepapers will be posted on my blog after the event. But, of course, do come. I am looking forward to meeting you there.
And there will be tons of sessions I must attend, at #oow14 and Oak Table World.
Saturday, April 16, 2011
Can I Fit an 80MB Database Completely in an 80MB Buffer Cache?
This is in the Series "100 Things You Probably Didn't Know About Oracle". If you haven't already, I urge you to read the other parts -
- Part1 (Commit does not force writing of buffers into the disk),
- Part2 (How Oracle Locking Works),
- Part3 (More on Interested Transaction Lists).
During the recently concluded Collaborate 11 (http://collaborate11.ioug.org), I was asked a question that gave me the idea for this entry, the fourth in the series: if the database size is 100GB and you have a 100GB buffer cache, all the blocks will be in memory and there will never be any I/O against the datafiles other than the regular DBWR lazy writes, right?
This is a very important question and you must consider the implications carefully. Many folks assume that a large buffer cache eliminates or reduces buffer-related I/O - a very wrong assumption. A large buffer cache helps; but the relationship between buffers and blocks is not one-to-one. A block may have more than one buffer in the buffer cache. How so? Let's see how that happens. We will examine this in a non-RAC database to keep it simple.
Setup
First, let's create a table and insert some rows into it.
SQL> create table bhtest (col1 number, col2 char(2000));

Table created.

SQL> insert into bhtest values (&n,'&c');
Enter value for n: 1
Enter value for c: x
old   1: insert into bhtest values (&n,'&c')
new   1: insert into bhtest values (1,'x')

1 row created.

SQL> /
Enter value for n: 2
Enter value for c: x
old   1: insert into bhtest values (&n,'&c')
new   1: insert into bhtest values (2,'x')

1 row created.

... insert 6 rows ...

commit;
Note how I used char(2000) instead of varchar2(2000). The reason is simple: the char datatype occupies the entire declared length - all 2000 characters - regardless of the actual value placed there. So, even though I put only 'x' there, the entire row is quite long.
After the insertion, check which blocks these rows went into. We can do that by calling the dbms_rowid.rowid_block_number() function.
select col1, dbms_rowid.rowid_block_number(rowid) from bhtest;

      COL1 DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
---------- ------------------------------------
         4                                 4419
         5                                 4419
         1                                 4423
         2                                 4423
         3                                 4423

5 rows selected.
From the output we can see that rows 1 through 3 went to block 4423, and rows 4 and 5 went to block 4419. We also need the object ID of the table:
SQL> select object_id from dba_objects
  2  where object_name = 'BHTEST'
  3  /

 OBJECT_ID
----------
     99360
This completes the setup. In summary, we know that these rows are in blocks 4419 and 4423 and the object ID is 99360.
Experiment
If possible, clear out the cache by shutting down and restarting the database; that brings up an empty cache. It's not absolutely necessary, though. If you do want to start clean, a minimal sketch of the bounce from SQL*Plus (assuming SYSDBA access on a test instance) looks like this:
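SQL> connect / as sysdba
SQL> shutdown immediate
SQL> startup

Now select from the table: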
SQL> select * from bhtest;
This will bring all the blocks of the table into the buffer cache. To verify, you can check the view V$BH (the buffer headers). The column OBJD is the object_id. (Actually it's the DATA_OBJECT_ID; in this case the two are the same, but they may not be in all cases.) Here are the columns of interest to us:
- FILE# - the file_id
- BLOCK# - the block number
- CLASS# - the type of the block, e.g. data block, segment header, etc. Shown as a code
- STATUS - the status of the buffer, Exclusive Current, Current, etc.
- LOCK_ELEMENT_ADDR - if there is a buffer lock on the buffer, then the address of the lock
To make it simpler to understand, we will use a decode() on the class# field to show the type of the block. With that, here is our query:
select file#, block#,
   decode(class#,
      1,'data block',
      2,'sort block',
      3,'save undo block',
      4,'segment header',
      5,'save undo header',
      6,'free list',
      7,'extent map',
      8,'1st level bmb',
      9,'2nd level bmb',
      10,'3rd level bmb',
      11,'bitmap block',
      12,'bitmap index block',
      13,'file header block',
      14,'unused',
      15,'system undo header',
      16,'system undo block',
      17,'undo header',
      18,'undo block'
   ) class_type,
   status, lock_element_addr
from v$bh
where objd = 99360
order by 1,2,3
/
Save this query because we will be using it a lot in this experiment. Here is the output.
     FILE#     BLOCK# CLASS_TYPE         STATUS     LOCK_ELE
---------- ---------- ------------------ ---------- --------
         7       4418 segment header     cr         00
         7       4418 segment header     xcur       00
         7       4419 data block         xcur       00
         7       4420 data block         xcur       00
         7       4421 data block         xcur       00
         7       4422 data block         xcur       00
         7       4423 data block         xcur       00

7 rows selected.
There are 7 buffers. In this example we have not restarted the database, so there are two buffers for the segment header. There is one buffer for each data block, from 4419 to 4423. The status is "xcur", which stands for Exclusive Current. I will explain that in detail in a later blog; but in summary it means that the buffer was acquired (or filled by a block) with the intention of being modified. If the intention is merely to select, the status would have shown CR (Consistent Read). In this case, since the rows were inserted, modifying the buffer, the blocks were gotten in xcur mode. Now, from a different session, update a single row. For easier identification I have used Sess2> as the prompt:
Sess2> update bhtest set col2 = 'Y' where col1 = 1;

1 row updated.
From the original session, check the buffers:
     FILE#     BLOCK# CLASS_TYPE         STATUS     LOCK_ELE
---------- ---------- ------------------ ---------- --------
         7       4418 segment header     cr         00
         7       4418 segment header     xcur       00
         7       4419 data block         xcur       00
         7       4420 data block         xcur       00
         7       4421 data block         xcur       00
         7       4422 data block         xcur       00
         7       4423 data block         cr         00
         7       4423 data block         xcur       00

8 rows selected.
There are 8 buffers now, up one from the previous seven. Note there are two buffers for block ID 4423. One CR and one xcur. Why two?
It's because when the update statement was issued, it would have modified the block. Instead of modifying the existing buffer, Oracle creates a "copy" of the buffer and modifies that. This copy is now in XCUR status, because it was acquired for the purpose of being modified. The previous buffer of this block, which used to be xcur, is converted to "cr". There can't be more than one XCUR buffer for a specific block; that's why it is exclusive. If a process wants to find the most recently updated buffer, it just has to look for the copy with the XCUR status; all others are marked CR.
Let's continue with this experiment. From a third session, update a different row in the same block.
Sess3> update bhtest set col2 = 'Y' where col1 = 2;

1 row updated.
From the original session, find out the buffers.
     FILE#     BLOCK# CLASS_TYPE         STATUS     LOCK_ELE
---------- ---------- ------------------ ---------- --------
         7       4418 segment header     xcur       00
         7       4418 segment header     cr         00
         7       4419 data block         xcur       00
         7       4420 data block         xcur       00
         7       4421 data block         xcur       00
         7       4422 data block         xcur       00
         7       4423 data block         cr         00
         7       4423 data block         xcur       00
         7       4423 data block         cr         00
         7       4423 data block         cr         00
There are 4 buffers for block 4423 now - up from 2. What happened? Since the buffer had to be modified once more, Oracle created yet another "copy", marked it "xcur", and relegated the older one to "cr". What about the extra CR copy? That was created because Oracle had to perform something called CR processing to build a CR copy from another CR copy or an XCUR copy.
Notice how the number of buffers proliferates. Let's change the experiment a little. From a fourth session, select from the table instead of updating a row:
Sess4> select * from bhtest ;
From the original session, check for the buffers.
     FILE#     BLOCK# CLASS_TYPE         STATUS     LOCK_ELE
---------- ---------- ------------------ ---------- --------
         7       4418 segment header     xcur       00
         7       4418 segment header     cr         00
         7       4419 data block         xcur       00
         7       4420 data block         xcur       00
         7       4421 data block         xcur       00
         7       4422 data block         xcur       00
         7       4423 data block         cr         00
         7       4423 data block         cr         00
         7       4423 data block         cr         00
         7       4423 data block         cr         00
         7       4423 data block         cr         00
         7       4423 data block         xcur       00

12 rows selected.
Whoa! There are 12 buffers now. Block 4423 alone has 6 buffers - up from 4 earlier. This was merely a select statement, which by definition does not change data. Why did Oracle create new buffers for it?
Again, the answer is CR processing. CR processing creates copies of the buffer and rolls them back or forward to produce a CR copy as of the correct SCN. This created two additional CR copies. From one block, you now have six buffers - and some of them were created as a result of a select statement. This should answer the question of whether a buffer cache the size of the database will be able to hold all the buffers.
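To see the proliferation at a glance, instead of eyeballing the full listing, you can aggregate the buffer counts per block. Here is a minimal sketch against the same object (99360 is the object ID from our setup):

select block#, status, count(*) cnt
from v$bh
where objd = 99360
group by block#, status
order by block#, status
/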
Free Buffers
While being on the subject, let's ponder over another question - what happens when you flush the buffer cache? Let's see.
SQL> alter system flush buffer_cache;

System altered.
Checking the buffers using the script shown earlier:
     FILE#     BLOCK# CLASS_TYPE         STATUS     LOCK_ELE
---------- ---------- ------------------ ---------- --------
         7       4418 segment header     free       00
         7       4418 segment header     free       00
         7       4419 data block         free       00
         7       4420 data block         free       00
         7       4421 data block         free       00
         7       4422 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
The buffers are still there, marked as belonging to the object. However, the status is "free", i.e. the buffers can be reused if some session wants them for some other block. If a session wants a free buffer and can't find one, it waits on the wait event "free buffer waits". At that point, Oracle makes room in the buffer cache for the blocks requested by the session by forcing buffers out of the cache. The CR copies are discarded (since they were not updated) and the XCUR copies are written to disk (if not written already).
Who does the writing? It's the process known as DBWR - the Database Writer - generally named DBW0, DBW1, etc. DBWR is a very lazy process: it sleeps most of the time, unless it is woken up by someone. In this case the session (actually, the server process) kicks DBWn to write the dirty buffers to disk and change their status to non-dirty. This is why a SELECT statement may sometimes generate writes to the data files. Until buffers are freed up, the session waits patiently and displays to all that it is waiting on free buffer waits. You can check it by selecting the EVENT column from V$SESSION.
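A quick way to check for sessions currently stuck on this wait - a minimal sketch using only documented V$SESSION columns:

select sid, event, seconds_in_wait
from v$session
where event = 'free buffer waits'
/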
Impact on Segment Header
So far we have talked about data blocks. When does the segment header see some action? The segment header does not contain any data, so it does not need to be updated every time a row changes. But when the segment itself is modified, the segment header is updated. Let's see an example by adding a column:
SQL> alter table bhtest add (col3 number);

Table altered.
Check the buffers:
     FILE#     BLOCK# CLASS_TYPE         STATUS     LOCK_ELE
---------- ---------- ------------------ ---------- --------
         7       4418 segment header     cr         00
         7       4418 segment header     cr         00
         7       4418 segment header     cr         00
         7       4418 segment header     cr         00
         7       4418 segment header     free       00
         7       4418 segment header     free       00
         7       4419 data block         free       00
         7       4420 data block         free       00
         7       4421 data block         free       00
         7       4422 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00

16 rows selected.
There are 6 copies of the segment header. The table alteration changed the segment header block - block# 4418. Just like any other block, the buffer was copied over and modified, creating multiple copies of the block. When you issue another DDL operation - truncate - the result is the same:
SQL> truncate table bhtest;

     FILE#     BLOCK# CLASS_TYPE         STATUS     LOCK_ELE
---------- ---------- ------------------ ---------- --------
         7       4416 1st level bmb      free       00
         7       4417 2nd level bmb      free       00
         7       4418 segment header     cr         00
         7       4418 segment header     cr         00
         7       4418 segment header     free       00
         7       4418 segment header     free       00
         7       4418 segment header     cr         00
         7       4418 segment header     cr         00
         7       4418 segment header     cr         00
         7       4418 segment header     free       00
         7       4419 data block         free       00
         7       4420 data block         free       00
         7       4421 data block         free       00
         7       4422 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00

20 rows selected.
There are now additional segment header buffers, since truncate is a DDL command and the segment header is modified. Even though there was no block of the table in the buffer cache (remember, we flushed it), the segment header still needed to be updated. Truncate also marks all blocks as free, resets the high water mark, and updates the bitmap blocks. A bitmap block (BMB) is used in tablespaces with automatic segment space management (ASSM) to track whether a block is free, similar to the functionality of freelists. The truncate caused the BMBs (there are two - 1st level and 2nd level) to be modified as well, and they also came into the buffer cache.
Takeaways
From the above discussion you saw how a table with just two populated blocks filled up the buffer cache with 20 buffers. Imagine a normal database with, say, 10,000 filled blocks (8K x 10,000 = 80MB). It might easily fill 200,000 buffers. With an 8K block size, that amounts to 8K x 200,000 = 1,600MB, or about 1.6GB of buffer cache. The number of buffers taken up depends on several factors:
(1) fewer buffers are consumed by selects
(2) fewer buffers are consumed if commits occur more frequently, since the demand for CR processing is less
(3) more buffers are consumed if the activity updates buffers
(4) the more the logical I/O, the greater the need for buffers
(5) DDL activity increases the demand for buffers, even if the blocks are empty
Food for Thought
Now that you know how the buffers are consumed, what do you think the situation is in a RAC environment where buffers are transferred from one instance to the other - better, worse, or about the same? The answer will come in a future blog. Stay tuned.
Sunday, September 12, 2010
A Tool to Enable Stats Collection for Future Sessions for Application Profiling
The other day I was putting together my presentation for Oracle Open World on Application Profiling in RAC. I was going to describe a methodology for putting a face to an app by measuring how it behaves in a database - a sort of signature of that application. I was going to use the now-ubiquitous 10046 trace for wait events and other activities inside the database. For resource consumption such as redo generated, logical I/Os, and so on, I used v$sesstat; but then I was stuck: how would I collect the stats of a session when the session has not even started and I don't know the SID? That problem led to the development of this tool, where the stats of a future session can be recorded based on some identifying factors such as username, module, etc. I hope this helps in your performance management efforts.
The Problem
Suppose you want to find out the resource consumed by a session. The resources could be redo generation, CPU used, logical I/O, undo records generated – the list is endless. This is required for a lot of things. Consider a case where you want to find out which apps are generating the most redo; you would issue a query like this:
select sid, value
from v$sesstat s, v$statname n
where n.statistic# = s.statistic#
and n.name = 'redo size'
/
The value column will show the redo generated. From the SID you can identify the session. Your next stop is v$session to get the other relevant information such as username, module, authentication scheme, etc. Problem solved, right?
Not so fast. Look at the above query; it selects from v$sesstat. When a session disconnects, its entries disappear from v$sesstat. If you run the query afterwards, you will not find those sessions. You would have to constantly poll v$sesstat, hoping to capture the stats before each session disconnects - but that is not guaranteed. Some short sessions will be missed between collection samples. And even if you are lucky enough to capture some stats of a short session, the other relevant information from v$session will be gone.
Oracle provides a package, dbms_monitor, with a procedure named client_id_stat_enable that allows you to enable stats collection on a future session where the client_id matches a specific value, e.g. CLIENT1. Here is an example:
execute dbms_monitor.client_id_stat_enable('CLIENT1');
However, there are three issues:
(1) It collects only about 27 stats, out of 400+.
(2) It offers only three choices for selecting sessions - client_id, module_name and service_name.
(3) It aggregates the stats - summing everything for a specific client_id - which is pretty much useless without session-level detail.
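You can see both limitations for yourself. A sketch, assuming you enabled collection for CLIENT1 as above: the aggregated stats are exposed in V$CLIENT_STATS, and this query shows both the short list of stats and the absence of any per-session breakdown:

select stat_name, value
from v$client_stats
where client_identifier = 'CLIENT1'
order by stat_name
/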
So, in short, I didn’t have a readily available solution.
Solution
Well, necessity is the mother of invention. When you can't find a decent tool, you build it; and so did I. I built this tool to capture the stats. This is version 1 of the tool, and it has some limitations, shown at the end. These limitations do not apply to all situations, so the tool may be useful in a majority of cases. Later I will expand the tool to overcome these limitations.
Concept
The fundamental problem, as you recall, is not the dearth of data (v$sesstat has plenty); it’s the sessions in the future. To capture those sessions, the tool relies on a post-logon database trigger to capture the values.
The second problem was persistence. V$SESSTAT is a dynamic performance view, which means the records of the session will be gone when the session disappears. So, the tool relies on a table to store the data.
The third problem is getting the values at the very end of the session. The differences between the values captured at the end and at the beginning of the session are the session's stats. To capture the values at the very end - not any time before - the tool relies on a pre-logoff database trigger.
The fourth challenge is identification of sessions. The SID of a session is not unique; it can be reused for a new session, and it will definitely be reused when the database is recycled. So the tool uses a column named CAPTURE_ID, a sequentially incremented number for each capture. Since we capture once at the beginning and once at the end, I must use the same capture_id for both; I use a package variable to store it.
Finally, the tool allows you to enable stats collections based on some session attributes such as username, client_id, module, service_name, etc. For instance you may want to enable stats for any session where the username = ‘SCOTT’ or where the os_user is ‘ananda’, etc. These preferences are stored in a table reserved for that purpose.
Construction
Now that you understand how the tool is structured, let me show the actual code and scripts to create the tool.
(1) First, we should create the table that holds the preferences. Let's call this table RECSTATS_ENABLED. This table holds all the different session attributes (IP address, username, module, etc.) that can enable stats collection in a session.
CREATE TABLE SYS.RECSTATS_ENABLED
(
  SESSION_ATTRIBUTE  VARCHAR2(200 BYTE),
  ATTRIBUTE_VALUE    VARCHAR2(2000 BYTE)
)
/
If you want to enable stats collection for a session based on a session attribute, insert a record into this table with the session attribute and its value. Here are some examples. To collect stats on all sessions where client_info matches 'MY_CLIENT_INFO1', you would insert a record like this:
insert into recstats_enabled values ('CLIENT_INFO','MY_CLIENT_INFO1');
Here are some more examples. All sessions where ACTION is ‘MY_ACTION1’:
insert into recstats_enabled values ('ACTION','MY_ACTION1');
Those of user SCOTT:
insert into recstats_enabled values ('SESSION_USER','SCOTT')
Those with service name APP:
insert into recstats_enabled values ('SERVICE_NAME','APP')
You can insert as many preferences as you want. You can even insert multiple values of a specific attribute. For instance, to enable stats on sessions with service names APP1 and APP2, insert two records.
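For example, following the pattern of the inserts above:

insert into recstats_enabled values ('SERVICE_NAME','APP1');
insert into recstats_enabled values ('SERVICE_NAME','APP2');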
Important: the session attribute names follow the naming convention of the USERENV context used in the SYS_CONTEXT function.
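To check what value an attribute has in your current session - and therefore what to store in ATTRIBUTE_VALUE - you can query the context directly. For example:

select sys_context('USERENV','SERVICE_NAME') from dual;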
(2) Next, we will create a table to hold the statistics
CREATE TABLE SYS.RECSTATS
(
  CAPTURE_ID            NUMBER,
  CAPTURE_POINT         VARCHAR2(10 BYTE),
  SID                   NUMBER,
  SERIAL#               NUMBER,
  ACTION                VARCHAR2(2000 BYTE),
  CLIENT_IDENTIFIER     VARCHAR2(2000 BYTE),
  CLIENT_INFO           VARCHAR2(2000 BYTE),
  CURRENT_EDITION_NAME  VARCHAR2(2000 BYTE),
  CURRENT_SCHEMA        VARCHAR2(2000 BYTE),
  CURRENT_USER          VARCHAR2(2000 BYTE),
  DATABASE_ROLE         VARCHAR2(2000 BYTE),
  HOST                  VARCHAR2(2000 BYTE),
  IDENTIFICATION_TYPE   VARCHAR2(2000 BYTE),
  IP_ADDRESS            VARCHAR2(2000 BYTE),
  ISDBA                 VARCHAR2(2000 BYTE),
  MODULE                VARCHAR2(2000 BYTE),
  OS_USER               VARCHAR2(2000 BYTE),
  SERVICE_NAME          VARCHAR2(2000 BYTE),
  SESSION_USER          VARCHAR2(2000 BYTE),
  TERMINAL              VARCHAR2(2000 BYTE),
  STATISTIC_NAME        VARCHAR2(2000 BYTE),
  STATISTIC_VALUE       NUMBER
)
TABLESPACE USERS
/
Note that I used the tablespace USERS, because I don't want this table, which can potentially grow to a huge size, to overwhelm the SYSTEM tablespace. The STATISTIC_NAME and STATISTIC_VALUE columns record the stats collected; the other columns record the other relevant data from the sessions. All the attributes here have been declared VARCHAR2(2000) for simplicity; of course they don't need that much space. In a future version I will put more meaningful limits; but 2000 does not hurt, since varchar2 stores only the actual length.
The capture point will show when the values were captured – START or END of the session.
(3) We will also need a sequence to identify the captures. Each session will have 400+ stats, with one set at the beginning and one at the end. We could choose SID as an identifier, but SIDs can be reused. So we need something truly unique - a sequence number. This will be recorded in the CAPTURE_ID column of the stats table.
SQL> create sequence seq_recstats;
(4) To store the capture ID when the post-logon trigger fires, so that it can be used inside the pre-logoff trigger, we need a variable that is visible to the entire session. A package variable is best for that.
create or replace package pkg_recstats
is
   g_recstats_id number;
end;
/
(5) Finally, we will go on to the meat of the tool – the triggers. First, the post-logon trigger to capture the stats in the beginning of the session:
CREATE OR REPLACE TRIGGER SYS.tr_post_logon_recstats
after logon on database
declare
   l_stmt           varchar2(32000);
   l_attr_val       recstats_enabled.attribute_value%TYPE;
   l_capture_point  recstats.capture_point%type := 'START';
   l_matched        boolean := FALSE;
begin
   pkg_recstats.g_recstats_id := null;
   for r in (
      select session_attribute, attribute_value
      from recstats_enabled
      order by session_attribute
   ) loop
      exit when l_matched;
      -- we select the userenv; but the null values may cause
      -- problems in matching; so let's use a value for NVL
      -- that will never be used - !_!_!
      l_stmt := 'select nvl(sys_context(''USERENV'','''||
         r.session_attribute||'''),''!_!_!_!'') from dual';
      execute immediate l_stmt into l_attr_val;
      if l_attr_val = r.attribute_value then
         -- match; we should record the stats
         -- and exit the loop; since stats should
         -- be recorded only for one match.
         l_matched := TRUE;
         select seq_recstats.nextval
         into pkg_recstats.g_recstats_id
         from dual;
         insert into recstats
         select
            pkg_recstats.g_recstats_id,
            l_capture_point,
            sys_context('USERENV','SID'),
            null,
            sys_context('USERENV','ACTION'),
            sys_context('USERENV','CLIENT_IDENTIFIER'),
            sys_context('USERENV','CLIENT_INFO'),
            sys_context('USERENV','CURRENT_EDITION_NAME'),
            sys_context('USERENV','CURRENT_SCHEMA'),
            sys_context('USERENV','CURRENT_USER'),
            sys_context('USERENV','DATABASE_ROLE'),
            sys_context('USERENV','HOST'),
            sys_context('USERENV','IDENTIFICATION_TYPE'),
            sys_context('USERENV','IP_ADDRESS'),
            sys_context('USERENV','ISDBA'),
            sys_context('USERENV','MODULE'),
            sys_context('USERENV','OS_USER'),
            sys_context('USERENV','SERVICE_NAME'),
            sys_context('USERENV','SESSION_USER'),
            sys_context('USERENV','TERMINAL'),
            n.name,
            s.value
         from v$mystat s, v$statname n
         where s.statistic# = n.statistic#;
      end if;
   end loop;
end;
/
The code is self-explanatory. I have provided more explanation as comments where needed.
(6) Next, the pre-logoff trigger to capture the stats at the end of the session:
CREATE OR REPLACE TRIGGER SYS.tr_pre_logoff_recstats
before logoff on database
declare
   l_capture_point recstats.capture_point%type := 'END';
begin
   if (pkg_recstats.g_recstats_id is not null) then
      insert into recstats
      select
         pkg_recstats.g_recstats_id,
         l_capture_point,
         sys_context('USERENV','SID'),
         null,
         sys_context('USERENV','ACTION'),
         sys_context('USERENV','CLIENT_IDENTIFIER'),
         sys_context('USERENV','CLIENT_INFO'),
         sys_context('USERENV','CURRENT_EDITION_NAME'),
         sys_context('USERENV','CURRENT_SCHEMA'),
         sys_context('USERENV','CURRENT_USER'),
         sys_context('USERENV','DATABASE_ROLE'),
         sys_context('USERENV','HOST'),
         sys_context('USERENV','IDENTIFICATION_TYPE'),
         sys_context('USERENV','IP_ADDRESS'),
         sys_context('USERENV','ISDBA'),
         sys_context('USERENV','MODULE'),
         sys_context('USERENV','OS_USER'),
         sys_context('USERENV','SERVICE_NAME'),
         sys_context('USERENV','SESSION_USER'),
         sys_context('USERENV','TERMINAL'),
         n.name,
         s.value
      from v$mystat s, v$statname n
      where s.statistic# = n.statistic#;
      commit;
   end if;
end;
/
Again, the code is self-explanatory. We capture the stats only if the global capture ID has been set by the post-logon trigger. If we didn't check that, all sessions would record stats at their completion.
Execution
Now that the setup is complete, let’s perform a test by connecting as a user with the service name APP:
SQL> connect arup/arup@app
In this session, perform some actions that will generate a lot of activity. The following SQL will do nicely:
SQL> create table t as select * from all_objects;
SQL> exit
Now check the RECSTATS table to see the stats for this capture_id, which happens to be 1330.
col name format a60
col value format 999,999,999
select a.statistic_name name,
       b.statistic_value - a.statistic_value value
from recstats a, recstats b
where a.capture_id = 1330
and a.capture_id = b.capture_id
and a.statistic_name = b.statistic_name
and a.capture_point = 'START'
and b.capture_point = 'END'
and (b.statistic_value - a.statistic_value) != 0
order by 2
/
Here is the output:
NAME                                                                VALUE
------------------------------------------------------------ ------------
workarea memory allocated                                               -2
change write time                                                        1
parse time cpu                                                           1
table scans (long tables)                                                1
cursor authentications                                                   1
sorts (memory)                                                           1
user commits                                                             2
opened cursors current                                                   2
IMU Flushes                                                              2
index scans kdiixs1                                                      2
parse count (hard)                                                       2
workarea executions - optimal                                            2
redo synch writes                                                        2
redo synch time                                                          3
rows fetched via callback                                                5
table fetch by rowid                                                     5
parse time elapsed                                                       5
recursive cpu usage                                                      8
switch current to new buffer                                            10
cluster key scan block gets                                             10
cluster key scans                                                       10
deferred (CURRENT) block cleanout applications                          10
Heap Segment Array Updates                                              10
table scans (short tables)                                              12
messages sent                                                           13
index fetch by key                                                      15
physical read total multi block requests                                15
SQL*Net roundtrips to/from client                                       18
session cursor cache hits                                               19
session cursor cache count                                              19
user calls                                                              25
CPU used by this session                                                28
CPU used when call started                                              29
buffer is not pinned count                                              33
execute count                                                           34
parse count (total)                                                     35
opened cursors cumulative                                               36
physical read total IO requests                                         39
physical read IO requests                                               39
calls to get snapshot scn: kcmgss                                       45
non-idle wait count                                                     67
user I/O wait time                                                     116
non-idle wait time                                                     120
redo ordering marks                                                    120
calls to kcmgas                                                        143
enqueue releases                                                       144
enqueue requests                                                       144
DB time                                                                149
hot buffers moved to head of LRU                                       270
recursive calls                                                        349
active txn count during cleanout                                       842
cleanout - number of ktugct calls                                      842
consistent gets - examination                                          879
IMU undo allocation size                                               968
physical reads cache prefetch                                          997
physical reads                                                       1,036
physical reads cache                                                 1,036
table scan blocks gotten                                             1,048
commit cleanouts                                                     1,048
commit cleanouts successfully completed                              1,048
no work - consistent read gets                                       1,060
redo subscn max counts                                               1,124
Heap Segment Array Inserts                                           1,905
calls to kcmgcs                                                      2,149
consistent gets from cache (fastpath)                                2,153
free buffer requested                                                2,182
free buffer inspected                                                2,244
HSC Heap Segment Block Changes                                       2,519
db block gets from cache (fastpath)                                  2,522
consistent gets                                                      3,067
consistent gets from cache                                           3,067
bytes received via SQL*Net from client                               3,284
bytes sent via SQL*Net to client                                     5,589
redo entries                                                         6,448
db block changes                                                     9,150
db block gets                                                       10,194
db block gets from cache                                            10,194
session logical reads                                               13,261
IMU Redo allocation size                                            16,076
table scan rows gotten                                              72,291
session uga memory                                                  88,264
session pga memory                                                 131,072
session uga memory max                                             168,956
undo change vector size                                            318,640
session pga memory max                                             589,824
physical read total bytes                                        8,486,912
cell physical IO interconnect bytes                              8,486,912
physical read bytes                                              8,486,912
redo size                                                        8,677,104
This clearly shows you all the stats of that session. Of course the table recorded all other details of the session as well – such as username, client_id, etc., which are useful later for more detailed analysis. You can perform aggregations as well now. Here is an example of the stats collected for redo size:
select session_user, sum(STATISTIC_VALUE) STVAL
from recstats
where STATISTIC_NAME = 'redo size'
group by session_user
/

Output:

SESSION_USER     STVAL
------------ ---------
ARUP            278616
APEX           4589343
… and so on …
You can break the aggregates down by several attributes as well. Here is an example where you want to find the redo generated by different users coming from different client machines:
select session_user, host, sum(STATISTIC_VALUE) stval
from recstats
where STATISTIC_NAME = 'redo size'
group by session_user, host
/

Output:

SESSION_USER HOST          STVAL
------------ ----------- -------
ARUP         oradba2       12356
ARUP         oradba1      264567
APEX         oradba2       34567
… and so on …
Granularity like this shows you how the application behaved from different client servers, not just per username.
Limitations
As I mentioned, there are some limitations you should be aware of. I will address them in the next iterations of the tool. They are not serious and apply only in certain cases; as long as you don't encounter those cases, you should be fine.
(1) The logoff trigger does not fire when the user exits the session ungracefully, such as by closing the SQL*Plus window or killing the program before exiting. In such cases the stats at the end of the session will not be recorded. In most application infrastructures this does not happen; but it could happen for ad hoc user sessions, such as people connecting through TOAD.
(2) Session attributes such as module, client_id and action can be altered within the session. If that happens, this tool does not record the change, since there is no triggering event; the logoff trigger records the module, action and client_id in effect at that time (see the sketch after this list). These attributes are not usually changed in application code, so this may not apply to your case.
(3) Parallel query sessions pose a special issue, since there is no logoff trigger for them. So in the case of parallel queries, you will not see any differential stats. If you don't use PQ - as most OLTP applications don't - you will not be affected.
(4) If a session just sits there without disconnecting, the logoff trigger never fires and the stats are not captured. Of course, they will eventually be captured when the session exits.
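Regarding limitation (2): here is a minimal illustration of how an application can change these attributes mid-session - changes that logon and logoff triggers never see. The module, action and client_id values are hypothetical:

SQL> exec dbms_application_info.set_module('ORDER_ENTRY','INSERT ORDER')
SQL> exec dbms_session.set_identifier('CLIENT1')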
Once again, these limitations apply only to certain occasions. As long as you are aware of these caveats, you will be able to use this tool to profile many of your applications.
Happy Profiling!
Wednesday, May 19, 2010
Mining Listener Logs
When is the last time you looked at the listener logs? Perhaps never. That is not a very good idea. Listener logs contain a wealth of information on security events: they show you the profile of hosts connecting to the database, the programs they are using, and attempts to communicate that failed, among other things. But reading listener logs is not exactly a pleasant experience. A while ago I wrote a series of articles in an online ezine called DBAZine.com on how to create an external table to read the listener logs using simple SQL; unfortunately, DBAZine.com has folded.
I have placed the articles on my website for your reference. As always, I would love to hear from you how you felt about these, stories of your own use and everything in between.
Mining Listener Logs Part 1
Mining Listener Logs Part 2
Mining Listener Logs Part 3
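If you just want the flavor of the technique before reading the articles, here is a minimal sketch. The directory path is hypothetical, and the column layout assumes the asterisk-delimited lines of the classic listener.log format; treat this as a starting point, not a finished tool:

create directory listener_log_dir as '/u01/app/oracle/network/log';

create table listener_log
(
  log_date        date,
  connect_string  varchar2(300),
  protocol_info   varchar2(300),
  action          varchar2(15),
  service_name    varchar2(15),
  return_code     number(10)
)
organization external (
  type oracle_loader
  default directory listener_log_dir
  access parameters
  (
    records delimited by newline
    nobadfile nologfile nodiscardfile
    fields terminated by "*" lrtrim
    missing field values are null
    (
      log_date char(30) date_format date mask "DD-MON-YYYY HH24:MI:SS",
      connect_string,
      protocol_info,
      action,
      service_name,
      return_code
    )
  )
  location ('listener.log')
)
reject limit unlimited
/

-- once created, the log can be mined with plain SQL, e.g.:
select action, count(*) from listener_log group by action;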
Wednesday, October 14, 2009
OOW09 My Session#2
Today I delivered the second of my four sessions - "Upgrade Case Study: Database Replay, Snapshot Standby and Plan Baselines".
For those of you who attended, I thank you very much. Here is the presentation.
Tuesday, October 13, 2009
OOW Day2
Why do you come to Open World? I'm sure we will get all kinds of reasons, as many as there are stars in the sky. Some predominant themes are - getting to know more about the Oracle (or related) technologies by attending sessions, reconnecting with old friends and building networking. Of course, getting freebies from the Exhibit Halls, I'm sure, can't be far behind as a motivator.
I come to OOW for all those reasons as well. But high on my list is the visit to the Exhibit Halls. No, not for the tee-shirts that do not fit me and graphics I don't really dig. I visit the demogrounds and exhibit halls to learn about the products and tools that I should be aware of. Where else would you find 1000+ companies advertising their products in one place? Sure, I can call them and find out; but how do I find them? OOW exhibit halls are prime "hunting" grounds for new ideas and tools that I should be interested in, or at least be aware of. I can not only look at the tools; I can actually get some relevant technical facts in 5 minutes, which might otherwise take weeks of scheduling and hours of marketing talk. And if I decide the product is not relevant, I can always walk away. I have the privilege of walking away; they don't. If I call them to my office, "they" have that option; not me :) If I find something attractive, I can always follow up and get to know more.
Oracle demogrounds are even better. Not only can I meet Oracle PMs there, but also the people who never come out into the public world - developers, development managers, architects and so on. These unsung heroes are largely the reason why Oracle is what it is now. I meet the known faces, get to know new ones, and establish new relationships. They hear from me what customers want, and I learn the innards of some features I am curious about.
So, I spent almost the whole day yesterday navigating the demogrounds and exhibit halls. I could cover only a small fraction. In between, I had to attend some meetings for work. Going to OOW is never "going away". I wish it were.
Sunday, October 11, 2009
ACE Directors Product Briefing '09
One of the most valuable benefits of being an Oracle ACE Director is the briefings by Oracle Product Managers at Oracle HQ. This year the briefing was on Friday, Oct 9th, at the Oracle conference center rather than the customary Hilton Hotel.
While I was a little disappointed by the coverage of database topics, I quickly recovered from the alphabet soup that makes up the netherworld of middleware and tools. And a surprise visit by Thomas Kurian, to address questions from the audience about the various product roadmaps, was testament that Oracle is dead serious about the ACE Program. That proves the commitment Oracle has made to the user community - very heartening.
As always, Vikky Lira and Lillian Buziak did a wonderful job of organizing the event. Considering there were about 100 ACE Directors from 20+ countries, that is no small task. Perhaps the highlight of the organization was the detailed briefing sheets Lillian prepared for each of us individually, down to which car service one takes and when - simply superb! No amount of thanks will be enough. From the bottom of my heart: thank you, Vikky and Lillian. And thank you, Justin Kestelyn, for kicking off and running the event year after year.
Open World 09 Starts
Oracle Open World 2009 has officially started with the User Group sessions today. I am presenting a session today. I started off by registering and getting my cool Blogger badge holder, hanging off the even cooler ACE Director lanyard.
I went off to the first session of the day in the IOUG bucket - Workload Management by Alex Gorbachev. Alex is one of those people who know their stuff, so there is always something to be learned from him. Alex successfully demonstrated the difference between Connection Load Balancing and Server-Side Listener Load Balancing, with a pmon trace to show how the sessions are balanced. It sheds light on the question: why is Oracle not balancing the workload?
If you didn't attend this, you should definitely download the presentation and check it out later.