Showing posts with label Arup Nanda. Show all posts

Friday, September 24, 2010

OOW 2010 Session Stats with Confidence

Thank you very much to all those who attended my session "Stats with Confidence". Unfortunately I was delayed by the keynote running late. With the big party coming up, I appreciate the spirit of those brave souls who stayed back. The late start didn't allow me to show the demo completely. But here are the scripts; I hope you will be able to follow along and run them on your own infrastructure.

It contains the presentation as well. Thanks for attending and hope you will find it useful.

Sunday, September 12, 2010

A Tool to Enable Stats Collection for Future Sessions for Application Profiling

The other day I was putting together my presentation for Oracle Open World on Application Profiling in RAC. I was going to describe a methodology for putting a face to an app by measuring how it behaves in a database – a sort of signature of that application. I was going to use the now-ubiquitous 10046 trace for wait events and other activities inside the database. For resource consumption such as redo generated, logical I/Os, etc., I used v$sesstat; but then I was stuck. How would I collect the stats of a session when the session has not even started and I don't know the SID? That problem led to the development of this tool, where the stats of a future session can be recorded based on some identifying factors such as username, module, etc. I hope this helps in your performance management efforts.


The Problem

Suppose you want to find out the resources consumed by a session. The resources could be redo generation, CPU used, logical I/O, undo records generated – the list is endless. This is required for a lot of things. Consider a case where you want to find out which apps are generating the most redo; you would issue a query like this:

select sid, value
from v$sesstat s, v$statname n
where n.statistic# = s.statistic#
and n.name = 'redo size'
/



The value column will show the redo generated. From the SID you can identify the session. Your next stop is v$session to get the other relevant information such as username, module, authentication scheme, etc. Problem solved, right?
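That lookup to v$session can be folded into the stats query itself. A sketch of the combined form (assuming the standard v$ views and a suitably privileged user):

select s.sid, s.username, s.module, t.value redo_size
from v$session s, v$sesstat t, v$statname n
where t.sid = s.sid
and n.statistic# = t.statistic#
and n.name = 'redo size'
order by t.value desc
/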

Not so fast. Look at the above query; it selects from v$sesstat. When a session disconnects, its stats disappear, and the entries for that session are gone from v$sesstat. If you run the query afterwards, you will not find those sessions. You would have to select from v$sesstat constantly, hoping to capture the stats of each session before it disconnects; but that is not guaranteed. Some short sessions will be missed between collection samples. Even if you are lucky enough to capture some stats of a short session, the other relevant information from v$session will be gone.

Oracle provides a package, dbms_monitor, in which a procedure named client_id_stat_enable allows you to enable stats collection on future sessions whose client_id matches a specific value, e.g. CLIENT1. Here is an example:

execute dbms_monitor.client_id_stat_enable('CLIENT1');

However, there are three issues:

(1) It collects only about 27 stats, out of the 400+ available.

(2) It offers only three choices for selecting sessions – client_id, module_name and service_name.

(3) It aggregates the stats, summing up all values for a specific client_id. That is pretty much useless without session-level detail.
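To see why the aggregation is limiting, note that the figures collected by dbms_monitor land in V$CLIENT_STATS – one row per statistic per client_id, with no session-level breakdown. A quick look (a sketch, using CLIENT1 as before):

select stat_name, value
from v$client_stats
where client_identifier = 'CLIENT1'
/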

So, in short, I didn’t have a readily available solution.

Solution

Well, necessity is the mother of invention. When you can't find a decent tool, you build one; and so I did. I built this tool to capture the stats. This is version 1 of the tool. It has some limitations, listed at the end. These limitations do not apply to all situations, so the tool should be useful in a majority of cases. Later I will expand the tool to overcome these limitations.

Concept

The fundamental problem, as you recall, is not the dearth of data (v$sesstat has plenty); it’s the sessions in the future. To capture those sessions, the tool relies on a post-logon database trigger to capture the values.

The second problem was persistence. V$SESSTAT is a dynamic performance view, which means the records of the session will be gone when the session disappears. So, the tool relies on a table to store the data.

The third problem is getting the values at the very end of the session. The differences between the values captured at the end and at the beginning of the session are the stats. To capture the values at the very end, not any time before, the tool relies on a pre-logoff database trigger.

The fourth challenge is the identification of sessions. The SID of a session is not unique; it can be reused for a new session, and it will definitely be reused when the database is recycled. So the tool uses a column named CAPTURE_ID, a sequentially incremented number for each capture. Since we capture once at the beginning and once at the end, I must use the same capture_id both times. I use a package variable to store that capture_id.

Finally, the tool allows you to enable stats collection based on some session attributes such as username, client_id, module, service_name, etc. For instance, you may want to enable stats for any session where the username is 'SCOTT', or where the os_user is 'ananda'. These preferences are stored in a table reserved for that purpose.

Construction

Now that you understand how the tool is structured, let me show the actual code and scripts to create the tool.

(1) First, we should create the table that holds the preferences. Let’s call this table RECSTATS_ENABLED. This table holds all the different sessions attributes (ip address, username, module, etc.) that can enable stats collection in a session.

CREATE TABLE SYS.RECSTATS_ENABLED
(
  SESSION_ATTRIBUTE  VARCHAR2(200 BYTE),
  ATTRIBUTE_VALUE    VARCHAR2(2000 BYTE)
)
/

If you want to enable stats collection of a session based on a session attribute, insert a record into this table with the session attribute and the value. Here are some examples. I want to collect stats on all sessions where client_info matches ‘MY_CLIENT_INFO1’. You would insert a record like this:

insert into recstats_enabled values ('CLIENT_INFO','MY_CLIENT_INFO1');

Here are some more examples. All sessions where ACTION is ‘MY_ACTION1’:

insert into recstats_enabled values ('ACTION','MY_ACTION1');

Those of user SCOTT:

insert into recstats_enabled values ('SESSION_USER','SCOTT');

Those with service name APP:

insert into recstats_enabled values ('SERVICE_NAME','APP');



You can insert as many preferences as you want. You can even insert multiple values of a specific attribute. For instance, to enable stats on sessions with service names APP1 and APP2, insert two records.
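For example, to enable stats for both service names (APP1 and APP2 are hypothetical names here), you would insert:

insert into recstats_enabled values ('SERVICE_NAME','APP1');
insert into recstats_enabled values ('SERVICE_NAME','APP2');
commit;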

Important: the session attribute names follow the naming convention of the USERENV context used in the SYS_CONTEXT function.
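A quick way to check what an attribute evaluates to in your own session – and that you have spelled it the way SYS_CONTEXT expects – is to query it directly:

SQL> select sys_context('USERENV','SERVICE_NAME') from dual;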

(2) Next, we will create a table to hold the statistics:

CREATE TABLE SYS.RECSTATS
(
  CAPTURE_ID            NUMBER,
  CAPTURE_POINT         VARCHAR2(10 BYTE),
  SID                   NUMBER,
  SERIAL#               NUMBER,
  ACTION                VARCHAR2(2000 BYTE),
  CLIENT_IDENTIFIER     VARCHAR2(2000 BYTE),
  CLIENT_INFO           VARCHAR2(2000 BYTE),
  CURRENT_EDITION_NAME  VARCHAR2(2000 BYTE),
  CURRENT_SCHEMA        VARCHAR2(2000 BYTE),
  CURRENT_USER          VARCHAR2(2000 BYTE),
  DATABASE_ROLE         VARCHAR2(2000 BYTE),
  HOST                  VARCHAR2(2000 BYTE),
  IDENTIFICATION_TYPE   VARCHAR2(2000 BYTE),
  IP_ADDRESS            VARCHAR2(2000 BYTE),
  ISDBA                 VARCHAR2(2000 BYTE),
  MODULE                VARCHAR2(2000 BYTE),
  OS_USER               VARCHAR2(2000 BYTE),
  SERVICE_NAME          VARCHAR2(2000 BYTE),
  SESSION_USER          VARCHAR2(2000 BYTE),
  TERMINAL              VARCHAR2(2000 BYTE),
  STATISTIC_NAME        VARCHAR2(2000 BYTE),
  STATISTIC_VALUE       NUMBER
)
TABLESPACE USERS
/



Note, I used the tablespace USERS because I don't want this table, which can potentially grow to a huge size, to overwhelm the SYSTEM tablespace. The STATISTIC_NAME and STATISTIC_VALUE columns record the stats collected. The other columns record the other relevant data from the sessions. All the attributes here have been defined as VARCHAR2(2000) for simplicity; of course they don't need that much space. In future versions I will put more meaningful limits; but 2000 does not hurt, since a VARCHAR2 column only consumes as much space as the actual value.

The capture point will show when the values were captured – START or END of the session.

(3) We will also need a sequence to identify the captures. Each session will have 400+ stats, with one set captured at the beginning and one at the end. We could choose the SID as an identifier, but SIDs can be reused. So we need something that is truly unique – a sequence number. This will be recorded in the CAPTURE_ID column of the stats table.

SQL> create sequence seq_recstats;

(4) To store the capture ID when the post-logon trigger fires, so that it can be used inside the pre-logoff trigger, we must use a variable that is visible to the entire session. A package variable is best for that.

create or replace package pkg_recstats
is
   g_recstats_id number;
end;
/


(5) Finally, we come to the meat of the tool – the triggers. First, the post-logon trigger, which captures the stats at the beginning of the session:

CREATE OR REPLACE TRIGGER SYS.tr_post_logon_recstats
after logon on database
declare
 l_stmt varchar2(32000);
 l_attr_val recstats_enabled.attribute_value%TYPE;
 l_capture_point recstats.capture_point%type := 'START';
 l_matched boolean := FALSE;
begin
 pkg_recstats.g_recstats_id := null;
 for r in (
  select session_attribute, attribute_value
  from recstats_enabled
  order by session_attribute
 )
 loop
  exit when l_matched;
            -- we select the userenv; but the null values may cause
            -- problems in matching; so let’s use a value for NVL
            -- that will never be used - !_!_!
  l_stmt := 'select nvl(sys_context(''USERENV'','''||
   r.session_attribute||'''),''!_!_!_!'') from dual';
  execute immediate l_stmt into l_attr_val;
  if l_attr_val = r.attribute_value then
   -- match; we should record the stats
   -- and exit the loop; since stats should
   -- be recorded only for one match.
   l_matched := TRUE;
   select seq_recstats.nextval
   into pkg_recstats.g_recstats_id
   from dual;
   insert into recstats
   select
    pkg_recstats.g_recstats_id,
    l_capture_point,
    sys_context('USERENV','SID'),
    null,
    sys_context('USERENV','ACTION'),
    sys_context('USERENV','CLIENT_IDENTIFIER'),
    sys_context('USERENV','CLIENT_INFO'),
    sys_context('USERENV','CURRENT_EDITION_NAME'),
    sys_context('USERENV','CURRENT_SCHEMA'),
    sys_context('USERENV','CURRENT_USER'),
    sys_context('USERENV','DATABASE_ROLE'),
    sys_context('USERENV','HOST'),
    sys_context('USERENV','IDENTIFICATION_TYPE'),
    sys_context('USERENV','IP_ADDRESS'),
    sys_context('USERENV','ISDBA'),
    sys_context('USERENV','MODULE'),
    sys_context('USERENV','OS_USER'),
    sys_context('USERENV','SERVICE_NAME'),
    sys_context('USERENV','SESSION_USER'),
    sys_context('USERENV','TERMINAL'),
    n.name,
    s.value
   from v$mystat s, v$statname n
   where s.statistic# = n.statistic#;
  end if;
 end loop;
end;
/

The code is self-explanatory. I have provided more explanation as comments where needed.

(6) Next, the pre-logoff trigger to capture the stats at the end of the session:

CREATE OR REPLACE TRIGGER SYS.tr_pre_logoff_recstats
before logoff on database
declare
 l_capture_point recstats.capture_point%type := 'END';
begin
 if (pkg_recstats.g_recstats_id is not null) then
  insert into recstats
  select
   pkg_recstats.g_recstats_id,
   l_capture_point,
   sys_context('USERENV','SID'),
   null,
   sys_context('USERENV','ACTION'),
   sys_context('USERENV','CLIENT_IDENTIFIER'),
   sys_context('USERENV','CLIENT_INFO'),
   sys_context('USERENV','CURRENT_EDITION_NAME'),
   sys_context('USERENV','CURRENT_SCHEMA'),
   sys_context('USERENV','CURRENT_USER'),
   sys_context('USERENV','DATABASE_ROLE'),
   sys_context('USERENV','HOST'),
   sys_context('USERENV','IDENTIFICATION_TYPE'),
   sys_context('USERENV','IP_ADDRESS'),
   sys_context('USERENV','ISDBA'),
   sys_context('USERENV','MODULE'),
   sys_context('USERENV','OS_USER'),
   sys_context('USERENV','SERVICE_NAME'),
   sys_context('USERENV','SESSION_USER'),
   sys_context('USERENV','TERMINAL'),
   n.name,
   s.value
  from v$mystat s, v$statname n
  where s.statistic# = n.statistic#;
  commit;
 end if;
end;
/

Again, the code is self-explanatory. We capture the stats only if the global capture ID has been set by the post-logon trigger. Without that check, every session would record stats at logoff, even those we never intended to track.

Execution

Now that the setup is complete, let’s perform a test by connecting as a user with the service name APP:

SQL> connect arup/arup@app



In this session, perform some actions that will generate a lot of activity. The following SQL will do nicely:

SQL> create table t as select * from all_objects;


SQL> exit



Now check the RECSTATS table to see the stats for this capture_id, which happens to be 1330.

col name format a60
col value format 999,999,999
select a.statistic_name name, b.statistic_value - a.statistic_value value
from recstats a, recstats b
where a.capture_id = 1330
and a.capture_id = b.capture_id
and a.statistic_name = b.statistic_name
and a.capture_point = 'START'
and b.capture_point = 'END'
and (b.statistic_value - a.statistic_value) != 0
order by 2
/


Here is the output:


NAME                                                                VALUE
------------------------------------------------------------ ------------
workarea memory allocated                                              -2
change write time                                                       1
parse time cpu                                                          1
table scans (long tables)                                               1
cursor authentications                                                  1
sorts (memory)                                                          1
user commits                                                            2
opened cursors current                                                  2
IMU Flushes                                                             2
index scans kdiixs1                                                     2
parse count (hard)                                                      2
workarea executions - optimal                                           2
redo synch writes                                                       2
redo synch time                                                         3
rows fetched via callback                                               5
table fetch by rowid                                                    5
parse time elapsed                                                      5
recursive cpu usage                                                     8
switch current to new buffer                                           10
cluster key scan block gets                                            10
cluster key scans                                                      10
deferred (CURRENT) block cleanout applications                         10
Heap Segment Array Updates                                             10
table scans (short tables)                                             12
messages sent                                                          13
index fetch by key                                                     15
physical read total multi block requests                               15
SQL*Net roundtrips to/from client                                      18
session cursor cache hits                                              19
session cursor cache count                                             19
user calls                                                             25
CPU used by this session                                               28
CPU used when call started                                             29
buffer is not pinned count                                             33
execute count                                                          34
parse count (total)                                                    35
opened cursors cumulative                                              36
physical read total IO requests                                        39
physical read IO requests                                              39
calls to get snapshot scn: kcmgss                                      45
non-idle wait count                                                    67
user I/O wait time                                                    116
non-idle wait time                                                    120
redo ordering marks                                                   120
calls to kcmgas                                                       143
enqueue releases                                                      144
enqueue requests                                                      144
DB time                                                               149
hot buffers moved to head of LRU                                      270
recursive calls                                                       349
active txn count during cleanout                                      842
cleanout - number of ktugct calls                                     842
consistent gets - examination                                         879
IMU undo allocation size                                              968
physical reads cache prefetch                                         997
physical reads                                                      1,036
physical reads cache                                                1,036
table scan blocks gotten                                            1,048
commit cleanouts                                                    1,048
commit cleanouts successfully completed                             1,048
no work - consistent read gets                                      1,060
redo subscn max counts                                              1,124
Heap Segment Array Inserts                                          1,905
calls to kcmgcs                                                     2,149
consistent gets from cache (fastpath)                               2,153
free buffer requested                                               2,182
free buffer inspected                                               2,244
HSC Heap Segment Block Changes                                      2,519
db block gets from cache (fastpath)                                 2,522
consistent gets                                                     3,067
consistent gets from cache                                          3,067
bytes received via SQL*Net from client                              3,284
bytes sent via SQL*Net to client                                    5,589
redo entries                                                        6,448
db block changes                                                    9,150
db block gets                                                      10,194
db block gets from cache                                           10,194
session logical reads                                              13,261
IMU Redo allocation size                                           16,076
table scan rows gotten                                             72,291
session uga memory                                                 88,264
session pga memory                                                131,072
session uga memory max                                            168,956
undo change vector size                                           318,640
session pga memory max                                            589,824
physical read total bytes                                       8,486,912
cell physical IO interconnect bytes                             8,486,912
physical read bytes                                             8,486,912
redo size                                                       8,677,104




This clearly shows you all the stats of that session. Of course, the table recorded all the other details of the session as well – such as username, client_id, etc., which are useful later for more detailed analysis. You can now perform aggregations too. Here is an example of the stats collected for redo size:

select session_user, sum(STATISTIC_VALUE) STVAL
from recstats
where STATISTIC_NAME = 'redo size'
group by session_user
/

Output:

SESSION_USER STVAL
------------ ---------
ARUP            278616
APEX           4589343
… and so on …
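Incidentally, if you don't want to hard-code the capture_id in the differential query shown earlier, a small variant (just a sketch) picks up the most recent capture automatically:

select a.statistic_name name, b.statistic_value - a.statistic_value value
from recstats a, recstats b
where a.capture_id = (select max(capture_id) from recstats)
and a.capture_id = b.capture_id
and a.statistic_name = b.statistic_name
and a.capture_point = 'START'
and b.capture_point = 'END'
and (b.statistic_value - a.statistic_value) != 0
order by 2
/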

You can break the aggregates down by several attributes as well. Here is an example where you want to find the redo generated by different users coming from different client machines:

select session_user, host, sum(STATISTIC_VALUE) stval
from recstats
where STATISTIC_NAME = 'redo size'
group by session_user, host
/

Output:

SESSION_USER HOST          STVAL
------------ ----------- -------
ARUP         oradba2       12356
ARUP         oradba1      264567
APEX         oradba2       34567
… and so on …


Granularity like this shows you the contribution of the application from each client server, not just from each username.



Limitations

As I mentioned, there are some limitations you should be aware of. I will address them in the next iterations of the tool. These are not serious and apply only in certain cases. As long as you don't encounter those cases, you should be fine.

(1) The logoff trigger does not fire when the user exits the session ungracefully, for example by closing the SQL*Plus window or killing the program before exiting. In such cases the stats at the end of the session will not be recorded. In most application infrastructures this does not happen; but it could happen for ad hoc user sessions, such as people connecting through TOAD.

(2) Session attributes such as module, client_id and action can be altered within the session. If that happens, this tool does not record the change, since there is no triggering event. The logoff trigger records the module, action and client_id set at that time. These attributes are not usually changed in application code, so this may not apply to your case.

(3) Parallel query sessions pose a special issue, since the logoff trigger does not fire for them. So in the case of parallel queries you will not see any differential stats. If you don't use PQ – and most OLTP applications don't – you will not be affected.

(4) If a session just sits there without disconnecting, the logoff trigger does not fire and the end-of-session stats are not captured. Of course, they will eventually be captured when the session exits.

Once again, these limitations apply only to certain occasions. As long as you are aware of these caveats, you will be able to use this tool to profile many of your applications.
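One housekeeping note: RECSTATS grows with every captured session, so you may want to purge old captures periodically. A minimal sketch (the retention of 1,000 captures is an arbitrary choice; adjust to your needs):

delete from recstats
where capture_id < (select max(capture_id) - 1000 from recstats);
commit;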

Happy Profiling!

Wednesday, June 16, 2010

Webcast for Latin American Oracle User Group

Thank you to all those who attended the web session today for LAOUG. It was a great honor to be the first speaker in the virtual conference series. Many thanks for inviting me, Francisco. I'm also looking forward to the next three I am supposed to deliver.

For those who attended, you may want to download the scripts at www.proligence.com/sec_scripts.zip

Wednesday, June 09, 2010

RACSIG Webcast on June 24th Files

Thank you to all those who attended my presentation – "Under the Hoods: Cache Fusion, GCS, GES and GRD". I hope you found it useful. Here are the demo scripts I used during the session, if you want to play with them on your own system.



The actual presentation itself will most likely be available at a later date on the oracleracsig.org website.

Wednesday, May 19, 2010

Mining Listener Logs

When was the last time you looked at the listener logs? Perhaps never. Not a very good idea. Listener logs contain a wealth of information on security events – they show you the profile of hosts connecting to the database, the programs they are using, and the connection attempts that failed, among other things. But reading the listener logs is not exactly a pleasant experience. A while ago I wrote a series of articles for an online eZine called DBAZine.com on how to create an external table to read the listener logs using simple SQL; but unfortunately DBAZine.com has folded.

I have placed the articles on my website for your reference. As always, I would love to hear from you how you felt about these, stories of your own use and everything in between.

Mining Listener Logs Part 1
Mining Listener Logs Part 2
Mining Listener Logs Part 3

Tuesday, May 18, 2010

IOUG Webcast on Security

Many thanks to those who attended my webcast "Secure Your Database in a Single Day" for IOUG's webcast series. I hope you found it useful. I would highly appreciate it if you took a moment to let me know how you felt – the good, the bad and the ugly. Please write to me at arup@proligence.com.



You can find the scripts referenced in the webcast here.

Tuesday, April 20, 2010

My Sessions at IOUG Collaborate 2010

Thank you to all those who attended my sessions during Collaborate 2010. Two of the sessions I presented were not mine but Riyaj Shamsuddin's. Riyaj was stuck in Denmark thanks to the ash cloud over Europe and asked if I could present them. I agreed and, with a lot of trepidation, I did. I hope I did justice to the sessions. For questions on those sessions, please reach out to Riyaj directly.

Coming to my own sessions, here is where you can download the presentations. I wanted to show live demos during the sessions; but in the short span of 30 minutes for Quick Tips, that was impossible. You can download the scripts here so that you can check them out yourself. The slides show which scripts to execute.

RAC Performance Tuning, part of RAC Bootcamp (Recorded)
Stats with Intelligence (Recorded)
Publish Stats after Checking, part of Manageability Bootcamp (Recorded and shown via Webcast)

Once again, your patronage by attending is highly appreciated. A speaker is nothing without attendees. I sincerely hope that you got some value from the sessions. As always, I look forward to hearing from you – not just what you liked, but also what you didn't.

Friday, March 26, 2010

Difference between Session and Process

Someone asked me the other day about the difference between a process and a session in Oracle. Are they the same thing? Well, then why are there two parameters in the initialization parameter file – sessions and processes? Is it possible for a process to not have a corresponding session? Conversely, is it possible to have a session without a process?

And, furthermore, which count is larger – processes or sessions? To answer that, she got the counts from v$process and v$session:

SQL> select count(1) from v$process;

COUNT(1)
----------
23

SQL> select count(1) from v$session;

COUNT(1)
----------
20

At least the answer seemed pretty clear – there are more processes than sessions (23 versus 20). But then someone pointed to the view V$RESOURCE_LIMIT, which records the current usage against the various definable limits. Checking the limits for the two resources – sessions and processes:

select resource_name, current_utilization, max_utilization, limit_value
from v$resource_limit
where resource_name in ( 'sessions', 'processes')
/

RESOURCE_NAME CURRENT_UTILIZATION MAX_UTILIZATION LIMIT_VALUE
------------- ------------------- --------------- -----------
processes                      23              23          40
sessions                       26              26          49

This shows that the highest possible numbers for processes and sessions are 40 and 49 respectively. [Note: I have set the processes parameter in init.ora to 40 to reduce the limit artificially; the sessions limit is derived from it.] It shows that currently there are 23 processes and 26 sessions. Finally, it shows that processes and sessions have touched high-water marks of 23 and 26 respectively.

This contradicts what we saw earlier. The processes usage matches – 23 processes seen in the resource limit view and counted from v$process; but the session counts differ. From the resource limit we see that there are 26 sessions; but v$session shows only 20. Why is there a difference?

Processes without Sessions

Let’s see the first one – processes without sessions. We can easily find them:

select pid, spid, addr, program
from v$process
where addr not in (
select paddr from v$session
)
/

PID        SPID       ADDR     PROGRAM
---------- ---------- -------- ------------------------------
         1            4116A2B8 PSEUDO
        18 23904      411758AC oracle@oradba1 (D000)
        19 23906      41176360 oracle@oradba1 (S000)


Other than the first one, these are real processes: the dispatcher process (D000) and a shared server process (S000). These are not background processes, so they don't appear in V$BGPROCESS.

Sessions without Processes

Now what about the second case – sessions without processes? That one is harder to visualize. A session in Oracle is a connection to the database. As we know from the two-task architecture, the client process of a session kicks off a server process, which in turn does the heavy lifting inside the Oracle database. Keeping that in mind, it might be difficult to envision a session without a process. With a twist on the same concept, is it possible for two sessions to share a single process? Once again, that defies understanding as well. A session is an exclusive conduit to the database; how can that be shared? Even in the case of shared servers, each session is served by a process at any given time.
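You can actually test that premise by counting the sessions attached to each process address. In a dedicated server environment this normally returns no rows (a sketch):

select paddr, count(*) session_count
from v$session
group by paddr
having count(*) > 1
/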

It is possible, under some special circumstances: recursive database sessions. They persist for a very short time, but they are sessions nonetheless. When you create a table, alter an index or perform other such tasks, Oracle must perform certain work on your behalf, called recursive actions. One example may make it clear: when you create a table, Oracle must determine that you have sufficient privileges to do so, and that determination is made through a recursive session. Let's see a demonstration of these sessions in action.

First connect to the database and get your own SID:

select sid, paddr, spid
from v$session s, v$process p
where s.audsid = userenv('SESSIONID')
and p.addr = s.paddr
/

SID PADDR SPID
---------- -------- ------------------------
17 4117F938 24042

This shows that my own session has SID 17, its OS process ID is 24042, and the address of the server process is 4117F938. I know, I know; this is a 32-bit system, so the process address is small.

If you look at v$process, you can get the details on that process as well, using the paddr value. Conversely, if you select the sessions with that paddr value:

SQL> select sid from v$session where paddr = '4117F938';

SID
---
17

There is exactly one session. If you check any process's paddr, you will see exactly one SID. But that's where the mystery lies. The v$session view does not tell the whole story; it does not show all the sessions. Here is the definition of the view:

SQL> select view_definition from v$fixed_view_definition where view_name = 'GV$SESSION';

VIEW_DEFINITION
----------------------------------------------------------------------------------
select s.inst_id,s.addr,s.indx,s.ksuseser,s.ksuudses,s.ksusepro,s.ksuudlui,s.ksuudlna,s.ksuudoct,s.ksusesow, decode(s.ksusetrn,hextoraw('00'),null,s.ksusetr
n),decode(s.ksqpswat,hextoraw('00'),null,s.ksqpswat),decode(bitand(s.ksuseidl,11),1,'ACTIVE',0,decode(bitand(s.ksuseflg,4096),0,'INACTIVE','CACHED'),2,'SNIP
ED',3,'SNIPED', 'KILLED'),decode(s.ksspatyp,1,'DEDICATED',2,'SHARED',3,'PSEUDO','NONE'), s.ksuudsid,s.ksuudsna,s.ksuseunm,s.ksusepid,s.ksusemnm,s.ksusetid,
s.ksusepnm, decode(bitand(s.ksuseflg,19),17,'BACKGROUND',1,'USER',2,'RECURSIVE','?'), s.ksusesql, s.ksusesqh, s.ksusesqi, decode(s.ksusesch, 65535, to_numbe
r(null), s.ksusesch), s.ksusesesta, decode(s.ksuseseid, 0, to_number(null), s.ksuseseid), s.ksusepsq, s.ksusepha, s.ksusepsi, decode(s.ksusepch, 65535,
to_number(null), s.ksusepch), s.ksusepesta, decode(s.ksusepeid, 0, to_number(null), s.ksusepeid), decode(s.ksusepeo,0,to_number(null),s.ksusepeo), decod
e(s.ksusepeo,0,to_number(null),s.ksusepes), decode(s.ksusepco,0,to_number(null),s.ksusepco), decode(s.ksusepco,0,to_number(null),s.ksusepcs), s.ksuseapp,
s.ksuseaph, s.ksuseact, s.ksuseach, s.ksusecli, s.ksusefix, s.ksuseobj, s.ksusefil, s.ksuseblk, s.ksuseslt, s.ksuseltm, s.ksusectm,decode(bitand(s.ksusepxo
pt, 12),0,'NO','YES'),decode(s.ksuseft, 2,'SESSION', 4,'SELECT',8,'TRANSACTIONAL','NONE'),decode(s.ksusefm,1,'BASIC',2,'PRECONNECT',4,'PREPARSE','NONE'),dec
ode(s.ksusefs, 1, 'YES', 'NO'),s.ksusegrp,decode(bitand(s.ksusepxopt,4),4,'ENABLED',decode(bitand(s.ksusepxopt,8),8,'FORCED','DISABLED')),decode(bitand(s.ks
usepxopt,2),2,'FORCED',decode(bitand(s.ksusepxopt,1),1,'DISABLED','ENABLED')),decode(bitand(s.ksusepxopt,32),32,'FORCED',decode(bitand(s.ksusepxopt,16),16,'
DISABLED','ENABLED')), s.ksusecqd, s.ksuseclid, decode(s.ksuseblocker,4294967295,'UNKNOWN', 4294967294, 'UNKNOWN',4294967293,'UNKNOWN',4294967292,'NO HOLD
ER', 4294967291,'NOT IN WAIT','VALID'),decode(s.ksuseblocker, 4294967295,to_number(null),4294967294,to_number(null), 4294967293,to_number(null), 4294967292
,to_number(null),4294967291, to_number(null),bitand(s.ksuseblocker, 2147418112)/65536),decode(s.ksuseblocker, 4294967295,to_number(null),4294967294,to_numb
er(null), 4294967293,to_number(null), 4294967292,to_number(null),4294967291, to_number(null),bitand(s.ksuseblocker, 65535)),w.kslwtseq,w.kslwtevt,e.kslednam, e.ksledp1,w.kslwtp1,w.kslwtp1r, e.ksledp2,w.kslwtp2,w.kslwtp2r,e.ksledp3,w.kslwtp3,w.kslwtp3r, e.ksledclassid,e.ksledclass#,e.ksledclass, decode(w.kslwtin
wait, 0,decode(bitand(w.kslwtflags,256), 0,-2, decode(round(w.kslwtstime/10000), 0,-1,
round(w.kslwtstime/10000))), 0), decode(w.kslwtinwait,0,round((w.kslwtstime+w.kslwtltime)/1000000), round(w.kslwtstime/1000000)), de
code(w.kslwtinwait,1,'WAITING', decode(bitand(w.kslwtflags,256),0,'WAITED UNKNOWN TIME', decode(round(w.kslwtstime/10000),0,'WAITED SHORT TIME', 'WAIT
ED KNOWN TIME'))),w.kslwtstime, decode(w.kslwtinwait,0,to_number(null), decode(bitand(w.kslwtflags,64),64,0,w.kslwttrem)), , decode(
bitand(s.ksuseflg2,32),32,'ENABLED','DISABLED'),decode(bitand(s.ksuseflg2,64),64,'TRUE','FALSE'),decode(bitand(s.ksuseflg2,128),128,'TRUE','FALSE'),decode(b
itand(s.ksuseflg2,65536) + bitand(s.ksuseflg2,131072),65536,'ALL EXEC',131072,'NEVER',0,'FIRST EXEC'),s.ksuudsae,s.ksusecre,s.ksusecsn from x$ksuse s, x$ksled e, x$kslwt w where bitand(s.ksspaflg,1)!=0 and bitand(s.ksuseflg,1)!=0 and s.indx=w.kslwtsid and w.kslwtevt=e.indx

Note the decode expression on the ksuseflg column (the section originally highlighted in red). That column in x$ksuse contains a lot of information, which the v$session view extracts and presents in a readable manner. Let's use exactly the same SQL directly from the prompt. Noting that the ksusepro, indx and ksuudsna columns show the process address (paddr), the SID and the username respectively, we can extract the information:

select indx, decode(bitand(ksuseflg,19),17,'BACKGROUND',1,'USER',2,'RECURSIVE','UNKNOWN'),
ksuudsna user_name
from x$ksuse
where ksusepro = '41182408'
/
INDX DECODE(BIT USER_NAME
---------- ---------- ------------------------------
7 USER ARUP

It shows the same information that v$session would have shown, but with a big difference. Note the view definition again: at the end there is a join with two other views, x$ksled and x$kslwt. Not all session information is available in those views, so the join filters out some rows from x$ksuse. To see when that happens, and how such sessions get created automatically, let's conduct an experiment.
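The filtering can be quantified directly. Here is a small sketch of my own (not part of the original demo), reusing the filter conditions that appear at the end of the v$session definition shown earlier; any difference between the two counts is sessions hidden by the inner join:

```sql
-- count the session slots that pass v$session's own base filters,
-- versus the rows the v$session view actually returns
select
  (select count(*)
     from x$ksuse
    where bitand(ksspaflg, 1) != 0
      and bitand(ksuseflg, 1) != 0) as raw_sessions,
  (select count(*) from v$session)  as visible_sessions
from dual;
```

On a quiet instance the two numbers may match; run it while recursive work is in flight and raw_sessions should exceed visible_sessions.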

First you have to look at a view called V$RESOURCE_LIMIT, which shows the various definable limits, how much of each is being used and - most importantly - the high-water mark of usage for each limit. We are interested in only two limits: sessions and processes.

select resource_name, current_utilization, max_utilization, limit_value
from v$resource_limit
where resource_name in ( 'sessions', 'processes')
/

RESOURCE_NAME CURRENT_UTILIZATION MAX_UTILIZATION LIMIT_VALUE
------------- ------------------- --------------- -----------
processes 23 23 40
sessions 26 26 49

This shows that the highest possible numbers for processes and sessions are 40 and 49 respectively. [Note: I have set the processes parameter in init.ora to 40 to reduce the limit artificially; the sessions limit is derived from it.] It shows that currently there are 26 sessions and 23 processes, and that sessions and processes have touched high-water marks of 26 and 23 respectively.
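A convenient variation of the query above, purely a sketch for this experiment, computes the remaining headroom before each limit is hit (limit_value is a padded VARCHAR2 column, hence the conversion):

```sql
-- remaining capacity before the sessions/processes limits are exhausted
select resource_name,
       to_number(trim(limit_value)) - current_utilization as headroom
  from v$resource_limit
 where resource_name in ('sessions', 'processes');
```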

In this experiment, keep the above session connected as SYS. At a different OS prompt, connect to the database using SQL*Plus as user ARUP. At the SQL> prompt, type host, which gives you an OS shell; from that shell, connect to the database again as ARUP. Repeat this over and over so that the number of sessions builds up, as shown below:

SQL> host
oracle@oradba1 ~/arup# sqlplus arup/arup

SQL*Plus: Release 11.1.0.7.0 - Production on Wed Mar 24 21:54:56 2010

Copyright (c) 1982, 2008, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining
and Real Application Testing options

SQL> host
oracle@oradba1 ~/arup# sqlplus arup/arup
.....
and so on ....

After some time you will receive an error like this:

ERROR:
ORA-00020: maximum number of processes (%s) exceeded

From session 1 (where you are connected as sysdba), check the resource limits:

select resource_name, current_utilization, max_utilization, limit_value
from v$resource_limit
where resource_name in ( 'sessions', 'processes')
/

RESOURCE_NAME CURRENT_UTILIZATION MAX_UTILIZATION LIMIT_VALUE
------------- ------------------- --------------- -----------
processes 39 40 40
sessions 45 47 49

This makes perfect sense. Note the current number of processes - 39 - against the limit of 40. The very last connection attempt tried to create one more process and failed, since that would have pushed the count past the limit.

Now, in the ARUP session, execute a PL/SQL block that creates and drops a lot of objects in succession.

begin
  for i in 1..1000 loop
    begin
      execute immediate 'drop table t';
    exception
      when others then
        if sqlcode != -942 then  -- ignore only ORA-00942: table does not exist
          raise;
        end if;
    end;
    execute immediate 'create table t as select * from all_objects where 1=2';
  end loop;
end;
/

This simply drops and re-creates the table called t in succession, 1000 times (ignoring the "table does not exist" error on the first pass). While this code is running, check for the session from the x$ksuse view directly. You already know the process address:

select indx, decode(bitand(ksuseflg,19),17,'BACKGROUND',1,'USER',2,'RECURSIVE','UNKNOWN'),
ksuudsna user_name
from x$ksuse
where ksusepro = '41182408'
/

INDX DECODE(BIT USER_NAME
---------- ---------- ------------------------------
7 USER ARUP
10 RECURSIVE SYS

This is interesting: there are two sessions - SIDs 7 and 10 - against the same process with address 41182408. Why is that? SID 7 is the actual session you connected. The decoded value for SID 10 shows RECURSIVE, and the user is SYS. That session was kicked off by Oracle to perform the recursive actions, pushing the session count up without a real user session. This is a situation where a single process serves two sessions.

Another interesting point: what is this recursive session waiting on? To find out, you have to check the view X$KSLWT. First set the null display so that null values show up as "?":

SQL> set null ?
Now execute the query to get the wait event:

select s.indx, decode(bitand(ksuseflg,19),17,'BACKGROUND',1,'USER',2,'RECURSIVE','UNKNOWN'),
ksuudsna user_name, w.kslwtevt
from x$ksuse s, x$kslwt w
where ksusepro = '41182408'
and s.indx=w.kslwtsid (+)
/


INDX DECODE(BIT USER_NAME KSLWTEVT
---------- ---------- ------------------------------ ----------
4 RECURSIVE SYS ?
7 USER ARUP 72

Note that I used an outer join. Why? Because the recursive session is not present in the wait-event view, which the null output confirms. As you can glean from the output above, wait events are not recorded for recursive sessions. This is why no corresponding row was found in V$SESSION, which joins x$ksuse and x$kslwt without an outer join.

Now you must be curious about other such sessions that share the same process. Let's check:

SQL> select ksusepro, count(1)
2 from x$ksuse
3 group by ksusepro
4 having count(1) > 1;

KSUSEPRO COUNT(1)
-------- ----------
00 2
4117AE4C 2
41174344 2
41178E30 2
4117EE84 2
4117D91C 2
41174DF8 2
4117B900 2
4117837C 2
4117CE68 2
4116CD88 2
4117A398 2

We got several process addresses that each serve more than one session. Next, let's find out the exact program names.

SQL> with proc_tab as
2 (
3 select ksusepro addr
4 from x$ksuse
5 group by ksusepro
6 having count(1) > 1
7 )
8 select v.addr, v.program
9 from proc_tab p, v$process v
10 where v.addr = p.addr;

ADDR PROGRAM
-------- ------------------------------------------------
4117AE4C oracle@oradba1 (FBDA)
41174344 oracle@oradba1 (MMON)
41178E30 oracle@oradba1 (ARC1)
4117EE84 oracle@oradba1 (W000)
4117D91C oracle@oradba1 (CJQ0)
41174DF8 oracle@oradba1 (MMNL)
4117B900 oracle@oradba1 (QMNC)
4117837C oracle@oradba1 (ARC0)
4117CE68 oracle@oradba1 (q001)
4116CD88 oracle@oradba1 (DBRM)
4117A398 oracle@oradba1 (ARC3)

These are the sessions related to the background processes – Flashback Data Archiver, Memory Monitor and so on. Let’s dive down even further and look into the sessions these processes serve:

SQL> with proc_tab as
2 (
3 select ksusepro addr
4 from x$ksuse
5 group by ksusepro
6 having count(1) > 1
7 )
8 select p.addr, indx sid, decode(bitand(ksuseflg,19),17,'BACKGROUND',1,'USER',2,'RECURSIVE','UNKNOWN'),
9 ksuudsna user_name
10 from x$ksuse x, proc_tab p
11 where ksusepro = p.addr
12 order by 1,2;

ADDR SID DECODE(BIT USER_NAME
-------- ---------- ---------- ------------------------------
00 1 RECURSIVE SYS
00 10 RECURSIVE SYS
4116CD88 23 RECURSIVE SYS
4116CD88 45 BACKGROUND SYS
41174344 30 RECURSIVE SYS
41174344 34 BACKGROUND SYS
41174DF8 20 RECURSIVE SYS
41174DF8 33 BACKGROUND SYS
4117837C 3 RECURSIVE SYS
4117837C 29 BACKGROUND SYS
41178E30 2 RECURSIVE SYS
41178E30 28 BACKGROUND SYS
4117A398 25 RECURSIVE SYS
4117A398 26 BACKGROUND SYS
4117AE4C 21 RECURSIVE SYS
4117AE4C 22 BACKGROUND SYS
4117B900 19 RECURSIVE SYS
4117B900 24 BACKGROUND SYS
4117CE68 15 RECURSIVE SYS

Note that each of the background processes has a corresponding recursive session, which counts towards the total session count in V$RESOURCE_LIMIT even if you don't actually create user sessions. These recursive sessions are not visible in V$SESSION.

Takeaways

1. The first lesson: a process does not always equal exactly one session. In many cases there is a one-to-one relationship, but not always. When Oracle has to run recursive SQL as a result of some user command, it creates session objects behind the scenes to fulfill that request.
2. Second, these recursive sessions count towards the overall sessions limit. To know how many sessions are actually in use, query the V$RESOURCE_LIMIT view instead of counting rows in V$SESSION.
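Putting the second takeaway to work, here is a small monitoring sketch (my own convenience query, not from the demo above) that reports utilization as a percentage, suitable for feeding an alert threshold:

```sql
-- percentage of the sessions/processes limits in use,
-- including the recursive sessions invisible in v$session
select resource_name,
       current_utilization,
       max_utilization,
       round(100 * current_utilization
                 / to_number(trim(limit_value)), 1) as pct_used
  from v$resource_limit
 where resource_name in ('sessions', 'processes');
```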

Wednesday, March 10, 2010

Thank you to all those who attended my webcast today on SQL Plan Management for the All India Oracle User Group. It was a privilege to present to you and to address your questions. I am sorry I couldn't read all the questions properly; it was extremely difficult to see them scrolling up in the tiny chat window. Also, the way the webcast was designed, I couldn't hear anything the attendees were saying.

The presentation and the associated SQL scripts are available here. The article I referred to can be found here.

If you have a question regarding that specific webcast, please post a comment here and I will address it here. Please, limit your questions to the material discussed in the webcast only.

Tuesday, December 29, 2009

Instance_Number is busy Message during Standby Instance Startup

Recently I encountered a situation that demonstrates how, in an Oracle database, an error may stem from a very unrelated cause. A DBA was building a physical standby database for an upcoming training. The two servers he was using were part of a RAC cluster, so the Oracle binaries were already there. He decided to use the same ORACLE_HOME for the new databases as well - quite a logical decision to save on space and administration. He created a primary database on server n1 and a standby database on server n2. Following the usual manual procedure for building a standby database, he copied the pfile from the primary database, modified the parameters and brought up the standby instance in nomount mode on server n2.

SQL> startup nomount pfile=initSTBY.ora

But it refused to come up, with the following error:

ORA-00304: requested INSTANCE_NUMBER is busy

Alert log showed:

USER (ospid: 14210): terminating the instance due to error 304
Instance terminated by USER, pid = 14210

This was highly unusual. The primary and standby were both non-RAC; there is no instance-number concept in a non-RAC database. By the way, the RAC instance on that server (or on server n1) was not running, so there was no question of any conflict with the RAC instances either. The primary database was called PRIM and the standby STBY, eliminating the possibility of an instance-name clash as well. And this error came while merely trying to start the instance, not even while mounting - eliminating the standby controlfile as a cause.

The error 304 showed:

00304, 00000, "requested INSTANCE_NUMBER is busy"
// *Cause: An instance tried to start by using a value of the
// initialization parameter INSTANCE_NUMBER that is already in use.
// *Action: Either
// a) specify another INSTANCE_NUMBER,
// b) shut down the running instance with this number
// c) wait for instance recovery to complete on the instance with
// this number.

Needless to say, since this was a non-RAC database, there was no "instance_number" parameter in the initialization parameter file of either the primary or the standby. So the suggested resolutions seemed odd. MetaLink provided no help; all the ORA-304 hits were related to RAC, with instance_number mismatches.

As always happens, it fell into my lap at this point. With just days to go-live, I had to find a solution quickly. Long hours of troubleshooting, tracing the processes and examining the occasional trace file yielded no clue. Everything seemed to point to RAC, which this database was not. The Oracle Home was a RAC home, which meant the oracle binary was linked with the "rac" option.

So the next logical step was to install a new Oracle Home without the rac option. After doing so, I tried to bring up the instance using the new ORACLE_HOME and LD_LIBRARY_PATH; but, alas, the same error popped up.

Just to eliminate the possibility of some unknown bug, I decided to add an instance_number parameter, setting it to "1" instead of the default "0". The same error. I changed it to "2"; again, the same error.

Although this didn't help, at least it gave a clue: the error was not related to instance_number. The error message was clearly wrong. With this in mind, I went back to basics and went through the alert log with a fine-toothed comb, scanning and analyzing each line.

The following line drew my attention:

DB_UNQIUE_NAME STBY is not in the Data Guard configuration

This was odd; the db_unique_name STBY was not defined in the Data Guard configuration. [By the way, note the spelling of "unique" in the message above. That is not what I typed; it was copied and pasted from the actual message in the alert log. Someone in Oracle development should really pay attention to typos in messages. This is clearly more than a nuisance; what if some process scans for db_unique_name-related errors? It will not find the message at all!]

Checking the Data Guard configuration, I found that the DBA had correctly defined the primary and standby names. In any case, Data Guard had not been started yet; this was merely instance startup - why was it complaining about the Data Guard configuration at this point?

Perplexed, I resorted to a different approach. I renamed the pfile and all other relevant files, then built the standby myself from scratch, using the same names - PRIM and STBY. This time everything worked fine; the instance STBY came up.

While this solved the immediate problem, everyone, including myself, wanted to know what the issue was in the earlier case, where the DBA had failed to bring up the instance. To get the answer, I compared the files I had created with those the DBA had created when he tried and failed. Voila! The cause was immediately clear - the DBA had forgotten to put a vital parameter in the pfile of the standby instance:

db_unique_name = 'STBY'

This parameter was absent, so it took its default value - the db_name, which was "PRIM". This caused the instance to fail with a seemingly unrelated message: "ORA-304 Instance_number is busy"!
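For illustration, a minimal sketch of the relevant standby pfile entries (names taken from this story; the rest of the pfile is omitted):

```
db_name = 'PRIM'          # must match the primary's db_name
db_unique_name = 'STBY'   # the missing parameter; defaults to db_name if absent
```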

Learning Points
  1. In Oracle, most errors are obvious; but some are not. So do not assume the error message is accurate. If all logic fails, assume the error message is wrong, or at least inaccurate.
  2. Instead of over-analyzing the process already followed, it may make sense to take a breather, wipe out everything and start from scratch. This is even more effective when someone else does it, offering a fresh approach and possibly not repeating the same mistakes.
  3. Finally, the issue at hand: if you do not define the db_unique_name parameter in the standby instance, you will receive ORA-304 during instance startup.

Hope this was helpful. Happy New Year, everybody.

Wednesday, November 18, 2009

RAC Perf Tuning Seminar in Istanbul, Turkey

For those who attended my 2-day training event in Istanbul, I wish to express my sincere thanks for your participation. For me, or any speaker, the privilege of having your attention for 2 days, away from your work and family, means a lot. Some folks came all the way from Ankara and had to go back to work the following day. I hope you all got to learn something worth your time.

The scripts are located at: http://www.proligence.com/racperf_istanbul/scripts.zip. The userid and password are the ones I gave you in the class.

I also want to reiterate my earlier request to send me your detailed and honest feedback at arup@proligence.com. You may want to write it in Turkish, if that is more convenient. The important thing is to provide the feedback; I can always translate using Google.

Thanks, Hande and Madalina from Oracle University for arranging it. Much appreciated. Now I am off to Estonia.

Thursday, October 15, 2009

OOW09 Session#4 DBA 11g New Features

For all those who came to the last of my four sessions - 11g New Features for DBAs - I appreciate your taking the time. It was a pleasant surprise to see about 500 people showing up at a lunchtime slot on the last day of the conference.

Here is the presentation link. I hope you enjoyed the session and found it useful.

OOW09 Session# 3

I just finished the third of my four presentations - SQL Plan Management. Considering it was at 9 AM on the last day of the conference, right after the big Aerosmith concert, I was expecting a much smaller crowd. But to my pleasant surprise, about 150 brave souls turned up. Thank you all. I hope you found it useful.

Here is the presentation material. While you are there, feel free to browse around. And, of course, I will highly appreciate it if you could send me your comments, either here or via email - whichever you are comfortable with.

Wednesday, October 14, 2009

OOW09 My Session#2

Today I delivered the second of my four sessions - "Upgrade Case Study: Database Replay, Snapshot Standby and Plan Baselines".

For those of you who attended, I thank you very much. Here is the presentation.

Tuesday, October 13, 2009

OOW Day2

Why do you come to Open World? I'm sure we would get all kinds of reasons, as many as there are stars in the sky. Some predominant themes are getting to know more about Oracle (or related) technologies by attending sessions, reconnecting with old friends and building one's network. Of course, getting freebies from the Exhibit Halls, I'm sure, can't be far behind as a motivator.

I come to OOW for all those reasons as well. But high up on my list is the visit to the Exhibit Halls. No, not for the tee-shirts that do not fit me or the graphics I don't really dig. I visit the demogrounds and exhibit halls to learn about the products and tools I should be aware of. Where else would you find 1000+ companies advertising their products in one place? Sure, I can call them and find out; but how do I find them in the first place? The OOW exhibit halls are prime "hunting" grounds for new ideas and tools that I should be interested in, or at least aware of. I can not only look at the tools; I can actually get relevant technical facts in 5 minutes that might otherwise take weeks of scheduling and hours of marketing talk. And if I decide the product is not relevant, I can always walk away. I have the privilege of walking away; they don't. If I call them to my office, "they" have that option; not me :) If I find something attractive, I can always follow up and get to know more.

The Oracle demogrounds are even better. Not only can I meet Oracle PMs there, but also the people who never come out into the public world - developers, development managers, architects and so on. These unsung heroes are largely the reason Oracle is what it is now. I meet the known faces, get to know new ones and establish new relationships. They hear from me what customers want, and I learn the innards of some features I am curious about.

So, I spent almost the whole day yesterday navigating through demo grounds and exhibit halls. I could cover only a small fraction. In between I had to attend some meetings at work. Going to OOW is never "going away". I wish it was.

Sunday, October 11, 2009

OOW09 - RAC Performance Tuning

For all those who came to my session - many, many thanks. There is no better sight for a presenter than a roomful of attendees, especially with people standing near the walls. The fire marshal was probably not amused; but I was grateful. The harrowing incident of a blue screen of death on my PC - not just once but twice - just before the presentation was about to start was enough to throw me into panic mode; but the third time was a charm. It worked. Phew!

You can download the presentation here. And while you are there, look around and download some more of my sessions as well.

Thanks a lot once again. I'm off to the keynote now.

Open World 09 Starts

Oracle Open World 2009 has officially started with the User Group sessions today. I am presenting a session later today. I started off by registering and getting my cool Blogger badge holder, hanging off the even cooler ACE Director lanyard.

I went off to the first session of the day in the IOUG bucket - Workload Management by Alex Gorbachev. Alex is one of those people who know their stuff, so there is always something to be learned from him. Alex successfully demonstrated the difference between Connection Load Balancing and Server-Side Listener Load Balancing, with a pmon trace to show how the sessions are balanced. It sheds light on the perennial question - why isn't Oracle balancing the workload?

If you didn't attend this, you should definitely download the presentation and check it out later.

Sunday, September 28, 2008

Most Striking Object at Open World 08

What was the most amazing thing I witnessed for the first time at OOW 08?

Hands down, it was the compostable drinking cup! Yes, compostable, not recyclable. It looks just like any other plastic (not paper) drink cup you see at almost any public water dispenser. The difference? It's made of corn syrup, I was told, not plastic, and hence compostable. Wow!

I am not a green fanatic; but I consider myself a responsible adult, concerned about the environment and doing his share to reduce landfills, pollution and paper consumption. I do things that are practical: I don't print what I can read on the monitor; I project emails and PowerPoints on the screen while conferring with colleagues rather than printing them; I scribble on the backs of old printouts; I use 2-sided printing; I donate the kids' toys and clothes to charity rather than throwing them in the trash; and so on. But there are some things I just couldn't jettison, at least not yet. One of them was the ubiquitous plastic drinking cup and the bottled water. The convenience of the water bottle was just too much to ignore; my lazy bones reigned over my conscience and I always gravitated, albeit a little guiltily, to the water bottle.

Not any more. I hope this compostable corn-syrup-based polymer makes its way into all things plastic - bottles, cups, packaging and so on. The material is called polylactic acid (PLA), a polymer made from lactic acid derived from starchy produce like corn, wheat, potato and beet. However, due to its low melting point, it's not suitable for hot liquids, at least not yet. There is a compostable version for those - paper cups lined with PLA instead of petroleum-based products. But that's still paper; not 100% PLA.

According to a Smithsonian article, producing this PLA requires 65% less energy and emits 68% fewer greenhouse gases. Wow! That's good enough for me.

But is it all rosy and smelling nice? Well, afraid not. The biggest caveat: PLA decomposes in a controlled composting facility, not in the backyard composting bin. You need something of industrial strength - the sort used by municipalities and large industrial plants. Do they exist? Yes, for commercial use; but almost none for residential use. So that's the catch: while the material is compostable, the facility to compost it is rarely available.

But I am not going to look at the glass as half empty. This is a major first step. Perhaps ecological and political pressures will force residential facilities to open up as well. Until then, let the power be with PLA.

OOW'08 Oracle 11g New Features for DBAs

It was my second session at Open World this year. It was full, with 332 attendees and a whopping 277 on the wait list! The room capacity was 397. Of course, the room did have some fragmentation and not everyone could make it.

Here is the abstract:

There is a world outside the glittering marketing glitz surrounding Oracle 11g. In this session, a DBA and author of the popular 11g New Features series on OTN covers features that stand out in the real world and make your job easier, your actions more efficient and resilient, and so on. Learn the new features with working examples: how to use Database Replay and SQL Performance Analyzer to accurately predict the effect of changes and Recovery Manager (RMAN) Data Recovery Advisor to catch errors and corruption so new stats won't cause issues.

Thank you very much to those who decided to attend. I hope you found it useful. Here is the presentation. You can download it from the Open World site too. Please note, the companion site with all the working examples and more detailed coverage is still my Oracle 11g New Features series on Oracle Technology Network.

OOW'08 : Real Life Best Practices for DBAs

Once more, it was a full session, with 278 attendees and 251 waitlisted due to lack of space! I wish the organizers had simply moved it to a bigger room. Oh, well!

For those who attended, I truly wish to express my gratitude. As a speaker, I feel honored when people choose to attend my session over others. I hope you found something useful here.

If you haven't already downloaded it from Open World site, here is the presentation: http://www.proligence.com/OOW08_DBA.pdf.

Here is the abstract:

This session covers best practices for excelling at being a DBA in the field from someone who has been managing Oracle Database instances for 14 years; was named the DBA of the Year in 2003; and has been through it all, from modeling to performance tuning, disaster recovery, security, and beyond. Best practices should be justifiable and situation-aware, not just because someone said they were good. Hear several tips and tricks for surviving and succeeding in the part art/part wizardry of database administration. The session covers Oracle 11g.

I will highly appreciate it if you post your comments.
