Confessions of an Oracle Database Junkie - Arup Nanda

The opinions expressed here are mine and mine alone. They may not necessarily reflect those of my employers and customers - both past and present. The comments left by reviewers are theirs alone and may not reflect my opinion, implied or otherwise. None of the advice is warranted to be free of errors and omissions. Please use at your own risk and only after thorough testing in your environment.
Friday, September 24, 2010
OOW 2010 Session Stats with Confidence
It contains the presentation as well. Thanks for attending and hope you will find it useful.
Sunday, September 12, 2010
A Tool to Enable Stats Collection for Future Sessions for Application Profiling
The Problem
Suppose you want to find out the resources consumed by a session. The resources could be redo generation, CPU used, logical I/O, undo records generated – the list is endless. This is required for a lot of things. Consider a case where you want to find out which apps are generating the most redo; you would issue a query like this:
select sid, value
from v$sesstat s, v$statname n
where n.statistic# = s.statistic#
and n.name = 'redo size'
/
The value column will show the redo generated. From the SID you can identify the session. Your next stop is v$session to get the other relevant information such as username, module, authentication scheme, etc. Problem solved, right?
Not so fast. Look at the above query; it selects from v$sesstat. When a session disconnects, its entries disappear from v$sesstat. If you run the query after that, you will not find those sessions. You would have to select from v$sesstat constantly, hoping to capture the stats before the sessions disconnect; but that is not guaranteed. Some short sessions will be missed between collection samples. Even if you are lucky enough to capture some stats of a short session, the other relevant information from v$session will be gone.
Oracle provides a package, dbms_monitor, whose procedure client_id_stat_enable allows you to enable stats collection for future sessions whose client_id matches a specific value, e.g. CLIENT1. Here is an example:
execute dbms_monitor.client_id_stat_enable('CLIENT1');
However there are three issues:
(1) It collects only about 27 stats, out of 400+
(2) It offers only three choices for selecting sessions – client_id, module_name and service_name.
(3) It aggregates them – it sums up all the stats for a specific client_id. That is pretty much useless without session-level detail.
So, in short, I didn’t have a readily available solution.
Solution
Well, necessity is the mother of invention. When you can't find a decent tool, you build it; and so did I. I built this tool to capture the stats. This is version 1 of the tool. It has some limitations, as shown at the end; they do not apply to all situations, so the tool should be useful in a majority of cases. Later I will expand the tool to overcome these limitations.
Concept
The fundamental problem, as you recall, is not a dearth of data (v$sesstat has plenty); it's capturing that data for sessions that have not started yet. To capture those future sessions, the tool relies on a post-logon database trigger to record the values.
The second problem is persistence. V$SESSTAT is a dynamic performance view, which means a session's records are gone the moment the session disappears. So the tool relies on a regular table to store the data.
The third problem is getting the values at the very end of the session. The differences between the values captured at the end and at the beginning of the session are the session's stats. To capture the values at the very end, not any time before, the tool relies on a pre-logoff database trigger.
The fourth challenge is identification of sessions. The SID of a session is not unique; it can be reused for a new session, and it will definitely be reused when the database is recycled. So the tool uses a column named CAPTURE_ID, a sequentially incremented number for each capture. Since we capture once at the beginning and once at the end of a session, the same capture_id must be used both times; a package variable stores that capture_id.
Finally, the tool allows you to enable stats collections based on some session attributes such as username, client_id, module, service_name, etc. For instance you may want to enable stats for any session where the username = ‘SCOTT’ or where the os_user is ‘ananda’, etc. These preferences are stored in a table reserved for that purpose.
Construction
Now that you understand how the tool is structured, let me show the actual code and scripts to create the tool.
(1) First, we should create the table that holds the preferences. Let’s call this table RECSTATS_ENABLED. This table holds all the different sessions attributes (ip address, username, module, etc.) that can enable stats collection in a session.
CREATE TABLE SYS.RECSTATS_ENABLED
(
   SESSION_ATTRIBUTE  VARCHAR2(200 BYTE),
   ATTRIBUTE_VALUE    VARCHAR2(2000 BYTE)
)
/
If you want to enable stats collection for a session based on a session attribute, insert a record into this table with the attribute and its value. Here are some examples. Say you want to collect stats on all sessions where client_info matches 'MY_CLIENT_INFO1'; you would insert a record like this:
insert into recstats_enabled values ('CLIENT_INFO','MY_CLIENT_INFO1');
Here are some more examples. All sessions where ACTION is ‘MY_ACTION1’:
insert into recstats_enabled values ('ACTION','MY_ACTION1');
Those of user SCOTT:
insert into recstats_enabled values ('SESSION_USER','SCOTT')
Those with service name APP:
insert into recstats_enabled values ('SERVICE_NAME','APP')
You can insert as many preferences as you want. You can even insert multiple values of a specific attribute. For instance, to enable stats on sessions with service names APP1 and APP2, insert two records.
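For example, a minimal sketch of those two inserts (APP1 and APP2 are just illustrative service names) would be:

-- enable stats for sessions connecting through either of two services
insert into recstats_enabled values ('SERVICE_NAME','APP1');
insert into recstats_enabled values ('SERVICE_NAME','APP2');
commit;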
Important: the session attribute names follow the naming convention of the USERENV context used in SYS_CONTEXT function.
(2) Next, we will create a table to hold the statistics
CREATE TABLE SYS.RECSTATS
(
   CAPTURE_ID            NUMBER,
   CAPTURE_POINT         VARCHAR2(10 BYTE),
   SID                   NUMBER,
   SERIAL#               NUMBER,
   ACTION                VARCHAR2(2000 BYTE),
   CLIENT_IDENTIFIER     VARCHAR2(2000 BYTE),
   CLIENT_INFO           VARCHAR2(2000 BYTE),
   CURRENT_EDITION_NAME  VARCHAR2(2000 BYTE),
   CURRENT_SCHEMA        VARCHAR2(2000 BYTE),
   CURRENT_USER          VARCHAR2(2000 BYTE),
   DATABASE_ROLE         VARCHAR2(2000 BYTE),
   HOST                  VARCHAR2(2000 BYTE),
   IDENTIFICATION_TYPE   VARCHAR2(2000 BYTE),
   IP_ADDRESS            VARCHAR2(2000 BYTE),
   ISDBA                 VARCHAR2(2000 BYTE),
   MODULE                VARCHAR2(2000 BYTE),
   OS_USER               VARCHAR2(2000 BYTE),
   SERVICE_NAME          VARCHAR2(2000 BYTE),
   SESSION_USER          VARCHAR2(2000 BYTE),
   TERMINAL              VARCHAR2(2000 BYTE),
   STATISTIC_NAME        VARCHAR2(2000 BYTE),
   STATISTIC_VALUE       NUMBER
)
TABLESPACE USERS
/
Note that I used the tablespace USERS because I don't want this table, which can potentially grow to a huge size, to overwhelm the SYSTEM tablespace. The STATISTIC_NAME and STATISTIC_VALUE columns record the stats collected. The other columns record the relevant session attributes. All the attributes have been defined as VARCHAR2(2000) for simplicity; of course they don't need that much space. In a future version I will put more meaningful limits; but 2000 does not hurt, since a VARCHAR2 column only uses the space the actual value needs.
The capture point will show when the values were captured – START or END of the session.
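Once captures start flowing in, you may want to see which capture_ids have both a START and an END snapshot recorded. The following query is only a sketch against the RECSTATS table created above, not part of the original tool:

-- list completed captures, i.e. those with both a START and an END snapshot
select capture_id, session_user, module
from recstats
group by capture_id, session_user, module
having count(distinct capture_point) = 2
order by capture_id
/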
(3) We will also need a sequence to identify the captures. Each session will have 400+ stats, captured once at the beginning and once at the end. We could use the SID as an identifier, but SIDs can be reused. So we need something truly unique – a sequence number. This will be recorded in the CAPTURE_ID column of the stats table.
SQL> create sequence seq_recstats;
(4) To store the capture ID when the post-logon trigger fires, so that it can be used later inside the pre-logoff trigger, we need a variable that remains visible for the whole session. A package variable is best for that.
create or replace package pkg_recstats
is
   g_recstats_id  number;
end;
/
(5) Finally, we get to the meat of the tool – the triggers. First, the post-logon trigger, which captures the stats at the beginning of the session:
CREATE OR REPLACE TRIGGER SYS.tr_post_logon_recstats
after logon on database
declare
   l_stmt            varchar2(32000);
   l_attr_val        recstats_enabled.attribute_value%TYPE;
   l_capture_point   recstats.capture_point%type := 'START';
   l_matched         boolean := FALSE;
begin
   pkg_recstats.g_recstats_id := null;
   for r in (
      select session_attribute, attribute_value
      from recstats_enabled
      order by session_attribute
   ) loop
      exit when l_matched;
      -- we select the userenv; but the null values may cause
      -- problems in matching; so let's use a value for NVL
      -- that will never be used - !_!_!
      l_stmt := 'select nvl(sys_context(''USERENV'','''||
         r.session_attribute||'''),''!_!_!_!'') from dual';
      execute immediate l_stmt into l_attr_val;
      if l_attr_val = r.attribute_value then
         -- match; we should record the stats
         -- and exit the loop; since stats should
         -- be recorded only for one match.
         l_matched := TRUE;
         select seq_recstats.nextval
         into pkg_recstats.g_recstats_id
         from dual;
         insert into recstats
         select
            pkg_recstats.g_recstats_id,
            l_capture_point,
            sys_context('USERENV','SID'),
            null,
            sys_context('USERENV','ACTION'),
            sys_context('USERENV','CLIENT_IDENTIFIER'),
            sys_context('USERENV','CLIENT_INFO'),
            sys_context('USERENV','CURRENT_EDITION_NAME'),
            sys_context('USERENV','CURRENT_SCHEMA'),
            sys_context('USERENV','CURRENT_USER'),
            sys_context('USERENV','DATABASE_ROLE'),
            sys_context('USERENV','HOST'),
            sys_context('USERENV','IDENTIFICATION_TYPE'),
            sys_context('USERENV','IP_ADDRESS'),
            sys_context('USERENV','ISDBA'),
            sys_context('USERENV','MODULE'),
            sys_context('USERENV','OS_USER'),
            sys_context('USERENV','SERVICE_NAME'),
            sys_context('USERENV','SESSION_USER'),
            sys_context('USERENV','TERMINAL'),
            n.name,
            s.value
         from v$mystat s, v$statname n
         where s.statistic# = n.statistic#;
      end if;
   end loop;
end;
/
The code is self-explanatory. I have provided additional explanation as comments where needed.
(6) Next, the pre-logoff trigger to capture the stats at the end of the session:
CREATE OR REPLACE TRIGGER SYS.tr_pre_logoff_recstats
before logoff on database
declare
   l_capture_point recstats.capture_point%type := 'END';
begin
   if (pkg_recstats.g_recstats_id is not null) then
      insert into recstats
      select
         pkg_recstats.g_recstats_id,
         l_capture_point,
         sys_context('USERENV','SID'),
         null,
         sys_context('USERENV','ACTION'),
         sys_context('USERENV','CLIENT_IDENTIFIER'),
         sys_context('USERENV','CLIENT_INFO'),
         sys_context('USERENV','CURRENT_EDITION_NAME'),
         sys_context('USERENV','CURRENT_SCHEMA'),
         sys_context('USERENV','CURRENT_USER'),
         sys_context('USERENV','DATABASE_ROLE'),
         sys_context('USERENV','HOST'),
         sys_context('USERENV','IDENTIFICATION_TYPE'),
         sys_context('USERENV','IP_ADDRESS'),
         sys_context('USERENV','ISDBA'),
         sys_context('USERENV','MODULE'),
         sys_context('USERENV','OS_USER'),
         sys_context('USERENV','SERVICE_NAME'),
         sys_context('USERENV','SESSION_USER'),
         sys_context('USERENV','TERMINAL'),
         n.name,
         s.value
      from v$mystat s, v$statname n
      where s.statistic# = n.statistic#;
      commit;
   end if;
end;
/
Again, the code is self-explanatory. We capture the stats only if the global capture ID has been set by the post-logon trigger. If we didn't check that, every session would record stats at its completion.
Execution
Now that the setup is complete, let’s perform a test by connecting as a user with the service name APP:
SQL> connect arup/arup@app
In this session, perform some actions that will generate a lot of activity. The following SQL will do nicely:
SQL> create table t as select * from all_objects;
SQL> exit
Now check the RECSTATS table to see the stats for this capture_id, which happens to be 1330.
col name format a60
col value format 999,999,999

select a.statistic_name name,
       b.statistic_value - a.statistic_value value
from recstats a, recstats b
where a.capture_id = 1330
and a.capture_id = b.capture_id
and a.statistic_name = b.statistic_name
and a.capture_point = 'START'
and b.capture_point = 'END'
and (b.statistic_value - a.statistic_value) != 0
order by 2
/
Here is the output:
NAME                                                                  VALUE
------------------------------------------------------------  ------------
workarea memory allocated                                                -2
change write time                                                         1
parse time cpu                                                            1
table scans (long tables)                                                 1
cursor authentications                                                    1
sorts (memory)                                                            1
user commits                                                              2
opened cursors current                                                    2
IMU Flushes                                                               2
index scans kdiixs1                                                       2
parse count (hard)                                                        2
workarea executions - optimal                                             2
redo synch writes                                                         2
redo synch time                                                           3
rows fetched via callback                                                 5
table fetch by rowid                                                      5
parse time elapsed                                                        5
recursive cpu usage                                                       8
switch current to new buffer                                             10
cluster key scan block gets                                              10
cluster key scans                                                        10
deferred (CURRENT) block cleanout applications                           10
Heap Segment Array Updates                                               10
table scans (short tables)                                               12
messages sent                                                            13
index fetch by key                                                       15
physical read total multi block requests                                 15
SQL*Net roundtrips to/from client                                        18
session cursor cache hits                                                19
session cursor cache count                                               19
user calls                                                               25
CPU used by this session                                                 28
CPU used when call started                                               29
buffer is not pinned count                                               33
execute count                                                            34
parse count (total)                                                      35
opened cursors cumulative                                                36
physical read total IO requests                                          39
physical read IO requests                                                39
calls to get snapshot scn: kcmgss                                        45
non-idle wait count                                                      67
user I/O wait time                                                      116
non-idle wait time                                                      120
redo ordering marks                                                     120
calls to kcmgas                                                         143
enqueue releases                                                        144
enqueue requests                                                        144
DB time                                                                 149
hot buffers moved to head of LRU                                        270
recursive calls                                                         349
active txn count during cleanout                                        842
cleanout - number of ktugct calls                                       842
consistent gets - examination                                           879
IMU undo allocation size                                                968
physical reads cache prefetch                                           997
physical reads                                                        1,036
physical reads cache                                                  1,036
table scan blocks gotten                                              1,048
commit cleanouts                                                      1,048
commit cleanouts successfully completed                               1,048
no work - consistent read gets                                        1,060
redo subscn max counts                                                1,124
Heap Segment Array Inserts                                            1,905
calls to kcmgcs                                                       2,149
consistent gets from cache (fastpath)                                 2,153
free buffer requested                                                 2,182
free buffer inspected                                                 2,244
HSC Heap Segment Block Changes                                        2,519
db block gets from cache (fastpath)                                   2,522
consistent gets                                                       3,067
consistent gets from cache                                            3,067
bytes received via SQL*Net from client                                3,284
bytes sent via SQL*Net to client                                      5,589
redo entries                                                          6,448
db block changes                                                      9,150
db block gets                                                        10,194
db block gets from cache                                             10,194
session logical reads                                                13,261
IMU Redo allocation size                                             16,076
table scan rows gotten                                               72,291
session uga memory                                                   88,264
session pga memory                                                  131,072
session uga memory max                                              168,956
undo change vector size                                             318,640
session pga memory max                                              589,824
physical read total bytes                                         8,486,912
cell physical IO interconnect bytes                               8,486,912
physical read bytes                                               8,486,912
redo size                                                         8,677,104
This clearly shows all the stats of that session. Of course, the table recorded all the other details of the session as well – username, client_id and so on – which are useful later for more detailed analysis. You can perform aggregations as well. Here is an example that aggregates the redo size statistic by user:
select session_user, sum(STATISTIC_VALUE) STVAL
from recstats
where STATISTIC_NAME = 'redo size'
group by session_user
/

Output:

SESSION_USER        STVAL
------------ ------------
ARUP               278616
APEX              4589343
… and so on …
You can also break the aggregates down by several attributes. Here is an example showing the redo generated by different users coming from different client machines:
select session_user, host, sum(STATISTIC_VALUE) stval
from recstats
where STATISTIC_NAME = 'redo size'
group by session_user, host
/

Output:

SESSION_USER HOST          STVAL
------------ ----------- -------
ARUP         oradba2       12356
ARUP         oradba1      264567
APEX         oradba2       34567
… and so on …
Granularity like this shows you how the application behaved from different client servers, not just by username.
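If you would rather see the differential stats for every completed capture instead of hard-coding a single capture_id as in the earlier query, a variation along these lines (the same self-join, just grouped by capture) should work; treat it as a sketch, not part of the original tool:

-- differential stats for all captures that have both START and END rows
select a.capture_id,
       a.session_user,
       a.statistic_name,
       b.statistic_value - a.statistic_value delta
from recstats a, recstats b
where a.capture_id = b.capture_id
and a.statistic_name = b.statistic_name
and a.capture_point = 'START'
and b.capture_point = 'END'
and b.statistic_value - a.statistic_value != 0
order by a.capture_id, delta
/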
Limitations
As I mentioned, there are some limitations you should be aware of; I will address them in future iterations of the tool. They are not serious and apply only in certain cases. As long as you don't encounter those cases, you should be fine.
(1) The logoff trigger does not fire when the user exits the session ungracefully, such as by closing the SQL*Plus window or killing the program before exiting. In such cases the stats at the end of the session will not be recorded. This rarely happens in most application infrastructures, but it could happen for ad hoc user sessions, such as people connecting through TOAD.
(2) Session attributes such as module, client_id and action can be altered within the session. If that happens, this tool does not record the change, since there is no triggering event. The logoff trigger records the module, action and client_id in effect at that time. These attributes are not usually changed in application code, so this may not apply to your case.
(3) Parallel query sessions pose a special issue, since the logoff trigger does not fire for them. So in the case of parallel queries, you will not see any differential stats. If you don't use PQ, as is the case for most OLTP applications, you will not be affected.
(4) If a session just sits there without disconnecting, the logoff trigger never fires and the end-of-session stats are not captured until the session finally exits.
Once again, these limitations apply only to certain occasions. As long as you are aware of these caveats, you will be able to use this tool to profile many of your applications.
Happy Profiling!
Thursday, July 22, 2010
Webcast: Under the Hoods of Cache Fusion for LAOUG and NZOUG
Monday, June 28, 2010
Build a Simple Firewall for Databases Using SQL Net
So, you want to set up a secured database infrastructure?
You are not alone. With the proliferation of threats from all sources — from identity theft to corporate espionage — and with increased legislative pressure designed to protect consumer privacy, security has taken on a new meaning and purpose. Part of the security infrastructure of an organization falls right into your lap as a DBA, since it's your responsibility to secure the database servers from malicious entities and curious insiders.
What are your options? Firewalls are the first to come to mind. Using a firewall to protect a server, and not just a database server, is not a new concept and has been around for a while. However, a firewall may be overkill in some cases. Even where a firewall is desirable, it still has to be configured and deployed properly. The complexity of administering a firewall, not to mention the cost of acquiring one, may be prohibitive. If the threat level can be reduced by proper positioning of existing firewalls, the functionality of additional ones can be provided by a facility available free with Oracle Net: node validation. In this article, you will learn how to build a rudimentary but effective firewall-like setup with just Oracle Net and nothing else.
Background
Let’s see a typical setup. Acme, Inc. has several departments — two of which are Payroll and Benefits. Each department’s database resides on a separate server. Similarly, each department’s applications run on separate servers. There are several application servers and database servers for each department. To protect the servers from unauthorized access, each database server is placed behind a firewall with ports open to communicate SQL*Net traffic only. This can be depicted in figure 1 as follows:
Figure 1: Protecting departmental database and application servers using multiple firewalls.
This scheme works. But notice how many firewalls are required and the complexity that having this number adds to the administration process. What can we do to simplify the setup? How about removing all the firewalls and having one master firewall around the servers, as in Figure 2?

Figure 2: One master firewall around all the servers.
This method protects the servers from outside attacks; however, it still leaves inside doors open. For instance, the application server PAYROLL1 can easily connect to the database server of the Benefits Department BENEFITDB1, which is certainly inappropriate. In some organizations, there could be legal requirements to prevent this type of access.
Rather than creating a maze of firewalls as in the case we noted previously, we can take advantage of the SQL*Net Node Validation to create our own firewall. We will do this using only Oracle Net, which is already installed as a part of the database install. The rest of this article will explain in detail how to accomplish this.
Objective
Our objective is to design a setup as shown in Figure 3. In this network, the application servers benefits1 and benefits2 access the database on server benefitsdb1. Similarly, application servers payroll1 and payroll2 access the database on server payrolldb1. Clients should be given free access to their intended machines, but should not be allowed to access the databases of other departments (e.g., benefits1 and benefits2 shouldn't be able to access the database on payrolldb1). Likewise, application servers payroll1 and payroll2 should not be allowed to access benefitsdb1.
Figure 3: One master firewall and restricting access from non-departmental clients.
Note the key difference in requirements here — we are not trying to disallow every type of access from client machines to the servers of another department. Rather, it's enough to disable access at the Oracle level only. This type of restriction is enforced by the listener. A listener can check the IP address of the client machine and, based on certain rules, decide to allow or deny the request. This is enabled by a facility called Valid Node Checking, available as a part of the Oracle Net installation. Let's see how this can be done.
To set up valid node checking, simply place a set of lines in a specific file on the server. In our example, the following lines are placed in the parameter file on the server payrolldb1, allowing access from the servers payroll1 and payroll2.
tcp.validnode_checking = yes
tcp.invited_nodes = (payroll1, payroll2)
Where this parameter file is located depends on the Oracle version. In Oracle 8i, it’s a file named protocol.ora; in Oracle 9i, it’s called sqlnet.ora. Both these files are located in the directory specified by the environmental variable TNS_ADMIN, which defaults to $ORACLE_HOME/network/admin in UNIX or %ORACLE_HOME%\network\admin in Windows.
These parameters are self-explanatory. The first line, tcp.validnode_checking = yes, specifies that the nodes are to be validated before accepting the connection.
The second line specifies that only the clients payroll1 and payroll2 are allowed to connect to the listener. Clients are identified either by IP address (e.g., 192.168.1.1) or by node name, as shown above. The list of node names is specified on a single line, with the names separated by commas. It is important to have only one line — you shouldn't break it up.
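For example, the equivalent entries on benefitsdb1 could list its clients by IP address instead of by name; the addresses below are, of course, purely illustrative:

tcp.validnode_checking = yes
tcp.invited_nodes = (192.168.2.10, 192.168.2.11)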
The values take effect only when the listener starts. After making the change in protocol.ora (in Oracle 8i) or sqlnet.ora (in Oracle 9i and later), stop and restart the listener. After you've done so, if a user — regardless of database authentication or authority level — attempts to connect to the database on payrolldb1 from the node benefits1, he receives the error shown below.
$ sqlplus scott/tiger@payrolldb1
SQL*Plus: Release 9.2.0.4.0 - Production on Tue Jan 20 9:03:33 2004
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
ERROR: ORA-12537: TNS:connection closed
Enter user-name:
The error message is not very informative; it does not explicitly state the nature of the error. The error occurred, however, because the connection request came from a client that is not on the invited list. In this case, the listener simply rejected the connection originating from the node benefits1, regardless of the user. The same user trying to connect from node payroll1 would succeed.
Excluded Nodes
In the previous example, we saw how to allow only a certain set of clients, and disallow all others. Similarly, you can specify the other type of rule — exclude some clients and allow all others. Say the lines in the parameter file are as follows:
tcp.validnode_checking = yes
tcp.excluded_nodes = (payroll3)
All clients but those connecting from payroll3 would be able to connect to all nodes. So, in this case, clients benefits1 and benefits2 would be able to connect to payrolldb1 in addition to clients payroll1 and payroll2. Isn’t that counter to what we wanted to achieve? Where can this exclusion be used?
In real-life cases, networks are subdivided into subnetworks, which offer adequate protection. Within a particular subnet, there may be far more clients needing access than clients to be restricted. In such a case, it might be easier to specifically refuse access from a set of named clients, conveniently listed in the tcp.excluded_nodes parameter. You can also use this parameter to refuse access from certain machines that have been used to launch attacks in the past.
You can also mix excluded and invited nodes, in which case the invited nodes are given precedence over the excluded ones; a sample configuration appears after the list below. But there are three very big drawbacks to this approach.
1. There is no way to specify a wild card character in the node list. You must specify a node explicitly by its name or its IP address.
2. All excluded or invited nodes are to be specified in only one line, severely limiting your ability to specify a large number of nodes.
3. Since the validation is based on IP address or client names only and it’s relatively easy to spoof these two key pieces of identification, the system is not inherently secure.
For these reasons, this approach is not quite suitable for excluding a large list of servers from a network or subnetwork. It can be used when the list of machines accessing the network is relatively small and the machines are in a subnetwork, behind a firewall. In such a configuration, the risk of external attacks is slight, and the risk of unauthorized access by spoofing key identification is negligible.
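Coming back to mixing the two lists, as promised above, a sketch of such a configuration might look like the following. Going by the precedence rule just described, a node appearing in both lists (payroll3 here) would still be treated as invited and allowed to connect.

tcp.validnode_checking = yes
tcp.invited_nodes = (payroll1, payroll2, payroll3)
tcp.excluded_nodes = (payroll3)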
Oracle Net also provides another means to build a rudimentary firewall, using a lesser known and even less used tool called Connection Manager. This tool is far more flexible in its setup; you can specify wildcards in node names, and there are no restrictions such as the need to keep all node names on a single line. A detailed discussion of Connection Manager with real-life examples can be found in the book Oracle Privacy Security Auditing.
Troubleshooting
Of course, things may not always proceed as smoothly as in the examples we've cited so far. One common problem you may encounter is that the exclusion does not work even though the file is present and the parameters appear to be defined properly.
To diagnose a node checking issue, you need to turn on tracing of the connection process. Tracing can be done at several levels of detail; in this case, you should enable it at the level called support, or "16." Place the following line in the file sqlnet.ora:
trace_level_server = support
Doing this causes the connection process to write detailed information to a trace file under the directory $ORACLE_HOME/network/trace. The directory can be changed with another parameter in the file sqlnet.ora, as in
trace_directory_server = /tmp
This causes the trace information to be written to the directory /tmp instead of the default. After setting the parameters as shown above, attempt the connection again. There is no need to bounce the listener. The connection attempt will create trace files with names similar to svr_0.trc in that directory. Open the file in an editor (parts of the file are shown below).
[20-JAN-2004 12:00:01:234] Attempted load of system pfile
source /u02/oracle/product/9.2/network/admin/sqlnet.ora
[20-JAN-2004 12:00:01:234] Parameter source loaded successfully
[20-JAN-2004 12:00:01:234]
[20-JAN-2004 12:00:01:234] -> PARAMETER TABLE LOAD RESULTS FOLLOW <-
[20-JAN-2004 12:00:01:234] Successful parameter table load
[20-JAN-2004 12:00:01:234] -> PARAMETER TABLE HAS THE FOLLOWING CONTENTS <-
[20-JAN-2004 12:00:01:234] tcp.validnode_checking = yes
[20-JAN-2004 12:00:01:234] trace_level_server = support
[20-JAN-2004 12:00:01:234] tcp.invited_nodes = (192.168.1.1, 192.168.1.2)
[20-JAN-2004 18:27:04:484] NAMES.DIRECTORY_PATH = (TNSNAMES)
[20-JAN-2004 18:27:04:484] tcp.excluded_nodes = (192.168.1.3)
[20-JAN-2004 18:27:04:484] --- PARAMETER SOURCE INFORMATION ENDS ---
These lines indicate that
1. The parameter file /u02/oracle/product/9.2/network/admin/sqlnet.ora was read by the listener.
2. The parameters were loaded successfully.
3. The contents of the parameter file were read as they were written.
4. The names of the excluded and invited nodes are displayed.
If the information is not as shown here, the problem may be caused by the way the parameter file is written — most likely a typographical error such as a missing parenthesis. This type of error should be fixed before proceeding further with the trace file.
If the parameters are indeed loaded properly, you should next check the section of the file in which the node validity checking is done. This section looks like this:
[20-JAN-2004 12:30:45:321] ntvllt: Found tcp.invited_nodes. Now loading...
[20-JAN-2004 12:30:45:321] ntvllhs: entry
[20-JAN-2004 12:30:45:321] ntvllhs: Adding Node 192.168.1.1
[20-JAN-2004 12:30:45:321] ntvllhs: Adding Node 192.168.1.2
[20-JAN-2004 12:30:45:321] ntvllhs: exit
[20-JAN-2004 12:30:45:321] ntvllt: exit
[20-JAN-2004 12:30:45:321] ntvlin: exit
[20-JAN-2004 12:30:45:321] nttcnp: Validnode Table IN use; err 0x0
The first line indicates that the parameter tcp.invited_nodes was found. Next, the entries in that list are read and displayed one after another. This is the most important clue: had the addresses been written incorrectly, or the syntax been wrong, the trace file would have indicated it by not listing the node names checked. The last line in this section shows that the ValidNode table was read and used with an error code of 0x0 (hexadecimal zero) — the table has no errors. If there were a problem in the way the valid node parameters were written in the parameter file, the trace file would show something different. For instance, say the parameters were written as
tcp.excluded_nodes = (192.168.1.3
Note how a parenthesis has been left off, a syntax problem. However, this does not prevent the connection; the listener simply ignores the error and allows connections without doing any valid node checking. Upon investigation, we would find the root of the problem in the trace file, which shows the following information.
--- PARAMETER SOURCE INFORMATION FOLLOWS ---
[20-JAN-2004 12:45:03:214] Attempted load of system pfile
source /u201/oracle/product/9.2/network/admin/sqlnet.ora
[20-JAN-2004 12:45:03:214] Load contained errors
[20-JAN-2004 12:45:03:214] Error stack follows:
NL-00422: premature end of file
NL-00427: bad list
[20-JAN-2004 12:45:03:214]
[20-JAN-2004 12:45:03:214] -> PARAMETER TABLE LOAD RESULTS FOLLOW <-
[20-JAN-2004 12:45:03:214] Some parameters may not have been loaded
[20-JAN-2004 12:45:03:214]
See dump for parameters which loaded OK

This clearly shows that the parameter file had errors that prevented the parameters from loading. Because of this, valid node checking is turned on and in use, but there is nothing in the list of the excluded nodes, as shown in the following line from the trace file:
[20-JAN-2004 12:45:03:214] nttcnp: Validnode Table IN use; err 0x0
Since the error is 0x0, no error is reported by the validity checking routine. The subsequent lines on the trace file show other valuable information. For instance this line,
[20-JAN-2004 12:45:13:211] nttbnd2addr: using host IP address: 192.168.1.1
shows that the IP address of the server to which the listener was supposed to route the connection was 192.168.1.1. If all goes well, the listener allows the client to open a connection. This is confirmed by the following line:
[20-JAN-2004 12:45:14:320] nttcon: NT layer TCP/IP connection has been established.
As the line says, the TCP/IP connection has been established. If any other problems exist, the trace file will show enough helpful information for a diagnosis.
Summary
To summarize:
- Node Validation can be used to instruct listeners to accept or reject a connection from a specific client.
- The parameter file is sqlnet.ora in Oracle 9i and protocol.ora in Oracle8i.
- The nodes must be explicitly specified by name or by IP Address; no wildcards are supported.
Wednesday, June 16, 2010
Webcast for Latin American Oracle User Group
For those who attended, you may want to download the scripts at www.proligence.com/sec_scripts.zip
Wednesday, June 09, 2010
RACSIG Webcast on June 24th Files
The actual presentation itself will most likely be available at a later date on the oracleracsig.org website.
Wednesday, May 19, 2010
Mining Listener Logs
I have placed the articles on my website for your reference. As always, I would love to hear from you how you felt about these, stories of your own use and everything in between.
Mining Listener Logs Part 1
Mining Listener Logs Part 2
Mining Listener Logs Part 3
Tuesday, May 18, 2010
IOUG Webcast on Security
You can find the scripts referenced in the webcast here.
Tuesday, April 20, 2010
My Sessions at IOUG Collaborate 2010
As for my own sessions, here is where you can download the presentations. I wanted to show live demos in the sessions, but in the short span of 30 minutes for Quick Tips it was impossible. You can download the scripts here and check them out yourself. The slides show which scripts to execute.
RAC Performance Tuning, part of RAC Bootcamp (Recorded)
Stats with Intelligence (Recorded)
Publish Stats after Checking, part of Manageability Bootcamp (Recorded and shown via Webcast)
Once again, your patronage by attending is highly appreciated. A speaker is nothing without attendees. I sincerely hope that you got some value from the sessions. As always, I am looking forward to hearing from you - not just what you liked, but also what you didn't.
Wednesday, March 10, 2010
The presentation and the associated SQL scripts are available here. The article I referred to can be found here.
If you have a question regarding that specific webcast, please post a comment here and I will address it here. Please, limit your questions to the material discussed in the webcast only.
Tuesday, December 29, 2009
Instance_Number is busy Message during Standby Instance Startup
SQL> startup nomount pfile=initSTBY.ora
But it refused to come up, with the following error:
ORA-00304: requested INSTANCE_NUMBER is busy
Alert log showed:
USER (ospid: 14210): terminating the instance due to error 304
Instance terminated by USER, pid = 14210
This was highly unusual. Both the primary and the standby were non-RAC; there is no instance number concept in a non-RAC database. By the way, the RAC instance on that server (or on the server n1) was not running, so there was no question of any conflict with the RAC instances either. The primary database was called PRIM while the standby was called STBY, eliminating the possibility of an instance name clash as well. And this error came while merely trying to start the instance, not even while mounting it - eliminating the standby controlfile as a cause.
The error 304 showed:
00304, 00000, "requested INSTANCE_NUMBER is busy"
// *Cause: An instance tried to start by using a value of the
// initialization parameter INSTANCE_NUMBER that is already in use.
// *Action: Either
// a) specify another INSTANCE_NUMBER,
// b) shut down the running instance with this number
// c) wait for instance recovery to complete on the instance with
// this number.
Needless to say, since this was a non-RAC database, there was no "instance_number" parameter in the initialization parameter file of either the primary or the standby. So the suggestions for resolution seemed odd. MetaLink provided no help; all the ORA-304 hits were related to RAC, with an instance_number mismatch.
As always happens, the problem fell into my lap at this point. With just days to go-live, I had to find a solution quickly. Long hours of troubleshooting, tracing the processes and examining the occasional trace file yielded no clue. Everything seemed to point to RAC, which this database was not. The Oracle Home was a RAC home, though, which meant the oracle binary was linked with the "rac" option.
So, the next logical step was to install a new Oracle Home without the rac option. After doing so, I tried to bring up the instance, using the new ORACLE_HOME and LD_LIBRARY_PATH variable; but, alas, the same error popped up.
Just to eliminate the possibility of some unknown bug, I decided to set an instance_number parameter explicitly, changing it from the default "0" to "1". The same error. I changed it to "2"; again, the same error.
Although this didn't help, it at least gave a clue that the error was not related to instance_number; the error message was clearly misleading. With this in mind, I went back to basics. I went through the alert log with a fine-toothed comb, scanning and analyzing each line.
The following line drew my attention:
DB_UNQIUE_NAME STBY is not in the Data Guard configuration
This was odd; the db_unique_name STBY was not defined in the Data Guard configuration. [BTW, note the spelling of "unique" in the message above. That is not what I typed; it is a copy and paste of the actual message in the alert log. Someone in Oracle Development should really pay attention to typos in messages. This is clearly more than a nuisance; what if some process scans for db_unique_name-related errors? It will not find the message at all!]
Checking the Data Guard configuration, I found that the DBA had correctly defined the primary and standby names. In any case, Data Guard had not even been started yet; this was merely the instance startup - why was it complaining about the Data Guard configuration at this point?
Perplexed, I resorted to a different approach. I renamed the pfile and all the other relevant files. Then I built the standby myself, from scratch, using the same names - PRIM and STBY. This time everything worked fine; the instance STBY did come up.
While this solved the immediate problem, everyone, including myself, wanted to know what the issue was in the earlier case, where the DBA had failed to bring up the instance. To get the answer, I compared the files I had created with the ones the DBA had created when he tried and failed. Voila! The cause was immediately clear - the DBA had forgotten to put a vital parameter in the pfile of the standby instance:
db_unique_name = 'STBY'
This parameter was absent, so it took its default value - the db_name, which was "PRIM". This caused the instance to fail with a seemingly unrelated message - "ORA-304 Instance_number is busy"!
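For illustration, a minimal sketch of the relevant standby pfile entries (all other parameters omitted) would be:

db_name = 'PRIM'            # must match the primary's db_name
db_unique_name = 'STBY'     # must be set explicitly on the standby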
Learning Points
- In Oracle, most errors are obvious; but some are not. So do not assume the error message is accurate. If all logic fails, assume the error message is wrong, or at least inaccurate.
- Instead of over-analyzing the process already followed, it may make sense to take a breather, wipe out everything and start from scratch. This is even more effective when someone else does it, bringing a fresh approach and possibly not repeating the same mistakes.
- Finally, the issue at hand: if you do not define the db_unique_name parameter in the standby instance, you will receive ORA-304 during instance startup.
Hope this was helpful. Happy New Year, everybody.
Thursday, October 15, 2009
OOW09 Session#4 DBA 11g New Features
Here is the presentation link. I hope you enjoyed the session and found it useful.
OOW09 Session# 3
Here is the presentation material. While you are there, feel free to browse around. And, of course, I will highly appreciate if you could send me your comments, either here or via email - whatever you are comfortable with.
Wednesday, October 14, 2009
OOW09 My Session#2
For those of you who attended, I thank you very much. Here is the presentation.
Tuesday, October 13, 2009
OOW Day2
I come to OOW for all those reasons as well. But high on my list is the visit to the exhibit halls. No, not for the tee-shirts that don't fit me and the graphics I don't really dig. I visit the demogrounds and exhibit halls to learn about the products and tools I should be aware of. Where else would you find 1000+ companies advertising their products in one place? Sure, I can call them and find out; but how do I find them in the first place? OOW exhibit halls are prime "hunting" grounds for new ideas and tools that I should be interested in, or at least be aware of. I can not only look at the tools; I can actually get some relevant technical facts in 5 minutes, something that might otherwise take weeks of scheduling and hours of marketing talk. And if I decide the product is not relevant, I can always walk away. I have the privilege of walking away; they don't. If I call them to my office, "they" have that option; not me :) If I find something attractive, I can always follow up and get to know more.
Oracle demogrounds are even better. Not only can I meet Oracle PMs there, but also the people who never come out into the public world - developers, development managers, architects and so on. These unsung heroes are largely the reason Oracle is what it is today. I meet the known faces, get to know new ones and establish new relationships. They hear from me what customers want, and I learn the innards of some features I am curious about.
So, I spent almost the whole day yesterday navigating through demo grounds and exhibit halls. I could cover only a small fraction. In between I had to attend some meetings at work. Going to OOW is never "going away". I wish it was.
Sunday, October 11, 2009
OOW09 - RAC Performance Tuning
You can download the presentation here. And while you are there, look around and download some more of my sessions as well.
Thanks a lot once again. I'm off to the keynote now.
ACE Directors Product Briefing '09
While I was a little disappointed by the coverage of database topics, I quickly recovered from the alphabet soup that makes up the netherworld of middleware and tools. However, a surprise visit by Thomas Kurian to address questions from the audience about the various product roadmaps was a testament that Oracle is dead serious about the ACE Program. That proves the commitment Oracle has made to the user community - very heartening.
As always, Vikky Lira and Lillian Buziak did a wonderful job of organizing the event. Considering about 100 ACE Directors from 20+ countries, that is no small task. Perhaps the highlight of the organization was the detailed briefing sheets Lillian prepared for each one individually, down to what car service one takes and when - simply superb! No amount of thanks will be enough. From the bottom of my heart, thank you, Vikky and Lillian. And, thank you Justin Kestelyn - for kicking off and running the event year after year.
Open World 09 Starts
I went off to the first session of the day in the IOUG bucket - Workload Management by Alex Gorbachev. Alex is one of those people who know their stuff, so there is always something to be learned from him. Alex successfully demonstrated the difference between Connection Load Balancing and Server Side Listener Load Balancing, with a pmon trace to show how the sessions are balanced. It sheds light on the perennial question - why isn't Oracle balancing the workload?
If you didn't attend this, you should definitely download the presentation and check it out later.
Sunday, September 28, 2008
OOW'08 ACE Directors Forum Session
On the panel we had Brad Brown (TUSC), Eddie Awad, Tim Hill, Mark Rittman, Hans Forbrich and yours truly. Here is the coverage on Mark Rittman's blog (with a picture of the crew): http://www.rittmanmead.com/2008/09/26/oracle-open-world-2008-day-5-exadata-storage-server-and-ask-the-oracle-ace-directors/
I was nervous; and who wouldn't be, with that pressure? Fortunately, we as a panel, with expert moderation by Lewis, could ace the volleys. Here are some of the questions I responded to, along with my answers:
- Q: Will Transparent Tablespace Encryption (TTE) spell the doom for Tranparent data Encryption (TDE)?
- A: Not at all. TDE encrypts a specific column or columns. TTE encrypts everything in the tablespace - all tables and all columns - so performance is definitely impacted. However, the biggest difference is where the data is decrypted. Both technologies encrypt data in storage; but TTE decrypts the data in the SGA, so index scans do not suffer with TTE. TDE does *not* decrypt the values in the SGA, so index scans are rather useless. So, in cases where a data value will most likely be found in the SGA, the TTE option works well. The penalty is paid when data is loaded from storage into the SGA; since that happens a lot less, it will not cause a serious issue. If data is frequently aged out of the buffer cache, the TTE option may prove expensive and TDE might become relatively attractive.
- Q: What approach would you recommend for upgrading a 10 GB database to 11g from 10g - Data Pump, Exp/Imp, Transportable Tablespace?
- A: None of the above. I would go for a Direct Path Insert (insert with the APPEND hint) over a DB link. This gives me several benefits: (i) I can do a database capture and replay it on 11g to minimize the risk of something breaking after the upgrade; (ii) I can do a database reorg at the time of the move, i.e. partition unpartitioned objects, etc.; and (iii) I need minimal time for the migration.
- Q: What is your least favorite new feature in Oracle?
- A: I would rather answer it as the most "unnecessary" new feature. That would be bigfile tablespaces - hands down. I always recommend creating smaller datafiles for tablespaces, no more than 32 GB. This reduces the risk significantly in case of failures. If a block media recovery fails for whatever reason, you can at least restore the file (or switch over to a copy) quickly. The bigger the file, the longer the restore and recovery will take. On the other hand, a large number of files increases the checkpoint time, so try to find a balance. But in any case, dump bigfiles.
- Q: How has life changed for you after being an OCP?
- A: Not in the least. I have been an OCP since 8i, and I finished the 9i, 10g and now 11g upgrade exams. However, no one has ever bothered to ask me whether I am an OCP.
Monday, September 01, 2008
Magic of Block Change Tracking
After a few days we noticed the incremental RMAN backup taking a long time. This caused major issues - it took a long time, and I/O waits went through the roof. In fact, it took increasingly longer every day after that unfortunate collapse of the node. Everyone was quite intrigued - what could be the connection between an instance crash and incremental backups getting slower? All sorts of theories cropped up - from failed HBA cards to undiscovered RAC bugs.
This is where I got involved. The following chronicles the diagnosis of the issue and the resolution.
First, the increased length of time is obviously a result of the incremental backups doing more work, i.e. processing more changed blocks. But what caused so many changed blocks? Interviews with stakeholders yielded no clear answer - there was absolutely no reason for increased activity. Since we were doing proper research, I decided to start with the facts. How many extra blocks were the incrementals processing?
I started with this simple query:
select completion_time, datafile_blocks, blocks_read, blocks
from v$backup_datafile
where file# = 1
order by 1
/
Output:
COMPLETIO DATAFILE_BLOCKS BLOCKS_READ BLOCKS
--------- --------------- ----------- ----------
18-JUL-08 524288 32023 31713
19-JUL-08 524288 11652 10960
20-JUL-08 524288 524288 12764
21-JUL-08 524288 524288 5612
22-JUL-08 524288 524288 11089
The columns are:
DATAFILE_BLOCKS - the number of blocks in the datafile at that time
BLOCKS_READ - the exact number of blocks the RMAN incremental backup read
BLOCKS - the number of blocks it actually backed up
From the above output, a pattern emerges - through July 19th, the backup read only a few blocks; but on July 20th it started scanning the entire file - all the blocks! I checked a few other datafiles and the story was the same everywhere. With a 4.5 TB database, if the incremental backup reads the datafiles in their entirety, then I/O will obviously go for a toss. That explains the I/O and time issue.
But why did RMAN switch from reading a few blocks to the whole file that day? The #1 suspect was Block Change Tracking. The 10g feature BCT allows RMAN to scan only the changed blocks rather than the entire file, and we use it. So, did something happen to make it stop working?
To answer that, I issued a modified query:
select completion_time, datafile_blocks, blocks_read, blocks, used_change_tracking
from v$backup_datafile
where file# = 1
order by 1
/
Output:
COMPLETIO DATAFILE_BLOCKS BLOCKS_READ BLOCKS USE
--------- --------------- ----------- ---------- ---
18-JUL-08 524288 32023 31713 YES
19-JUL-08 524288 11652 10960 YES
20-JUL-08 524288 524288 12764 NO
21-JUL-08 524288 524288 5612 NO
22-JUL-08 524288 524288 11089 NO
Bingo! BCT use ceased on July 20th. That was what caused the whole file to be scanned. But why was it stopped? No one had actually stopped it.
Investigating even further, I found from the alert log of Node 1:
Sun Jul 20 00:23:52 2008
CHANGE TRACKING ERROR in another instance, disabling change tracking
Block change tracking service stopping.
From Node 2:
Sun Jul 20 00:23:51 2008
CHANGE TRACKING ERROR in another instance, disabling change tracking
Block change tracking service stopping.
Alert log of Node 3 showed the issue:
Sun Jul 20 00:23:50 2008
Unexpected communication failure with ASM instance:
ORA-12549: TNS:operating system resource quota exceeded
CHANGE TRACKING ERROR 19755, disabling change tracking
Sun Jul 20 00:23:50 2008
Errors in file /xxx/oracle/admin/XXXX/bdump/xxx3_ctwr_20729.trc:
ORA-19755: could not open change tracking file
ORA-19750: change tracking file: '+DG1/change_tracking.dbf'
ORA-17503: ksfdopn:2 Failed to open file +DG1/change_tracking.dbf
ORA-12549: TNS:operating system resource quota exceeded
Block change tracking service stopping.
The last message shows the true error. The error was "operating system resource quota exceeded", making the diskgroup unavailable. Since the ASM diskgroup was down, all its files were unavailable as well, including the BCT file. Surprisingly, Oracle decided to stop BCT altogether rather than report it as a problem and let the user decide the next steps. So block change tracking was silently disabled, and the DBAs didn't get a hint of it. Ouch!
Resolution
Well, now that we had discovered the issue, we took the necessary steps to correct it. Because of the usual change control process, it took some time to have the change approved and put in place. We executed the following to re-create the BCT file:
alter database enable block change tracking using file '+DG1/change_tracking.dbf'
The entry in the alert log confirms it (on all nodes):
Block change tracking file is current.
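A quick way to confirm the state from the database itself, rather than the alert log, is to query v$block_change_tracking; on our system it should show something like ENABLED along with the file name '+DG1/change_tracking.dbf':

-- confirm that block change tracking is enabled and see the tracking file
select status, filename
from v$block_change_tracking
/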
But this does not solve the issue completely. To use block change tracking, there has to be a baseline, which is generally a full backup. We never take a full backup; we always take an incremental image copy and then merge it into a full backup at a separate location. So, the first order of business was to take a full backup. After that we immediately took an incremental. It took just about an hour, down from some 18+ hours earlier.
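For reference, the incrementally updated image copy strategy mentioned above typically looks something like this in RMAN; the tag name here is just an illustration, not the one we actually used:

run {
   # roll the existing image copy forward using previously taken incrementals
   recover copy of database with tag 'incr_merge';
   # take a new level 1 incremental that can later be merged into the copy
   backup incremental level 1 for recover of copy with tag 'incr_merge' database;
}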
Here is some analysis. Looking at the backup of just one file - file#1, i.e. SYSTEM datafile:
select COMPLETION_TIME, USED_CHANGE_TRACKING, BLOCKS, BLOCKS_READ
from v$backup_datafile
where file# = 1
order by 1
/
The output:
COMPLETIO USE BLOCKS BLOCKS_READ
--------- --- ---------- -----------
18-AUG-08 NO 31713 524288
18-AUG-08 NO 10960 524288
20-AUG-08 NO 12764 524288
21-AUG-08 NO 5612 524288
22-AUG-08 NO 11089 524288
23-AUG-08 NO 8217 524288
23-AUG-08 NO 8025 524288
25-AUG-08 NO 3230 524288
26-AUG-08 NO 6629 524288
27-AUG-08 NO 11094 524288 <= the filesize was increased
28-AUG-08 NO 3608 786432
29-AUG-08 NO 8199 786432
29-AUG-08 NO 12893 786432
31-AUG-08 YES 1798 6055
01-SEP-08 YES 7664 35411
Column descriptions:
USE - was Block Change Tracking used?
BLOCKS - the number of blocks backed up
BLOCKS_READ - the number of blocks read by the backup
Note that when BCT was not used, the *entire* file - all 524288 blocks - was being read every time. Of course, only a fraction of that was actually backed up, since only that many blocks had changed; but the whole file was being checked. After BCT, note how the "blocks read" number dropped dramatically. That is the magic behind the reduced backup time.
I wanted to find out exactly how much I/O savings BCT was bringing us. A simple query would show that:
select sum(BLOCKS_READ)/sum(DATAFILE_BLOCKS)
from v$backup_datafile
where USED_CHANGE_TRACKING = 'YES'
/
The output:
.09581342
That's just 9.58%. After BCT, only 9.58% of the blocks of the datafiles were scanned! Consider the impact of that. Before BCT, the entire file was scanned for changed blocks. After BCT, only about 9.58% of the blocks were scanned for changed blocks. Just 9.58%. How sweet is that?!!!
Here are three representative files:
File# Blocks Read Actual # of blocks Pct Read
----- ------------- ------------------- --------
985 109 1254400 .009
986 1 786432 .000
987 1 1048576 .000
Note that files 986 and 987 were virtually unread (only one block each was read). Before BCT, all the 1048576 blocks were read; after BCT, only one was. This makes perfect sense: these files hold essentially older data, so nothing changes there. The RMAN incremental is now blazing fast because it scans less than 10% of the blocks. The I/O problem disappeared too, making the database performance even better.
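A per-file report like the one above can be produced with a query along these lines; this is a sketch using the same v$backup_datafile columns shown earlier in this post:

-- per-file scan percentage for backups that used block change tracking
select file#, blocks_read, datafile_blocks,
       round(blocks_read/datafile_blocks*100, 3) pct_read
from v$backup_datafile
where used_change_tracking = 'YES'
order by pct_read
/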
So, we started with a random I/O issue causing a node failure, which led to increased time for incrementals, which was tracked down to the block change tracking file being silently disabled by Oracle without raising an error.
Takeaways:
The single biggest takeaway is that just because block change tracking is defined, don't assume it will stay enabled. A periodic check for non-use of the BCT file is a must. I will work on developing an automated tool to check for it. The tool will essentially issue:
SELECT count(1)
FROM v$backup_datafile
where USED_CHANGE_TRACKING = 'NO'
/
If the output is greater than 0, an alert should be issued. Material for the next blog. Thanks for reading.
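One refinement I would consider adding - my assumption, not part of the check above - is to restrict the count to recent backups, so that rows from the pre-BCT days do not keep raising the alert:

-- only consider backups completed in the last day
select count(1)
from v$backup_datafile
where USED_CHANGE_TRACKING = 'NO'
and completion_time > sysdate - 1
/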