Friday, December 18, 2009

Resolving Gaps in Data Guard Apply Using Incremental RMAN Backup

Recently, we had a glitch on one of our Data Guard (physical standby) databases. This is not a critical database, so the monitoring was relatively lax; the fact that the monitoring was outsourced did not help either. The laxness allowed a failure to remain undetected for quite some time, and it was eventually discovered only when the customer complained. This standby database is usually opened for read-only access from time to time. This time, however, the customer saw that the data was significantly out of sync with the primary and raised a red flag. Unfortunately, by then it had become a rather political issue.

Since the DBA in charge couldn’t resolve the problem, I was called in. In this post, I will describe the issue and how it was resolved. In summary, there are two parts to the problem:

(1) What happened
(2) How to fix it

What Happened

Let’s look at the first question – what caused the standby to lag behind. First, I looked for the current SCN numbers of the primary and standby databases. On the primary:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1447102

On the standby:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1301571

Clearly there is a difference. But this by itself does not indicate a problem, since the standby is expected to lag behind the primary (this is an asynchronous, non-real-time apply setup). The real question is how far it is lagging in terms of wall-clock time. To find out, I used the scn_to_timestamp function to translate the SCN to a timestamp:

SQL> select scn_to_timestamp(1447102) from dual;

SCN_TO_TIMESTAMP(1447102)
-------------------------------
18-DEC-09 08.54.28.000000000 AM

I ran the same query to get the timestamp associated with the SCN of the standby database as well (note that I ran it on the primary database, since scn_to_timestamp fails on a standby in mount mode):

SQL> select scn_to_timestamp(1301571) from dual;

SCN_TO_TIMESTAMP(1301571)
-------------------------------
15-DEC-09 07.19.27.000000000 PM

This shows that the standby is lagging by two and a half days! The data at this point is not just stale; it must be rotten.
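Incidentally, the lag can be computed in a single query on the primary rather than translating each SCN separately. This is just a quick sketch using the two SCNs captured above; subtracting the two timestamps yields an interval:

SQL> select scn_to_timestamp(1447102) - scn_to_timestamp(1301571) as apply_lag
  2  from dual;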

The next question is why it would be lagging so far behind. This is a 10.2 database, where the FAL server should automatically resolve any gaps in archived logs. Something must have happened that caused the FAL (fetch archived log) process to fail. To get that answer, I first checked the alert log of the standby instance. I found these lines that showed the issue clearly:


Fri Dec 18 06:12:26 2009
Waiting for all non-current ORLs to be archived...
Media Recovery Waiting for thread 1 sequence 700
Fetching gap sequence in thread 1, gap sequence 700-700

Fri Dec 18 06:13:27 2009
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence 700-700
DBID 846390698 branch 697108460
FAL[client]: All defined FAL servers have been attempted.


Going back in the alert log, I found these lines:

Tue Dec 15 17:16:15 2009
Fetching gap sequence in thread 1, gap sequence 700-700
Error 12514 received logging on to the standby
FAL[client, MRP0]: Error 12514 connecting to DEL1 for fetching gap sequence
Tue Dec 15 17:16:15 2009
Errors in file /opt/oracle/admin/DEL2/bdump/del2_mrp0_18308.trc:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
Tue Dec 15 17:16:45 2009
Error 12514 received logging on to the standby
FAL[client, MRP0]: Error 12514 connecting to DEL1 for fetching gap sequence
Tue Dec 15 17:16:45 2009
Errors in file /opt/oracle/admin/DEL2/bdump/del2_mrp0_18308.trc:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

This clearly showed the issue. On December 15th at 17:16:15, the Managed Recovery Process encountered an error while fetching the log from the primary. The error was ORA-12514 “TNS:listener does not currently know of service requested in connect descriptor”. This usually happens when the TNS connect string is incorrectly specified. The primary is called DEL1, and there is a connect string called DEL1 on the standby server.
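As a routine sanity check in situations like this (these commands were not part of the original troubleshooting), confirm the FAL settings on the standby and verify that the connect string actually resolves from the standby host:

SQL> show parameter fal_server
SQL> show parameter fal_client

$ tnsping DEL1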

The connect string works fine now; in fact, the standby is currently receiving log information from the primary without any issue. There must have been a temporary hiccup that prevented that specific archived log from reaching the standby. Even if a log is skipped because of an intermittent problem, it should be picked up by the FAL process later on; but that never happened. And since sequence# 700 was never applied, none of the logs received later – 701, 702 and so on – could be applied either. That is what caused the standby to lag behind ever since.
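On a physical standby, the gap itself can also be confirmed directly from the v$archive_gap view. This is a quick check, runnable on the standby even in mount mode; it returns the range of missing sequences, if any:

SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;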

So, the fundamental question was why FAL did not fetch the archived log sequence# 700 from the primary. To get to that, I looked into the alert log of the primary instance. The following lines were of interest:


Tue Dec 15 19:19:58 2009
Thread 1 advanced to log sequence 701 (LGWR switch)
Current log# 2 seq# 701 mem# 0: /u01/oradata/DEL1/onlinelog/o1_mf_2_5bhbkg92_.log
Tue Dec 15 19:20:29 2009
Errors in file /opt/oracle/product/10gR2/db1/admin/DEL1/bdump/del1_arc1_14469.trc:
ORA-00308: cannot open archived log '/u01/oraback/1_700_697108460.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
Tue Dec 15 19:20:29 2009
FAL[server, ARC1]: FAL archive failed, see trace file.
Tue Dec 15 19:20:29 2009
Errors in file /opt/oracle/product/10gR2/db1/admin/DEL1/bdump/del1_arc1_14469.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed.
Archiver continuing
Tue Dec 15 19:20:29 2009
ORACLE Instance DEL1 - Archival Error. Archiver continuing.


These lines showed everything clearly. The issue was:

ORA-00308: cannot open archived log '/u01/oraback/1_700_697108460.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory


The archived log simply was not available. The process could not see the file and couldn’t get it across to the standby site.

Upon further investigation I found that the DBA had removed archived logs to make some room in the filesystem, without realizing that he had also removed the most recent one, which was yet to be transmitted to the remote site. The mystery of why FAL did not fetch that log was finally solved.

Solution

Now that I knew the cause, the focus turned to the resolution. Had archived log sequence# 700 still been available on the primary, I could easily have copied it over to the standby, registered the log file and let the managed recovery process pick it up. But unfortunately the file was gone, and I couldn’t just recreate it. Until that logfile was applied, recovery would not move forward. So, what were my options?
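For the record, had the file still existed, the fix would have been a one-liner on the standby. This is only a sketch, assuming the log had been copied to the same path on the standby host:

SQL> alter database register logfile '/u01/oraback/1_700_697108460.dbf';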

One option, of course, was to recreate the standby: possible, but not practical considering the time it would take. The other option was to apply an incremental backup of the primary taken from that SCN. That’s the key – the backup must be from a specific SCN number. Since the process is not very obvious, I have described it below, step by step. I have noted where each action must be performed – [Standby] or [Primary].
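Before taking the backup, it is worth double-checking the SCN to start from on the standby itself. Besides v$database, the datafile headers give a safe lower bound; when in doubt, use the lower of the two values (a suggested precaution, not part of the original steps):

SQL> select current_scn from v$database;
SQL> select min(checkpoint_change#) from v$datafile_header;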

1. [Standby] Stop the managed standby apply process:

SQL> alter database recover managed standby database cancel;

Database altered.

2. [Standby] Shutdown the standby database

3. [Primary] On the primary, take an incremental backup from the SCN number where the standby has been stuck:

RMAN> run {
2> allocate channel c1 type disk format '/u01/oraback/%U.rmb';
3> backup incremental from scn 1301571 database;
4> }

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: sid=139 devtype=DISK

Starting backup at 18-DEC-09
channel c1: starting full datafile backupset
channel c1: specifying datafile(s) in backupset
input datafile fno=00001 name=/u01/oradata/DEL1/datafile/o1_mf_system_5bhbh59c_.dbf

piece handle=/u01/oraback/06l16u1q_1_1.rmb tag=TAG20091218T083619 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:06
Finished backup at 18-DEC-09
released channel: c1

4. [Primary] On the primary, create a new standby controlfile:

SQL> alter database create standby controlfile as '/u01/oraback/DEL1_standby.ctl';

Database altered.

5. [Primary] Copy these files to standby host:

oracle@oradba1 /u01/oraback# scp *.rmb *.ctl oracle@oradba2:/u01/oraback
oracle@oradba2's password:
06l16u1q_1_1.rmb 100% 43MB 10.7MB/s 00:04
DEL1_standby.ctl 100% 43MB 10.7MB/s 00:04

6. [Standby] Bring up the instance in nomount mode:

SQL> startup nomount

7. [Standby] Check the location of the controlfile:

SQL> show parameter control_files

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
control_files string /u01/oradata/standby_cntfile.ctl

8. [Standby] Replace the controlfile with the one you just created on the primary.

9. [Standby] Copy it over at the OS level:

$ cp /u01/oraback/DEL1_standby.ctl /u01/oradata/standby_cntfile.ctl

10. [Standby] Mount the standby database:

SQL> alter database mount standby database;

11. [Standby] RMAN does not know about these files yet; so you must let it know, by a process called cataloging. Catalog these files:

$ rman target=/

Recovery Manager: Release 10.2.0.4.0 - Production on Fri Dec 18 06:44:25 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: DEL1 (DBID=846390698, not open)
RMAN> catalog start with '/u01/oraback';

using target database control file instead of recovery catalog
searching for all files that match the pattern /u01/oraback

List of Files Unknown to the Database
=====================================
File Name: /u01/oraback/DEL1_standby.ctl
File Name: /u01/oraback/06l16u1q_1_1.rmb

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u01/oraback/DEL1_standby.ctl
File Name: /u01/oraback/06l16u1q_1_1.rmb

12. [Standby] Recover the database:

RMAN> recover database;

Starting recover at 18-DEC-09
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /u01/oradata/DEL2/datafile/o1_mf_system_5lptww3f_.dbf
...
channel ORA_DISK_1: reading from backup piece /u01/oraback/05l16u03_1_1.rmb
channel ORA_DISK_1: restored backup piece 1
piece handle=/u01/oraback/05l16u03_1_1.rmb tag=TAG20091218T083619
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07

starting media recovery

archive log thread 1 sequence 8012 is already on disk as file /u01/oradata/1_8012_697108460.dbf
archive log thread 1 sequence 8013 is already on disk as file /u01/oradata/1_8013_697108460.dbf


13. After some time, the recovery fails with the message:

archive log filename=/u01/oradata/1_8008_697108460.dbf thread=1 sequence=8009
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 12/18/2009 06:53:02
RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/u01/oradata/1_8008_697108460.dbf'
ORA-00310: archived log contains sequence 8008; sequence 8009 required
ORA-00334: archived log: '/u01/oradata/1_8008_697108460.dbf'

This happens because we have reached the end of the available archived logs. The expected archived log with sequence# 8009 has not been generated yet.

14. [Standby] At this point, exit RMAN and start the managed recovery process:

SQL> alter database recover managed standby database disconnect from session;

Database altered.

15. Check the SCNs on the primary and standby:

[Standby] SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1447474
[Primary] SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1447478
Now they are very close to each other. The standby has now caught up.
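To confirm the catch-up and keep an eye on the lag going forward, a couple of generic checks on the standby help (these were not part of the original session; on 10.2, the v$dataguard_stats view reports the apply lag once managed recovery is running):

[Standby] SQL> select max(sequence#) from v$archived_log where applied = 'YES';
[Standby] SQL> select name, value from v$dataguard_stats where name = 'apply lag';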

62 comments:

Noons said...

Excellent and useful stuff, Arup.
Definitely a bookmark on this one!

Anand said...

Hi Arup,

Thanks for sharing. Good stuff.

Regards,
Anand


David Mann said...

Thanks for making this available. I often stand up DataGuard setups that I have to hand off to other DBAs. You can only get so much out of the DataGuard documentation, it is great to see a problem from detection to resolution like this.

Kumar Madduri said...

Great post Arup. Definitely much better than rebuilding the standby.
I will try this for my next rebuild activity

Kumar

Tom said...

Good Stuff Arup! Another thing. If you are using ASM with Oracle Managed Files, the names will not match so do these steps.

RMAN > catalog start with '+DISKGROUP/db_name/datafile';

list datafilecopy all;

Now run this from your database in mount mode.

select 'switch datafile ' ||file#|| ' to copy;' from v$datafile;

This will spool out text. Run it in RMAN.

Then do the restore part and the rest is cake :)

Baskar said...

hi sir,

will

'backup incremental from scn 1301571 database;'

perform a backup from SCN 1301571 to the last SCN of the primary database?

thanks,
baskar.l

Arup Nanda said...

@Baskar: "will perform a backup from 1301571 scn to last scn of the primary database?"

Yes. Not the very last SCN; but the last SCN number where a completed transaction marks its end.

Arup Nanda said...

@Tom: That's right. Thanks for the additional info.

Vishal said...

Hi Arup,

We have 3-node RAC on ASM as Primary Production and 3-node RAC on ASM as Physical Standby.

I did take the incremental backup from the required scn from primary and was able to recover the gap to quite an extent.

Referred this post and metalink note: 836986.1

On Primary:

run {
allocate channel t1 type disk;
allocate channel t2 type disk;
BACKUP INCREMENTAL FROM SCN 27410362700 DATABASE FORMAT '/oratemp/standby_%U' tag 'FORSTANDBY';
release channel t1;
release channel t2;
}

On Standby after cataloging backup peices:

restore started:

run {
allocate channel t1 type disk;
allocate channel t2 type disk;
RECOVER DATABASE NOREDO;
release channel t1;
release channel t2;
}

Then on primary:

RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/tmp/ForStandbyCTRL.bck';

and on Standby I was able to restore it on Standby on ASM:

RMAN> RESTORE STANDBY CONTROLFILE FROM '/tmp/ForStandbyCTRL.bck';

Since the datafiles (OMF) on Standby and Primary were different, I used the following command on standby:

RMAN> CATALOG START WITH '+DATA/stdby/datafile'

The final step mentioned here was to:

RMAN> SWITCH DATABASE TO COPY;

But I got the following error:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of switch to copy command at 05/27/2010 17:25:42
RMAN-06571: datafile 1 does not have recoverable copy

I have checked the file does exist but since this command didn't work it still takes filenames (OMF) from Primary rather than switching it to the ones just cataloged.

Any suggestion/thoughts!

Thanks,
Vishal




We use OMF on ASM on both sites.

Arup Nanda said...

@Vishal - please see Tom's comments above. He specifically addressed the OMF situation.

Vishal said...

Arup,

I tried that and still got same error:



using target database control file instead of recovery catalog
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of switch to copy command at 05/27/2010 22:38:29
RMAN-06571: datafile 1 does not have recoverable copy

I also verified by following command and there is datafile 1 on standby site in cataloged files list by using following command:

RMAN> list datafilecopy all;

Thanks
Vishal

Vishal said...

The command I used at mount stage after seeing Tom's comment was:

RMAN> switch datafile 1 to copy;

The error is the same. I did not receive any error while recovering the gap as well.

Thanks,
Vishal

Vishal said...

Here is how we passed this problem:

On Standby:

Took output of:

RMAN> LIST COPY OF DATABASE;

The above command gave the destination filenames (List 1) with file#

SQL> select file#, name from v$datafile;

The above command gave what are the filenames (List 2) currently in controlfile which was restored.

SQL> alter system set standby_file_management=manual scope=memory;

We had to change above parameter as it would not allow to rename file in Standby database with it being AUTO.

SQL> ALTER DATABASE RENAME FILE '' to '';

The key to rename was the file# in both outputs above to do this manually.

That worked and standby was put back in managed recovery mode.

Thanks,
Vishal

Arup Nanda said...

@Yasir - the error is not directly connected to this problem, but it caused the issue. I had to mention it there for consistency.

Unknown said...

I am trying to do the same. While running 'recover database' I am getting the following error; please help me. Thanks in advance.
RMAN> recover database;

Starting recover at 08-JUL-10
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /u02/oradata/disdbprd/disdbprd/system01.dbf
destination for restore of datafile 00002: /u02/oradata/disdbprd/disdbprd/undotbs01.dbf
destination for restore of datafile 00003: /u02/oradata/disdbprd/disdbprd/sysaux01.dbf
destination for restore of datafile 00004: /u02/oradata/disdbprd/disdbprd/users01.dbf
destination for restore of datafile 00005: /u02/oradata/disdbprd/disdbprd/orabpel.dbf
channel ORA_DISK_1: reading from backup piece /u02/test/djli6iga_1_1.rmb
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 07/08/2010 06:15:44
ORA-19870: error reading backup piece /u02/test/djli6iga_1_1.rmb
ORA-19573: cannot obtain exclusive enqueue for datafile 5

Anonymous said...

Thanks for giving the solution:
I have the following issue:
RMAN> recover database;

Starting recover at 03-FEB-11
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
using channel ORA_DISK_5
using channel ORA_DISK_6
using channel ORA_DISK_7
using channel ORA_DISK_8
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 02/03/2011 17:47:45
RMAN-06094: datafile 1 must be restored
+++++++++++++++++++++++++++++++++++

My question is: why is it asking "datafile 1 must be restored"?

Regards,
Bhavani Gopal Balijepalli (Mr.)

Bhavani said...

Thanks Arup,

I got inspired by you and I solved one issue:

Mitigate/Work around ORA-00600: internal error code, arguments: [3020], [3], [346523], [12929435], [], [], [], [], [], [], [], [] – Oracle 11gr2 RAC – Active data-guard

Please follow my blog url:

http://bbalijepalli.blogspot.com/2011/02/work-around-ora-600-internal-error.html

Regards,
Bhavani

Rob said...

Great post. I wish I had read this 4 days ago before I had to rebuild my physical standby DB...Next time something happens to it, I will try this method. Thanks.

Anonymous said...

Hi Arup,
Nice post man, I really liked the way you explained it.
I'm a developer and I have a query regarding the replication server setup. We are planning to place the replication standby server in another country. After building the standby database, how do we keep it in sync using RMAN? What should my approach be?
Please suggest.
Thanks,
Gopi Krishna

Anonymous said...

In continuation with the above query..
Here I don't want to use Data Guard, which comes with Enterprise Edition; I want to implement this in Standard Edition.
Please advise,
Gopi Krishna

yathish said...

Resolving Gaps in Data Guard Apply Using Incremental RMAN Backup

I took an incremental backup from the primary database and followed your steps, but while recovering the database it throws this error: RMAN-06054: media recovery requesting unknown log: thread 1 seq 1273 lowscn 579475962

Anonymous said...

@yathish:
just check if the redo log files are ok in the standby server.

Kevin said...

Question: Why do I need to restore the controlfile? Won't the controlfile sequence on the standby move forward by cataloging and recovering the backup/archivelogs?

Anonymous said...

Thanks so much. I was in this situation today and this post helped totally. The only thing I did differently is that I had to set "alter system set dg_broker_start = false scope=both;". Without this, as soon as I mounted my DB it went into "managed recovery" mode and would not allow "restore database". I am not a DG expert, so not sure what is different in my config.

SuzieQ said...

I just found this because my log space had filled up on DR and there was a huge gap.

I also got error "datafile nn must be restored" so I adjusted slightly, had to copy the previous night's full backup over and did 'restore database' and then 'recover database'

Otherwise it worked brilliantly, thanks!

Anonymous said...

Hi Arup,

My Primary and DG mount points are different. So my datafile locations are different. While setting up data guard I had used db_file_name_convert and log_file_name_convert.
My question is can I follow your steps entirely or need some additional steps

Thanks
Sid

Arup Nanda said...

@Sid - it should work without any other additional steps. However, since I can't verify it at the moment, I can't say that for sure. It may not recognize the file names; in that case, merely issue the SET NEWNAME command for the datafiles. But first try without any modifications.

Jyothish said...

Simply Superb.

Schreven Valle said...

Arup,
Excellent post. I wish I had used incremental backup from SCN last week instead of totally recreating my logical standby. Next time I will be ready and this will save many hours.
-Steve

Unknown said...

Hi Arup, what about using the primary database controlfile on the standby if the filesystems of the primary and the standby are different?

And the same thing in the case of ASM?

Arup Nanda said...

@Akram - it should be no different. You can copy the controlfile or the incremental RMAN backuppieces from filesystem to ASM or vice versa. ASMCMD's CP command or dbms_file_transfer will be fine for that.

gpottapu said...

I had following scenario:
1. Took an RMAN database backup to disk on Primary in the order: DATABASE, ARCHIVELOG & CONTROLFILE
2. Copied it over to DR Remote Site to the location /oracle/STBY/rman.
3. After 3 Weeks, on remote site used this backup to setup standby:
spfile been updated with log_file_name_convert='/oracle/PRIM/redo01','/oracle/PRIM/redo01','/oracle/PRIM/redo02','/oracle/PRIM/redo01'

rman auxiliary /
startup clone nomount;
run {
allocate AUXILIARY channel t1 DEVICE TYPE DISK;
allocate AUXILIARY channel t2 DEVICE TYPE DISK;
DUPLICATE DATABASE FOR STANDBY BACKUP LOCATION '/oracle/STBY/rman/' NOFILENAMECHECK;
}
4. Everything worked good and Standby database was up.
5. Followed the process to resolve the Gaps. The Standby recovery still requesting Old Archivelogs dated 3 weeks back.
6. After banging my head over why it was not working in my case, finally figured out the following basic concept:
REASON: In MOUNT state Oracle can only read the controlfile metadata for datafiles and their backups. So RESTORE is able to identify the latest backup from this controlfile (if that backup is available on disk; if it doesn't find it, it looks for a previous one, until it can find a good FULL backup). Be aware that the CURRENT_SCN in the controlfile is far more recent than the datafile backup, and in MOUNT state Oracle can only pull metadata from the controlfile (queries on v$database and v$datafile get their data from the controlfile readable in MOUNT state). The only way to get the SCN for the incremental backup is through the RECOVER DATABASE command.
What happens when the RECOVER DATABASE command is fired?
Oracle checks the SCN in the datafile header and sees that it is old (it has an SCN of 100 while my other datafiles have an SCN of 200).
Oracle says "to go back to SCN 100, I need archivelog123" and it starts applying the log to my new datafile (the current datafiles don't need any changing).

7. Realizing this fact, I fired RECOVER STANDBY DATABASE and got the SCN for the Standby recovery.

8. Using the above SCN, took an incremental backup on Primary. Used this backup and latest Standby controlfile backup and followed the process mentioned in this Blog and was successful in setting up the Standby for the Data Guard.

Thanks to Arup & Others for posting some valuable tips in this Blog.

Arup Nanda said...

@gpottapu - thank you for your kind words. I am glad it helped.

Anonymous said...

Arup,
There is one change that I had to make in the recover command


Use the RMAN RECOVER command with the NOREDO option to apply the incremental backup to the standby database. All changed blocks captured in the incremental backup are updated at the standby database, bringing it up to date with the primary database. With an RMAN client connected to the standby database, run the following command:

RMAN> RECOVER DATABASE NOREDO;

Unknown said...

Thanks Arup - this was a lifesaver! The last time my standby was out of sync with the primary, I had to actually blow it away and rebuild it. This time, I decided to research it some more - too bad I didn't find your blog the last time!

Unknown said...

Excellent and really useful.
Arup Thanks..

Unknown said...

Excellent and useful.

Thanks Arup

Anonymous said...

Thanks for this post!
The only thing different in our system is that we use the Data Guard Broker. Therefore the recover database failed with an MRP process related error. I had to set the database state in DGMGRL to APPLY-OFF to successfully recover the database with RMAN.

edit database set state=APPLY-OFF

harry said...

Very Informative Blog Anup...Useful content. Thanks for sharing!

Suz Olliver said...

Thank you so much for the blog - I have used it a number of times and passed it on to other DBAs

arif said...

Thank you, for such good information.

maimoona said...

Amazing website, Love it,Thank you
