We’re writing our snapshot controlfiles to a “non-shared” device in our RAC environment (?/dbs/snapcf_DBNAME.f) and this apparently can be a problem… but only when using RAC.
Per DocId 1473914.1, “…any instances in the cluster may write to the snapshot / backup controlfile. Hence the Snapshot controlfile need to be visible to all instances.” This hasn’t been a problem for us in the past, but apparently, since “any” instance can write to the snapshot controlfile, it finally did and threw the error.
An easy fix, really: just configure RMAN to write to a shared device, in our case ASM, as seen below (see DocId 1472171.1):
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+HCFLASH/DEMHT90/snapcf_DEMHT902.f';
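A quick way to confirm the change took effect is to have RMAN report the setting back:

```
RMAN> SHOW SNAPSHOT CONTROLFILE NAME;
```

Every instance in the cluster should now resolve the snapshot controlfile to the same shared ASM path.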
I couldn't find this documented anywhere, but it appears that when you use an RMAN configuration to ensure archivelogs are applied to the standby before deleting them, RMAN relies on the Data Guard broker to perform that check.
If you were to turn off the log transport via dgmgrl and run your RMAN backup, you would see the following error at the end of the output when deleting archivelogs:
RMAN-08591: WARNING: invalid archived log deletion policy
Looks as though RMAN depends on DMON when using this policy in the configuration:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
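You can reproduce the warning by stopping redo transport through the broker and rerunning the backup. A sketch, assuming a broker configuration whose primary is named 'prod' (hypothetical name):

```
DGMGRL> EDIT DATABASE 'prod' SET STATE='TRANSPORT-OFF';

RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;
```

With transport off, the deletion step ends with the RMAN-08591 warning; setting the state back to 'TRANSPORT-ON' restores normal archivelog deletion.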
Ran into this bug (info here) over the weekend that affects 18.104.22.168 when logging into RMAN:
DBGSQL: TARGET> select count(*) into :dbstate from v$parameter where lower(name) = '_dummy_instance' and upper(value) = 'TRUE'
DBGSQL: sqlcode = 1008
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00554: initialization of internal recovery manager package failed
RMAN-06003: ORACLE error from target database:
ORA-01008: not all variables bound
The “possible” workaround from Oracle did indeed work:
SQL> alter system flush shared_pool;
I’ve done this with E-Biz, so I thought I’d do it with PeopleSoft. Now before you go off trying this script, TEST IT FIRST! If it blows up your database, you don’t know me! Also, this will not work unless you’re using a catalog/repository for RMAN.
This script is really made up of 3 parts:
1. the addition of some code to your RMAN backup script (assuming you already have one) that builds an RMAN run module for the clone build, cleans up the previous day’s SCP copies, and then SCPs your RMAN backups and run modules over to the clone box;
2. the main cloning script, which assumes you’re using ASM — you may need to fiddle with it to fit your ASM structure, or just comment that part out if you’re not using ASM; and
3. the ASM cleanup script (if you need it).
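Part 1 can be sketched roughly as follows. This is only an illustration, not the actual script: the paths, host name, and clone database name (CLONEDB) are all hypothetical placeholders you would replace with your own.

```shell
#!/bin/sh
# Sketch of part 1: build the clone run module, then ship backups to the clone box.
# BACKUP_DIR, CLONE_HOST, and CLONEDB are assumptions -- adjust for your site.
BACKUP_DIR=/u01/rman_backups
RUN_MODULE=/tmp/clone_run_module.rman
CLONE_HOST=clonebox

# Build tonight's RMAN run module for the clone build (targetless duplicate
# from the shipped backup pieces).
cat > "$RUN_MODULE" <<EOF
run {
  duplicate database to CLONEDB
    backup location '$BACKUP_DIR'
    nofilenamecheck;
}
EOF

# Clean up yesterday's copies on the clone box, then SCP tonight's backups
# and the run module over (commented out here: needs a real host).
# ssh "$CLONE_HOST" "rm -f $BACKUP_DIR/*"
# scp "$BACKUP_DIR"/* "$RUN_MODULE" "$CLONE_HOST:$BACKUP_DIR/"
```

The cleanup-before-copy step matters: without it, stale backup pieces from the previous day can confuse the duplicate.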
Now I’m sure there are a million ways to get this done, and this may not be the most efficient… but it works. As you can see from the main cloning script, we don’t do a lot of post-clone work with the PeopleSoft clones, as we normally leave that up to the app developers.
If you see any glaring errors, please let me know.
So every night, as part of my RMAN backups, I sync the controlfiles with the catalog after the backup is done, which, by the way, is much faster than staying connected to the catalog while the backup runs.
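The post-backup sync itself is a one-liner from the RMAN prompt; a sketch, where the catalog user and TNS alias (rcat@rcatdb) are hypothetical:

```
RMAN> CONNECT CATALOG rcat@rcatdb
RMAN> RESYNC CATALOG;
```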
Anyway, I now have two incarnations of yesterday’s clone in the catalog which can play havoc with my scripts when I query the catalog for RMAN backup information. We need to get rid of the old incarnation.
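You can confirm the duplicate entries from the RMAN prompt while connected to the catalog:

```
RMAN> LIST INCARNATION OF DATABASE;
```

Each row shows the DB key, incarnation key, DB name, DBID, status, reset SCN, and reset time, which is enough to spot yesterday’s clone registered twice.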
Using the following method will not work on 22.214.171.124; after the example, I’ll show you what does work.
ENV: 11gR2 (126.96.36.199), Windows 2008 R2
Ran into the same problem that was posted on Hemant’s blog (http://hemantoracledba.blogspot.com/2010/05/read-only-tablespaces-and-backup.html) concerning an RMAN backup of READ ONLY tablespaces with backup optimization set to ON:
You can see the datafiles being backed up (using the 11gR2 active duplicate feature):
backup as copy reuse
14 auxiliary format
15 auxiliary format
16 auxiliary format