show parameter replacement script uploaded to dba_scripts
The scripts for the PeopleSoft clone have been updated and tested. The two scripts are
located in the dba_scripts tab.
The first script is run as the user Oracle and the second as the ASM/Clusterware
owner, Grid. The second script runs every 60 seconds from Grid's crontab; it
polls for a trigger file that the first script writes, and when that file appears the second script kicks off.
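The poll-for-a-trigger-file handoff between the two owners can be sketched as below. This is a minimal illustration, not the actual dba_scripts contents: every path, filename, and the cron entry are assumptions.

```shell
# poll_once: if the stage-1 trigger file exists, consume it and run the
# stage-2 script; otherwise do nothing. A crontab entry for the grid user
# would invoke this once a minute, e.g.:
#   * * * * * /path/to/poll.sh          # path is hypothetical
poll_once() {
  trigger=$1
  script=$2
  if [ -f "$trigger" ]; then
    rm -f "$trigger"      # consume the trigger so the next run is a no-op
    sh "$script"          # run the second (ASM/Clusterware-owner) stage
  fi
}

# throwaway demo (real use would point at the dba_scripts paths)
echo 'echo stage2 ran' > /tmp/demo_stage2.sh
touch /tmp/demo_trigger
poll_once /tmp/demo_trigger /tmp/demo_stage2.sh   # prints "stage2 ran"
```

Consuming the trigger before running the second stage keeps overlapping cron invocations from firing the script twice for the same clone.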
I had the privilege of attending some of this year's OakTable World while at OOW. Absolutely fantastic! If you get the chance to attend OOW in the future, biggie size your trip by spending much of your time at OakTable World and skipping some of the OOW sessions. You won't regret it. Check out some of the great slide decks and videos at the link below:
UPDATE (11/30): It looks as if the corruption began just after our datacenter cutover to the dark fiber run down to Norman (our standby site). Let's see what the network guys come up with.
Redo log corruption is occurring on our physical standby at the DR site. It's happening either in flight (and we have no firewalls in place between our primary and standby sites) or on the standby storage tier. I say this because I fixed the issue by re-applying the archive logs on the standby after working out which redo logs, in which thread, occurred before the last good SCN stamp on the physical standby (which can be found in the alert log, as seen below).
Below are the notes from yesterday's exercise…
Seriously, block corruption on PRDFT?? Just in… hot off the press in the alert log on the DR1 instance:
Tue Nov 27 13:39:42 2012
Errors in file /opt/app/oracle/diag/rdbms/prdftdr/PRDFTDR1/trace/PRDFTDR1_pr05_19715.trc (incident=81053):
ORA-00600: internal error code, arguments: [kdourp_inorder2], , , , , , , , , , ,
Incident details in: /opt/app/oracle/diag/rdbms/prdftdr/PRDFTDR1/incident/incdir_81053/PRDFTDR1_pr05_19715_i81053.trc
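Working out which logs came before the last good SCN meant combing the standby alert log for the last "Media Recovery Log" lines before the first error. A rough sketch of that search is below; the demo log contents, paths, and sequence numbers are all fabricated for illustration, not our actual alert log.

```shell
# Scan a standby alert log for the media-recovery lines that precede the
# first ORA- error. The path and the demo log are assumptions.
ALERT=${ALERT:-/tmp/alert_demo.log}

# fabricate a tiny demo log so the pipeline below has something to chew on
cat > "$ALERT" <<'EOF'
Media Recovery Log +FRA/arch/thread_1_seq_101.arc
Media Recovery Log +FRA/arch/thread_2_seq_88.arc
ORA-00600: internal error code, arguments: [kdourp_inorder2]
EOF

# print up to the first ORA- error, keeping only the last applied logs
sed '/ORA-/q' "$ALERT" | grep 'Media Recovery Log' | tail -2
```

From there it's a matter of re-registering and re-applying the archive logs from those sequence numbers forward on the standby.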
We’re writing our snapshot controlfiles to a “non-shared” device in our RAC environment (?/dbs/snapcf_DBNAME.f) and this apparently can be a problem… but only when using RAC.
Per DocId 1473914.1, “…any instances in the cluster may write to the snapshot / backup controlfile. Hence the Snapshot controlfile need to be visible to all instances.” This hasn’t been a problem for us in the past, but apparently, since “any” instance can write to the snapshot controlfile, it finally did and threw the error.
An easy fix really, just configure RMAN to write to a shared device, in our case ASM, as seen below (see DocId 1472171.1):
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+HCFLASH/DEMHT90/snapcf_DEMHT902.f';
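It's worth reading the setting back afterward to confirm it stuck; RMAN's SHOW command does that (the output line here simply echoes the example configuration above):

```
RMAN> SHOW SNAPSHOT CONTROLFILE NAME;
```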
Ran into this bug (info here) over the weekend, which shows up when logging into RMAN:
DBGSQL: TARGET> select count(*) into :dbstate from v$parameter where lower(name) = '_dummy_instance' and upper(value) = 'TRUE'
DBGSQL: sqlcode = 1008
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00554: initialization of internal recovery manager package failed
RMAN-06003: ORACLE error from target database:
ORA-01008: not all variables bound