After completing several database migration and upgrade projects, Data Guard has become my preferred migration method when minimal downtime is required and GoldenGate is not an option. From 11.2.0.1 onwards, it is possible to run a physical standby database on a higher patch set or major release than the primary for the purpose of migrating to new hardware, provided that after a switchover to that standby the database is not opened but is immediately upgraded in the usual manner (refer to Oracle Doc ID 785347.1). The process has three stages: creation (setting up the physical standby), pre-upgrade, and cutover (activating and upgrading the physical standby). The test scenario below migrates and upgrades a 12.2 database to 19c using the AutoUpgrade tool.

Environment Details

I kept the same ORACLE_SID on both servers; only the db_unique_name needs to differ for the Data Guard configuration. After the cutover and upgrade are complete, the db_unique_name can be changed back to match the source.

SOURCE

hostname         ol1.ol
db_name          cdb1
db_unique_name   cdb1
oracle_sid       cdb1
oracle_home      /u01/app/oracle/product/12.2.0/dbhome_1

DESTINATION

hostname         ol2.ol
db_name          cdb1
db_unique_name   cdb19
oracle_sid       cdb1
oracle_home      /u01/app/oracle/product/19.0.0/dbhome_1

Data Guard Physical Standby Setup Phase

If the source database is already in archive log mode, no downtime is required for this phase; otherwise, put the database in archive log mode first, as shown below.
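
A minimal check-and-enable sketch; enabling archive log mode requires a brief outage:

SQL> select log_mode from v$database;

-- only if the result is NOARCHIVELOG
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;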

1. Install 19c binaries on the destination server.

2. Enable force logging on the source database, adjust parameters, and add standby redo logs.

SQL> ALTER DATABASE FORCE LOGGING;
SQL> ALTER SYSTEM SET log_archive_config='DG_CONFIG=(cdb1,cdb19)' scope=both;
SQL> ALTER SYSTEM SET standby_file_management=AUTO scope=both;

-- check current redo logs
SQL> select a.group#,a.member,b.bytes/1024/1024 MB from v$logfile a join v$log b on a.group#=b.group# order by 1;

-- add standby redo logs
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 11 ('/u01/oradata/cdb1/standby_redo11.log') SIZE 250M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 12 ('/u01/oradata/cdb1/standby_redo12.log') SIZE 250M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 13 ('/u01/oradata/cdb1/standby_redo13.log') SIZE 250M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 14 ('/u01/oradata/cdb1/standby_redo14.log') SIZE 250M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 15 ('/u01/oradata/cdb1/standby_redo15.log') SIZE 250M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 16 ('/u01/oradata/cdb1/standby_redo16.log') SIZE 250M;
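
A quick way to confirm the standby redo logs were created; the usual guideline is one more standby group per thread than online redo groups, sized the same as the online logs:

SQL> select group#, thread#, bytes/1024/1024 MB, status from v$standby_log;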

3. Add TNS entries on both source and destination servers.

CDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ol1.ol)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = cdb1)
    )
  )

CDB19 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ol2.ol)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = cdb19)(UR=A)
    )
  )
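
To confirm both servers can resolve and reach each alias (assuming the listeners are up), a quick check from each host:

tnsping cdb1
tnsping cdb19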

4. Copy the source pfile and password file to the destination server.
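
A sketch of the copy, assuming OS authentication and the default $ORACLE_HOME/dbs locations; file names and paths are illustrative:

-- on the source server
SQL> create pfile='/tmp/initcdb1.ora' from spfile;

scp /tmp/initcdb1.ora oracle@ol2.ol:/u01/app/oracle/product/19.0.0/dbhome_1/dbs/initcdb1.ora
scp $ORACLE_HOME/dbs/orapwcdb1 oracle@ol2.ol:/u01/app/oracle/product/19.0.0/dbhome_1/dbs/orapwcdb1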

5. In the example below, the source database is backed up to a shared location accessible from both servers.

rman target /

RMAN> backup database format '/oracle/rman/db_%U' plus archivelog format '/oracle/rman/arc_%U';

RMAN> backup current controlfile format '/oracle/rman/ctl_%U';

6. On the destination server, edit the pfile and add or change the db_unique_name entry.

db_unique_name='cdb19'

7. Create the required directories.

mkdir -p /u01/app/oracle/admin/cdb1/adump
mkdir -p /u01/oradata/cdb1
mkdir -p /u01/fra/CDB1

8. Create the spfile and start the database in nomount mode.

SQL> create spfile from pfile;
SQL> startup force nomount;

9. Create the standby database using the RMAN DUPLICATE command.

rman auxiliary /

RMAN> duplicate database for standby backup location '/oracle/rman' nofilenamecheck;
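
Once the duplicate completes, it is worth confirming the role and state on the destination; expect PHYSICAL STANDBY and MOUNTED:

SQL> select db_unique_name, database_role, open_mode from v$database;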

10. Configure log shipping on the source database (PRIMARY).

SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE=CDB19 NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=cdb19' SCOPE=both;

SQL> ALTER SYSTEM SET log_archive_dest_state_2=enable scope=both;

SQL> ALTER SYSTEM SET fal_server='CDB19' scope=both;
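
To confirm the destination is valid and shipping is working, a quick check on the primary:

SQL> select dest_id, status, error from v$archive_dest where dest_id = 2;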

11. Set fal_server and start the apply process on the destination database (STANDBY).

SQL> ALTER SYSTEM SET fal_server='CDB1' scope=both;

-- start MRP
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

-- check MRP to make sure it started
SQL> select process, thread#, sequence#, status from v$managed_standby where process = 'MRP0';

PROCESS      THREAD#  SEQUENCE# STATUS
--------- ---------- ---------- ------------
MRP0               1         33 APPLYING_LOG

12. On the source (PRIMARY) database, check that there is no gap: perform a couple of log switches, then run the query below until the gap is 0 or steadily decreasing.

SQL> select a.not_applied - b.applied GAP
from
(select max(sequence#) not_applied
from v$archived_log
where dest_id = 1
and name is not null
and applied = 'NO') a,
(select max(sequence#) applied
from v$archived_log
where dest_id = 2
and name is not null
and applied = 'YES') b;


GAP
----------
   0
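
The log switches mentioned in step 12 can be forced on the primary, for example:

SQL> alter system switch logfile;
SQL> alter system switch logfile;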

Pre-Upgrade Phase

Typically, no downtime is required here unless the pre-upgrade findings and fixups call for it (for example, more invalid SYS objects or broken components than usual).

1. Download the latest auto-upgrade tool (Doc ID 2485457.1).

2. Create an auto-upgrade config file on the source (PRIMARY) database.

# source_cdb1.cfg

global.autoupg_log_dir=/home/oracle/upgrade/tmp
upg1.log_dir=/home/oracle/upgrade/tmp
upg1.sid=cdb1
upg1.source_home=/u01/app/oracle/product/12.2.0/dbhome_1
# target_home is not required for the analyze phase
upg1.target_home=
upg1.start_time=NOW
upg1.upgrade_node=ol1.ol
upg1.run_utlrp=yes
upg1.timezone_upg=yes
upg1.target_version=19

3. Execute auto-upgrade analyze mode and review logs.

$ORACLE_HOME/jdk/bin/java -jar autoupgrade.jar -config source_cdb1.cfg -mode analyze

# review logs
# /home/oracle/upgrade/tmp/cfgtoollogs/upgrade/auto/status/status.log
# /home/oracle/upgrade/tmp/cdb1/100/prechecks/cdb1_preupgrade.log

4. Execute auto-upgrade fixups mode (optional; the fixup scripts can instead be run manually for more control).

$ORACLE_HOME/jdk/bin/java -jar autoupgrade.jar -config source_cdb1.cfg -mode fixups

5. Prepare an auto-upgrade config file on the destination (STANDBY) database.

# dest_cdb1.cfg

global.autoupg_log_dir=/home/oracle/upgrade/tmp
upg1.log_dir=/home/oracle/upgrade/tmp
upg1.sid=cdb1
# source_home is not required on the standby; point it at a temp location
upg1.source_home=/home/oracle/upgrade/tmp
upg1.target_home=/u01/app/oracle/product/19.0.0/dbhome_1
upg1.start_time=NOW
upg1.upgrade_node=ol2.ol
upg1.run_utlrp=yes
upg1.timezone_upg=yes
upg1.target_version=19

Cutover Phase (Activate and Upgrade)

Planned downtime for the cutover is short: activating the standby should take less than 5 minutes, while the upgrade will take anywhere from 30 to 90 minutes depending on the number of components installed. Accurate timings can be obtained by testing the upgrade process: skip step 1, make sure the logs are in sync, run the remaining steps, and afterwards use the guaranteed restore point to flash back the database and convert it back to a physical standby without a rebuild, as sketched below.
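
A hedged sketch of that test-run rollback; note that after a test upgrade the flashback should be performed from the original 12.2 home:

-- mount, flash back to the restore point created in step 3, and reinstate the standby
SQL> shutdown immediate;
SQL> startup mount;
SQL> flashback database to restore point GRP_PRE19;
SQL> alter database convert to physical standby;
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database recover managed standby database disconnect from session;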

1. Stop applications and shut down the source database.
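
On the primary, after the applications are stopped, a final log switch helps push the last of the redo before the shutdown, for example:

SQL> alter system switch logfile;
SQL> shutdown immediate;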

2. Verify all logs are applied and stop the Managed Recovery Process.

SQL> alter database recover managed standby database cancel;
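
To verify all logs were applied before the cancel, a check like this can be run on the standby (the last received and last applied sequence numbers should match per thread):

SQL> select thread#, max(sequence#) last_received from v$archived_log group by thread#;
SQL> select thread#, max(sequence#) last_applied from v$archived_log where applied = 'YES' group by thread#;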

3. Create a guaranteed restore point.

SQL> create restore point GRP_PRE19 guarantee flashback database;
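
The restore point can be verified with:

SQL> select name, scn, guarantee_flashback_database from v$restore_point;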

4. Activate the physical standby database.

SQL> alter database activate standby database;

-- verify
SQL> select database_role,open_mode from v$database;

DATABASE_ROLE    OPEN_MODE
---------------- --------------------
PRIMARY          MOUNTED

5. Open the database in upgrade mode.

SQL> alter database open upgrade;

-- if multitenant, also open all PDBs in upgrade mode
SQL> alter pluggable database all open upgrade force;

6. Execute auto-upgrade upgrade mode.

$ORACLE_HOME/jdk/bin/java -jar autoupgrade.jar -config dest_cdb1.cfg -mode upgrade
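
Inside the AutoUpgrade console, progress can be monitored with commands such as these (the job number is illustrative):

upg> lsj
upg> status -job 100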

7. Perform the post-upgrade steps and, optionally, change the db_unique_name back to match the source in the spfile and pfile, as sketched below.
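
A minimal sketch of reverting db_unique_name; it is a static parameter, so the change goes to the spfile and takes effect after a restart:

SQL> alter system set db_unique_name='cdb1' scope=spfile;
SQL> shutdown immediate;
SQL> startup;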

Conclusion

Data Guard database migration provides an efficient and reliable method for upgrading to Oracle 19c with minimal downtime. By combining Oracle AutoUpgrade with a mixed-version Data Guard configuration, you can streamline the migration process and maintain availability throughout. This approach simplifies complex upgrades, making it a good choice for database administrators who need to preserve system performance and reliability during transitions.

For any questions or more information, please reach out to us.