Migrate your Oracle pluggable database to Amazon RDS for Oracle

by Jeevith Anumalla and Sagar Patel

Many customers run Oracle multi-tenant databases in their on-premises environments and want to use Amazon Relational Database Service (Amazon RDS) for Oracle features with those databases. As of August 2022, Amazon RDS for Oracle supports a subset of the multi-tenant architecture called the single-tenant architecture. The multi-tenant architecture enables an Oracle database to function as a multi-tenant container database (CDB). A CDB can include customer-created pluggable databases (PDBs). In RDS for Oracle, the CDB contains only one PDB. The single-tenant architecture uses the same Amazon RDS APIs as the non-CDB architecture, and using RDS for Oracle with a single PDB is mostly identical to using it with a non-CDB.

In this post, we discuss a solution for migrating your on-premises Oracle multi-tenant pluggable databases to Amazon RDS for Oracle.

Solution overview

A pluggable database is a portable collection of schemas, schema objects, and nonschema objects that appears to an Oracle Net client as a non-CDB. PDBs can be plugged into CDBs, and a CDB can contain multiple PDBs. Each PDB appears on the network as a separate database.

When you create a CDB, you specify the DB instance identifier just as you would for a non-CDB RDS for Oracle instance. The instance identifier forms the first part of your endpoint. The system identifier (SID) is the name of the CDB. The SID of every CDB is RDSCDB; you can't choose a different value.

This section covers a lift-and-shift procedure to migrate a pluggable database to Amazon RDS for Oracle. The procedure involves downtime that depends on the size of the database, the network bandwidth between on premises and Amazon Web Services, and the time consumed by the export and import. To minimize downtime, refer to Migrating Oracle databases with near-zero downtime using Amazon Web Services DMS.

For this post, we use Amazon Elastic Compute Cloud (Amazon EC2) as our source Oracle database server. The following diagram illustrates our architecture.

Prerequisites

Before you get started, complete the following prerequisite steps:

  1. Create an RDS for Oracle multi-tenant container database using the following Amazon Web Services CloudFormation template.
    Note: The CloudFormation template creates a Bring Your Own License (BYOL) RDS for Oracle instance.
  2. Make sure the source database is an Oracle pluggable database located on premises or on an EC2 instance that can connect to Amazon Web Services resources hosted in the respective account.
  3. Confirm that you can connect to both the on-premises Oracle database and the RDS for Oracle instance.
  4. Add a TNS entry for the RDS for Oracle instance in your source database server (Amazon EC2 in our case) to connect to the target; see the example entry after this list.
  5. Install and configure the Amazon Web Services Command Line Interface (Amazon Web Services CLI) on a machine that has access to Amazon Simple Storage Service (Amazon S3) and the RDS instance. For instructions, refer to Installing or updating the latest version of the Amazon Web Services CLI.
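
The following is a minimal sketch of a tnsnames.ora entry for the target (referenced in step 4); the alias, host, and PDB name are placeholders, so replace them with your RDS endpoint and PDB values:

# Hypothetical tnsnames.ora entry; replace the host, port, and PDB name
ORCLPDB_RDS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com)(PORT = 1521))
    (CONNECT_DATA = (SID = pdb_name))
  )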

After you create the RDS for Oracle database (the target database) as a single-tenant architecture, you can connect to the PDB using the instance endpoint and the PDB name. The following example shows the format for the connection string in SQL*Plus:

sqlplus 'dbuser@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=endpoint)(PORT=port))(CONNECT_DATA=(SID=pdb_name)))'

Export data from the source Oracle database

Connect to your source database and fetch the schemas and their respective tables associated with the PDB:

SQL> select con_id, name, open_mode from v$pdbs;

    CON_ID NAME       OPEN_MODE
---------- ---------- ----------
         2 PDB$SEED   READ ONLY
         3 ORCLPDB    READ WRITE

SQL> ALTER SESSION SET CONTAINER = ORCLPDB;

Session altered.

SQL> SHOW CON_NAME;

CON_NAME
------------------------------
ORCLPDB
SQL>

SQL> l
  1  SELECT p.PDB_ID, p.PDB_NAME, t.OWNER, t.TABLE_NAME
  2   FROM DBA_PDBS p, CDB_TABLES t
  3   WHERE p.PDB_ID > 2 AND
  4   t.owner not like '%SYS%' and  t.OWNER NOT IN ('ORDDATA','DBSFWUSER',
  5          'CTXSYS',
  6          'RDSADMIN','PUBLIC',
  7          'XDB',
  8          'OUTLN',
  9          'ORACLE_OCM',
 10          'DBSNMP',
 11          'DBUPTIME',
 12          'GSMADMIN_INTERNAL',
 13          'SVCSAMLMSRO',
 14*         'REMOTE_SCHEDULER_AGENT','PERFSTAT')
SQL>
SQL> /

    PDB_ID PDB_NAME                            OWNER     TABLE_NAME
---------- ----------------------------------- --------- ------------------------------
         3 ORCLPDB                             HR        REGIONS
         3 ORCLPDB                             HR        COUNTRIES
         3 ORCLPDB                             HR        LOCATIONS
         3 ORCLPDB                             HR        DEPARTMENTS
         3 ORCLPDB                             HR        JOBS
         3 ORCLPDB                             HR        EMPLOYEES
         3 ORCLPDB                             HR        JOB_HISTORY
         3 ORCLPDB                             SALES     TEST

8 rows selected.

Connect to the source PDB using TNS and export the schemas that are in scope for the migration using Oracle Data Pump:

$ expdp admin/admin@orclpdb directory=ORA_DIR dumpfile=hr_dmp_file2.dmp schemas=hr,sales

Export: Release 12.2.0.1.0 - Production on Fri Jun 10 19:12:54 2022

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Starting "ADMIN"."SYS_EXPORT_SCHEMA_01":  admin/********@orclpdb directory=ORA_DIR dumpfile=hr_dmp_file2.dmp schemas=hr,sales
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
. . exported "HR"."EMPLOYEES"                            17.09 KB     107 rows
. . exported "HR"."LOCATIONS"                            8.437 KB      23 rows
. . exported "HR"."JOB_HISTORY"                          7.195 KB      10 rows
. . exported "HR"."JOBS"                                 7.109 KB      19 rows
. . exported "HR"."DEPARTMENTS"                          7.125 KB      27 rows
. . exported "HR"."COUNTRIES"                            6.367 KB      25 rows
. . exported "HR"."REGIONS"                              5.546 KB       4 rows
. . exported "SALES"."TEST"                              5.054 KB       1 rows
Master table "ADMIN"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ADMIN.SYS_EXPORT_SCHEMA_01 is:
  /home/oracle/hr_dmp_file2.dmp
Job "ADMIN"."SYS_EXPORT_SCHEMA_01" successfully completed at Fri Jun 10 19:13:42 2022 elapsed 0 00:00:47

Transfer exported dump files to Amazon S3 and integrate an Amazon S3 role with the RDS instance

The RDS for Oracle instance must have access to an S3 bucket to work with the Amazon RDS for Oracle integration with Amazon S3. After you create an Amazon RDS for Oracle instance and an S3 bucket, create an IAM policy and an IAM role as outlined in this section, then attach the policy to the role.

To use the Amazon RDS for Oracle integration with Amazon S3, your RDS for Oracle instance must be associated with an option group that includes the S3_INTEGRATION option. An option group is already attached to our RDS instance; to add the S3_INTEGRATION option, complete the following steps:

  1. On the Amazon RDS console, choose Option groups.
  2. Choose the group attached to the RDS instance.
  3. Choose Add option.
  4. For Option, choose S3_INTEGRATION.
  5. For Version, choose 1.0.
  6. For Apply Immediately, select Yes.
  7. Choose Add option.
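
Alternatively, you can add the option from the command line. The following is a sketch using the Amazon Web Services CLI; the option group name is a placeholder:

aws rds add-option-to-option-group \
    --option-group-name my-oracle-option-group \
    --options OptionName=S3_INTEGRATION,OptionVersion=1.0 \
    --apply-immediately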

After you add S3_INTEGRATION to the option group, create an IAM role to integrate with the RDS for Oracle instance.

  1. In the navigation pane of the IAM console, choose Roles, then choose Create role.
  2. Under Select trusted entity, choose Amazon Web Services service and choose RDS.
  3. Under Add permissions, choose AmazonS3FullAccess.
  4. Under Role details, enter RDS_S3_Integration_Role as the role name and choose Create role.
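
AmazonS3FullAccess is broad; if you prefer least privilege, attach a customer managed policy scoped to your bucket instead. The following is a minimal sketch, assuming the bucket name mydbs3bucket used later in this post:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::mydbs3bucket",
        "arn:aws:s3:::mydbs3bucket/*"
      ]
    }
  ]
}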

After the IAM role is created and the S3_INTEGRATION option is added, associate the role with your RDS DB instance.

  1. On the Amazon RDS console, choose your DB instance.
  2. On the Connectivity & security tab, choose Manage IAM roles.
  3. For Add IAM role to this instance, choose RDS_S3_Integration_Role (the role that you created).
  4. For Features, choose S3_INTEGRATION.
  5. Choose Add role.
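
Alternatively, you can associate the role with the Amazon Web Services CLI. The following is a sketch; the DB instance identifier and account ID are placeholders:

aws rds add-role-to-db-instance \
    --db-instance-identifier my-oracle-instance \
    --role-arn arn:aws:iam::111122223333:role/RDS_S3_Integration_Role \
    --feature-name S3_INTEGRATION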

After the IAM role and the Amazon S3 integration feature are associated with your RDS for Oracle database, you can upload the data dump files from your on-premises Oracle database server to Amazon S3 and download them from Amazon S3 to the RDS instance. This step requires the Amazon Web Services CLI to be available on the host; see the prerequisites for installation instructions.

aws s3 cp hr_dmp_file2.dmp s3://mydbs3bucket/orcl
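
To confirm the upload, you can list the prefix (the bucket and prefix match the example above):

aws s3 ls s3://mydbs3bucket/orcl/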

Download the dump files from Amazon S3 to the RDS instance

After you upload the data dump files to the S3 bucket, connect to your target database instance and download the data pump files from Amazon S3 to the DATA_PUMP_DIR of your target instance. See the following code:

Note: Make sure your RDS instance has enough storage space to accommodate the dump files. You can monitor storage space with Amazon CloudWatch metrics and prevent Amazon RDS from running out of space by creating a CloudWatch alarm on RDS storage.

SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
      p_bucket_name    =>  'mydbs3bucket',
      p_s3_prefix      =>  'orcl/hr_dmp_file2.dmp',
      p_directory_name =>  'DATA_PUMP_DIR') 
AS TASK_ID FROM DUAL;
TASK_ID
---------------------------------------------------------------------
1654638896204-1266

This gives you the task ID 1654638896204-1266. Verify the status of the download task with the following SQL query:

SQL> SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-1654638896204-1266.log'));

TEXT
--------------------------------------------------------------------------------
2022-06-10 21:54:56.295 UTC [INFO ] This task is about to list the Amazon S3 objects for AWS Region us-east-1, bucket name <S3-bucket>, and prefix orcl/hr_dmp_file2.dmp.

2022-06-10 21:54:56.343 UTC [INFO ] The task successfully listed the Amazon S3 objects for AWS Region us-east-1, bucket name <S3-bucket>, and prefix orcl/hr_dmp_file2.dmp.

2022-06-10 21:54:56.360 UTC [INFO ] This task is about to download the Amazon S3 object or objects in /rdsdbdata/datapump from bucket name <S3-bucket> and key orcl/hr_dmp_file2.dmp.

2022-06-10 21:54:56.557 UTC [INFO ] The task successfully downloaded the Amazon S3 object or objects from bucket name <S3-bucket> with key orcl/hr_dmp_file2.dmp to the location /rdsdbdata/datapump.

2022-06-10 21:54:56.557 UTC [INFO ] The task finished successfully.

After the SQL query output shows the file downloaded successfully, you can list the data pump file in the RDS for Oracle database with the following query:

SQL> SELECT * FROM TABLE(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by mtime;

FILENAME                                TYPE         FILESIZE   MTIME
--------------------------------------- ------------ ---------- ---------
D118F77AAA8B1A73E0530100007FE761        directory    4096       03-APR-22
hr_dmp_file2.dmp                        file         638976     10-JUN-22
datapump/                               directory    4096       10-JUN-22

Import the data

After the data dump file is available, you can create tablespaces, grants, and schemas on the target RDS for Oracle database before you initiate the import.
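
The following is a minimal sketch of that pre-import setup; the tablespace name, schema name, and password are assumptions, so adjust them to match your source database (RDS for Oracle uses Oracle Managed Files, so no datafile path is specified):

-- Hypothetical pre-import setup on the target RDS for Oracle database
CREATE TABLESPACE users_data DATAFILE SIZE 100M AUTOEXTEND ON;
CREATE USER hr IDENTIFIED BY "ChangeMe_123" DEFAULT TABLESPACE users_data QUOTA UNLIMITED ON users_data;
GRANT CREATE SESSION, RESOURCE TO hr;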

Grant privileges to the directories before the import with the following code:

GRANT READ, WRITE ON DIRECTORY DATA_PUMP_DIR TO RDSADMIN;

Connect to the source Oracle server or another server that can make a connection to the target RDS instance and run the following command:

impdp admin@orclpdb directory=DATA_PUMP_DIR logfile=import.log dumpfile=hr_dmp_file2.dmp

Import: Release 12.1.0.2.0 - Production on Fri Jun 10 15:59:06 2022

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
Password:


Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Master table "ADMIN"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "ADMIN"."SYS_IMPORT_FULL_01":  admin/********@orclpdb directory=DATA_PUMP_DIR_PDB logfile=import.log dumpfile=schemas_exp.dmp
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "HR"."EMPLOYEES"                            17.09 KB     107 rows
. . imported "HR"."LOCATIONS"                            8.437 KB      23 rows
. . imported "HR"."JOB_HISTORY"                          7.195 KB      10 rows
. . imported "HR"."JOBS"                                 7.109 KB      19 rows
. . imported "HR"."DEPARTMENTS"                          7.125 KB      27 rows
. . imported "HR"."COUNTRIES"                            6.367 KB      25 rows
. . imported "HR"."REGIONS"                              5.546 KB       4 rows
. . imported "SALES"."TEST"                              5.054 KB       1 rows
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "ADMIN"."SYS_IMPORT_FULL_01" successfully completed at Fri Jun 10 19:59:34 2022 elapsed 0 00:00:22

Verify tables in the PDB on Amazon RDS

Connect to the RDS instance and run the following command to verify the tables:

SQL> l
  1  SELECT p.PDB_ID, p.PDB_NAME, t.OWNER, t.TABLE_NAME
  2    FROM DBA_PDBS p, CDB_TABLES t
  3    WHERE p.PDB_ID > 2 AND
  4    t.owner not like '%SYS%' and  t.OWNER NOT IN ('ORDDATA','DBSFWUSER',
  5           'CTXSYS',
  6           'RDSADMIN','PUBLIC',
  7           'XDB',
  8           'OUTLN',
  9           'ORACLE_OCM',
 10           'DBSNMP',
 11           'DBUPTIME',
 12           'GSMADMIN_INTERNAL',
 13           'SVCSAMLMSRO',
 14*          'REMOTE_SCHEDULER_AGENT','PERFSTAT')
SQL> /

    PDB_ID PDB_NAME             OWNER      TABLE_NAME
---------- -------------------- ---------- ----------
         3 TESTDB6              HR         COUNTRIES
         3 TESTDB6              HR         REGIONS
         3 TESTDB6              HR         LOCATIONS
         3 TESTDB6              HR         DEPARTMENTS
         3 TESTDB6              HR         JOBS
         3 TESTDB6              HR         EMPLOYEES
         3 TESTDB6              HR         JOB_HISTORY
         3 TESTDB6              SALES      TEST

8 rows selected.
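
As an additional spot check, you can compare per-table row counts with the source. The following is a sketch; note that NUM_ROWS comes from optimizer statistics, so use COUNT(*) where an exact figure matters:

-- Approximate row counts from statistics; gather stats first or use COUNT(*) for exact values
SELECT owner, table_name, num_rows
FROM dba_tables
WHERE owner IN ('HR','SALES')
ORDER BY owner, table_name;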

Perform cutover

After data migration and validation are complete, perform the cutover with the following steps:

  1. Take a snapshot of the RDS for Oracle DB instance; it is about to serve traffic for the first time.
  2. Put your source database in read-only mode.
  3. Re-validate the data manually.
  4. Enable any triggers that are not enabled; you can use the following query to generate the statements for disabled triggers:
select 'alter trigger '||owner||'.'||trigger_name||' enable;' from dba_triggers where owner = '<OWNER_NAME>' and status = 'DISABLED';
  5. Get the state of the sequences and set the sequences on the target to the appropriate values (see the sketch after this list):
select sequence_owner,sequence_name,min_value,max_value,increment_by,last_number,cache_size from dba_sequences where sequence_owner = '<SCHEMA_NAME>';
  6. Verify that there is no traffic to the source Oracle database and redirect all application traffic to the RDS instance.
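
One common way to bring a sequence on the target past the source's LAST_NUMBER is to drop and recreate it. The following is a sketch; the sequence name and start value are assumptions:

-- Hypothetical example; choose START WITH above the source's LAST_NUMBER
-- and re-apply any grants on the sequence afterward
DROP SEQUENCE hr.employees_seq;
CREATE SEQUENCE hr.employees_seq START WITH 1001 INCREMENT BY 1 CACHE 20;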

Clean up

To remove all the components created by this solution, complete the following steps:

  1. Sign in to the Amazon Web Services Management Console.
  2. Choose the Region where your RDS for Oracle instance resides.
  3. On the CloudFormation console, choose your stack, choose Delete, and confirm the deletion.

Summary

In this post, we demonstrated how to migrate your on-premises pluggable or non-pluggable database to an Amazon RDS for Oracle database (a single-tenant architecture). You can perform these steps to migrate your databases to Amazon RDS for Oracle and take advantage of Amazon RDS managed services and features.

We encourage you to try this solution. For migrations that need near-zero downtime, consider Amazon Web Services DMS; for more information, see Getting started with Amazon Web Services Database Migration Service and Best Practices for Amazon Web Services Database Migration Service. For more information on Oracle database migration, refer to the guide Migrating Oracle Databases to the Amazon Web Services Cloud.


About the authors

Jeevith Anumalla is a Senior Data Architect with the Professional Services team at Amazon Web Services. He works as a data migration specialist and helps customers build data lake and analytics platforms.

Sagar Patel is a Senior Database Specialty Architect with the Professional Services team at Amazon Web Services. He works as a database migration specialist, providing technical guidance and helping customers migrate their on-premises databases to Amazon Web Services.
