“X” Platform Migrations – Challenges and Time Reduction Techniques
Raju Kanumury, Vice President
Database Administration Services
Innovate | Integrate | Operate
AGENDA
Requirements
Complexities
Oracle Migration Path
Customized Migration Path
Time Consuming Operations
Time Reduction Methods
Iterations
Testing Methods
Requirements | Complexities | Oracle Migration Path | Customized Migration Path

Scenario I
- Two SUN Solaris machines to four Linux machines
- Single node DB to two node RAC
- Separation of single node admin/CM tier into two Parallel Concurrent Processing nodes
- Single Forms/Web node to two Forms/Web nodes with Load Balancer

Scenario II
- Seven HP-UX machines to eighteen Linux machines
- Single node DB to six node RAC
- Separation of single node admin/CM tier into two Parallel Concurrent Processing nodes
- Four Forms/Web tiers to ten Forms/Web tiers with Shared Application Top
- Integration and failover capability for third party/external software
Scenario I
- OS: Solaris 8 to OEL 5.3
- DB: 8.1.7.4 to 10.2.0.4 (RAC)
- App: 11.5.4 to 11.5.10 CU2
- App: Latest Financial Family Pack
- 1.8 TB data migration to Linux
- Downtime allocated: 48 hrs

Scenario II
- OS: HP-UX 11.11 to OEL 5.3
- DB: Non-RAC to RAC
- 4.0 TB data migration to Linux
- JDBC driver updates for RAC compatibility of third party software
- Resource management of the DB by using services
- Downtime allocated: 40 hrs
Commonalities between Scenario I and II
- Single node DB to RAC DB with ASM
- Parallel Concurrent Processing (PCP) configuration
- Fixed and tight downtime because of operational requirements
- Anticipated increase in user load, since more countries were to be added as part of a global rollout
- Migration and functioning of customizations
- Performance of critical functionalities & processes
Complexities

Scenario I
- Multiple DB upgrades: 8.1.7.4 to 9.2.0.6, then 9.2.0.6 to 10.2.0.4
- Application upgrade: 11.5.4 to 11.5.10 CU2
- Export and import of 1.8 TB of data

Scenario II
- Export and import of a huge amount of data: 4.0 TB
- Shared Application Tier configuration
- Adding numerous nodes: 10 forms/web nodes and 2 CM nodes
- Third party integration that is crucial to most business functionalities
Oracle Migration Path – Scenario I

Source: DB Backup → DB Upgrade to 9.2.0.6 → APPS Upgrade 11.5.4 to 11.5.10.2 → DB Upgrade to 10.2.0.4 → Export Data
Target: Create Shell DB → Tech Stack Install/Copy → Import Data → Add Nodes Configuration → Parallel Concurrent Processing Configuration → Apply Financial Family Pack
Oracle Migration Path – Scenario II

Source: Clone POC2 DB/APPS from PROD → Prepare DB for migration (using prepared document) → Export Data
Target: Create Shell DB → Tech Stack Install/Copy → Import Data → Shared Apps Tier configuration → Add Nodes Configuration → Parallel Concurrent Processing Configuration → Third Party Integration
Oracle Migration Path Run Times – Time in Hrs

Scenario I
Major Activity                  Pre-Downtime  Downtime
Source DB Backup                              4
Source DB Upgrade (9.2.0.6)                   6
Source APP Upgrade                            18
Source DB Upgrade (10.2.0.4)                  6
Source Export                                 8
Target DB Shell                 8
Target Import                                 15
Target TechStack                4
Target AutoConfig                             2
Target Add Nodes                              4
Target Apply Patches                          6
Verification & Validation                     4
Total Time                      12            73

Scenario II
Major Activity                  Pre-Downtime  Downtime
Source DB Backup                              4
Prepare Source DB for Export                  2
Source Export                                 15
Target DB Shell                 8
Target Import                                 32
Target TechStack                4
Shared Appl Top & AutoConfig                  2
PCP Config                                    2
Add Nodes (10 forms/web nodes)                6
Third party configuration                     8
Verification & Validation                     4
Total Time                      12            75
Customized Migration Path – Scenario I

Source: Build Standby DB → DB Upgrade to 9.2.0.6 → Export Data
  Dropped from the source (migrated as a partial activity, Source -> Target): APPS Upgrade 11.5.4 to 11.5.10.2 and DB Upgrade to 10.2.0.4
Target: Create Shell DB → Fresh Install of 11.5.10.2 → Import Data → APPS Upgrade 11.5.4 to 11.5.10.2 (“D” driver) → Add Nodes Configuration → Parallel Concurrent Processing Configuration → Apply Financial Family Pack
Customized Migration Path – Scenario II

Source (pre-downtime): Build Standby DB → Prepare DB for migration (using prepared document) → Export BigTables & FNDLOBS → Export Metadata
Source (downtime): Apply Archive Logs → Export AllTables → Export Metadata
Target (pre-downtime): Create Shell DB → Tech Stack Install/Copy → Import Users → Import Big & FNDLOBS → Build Indexes → Import Procs etc
Target (downtime): Import AllTables → Import Procs etc → Shared Apps Tier configuration → Add Nodes Configuration → Parallel Concurrent Processing Configuration → Third Party Integration
Customized Migration Path Run Times – Time in Hrs

Scenario I
Major Activity                  Pre-Downtime  Downtime
Source Standby DB               6
Source App Prep                               2
Source DB Upgrade                             6
Source Export                                 3
Target Fresh Install            5
Target DB Shell                 8
Target Import                                 8
Target App Upgrade                            8
Target AutoConfig                             2
Target Add Nodes                              4
Target Apply Patches                          6
Verification & Validation                     4
Total Time                      19            43

Scenario II
Major Activity                              Pre-Downtime  Downtime
Source Standby DB                           8
Source Export Big Tables/Metadata           12
Prepare Source DB for Export                              2
Source Export All Tables/Metadata                         6
Target DB Shell                             8
Target Import – Users/Big Tables & Indexes  8
Target Import – AllTables/Sync Big Tables                 6
Target Import – Procs etc                                 3
Target Import – Const & Build Indexes                     7
Shared Appl Top & AutoConfig                              2
PCP Config                                                2
Add Nodes (10 forms/web nodes)                            2
Third party configuration                                 2
Verification & Validation                                 4
Total Time                                  36            36
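As a quick sanity check, the downtime reduction from the Oracle path to the customized path can be computed directly from the totals in the run-time tables:

```shell
# Downtime totals (hours) taken from the run-time tables above.
s1_oracle=73; s1_custom=43    # Scenario I: Oracle path vs customized path
s2_oracle=75; s2_custom=36    # Scenario II

echo "Scenario I  downtime saved: $((s1_oracle - s1_custom)) hrs"   # 30 hrs
echo "Scenario II downtime saved: $((s2_oracle - s2_custom)) hrs"   # 39 hrs
```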
Time Consuming Operations

Scenario I
- Application upgrade (“d” driver)
- DB upgrade

Scenario II
- Size of the DB
- Shared Application Tier
- Number of nodes to be configured
- Third party integrations
Time Consuming Operations | Time Reduction Methods | Iterations | Testing Methods
Time Reduction Methods – Areas to focus
- Backup Operation
- Export/Import Operations
- PCP Configuration
- Application Configuration

Analyze whether work can be performed ahead of downtime; push as many activities as possible into the pre-downtime window:
- Build Standby DB
- Sync tables by exporting ahead of downtime

Look for performance attributes or improvements that can be applied to an individual process:
- Number of parallel workers (adpatch & Data Pump)
- Data Pump performance patches
- Purge obsolete data

Based on DB size, break the process into logical elements so that the same process can be submitted in multiple threads:
- Break export and import into logical groups
- Customize the index creation process
Backup Operation

Problem
- Time taken for backup: depending on the size of the DB, backups may take considerable time
- For upgrades or conversions, restoration time must also be counted as part of the rollback window

Solution
- Create a Physical Standby ahead of the conversion and start applying logs; apart from applying a few logs after downtime starts, the majority of this work is pushed into the pre-downtime category
- Restoration time need not be considered, since the original PROD system remains intact
Export Operation

Problem
- Oracle-supplied parameter files run a full export with some user exclusions
- In proof-of-concept exports, a few big tables and FND_LOBS took considerable time to export
- Analysis showed that the top 10% of tables occupy 50 to 60% of the total DB size

Solution
- Identify and list the top 10 to 15 non-transactional tables (history and TL tables)
- Create MVIEW logs on those tables to keep track of changes
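The MVIEW-log step can be sketched as follows. The first table name is a hypothetical placeholder for one of the identified history tables (only FND_LOBS is named in the deck), and the script merely generates the SQL; it would be run against the source DB with sqlplus:

```shell
# Generate the SQL that sets up change tracking on the big tables exported
# ahead of downtime. XX.BIG_HISTORY_TAB is a placeholder table name.
cat > create_mview_logs.sql <<'EOF'
-- Changes made after the pre-downtime export land in MLOG$_<table>,
-- keyed by each table's primary key.
CREATE MATERIALIZED VIEW LOG ON xx.big_history_tab WITH PRIMARY KEY;
CREATE MATERIALIZED VIEW LOG ON applsys.fnd_lobs   WITH PRIMARY KEY;
EOF

# On the source DB (not executed here):
#   sqlplus apps/<password> @create_mview_logs.sql
echo "Wrote $(grep -c 'MATERIALIZED VIEW LOG' create_mview_logs.sql) log definitions"
```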
Export Operation (continued)

Solution
- Split the full export into multiple parts so that a few tables can be exported ahead of downtime
- Create the Shell DB, along with the required tablespaces, before downtime
- Export big tables, FND_LOBS, and metadata using multiple parameter files instead of one
- Exclude statistics from all parameter files
- Use enough parallel threads for a faster export, but limit them so they do not exceed the number of dump files the export generates
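A sketch of the split-export setup: two Data Pump parameter files, one for the big tables and FND_LOBS exported pre-downtime and one for the remainder during downtime. The directory object and the BIG_HISTORY_TAB table are assumptions, and the expdp runs themselves are shown as comments:

```shell
# Pre-downtime piece: big tables plus FND_LOBS, 8 parallel threads.
cat > exp_big.par <<'EOF'
DIRECTORY=MIG_DMP
DUMPFILE=exp_big_%U.dmp
LOGFILE=exp_big.log
TABLES=APPLSYS.FND_LOBS,XX.BIG_HISTORY_TAB
PARALLEL=8
EXCLUDE=STATISTICS
EOF

# Downtime piece: everything else; the pre-exported tables are excluded.
cat > exp_rest.par <<'EOF'
DIRECTORY=MIG_DMP
DUMPFILE=exp_rest_%U.dmp
LOGFILE=exp_rest.log
FULL=Y
PARALLEL=8
EXCLUDE=STATISTICS
EXCLUDE=TABLE:"IN ('FND_LOBS','BIG_HISTORY_TAB')"
EOF

# Keep PARALLEL no higher than the number of dump files %U will generate.
#   expdp system/<password> PARFILE=exp_big.par    # ahead of downtime
#   expdp system/<password> PARFILE=exp_rest.par   # during downtime
```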
Export Operation – Time Lines
Import Operation

Problem
- Oracle-supplied parameter files run a full import
- In proof-of-concept imports, the majority of the time was spent building indexes and primary key constraints
- Data Pump serializes activities such as index creation and procedure compilation, which can consume a lot of time

Solution
- Split the full import into multiple parts so that already-exported tables can be imported ahead of downtime
- Develop a custom process to sync the imported tables between source and target based on the changes recorded in the MVIEW logs
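The sync process itself is not spelled out in the deck; a minimal sketch, assuming a table XX.BIG_HISTORY_TAB with primary key ID, an MVIEW log created before the early export, and a DB link SRC back to the source:

```shell
# Generate the downtime sync: delete-and-recopy every row whose primary key
# appears in the source MVIEW log, i.e. rows changed after the early export.
cat > sync_big_tables.sql <<'EOF'
DELETE FROM xx.big_history_tab
 WHERE id IN (SELECT DISTINCT id FROM mlog$_big_history_tab@SRC);

INSERT INTO xx.big_history_tab
 SELECT * FROM xx.big_history_tab@SRC
  WHERE id IN (SELECT DISTINCT id FROM mlog$_big_history_tab@SRC);

COMMIT;
EOF

# Run on the target during downtime (not executed here):
#   sqlplus apps/<password> @sync_big_tables.sql
```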
Import Operation (continued)

Solution
- As a first pass, import users, table structures, and data only; this ensures the sync process will not affect any other data
- Import the other DB elements (views, triggers, procedures, etc., but not constraints) as soon as the data imports complete
- Exclude indexes in all import parameter files
- Customize index creation by building indexes outside Data Pump: use an automated procedure to load the index DDL from the dump file into a table, and write code so multiple indexes can be created in parallel
- Run the constraint import in parallel with the external index creation
- Use enough parallel threads for a faster import, but limit them so they do not exceed the number of dump files the export generated
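The externalized index build can be sketched as below. On a real system the DDL would come from impdp with SQLFILE; here a stand-in DDL file takes its place so the split-and-parallelize step is demonstrable, and the sqlplus worker invocations are left as comments:

```shell
# Real extraction step (not run here):
#   impdp system/<password> DIRECTORY=MIG_DMP DUMPFILE=exp_big_%U.dmp \
#         INCLUDE=INDEX SQLFILE=indexes.sql

# Stand-in DDL so the splitting below has something to work on.
printf 'CREATE INDEX ix%d ON xx.big_history_tab(col%d);\n' 1 1 2 2 3 3 4 4 > indexes.sql

# One statement per chunk here; real runs would balance chunks by index size.
split -l 1 indexes.sql idx_chunk_

for f in idx_chunk_*; do
  # Each chunk gets its own session:  sqlplus apps/<password> @"$f" &
  echo "worker for $f: $(cat "$f")"
done
# wait   # constraints can be imported in parallel with these builds
```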
Import Operation – Time Lines
Oracle Suggested Import: full import ran 32 hours (03:00 day 1 to 11:00 day 2)

Custom Pre-Downtime Import Activities (4/4/2010, 8:00 AM – 4:00 PM):
- Import Users: 08:00 – 08:25
- Import BigTables: 08:25 – 13:00
- Import FND_LOBS: 08:25 – 11:55
- Build Indexes (Big Tables): 13:00 – 16:00

Custom Downtime Import Activities (03:00 – 19:00):
- Import Users: 03:00 – 03:25
- Import AllTables: 03:00 – 09:00
- Import Procs: 09:00 – 11:59
- Build Indexes: 12:00 – 19:00
- Import Constraints: 12:00 – 16:00

Downtime Savings: custom import done at 19:00 vs 11:00 the next day
Application/PCP Configuration

Problem
- The code tree needs to be copied from the source
- A new tech stack needs to be installed using RapidWiz, and developer patch sets need to be applied
- Environment files and context files need to be modified to reflect the correct configuration and instance
- PCP settings need to be enabled; profile options must be set correctly for PCP, and managers must be updated with the right primary and secondary nodes

Solution
- Implement a code and patch freeze; one week before the go-live date is preferred
- During that week, conduct a dry run on the future production infrastructure, simulating the whole set of go-live activities
Application/PCP Configuration (continued)

Solution
- Use the same naming conventions, ports, directories, etc. that will be used in future production
- After the dry-run configuration is complete, test all important components and functionalities
- Upon successful testing, download the manager data using FNDLOAD
- Create scripts to update the database components needed for PCP configuration, such as certain profile values
- Preserve all components except the database; drop the database and recreate the shell database so it is ready for the go-live activity
- During go-live, execute only the database updates, the loads, and AutoConfig
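The FNDLOAD download/upload pair might look like the sketch below; the .lct configuration file and entity name depend on the entity and release, so they are left as placeholders rather than assumed:

```shell
# Preserve the tested manager setup from the dry run, then load it into the
# rebuilt go-live database. LCT and <ENTITY> are placeholders, not verified names.
LCT='$FND_TOP/patch/115/import/<manager_entity>.lct'
LDT=managers_dryrun.ldt

# On the dry-run instance:
echo "FNDLOAD apps/<password> 0 Y DOWNLOAD $LCT $LDT <ENTITY>"
# On the recreated go-live instance:
echo "FNDLOAD apps/<password> 0 Y UPLOAD $LCT $LDT"
```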
Application/DB Upgrade

Problem
- Upgrades are usually performed by applying the relevant maintenance packs, which are big; extraction and verification take time, and the effort grows if the customer has MLS
- Of the three drivers, the “d” driver can take a long time to complete, depending on the products the customer uses
- Some conversion programs turn out to be the main culprits
- DB upgrades spend a lot of time compiling objects

Solution
- Understand the products the customer uses and the critical functions within those processes
Application/DB Upgrade (continued)

Solution
- Suggest that the customer purge any unused historical data related to these products
- In the proof-of-concept run, identify the workers that took significant time and tune their SQL or code
- Create custom indexes based on the logic, and drop them after the process completes
- Make sure archiving is turned off on the DB while maintenance packs are applied
- Make sure the statistics on the objects being used are up to date
- Use parallel compile options to speed up the DB upgrade
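The parallel-compile suggestion can be sketched with the standard UTL_RECOMP package; the thread count of 8 is an assumption to be sized to the host, and the script is only generated here:

```shell
cat > recompile_parallel.sql <<'EOF'
-- Recompile all invalid objects using 8 parallel recompilation jobs
-- (thread count is an assumption; size it to the CPU count of the host).
EXEC UTL_RECOMP.recomp_parallel(threads => 8);

-- Confirm nothing is left invalid after the upgrade.
SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';
EOF

# Run as SYSDBA after the DB upgrade (not executed here):
#   sqlplus / as sysdba @recompile_parallel.sql
```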
Iteration - I
Iteration 1 (Proof of Concept) – Hours Taken: 71; DB Nodes: 1; App Nodes: 1

Source: Clone POC DB/APPS from PROD → Read Oracle migration notes → Prepare a document with the detailed steps needed for migration → Prepare DB for migration → Export Data
Target: Create Shell DB → Tech Stack Install/Copy → Import Data → Shared Apps Tier configuration → Third Party Integration
Iteration - II
Iteration 2 (Verify Steps & Fine-Tune Processes) – Hours Taken: 54; DB Nodes: 2; App Nodes: 4

Source: Clone POC2 DB/APPS from PROD → Prepare DB for migration (using prepared document) → Fine-tune DB & export parameters → Export Data
Target: Create Shell DB → Tech Stack Install/Copy → Import Data → Shared Apps Tier configuration → Add Nodes Configuration → Parallel Concurrent Processing Configuration → Third Party Integration
Iteration - III
Iteration 3 (Customize Time-Consuming Processes) – Hours Taken: 42; DB Nodes: 3; App Nodes: 4

Source: Clone POC3 DB/APPS from PROD → Prepare DB for migration (using prepared document) → Export BigTables & FNDLOBS → Export Metadata → Export Data
Target: Create Shell DB → Tech Stack Install/Copy → Import Users → Import Big & FNDLOBS → Build Indexes → Import Data → Shared Apps Tier configuration → Add Nodes Configuration → Parallel Concurrent Processing Configuration → Third Party Integration
Iteration - IV
Iteration 4 (Parallelize Operations within Processes) – Hours Taken: 36; DB Nodes: 3; App Nodes: 4

Source (pre-downtime): Clone POC4 DB/APPS from PROD → Prepare DB for migration (using prepared document) → Export BigTables & FNDLOBS → Export Metadata
Source (downtime): Export AllTables → Export Metadata
Target (pre-downtime): Create Shell DB → Tech Stack Install/Copy → Import Users → Import Big & FNDLOBS → Build Indexes → Import Procs etc
Target (downtime): Import AllTables → Import Procs etc → Shared Apps Tier configuration → Add Nodes Configuration → Parallel Concurrent Processing Configuration → Third Party Integration
Iteration - V
Iteration 5 (Parallelize Operations within Processes) – Hours Taken: 32; DB Nodes: 6; App Nodes: 12

Pre-Downtime Source: Prepare DB for migration (using prepared document) → Create materialized views on the tables exported → Export BigTables & FNDLOBS → Export Metadata → Export Statistics from Source
Pre-Downtime Target: Create Shell PROD DB → Tech Stack Install/Copy → Import Users → Import BigTables & FNDLOB Tables → Build Indexes → Import Procs etc → Shared Apps Tier configuration → Add Nodes Configuration → Parallel Concurrent Processing Configuration → Third Party Integration
Get Target Ready for Downtime: Preserve Tech Stack & Code Trees → Drop PROD DB on Target → Create Shell PROD DB → DB Updates related to PCP
Downtime Source: Export AllTables → Export Metadata
Downtime Target: Import Users → Import AllTables → Import Procs etc → Build Indexes on imported tables → Import Statistics into New PROD DB → Run AutoConfig to configure nodes → Third Party Integration
Methods Used – Functionality Testing
- Can be performed by automated tools such as LoadRunner or Oracle Application Testing Suite, using prewritten test scripts
- Can also be performed by super users to assure that the main functionality is working
- Should be part of iterations 1 and 3

Performance Testing
- If the customer has a tool such as LoadRunner to simulate business functionalities, the testing is more scientific; customers without such tools need manual effort, and testing is limited to critical business functionalities
- Online transactions and batch jobs/processes can be timed in the existing PROD system and then compared against the migrated system
Methods Used – Performance Testing (continued)
- Any process whose elapsed time differs between the existing PROD and the migrated system has to be traced
- Tracing for online transactions can be done with forms trace, whereas for batch programs it needs to be turned on at the program level
- All trace files can be analyzed with either TKPROF or Trace Analyzer to identify the issues
- This test needs to be performed in iterations 3 and 4
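A typical TKPROF invocation for that trace comparison, with the trace file name as a placeholder; sorting by execute and fetch elapsed time puts the slowest statements first:

```shell
TRACE=prod_ora_12345.trc          # placeholder trace file name

# Not executed here; requires an Oracle client installation.
#   tkprof "$TRACE" "${TRACE%.trc}.rpt" sort=exeela,fchela sys=no
echo "tkprof $TRACE ${TRACE%.trc}.rpt sort=exeela,fchela sys=no"
```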
Failover Testing
- This test mainly applies to systems with multiple nodes and failover capabilities, e.g., RAC databases and Parallel Concurrent Processing (PCP)
Methods Used – Failover Testing
- RAC database failover
  - SQL*Plus session failover (selects only)
  - Application functionality with a node down
- PCP failover
  - A couple of DB nodes down
  - CM node down
  - CM managers failing over from primary to secondary
- Load balancer failover
  - Shutdown of one or multiple nodes of the APPS tier
Methods Used – Load/Capacity Testing
- This testing requires automated tools to simulate the load; replicating it manually is very hard
- It is usually done in batches with different sets of users covering all business functionalities, to reproduce a realistic scenario
  - Batch sizes of concurrent users: 200, 400, 600, 1200, and 2000
  - Active application nodes: 1, 2, 4, 6, and 8
  - Active DB nodes: 1, 2, 4, and 6
- The following statistics are collected on both the application and database tiers:
  - CPU/memory/disk utilization
  - Load average
  - Functionality or screen timings
Thank You
Q&A
“X” Platform Migrations