
IS 4510 – Database Administration

Module – 2

Database Backup

Compiled by: Zafar Iqbal Khan
10/24/2014

Objectives

After completing this module, you will be able to:

• Outline the ways in which a database backup can be performed

• Distinguish between logical and physical backups

• Understand the export and import processes for Oracle backup


Database Backup

The whole database and the log are periodically copied onto a cheap storage medium such as magnetic tape or other large-capacity offline storage devices. In case of a catastrophic system failure, the latest backup copy can be reloaded from tape to disk and the system restarted.

Data from critical applications such as banking, insurance, stock market, and other similar databases is periodically backed up in its entirety and moved to physically separate safe locations.


Database Backup contd…

There are three standard methods and two types of backup available.

• Methods:
– Exports
– Offline backups
– Online backups

• Types:
– Logical backups
– Physical backups


Database Backup contd…

DB backup
– Physical backup
  – Online backup
  – Offline backup
– Logical backup
  – Exports


Physical Backups

Physical backups involve copying the files that constitute the database. These backups are also referred to as file system backups because they involve using operating system file backup commands. Oracle supports two different types of physical file backups: offline backups and online backups (also known as cold and hot backups, respectively).


Offline Backups

Consistent offline backups occur when the database has been shut down normally (that is, not due to instance failure) using the NORMAL, IMMEDIATE, or TRANSACTIONAL option of the SHUTDOWN command. While the database is offline, the following files should be backed up:
1. All data files
2. All control files
3. All archived redo log files
4. The init.ora file or server parameter file (SPFILE)
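As a minimal sketch (assuming a Windows host; the file paths here are illustrative, not part of this module), an offline backup might proceed as follows:

SQL> shutdown immediate;

C:\> copy d:\oracle\oradata\orcl\*.dbf e:\backup\orcl\
C:\> copy d:\oracle\oradata\orcl\*.ctl e:\backup\orcl\
C:\> copy d:\oracle\archive\*.arc e:\backup\orcl\

SQL> startup;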


Offline Backups contd…

Having all these files backed up while the database is closed provides a complete image of the database as it existed at the time it was closed. The full set of these files could be retrieved from the backups at a later date, and the database would be able to function.


Online Backups

• Online backups can be made for any database that is running in ARCHIVELOG mode. In this mode, the online redo logs are archived, creating a log of all transactions within the database.

• An online backup involves setting each tablespace into a backup state, backing up its datafiles, and then restoring the tablespace to its normal state.
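As a minimal sketch (assuming a tablespace named USERS with a single datafile at an illustrative path), the sequence for one tablespace would be:

SQL> alter tablespace users begin backup;

C:\> copy d:\oracle\oradata\orcl\users01.dbf e:\backup\orcl\

SQL> alter tablespace users end backup;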


Online Backups contd…

• The database can be fully recovered from an online backup and, with the help of archived redo log files, can be restored to any point in time before the failure.

• When the database is then opened, any committed transactions that were in the database at the time of the failure will have been restored, and any uncommitted transactions will have been rolled back.

• While the database is open, the following files can be backed up:
1. All data files
2. All archived redo log files
3. One control file, via the ALTER DATABASE BACKUP CONTROLFILE command
4. The server parameter file (SPFILE)
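For example, the control file can be backed up while the database is open with the ALTER DATABASE BACKUP CONTROLFILE command (the target path here is illustrative):

SQL> alter database backup controlfile to 'e:\backup\control.bkp';

SQL> alter database backup controlfile to trace;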


Online Backups contd…

• Online backup procedures are very powerful for two reasons. First, they provide full point-in-time recovery. Second, they allow the database to remain open during the file system backup.

• Even databases that cannot be shut down due to user requirements can still have file-system backups. Keeping the database open also keeps the System Global Area (SGA) of the database instance from being cleared when the database is shut down and restarted.

• Keeping the SGA memory from being cleared will improve the database’s performance because it will reduce the number of physical I/Os required by the database.


Data Pump Export and Import

• Introduced in Oracle 10g, Data Pump provides a server-based data-extraction and data-import facility.

• It is a significant architectural and functional upgrade over the original Export and Import utilities.

• It offers many powerful enhancements, such as the ability to stop and restart jobs, view the status of running jobs, and restrict the data that is exported and imported.


Data Pump Export and Import contd…

• Because Data Pump runs as a server process, it benefits users in several ways:
– The client process that starts the job can detach and later reconnect as required.
– It is faster than the original Export/Import because the data processing takes place on the server.
– Data Pump exports and loads can be parallelized, adding further performance gains.


Creating a Directory

• A Data Pump operation requires that you create a directory to store data and log files before starting the process.
• First create a folder with the desired name in Windows Explorer, then map it in Oracle with the following commands:

SQL> create directory datapumpEXIM as 'd:\datapump';

Directory created.

SQL> grant read, write on directory datapumpEXIM to HR;

Grant succeeded.

• You must log in as SYS with the SYSDBA privilege to perform this task.
• The directory can be created on the local server, a network server, or any other node attached to the network, provided it is accessible and read/write permission has been granted on it.
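To confirm the mapping, the directory can be queried from the data dictionary:

SQL> select directory_name, directory_path from dba_directories;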


Data Pump Export Options

• Oracle provides the expdp utility, which acts as the interface to Data Pump. A set of input parameters is supplied when a job is created, and there are five modes for an expdp export:
1. Full – exports all database data and metadata
2. Schema – exports data and metadata for a specific user
3. Tablespace – exports data and metadata for tablespaces
4. Table – exports data and metadata for tables and table partitions
5. Transportable Tablespace – exports metadata for specific tablespaces in preparation for transporting a tablespace from one database to another
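For instance, a schema-mode export of the HR user could be started as follows (the dump file and log file names are illustrative):

C:\> expdp hr/hr directory=datapumpEXIM schemas=hr dumpfile=hr_schema.dmp logfile=hr_exp.log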


Data Pump Export Options contd…

Parameter – Description
ATTACH – Connects a client session to a currently running Data Pump Export job.
COMPRESSION – Specifies which data to compress: ALL, DATA_ONLY, METADATA_ONLY, or NONE.
CONTENT – Filters what is exported: DATA_ONLY, METADATA_ONLY, or ALL.
DATA_OPTIONS – If set to XML_CLOBS, XMLType columns are exported in uncompressed CLOB format.
DIRECTORY – Specifies the destination directory for the log file and the dump file set.
DUMPFILE – Specifies the names and directories for dump files.
ENCRYPTION – Specifies what to encrypt: ENCRYPTED_COLUMNS_ONLY, METADATA_ONLY, or NONE.
ENCRYPTION_ALGORITHM – The algorithm used to perform the encryption: AES128, AES192, or AES256.
ENCRYPTION_MODE – Uses a password, an Oracle wallet, or both: DUAL, PASSWORD, or TRANSPARENT.
ESTIMATE – Determines the method used to estimate the dump file size: BLOCKS or STATISTICS.
ESTIMATE_ONLY – A Y/N flag that tells Data Pump whether to export the data or only estimate its size.
EXCLUDE – Specifies the criteria for excluding objects and data from the export.
FILESIZE – Specifies the maximum size of each export dump file.
FLASHBACK_SCN – The SCN for the database to flash back to during the export.
FLASHBACK_TIME – The timestamp for the database to flash back to during the export. FLASHBACK_TIME and FLASHBACK_SCN are mutually exclusive.
FULL – Tells Data Pump to export all data and metadata in a Full-mode export.
HELP – Displays a list of available commands and options.
INCLUDE – Specifies the criteria for which objects and data will be exported.
JOB_NAME – Specifies a name for the job; the default is system-generated.
LOGFILE – The name and optional directory of the export log file.
NETWORK_LINK – Specifies the source database link for a Data Pump job exporting a remote database.


Starting a Data Pump Export Job

• You can store your job parameters in a parameter file, referenced via the PARFILE parameter of expdp. For example, we can create a file named dp_rjb.par with the following entries:

directory=datapumpEXIM
dumpfile=metadata_only.dmp
content=metadata_only

• To initiate the Data Pump job, use the following command:

C:\> expdp hr/hr parfile=dp_rjb.par


Starting a Data Pump Export Job contd…

We can use multiple directories and dump files for a single Data Pump export. Within the DUMPFILE parameter setting, list the directory along with the filename, in this format:

DUMPFILE=directory1:file1.dmp,directory2:file2.dmp

Using multiple directories in the DUMPFILE parameter has two benefits: the Data Pump job can use parallel processes (via the PARALLEL parameter), and the dump file set can be spread out to wherever disk space is available. We can also use the substitution variable %U in the filename specification to automatically create multiple dump files that multiple processes can write to. Even if only one process is writing the dump file, using the %U substitution variable in combination with the FILESIZE parameter will limit the size of each dump file.
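As an illustrative combination of these parameters (the file size and degree of parallelism are arbitrary):

C:\> expdp hr/hr directory=datapumpEXIM dumpfile=exp%U.dmp parallel=4 filesize=2G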


Starting a Data Pump Import Job

• We can start a Data Pump import with the impdp utility provided with Oracle 11g. At the command prompt, enter:

C:\> impdp hr/hr directory=datapumpEXIM dumpfile=employee.dmp logfile=imp.log

• Here:
– hr/hr – the username/password
– datapumpEXIM – the directory created for export/import
– employee.dmp – the dump file created by the export process
– imp.log – the log file created by the import process
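As a further sketch, impdp can also remap objects during the import; for example, the REMAP_SCHEMA parameter (shown here with an assumed target schema HR_TEST and illustrative credentials) loads HR’s objects into another schema:

C:\> impdp system/password directory=datapumpEXIM dumpfile=employee.dmp remap_schema=hr:hr_test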


Stopping and Restarting Running Jobs

• Once we have started a Data Pump import job, we can close the client window we used to start the job. Because the job is server based, the import will continue to run. We can then attach to the job later, check its status, and alter it:

impdp hr/hr parfile=hr_dp_imp.par    (hr_dp_imp.par is a parameter file)

• Press CTRL-C to leave the log display, and Data Pump Import will return you to the import prompt:
Import>
• Exit to the operating system using the exit_client command:
Import> exit_client
• Later, we can restart the client and attach to the currently running job under our schema:
impdp hr/hr attach
• You can then issue the continue_client command to see the log entries as they are generated, or you can alter the running job:
Import> continue_client
• Not surprisingly, you can temporarily stop a job using the stop_job command:
Import> stop_job
• While the job is stopped, you can increase its parallelism via the PARALLEL option and then restart the job:
Import> start_job
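Putting these commands together, a typical attach-and-resume session might look like the following sketch (assuming the job was stopped earlier and runs under the HR schema):

C:\> impdp hr/hr attach
Import> parallel=4
Import> start_job
Import> continue_client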


Integration of Logical and Physical Backups

Method – Type – Recovery Characteristics

Data Pump Export – Logical – Can recover any database object to its status as of the moment it was exported.

Offline backups – Physical – Can recover the database to its status as of the moment it was shut down. If the database is run in ARCHIVELOG mode, you can recover the database to its status at any point in time.

Online backups – Physical – Can recover the database to its status at any point in time.


Integration of Logical and Physical Backups contd…

• Offline backups are the least flexible method of backing up the database if the database is running in NOARCHIVELOG mode. Offline backups are a point-in-time snapshot of the database. Also, because they are a physical backup, DBAs cannot selectively recover logical objects (such as tables) from them.

• Online backups, with the database running in ARCHIVELOG mode, allow you to recover the database to the point in time immediately preceding a system fault or a user error.

• Using a Data Pump Export-based strategy would limit you to recovering the data only as it existed the last time it was exported.

• A good backup strategy therefore combines both logical and physical backups: Data Pump Export validates that the data is logically sound, and physical backups verify that it is physically sound.
