8/14/2019 wk-10-11
1/19
PRESENTED BY S.HAYAT 1
Concurrency Control
Problem: In a multi-user environment, simultaneous access to data can result in interference and data loss.

Solution: Concurrency control, the process of managing simultaneous operations against a database so that data integrity is maintained and the operations do not interfere with each other in a multi-user environment.
Concurrency Control Problems

1. Lost Updates Problem
2. Inconsistent Read Problem

Lost Updates Problem
The most common problem encountered when multiple users attempt to update a database without adequate concurrency control is that of lost updates. The figure below shows the lost update problem.
Figure: Lost Update
Simultaneous access causes updates to cancel each other.
A similar problem is the inconsistent read problem.
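The lost update described above can be sketched as a fixed interleaving of two users' read-modify-write steps on a shared balance. This is a hypothetical illustration of the schedule, not DBMS code; the amounts are invented for the example:

```python
# Sketch of the lost update problem: two users interleave their
# read-modify-write steps on a shared balance with no locking at all.
balance = 100

# Both users read the current balance before either one writes.
user_a_copy = balance          # UserA reads 100
user_b_copy = balance          # UserB reads 100

user_a_copy += 50              # UserA computes 150
user_b_copy -= 30              # UserB computes 70

balance = user_a_copy          # UserA writes 150
balance = user_b_copy          # UserB writes 70, overwriting UserA's update

# UserA's +50 update is lost: the correct result would have been 120.
print(balance)                 # prints 70
```

Because each user wrote back a value computed from a stale read, the last writer silently erases the other's work.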
Inconsistent Read Problem
This problem occurs when one user reads data that have been partially updated by another user. The read will be incorrect and is sometimes referred to as a dirty read or an unrepeatable read. The following table illustrates the inconsistent read problem.
Time  T3                  T4                  balx
t1                        begin_transaction   100
t2                        read(balx)          100
t3                        balx = balx + 100   100
t4    begin_transaction   write(balx)         200
t5    read(balx)                              200
t6    balx = balx - 10    rollback            100
t7    write(balx)                             190
t8    commit                                  190
This occurs when one transaction can see intermediate results of another transaction before it has committed.
T4 updates balx to 200, but it aborts, so balx should be back at its original value of 100.
T3 has read the new value of balx (200) and uses that value as the basis of a 10 reduction, giving a new balance of 190 instead of 90.
The problem is avoided by preventing T3 from reading balx until after T4 commits or aborts.
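The schedule can be replayed step by step in code. This is an illustrative sketch: T4's rollback restores the saved before-image, but T3 has already read the uncommitted value and bases its update on it:

```python
# Sketch of the dirty read schedule: T4 updates balx, T3 reads the
# uncommitted value, and then T4 rolls back.
balx = 100

# T4's work
before_image = balx            # saved so that rollback can restore it
t4_local = balx                # t2: T4 reads 100
t4_local = t4_local + 100      # t3: T4 computes 200
balx = t4_local                # t4: T4 writes 200 (still uncommitted)

# T3's work
t3_local = balx                # t5: T3 reads the dirty value 200

balx = before_image            # t6: T4 rolls back, balx is 100 again
t3_local = t3_local - 10       # t6: T3 computes 190 from the dirty read
balx = t3_local                # t7: T3 writes 190
                               # t8: T3 commits

# The committed balance is 190, but the correct result is 100 - 10 = 90.
print(balx)                    # prints 190
```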
Figure: Updates with locking for concurrency control
This prevents the lost update problem.
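A minimal sketch of the idea: if each user must hold an exclusive lock across the whole read-modify-write sequence, a second update can no longer overwrite the first. Here Python's `threading.Lock` stands in for a record lock; the amounts are the same invented ones as before:

```python
import threading

# Shared balance protected by a lock playing the role of a record lock.
balance = 100
lock = threading.Lock()

def update(amount):
    global balance
    with lock:                     # acquire the lock before reading
        local = balance            # read
        local += amount            # modify
        balance = local            # write, then release the lock

# UserA adds 50 and UserB subtracts 30 concurrently; because the lock is
# held across each read-modify-write, neither update can be lost.
a = threading.Thread(target=update, args=(50,))
b = threading.Thread(target=update, args=(-30,))
a.start(); b.start()
a.join(); b.join()

print(balance)   # always 120, regardless of thread scheduling
```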
Locking Mechanisms

Locking level:
- Database: used during database updates
- Table: used for bulk updates
- Block or page: very commonly used
- Record: locks only the requested row; fairly commonly used
- Field: requires significant overhead; impractical

Types of locks:
- Shared lock: read but no update permitted. Used when just reading, to prevent another user from placing an exclusive lock on the record.
- Exclusive lock: no access permitted. Used when preparing to update.
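The two lock types obey a simple compatibility rule: a shared lock is compatible with other shared locks, while an exclusive lock is compatible with nothing. A hypothetical grant-check sketch (the function name and "S"/"X" codes are invented for illustration):

```python
# Sketch of a lock manager's compatibility check for shared ("S") and
# exclusive ("X") locks on a single record. `held` lists the lock modes
# that other transactions currently hold on that record.
def can_grant(requested, held):
    if requested == "S":
        # A shared (read) lock is blocked only by an exclusive lock.
        return "X" not in held
    if requested == "X":
        # An exclusive (update) lock requires no other locks at all.
        return len(held) == 0
    raise ValueError("unknown lock mode: " + requested)

# Many readers may share a record...
print(can_grant("S", ["S", "S"]))   # True
# ...but a writer must wait for all of them,
print(can_grant("X", ["S"]))        # False
# and a reader must wait for a writer.
print(can_grant("S", ["X"]))        # False
```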
Deadlock

An impasse that results when two or more transactions have locked common resources, and each waits for the other to unlock their resources.

Figure: A deadlock situation
UserA and UserB will wait forever for each other to release their locked resources!
Managing Deadlock

Deadlock prevention:
- Lock all records required at the beginning of a transaction
- Two-phase locking protocol: a growing phase, in which locks are acquired, followed by a shrinking phase, in which locks are released
- It may be difficult to determine all needed resources in advance

Deadlock resolution:
- Allow deadlocks to occur
- Provide mechanisms for detecting and breaking them, such as a resource usage matrix
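Deadlock detection is commonly framed as finding a cycle in a wait-for graph, where an edge from one transaction to another means the first is waiting for a lock the second holds. A hypothetical sketch (names invented for illustration):

```python
# Sketch of deadlock detection on a wait-for graph: an edge A -> B means
# "transaction A is waiting for a lock held by transaction B". A deadlock
# exists exactly when the graph contains a cycle.
def has_deadlock(wait_for):
    visited, on_path = set(), set()

    def visit(txn):
        if txn in on_path:            # back to a node on the current path: cycle
            return True
        if txn in visited:
            return False
        visited.add(txn)
        on_path.add(txn)
        for other in wait_for.get(txn, []):
            if visit(other):
                return True
        on_path.remove(txn)
        return False

    return any(visit(t) for t in wait_for)

# UserA waits for UserB and UserB waits for UserA: deadlock.
print(has_deadlock({"UserA": ["UserB"], "UserB": ["UserA"]}))   # True
# A simple waiting chain with no cycle is not a deadlock.
print(has_deadlock({"T1": ["T2"], "T2": ["T3"]}))               # False
```

Breaking the deadlock then amounts to choosing one transaction on the cycle as a victim and rolling it back.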
Versioning

An optimistic approach to concurrency control, used instead of locking.
The assumption is that simultaneous updates will be infrequent.
- Each transaction can attempt an update as it wishes
- The system will reject an update when it senses a conflict, using rollback and commit for this
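The conflict check can be sketched with version numbers: each record carries a version; an update succeeds only if the version has not changed since the transaction read it, and is otherwise rejected so the transaction can roll back and retry. The record layout and function name here are illustrative assumptions:

```python
# Sketch of optimistic concurrency control using a version number.
record = {"balance": 100, "version": 1}

def try_update(read_version, new_balance):
    """Commit only if no one else updated the record since we read it."""
    if record["version"] != read_version:
        return False                    # conflict detected: reject (rollback)
    record["balance"] = new_balance
    record["version"] += 1              # bump the version on every successful write
    return True

# Two transactions read the record at the same version...
v_seen_by_a = record["version"]
v_seen_by_b = record["version"]

# ...UserA commits first,
print(try_update(v_seen_by_a, 150))     # True
# ...so UserB's attempt is rejected and must retry with fresh data.
print(try_update(v_seen_by_b, 70))      # False
print(record["balance"])                # 150
```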
Use of Versioning
Better performance than locking.
Database Recovery & Its Techniques

Mechanism for restoring a database quickly and accurately after loss or damage; in other words, the process of restoring the database to the correct state after loss or damage.
Recovery Techniques/facilities:
Backup Facilities
Journalizing Facilities
Checkpoint Facility
Recovery Manager
Back-up facilities: The DBMS should provide back-up facilities that produce a back-up copy (or save) of the entire database plus related files and journals. Each DBMS normally provides a COPY utility for this purpose. In addition to the database files, the back-up facility should create a copy of related database objects, including the repository (or system catalog), database indexes, source libraries, and so on.
A back-up copy should be produced at least once per day. The copy should be stored in a secured location where it is protected from loss or damage.
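A daily backup can be as simple as copying the database files to a dated location. A minimal sketch using Python's standard library; the file and directory names are hypothetical:

```python
import shutil
import tempfile
from datetime import date
from pathlib import Path

def backup_database(db_path, backup_dir):
    """Copy the database file into backup_dir under a dated name."""
    backup_dir = Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    target = backup_dir / f"{date.today().isoformat()}-{Path(db_path).name}"
    shutil.copy2(db_path, target)      # copy2 also preserves timestamps
    return target

# Demo with a throwaway file standing in for the database.
work = Path(tempfile.mkdtemp())
db = work / "mydb.dat"
db.write_text("database contents")
copy = backup_database(db, work / "backups")
print(copy.read_text())                # prints "database contents"
```

In practice the COPY utility of the DBMS itself would be used, since it understands the database's own file formats and consistency requirements.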
Journalizing facilities: A DBMS must provide journalizing facilities to produce an audit trail of transactions and database changes. In the event of a failure, a consistent database state can be re-established using the information in the journals together with the most recent backup copy.

Transaction log: contains a record of the essential data for each transaction, including the transaction code or identification, the action or type of transaction, the time of the transaction, the user ID, input data values, tables and records accessed, records modified, and possibly the old and new field values.
Database change log: contains before- and after-images of records that have been modified by transactions.
Before-image: a copy of a record before it has been modified.
After-image: a copy of the same record after it has been modified.
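The change log can be sketched as a list of before-image/after-image pairs: undoing a change restores the before-image, while redoing one re-applies the after-image. The record layout and function names here are invented for illustration:

```python
# Sketch of a database change log holding before- and after-images.
change_log = []

def apply_update(db, record_id, new_values):
    """Update a record and log its before- and after-images."""
    change_log.append({
        "record_id": record_id,
        "before": dict(db[record_id]),   # before-image: copy of the prior state
        "after": dict(new_values),       # after-image: the state being written
    })
    db[record_id] = dict(new_values)

def undo_last(db):
    """Roll back the most recent change using its before-image."""
    entry = change_log.pop()
    db[entry["record_id"]] = dict(entry["before"])

db = {"acct1": {"balance": 100}}
apply_update(db, "acct1", {"balance": 200})
print(db["acct1"]["balance"])   # 200
undo_last(db)
print(db["acct1"]["balance"])   # 100, restored from the before-image
```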
From the backup and logs, databases can be restored in case of damage or loss.

Figure: Database audit trail
Checkpoint Facility:
The DBMS periodically refuses to accept new transactions so that the system is in a quiet state, and the database and transaction logs are synchronized. The DBMS then writes a special record (called a checkpoint record) to the log file, which is like a snapshot of the state of the database. Checkpoints should be taken frequently; when failures do occur, it is often possible to resume processing from the most recent checkpoint. Only a few minutes of processing work must then be repeated, compared with several hours for a complete restart of the day's processing.
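Resuming from the most recent checkpoint can be sketched as scanning the log for the last checkpoint record and replaying only the entries written after it. The log format below is an invented illustration:

```python
# Sketch of checkpoint-based recovery: only log entries written after the
# most recent checkpoint record need to be replayed.
log = [
    ("update", "acct1", 150),
    ("update", "acct2", 80),
    ("checkpoint", None, None),   # database state snapshot taken here
    ("update", "acct1", 175),
]

# State as of the last checkpoint (would be restored from the snapshot).
db = {"acct1": 150, "acct2": 80}

# Find the position of the most recent checkpoint record.
last_cp = max(i for i, entry in enumerate(log) if entry[0] == "checkpoint")

# Replay only the entries that come after the checkpoint.
for action, record_id, value in log[last_cp + 1:]:
    if action == "update":
        db[record_id] = value

print(db)   # {'acct1': 175, 'acct2': 80}
```

Everything before the checkpoint is already reflected in the snapshot, which is why only the tail of the log must be reprocessed.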
Recovery manager: a module of the DBMS that restores the database to a correct condition when a failure occurs and then resumes processing user requests.