

University of Maryland University College
DBST 652 Advanced Relational/Object-Relational Database Systems
Final Examination

NAME

Start Date
Due Date

INSTRUCTIONS

This examination consists of 10 equally weighted questions. Answer all the questions.

Your answers on this exam should be your original work and not copied verbatim from the text or other sources. Originality, depth of analysis, and illustrations will be taken into account in grading this final exam.

Make sure you read the entire question and include all the information that is requested to get full credit.

While composing your answers, be VERY careful to cite your sources. It is easy to get careless and forget to footnote a source. Remember, failure to cite sources constitutes an academic integrity violation. Use APA style for citations and references.

The test is due no later than 11:59 p.m. Eastern Daylight Time on Sunday, April 25, 2010. Your answers should be contained in a Microsoft Word or compatible format document uploaded to your WebTycho Assignments folder. Be sure to put your name on the front of your exam.


<<<<<<<<<<<<<<<<<<<<<<QUESTIONS BEGIN>>>>>>>>>>>>>>>>>>>>>>>>>>

Q1. DATA MODELING (1) Does the relational model adequately model today’s environment in terms of unstructured data, multimedia data, streaming data and particularly the data on the web? Discuss pros and cons of using the relational model and related DBMSs to do so. In particular, point out the shortcomings of the relational model and the difficulties of dealing with these data types in the relational model. What approaches are being proposed to deal with modeling of unstructured data for storage and management in traditional (including relational and object-oriented) database environments? Answer the question with some illustrative examples.

No, the relational model does not adequately model today’s environment in terms of unstructured data, multimedia data, streaming data, and particularly data on the web. Although the relational model has been quite successful in developing the database technology required for traditional business applications, it falls short when more complex database applications must be designed and implemented. The complex datasets prominent in today’s society include unstructured data, multimedia data, streaming data, geographic data, image data, and data on the web (Elmasri, 2007).

The relational model was designed to handle rigidly structured data in the form of tables, columns, and rows. Newer applications have requirements and characteristics different from those of traditional business applications: more complex structures for objects, longer-duration transactions, new data types for storing images or large textual items, and the need to define nonstandard application-specific operations (Elmasri, 2007).

The object-oriented model was proposed to meet the needs of these more complex applications. It offers the flexibility to handle some of these requirements without being limited by the data types and query language available in traditional database systems. Yet even with its technical advantages, the OO model has not really taken hold in the industry; the RDBMS is still dominant despite its shortcomings. Today, enhancements are instead being made to the relational model in the form of object-oriented extensions.

Commercial DBMSs are moving toward the hybrid object-relational database management system (ORDBMS). This kind of DBMS offers the best of both worlds, keeping the standard structured features needed by traditional business applications while allowing the use of complex data. The Geographic Information System is a great example of a technology that has benefitted from the ORDBMS concept: it takes full advantage of structured tables to handle topological attribute data while allowing objects and images to visually represent those topologies (Steiniger, 2009).

References

Elmasri, R., & Navathe, S. B. (2007). Fundamentals of database systems (5th ed.). Boston: Addison-Wesley.

Steiniger, S. W. (2009, May 08). GIS Software - A description in 1000 words. Retrieved April 10, 2010, from www.geo.unizh.ch: http://www.geo.unizh.ch/publications/sstein/gissoftware_steiniger2008.pdf


Q2. DATA MODELING (2) Many consider the relational data model the lingua franca of data today in both business and government, largely because a significant share of business transactions are stored and managed using the relational data model and RDBMSs such as Oracle, DB2, and SQL Server.

(a) Comment on the pros and cons of the relational data model in terms of data modeling, data storage, data processing, data structure, and data transportation.

(b) Show two examples of relational applications that demonstrate the advantages of using the relational data model over legacy models as the preferred format for representing and processing data.

(c) Identify and explain three unique challenges in developing a relational DBMS compared with any of the legacy DBMS.

(d) Describe your ideas or solutions to address the above three challenges.

The relational data model is helped by a very robust infrastructure in terms of the commercial DBMSs that have been designed to support it. It permits the database designer to create a consistent, logical representation of information. This consistency is achieved by including declared constraints in the database design, which helps maintain data integrity and reduce redundancy. The relational data model falls short, however, when dealing with the challenges of newer applications. The problems mostly stem from the inability of the RDBMS to store a variety of complex data types. These unsupported data types include text in CAD publishing, satellite imagery, nonconventional and unstructured data, genome information, audio and video files, and traffic data (Elmasri, 2007).

Hierarchical and network DBMSs, known as legacy DBMSs, were introduced in the 1960s. These DBMSs at one time modeled the business needs of organizations well. However, because of their heavy use of pointers, the rigidity of their built-in hierarchical access paths, and their inability to handle objects and complex data types, these systems are gradually being phased out. The relational data model has become the predominant choice for the storage of information in new databases used for financial records, manufacturing and logistical information, personnel data, and much more. Relational databases have often replaced legacy hierarchical and network databases because they are easier to understand and use (Elmasri, 2007).

There are three main challenges faced when migrating from a legacy system to an RDBMS. First, organizations have invested man-centuries in the development of their mission-critical databases. Second, the lost opportunity cost of re-implementing systems is enormous, so even if people knew how they might implement them better in a relational database, they have not got the time, money, resources, or skills to do it. Last, the learning curve for a new system is a cost to the organization (Burleson Corporation, 2010).

References

Elmasri, R., & Navathe, S. B. (2007). Fundamentals of database systems (5th ed.). Boston: Addison-Wesley.

Burleson Corporation. (2010). Legacy bumps slow trip to relational. Retrieved April 25, 2010, from http://www.dba-oracle.com

Q3. DATABASE DESIGN


A. Assume that a sizeable application exists using a legacy database management system in an organization, which took over 20 man-years of effort to design, implement, test, and deploy and has been running for the last 20 years. A committee has been assigned to evaluate whether this application should be moved over to a relational environment, and you are asked to prepare an “issues to consider” document for it. List a series of issues that you will consider to come up with a go vs. no-go decision on migrating the application. (Be sure to consider user, data, programming, testing, and deployment related issues.)

B. Assuming that the decision is made to migrate the existing data and application from the legacy to the relational environment, describe the series of steps that the team will go through to convert to the relational environment; describe each step briefly and comment on the difficulties involved. Do you feel the technology today provides enough tool support for these steps? Name some tools and comment on their pluses and minuses.

C. “The relational model loses semantics of the application in mapping from a conceptual schema to relational”- to illustrate this point, do the following. Start with a relational design of any application of your choice and reverse engineer to an EER schema and illustrate how the EER schema explains the application much easier compared to the relational model.

The need to work with legacy data constrains a development team. It reduces flexibility because it is not easy to manipulate the source data schema. The legacy database application will have its own constraints in the form of other applications that work with it. These constraints will be transferred to the design team because they will have to figure out how to best implement these applications in the new relational database application. Legacy data is often difficult to work with because of a combination of quality, design, architecture, or political issues. Things that we should consider before accepting or denying the need to migrate a Legacy DBMS to a Relational DBMS follow:

1. What data quality challenges will be encountered?
2. What database design problems will be encountered?
3. What data architecture problems will be encountered?
4. What process-related challenges will be faced?
5. How will we work around the legacy file management system (Ambler, 2009)?

Assuming that the problems listed above are workable, there are a series of steps in the database application system life cycle.

1. System definition: the scope of the database, its users, and its applications are defined; user interfaces, response time constraints, and storage and processing needs are identified.
2. Database design: the complete logical and physical design of the database system on the RDBMS is completed.
3. Database implementation: specifying the conceptual, external, and internal database definitions, creating empty database files, and implementing the software applications.
4. Loading or data conversion: converting existing files into the database system format.
5. Application conversion: converting all software applications from the old system to work with the new system.
6. Testing and validation: testing and validating the new system.
7. Operation: the database system and its applications are put into operation.
8. Monitoring and maintenance: constant monitoring and maintenance.


In the case of moving from an established system to a new one, steps 4 and 5 above are the most time consuming steps in the database system lifecycle and are the most underestimated steps (Elmasri, 2007).

References

Ambler, S. (2009). The Process of Database Refactoring: Strategies for Improving Database Quality. Retrieved April 23, 2010, from www.agiledata.org: http://www.agiledata.org/essays/databaseRefactoring.html

Elmasri, R., & Navathe, S. B. (2007). Fundamentals of database systems (5th ed.). Boston: Addison-Wesley.


Q4. VIEW PROCESSING IN SQL Consider the situation where we can define materialized views over our relational database. Since we store the result of the view physically, we can use this view relation when processing queries rather than using the base relations of our database. Using the materialized view may result in a more efficient query execution plan. In this question, I want you to discuss some of the issues involved with being able to rewrite an SQL query in terms of a materialized view, e.g. when is it correct to do so. In your discussion, you should consider the following cases:

An SQL query that does not contain any aggregate function and a materialized view that does contain an aggregate function(s) and grouping

An SQL query that contains aggregate function(s) and grouping, and a materialized view that also contains aggregate function(s) and grouping.

In your discussion include some examples considering the single relation CustomerSales(CustomerID, ProductID, Month, Sales), where the primary key is underlined, and the view definition:

Create View V AS
Select ProductID, Month, Sum(Sales), Count(*) AS NumEntries
From CustomerSales
Where ProductID > 500
Group BY ProductID, Month;

According to Elmasri and Navathe (2007), view materialization involves physically creating a temporary view table when the view is first queried and keeping that table on the assumption that other queries on the view will follow. In the case of a view formed from the joins of two or more tables, it is more efficient to change the SQL to query the view, because the view simplifies the specification of the query. Materialized views are generic objects that are used to summarize, pre-compute, replicate, and distribute data.

Materialized views improve query performance by pre-calculating expensive join and aggregation operations on the database prior to execution time and storing the results in the database. The query optimizer can exploit materialized views by automatically recognizing when an existing materialized view can and should be used to satisfy a request; it then transparently rewrites the request to use the materialized view, so that the query is directed to the materialized view and not to the underlying detail tables or views. Rewriting queries to use materialized views rather than detail relations can yield a significant performance gain.

An SQL query that does not contain any aggregate function would not benefit from a rewrite against a materialized view that does contain aggregate function(s) and grouping, because the detail rows such a query needs have already been collapsed in the view. However, an SQL query that contains aggregate function(s) and grouping, posed against a materialized view that also contains compatible aggregate function(s) and grouping, is cheaper when rewritten to read the materialized view rather than the base tables (Bauer, 1999). The view defined above has a single defining table and is not updatable, because the view attributes do not contain the entire primary key of the base table; modifications must therefore be made against the base table rather than the view.
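As a rough illustration of a correct rewrite of the first kind of aggregate query, here is a sketch in Python with sqlite3. SQLite has no materialized views, so the view V from the question is simulated as a summary table, and the sales rows are invented sample data:

```python
import sqlite3

# SQLite has no materialized views, so the view V from the question is
# simulated as a summary table; the sales rows are invented sample data.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE CustomerSales (CustomerID INT, ProductID INT, Month TEXT, Sales REAL)")
cur.executemany("INSERT INTO CustomerSales VALUES (?, ?, ?, ?)", [
    (1, 501, "Jan", 100.0), (2, 501, "Jan", 50.0),
    (1, 502, "Jan", 75.0),  (3, 501, "Feb", 20.0),
    (4, 400, "Jan", 999.0),   # excluded by the view predicate ProductID > 500
])

# "Materialize" V once, physically storing the aggregated result.
cur.execute("""CREATE TABLE V AS
    SELECT ProductID, Month, SUM(Sales) AS TotalSales, COUNT(*) AS NumEntries
    FROM CustomerSales WHERE ProductID > 500
    GROUP BY ProductID, Month""")

# An aggregate query whose grouping is coarser than V's and whose predicate
# matches V's can be rewritten to read V instead of the base relation.
base = cur.execute("""SELECT ProductID, SUM(Sales) FROM CustomerSales
    WHERE ProductID > 500 GROUP BY ProductID""").fetchall()
rewritten = cur.execute("SELECT ProductID, SUM(TotalSales) FROM V GROUP BY ProductID").fetchall()
print(sorted(base) == sorted(rewritten))  # True: the rewrite gives the same answer
```

The rewrite is valid here because the query's grouping columns are a subset of V's and its predicate is exactly V's; SUM re-aggregates cleanly, whereas an AVG in the query could not be recomputed from V without the stored counts.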

References

Bauer, M. (1999). Oracle 8i Tuning. Redwood City: Oracle Corporation.

Elmasri, R., & Navathe, S. B. (2007). Fundamentals of database systems (5th ed.). Boston: Addison-Wesley.


Q5. PHYSICAL DATABASE DESIGN Consider a relational database management system that needs to provide optimized access for applications that primarily do one of the following:

(a) Read access (e.g., Data warehouses, where some of the columns or aggregate values are returned)

(b) Write access (e.g., Online transaction processing, where new records are inserted)

Discuss different physical storage organizations (i.e., physical storage architecture) for a relation that may be better suited for (a), and those better suited for (b). Give some justification to support your storage organization for a relation in each of the two cases.

The process of physical database design involves choosing, from among the options, the particular data organization techniques that best suit the given application requirements. Most databases are stored permanently on magnetic disk secondary storage, for the following reasons:

1. Generally, databases are too large to fit entirely in main memory.
2. The circumstances that cause permanent loss of stored data arise less frequently for disk secondary storage than for primary storage; secondary storage is normally considered to be nonvolatile.
3. The cost of storage per unit of data is an order of magnitude less for disk secondary storage than for primary storage.

The data stored on disk is organized as files of records. There are several primary file organizations that determine how the file records are physically placed on the disk and how the records can be accessed. Heap files place the records on disk in no particular order by appending newly inserted records at the end of the file. Sorted files keep the records ordered by the value of a particular field called the sort key. Hashed files use a hash function applied to a particular field, the hash key, to determine a record’s placement on disk. Tree structures, such as B-trees, can also be used for file storage.

In case (a) above, where the primary access is for reading, the sorted file organization is the most efficient: the records are already ordered, finding the next record in order of the ordering key requires no additional block accesses, and retrieval can be further sped up with a binary search. The same organization is not efficient for a write-heavy system, because inserting and deleting records are expensive operations when the file must remain in physical order. Predominant write access, as in case (b) above, is better suited to a heap file, because inserting a record is very efficient: new records are simply appended to the end of the file regardless of order (Elmasri, 2007).
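The read/insert trade-off between the two organizations can be illustrated with a toy Python sketch (in-memory lists standing in for disk files, probe counts standing in for block accesses; these are not the textbook's cost formulas):

```python
import bisect

# Toy model: a heap file appends in arrival order; a sorted file keeps
# key order on every insert (an O(n) shift, like rewriting blocks).
heap, sorted_file = [], []
for key in [42, 7, 99, 15, 3, 63, 27, 88]:
    heap.append(key)                 # heap insert: cheap append at the end
    bisect.insort(sorted_file, key)  # sorted insert: must keep physical order

def binary_probes(arr, key):
    """Probes needed by binary search on a sorted file (~log2 n)."""
    lo, hi, probes = 0, len(arr) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1
        if arr[mid] == key:
            return probes
        if arr[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return probes

def linear_probes(arr, key):
    """Probes needed by a linear scan of a heap file (~n/2 on average)."""
    for i, k in enumerate(arr):
        if k == key:
            return i + 1
    return len(arr)

# Reading key 88: 3 probes in the sorted file vs. 8 in the heap file.
print(binary_probes(sorted_file, 88), linear_probes(heap, 88))
```

The numbers mirror the argument above: the sorted file wins on reads (logarithmic probes), while the heap file wins on inserts (no shifting to preserve order).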

References

Elmasri, R., & Navathe, S. B. (2007). Fundamentals of database systems (5th ed.). Boston: Addison-Wesley.


Q6. DATABASE TECHNOLOGIES AND APPLICATIONS Many technologies related to the database area are affecting the overall design and implementation of database applications. In particular, answer the following:

(a) What architectural approaches are available for interoperating multiple database applications – in terms of sharing of storage, processing and mediating between DBMSs themselves?

(b) Comment on any favorite database application area that interests you. Point out any challenging issues and the solution approaches being pursued by various organizations and/or groups.

a. We use the term interoperability to refer to the management and cooperation of multiple database systems. The distributed database concept allows for the sharing of storage, processing, and mediation between DBMSs. Distributed databases are defined as a collection of multiple logically interrelated databases distributed over a computer network. A distributed database management system (DDBMS) is a software system that manages a distributed database while making the distribution transparent to the user. Examples of distributed database system architectures are the federated database system, where each database retains local autonomy, and the multidatabase system architecture, where there is no local autonomy and all databases are accessed globally through a site inherent to the DDBMS.

b. I am particularly interested in Geographic Information System (GIS) applications. A GIS is often defined as the systematic integration of hardware and software for capturing, storing, displaying, updating, manipulating, and analyzing spatial data. Problems arose with GIS because there was no standardization between the data used by different organizations, making it almost impossible to share information between systems. Also, the relational database model could not handle the complex raster/vector image files associated with topologies. Since then, various organizations have come together to standardize the building blocks of the system: entities, methods, and constraints. The GIS community has been brought together by standards bodies such as the Open Geospatial Consortium (OGC) and the U.S. Federal Geographic Data Committee (FGDC). A standard language, Geography Markup Language (GML), was also introduced, which uses XML encoding tailored for geographic information. The object-relational database management system was ushered in for use with GIS because it allows for the handling of complex objects while mapping to the attributes of those objects in the underlying relational database.


Q.7 CONCURRENCY CONTROL AND RECOVERY (1) Two-Phase Locking (2PL) is the commonly used concurrency control method in production relational database management systems. Transactions using the general version of 2PL acquire all the locks before they release any of the locks. The main variant of 2PL adopted by production systems is the Strict 2PL, which releases the locks only after commit/abort decision. Explain the advantages and disadvantages of Strict 2PL, as compared to the general 2PL, in terms of:

(a) Total amount of concurrency allowed during transaction execution.

The difference between two-phase locking (2PL) and strict 2PL is that in 2PL all locking operations (read_lock and write_lock) precede the first unlock operation in the transaction, whereas in strict 2PL a transaction does not release any of its exclusive (write) locks until after it commits or aborts. Since concurrency is a property of systems in which several processes execute at the same time, the amount of concurrency allowed by 2PL is greater than that of strict 2PL, because records are not locked as long and other transactions do not wait as long. Strict 2PL guarantees strict schedules, which is good, but concurrency is reduced and blocking on locked records is more prevalent. The trade-off is that data integrity is greater with strict 2PL.
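The timing difference can be sketched in a single-threaded Python toy (the Txn class and lock table here are hypothetical, not a real DBMS API): general 2PL may release a lock as soon as its growing phase ends, while strict 2PL holds every exclusive lock until commit.

```python
# Hypothetical sketch: a dict maps each item to the transaction holding
# its exclusive lock. No real threads; the point is *when* locks free up.
class Txn:
    def __init__(self, name, strict):
        self.name, self.strict, self.held = name, strict, []

    def lock(self, item, table):
        holder = table.get(item)
        assert holder in (None, self.name), f"{item} is held by {holder}"
        table[item] = self.name
        self.held.append(item)

    def unlock_early(self, item, table):
        # Legal under general 2PL once the growing phase has ended;
        # strict 2PL forbids it, so for a strict transaction it is a no-op.
        if not self.strict:
            del table[item]
            self.held.remove(item)

    def commit(self, table):
        for item in self.held:   # strict 2PL releases everything only here
            del table[item]
        self.held = []

locks = {}
t1 = Txn("T1", strict=False)
t1.lock("X", locks); t1.lock("Y", locks)  # growing phase ends
t1.unlock_early("X", locks)               # shrinking phase: allowed in general 2PL
# T2 can now acquire X before T1 commits: more concurrency, but T2 may
# depend on values T1 could still abort -- the cascading-rollback risk
# that strict 2PL avoids by holding X until commit/abort.
t2 = Txn("T2", strict=True)
t2.lock("X", locks)
t1.commit(locks)
t2.commit(locks)
print(locks)  # {} : every lock released by commit time
```

Had T1 been created with strict=True, unlock_early would do nothing and t2.lock("X", ...) would fail until T1 committed; that lost overlap is exactly the concurrency strict 2PL gives up.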

(b) Complexity of recovery algorithms, particularly when aborts happen.

In the case of recovery, particularly when aborts happen, the greater the degree of concurrency we wish to achieve, the more time-consuming the task of recovery becomes. Since 2PL allows more concurrency than strict 2PL, its recovery algorithm in the event of an abort is more complex: other transactions may have read values written by the aborted transaction before it finished, forcing cascading rollbacks. In the event of an abort under strict 2PL, there is no reason to redo the write operations of transactions committed before the last checkpoint; only work that had not yet committed needs to be redone.


Q8. CONCURRENCY CONTROL AND RECOVERY (2) 

Although the Two-Phase Commit (2PC) protocol guarantees atomicity property in databases, the 2PC has been considered too restrictive for many practical situations.

(a) One reason 2PC is considered too restrictive is the time it forces the participants to maintain locks. This is particularly the case when the participants use Strict 2PL (see question above). Use quantitative arguments (e.g., the number of messages used by 2PC and its duration) to explain this problem for short (sub-second) atomic transactions.

The greatest disadvantage of the two-phase commit protocol is that it is a blocking protocol. Quantitatively, with N participants a single commit costs roughly 3N messages (a prepare round, a vote round, and a decision/acknowledgement round) plus forced log writes at each site, so for a short, sub-second transaction the commit protocol itself can take longer than the work it coordinates, all while locks are held. A node blocks while it is waiting for a message, and other processes competing for resource locks held by the blocked process must wait for the locks to be released. A single node will continue to wait even if all other sites have failed. If the coordinator fails permanently, some cohorts will never resolve their transactions, causing resources to be tied up forever.

(b) Another reason 2PC is considered too restrictive is the atomicity property, which cannot be easily sub-divided. Explain the problem caused by atomicity using two illustrative applications that contain long duration activities (hours, days or months).

Atomicity requires that database modifications follow an all-or-nothing rule: a transaction is atomic if, when one part of the transaction fails, the entire transaction fails and the database state is left unchanged. An example of atomicity is ordering an airline ticket, where two actions are required: payment and a seat reservation. The potential passenger must either both pay for and reserve a seat, or neither pay for nor reserve a seat. The booking system does not consider it acceptable for a customer to pay for a ticket without securing the seat, nor to reserve the seat without payment succeeding. For a long-duration activity this becomes a real problem: unless there is a timeout on the transaction, the seat or the payment can remain locked indefinitely.

(c) Compare the atomicity property guaranteed by 2PC (e.g., as specified by the WS-Atomic Transaction protocol) with an extensible model (e.g., as specified by the WS-Coordination protocol). Your comparison should be concrete (e.g., in terms of data structures, protocol steps, and applications they serve) and limited to two major differences.

The atomicity property guaranteed by 2PC is one part of an extensible model such as the WS-Coordination protocol. The 2PC coordinator works as a global recovery manager and must maintain the information needed for recovery, in addition to the local recovery managers and the information they maintain.


Q9. RELATIONAL OPERATIONS

(a) Prove that division is not strictly necessary, i.e. can be replaced by a combination of some relational algebra operations.

Division in relational algebra is used when we want to express queries with “all,” written R ÷ S. The result consists of the restrictions of tuples in R to the attribute names unique to R (i.e., those in the header of R but not in the header of S) for which it holds that all their combinations with tuples in S are present in R. For an example, see the tables Completed and DBProject and their division:

Completed

Student  Task
Fred     Database1
Fred     Database2
Fred     Compiler1
Eugene   Database1
Eugene   Compiler1
Sara     Database1
Sara     Database2

DBProject

Task
Database1
Database2

Completed ÷ DBProject

Student
Fred
Sara


Now, the DIVISION operation is not directly implemented in most RDBMSs with SQL as the primary language. The same result can be attained using a combination of PROJECT, CARTESIAN PRODUCT, and SET DIFFERENCE operations. We assume that a1, ..., an are the attribute names unique to R and b1, ..., bm are the attribute names of S. In the first step we project R on its unique attribute names and construct all combinations with tuples in S:

T := πa1,...,an(R) × S

In the prior example, T is the table in which every Student (Student being the attribute unique to the Completed table) is combined with every given Task; Eugene, for instance, has the rows Eugene → Database1 and Eugene → Database2 in T. In the next step we subtract R from this relation:

U := T - R

Note that U contains the combinations that "could have" been in R but were not. If we now project U on the attribute names unique to R, we have the restrictions of the tuples in R for which not all combinations with tuples in S were present in R:

V := πa1,...,an(U)

What remains is to project R on its unique attribute names and subtract V:

W := πa1,...,an(R) - V

In summary: project R on its unique attributes and take the Cartesian product with S to build T, subtract R to obtain U, project U on the unique attributes to obtain V, and subtract V from πa1,...,an(R) to obtain the division result W.
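The four steps can be run directly on the Completed and DBProject tables using plain Python sets of tuples (a sketch of the algebra, not DBMS code):

```python
# Relations as sets of tuples; division R / S rebuilt from project,
# Cartesian product, and set difference, exactly as derived above.
completed = {("Fred", "Database1"), ("Fred", "Database2"), ("Fred", "Compiler1"),
             ("Eugene", "Database1"), ("Eugene", "Compiler1"),
             ("Sara", "Database1"), ("Sara", "Database2")}
dbproject = {("Database1",), ("Database2",)}

students = {(s,) for (s, _) in completed}                # pi_Student(R)
T = {(s, t) for (s,) in students for (t,) in dbproject}  # pi_Student(R) x S
U = T - completed                                        # combinations missing from R
V = {(s,) for (s, _) in U}                               # students missing some task
W = students - V                                         # the division result
print(W)  # Fred and Sara completed *all* DBProject tasks
```

Here U contains only (Eugene, Database2), so V singles out Eugene and W keeps Fred and Sara, matching the Completed ÷ DBProject table above.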

(b) Express the natural join using the Renaming, Cartesian product, Selection, and Projection operators.

Assume E(enr, ename, dept) and D(dnr, dname), with the join condition dept = dnr. Renaming first gives every attribute a distinct name, so the product below is unambiguous:

ρ(enr, ename, dept)(E) and ρ(dnr, dname)(D)

The Cartesian product pairs every tuple of E with every tuple of D, and a selection keeps only the matching pairs:

σ dept = dnr (E × D)

which corresponds to the SQL

select * from E, D where dept = dnr;

Finally, a projection drops the now-redundant join column dnr, giving the natural join:

E * D = π enr, ename, dept, dname (σ dept = dnr (ρ(enr, ename, dept)(E) × ρ(dnr, dname)(D)))
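As a small illustration (with made-up sample data), the natural join of E(enr, ename, dept) and D(dnr, dname) can be assembled in Python from exactly these pieces: a Cartesian product, a selection on dept = dnr, and a projection that drops the duplicate column. Renaming is implicit in the distinct attribute positions:

```python
# E(enr, ename, dept) and D(dnr, dname) as lists of tuples (sample data).
E = [(1, "Ann", 10), (2, "Bob", 20), (3, "Cy", 10)]
D = [(10, "Sales"), (20, "HR")]

product = [e + d for e in E for d in D]                  # E x D
selected = [row for row in product if row[2] == row[3]]  # sigma: dept = dnr
joined = [(enr, ename, dept, dname)                      # pi: drop duplicate dnr
          for (enr, ename, dept, dnr, dname) in selected]
print(joined)
```

Each list comprehension is one operator from the expression above, which is why the natural join adds no expressive power beyond rename, product, select, and project.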

(c) Consider the universal relation R = {A, B, C, D, E, F, G, H, I, J} and the set of functional dependencies F = {{A, B} -> {C}, {A} -> {D, E}, {B} -> {F}, {F} -> {G, H}, {D} -> {I, J}}. What is the key for R? Decompose R into 2NF, then 3NF relations.


Since the closure of {A, B} is {A, B}+ = R, the key of R is {A, B}.
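The closure computation behind this claim can be sketched with the standard fixpoint algorithm (plain Python, not from the text), applying each functional dependency until nothing new is added:

```python
# F from the question, as (left-hand side, right-hand side) pairs of sets.
F = [({"A", "B"}, {"C"}), ({"A"}, {"D", "E"}), ({"B"}, {"F"}),
     ({"F"}, {"G", "H"}), ({"D"}, {"I", "J"})]
R = set("ABCDEFGHIJ")

def closure(attrs, fds):
    """Attribute closure: repeatedly fire FDs whose LHS is already covered."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

print(closure({"A", "B"}, F) == R)  # True: {A, B}+ covers all of R
print(sorted(closure({"A"}, F)))    # A alone reaches only {A, D, E, I, J}
```

The second call also shows why {A} by itself is not a key, which is exactly the partial dependency used in the 2NF decomposition that follows.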

Decomposition of R into 2NF

First, identify partial dependencies that violate 2NF. These are attributes that are functionally dependent on part of the key, {A} or {B}, alone. We can calculate the closures {A}+ and {B}+ to determine the partially dependent attributes:

{A}+ = {A, D, E, I, J}, hence {A} → {D, E, I, J}
{B}+ = {B, F, G, H}, hence {B} → {F, G, H}

Remove the attributes that are functionally dependent on parts of the key (A or B) from R and place them in separate relations R1 and R2, along with the part of the key they depend on (A or B), which are copied into each of these relations but also remain in the original relation, which we call R3 below:

R1 = {A, D, E, I, J}, R2 = {B, F, G, H}, R3 = {A, B, C}

Decomposition of R into 3NF

Next, we look for transitive dependencies in R1, R2, R3. The relation R1 has the transitive dependency {A} → {D} → {I, J}, so remove the transitively dependent attributes {I, J} from R1 into a relation R11. Copy the attribute D they depend on into R11. The remaining attributes are kept in a relation R12. Hence, R1 is decomposed into R11 and R12 as follows:

R11 = {D, I, J}, R12 = {A, D, E}

The relation R2 is similarly decomposed into R21 and R22 based on the transitive dependency {B} → {F} → {G, H}:

R21 = {F, G, H}, R22 = {B, F}

The final set of relations in 3NF is {R11, R12, R21, R22, R3}.

(d) Find a minimal set of relational algebra operations assuming that the set {union, difference, intersect, product, project, restrict, select, join, divide} is complete.

The algebra has a minimal set of operators: restrict, project, product, union and difference.


Q10. QUERY OPTIMIZATION

(a) Discuss the cost components for a cost function that is used to estimate query execution cost. Which cost components are used most often as the basis for cost functions?

There are five major cost components used to estimate query execution cost:

1. Access cost to secondary storage: the cost of searching for, reading, and writing data blocks that reside mainly on disk.
2. Storage cost: the cost of storing intermediate files generated by a query execution strategy.
3. Computation cost: the cost of performing in-memory operations.
4. Memory usage cost: the cost pertaining to the number of memory buffers needed during query execution.
5. Communication cost: the cost of shipping the query and its results from the database site to the site or terminal where the query originated.

For large databases, the main emphasis is on minimizing the access cost to secondary storage. For smaller databases, the emphasis is on minimizing computation cost. Lastly, for distributed databases, the emphasis is on minimizing communication cost.

(b) Develop cost functions for the PROJECT, UNION, INTERSECTION, SET DIFFERENCE, and CARTESIAN PRODUCT algorithms discussed in Chapter 15 of your text.

(bE + bD + bE log2 bE + bD log2 bD)

That is, for the sort-based UNION, INTERSECTION, and SET DIFFERENCE algorithms on relations E and D: one scan of each relation (bE + bD block accesses) plus the cost of sorting each relation (bE log2 bE + bD log2 bD).
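Reading the fragment above as the sort-based cost bE log2 bE + bD log2 bD + bE + bD, a small calculator makes the estimate concrete (the block counts below are invented sample values, not from the text):

```python
import math

def sort_merge_cost(b_E, b_D):
    """Estimated block accesses for a sort-based set operation on E and D:
    sort each relation, then do one merging scan of both."""
    sort = b_E * math.log2(b_E) + b_D * math.log2(b_D)
    merge = b_E + b_D
    return sort + merge

# e.g. E in 1024 blocks, D in 256 blocks:
# 1024*10 + 256*8 + 1280 = 13568 block accesses
print(sort_merge_cost(1024, 256))
```

A calculator like this is how an optimizer compares strategies: plug each candidate's formula into the same block-count parameters and pick the cheapest.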

(c) Develop cost functions for an algorithm that consists of two SELECTs, a JOIN, and a final PROJECT, in terms of the cost functions for the individual operations.

log2 b + ⌈s/bfr⌉ - 1 + (bR + |R| * (xB + 1)) + (js * |R| * |S|) / bfrRS + (bR + bS + (js * |R| * |S|) / bfrRS + log2 b)

That is, summing the cost functions of the individual operations: a binary-search SELECT (log2 b + ⌈s/bfr⌉ - 1), an index-based SELECT (bR + |R| * (xB + 1)), the cost of writing the join result of js * |R| * |S| tuples at bfrRS tuples per block, and a join term (bR + bS plus the result blocks) with a final log2 b for the PROJECT's duplicate elimination.

(d) What are the difficulties inherent in using the cost functions described above?

It is difficult to include all the cost components in a weighted cost function because of the difficulty of assigning suitable weights to the cost components.

<<<<<<<<<<<<<<<<<<<<<<<QUESTIONS END>>>>>>>>>>>>>>>>>>>>>>>>>>>