SAD_Notes


VINAYAK SINGH

QUESTION AND ANSWER SET OF SYSTEM ANALYSIS AND DESIGN

1. How can a data dictionary be used during software development? (May-06, 5 Marks)

    Data Dictionary

a) Data dictionaries are integral components of structured analysis, since data flow diagrams by themselves do not fully describe the subject of the investigation.

b) A data dictionary is a catalog, a repository of the elements in the system.

c) In the data dictionary one will find a list of all the elements composing the data flowing through the system. The major elements are:

Data flows

Data stores

Processes

d) The dictionary is developed during data flow analysis and assists the analysts involved in determining system requirements; its contents are used during system design as well.

e) The data dictionary contains the following descriptions of the data (a small sketch of such an entry appears at the end of this answer):

1. The name of the data element.
2. The physical source/destination name.
3. The type of the data element.
4. The size of the data element.
5. Usage, such as input, output or update.
6. Reference(s) to the DFD process numbers where the element is used.
7. Any special information useful for system specification, such as validation rules.

f) This is appropriately called a system development data dictionary, since it is created during system development, facilitates the development functions, is used by the developers, and is designed with the developers' information needs in mind.

g) For every data element mentioned in the DFD there should be one and only one unique entry in the data dictionary.

h) The type of a data element may be numeric, textual, image, audio, etc.

i) Usage should specify whether the referenced DFD process uses the element as input data (read only), creates it as output (e.g., insert), or updates it.

j) A data element can be referenced by multiple DFD processes; if the usages differ, there should be one entry for each usage.

k) The data dictionary serves as important basic information during all development stages.

l) Importance of a data dictionary:

To manage the details in large systems.

To communicate a common meaning for all system elements.

To document the features of the system.

To facilitate analysis of the details in order to evaluate characteristics and determine where system changes should be made.

To locate errors and omissions in the system.


m) Points to be considered while constructing a data dictionary:

Each unique data flow in the DFD must have one data dictionary entry.

There is also a data dictionary entry for each data store and process.

Definitions must be readily accessible by name.

There should be no redundancy or unnecessary definitions in the data definitions. It must also be simple to make updates.

The procedures for writing definitions should be straightforward but specific; there should be only one way of defining words.
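To make the idea concrete, here is a minimal sketch (not from the source) of a development data dictionary entry as a Python data structure. The fields follow the list in e) above; the example values and names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DataDictionaryEntry:
    """One entry per unique data element, as described in e) above."""
    name: str                      # name of the data element
    source_destination: str        # physical source/destination name
    data_type: str                 # numeric, textual, image, audio, ...
    size: int                      # size of the data element
    usage: str                     # "input", "output", or "update"
    dfd_process_refs: list = field(default_factory=list)  # DFD process numbers where used
    validation_rules: str = ""     # special information, e.g. validation rules

# Hypothetical example entry for a library management system
member_id = DataDictionaryEntry(
    name="MEMBER-ID",
    source_destination="Membership form",
    data_type="numeric",
    size=6,
    usage="input",
    dfd_process_refs=[1.1, 2.3],
    validation_rules="Must be a registered member number",
)
```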

2. Under what circumstances or for what purposes would one use an interview rather than other data collection methods? Explain. (May-06, 12 Marks)

OR

Discuss how the interview technique can be used to determine the needs of a new system. (May-03)

a) An interview is a fact-finding technique whereby the system analyst collects information from individuals through face-to-face interaction.

b) There are two types of interviews:

a. Unstructured interviews: an interview conducted with only a general goal or subject in mind and with few, if any, specific questions.

b. Structured interviews: an interview in which the interviewer has a specific set of questions to ask the interviewee.

c) Unstructured interviews tend to involve asking open-ended questions, while structured interviews tend to involve asking more close-ended questions.

d) ADVANTAGES:

e) Interviews give the analyst an opportunity to motivate the interviewee to respond freely and openly to questions.

f) Interviews allow the system analyst to probe (conduct a penetrating investigation) for more feedback from the interviewee.

g) Interviews permit the system analyst to adapt or reword questions for each individual.

h) A good system analyst may be able to obtain information by observing the interviewee's body movements and facial expressions, as well as by listening to verbal replies to questions.

i) This technique is advised in the following situations:

j) Where the application system under consideration is highly specific to the user organization.

k) When the application system may not be very specialized, but the practices followed by the organization are specific to it.

l) When the organization does not have any documentation in which the relevant information requirements are recorded, or any such documentation is irrelevant, not available, or cannot be shared with the developers due to privacy issues.

m) When the organization has not decided on the details of the practices that the new application system would demand, or would use the system to implement new practices, but wants to decide on them while responding to information requirements determination.

n) A structured interview meeting is useful in the following situations:


a) When the development team and the user team members know the broad system environment with high familiarity. This reduces the amount of communication significantly, to just a few words or a couple of sentences.

b) When responding to the questions involves collecting data from different sources or persons, and/or analyzing it, and/or making the analysis ready for discussion at the meeting, in order to save on the duration of the meeting.

c) When costly communication media, such as international phone or conference calls, are to be used.

d) When some or all of the members of the user team represent top management, external consultants or specialists.

o) A semi-structured interview meeting is useful in the following situations:

p) Usually in the initial stage of information requirements determination, the users and the developers need to exchange a significant amount of basic information. Structured interview meetings may not be effective here, since there is not enough information to frame questions of importance.

q) Very early on, or with a new organization, personal meetings are important because they help the development team not only in knowing the decisions but also in understanding the decision-making process of the organization, the role of every user team member in the decisions, and their organizational and personal interests. These observations during the meetings can help the software development team significantly in future communications.

r) When the new application system is not a standard one and/or the users follow radically different or highly specialized business practices, it is very difficult to predict which questions are important for determining the information requirements.

s) When the members of the development team expect to participate in formulating some new information requirements, or in freezing them, as advisors, the interview meeting has to be personal and therefore semi-structured.

t) If the users are generally available, located very close by, and the number of top management users or external consultants is nil or negligible, then personal meetings are conducted.

u) When there are no warranted reasons that only structured interviews are to be conducted.


3. Describe the steps in the SDLC model with an example. (Nov-03, May-06, Sies-04, Dec-04)

The steps in the SDLC model are as follows:

a) System analysis:

It is a set of activities in which the system analyst gathers the information requirements of the users and analyses them systematically in the form of the functionality of the application system, the input data requirements and their sources, and the output data and their presentation requirements. The system analyst also gathers data about the performance expectations of the users, such as the expected response time and the total turnaround time. The system analyst finally prepares a document called the system requirement specification (SRS), which documents all the agreements reached between the users and the system analyst.

b) System design:

It involves preparing the blueprint of the new software system. Taking the SRS as a base to start with, it prepares various diagrammatic representations of the logical and physical artifacts to be developed during the software development stages to follow. The major artifacts include data models, process models and presentation models. Finally, the system design is documented.

c) Coding or Construction:

This involves programming and testing individual programs on the basis of the design document. The developers responsible for programming also create test data sets for inputs and verify that the programs generate the expected outputs for these input data sets. The individual programs are also reviewed to ensure that they meet the programming standards expected by the users. This is the only phase where the conceptual system is first translated into computer-executable program sources.

d) Testing:

Its purpose is to demonstrate to the development team members that the software system works exactly to meet the users' information requirements as well as their performance expectations. It involves planning the tests, creating the test data, executing test runs, matching the test results with the expected results, analyzing the differences, fixing the bugs, and re-testing the fixes repeatedly until a satisfactory number of mismatches are removed.

e) Implementation:

It involves installing the software system on the user's computer system, conducting user training on the new software system, data preparation, parallel running, and


going live as core activities. This is the stage where the software system is first transferred to the users' premises and the users get a chance to work on the new software system for the first time. It also involves the most important step of user acceptance testing, which marks the technical and commercial milestone of the software development project.

f) Maintenance:

It involves maintaining the software system so that it is always up to date and in line with the current information requirements, considering even the latest changes in them. It helps keep the software system up to date, thereby ensuring the users a high return on their investment at the operational level of the business. The developer analyses each change in the light of the existing design, identifies the corresponding changes in the system design, and verifies quickly that the system works as expected.

E.g.: the library management system done as the assignment.
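As an illustration of the construction and testing steps above, here is a minimal, hypothetical sketch for the library management example: a small module and a test that checks it against expected output. The `can_borrow` rule and its limit are invented for illustration.

```python
import unittest

# Hypothetical module from the library management example:
# a member may borrow a book only if they have fewer than 3 books out.
MAX_BOOKS = 3

def can_borrow(books_out: int) -> bool:
    """Return True if a member holding `books_out` books may borrow another."""
    if books_out < 0:
        raise ValueError("books_out cannot be negative")
    return books_out < MAX_BOOKS

class TestCanBorrow(unittest.TestCase):
    def test_under_limit(self):
        self.assertTrue(can_borrow(0))
        self.assertTrue(can_borrow(2))

    def test_at_limit(self):
        self.assertFalse(can_borrow(3))

    def test_invalid_input(self):
        with self.assertRaises(ValueError):
            can_borrow(-1)

if __name__ == "__main__":
    unittest.main()
```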


5. What are CASE tools? Explain some CASE tools used for prototyping. (May-06, 15 Marks; Nov-03; M-05; Dec-04)

Ans. Computer Assisted Software Engineering (CASE)

Computer assisted software engineering (CASE) is the use of automated software tools that support the drawing and analysis of system models and associated specifications. Some tools also provide prototyping and code generation facilities.

At the center of any true CASE tool architecture is a developers' database called a CASE repository, where developers can store system models, detailed descriptions and specifications, and other products of systems development.

A CASE tool enables people working on a software project to store data about the project, its plans and schedules; to track its progress and make changes easily; to analyze and store data about users; and to store the design of a system through automation.

A CASE environment makes system development economical and practical. The automated tools and environment provide a mechanism for systems personnel to capture, document and model an information system.

A CASE environment is a number of CASE tools which use an integrated approach to support the interaction between the environment's components and the user of the environment.

    CASE Components

CASE tools generally include five components: diagrammatic tools, a centralized information repository, interface generators, code generators, and management tools.

    Diagrammatic Tools


Typically, they include the capabilities to produce data flow diagrams, data structure diagrams, and program structure charts.

These high-level tools are essential for support of structured analysis methodology, and CASE tools incorporate structured analysis extensively.

They support the capability to draw diagrams and charts and to store the details internally. When changes must be made, the nature of the changes is described to the system, which can then redraw the entire diagram automatically.

The ability to change and redraw eliminates an activity that analysts find both tedious and undesirable.

Centralized Information Repository

1. A centralized information repository or data dictionary aids the capture, analysis, processing and distribution of all system information.

2. The dictionary contains the details of system components, such as data items, data flows and processes, and also includes information describing the volumes and frequency of each activity.

3. Dictionaries are designed so that the information is easily accessible. They also include built-in controls and safeguards to preserve the accuracy and consistency of the system details.

4. The use of authorization levels, process validation and procedures for testing the consistency of the descriptions ensures that access to definitions, and the revisions made to them in the information repository, occur properly according to the prescribed procedures.

Interface Generators

System interfaces are the means by which users interact with an application, both to enter information and data and to receive information.

Interface generators provide the capability to prepare mockups and prototypes of user interfaces.

Typically they support the rapid creation of demonstration system menus, presentation screens and report layouts.


Interface generators are an important element of application prototyping, although they are useful with all development methods.

Code Generators:

1. Code generators automate the preparation of computer software.

2. They incorporate methods that allow the conversion of system specifications into executable source code.

3. The best generators will produce approximately 75 percent of the source code for an application. The rest must be written by hand; this process, termed hand coding, is still necessary.

4. Because CASE tools are general-purpose tools, not limited to any specific area such as manufacturing control, investment portfolio analysis, or accounts management, the challenge of fully automating software generation is substantial.

5. The greatest benefits accrue when the code generators are integrated with the central information repository; such a combination achieves the objective of creating reusable computer code.

6. When specifications change, code can be regenerated by feeding details from the data dictionary through the code generators. The dictionary contents can thus be reused to prepare the executable code.

Management Tools:

CASE systems also assist project managers in maintaining efficiency and effectiveness throughout the application development process.

1. The CASE components assist development managers in the scheduling of analysis and design activities and the allocation of resources to different project activities.

2. Some CASE systems support the monitoring of project development schedules against actual progress, as well as the assignment of specific tasks to individuals.

3. Some CASE management tools allow project managers to specify custom elements. For example, they can select the graphic symbols used to describe processes, people, departments, etc.


4. What is cost-benefit analysis? Describe any two methods of performing the same. (May-06, May-04)

Cost-benefit analysis:

Cost-benefit analysis is a procedure that gives a picture of the various costs, benefits, and rules associated with each alternative system. Cost-benefit analysis is part of the economic feasibility study of a system. The basic tasks involved in cost-benefit analysis are as follows:

1. To compute the total costs involved.
2. To compute the total benefits from the project.
3. To compare the two, to decide whether the project provides net gains or not.

Procedure for cost and benefit determination:

The determination of costs and benefits entails the following steps:

1. Identify the costs and benefits pertaining to a given project.
2. Categorize the various costs and benefits for analysis.
3. Select a method of evaluation.
4. Interpret the results of the evaluation.
5. Take action.

Costs and benefits are classified as follows:

i. Tangible or intangible costs and benefits:

Tangibility refers to the ease with which costs or benefits can be measured. The following are examples of tangible costs and benefits:

Purchase of hardware or software. (tangible cost)

Personnel training. (tangible cost)

Reduced expenses. (tangible benefit)

Increased sales. (tangible benefit)

Costs and benefits that are known to exist but whose financial value cannot be accurately measured are referred to as intangible costs and benefits. The following are examples of intangible costs and benefits:

1. Employee morale problems caused by a new system. (intangible cost)
2. Lowered company image. (intangible cost)
3. Satisfied customers. (intangible benefit)
4. Improved corporate image. (intangible benefit)

ii. Direct or indirect costs and benefits:

Direct costs are those with which a money figure can be directly associated in a project. They are directly applied to a particular operation. Direct benefits can also be specifically attributed to a given project. The following are examples:

1. The purchase of a box of diskettes for $35 is a direct cost.
2. A new system that can handle 25% more transactions per day is a direct benefit.

Indirect costs are the result of operations that are not directly


associated with a given system or activity. They are often referred to as overhead.

Indirect benefits are realized as a by-product of another activity or system.

iii. Fixed or variable costs and benefits:

Fixed costs are sunk costs. They are constant and do not change. Once encountered, they will not recur. For example: straight-line depreciation of hardware, or an exempt employee's salary.

Fixed benefits are also constant and do not change. For example: a decrease in the number of personnel by 20% resulting from the use of a new computer.

Variable costs are incurred on a regular basis. They are usually proportional to work volume and continue as long as the system is in operation. For example: the costs of computer forms vary in proportion to the amount of processing or the length of the reports required.

Variable benefits are realized on a regular basis. For example: consider a safe deposit tracking system that saves 20 minutes of customer preparation time compared with the manual system.

The following are the methods of performing cost-benefit analysis:

1. Net benefit analysis.
2. Present value analysis.
3. Payback analysis.
4. Break-even analysis.
5. Cash flow analysis.
6. Return on investment analysis.

Net benefit analysis

Net benefit analysis simply involves subtracting total costs from total benefits. It is easy to calculate, easy to interpret and easy to present. The main drawback is that it does not account for the time value of money and does not discount future cash flows.

The time value of money is usually expressed in the form of interest on the funds invested to realize the future value. Assuming compound interest, the formula is:

F = P(1 + i)^n

where

F = future value of an investment,
P = present value of the investment,
i = interest rate per compounding period,
n = number of years.

Present value analysis

In developing long-term projects, it is often difficult to compare today's costs with the full value of tomorrow's benefits. The time value of money allows for interest


rates, inflation and other factors that alter the value of the investment. Present value analysis controls for these problems by calculating the costs and benefits of the system in terms of today's value of the investment and then comparing across alternatives:

Present value = Future value / (1 + i)^n

Net present value is equal to discounted benefits minus discounted costs.
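A minimal sketch of these two formulas in Python. Only the future value and present value formulas come from the text above; the cash-flow figures in the example are hypothetical.

```python
def future_value(p: float, i: float, n: int) -> float:
    """F = P(1 + i)^n: compound interest on an investment."""
    return p * (1 + i) ** n

def present_value(f: float, i: float, n: int) -> float:
    """Present value = future value / (1 + i)^n."""
    return f / (1 + i) ** n

def net_present_value(benefits, costs, i: float) -> float:
    """Discounted benefits minus discounted costs.

    `benefits` and `costs` are per-year amounts, year 1 onwards.
    """
    db = sum(present_value(b, i, n) for n, b in enumerate(benefits, start=1))
    dc = sum(present_value(c, i, n) for n, c in enumerate(costs, start=1))
    return db - dc

# Hypothetical example: $3000 of benefits and $1000 of costs per year
# for 5 years, at a 10% interest rate.
print(round(net_present_value([3000] * 5, [1000] * 5, 0.10), 2))
```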

5. Explain the concept of normalization with examples. Why would you denormalize? (May-04)

Normalization:

Normalization is a process of simplifying the relationships between data elements in a record. Through normalization, a collection of data in a record structure is replaced by successive record structures that are simpler and more predictable and therefore more manageable. Normalization is carried out for four reasons:

1. To structure the data so that any important relationships between entities can be represented.
2. To permit simple retrieval of data in response to query and report requests.
3. To simplify the maintenance of the data through updates, insertions, and deletions.
4. To reduce the need to restructure or reorganize data when new application requirements arise.

A great deal of research has been conducted to develop methods for carrying out normalization. Systems analysts should be familiar with the steps in normalization, since this process can improve the quality of design for an application:

1. Decompose all data groups into two-dimensional records.
2. Eliminate any relationships in which data elements do not fully depend on the primary key of the record.
3. Eliminate any relationships that contain transitive dependencies.

There are three normal forms. Research in database design has also identified other forms, but they are beyond what analysts use in application design.

First Normal Form:

First normal form is achieved when all repeating groups are removed so that a record is of fixed length. A repeating group, the recurrence of a data item or group of data items within a record, is actually another relation. Hence, it is removed from the record and treated as an additional record structure, or relation.

Consider the information contained in a customer order: order number, customer name, customer address and order date, as well as the item number, item description, price and quantity of each item ordered. Designing a record structure to handle an order containing such data is not difficult. The analyst must consider how to handle the order. The order could be treated as four separate records, with the order and customer information included in each record. However, this increases the complexity of changing the details of any part of the order and uses additional space.


Another alternative is to design the record to be of variable length, so that when four items are ordered, the item details are repeated four times. This portion is termed a repeating group.

First normal form is achieved when a record is designed to be of fixed length. This is accomplished by removing the repeating group and creating a separate file or relation containing it. The original record and the new records are interrelated by a common data item.

Second Normal Form:

Second normal form is achieved when a record is in first normal form and each item

in the record is fully dependent on the primary record key for identification. In other words, the analyst seeks functional dependency.

For example: state motor vehicle departments go to great lengths to ensure that only one vehicle in the state is assigned a specific license tag number. The license number uniquely identifies a specific vehicle; a vehicle's serial number is associated with one and only one state license number. Thus, if you know the serial number of a vehicle, you can determine the state license number. This is functional dependency.

In contrast, if a motor vehicle record contains the names of all the individuals who drive the vehicle, functional dependency is lost. If we know the license number, we do not know who the driver is; there can be many. And if we know the name of the driver, we do not know the specific license number or vehicle serial number, since a driver can be associated with more than one vehicle in the file.

Thus, to achieve second normal form, every data item in a record that is not dependent on the primary key of the record should be removed and used to form a separate relation.

Third Normal Form:

Third normal form is achieved when transitive dependencies are removed from a record design. The following illustrates a transitive dependency:

1. A, B and C are three data items in a record.
2. If C is functionally dependent on B, and
3. B is functionally dependent on A,
4. then C is functionally dependent on A.
5. Therefore, a transitive dependency exists.

In data management, transitive dependency is a concern because data can inadvertently be lost when the relationship is hidden. In the above case, if A is deleted, then B and C are deleted also, whether or not this is intended. This problem is eliminated by designing the record for third normal form. Conversion to third normal form removes the transitive dependency by splitting the relation into two separate relations, as sketched below.
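A minimal sketch of this decomposition in Python, using in-memory tables of hypothetical department data (Department ID determines Manager ID, which in turn determines Manager Hire Date; the attribute names and values are invented for illustration).

```python
# Unnormalized "Departments" rows: Manager Hire Date depends on Manager ID,
# which in turn depends on Department ID (a transitive dependency).
departments = [
    {"dept_id": 10, "dept_name": "Sales",   "mgr_id": 7, "mgr_hire_date": "2001-04-01"},
    {"dept_id": 20, "dept_name": "Finance", "mgr_id": 7, "mgr_hire_date": "2001-04-01"},
    {"dept_id": 30, "dept_name": "IT",      "mgr_id": 9, "mgr_hire_date": "2003-09-15"},
]

# 3NF decomposition: split the relation so each non-key attribute
# depends only on the key of its own relation.
dept_table = [
    {"dept_id": r["dept_id"], "dept_name": r["dept_name"], "mgr_id": r["mgr_id"]}
    for r in departments
]
manager_table = list(
    {r["mgr_id"]: {"mgr_id": r["mgr_id"], "mgr_hire_date": r["mgr_hire_date"]}
     for r in departments}.values()
)

# Deleting department 10 no longer loses manager 7's hire date,
# because it survives in manager_table.
dept_table = [r for r in dept_table if r["dept_id"] != 10]
print(manager_table)
```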

Denormalization

Performance needs dictate very quick retrieval capability for data stored in relational databases. To accomplish this, sometimes the decision is made to denormalize the physical implementation. Denormalization is the process of putting one fact in numerous places. This speeds data retrieval at the expense of data modification.

Of course, a normalized set of relational tables is the optimal environment and should be implemented wherever possible. Yet, in the real world, denormalization is sometimes necessary. Denormalization is not necessarily a bad decision if implemented wisely. You should always consider these issues before denormalizing:

Can the system achieve acceptable performance without denormalizing?


    Will the performance of the system after denormalizing still be unacceptable?

    Will the system be less reliable due to denormalization?

If the answer to any of these questions is "yes," then you should avoid denormalization, because any benefit that accrues will not exceed the cost. If, after considering these issues, you decide to denormalize, be sure to adhere to the general guidelines that follow.

The reasons for denormalization

Only one valid reason exists for denormalizing a relational design: to enhance performance. However, there are several indicators which help to identify systems and tables that are potential denormalization candidates. These are:

Many critical queries and reports exist which rely upon data from more than one table. Often these requests need to be processed in an on-line environment.

Repeating groups exist which need to be processed as a group instead of individually.

Many calculations need to be applied to one or many columns before queries can be successfully answered.

Tables need to be accessed in different ways by different users during the same timeframe.

Many large primary keys exist which are clumsy to query and which consume a large amount of DASD when carried as foreign key columns in related tables.

Certain columns are queried a large percentage of the time. Consider 60% or greater to be a cautionary number flagging denormalization as an option.

Be aware that each new RDBMS release usually brings enhanced performance and improved access options that may reduce the need for denormalization. However, most of the popular RDBMS products will on occasion require denormalized data structures. There are many different types of denormalized tables which can resolve the performance problems caused when accessing fully normalized data. The following topics detail the different types and give advice on when to implement each of the denormalization types.

A table that is not sufficiently normalized can suffer from logical inconsistencies of various types, and from anomalies involving data operations. In such a table:

The same fact can be expressed in multiple records; therefore updates to the table may result in logical inconsistencies. For example, each record in an unnormalized "DVD Rentals" table might contain a DVD ID, Member ID, and Member Address; thus a change of address for a particular member will potentially need to be applied to multiple records. If the update is not carried through successfully (if, that is, the member's address is updated on some


records but not others), then the table is left in an inconsistent state. Specifically, the table provides conflicting answers to the question of what this particular member's address is. This phenomenon is known as an update anomaly.

    There are circumstances in which certain facts cannot be recorded at all. In

    the above example, if it is the case that Member Address is held only in the "DVD Rentals" table, then we cannot record the address of a member who

    has not yet rented any DVDs. This phenomenon is known as an insertion anomaly.

There are circumstances in which the deletion of data representing certain facts necessitates the deletion of data representing completely different facts. For example, suppose a table has the attributes Student ID, Course ID, and Lecturer ID (a given student is enrolled in a given course, which is taught by a given lecturer). If the number of students enrolled in the course temporarily drops to zero, the last of the records referencing that course must be deleted, meaning, as a side effect, that the table no longer tells us which lecturer has been assigned to teach the course. This phenomenon is known as a deletion anomaly.

Ideally, a relational database should be designed in such a way as to exclude the possibility of update, insertion, and deletion anomalies. The normal forms of relational database theory provide guidelines for deciding whether a particular design will be vulnerable to such anomalies. It is possible to correct an unnormalized design so as to make it adhere to the demands of the normal forms: this is normalization. Normalization typically involves decomposing an unnormalized table into two or more tables which, were they to be combined (joined), would convey exactly the same information as the original table.

    Background to normalization: definitions

Functional dependency: Attribute B has a functional dependency on attribute A if, for each value of attribute A, there is exactly one value of attribute B. For example, Member Address has a functional dependency on Member ID, because exactly one Member Address value corresponds to each Member ID value. An attribute may be functionally dependent either on a single attribute or on a combination of attributes. It is not possible to determine the extent to which a design is normalized without understanding what functional dependencies apply to the attributes within its tables; understanding this, in turn, requires knowledge of the problem domain. (A small checker for this property is sketched after these definitions.)

Trivial functional dependency: A trivial functional dependency is a functional dependency of an attribute on a superset of itself. {Member ID, Member Address} → {Member Address} is trivial, as is {Member Address} → {Member Address}.

    Full functional dependency: An attribute is fully functionally dependent on

    a set of attributes X if it is a) functionally dependent on X, and b) not

    functionally dependent on any proper subset of X. {Member Address} has a functional dependency on {DVD ID, Member ID}, but not a full functional dependency, for it is also dependent on {Member ID}.

    Multivalued dependency: A multivalued dependency is a constraint


    according to which the presence of certain rows in a table implies the presence of certain other rows: see the Multivalued Dependency article for a rigorous definition.

    Superkey: A superkey is an attribute or set of attributes that uniquely

    identifies rows within a table; in other words, two distinct rows are always

    guaranteed to have distinct superkeys. {DVD ID, Member ID, Member Address} would be a superkey for the "DVD Rentals" table; {DVD ID, Member ID} would also be a superkey.

    Candidate key: A candidate key is a minimal superkey, that is, a superkey

    for which we can say that no proper subset of it is also a superkey. {DVD ID, Member ID} would be a candidate key for the "DVD Rentals" table.

    Non-prime attribute: A non-prime attribute is an attribute that does not

    occur in any candidate key. Member Address would be a non-prime attribute in the "DVD Rentals" table.

    Primary key: Most DBMSs require a table to be defined as having a single

    unique key, rather than a number of possible unique keys. A primary key is a candidate key which the database designer has designated for this purpose.

    History

    Edgar F. Codd first proposed the process of normalization and what came to be known as the 1st normal form:

    There is, in fact, a very simple elimination[1] procedure which we shall call normalization. Through decomposition non-simple domains are replaced by

    "domains whose elements are atomic (non-decomposable) values."

    -Edgar F. Codd, A Relational Model of Data for Large Shared Data Banks[2]

    In his paper, Edgar F. Codd used the term "non-simple" domains to describe a

    heterogeneous data structure, but later researchers would refer to such a structure as an abstract data type.

    Normal forms

    The normal forms (abbrev. NF) of relational database theory provide criteria for

    determining a table's degree of vulnerability to logical inconsistencies and anomalies. The higher the normal form applicable to a table, the less vulnerable it is to such

    inconsistencies and anomalies. Each table has a "highest normal form" (HNF): by definition, a table always meets the requirements of its HNF and of all normal forms

    lower than its HNF; also by definition, a table fails to meet the requirements of any normal form higher than its HNF.

    The normal forms are applicable to individual tables; to say that an entire database is in normal form n is to say that all of its tables are in normal form n.

    Newcomers to database design sometimes suppose that normalization proceeds in

    an iterative fashion, i.e. a 1NF design is first normalized to 2NF, then to 3NF, and so

    on. This is not an accurate description of how normalization typically works. A sensibly designed table is likely to be in 3NF on the first attempt; furthermore, if it is

    3NF, it is overwhelmingly likely to have an HNF of 5NF. Achieving the "higher"

    normal forms (above 3NF) does not usually require an extra expenditure of effort on


    the part of the designer, because 3NF tables usually need no modification to meet the requirements of these higher normal forms.

    Edgar F. Codd originally defined the first three normal forms (1NF, 2NF, and 3NF).

    These normal forms have been summarized as requiring that all non-key attributes

    be dependent on "the key, the whole key and nothing but the key". The fourth and

    fifth normal forms (4NF and 5NF) deal specifically with the representation of many-to-many and one-to-many relationships among attributes. Sixth normal form (6NF) incorporates considerations relevant to temporal databases.

    First normal form

    Main article: First normal form

    The criteria for first normal form (1NF) are:

    A table must be guaranteed not to have any duplicate records;

    therefore it must have at least one candidate key.

    There must be no repeating groups, i.e. no attributes which occur a

    different number of times on different records. For example, suppose that an employee can have multiple skills: a possible representation of

    employees' skills is {Employee ID, Skill1, Skill2, Skill3 ...}, where {Employee ID} is the unique identifier for a record. This representation would not be in 1NF.

    Second normal form

    Main article: Second normal form

    The criteria for second normal form (2NF) are:

    The table must be in 1NF.

    None of the non-prime attributes of the table are functionally

    dependent on a part (proper subset) of a candidate key; in other words, all functional dependencies of non-prime attributes on

    candidate keys are full functional dependencies. For example, consider a "Department Members" table whose attributes are Department ID,

    Employee ID, and Employee Date of Birth; and suppose that an

    employee works in one or more departments. The combination of Department ID and Employee ID uniquely identifies records within the

    table. Given that Employee Date of Birth depends on only one of those attributes - namely, Employee ID - the table is not in 2NF.

    Note that if none of a 1NF table's candidate keys are composite - i.e.

    every candidate key consists of just one attribute - then we can say immediately that the table is in 2NF.

    Third normal form

    Main article: Third normal form

    The criteria for third normal form (3NF) are:

    The table must be in 2NF.

    There are no non-trivial functional dependencies between non-prime

    attributes. A violation of 3NF would mean that at least one non-prime


    attribute is only indirectly dependent (transitively dependent) on a candidate key, by virtue of being functionally dependent on another non-prime

    attribute. For example, consider a "Departments" table whose attributes are

    Department ID, Department Name, Manager ID, and Manager Hire Date; and suppose that each manager can manage one or more departments. {Department ID} is a candidate key. Although Manager Hire Date is

    functionally dependent on {Department ID}, it is also functionally dependent on the non-prime attribute Manager ID. This means the table is not in 3NF.

    Boyce-Codd normal form

    Main article: Boyce-Codd normal form The criteria for Boyce-Codd normal form (BCNF) are:

    The table must be in 3NF.

    Every non-trivial functional dependency must be a dependency on a

    superkey.

    Fourth normal form

    Main article: Fourth normal form The criteria for fourth normal form (4NF) are:

    The table must be in BCNF.

    There must be no non-trivial multivalued dependencies on something

    other than a superkey. A BCNF table is said to be in 4NF if and only if all of its multivalued dependencies are functional dependencies.

    Fifth normal form

    Main article: Fifth normal form

    The criteria for fifth normal form (5NF and also PJ/NF) are:

    The table must be in 4NF.

    There must be no non-trivial join dependencies that do not follow from

    the key constraints. A 4NF table is said to be in the 5NF if and only if every join dependency in it is implied by the candidate keys.

    Domain/key normal form

    Main article: Domain/key normal form Domain/key normal form (or DKNF) requires that a table not be subject to any constraints other than domain constraints and key constraints.

    Sixth normal form


    This normal form was, as of 2005, only recently proposed: the sixth normal form (6NF) was only defined when extending the relational model to take into account the

    temporal dimension. Unfortunately, most current SQL technologies as of 2005 do not

    take into account this work, and most temporal extensions to SQL are not relational.


    See work by Date, Darwen and Lorentzos[3] for a relational temporal extension, or see TSQL2 for a different approach.

    Denormalization

    Main article: Denormalization

Databases intended for Online Transaction Processing (OLTP) are typically more normalized than databases intended for Online Analytical Processing (OLAP). OLTP applications are characterized by a high volume of small transactions, such as updating a sales record at a supermarket checkout counter. The expectation is that

    each transaction will leave the database in a consistent state. By contrast, databases intended for OLAP operations are primarily "read only" databases. OLAP applications tend to extract historical data that has accumulated over a long period of time. For

    such databases, redundant or "denormalized" data may facilitate Business

    Intelligence applications. Specifically, dimensional tables in a star schema often contain denormalized data. The denormalized or redundant data must be carefully controlled during ETL processing, and users should not be permitted to see the data

    until it is in a consistent state. The normalized alternative to the star schema is the snowflake schema.

    Denormalization is also used to improve performance on smaller computers as in

    computerized cash-registers. Since these use the data for look-up only (e.g. price lookups), no changes are to be made to the data and a swift response is crucial.

Non-first normal form (NF²)

In recognition that denormalization can be deliberate and useful, the non-first normal form is a definition of database designs which do not conform to first normal form, by allowing "sets and sets of sets to be attribute domains" (Schek 1982). This extension introduces hierarchies in relations.

Consider the following table:

Non-First Normal Form

Person | Favorite Colors
Bob    | blue, red
Jane   | green, yellow, red

    Assume a person has several favorite colors. Obviously, favorite colors consist of a set of colors modeled by the given table.

To transform this NF² table into 1NF, an "unnest" operator is required which extends the relational algebra of the higher normal forms. The reverse operator is called "nest"; "nest" is not always the mathematical inverse of "unnest", although "unnest" is the mathematical inverse of "nest". Another constraint requires the operators to be bijective, which is covered by the Partitioned Normal Form (PNF).

6. Write a detailed note about the different levels and methods of testing software. (May-06)

Ans. The most useful and practical approach is with the understanding that testing is the process of executing a program with the explicit intention of finding errors, that is, making the program fail.

TESTING STRATEGIES:


A test case is a set of data that the system will process as normal input. However, the data are created with the express intent of determining whether the system will process them correctly.

There are two logical strategies for testing software: code testing and specification testing.

CODE TESTING:

The code testing strategy examines the logic of the program. To follow this testing method, the analyst develops test cases that result in executing every instruction in the program or module; that is, every path through the program is tested. This testing strategy does not indicate whether the code meets its specifications, nor does it determine whether all aspects are even implemented. Code testing also does not check the range of data that the program will accept, even though, when software failures occur in actual use, it is frequently because users submitted data outside the expected ranges (for example, a sales order for $1, the largest in the history of the organization).

SPECIFICATION TESTING:

To perform specification testing, the analyst examines the specifications stating what the program should do and how it should perform under various conditions. Then test cases are developed for each condition or combination of conditions and submitted for processing. By examining the results, the analyst can determine whether the program performs according to its specified requirements.

LEVELS OF TESTING:

Systems are not designed as entire systems, nor are they tested as single systems. The analyst must perform both unit and system testing.

UNIT TESTING:

In unit testing the analyst tests the programs making up a system. (For this reason unit testing is sometimes also called program testing.)

Unit testing focuses first on the modules, independently of one another, to locate errors. This enables the tester to detect errors in coding and logic that are contained within that module alone.

Unit testing can be performed only from the bottom up, starting with the smallest and


lowest-level modules and proceeding one at a time. For each module in bottom-up testing, a short program executes the module and provides the needed data, so that the module is asked to perform the way it will when embedded within the larger system. When the bottom-level modules are tested, attention turns to those on the next level that use the lower ones. They are tested individually and then linked with the previously examined lower-level modules. A minimal sketch of such a driver appears below.

SYSTEM TESTING:

System testing does not test the software per se, but rather the integration of each module in the system. It also tests to find discrepancies between the system and its original objectives, current specifications, and system documentation. The primary concern is the compatibility of individual modules. Analysts try to find areas where modules have been designed with different specifications for data length, type and data element name. For example, one module may expect the data item for the customer identification number to be a character data item.

7. What are structured walkthroughs and how are they carried out? Describe the composition of the walkthrough team. (May-06, Nov-03, May-05)

A structured walkthrough is a planned review of a system or its software by persons involved in the development effort. Sometimes a structured walkthrough is called a peer walkthrough, because the participants are colleagues at the same level in the organization.

PURPOSE:

1. The purpose of a structured walkthrough is to find areas where improvement can be made in the system or the development process.

2. Structured walkthroughs are often employed to enhance quality and to provide guidance to systems analysts and programmers.

3. A walkthrough should be viewed by the programmers and analysts as an opportunity to receive assistance, not as an obstacle to be avoided or tolerated.

4. The structured walkthrough can be used as a constructive and cost-effective management tool after detailed investigation, following design, and during program development.

PROCESS OF A STRUCTURED WALKTHROUGH:

1. The walkthrough concept recognizes that system development is a team process. The individuals who formulated the design specifications or created the program code are part of the review team.

2. A moderator is chosen to lead the review.

3. A scribe or recorder is also needed to capture the details of the discussion and the ideas that are raised.

4. Maintenance should be addressed during the walkthrough.

5. Generally no more than seven persons should be involved, including the individuals who actually developed the product under review, the recorder and the review leader.

6. Structured reviews rarely exceed 90 minutes in length.


REQUIREMENTS REVIEW:

1. A requirements review is a structured walkthrough conducted to examine the requirement specifications formulated by the analyst.

2. It is also called a specification review.

3. It aims at examining the functional activities and processes that the new system will handle.

4. It includes documentation that participants read and study prior to the actual walkthrough.

DESIGN REVIEW:

1. A design review focuses on design specifications for meeting previously identified system requirements.

2. The purpose of this type of structured walkthrough is to determine whether the proposed design will meet the requirements effectively and efficiently.

3. If the participants find discrepancies between the design and the requirements, they will point them out and discuss them.

CODE REVIEW:

1. A code review is a structured walkthrough conducted to examine the program code developed in a system, along with its documentation.

2. It is used for new systems and for systems under maintenance.

3. A code review does not deal with the entire software product but rather with individual modules or major components in a program.

4. When programs are reviewed, the participants also assess execution efficiency, the use of standard data names and modules, and program errors.


8. What is user interface design? What tools are used to chart a user interface design? (May-06)

Ans: User interface design is the specification of a conversation between the system user and the computer. It generally results in the form of input or output. There are several types of user interface styles, including menu selection, instruction sets, question-answer dialogue and direct manipulation.

1. Menu selection: A strategy of dialogue design that presents a list of alternatives or options to the user. The system user selects the desired alternative or option by keying in the number associated with that option (a minimal sketch follows this list).

2. Instruction sets: A strategy where the application is designed using a dialogue syntax that the user must learn. There are three types of syntax: structured English, mnemonic syntax and natural language.

3. Question-answer dialogue: A style that was primarily used to supplement either menu-driven or syntax-driven dialogues. The system's questions typically involve yes or no answers. It was also popular in developing interfaces for character-based screens in mainframe applications.

4. Direct manipulation: Allows graphical objects to appear on a screen. Essentially, this user interface style focuses on using icons, small graphical images, to suggest functions to the user.
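A minimal sketch of the menu selection style in Python; the menu options are hypothetical, and the loop simply dispatches on the number the user keys in.

```python
# Hypothetical menu for a library management system, illustrating
# the menu selection dialogue style: list options, read a number.
MENU = {
    "1": "Issue a book",
    "2": "Return a book",
    "3": "Search catalogue",
    "4": "Exit",
}

def menu_selection() -> None:
    while True:
        for number, option in MENU.items():
            print(f"{number}. {option}")
        choice = input("Enter option number: ").strip()
        if choice == "4":
            break
        elif choice in MENU:
            print(f"You selected: {MENU[choice]}")
        else:
            print("Invalid option, please try again.")

if __name__ == "__main__":
    menu_selection()
```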

9. Describe the different methods of file organization. Illustrate with examples. For what type of system can each type of file organization method be used? (May-06, Nov-03)

Ans. File Organization:

A file is organized to ensure that records are available for processing. It should be designed in line with the activity and volatility of the information and the nature of the storage media and devices.

There are four basic file organization methods:

1. Sequential organization:

i. Sequential organization simply means storing data in physically contiguous blocks within files on tape or disk. Records are also in sequence within each block.

ii. To access a record, previous records within the block are scanned. Thus sequential record design is best suited for "get next" activities, reading one record after another without a search delay.

iii. In a sequential organization, records can be added only at the end of the file. It is not possible to insert a record in the middle of the file without rewriting the file.

iv. In a file update, transaction records are in the same sequence as in the master file. Records from both files are matched, one record at a time, resulting in an updated master file (a sketch of this matching follows below).

v. Advantages:
- simple to design
- easy to program
- variable-length and blocked records are available
- makes best use of disk storage

vi. Disadvantages: records cannot be added to the middle of the file.
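A minimal sketch (not from the source) of the sequential master-file update described in iv.: both files are sorted by key and matched one record at a time. The record layout is hypothetical.

```python
def sequential_update(master, transactions):
    """Match sorted master records against sorted transaction records,
    one at a time, producing an updated master file."""
    updated, t = [], 0
    for record in master:                           # both lists sorted by "key"
        while t < len(transactions) and transactions[t]["key"] == record["key"]:
            record = {**record, **transactions[t]}  # apply the matching transaction
            t += 1
        updated.append(record)
    return updated

# Hypothetical master and transaction files, both in key sequence
master = [{"key": 1, "qty": 10}, {"key": 2, "qty": 5}, {"key": 3, "qty": 7}]
transactions = [{"key": 2, "qty": 8}]
print(sequential_update(master, transactions))
# [{'key': 1, 'qty': 10}, {'key': 2, 'qty': 8}, {'key': 3, 'qty': 7}]
```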


2. Indexed Sequential Organization:

vii. Like sequential organization, indexed (keyed) sequential organization stores data in physically contiguous blocks.

viii. The difference is in the use of indexes to locate records.

ix. Disk storage is divided into three areas:

a. Prime area: The prime area contains file records stored by key or ID number. All records are initially stored in the prime area.

b. Overflow area: The overflow area contains records added to the file that cannot be placed in logical sequence in the prime area.

c. Index area: The index area is more like a data dictionary. It contains the keys of records and their locations on the disk. A pointer associated with each key is an address that tells the system where to find the record (see the sketch after this section).

x. Advantages:

a. Indexed sequential organization reduces the magnitude of the sequential search and provides quick access for sequential and direct processing.

b. Records can be inserted or updated in the middle of the file.

xi. Disadvantages:

a. The prime drawback is the extra storage required for the index.

b. It also takes longer to search the index for data access or retrieval.

c. Periodic reorganization of the file is required.

3. Inverted List Organization:

xii. Like the indexed sequential storage method, the inverted list organization maintains an index.

xiii. The two methods differ, however, in the index level and record storage.

xiv. The indexed sequential method has a multiple index for a given key, whereas the inverted list method has a single index for each key type.

xv. In an inverted list, records are not necessarily stored in a particular sequence. They are placed in the data storage area, but indexes are updated for the record keys and locations.

xvi. Advantage: Inverted lists are best for applications that request specific data on multiple keys. They are ideal for static files, as additions and deletions cause expensive pointer updating.

4. Direct Access Organization:

xvii. In direct-access file organization, records are placed randomly throughout the file.

xviii. Records need not be in sequence, because they are updated directly and rewritten back to the same location.

xix. New records are added at the end of the file or inserted in specific locations based on software commands.

xx. Records are accessed by addresses that specify their disk locations. An address is required for locating a record, for linking records, or for establishing relationships (a sketch of one address calculation follows below).

xxi. Advantages:

a. Records can be inserted or updated in the middle of the file.

b. Better control over record location.

xxii. Disadvantages:

a. Address calculation is required for processing.

b. Variable-length records are nearly impossible to process.
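A minimal sketch of the address calculation mentioned in the disadvantages: a simple division-remainder hash (a common textbook choice, assumed here rather than taken from the source) that maps a record key to a bucket address.

```python
NUM_BUCKETS = 97  # hypothetical file size in buckets; a prime is customary

def address_of(key: int) -> int:
    """Division-remainder hashing: map a record key to a bucket address.
    Direct-access organization stores/fetches the record at this address."""
    return key % NUM_BUCKETS

print(address_of(1005))  # bucket 35
```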

10. What is meant by a prototype? What is its use in application prototyping? (May-06)

Prototyping:

a. Prototyping is a technique for quickly building a functioning but incomplete model of the information system.

b. A prototype is a small, representative or working model of the users' requirements or of a proposed design for an information system.

c. The development of the prototype is based on the fact that the requirements are seldom fully known at the beginning of the project. The idea is to build a first, simplified version of the system and seek feedback from the people involved in order to then design a better subsequent version. This process is repeated until the system meets the client's conditions of acceptance.

d. Any given prototype may omit certain functions or features until such a time as the prototype has sufficiently evolved into an acceptable implementation of the requirements.
e. There are two types of prototyping models:
a. Throw-away prototyping - In this model, the prototype is discarded once it has served the purpose of clarifying the system requirements.
b. Evolutionary prototyping - In this model, the prototype evolves into the main system.

Need for Prototyping:
a. Information requirements are not always well defined. Users may know only that certain business areas need improvement or that existing procedures must be changed, or they may know that they need better information for managing certain activities but are not sure what that information is.

b. The user's requirements may be too vague to even begin formulating a design.
c. Developers may have neither information nor experience of some unique situations or some high-cost or high-risk situations in which the proposed design is new and untested.
d. Developers may be unsure of the efficiency of an algorithm, the adaptability of an operating system, or the form that human-machine interaction should take.
e. In these and many other situations, a prototyping approach may offer the best approach.


Steps in Prototyping:
a. Requirement analysis: Identify the users' information and operating requirements.
b. Prototype creation or modification: Develop a working prototype that focuses on only the most important functions, using a basic database.
c. Customer evaluation: Allow the customers to use the prototype and evaluate it. Gather the feedback, reviews, comments and suggestions.
d. Prototype refinement: Integrate the most important changes into the current prototype on the basis of customer feedback.
e. System development and implementation: Once the refined prototype satisfies all the client's conditions of acceptance, it is transformed into the actual system.
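The iterative nature of these steps can be summarized in a minimal, hypothetical Python sketch of the prototyping loop; the function names and the acceptance test are illustrative assumptions, not a real method.

```python
# Hypothetical sketch of the prototyping loop described above.

def build_prototype(requirements):
    # Stand-in for developing a working model of the key functions.
    return {"features": list(requirements)}

def gather_feedback(prototype):
    # Stand-in for customer evaluation; returns requested changes.
    return []  # an empty list means the client accepts the prototype

def prototyping_process(requirements):
    prototype = build_prototype(requirements)        # prototype creation
    while True:
        changes = gather_feedback(prototype)         # customer evaluation
        if not changes:          # client's conditions of acceptance are met
            break
        prototype["features"].extend(changes)        # prototype refinement
    return prototype             # transformed into the actual system

print(prototyping_process(["enter student", "generate score card"]))
```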

Advantages:
i. Shorter development time.
ii. More accurate user requirements.
iii. Greater user participation and support.
iv. Relatively inexpensive to build as compared with the cost of a conventional system.

Disadvantages:
i. An inappropriate operating system or programming language may be used simply because it is available.
ii. The completed system may not contain all the features and final touches. For instance, headings, titles and page numbers in a report may be missing.
iii. File organization may be temporary and record structures may be left incomplete.
iv. Processing and input controls may be missing, and documentation of the system may have been avoided entirely.
v. Development of the system may become a never-ending process, as changes will keep happening.
vi. It adds to the cost and time of developing the system if left uncontrolled.

Application:
i. This method is most useful for unique applications where developers have little information or experience, or where the risk of error may be high.
ii. It is useful for testing the feasibility of the system or for identifying user requirements.

11. Distinguish between reliability and validity. How are they related? (May-06, Nov-03)

Ans: Reliability and validity are two faces of information gathering. The term reliability is synonymous with dependability, consistency and accuracy. Concern for reliability comes from the necessity for dependability in measurement, whereas validity is concerned with what is being measured rather than with consistency and accuracy.
Reliability can be approached in three ways:
1. It is defined as stability, dependability, and predictability.


2. It focuses on the accuracy aspect.
3. Errors of measurement: these are random errors stemming from fatigue or fortuitous conditions at a given time.
The most common question that defines validity is: does the instrument measure what it is supposed to measure? It refers to the notion that the questions asked are worded to produce the information sought. In validity, the emphasis is on what is being measured.

12. Write short notes on (May-06):
a. Structure charts (May-04, May-03, Dec-04)
b. HIPO charts (May-04, May-03, May-05, Dec-04)
c. Security and Disaster Recovery (Dec-04)
d. List of Deliverables (Dec-04)
e. Warnier/Orr Diagram (May-05, Dec-04, May-04)
HIPO charts:

1. HIPO is a commonly used method for developing software. An acronym for Hierarchical Input Process Output, this method was developed by IBM for its large, complex operating systems.
2. Purpose: The assumption on which HIPO is based is that one easily loses track of the intended function of a large system. This is one reason why it is difficult to compare existing systems against their original specifications, and therefore why failure can occur even in systems that are technically well formulated. From the user's view, a single function can often extend across several modules. The concern of the analyst is understanding, describing and documenting the modules and their interactions in a way that provides sufficient detail but does not lose sight of the larger picture.

3. HIPO diagrams are graphic, rather than prose or narrative, descriptions of the system.
4. They assist the analyst in answering three questions:
a. What does the system or module do?
b. How does it do it?
c. What are the inputs and outputs?
5. A HIPO description for a system consists of:
d. A visual table of contents (VTOC), and
e. Functional diagrams.
6. Advantages:
f. HIPO diagrams are effective for documenting a system.
g. They also aid designers and force them to think about how specifications will be met and where activities and components must be linked together.

7. Disadvantages:
h. They rely on a set of specialized symbols that require explanation, an extra concern when compared to the simplicity of, for example, data flow diagrams.
i. HIPO diagrams are not as easy to use for communication purposes as many people would like. And, of course, they do not guarantee error-free systems. Hence, their greatest strength is the documentation of a system.

List of Deliverables:
1. The deliverables are those artifacts of the software system to be delivered to the users.
2. It is wrong to consider that only the executable code is deliverable. Some consider additional documentation as part of the deliverables, but this too is an incomplete list of deliverables.

3. The deliverables of a software system development project are numerous and vary with the user organization.
4. The broad classes of system deliverables are as follows:
a. Document deliverables.
b. Source code deliverables.

5. Some of the document deliverables are listed as follows:
c. System requirement specifications.
d. System design documents:
i. Data flow diagrams with process descriptions, data dictionary, etc.
ii. System flowchart.
iii. Entity relationship diagrams.
iv. Structure charts.
v. Forms design of input data forms.
vi. Forms design of outputs such as query responses/reports.
vii. Error messages: list, contents, suggested corrective actions.
viii. Database schema.
e. Testing documents:
i. Test plan with test cases, test data and expected results.
f. User training manual:
i. Operational manual, containing routine operations, exception processing, housekeeping functions, etc.
6. The source code deliverables: these are also called soft copy deliverables because they are delivered in soft form (possibly in addition to print form). Major deliverables in this category are listed as follows:
g. The source code of programs.
h. The source code of libraries/sharable code.
i. The database schema source code.
j. The database triggers, stored procedures, etc., if not covered above.


k. The online help page contents.
7. The list of deliverables is generally a part of the proposal. However, the final list of deliverables may include additional elements or clarifications after the system requirements specifications are finalized.

Structure Charts: These are described as follows:
1. The structure chart is a graphical representation of the hierarchical organization of the functions of a program and the communication between program components.
2. It is drawn to describe a process or program component shown in the system flowchart in more detail. Structure charts create further top-down decomposition; thus they are another, lower level of abstraction of the design before implementation.
3. A computer program is modularized so that it is easy to understand, avoids repetition of the same code in multiple places of the same (or different) programs, and can be reused to save development time, effort and costs. The structure chart makes it possible to model even at this lower level of design.

4. These modules of a program, called subprograms, are placed physically one after the other, serially. However, they are referenced in the order that the functionality requires them. Thus program control is transferred from a line in the calling subprogram to the first line in the called subprogram. Subprograms perform the role of calling and/or called subprograms at different times during program execution.
5. The calling subprogram is referred to as a parent and the called one is referred to as a child subprogram. Thus a program can be logically arranged into a hierarchy of subprograms. Structure charts can represent these parent-child relationships between subprograms effectively.

6. The subprograms also communicate with each other in either direction, and the structure chart can describe these data flows effectively. The individual data items passed between program modules are called data couples. They are represented by an arrow starting from a hollow circle, labeled with the name of the data item passed.
7. The subprograms also communicate among themselves through a type of data item called a flag, which is purely internal information between subprograms, used to indicate some result. Flags can be binary values, indicating the presence or absence of something. (A small code sketch of data couples and flags follows.)
[Symbol omitted in the source: a calling module invoking the called module.]
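To relate these terms to code, here is a minimal, hypothetical Python sketch in which a data couple corresponds to a value passed between a parent (calling) and a child (called) subprogram, and a flag to a boolean result; all function names and the order data are illustrative assumptions.

```python
# Hypothetical sketch: parent/child subprograms with a data couple and a flag.

def validate_order(order):          # child subprogram
    # Returns a flag (binary value) indicating the validation result.
    return order["qty"] > 0

def price_order(order):             # child subprogram
    # Returns a data couple: the computed total passed back to the parent.
    return order["qty"] * order["unit_price"]

def process_order(order):           # parent (calling) subprogram
    is_valid = validate_order(order)    # flag flows child -> parent
    if is_valid:
        total = price_order(order)      # data couple flows child -> parent
        print(f"Order total: {total}")
    else:
        print("Order rejected")

process_order({"qty": 3, "unit_price": 50.0})  # Order total: 150.0
```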

Warnier/Orr Diagrams:
1. The ability to show the relationship between processes and steps in a process is not unique to Warnier/Orr diagrams, nor is the use of iteration or the treatment of individual cases.


2. Both the structured flowchart and structured English methods do this equally well. However, the approach used to develop system definitions with Warnier/Orr diagrams is different and fits well with those used in logical system design.
3. To develop a Warnier/Orr diagram, the analyst works backwards, starting with the system's output and using an output-oriented analysis. On paper, the development moves from left to right. First, the intended output or results of processing are defined.
4. At the next level, shown by inclusion with a bracket, the steps needed to produce the output are defined. Each step in turn is further defined. Additional brackets group the processes required to produce the result on the next level.
5. A complete Warnier/Orr diagram includes both process groupings and data requirements. Both elements are listed for each process or process component. These data elements are the ones needed to determine which alternative or case should be handled by the system and to carry out the process.
6. The analyst must determine where each data element originates, how it is used, and how individual elements are combined. When the definition is completed, the data structure for each process is documented. It, in turn, is used by the programmers, who work from the diagrams to code the software.

7. Advantages:
a. Warnier/Orr diagrams offer some distinct advantages to system experts. They are simple in appearance and easy to understand, yet they are powerful design tools.
b. They have the advantage of showing groupings of processes and the data that must be passed from level to level.
c. In addition, the sequence of working backwards ensures that the system will be result-oriented.
d. This method is also a natural process; with structured flowcharts, for example, it is often necessary to determine the innermost steps before the iterations and modularity.

Security and Disaster Recovery:
The system security problem can be divided into four related issues:
1. System security: System security refers to the technical innovations and procedures applied to the hardware and operating systems to protect against deliberate or accidental damage from a defined threat.
2. System integrity: System integrity refers to the proper functioning of hardware and programs, appropriate physical security, and safety against external threats such as eavesdropping and wiretapping.
3. Privacy: Privacy defines the rights of users or organizations to determine what information they are willing to share with or accept from others, and how the organization can be protected against unwelcome, unfair, or excessive dissemination of information about it.
4. Confidentiality:


The term confidentiality refers to a special status given to sensitive information in a database to minimize the possible invasion of privacy. It is an attribute of information that characterizes its need for protection.
Disaster/Recovery Planning:

1. Disaster/recovery planning is a means of addressing the concern for system availability by identifying potential exposure, prioritizing applications, and designing safeguards that minimize loss if disaster occurs. It means that, no matter what the disaster, the system can be recovered. The business will survive because a disaster/recovery plan allows quick recovery under the circumstances.
2. In disaster/recovery planning, management's primary role is to accept the need for contingency planning, select an alternative measure, and recognize the benefits that can be derived from establishing a disaster/recovery plan. Top management should establish a disaster/recovery policy and commit corporate support staff to its implementation.

3. The user's role is also important. The user's responsibilities include the following:
a. Identifying critical applications, why they are critical, and how computer unavailability would affect the department.
b. Approving data protection procedures and determining how long and how well operations will continue without the data.
c. Funding the costs of backup.

13. What are the roles of the system analyst in system analysis and design? (Nov-03, May-01)
The various roles of the system analyst are as follows:

1. Change Agent: The analyst may be viewed as an agent of change. A candidate system is designed to introduce change and reorientation in how the user organization handles information or makes decisions. In the role of a change agent, the system analyst may select various styles to introduce change to the user organization. The styles range from that of the persuader to the imposer. In between, there are the catalyst and confronter roles. When the user appears to have a tolerance for change, the persuader or catalyst (helper) style is appropriate. On the other hand, when drastic changes are required, it may be necessary to adopt the confronter or even the imposer style. No matter what style is used, however, the goal is the same: to achieve acceptance of the candidate system with a minimum of resistance.
2. Investigator and Monitor:

In defining a problem, the analyst pieces together the information gathered to determine why the present system does not work well and what changes will correct the problem. In one respect, this work is similar to that of an investigator. Related to the role of investigator is that of monitoring programs in relation to time, cost, and quality. Of these resources, time is the most important; if time gets away, the project suffers from increased costs and wasted human resources.
3. Architect:

Just as an architect relates the client's abstract design requirements to the contractor's detailed building plan, the analyst relates the user's logical design requirements to the detailed physical system design. As an architect, the analyst also creates a detailed physical design of the candidate system.

4. Psychologist:
In system development, systems are built around people. The analyst plays the role of psychologist in the way he/she reaches people, interprets their thoughts, assesses their behaviour and draws conclusions from these interactions. Understanding interfunctional relationships is important. It is important that the analyst be aware of people's feelings and be prepared to get around things in a graceful way. The art of listening is important in evaluating responses and feedback.

5. Salesperson:
Selling change can be as crucial as initiating change. Selling the system actually takes place at each step in the system life cycle. Sales skills and persuasiveness are crucial to the success of the system.

6. Motivator: The analyst's role as a motivator becomes obvious during the first few weeks after implementation of the new system and during times when turnover results in new people being trained to work with the candidate system. The amount of dedication it takes to motivate the users often taxes the analyst's ability to maintain the pace.

7. Politician:
Related to the role of motivator is that of politician. In implementing a candidate system, the analyst tries to appease all parties involved. Diplomacy and finesse in dealing with people can improve acceptance of the system. Inasmuch as a politician must have the support of his/her constituency, it is a good analyst's goal to have the support of the users' staff.

14. What are the requirements of a good system analyst? (May-04, May-01, May-06)
Requirements of a good system analyst:
An analyst must possess various skills to carry out the job effectively. The skills required by a system analyst can be divided into the following categories:

i) Technical skills: Technical skills focus on procedures and techniques for operational analysis, system analysis and computer science. The technical skills relevant to systems work include the following:
a. Working knowledge of information technologies: The system analyst must be aware of both existing and emerging information technologies. They should also stay current through disciplined reading and participation in appropriate professional societies.
b. Computer programming experience and expertise: A system analyst must have some programming experience. Most system analysts need to be proficient in one or more high-level programming languages.

ii) Interpersonal skills: Interpersonal skills deal with the relationships and interface of the analyst with people in the business. The interpersonal skills relevant to systems work include the following:
a. Good interpersonal communication skills: An analyst must be able to communicate, both orally and in writing. Communication is not just reports, telephone conversations and interviews. It is people talking, listening, feeling and reacting to one another, their experiences and reactions. Open communication channels are a must for system development.
b. Good interpersonal relations skills: An analyst interacts with all stakeholders in a system development project. These interactions require effective interpersonal skills that enable the analyst to deal with group dynamics, business politics, conflict and change.

iii) General knowledge of business processes and terminology:
System analysts must be able to communicate with business experts to gain an understanding of their problems and needs. They should avail themselves of every opportunity to complete basic business literacy courses such as financial accounting, management or cost accounting, finance, marketing, manufacturing or operations management, quality management, economics and business law.

iv) General problem-solving skills:
The system analyst must be able to take a large business problem, break that problem down into its parts, determine problem causes and effects, and then recommend a solution. Analysts must avoid the tendency to suggest a solution before analyzing the problem.

v) Flexibility and adaptability:
No two projects are alike. Accordingly, there is no single, magical approach or standard that is equally applicable to all projects. Successful system analysts learn to be flexible and to adapt to unique challenges and situations.

vi) Character and ethics:
The nature of the system analyst's job requires a strong character and a sense of right and wrong. Analysts often gain access to sensitive and confidential facts and information that are not meant for public disclosure.

15. Build a system for the current MCA Admission process. Draw the context-level diagram, DFD up to two levels, ER diagram, and show data flows, data stores and a process. Draw input and output screens. (May-04)

EVENT LIST FOR THE CURRENT MCA ADMISSION SYSTEM
1. Administrator enters the college details.
2. Issue of admission forms.
3. Administrator enters the student details into the system.
4. Administrator verifies the details of the student.
5. System generates the hall tickets for the student.
6. Administrator updates the CET score of the student in the system.


7. System generates the score card for the students.
8. Student enters his list of college preferences into the system.
9. System generates a college-wise student list according to CET score.
10. System sends the list to the college as well as the student.
Data stores used in MCA admission (a minimal schema sketch follows the field lists):

1) Student_Details:
i) Student_id
ii) Student_name
iii) Student_address
iv) Student_contactNo
v) Student_qualification
vi) Student_marks10th
vii) Student_marks12th
viii) Student_marksDegree
2) Stud_CET_Details:
i) Student_id
ii) Student_rollNo
iii) Student_cetScore
iv) Student_percentile
3) Stud_Preference_List:
i) Student_id
ii) Student_rollNo
iii) Student_preference1
iv) Student_preference2
v) Student_preference3
vi) Student_preference4
vii) Student_preference5
4) College_List:
i) College_id
ii) College_name
iii) College_address
iv) Seats_available
v) Fee
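As a concrete rendering of these data stores, here is a minimal, hypothetical Python sketch using dataclasses; the field types are assumptions, since the notes list only field names, and the remaining two stores follow the same pattern.

```python
# Hypothetical sketch: two of the MCA admission data stores as dataclasses.
from dataclasses import dataclass

@dataclass
class StudentDetails:
    student_id: int
    name: str
    address: str
    contact_no: str
    qualification: str
    marks_10th: float
    marks_12th: float
    marks_degree: float          # optional for non-graduates

@dataclass
class StudCetDetails:
    student_id: int
    roll_no: str
    cet_score: int
    percentile: float

# Stud_Preference_List and College_List would be modeled the same way.
record = StudentDetails(1, "A. Student", "Pune", "9999999999",
                        "B.Sc.", 82.0, 78.5, 66.0)
print(record.name)
```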

    Input Files

    1) Student Details Form:

    Student Name: ___________________________

    Student Address:

Student Contact No:

Student Qualification:

Student's 12th Percentage:

Student's 10th Percentage:

Student's Degree Percentage: (optional)

2) Student Preference List:

Student_rollNo: _________

    Preference No 1:

    Preference No 2:

    Preference No 3:

    Preference No 4: Preference No 5:

3) Student CET Details:

    Student id :

    Student rollno:

    Student Score:

    Student Percentile:

4) College List:

College Name:

College Address:

Seats Available:

Fees:

    OUTPUT FILES

    1) Student ScoreCard

Student RollNo:

Student Name:

Student Score:

Percentile:

    2) Student List to College

RollNo    Name    Score    Percentile
1
2
3
4
5
6
7
8
9


16. Give the importance of formal methods. Especially give the importance of the spiral development model. (May-03)
DEFINITION - The spiral model, also known as the spiral lifecycle model, is a systems development method (SDM) used in information technology (IT). This model of development combines the features of the prototyping model and the waterfall model. The spiral model is favored for large, expensive, and complicated projects.

    The steps in the spiral model can be generalized as follows:

1. The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external or internal users and other aspects of the existing system.
2. A preliminary design is created for the new system.
3. A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.
4. A second prototype is evolved by a fourfold procedure: (1) evaluating the first prototype in terms of its strengths, weaknesses, and risks; (2) defining the requirements of the second prototype; (3) planning and designing the second prototype; (4) constructing and testing the second prototype.
5. At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product.
6. The existing prototype is evaluated in the same manner as was the previous prototype, and, if necessary, another prototype is developed from it according to the fourfold procedure outlined above.
7. The preceding steps are iterated until the customer is satisfied that the refined prototype represents the final product desired.
8. The final system is constructed, based on the refined prototype.
9. The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.
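A minimal, hypothetical Python sketch of this loop, with the fourfold procedure reduced to a single refinement step; the function names, the fixed risk value and the version-based acceptance criterion are assumptions for illustration only.

```python
# Hypothetical sketch of spiral-model iterations.

RISK_LIMIT = 0.5  # assumed threshold at which the customer would abort

def assess_risk(prototype):
    # Stand-in for evaluating strengths, weaknesses, and risks.
    return 0.2

def customer_satisfied(prototype):
    return prototype["version"] >= 3   # stand-in acceptance criterion

def spiral(requirements):
    prototype = {"version": 1, "reqs": requirements}
    while not customer_satisfied(prototype):
        if assess_risk(prototype) > RISK_LIMIT:
            return None                # project aborted: risk too great
        # Fourfold procedure: evaluate, define, plan/design, construct/test.
        prototype = {"version": prototype["version"] + 1,
                     "reqs": requirements}
    return prototype   # final system is built from the refined prototype

print(spiral(["requirement A", "requirement B"]))
```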

Advantages:
- Estimates (i.e. budget, schedule, etc.) get more realistic as work progresses, because important issues are discovered earlier.
- It is more able to cope with the (nearly inevitable) changes that software development generally entails.
- Software engineers (who can get restless with protracted design processes) can get their hands in and start working on a project earlier.


- The spiral model is a realistic approach to the development of large-scale software products because the software evolves as the process progresses. In addition, the developer and the client better understand and react to risks at each evolutionary level.
- The model uses prototyping as a risk reduction mechanism and allows for the development of prototypes at any stage of the evolutionary development.
- It maintains a systematic stepwise approach, like the classic life cycle model, but incorporates it into an iterative framework that more closely reflects the real world.
- If employed correctly, this model should reduce risks before they become problematic, as technical risks are considered at all stages.

Disadvantages:
1. Demands considerable risk-assessment expertise.
2. It has not been employed as much as proven models (e.g. the waterfall model) and hence it may prove difficult to convince the client (especially where a contract is involved) that this model is controllable and efficient. More study needs to be done in this regard.
3. It may be difficult to convince customers that the evolutionary approach is controllable.

17. What is the 4GL model? What are its advantages and disadvantages? (May-03)
Fourth Generation Technique:
1) The term 'Fourth Generation Technique' encompasses a broad array of software tools that have one thing in common: each tool enables the software developer to specify some characteristics of software at a high level. The tool then automatically generates source code based on the developer's specification.
2) There is little debate that the higher the level at which software can be specified to a machine, the faster a program can be built.
3) The 4GL model focuses on the ability to specify software to a machine at a level that is close to natural language, or using a notation that

    imparts signifi