Defragmented File System Documentation


  • 8/14/2019 Defragmented File System Documentation


    Defragmented File System: System Performer

    A hard disk is organized into blocks, which are the storage points of the files. A file whose blocks are not contiguous is called a fragmented file. File Fragmentation should be a stand-alone application to split large files into smaller constituents and to combine fragments back into large files.

    Charithardha. CH7/1/2009


    Introduction

    Abstract: A hard disk is organized into blocks. These blocks are the storage

    points of the files. The size of a file depends on the blocks allotted by the

    operating system. A file which takes its blocks non-contiguously is called a

    fragmented file. File Fragmentation should be a stand-alone application to

    split large files into smaller constituents and to combine fragments back

    into large files.

    Split: The system should be able to split the source file into fragments of

    a given size and save the fragments at the target location. In order to split

    a file into fragments, the user should select the following: the source file,

    such as a .java file; the target (destination); and the fragment size from

    the drop-down list (MB, KB, Bytes).

    Stop: The application should allow the user to stop in the middle of

    fragmentation.

    Exit: The system should allow the user to exit from the application.

    Help: The system should provide help to the user of the application.

    About: The system should provide information about the application.

    Rejoin: The system should allow the user to rejoin the fragments

    of one or more files.
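    The Split and Rejoin operations described above can be sketched in Java, the technology the document specifies. The class name, the ".partN" naming scheme, and the stream-based approach below are illustrative assumptions, not part of the specification.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of Split and Rejoin; names are illustrative.
public class FileSplitter {

    // Split the source file into fragments of at most fragmentSize bytes,
    // written to targetDir as <name>.part0, <name>.part1, ...
    public static List<Path> split(Path source, Path targetDir, int fragmentSize)
            throws IOException {
        List<Path> parts = new ArrayList<>();
        byte[] buffer = new byte[fragmentSize];
        try (InputStream in = Files.newInputStream(source)) {
            int n, index = 0;
            while ((n = in.readNBytes(buffer, 0, fragmentSize)) > 0) {
                Path part = targetDir.resolve(source.getFileName() + ".part" + index++);
                try (OutputStream out = Files.newOutputStream(part)) {
                    out.write(buffer, 0, n);
                }
                parts.add(part);
            }
        }
        return parts;
    }

    // Rejoin the fragments, in order, into a single target file.
    public static void rejoin(List<Path> parts, Path target) throws IOException {
        try (OutputStream out = Files.newOutputStream(target)) {
            for (Path part : parts) {
                out.write(Files.readAllBytes(part));
            }
        }
    }
}
```

    A real implementation would add the Stop facility (checking a cancellation flag between fragments) and a GUI front end; this sketch only covers the core byte-level split and rejoin.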

    Problem Identification: Fragmented files cause more space to be occupied

    on the hard disk. These files have to be identified and rearranged so that

    they occupy contiguous blocks. System performance is reduced because

    fragmented files occupy the blocks of the hard disk in an irregular fashion.

    A system has to be established to defragment the files into contiguous

    blocks of the operating system.

    Solution: The fragmented files should be identified and defragmented at a

    destination location. The files from the source location have to be shifted

    to the target location so that they occupy contiguous blocks. The application

    has to provide a utility to select a file from the source location and save

    the files at the target location. The application should provide the facility

    to stop the defragmentation of files, and should allow the user to exit from

    the application.

    Technology Specification: J2SE, AWT / Swing


    1.1 PROJECT OVERVIEW:

    Client: Universal Clientele

    Project Type: Product Development

    Project Architecture: System Architectural Development

    File Fragmentation should be a stand-alone application to split large files

    into smaller constituents and to combine files back into large files. Split:

    the system should be able to split the source files into fragments of a given

    size and save the fragments at the target location. In order to split files

    into fragments, the user should select the source file, such as a .java file.

    This application is a product-development project which can be sold to users

    of any operating system.

    This application can be used by all programmers, technical writers and other

    document writers who want to keep their files in a secure state. Disk

    Defragmenter is a system utility for analyzing local volumes, and for

    locating and consolidating fragmented files and folders. A hard disk is

    organized into blocks; these blocks are the storage points of the files. A

    file which takes its blocks non-contiguously is called a fragmented file.

    File Fragmentation should be a stand-alone application to split large files

    into smaller constituents and to combine files into large files.

    Rearranging the blocks of a file contiguously is known as defragmentation.

    In computing, file system fragmentation, sometimes called file system

    aging, is the inability of a file system to lay out related data

    sequentially (contiguously), an inherent phenomenon in storage-

    backed file systems that allow in-place modification of their contents. It

    is a special case of data fragmentation. File system fragmentation

    increases disk head movement, or seeks, which are known to hinder

    throughput. The correction to existing fragmentation is to reorganize


    files and free space back into contiguous areas, a process called

    defragmentation.

    System Analysis:

    Problem Statement:

    As advanced as hard drives have become, one item they are not

    very good at is housekeeping, or maybe that should be drive keeping.

    When files are created, deleted, or modified, it is almost a certainty they

    will become fragmented. Fragmented simply means the file is not

    stored in one place in its entirety, or what computer folks like to call a

    contiguous location. Different parts of the file are scattered across the

    hard disk in noncontiguous pieces. The more fragmented files there are

    on a drive, the more performance and reliability suffer, as the drive

    heads have to search for all the pieces in different locations.

    When a file system is first initialized on a partition (the partition is

    formatted for the file system), the entire space allotted is empty.[1] This

    means that the allocator algorithm is completely free to position newly

    created files anywhere on the disk. For some time after creation, files

    on the file system can be laid out near-optimally. When the operating

    system and applications are installed or other archives are unpacked,

    laying out separate files sequentially also means that related files are

    likely to be positioned close to each other.

    However, as existing files are deleted or truncated, new regions of free

    space are created. When existing files are appended to, it is often

    impossible to resume the write exactly where the file used to end, as

    another file may already be allocated there; thus, a new fragment

    has to be allocated. As time goes on, and the same factors are

    continuously present, free space as well as frequently appended files

    tend to fragment more. Shorter regions of free space also mean that

    the allocator is no longer able to allocate new files contiguously, and

    has to break them into fragments. This is especially true when the file

    system is more full, as longer contiguous regions of free space are less

    likely to occur.


    Note that the following is a simplification of an otherwise complicated

    subject. The method which is about to be explained has been the

    general practice for allocating files on disk and other random-access

    storage, for over 30 years. Some operating systems do not simply

    allocate files one after the other, and some use various methods to try

    to prevent fragmentation, but in general, sooner or later, for the

    reasons explained in the following explanation, fragmentation will

    occur as time goes by on any system where files are routinely deleted

    or expanded. Consider the following scenario:

    A new disk has had 5 files saved on it, named A, B, C, D and E, and

    each file is using 10 blocks of space (here the block size is

    unimportant.) As the free space is contiguous, the files are located one after the other (Example (1).)

    If file B is deleted, a second region of 10 blocks of free space is

    created, and the disk becomes fragmented. The file

    system could defragment the disk immediately after a deletion, which

    would incur a severe performance penalty at unpredictable times, but

    in general the empty space is simply left there, marked in a table as

    available for later use, then used again as needed[2] (Example (2).)

    Now if a new file F requires 7 blocks of space, it can be placed into the first 7 blocks of the space formerly holding the file B, and the 3 blocks

    following it will remain available (Example (3).) If another new file G is

    added, and needs only three blocks, it could then occupy the space

    after F and before C (Example (4).)

    [Figure: File system fragmentation, Examples (1)–(5) (Wikipedia: File_system_fragmentation.svg)]

    If subsequently F needs to be expanded, since the space immediately

    following it is occupied, there are three options: (1) add a new block

    somewhere else and indicate that F has a second extent, (2) move files

    in the way of the expansion elsewhere, to allow F to remain

    contiguous; or (3) move file F so it can be one contiguous file of the

    new, larger size. The second option is probably impractical for

    performance reasons, as is the third when the file is very large. Indeed

    the third option is impossible when there is no single contiguous free

    space large enough to hold the new file. Thus the usual practice is

    simply to create an extent somewhere else and chain the new extent

    onto the old one (Example (5).)

    Material added to the end of file F would be part of the same extent.

    But if there is so much material that no room is available after the last extent, then another extent would have to be created, and so on, and

    so on. Eventually the file system has free segments in many places

    and some files may be spread over many extents. Access time for

    those files (or for all files) may become excessively long.
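    The scenario above can be reproduced with a toy first-fit block allocator. The 60-block disk size and the one-character-per-block model are assumptions made purely for illustration.

```java
import java.util.Arrays;

// A toy first-fit allocator reproducing the example: files A–E of 10
// blocks each, delete B, then add F (7 blocks) and G (3 blocks).
public class ToyAllocator {
    private final char[] disk;

    public ToyAllocator(int blocks) {
        disk = new char[blocks];
        Arrays.fill(disk, '.');          // '.' marks a free block
    }

    // First-fit allocation: a file may end up split over several
    // fragments if no single free run is large enough.
    public void allocate(char name, int count) {
        int remaining = count;
        for (int i = 0; i < disk.length && remaining > 0; i++) {
            if (disk[i] == '.') { disk[i] = name; remaining--; }
        }
        if (remaining > 0) throw new IllegalStateException("disk full");
    }

    public void delete(char name) {
        for (int i = 0; i < disk.length; i++)
            if (disk[i] == name) disk[i] = '.';
    }

    // Number of contiguous runs occupied by 'name' (1 = unfragmented).
    public int fragmentsOf(char name) {
        int runs = 0;
        for (int i = 0; i < disk.length; i++)
            if (disk[i] == name && (i == 0 || disk[i - 1] != name)) runs++;
        return runs;
    }

    public String layout() { return new String(disk); }
}
```

    Growing file F after G has filled the gap behind it forces the allocator to place the new blocks elsewhere, producing the second extent the text describes.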

    To summarize, factors that typically cause or facilitate fragmentation

    include:

    low free space.

    frequent deletion, truncation or extension of files.

    overuse of sparse files.

    Performance implications

    File system fragmentation is projected to become more problematic

    with newer hardware due to the increasing disparity between

    sequential access speed and rotational delay (and to a lesser extent

    seek time), of consumer-grade hard disks,[3] which file systems are

    usually placed on. Thus, fragmentation is an important problem in recent file system research and design. The containment of

    fragmentation not only depends on the on-disk format of the file

    system, but also heavily on its implementation.[4]

    In simple file system benchmarks, the fragmentation factor is often

    omitted, as realistic aging and fragmentation are difficult to model.


    Rather, for simplicity of comparison, file system benchmarks are often

    run on empty file systems, and unsurprisingly, the results may vary

    heavily from real-life access patterns.[5]

    Types of fragmentation

    File system fragmentation may occur on several levels:

    Fragmentation within individual files and their metadata.

    Free space fragmentation, making it increasingly difficult to lay out

    new files contiguously.

    The decrease of locality of reference between separate, but related

    files.

    File fragmentation

    Individual file fragmentation occurs when a single file has been broken

    into multiple pieces (called extents on extent-based file systems).

    While disk file systems attempt to keep individual files contiguous, this

    is not often possible without significant performance penalties. File

    system check and defragmentation tools typically only account for file

    fragmentation in their "fragmentation percentage" statistic.
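    The "fragmentation percentage" statistic mentioned above can be sketched as the share of files occupying more than one extent. The per-file extent counts here are illustrative inputs, not values measured from a real volume.

```java
// A sketch of a file-level fragmentation statistic: the percentage of
// files stored in more than one extent (piece).
public class FragmentationStats {
    // extentsPerFile[i] = number of extents occupied by file i.
    public static double fragmentationPercentage(int[] extentsPerFile) {
        if (extentsPerFile.length == 0) return 0.0;
        int fragmented = 0;
        for (int extents : extentsPerFile)
            if (extents > 1) fragmented++;        // more than one piece
        return 100.0 * fragmented / extentsPerFile.length;
    }
}
```

    Real tools obtain the extent counts from the file system's own metadata; how that is queried is operating-system specific and outside this sketch.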

    Free space fragmentation

    Free (unallocated) space fragmentation occurs when there are several unused areas of the file system where new files or metadata can be

    written to. Unwanted free space fragmentation is generally caused by

    deletion or truncation of files, but file systems may also intentionally

    insert fragments ("bubbles") of free space in order to facilitate

    extending nearby files (see preemptive techniques below).

    File scattering

    File segmentation, also called related-file fragmentation, or application-

    level (file) fragmentation, refers to the lack of locality of reference (within the storing medium) between related files (see file sequences

    for more detail). Unlike the previous two types of fragmentation, file

    scattering is a much more vague concept, as it heavily depends on the

    access pattern of specific applications. This also makes objectively

    measuring or estimating it very difficult. However, arguably, it is the


    most critical type of fragmentation, as studies have found that the

    most frequently accessed files tend to be small compared to available

    disk throughput per second.[6]

    To avoid related file fragmentation and improve locality of reference (in

    this case called file contiguity), assumptions about the operation of

    applications have to be made. A very frequent assumption made is

    that it is worthwhile to keep smaller files within a single directory

    together, and lay them out in the natural file system order. While it is

    often a reasonable assumption, it does not always hold. For example,

    an application might read several different files, perhaps in different

    directories, in the exact same order they were written. Thus, a file

    system that simply orders all writes successively, might work faster for

    the given application.

    Purpose:

    Techniques for mitigating fragmentation

    Several techniques have been developed to fight fragmentation. They can

    usually be classified into two categories: preemptive and retroactive. Due to

    the difficulty of predicting access patterns, these techniques are most often

    heuristic in nature, and may degrade performance under unexpected

    workloads.

    Preemptive techniques

    Preemptive techniques attempt to keep fragmentation at a minimum

    at the time data is being written on the disk. The simplest of such is,

    perhaps, appending data to an existing fragment in place where

    possible, instead of allocating new blocks to a new fragment.

    Many of today's file systems attempt to preallocate longer chunks, or

    chunks from different free space fragments, called extents, to files that

    are actively appended to. This mainly avoids file fragmentation when

    several files are concurrently being appended to, thus preventing them

    from becoming excessively intertwined.[4]

    A relatively recent technique is delayed allocation in XFS and ZFS; the

    same technique is also called allocate-on-flush in reiser4 and ext4. This

    means that when the file system is being written to, file system blocks


    are reserved, but the locations of specific files are not laid down yet.

    Later, when the file system is forced to flush changes as a result of

    memory pressure or a transaction commit, the allocator will have much

    better knowledge of the files' characteristics. Most file systems with

    this approach try to flush files in a single directory contiguously.

    Assuming that multiple reads from a single directory are common,

    locality of reference is improved.[7] Reiser4 also orders the layout of

    files according to the directory hash table, so that when files are being

    accessed in the natural file system order (as dictated by readdir), they

    are always read sequentially.[8]

    BitTorrent and other peer-to-peer filesharing applications attempt to

    limit fragmentation through features that allocate the full space

    needed for a file when initiating downloads.[9]

    Retroactive techniques

    Retroactive techniques attempt to reduce fragmentation, or the

    negative effects of fragmentation, after it has occurred. Many file

    systems provide defragmentation tools, which attempt to reorder

    fragments of files, and sometimes also decrease their scattering (i.e.

    improve their contiguity, or locality of reference) by keeping either

    smaller files in directories, or directory trees, or even file sequences

    close to each other on the disk.

    The HFS Plus file system transparently defragments files that are less

    than 20 MiB in size and are broken into 8 or more fragments, when the

    file is being opened.[10]
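    The HFS Plus rule quoted above (files under 20 MiB that are broken into 8 or more fragments are defragmented when opened) can be written as a simple predicate; the class and method names are hypothetical.

```java
// The HFS Plus on-open auto-defragmentation criterion, as a predicate.
// Thresholds come from the text above; the names are illustrative.
public class HfsPlusRule {
    private static final long TWENTY_MIB = 20L * 1024 * 1024;

    public static boolean shouldDefragmentOnOpen(long sizeBytes, int fragments) {
        return sizeBytes < TWENTY_MIB && fragments >= 8;
    }
}
```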

    Stateless techniques

    The (now defunct) Commodore Amiga SFS (Smart File System)

    defragments itself while the filesystem is in use. The defragmentation

    process is almost completely stateless (apart from the location it is

    working on), which means it can be stopped and started instantly.

    During defragmentation, data integrity is ensured for both meta data

    and normal data.

    The purpose of the system is:

    To allocate contiguous blocks of memory for the fragmented files in the

    hard disk.

    To improve the performance of the system.

    Easy access to the files in the hard disk.

    Scope:


    In previous versions of the Disk Defragmenter in the Windows operating

    system, only the administrator could defragment the hard disk. This was

    a problem for normal users, so the proposed system solves this

    problem. The proposed system can be used by all users on systems where the

    software is installed. File Defrag is a useful utility that performs file-based

    defragmentation. Most defragmentation tools work on an entire disk or

    individual drives. File Defrag is different as it allows you, using an

    Explorer style screen, to see defragmentation of every file and folder

    on each of your disks. Each fragmented file or folder can be

    individually defragmented. It will also allow you to defragment what

    you want file by file or folder by folder rather than being forced to

    defragment a whole drive as with conventional tools.

    File Defrag employs an innovative technique to reduce future

    fragmentation by moving file blocks into fragmented free space on

    the drive. This reduces the chances of new files themselves becoming

    fragmented and maximises the amount of contiguous free space on

    the drive. Works with FAT, FAT32 and NTFS disk formats, as well as

    Windows compressed files.

    Objective:

    The objective of the project is to increase the performance of the system and keep the file systems healthy. To increase the

    performance of the system, the project team has analysed the

    following objectives of the system:

    To analyze and defragment the fragmented files in the hard disk.

    To integrate the free space available in between the fragmented files.

    To ease access to the files in the hard disk, which finally improves the performance of the system.

    Existing System:

    There is an inbuilt Disk Defragmenter utility provided by the Windows

    operating system. The Disk Defragmenter utility is designed to reorganize

    noncontiguous files into contiguous files and optimize their placement on the

    hard drive for increased reliability and performance.


    Disk Defragmenter can be opened in a number of different ways. The most common methods are listed below.

    Start | All Programs | Accessories | System Tools | Disk Defragmenter

    Start | Run | and type dfrg.msc in the Open line. Click OK

    Start | Administrative Tools | Computer Management. Expand Storage and select Disk Defragmenter.

    Proposed System: This system is a Java application which identifies the fragmented files

    (non contiguous) in the hard disk and reorganizes memory contiguously for

    them. It also collects all the free space available in between the fragmented

    files and integrates them together to make free memory blocks. As the files


    are defragmented, it speeds up file access for the user and therefore

    improves the performance of the system.

    Module Description: The Defragmented File System is developed using the following modules. These modules enable the user to

    complete the disk clean-up tasks and increase the proficiency of the system.

    These are the four modules:

    | Analyzer | | View Report | | Defragmenter | | Stop |

    Analyzer: Analyzes a particular drive and sends the report to the View Report

    module, allowing you to defragment selected files and directories.

    Regular defragmentation increases the overall performance of your system

    dramatically. The key advantage of the Rapid File Defragmentor is the ability

    to group files and folders into profiles and defragment only those selected.

    View report: Lists all fragmented and defragmented files on a particular

    drive. This utility gives a vivid picture of the hard disk, with a list of the

    files which are fragmented and those which are not. A detailed report is

    generated to view the fragmented files and those which require defragmentation.

    Defragmenter: This action allocates contiguous memory for the fragmented

    files, through which the performance of the system is improved. By securely

    repacking your hard disk's fragmented data back together, the operation of

    your hard disks can be streamlined to run with lightning efficiency. It is

    designed for fast optimization of today's modern hard disks.

    Stop: Stops the process of defragmentation if necessary.
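    A hedged sketch of how the Analyzer module might scan a drive and hand a report to the View Report module. Real fragmentation data would require OS-level APIs, so a per-file size listing stands in for it here; the class and method names are assumptions, not the project's actual code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Sketch of the Analyzer module: walk a drive (here, any directory)
// and produce a report mapping each regular file to its size in bytes.
public class Analyzer {
    public static Map<String, Long> analyze(Path drive) throws IOException {
        try (Stream<Path> files = Files.walk(drive)) {
            return files.filter(Files::isRegularFile)
                        .collect(Collectors.toMap(
                            p -> drive.relativize(p).toString(),
                            p -> {
                                // -1 marks files whose size could not be read
                                try { return Files.size(p); }
                                catch (IOException e) { return -1L; }
                            }));
        }
    }
}
```

    In the full application, the View Report module would render this map in the UI, and the Defragmenter would act on the files it lists.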

    Feasibility Study:

    This application is feasible to deploy in any environment. It is easy to

    deploy and easy to run in any Windows environment. This project is feasible

    for any organization because the cost of the project is less than the benefit

    gained from using the application. This application can be used for all client systems and


    servers. The feasibility study of the Defragmented File System emphasizes

    the healthy status of the systems.

    An important outcome of the preliminary investigation is the

    determination that the system requested is feasible. It is proposed to solve

    the problem by developing a computer-based information system. Since the

    data is voluminous and involves a lot of overhead, it is a good candidate

    for computerization. The proposed system is feasible in the following aspects.

    Technical Feasibility:

    Technical feasibility measures whether the current equipment,

    existing software technologies and available personnel are sufficient for the

    proposed system. The proposed system is technically feasible, since the

    equipment, software technology and available personnel in the organization

    are well suited to the proposed system.

    Economic Feasibility:

    Economic feasibility measures whether the proposed system has

    sufficient benefits from an economic point of view. Since the necessary

    hardware and software are available in the organization, there is no need to

    procure and install new hardware and software; thus, the initial investment in

    hardware and software is nothing. There is no need for extra personnel for the

    proposed system. Hence, the proposed system is economically feasible.

    Operational Feasibility:

    Operational feasibility is measured by the usage of the system after

    implementation and by resistance from the users. The users are extending

    support and there is no resistance to change; hence, it is

    encouraging to undertake a detailed system analysis. It is a measure of how

    well a proposed system solves the problems, takes advantage of the

    opportunities identified during scope definition, and satisfies the

    requirements identified in the requirements analysis phase of system

    development.

    Schedule feasibility


    A project will fail if it takes too long to be completed before it is useful.

    Typically this means estimating how long the system will take to develop, and

    if it can be completed in a given time period using some methods like

    payback period. Schedule feasibility is a measure of how reasonable the

    project timetable is. Given our technical expertise, are the project deadlines

    reasonable? Some projects are initiated with specific deadlines. You need to

    determine whether the deadlines are mandatory or desirable.

    Market and real estate feasibility

    Market Feasibility Study typically involves testing geographic locations

    for a real estate development project, and usually involves parcels of real

    estate land. Developers often conduct market studies to determine the best

    location within a jurisdiction, and to test alternative land uses for given

    parcels. Jurisdictions often require developers to complete feasibility studies

    before they will approve a permit application for retail, commercial, industrial,

    manufacturing, housing, office or mixed-use project. Market Feasibility takes

    into account the importance of the business in the selected area.

    Software Requirement Specification:

    Functional Requirements:

    A system requirement that describes an activity or process

    that the system must perform.


    Functional requirements may be calculations, technical

    details, data manipulation and processing and other specific

    functionality that define what a system is supposed to accomplish.

    Behavioral requirements describing all the cases where the system

    uses the functional requirements are captured in use cases.

    Functional requirements are supported by non-functional

    requirements (also known as quality requirements), which impose

    constraints on the design or implementation (such as performance

    requirements, security, or reliability). How a system implements

    functional requirements is detailed in the system design.

    As defined in requirements engineering, functional

    requirements specify particular results of a system. This should be

    contrasted with non-functional requirements which specify overall

    characteristics such as cost and reliability. Functional requirements

    drive the application architecture of a system, while non-functional

    requirements drive the technical architecture of a system.

    Typically, a requirements analyst generates use cases after

    gathering and validating a set of functional requirements. Each use

    case illustrates behavioral scenarios through one or more functional

    requirements.

    Defect Reporter Hardware is a stand-alone application which should

    be deployed on both client and server to function.

    This application should be deployed in a client-and-server

    architecture.

    The application should display the clients in the user interface window

    of the server.

    The application should enable the user to view the hardware

    defects of the client systems.

    If a client is shut down, its representing icon should be

    inactive.


    Once the system is on, the icon should be active.

    The application should reveal not only the hardware defects of the

    client systems but also whether each system is on or shut down.

    Process

    A typical functional requirement will contain a unique name

    and number, a brief summary, and a rationale. This information is

    used to help the reader understand why the requirement is needed,

    and to track the requirement through the development of the

    system.

    The crux of the requirement is the description of the required

    behavior, which must be clear and readable. The described behavior

    may come from organizational or business rules, or it may be

    discovered through elicitation sessions with users, stakeholders, and

    other experts within the organization. Many requirements may be

    uncovered during the use case development. When this happens,

    the requirements analyst may create a placeholder requirement

    with a name and summary, and research the details later, to be

    filled in when they are better known.

    Non Functional Requirements:

    In systems engineering and requirements engineering, a non-

    functional requirement is a requirement that specifies criteria that

    can be used to judge the operation of a system, rather than specific

    behaviors. This should be contrasted with functional requirements

    that define specific behavior or functions.


    In general, functional requirements define what a system is

    supposed to do whereas non-functional requirements define how a

    system is supposed to be. Non-functional requirements are often

    called qualities of a system. Other terms for non-functional

    requirements are "constraints", "quality attributes", "quality goals"

    and "quality of service requirements". Qualities, that is, non-functional requirements, can be divided into two main categories:

    Execution qualities, such as security and usability, which are

    observable at run time. Evolution qualities, such as testability,

    maintainability, extensibility and scalability, which are embodied in

    the static structure of the software system. Defect Reporter

    Hardware is tested with white box testing and black box testing.

    The test reports reveal that the software is working properly. The

    defect reporter application should be developed with user interface screens.

    The screens should clearly mention the functionalities of the project.

    The icons designed in the defect reporter should navigate the

    user to the next-level options. The screens should be designed with eye-

    friendly colors and with clear information, so as to be

    understandable to the user.

The File Defragmentation system provides users with a safe and secure reporting system that reveals the hardware problems of the client systems.

The File Defragmentation system is robust software that captures the functionality of the operating system's drivers and reveals problems in the hardware attached to the clients.

The File Defragmentation system is platform-compatible because it is developed with Java technologies.
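The Split feature described in the abstract can be sketched as block-wise copying. The class below is a minimal illustration, assuming a fixed fragment size in bytes and a hypothetical ".partN" naming scheme; it is not the project's actual code.

```java
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Illustrative sketch only: block-wise splitting of a source file into
// fixed-size fragments, in the spirit of the Split feature described above.
public class FileSplitter {

    // Number of fragments a split will produce (ceiling division).
    public static long fragmentCount(long fileSizeBytes, long fragmentSizeBytes) {
        return (fileSizeBytes + fragmentSizeBytes - 1) / fragmentSizeBytes;
    }

    // Copy 'source' into numbered fragment files of at most fragmentSize bytes.
    public static void split(File source, int fragmentSize) throws IOException {
        byte[] buffer = new byte[fragmentSize];
        try (InputStream in = new BufferedInputStream(new FileInputStream(source))) {
            int part = 0;
            int read;
            // readNBytes keeps reading until the buffer is full or EOF is reached.
            while ((read = in.readNBytes(buffer, 0, fragmentSize)) > 0) {
                File fragment = new File(source.getPath() + ".part" + part++);
                try (OutputStream out = new FileOutputStream(fragment)) {
                    out.write(buffer, 0, read);
                }
            }
        }
    }
}
```

Rejoining is the mirror operation: concatenating the ".partN" files in sequence order back into one target file.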

Maintainability of Defect Reporter Hardware

Maintainability is the ease with which the software can be modified to:

correct defects,

meet new requirements,

make future maintenance easier, or

cope with a changed environment;

these activities are known as software maintenance (cf. ISO 9126).

The Maintainability Index is calculated with certain formulae from lines-of-code measures, McCabe measures and Halstead measures. Measuring and tracking maintainability is intended to help reduce or reverse a system's tendency toward "code entropy" or degraded integrity, and to indicate when it becomes cheaper and less risky to rewrite the code instead of changing it.
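One widely cited variant of the Maintainability Index combines the three families of measures named above; the coefficients below come from that common variant and differ between tools, so the sketch is illustrative only.

```java
// One widely cited variant of the Maintainability Index:
// MI = 171 - 5.2*ln(Halstead volume) - 0.23*(cyclomatic complexity)
//      - 16.2*ln(lines of code).
// Coefficients differ between tools; this is illustrative only.
public class MaintainabilityIndex {
    public static double compute(double halsteadVolume,
                                 double cyclomaticComplexity,
                                 double linesOfCode) {
        return 171.0
                - 5.2 * Math.log(halsteadVolume)      // Halstead measure
                - 0.23 * cyclomaticComplexity         // McCabe measure
                - 16.2 * Math.log(linesOfCode);       // lines-of-code measure
    }
}
```

Higher values indicate more maintainable code; the index falls as volume, complexity, or size grows.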

The Defragmented File System is highly accessible. Accessibility is a

    general term used to describe the degree to which a product (e.g.,

    device, service, environment) is accessible by as many people as

    possible. Accessibility can be viewed as the "ability to access" the

    functionality, and possible benefit, of some system or entity.

    Accessibility is often used to focus on people with disabilities and their

    right of access to entities, often through use of assistive technology.

    Several definitions of accessibility refer directly to access-based

    individual rights laws and regulations. Products or services designed to

meet these regulations are often termed Easy Access or Accessible.

    Accessibility of Defragmented File System is not to be confused with

usability, which is used to describe the extent to which a product (e.g.,

    device, service, environment) can be used by specified users to

    achieve specified goals with effectiveness, efficiency and satisfaction

    in a specified context of use.

Accessibility of the Defragmented File System is strongly related to universal

    design when the approach involves "direct access." This is about

    making things accessible to all people (whether they have a disability

    or not). However, products marketed as having benefited from a

    Universal Design process are often actually the same devices

customized specifically for use by people with disabilities. An alternative is to provide "indirect access" by having the


entity support the use of a person's assistive technology to achieve access. The software application should be user-friendly.


The application should not strain the user's eyesight, so it is better to use soft colors such as light blue.

Buttons used in the application should be named with appropriate descriptions.

    RAPID APPLICATION DEVELOPMENT (RAD) MODEL:


The RAD model is a linear sequential software development process

    that emphasizes an extremely short development cycle. The RAD model is a

    "high speed" adaptation of the linear sequential model in which rapid

    development is achieved by using a component-based construction

    approach. Used primarily for information systems applications, the RAD

    approach encompasses the following phases:

    1. Business modeling:

    The information flow among business functions is modeled in a way

    that answers the following questions:

    What information drives the business process?

    What information is generated?

    Who generates it?

    Where does the information go?

    Who processes it?

    2. Data modeling:

    The information flow defined as part of the business modeling phase is

    refined into a set of data objects that are needed to support the business.

    The characteristic (called attributes) of each object is identified and the

    relationships between these objects are defined.
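As an illustration of this step, two hypothetical data objects for a file-splitting system might look as follows; the class and attribute names are assumptions for illustration, not taken from the project.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical data objects: a source file and its fragments. Attributes are
// plain fields; the List expresses a one-to-many relationship.
class SourceFile {
    String path;           // attribute
    long sizeInBytes;      // attribute
    final List<Fragment> fragments = new ArrayList<>(); // 1 SourceFile -> * Fragment

    SourceFile(String path, long sizeInBytes) {
        this.path = path;
        this.sizeInBytes = sizeInBytes;
    }
}

class Fragment {
    int sequenceNumber;    // attribute: position used when rejoining
    long sizeInBytes;      // attribute

    Fragment(int sequenceNumber, long sizeInBytes) {
        this.sequenceNumber = sequenceNumber;
        this.sizeInBytes = sizeInBytes;
    }
}
```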

    3. Process modeling:

    The data objects defined in the data-modeling phase are transformed

    to achieve the information flow necessary to implement a business function.

Processing descriptions are created for adding, modifying, deleting, or

    retrieving a data object.

    4. Application generation:

The RAD model assumes the use of RAD tools such as VB, VC++,

Delphi, etc., rather than creating software using conventional third-generation


    programming languages. The RAD model works to reuse existing program

    components (when possible) or create reusable components (when

    necessary). In all cases, automated tools are used to facilitate construction of

    the software.

    5. Testing and turnover:

    Since the RAD process emphasizes reuse, many of the program

    components have already been tested. This minimizes the testing and

    development time.

RAD ARCHITECTURE (fig. 4)

    SOFTWARE REQUIREMENT SPECIFICATION(SRS):

    Systematic requirements analysis is also known as requirements

    engineering. It is sometimes referred to loosely by names such as

requirements gathering, requirements capture, or requirements specification.

    The term requirement analysis can also be applied specifically to the


    analysis proper (as opposed to elicitation or documentation of the

    requirements).

    Requirements must be measurable, testable, related to identified

    business needs or opportunities, and defined to a level of detail sufficient for

    system design. There are two types of requirements.

    Software Requirements

    Hardware Requirements

    SOFTWARE REQUIREMENTS

This project is a stand-alone application; hence we used JSP, Servlets, and JavaScript.

    Operating system : Windows 98 / XP

    Language : JAVA, J2EE

    Server : IBM Websphere

    4.2 HARDWARE REQUIREMENTS

    Processor : Pentium 4

    RAM : 512 MB

    Hard Disk : 40 GB

    UNIFIED MODELLING LANGUAGE DIAGRAMS:

    UML stands for Unified Modeling Language. This object-oriented system of

notation has evolved from the work of Grady Booch, James Rumbaugh, Ivar Jacobson, and the Rational Software Corporation. These renowned computer scientists fused their respective technologies into a single, standardized model. Today, UML is accepted by the Object Management Group (OMG) as the standard for modeling object-oriented programs. We used the Microsoft product MS Visio to develop the UML diagrams.

    An overview of the UML:

    The UML is a language for


    Visualizing

    Specifying

    Constructing

    Documenting

    the artifacts of a software-intensive system.

    The UML is a Language:

    A language provides a vocabulary and the rules for combining words in that

    vocabulary for the purpose of communication. A modeling language is a language whose

    vocabulary and rules focus on the conceptual and physical representation of a system. A

    modeling language such as the UML is thus a standard language for software blueprints.

Modeling yields an understanding of a system. No one model is ever sufficient; rather, you often need multiple models that are connected to one another in order to understand anything but the most trivial system.

    Building Blocks of the UML:

    The vocabulary of the UML encompasses three kinds of building blocks:

    1. Things

    2. Relationships

    3. Diagrams

    Things are the abstractions that are first-class citizens in a model; relationships tie these

things together; diagrams group interesting collections of things.

    Things in the UML:

    There are four kinds of things in the UML:

    1.1 Structural things

    1.2 Behavioral things

    1.3 Grouping things

    1.4 Annotational things

    These things are the basic object-oriented building blocks of the UML. You use them to

    write well-formed models.


    1.1 Structural things: Structural things are the nouns of UML model.

    These are the mostly static parts of a model, representing elements that are

    either conceptual or physical. In all, there are seven kinds of structural things.

Class: A description of a set of objects that share the same attributes, operations, relationships, and semantics. A class implements one or more interfaces. Graphically, a class is rendered as a rectangle, usually including its name, attributes, and operations.

Interface: A collection of operations that specify a service (for a resource or an action) of a class or component. It describes the externally visible behaviour of that element. Graphically, an interface is rendered as a circle together with its name, for example ISpelling.

Collaboration: Defines an interaction among two or more classes: a society of roles and other elements that provide cooperative behaviour and capture both structural and behavioural dimensions. The UML uses "pattern" as a near-synonym. Graphically, a collaboration is rendered as an ellipse with dashed lines, usually including only its name.

Use Case: A sequence of actions that produces an observable result for a specific actor; a set of scenarios tied together by a common user goal. A use case provides a structure for behavioural things and is realized through a collaboration, such as Chain of Responsibility (usually realized by a set of actors and the system to be built). Graphically, a use case is rendered as an ellipse with solid lines, usually including only its name.

Example: a use case named Place Order.

Active Class: A special class whose objects own one or more processes or threads and can therefore initiate control activity. An active class is just like a class except that its objects represent elements whose behaviour is concurrent with other elements. Graphically, an active class is rendered just like a class, but with heavy lines, usually including its name, attributes, and operations.

Component: A component is a physical and replaceable part of a system. Components can be packaged logically. A component conforms to a set of interfaces, provides the realization of an interface, and represents a physical module of code. Graphically, a component is rendered as a rectangle with tabs, usually including only its name.


Node: A node is a physical element that exists at run time and represents a computational resource, generally having memory and processing power. A set of components may reside on a node and may also migrate from node to node. Graphically, a node is rendered as a cube, usually including only its name.

    1.2 Behavioral Things:

Behavioral things are the verbs of UML models: the dynamic parts, representing behavior over time and space, and usually connected to structural things. There are two primary kinds of behavioral things:

Interaction: An interaction is a behaviour of a set of objects comprising a set of message exchanges within a particular context

    to accomplish a specific purpose. An interaction involves a number of

    other elements including messages, action sequences, and links.

    Graphically a message is rendered as a directed line, almost always

    including the name of its operation.


State Machine: Behaviour that specifies the sequences of states an object or an interaction goes through during its lifetime in

    response to events, together with its responses to those events.

    Graphically, a state is rendered as a rounded rectangle, usually

    including its name and its sub states.

1.3 Grouping Things: Grouping things are the organizational parts of

    the UML model.

    Packages

    - one primary kind of grouping.

    - General purpose mechanism for organizing elements into groups.

    - Purely conceptual; only exists at development time.

    - Contains behavioral and structural things.

    - Can be nested.

    - Variations of packages are: Frameworks, models, & subsystems.


    Annotational Things:

    Annotational things are the explanatory parts of UML models.

    Comments regarding other UML elements (usually called adornments in UML)

Note: A note is the one primary annotational thing in UML, best expressed in informal or formal text. A note is simply a symbol for rendering constraints and comments attached to an element or a collection of elements. Graphically, a note is rendered as a rectangle with a dog-eared corner, together with a textual or graphical comment.

    Relationships in the UML:

    There are four kinds of relationships in the UML:

    2.1 Dependency

    2.2 Association

    2.3 Generalization

2.4 Realization

2.1 Dependency: A semantic relationship between two things in which a change to one thing (the independent thing) may affect the semantics of the other thing (the dependent thing).

    Graphically, a dependency is rendered as a dashed line, possibly directed,

    and occasionally including a label.


    2.2 Association:

    An association is a structural relationship that describes a set of links,

a link being a connection among objects. Aggregation is a special kind of association, representing a structural relationship between a whole and its

    parts. Graphically, an association is rendered as a solid line, possibly directed,

    occasionally including a label, and often containing other adornments.

    2.3 Generalization:

    A specialization/generalization relationship in which objects of the

    specialized element (the child) are more specific than the objects of the

    generalized element. Graphically, a generalization relationship is rendered as

    a solid line with a hollow arrowhead pointing to the parent.

    2.4 Realization:

    A semantic relationship between two elements, wherein one element

    guarantees to carry out what is expected by the other element. Graphically, a

    realization is rendered as a cross between a generalization and a dependency

    relationship.
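In Java terms, generalization corresponds to `extends` and realization to `implements`. The sketch below reuses the ISpelling interface shown earlier as a structural thing; the other class names are illustrative.

```java
// Realization: a class guarantees to carry out what an interface expects.
interface ISpelling {
    boolean check(String word);
}

// Generalization: the child is a more specific kind of the parent.
class Document {
    String kind() { return "document"; }
}

class SpellCheckedDocument extends Document implements ISpelling {
    @Override
    String kind() { return "spell-checked document"; }  // specializes the parent

    @Override
    public boolean check(String word) {                  // realizes ISpelling
        return word != null && !word.isEmpty();
    }
}
```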

    USE CASE DIAGRAM:

    A use case diagram shows a set of use cases and actors (a special kind of

class) and their relationships. Use case diagrams address the static use case


    view of a system. These diagrams are especially important in organizing and

modeling the behaviors of a system. Use case diagrams model the

    functionality of a system using actors and use cases. Use cases are services

    or functions provided by the system to its users.

    Basic Use Case Diagram Symbols and Notations

    System

    Draw your system's boundaries using a rectangle that contains use cases.

    Place actors outside the system's boundaries.

    Use Case

Draw use cases using ovals. Label the ovals with verbs that represent the

    system's functions.


    Actors

    Actors are the users of a system. When one system is the actor of another

    system, label the actor system with the actor stereotype.

    Relationships

    Illustrate relationships between an actor and a use case with a simple line.

    For relationships among use cases, use arrows labeled either "uses" or

    "extends." A "uses" relationship indicates that one use case is needed by

    another in order to perform a task. An "extends" relationship indicates

    alternative options under a certain use case.


Testing


    Testing is the process of detecting errors. Testing performs a very

    critical role for quality assurance and for ensuring the reliability of

    software. The results of testing are used later on during maintenance

also. In the test phase, various test cases intended to find the bugs and loopholes that exist in the software are designed. During testing, the program to be tested is executed with a set of test cases, and the output of the program is evaluated to determine whether the program is performing as expected.

Often when we test our program, the test cases are treated as throw-away cases: after testing is complete, the test cases and their outcomes are discarded. The main objective of testing is to find errors, if any, especially errors not uncovered until that moment. Testing cannot show the absence of defects; it can only show the defects that are present, so it is worth saving a set of interesting test cases along with their expected output for future use.

Software testing is a crucial element and represents the ultimate review of specification, design, and coding. There are two broad approaches: black-box testing and glass-box (white-box) testing. White-box testing is predicated on a close examination of procedural detail. The software is tested using the control-structure testing method under white-box techniques. Two tests are done under this approach: condition testing, to check for Boolean operator errors, Boolean variable errors, Boolean parenthesis errors, etc., and loop testing, to check simple loops and nested loops.
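Condition testing and loop testing can be illustrated on a small hypothetical method: the tests exercise the Boolean condition on both outcomes and the loop with zero, one, and many iterations.

```java
// Hypothetical method under white-box test: counts fragments whose size
// exceeds a threshold. The 'if' is the condition under test; the 'for'
// is the loop under test.
public class ControlStructureDemo {
    public static int countLargeFragments(long[] sizesInBytes, long thresholdBytes) {
        int count = 0;
        for (long size : sizesInBytes) {      // loop testing: 0, 1, n passes
            if (size > thresholdBytes) {      // condition testing: true/false branches
                count++;
            }
        }
        return count;
    }
}
```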

Faults can occur during any phase in the software development cycle. Verification is performed on the output of each phase, but some faults are likely to remain undetected by these methods. These faults will eventually be reflected in the code. Testing is usually relied upon to detect these faults, in addition to the faults introduced during the coding phase. For this, different levels of testing are used, which perform different tasks and aim to test different aspects of the system.


    Psychology of Testing

    The aim of testing is often to demonstrate that a program works by showing

    that it has no errors. The basic purpose of testing phase is to detect the

errors that may be present in the program. Hence one should not start testing with the intent of showing that a program works; rather, the intent should be to show that a program doesn't work. Testing is the process of executing a program with the intent of finding errors.

    Testing Objectives

The main objective of testing is to uncover a host of errors, systematically and with minimum effort and time. Stating formally, we can say:

    Testing is a process of executing a program with the intent of

    finding an error.

    A successful test is one that uncovers an as yet undiscovered error.

    A good test case is one that has a high probability of finding error, if

    it exists.

If no errors are found, either the tests were inadequate to detect possibly present errors, or the software more or less conforms to the quality and reliability standards.

    Levels of Testing

    In order to uncover the errors present in different phases we have the

    concept of levels of testing. The basic levels of testing are as shown

    below


(Figure: levels of testing against client needs, requirements, design, and code.)

    System Testing

    The philosophy behind testing is to find errors. Test cases are devised

    with this in mind. A strategy employed for system testing is code

    testing.

    Code Testing:

    This strategy examines the logic of the program. To follow this method

    we developed some test data that resulted in executing every

    instruction in the program and module i.e. every path is tested.

Systems are not designed as entire units, nor are they tested as single systems. To ensure that the coding is perfect, two types of testing are performed on all systems.

Types of Testing

Unit Testing and Link Testing

    Unit Testing

Unit testing focuses verification effort on the smallest unit of software, i.e. the module. Using the detailed design and the process specifications, testing is done to uncover errors within the boundary of the module. All modules must be successful in the unit test before integration testing begins.


In this project each service can be thought of as a module. There are several modules, such as Login, HWAdmin, MasterAdmin, Normal User, and PManager. Each module has been tested by giving different sets of inputs, both during and after its development, so that each module works without any error. The inputs are validated when accepted from the user.

In this application the developer tests the programs as they are built up into a system. Software units in a system are the modules and routines that are assembled and integrated to perform a specific function. Unit testing is first done on the modules, independently of one another, to locate errors. This enables errors to be detected early, so errors resulting from interaction between modules are initially avoided.

Unit testing focuses verification effort on the smallest unit of software design, the module. Using the detailed design description, each important control path is tested to uncover errors within the boundary of the module. Unit testing considers the following aspects of a program module:

    Interface

Local data structures

    Boundary data structures

    Independent path

    Error handling path
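These conditions can be demonstrated on a small hypothetical helper, where the public method is the interface, zero is a boundary value, and the thrown exception is the error-handling path; the helper itself is an assumption for illustration, not the project's code.

```java
// Hypothetical helper converting the fragment-size units offered in the
// drop-down list (MB, KB, Bytes) into a byte count.
public class FragmentSizeConverter {
    public static long toBytes(long value, String unit) {
        if (value < 0) {                               // error-handling path
            throw new IllegalArgumentException("size must be non-negative");
        }
        switch (unit) {                                // interface behaviour
            case "Bytes": return value;
            case "KB":    return value * 1024L;
            case "MB":    return value * 1024L * 1024L;
            default:      throw new IllegalArgumentException("unknown unit: " + unit);
        }
    }
}
```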


In this project we have done unit testing. The module interface is tested to ensure that information properly flows into and out of the program unit under test. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in the algorithm's execution. Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing.

    Link Testing

Link testing does not test the software itself but rather the integration of each module into the system. The primary concern is the compatibility of each module. The programmer tests cases where modules are designed with different parameters: length, type, etc.

    Integration Testing

After unit testing we have to perform integration testing. The goal here is to see if the modules can be integrated properly, the emphasis being on testing interfaces between modules. This testing activity can be considered as testing the design, hence the emphasis on testing module interactions.

In this project, integrating all the modules forms the main system. When integrating all the modules, I checked whether the integration affects the working of any of the services by giving different combinations of inputs with which the services ran perfectly before integration.

    System Testing

Here the entire software system is tested. The reference document for this process is the requirements document, and the goal is to see if the software meets its requirements.

Here the entire system has been tested against the requirements of the project, and it is checked whether all requirements of the project have been satisfied or not.


    Acceptance Testing

    Acceptance Test is performed with realistic data of the client to

    demonstrate that the software is working satisfactorily. Testing here is

    focused on external behavior of the system; the internal logic of

    program is not emphasized.

In this project I have collected some data and tested whether the project is working correctly or not.

    Test cases should be selected so that the largest number of attributes

    of an equivalence class is exercised at once. The testing phase is an

    important part of software development. It is the process of finding

    errors and missing operations and also a complete verification to

    determine whether the objectives are met and the user requirements

    are satisfied.

    White Box Testing

    This is a unit testing method where a unit will be taken at a time and tested

    thoroughly at a statement level to find the maximum possible errors. I tested

stepwise every piece of code, taking care that every statement in the code is executed at least once. White-box testing is also called glass-box testing. I have generated a list of test cases and sample data, which is used to check all possible combinations of execution paths through the code at every module level.

    Black Box Testing

This testing method considers a module as a single unit and checks the unit at its interface and its communication with other modules, rather than getting into details at the statement level. Here the module is treated as a black box that takes some input and generates output. Outputs for a given set of input combinations are forwarded to other modules.
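A black-box test of such a module fixes inputs and observes outputs only. The validator below is hypothetical; the tests pick one representative input from each equivalence class (below range, in range, above range), which also illustrates the equivalence-class selection mentioned earlier.

```java
// Hypothetical module under black-box test: accepts or rejects a requested
// fragment size. The internal rule (1 byte .. 1 GB) is hidden from the
// tester, who only observes input/output behaviour.
public class SplitRequestValidator {
    public static boolean isValidFragmentSize(long bytes) {
        return bytes >= 1 && bytes <= 1024L * 1024L * 1024L;
    }
}
```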


    Criteria Satisfied by Test Cases

1) Test cases that reduce, by a count that is greater than one, the number of additional test cases that must be designed to achieve reasonable testing.

    2) Test cases that tell us something about the presence or absence of

    classes of errors, rather than an error associated only with the

    specific test at hand.

    Implementation


    A product software implementation method is a systematically

    structured approach to effectively integrate a software based service

    or component into the workflow of an organizational structure or an

    individual end-user.

This section focuses on the process-modeling side of the implementation of large product software (the complexity differences are explained below), using the implementation of Enterprise Resource Planning systems as the main example.

    Overview

    A product software implementation method is a blueprint to get users

    and/or organizations running with a specific software product. The

    method is a set of rules and views to cope with the most common

    issues that occur when implementing a software product: business

    alignment from the organizational view and acceptance from the

    human view.

The implementation of product software, as the final link in the deployment chain of software production, is a major issue from a financial perspective. It is stated that the implementation of (product) software consumes up to one third of the budget of a software purchase.

    Implementation complexity differences

    The complexity of implementing product software differs on several

    issues. Examples are: the number of end users that will use the

    product software, the effects that the implementation has on changes

    of tasks and responsibilities for the end user, the culture and the

    integrity of the organization where the software is going to be used

    and the budget available for acquiring product software.

    In general, differences are identified on a scale of size (bigger, smaller,

    more, less). An example of the smaller product software is the


implementation of an office package. Although there could be a lot of end users in an organization, the impact on the tasks and responsibilities of the end users will not be too intense, as the daily workflow of the end user does not change significantly. An example of

    larger product software is the implementation of an Enterprise

    Resource Planning system. The implementation requires in-depth

    insights on the architecture of the organization as well as of the

    product itself, before it can be aligned. Next, the usage of an ERP

system involves much more dedication of the end users, as new tasks and responsibilities will be created or shifted.

    Software customization and Business Process Redesign

    Process modeling, used to align product software and organizational

    structures, involves a major issue, when the conclusion is drawn that

    the product software and the organizational structure do not align well

    enough for the software to be implemented. In this case, two

    alternatives are possible: the customization of the software or the


    redesign of the organizational structure, thus the business processes.

Customizing the software actually transforms the product software into

    tailor-made software, as the idea of standardized software no longer

    applies. This may result in loss of support on the software and the need

    to acquire consultancy when issues arise in the usage of the software.


    Customizing however results in a situation where the organizational

    integrity is not adjusted, which puts less pressure on the end users, as

fewer changes or shifts in workflows are required. This fact may

    positively add to the acceptance of any new (product) software

    application used and may thus decrease the implementation time and

    budget on the soft side of the implementation budget.

Redesigning business processes is more liable to cause resistance

    in the usage of product software, as altered business processes will

    alter tasks and responsibilities for the end users of the product

    software. However, while the product software is not altered, better

    support, training and service levels are possible, because the support

    was created for the specific integrity of the software.

    Implementation Frameworks

    The guiding principle versus the profession

    Another issue on the implementation process of product software is the

    choice, or actually the question, to what extent an implementation

    method should be used.

    Implementation methods can on the one hand be used as a guiding

    principle, indicating that the method serves as a global idea about how

    the implementation phase of any project should run. This choice leaves

    more room for situational factors that are not taken into account in the


    chosen method, but will result in ambiguity when questions arise in the

    execution of the implementation process.

    On the other hand methods can be used as a profession, meaning that

the method should be followed strictly and the usage of the method should

    be a profession, instead of a guiding principle. This view is very useful

    if the implementation process is very complex and is very dependent

    on exact and precise acting. Organizational and quality management

    will embrace this view, as a strict usage of any method results in more

    clarity on organizational level. Change management however might

    indicate that more flexibility in an implementation method leaves more

    room for the soft side of implementation processes.


    Implementation frameworks

Apart from implementation methods serving as the set of rules to implement a specific product or service, implementation frameworks serve as the project-managed structure to define the implementation phase in time, budget and quality.

    Several project management methods can serve as a basis to perform

    the implementation method. Since this entry focuses on the

    implementation of product software, the best project management


    methods suitable for supporting the implementation phase are project

    management methods that focus on software and information systems

    itself as well. The applicability of using a framework for implementation

    methods is clarified by the examples of using DSDM and Prince2 as

project management method frameworks.

    DSDM

    The power of DSDM is that the method uses the principles of iteration

    and incremental value, meaning that projects are carried out in

    repeating phases where each phase adds value to the project. In this

    way implementation phases can be carried out incrementally, adding

    value to for example the degree of acceptance, awareness and skills

    within every increment [F. Von Meyenfeldt, Basiskennis

projectmanagement, Academic Service 1999]. Besides the management of change scope, increments are also usable in the

    process modeling scope of implementation phases. Using increments

    can align process models of business architectures and product

    software as adding more detail in every increment of the phase draws

both models closer. The DSDM also has room for phased training, documentation and reviewing. An accompanying figure (not reproduced here) illustrates how implementation phases are supported by the usage of DSDM, focusing on management of change, process modeling and support.

    Prince2

    As DSDM does, the Prince2 method acknowledges implementation as a

    phase within the method. Prince2 consists of a set of processes, of

    which 3 processes are especially meant for implementation. The

    processes of controlling a stage, managing product delivery and

managing stage boundaries enable an implementation process to be detailed with factors such as time and quality. The Prince2 method can be

    carried out iteratively but is also suitable for a straight execution of the

    processes.


    The profits for any implementation process being framed in a project

    management framework are:

    Clarity

An implementation framework allows the process to be detailed with factors such as time, quality, budget and feasibility.

    Iterative, incremental approach

    As explained, the possibility to execute different phases of the

    implementation process iteratively enables the process to be executed

    by incrementally aligning the product to be implemented with the end-

    user (organization).

    Assessments

    Using an embedded method brings the power that the method is

    designed to implement the software product that the method comes

    with. This suggests a less complicated usage of the method and more

    support possibilities. The negative aspect of an embedded method

    obviously is that it can only be used for specific product software.

Engineers and consultants, operating with several software products, could have more use of a general method, to have just one way of working.

    Using a generic method like ERP modeling has the power that the

    method can be used for several ERP systems. Unlike embedded

    methods, the usage of generic methods enables engineers and

    consultants that operate in a company where several ERP systems are

    implemented in customer organizations, to adapt to one specific

    working method, instead of having to acquire skills for several

embedded models. Generic methods, however, have the drawback that

    implementation projects could become too situational, resulting in

    difficulties and complexity in the execution of the modeling process, as

    less support will be available.


    Managing project delivery is essential to avoid the common problems

    of the software solution not working as expected or crashing out due to

    multiple users accessing the system at the same time. The keys to

    project delivery are: successful implementation of the software,

    managing the business change and scaling up the business use

    quickly.

    Successful Implementation

    Successful implementation of the software must be planned carefully.

    In short there are two key options for delivering the software -- big

    bang or phased release:

A "big bang" deployment releases the software to all users at the same time.

A phased deployment releases the software to users over a period of time, for example by department or by geographical location.

The project needs to make a considered decision on the best way to release a

    software solution to the business. Business will often choose a phased

    deployment, consequently reducing project risk because if there is


    some problem the business impact is reduced. In addition, the project

deployment of software includes:

Cleanup of the "test" environment following successful completion of testing

Preparation of project deployment to the business, such as setting up user accounts to access the system and ensuring any lists of values have valid values

Deploying the software to the "production" environment ready for normal business use

A plan and mechanism to back out of the production software deployment if the process goes wrong for some unexpected reason, restoring the business to its pre-deployment state

Some of these ideas have developed from IT Service Management and its discipline of Release Management; for more background read: Release Management: Where to Start? Project management should borrow and evolve good ideas whenever needed.
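The choice between the two release options can be sketched in code. The example below is a hypothetical illustration (the department names, wave size, and the planWaves helper are all invented for this sketch, not part of any deployment tool) of how a phased deployment might divide the business into release waves, so that an early problem affects only the first wave:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a phased release plan: departments (illustrative
// names) are released in waves rather than all at once, reducing the
// business impact if a problem appears in an early wave.
public class PhasedDeploymentPlan {

    public static Map<Integer, List<String>> planWaves(List<String> departments, int waveSize) {
        Map<Integer, List<String>> waves = new LinkedHashMap<>();
        for (int i = 0; i < departments.size(); i++) {
            int wave = i / waveSize + 1; // wave 1 goes first, then wave 2, ...
            waves.computeIfAbsent(wave, w -> new ArrayList<>()).add(departments.get(i));
        }
        return waves;
    }

    public static void main(String[] args) {
        var waves = planWaves(List.of("Finance", "Sales", "HR", "Production", "IT"), 2);
        waves.forEach((w, depts) -> System.out.println("Wave " + w + ": " + depts));
    }
}
```

A "big bang" release is the degenerate case where the wave size equals the number of departments.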

    Managing the Business Change of Project Delivery

    Project deployment of the software to the business units such that they

    are able to use it from a specified date/time is not enough by itself.

    Managing the business change is an essential part of project delivery

and that needs to include:

Building awareness within the business of the software solution through communication

Developing business support and momentum to use the solution through stakeholder engagement

Planning and executing the training plan for business users and administrators

A business plan to exploit the use of the solution and to scale up the number of users

Setting up and operating a customer board to manage the evolution of the solution


    Software Maintenance


    Software maintenance: Software maintenance in software

    engineering is the modification of a software product after delivery to correct

    faults, to improve performance or other attributes, or to adapt the product to

    a modified environment

    Overview

The international standard ISO/IEC 14764 describes the six software maintenance processes as:

The implementation process contains software preparation and transition

    activities, such as the conception and creation of the maintenance plan, the

    preparation for handling problems identified during development, and the

    follow-up on product configuration management.

    The problem and modification analysis process, which is executed once the

    application has become the responsibility of the maintenance group. The

    maintenance programmer must analyze each request, confirm it (by

    reproducing the situation) and check its validity, investigate it and propose a

    solution, document the request and the solution proposal, and, finally, obtain

    all the required authorizations to apply the modifications.

The modification implementation process covers the implementation of the modification itself.

    The process acceptance of the modification, by checking it with the individual

    who submitted the request in order to make sure the modification provided a

    solution. The migration process (platform migration, for example) is

    exceptional, and is not part of daily maintenance tasks. If the software must

    be ported to another platform without any change in functionality, this

    process will be used and a maintenance project team is likely to be assigned

    to this task. Finally, the last maintenance process, also an event which does

    not occur on a daily basis, is the retirement of a piece of software.

    There are a number of processes, activities and practices that are unique to

    maintainers, for example:

    Transition: a controlled and coordinated sequence of activities during which a

    system is transferred progressively from the developer to the maintainer;


    Service Level Agreements (SLAs) and specialized (domain-specific)

    maintenance contracts negotiated by maintainers;

    Modification Request and Problem Report Help Desk: a problem-handling

process used by maintainers to prioritize, document and route the requests

    they receive;

    Modification Request acceptance/rejection: modification request work over a

    certain size/effort/complexity may be rejected by maintainers and rerouted to

    a developer.
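The acceptance/rejection rule above can be sketched as a small routine. Everything here is illustrative: the effort threshold, the class name, and the request identifiers are assumptions for the sketch, not part of any standard help-desk tool:

```java
// Hypothetical sketch: routing a Modification Request (MR) either into the
// maintenance queue or back to development, based on an assumed effort
// threshold. Names and the threshold value are illustrative only.
public class ModificationRequestRouter {

    static final double MAX_MAINTENANCE_EFFORT_DAYS = 5.0; // assumed cut-off

    public static String route(String requestId, double estimatedEffortDays) {
        if (estimatedEffortDays > MAX_MAINTENANCE_EFFORT_DAYS) {
            // Work over a certain size/effort is rejected and rerouted.
            return requestId + " -> rejected by maintenance, rerouted to development";
        }
        return requestId + " -> accepted into maintenance queue";
    }

    public static void main(String[] args) {
        System.out.println(route("MR-101", 2.0));
        System.out.println(route("MR-102", 12.0));
    }
}
```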

    A common perception of maintenance is that it is merely fixing bugs.

    However, studies and surveys over the years have indicated that the

    majority, over 80%, of the maintenance effort is used for non-corrective

actions (Pigosky 1997). This perception is perpetuated by users submitting problem reports that in reality are functionality enhancements to the system.

    Software maintenance and evolution of systems was first addressed by Meir

    M. Lehman in 1969. Over a period of twenty years, his research led to the

    formulation of eight Laws of Evolution (Lehman 1997). Key findings of his

    research include that maintenance is really evolutionary developments and

    that maintenance decisions are aided by understanding what happens to

    systems (and software) over time. Lehman demonstrated that systems

    continue to evolve over time. As they evolve, they grow more complex unless

    some action such as code refactoring is taken to reduce the complexity.

    The key software maintenance issues are both managerial and technical. Key

    management issues are: alignment with customer priorities, staffing, which

    organization does maintenance, estimating costs. Key technical issues are:

    limited understanding, impact analysis, testing, maintainability

    measurement.

    Categories of maintenance in ISO/IEC 14764

    E.B. Swanson initially identified three categories of maintenance: corrective,

    adaptive, and perfective. These have since been updated and ISO/IEC 14764

    presents:


    Corrective maintenance: Reactive modification of a software product

    performed after delivery to correct discovered problems.

    Adaptive maintenance: Modification of a software product performed after

    delivery to keep a software product usable in a changed or changing

    environment.

    Perfective maintenance: Modification of a software product after delivery

    to improve performance or maintainability. A type of maintenance that

    includes reengineering, and is sometimes applied more broadly to include

    enhancement

    Preventive maintenance: Modification of a software product after delivery

    to detect and correct latent faults in the software product before they

become effective faults.

A Preventative Maintenance module can be paired with a work-order module; when combined, the two modules work seamlessly to schedule, time-release, and track preventative maintenance and recurring work orders. Through a simple Windows interface, you can inventory equipment, link specific tasks, and then schedule your preventative maintenance tasks daily, weekly, monthly, quarterly, semi-annually, and yearly. Preventative maintenance work orders are automatically printed out x days prior to the due date for advance notice. Coherent Preventative Maintenance is delivered with many standard reports, which allow you to accurately track your preventative maintenance efforts.

Using corrective maintenance work plans to improve plant reliability

    Preventive maintenance is generally considered to include both condition-

    monitoring and life-extending tasks which are scheduled at regular intervals.

    Some tasks, such as temperature and vibration measurements, must be done

    while the equipment is operating and others, such as internal cleaning, must

    be done while the equipment is shut down.

    There is another, often overlooked, type of preventive maintenance

    inspection which can not be scheduled at regular intervals. These inspections

    can and should be done in conjunction with corrective maintenance.

    Corrective maintenance is defined as maintenance work which involves

    the repair or replacement of components which have failed or broken down.

    For failure modes which lend themselves to condition monitoring, corrective


    maintenance should be the result of a regular inspection which identifies the

    failure in time for corrective maintenance to be planned and scheduled, then

    performed during a routine plant outage.

    When corrective maintenance is done, the equipment should be inspected to

    identify the reason for the failure and to allow action to be taken to eliminate

    or reduce the frequency of future similar failures. These inspections should be

    included in the work plan.

    A good example is the failure of packing in a process pump. Packing can be

    monitored by checking leakage and the location of the gland follower, so

    repacking should not normally be a fixed-time maintenance task. It should be

    done at frequencies which depend on the operating context.

During the process of repacking the pump, there are a number of simple inspections related to packing life which can be performed.

    Many of these inspections would not normally be done on a regular scheduled

    basis and can only be done during repacking. In a well-managed maintenance

    system, inspections that should be done during corrective maintenance for a

    specific failure mode (such as packing failures) should be listed, recorded and

used. So for any work order to repack a pump, a standard check list should be

    attached to or included in the work order as a standard procedure. The

standard should include measurements appropriate to the specific equipment, such as the allowable sleeve wear and the impeller clearance in

    the case of the process pump packing. A check list similar to the one for

    pump packing can be developed for many failure modes for common

    components, such as mechanical drives and hydraulic systems. Integrating

    inspections that directly relate to failures into corrective maintenance work

    plans is a powerful tool to improve plant reliability.
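The idea of attaching a standard, failure-mode-specific check list to each corrective work order can be sketched as follows. The checklist items and failure-mode keys below are made up for illustration; a real system would hold the measurements appropriate to its own equipment:

```java
import java.util.List;
import java.util.Map;

// Sketch (with invented checklist items): a standard inspection list is keyed
// by failure mode and appended to every matching corrective work order.
public class CorrectiveWorkOrderChecklists {

    static final Map<String, List<String>> CHECKLISTS = Map.of(
        "PUMP_PACKING_FAILURE", List.of(
            "Inspect shaft sleeve against allowable wear",   // illustrative item
            "Check impeller clearance against the standard", // illustrative item
            "Record gland follower position"));              // illustrative item

    // Attach the standard checklist (if one exists) to the work-order text.
    public static String buildWorkOrder(String description, String failureMode) {
        StringBuilder order = new StringBuilder(description);
        for (String item : CHECKLISTS.getOrDefault(failureMode, List.of())) {
            order.append("\n  [ ] ").append(item);
        }
        return order.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildWorkOrder("Repack process pump P-104", "PUMP_PACKING_FAILURE"));
    }
}
```

A work order for a failure mode with no registered checklist is left unchanged, so the mechanism is safe to apply to every order.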

    SOFTWARE MAINTENANCE

    The process of modifying a software system or component. There are three

    classes.

    Perfective maintenance incorporates changes demanded by the user; these

    may, for example, be due to changes in requirements or legislation, or be for

    embedded applications in response to changes in the surrounding system.


    Adaptive maintenance incorporates changes made necessary by

    modifications in the software or hardware (operational) environment of the

    program, including changes in the ...

    Preventive Maintenance

    Preventive maintenance is a schedule of planned maintenance actions aimed at the prevention of

    breakdowns and failures. The primary goal of preventive maintenance is to prevent the failure of

    equipment before it actually occurs. It is designed to preserve and enhance equipment reliability

    by replacing worn components before they actually fail. Preventive maintenance activities include

    equipment checks, partial or complete overhauls at specified periods, oil changes, lubrication and

    so on. In addition, workers can record equipment deterioration so they know to replace or repair

    worn parts before they cause system failure. Recent technological advances in tools for

    inspection and diagnosis have enabled even more accurate and effective equipment

    maintenance. The ideal preventive maintenance program would prevent all equipment failure

    before it occurs.
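A schedule of planned maintenance actions at specified periods, together with the advance-notice idea described earlier (printing a work order x days before it is due), can be sketched with Java's standard date/time API. The task, dates, and the 7-day notice window are illustrative assumptions:

```java
import java.time.LocalDate;
import java.time.Period;

// Illustrative sketch (not from any specific maintenance product): given a
// task's last completion date and its interval, compute the next due date
// and decide whether its work order should be issued, x days ahead of time.
public class PreventiveMaintenanceScheduler {

    // Interval expressed as a java.time.Period: weekly, monthly, quarterly, ...
    public static LocalDate nextDueDate(LocalDate lastDone, Period interval) {
        return lastDone.plus(interval);
    }

    // Advance notice: issue the work order once today is within x days of the due date.
    public static boolean shouldPrintWorkOrder(LocalDate today, LocalDate dueDate, int advanceDays) {
        return !today.isBefore(dueDate.minusDays(advanceDays));
    }

    public static void main(String[] args) {
        LocalDate lastOilChange = LocalDate.of(2009, 6, 1);          // assumed task history
        LocalDate due = nextDueDate(lastOilChange, Period.ofMonths(3)); // quarterly task
        System.out.println("Next due: " + due);
        System.out.println(shouldPrintWorkOrder(LocalDate.of(2009, 8, 25), due, 7));
    }
}
```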

    Value of Preventive Maintenance

    There are multiple misconceptions about preventive maintenance. One such misconception is

    that PM is unduly costly. This logic dictates that it would cost more for regularly scheduled

    downtime and maintenance than it would normally cost to operate equipment until repair is

    absolutely necessary. This may be true for some components; however, one should compare not

    only the costs but the long-term benefits and savings associated with preventive maintenance.

    Without preventive maintenance, for example, costs for lost production time from unscheduled

    equipment breakdown will be incurred. Also, preventive maintenance will result in savings due to

    an increase of effective system service life.

    Long-term benefits of preventive maintenance include:

    Improved system reliability.

    Decreased cost of replacement.

    Decreased system downtime.

    Better spares inventory management.

    Long-term effects and cost comparisons usually favor preventive maintenance over performing

    maintenance actions only when the system fails.

    When Does Preventive Maintenance Make Sense

    Preventive maintenance is a logical choice if, and only if, the following two conditions are met:

    Condition #1: The component in question has an increasing failure rate. In other words,

    the failure rate of the component increases with time, thus implying wear-out. Preventive

    maintenance of a component that is assumed to have an exponential distribution (which

    implies a constant failure rate) does not make sense!


    Condition #2: The overall cost of the preventive maintenance action must be less than the

    overall cost of a corrective action. (Note: In the overall cost for a corrective action, one

    should include ancillary tangible and/or intangible costs, such as downtime costs, loss of

    production costs, lawsuits over the failure of a safety-critical item, loss of goodwill, etc.)

    If both of these conditions are met, then preventive maintenance makes sense. Additionally,

    based on the costs ratios, an optimum time for such action can be easily computed for a single

    component. This is detailed in later sections.
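Under the two conditions above, an optimum interval can be found numerically. The sketch below uses illustrative numbers only: a Weibull failure distribution with shape beta > 1 (so condition #1 holds) and a preventive cost well below the corrective cost (so condition #2 holds), minimizing the classic cost-per-unit-time ratio C(t) = (Cp*R(t) + Cc*(1-R(t))) / integral of R(s) from 0 to t. It is an assumption-laden illustration, not the exact computation referred to above:

```java
// Numerical sketch with assumed parameters: Weibull shape beta > 1 models an
// increasing failure rate (wear-out); Cp and Cc are the assumed preventive
// and corrective costs. A coarse search finds the interval with minimum
// long-run cost per unit time.
public class OptimalPmInterval {

    static double reliability(double t, double beta, double eta) {
        return Math.exp(-Math.pow(t / eta, beta)); // Weibull survival function R(t)
    }

    static double costPerUnitTime(double t, double beta, double eta, double cp, double cc) {
        // Trapezoidal integration of R(s) from 0 to t (expected uptime per cycle).
        int steps = 1000;
        double h = t / steps, area = 0.0;
        for (int i = 0; i < steps; i++) {
            area += 0.5 * (reliability(i * h, beta, eta) + reliability((i + 1) * h, beta, eta)) * h;
        }
        double r = reliability(t, beta, eta);
        return (cp * r + cc * (1.0 - r)) / area;
    }

    public static void main(String[] args) {
        double beta = 2.5, eta = 1000.0; // assumed wear-out behaviour (hours)
        double cp = 100.0, cc = 1000.0;  // assumed PM vs corrective costs
        double bestT = 0, bestCost = Double.MAX_VALUE;
        for (double t = 50; t <= 2000; t += 10) { // coarse search over candidate intervals
            double c = costPerUnitTime(t, beta, eta, cp, cc);
            if (c < bestCost) { bestCost = c; bestT = t; }
        }
        System.out.printf("Optimal PM interval ~ %.0f hours (cost rate %.4f)%n", bestT, bestCost);
    }
}
```

Note how both conditions matter: with a constant failure rate (beta = 1) or with Cp close to Cc, the cost curve has no interior minimum and run-to-failure is as good as any schedule.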

    Conclusion


I felt great satisfaction in developing this project, because it is a system-architecture project and can be used for product-based development. During the development of this application I did some research into how the operating system defragments disk space, and with an innovative idea I started exploring the system. To capture the fragmented files from the disk drive I used system DLL files, which play a predominant role. After all my explorations and repeated trials I could complete the project with good, useful utilities.

    The project has been appreciated by all the users.


    It is easy to use, since it uses the GUI provided in the user

    dialog.

    User friendly screens are provided.

    The usage of software increases the efficiency, decreases the

    effort.

    It has been efficiently employed as a Site management

    mechanism.

    It has been thoroughly tested and implemented.

BIBLIOGRAPHY:

    JAVA:

    JAVA Complete reference by Subramanaim Allamaraju,

    Cedric Buest

    JAVA Script Programming by Yehuda Shiram

    J2EE by Shadab Siddiqui

    JAVA Server Pages by Larne Pekowsely


JAVA Server Pages by Nick Todd