
DCIM Distributed Cache Invalidation Documentation



COMPANY PROFILE

Name of the Company : CLDC IT SOLUTION

Year of Establishment : 2006

Partner Name : Mr. Kannan V

Address : No. 997, Gnanalakshmi Complex,

2nd Floor, M.T.P Road, R.S Puram

Coimbatore – 641 002

Ph : 0422 – 2542210 ; 98650 – 02203.

No.534, 100 Feet Road,

Gandhipuram, Coimbatore – 641 012

Ph: +910422-2522588; 9952145599.


ABOUT CLDC IT SOLUTION

CLDC, a pioneer in IT solutions, provides quality training in software and hardware education. The programs detailed overleaf are aimed at students from all streams, working professionals, and others interested in upgrading their software and hardware skills.

This program aims at building computer professionals in the fields of communication systems, database administration, software engineering, system administration, and network administration.

The syllabus undergoes constant upgradation to ensure that students stay in tune with the demands of a fast-changing industry.

CLDC has many branches, with the main branch located in Chennai; this CLDC IT Solution office at Gandhipuram is one of those branches.

Our Philosophy

We believe that all people are created equal with unique qualities, and that people have the ability and desire to be high performers in their personal and professional lives. Everybody has innate extraordinary potential.

We believe that what makes people successful is the encouragement to realize how extraordinary they can be and the initiative to become determined. The only way to achieve this is by creating an atmosphere of discussion, interaction, and involvement in life skills through new-age training methods.


Our vision

Within an environment of rapid change, resource constraints, and ever-increasing demands, to provide innovative and efficient support and quality service to meet the needs of the students and the unemployed community through training and placement.

Our Vision Statement

Training systems are strategic weapons, never an expense.

Our Mission

To empower the average person to recognize the enormous value of personal development and the vast benefits of inner potential through quality training systems, value-addition methods, and innovative programs.

Our Activities

Keynotes & seminars, dynamic workshops, outdoor workshops, and training & placements.


1.1.2 OBJECTIVE

To maintain the consistency of cached data items in mobile ad hoc networks, we propose a pull-based algorithm that implements adaptive TTL, piggybacking, and prefetching, and provides near strong consistency guarantees. Cached data items are assigned adaptive TTL values that correspond to their update rates at the data source. Expired items, as well as unexpired ones that meet certain criteria, are grouped into validation requests to the data source, which in turn sends the cache devices the actual items that have changed, or invalidates them, based on their request rates. This approach, which we call the distributed cache invalidation mechanism (DCIM), works on top of the COACS cooperative caching architecture. To our knowledge, this is the first complete client-side approach employing adaptive TTL and achieving superior availability, delay, and traffic performance. Our main objective is to reduce traffic, data leakage, and time delay.


ABSTRACT

This paper proposes the distributed cache invalidation mechanism (DCIM), a client-based cache consistency scheme implemented on top of a previously proposed architecture for caching data items in mobile ad hoc networks (MANETs), namely COACS, where special nodes cache the queries and the addresses of the nodes that store the responses to these queries. We have also previously proposed a server-based consistency scheme, named SSUM, whereas in this paper we introduce DCIM, which is totally client-based. DCIM is a pull-based algorithm that implements adaptive time to live (TTL), piggybacking, and prefetching, and provides near strong consistency capabilities. Cached data items are assigned adaptive TTL values that correspond to their update rates at the data source, where items with expired TTL values are grouped into validation requests to the data source to refresh them, whereas unexpired ones with high request rates are prefetched from the server. In this paper, DCIM is analyzed to assess the delay and bandwidth gains (or costs) when compared to polling every time and to push-based schemes. DCIM was also implemented using ns2 and compared against client-based and server-based schemes to assess its performance experimentally. The consistency ratio, delay, and overhead traffic are reported versus several variables, with DCIM shown to be superior to the other systems.


2.1 EXISTING SYSTEM:

The cache consistency mechanisms in the literature can be grouped into three main categories: push-based, pull-based, and hybrid approaches. Push-based mechanisms are mostly server-based, where the server informs the caches about updates, whereas pull-based approaches are client-based, where the client asks the server to update or validate its cached data. Finally, in hybrid mechanisms, the server pushes the updates or the clients pull them.

2.1.1 DISADVANTAGES OF EXISTING SYSTEM:

The major issue facing client cache management is the maintenance of data consistency between the cache client and the data source. All cache consistency algorithms seek to increase the probability of serving, from the cache, data items that are identical to those on the server.

However, achieving strong consistency, where cached items are identical to those on the server, requires costly communications with the server to validate (renew) cached items, considering the resource-limited mobile devices and the wireless environments they operate in.


2.2 PROPOSED SYSTEM:

In this paper, we propose a pull-based algorithm that implements adaptive TTL, piggybacking, and prefetching, and provides near strong consistency guarantees. Cached data items are assigned adaptive TTL values that correspond to their update rates at the data source. Expired items, as well as unexpired ones that meet certain criteria, are grouped into validation requests to the data source, which in turn sends the cache devices the actual items that have changed, or invalidates them, based on their request rates. This approach, which we call the distributed cache invalidation mechanism (DCIM), works on top of the COACS cooperative caching architecture.
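As an illustrative sketch (our own, not the authors' code), the grouping of expired and frequently requested items into a single validation request could look like this; the CachedItem fields and the hotThreshold parameter are assumptions for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class ValidationBatch {
    static class CachedItem {
        final int id;
        long expiryTime;      // predicted next-update time at the data source
        double requestRate;   // requests per second observed at the cache
        CachedItem(int id, long expiryTime, double requestRate) {
            this.id = id; this.expiryTime = expiryTime; this.requestRate = requestRate;
        }
    }

    /** Group expired items, plus unexpired ones whose request rate exceeds a
     *  threshold, into one validation request (a list of ids) for the server. */
    static List<Integer> buildValidationRequest(List<CachedItem> cache,
                                                long now, double hotThreshold) {
        List<Integer> ids = new ArrayList<>();
        for (CachedItem item : cache) {
            boolean expired = item.expiryTime <= now;
            boolean hot = item.requestRate >= hotThreshold;
            if (expired || hot) ids.add(item.id);
        }
        return ids;
    }

    public static void main(String[] args) {
        List<CachedItem> cache = List.of(
            new CachedItem(1, 90, 0.1),   // expired -> validate
            new CachedItem(2, 200, 5.0),  // unexpired but hot -> prefetch candidate
            new CachedItem(3, 300, 0.2)); // unexpired, cold -> skipped
        System.out.println(buildValidationRequest(cache, 100, 1.0)); // [1, 2]
    }
}
```

Batching the ids this way is what lets the piggybacking described in the text amortize one round trip to the data source over many items.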

ADVANTAGES OF PROPOSED SYSTEM:

TTL algorithms are popular due to their simplicity, sufficiently good

performance, and flexibility to assign TTL values to individual data items.

Also, they are attractive in mobile environments because of limited device

energy and network bandwidth and frequent device disconnections.

TTL algorithms are also completely client based and require minimal server

functionality. From this perspective, TTL-based algorithms are more practical

to deploy and are more scalable.

This is the first complete client-side approach employing adaptive TTL and achieving superior availability, delay, and traffic performance.


2.3 FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out, to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.

The key considerations involved in the feasibility analysis are:

ECONOMICAL FEASIBILITY

OPERATIONAL FEASIBILITY

TECHNICAL FEASIBILITY

SOCIAL FEASIBILITY

2.3.1 ECONOMICAL FEASIBILITY:

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system was well within the budget, which was achieved because most of the technologies used are freely available; only the customized products had to be purchased.

2.3.2 OPERATIONAL FEASIBILITY

This study is carried out to check the operational feasibility, that is, an evaluation of how well the system operates. Part of operational feasibility is how well the system is received by workers. It is also important to analyse how any new changes or plans will fit into the existing systemic framework. This evaluation determines whether the system is operationally acceptable and how the proposed system will fit with the current operational system.


2.3.3 TECHNICAL FEASIBILITY:

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must therefore have modest requirements, and only minimal or no changes are required for implementing this system.

2.3.4 SOCIAL FEASIBILITY:

This aspect of the study checks the level of acceptance of the system by the user, including the process of training the user to use the system efficiently. The user must not feel threatened by the system, but instead must accept it as a necessity. The level of acceptance by the users depends on the methods employed to educate the user about the system and to make him or her familiar with it. The user's level of confidence must be raised so that he or she is also able to make some constructive criticism, which is welcomed, as he or she is the final user of the system.


3. SYSTEM SPECIFICATION

3.1 Hardware Configuration

Processor : Pentium III

Speed : 1GHZ

Hard Disk : 40 GB

RAM capacity : 128 MB

CD-ROM drive : 52x speed

Keyboard : 104 keys (normal)

Mouse : Logitech (3 button mouse)

Printer : HP3745 DeskJet printer

Ethernet card : 10/100 Mbps

Monitor : 15” LG Color Monitor

3.2 Software Configuration

Operating system : Windows XP

Front End : Java

Tool : NetBeans 6.9


4. SOFTWARE DESCRIPTION

4.1 Front End

Java Technology

Java technology is both a programming language and a platform. The

Java programming language is a high-level language.

With most programming languages, you either compile or interpret a program

so that you can run it on your computer. The Java programming language is unusual in

that a program is both compiled and interpreted. With the compiler, first you translate

a program into an intermediate language called Java byte codes —the platform-

independent codes interpreted by the interpreter on the Java platform. The interpreter

parses and runs each Java byte code instruction on the computer. Compilation happens

just once; interpretation occurs each time the program is executed. The following

figure illustrates how this works.

You can think of Java byte codes as the machine code instructions for the Java

Virtual Machine (Java VM). Every Java interpreter, whether it’s a development tool or

a Web browser that can run applets, is an implementation of the Java VM. Java byte

codes help make “write once, run anywhere” possible. You can compile your program

into byte codes on any platform that has a Java compiler. The byte codes can then be

run on any implementation of the Java VM. That means that as long as a computer has


a Java VM, the same program written in the Java programming language can run on

Windows 2000, a Solaris workstation, or on an iMac.
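The compile-once, run-anywhere cycle described above can be seen with a minimal program of our own (the class and method names are hypothetical): `javac Hello.java` produces platform-independent byte codes in Hello.class, and `java Hello` lets any Java VM interpret them.

```java
public class Hello {
    // The byte codes produced for this method are identical on every platform.
    static String message() {
        return "Hello from the Java platform";
    }
    public static void main(String[] args) {
        System.out.println(message()); // same output on Windows, Solaris, or a Mac
    }
}
```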

The Java Platform

A platform is the hardware or software environment in which a program runs. We’ve

already mentioned some of the most popular platforms like Windows 2000, Linux,

Solaris, and MacOS. Most platforms can be described as a combination of the

operating system and hardware. The Java platform differs from most other platforms

in that it’s a software-only platform that runs on top of other hardware-based

platforms.

The Java platform has two components:

The Java Virtual Machine (Java VM)

The Java Application Programming Interface (Java API)

You’ve already been introduced to the Java VM. It’s the base for the Java platform

and is ported onto various hardware-based platforms.

The Java API is a large collection of ready-made software components that provide

many useful capabilities, such as graphical user interface (GUI) widgets. The Java API

is grouped into libraries of related classes and interfaces; these libraries are known as

packages. The next section, What Can Java Technology Do?, highlights what functionality some of the packages in the Java API provide.


The following figure depicts a program that’s running on the Java platform. As the

figure shows, the Java API and the virtual machine insulate the program from the

hardware.

Native code is code that, once compiled, runs on a specific hardware platform. As a platform-independent environment, the Java platform can be

a bit slower than native code. However, smart compilers, well-tuned interpreters, and

just-in-time byte code compilers can bring performance close to that of native code

without threatening portability.

What Can Java Technology Do?

The most common types of programs written in the Java programming

language are applets and applications. If you’ve surfed the Web, you’re probably

already familiar with applets. An applet is a program that adheres to certain

conventions that allow it to run within a Java-enabled browser.

However, the Java programming language is not just for writing cute,

entertaining applets for the Web. The general-purpose, high-level Java programming

language is also a powerful software platform. Using the generous API, you can write

many types of programs.

An application is a standalone program that runs directly on the Java platform.

A special kind of application known as a server serves and supports clients on a

network. Examples of servers are Web servers, proxy servers, mail servers, and print

servers. Another specialized program is a servlet. A servlet can almost be thought of

as an applet that runs on the server side. Java Servlets are a popular choice for building


interactive web applications, replacing the use of CGI scripts. Servlets are similar to

applets in that they are runtime extensions of applications. Instead of working in

browsers, though, servlets run within Java Web servers, configuring or tailoring the

server.

How does the API support all these kinds of programs? It does so with packages of

software components that provide a wide range of functionality. Every full

implementation of the Java platform gives you the following features:

The essentials: Objects, strings, threads, numbers, input and output, data

structures, system properties, date and time, and so on.

Applets: The set of conventions used by applets.

Networking: URLs, TCP (Transmission Control Protocol), UDP (User Datagram Protocol) sockets, and IP (Internet Protocol) addresses.

Internationalization: Help for writing programs that can be localized for users

worldwide. Programs can automatically adapt to specific locales and be displayed in

the appropriate language.

Security: Both low level and high level, including electronic signatures, public

and private key management, access control, and certificates.

Software components: Known as JavaBeans™, these can plug into existing component architectures.

Object serialization: Allows lightweight persistence and communication via

Remote Method Invocation (RMI).

Java Database Connectivity (JDBC™): Provides uniform access to a wide range of relational databases.

The Java platform also has APIs for 2D and 3D graphics, accessibility, servers,

collaboration, telephony, speech, animation, and more. The following figure depicts

what is included in the Java 2 SDK.


We can’t promise you fame, fortune, or even a job if you learn the Java

programming language. Still, it is likely to make your programs better and to require

less effort than other languages. We believe that Java technology will help you do the

following:

Get started quickly: Although the Java programming language is a powerful

object-oriented language, it’s easy to learn, especially for programmers already

familiar with C or C++.

Write less code: Comparisons of program metrics (class counts, method

counts, and so on) suggest that a program written in the Java programming language

can be four times smaller than the same program in C++.

Write better code: The Java programming language encourages good coding

practices, and its garbage collection helps you avoid memory leaks. Its object

orientation, its JavaBeans component architecture, and its wide-ranging, easily

extendible API let you reuse other people’s tested code and introduce fewer bugs.

Develop programs more quickly: Your development time may be as much as

twice as fast versus writing the same program in C++. Why? You write fewer lines of

code and it is a simpler programming language than C++.

Avoid platform dependencies with 100% Pure Java: You can keep your

program portable by avoiding the use of libraries written in other languages. The

100% Pure Java™ Product Certification Program has a repository of historical process

manuals, white papers, brochures, and similar materials online.


Write once, run anywhere: Because 100% Pure Java programs are compiled

into machine-independent byte codes, they run consistently on any Java platform.

Distribute software more easily: You can upgrade applets easily from a

central server. Applets take advantage of the feature of allowing new classes to be

loaded “on the fly,” without recompiling the entire program.

4.2 FEATURES OF JAVA

Simple

Object oriented

Distributed

Interpreted

Multithreaded

Robust

Dynamic

Secure

Simple

The language itself could be considered a derivative of C and C++, so it is familiar. At the same time, the environment takes over many error-prone tasks from the programmer, such as pointer handling and memory management. Java also eliminates the operator overloading and multiple inheritance features of C++, and implements automatic garbage collection.

Object Oriented

The language is object oriented at the foundation level and allows the inheritance

and reuse of code both in a static and dynamic fashion. Java comes with an

extensive set of classes, arranged in packages that can be used in programs.

Distributed

Java is designed for the distributed environment of the Internet, because it handles TCP/IP protocols. In fact, accessing a resource using a URL is not much different


from accessing a file. The original version of Java (Oak) included features for

inter-address-space messaging. This allows objects on two different computers to

execute procedures remotely. Java has recently revived these interfaces in a

package called Remote Method Invocation (RMI). This feature brings an

unparalleled level of abstraction to client/server programming.

Robust

The features of the language and run-time environment make sure that the code is

well behaved. This comes primarily as a result of the push for portability. Java is a

strongly typed language, which allows for extensive compile time checking for

potential type mismatch problems. Exception handling is another feature in Java

that makes for more robust programs. An exception is a signal that some sort of

exceptional condition, like an error, has occurred. The Java interpreter also

performs a number of run-time checks such as verifying that all array and string

accesses are within bounds.
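The run-time bounds checks and exception handling described above can be illustrated with a small example of our own (the class and method names are hypothetical):

```java
public class BoundsDemo {
    /** Returns the element as text, or a recovery message: the Java runtime's
     *  bounds check raises an exception instead of reading stray memory. */
    static String readSafely(int[] data, int index) {
        try {
            return "value=" + data[index];
        } catch (ArrayIndexOutOfBoundsException e) {
            return "out of bounds";
        }
    }
    public static void main(String[] args) {
        int[] data = {10, 20, 30};
        System.out.println(readSafely(data, 1)); // value=20
        System.out.println(readSafely(data, 7)); // out of bounds
    }
}
```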

Multithreaded

It is easy to imagine multiple things going on at the same time in a GUI (Graphical

User Interface) based network application. Java, as a multithreaded language, provides support for multiple threads of execution, also called lightweight processes, that can handle different tasks. Java makes programming with threads easier by providing built-in language support for threads.
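A minimal sketch of this built-in thread support, using names of our own choosing: two lightweight threads increment a shared counter, and join() waits for both before the total is read.

```java
public class ThreadDemo {
    /** Runs the given number of threads, each incrementing a shared atomic
     *  counter perThread times, and returns the combined total after join(). */
    static int countWithThreads(int threads, int perThread) throws InterruptedException {
        java.util.concurrent.atomic.AtomicInteger counter =
            new java.util.concurrent.atomic.AtomicInteger();
        Thread[] pool = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            pool[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) counter.incrementAndGet();
            });
            pool[i].start();
        }
        for (Thread t : pool) t.join(); // wait for every lightweight process
        return counter.get();
    }
    public static void main(String[] args) throws InterruptedException {
        System.out.println(countWithThreads(2, 1000)); // 2000
    }
}
```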

Secure

Security is an important concern, as Java is mainly used in networked environments. Java implements several security mechanisms to protect the user against code that might try to create a virus or invade the file system. All of these mechanisms are based on the premise that nothing is to be trusted. Java's memory allocation model is one of its main


defences against malicious code. The Java compiler does not handle memory

layout decisions so that no one can guess the actual memory layout of a class by

looking at its declaration. The Java run-time system uses a byte-code verification

process to ensure that code loaded over the network does not violate any Java

language restrictions.

Dynamic

Java programs carry with them substantial amounts of run-time type

information that is used to verify and resolve accesses to objects at run-time.

This makes it possible to dynamically link code in a safe and expedient

manner. This is crucial to the robustness of the applet environment, in which

small fragments of byte code may be dynamically updated on a running

system.


5. PROJECT DESCRIPTIONS

5.1 PROBLEM DEFINITION

The work on push-based mechanisms mainly uses invalidation reports (IRs). Since the original IR approach was proposed, several algorithms have followed. They include stateless schemes, where the server stores no information about the client caches, and stateful approaches, where the server maintains state information, as in the case of the AS scheme. Many optimizations and hybrid approaches have been proposed to reduce traffic and latency, like SSUM and the SACCS scheme, where the server has partial knowledge about the mobile node caches, and flag bits are used both at the server and the mobile nodes to indicate data updates. Such mechanisms necessitate server-side modifications and overhead processing. More crucially, they require the server to maintain some state information about the MANET, which is costly in terms of bandwidth consumption, especially in highly dynamic environments.


5.2 OVERVIEW OF THE PROJECT

DCIM belongs to a different class of approaches, as it is a completely pull-based scheme. Hence, we focus our survey of previous work on pull-based schemes, although we also compare the performance of DCIM with that of our recently proposed push-based approach, namely SSUM. Pull-based approaches, as discussed before, fall into two main categories:

CLIENT POLLING

TIME TO LIVE.

CLIENT POLLING:

In client polling systems, a cache validation request is initiated according to a schedule determined by the cache. There are variants that try to achieve strong consistency by validating each data item before it is served to a query, in a fashion similar to the “If-Modified-Since” method of HTTP/1.1. In some systems, each cache entry is validated when queried using a modified search algorithm, and the system is configured with a probability that controls whether the data item is validated from the server or from the neighbors when requested. Although client poll algorithms have relatively low bandwidth consumption, their access delay is high, considering that each item needs to be validated upon each request. DCIM, on the other hand, attempts to provide valid items by adapting expiry intervals to update rates, and uses prefetching to reduce query delays.
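The contrast between the two policies can be sketched as follows; this is our own illustrative code under assumed names, not from the paper:

```java
public class ServePolicy {
    /** Client polling: every query triggers a validation round trip first. */
    static boolean mustContactServerPolling() {
        return true;
    }
    /** TTL-based serving (DCIM-style): the cache answers directly until the
     *  adaptive expiry time passes, then validates with the data source. */
    static boolean mustContactServerTtl(long now, long expiryTime) {
        return now >= expiryTime;
    }
    public static void main(String[] args) {
        System.out.println(mustContactServerPolling());     // true on every query
        System.out.println(mustContactServerTtl(50, 100));  // false: serve from cache
        System.out.println(mustContactServerTtl(150, 100)); // true: validate first
    }
}
```

The access-delay gap in the text follows directly: polling pays a round trip on every query, while the TTL policy pays it only after expiry.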

TIME TO LIVE

TTL-based approaches have been proposed for MANETs in several caching architectures. Some of these works suggest the use of TTL to maintain cache consistency, but do not explain how the TTL calculation and modification are done. A simple consistency scheme based on a TTL provided by the server has also been proposed, but without sufficient details. Related to the above, approaches that rely on a fixed TTL are very sensitive to the chosen TTL value and exhibit poor performance. Another approach has the client prefetch items from nodes in the network based on the items’ request rates, and achieves consistency with the data sources using an adaptive TTL calculated similarly to the schemes of Squid.

In the Squid-style schemes, TTL is calculated as the difference between the query time and the kth most recent distinct update time at the server, divided by a factor K, and the server relays to the cache the k most recent update times. Other proposed mechanisms take into consideration a complete update history at the server to predict future updates and assign TTL values accordingly. These approaches assume that the server stores the update history for each item, which does not make them attractive solutions. On the other hand, one approach computes TTL in a TCP-oriented fashion to adapt to server updates. However, it is rather complex to tune, as it depends on six parameters, and moreover, our preliminary simulation results revealed that this algorithm gives poor predictions. Finally, another scheme computes TTL from a probability describing the staleness of cached documents. It is also worth mentioning that piggybacking has been proposed in the context of cache consistency to save traffic: in some schemes the cache piggybacks a list of invalidated documents when communicating with the server, while in others the server piggybacks a list of updated documents when it communicates with the cache.

DCIM adapts the TTL values to provide higher consistency levels by having each CN estimate the inter-update interval, predict the time of the next update, and set that prediction as the item’s expiry time. It also estimates the inter-request interval for each data item to predict its next request time, and then prefetches items that it expects to be requested soon.
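The inter-update estimation at a CN can be sketched as follows; this is our own simplified illustration, and the EWMA smoothing, field names, and alpha value are assumptions rather than the paper's exact formulas:

```java
public class UpdatePredictor {
    private double avgInterval = -1;    // smoothed inter-update interval
    private double lastUpdate = -1;     // timestamp of last known server update
    private final double alpha = 0.125; // EWMA weight (a TCP-RTT-style choice)

    /** Record an observed server update and refine the interval estimate. */
    void observeUpdate(double time) {
        if (lastUpdate >= 0) {
            double sample = time - lastUpdate;
            avgInterval = (avgInterval < 0)
                ? sample
                : (1 - alpha) * avgInterval + alpha * sample;
        }
        lastUpdate = time;
    }

    /** Predicted next update time, used as the item's adaptive expiry. */
    double predictedExpiry() {
        return (avgInterval < 0) ? Double.MAX_VALUE : lastUpdate + avgInterval;
    }
}
```

The same pattern applied to request timestamps yields the next-request prediction that drives prefetching.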


5.3 MODULE DESCRIPTION

CACHE NODE

CLIENT PULLING

HYBRID BASED

TIME TO LIVE

5.3.1 CACHE NODE MODULE

In this module, the cache node operates in a network of mobile devices that cache data retrieved from a data server, without requiring the latter to maintain state information about the caches. Cache nodes manage the queries and the addresses of the nodes that store the responses to these queries. In a typical caching architecture, several mobile devices cache data that other devices frequently access or query. Data items are essentially an abstraction of application data, which can be anything from database records to web pages and FTP files. Cache management concerns the maintenance of data consistency between the cache client and the data source. The cache node processes and monitors the corresponding thread.

5.3.2 CLIENT PULLING MODULE

This module handles the client-based (pull-based) side, where the client asks the server to update or validate its cached data. DCIM is a pull-based algorithm that implements adaptive time to live (TTL), piggybacking, and prefetching, and provides near strong consistency capabilities. One family of pull approaches is the time to live (TTL)-based algorithms, where clients are responsible for pulling the data from the server; in DCIM, the CNs monitor the TTL information and accordingly trigger the cache updating and validation process.


5.3.3 HYBRID MODULE

This module covers hybrid approaches, which were proposed to reduce traffic and latency, like SSUM and the SACCS scheme, where the server has partial knowledge about the mobile node caches, and flag bits are used both at the server and the mobile nodes to indicate data updates. Such mechanisms necessitate server-side modifications and overhead processing. More crucially, they require the server to maintain some state information about the MANET, which is costly in terms of bandwidth consumption, especially in highly dynamic environments. In hybrid mechanisms, the server pushes the updates or the clients pull them. In MANET environments, data caching is essential because it increases the ability of mobile devices to access desired data and improves overall system performance.

5.3.4 TIME TO LIVE MODULE

The first mechanism calculates TTL as a factor multiplied by the time difference between the query time of the item and its last update time; this factor determines how optimistic or conservative the algorithm is. In the second mechanism, TTL is adapted as a factor multiplied by the last update interval. In a third, TTL is calculated as the difference between the query time and the kth most recent distinct update time at the server, divided by a factor K, and the server relays to the cache the k most recent update times. Other proposed mechanisms take into consideration a complete update history at the server to predict future updates and assign TTL values accordingly. These approaches assume that the server stores the update history for each item, which does not make them attractive solutions. Yet another approach computes TTL in a TCP-oriented fashion to adapt to server updates.
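The TTL formulas described above can be written down directly; the method and parameter names below are our own illustrative choices:

```java
public class TtlFormulas {
    /** First mechanism: factor times the item's age at query time; a larger
     *  factor is more optimistic (longer TTL), a smaller one more conservative. */
    static double ttlByAge(double queryTime, double lastUpdateTime, double factor) {
        return factor * (queryTime - lastUpdateTime);
    }
    /** Second mechanism: factor times the most recent update interval. */
    static double ttlByLastInterval(double lastUpdateInterval, double factor) {
        return factor * lastUpdateInterval;
    }
    /** Third mechanism: span back to the kth most recent distinct update time,
     *  divided by the factor K. */
    static double ttlByKthUpdate(double queryTime, double kthUpdateTime, int k) {
        return (queryTime - kthUpdateTime) / k;
    }
    public static void main(String[] args) {
        System.out.println(ttlByAge(100, 60, 0.5));     // 20.0
        System.out.println(ttlByLastInterval(40, 0.5)); // 20.0
        System.out.println(ttlByKthUpdate(100, 40, 4)); // 15.0
    }
}
```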


5.4 DATA FLOW DIAGRAM

Definition:

A data flow diagram is a graphical technique that depicts information flow and the transforms that are applied as data moves from input to output. The data flow diagram can be used to represent a system or software at any level of abstraction; in fact, DFDs may be partitioned into levels.

A level 0 DFD, also called a context model, represents the entire software element as a single bubble, with input and output shown by arrows. A level 1 DFD is partitioned into several bubbles with interconnecting arrows, and each process represented at level 1 is a subfunction of the overall flow depicted in the context model.

The DFD Notations:

External Entity: hardware, a person, or another program.

Process: a transform of information within the system being modeled.

Data Item(s): an arrow whose head indicates the direction of flow.

Store: stored information that is used by the software.


5.4.1 DATA FLOW DIAGRAM


5.4.2 UML DIAGRAM

USE CASE DIAGRAM

Actors: Client, Server

- Cache data at the caching node (CN) from the server
- Set / monitor TTL
- Monitor inter-update interval
- Predict time for next update
- Store cached information in query directories
- Request data from the CN
- Process the request from the client
- Send the data with its generation time


5.4.3 SEQUENCE DIAGRAM FOR MODULES

- Cache node: caching data from the server
- Client pulling: requesting data
- Hybrid based: push and pull based operation
- Time to live: storing the data


5.4.4 STRUCTURE DIAGRAM

RELATIONSHIP DIAGRAM

PROCESS DIAGRAM

The relationship and process diagrams depict the server, client, and hybrid components, with flows for requesting data, pulling, sending data, processing requests, and monitoring TTL and inter-update intervals.


5.7 INPUT DESIGN

Input design is the process of converting user-originated inputs into a computer-usable format. The goal of input design is to make data entry logical and error free; errors in the input data are controlled by input design.

This application is developed in a user-friendly manner. The forms are designed in such a way that during processing the cursor is placed at the position where the data must be entered. An option of selecting an appropriate input from a set of validated values is provided for each item of data entered. Help messages are also provided whenever the user enters a new field, so that he/she can understand what is to be entered. Whenever the user enters erroneous data, an error message is displayed, and the user can move to the next field only after entering the correct data.
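A minimal sketch of this per-field validation logic (the field names and messages below are illustrative assumptions, not taken from the actual forms):

```java
public class FieldValidator {

    // Returns an error message for an empty or malformed entry,
    // or null when the value is valid and the cursor may advance.
    public static String validate(String field, String value) {
        if (value == null || value.trim().isEmpty()) {
            return "Please enter a value for " + field;
        }
        if (field.equals("port") && !value.matches("\\d+")) {
            return "Port must be numeric";
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(validate("name", ""));      // prompts for a value
        System.out.println(validate("port", "3306"));  // null: entry accepted
    }
}
```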


5.8 OUTPUT DESIGN

The output of the system is produced either on screen or as hard copies. Output design aims at communicating the results of processing to the users. The reports are generated to suit the needs of the users and must be produced at appropriate levels of detail.

In our project, outputs are generated for final review using HTML tables, with a preview of the output to be sent.


6. SYSTEM TESTING

Testing is the stage of implementation aimed at ensuring that the system works accurately and efficiently before live operation commences. The system test should thus confirm that all is correct and provide an opportunity to show the users that the system works. The security enhancement method is tested in this system using various testing techniques.

Testing steps are

1. Unit Testing

2. Acceptance testing

3. Integration Testing

4. Validation Testing

5. Security Testing

6.1 Unit Testing

Unit testing focuses verification effort on the smallest unit of software and can be conducted in parallel for multiple modules. The module interface is tested to ensure that information properly flows into and out of the program unit under test. The local data structures are examined to ensure that data stored temporarily maintains its integrity during all steps of an execution.

Boundary conditions are tested to ensure that the module operates properly at the boundaries established to limit or restrict processing. All independent paths through the control structures are exercised to ensure that all statements in a module have been executed at least once. Finally, all error-handling paths are tested.

In this project, each module of the application is tested separately; for example, message transmission between nodes is tested on its own.

Test Results: All the test cases mentioned above passed successfully. No defects encountered.


6.2 Acceptance testing:

User acceptance testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.

The customer specifies scenarios to test whether a user story has been correctly implemented. A story can have one or many acceptance tests, whatever it takes to ensure the functionality works. Acceptance tests are black-box system tests; each acceptance test represents some expected result from the system. Customers are responsible for verifying the correctness of the acceptance tests and for reviewing test scores to decide which failed tests are of highest priority. Acceptance tests are also used as regression tests prior to a production release. A user story is not considered complete until it has passed its acceptance tests; this means that new acceptance tests must be created in each iteration, or the development team will report zero progress.

Test Results: All the test cases mentioned above passed successfully. No defects

encountered.

6.3 Integration Testing

After completing the unit tests, the modules are integrated and tested with sample data. Integration testing addresses the issues associated with the dual problems of verification and program construction. After the software has been integrated, a set of high-order tests is conducted.

Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is dictated by the design. There are two types:

Top-down integration.

Bottom-up integration.

Test Results: All the test cases mentioned above passed successfully. No defects

encountered.


6.4 Validation Testing

At the end of integration testing, the software is completely assembled as a package, interfacing errors have been uncovered and corrected, and validation testing begins. Every column should be filled with data; if the user skips any column or enters wrong data, an alert message is displayed. This reduces the workload of the server.

Software validation is achieved through a series of black-box tests that demonstrate conformity with the requirements. A test plan outlines the classes of tests to be conducted, and a test procedure defines the specific test cases that will be used to demonstrate conformity with requirements. Both the plan and the procedure are designed to ensure that all functional requirements are achieved, documentation is correct, and other requirements are met. After each validation test case has been conducted, one of two possible conditions exists:

1. The function or performance characteristics conform to specification and are accepted.

2. A deviation from specification is uncovered and a deficiency list is created.

A deviation or error discovered at this stage of a project can rarely be corrected prior to scheduled completion, so it is necessary to negotiate with the customer to establish methods for resolving the deficiencies.

Test Results: All the test cases mentioned above passed successfully. No defects encountered.


6.5 Security Testing

During this testing, the tester plays the role of an individual who desires to penetrate the system. The tester may attempt to acquire passwords through external clerical means and attack the system with custom software designed to break down any defenses that have been constructed.

The tester may also overwhelm the system, thereby denying service to others, may purposely cause system errors in order to penetrate during recovery, and may browse through insecure data hoping to find a key to system entry.

Test Results: All the test cases mentioned above passed successfully. No defects encountered.


7. SYSTEM IMPLEMENTATION

After proper testing and validation, the question arises whether the system can be implemented. Implementation includes all the activities that take place to convert from the old system to the new. The new system may be totally new, replacing an existing manual or automated system, or it may be a major modification of an existing system. In either case, proper implementation is essential to provide a reliable system that meets organizational requirements.

All planning has now been completed, and the transformation to a fully operational system can commence. The first job is the writing, debugging, and documenting of all computer programs and their integration into a total system. The master and transaction files are decided, and the general processing of the system is established. Programming is complete when the programs conform to the detailed specification.

When the system is ready for implementation, the emphasis switches to communication with the finance department staff. Open discussion with the staff is important from the beginning of the project. Staff can be expected to be concerned about the effect of automation on their jobs, and the fear of redundancy or loss of status must be allayed immediately. During the implementation phase it is important that all staff concerned be apprised of the objectives and overall operation of the system. They will need training on how computerization will change their duties, and they need to understand how their role relates to the system as a whole. An organization-wide training program is advisable; this can include demonstrations, newsletters, seminars, etc.


The department should allocate a member of staff who understands the system and the equipment and who is responsible for its smooth operation. An administrator should coordinate the users of the system.

In the existing system, the major issue facing client cache management is the maintenance of data consistency between the cache client and the data source. All cache consistency algorithms seek to increase the probability of serving from the cache data items that are identical to those on the server.

In the proposed system, we use a pull-based approach; the TTL algorithms are completely client based and require minimal server functionality. From this perspective, TTL-based algorithms are more practical to deploy and more scalable.

This is the first complete client-side approach that employs adaptive TTL and achieves superior availability, delay, and traffic performance.
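The client-side idea can be illustrated with a simplified running-average estimator of inter-update intervals (the smoothing formula below is an assumption for illustration; DCIM's exact estimator may differ):

```java
public class InterUpdateEstimator {

    private double avgInterval = 0;   // running average of inter-update intervals
    private long lastUpdateTime = -1; // time of the most recent observed update
    private final double alpha;      // smoothing weight for new observations

    public InterUpdateEstimator(double alpha) {
        this.alpha = alpha;
    }

    // Record an observed server update and refresh the running average.
    public void onUpdate(long updateTime) {
        if (lastUpdateTime >= 0) {
            long interval = updateTime - lastUpdateTime;
            avgInterval = (avgInterval == 0)
                    ? interval
                    : alpha * interval + (1 - alpha) * avgInterval;
        }
        lastUpdateTime = updateTime;
    }

    // TTL = time remaining until the predicted next update.
    public long ttlAt(long now) {
        long predictedNext = lastUpdateTime + (long) avgInterval;
        return Math.max(0, predictedNext - now);
    }
}
```

With updates observed at times 0, 100, and 300 and alpha = 0.5, the average interval becomes 150, so a query at time 350 would receive a TTL of 100.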


8. CONCLUSION & FUTURE ENHANCEMENT

8.1 CONCLUSION

We presented a client-based cache consistency scheme for MANETs that relies on estimating the inter-update intervals of data items to set their expiry times. It makes use of piggybacking and prefetching to increase the accuracy of its estimates and to reduce both traffic and query delays. We compared this approach to two pull-based approaches (fixed TTL and client polling) and to two server-based approaches (SSUM and UIR). The comparison showed that DCIM provides better overall performance than the other client-based schemes and performance comparable to SSUM.


8.2 FUTURE ENHANCEMENTS

For future work, we will explore three directions to extend DCIM. First, we will investigate more sophisticated TTL algorithms to replace the running-average formula. Second, we will extend our preliminary work to develop a complete replica allocation scheme. Third, DCIM assumes that all nodes are well behaved, as issues related to security were not considered. However, given the possibility of network intrusions, we will explore integrating appropriate security measures into the system functions. These functions include the QD election procedure, QD traversal, QD and CN information integrity, and TTL monitoring and calculation. The first three can be addressed through encryption and trust schemes; the last issue has not been tackled before in this context.


9. APPENDIX

9.1 SOURCE CODE

package Ensure;

//~--- JDK imports ------------------------------------------------------------

import java.sql.*;

// Opens the MySQL connection used by the application and creates a pool of
// scroll-sensitive, updatable statements.
public class db_Conn {

    public static Connection con = null;
    public Statement st = null;

    // The original code declared stmt, stmt1 ... stmt30 individually;
    // an array holds the same 31 statements without the repetition.
    public Statement[] stmt = new Statement[31];

    public db_Conn() {
        try {
            Class.forName("com.mysql.jdbc.Driver");
            String url = "jdbc:mysql://localhost:3306/DCIM";
            con = DriverManager.getConnection(url, "root", "");

            st = con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
                                     ResultSet.CONCUR_UPDATABLE);
            for (int i = 0; i < stmt.length; i++) {
                stmt[i] = con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
                                              ResultSet.CONCUR_UPDATABLE);
            }
        } catch (Exception e) {
            System.out.println("Error in db_Conn.java " + e);
        }
    }
}


9.2 SCREENSHOT


10. REFERENCES

[1] http://doi.ieeecomputersociety.org/10.1109/ITNG.2009.179/

[2] K. Fawaz and H. Artail, “DCIM: Distributed Cache Invalidation Method for Maintaining Cache Consistency in Wireless Mobile Networks,” IEEE Trans. Mobile Computing, vol. 12, no. 4, Apr. 2013.

[3] T. Andrel and A. Yasinsac, “On Credibility of MANET Simulations,” IEEE Computer, vol. 39, no. 7, pp. 48-54, July 2006.

[4] H. Artail, H. Safa, K. Mershad, Z. Abou-Atme, and N. Sulieman, “COACS: A Cooperative and Adaptive Caching System for MANETS,” IEEE Trans. Mobile Computing, vol. 7, no. 8, pp. 961- 977, Aug. 2008.

[5] W. Zhang and G. Cao, “Defending Against Cache Consistency Attacks in Wireless Ad Hoc Networks,” Ad Hoc Networks, vol. 6, pp. 363-379, 2008

[6] B. Krishnamurthy and C.E. Wills, “Piggyback Server Invalidation for Proxy Cache Coherency,” Proc. Seventh Int’l Conf. World Wide Web, Apr. 1998.

[7] J. Jung, A.W. Berger, and H. Balakrishnan, “Modeling TTL-Based Internet Caches,” Proc. IEEE INFOCOM, Mar. 2003.

[8] K. Mershad and H. Artail, “SSUM: Smart Server Update Mechanism for Maintaining Cache Consistency in Mobile Environments,” IEEE Trans. Mobile Computing, vol. 9, no. 6, pp. 778-795, June 2010.

[9] K. Fawaz and H. Artail, “A Two-Layer Cache Replication Scheme for Dense Mobile Ad Hoc Networks,” Proc. IEEE Global Comm. Conf. (GlobeCom), Dec. 2012.


Sites Referred:

http://java.sun.com

http://www.sourcefordgde.com

http://www.networkcomputing.com/

http://it-ebooks.info/book/931/