An Analysis of Distributed Computer Network Administration

1Amit Kumar Sahu,

2Naveen Hemrajani

1M-Tech Scholar, Suresh Gyan Vihar University, Jaipur (India), [email protected],

2Vice Principal of Engineering, Suresh Gyan Vihar University, Jaipur (India), [email protected]

Abstract

Computer networking is essential to human beings. People have always needed to share ideas, research findings, and ways of life with their friends, and in modern times the role of computer networks has become indispensable. Analyzing, accessing, sharing, and storing data and information are easy with the help of distributed computer networking. Social networking sites support human communication and create platforms for like-minded individuals to share their thoughts and opinions. Computer networking equipment has contributed massively to the progress we see today.

However, when managing network traffic in a distributed computing environment, maintaining the quality of network connections is a challenging task for the network administrator. The required network properties depend on the types of applications in use. Important properties of connection quality are bandwidth, delay, and reliability. The network administrator uses a variety of troubleshooting software to solve network problems concerning connection quality.

This paper explores properties relevant to the administration of a distributed computer network environment, including techniques for measuring and analyzing performance. The analysis is valuable for administrators who manage distributed computer networks.

Keywords: network traffic, distributed computer network, bandwidth, delay, reliability.

1. Introduction

1.1 Distributed computing

Distributed computing is a field of computer science that

studies distributed systems. A distributed system consists

of multiple autonomous computers that communicate

through a computer network. In distributed computing, a

problem is divided into many tasks, each of which is

solved by one or more computers. The study of distributed

computing became its own branch of computer science in

the late 1970s and early 1980s.

The platform for distributed systems has been the

enterprise network linking workgroups, departments,

branches, and divisions of an organization. Data is not

located in one server, but in many servers. These servers

might be at geographically diverse areas, connected by

WAN links [3].

There are two main reasons for using distributed systems

and distributed computing. First, the very nature of the

application may require the use of a communication

network that connects several computers. For example,

data is produced in one physical location and it is needed

in another location. Second, there are many cases in which

the use of a single computer would be possible in principle,

but the use of a distributed system is beneficial for

practical reasons.

Traditionally, the server processed both the client environment and the production environment. In situations where the data can be stored on the PC itself, the processing of the production data can be executed on the local processor. This moves the bottleneck away from the production server's processors and onto the network bandwidth [3].

Some advantages of the thick-client approach are:

Lower server requirements, as a thick client does most of the application processing itself.

Lower network bandwidth usage in the user environment, because no keyboard or screen data has to be sent to and from the server.

Higher system reliability, as thick clients can operate even when the processing servers are unavailable.

1.2 Computer Networks

A computer network, often simply referred to as a network,

is a collection of hardware components and computers

interconnected by communication channels that allow

sharing of resources and information. When at least one process on one device is able to send data to or receive data from at least one process residing on a remote device, the two devices are said to be part of a network.

1.2.1 Network Classification

Networks may be classified according to a wide variety of

characteristics such as the medium used to transport the

data, communications protocol used, range of the network,

scale, topology, and organizational scope.

Amit Kumar Sahu et al., Int. J. Computer Technology & Applications, Vol. 3 (2), 660-667. ISSN: 2229-6093.

Networks that

range a few meters are classified as personal area networks (PANs). Networks that range a few hundred meters are classified as local area networks (LANs). Networks that span some kilometers are known as metropolitan area networks (MANs), and networks ranging beyond that are classified as wide area networks (WANs) [1].

1.2.2 Network Topologies

Network topology is the layout pattern of interconnections

of the various elements (links, nodes, etc.) of a computer

or biological network. Network topologies may be

physical or logical. Physical topology refers to the physical design of a network, including the devices, their locations, and the cable installation. Logical topology refers to how data is actually transferred in a network, as opposed to its physical design. In general, the physical topology relates to the core network, whereas the logical topology relates to the basic network.

There are several possible ways of connecting nodes together in a computer network. These possibilities are called network topologies [1].

The study of network topology recognizes eight basic

topologies:

Point-to-point

Bus

Star

Ring or circular

Mesh

Tree

Hybrid

Daisy chain

1.2.3 Network Transmission Media

Various physical media can be used to transport a stream of bits from one device to another. Each medium has its own characteristics in terms of bandwidth, propagation delay, cost, and ease of installation and maintenance. Popular network transmission media are twisted-pair cable, coaxial cable, and optical fiber cable.

1.2.4 Network Protocols

A protocol is a set of rules that governs the

communications between computers on a network. These

rules include guidelines that regulate the following

characteristics of a network: access method, allowed

physical topologies, types of cabling, and speed of data

transfer. A protocol can therefore be implemented as

hardware or software or both. Examples are IP, TCP,

UDP, HTTP, FTP, SSH, MAC, ARP, FDDI, MPLS, etc.

1.2.5 Network Media Access Control

The Medium Access Control (MAC) protocol is used to

provide the data link layer of the Ethernet LAN system.

The MAC protocol encapsulates an SDU (payload data) by adding a 14-byte header (Protocol Control Information, PCI) before the data and appending a 4-byte (32-bit) Cyclic Redundancy Check (CRC) after the data. Examples of media access control technologies are ATM, FDDI, PPP, and Ethernet.

1.3 Data security on network

The area of network security consists of the provisions

and policies adopted by the network administrator to

prevent and monitor unauthorized access, misuse,

modification, or denial of the computer network and

network-accessible resources. Network security involves

the authorization of access to data in a network, which is

controlled by the network administrator. Users choose or

are assigned an ID and password or other authenticating

information that allows them access to information and

programs within their authority [1].

To secure a computer system, the following three properties are essential [8]:

Confidentiality

Integrity

Availability

1.4 Measurements

The measure of computer network performance is

commonly given in units of bits per second (bps). This

quantity can represent either an actual data rate or a

theoretical limit to available network bandwidth. Modern

networks support very large numbers of bits per second.

Instead of quoting 10,000 bps or 100,000 bps, practitioners normally express these quantities in larger units such as "kilobits," "megabits," and "gigabits."
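The conversion from raw bits per second to these larger units can be sketched in a few lines of Python (a hypothetical helper for illustration, not part of any tool mentioned in this paper):

```python
def format_bps(bps):
    """Express a raw bits-per-second value in kilobits, megabits, or gigabits."""
    # Networking convention uses decimal (SI) prefixes: 1 kilobit = 1,000 bits.
    for threshold, unit in ((1e9, "Gbps"), (1e6, "Mbps"), (1e3, "kbps")):
        if bps >= threshold:
            return f"{bps / threshold:g} {unit}"
    return f"{bps:g} bps"

print(format_bps(10_000))    # 10 kbps
print(format_bps(100_000))   # 100 kbps
```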

Measurements are conducted in four stages [4].

Data collection

Analysis

Presentation

Interpretation

Time series

A time series diagram shows time on the x-axis and the measured data values on the y-axis. Time series diagrams

are useful for describing the measured data, and spotting

trends. An example of a time series diagram can be found

in figure 1.



Figure 1: Time series diagram.

2. Review of Literature

2.1 Quality of Service

These days, having stable network connections is a must. In the modern world we live in today, many of our activities could no longer proceed without a reliable Internet connection. Communication, banking transactions, research, and even shopping all depend on the World Wide Web. With our extensive use of mobile phones, computers, GPS devices, music and video players, entertainment appliances, and other gadgets, we certainly need to be aware of what must be done to ensure that our connections function well [1][12][13].

Quality of Service refers to a broad collection of

networking technologies and techniques. The goal of

Quality of Service is to provide guarantees on the ability

of a network to deliver predictable results. A network

monitoring system must typically be deployed as part of Quality of Service to ensure that networks are performing at the desired level [1][13]. The problem in a distributed computer network is that different routes may have different properties. The main properties of distributed computer network connections are [1][13]:

1) Bandwidth

2) Delay

3) Reliability

4) Jitter

2.2 Quality of Service Standards

With the growth of multimedia networking, ad hoc measures are often not enough. Serious attempts at guaranteeing quality of service through network and protocol design are needed. In the following sections we continue our study of network performance, but with a sharper focus on ways to provide a quality of service matched to application needs. It should be stated at the start, however, that many of these ideas are in flux and subject to change. The IETF has tried to create an architecture for streaming multimedia, and its work resulted in two different approaches.

2.2.1 Differentiated Services

Recent work on differentiated services in the Internet has

defined new notions of Quality of Service (QoS) that apply

to aggregates of traffic in networks with coarse spatial

granularity. Most proposals for differentiated services

involve traffic control algorithms for aggregate service

levels, packet marking and policing, and preferential

treatment of marked packets in the network core. The problem with this approach is that there is no common policy for the type-of-service field, so when packets pass through different networks they may be handled differently than the original sender intended [1][13][12].

2.2.2 Integrated Services

Integrated Services is an architecture that specifies the elements required to guarantee quality of service (QoS) on networks. It can, for example, be used to allow video and sound to reach the receiver without interruption. Integrated Services creates a path through the network for the flow of data; this requires a setup phase in which the connection between the two nodes is established. The idea behind Integrated Services is that every router in the system implements it, and every application that requires some kind of guarantee has to make an individual reservation [1][13][12].

2.3 Bandwidth

Bandwidth in computer networking refers to the data rate

supported by a network connection or interface. Bandwidth is most commonly expressed in bits per second

(bps). The term comes from the field of electrical

engineering, where bandwidth represents the total distance

or range between the highest and lowest signals on the

communication channel (band)[1][4].

Access Control Technology            Bits         Bytes
Ethernet (10Base-X)                  10 Mb/s      1.25 MB/s
Fast Ethernet (100Base-X)            100 Mb/s     12.5 MB/s
FDDI                                 100 Mb/s     12.5 MB/s
Gigabit Ethernet (1000Base-X)        1,000 Mb/s   125 MB/s

Table 1: Bandwidth provided by various access technologies

Bandwidth, as the throughput of the connection, is what interests the customer. "Throughput is a measure of the amount of data that can be sent over a network in a particular time" [14]. The throughput is determined by the formula:

Throughput = Data Transferred / Time    Eq-1
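Eq-1 can be checked with a few lines of Python; a minimal sketch, with made-up illustration values for the data size and transfer time:

```python
def throughput_bps(data_transferred_bits, time_seconds):
    """Throughput = Data Transferred / Time (Eq-1), in bits per second."""
    return data_transferred_bits / time_seconds

# Transferring 100 MB (8e8 bits) in 80 seconds:
print(throughput_bps(8e8, 80))  # 10000000.0 bits/s, i.e. 10 Mb/s
```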

2.4 Reliability

Reliability is defined as "an attribute of any system that consistently produces the same results, preferably meeting or exceeding its specifications" [5]. To provide network



reliability, it is also important to do preplanning and/or

advanced preparation. Of course, one way is to have

additional spare capacity in the network. However, there

can be a failure in the network that can actually take away

the spare capacity if the network is not designed properly

because of dependency between the logical network and

the physical network [20]. Thus, it is necessary to audit the

network and find the vulnerable points in the network and

then to equip the network with additional capabilities to

avoid such vulnerabilities. For example,

1. The network may be provided with transmission-level diversity, so that for any transmission link failure there is at least one alternative path that avoids the failed link.

2. A redundant architecture can be built at network nodes to address a node-component or nodal failure; this may include dual homing or multi-homing to provide multiple access and egress points to and from the core network.

The failure rate (λ) has been defined as "the total number of failures within an item population, divided by the total time expended by that population, during a particular measurement interval under stated conditions". It has also been defined mathematically as the probability that a failure per unit time occurs in a specified interval, which is often written in terms of the reliability function, R(t), as:

λ = [R(t1) − R(t2)] / [(t2 − t1) · R(t1)]    Eq-2

where t1 and t2 are the beginning and end of a specified interval of time, and R(t) is the reliability function, i.e. the probability of no failure before time t. The

failure rate data can be obtained in several ways. The most

common methods are [22]:

Historical data about the device or system under

consideration.

Government and commercial failure rate data.

Testing.
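Eq-2 can be evaluated directly. A minimal sketch, where the reliability values and interval endpoints are made-up illustration numbers:

```python
def failure_rate(r_t1, r_t2, t1, t2):
    """Failure rate over [t1, t2] per Eq-2: (R(t1) - R(t2)) / ((t2 - t1) * R(t1))."""
    return (r_t1 - r_t2) / ((t2 - t1) * r_t1)

# Example: reliability drops from 0.98 at t1 = 100 h to 0.96 at t2 = 200 h.
lam = failure_rate(0.98, 0.96, 100, 200)
print(lam)  # roughly 0.0002 failures per hour
```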

2.5 Delay

Network delay is an important design and performance

characteristic of a computer

network or telecommunications network. The delay of a

network specifies how long it takes for a bit of data to

travel across the network from one node or endpoint to

another. It is typically measured in multiples or fractions

of seconds. Delay may differ slightly depending on the location of the specific pair of communicating nodes. Although users care only about the total delay of a network, engineers usually report both the maximum and average delay, and they divide the delay into several parts:

Processing delay - time routers take to process the

packet header

Queuing delay - time the packet spends in routing

queues

Transmission delay - time it takes to push the

packet's bits onto the link

Propagation delay - time for a signal to reach its

destination
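The four components above sum to the per-hop delay. A rough Python sketch, with assumed illustration values for packet size, link speed, and distance (the ~2e8 m/s signal speed is a common rule of thumb for copper and fiber):

```python
def transmission_delay(packet_bits, link_bps):
    # Time to push all of the packet's bits onto the link.
    return packet_bits / link_bps

def propagation_delay(distance_m, signal_speed_mps=2e8):
    # Time for the signal to travel the length of the link.
    return distance_m / signal_speed_mps

def total_delay(processing_s, queuing_s, transmission_s, propagation_s):
    # Per-hop delay is the sum of the four components described above.
    return processing_s + queuing_s + transmission_s + propagation_s

# A 1500-byte packet on a 100 Mb/s link spanning 2000 km:
t = transmission_delay(1500 * 8, 100e6)   # 0.00012 s
p = propagation_delay(2_000_000)          # 0.01 s
print(total_delay(1e-5, 2e-4, t, p))
```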

2.6 Jitter

Jitter is the amount of variation in latency/response time,

in milliseconds. Reliable connections consistently report

back the same latency over and over again. Lots of

variation (or 'jitter') is an indication of problems [1]. Jitter

shows up as different symptoms, depending on the

application you're using. Web browsing is fairly resistant

to jitter, but any kind of streaming media (voice, video,

music) is quite susceptible to jitter [3]. Jitter is a symptom

of other problems. It's an indicator that there might be

something else wrong. Often, this 'something else' is

bandwidth saturation (sometimes called congestion) - or

not enough bandwidth to handle the traffic load [14].
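Jitter as "variation in latency" can be quantified in several ways; one simple sketch (the RTT samples below are hypothetical, in milliseconds) takes the mean absolute difference between consecutive measurements:

```python
def jitter_ms(latencies_ms):
    """Mean absolute difference between consecutive latency samples."""
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return sum(diffs) / len(diffs)

stable   = [20, 21, 19, 20, 21]     # consistent latency -> low jitter
unstable = [20, 45, 18, 60, 22]     # lots of variation  -> high jitter
print(jitter_ms(stable))    # 1.25
print(jitter_ms(unstable))  # 33.0
```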

3. Analysis of Present Investigation

In a network environment, authorized users may access

data and information stored on other computers on the

network. The capability of providing access to data and

information on shared storage devices is an important

feature of many networks. The state of a network link can

be determined by measuring the throughput, the delay, the

jitter and the packet loss. These four properties represent

the quality of the link. The following studies attempt to measure these properties.

3.1 First Study: Traffic on Network

By measuring the network traffic at a specific node, we can collect information about the state of that particular node. The collected information helps the network administrator optimize system performance in a distributed computer network. The administrator can analyze the performance of a node, a subnet, or the whole network.

There are mainly two locations where we can perform passive network measurements in a distributed computer network:

The state of a particular network node that performs a routing or forwarding function; virtual private networks, firewalls, and routers are examples of such nodes.

The state of a service host that provides a network service in the distributed computer system



network. HTTP, FTP, and DNS are examples of such network services.

The SNMP service is one method of collecting the measurements, but it increases network traffic. With the help of the tcpdump program we can analyze the network traffic. The tcpdump command works on most flavors of the Unix operating system. tcpdump allows us to save the packets that are captured, so that we can use them for later analysis; the saved file can be viewed with the same tcpdump command. The tcpdump command has many options. When we execute tcpdump without any options, it captures all the packets flowing through all the interfaces. Using the -i option with the tcpdump command allows us to filter on a selected Ethernet interface, for example tcpdump -i eth0. A simplified version of the captured output is shown following.

Another command used for this purpose is tcpstat. tcpstat is a highly configurable program that measures data and can generate statistics if desired. By default, tcpstat automatically searches for an appropriate interface and shows current statistics on it. The output of this command includes the bandwidth being used, the number of packets, the average packet size, and the bytes, ARP packets, TCP packets, and ICMP packets since the last measurement.

The following command was executed on both nodes:

tcpstat -i eth0 -o "%S %A %C %V %I %T %U %a %d %b %p %n %N %l \n"

Interval is the sample interval, in seconds, on which the statistics are based and, in default mode, how often the display is updated. If -1 is given, the interval is taken to be the entire length of the sample. The default is 5 seconds.

The tcpstat program extracts some data from the host node. These values are based on the data measured since the last measurement. Sample output is shown following.

The following values are processed by tcpstat itself and can be viewed in the raw data log. The values are, from left to right in the sample output:

Timestamp

Number of packets passing through the interface

The average (mean) packet size

The standard deviation of the packet size

The number of bits per second
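The raw log can then be post-processed. The sketch below assumes a whitespace-separated line holding the five values in the order listed above (timestamp, packet count, mean size, standard deviation, bits per second); the exact layout depends on the -o format string used, so treat the field order and the sample line as assumptions:

```python
def parse_tcpstat_line(line):
    """Parse one assumed raw log line: timestamp, packets, mean size, stddev, bps."""
    ts, npackets, avg, stddev, bps = line.split()
    return {
        "timestamp": int(ts),
        "packets": int(npackets),
        "avg_packet_size": float(avg),
        "stddev_packet_size": float(stddev),
        "bps": float(bps),
    }

# Hypothetical sample line from the raw data log:
sample = "1333200000 1024 512.5 120.3 4194304"
record = parse_tcpstat_line(sample)
print(record["packets"], record["bps"])  # 1024 4194304.0
```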

The distribution of the transport layer protocols is shown in figure 2. The transport layer protocols are TCP and UDP; these protocols are encapsulated within the IP protocol.

Figure 2: Distribution of the Transport Layer Protocols for Node One

Studying the output above, we can say that Node One never fully utilizes the available bandwidth, while Node Two may utilize the bandwidth fully.

3.2 Second Study: Throughput

It should be possible to measure arbitrary one-way singleton characteristics (e.g., loss, median delay, mean delay, jitter, 85th percentile of delay). By performing active measurements from one node to another node with equal link speed, the state of the connection can be determined. If the link speed is not as anticipated, countermeasures can be taken to locate and remove the bottleneck.

Using an active measurement tool, the connection between two nodes is benchmarked and analyzed. The test should provide enough information to see trends in the network and to determine whether the node manages to utilize the available bandwidth. To remove uncertainties in the results, benchmarking should be executed from one node to two other nodes.

There are many tools to perform active measurements.

Known network throughput benchmarking tools are:

netperf, iperf, ttcp, and ftp. Netperf is a benchmark that can be used to measure various aspects of networking performance. Its primary foci are bulk (i.e., unidirectional) data transfer and request/response performance using either TCP or UDP and the Berkeley Sockets interface. To execute the experiment, server and client programs have to be installed on the nodes.

Description        Node One              Node Two              Node Three
Processor Model    AMD                   Intel Pentium IV      Intel Pentium IV
Processor MHz      1800 MHz              2800 MHz              2800 MHz
Memory             1 GB                  1 GB                  512 MB
Operating System   Ubuntu (Linux-based)  Ubuntu (Linux-based)  Ubuntu (Linux-based)
Network MAC        Ethernet              Ethernet              Ethernet
Network Link       100 Mb/s              100 Mb/s              100 Mb/s
ISP                UNINETT               UNINETT               UNINETT
IP Address         192.168.1.100         192.168.1.120         192.168.1.124

Table 2: Resource configuration for the study

Netperf has two types of command-line options. The first

are global command line options. They are essentially any

option not tied to a particular test or group of tests. An

example of a global command-line option is the one which

sets the test type - -t.

The second type of options are test-specific options. These

are options which are only applicable to a particular test or

set of tests. An example of a test-specific option would be

the send socket buffer size for a TCP_STREAM test.

Global command-line options are specified first with

test-specific options following after a -- as in:

netperf <global> -- <test-specific>
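The global/test-specific split can be illustrated by assembling an invocation programmatically. A sketch in Python; the host address and socket-buffer value are made-up examples, and netperf itself would need to be installed for the assembled command to actually run:

```python
def netperf_argv(host, test="TCP_STREAM", test_opts=None):
    """Build a netperf argument list: global options first, then -- and test-specific ones."""
    argv = ["netperf", "-H", host, "-t", test]   # global options
    if test_opts:
        argv += ["--"] + list(test_opts)         # test-specific options after the separator
    return argv

# e.g. a TCP_STREAM test with a specific send-socket buffer size (-s):
print(netperf_argv("192.168.1.120", test_opts=["-s", "57344"]))
```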

For the experiment, the server program was installed on

Nodes Two and Three, and the client software was installed

on Node One.

3.3 Third Study: Packet Loss, Delay and Jitter

The delay can be measured as the time it takes for one packet to be sent from one node to another. When the objects being timed are nodes, the maximum delay applies only to the path between the two nodes. It is well known that the delay time in communication links follows an exponential distribution.

Jitter, which represents the variation in network delay, is measured using a moving-average computation over a given stream of UDP packets. We can notice that the variations in the trends of the jitter measurements occurred only at the first route change along the path, which is shown in figure 3. The round trip time is measured as the time from when a packet is sent from a node to a destination node until a reply packet is received from the destination node.
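The moving-average jitter computation mentioned above can be sketched as follows. This uses the exponentially weighted estimator from RTP (RFC 3550), J ← J + (|D| − J)/16, where D is the change in delay between consecutive packets; this is an assumed choice, since the paper does not specify which moving average was used:

```python
def rtp_jitter(delays_ms):
    """Exponentially weighted moving-average jitter (RFC 3550 style)."""
    jitter = 0.0
    prev = None
    for d in delays_ms:
        if prev is not None:
            diff = abs(d - prev)            # D: delay change between consecutive packets
            jitter += (diff - jitter) / 16.0  # J <- J + (|D| - J)/16
        prev = d
    return jitter

print(rtp_jitter([30, 30, 30, 30]))  # 0.0  (no variation, no jitter)
print(rtp_jitter([30, 50, 30, 50]))
```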

There are many tools for measuring the round trip time, or the delay of one-way traffic. The tools' results depend on the packet format used; the available packet formats are ICMP, TCP, and UDP. Each TCP packet consists of a header followed by a data field. The header length can vary between 20 and 60 bytes, and the total size of the packet can be up to 65,535 bytes. In practice, many systems cannot handle packets as large as the protocol allows, and a working maximum size is 576 bytes.

In this study, a script is used that is part of the pinger measurement package, which is used to measure round trip times on links all around the world. This program

contains facilities to send various kinds of probe packets,

including ICMP Echo messages, process the reply and

record elapsed times and other information in a data file,

as well as produce real-time snapshot histograms and

traces.

Figure 3: Route Changes

The data is collected from the nodes by the pinger script, with each line representing one measurement. The sample output is shown following.

The pinger script extracts the data gathered from the active measurements and also provides some processed values based on the measured data. The following values are computed from the ten round trip times measured in each measurement:

The minimum RTT of the ten packets sent in that measurement.

The mean RTT of the ten packets sent in that measurement.

The maximum RTT of the ten packets sent in that measurement.
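Computing these three values from the ten RTT samples of one measurement is straightforward; a minimal sketch with hypothetical sample values:

```python
def rtt_summary(rtts_ms):
    """Return the minimum, mean, and maximum RTT for one measurement."""
    return min(rtts_ms), sum(rtts_ms) / len(rtts_ms), max(rtts_ms)

# Ten hypothetical round trip times (ms) from one pinger measurement:
samples = [21.0, 22.5, 20.8, 23.1, 21.9, 22.0, 24.2, 21.4, 22.7, 20.4]
mn, mean, mx = rtt_summary(samples)
print(mn, round(mean, 2), mx)  # 20.4 22.0 24.2
```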

Figure 4 shows a phase plot of Node One and Node Two.



Figure 4: Phase plot of Node One and Node Two

Figure 5: Distribution of jitter

4. Conclusion

This paper explores different properties relevant to the administration of a distributed computer network environment, together with techniques for measuring and analyzing performance. Which network properties matter depends on the type of application. The paper includes some simple methods for analyzing and presenting the measured data; this analysis is very useful for the administrator who maintains a distributed computer network. The three studies demonstrated the functionality of the tools used to measure the properties that determine quality of service.

The objective of the first study was to use passive throughput measurement tools to monitor the traffic on two different nodes over a particular period. In this case study the tcpstat tool successfully measured the data on the nodes and provided all the information required for analysis.
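At its core, passive throughput monitoring of the kind tcpstat performs amounts to counting the bytes observed on an interface in each sampling interval and converting them to a rate; a minimal illustrative sketch (in a real deployment the byte counts come from packet capture, not from a list):

```python
def throughput_bps(byte_counts, interval_s):
    # Convert per-interval observed byte counts into bits-per-second
    # samples, the form in which a passive monitor reports throughput
    return [count * 8 / interval_s for count in byte_counts]
```

For example, 125,000 bytes observed in a one-second interval corresponds to 1 Mbit/s.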

The objective of the second study was to use active throughput measurement tools to benchmark the distributed network connection between two different nodes over a particular period. The netperf tool successfully measured the throughput. In this study, however, the results did not match those predicted earlier; the mismatch arose from limited hardware resources.
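Active throughput measurement in the spirit of netperf can be approximated by timing how long it takes to push a known volume of data through a connection. The following self-contained loopback sketch illustrates the idea only; a real benchmark would run netperf against a remote node:

```python
import socket
import threading
import time

def run_sink_server():
    # Discard-style server that drains whatever the sender pushes at it
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    def drain():
        conn, _ = srv.accept()
        while conn.recv(65536):
            pass
        conn.close()

    threading.Thread(target=drain, daemon=True).start()
    return srv.getsockname()

def measure_throughput(addr, total_bytes=1_000_000):
    # Time how long it takes to send total_bytes and report bits/second
    sock = socket.create_connection(addr)
    chunk = b"x" * 65536
    sent = 0
    start = time.perf_counter()
    while sent < total_bytes:
        sock.sendall(chunk)
        sent += len(chunk)
    elapsed = time.perf_counter() - start
    sock.close()
    return sent * 8 / elapsed
```

On loopback this mostly measures local hardware limits, which mirrors the observation above that limited hardware resources can dominate the result.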

The objective of the third study was to use active delay measurement tools. The round-trip-time measurements were used to find the delay and the jitter of a connection, and the packet loss data can be used to assess its reliability.
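One simple way to derive jitter from the measured RTT samples (there are several common definitions; this one uses the mean absolute difference between consecutive samples) can be sketched as:

```python
from statistics import mean

def jitter(rtts):
    # Jitter estimated as the mean absolute difference between
    # consecutive round-trip-time samples (same units as the input)
    diffs = [abs(b - a) for a, b in zip(rtts, rtts[1:])]
    return mean(diffs)
```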

Important properties of the quality of a network connection are bandwidth, delay and reliability. The network administrator uses many troubleshooting tools to solve network problems regarding the quality of the connection.

5. References

[1] Andrew S. Tanenbaum. Computer Networks, Fourth Edition. Prentice Hall, 2003.

[2] Annabel Z. Dodd. The Essential Guide to Telecommunications, Second Edition. Prentice Hall PTR, 1999.

[3] William Stallings. Local Networks. ACM, 1984.

[4] Mahbub Hassan and Raj Jain. High Performance TCP/IP Networking. Pearson Prentice Hall, 2004.

[5] Simson Garfinkel, Gene Spafford, and Alan Schwartz. Practical Unix & Internet Security, Third Edition. O'Reilly, 2003.

[6] Fred Halsall. Data Communications, Computer Networks and Open Systems, Fourth Edition. Addison-Wesley, 1996.

[7] Kennedy Clark and Kevin Hamilton. Cisco LAN Switching (CCIE Professional Development). Cisco Press, 1999.

[8] Matt Bishop. Computer Security: Art and Science. Addison-Wesley, 2002.

[9] W. Richard Stevens. TCP/IP Illustrated, Volume 1: The Protocols. Addison-Wesley, 1993.

[10] Sergio Verdú. Wireless bandwidth in the making. IEEE, 2000.

[11] J.M. Chambers, W.S. Cleveland, B. Kleiner, and P.A. Tukey. Graphical Methods for Data Analysis. Duxbury Press, 1983.

[12] Xipeng Xiao and Lionel M. Ni. Internet QoS: A Big Picture. IEEE, 1999.

[13] Larry L. Peterson and Bruce S. Davie. Computer Networks: A Systems Approach, Second Edition. Morgan Kaufmann, 2000.

[14] Joseph D. Sloan. Network Troubleshooting Tools. O'Reilly, 2001.

[15] Rick Jones. The netperf website. http://www.netperf.org/, May 2005.

[16] The ttcp website. http://www.pcausa.com/Utilities/pcattcp.htm, May 2005.

[17] The tcpdump website. http://www.tcpdump.org/, May 2005.

[18] Paul Herman. The tcpstat website. http://www.frenchfries.net/paul/tcpstat/, May 2005.


[19] Amgad Zeitoun, Zhiheng Wang, and Sugih Jamin. Rttometer: Measuring Path Minimum RTT with Confidence. Technical report, The University of Michigan, Ann Arbor, 2003.

[20] PING 127.0.0.1 Computer Services. The ping website. http://www.ping127001.com/pingpage.htm, May 2005.

[21] Jim Gray and Daniel P. Siewiorek. High-Availability Computer Systems. IEEE, 1991.

[22] Allen M. Johnson and Miroslaw Malek. Survey of Software Tools for Evaluating Reliability, Availability, and Serviceability. ACM Computing Surveys, 1988.

[23] Rahul Malhotra, Vikas Gupta, and R. K. Bansal. Simulation & Performance Analysis of Wired and Wireless Computer Networks. Global Journal of Computer Science and Technology, Volume 11 Issue Version 1.0, March 2011.

[24] Martin Swany and Rich Wolski. Improving Throughput with Cascaded TCP Connections: the Logistical Session Layer. UCSB Technical Report 2002-24.

[25] Humaira Nishat, Shakeel Ahmed, and D. Srinivasa Rao. Energy Efficient High Throughput MAC Protocol Analysis for Wireless Ad Hoc Networks. International Journal of Computer Science and Network Security, Vol. 10, No. 6, June 2010.
