
ANDROID WALKIE TALKIE

A Dissertation Submitted to School of Computer Science

In Partial Fulfillment of the Requirement of the Degree of

Bachelor in Computer Science

Under the Supervision of

DR. ABDUL HYEE

Deputy Director (ERP), FESCO.

by

Talha Habib
Registration No. FD0121231728
Email: [email protected]

National College of Business Administration and Economics

40/E-1, Gulberg III, Lahore-54660, Pakistan


ANDROID WALKIE TALKIE

A Dissertation Submitted to

School of Computer Science

In Partial Fulfillment of the

Requirement of the Degree of

BS (Computer Science)

by

Talha Habib
Registration No. FD0121231728

Under the Supervision of

DR. ABDUL HYEE

Deputy Director (ERP), FESCO.

National College of Business Administration and Economics

40/E-1, Gulberg III, Lahore-54660, Pakistan


Declaration by student

I hereby declare that the contents of the thesis "Android Walkie Talkie" are research-based and that no part has been copied from any published source (except the references and some standard mathematical or genetic models/equations/protocols, etc.). I further declare that this work has not been submitted for the award of any other diploma/degree. The University may take action if the above statement is found inaccurate at any stage.

__________________________

Name: Talha Habib


To,

The Controller of Examinations,

Chenab College of Advanced Studies, Faisalabad

We, the supervisory committee, certify that the contents and form of thesis submitted by

Mr. Talha Habib have been found satisfactory and recommend that it be processed for evaluation

by the external examiner(s) for the award of the degree.

Supervisory Committee

1. Supervisor :_______________________________

(Dr. Abdul Hyee)

2. Member :_______________________________

3. Member :_______________________________


DEDICATED TO

The Holy Prophet Hazrat MUHAMMAD (Peace Be Upon Him),

the greatest Teacher of the World,

&

My Loving & Caring Parents, who graced every moment of my life with untiring sustenance, and whose affection, love, encouragement and prayers, day and night, enabled me to achieve the success and honor of accomplishing this task.

My Respectable Teacher, who is always with me and has guided me with love and gratitude.


Acknowledgement

First of all, I would like to thank “ALLAH Almighty” the Merciful, the Creator of mind; who blessed

me with the knowledge and granted me the courage and ability to complete this documentation

successfully.

Thanks to my parents, who cherished every moment of my life with their support; their hands were always raised for me in prayer.

I deeply appreciate the efforts of my supervisor, Dr. Abdul Hyee, who helped me a great deal. Despite the pressure of his work he took the time to listen, assist and offer guidance. He knew where to look for answers to obstacles and led me to the right sources, theories and perspectives. He was always available for my questions, remained positive, and gave generously of his time and vast knowledge. Without his guidance I would not have been able to accomplish this task.

Talha Habib


Table of contents

DECLARATION BY STUDENT
ACKNOWLEDGEMENT
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF ABBREVIATIONS
WALKIE TALKIE
  History
  Amateur radio
  Personal Use
OBJECTIVES
LIMITATION OF STUDY
HYPOTHESIS SET TO ACHIEVE THE OBJECTIVE
  Send and receive procedure
  Connectivity and searching for station
HAND-SHAKE CLIENT VS HAND-SHAKE SERVER
SOFTWARE REQUIREMENT SPECIFICATION
  Functional requirements
  Non-Functional Requirements
SYSTEM DESIGNS
  Strings.xml
  XML (Extensible Markup Language)
  Hand-Shake Server-Client
    Client side handshake
    Server side handshake
    TCP-Three Way Handshaking
    SMTP
    TLS
    WPA2 Wireless
    Dial up access modems
SERVER SIDE NDS HANDSHAKE – RECEIVING PACKETS
  Station Information and Connectivity
  Channel
  Audio Player
  Audio Recorder
  Session Manager
  State View
  Walkie Talkie Services
  Switch Button
  Main Activity
  Channel Session
  Configuration
  Database
PROTOCOL
  Basic Requirements of Protocols
  Protocols and Programming Languages
  Protocol Layering
  Software Layering
APPLICATION STRUCTURE
USE CASE
SDLC
SEQUENCE DIAGRAM
ENTITY RELATION DIAGRAM


List of figures

Figure 1.0  Working model of JS-Collider
Figure 2.0  Sending-receiving voice
Figure 3.0  Hand-shaking
Figure 4.0  Three-way handshake
Figure 5.0  SMTP based handshake
Figure 6.0  TLS layout
Figure 7.0  TLS handshake over SSL
Figure 8.0  Simple TLS handshaking
Figure 9.0  TCP four-way handshake
Figure 10.0 Modem/device/server connection hand-shaking
Figure 11.0 How ping works
Figure 12.0 App setting layout / station name setting
Figure 13.0 Volume control in settings layout/screen
Figure 14.0 Use volume buttons as PTT on settings screen
Figure 15.0 Wi-Fi status check on start
Figure 16.0 Channel
Figure 17.0 Playing voice using inner audio player
Figure 18.0 Protocol layering without modem
Figure 19.0 Protocol layering with modem/router
Figure 20.0 Software layering
Figure 21.0 Protocols and software layering working model
Figure 22.0 Use case
Figure 23.0 SDLC concept
Figure 24.0 Sequence design process – waterfall model
Figure 25.0 Entity diagram for Walkie Talkie


List of abbreviations

NDS: Network Discovery Service

N: Nodes

P2p: peer to peer

N2n: node to node

JS: JavaScript

JSC: JS-Collider

BT: Blue-tooth

DPI: Dots per inch

PX: pixels

UHF: Ultra high frequency

VHF: Very high frequency

PTT: push-to-talk

SCR: Signal Corps Radio

RF: Radio frequency

HT: Handheld transceiver

AN/PRC: Army Navy/ Portable Radio communicator/communication

AN/PRR: Army Navy / Portable Radio Receiver

HDPI: High-density Pixels

XHDPI: Extra High-density Pixels

MDPI: Medium-density Pixels

LDPI: Low-density Pixels

ACK: Acknowledgment

SYN: Synchronize

FRS: Family Radio Service

GMRS: General Mobile Radio Service

PMR: Private Mobile Radio

GPS: Global Positioning System

NFS: Network File System

DHCP: Dynamic host configuration protocol

NPM: Node Package Manager

IEEE: Institute of Electrical and Electronic Engineers


Abstract

Android Wi-Fi Walkie Talkie is an application based on the Walkie Talkie concept that runs over Wi-Fi to provide autonomous communication between devices. Over the last several years technology has moved forward faster than before, yet the Walkie Talkie has remained a very useful tool: it is still in use by police and for other metered communication, for example contacting support or administration within a large building, or calling out for management. This study investigates the possibility of developing a lightweight, alternative peer-to-peer communication app that uses only common gateways, such as an ordinary DHCP server or a modem's Wi-Fi hot-spot, to connect Android devices and treat them as Walkie Talkie handsets.

In the study a prototype was first developed as a simple sound-recorder application that sent the recorded voice over the medium to another device, where the application played the voice back, making it easy to communicate by voice. It was first implemented as a Bluetooth voice sender and receiver; the more the app was used, the more new features and flexibility became visible, and with the help of real-time helper libraries such as JS-Collider it became flexible enough to open its own socket port.

Keywords: Android, Wi-Fi, communication.


Chapter 1

Walkie Talkie

A Walkie Talkie is a hand-held, portable, two-way radio transceiver. Its development during the

Second World War has been variously credited to Donald L. Hings, radio engineer Alfred J. Gross,

and engineering teams at Motorola. First used for infantry, similar designs were created for field

artillery and tank units, and after the war, Walkie Talkies spread to public safety and eventually

commercial and jobsite work. A Walkie Talkie is a half-duplex communication device; multiple Walkie

Talkies use a single radio channel, and only one radio on the channel can transmit at a time, although

any number can listen. The transceiver is normally in receive mode; when the user wants to talk, he

presses a "push-to-talk” button that turns off the receiver and turns on the transmitter. Typical Walkie

Talkies resemble a telephone handset, possibly slightly larger but still a single unit, with an antenna

mounted on the top of the unit. Where a phone's earpiece is only loud enough to be heard by the user,

a Walkie Talkie's built-in speaker can be heard by the user and those in the user's immediate vicinity.

Hand-held transceivers may be used to communicate between each other, or to vehicle-mounted or

base stations.

History

The Walkie Talkie was developed by the US military during World War II. The first radio transceiver

to be widely nicknamed "Walkie Talkie" was the backpacked Motorola SCR-300, created by an

engineering team in 1940 at the Galvin Manufacturing Company. The team consisted of Dan Noble,

who conceived of the design using frequency modulation; Henryk Magnuski, who was the principal

RF engineer; Marion Bond; Lloyd Morris; and Bill Vogel. The first hand-held Walkie Talkie was the

AM SCR-536 transceiver also made by Motorola, named the "Handie-Talkie". The terms are often

confused today, but the original Walkie Talkie referred to the back mounted model, while the handie-

talkie was the device which could be held entirely in the hand. Both devices used vacuum tubes and

were powered by high voltage dry cell batteries. Alfred J. Gross, a radio engineer and one of the

developers of the Joan-Eleanor system, also worked on the early technology behind the Walkie Talkie

between 1934 and 1941, and is sometimes credited with inventing it. Canadian inventor Donald

Hings is also credited with the invention of the Walkie Talkie: he created a portable radio signaling

system for his employer CM&S in 1937. He called the system a "packset", but it later became known

as the "Walkie Talkie". In 2001, Hings was formally decorated for the invention's significance to the war effort. Hings' model C-58 "Handy-Talkie" was in military service by 1942, the result of a secret R&D effort that began in 1940. Following World War II, Raytheon developed the SCR-536's military replacement,

the AN/PRC-6. The AN/PRC-6 circuit used 13 vacuum tubes; a second set of 13 tubes was supplied

with the unit as running spares. The unit was factory set with one crystal which could be changed to a


different frequency in the field by replacing the crystal and re-tuning the unit. It used a 24-inch whip

antenna. There was an optional handset H-33C/PT that could be connected to the AN/PRC-6 by a 5-

foot cable. A web sling was provided.

In the mid-1970s the United States Marine Corps initiated an effort to develop a squad radio to

replace the unsatisfactory helmet-mounted AN/PRR-9 receiver and receiver/transmitter hand-held

AN/PRT-4. The AN/PRC-68 was first produced in 1976 by Magnavox, was issued to the Marines in

the 1980s, and was adopted by the US Army as well. The abbreviation HT, derived from Motorola's

"Handie Talkie" trademark, is commonly used to refer to portable handheld ham radios, with "Walkie

Talkie" often used as a layman's term or specifically to refer to a toy. Public safety or commercial

users generally refer to their handhelds simply as "radios". Surplus Motorola Handie Talkies found

their way into the hands of ham radio operators immediately following World War II. Motorola's

public safety radios of the 1950s and 1960s were loaned or donated to ham groups as part of the Civil Defense program. To avoid trademark infringement, other manufacturers use designations such as "Handheld Transceiver" or "Handie Transceiver" for their products.

Amateur radio

Walkie Talkies are widely used among amateur radio operators. While converted commercial gear from companies such as Motorola is not uncommon, many companies such as Yaesu, Icom, and

Kenwood design models specifically for amateur use. While superficially similar to commercial and

personal units, amateur gear usually has a number of features that are not common to other gear,

including:

Wide-band receivers, often including radio scanner functionality, for listening to non-amateur radio

bands.

Multiple bands; while some operate only on specific bands such as 2 meters or 70 cm, others support

several UHF and VHF amateur allocations available to the user. Since amateur allocations usually are

not channelized, the user can dial in any frequency desired in the authorized band. Multiple

modulation schemes: a few amateur HTs may allow modulation modes other than FM, including AM,

SSB, and CW, and digital modes such as radio-tele-type or PSK31. Some may have TNCs built in to

support packet radio data transmission without additional hardware. A newer addition to the Amateur

Radio service is Digital Smart Technology for Amateur Radio or D-STAR. Handheld radios with this

technology have several advanced features, including narrower bandwidth, simultaneous voice and


messaging, GPS position reporting, and call-sign routed radio calls over a wide ranging international

network.

As mentioned, commercial Walkie Talkies can sometimes be reprogrammed to operate on amateur

frequencies. Amateur radio operators may do this for cost reasons or due to a perception that

commercial gear is more solidly constructed or better designed than purpose-built amateur gear.

Personal Use

The personal Walkie Talkie has also become popular because of the U.S. Family Radio Service and

similar license-free services in other countries. While FRS Walkie Talkies are also sometimes used as

toys because mass-production makes them low cost, they have proper super heterodyne receivers and

are a useful communication tool for both business and personal use. The boom in license-free

transceivers has, however, been a source of frustration to users of licensed services that are

sometimes interfered with. For example, FRS and GMRS overlap in the United States, resulting in

substantial pirate use of the GMRS frequencies. Use of the GMRS frequencies requires a license;

however, most users either disregard this requirement or are unaware. Canada reallocated frequencies

for license-free use due to heavy interference from US GMRS users. The European PMR446

channels fall in the middle of a United States UHF amateur allocation, and the US FRS channels

interfere with public safety communications in the United Kingdom. Designs for personal Walkie

Talkies are in any case tightly regulated, generally requiring non-removable antennas and forbidding

modified radios.

Objectives

The broad objective was to study real-time communication on Android and the functionality of a Walkie Talkie. The specific objectives of the study were:

To examine real-time communication on Android.

To examine how flexibly Android can handle communication, and how far one can go using Java as the language and Android as the OS.

To examine whether Android can be used as a sender-receiver without using GSM, internet services, or any other third-party software or hardware.

To determine whether Android can act as a sender-receiver while remaining an offline device, using only the local network to communicate.

To examine the local network's communication speed and limitations.

To examine how many nodes can communicate through one channel, and the channel's speed and limitations.

To examine how many nodes can communicate with each other at the same time while staying on one channel.

To determine whether increasing the number of nodes slows down the channel.

To determine whether increasing the number of nodes slows down the Android device.


Limitation of study

Not all modems could be used as the communication medium, because of differing firewall settings, variation in firmware, or the absence of DHCP.

Additional or third-party firewalls on Android, or a firewall in the medium, were a challenge, because inbound and outbound connections on the specific channel must be open for Android to send and receive voice over the line without interruption.

Variation in Android OS versions was a big challenge; the package dependencies do not work on modified or older versions of the OS, and the app requires at least Android 4.x to perform at its fullest.

BT (Bluetooth), infrared and NFS were too slow, handling only 2-3 nodes per channel.

A GSM-based channel broadcast would have been possible, but because the objective was to make the system work offline, GSM was not used to broadcast the signal; instead Wi-Fi, hot-spot and DHCP were introduced as the medium.

Wi-Fi/hot-spot based media tend to have faster connections but support fewer nodes; a hot-spot can only handle about 50-70 nodes.

DHCP is the best option overall; its connection quality is above average, and the firewall is the only remaining challenge.

Hypothesis set to achieve the objective

The objective of the study is to make communication over the local network as fast and as close to real time as possible.

It is hypothesized that real-time communication might be possible by using a node-based module or a JS/AJAX-based service to update the communication line. Instead of implementing everything natively, which would make the app large, it would be better to use a JS-based library; this is both a short-cut and a safeguard in case there are too many nodes and the native system is too busy running its own operations, which could crash the app. NodeJS is relatively new in the market, but almost every developer knows that it is no less than a standalone, stable platform. Its most attractive feature here is socket.io, a socket-based system that works on a custom port, committing and emitting messages to communicate. The real-time speed and performance of NodeJS is unmatched, so an alternative way to use a JS-style library was sought, because NodeJS itself runs on its own platform using NPM and Android cannot emulate a Node module inside a native app. In the end JS-Collider was used as the alternative: it provides TCP/IP session emit and commit just like a Node module, and it is also very lightweight and developer-friendly.


Chapter 2

JS-Collider Working

According to authors and developer of JS-collider:

“JS-Collider is an asynchronous event-driven Java network (NIO) application framework designed to

provide maximum performance and scalability for applications having not too many connections but

significant amount of network traffic (both incoming and outgoing). Performance is achieved by

specially designed threading model and lock-free algorithms”

Working model of JS-Collider:

(Figure 1.0 – Working model of JS collider)

Figure 1.0 shows the model of how JS-Collider works.

The blocks labelled "S" are devices (nodes) connected within the local area network. Each device emits its station number and acts as a hand-shake server in its own right, looking for a hand-shake client to validate and bind a connection with. The green "S" blocks are devices connected to the local area network that have not yet been verified; the purple "S" block is a device emitting its station info on the local area network; and the yellow "S" block is a device validating station info and establishing connectivity.

The model keeps expanding as more nodes connect. Each node is a server of its own and treats the other devices as clients; DHCP, hot-spot or modem is just the medium used to establish the connection between them for interaction and communication.
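JS-Collider hides the socket plumbing, but the underlying idea of the model, each node listening on its own port while also connecting out to the peers it finds, can be sketched with plain java.net classes. The sketch below is a simplified stand-in, not the JS-Collider API itself; the class name and port number are hypothetical.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Plain-Java stand-in for the idea in Figure 1.0: every node runs its own
// tiny "hand-shake server" while also connecting out to the peers it finds.
public class NodeSketch {

    // Server role: accept hand-shake clients from other nodes on the LAN.
    static void acceptPeers(int port) throws IOException {
        try (ServerSocket server = new ServerSocket(port)) {
            while (true) {
                Socket peer = server.accept();            // another node connected
                System.out.println("Peer connected: " + peer.getInetAddress());
                peer.close();                             // hand-shake would happen here
            }
        }
    }

    // Client role: connect to a peer that is already listening.
    static void connectToPeer(String host, int port) throws IOException {
        try (Socket socket = new Socket(host, port)) {
            System.out.println("Connected to station at " + host + ":" + port);
        }
    }

    public static void main(String[] args) throws Exception {
        new Thread(() -> {
            try { acceptPeers(4321); } catch (IOException ignored) { }
        }).start();
        Thread.sleep(500);                                // crude wait so the acceptor is listening
        connectToPeer("127.0.0.1", 4321);                 // loopback only for this sketch
    }
}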


Working with JS-Collider:

JS-Collider is a connectivity library that connects nodes together and makes communication as close to real time as possible; through its emit and commit functionality, sending and receiving were achieved.

At this point the only remaining challenge was to send voice over the medium. The Wi-Fi hot-spot completes the first step of the Android connection, and JS-Collider completes the other two steps, connectivity to the broadcast channel and sending/receiving. The only remaining complexity was "How to keep the emit and commit alive?"

If emit and commit run on an interval, or the devices are connected peer-to-peer, this is easy to manage. However, a Walkie Talkie has a PTT (push-to-talk) button, and the user has to push it before broadcasting his voice over the medium. Starting the emit on button press and stopping it on release is fairly complex, because the devices are then no longer continuously connected peer-to-peer, and because of the variations in connections another solution was needed.

Send and receive procedure

Once sending and receiving had been figured out, it was decided to send voice over the medium by recording sound and sending it after one commit; when the other device receives the emit, it automatically plays the committed voice. Recording the voice and treating it as data chunks (slicing it before sending) keeps each packet light and makes the send-receive-play cycle possible.

(Figure 2.0 sending-receiving voice)
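A minimal sketch of the record-then-chunk idea described above, using Android's AudioRecord. The sample rate, chunk size and the sendChunk() sink are illustrative assumptions, not the exact values or interfaces used by the application, and the RECORD_AUDIO permission must already be granted.

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

// Records microphone audio and hands it out in small chunks so each
// network packet stays light, as described in the send/receive procedure.
public class ChunkedRecorder {
    private static final int SAMPLE_RATE = 8000;   // assumption: telephony-quality voice
    private static final int CHUNK_BYTES = 1024;   // assumption: packet-sized slices

    public interface ChunkSink { void sendChunk(byte[] chunk, int length); }

    public void record(ChunkSink sink) {
        final int minBuf = AudioRecord.getMinBufferSize(
                SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        final AudioRecord recorder = new AudioRecord(
                MediaRecorder.AudioSource.MIC, SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT,
                Math.max(minBuf, CHUNK_BYTES * 4));
        final byte[] chunk = new byte[CHUNK_BYTES];
        recorder.startRecording();
        try {
            while (!Thread.currentThread().isInterrupted()) {
                int read = recorder.read(chunk, 0, chunk.length);  // blocks until a slice is filled
                if (read > 0) {
                    sink.sendChunk(chunk, read);                   // emit the slice over the medium
                }
            }
        } finally {
            recorder.stop();
            recorder.release();
        }
    }
}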


Connectivity and searching for station.Application is uses hard-coded string/parameter as station which helps other devices running same

application to look for each other, this procedure is called hand-shake. Once the application is

running it will broadcast its signature within Local network, if the same application is running

somewhere else and is in local network both will hand-shake and confirm identity, while they are

validating and connecting application receives other’s device name or station name. Application

makes a list of connected nodes and display it, each node will have their own name displaying so the

user can know where exactly he is talking.
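The application's own code later in this thesis uses Android's network service discovery classes (NsdManager, NsdServiceInfo) for this lookup. The sketch below shows only the advertising side; the service type, service name and port are illustrative assumptions rather than the app's real signature.

import android.content.Context;
import android.net.nsd.NsdManager;
import android.net.nsd.NsdServiceInfo;

// Advertises this station on the local network so that other instances of the
// application can discover it and start the hand-shake described above.
public class StationAdvertiser {
    private static final String SERVICE_TYPE = "_walkietalkie._tcp.";  // assumption

    public void advertise(Context context, String stationName, int port) {
        NsdServiceInfo info = new NsdServiceInfo();
        info.setServiceName(stationName);   // shown in the station list of other devices
        info.setServiceType(SERVICE_TYPE);
        info.setPort(port);                 // the acceptor port opened by this node

        NsdManager nsd = (NsdManager) context.getSystemService(Context.NSD_SERVICE);
        nsd.registerService(info, NsdManager.PROTOCOL_DNS_SD,
                new NsdManager.RegistrationListener() {
                    public void onServiceRegistered(NsdServiceInfo i) { /* now discoverable */ }
                    public void onRegistrationFailed(NsdServiceInfo i, int error) { }
                    public void onServiceUnregistered(NsdServiceInfo i) { }
                    public void onUnregistrationFailed(NsdServiceInfo i, int error) { }
                });
    }
}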

Hand-shake Client vs Hand-Shake Server

The hand-shake procedure is divided into two parts:

Server hand-shake

Client hand-shake

1. Server hand-shake:

Server hand-shake nodes are devices that act as the DHCP server by turning on their hot-spot and connecting other devices through it.

2. Client hand-shake:

Client hand-shake nodes are ordinary devices connected to each other through a centralized medium/DHCP/network; these devices look for other devices within the network and hand-shake with them to learn their names and add them to the user interface.

Software requirement specification

The application does not require any additional library support from Android or any other third-party resource to function; every library the application needs is already part of the application. There are no external API or resource calls. However, the application requires certain Android permissions to work normally; without those permissions the application cannot run.

The permissions required by the application from the Android system are:

o Internet permission

o Wi-Fi permission

o Recording permission

o Change/Read Wi-Fi state permission


Functional requirements

Wi-Fi hardware and API level 21 or above are required and encouraged. Older versions of Android ship on minimal hardware, which can lead to application crashes and device lag. The app may not install on an older version in the first place; if it does install it may not work, and even if it installs and works, the limited hardware means that more device connections will slow down sending/receiving, causing the device and application to lag or crash. The application was tested on various API levels and OS versions of Android; the test results are as follows:

#   OS version   API version   Status
1   2.x.x        8             FAIL
2   3.x.x        12            FAIL
3   4.x.x        18            BUGS
4   5.x.x        21            PASS
5   6.x.x        23            PASS
6   7.x.x        25            PASS

Non-Functional Requirements

Devices should be on the same (local) network; the application is intended to work on local networks only and cannot work online or remotely.

Chapter 3

System Designs

The app has various methods working together. Instead of a database, the app uses shared preferences to store its settings (a small sketch of this follows the module list below). The modules/methods that are part of the app are as follows:

1. Hand-Shake Client

2. Hand-Shake Server

3. Station Information

4. Connectivity

5. Channel

6. Audio player

7. Audio recorder

8. Protocol

9. Session manager

10. State view



11. Walkie Talkie services

12. Switch buttons

13. Main Activity

14. Channel Session

15. Configurations
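As mentioned above, settings are kept in shared preferences rather than a database. The following is a minimal sketch of that idea; the preference file name and key are hypothetical, not the app's actual keys.

import android.content.Context;
import android.content.SharedPreferences;

// Minimal sketch of storing settings in shared preferences instead of a database.
public class Settings {
    private static final String PREFS = "walkie_talkie_prefs";   // hypothetical file name

    public static void saveStationName(Context context, String name) {
        SharedPreferences prefs = context.getSharedPreferences(PREFS, Context.MODE_PRIVATE);
        prefs.edit().putString("station_name", name).apply();    // persisted asynchronously
    }

    public static String loadStationName(Context context) {
        SharedPreferences prefs = context.getSharedPreferences(PREFS, Context.MODE_PRIVATE);
        return prefs.getString("station_name", "Unnamed station");
    }
}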

App has several layouts as follows:

1. Home

a. Connected devices lists

b. Drop down

i. About

ii. Setting

iii. Exit

2. Wi-Fi connectivity

The app ships its images in 4 densities, the app logo in 5 densities, and the status bar logo in 5 densities. The densities used for the application's images and logos are as follows:

1. image

a. HDPI

b. MDPI

c. XHDPI

d. XXHDPI

2. App logo:

a. HDPI

b. MDPI

c. XHDPI

d. XXHDPI

e. LDPI

3. Status bar logo:

a. HDPI

b. MDPI

c. XHDPI

d. XXHDPI

e. LDPI

App permissions are requested through the main activity and validated from the other methods accordingly; whenever a process is about to run, the first step the system takes is to validate the permissions. All of these permissions are declared in the AndroidManifest.xml file.
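A sketch of this validation step is shown below, using the AndroidX compatibility helpers; the original application may call the framework APIs directly, and the request code is arbitrary. On Android 6.0+ (API 23) the recording permission must be granted at run time, while on API 21-22 manifest permissions are granted at install time.

import android.Manifest;
import android.app.Activity;
import android.content.pm.PackageManager;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

// Checks the manifest-declared permissions and asks the user for any that are missing.
public class PermissionCheck {
    private static final int REQUEST_CODE = 42;       // arbitrary request identifier
    private static final String[] REQUIRED = {
            Manifest.permission.INTERNET,
            Manifest.permission.RECORD_AUDIO,
            Manifest.permission.ACCESS_WIFI_STATE,
            Manifest.permission.CHANGE_WIFI_STATE
    };

    public static boolean ensurePermissions(Activity activity) {
        for (String permission : REQUIRED) {
            if (ContextCompat.checkSelfPermission(activity, permission)
                    != PackageManager.PERMISSION_GRANTED) {
                ActivityCompat.requestPermissions(activity, REQUIRED, REQUEST_CODE);
                return false;   // result arrives in onRequestPermissionsResult()
            }
        }
        return true;            // everything already granted
    }
}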


Strings.xml

Android has a feature called string resources, where all of the strings used in the app are declared and defined; whenever a string is needed it is referenced by its name. For example, if I have a sentence saying "this app is developed by Talha Habib", I can define this sentence using XML markup in the strings.xml file in the resource folder structure. Each string element (DOM node) in strings.xml can be given an id, so that the string can be called and reused whenever it is needed later.

XML is a DOM-object based structure in which we can define our own node names and our own attributes on each node. For example, it could look like this:

<class>

<section name="c">

<student name="Talha Habib" roll="1807" id="talha"></student>

</section>

<section name="d">

<student name="Umer Najeeb" roll="1802" id="umer"></student>

</section>

</class>

This XML says the class has two sections, "c" and "d", and each contains a node named "student" carrying a student id, name and other attributes, which could be anything. If we need to know the name of the person with roll number 1807, the "name" attribute gives it to us, so we can keep working through our records. In the same way, strings.xml holds values we can use later; for example, we know that our app name is "Wi-Fi Walkie Talkie", so whenever we need to display it again we do not have to write it out again; if we have given it an id, all we need to do is call that id. XML stands for Extensible Markup Language.

XML nodes are called elements, not tags; in the HTML DOM the nodes are called tags. XML and HTML/DHTML may look similar in syntax, but they work differently and have different scopes.
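As a brief illustration of how a declared string is reused from Java code: the sketch below assumes an entry named app_name exists in res/values/strings.xml; the id and the activity are illustrative only.

// Assumes strings.xml contains:  <string name="app_name">Wi-Fi Walkie Talkie</string>
public class StringUsageExample extends android.app.Activity {
    @Override
    protected void onCreate(android.os.Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        String appName = getString(R.string.app_name);  // looked up by its generated id
        setTitle(appName);                               // reuse the value anywhere it is needed
    }
}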

XML (Extensible Markup Language)

In computing, Extensible Markup Language is a markup language that defines a set of rules for encoding documents in a format that is both human-readable and machine-readable. The W3C's XML 1.0 Specification and several other related specifications, all of them free open standards, define XML.


The design goals of XML emphasize simplicity, generality, and usability across the Internet. It is a

textual data format with strong support via Unicode for different human languages. Although the

design of XML focuses on documents, the language is widely used for the representation of arbitrary

data structures such as those used in web services. Several schema systems exist to aid in the

definition of XML-based languages, while programmers have developed many application

programming interfaces to aid the processing of XML data. Applications of XML, 100 document

formats using XML syntax had been developed, including RSS, Atom, SOAP, and XHTML. XML-

based formats became the default for many office-productivity tools, including Microsoft Office,

OpenOffice.org and LibreOffice, and Apple's iWork.

XML has also provided the base language for communication protocols such as XMPP. Applications

for the Microsoft .NET Framework use XML files for configuration. Apple has an implementation of

a registry based on XML.XML has come into common use for the interchange of data over the

Internet. IETF RFC 7303 gives rules for the construction of Internet Media Types for use when

sending XML. It also defines the media type’s application/xml and text/xml, which say only that the

data is in XML, and nothing about its semantics. The use of text/xml has been criticized as a potential

source of encoding problems and it has been suggested that it should be deprecated. With some

format beyond what XML defines itself. Usually this is either a comma or semi-colon delimited list

or, if the individual values are known not to contain spaces, a space-delimited list can be used. div

class=’inner-greeting-box’>Welcome! < /div>; where the attribute "class" has both the value "inner

greeting-box" and also indicates the two CSS class names "inner" and "greeting-box".


XML declaration

XML documents consist entirely of characters from the Unicode repertoire. Except for a small

number of specifically excluded control characters, any character defined by Unicode may appear

within the content of an XML document. XML includes facilities for identifying the encoding of the

Unicode characters that make up the document, and for expressing characters that, for one reason or

another, cannot be used directly.

Valid characters

Unicode code points in the following ranges are valid in XML 1.0 documents: U+0009, U+000A,

U+000D: these are the only C0 controls accepted in XML 1.0; U+0020–U+D7FF, U+E000–

U+FFFD: this excludes some non-characters in the BMP; U+10000–U+10FFFF: this includes all

code points in supplementary planes, including non-characters. XML 1.1 extends the set of allowed

characters to include all the above, plus the remaining characters in the range U+0001–U+001F. At

the same time, however, it restricts the use of C0 and C1 control characters other than U+0009,

U+000A, U+000D, and U+0085 by requiring them to be written in escaped form. In the case of C1

characters, this restriction is a backwards incompatibility; it was introduced to allow common

encoding errors to be detected. The code point U+0000 is the only character that is not permitted in

any XML 1.0 or 1.1 document.

Encoding detection

The Unicode character set can be encoded into bytes for storage or transmission in a variety of

different ways, called "encodings". Unicode itself defines encodings that cover the entire repertoire;

well-known ones include UTF-8 and UTF-16. There are many other text encodings that predate

Unicode, such as ASCII and ISO/IEC 8859; their character repertoires in almost every case are

subsets of the Unicode character set. XML allows the use of any of the Unicode-defined encodings,

and any other encodings whose characters also appear in Unicode. XML also provides a mechanism

whereby an XML processor can reliably, without any prior knowledge, determine which encoding is

being used. Encodings other than UTF-8 and UTF-16 are not necessarily recognized by every XML

parser.


Hand-Shake Server-Client

In information technology, telecommunications, and related fields, handshaking is an automated

process of negotiation that dynamically sets parameters of a communications channel established

between two entities before normal communication over the channel begins. It follows the physical

establishment of the channel and precedes normal information transfer. The handshaking process

usually takes place in order to establish rules for communication when a computer sets about

communicating with a foreign device. When a computer communicates with another device like a

modem, printer, or network server, it needs to handshake with it to establish a

connection. Handshaking can negotiate parameters that are acceptable to equipment and systems at

both ends of the communication channel, including information transfer rate, coding alphabet, parity,

interrupt procedure, and other protocol or hardware features. Handshaking is a technique of

communication between two entities. However, within TCP/IP RFCs, the term "handshake" is most

commonly used to reference the TCP three-way handshake. For example, the term "handshake" is not

present in RFCs covering FTP or SMTP. A simple handshaking protocol might only involve the receiver sending a message meaning "I received your last message and I am ready for you to send me another one."

(Figure 3.0 Hand-shaking Client – Server)


Client side handshake

public HandshakeClientSession(ARGS) {
    // declarations
    if (pingInterval > 0) {   // ping interval for packet interactions
        m_timerHandler = new TimerHandler();
        timerQueue.schedule(m_timerHandler, pingInterval, TimeUnit.SECONDS);
    }
    try {
        final ByteBuffer handshakeRequest =
                Protocol.HandshakeRequest.create(audioFormat, stationName);
        session.sendData(handshakeRequest);   // send data through handshake request
    } catch (final CharacterCodingException ex) {
        Log.e(LOG_TAG, getLogPrefix() + ex.toString());   // debugging
        session.closeConnection();   // close session
    }
}

Server side handshake

public HandshakeServerSession(ARGS) {
    // declarations
    if (pingInterval > 0) {
        m_timerHandler = new TimerHandler();
        m_timerQueue.schedule(m_timerHandler, pingInterval, TimeUnit.SECONDS);
    }
    Log.i(LOG_TAG, getLogPrefix() + "connection accepted");
}

There are many other kinds of handshaking and several ways to perform it; some of the methods are as follows:

1. TCP three-way handshake

2. WPA/WPA2 four-way handshake


TCP-Three Way Handshaking

The first host (Alice) sends the second host (Bob) a "synchronize" (SYN) message with its own sequence number x, which Bob receives. Bob replies with a synchronize-acknowledgment (SYN-ACK) message with its own sequence number y and acknowledgement number x+1, which Alice receives. Alice replies with an acknowledgment message with acknowledgement number y+1, which Bob receives and to which he does not need to reply. In this setup, the synchronize messages act as service requests from one server to the other, while the acknowledgement messages return to the requesting server to let it know the message was received.

Establishing a normal TCP connection requires three separate steps:

(Figure 4.0 Three-way handshake)

One of the most important aspects of the three-way handshake is that, in order to exchange the starting sequence numbers the two sides plan to use, the client first sends a segment with its own initial sequence number x, then the server responds by sending a segment with its own sequence number y and the acknowledgement number x+1, and finally the client responds by sending a segment with acknowledgement number y+1.

The reason the client and server do not use a default sequence number such as 0 when establishing the connection is to protect against two incarnations of the same connection reusing the same sequence number too soon, which could let a segment from an earlier incarnation of a connection interfere with a later incarnation of the connection.
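From an application's point of view, the TCP three-way handshake is performed by the operating system's TCP stack; in Java it happens implicitly when a socket connection is opened, as in the minimal sketch below. The host address and port are placeholders.

import java.net.InetSocketAddress;
import java.net.Socket;

public class TcpConnectExample {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            // connect() returns only after SYN, SYN-ACK and ACK have been exchanged
            socket.connect(new InetSocketAddress("192.168.1.10", 4321), 3000);
            System.out.println("Handshake complete, connection established");
        }
    }
}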


Hand-shaking can use one of many protocols, including the following:

1. SMTP

2. TLS

3. WPA2 wireless

4. Dial-up access modems

SMTP

The Simple Mail Transfer Protocol (SMTP) is the key Internet standard for email transmission. It

includes handshaking to negotiate authentication, encryption and maximum message size.

(Figure 5.0 SMTP based handshake)

TLS

When a Transport Layer Security (SSL or TLS) connection starts, the record encapsulates a "control" protocol, the handshake messaging protocol. This protocol is used to exchange all the information required by both sides for the exchange of the actual application data by TLS. It defines the format of the messages containing this information and the order of their exchange.

(Figure 6.0 TLS Layout)


These may vary according to the demands of the client and server—i.e., there are several possible

procedures to set up the connection. This initial exchange results in a successful TLS connection

(both parties ready to transfer application data with TLS) or an alert message. The protocol is used to

negotiate the secure attributes of a session.

(Figure 7.0 TLS handshake over SSL)

(Figure 8.0 Simple TLS Handshaking)


WPA2 Wireless

The WPA2 standard for wireless uses a four-way handshake defined in IEEE 802.11i-2004. Wi-Fi

Protected Access (WPA) and Wi-Fi Protected Access II (WPA2) are two security protocols and

security certification programs developed by the Wi-Fi Alliance to secure wireless computer

networks. The Alliance defined these in response to serious weaknesses researchers had found in the

previous system, Wired Equivalent Privacy (WEP). WPA (sometimes referred to as the draft IEEE

802.11i standard) became available in 2003. The Wi-Fi Alliance intended it as an intermediate

measure in anticipation of the availability of the more secure and complex WPA2. WPA2 became

available in 2004 and is a common shorthand for the full IEEE 802.11i (or IEEE 802.11i-2004)

standard. A flaw in a feature added to Wi-Fi, called Wi-Fi Protected Setup, allows WPA and WPA2

security to be bypassed and effectively broken in many situations.

The WPA and WPA2 security protocols implemented without using the Wi-Fi Protected Setup

feature are unaffected by the security vulnerability. The WPA protocol implements much of the IEEE

802.11i standard. Specifically, the Temporal Key Integrity Protocol (TKIP) was adopted for WPA.

WEP used a 64-bit or 128-bit encryption key that must be manually entered on wireless access points

and devices and does not change. TKIP employs a per-packet key, meaning that it dynamically

generates a new 128-bit key for each packet and thus prevents the types of attacks that compromised

WEP.

(Figure 9.0 TCP Four Way Handshake)

WPA also includes a message integrity check, which is designed to prevent an attacker from altering

and resending data packets. This replaces the cyclic redundancy check (CRC) that was used by the

WEP standard. CRC's main flaw was that it did not provide a sufficiently strong data integrity

guarantee for the packets it handled. Well tested message authentication codes existed to solve these

problems, but they required too much computation to be used on old network cards. WPA uses a


message integrity check algorithm called TKIP to verify the integrity of the packets. TKIP is much

stronger than a CRC, but not as strong as the algorithm used in WPA2.

Researchers have since discovered a flaw in WPA that relied on older weaknesses in WEP and the

limitations of Michael to retrieve the keystream from short packets to use for re-injection and

spoofing.

Dial up access modems

One classic example of handshaking is that of dial-up modems, which typically negotiate

communication parameters for a brief period when a connection is first established, and thereafter use

those parameters to provide optimal information transfer over the channel as a function of its quality

and capacity.

(Figure 10.0 Modem/Device/Server connection hand-shaking)

The "squealing" (which is actually a sound that changes in pitch 100 times every second) noises

made by some modems with speaker output immediately after a connection is established are in fact

the sounds of modems at both ends engaging in a handshaking procedure; once the procedure is

completed, the speaker might be silenced, depending on the settings of operating system or the

application controlling the modem.


Server side NDS handshake – receiving packets:

public void onDataReceived(RetainableByteBuffer data) {   // called when packets arrive
    final RetainableByteBuffer msg = m_streamDefragger.getNext(data);   // reassemble the stream
    if (msg == null) {
        /* HandshakeRequest is fragmented; very rare, but it still happens */
    } else if (msg == StreamDefragger.INVALID_HEADER) {   // message header is invalid
        m_session.closeConnection();   // close connection
    } else {   // message is not empty
        if (m_timerHandler != null) {   // cancel the idle timer
            try {
                if (m_timerQueue.cancel(m_timerHandler) != 0) {
                    return;
                }
            } catch (final InterruptedException ex) {   // got interrupted
                Thread.currentThread().interrupt();     // restore the interrupt flag
            }
        }
        // get message ID
        if (messageID == Protocol.HandshakeRequest.ID) {   // verify ID
            final short protocolVersion = Protocol.HandshakeRequest.getProtocolVersion(msg);
            if (protocolVersion == Protocol.VERSION) {
                try {
                    final String audioFormat = Protocol.HandshakeRequest.getAudioFormat(msg);
                    final String stationName = Protocol.HandshakeRequest.getStationName(msg);
                    final AudioPlayer audioPlayer = AudioPlayer.create(args);
                    if (audioPlayer == null) {   // no audio player could be created
                        Log.i(LOG_TAG, getLogPrefix());   // debug case
                        m_session.closeConnection();      // close connection
                    } else {
                        Log.i(LOG_TAG, getLogPrefix() + "handshake ok");   // debug case
                        final ByteBuffer handshakeReply = Protocol.HandshakeReplyOk.create();


                        m_session.sendData(handshakeReply);
                        m_channel.setStationName(m_session, stationName);
                        final ChannelSession channelSession = new ChannelSession(args);
                        m_session.replaceListener(channelSession);
                    }
                } catch (final CharacterCodingException ex) {
                    Log.e(LOG_TAG, getLogPrefix() + ex.toString());
                    m_session.closeConnection();
                }
            } else {
                /* Protocol version is different, cannot continue. */
            }

Client side based NDS handshake – receiving packets:

final String statusText = "Protocol version mismatch";
try {
    final ByteBuffer handshakeReply = Protocol.HandshakeReplyFail.create(statusText);
    m_session.sendData(handshakeReply);
} catch (final CharacterCodingException ex) {
    Log.i(LOG_TAG, ex.toString());
}
m_session.closeConnection();
}
}
else {
    // debug
    m_session.closeConnection();
}
}
}


Chapter 4

Station Information and Connectivity

The app running on each device has its own unique address; even with these unique, ID-like addresses, every running Walkie Talkie application holds the same signature on every node, so that the system can look the nodes up using handshaking and pings.

(Figure 11.0 how ping works)

Let us assume Devices A, B and C are Android devices and 1.1.1.1 is their LAN IP; the /24 at the end is the subnet mask, used to calculate how many nodes are connected within the LAN (a /24 mask leaves 32 - 24 = 8 host bits, i.e. 2^8 - 2 = 254 usable host addresses). Subnet masking is also used to discover other visible devices that can accept pings. In the diagram, Device A sends B a request to find out whether it is online and can respond; if Device B responds, then Device B is online and discoverable. The same goes for Device B to C. The ping basically collects all nodes that can reply, and after the handshake, validation of the station information and a proper signature response, connectivity between the devices is established. A ping sends packets measured in bytes and measures the latency of the reply: the quicker the response, the faster the connection. Latency is measured in milliseconds (1000 milliseconds is 1 second); a normal, recommended latency between two nodes is 20-60 milliseconds. If one device takes longer than about 200 milliseconds there will be slight lag and delayed responses on both ends, because the server has already sent its next packet before the previous one has been collected by the other node, and as a result some packet data goes missing or is corrupted. The user can change the broadcast name (station name), which is displayed to give a proper UI/UX understanding and make the app user friendly. Changing the username/broadcasting person's name has no effect on the station itself, because the station signature is the same on every device and cannot be changed, for connectivity-establishment and security reasons.
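A rough reachability/latency probe in the spirit of Figure 11.0 is sketched below. Note that InetAddress.isReachable() may use ICMP or a TCP echo depending on platform privileges, so the measured time is only an approximation of ping latency; the peer address is a hypothetical LAN address.

import java.net.InetAddress;

public class ReachabilityProbe {
    public static void main(String[] args) throws Exception {
        String peer = "192.168.1.12";                  // hypothetical LAN address of Device B
        InetAddress address = InetAddress.getByName(peer);
        long start = System.nanoTime();
        boolean online = address.isReachable(200);     // 200 ms timeout, as discussed above
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(peer + (online ? " is reachable, " + elapsedMs + " ms" : " did not respond"));
    }
}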


(Figure 12.0 App Setting layout/Station name setting)

This is an in-app preview of the settings dialog/popup. The layout contains one input for the station name, which is basically the node name; the real station signature used for connectivity is hard-coded. The station name is like the name of the person using the application: if someone changes his station name, the other connected devices see the new name in the station list on the main page.

(Figure 13.0 Volume control in setting layout/screen)


Volume control is provided as an alternative control: if the user wishes to use the volume buttons as PTT (push-to-talk), he can manage his volume settings through the settings screen.

(Figure 14.0 Use volume buttons as PTT on settings screen)
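A sketch of treating the volume keys as a push-to-talk button when that setting is enabled is shown below. The startTalking()/stopTalking() calls are placeholders for the app's own record/transmit methods, and the flag would in practice be read from the settings screen.

import android.app.Activity;
import android.view.KeyEvent;

public class PttActivity extends Activity {
    private boolean volumeAsPtt = true;   // would be read from shared preferences

    @Override
    public boolean onKeyDown(int keyCode, KeyEvent event) {
        if (volumeAsPtt && (keyCode == KeyEvent.KEYCODE_VOLUME_UP
                || keyCode == KeyEvent.KEYCODE_VOLUME_DOWN)) {
            startTalking();               // begin recording and emitting voice
            return true;                  // swallow the event so the volume does not change
        }
        return super.onKeyDown(keyCode, event);
    }

    @Override
    public boolean onKeyUp(int keyCode, KeyEvent event) {
        if (volumeAsPtt && (keyCode == KeyEvent.KEYCODE_VOLUME_UP
                || keyCode == KeyEvent.KEYCODE_VOLUME_DOWN)) {
            stopTalking();                // stop recording, release the channel
            return true;
        }
        return super.onKeyUp(keyCode, event);
    }

    private void startTalking() { /* placeholder */ }
    private void stopTalking()  { /* placeholder */ }
}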

The user can also start the background service and have the app check the Wi-Fi status. This is useful when the user has not turned Wi-Fi on and tries to use the app: the application simply pops up a dialog saying that Wi-Fi needs to be turned on in order to use the application, because the main purpose of this app is to run over Wi-Fi. The checkbox control on the settings screen enables an automated service that checks the Wi-Fi status on every start of the application, so the user will not miss any important connectivity by mistake.


(Figure 15.0 Wi-Fi Status check on start)

However, all controls on the settings screen are optional; the user is not required to set them up before using the app. They are simply additional customization and performance-tweaking options for more productivity.

Station information parameters and values:

public StationInfo(String name, String addr, int transmission, long ping) {
    this.name = name;
    this.addr = addr;
    this.transmission = transmission;
    this.ping = ping;
}


Channel

Channels are simply identifiers used to communicate and to check integrity between nodes; they are also used to build a sequenced connection between them and to broadcast packets through it.

(Figure 16.0 Channel)

A channel also identifies the signature and reflects the station in it; connectivity is possible channel to channel. Channels are also a way of keeping sessions and extracting other information such as device state, ping rate, station name and session life span.

Through the channel it is possible to keep a background service that triggers the connection events at a specific interval, so the app stays in contact with the other apps even if the user interface is shut down or the user switches to another application. An activity runs that renews sessions and keeps connectivity alive in the background, gathering newly updated commits and changes such as sent voice, name changes, ping-rate changes and session renewals. These sessions create a cloud of local devices for distributed communication.
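The background-service idea can be sketched as a started Android Service, shown below. The actual app component is referred to as "Walkie Talkie Services" later in this thesis; this class name and its behavior are a simplified stand-in, not the app's real implementation.

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

// Keeps session renewal and connectivity checks running even when the UI is closed.
public class ChannelService extends Service {
    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // Kick off (or resume) session renewal and connectivity checks here.
        return START_STICKY;   // ask Android to recreate the service if it is killed
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;           // started service only; no binding needed in this sketch
    }
}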


Accepting Connection:

private class ChannelAcceptor extends Acceptor {

    public Session.Listener createSessionListener(Session session) {
        Log.i(LOG_TAG, "session accepted");
        m_lock.lock();
        try {
            if (m_stopLatch == null) {
                final SessionInfo sessionInfo = new SessionInfo();
                m_sessions.put(session, sessionInfo);
                return new HandshakeServerSession(args);
            }
        } finally {
            m_lock.unlock();
        }
        return null;
    }
}

When the channel connection is accepted:

public void onAcceptorStarted(Collider collider, int localPort) {
    Log.i(LOG_TAG, m_name + ": acceptor started: " + localPort);
    m_lock.lock();
    try {
        if (m_stopLatch == null) {
            m_localPort = localPort;
        }
        if (m_stateListener != null) {
            updateStateLocked();
        }
        Log.i(LOG_TAG, "register service");
    } finally {
        m_lock.unlock();
    }
}


State listener and exception handling:

private class ChannelConnector extends Connector {

    private final String m_serviceName;

    public ChannelConnector(InetSocketAddress addr, String serviceName) {
        super(addr);
        m_serviceName = serviceName;
    }

    public Session.Listener createSessionListener(Session session) {   // listen for sessions
        m_lock.lock();   // lock when finding another device to prevent

    public void onException(IOException ex) {   // on error
        m_lock.lock();
        try {
            final ServiceInfo serviceInfo = m_serviceInfo.get(m_serviceName);
            if (serviceInfo == null) {
                // if serviceInfo is empty, report the error
            } else {
                if (BuildConfig.DEBUG && ((serviceInfo.connector != this) ||
                        (serviceInfo.session != null))) {
                    throw new AssertionError();
                }
            }
        }
    }


Getting station info:

private StationInfo[] getStationListLocked() {
    if (BuildConfig.DEBUG) {
        if (!m_lock.isHeldByCurrentThread())
            throw new AssertionError();
        if (m_serviceName == null)
            throw new AssertionError();
    }
    else if (m_serviceName == null)
        return new StationInfo[0];

    int sessions = 0;
    for (Map.Entry<String, ServiceInfo> e : m_serviceInfo.entrySet()) {
        if (m_serviceName.compareTo(e.getKey()) > 0) {
            if (e.getValue().stationName != null)
                sessions++;
        }
    }
    for (Map.Entry<Session, SessionInfo> e : m_sessions.entrySet()) {
        if (e.getValue().stationName != null)
            sessions++;
    }

    final StationInfo[] stationInfo = new StationInfo[sessions];
    int idx = 0;
    for (Map.Entry<String, ServiceInfo> e : m_serviceInfo.entrySet()) {
        if (m_serviceName.compareTo(e.getKey()) > 0) {
            if (e.getValue().stationName != null) {
                final ServiceInfo serviceInfo = e.getValue();
                stationInfo[idx++] = new StationInfo(args);
            }
        }
    }
    return stationInfo;
}


Establishing a connection between nodes via service discovery (NSD/DNS-SD):

public void onServiceFound( NsdServiceInfo nsdServiceInfo )
{
    final String serviceName = nsdServiceInfo.getServiceName();
    m_lock.lock();
    try
    {
        if (BuildConfig.DEBUG && (m_stopLatch != null))
            throw new AssertionError();

        ServiceInfo serviceInfo = m_serviceInfo.get( serviceName );
        if (serviceInfo == null)
        {
            serviceInfo = new ServiceInfo();
            m_serviceInfo.put( serviceName, serviceInfo );
        }
        serviceInfo.nsdServiceInfo = nsdServiceInfo;
        serviceInfo.nsdUpdates++;

        if ((m_serviceName != null) && (m_serviceName.compareTo(serviceName) > 0))
        {
            if ((serviceInfo.session == null) && (serviceInfo.connector == null))
            {
                if (m_resolveListener == null)
                {
                    Log.i( LOG_TAG, m_name + ": onServiceFound, resolve: " + nsdServiceInfo );
                    serviceInfo.nsdUpdates = 0;
                    m_resolveListener = new ResolveListener( serviceName );
                    m_nsdManager.resolveService( nsdServiceInfo, m_resolveListener );
                }
                else
                {
                    Log.i( LOG_TAG, m_name + ": onServiceFound: " + nsdServiceInfo );
                }
            }
        }
    }
    finally
    {
        m_lock.unlock();
    }
}


On connection lost:

public void onServiceLost( NsdServiceInfo nsdServiceInfo )
{
    final String serviceName = nsdServiceInfo.getServiceName();
    m_lock.lock();
    try
    {
        final ServiceInfo serviceInfo = m_serviceInfo.get( serviceName );
        if (serviceInfo == null)
        {
            Log.w( LOG_TAG, ": internal error: service not found: " + nsdServiceInfo );
        }
        else if ((m_serviceName != null) && (m_serviceName.compareTo(serviceName) > 0))
        {
            if (((m_resolveListener != null) &&
                 m_resolveListener.getServiceName().equals(serviceName)) ||
                (serviceInfo.connector != null) || (serviceInfo.session != null))
            {
                serviceInfo.nsdServiceInfo = null;
            }
            else
            {
                m_serviceInfo.remove( serviceName );
                final StateListener stateListener = m_stateListener;
                if (stateListener != null)
                    stateListener.onStationListChanged( getStationListLocked() );
            }
        }
        else
        {
            m_serviceInfo.remove( serviceName );
        }
    }
    finally
    {
        m_lock.unlock();
    }
}


Setting Station Name

Setting the station name, generating and retrieving the station name for the generated session, register/unregister handling of sessions, setting ping rates, etc.:

public void setStationName( String serviceName, String stationName )
{
    m_lock.lock();
    try
    {
        final ServiceInfo serviceInfo = m_serviceInfo.get( serviceName );
        if (serviceInfo != null)
        {
            serviceInfo.stationName = stationName;
            serviceInfo.addr = serviceInfo.session.getRemoteAddress().toString();
            serviceInfo.state = 0;
            serviceInfo.ping = 0;
        }
    }
    finally
    {
        m_lock.unlock();
    }
}

Audio Player

The application does not use a physical/external audio player. It is programmed to play audio as soon as it is received from another node, through a system-embedded player; this player has no body (user interface) of its own and is developed programmatically so that it only drives the speaker hardware to play the received voice.


Playing Audio:

public void play( RetainableByteBuffer audioFrame )
{
    final Node node = new Node( audioFrame );
    audioFrame.retain();
    for (;;)
    {
        final Node tail = m_tail;
        if (BuildConfig.DEBUG && (tail != null) && (tail.audioFrame == null))
        {
            audioFrame.release();
            throw new AssertionError();
        }
        if (s_tailUpdater.compareAndSet(this, tail, node))
        {
            if (tail == null)
            {
                m_head = node;
                m_sema.release();
            }
            else
            {
                tail.next = node;
            }
            break;
        }
    }
}


Waiting for further voice frames, and stopping after the current voice broadcast has been played:

public void stopAndWait()
{
    final Node node = new Node( null );
    for (;;)
    {
        final Node tail = m_tail;
        if (BuildConfig.DEBUG && (tail != null) && (tail.audioFrame == null))
            throw new AssertionError();
        if (s_tailUpdater.compareAndSet(this, tail, node))
        {
            if (tail == null)
            {
                m_head = node;
                m_sema.release();
            }
            else
            {
                tail.next = node;
            }
            break;
        }
    }
    try
    {
        m_thread.join();
    }
    catch (final InterruptedException ex)
    {
        Log.e( LOG_TAG, ex.toString() );
    }
}


(Figure 17.0 Playing voice using inner audio player)

Because the application neither locates nor calls the Android stock music player or any other external music player app, it avoids the trouble of finding and matching a compatible player for the app to work, keeps the application size smaller, and removes the extra validation needed to find, allocate and keep an external player on standby.
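For illustration, a minimal playback loop of the kind described above could be built on Android's AudioTrack API. This is only a sketch, assuming 16-bit mono PCM frames are dequeued from the same queue that play() fills; dequeueFrame() is a hypothetical helper, and this is not the project's exact implementation:

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

/* Hypothetical consumer loop: dequeueFrame() stands in for taking the next
 * queued audio frame (null meaning "stop"), matching the queue filled by play(). */
void playbackLoop( int sampleRate )
{
    final int minBufferSize = AudioTrack.getMinBufferSize(
            sampleRate,
            AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT );

    final AudioTrack audioTrack = new AudioTrack(
            AudioManager.STREAM_MUSIC,
            sampleRate,
            AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT,
            minBufferSize,
            AudioTrack.MODE_STREAM );

    audioTrack.play();
    for (;;)
    {
        final byte[] frame = dequeueFrame();   // hypothetical helper
        if (frame == null)
            break;                             // stop requested
        audioTrack.write( frame, 0, frame.length );
    }
    audioTrack.stop();
    audioTrack.release();
}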

Audio Recorder

The audio recorder is triggered when Push-to-Talk is active: while the PTT button is pressed and held, the application records audio, and as soon as the user releases the button the recorded voice is transmitted over the LAN, where the other devices receive it and play it inside the app.

Figure 2.0 (Sending-receiving voice) shows a detailed model of how the PTT button works and the role the recording plays.


Recording voice:

public void startRecording()

{

Log.d( LOG_TAG, "startRecording" );

m_lock.lock();

try

{

if (m_state == IDLE)

{

m_state = START;

m_cond.signal();

}

else if (m_state == STOP)

m_state = RUN;

}

finally

{

m_lock.unlock();

}

}

public void stopRecording()

{

m_lock.lock();

try

{

if (m_state != IDLE)

m_state = STOP;

}

finally

{

m_lock.unlock();

}

}


Initializing AudioRecorder:

public static AudioRecorder create( SessionManager sessionManager, boolean repeat )

{

final int rates [] = { 11025, 16000, 22050, 44100 };

for (int sampleRate : rates)

{

final int channelConfig = AudioFormat.CHANNEL_IN_MONO;

final int minBufferSize = AudioRecord.getMinBufferSize(

sampleRate, channelConfig, AudioFormat.ENCODING_PCM_16BIT );

if ((minBufferSize != AudioRecord.ERROR) &&

(minBufferSize != AudioRecord.ERROR_BAD_VALUE))

{

final int frameSize = (sampleRate * (Short.SIZE / Byte.SIZE) / 2) & (Integer.MAX_VALUE - 1);

int bufferSize = (frameSize * 4);

if (bufferSize < minBufferSize)

bufferSize = minBufferSize;

final AudioRecord audioRecord = new AudioRecord(

MediaRecorder.AudioSource.MIC,

sampleRate,

channelConfig,

AudioFormat.ENCODING_PCM_16BIT,

bufferSize );

final String audioFormat = ("PCM:" + sampleRate);

return new AudioRecorder( sessionManager, audioRecord, audioFormat, frameSize,

bufferSize, repeat );

}

}

return null;

}


Sending voice through the protocol layers and handling the recorder process:

public void run()

{

Log.i( LOG_TAG, "run [" + m_audioFormat + "]: frameSize=" + m_frameSize + "

bufferSize=" + m_bufferSize );

android.os.Process.setThreadPriority( Process.THREAD_PRIORITY_URGENT_AUDIO );

RetainableByteBuffer byteBuffer = m_byteBufferCache.get();

byte [] byteBufferArray = byteBuffer.getNioByteBuffer().array();

int byteBufferArrayOffset = byteBuffer.getNioByteBuffer().arrayOffset();

int frames = 0;

try

{

for (;;)

{

m_lock.lock();

try

{

while (m_state == IDLE)

m_cond.await();

if (m_state == START)

{

m_audioRecord.startRecording();

}

else if (m_state == STOP)

{

m_audioRecord.stop();

m_state = IDLE;

if (m_list != null)

{

int replayedFrames = 0;

for (RetainableByteBuffer msg : m_list)

{

m_audioPlayer.play( msg );

msg.release();


replayedFrames++;

}

m_list.clear();

Log.i( LOG_TAG, "Replayed " + replayedFrames + " frames." );

}

Log.i( LOG_TAG, "Sent " + frames + " frames." );

continue;

}

else if (m_state == SHTDN)

break;

}

finally

{

m_lock.unlock();

}

int position = byteBuffer.position();

if ((byteBuffer.limit() - position) < Protocol.AudioFrame.getMessageSize(m_frameSize))

{

byteBuffer.release();

byteBuffer = m_byteBufferCache.get();

byteBufferArray = byteBuffer.getNioByteBuffer().array();

byteBufferArrayOffset = byteBuffer.getNioByteBuffer().arrayOffset();

position = 0;

if (BuildConfig.DEBUG && (byteBuffer.position() != position))

throw new AssertionError();

}

Protocol.AudioFrame.init( byteBuffer.getNioByteBuffer(), m_frameSize );

if (BuildConfig.DEBUG && (byteBuffer.remaining() < m_frameSize))

throw new AssertionError();

final int bytesReady = m_audioRecord.read(

byteBufferArray, byteBufferArrayOffset+byteBuffer.position(), m_frameSize );

if (bytesReady == m_frameSize)

{

final int limit = position + Protocol.AudioFrame.getMessageSize( m_frameSize );

byteBuffer.position( position );


byteBuffer.limit( limit );

final RetainableByteBuffer msg = byteBuffer.slice();

m_sessionManager.send( msg );

frames++;

if (m_list != null)

{

m_list.add( Protocol.AudioFrame.getAudioData(msg) );

}

msg.release();

byteBuffer.limit( byteBuffer.capacity() );

byteBuffer.position( limit );

}

else

{

Log.e( LOG_TAG, "readSize=" + m_frameSize + " bytesReady=" + bytesReady );

break;

}

}

}

catch (final InterruptedException ex)

{

Log.e( LOG_TAG, ex.toString() );

Thread.currentThread().interrupt();

}

m_audioRecord.stop();

m_audioRecord.release();

byteBuffer.release();

Log.i( LOG_TAG, "run [" + m_audioFormat + "]: done" );

}


Session Manager

The session manager is part of the back-end of the application; its role is to allocate, control, connect and save session flows. It is the main component providing distributed administrative control to view, alter, process and retrieve sessions, and these sessions are used to build the connection path over which communication takes place.

Adding/removing session:

public void addSession( ChannelSession session )

{

m_lock.lock();

try

{

if (BuildConfig.DEBUG && m_sessions.contains(session))

throw new AssertionError();

final HashSet<ChannelSession> sessions = (HashSet<ChannelSession>) m_sessions.clone();

sessions.add( session );

m_sessions = sessions;

}

finally

{

m_lock.unlock();

}

}

public void removeSession( ChannelSession session )

{

m_lock.lock();

try

{

final HashSet<ChannelSession> sessions = (HashSet<ChannelSession>) m_sessions.clone();

final boolean removed = sessions.remove( session );

if (BuildConfig.DEBUG && !removed)

throw new AssertionError();

m_sessions = sessions;

}


finally

{

m_lock.unlock();

}

}

Sending Session broadcast:

public void send( RetainableByteBuffer msg )

{

for (ChannelSession session : m_sessions)

session.sendMessage(msg);

}

State View

The state view is an indicator: a green circle on the left side of a node's entry in the list, showing that that node has sent a broadcast. The state view uses a canvas to draw the circle and highlights or resets it using the drawable attributes of the canvas.

Drawing the state indicator using a canvas:

protected void onDraw( Canvas canvas )

{

super.onDraw( canvas );

if (m_state < m_paint.length)

{

final float cx = (getWidth() / 2);

final float cy = (getHeight() / 2);

final float cr = (cx - cx / 2f);

canvas.drawCircle( cx, cy, cr, m_paint[m_state] );

}

}

public StateView( Context context, AttributeSet attrs )

{

super( context, attrs );

final TypedArray a = context.obtainStyledAttributes(

attrs, new int [] { android.R.attr.minHeight }, android.R.attr.buttonStyle, 0 );

if (a != null)


{

final int minHeight = a.getDimensionPixelSize( 0, -1 );

if (minHeight != -1)

setMinimumHeight( minHeight );

a.recycle();

}

setWillNotDraw( false );

m_paint = new Paint[2];

m_paint[0] = new Paint();

m_paint[0].setColor( Color.DKGRAY );

m_paint[1] = new Paint();

m_paint[1].setColor( Color.GREEN );

}

Indication of state:

void setIndicatorState( int state )

{

if (state < m_paint.length)

{

if (m_state != state)

{

m_state = state;

invalidate();

}

}

else if (BuildConfig.DEBUG)

throw new AssertionError();

}


Walkie Talkie Service

WalkieService is the back-end class that sends and receives packets; it is the main engine responsible for the sending and receiving functionality. It is the container that holds the JS-Collider functionality and all NSD-based handshakes and signature generation.

Performing NSD via JS-Collider, and initializing the JS-Collider process:

private static class ColliderThread extends Thread

{

private final Collider m_collider;

public ColliderThread( Collider collider )

{

super( "ColliderThread" );

m_collider = collider;

}

public void run()

{

Log.i( LOG_TAG, "Collider thread: start" );

m_collider.run();

Log.i( LOG_TAG, "Collider thread: done" );

}

}

Discovering other nearby nodes with the same service/signature (performing NSD):

private class DiscoveryListener implements NsdManager.DiscoveryListener

{

public void onStartDiscoveryFailed( String serviceType, int errorCode )

{

m_lock.lock();

try

{

if (m_cond != null)

m_cond.signal();

}

finally

{


m_lock.unlock();

}

}

public void onStopDiscoveryFailed( String serviceType, int errorCode )

{

Log.e( LOG_TAG, "Stop discovery failed: " + errorCode );

}

public void onDiscoveryStarted( String serviceType )

{

Log.i( LOG_TAG, "Discovery started" );

m_lock.lock();

try

{

if (m_cond == null)

m_discoveryStarted = true;

else

m_nsdManager.stopServiceDiscovery( this );

}

finally

{

m_lock.unlock();

}

}

When a service/node is found:

public void onServiceFound( NsdServiceInfo nsdServiceInfo )

{

try

{

final String[] ss = nsdServiceInfo.getServiceName().split( SERVICE_NAME_SEPARATOR );

final String channelName = new String( Base64.decode( ss[0], 0 ) );

Log.i( LOG_TAG, "onServiceFound: " + channelName + ": " + nsdServiceInfo );

if (channelName.compareTo( SERVICE_NAME ) == 0)

m_channel.onServiceFound( nsdServiceInfo );


}

catch (final IllegalArgumentException ex)

{

Log.w( LOG_TAG, ex.toString() );

}

}

Getting device ID:

private static String getDeviceID( ContentResolver contentResolver )

{

long deviceID = 0;

final String str = Settings.Secure.getString( contentResolver, Settings.Secure.ANDROID_ID );

if (str != null)

{

try

{

final BigInteger bi = new BigInteger( str, 16 );

deviceID = bi.longValue();

}

catch (final NumberFormatException ex)

{

Log.i( LOG_TAG, ex.toString() );

}

}

if (deviceID == 0)

{

/* Let's use random number */

deviceID = new Random().nextLong();

}

final byte [] bb = new byte[Long.SIZE / Byte.SIZE];

for (int idx=(bb.length - 1); idx>=0; idx--)

{

bb[idx] = (byte) (deviceID & 0xFF);

deviceID >>= Byte.SIZE;

}


return Base64.encodeToString( bb, (Base64.NO_PADDING | Base64.NO_WRAP) );

}

Allocating other resources:

public int onStartCommand( Intent intent, int flags, int startId )

{

Log.d( LOG_TAG, "onStartCommand: flags=" + flags + " startId=" + startId );

if (m_audioRecorder == null)

{

final String deviceID = getDeviceID( getContentResolver() );

final SessionManager sessionManager = new SessionManager();

m_audioRecorder = AudioRecorder.create( sessionManager, /*repeat*/false );

if (m_audioRecorder != null)

{

startForeground( 0, null );

final int audioStream = MainActivity.AUDIO_STREAM;

final AudioManager audioManager = (AudioManager) getSystemService( AUDIO_SERVICE );

m_audioPrvVolume = audioManager.getStreamVolume( audioStream );

final String stationName = intent.getStringExtra( MainActivity.KEY_STATION_NAME );

int audioVolume = intent.getIntExtra( MainActivity.KEY_VOLUME, -1 );

if (audioVolume < 0)

audioVolume = audioManager.getStreamMaxVolume( audioStream );

Log.d( LOG_TAG, "setStreamVolume(" + audioStream + ", " + audioVolume + ")" );

audioManager.setStreamVolume( audioStream, audioVolume, 0 );

try

{

m_collider = Collider.create();

m_colliderThread = new ColliderThread( m_collider );

final TimerQueue timerQueue = new TimerQueue( m_collider.getThreadPool() );

m_channel = new Channel(

deviceID,

stationName,

m_audioRecorder.getAudioFormat(),

m_collider,

m_nsdManager,

SERVICE_TYPE,


SERVICE_NAME,

sessionManager,

timerQueue,

Config.PING_INTERVAL );

m_discoveryListener = new DiscoveryListener();

m_nsdManager.discoverServices( SERVICE_TYPE, NsdManager.PROTOCOL_DNS_SD,

m_discoveryListener );

m_colliderThread.start();

}

catch (final IOException ex)

{

Log.w( LOG_TAG, ex.toString() );

}

}

}

return START_REDELIVER_INTENT;

}

Switch Button

SwitchButton.java is the back-end class that secures PTT and makes gesture-based handling operational: pressing the PTT button turns on the recorder, and sliding down moves the main activity list downward to display all connected nodes. It is also responsible for the gesture-pattern-based handling currently operational in the application.

Handling touch events:

public boolean onTouchEvent( MotionEvent ev )

{

final int action = ev.getAction();

switch (action)

{

case MotionEvent.ACTION_DOWN:

if (isEnabled())

{

if (m_state == STATE_IDLE)

{


setPressed( true );

setBackground( m_pressedBackground );

m_state = STATE_DOWN;

m_touchX = ev.getX();

m_touchY = ev.getY();

if (m_stateListener != null)

m_stateListener.onStateChanged( true );

return true;

}

else if (m_state == STATE_LOCKED)

{

m_state = STATE_DOWN;

m_touchX = ev.getX();

m_touchY = ev.getY();

return true;

}

else

{

if (BuildConfig.DEBUG)

throw new AssertionError();

}

}

break;

case MotionEvent.ACTION_MOVE:

{

final float x = ev.getX();

final float y = ev.getY();

final float dx = (x - m_touchX);

final float dy = (y - m_touchY);

switch (m_state)

{

case STATE_IDLE:

break;

case STATE_DOWN:

if ((Math.abs(dx) > m_touchSlop) || (Math.abs(dy) > m_touchSlop))

{

if (Math.abs(dx) > Math.abs(dy))

{

if (dx > 0.0)

{

m_state = STATE_DRAGGING_RIGHT;

Log.d( LOG_TAG, "STATE_DOWN -> STATE_DRAGGING_RIGHT" );

}

else if (dx < 0.0)

{

m_state = STATE_DRAGGING_LEFT;

Log.d( LOG_TAG, "STATE_DOWN -> STATE_DRAGGING_LEFT" );

}

getParent().requestDisallowInterceptTouchEvent( true );

m_touchX = x;

m_touchY = y;

}

}

return true;

case STATE_DRAGGING_RIGHT:

if ((dx > -0.5f) && (Math.abs(dx) > Math.abs(dy)))

{

m_touchX = x;

m_touchY = y;

}

else if (dy >= 0)

{

m_touchX = x;

m_touchY = y;

m_state = STATE_DRAGGING_DOWN;

Log.d( LOG_TAG, "STATE_DRAGGING_RIGHT ->

STATE_DRAGGING_DOWN" );

}

else


{

getParent().requestDisallowInterceptTouchEvent( false );

m_state = STATE_IDLE;

Log.d( LOG_TAG, "STATE_DRAGGING_RIGHT -> STATE_IDLE" );

}

return true;

case STATE_DRAGGING_LEFT:

if ((dx < 0.5f) && (Math.abs(dx) > Math.abs(dy)))

{

m_touchX = x;

m_touchY = y;

}

else if (dy >= 0)

{

m_touchX = x;

m_touchY = y;

m_state = STATE_DRAGGING_DOWN;

Log.d( LOG_TAG, "STATE_DRAGGING_LEFT -> STATE_DRAGGING_DOWN"

);

}

else

{

getParent().requestDisallowInterceptTouchEvent( false );

m_state = STATE_IDLE;

Log.d( LOG_TAG, "STATE_DRAGGING_LEFT -> STATE_IDLE" );

}

return true;

case STATE_DRAGGING_DOWN:

if ((dy > -1.0f) || (Math.abs(dx) < 1.0f))

{

m_touchX = x;

m_touchY = y;

}

else

{


getParent().requestDisallowInterceptTouchEvent( false );

m_state = STATE_IDLE;

Log.d( LOG_TAG, "STATE_DRAGGING_DOWN -> STATE_IDLE" );

}

return true;

}

}

break;

case MotionEvent.ACTION_UP:

case MotionEvent.ACTION_CANCEL:

if (m_state == STATE_DRAGGING_DOWN)

{

/* Keep button pressed */

m_state = STATE_LOCKED;

getParent().requestDisallowInterceptTouchEvent( false );

}

else

{

m_stateListener.onStateChanged( false );

setBackground( m_defaultBackground );

setPressed( false );

if (m_state != STATE_IDLE)

{

m_state = STATE_IDLE;

getParent().requestDisallowInterceptTouchEvent( false );

}

}

break;

}

return super.onTouchEvent( ev );

}

Initializing the drawing functionality and running the canvas drawers:

protected void onDraw( Canvas canvas )

{

super.onDraw( canvas );


if ((m_state == STATE_DOWN) && (m_pl != null) && (m_pr != null))

{

final int width = getWidth();

final int height = getHeight();

canvas.drawCircle( width/2, height/2, height/8, m_paint );

canvas.drawPath( m_pl, m_paint );

canvas.drawPath( m_pr, m_paint );

}

}

Drawing with canvas

protected void onSizeChanged( int width, int height, int oldWidth, int oldHeight )

{

final float centerX = (width / 2);

final float centerY = (height / 2);

final int hh = (height / 8);

int w = (width / hh / 2);

if (w < 14)

{

/* Too small */

m_pl = null;

m_pr = null;

}

else

{

if (w > 20)

w = 20;

m_pl = new Path();

/*1*/ m_pl.moveTo( centerX - hh*2, centerY - hh );

/*2*/ m_pl.lineTo( centerX - hh*(w-4), centerY-hh );

/*3*/ m_pl.lineTo( centerX - hh*(w-4), centerY+hh*2 );

/*4*/ m_pl.lineTo( centerX - hh*(w-2), centerY+hh*2 );

/*5*/ m_pl.lineTo( centerX - hh*(w-5), centerY+hh*4 );

/*6*/ m_pl.lineTo( centerX - hh*(w-8), centerY+hh*2 );

/*7*/ m_pl.lineTo( centerX - hh*(w-6), centerY+hh*2 );


/*8*/ m_pl.lineTo( centerX - hh*(w-6), centerY+hh );

/*9*/ m_pl.lineTo( centerX - hh*2, centerY + hh );

m_pl.close();

m_pr = new Path();

/*1*/ m_pr.moveTo( centerX + hh*2, centerY - hh );

/*2*/ m_pr.lineTo( centerX + hh*(w-4), centerY-hh );

/*3*/ m_pr.lineTo( centerX + hh*(w-4), centerY+hh*2 );

/*4*/ m_pr.lineTo( centerX + hh*(w-2), centerY+hh*2 );

/*5*/ m_pr.lineTo( centerX + hh*(w-5), centerY+hh*4 );

/*6*/ m_pr.lineTo( centerX + hh*(w-8), centerY+hh*2 );

/*7*/ m_pr.lineTo( centerX + hh*(w-6), centerY+hh*2 );

/*8*/ m_pr.lineTo( centerX + hh*(w-6), centerY+hh );

/*9*/ m_pr.lineTo( centerX + hh*2, centerY + hh );

m_pr.close();

}

}

Chapter 5
Main Activity

MainActivity.java is the main screen container for all visible features and screens. Validation on start, UI/UX operations, and the attachment of functionality to objects and layouts are performed inside the MainActivity.java class.

The main activity is displayed as a titled Android activity: it shows the logo on the upper left side with the application name aligned next to it, and in the upper right corner of the title bar there is a menu button, which opens the menu layout/screens and displays the following list:

1. Settings

2. About

3. Exit

MainActivity.java prevents the application from terminating when the user switches apps or the screen turns off; to keep the connection uninterrupted, the app is intended to keep running in the background. Whenever the app is running, a status indicator appears in the status bar, showing the app logo on the left with a title and description for the entry saying "app is running." This way the user will not miss important communication with other nodes.
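A status-bar entry of this kind is typically created with a foreground-service notification. The following is only an illustrative sketch: the notification channel id and the strings are placeholders, it assumes the AndroidX support library, and only R.drawable.ic_status and MainActivity come from the project itself.

import android.app.Notification;
import android.app.PendingIntent;
import android.app.Service;
import android.content.Intent;
import androidx.core.app.NotificationCompat;   // assumes the AndroidX support library

/* Hypothetical helper inside a Service: shows "app is running" in the status bar. */
Notification buildRunningNotification( Service service )
{
    final Intent intent = new Intent( service, MainActivity.class );
    final PendingIntent pendingIntent =
            PendingIntent.getActivity( service, 0, intent, PendingIntent.FLAG_IMMUTABLE );

    return new NotificationCompat.Builder( service, "walkie_channel" )   // placeholder channel id
            .setSmallIcon( R.drawable.ic_status )         // status icon from the res tree
            .setContentTitle( "Android Walkie Talkie" )
            .setContentText( "app is running" )
            .setContentIntent( pendingIntent )
            .setOngoing( true )
            .build();
}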


Registering buttons and allocating the switch button:

private class ButtonTalkListener implements SwitchButton.StateListener

{

public void onStateChanged( boolean state )

{

if (state)

{

if (!m_recording)

{

m_recording = true;

m_audioRecorder.startRecording();

}

}

else

{

if (m_recording)

{

m_recording = false;

m_audioRecorder.stopRecording();

}

}

}

}

Retrieving and generating the list of connected nodes:

private static class ListViewAdapter extends ArrayAdapter<StationInfo>

{

private final LayoutInflater m_inflater;

private final StringBuilder m_stringBuilder;

private StationInfo [] m_stationInfo;

private static class RowViewInfo

{

public final TextView textViewStationName;

public final TextView textViewAddrAndPing;


public final StateView stateView;

public RowViewInfo( TextView textViewStationName, TextView textViewAddrAndPing,

StateView stateView )

{

this.textViewStationName = textViewStationName;

this.textViewAddrAndPing = textViewAddrAndPing;

this.stateView = stateView;

}

}

Start Recording on button press:

public boolean onKeyDown( int keyCode, KeyEvent event )

{

if (m_useVolumeButtonsToTalk)

{

if ((keyCode == KeyEvent.KEYCODE_VOLUME_UP) ||

(keyCode == KeyEvent.KEYCODE_VOLUME_DOWN))

{

if (!m_recording)

{

m_audioRecorder.startRecording();

m_recording = true;

m_buttonTalk.setPressed( true );

}

return true;

}

}

return super.onKeyDown( keyCode, event );

}


1. Settings:

The Settings entry opens a popup dialog containing the station name, a volume-control option, and a "check Wi-Fi status on start" preference.

These settings are stored in Android shared preferences.

Setting station info:

public void setStationInfo( StationInfo [] stationInfo )

{

m_stationInfo = stationInfo;

notifyDataSetChanged();

}

If the user has selected the volume buttons as PTT, recording is stopped when the volume button is released:

public boolean onKeyUp( int keyCode, KeyEvent event )

{

if (m_useVolumeButtonsToTalk)

{

if ((keyCode == KeyEvent.KEYCODE_VOLUME_UP) ||

(keyCode == KeyEvent.KEYCODE_VOLUME_DOWN))

{

if (m_recording)

{

m_audioRecorder.stopRecording();

m_recording = false;

m_buttonTalk.setPressed( false );

}

return true;

}

}

return super.onKeyUp( keyCode, event );

}


2. About:

The About screen is a popup dialog containing a short description of the app, its dependencies and a disclaimer.

Registering the settings dialog listener and making it ready for operation:

private class SettingsDialogClickListener implements DialogInterface.OnClickListener

{

private final EditText m_editTextStationName;

private final SeekBar m_seekBarVolume;

private final CheckBox m_checkBoxCheckWiFiStateOnStart;

private final CheckBox m_switchButtonUseVolumeButtonsToTalk;

public SettingsDialogClickListener(

EditText editTextStationName,

SeekBar seekBarVolume,

CheckBox checkBoxCheckWiFiStateOnStart,

CheckBox switchButtonUseVolumeButtonsToTalk )

{

m_editTextStationName = editTextStationName;

m_seekBarVolume = seekBarVolume;

m_checkBoxCheckWiFiStateOnStart = checkBoxCheckWiFiStateOnStart;

m_switchButtonUseVolumeButtonsToTalk = switchButtonUseVolumeButtonsToTalk;

}

public void onClick( DialogInterface dialog, int which )

{

if (which == DialogInterface.BUTTON_POSITIVE)

{

final String stationName = m_editTextStationName.getText().toString();

final int audioVolume = m_seekBarVolume.getProgress();

final SharedPreferences sharedPreferences = getPreferences(Context.MODE_PRIVATE);

final SharedPreferences.Editor editor = sharedPreferences.edit();

if (m_stationName.compareTo(stationName) != 0)

{

final String title = getString(R.string.app_name) + ": " + stationName;


setTitle(title);

editor.putString( KEY_STATION_NAME, stationName );

m_binder.setStationName( stationName );

m_stationName = stationName;

}

if (audioVolume != m_audioVolume)

{

editor.putString( KEY_VOLUME, Integer.toString(audioVolume) );

final int audioStream = MainActivity.AUDIO_STREAM;

final AudioManager audioManager = (AudioManager) getSystemService( AUDIO_SERVICE );

Log.d(LOG_TAG, "setStreamVolume(" + audioStream + ", " + audioVolume + ")");

audioManager.setStreamVolume(audioStream, audioVolume, 0);

m_audioVolume = audioVolume;

}

final boolean useVolumeButtonsToTalk = m_switchButtonUseVolumeButtonsToTalk.isChecked();


editor.putBoolean(KEY_USE_VOLUME_BUTTONS_TO_TALK, useVolumeButtonsToTalk);

editor.apply();

MainActivity.this.m_useVolumeButtonsToTalk = useVolumeButtonsToTalk;

}

}

3. Exit

The only way to terminate the app from within the application itself is the Exit selection; it is the last entry in the menu list and is responsible for terminating all services, including the application instance itself.

Below the title bar there is the main centered container, which lists all connected nodes. Each list entry has two text headers on the left: the upper one shows the name of the device's station and the lower one shows the channel and session information (address and ping) for that node. On the right there is a grey circle indicating who is speaking, i.e. whose message is being played; if a node has just used the PTT service, the app shows a green indicator for that node and plays its voice. At the bottom there is the PTT button, labelled TALK, which is responsible for all interaction between the sending and receiving units: pressing the PTT button triggers recording, and as soon as the recording is complete and the user releases the button, a second event is triggered that sends the voice, using the Walkie Talkie service, after all channel and switching processing.


Destroying all instances:

public void onDestroy()

{

Log.i( LOG_TAG, "onDestroy" );

super.onDestroy();

}

Channel Session

The ChannelSession class is responsible for renewing and altering the session that is currently interacting with another device.

Handling ping rates:

private void handlePingTimeout()

{

if (m_lastBytesReceived == m_totalBytesReceived)

{

if (++m_pingTimeouts == 10)

{

Log.i( LOG_TAG, getLogPrefix() + "connection timeout, closing connection." );

m_session.closeConnection();

}

}

else

{

m_lastBytesReceived = m_totalBytesReceived;

m_pingTimeouts = 0;

}

Log.v( LOG_TAG, getLogPrefix() + "ping" );

m_pingSendTime = System.currentTimeMillis();

m_session.sendData( Protocol.Ping.create() );

}


Receiving packets from nodes:

public void onDataReceived( RetainableByteBuffer data )

{

final int bytesReceived = data.remaining();

RetainableByteBuffer msg = m_streamDefragger.getNext( data );

while (msg != null)

{

if (msg == StreamDefragger.INVALID_HEADER)

{

Log.i("invalid message received, close connection." );

m_session.closeConnection();

break;

}

else

{

handleMessage( msg );

msg = m_streamDefragger.getNext();

}

}

s_totalBytesReceivedUpdater.addAndGet( this, bytesReceived );

}

Sending session data to other node for validation:

public final int sendMessage( RetainableByteBuffer msg )

{

return m_session.sendData( msg );

}


Handling messages

private void handleMessage( RetainableByteBuffer msg )

{

final short messageID = Protocol.Message.getID( msg );

switch (messageID)

{

case Protocol.AudioFrame.ID:

final RetainableByteBuffer audioFrame = Protocol.AudioFrame.getAudioData( msg );

m_audioPlayer.play( audioFrame );

audioFrame.release();

break;

case Protocol.Ping.ID:

m_session.sendData( Protocol.Pong.create() );

break;

case Protocol.Pong.ID:

final long ping = (System.currentTimeMillis() - m_pingSendTime) / 2;

if (Math.abs(ping - m_ping) > 10)

{

m_ping = ping;

m_channel.setPing( m_serviceName, m_session, ping );

}

break;

case Protocol.StationName.ID:

try

{

final String stationName = Protocol.StationName.getStationName( msg );

if (stationName.length() > 0)

{

if (m_serviceName == null)

m_channel.setStationName( m_session, stationName );

else

m_channel.setStationName( m_serviceName, stationName );

}

}


catch (final CharacterCodingException ex)

{

Log.w( LOG_TAG, ex.toString() );

}

break;

default:

Log.w( LOG_TAG, getLogPrefix() + "unexpected message " + messageID );

break;

}

}

Configuration

The Config class is a set of rules, parameter values and variables containing almost all configuration settings for the system; it holds values such as the session and ping interval, ping rate, station information, and the hard-coded signature.

Configuring ping rates:

class Config

{

public static int PING_INTERVAL = 5;

}
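As an illustration of the kinds of values the Configuration description above mentions, a fuller Config could look like the sketch below. Only PING_INTERVAL comes from the project; the remaining fields are hypothetical examples, not taken from the application:

class Config
{
    /* Ping interval in seconds (present in the project). */
    public static int PING_INTERVAL = 5;

    /* Hypothetical examples of the other configuration values described above. */
    public static int    SESSION_TIMEOUT   = 60;                  // seconds before a silent session is dropped
    public static String SERVICE_SIGNATURE = "walkie-talkie/1.0"; // placeholder for the hard-coded signature
    public static int    PING_TIMEOUTS_MAX = 10;                  // missed pings before closing a connection
}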

Database

The application does not use any database; instead it uses the Android shared-preferences system to store information and settings such as "check Wi-Fi status on start", "use volume buttons as PTT", and the station name.

Setting volume control as PTT

checkBoxUseVolumeButtonsToTalk.setChecked(arg);

Allocating preferences:

final SharedPreferences sharedPreferences = getPreferences( Context.MODE_PRIVATE );
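For completeness, reading and writing these preferences typically looks like the sketch below. The KEY_* constants and member fields reuse the names seen earlier in the MainActivity listings, while the default values are assumptions made for illustration:

import android.content.Context;
import android.content.SharedPreferences;

/* Inside MainActivity: load the stored settings (defaults are illustrative). */
void loadPreferences()
{
    final SharedPreferences prefs = getPreferences( Context.MODE_PRIVATE );
    m_stationName            = prefs.getString( KEY_STATION_NAME, android.os.Build.MODEL );
    m_useVolumeButtonsToTalk = prefs.getBoolean( KEY_USE_VOLUME_BUTTONS_TO_TALK, false );
    final String volume      = prefs.getString( KEY_VOLUME, "-1" );   // stored as a string, as in the dialog code
    m_audioVolume            = Integer.parseInt( volume );
}

/* Persist a changed setting. */
void saveUseVolumeButtons( boolean useVolumeButtons )
{
    final SharedPreferences.Editor editor = getPreferences( Context.MODE_PRIVATE ).edit();
    editor.putBoolean( KEY_USE_VOLUME_BUTTONS_TO_TALK, useVolumeButtons );
    editor.apply();
}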


Protocol

A network medium may carry many protocols. In telecommunications, a communication

protocol is a system of rules that allow two or more entities of a communications system to transmit

information via any kind of variation of a physical quantity. These are the rules or standard that

defines the syntax, semantics and synchronization of communication and possible error recovery

methods. Protocols may be implemented by hardware, software, or a combination of both,

communicating systems use well-defined formats (protocol) for exchanging various messages. Each

message has an exact meaning intended to elicit a response from a range of possible responses pre-

determined for that particular situation. The specified behavior is typically independent of how it is to

be implemented. Communications protocols have to be agreed upon by the parties involved. To reach

agreement, a protocol may be developed into a technical standard. A programming language

describes the same for computations, so there is a close analogy between protocols and programming

languages: protocols are to communications what programming languages are to computations.

Multiple protocols often describe different aspects of a single communication. A group of protocols

designed to work together are known as a protocol suite; when implemented in software they are a

protocol stack.

Most recent protocols are assigned by the IETF for Internet communications, and the IEEE, or the

ISO organizations for other types. The ITU-T handles telecommunications protocols and formats for

the PSTN. As the PSTN and Internet converge, the two sets of standards are also being driven

towards convergence.

Basic Requirements of Protocols

Getting the data across a network is only part of the problem for a protocol. The data received has to

be evaluated in the context of the progress of the conversation, so a protocol has to specify rules

describing the context. These kinds of rules are said to express the syntax of the communications.

Other rules determine whether the data is meaningful for the context in which the exchange takes

place. These kinds of rules are said to express the semantics of the communications.

Messages are sent and received on communicating systems to establish communications. Protocols should therefore specify rules governing the transmission. In general, much of the following should be addressed:

Data formats for data exchange. Digital message bit-strings are exchanged. The bit-strings are divided into fields and each field carries information relevant to the protocol. Conceptually the bit-string is divided into two parts called the header area and the data area. The actual message is stored in the data area, so the header area contains the fields with more relevance to the protocol. Bit-strings longer than the maximum transmission unit (MTU) are divided into pieces of appropriate size.

Address formats for data exchange. Addresses are used to identify both the sender and the intended receiver(s). The addresses are stored in the header area of the bit-strings, allowing the receivers to determine whether the bit-strings are intended for themselves and should be processed or should be ignored. A connection between a sender and a receiver can be identified using an address pair (sender address, receiver address). Usually some address values have special meanings: an all-1s address could be taken to mean an addressing of all stations on the network, so sending to this address would result in a broadcast on the local network. The rules describing the meanings of the address values are collectively called an addressing scheme.

Address mapping. Sometimes protocols need to map addresses of one scheme onto addresses of another scheme, for instance to translate a logical IP address specified by the application to an Ethernet hardware address. This is referred to as address mapping.

Routing. When systems are not directly connected, intermediary systems along the route to the intended receiver(s) need to forward messages on behalf of the sender. On the Internet, the networks are connected using routers. This way of connecting networks is called internetworking.

Detection of transmission errors is necessary on networks which cannot guarantee error-free operation. In a common approach, CRCs of the data area are added to the end of packets, making it possible for the receiver to detect differences caused by errors. The receiver rejects packets with CRC differences and arranges somehow for retransmission. Acknowledgement of correct reception of packets is required for connection-oriented communication; acknowledgements are sent from receivers back to their respective senders.

Loss of information - timeouts and retries. Packets may be lost on the network or suffer long delays. To cope with this, under some protocols, a sender may expect an acknowledgement of correct reception from the receiver within a certain amount of time. On timeouts, the sender must assume the packet was not received and retransmit it. In case of a permanently broken link, the retransmission has no effect, so the number of retransmissions is limited. Exceeding the retry limit is considered an error.

Direction of information flow needs to be addressed if transmissions can only occur in one direction at a time, as on half-duplex links. This is known as media access control. Arrangements have to be made to accommodate the case when two parties want to gain control at the same time.

Sequence control. We have seen that long bit-strings are divided into pieces and then sent on the network individually. The pieces may get lost or delayed or take different routes to their destination on some types of networks. As a result, pieces may arrive out of sequence. Retransmissions can result in duplicate pieces. By marking the pieces with sequence information at the sender, the receiver can determine what was lost or duplicated, ask for the necessary retransmissions and reassemble the original message.

Flow control is needed when the sender transmits faster than the receiver or intermediate network equipment can process the transmissions. Flow control can be implemented by messaging from receiver to sender.
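To make the header/data-area split, sequencing and error detection above concrete, the sketch below encodes a tiny illustrative frame (sequence number, payload length, payload, CRC-32 trailer). The field layout is invented for illustration and is not the application's own Protocol format:

import java.nio.ByteBuffer;
import java.util.zip.CRC32;

final class SimpleFrame
{
    /* Header: 4-byte sequence number + 4-byte payload length.
     * Trailer: 8-byte CRC-32 of the payload (illustrative layout only). */
    static byte[] encode( int sequence, byte[] payload )
    {
        final CRC32 crc = new CRC32();
        crc.update( payload );

        final ByteBuffer buf = ByteBuffer.allocate( 4 + 4 + payload.length + 8 );
        buf.putInt( sequence );          // header: sequence number
        buf.putInt( payload.length );    // header: data length
        buf.put( payload );              // data area
        buf.putLong( crc.getValue() );   // trailer: checksum for error detection
        return buf.array();
    }

    /* Returns the payload, or null if the CRC does not match (the receiver
     * would then arrange for a retransmission, as described above). */
    static byte[] decode( byte[] frame )
    {
        final ByteBuffer buf = ByteBuffer.wrap( frame );
        buf.getInt();                            // sequence number (used for reordering)
        final byte[] payload = new byte[buf.getInt()];
        buf.get( payload );
        final long expected = buf.getLong();

        final CRC32 crc = new CRC32();
        crc.update( payload );
        return (crc.getValue() == expected) ? payload : null;
    }
}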

Chapter 6
Protocols and Programming Languages

Protocols are to communications what algorithms or programming languages are to computations.

This analogy has important consequences for both the design and the development of protocols. One

has to consider the fact that algorithms, programs and protocols are just different ways of describing

expected behavior of interacting objects. A familiar example of a protocolling language is the HTML

language used to describe web pages which are the actual web protocols. In programming languages,

the association of identifiers to a value is termed a definition. Program text is structured using block

constructs and definitions can be local to a block. The localized association of an identifier to a value

established by a definition is termed a binding and the region of program text in which a binding is

effective is known as its scope. The computational state is kept using two components: the

environment, used as a record of identifier bindings, and the store, which is used as a record of the

effects of assignments.

In communications, message values are transferred using transmission media. By analogy, the

equivalent of a store would be a collection of transmission media, instead of a collection of memory

locations. A valid assignment in a protocol (as an analog of programming language) could be

Ethernet: ='message’, meaning a message is to be broadcast on the local Ethernet.

On a transmission medium there can be many receivers. For instance, a mac-address identifies an

ether network card on the transmission medium (the 'ether'). In our imaginary protocol, the

assignment Ethernet[mac-address] := message value could therefore make sense. By extending the

assignment statement of an existing programming language with the semantics described, a

protocolling language could easily be imagined. Operating systems provide reliable communication

and synchronization facilities for communicating objects confined to the same system by means of

system libraries. A programmer using a general-purpose programming language (like C or Ada) can

use the routines in the libraries to implement a protocol, instead of using a dedicated protocolling

language.


Protocol Layering

Protocol layering now forms the basis of protocol design. It allows the decomposition of single,

complex protocols into simpler, cooperating protocols, but it is also a functional decomposition,

because each protocol belongs to a functional class, called a protocol layer. The protocol layers each

solve a distinct class of communication problems. The Internet protocol suite consists of the

following layers: application-, transport-, internet- and network interface-functions. Together, the

layers make up a layering scheme or model.

(Figure 18.0 Protocol Layering without modem)

In computations, we have algorithms and data, and in communications, we have protocols and

messages, so the analog of a data flow diagram would be some kind of message flow diagram. To

visualize protocol layering and protocol suites, a diagram of the message flows in and between two

systems, A and B, is shown in figure 3.

The systems both make use of the same protocol suite. The vertical flows (and protocols) are in

system and the horizontal message flows (and protocols) are between systems. The message flows are

governed by rules, and data formats specified by protocols. The blue lines therefore mark the

boundaries of the (horizontal) protocol layers.


The vertical protocols are not layered because they don't obey the protocol layering principle which

states that a layered protocol is designed so that layer n at the destination receives exactly the same

object sent by layer n at the source. The horizontal protocols are layered protocols and all belong to

the protocol suite. Layered protocols allow the protocol designer to concentrate on one layer at a

time, without worrying about how other layers perform.

The vertical protocols need not be the same protocols on both systems, but they have to satisfy some

minimal assumptions to ensure the protocol layering principle holds for the layered protocols. This

can be achieved using a technique called Encapsulation.

Usually, a message or a stream of data is divided into small pieces, called messages or streams,

packets, IP datagrams or network frames depending on the layer in which the pieces are to be

transmitted. The pieces contain a header area and a data area. The data in the header area identifies

the source and the destination on the network of the packet, the protocol, and other data meaningful

to the protocol like CRC's of the data to be sent, data length, and a timestamp.

The rule enforced by the vertical protocols is that the pieces for transmission are to be encapsulated in

the data area of all lower protocols on the sending side and the reverse is to happen on the receiving

side. The result is that at the lowest level the piece looks like this: 'Header1, Header2, Header3, data'

and in the layer directly above it: 'Header2, Header3, data' and in the top layer: 'Header3, data', both

on the sending and receiving side. This rule therefore ensures that the protocol layering principle

holds and effectively virtualizes all but the lowest transmission lines, so for this reason some message

flows are colored red in figure 3.

To ensure both sides use the same protocol, the pieces also carry data identifying the protocol in their

header.
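As a toy illustration of this nesting (not the thesis application's own message format), each layer in the sketch below simply prepends its header string to whatever the layer above handed down, producing 'Header1,Header2,Header3,data' at the lowest layer:

/* Toy encapsulation: each layer prepends its own header to the payload it
 * receives from the layer above, exactly as described in the text. */
final class LayeringDemo
{
    static String encapsulate( String header, String upperLayerPiece )
    {
        return header + "," + upperLayerPiece;
    }

    static String decapsulate( String piece )
    {
        return piece.substring( piece.indexOf(',') + 1 );   // strip this layer's header
    }

    public static void main( String[] args )
    {
        String piece = "data";
        piece = encapsulate( "Header3", piece );    // top layer
        piece = encapsulate( "Header2", piece );    // middle layer
        piece = encapsulate( "Header1", piece );    // lowest layer
        System.out.println( piece );                // Header1,Header2,Header3,data

        // The receiving side reverses the process, one layer at a time.
        System.out.println( decapsulate( piece ) ); // Header2,Header3,data
    }
}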

The design of the protocol layering and the network (or Internet) architecture are interrelated, so one

cannot be designed without the other. Some of the more important features in this respect of the

Internet architecture and the network services it provides are described next.

The Internet offers universal interconnection, which means that any pair of computers connected to

the Internet is allowed to communicate. Each computer is identified by an address on the Internet. All

the interconnected physical networks appear to the user as a single large network. This

interconnection scheme is called an internetwork or internet.


Conceptually, an Internet address consists of a netid and a hostid. The netid identifies a network

and the hostid identifies a host. The term host is misleading in that an individual computer can have

multiple network interfaces each having its own Internet address. An Internet Address identifies a

connection to the network, not an individual computer. The netid is used by routers to decide where

to send a packet.

Network technology independence is achieved using the low-level address resolution protocol (ARP)

which is used to map Internet addresses to physical addresses. The mapping is called address

resolution. This way physical addresses are only used by the protocols of the network interface layer.

The TCP/IP protocols can make use of almost any underlying communication technology.

(Figure 19.0 Protocol Layering with modem/router)

Physical networks are interconnected by routers. Routers forward packets between interconnected

networks making it possible for hosts to reach hosts on other physical networks. The message flows

between two communicating system A and B in the presence of a router ‘R’ are illustrated in figure 4.

Datagrams are passed from router to router until a router is reached that can deliver the datagram on a

physically attached network (called direct delivery). To decide whether a datagram is to be delivered

directly or is to be sent to a router closer to the destination, a table called the IP routing table is

consulted. The table consists of pairs of netids and the paths to be taken to reach known

networks. The path can be an indication that the datagram should be delivered directly or it can be the


address of a router known to be closer to the destination. A special entry can specify that a default

router is chosen when there are no known paths.
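A toy version of such a routing-table lookup is sketched below; it is only an illustration of the idea (netid keys mapping to a next hop, with "DIRECT" meaning direct delivery and a DEFAULT entry as fallback), not part of the application:

import java.util.HashMap;
import java.util.Map;

/* Toy IP routing table: maps a netid to the next hop ("DIRECT" meaning
 * direct delivery); the DEFAULT entry is used when no netid matches. */
final class ToyRoutingTable
{
    private final Map<String, String> m_routes = new HashMap<>();

    void addRoute( String netid, String nextHop )
    {
        m_routes.put( netid, nextHop );
    }

    String lookup( String netid )
    {
        final String nextHop = m_routes.get( netid );
        return (nextHop != null) ? nextHop : m_routes.get( "DEFAULT" );
    }
}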

All networks are treated equal. A LAN, a WAN or a point-to-point link between two computers are all

considered as one network.

A Connectionless packet delivery (or packet-switched) system (or service) is offered by the Internet,

because it adapts well to different hardware, including best-effort delivery mechanisms like the

Ethernet. Connectionless delivery means that the messages or streams are divided into pieces that are

multiplexed separately on the high speed inter-machine connections allowing the connections to be

used concurrently. Each piece carries information identifying the destination. The delivery of packets

is said to be unreliable, because packets may be lost, duplicated, delayed or delivered out of order

without notice to the sender or receiver. Unreliability arises only when resources are exhausted or

underlying networks fail. The unreliable connectionless delivery system is defined by the Internet

Protocol (IP). The protocol also specifies the routing function, which chooses a path over which data

will be sent. It is also possible to use TCP/IP protocols on connection oriented systems. Connection

oriented systems build up virtual circuits (paths for exclusive use) between senders and receivers.

Once built up the IP datagrams are sent as if they were data through the virtual circuits and forwarded

(as data) to the IP protocol modules. This technique, called tunneling, can be used on X.25 networks

and ATM networks.

A reliable stream transport service using the unreliable connectionless packet delivery service is

defined by the transmission control protocol (TCP). The services are layered as well and the

application programs residing in the layer above it, called the application services, can make use of

TCP. Programs wishing to interact with the packet delivery system itself can do so using the user

datagram protocol (UDP).


Software Layering

Having established the protocol layering and the protocols, the protocol designer can now resume

with the software design. The software has a layered organization and its relationship with protocol

layering is visualized.

(Figure 20.0 Software Layering)

The software modules implementing the protocols are represented by cubes. The information flow

between the modules is represented by arrows. The (top two horizontal) red arrows are virtual. The

blue lines mark the layer boundaries.

To send a message on system A, the top module interacts with the module directly below it and hands

over the message to be encapsulated. This module reacts by encapsulating the message in its own

data area and filling in its header data in accordance with the protocol it implements and interacts

with the module below it by handing over this newly formed message whenever appropriate. The

bottom module directly interacts with the bottom module of system B, so the message is sent across.

On the receiving system B the reverse happens, so ultimately (and assuming there were no

transmission errors or protocol violations etc.) the message gets delivered in its original form to the top module of system B. On protocol errors, a receiving module discards the piece it has received and reports the error condition back to the original source of the piece on the same layer, by handing the error message down or, in the case of the bottom module, by sending it across.


The division of the message or stream of data into pieces and the subsequent reassembly are handled in the layer that introduced the division/reassembly. The reassembly is done at the destination (i.e. not on any intermediate routers). TCP/IP software is organized in four layers.

Application layer. At the highest layer, the services available across a TCP/IP internet are accessed by application programs. The application chooses the style of transport to be used, which can be a sequence of individual messages or a continuous stream of bytes. The application program passes data to the transport layer for delivery.

Transport layer. The transport layer provides communication from one application to another. The

transport layer may regulate flow of information and provide reliable transport, ensuring that data

arrives without error and in sequence. To do so, the receiving side sends back acknowledgments and

the sending side retransmits lost pieces called packets. The stream of data is divided into packets by

the module and each packet is passed along with a destination address to the next layer for

transmission. The layer must accept data from many applications concurrently and therefore also

includes codes in the packet header to identify the sending and receiving application program.

Internet layer. The Internet layer handles the communication between machines. Packets to be sent

are accepted from the transport layer along with an identification of the receiving machine. The

packets are encapsulated in IP datagrams and the datagram headers are filled. A routing algorithm is

used to determine if the datagram should be delivered directly or sent to a router. The datagram is

passed to the appropriate network interface for transmission. Incoming datagrams are checked for

validity and the routing algorithm is used to decide whether the datagram should be processed locally

or forwarded. If the datagram is addressed to the local machine, the datagram header is deleted and

the appropriate transport protocol for the packet is chosen. ICMP error and control messages are

handled as well in this layer.

Network interface layer. The network interface layer is responsible for accepting IP datagrams and

transmitting them over a specific network. A network interface may consist of a device driver or a

complex subsystem that uses its own data link protocol.

Program translation has been divided into four sub-problems: compiler, assembler, link editor, and

loader. As a result, the translation software is layered as well, allowing the software layers to be

designed independently. Noting that the ways to conquer the complexity of program translation could

readily be applied to protocols because of the analogy between programming languages and

protocols, the designers of the TCP/IP protocol suite were keen on imposing the same layering on the

software framework. This can be seen in the TCP/IP layering by considering the translation of a

Pascal program (message) that is compiled (function of the application layer) into an assembler

program that is assembled (function of the transport layer) to object code (pieces) that is linked


(function of the Internet layer) together with library object code (routing table) by the link editor,

producing relocatable machine code (datagram) that is passed to the loader which fills in the memory

locations (Ethernet addresses) to produce executable code (network frame) to be loaded (function of

the network interface layer) into physical memory (transmission medium). To show just how closely

the analogy fits, the terms between parentheses in the previous sentence denote the relevant analogs

and the terms written cursively denote data representations. Program translation forms a linear

sequence, because each layer's output is passed as input to the next layer. Furthermore, the translation

process involves multiple data representations. We see the same thing happening in protocol software

where multiple protocols define the data representations of the data passed between the software

modules

(Figure 21.0 Protocols and software layering working model)

The network interface layer uses physical addresses and all the other layers only use IP addresses.

The boundary between the network interface layer and the Internet layer is called the high-level protocol address boundary. The modules below the application layer are generally considered part of the


operating system. Passing data between these modules is much less expensive than passing data

between an application program and the transport layer. The boundary between application layer and

transport layer is called the operating system boundary.

Chapter 7
Application Structure

SRC

|---- Main

|------------- Java

|------- ORG

|-------- JSL

|---------wfwt

|---------- AudioPlayer.Java

|----------AudioRecorder.Java

|----------Channel.java

|----------ChannelSession.java

|----------Config.java

|----------HandshakeClientSession.java

|----------HandshakeServerSession.java

|----------MainActivity.java

|----------Protocol.java

|----------SessionManager.java

|----------StateView.java

|----------StationInfo.java

|----------SwitchButton.java

|---------WalkieService.java

|------------- Res

|------ Drawable-hdpi

|----- ic_launcher.png

|----- ic_status.png

|------Drawable-lhdpi

|--- ic_launcher.png

|--- ic_status.png

|-----Drawable-mhdpi


|---- ic_launcher.png

|---- ic_status.png

|-----Drawable—xhdpi

|---- ic_launcher.png

|---- ic_status.png

|-----Drawable—xxhdpi

|---- ic_launcher.png

|---- ic_status.png

|-----mipmap—hdpi

|---- ic_launcher.png

|-----mipmap—xhdpi

|---- ic_launcher.png

|-----mipmap—xxhdpi

|---- ic_launcher.png

|-----mipmap—mdpi

|---- ic_launcher.png

|-----mipmap—ldpi

|---- ic_launcher.png

|----layout

|---- dialog_about.xml

|--- dialog_settings.xml

|----dialog_wifi.xml

|--- list_view_row.xml

|--- main.xml

|----menu

|---menu.xml

|----values

|-- strings.xml

|------------- Manifest.xml


Use Case

Node 1 sends a voice message by pressing the PTT button (TALK); after the button is released, the voice is sent over the local area network, and any nearby device running the same application on the same local network will receive Node 1's voice.

Node 2 is another person using the same app and connected to the same network.

Node N can be any number of other devices nearby; the application behaves the same way for each of them.

(Figure 22.0 Use Case)


SDLC

The systems development life cycle (SDLC), also referred to as the application development life-

cycle, is a term used in systems engineering, information systems and software engineering to

describe a process for planning, creating, testing, and deploying an information system. The systems

development life-cycle concept applies to a range of hardware and software configurations, as a

system can be composed of hardware only, software only, or a combination of both.

(Figure 23.0 SDLC concept)

The SDLC contains expanded phases for a detail-oriented procedure, as follows. Preliminary analysis: The

objective of phase 1 is to conduct a preliminary analysis, propose alternative solutions, describe costs

and benefits and submit a preliminary plan with recommendations. Conduct the preliminary analysis:

in this step, you need to find out the organization's objectives and the nature and scope of the problem

under study. Even if a problem refers only to a small segment of the organization itself, you need to

find out what the objectives of the organization itself are. Then you need to see how the problem

being studied fits in with them. Propose alternative solutions: In digging into the organization's

objectives and specific problems, you may have already covered some solutions. Alternate proposals

may come from interviewing employees, clients, suppliers, and/or consultants. You can also study

what competitors are doing. With this data, you will have three choices: leave the system as is,

improve it, or develop a new system. Describe the costs and benefits. Systems analysis, requirements

definition: Defines project goals into defined functions and operation of the intended application. It is

the process of gathering and interpreting facts, diagnosing problems and recommending

Page 89: Thesis

89

improvements to the system. Analyzes end-user information needs and also removes any

inconsistencies and incompleteness in these requirements. A series of steps followed by the developer

are: Collection of Facts: End user requirements are obtained through documentation, client

interviews, observation and questionnaires, Scrutiny of the existing system: Identify pros and cons of

the current system in-place, so as to carry forward the pros and avoid the cons in the new system.

Analyzing the proposed system: Solutions to the shortcomings in step two are found and any

specific user proposals are used to prepare the specifications.

Systems design: Describes desired features and operations in detail, including screen layouts,

business rules, process diagrams, pseudocode and other documentation.

Development: The real code is written here. Integration and testing: Brings all the pieces together

into a special testing environment, then checks for errors, bugs and interoperability. Acceptance,

installation, deployment: The final stage of initial development, where the software is put into

production and runs actual business.

Maintenance: During the maintenance stage of the SDLC, the system is assessed to ensure it does

not become obsolete. This is also where changes are made to initial software. It involves continuous

evaluation of the system in terms of its performance.

Evaluation: Some companies do not view this as an official stage of the SDLC, while others consider

it to be an extension of the maintenance stage, and may be referred to in some circles as post-

implementation review. This is where the system that was developed, as well as the entire process, is

evaluated. Some of the questions that need to be answered include: does the newly implemented

system meet the initial business requirements and objectives? Is the system reliable and fault-

tolerant? Does the system function according to the approved functional requirements? In addition to

evaluating the software that was released, it is important to assess the effectiveness of the

development process. If there are any aspects of the entire process, or certain stages, that

management is not satisfied with, this is the time to improve. Evaluation and assessment is a difficult

issue. However, the company must reflect on the process and address weaknesses.

Disposal: In this phase, plans are developed for discarding system information, hardware and

software in making the transition to a new system. The purpose here is to properly move, archive,

discard or destroy information, hardware and software that is being replaced, in a manner that

prevents any possibility of unauthorized disclosure of sensitive data. The disposal activities ensure

proper migration to a new system. Particular emphasis is given to proper preservation and archival of

data processed by the previous system. All of this should be done in accordance with the

organization's security requirements.


Sequence Diagram

In Royce's original waterfall model, the following phases are followed in order:

System and software requirements: captured in a product requirements document
Analysis: resulting in models, schema and business rules
Design: resulting in the software architecture
Coding/Implementation/Deployment: the development, proving and integration of the software
Testing: the systematic discovery and debugging of defects
Operations/Maintenance/Documenting: the installation, migration, support and maintenance of complete systems

The waterfall model thus maintains that one should move to a phase only when its preceding phase has been reviewed and verified. Various modified waterfall models (including Royce's final model), however, can include slight or major variations on this process, such as returning to the previous cycle after flaws are found downstream, or returning all the way to the design phase if downstream phases are deemed insufficient.

(Figure 24.0 Sequence Design Process - waterfall model)


Entity Relation Diagram

In this application a user is a device/node with attributes such as station name, the configured interval and ping-rate settings, device ID, service type, and the generated channel session. When one user presses PTT, the application broadcasts voice over the LAN to all other nodes; there must be at least one other node, and there can be any number of them. The sending node (the "user" in the diagram) transmits voice, and the other connected devices/users receive it, which is why they are labelled "Audience" in the diagram.

(Figure 25.0 Entity Diagram for Walkie Talkie)
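A hypothetical value class mirroring the attributes named in the entity diagram is sketched below; the field names and types are assumptions for illustration and are not copied from the project's StationInfo.java:

    // Hypothetical record of one station/node as described by the entity diagram.
    class StationRecord {
        final String stationName;     // human-readable name shown to other nodes
        final String deviceId;        // unique identifier of the device
        final String serviceType;     // service type used for discovery on the LAN
        final long pingIntervalMs;    // configured ping/keep-alive interval
        final String channelSession;  // identifier of the generated channel session

        StationRecord(String stationName, String deviceId, String serviceType,
                      long pingIntervalMs, String channelSession) {
            this.stationName = stationName;
            this.deviceId = deviceId;
            this.serviceType = serviceType;
            this.pingIntervalMs = pingIntervalMs;
            this.channelSession = channelSession;
        }
    }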


References:

NDS/DS: https://en.wikipedia.org/wiki/Service_discovery
Handshaking: http://encyclopedia2.thefreedictionary.com/Hand+shake+signal
Handshaking (TLS, RFC 5246): https://tools.ietf.org/html/rfc5246
Handshaking (TCP, RFC 793): https://tools.ietf.org/html/rfc793
XML: https://www.w3.org/XML/
Protocol layering: http://www.cs.cornell.edu/skeshav/book/slides/protocol_layering/protocol_layering.pdf
Protocol layering: https://cseweb.ucsd.edu/classes/fa11/cse123-a/123f11_Lec2.pdf
NIO framework (JS-collider): https://en.wikipedia.org/wiki/Non-blocking_I/O_(Java)
NIO: https://community.oracle.com/docs/DOC-983601
NIO: http://gee.cs.oswego.edu/dl/cpjslides/nio.pdf
Channels: https://docs.oracle.com/javase/8/docs/api/java/nio/channels/Channel.html
Network protocol (TLS): https://en.wikipedia.org/wiki/Transport_Layer_Security