
From The Desk of HOD

Keeping with the motto of "Acquire Knowledge and Grow", and under the patronage of our Director, Dr. R.K. Agarwal, the department has published Vol. 7, No. 1 of its half-yearly journal. The aim of publishing the Journal of Computer Application (JCA) is to inculcate the habit of writing and reading technical papers among faculty and students. Topics and contents have been selected to educate our students on the current state of the art in common applications, in a simple manner and without going into the mathematical details. The length of the articles has deliberately been kept short, both to accommodate as many writers as possible and to hold the interest of readers. My sincere appreciation goes to all the writers, especially the students of the MCA department. Enjoy the reading, and kindly offer your valuable suggestions for improvement in our subsequent issues.

Looking forward to your support.

Prof. S. L. Kapoor


CONTENTS

S. No.  TITLE                                                         Page No.

1.   Pro*C Embedded SQL                                                   5-7
     Dr. B.K. Sharma, Professor, MCA Department, AKGEC, GZB

2.   OrientDB – Use for Handling Graph Database                           8-9
     Snehlata Kaul, Assistant Professor, AKGEC, Ghaziabad

3.   Green Computing                                                    10-12
     Saroj Bala, Assistant Professor, AKGEC, Ghaziabad

4.   Li-Fi (Light Fidelity): The Future Technology in Wireless
     Communication                                                      13-15
     Harnit Saini, Assistant Professor, AKGEC, Ghaziabad

5.   An Efficient Indexing Tree Structure for Multidimensional Data
     on Cloud                                                           16-18
     Mani Dwivedi, Assistant Professor, AKGEC, Ghaziabad

6.   Mobile Cloud Computing Integration: Architecture, Applications,
     and Approaches                                                     19-23
     Anjali Singh, Assistant Professor, AKGEC, Ghaziabad

7.   Towards Developing Reusable Software Components                    24-26
     Aditya Pratap Singh, Assistant Professor, AKGEC, Ghaziabad

8.   Genetic Algorithm Framework for Parallel Computing Environments    27-29
     Ruchi Gupta, Assistant Professor, AKGEC, Ghaziabad

9.   A Survey on Big Data and Mining                                    30-32
     Dheeraj Kumar Singh, Assistant Professor, AKGEC, Ghaziabad

10.  Mobile Cyber Threats                                               33-37
     Arpna Saxena, Assistant Professor, AKGEC, Ghaziabad

11.  Applications of Palm Vein Authentication Technology                38-40
     Indu Verma, Assistant Professor, AKGEC, Ghaziabad

12.  PON Topologies for Dynamic Optical Access Networks                 41-43
     Sanjeev K. Prasad, Assistant Professor, AKGEC, Ghaziabad

13.  An Overview of Semantic Search Systems                             44-46
     Dr. Pooja Arora, Assistant Professor, AKGEC, Ghaziabad

14.  A Soft Introduction to Machine Learning                            47-48
     Anurag Sharma, Student, MCA Department


PRO*C EMBEDDED SQL
Dr. B.K. Sharma
Professor, MCA Department, AKGEC, GZB
Email: [email protected]

Abstract – The Pro*C/C++ precompiler enables you to create applications that access your Oracle database whenever rapid development and compatibility with other systems are your priorities. The Pro*C/C++ programming tool enables you to embed Structured Query Language (SQL) statements in a C or C++ program. The Pro*C/C++ precompiler translates these statements into standard Oracle runtime library calls, then generates a modified source program that you can compile, link, and run in the usual way.

1. INTRODUCTION
Embedded SQL is a process of combining the computing power of a high-level language like C/C++ with the database manipulation capabilities of SQL. It allows you to execute all SQL statements from an application program. Oracle's embedded SQL environment is called Pro*C.

A Pro*C program is compiled in two steps. First, the Pro*C precompiler recognizes the SQL statements embedded in the program and replaces them with appropriate calls to the functions in the SQL runtime library. The output is pure C/C++ code with all the non-SQL portions intact. Then, a regular C/C++ compiler is used to compile the code and produce the executable.

Pro*C Syntax
All SQL statements need to start with EXEC SQL and end with a semicolon ";". The SQL statements can be placed anywhere within a C/C++ block, with the standard restriction that declarative statements must not come after executable statements. As an example:

    {
        int a;
        EXEC SQL SELECT salary INTO :a
                 FROM Employee
                 WHERE SSN = 876543210;
        printf("The salary is %d\n", a);
    }

Preprocessor Directives
The C/C++ preprocessor directives that work with Pro*C are #include and #if. Pro*C does not recognize #define. For example, the following code is invalid:

    #define THE_SSN 876543210
    EXEC SQL SELECT salary INTO :a
             FROM Employee
             WHERE SSN = THE_SSN;

2. HOST VARIABLES

Host variables are the key to the communication between the host program and the database. A host variable expression must resolve to an lvalue (i.e., it can be assigned). You can declare host variables according to C syntax, as you declare regular C variables. The host variable declarations can be placed wherever C variable declarations can be placed. The C datatypes that can be used with Oracle include:

    char
    char[n]
    int
    short
    long
    float
    double
    VARCHAR[n]

VARCHAR[n] is a pseudo-type recognized by the Pro*C precompiler. It is used to represent variable-length strings. The Pro*C precompiler will convert it into a structure with a 2-byte length field and an n-byte character array.
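As a hedged illustration of how the VARCHAR pseudo-type behaves (the emp table, ename column and the expanded struct layout are a sketch based on the examples in this article, not a verbatim precompiler listing):

    VARCHAR ename[21];   /* the precompiler expands this into roughly:
                            struct { unsigned short len;
                                     unsigned char  arr[21]; } ename;  */

    EXEC SQL SELECT ename INTO :ename
             FROM emp WHERE eSSN = 1234;

    /* ename.arr holds ename.len bytes with no '\0' terminator,
       so add one before using C string functions. */
    ename.arr[ename.len] = '\0';
    printf("Name: %s\n", ename.arr);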

2.1 Pointers
You can define pointers using the regular C syntax, and use them in embedded SQL statements. As usual, prefix them with a colon:

    int *x;
    /* ... */
    EXEC SQL SELECT xyz INTO :x FROM ...;

The result of this SELECT statement will be written into *x, not x.

Structures
Structures can be used as host variables, as illustrated in the following example:

    typedef struct {
        char name[21];   /* one greater than the column length, for '\0' */
        int  SSN;
    } Emp;


    /* ... */
    Emp bigshot;
    /* ... */
    EXEC SQL INSERT INTO emp (ename, eSSN)
             VALUES (:bigshot);

2.2 Arrays
Host arrays can be used in the following way:

    int emp_number[50];
    char name[50][11];
    /* ... */
    EXEC SQL INSERT INTO emp (emp_number, name)
             VALUES (:emp_number, :name);

which will insert all 50 tuples in one go. Arrays can only be single-dimensional. The example char name[50][11] would seem to contradict that rule; however, Pro*C actually considers name a one-dimensional array of strings rather than a two-dimensional array of characters. You can also have arrays of structures.

When using arrays to store the results of a query, if the size of the host array (say n) is smaller than the actual number of tuples returned by the query, then only the first n result tuples will be entered into the host array. A sketch of the usual batch-fetch pattern follows.
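A minimal sketch of that pattern, assuming the emp table from the examples above and a hypothetical process() routine supplied by the application; sqlca.sqlerrd[2] is the cumulative count of rows processed, which is how the partially filled last batch is sized:

    int  emp_number[50];
    long total = 0, last_batch;

    EXEC SQL DECLARE c CURSOR FOR SELECT emp_number FROM emp;
    EXEC SQL OPEN c;
    EXEC SQL WHENEVER NOT FOUND DO break;
    for (;;) {
        EXEC SQL FETCH c INTO :emp_number;   /* up to 50 rows per fetch */
        process(emp_number, 50);             /* a full batch arrived    */
        total = sqlca.sqlerrd[2];            /* rows fetched so far     */
    }
    /* End of data: the final partial batch is already in the array. */
    last_batch = sqlca.sqlerrd[2] - total;
    if (last_batch > 0)
        process(emp_number, (int) last_batch);
    EXEC SQL CLOSE c;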

2.3 Indicator Variables
Indicator variables are essentially "NULL flags" attached to host variables. You can associate every host variable with an optional indicator variable. An indicator variable must be defined as a 2-byte integer (using the type short) and, in SQL statements, must be prefixed by a colon and immediately follow its host variable. Or, you may use the keyword INDICATOR between the host variable and the indicator variable. Here is an example:

    short indicator_var;
    EXEC SQL SELECT xyz INTO :host_var:indicator_var
             FROM ...;
    /* ... */
    EXEC SQL INSERT INTO R
             VALUES (:host_var INDICATOR :indicator_var, ...);

You can use indicator variables in the INTO clause of a SELECT statement to detect NULLs or truncated values in the output host variables.
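A short sketch of checking the indicator after a query (table and column names as in the earlier examples): by Oracle's convention, -1 means the column was NULL, 0 means the value came back intact, and a positive value is the original length of a truncated value.

    short sal_ind;
    int   salary;

    EXEC SQL SELECT salary INTO :salary :sal_ind
             FROM Employee WHERE SSN = 876543210;

    if (sal_ind == -1)
        printf("salary is NULL\n");              /* -1: NULL column   */
    else if (sal_ind > 0)
        printf("truncated from length %d\n", sal_ind);
    else
        printf("salary = %d\n", salary);         /* 0: value intact   */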

3. DATATYPE EQUIVALENCING
Oracle recognizes two kinds of datatypes: internal and external. Internal datatypes specify how Oracle stores column values in database tables. External datatypes specify the formats used to store values in input and output host variables. At precompile time, a default Oracle external datatype is assigned to each host variable. Datatype equivalencing allows you to override this default equivalencing and lets you control the way Oracle interprets the input data and formats the output data.

The equivalencing can be done on a variable-by-variable basis using the VAR statement. The syntax is:

    EXEC SQL VAR <host_var> IS <type_name> [ (<length>) ];

For example, suppose you want to select employee names from the emp table, and then pass them to a routine that expects C-style '\0'-terminated strings. You need not explicitly '\0'-terminate the names yourself. Simply equivalence a host variable to the STRING external datatype, as follows:

    char emp_name[21];
    EXEC SQL VAR emp_name IS STRING(21);

The length of the ename column in the emp table is 20 characters, so you allot emp_name 21 characters to accommodate the '\0' terminator. STRING is an Oracle external datatype specifically designed to interface with C-style strings. When you select a value from the ename column into emp_name, Oracle will automatically '\0'-terminate the value for you.

You can also equivalence user-defined datatypes to Oracle external datatypes using the TYPE statement. The syntax is:

    EXEC SQL TYPE <user_type> IS <type_name> [ (<length>) ] [REFERENCE];

You can declare a user-defined type to be a pointer, either explicitly, as a pointer to a scalar or structure, or implicitly, as an array, and then use this type in a TYPE statement. In these cases, you need to use the REFERENCE clause at the end of the statement, as shown below:

    typedef unsigned char *my_raw;
    EXEC SQL TYPE my_raw IS VARRAW(4000) REFERENCE;
    my_raw buffer;
    /* ... */
    buffer = malloc(4004);

Here we allocate more memory than the type length (4000) because the precompiler also returns the length, and may add padding after the length in order to meet the alignment requirement on your system.

4. DYNAMIC SQL
While embedded SQL is fine for fixed applications, sometimes it is important for a program to dynamically create entire SQL statements. With dynamic SQL, a statement stored in a string variable can be issued. PREPARE turns a character string into a SQL statement, and EXECUTE executes that statement. Consider the following example:

    char *s = "INSERT INTO emp VALUES(1234, 'jon', 3)";
    EXEC SQL PREPARE q FROM :s;
    EXEC SQL EXECUTE q;

Alternatively, PREPARE and EXECUTE may be combined into one statement:

    char *s = "INSERT INTO emp VALUES(1234, 'jon', 3)";
    EXEC SQL EXECUTE IMMEDIATE :s;


Transactions
Oracle Pro*C supports transactions as defined by the SQL standard. A transaction is a sequence of SQL statements that Oracle treats as a single unit of work. A transaction begins at your first SQL statement. A transaction ends when you issue EXEC SQL COMMIT (to make permanent any database changes made during the current transaction) or EXEC SQL ROLLBACK (to undo any changes since the current transaction began). After the current transaction ends with your COMMIT or ROLLBACK statement, the next executable SQL statement will automatically begin a new transaction. If your program exits without calling EXEC SQL COMMIT, all database changes will be discarded. A small sketch follows.
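A minimal sketch of transaction control; the statements and the everything_ok flag are illustrative, not from the original article:

    EXEC SQL INSERT INTO emp (eSSN, ename) VALUES (1234, 'jon');
    EXEC SQL UPDATE emp SET ename = 'jonathan' WHERE eSSN = 1234;

    if (everything_ok) {            /* hypothetical application check */
        EXEC SQL COMMIT;            /* make both changes permanent    */
    } else {
        EXEC SQL ROLLBACK;          /* undo everything since the last
                                       COMMIT or ROLLBACK             */
    }
    /* The next executable SQL statement starts a new transaction. */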

Error Handling
After each executable SQL statement, your program can find the status of execution either by explicit checking of the SQLCA, or by implicit checking using the WHENEVER statement. These two ways are covered in detail below.

5. SQLCA
The SQLCA (SQL Communications Area) is used to detect errors and status changes in your program. This structure contains components that are filled in by Oracle at runtime after every executable SQL statement. A sketch of explicit checking follows.
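A hedged sketch of explicit SQLCA checking (the UPDATE statement is illustrative): sqlca.sqlcode is negative on error, zero on success, and +1403 when no rows were found, while sqlca.sqlerrm carries the message text.

    EXEC SQL INCLUDE SQLCA;   /* declares the sqlca structure */

    EXEC SQL UPDATE emp SET ename = 'jon' WHERE eSSN = 1234;

    if (sqlca.sqlcode < 0) {
        /* error: print the message text stored in the SQLCA */
        printf("SQL error %ld: %.*s\n", sqlca.sqlcode,
               (int) sqlca.sqlerrm.sqlerrml, sqlca.sqlerrm.sqlerrmc);
    } else if (sqlca.sqlcode == 1403) {
        printf("no rows matched the WHERE clause\n");
    }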

6. WHENEVER STATEMENT
This statement allows you to do automatic error checking and handling. The syntax is:

    EXEC SQL WHENEVER <condition> <action>;

Oracle automatically checks the SQLCA for <condition>, and if such a condition is detected, your program will automatically perform <action>. <condition> can be any of the following:

    SQLWARNING - sqlwarn[0] is set because Oracle returned a warning
    SQLERROR - sqlcode is negative because Oracle returned an error

    NOT FOUND - sqlcode is positive because Oracle could not find a row that meets your WHERE condition, or a SELECT INTO or FETCH returned no rows

Examples of the WHENEVER statement:

    EXEC SQL WHENEVER SQLWARNING DO print_warning_msg();
    EXEC SQL WHENEVER NOT FOUND GOTO handle_empty;
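Putting the pieces together, here is a minimal error-handling skeleton in the style of Oracle's own samples; sql_error() is a hypothetical handler name, not from the original article:

    EXEC SQL WHENEVER SQLERROR DO sql_error();
    EXEC SQL WHENEVER NOT FOUND GOTO handle_empty;

    EXEC SQL SELECT salary INTO :sal
             FROM Employee WHERE SSN = :ssn;
    printf("salary is %d\n", sal);
    return;

handle_empty:
    printf("no matching employee\n");
    return;

void sql_error(void)                /* hypothetical handler */
{
    /* Disable the handler while cleaning up, to avoid recursion. */
    EXEC SQL WHENEVER SQLERROR CONTINUE;
    printf("SQL error: %.*s\n",
           (int) sqlca.sqlerrm.sqlerrml, sqlca.sqlerrm.sqlerrmc);
    EXEC SQL ROLLBACK;
    exit(1);
}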

7. CONCLUSIONS
Pro*C is a precompiler that enables you to create applications that access your Oracle database. This tool enables you to embed Structured Query Language (SQL) statements in a C or C++ program.

8. REFERENCES
[1] Hector Garcia-Molina, Jeff Ullman, and Jennifer Widom, Database Systems: The Complete Book.
[2] Jeff Ullman and Jennifer Widom, A First Course in Database Systems; Gradiance SQL Tutorial.
[3] https://docs.oracle.com/cd/B28359_01/appdev.111/b28427.pdf
[4] https://docs.oracle.com/cd/E11882_01/appdev.112/e10825.pdf

ABOUT THE AUTHOR
Dr. B. K. Sharma is a Professor and Dean Hostel of Ajay Kumar Garg Engineering College, Ghaziabad. He obtained his MCA degree from JNU, New Delhi, M.Tech. from Guru Gobind Singh Indraprastha University, Delhi, and Ph.D. from Shobhit University, Meerut. His areas of specialization are Software Watermarking, Discrete Mathematics, Theory of Computation and Compiler Design. During his teaching career of more than a decade, he has published many research papers in international and national journals and conferences. He has also published many books for engineering students.


ORIENTDB – USE FOR HANDLING GRAPH DATABASE
Snehlata Kaul
Assistant Professor, MCA Department
Email: [email protected]

ABSTRACT – Data management today is not just about managing text data; it goes well beyond that. At present we need to manage the multimedia, dynamic and swiftly evolving nature of data, which contains text, numbers, audio, video, graphics, images, etc. Through this article I would like to discuss a multi-model database, OrientDB, which supports graph data.

1. INTRODUCTION
In computing, a graph database is a database that uses graph structures for semantic queries, with nodes, edges and properties to represent and store data. To manage this type of data, different graph database products are taking on prominent roles as well.

Neo4j, OrientDB, InfiniteGraph and AllegroGraph are some of the graph databases. Through this article I will try to discuss one of the latest graph databases, OrientDB, a NoSQL database [2].

The aim of this article is to guide the reader through understanding the most interesting features that OrientDB brings to the table out of the box and how, melding them together, this database differs from traditional relational systems and other NoSQL products, be they document DBs like MongoDB or key-value stores like Redis or Memcache.

2. ORIENTDB – LET US START
OrientDB is a graph database written in Java, mainly developed by Luca Garulli, AssetData's CTO. It is a second-generation distributed graph database with the flexibility of documents in one product, released under an open source, commercially friendly license (Apache 2 license), and it overcomes the first-generation graph databases' lack of support for handling Big Data.

OrientDB is incredibly fast: it can store 220,000 records per second on common hardware. Even for a document-based database, the relationships are managed as in graph databases, with direct connections among records. [1]

2.1 Features of OrientDB
Fully transactional: Supports ACID transactions, guaranteeing that all database transactions are processed reliably and that, in the event of a crash, all pending documents are recovered and committed.

Graph structured data model: Native management of graphs. Fully compliant with the Apache TinkerPop Gremlin (previously known as Blueprints) open source graph computing framework. [2]

3. ORIENTDB – WORKING WITH GRAPHS

In graph databases, the database system graphs data into network-like structures consisting of vertices and edges. In the OrientDB graph model, the database represents data through the concept of a property graph, which defines a vertex as an entity linked with other vertices, and an edge as an entity that links two vertices.

OrientDB ships with a generic vertex persistent class, called V, as well as a class for edges, called E.

In effect, the graph model database works on top of the underlying document model. But, in order to simplify this process, OrientDB introduces a new set of commands for managing graphs from the console. [3]

4. ORIENTDB – THE GRAPH MODEL
A graph represents a network-like structure consisting of Vertices (also known as Nodes) interconnected by Edges (also known as Arcs). OrientDB's graph model is represented by the concept of a property graph, which defines the following:

Vertex - an entity that can be linked with other Vertices and has the following mandatory properties:
    - unique identifier
    - set of incoming Edges
    - set of outgoing Edges

Edge - an entity that links two Vertices and has the following mandatory properties:
    - unique identifier
    - link to an incoming Vertex (also known as head)
    - link to an outgoing Vertex (also known as tail)
    - label that defines the type of connection/relationship between head and tail vertex

In addition to mandatory properties, each vertex or edge can also hold a set of custom properties. These properties can be


defined by users, which can make vertices and edges appear similar to documents.

5. ORIENTDB – GRAPH CONSISTENCY
In earlier versions of OrientDB, graph consistency could be assured only by using transactions. The problems with using transactions for simple operations like the creation of edges are:

- Speed: a transaction has a cost in comparison with non-transactional operations.

- Management of optimistic retry at the application level; furthermore, with 'remote' connections this means high latency.

- Low scalability on high concurrency (this will be resolved in OrientDB v3.0, where commits will not lock the database anymore).

OrientDB now provides a new mode to manage graphs without using transactions: either through the Java class OrientGraphNoTx, or via SQL by changing the global setting sql.graphConsistencyMode to one of the following values:

- tx, the default, uses transactions to maintain consistency. This was previously the only available setting.

- notx_sync_repair avoids the use of transactions. Consistency, in case of a JVM crash, is guaranteed through a database repair operation, which runs at startup in synchronous mode. The database cannot be used until the repair is finished.

- notx_async_repair also avoids the use of transactions. Consistency, in case of a JVM crash, is guaranteed through a database repair operation, which runs at startup in asynchronous mode. The database can be used immediately, as the repair procedure runs in the background.

Both the new modes, notx_sync_repair and notx_async_repair, will manage conflicts automatically, with a configurable RETRY (default = 50). In case changes to the graph occur concurrently, any conflicts are caught transparently by OrientDB and the operations are repeated. The operations that support the auto-retry are:
    CREATE EDGE
    DELETE EDGE
    DELETE VERTEX

6. CONCLUSION
OrientDB is fully transactional: it supports ACID transactions, guaranteeing that all database transactions are processed reliably and that, in the event of a crash, all pending documents are recovered and committed. [4]

Its graph-structured data model provides native management of graphs, fully compliant with the Apache TinkerPop Gremlin (previously known as Blueprints) open source graph computing framework.

7. REFERENCES
[1] "Multi-Model Database", OrientDB Manual.
[2] "Popularity ranking of database management systems", db-engines.com, Solid IT. Retrieved 2015-07-04.
[3] Apache Software Foundation, "Apache CouchDB". Retrieved 15 April 2012.
[4] Oracle NoSQL High Availability.

ABOUT THE AUTHOR
Snehlata Kaul is working as an Assistant Professor in Ajay Kumar Garg Engineering College, Ghaziabad, U.P. (India). She obtained her MCA from Dr. B. R. Ambedkar Marathwada University, Aurangabad, Maharashtra, and M.Tech. from KSOU, Mysore. She has more than a decade of teaching experience. Her research areas include multi-agent systems, DBMS, ADBMS and SPM. She has attended several seminars, workshops and conferences at various levels. She has published many papers in national and international journals and conferences.


GREEN COMPUTING
Saroj Bala
Assistant Professor, MCA Department
E-mail: [email protected]

ABSTRACT – During recent years, attention in Green Computing has moved research from energy-saving techniques for home computers to enterprise systems' client and server machines. Saving energy, or reduction of carbon footprints, is one aspect of Green Computing. Research in the direction of Green Computing is about more than just saving energy and reducing carbon footprints. This study provides a brief account of Green Computing. The emphasis of this study is on current trends in Green Computing, challenges in the field of Green Computing, and the future trends of Green Computing.

1. INTRODUCTION
The field of Green Computing encompasses a broad range of subjects, from new energy-generation techniques to the study of advanced materials to be used in our daily life. Green technology focuses on reducing the environmental impact of industrial processes and innovative technologies caused by the Earth's growing population. It has taken upon itself the goal of providing for society's needs in ways that do not damage natural resources. This means creating fully recyclable products, reducing pollution, proposing alternative technologies in various fields, and creating a center of economic activity around technologies that benefit the environment. The huge amount of computing equipment manufactured worldwide has a direct impact on environmental issues, and scientists are conducting numerous studies in order to reduce the negative impact of computing technology on our natural resources. A central point of research is testing and applying alternative non-hazardous materials in the products' manufacturing process.

The article is structured as follows: Section 2 discusses current trends, and Sections 3 and 4 discuss challenges and future trends respectively.

2. CURRENT TRENDS
Current trends of Green Computing are towards efficient utilization of resources. Energy is considered the main resource, and carbon footprints are considered the major threat to the environment. Therefore, the emphasis is on reducing energy utilization and carbon footprints while increasing the performance of computing. There are several areas where researchers are putting in a lot of effort to achieve the desired results:

1. Organizations are realizing that the source and amount of their energy consumption significantly contribute to greenhouse gas emissions. In response to this finding, organizations are currently using the following equation:

       Reduced energy consumption
       = Reduced greenhouse gas emissions
       = Reduced operational costs for the data center

   This means that adopting fewer and more energy-efficient systems, while refactoring application environments to make optimal use of physical resources, is the best architectural model.

2. Based on Gartner estimations, over 133,000 PCs are discarded by U.S. homes and businesses every day, and less than 10 percent of all electronics are currently recycled. The majority of countries around the world require electronics companies to finance and manage recycling programs for their products, especially under-developed countries. Green Computing must take the product life cycle into consideration, from production to operation to recycling. E-waste is a manageable piece of the waste stream, and recycling e-waste is easy to adopt. Recycling computing equipment allows materials such as lead and mercury to be recovered, replacing equipment that otherwise would have been manufactured. The reuse of such equipment saves energy and reduces the environmental impact that electronic waste can cause.

3. Currently much of the emphasis in the Green Computing area is on data centers, as data centers are known for their energy hunger and wasteful energy consumption. With the purpose of reducing energy consumption in data centers, it is worthwhile to concentrate on the following:
   - Information systems: efficient and right-sized information systems for business needs are key to building green data centers.
   - Cooling systems: the cooling system should be designed in such a way that it is expandable as the need for cooling dictates.
   - A standardized environment for equipment is a must for data center air management and cooling systems.
   - Consider initial and future loads when designing and selecting data center electrical system equipment.


4. Virtualization is a trend in Green Computing; vendors offer virtualization software as well as management software for virtualized environments. Virtualization runs fewer systems at higher levels of utilization. Virtualization allows full utilization of computer resources and brings benefits such as:
   - reduction of the total amount of hardware;
   - powering off idle virtual servers to save resources and energy; and
   - reduction in total space, air conditioning and rent requirements, which ultimately reduces cost.

5. Another approach to promote Green Computing and save the environment is to introduce policies all around the world, so that companies design products to receive an eco-label. There are several organizations in the world which support "eco-label" IT products. These organizations provide certificates to IT products based on factors including design for recycling, recycling system, noise, and energy consumption.

3. CHALLENGES
According to researchers, in the past the focus was on computing efficiency, and the costs associated with IT equipment and infrastructure services were considered low, with resources readily available. Now infrastructure is becoming the bottleneck in IT environments, and the reason for this shift is growing computing needs, energy cost and global warming. This shift is a great challenge for the IT industry. Therefore researchers are now focusing on cooling systems, power and data center space. At one extreme it is the processing power that is important to business, and at the other extreme it is the drive and challenge of environment-friendly systems and infrastructure limitations. Green Computing challenges exist not only for IT equipment users but also for IT equipment vendors. Several major vendors have made considerable progress in this area. For example, Hewlett-Packard recently unveiled what it calls "the greenest computer ever", the HP rp5700 desktop PC. The HP rp5700 exceeds U.S. Energy Star 4.0 standards, has an expected life of at least five years, and 90% of its materials are recyclable. Dell is speeding up its programs to reduce hazardous substances in its computers, and its new Dell OptiPlex desktops are 50% more energy-efficient than similar systems manufactured in 2005, thanks to more energy-efficient processors, new power management features, and other related factors. IBM is working on technology to develop cheaper and more efficient solar cells, plus many other solutions to support sustainable IT. According to researchers of Green Computing, the following are a few prominent challenges that Green Computing is facing today:

- Equipment power density / power and cooling capacities;
- Increase in energy requirements for data centers and growing energy cost;
- Control of the increasing requirements for heat-removing equipment, which grow because of the increase in total power consumption by IT equipment;
- Equipment life cycle management, cradle to grave; and
- Disposal of electronic waste.

4. FUTURE TRENDS
The future of Green Computing is going to be based on efficiency, rather than reduction in consumption. The primary focus of Green IT is the organization's self-interest in energy cost reduction, at data centers and at desktops, the result of which is a corresponding reduction in carbon generation. The secondary focus of Green IT needs to look beyond energy, at innovation and improving alignment with overall corporate social responsibility efforts. This secondary focus will demand the development of Green Computing strategies. The idea of sustainability addresses the subject of business value creation while ensuring that long-term environmental resources are not impacted. There are a few efforts which all enterprises are supposed to take care of. In future, certifications together with recommendations and government regulations will put more pressure on vendors to use green technology and reduce impact on the environment. Cloud computing is an energy-efficient technology for ICT, provided that its potential for significant energy savings, which has so far been explored only for hardware aspects, can be fully explored with respect to system operation and networking aspects as well. Cloud computing results in better resource utilization, which is good for the sustainability movement for green technology. Product durability and/or longevity is one of the best approaches towards achieving Green Computing objectives. A long product life allows more utilization of products and puts a control on unnecessary manufacturing of products. It is obvious that government regulations will push product vendors to make more efforts to increase product life.

Power management is proving to be one of the most valuable and clear-cut techniques for decreasing energy consumption in the near future. IT departments with a focus on saving energy can decrease use with a centralized power management tool. One of the areas where Green Computing can grow is sharing and efficiently using the unused resources on idle computers. Leveraging the unused computing power of modern machines to create an environmentally proficient substitute for traditional desktop computing is a cost-effective option. Intelligent compression techniques can be used to compress the data, and eliminating duplicates helps in cutting data storage requirements.

5. CONCLUSION
Technology is an active contributor in achieving the goals of Green Computing. The IT industry is putting effort into all its sectors to achieve Green Computing. Equipment recycling, reduction of paper usage, virtualization, cloud computing, power management and green manufacturing are the key initiatives towards Green Computing. Current challenges to achieving Green Computing are enormous, and the impact is on computing performance. Government regulations are pushing vendors to act green, behave green, do green, go green, think green, use green and, no doubt, to reduce energy consumption as well. All these efforts are still in limited areas; current efforts are mainly to reduce energy consumption and e-waste, but the future of Green Computing will depend on efficiency and green products. Future work in the Green Computing discipline will also rely on research work in academics, since this is an emerging discipline and much more needs to be done. There is a need for more research in this discipline, especially within the academic sector.

REFERENCES
[1] Tariq Rahim Soomro and Muhammad Sarwar, "Green Computing: From Current to Future Trends", International Journal of Social, Behavioral, Educational, Economic, Business and Industrial Engineering, Vol. 6, No. 3, pp. 326-329, 2012.
[2] Pushtikant Malviya, Shailendra Singh, "A Study about Green Computing", International Journal of Advanced Research in Computer Science and Software Engineering, Volume 3, Issue 6, pp. 790-794, June 2013.

ABOUT THE AUTHOR
Saroj Bala is working as an Assistant Professor in Ajay Kumar Garg Engineering College, Ghaziabad, U.P. (India). She obtained her MCA from Punjabi University, Patiala, and B.Sc. from Kurukshetra University, Kurukshetra. She has over 17 years of teaching experience. Her research areas include data clustering, swarm intelligence, image processing and green computing. She has attended several seminars, workshops and conferences at various levels. She has published many papers in national and international journals.


LI-FI (LIGHT FIDELITY): THE FUTURE TECHNOLOGY IN WIRELESS COMMUNICATION
Harnit Saini
Assistant Professor, MCA Department, AKGEC, GZB
E-mail: [email protected]

Abstract – Whether you're using wireless internet in a coffee shop, stealing it from the guy next door, or competing for bandwidth at a conference, you have probably gotten frustrated at the slow speeds you face when more than one device is tapped into the network. As more and more people and their many devices access wireless internet, clogged airwaves are going to make it worse. One German physicist, Harald Haas, has come up with a solution he calls "data through illumination": taking the fiber out of fiber optics by sending data through an LED light bulb that varies in intensity faster than the human eye can follow. It is the same idea behind infrared remote controls, but far more powerful. Haas says his invention, which he calls D-Light, can produce data rates faster than 10 megabits per second, which is speedier than your average broadband connection. He envisions a future where data for laptops, smartphones, and tablets is transmitted through the light in a room. And security would be a snap: if you can't see the light, you can't access the data.

1. INTRODUCTION
In simple terms, Li-Fi can be thought of as light-based Wi-Fi. That is, it uses light instead of radio waves to transmit information. And instead of Wi-Fi modems, Li-Fi would use transceiver-fitted LED lamps that can light a room as well as transmit and receive information. Since simple light bulbs are used, there can technically be any number of access points. This technology uses a part of the electromagnetic spectrum that is still not greatly utilized: the visible spectrum. Light has in fact been very much part of our lives for millions and millions of years and does not have any major ill effect. Moreover, there is 10,000 times more space available in this spectrum, and just counting on the bulbs in use, it also multiplies to 10,000 times more availability as an infrastructure, globally.

It is possible to encode data in the light by varying the rate at which LEDs flicker on and off to give different strings of 1s and 0s. The LED intensity is modulated so rapidly that human eyes cannot notice, so the output appears constant.

More sophisticated techniques could dramatically increase VLC data rates. Teams at the University of Oxford and the University of Edinburgh are focusing on parallel data transmission using arrays of LEDs, where each LED transmits a different data stream. Other groups are using mixtures of red, green and blue LEDs to alter the light's frequency, with each frequency encoding a different data channel.

Li-Fi, as it has been dubbed, has already achieved blisteringly high speeds in the lab. Researchers at the Heinrich Hertz Institute in Berlin, Germany, have reached data rates of over 500 megabits per second using a standard white-light LED. Haas has set up a spin-off firm to sell a consumer VLC transmitter that is due for launch next year. It is capable of transmitting data at 100 Mbps, faster than most UK broadband connections.

2. HOW LI-FI WORKS
Li-Fi is typically implemented using white LED light bulbs at the downlink transmitter. These devices are normally used for illumination only, by applying a constant current. However, by fast and subtle variations of the current, the optical output can be made to vary at extremely high speeds.

This very property is used in the Li-Fi setup. The operational procedure is very simple: if the LED is on, you transmit a digital 1; if it is off, you transmit a 0. The LEDs can be switched on and off very quickly, which gives nice opportunities for transmitting data. Hence, all that is required is some LEDs and a controller that codes data into those LEDs. All one has to do is vary the rate at which the LEDs flicker depending upon the data we want to encode. A toy sketch of this on-off keying idea follows.
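As a toy sketch of that idea in C (set_led() is a hypothetical driver hook, and real Li-Fi hardware modulates the light millions of times per second, far faster than this software loop):

    #include <unistd.h>

    extern void set_led(int on);          /* hypothetical LED driver hook */

    /* Send one byte by on-off keying: LED on = 1, LED off = 0. */
    void transmit_byte(unsigned char b, unsigned bit_us)
    {
        int i;
        for (i = 7; i >= 0; i--) {
            set_led((b >> i) & 1);        /* drive the LED for this bit */
            usleep(bit_us);               /* hold for one bit period    */
        }
    }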

Further enhancements can be made to this method, like using an array of LEDs for parallel data transmission, or using mixtures of red, green and blue LEDs to alter the light's frequency, with each frequency encoding a different data channel. Such advancements promise a theoretical speed of 10 Gbps, meaning one could download a full high-definition film in just 30 seconds.


To further get a grasp of Li-Fi, consider an IR remote. It sends a single data stream of bits at the rate of 10,000-20,000 bps. Now replace the IR LED with a light box containing a large LED array.

3. APPLICATIONS OF LI-FI TECHNOLOGY

3.1 You Might Just Live Longer
For a long time, medical technology has lagged behind the rest of the wireless world. Operating rooms do not allow Wi-Fi over radiation concerns, and there is also that whole lack of dedicated spectrum. While Wi-Fi is in place in many hospitals, interference from cell phones and computers can block signals from monitoring equipment. Li-Fi solves both problems: lights are not only allowed in operating rooms, but tend to be the most glaring (pun intended) fixtures in the room. And, as Haas mentions in his TED Talk, Li-Fi has 10,000 times the spectrum of Wi-Fi, so maybe we can, I dunno, delegate red light to priority medical data. Code Red!

3.2 Airlines
Airline Wi-Fi. Ugh. Nothing says captive audience like having to pay for the "service" of dialup-speed Wi-Fi on the plane. And don't get me started on the pricing.

The best I've heard so far is that passengers will "soon" be offered a "high-speed like" connection on some airlines. United is planning on speeds as high as 9.8 Mbps per plane.

Uh, I have twice that capacity in my living room. And at the same price as checking a bag, I expect it. Li-Fi could easily introduce that sort of speed to each seat's reading light. I'll be the guy WoWing next to you. It's better than listening to you tell me about your wildly successful son, ma'am.

3.3 Smarter Power Plants
Wi-Fi and many other radiation types are bad for sensitive areas, like those surrounding power plants. But power plants need fast, inter-connected data systems to monitor things like demand, grid integrity and (in nuclear plants) core temperature. The savings from proper monitoring at a single power plant can add up to hundreds of thousands of dollars.

Li-Fi could offer safe, abundant connectivity for all areas of these sensitive locations. Not only would this save money relative to currently implemented solutions, but the draw on a power plant's own reserves could be lessened if it hasn't yet converted to LED lighting.

3.4 Undersea Awesomeness
Underwater ROVs, those favourite toys of treasure seekers and James Cameron, operate from large cables that supply their power and allow them to receive signals from their pilots above.

ROVs work great, except when the tether isn't long enough to explore an area, or when it gets stuck on something. If their wires were cut and replaced with light, say from a submerged, high-powered lamp, then they would be much freer to explore. They could also use their headlamps to communicate with each other, processing data autonomously and referring findings periodically back to the surface, all the while obtaining their next batch of orders.

3.5 It Could Keep You Informed and Save Lives
Say there's an earthquake in New York. Or a hurricane. Take your pick, it's a wacky city. The average New Yorker may not know what the protocols are for those kinds of disasters. Until they pass under a street light, that is.

Remember, with Li-Fi, if there's light, you're online. Subway stations and tunnels, common dead zones for most emergency communications, pose no obstruction. Plus, in less stressful times, cities could opt to provide cheap high-speed Web access to every street corner.

4. ADVANTAGES OF LI-FI
- Li-Fi can solve problems related to the insufficiency of radio frequency bandwidth, because this technology uses the visible light spectrum, which has still not been greatly utilized.
- High data transmission rates of up to 10 Gbps can be achieved.
- Since light cannot penetrate walls, it provides privacy and security that Wi-Fi cannot.
- Li-Fi has low implementation and maintenance costs.
- It is safe for humans, since light, unlike radio frequencies, cannot penetrate the human body. Hence, concerns of cell mutation are mitigated.

5. DISADVANTAGES OF LI-FI
- Light can't pass through objects.
- A major challenge facing Li-Fi is how the receiving device will transmit back to the transmitter.
- High installation cost of the VLC systems.
- Interference from external light sources like sunlight, normal bulbs, and opaque materials.

6. CONCLUSION
The possibilities are numerous and can be explored further. If this technology can be put into practical use, every bulb can be used as something like a Wi-Fi hotspot to transmit wireless data, and we will proceed toward a cleaner, greener, safer and brighter future.

The concept of Li-Fi is currently attracting a great deal of interest, not least because it may offer a genuine and very efficient alternative to radio-based wireless. As a growing number of people and their many devices access wireless internet, the airwaves are becoming increasingly clogged, making it more and more difficult to get a reliable, high-speed signal.

This may solve issues such as the shortage of radio-frequency bandwidth, and may also allow internet use where traditional radio-based wireless isn't allowed, such as in aircraft or hospitals. One of the shortcomings, however, is that it only works in direct line of sight.

REFERENCES
[1] seminarprojects.com/s/seminar-report-on-lifi
[2] http://en.wikipedia.org/wiki/Li-Fi
[3] http://teleinfobd.blogspot.in/2012/01/what-is-lifi.html
[4] technopits.blogspot.com; technology.cgap.org/2012/01/11/a-lifi-world/
[5] www.lificonsortium.org/
[6] the-gadgeteer.com/2011/08/29/li-fi-internet-at-the-speed-of-light

ABOUT THE AUTHOR
Harnit Saini did her Bachelor of Computer Applications from Kanya Maha Vidyalaya, Jalandhar, Punjab, affiliated to Guru Nanak Dev University, Amritsar, Punjab, in the year 2002. She did her Master of Computer Applications with honours from Punjab Institute of Management and Technology, Mandi Gobindgarh, Punjab, affiliated to Punjab Technical University, Jalandhar, Punjab, in the year 2005. She did her Master of Technology degree in Computer Science and Engineering from Ajay Kumar Garg Engineering College, Ghaziabad. She attended a national conference on Development of Reliable Information Systems, Techniques and Related Issues during her M.Tech. at Ajay Kumar Garg Engineering College, Ghaziabad in February 2012, and a workshop on Formal Languages, Automata Theory and Computations there in April 2012. She was also in the organizing team of the national conference DRISTI-2012 and the national seminar CYST-2013 held at Ajay Kumar Garg Engineering College, Ghaziabad. She published two papers in international journals during her M.Tech. She is an active member of IEEE. She possesses good moral values and calmness, is ready to face challenges at every moment of life, and counts faith in God as her biggest strength.


AN EFFICIENT INDEXING TREE STRUCTURE FOR MULTIDIMENSIONAL DATA ON CLOUD
Mani Dwivedi
Assistant Professor, AKGEC, Ghaziabad
E-mail: [email protected]

Abstract – Nowadays, cloud computing, which provides storage resources on demand, has become increasingly important. Cloud computing has attracted much attention in industrial and research areas for its advantages such as high availability, high reliability and cost savings to business organizations. However, multidimensional data indexing remains a big challenge for cloud computing, because of the inefficiency in storage and search caused by complicated existing index structures, which greatly limits the scalability of applications and the dimensionality of data to be indexed. A novel index scheme with a simple tree structure for multidimensional data indexing (SDI) in cloud systems has been proposed, which overcomes the root-bottleneck problem existing in most other tree-based multidimensional indexing schemes for cloud data management. Extensive study verifies the superiority of SDI in both search and storage management performance.

1. INTRODUCTION
Cloud computing is the technology used to access remotely stored data through the internet. It protects the data from disasters like earthquakes, tsunamis, cyclones, fire etc. As cloud computing becomes more prevalent, more information is being centralized into the cloud. Data owners are relieved of the burden of data storage and maintenance, and enjoy on-demand, high-quality data service. The customers of the cloud now have to be secured against the cloud service providers themselves, because providers can leak information to prohibited entities or get hacked. Though current cloud systems have achieved great success in file sharing and file management with the help of mature technologies such as keyword-based search and one-dimensional data indexing, extending cloud technologies to applications with more complicated data management tasks is nontrivial. There are several difficulties in implementing multidimensional data indexing and supporting multidimensional complex/similarity queries, which can be either k-nearest-neighbor (KNN) queries or range queries.

Almost every existing cloud computing environment employs a one-dimensional identifier (or ID, for short) space. The only exception protocol, CAN [2], uses a low-dimensional torus as the topology of the ID space. The dimensionality of the ID space usually cannot be matched with the dimensionality of the data to be indexed. Apparently, these distributed-hash-table (DHT) based methods cannot be used directly for our purpose. Another natural choice is to extend the tree-based indexing in centralized databases or traditional distributed databases with a limited number of nodes. Essentially, these methods imitate a multidimensional tree index with an additional ring-based overlay linking all nodes in the tree together.

It can be assumed that a tree-structured overlay network should be used in such a way that additional links are added with care. With appropriately designed algorithms, a simple tree-structured overlay network with fewer links can be more efficient than a complex one in terms of not only network maintenance but also multidimensional data search. In this paper a novel simple tree structure for multidimensional data indexing (SDI) is presented, overcoming the problems of the VBI-tree [1].

2. SDI STRUCTURE
In order to overcome the shortcomings of the VBI-Tree, SDI is put forward; its structure is shown in Fig. 1.

[Fig. 1. SDI Structure]

Each routing node may "link" to five kinds of nodes, if any: one parent, two children, two adjacent routing nodes, neighbor routing nodes and one ancestor node. Compared with the VBI-Tree, SDI defines new ancestor links and different adjacent links, but removes the upside path. If all data nodes, which are leaves, are removed, then a routing tree with only routing nodes is created. By an inorder traversal of this routing tree, we create an adjacent link between any two nodes, as shown in Fig. 1. Given a node x, the nodes immediately prior to and after it, connected by the adjacent link, are the left and right adjacent nodes, respectively. An adjacent link will always connect to one LRN (leaf routing node) at one end. Ancestor links are distributed by a lower-level routing node to its specific higher-level routing node descendants, which are at least two levels higher and lie on the left (right) child branch but at the rightmost (leftmost) positions at each level. An ancestor link brings the ancestor's coverage information to its selected descendants. Any node at level l (l >= 0) will distribute at most 2*(log N - l - 2) links, where N is the number of network nodes. Child height is the child subtree height, which is used to activate the balance algorithm [9]. A data node has no sideways routing tables, ancestor or adjacent links, but only one parent link to an LRN. The modification to the original adjacent link restricts routing to the inside of the routing tree, which makes the tree succinct, and using ancestor links instead of the upside path reduces the updating cost from O(N) to O(log N), which makes the tree speedy. A hypothetical sketch of the per-node state follows.
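To visualize the link set each routing node maintains, here is a hypothetical C struct; the field names and sizes are ours, chosen for illustration, not taken from the SDI paper:

    /* Hypothetical per-node state for an SDI routing node. */
    #define MAX_NEIGHBORS 32   /* illustrative bound on sideways links */

    struct sdi_node {
        struct sdi_node *parent;                  /* one parent            */
        struct sdi_node *child[2];                /* two children          */
        struct sdi_node *adjacent[2];             /* left/right adjacent
                                                     routing nodes         */
        struct sdi_node *neighbor[MAX_NEIGHBORS]; /* neighbor routing
                                                     nodes                 */
        struct sdi_node *ancestor;                /* at most one ancestor
                                                     link, carrying region
                                                     information           */
        int child_height[2];   /* child subtree heights; used to trigger
                                  the balance algorithm                    */
    };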

2.1 Ancestor link distribution
In SDI, ancestor links are distributed to routing nodes at each level evenly, except for the leftmost and rightmost ones. Each link will be maintained by at most two nodes of the same level: the rightmost (leftmost) descendant of the left (right) child branch. All nodes at level no less than 2, except the leftmost and rightmost ones, will have ancestor links. If a node maintains an ancestor link, there is only one. Each node will keep the region information for the linked ancestor, if any.

2.2 Index building
Index building for SDI is the same as in the VBI-Tree [1], which deals with the distribution of data indexing among tree nodes. The ancestor node will cover the descendant node. Node joining or departure causes the data space to be divided or combined. Data addition or deletion causes the coverage space of a node to be expanded or shrunk, which is the same process as in centralized index schemes [3,4].

3. QUERY PROCESSING
According to the modification to the tree structure, new algorithms for query processing have been defined. By visiting ancestor links among neighbors, the query efficiency is still O(log N), but with no root-bottleneck problem or high maintenance cost. A point query is a special case of range query and KNN query, with the query radius set to zero. For simplicity, first we consider the case where no sibling nodes overlap with each other. Ancestor links help to do a discrete data search with fewer hops, which reduces the query processing cost drastically compared with the VBI-Tree. It is ensured that all the nodes which intersect with the query can be visited once and only once. In the query processing algorithm, by using additional parameters, we restrict the search to the nodes or branches which have not been checked. For query processing, we use a parallel query distribution which guarantees the query efficiency to be O(log N).

3.1 Performance of query processing
The VBI-Tree uses the upside path to log the coverage information for all ancestors along the way to the root, so each node has a wider view. So the VBI-Tree beats SDI in query efficiency; however, both of them can resolve a query in O(log N). The most important thing for SDI is that it can resolve the query with much less cost (query messages), and the more skewed the data distribution, the more benefit it gets. SDI beats the VBI-Tree in three cases, especially for the range query and the KNN query. By using ancestor links, SDI resolves discrete data checking with fewer hops, whereas the VBI-Tree can only jump one level at a time. The query cost increases with increasing dimensionality: the bigger the dimensionality, the more the space overlapping. KNN query processing adopts the range query processing algorithm, and SDI wins in query cost.

4. CONCLUSION
Indexing of multidimensional data is an essential problem for bringing cloud technologies into mission-critical data management applications. An enabling technique for this purpose should not only keep search efficiency in a static environment, but also provide availability and robustness without performance sacrifice in large-scale cloud-based systems where nodes may join or leave the system dynamically. SDI, a tree-based overlay network, is introduced, in which each node only maintains carefully selected additional links to ancestor and descendant nodes. It has been shown here that even with fewer additional links, the search algorithms based on SDI still bound the query efficiency by O(log N). The advantage achieved by the simple yet efficient index structure is huge. It eliminates the root-bottleneck problem suffered by most other tree-based overlay networks. Furthermore, since fewer links are maintained, it reduces the cost of both network maintenance and query processing.

5. REFERENCES
[1] H.V. Jagadish, B.C. Ooi, Q.H. Vu, R. Zhang, A. Zhou, "VBI-Tree: A peer-to-peer framework for supporting multi-dimensional indexing schemes", in: ICDE, 2006, pp. 34-43.
[2] S. Ratnasamy, P. Francis, M. Handley, R. Karp, S. Shenker, "A scalable content-addressable network", in: SIGCOMM, 2001, pp. 161-172.
[3] A. Guttman, "R-trees: A dynamic index structure for spatial searching", in: SIGMOD, 1984, pp. 47-57.
[4] P. Ciaccia, M. Patella, P. Zezula, "M-tree: An efficient access method for similarity search in metric spaces", in: VLDB, 1997, pp. 426-435.

ABOUT THE AUTHOR
Mani Dwivedi did her MCA from IIMT Management College, Meerut, affiliated to UP Technical University, Noida, in the year 2008. She also received her M.Tech. (Computer Science and Engineering) degree from UP Technical University, Noida. She is currently working as an Assistant Professor in the MCA department of Ajay Kumar Garg Engineering College, Ghaziabad.


MOBILE CLOUD COMPUTING INTEGRATION: ARCHITECTURE, APPLICATIONS, AND APPROACHES
Anjali Singh
Assistant Professor, MCA Department, AKGEC, GZB
E-mail: [email protected]

Abstract – Together with the explosive growth of mobile applications and the emergence of the cloud computing concept, mobile cloud computing (MCC) has been introduced as a potential technology for mobile services. MCC integrates cloud computing into the mobile environment and overcomes obstacles related to performance (e.g., battery life, storage, and bandwidth), environment (e.g., heterogeneity, scalability, and availability), and security (e.g., reliability and privacy) discussed in mobile computing. This paper gives a survey of MCC, which helps general readers gain an overview of MCC, including the definition, architecture, and applications. The issues, existing solutions, and approaches are presented. In addition, future research directions for MCC are discussed.

1. INTRODUCTION
Mobile devices (e.g., smartphones and tablet PCs) are increasingly becoming an essential part of human life as the most effective and convenient communication tools, not bounded by time and place. Mobile users accumulate rich experience of various services from mobile applications (e.g., iPhone apps and Google apps), which run on the devices and/or on remote servers via wireless networks. The rapid progress of mobile computing (MC) has become a powerful trend in the development of IT technology as well as in the commerce and industry fields. However, mobile devices face many challenges in their resources (e.g., battery life, storage, and bandwidth) and communications (e.g., mobility and security). The limited resources significantly impede the improvement of service qualities. Cloud computing (CC) has been widely recognized as the next-generation computing infrastructure. CC offers some advantages by allowing users to use infrastructure (e.g., servers, networks, and storage), platforms (e.g., middleware services and operating systems), and software (e.g., application programs) provided by cloud providers (e.g., Google, Amazon, and Salesforce) at low cost. In addition, CC enables users to utilize resources elastically, in an on-demand fashion. As a result, mobile applications can be rapidly provisioned and released with minimal management effort or service provider interaction. With the explosion of mobile applications and the support of CC for a variety of services for mobile users, mobile cloud computing (MCC) is introduced as an integration of CC into the mobile environment. MCC brings new types of services and facilities for mobile users to take full advantage of CC.

2. WHAT IS MOBILE CLOUD COMPUTING?
'Mobile cloud computing, at its simplest, refers to an infrastructure where both the data storage and data processing happen outside of the mobile device. Mobile cloud applications move the computing power and data storage away from mobile phones and into the cloud, bringing applications and MC not just to smartphone users but to a much broader range of mobile subscribers.'

MCC is a new paradigm for mobile applications whereby the data processing and storage are moved from the mobile device to powerful, centralized computing platforms located in clouds. These centralized applications are then accessed over the wireless connection based on a thin native client or web browser on the mobile devices. Alternatively, MCC can be defined as a combination of the mobile web and CC, which is the most popular tool for mobile users to access applications and services on the Internet. Briefly, MCC provides mobile users with data processing and storage services in clouds. The mobile devices do not need a powerful configuration (e.g., CPU speed and memory capacity), because all the complicated computing modules can be processed in the clouds.

2.1 ARCHITECTURES OF MOBILE CLOUD COMPUTING

From the concept of MCC, the general architecture of MCC can be shown as in Figure 1. In Figure 1, mobile devices are connected to the mobile networks via base stations (e.g., base transceiver station, access point, or satellite) that establish and control the connections (air links) and functional interfaces between the networks and mobile devices. Mobile users' requests and information (e.g., ID and location) are transmitted to the central processors that are connected to servers providing mobile network services. Here, mobile network operators can provide services to mobile users, such as authentication, authorization, and accounting, based on the home agent and subscribers' data stored in databases. In the cloud, cloud controllers process the requests to provide mobile users with the corresponding cloud services. These services are developed with the concepts of utility computing, virtualization, and service-oriented architecture (e.g., web, application, and database servers). Generally, a CC is a large-scale distributed network system implemented based on a number of servers in data centers. The cloud services are generally classified based on a layer concept: on top of the data center layer, Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) are stacked.

a. Data centers. This layer provides the hardware facility and infrastructure to provide services for customers. Typically, data centers are built in less populated places, with high power supply stability and a low risk of disaster.

b. IaaS. Infrastructure as a Service is built on top of the data center layer. IaaS enables the provision of storage, hardware, servers, and networking components. The client typically pays on a per-use basis. Thus, clients can save cost as the payment is only based on how much resource they really use. Infrastructure can be expanded or shrunk dynamically as needed.

c. PaaS. Platform as a Service offers an advanced integrated environment for building, testing, and deploying custom applications. Examples of PaaS are Google App Engine, Microsoft Azure, and Amazon MapReduce/Simple Storage Service.

d. SaaS. Software as a Service supports the remote delivery of applications; a data storage service can be viewed as part of either IaaS or PaaS. Given this architectural model, users can use the services flexibly and efficiently.

2.2 Advantages of mobile cloud computing
Cloud computing is known to be a promising solution for MC for many reasons (e.g., mobility, communication, and portability). In the following, we describe how the cloud can be used to overcome obstacles in MC, thereby pointing out the advantages of MCC.

Extending battery lifetime. Battery is one of the main concerns for mobile devices. Several solutions have been proposed to enhance CPU performance and to manage the disk and screen in an intelligent manner to reduce power consumption. However, these solutions require changes in the structure of mobile devices, or they require new hardware that results in an increase of cost and may not be feasible for all mobile devices. The computation offloading technique is proposed with the objective of migrating large computations and complex processing from resource-limited devices (i.e., mobile devices) to resourceful machines (i.e., servers in clouds). This avoids a long application execution time on mobile devices, which results in a large amount of power consumption. The effectiveness of offloading techniques has been demonstrated through several experiments, and the results show that remote application execution can save energy significantly. In addition, many mobile applications benefit from task migration and remote processing. For example, offloading a compiler optimization for image processing can reduce the energy consumption of a mobile device by 41%. Also, using memory arithmetic unit and interface (MAUI) to migrate mobile game components to servers in the cloud can save 27% of energy consumption for computer games and 45% for the chess game. A simple offloading decision rule is sketched below.

Improving data storage capacity and processing power. Storage capacity is also a constraint for mobile devices. MCC is developed to enable mobile users to store/access large data on the cloud through wireless networks. A first example is the Amazon Simple Storage Service, which supports file storage services. Another example is Image Exchange, which utilizes the large storage space in clouds for mobile users. This mobile photo-sharing service enables mobile users to upload images to the clouds immediately after capturing them. Users may access all images from any device. With the cloud, users can save a considerable amount of energy and storage space on their mobile devices because all images are sent to and processed on the clouds.
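The offloading trade-off discussed above can be expressed as a simple energy comparison. The following Java sketch, whose power and speed constants are assumed placeholders rather than measured values, offloads only when transmitting the input data is estimated to cost less energy than computing locally:

// Illustrative offloading decision rule: offload when the estimated energy
// spent transmitting the input is lower than executing the job locally.
public class OffloadDecision {
    static final double P_CPU = 0.9;      // W drawn while computing locally (assumed)
    static final double P_TX = 1.3;       // W drawn while transmitting (assumed)
    static final double LOCAL_MIPS = 400; // local processor speed (assumed)

    /** instructions: workload in millions of instructions;
        dataBits: input bits to upload; bandwidth: uplink in bits/s. */
    static boolean shouldOffload(double instructions, double dataBits, double bandwidth) {
        double localEnergy = P_CPU * (instructions / LOCAL_MIPS); // J = W * s
        double offloadEnergy = P_TX * (dataBits / bandwidth);     // cloud time ignored (fast server)
        return offloadEnergy < localEnergy;
    }

    public static void main(String[] args) {
        // Large computation, small input: offloading pays off.
        System.out.println(shouldOffload(40_000, 2_000_000, 1_000_000));
    }
}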

Mobile cloud computing also helps in reducing the running cost of compute-intensive applications that take a long time and a large amount of energy when performed on limited-resource devices. CC can efficiently support various tasks for data warehousing and for managing and synchronizing multiple documents online. For example, clouds can be used for transcoding, playing chess, or broadcasting multimedia services to mobile devices. In these cases, all the complex calculations, such as transcoding or computing an optimal chess move, that take a long time when performed on mobile devices will be processed efficiently on the cloud. Mobile applications also are not constrained by the storage capacity of the devices because their data is now stored on the cloud.

Improving reliability. Storing data or running applications on clouds is an effective way to improve reliability because the data and applications are stored and backed up on a number of computers. This reduces the chance of data and application loss on the mobile devices. In addition, MCC can be designed as a comprehensive data security model for both service providers and users. For example, the cloud can be used to protect copyrighted digital contents (e.g., video clips and music) from abuse and unauthorized distribution. Also, the cloud can remotely provide mobile users with security services such as virus scanning, malicious code detection, and authentication. Such cloud-based security services can also make efficient use of the records collected from different users to improve the effectiveness of the services.

Figure 1. Mobile cloud computing architecture.

In addition, MCC also inherits some advantages of clouds for mobile services as follows:

Dynamic provisioning. Dynamic on-demand provisioning of resources on a fine-grained, self-service basis is a flexible way for service providers and mobile users to run their applications without advance reservation of resources.

Scalability. The deployment of mobile applications can be performed and scaled to meet unpredictable user demands thanks to flexible resource provisioning. Service providers can easily add and expand an application and service with little or no constraint on resource usage.

Multitenancy. Service providers (e.g., network operator and data center owner) can share the resources and costs to support a variety of applications and a large number of users.

Ease of integration. Multiple services from different service providers can be integrated easily through the cloud and the Internet to meet user demand.

3. APPLICATIONS OF MOBILE CLOUD COMPUTING
Mobile applications gain an increasing share in the global mobile market. Various mobile applications have taken advantage of MCC. In this section, some typical MCC applications are introduced.

3.1. Mobile commerce
Mobile commerce (m-commerce) is a business model for commerce using mobile devices. The m-commerce applications generally fulfill tasks that require mobility (e.g., mobile transactions and payments, mobile messaging, and mobile ticketing). The m-commerce applications can be classified into a few classes, including finance, advertising, and shopping. The m-commerce applications have to face various challenges (e.g., low network bandwidth, high complexity of mobile device configurations, and security). Therefore, m-commerce applications are integrated into the CC environment to address these issues. Yang et al. propose a 3G e-commerce platform based on CC. This paradigm combines the advantages of both the third-generation (3G) network and CC to increase data processing speed and security level based on public key infrastructure (PKI). The PKI mechanism uses encryption-based access control and over-encryption to ensure the privacy of users' access to the outsourced data. A 4PL-AVE trading platform utilizes CC technology to enhance security for users and improve customer satisfaction, customer intimacy, and cost competitiveness.

3.2. Mobile learning
Mobile learning (m-learning) is designed based on electronic learning (e-learning) and mobility. However, traditional m-learning applications have limitations in terms of the high cost of devices and network, low network transmission rate, and limited educational resources.

Cloud-based m-learning applications are introduced to solve these limitations. For example, utilizing a cloud with large storage capacity and powerful processing ability, the applications provide learners with much richer services in terms of data (information) size, faster processing speed, and longer battery life.

The benefits of combining m-learning and CC to enhance the communication quality between students and teachers have been demonstrated. In this case, smartphone software based on open source components answers students' questions in a timely manner. In addition, a contextual m-learning system based on Mobile Interaction in Augmented Reality has also been proposed.

3.3. Mobile healthcare
The purpose of applying MCC in medical applications is to minimize the limitations of traditional medical treatment (e.g., small physical storage, security and privacy issues, and medical errors). Mobile healthcare (m-healthcare) provides mobile users with convenient help to access resources (e.g., patient health records) easily and efficiently. Besides, m-healthcare offers hospitals and healthcare organizations a variety of on-demand services on clouds rather than owning standalone applications on local servers.

An intelligent emergency management system can manage and coordinate the fleet of emergency vehicles effectively and in time when receiving calls from accidents or incidents.
Health-aware mobile devices detect pulse rate, blood pressure, and level of alcohol to alert the healthcare emergency system.


Pervasive access to healthcare information allows patients or healthcare providers to access current and past medical information.
Pervasive lifestyle incentive management can be used to pay healthcare expenses and manage other related charges automatically.

A paper proposes @HealthCloud, a prototype implementation of an m-healthcare information management system based on CC and a mobile client running the Android operating system (OS). This prototype presents three services utilizing Amazon's S3 Cloud Storage Service to manage patient health records and medical images.

Seamless connection to cloud storage allows users to retrieve, modify, and upload medical contents (e.g., medical images, patient health records, and biosignals) utilizing web services and a set of available APIs based on Representational State Transfer.
A patient health record management system displays information regarding patients' status, related biosignals, and image contents through the application's interface.
Image viewing support allows mobile users to decode large image files at different resolution levels, given different network availability and quality.

4. ISSUES AND APPROACHES OF MOBILE CLOUD COMPUTING

As discussed in the previous section, MCC has many advantages for mobile users and service providers. However, because of the integration of two different fields, that is, CC and mobile networks, MCC has to face many technical challenges. This section lists several research issues in MCC related to mobile communication and CC. Then, the available solutions to address these issues are reviewed.

4.1 Issues on the mobile communication side
(1) Low bandwidth. Bandwidth is one of the big issues in MCC because the radio resource for wireless networks is much scarcer compared with traditional wired networks.

(2) Availability. Service availability becomes a more important issue in MCC than in CC with wired networks. Mobile users may not be able to connect to the cloud to obtain a service due to traffic congestion, network failures, or being out of signal range.

(3) Heterogeneity. Mobile cloud computing will be used in highly heterogeneous networks in terms of wireless network interfaces. Different mobile nodes access the cloud through different radio access technologies such as WCDMA, GPRS, WiMAX, CDMA2000, and WLAN. As a result, an issue arises of how to handle the wireless connectivity while satisfying MCC's requirements (e.g., always-on connectivity, on-demand scalability of wireless connectivity, and the energy efficiency of mobile devices); a simple interface-selection heuristic is sketched below.
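As an illustration only (the link names, throughput, and power figures are invented for the example), one naive way a device might handle such heterogeneity is to rank the currently usable interfaces by bandwidth per watt:

import java.util.Comparator;
import java.util.List;

// Toy heuristic for heterogeneous access: among reachable interfaces,
// prefer the one with the best bandwidth-per-watt ratio.
public class NetworkSelector {
    record Link(String name, double mbps, double watts, boolean up) {}

    static Link pick(List<Link> links) {
        return links.stream()
                .filter(Link::up)
                .max(Comparator.comparingDouble(l -> l.mbps() / l.watts()))
                .orElseThrow(() -> new IllegalStateException("no connectivity"));
    }

    public static void main(String[] args) {
        List<Link> links = List.of(
                new Link("WLAN", 54.0, 0.8, true),
                new Link("WCDMA", 2.0, 1.2, true),
                new Link("WiMAX", 20.0, 1.5, false)); // currently out of range
        System.out.println("Use: " + pick(links).name()); // -> WLAN
    }
}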

4.2. Issues on the computing side
(1) Computation offloading. As explained in the previous section, offloading is one of the main features of MCC to improve the battery lifetime of mobile devices and to increase the performance of applications.

(2) Security. Protecting user privacy and data/application secrecy from adversaries is key to establishing and maintaining consumers' trust in the mobile platform, especially in MCC.

(3) Enhancing the efficiency of data access. With an increasing number of cloud services, the demand for accessing data resources (e.g., images, files, and documents) on the cloud increases. As a result, a method to deal with (i.e., store, manage, and access) data resources on clouds becomes a significant challenge.

(4) Context-aware mobile cloud services. It is important for the service provider to fulfill mobile users' satisfaction by monitoring their preferences and providing appropriate services to each of the users.

5. PRICING
Using services in MCC involves both a mobile service provider (MSP) and a cloud service provider (CSP). However, MSPs and CSPs have different service management, customer management, methods of payment, and prices. This leads to many issues: how to set the price, how the price will be divided among the different entities, and how the customers pay. For example, when a mobile user runs a mobile gaming application on the cloud, this involves the game service provider (providing a game license), the mobile service provider (providing access to the data through a base station), and the CSP (running the game engine on a data center).

6. CONCLUSION
Mobile cloud computing is one of the mobile technology trends of the future because it combines the advantages of both MC and CC, thereby providing optimal services for mobile users. That traction is expected to push the revenue of MCC to $5.2 billion. Given this importance, this article has provided an overview of MCC in which its definitions, architecture, and advantages have been presented. The applications supported by MCC, including m-commerce, m-learning, and mobile healthcare, have been discussed; they clearly show the applicability of MCC to a wide range of mobile services. Then, the issues and related approaches for MCC (i.e., from the communication and computing sides) have been discussed. Finally, future research directions have been outlined.

REFERENCES
1. Satyanarayanan M. Proceedings of the 1st ACM Workshop on Mobile Cloud Computing & Services: Social Networks and Beyond (MCS), 2010.
2. Satyanarayanan M. Fundamental challenges in mobile computing. In Proceedings of the 5th Annual ACM Symposium on Principles of Distributed Computing, 1996; 1-7.
3. Ali M. Green cloud on the horizon. In Proceedings of the 1st International Conference on Cloud Computing (CloudCom), Manila, 2009; 451-459.
4. http://www.mobilecloudcomputingforum.com
5. White Paper. Mobile Cloud Computing Solution Brief. AEPONA, 2010.
6. Christensen JH. Using RESTful web-services and cloud computing to create next generation mobile applications. In Proceedings of the 24th ACM SIGPLAN Conference Companion on Object-Oriented Programming Systems Languages and Applications (OOPSLA), 2009; 627-634.
7. Liu L, Moulic R, Shea D. Cloud service portal for mobile device management. In Proceedings of the IEEE 7th International Conference on e-Business Engineering (ICEBE), 2011; 474.
8. Foster I, Zhao Y, Raicu I, Lu S. Cloud computing and grid computing 360-degree compared. In Proceedings of the Workshop on Grid Computing Environments (GCE), 2009.
9. Calheiros RN, Vecchiola C, Karunamoorthy D, Buyya R. The Aneka platform and QoS-driven resource provisioning for elastic applications on hybrid Clouds. Future Generation Computer Systems, to appear.
10. Buyya R, Yeo CS, Venugopal S, Broberg J, Brandic I. Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility. Journal on Future Generation Computer Systems 2009; 25(6): 599-616.
11. Huang Y, Su H, Sun W, et al. Framework for building a low-cost, scalable, and secured platform for web-delivered business services. IBM Journal of Research and Development 2010; 54(6): 535-548.
12. Tsai W, Sun X, Balasooriya J. Service-oriented cloud computing architecture. In Proceedings of the 7th International Conference on Information Technology: New Generations (ITNG), 2010; 684-689.

Ms. Anjali Singh has a teaching experience of 14 years. She pursued her MCA and M.Tech degrees in 2001 and 2013, respectively. Her experience includes teaching different subjects such as Computer Networks, Multimedia Systems, Mobile Computing, Modelling and Simulation, Cyber Security, Web Technology, and Human Values.


GENETIC ALGORITHM FRAMEWORK FOR PARALLEL COMPUTING ENVIRONMENTS

Ruchi Gupta
Assistant Professor, MCA Department

E-mail: 80 [email protected]

Abstract – In this research article, we present a framework to execute genetic algorithms (GA) in various parallel environments. GA researchers can prepare implementations of GA operators and fitness functions using this framework. In the proposed framework, the GA model is restricted to a coarse-grained and a micro-grained model. In this paper, different parallel computing environments using genetic algorithms are presented. Computational performance is also discussed through examples.
Index Terms – Genetic Algorithm, Parallel Computing.

1. INTRODUCTION
Recently, several types of parallel architecture have come into wide use. For example, calculation with multi-core CPUs having more than four cores is not unusual. General-purpose GPUs have also become easy to use. In Japan, some of the supercomputing centers are open for researchers to use high-end computational resources.

Thus, even when we wish to use the same algorithms, it is necessary to prepare different implementation codes suitable for different parallel architectures. This places a heavy burden on algorithm researchers, because in-depth knowledge of the different parallel architectures is required to run their implementation codes efficiently on parallel machines. GA is a type of optimization algorithm with multipoint search [1]. GA may find the optimum point even when the landscape of the objective function has multiple peaks. However, GA requires many iterations to find the optimum, which results in high calculation cost. As GA is a multipoint search algorithm, it implicitly has several types of parallelism [2][3][4][5]. Thus, several lines of research regarding the parallelization of GAs exist. Ono et al. introduced the idea that the logical GA model and the implementation parallel model of GA should be clarified. As there is parallelism in the GA itself, a parallel GA can be performed even on a single process. We call this the logical parallel model. On the other hand, because GA has multiple search points, a single logical model can be implemented on parallel computers. In this case, an implementation parallel model should be prepared.

In most GA research, these logical and implementation parallel models are not distinguished clearly and are often the same [6][7][8]. When the logical model is closely related to the implementation model, GA users should have deep knowledge of the parallel architectures on which their parallel GAs are running. At the same time, as the logical model and implementation model are closely related, different parallel codes are required for different parallel machines. Therefore, it would be of great benefit if GA users were not required to have such deep knowledge of novel parallel architectures to run their GAs in parallel.

Here, we present a parallel environment framework for GA that adopts the coarse-grained and micro-grained models as implementation models. GA researchers prepare the implementations of genetic algorithm operators and fitness functions using the proposed framework.

2. GENETIC ALGORITHM
The GA is an optimization algorithm that mimics natural evolution with variation and adaptation to the environment. In the evolution processes in nature, an individual that is better adapted to the environment, among a group of individuals forming a certain generation, survives at a higher rate and leaves offspring to the next generation. In the GA concept, the computer finds an individual that is better adapted to the environment, or a solution that yields an optimum value of an evaluation function, by modeling the mechanism of biological evolution. Figure 1 shows a typical flowchart of GA.

Fig. 1. Flowchart of GA.
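As a concrete illustration of that flowchart, the following minimal serial GA sketch runs the initialize-evaluate-select-crossover-mutate loop. The toy OneMax objective (counting 1-bits) stands in for a real fitness function, and all parameter values are illustrative:

import java.util.Arrays;
import java.util.Random;

// Minimal serial GA: initialize, then repeat selection, one-point
// crossover and mutation for a fixed number of generations.
public class SimpleGA {
    static final Random RNG = new Random(42);
    static final int POP = 30, LEN = 32, GENS = 100;

    static int fitness(boolean[] g) {            // OneMax: count the 1-bits
        int f = 0;
        for (boolean b : g) if (b) f++;
        return f;
    }

    static boolean[] tournament(boolean[][] pop) { // binary tournament selection
        boolean[] a = pop[RNG.nextInt(POP)], b = pop[RNG.nextInt(POP)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    public static void main(String[] args) {
        boolean[][] pop = new boolean[POP][LEN];   // random initial population
        for (boolean[] g : pop) for (int i = 0; i < LEN; i++) g[i] = RNG.nextBoolean();

        for (int gen = 0; gen < GENS; gen++) {
            boolean[][] next = new boolean[POP][];
            for (int k = 0; k < POP; k++) {
                boolean[] a = tournament(pop), b = tournament(pop);
                boolean[] child = new boolean[LEN];
                int cut = RNG.nextInt(LEN);        // one-point crossover
                for (int i = 0; i < LEN; i++) child[i] = (i < cut ? a[i] : b[i]);
                if (RNG.nextInt(10) == 0)          // mutation with rate 0.1
                    child[RNG.nextInt(LEN)] ^= true;
                next[k] = child;
            }
            pop = next;
        }
        System.out.println("Best fitness: "
                + Arrays.stream(pop).mapToInt(SimpleGA::fitness).max().getAsInt());
    }
}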

3. PARALLEL MODEL OF GA
The GA is able to be parallelized because it searches multiple points and repeats sampling. One approach is to parallelize the evaluation of individuals; this method has the same search characteristics as a serial GA. The other is to split a population into multiple subpopulations. Today, the latter method is frequently used because it has higher parallelism than the former method. There are also GA methods achieving very efficient parallelism that combine both methods. When changing the parallelization method of a GA, it is important to consider that the amount of calculation and the accuracy of solutions may change. This brings us to the point that parallel GA has the following two meanings:


Parallel algorithm for increasing search performance
Parallel implementation for reducing execution time

For example, Pospichal [9] has proposed a distributed population GA based on GPU and achieved high efficiency. Although this method has achieved high efficiency, GAs other than the distributed population GA cannot be implemented with it, since the GA and the parallel implementation are inseparable. Also, it is difficult to implement this method on architectures other than GPU. The most basic parallel models are introduced in the following. Parallel models of GA can be divided into coarse-grained and micro-grained models.

1) Coarse-grained model
The coarse-grained model is generally called a distributed population model. This model splits the population into multiple subpopulations, which are then searched separately. Periodically, several individuals in some subpopulations are moved into other subpopulations; this operation is called migration. Figure 2 shows the flow of the coarse-grained model. This model uses computational resources effectively, because it connects to other computational nodes only during migration. In addition, this model changes the search performance compared to a serial algorithm.

Fig. 2. Coarse-grained model.
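A minimal sketch of this distributed population idea follows. The per-island evolution step is deliberately abstracted into a placeholder, and the migration interval, ring topology, and fitness values are illustrative assumptions, not part of the framework described above:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Coarse-grained (island) model sketch: each island evolves its own
// subpopulation; every MIGRATION_INTERVAL generations the best individual
// replaces the worst one on the next island in a ring topology.
public class IslandModel {
    static final Random RNG = new Random(1);
    static final int ISLANDS = 4, SUBPOP = 20, GENS = 50, MIGRATION_INTERVAL = 10;

    public static void main(String[] args) {
        List<List<Double>> islands = new ArrayList<>();
        for (int i = 0; i < ISLANDS; i++) {
            List<Double> sub = new ArrayList<>();
            for (int j = 0; j < SUBPOP; j++) sub.add(RNG.nextDouble()); // fitness proxy
            islands.add(sub);
        }
        for (int gen = 1; gen <= GENS; gen++) {
            islands.forEach(IslandModel::evolveOneGeneration);
            if (gen % MIGRATION_INTERVAL == 0) {
                for (int i = 0; i < ISLANDS; i++) {               // ring migration
                    List<Double> from = islands.get(i);
                    List<Double> to = islands.get((i + 1) % ISLANDS);
                    double best = Collections.max(from);
                    to.set(to.indexOf(Collections.min(to)), best);
                }
            }
        }
        islands.forEach(s -> System.out.println("island best: " + Collections.max(s)));
    }

    // Placeholder for real selection/crossover/mutation: nudge fitness upward.
    static void evolveOneGeneration(List<Double> sub) {
        for (int i = 0; i < sub.size(); i++)
            sub.set(i, Math.min(1.0, sub.get(i) + RNG.nextDouble() * 0.01));
    }
}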

2) Micro-grained model
Evaluations account for a large share of the total execution time in problems with complex objective functions. The micro-grained model is based on this general concept of parallelization. This model is a master-slave model. A master processor executes all genetic operations besides evaluation; evaluations are executed by slave processors. The master processor sends out the individuals that should be evaluated. Slave processors evaluate these individuals and return them to the master processor. Figure 3 shows the flow of the micro-grained model. This model shows inferior parallelization performance compared to the coarse-grained model, because it requires many connections and the master processor itself uses a CPU. In addition, this model does not alter the search performance compared to a serial algorithm.

Fig. 3. Micro-grained model.
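The master-slave pattern can be sketched with a thread pool standing in for slave processors. Here slowFitness() is an assumed stand-in for an expensive objective function, and the population values are arbitrary:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Micro-grained (master-slave) sketch: the master keeps all genetic
// operations and farms fitness evaluations out to slave threads.
public class MasterSlaveGA {
    static double slowFitness(double x) {          // pretend-expensive objective
        try { Thread.sleep(10); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return -(x - 3.0) * (x - 3.0);             // maximum at x = 3
    }

    public static void main(String[] args) throws Exception {
        List<Double> population = List.of(0.5, 1.7, 2.9, 3.2, 4.8, 6.1);
        ExecutorService slaves = Executors.newFixedThreadPool(4);

        List<Future<Double>> results = new ArrayList<>();
        for (double ind : population)              // master dispatches individuals
            results.add(slaves.submit(() -> slowFitness(ind)));

        for (int i = 0; i < population.size(); i++) // master collects fitness values
            System.out.printf("x=%.1f fitness=%.3f%n",
                    population.get(i), results.get(i).get());

        slaves.shutdown();
    }
}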

4. CONCLUSIONS AND FUTURE WORK
In this paper, we have presented a framework for GAs in parallel environments. GA researchers can prepare implementations of GA operators and fitness functions using this framework. In the proposed framework, the GA model is restricted to the coarse-grained and micro-grained models.

In future work, a mechanism to find the best number of individuals and to tune it dynamically will be implemented in the libraries. In addition, we will also attempt to prepare some parallel libraries for other parallel architectures.

5. REFERENCES
1. David E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, 1989.
2. T. Starkweather, D. Whitley, and K. Mathias, "Optimization using Distributed Genetic Algorithms," Parallel Problem Solving from Nature, 1991.
3. H. Mühlenbein, "Parallel genetic algorithms, population genetics and combinatorial optimization," in Parallelism, Learning, Evolution, vol. 565 of Lecture Notes in Computer Science, pp. 398-406, Springer Berlin/Heidelberg, 1991.
4. Theodore C. Belding, "The Distributed Genetic Algorithm Revisited," Proc. 6th International Conf. Genetic Algorithms, pp. 114-121, 1995.
5. M. Miki, T. Hiroyasu, M. Kaneko, and K. Hatanaka, "A Parallel Genetic Algorithm with Distributed Environment Scheme," IEEE International Conference on Systems, Man, and Cybernetics, vol. 1, pp. 695-700, 1999.
6. D. Lim, Y. S. Ong, Y. Jin, B. Sendhoff, and B. S. Lee, "Efficient hierarchical parallel genetic algorithms using grid computing," Future Generation Computer Systems, vol. 23, no. 4, pp. 658-670, 2007.
7. J. M. Li, X. J. Wang, R. S. He, and Z. X. Chi, "An efficient fine-grained parallel genetic algorithm based on GPU-accelerated," in Network and Parallel Computing Workshops (NPC Workshops), IFIP International Conference on, 2007, pp. 855-862.
8. A. M. Thompson and B. I. Dunlap, "Optimization of analytic density functionals by parallel genetic algorithm," Chemical Physics Letters, vol. 463, no. 1-3, pp. 278-282, 2008.
9. P. Pospichal and J. Jaros, "GPU-based Acceleration of the Genetic Algorithm," GPU competition of GECCO competition, 2009.

Ruchi Gupta is an Assistant Professor at AKGEC, Ghaziabad, an affiliated college of Uttar Pradesh Technical University, Lucknow (India). She did her MCA and M.Tech (CS) from U.P. Technical University, Lucknow, and is pursuing a Ph.D. (Computer Science) from Sharda University, Greater Noida. Her areas of interest include genetic algorithms.


TOWARDS DEVELOPING REUSABLE SOFTWARE COMPONENTS

Aditya Pratap Singh
Assistant Professor, AKGEC, Ghaziabad

[email protected]

Abstract – CBSE (Component Based Software Engineering) allows faster development at lower cost and with better usability, using software development with reuse and for reuse. Software components work as plug-and-play devices due to their reusability properties, which abstract software complexity and increase performance. In spite of numerous research efforts, there are no mature software component development guidelines defined for current technologies such as .NET or Java.
This study presents guidelines for component development for reuse in the .NET environment. It demonstrates an approach by designing a binary component as part of development for reuse based on the .NET component framework.

Keywords – Software Reuse, Software Component, CBSE.

INTRODUCTION
In the last three decades, software development has established itself as a global market for business. In the current scenario, IT companies have slowly shifted from the traditional software development environment to a component-based environment. The reason behind this change is the increased productivity and reduced cost due to the reusability of software components. Correspondingly, CBSE (Component Based Software Engineering) is the new trend for the software development industry.

A software component is an independent element which can be deployed and composed in further software development cycles without any modification. These software components reduce complexity from the end user's point of view and provide quick and easy implementation, subsequently helping to reduce cost. Development work in the software industry increasingly tries to incorporate component reusability in one way or another. Zirpins et al. [6] suggested that today's modern ERP systems are made of several software components, which is a real example of software component reuse on a big scale.

Ravichandran and Rothenberger [4] stated that software components work similarly to autonomous hardware components, which abstract the internal complexity of the device and provide an easy user interface to operate, and can likewise be used as building blocks in new product development. Among the IT giants, Microsoft was the first to understand the industry's needs and succeeded in capitalizing on this ready market. Microsoft offered the development kit known as "Microsoft Visual Studio" to develop reusable software components with many development options. Nowadays, the whole software industry has understood that components can deliver great reusability, extensibility, and maintainability benefits when creating a large-scale software system by breaking it down into small binary components. Ravichandran and Rothenberger [5] noticed that the industry is now emphasizing black-box reuse over white-box reuse. Component-based programming emphasizes "black-box reuse", which means that a client implementing such a component need not worry about its internal functionality.

The significant contribution of this study is to propose generic, comprehensive guidelines for development teams to adopt while developing software components for reuse.

COMPONENT BASED SOFTWARE ENGINEERING
Component-Based Software Engineering (CBSE) is concerned with improving Component-Based Development (CBD) practices. In particular, CBSE aims to provide developers with predictability of the final software system's properties based on an analysis of its constituent components. Therefore, there is a need to develop effective ways of developing software components.

A software component is an independent unit of binary code, which can be used as plug and play, like a hardware device. It is designed and developed in a modular architecture, which promotes interoperability with other components and frameworks, for reuse or with reuse.

Although a software component is an independent modular unit, it is loosely coupled, not bound to one client and, most importantly, it possesses official usage guidelines for further reuse. In general, a typical software component model is divided into three parts:

a) Semantics: denotes what components are meant to be.
b) Syntax: defines how they are constructed, developed and represented.
c) Composition: defines how they are composed and reassembled.

Semantics provides a description of components and describes their usage and functionality. Syntax denotes the component's algorithms and development complexity, which give the component its physical structure. Finally, composition provides an overall wrapper mechanism to compose the various functionalities of a component and present it for reuse to customers.

In some ways, the life cycle of a component is much like a desktop or web software development life cycle, but modeling, packaging, and implementation are totally different from the general software development process. An in-depth domain analysis, and the method of packaging and deploying compiled binary code in such a loosely coupled way, make a component's life cycle special.

The demands and requirements for software components keep fluctuating according to new system requirements in the market, so while developing a new software component for reuse, developers need to follow some guidelines for component design and development. The software industry still does not have a standard guideline for building reusable components, but seasoned authors such as Lowy [2] and Ramachandran [4] have proposed their own guidelines for component architecture. On comparing them, this study finds that [3] is more specific about lower-level programming concepts, while [5] is more focused on the bigger picture of component design and reuse potential for industries.

Figure 1. Difference between Producer Reuse and Consumer Reuse.

Furthermore, software components lie in two categories: "for reuse" and "with reuse". Similarly, [1] claimed that the usage of software components can be divided into two categories:

a) Consumer Reuse: the development of new software systems using existing components, called "development with reuse".

b) Producer Reuse: developing and building new portable components for further reuse, called "development for reuse".

Figure 1 presents the differentiation between producer reuse and consumer reuse.

PROPOSED GUIDELINES FOR DEVELOPING REUSABLE .NET COMPONENTS

One of the main advantages of building components is to promote reusability. Visual Studio .NET also supports such a paradigm, wherein the components that we develop can be hosted inside the toolbar and then dragged and dropped into various projects; the development environment then writes the necessary code for us. However, before starting component design, some self-assessment questions can be helpful for architects [5]:

• Identifying the common functions in the domain to avoid duplication of tasks.
• Dependency on other components and hardware devices.
• Optimized design for future technology upgrades.
• Easy use and implementation with some minor changes.
• How valid is the component decomposition for reuse?
• From the business point of view, a good Return on Investment (ROI) of a software component ensures the component's longer life and usage in application domains.

The design for reuse provides a set of clear implementation steps, which should be followed by architects and developers. This set of guidelines can be classified into a number of categories (several of them are illustrated in the sketch after this list):

• Supporting component reuse by providing exception handlers.
• Interface-based programming.
• Using delegates to provide flexibility through strongly typed function pointers.
• Using Generic<T> to make classes and functions reusable with any data type.
• Inheritance (the "is-a" relationship) ensures the relationship between classes and their derived objects, and provides the foundation of object-oriented programming concepts.
• Designing and developing components with interface-based programming, which abstracts the code complexity for the user.
• Using abstract classes, which provide flexibility over interfaces, to alter the implementing class's functionality.
• Remoting and object marshaling.
• Multithreading with thread safety.
• Packaging and deployment.
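The following minimal sketch, written here in Java (one of the two technologies the abstract names; the C# analogues are interfaces, delegates, and Generic<T>), shows how a few of these guidelines combine in a tiny reusable component. All names (Repository, InMemoryRepository, RepositoryException) are invented for illustration:

import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Interface-based programming: clients depend only on this contract.
interface Repository<T> {
    void add(T item);
    List<T> find(Predicate<T> filter);  // delegate-style, strongly typed callback
}

// Component-specific exception type, supporting the exception-handler guideline.
class RepositoryException extends RuntimeException {
    RepositoryException(String msg, Throwable cause) { super(msg, cause); }
}

// Generic implementation: reusable with any data type.
class InMemoryRepository<T> implements Repository<T> {
    private final List<T> items = new ArrayList<>();

    public void add(T item) {
        if (item == null) throw new RepositoryException("null item", null);
        items.add(item);
    }

    public List<T> find(Predicate<T> filter) {
        try {
            return items.stream().filter(filter).toList();
        } catch (RuntimeException e) {  // wrap internal failures for the client
            throw new RepositoryException("filter failed", e);
        }
    }
}

public class ComponentDemo {
    public static void main(String[] args) {
        Repository<String> repo = new InMemoryRepository<>(); // client sees only the interface
        repo.add("order-17");
        repo.add("invoice-9");
        System.out.println(repo.find(s -> s.startsWith("order")));
    }
}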

CONCLUSION
Current technologies help establish a solid foundation to enable the functionalities of software components. With adequate technologies in the field of software development, software component guidelines offer best practices for component development. The guidelines for new component development should be flexible enough to add more artifacts and principles and to adapt them according to current business requirements, which helps to save cost and labor and provides a flexible development environment. This small article tries to present some of the guidelines for software component development in the .NET environment.


REFERENCES
[1] Lau, K. K. and Wang, Z. (2007) Software Component Models. IEEE Transactions on Software Engineering, Vol. 33(10), pp. 709-724.
[2] Lowy, J. (2003) Programming .NET Components. Cambridge, O'Reilly.
[3] MSDN (2013) An Introduction to C# Generics. [online]. MSDN.
[4] Ramachandran, M. (2008) Software Components: Guidelines and Applications. New York: Nova Science Publishers.
[5] Ravichandran, T. and Rothenberger, M. A. (2003) Software reuse strategies and component markets. Communications of the ACM, Vol. 46(8), pp. 109-114.
[6] Zirpins, C., Ortiz, G., Lamersdorf, W. and Emmerich, W. (2013) Proceedings of the First International Workshop on Engineering Service Compositions. IBM.

Aditya Pratap Singh received his Master of Computer Applications degree from Uttar Pradesh Technical University, Lucknow, in 2003. He is pursuing a PhD from Gautam Buddha University, Greater Noida. He is an assistant professor in the Department of MCA at Ajay Kumar Garg Engineering College, Ghaziabad. His current research interests are in component-based software engineering and software measurement. He has presented his work in several national and international conferences, and his work has appeared in IEEE Xplore. He has served on program committees for national conferences on cyber security issues.


A SURVEY ON BIG DATA AND MINING

Dheeraj Kumar Singh

Assistant Professor, Department of MCA, AKGEC
[email protected]

Abstract – Big Data concerns large-volume, complex, growing data sets with multiple, autonomous sources. With the fast development of networking, data storage, and data collection capacity, Big Data is now rapidly expanding in all science and engineering domains, including the physical, biological, and biomedical sciences. This paper presents the HACE theorem that characterizes the features of the Big Data revolution and proposes a Big Data processing model from the data mining perspective. This data-driven model involves demand-driven aggregation of information sources, mining and analysis, user interest modeling, and security and privacy considerations. We analyze the challenging issues in the data-driven model and in the Big Data revolution.

1. INTRODUCTION
What is Big Data? A name and a marketing term, for sure, but also shorthand for advancing trends in technology that open the door to a new approach to understanding the world and making decisions. There is a lot more data, all the time, growing at 50 percent a year, or more than doubling every two years, estimates IDC, a technology research firm. It is not just more streams of data, but entirely new ones. For example, there are now countless digital sensors worldwide in industrial equipment, automobiles, electrical meters, and shipping crates. They can measure and communicate location, movement, vibration, temperature, humidity, and even chemical changes in the air, meaning a lot of different types of data altogether.

2. BIG DATA AND DATA MINING
Big Data is nothing but data available at heterogeneous, autonomous sources, in extremely large amounts, which gets updated in fractions of seconds. For example, consider the data stored at the servers of Facebook: as most of us use Facebook daily, we upload various types of information and photos, and all of it gets stored in the data warehouses at Facebook's servers. This data is nothing but Big Data, so called due to its complexity. Another example is the storage of photos at Flickr. These are good real-time examples of Big Data. Yet another example of Big Data would be the readings taken from an electronic microscope of the universe. Now the term data mining: finding the exact useful information or knowledge from the collected data, for future actions, is nothing but data mining. So, collectively, the term Big Data mining means a close-up view, with lots of detailed information, of a Big Data set, as shown in Fig. 1 below.

Fig. 1. Data Mining with Big Data.

3. KEY FEATURES OF BIG DATA
The features of Big Data are:

It is huge in size.
The data keeps on changing from time to time.
Its data sources are from different phases.
It is free from the influence, guidance, or control of anyone.
It is very complex in nature, and thus hard to handle.

It is huge in size because it is a collection of data from various sources together. If we consider the example of Facebook, a large number of people upload their data in various forms such as text, images, or videos. People also keep changing their data continuously. This tremendous and continuously changing stock of data is stored in a warehouse, and this large store of data requires a large area for actual implementation. As the size is too large, no one is capable of controlling it alone; Big Data needs to be controlled by dividing it into groups. Due to its large size, decentralized control, and different data sources with different types, Big Data becomes very complex and hard to handle. We cannot manage it with the local tools that we use for managing regular data in real time. For major Big Data-related applications, such as Google, Flickr, and Facebook, a large number of server farms are deployed all over the world to ensure nonstop services and quick responses for local markets.

4. CHALLENGING ISSUES IN DATA MINING WITH BIG DATA

There are three sectors in which the challenges for Big Data arise. These three sectors are:

Mining platform.
Privacy.
Design of mining algorithms.

Basically, Big Data is stored at different places, and the data volumes keep increasing as the data grows continuously, so collecting all the data stored at different places is very expensive. Suppose we use typical data mining methods (those methods which are used for mining small-scale data on our personal computer systems) for mining Big Data; this would become an obstacle, because the typical methods require the data to be loaded into main memory, even if we have a super-large main memory. Maintaining privacy is one of the main aims of data mining algorithms. Presently, to mine information from Big Data, parallel-computing-based algorithms such as MapReduce are used. In such algorithms, large data sets are divided into a number of subsets, and mining algorithms are applied to those subsets. Finally, summation algorithms are applied to the results of the mining algorithms to meet the goal of Big Data mining; a minimal sketch of this split-mine-aggregate pattern is shown below. In this whole procedure, the privacy constraints can obviously be broken as we divide the single Big Data set into a number of smaller data sets.
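The following toy Java sketch illustrates that split-mine-aggregate pattern. Simple item counting stands in for a real mining algorithm, and the three hard-coded subsets stand in for data partitions stored at different places:

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Split-mine-aggregate sketch: the data set is divided into subsets,
// each subset is "mined" in parallel (here: item counting), and a
// summation step merges the partial results into the global answer.
public class SplitMineAggregate {
    public static void main(String[] args) {
        List<List<String>> subsets = List.of(                 // data split into subsets
                List.of("phone", "case", "phone"),
                List.of("case", "charger", "phone"),
                List.of("charger", "charger", "case"));

        Map<String, Long> global = new ConcurrentHashMap<>();
        subsets.parallelStream().forEach(subset ->            // mine each subset in parallel
                subset.forEach(item ->
                        global.merge(item, 1L, Long::sum)));  // aggregation (summation) step

        System.out.println(global); // {phone=3, case=3, charger=3}
    }
}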

While designing such algorithms, we face various challenges. Consider the classic picture of blind men observing a giant elephant, as shown in Figure 2: each one tries to reach a conclusion about what the thing actually is. Somebody says the thing is a hose; someone else says it is a tree or a pipe, and so on. Each blind man is observing only some part of the giant elephant, not the whole, so each blind person's prediction is something different from what the thing actually is. Similarly, when we divide Big Data into a number of subsets and apply the mining algorithms to those subsets, the results of those mining algorithms, when collected together, will not always point us to the actual result we want.

5. RELATED WORK
At the level of the mining platform sector, parallel programming models like MapReduce are presently being used for the analysis and mining of data. MapReduce is a batch-oriented parallel computing model, and there is still a certain gap in performance compared with relational databases. Improving the performance of MapReduce and enhancing the real-time nature of large-scale data processing have received a significant amount of attention, with MapReduce parallel programming being applied to many machine learning and data mining algorithms. Data mining algorithms usually need to scan through the training data to obtain the statistics needed to solve or optimize a model. For those who intend to hire a third party, such as auditors, to process their data, it is very important to have efficient and effective access to the data. In such cases, the user's privacy restrictions may apply, such as no local copies or downloading allowed. So a privacy-preserving public auditing mechanism has been proposed for large-scale data storage [1]. This public-key-based mechanism is used to enable third-party auditing, so users can safely allow a third party to analyze their data without breaching the security settings or compromising data privacy. In the case of the design of data mining algorithms, knowledge evolution is a common phenomenon in real-world systems, and as the problem statement differs, the knowledge differs accordingly. For example, when we go to a doctor for treatment, the doctor's treatment program continuously adjusts to the condition of the patient; knowledge evolves similarly. For this, Wu [2][3][4] proposed and established the theory of local pattern analysis, which has laid a foundation for global knowledge discovery in multisource data mining. This theory provides a solution not only for the problem of full search, but also for finding global models that traditional mining methods cannot find.

6. CONCLUSION
Big Data is going to continue growing during the next years, and each data scientist will have to manage a much larger amount of data every year. This data is going to be more diverse, larger, and faster. We discussed some insights about the topic and what we consider the main concerns and challenges for the future. Big Data is becoming the new final frontier for scientific data research and for business applications. We are at the beginning of a new era where Big Data mining will help us to discover knowledge that no one has discovered before. Everybody is warmly invited to participate in this intrepid journey.

REFERENCES
[1] C. Wang, S.S.M. Chow, Q. Wang, K. Ren, and W. Lou, "Privacy-Preserving Public Auditing for Secure Cloud Storage," IEEE Trans. Computers, vol. 62, no. 2, pp. 362-375, Feb. 2013.
[2] X. Wu and S. Zhang, "Synthesizing High-Frequency Rules from Different Data Sources," IEEE Trans. Knowledge and Data Eng., vol. 15, no. 2, pp. 353-367, Mar./Apr. 2003.
[3] X. Wu, C. Zhang, and S. Zhang, "Database Classification for Multi-Database Mining," Information Systems, vol. 30, no. 1, pp. 71-88, 2005.
[4] K. Su, H. Huang, X. Wu, and S. Zhang, "A Logical Framework for Identifying Quality Knowledge from Different Data Sources," Decision Support Systems, vol. 42, no. 3, pp. 1673-1683, 2006.
[5] E.Y. Chang, H. Bai, and K. Zhu, "Parallel Algorithms for Mining Large-Scale Rich-Media Data," Proc. 17th ACM Int'l Conf. Multimedia (MM '09), pp. 917-918, 2009.
[6] D. Howe et al., "Big Data: The Future of Biocuration," Nature, vol. 455, pp. 47-50, Sept. 2008.
[7] A. Labrinidis and H. Jagadish, "Challenges and Opportunities with Big Data," Proc. VLDB Endowment, vol. 5, no. 12, pp. 2032-2033, 2012.
[8] Y. Lindell and B. Pinkas, "Privacy Preserving Data Mining," J. Cryptology, vol. 15, no. 3, pp. 177-206, 2002.


MOBILE CYBER THREATS

Arpna Saxena

Assistant Professor, MCA Department
E-mail: [email protected]

Abstract – Smartphones and tablets have long been established as popular personal electronic devices. Cyber-threats, including those targeting mobile devices, are directly linked to cybercrime. In most developed countries, creating and distributing malicious software is a criminal offence. Although such criminal acts are perpetrated in virtual environments, their victims lose real assets, such as personal data and money. Combating cybercrime is particularly difficult because cybercriminals do not need to cross the borders of other countries to commit crimes in those territories. At the same time, enforcement authorities in these same countries have to overcome numerous barriers in order to administer justice. Therefore, international cooperation between information security experts and law enforcement authorities is required to effectively combat crime in the virtual world.

1. INTRODUCTION
Smartphones, or mobile phones with advanced capabilities like those of personal computers (PCs), are appearing in more people's pockets, purses, and briefcases. Smartphones' popularity and relatively lax security have made them attractive targets for attackers.

The number and sophistication of attacks on mobile phones are increasing, and countermeasures are slow to catch up.

Smartphones and personal digital assistants (PDAs) give users mobile access to email, the internet, GPS navigation, and many other applications. However, smartphone security has not kept pace with traditional computer security. Technical security measures, such as firewalls, antivirus software, and encryption, are uncommon on mobile phones, and mobile phone operating systems are not updated as frequently as those on personal computers [3]. Mobile social networking applications sometimes lack the detailed privacy controls of their PC counterparts. Unfortunately, many smartphone users do not recognize these security shortcomings. Many users fail to enable the security software that comes with their phones, and they believe that surfing the internet on their phones is as safe as or safer than surfing on their computers [4].

Meanwhile, mobile phones are becoming more and more valuable as targets for attack. People are using smartphones for an increasing number of activities and often store sensitive data, such as email, calendars, contact information, and passwords, on the devices. Mobile applications for social networking keep a wealth of personal information. Recent innovations in mobile commerce have enabled users to conduct many transactions from their smartphone, such as purchasing goods and applications over wireless networks, redeeming coupons and tickets, banking, processing point-of-sale payments, and even paying at cash registers.

2. TYPICAL ATTACKS LEVERAGE PORTABILITY AND SIMILARITY TO PCS

Mobile phones share many of the vulnerabilities of PCs. However, the attributes that make mobile phones easy to carry, use, and modify open them to a range of attacks.

Perhaps most simply, the very portability of mobile phones and PDAs makes them easy to steal. The owner of a stolen phone could lose all the data stored on it, from personal identifiers to financial and corporate data. Worse, a sophisticated attacker with enough time can defeat most security features of mobile phones and gain access to any information they store [5].

Many seemingly legitimate software applications, or apps, are malicious [6]. Anyone can develop apps for some of the most popular mobile operating systems, and mobile service providers may offer third-party apps with little or no evaluation of their safety. Sources that are not affiliated with mobile service providers may also offer unregulated apps that access locked phone capabilities. Some users "root" or "jailbreak" their devices, bypassing operating system lockout features to install these apps.

Even legitimate smartphone software can be exploited. Mobile phone software and network services have vulnerabilities, just like their PC counterparts do. For years, attackers have exploited mobile phone software to eavesdrop, crash phone software, or conduct other attacks [7]. A user may trigger such an attack through some explicit action, such as clicking a maliciously designed link that exploits a vulnerability in a web browser. A user may also be exposed to attack passively, however, simply by using a device that has a vulnerable application or network service running in the background [8].

Phishing attacks use electronic communications to trick users into installing malicious software or giving away sensitive information. Email phishing is a common attack on PCs, and it is just as dangerous on email-enabled mobile phones. Mobile phone users are also vulnerable to phishing voice calls ("vishing") and SMS/MMS messages ("smishing") [9]. These attacks target feature phones (mobile phones without advanced data and wireless capabilities) as well as smartphones, and they sometimes try to trick users into incurring fraudulent charges on their mobile phone bill. Phishers often increase their attacks after major current events, crafting their communications to look like news stories or solicitations for charitable donations. Spammers used this strategy after the March 2011 earthquake and tsunami in Japan [10].

3. CONSEQUENCES OF A MOBILE ATTACK CAN BE SEVERE

Many users may consider mobile phone security to be less important than the security of their PCs, but the consequences of attacks on mobile phones can be just as severe. Malicious software can make a mobile phone a member of a network of devices that can be controlled by an attacker (a "botnet"). Malicious software can also send device information to attackers and perform other harmful commands. Mobile phones can also spread viruses to PCs that they are connected to. Losing a mobile phone used to mean only the loss of contact information, call histories, text messages, and perhaps photos. However, in more recent years, losing a smartphone can also jeopardize financial information stored on the device in banking and payment apps, as well as usernames and passwords used to access apps and online services. If the phone is stolen, attackers could use this information to access the user's bank account or credit card account. An attacker could also steal, publicly reveal, or sell any personal information extracted from the device, including the user's information, information about contacts, and GPS locations. Even if the victim recovers the device, he or she may receive many spam emails and SMS/MMS messages and may become the target of future phishing attacks. Some personal and business services add a layer of authentication by calling a user's mobile phone or sending an additional password via SMS before allowing the user to log onto the service's website. A stolen mobile phone gets an attacker one step closer to accessing the services as the user. If the device contains the owner's username and password for the service, the attacker would have everything necessary to access the service.

4. FIVE CORNERSTONES OF MOBILE CYBERSECURITY

Mobile communications are a complex ecosystem comprised of a broad list of technologies and players integrated into a "system-of-systems" that enables the wireless environment that consumers enjoy today. Within this ecosystem, security is often addressed in terms of five cornerstone segments, as shown below.

Next, we explore each of the five segments, the threat landscape, and the proactive steps the mobile industry is taking to address the threats through solutions available today.

1. Consumers and End Users. Industry is working hard, and with growing success, to educate users on how to reduce their cybersecurity risks. Best practices that the industry recommends for consumers to become security savvy include:

Configure devices to be more secure – Smartphones and other mobile devices have password features that lock the devices on a scheduled basis. After a predetermined period of inactivity (e.g., one minute, two minutes, etc.), the device requires the correct PIN or password to be entered. Encryption, remote-wipe capabilities and, depending on the operating system, anti-virus software may also serve to improve security.

"Caveat link" – Beware of suspicious links. Do not click on links in suspicious emails or text messages, as they may lead to malicious websites.

Exercise caution downloading apps – Avoid applications from unauthorized application stores. Some application stores vet apps so they do not contain malware. Online research on an app before downloading is often a sound first step.

Check permissions – Check the access (i.e., access to which segments of your mobile device) that an application requires, including web-based applications, browsers, and native applications.

Know your network – Avoid using unknown Wi-Fi networks and use public Wi-Fi hot spots sparingly. Hackers can create "honey pot" Wi-Fi hot spots intended to attract, and subsequently compromise, mobile devices. Similarly, they troll public Wi-Fi spots looking for unsecured devices. If you have Wi-Fi at home, enable encryption.

Don't publish your mobile phone number – Posting your mobile phone number on a public website can make it a target for software programs that crawl the Web collecting phone numbers that may later receive spam, if not outright phishing attacks.

Use your mobile device as it was set up – Some people use third-party firmware to override settings on their mobile devices (e.g., enabling them to switch service providers). Such "jailbreaking" or "rooting" can result in malware or malicious code infecting the mobile devices.

These are only a few of the strategies and resources available from the industry, but the bottom line is that users play an important role in protecting their devices, especially in what they download and which links they click on. Consumers benefit most from cybersecurity when they are aware of the variety of security options that are part of their mobile devices.

2. Device
Today's mobile devices are miniature computers. In addition to these truly "smart" phones, there is a growing variety of devices such as tablets and netbook computers that include wireless connectivity. These new mobile devices are more advanced than those sold even five years ago. All computers, including mobile devices, need to be secured to prevent intrusion. Applications downloaded from questionable, or even legitimate, sites can record information typed onto the device (e.g., bank account numbers, passwords and PINs); read data stored on the device (including emails, attachments, text messages, credit card numbers and login/password combinations to corporate intranets); and record conversations (not only telephone calls) within earshot of the phone. A malicious application or malware can transmit any of this information to hackers (including those in foreign countries) who then use the information for nefarious and criminal purposes, such as transferring money out of bank accounts and conducting corporate espionage.

3. Network-Based Security Policies
From a consumer perspective, network operators provide a wealth of tools that can be used to provide improved security and data protection for information that resides on the smart phone or tablet. Such tools include device management capabilities, firewalls and other network-based functionality. These tools give consumers the power to protect their information, but network service providers cannot dictate security policies for consumers to follow. Service providers do, however, offer a wealth of consumer educational materials and practices for enhanced security protection.

4. Authentication and Control
A lost, unlocked smart phone with pre-programmed access to a bank account or a corporate intranet can cause incalculable damage. Authentication control is the process of determining whether a user is authorized to access information stored on the device or over a network connection.

5. Cloud, Networks and Services
Networks deliver many of the applications and services that consumers enjoy today. The complex security solutions the industry provides encompass multiple types of network access connections: the cloud, the Internet backbone, the core network and access network connections.

5. ACT QUICKLY IF YOUR MOBILE PHONE OR PDA IS STOLEN

Report the loss to your organization and/or mobile service provider. If your phone or PDA was issued by an organization or is used to access private data, notify your organization of the loss immediately. If your personal phone or PDA was lost, contact your mobile phone service provider as soon as possible to deter malicious use of your device and minimize fraudulent charges.

Report the loss or theft to local authorities. Depending on the situation, it may be appropriate to notify relevant staff and/or local police.

Change account credentials. If you used your phone or PDA to access any remote resources, such as corporate networks or social networking sites, revoke all credentials that were stored on the lost device. This may involve contacting your IT department to revoke issued certificates or logging into websites to change your password.

If necessary, wipe the phone. Some mobile service providers offer remote wiping, which allows you or your provider to remotely delete all data on the phone.

6. CYBER SAFETY
As our use of these devices increases and expands to new features and functions in other areas such as banking and healthcare, they may hold even more personal data. By following CTIA – The Wireless Association and its members' simple CYBERSAFETY tips, consumers can actively protect themselves and their data.

C – Check to make sure the websites, downloads, SMS links, etc. are legitimate and trustworthy BEFORE you visit them or add them to your mobile device, so you can avoid adware/spyware/viruses/unauthorized charges/etc. Spyware and adware may provide unauthorized access to your information, such as location, websites visited and passwords, to questionable entities. You can validate an application's usage by checking with an application store. To ensure a link is legitimate, search the entity's website and match it to the unknown URL.

Y – Year-round, 24/7, always use and protect your wireless device with passwords and PINs to prevent unauthorized access. Passwords/PINs should be hard to guess, changed periodically and never shared. When you aren't using your device, set its inactivity timer to a reasonably short period (e.g., 1–3 minutes).

B – Back up important files from your wireless device to your personal computer or to a cloud service/application periodically, in case your wireless device is compromised, lost or stolen.

E – Examine your monthly wireless bill to ensure there is no suspicious or unauthorized activity. Many wireless providers allow customers to check their usage 24/7 by using shortcuts on their device, calling a toll-free number or visiting their website. Contact your wireless provider for details.

R – Read user agreements BEFORE installing software or applications on your mobile device. Some companies may use your personal information, including location, for advertising or other uses. Unfortunately, there are some questionable companies that include spyware/malware/viruses in their software or applications.

S – Sensitive and personal information, such as banking or health records, should be encrypted or safeguarded with additional security features, such as Virtual Private Networks (VPNs). For example, many application stores offer encryption software that can be used to encrypt information on wireless devices.

A – Avoid rooting, jail breaking or hacking your mobile device and its software, as it may void your device's warranty and increase the risk of cyber threats to a wireless device.

F – Features and apps that can remotely lock, locate and/or erase your device should be installed and used to protect your wireless device and your personal information from unauthorized users.

E – Enlist your wireless provider and your local police when your wireless device is stolen. If your device is lost, ask your provider to put your account on "hold" in case you find it. In the meantime, your device is protected and you won't be responsible for charges if it turns out the lost device was stolen. The U.S. providers are creating a database designed to prevent smart phones, which their customers report as stolen, from being activated and/or provided service on the networks.

T – Train yourself to keep your mobile device's operating system (OS), software and apps updated to the latest version. These updates often fix problems and possible cyber vulnerabilities. You may need to restart your mobile device after the updates are installed so they are applied immediately. Many smart phones and tablets are like mini-computers, so it's a good habit to develop.

Y – You should never alter your wireless device's unique identification numbers (i.e., International Mobile Equipment Identity (IMEI) and Electronic Serial Number (ESN)). Similar to a serial number, the wireless network authenticates each mobile device based on its unique number.

7. CONCLUSION
Effective cybersecurity – whether for a nation, business, organization or individual – is the result of a partnership between the entity being protected and those in the industry that makes mobile communications possible. All of the participants, from the consumer to the manufacturers, carriers, applications developers, software providers, etc. have a role to play. At every step of the process, there is a shared responsibility for making cybersecurity a priority. The good news is that, as a result of the historical, ongoing and concerted efforts of industry, regulators and lawmakers, public knowledge of the need for heightened cybersecurity has grown and continues to grow.

While achieving political consensus is always a challenge, there appears to be a widespread understanding among policymakers that a single legislative "fix" for cybersecurity does not exist; therefore, a flexible approach to legislation in the wireless arena is necessary. The threat landscape is, by definition, a non-static one. Enabling cybersecurity, as a result, cannot be achieved by following a set list of mandated criteria. Even if such a list were to exist, it would be outdated the same day it was established.

Cybersecurity threats and vulnerabilities can change from day to day, and even hour to hour. The effective steps for managing cyber risks today are unlikely to suffice for very long. Maintaining security in a wireless environment is a constantly evolving dynamic.

Policymakers nevertheless play an important role in cybersecurity. Policy efforts that are informed by the realities of the cybersecurity atmosphere – no silver bullet, no single fix, many moving parts and all of them interdependent – are a must.


Ms. Arpna Saxena is working as Assistant Professor with Ajay Kumar Garg Engineering College, Ghaziabad. She completed her MCA in 2003 from HNBU University, Uttaranchal, and M.Tech. in 2014 from Guru Gobind Singh University, Delhi.



APPLICATIONS OF PALM VEIN AUTHENTICATION TECHNOLOGY

Indu Verma
Assistant Professor, MCA Department, AKGEC, Ghaziabad (U.P.)

E-mail: [email protected]

Abstract– Palm vein authentication uses the vascular patterns of an individual's palm as personal identification data. Compared with a finger or the back of a hand, a palm has a broader and more complicated vascular pattern and thus contains a wealth of differentiating features for personal identification. The vein information is hard to duplicate since veins are internal to the human body, so the technology offers a high level of accuracy. This paper discusses the contactless palm vein authentication device that uses blood vessel patterns as a personal identifying factor.

I. INTRODUCTION
Palm vein authentication uses the vascular patterns of an individual's palm as personal identification data. Compared with a finger [1] or the back of a hand, a palm has a broader and more complicated vascular pattern and thus contains a wealth of differentiating features for personal identification. The palm is an ideal part of the body for this technology; it normally does not have hair, which can be an obstacle for photographing the blood vessel pattern, and it is less susceptible to a change in skin color, unlike a finger or the back of a hand. The deoxidized hemoglobin in the vein vessels absorbs light having a wavelength of about 7.6 × 10⁻⁴ mm (760 nm) within the near-infrared range [2]. When the infrared ray image is captured, unlike the visible image in Fig. 1, only the blood vessel pattern containing the deoxidized hemoglobin is visible as a series of dark lines (Fig. 2). Based on this feature, the vein authentication device translates the black lines of the infrared ray image into the blood vessel pattern of the palm (Fig. 3), and then matches it with the previously registered blood vessel pattern of the individual.

Fig. 1: Visible ray image; Fig. 2: Infrared ray image; Fig. 3: Extracted vein pattern; Fig. 4: Palm vein sensor.
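As a rough illustration of the dark-line translation step just described, the following Python sketch isolates locally dark structures in a grayscale near-infrared palm image. It is illustrative only: the use of the OpenCV library, the function name and the threshold settings are assumptions, not the actual sensor pipeline.

import cv2

def extract_vein_pattern(ir_image_path):
    # Load the near-infrared palm image as 8-bit grayscale.
    img = cv2.imread(ir_image_path, cv2.IMREAD_GRAYSCALE)
    # Smooth sensor noise before thresholding.
    blurred = cv2.GaussianBlur(img, (5, 5), 0)
    # Veins appear as lines darker than their surroundings, so an
    # inverted adaptive threshold marks them as foreground (255).
    # Block size 15 and offset 4 are tuning assumptions.
    return cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY_INV, 15, 4)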

Biometrics are automated methods of recognizing a person based on a physiological or behavioral characteristic. Among the features measured are: face, fingerprints, hand geometry, handwriting, iris, retina, vein, and voice. Biometric systems are superior because they provide a nontransferable means of identifying people, not just cards or badges. The key point about an identification method that is "nontransferable" is that it cannot be given or lent to another individual, so nobody can get around the system; each person has to go through the control point personally. The fundamentals of biometrics are that they are things about a person that are:

- Measurable – things that can be counted, numbered or otherwise quantified

- Physiological characteristics – like height, eye color, fingerprint, DNA, etc.

- Behavioral characteristics – such as the way a person moves, walks, or types.

In a practical biometric system (i.e., a system that employs biometrics for personal recognition), there are a number of other issues that should be considered, including: performance, which refers to the achievable recognition accuracy and speed, the resources required to achieve them, and the operational and environmental factors that affect them; acceptability, which indicates the extent to which people are willing to accept the use of a particular biometric identifier (characteristic) in their daily lives; and circumvention, which reflects how easily the system can be fooled using fraudulent methods. A key advantage of biometric authentication is that biometric data is based on physical characteristics that stay constant throughout one's lifetime and are difficult (some more than others) to fake or change. Biometric identification can provide extremely accurate, secure access to information; fingerprints, palm vein and iris scans produce absolutely unique data sets (when done properly). Automated biometric identification can be done rapidly and uniformly, without resorting to documents that may be stolen, lost or altered. It is not easy to determine which method of biometric data gathering and reading does the "best" job of ensuring secure authentication. Each of the different methods has inherent advantages and disadvantages. Some are less invasive than others; some can be done without the knowledge of the subject; others are very difficult to fake.

Palm vein authentication uses an infrared beam to penetrate the user's hand as it is held over the sensor; the veins within the palm of the user are returned as black lines. Palm vein authentication has a high level of authentication accuracy due to the uniqueness and complexity of vein patterns of the palm.

Fig. 5: Iris scan; Fig. 6: Face recognition; Fig. 7: Fingerprint; Fig. 8: Palm print.

Because the palm vein patterns are internal to the body, this is a difficult method to forge. Also, the system is contactless and hygienic for use in public areas.

II. PREVIOUS WORKS
Biometric authentication is a growing and controversial field in which civil liberties groups express concern over privacy and identity issues. Today, biometric laws and regulations are in process and biometric industry standards are being tested. Biometric recognition is automatic recognition based on "who you are", as opposed to "what you know" (a PIN) or "what you have" (an ID card). Recognition of a person by his or her body, and then linking that body to an externally established identity, forms a very powerful tool for identity management. The figures above show different types of biometric authentication. Canadian airports started using iris scans in 2005 to screen pilots and airport workers. Pilots were initially worried about the possibility that repeated scans would negatively affect their vision, but the technology has improved to the point where that is no longer an issue. Canada Customs uses an iris scan system called CANPASS-Air for low-risk travelers at Pearson airport.

Finger vein authentication is a newer biometric method utilizing the vein patterns inside one's fingers for personal identification. Vein patterns are different for each finger and for each person, and as they are hidden underneath the skin's surface, forgery is extremely difficult. These unique aspects of finger vein pattern recognition set it apart from previous forms of biometrics and have led to its adoption by many financial institutions as their newest security technology.

III. IMPLEMENTATION OF CONTACTLESS PALM VEIN AUTHENTICATION

The contactless palm vein authentication technology consists of image sensing and software technology. The palm vein sensor (Fig. 4) captures an infrared ray image of the user's palm. The lighting of the infrared ray is controlled depending on the illumination around the sensor, and the sensor is able to capture the palm image regardless of the position and movement of the palm. The software then matches the translated vein pattern with the registered pattern, while measuring the position and orientation of the palm by a pattern matching method.
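The matching step itself is proprietary, but a minimal stand-in could score similarity by normalized correlation. This sketch assumes two already position- and orientation-normalized binary vein images; the function name and the decision threshold mentioned in the comment are illustrative.

import numpy as np

def match_score(captured, registered):
    # Normalized correlation between two aligned binary vein images,
    # a simple stand-in for the proprietary pattern-matching step.
    a = captured.astype(np.float64).ravel()
    b = registered.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# A claimed identity is accepted only above a decision threshold,
# chosen from a FAR/FRR analysis such as the one in Section IV.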

Implementation of a contactless identification system enables applications in public places or in environments where hygiene standards are required, such as in medical applications. In addition, sufficient consideration was given to individuals who are reluctant to come into direct contact with publicly used devices.

The first step in all palm vein authentication applications is the enrolment process, which scans the user's palm and records the unique pattern as an encrypted biometric template in the database or on the smart card itself. In banking applications, for example, once a new customer has been issued a smart card, he or she is asked to visit the bank in order to enroll his or her vein data.

Fig. 9: Schematic of the hand vein pattern imaging module.

IV. COMPARISONS
Comparing Bioguard's palm vein technology with the proposal of Mohamed Shahin et al. [5], the Bioguard technology appears to provide advantages such as:

Accuracy and Reliability – The uniqueness and complexity of vein patterns, together with advanced authentication algorithms, ensure unsurpassed accuracy, with a shorter field test duration than iris recognition and near-zero false rejection and false acceptance rates.

Security – Vein patterns are internal and unexposed, making them almost impossible to duplicate or forge. Images are converted into encrypted biometric templates at the sensor level, preventing misuse of the actual image.

Contactless – Hygienic, non-invasive, "no touch" technology enables use when hands are dirty, wet or even wearing some types of latex gloves.



Cost-Effective – Attractively priced while saving the huge potential costs of malpractice litigation, privacy violations, etc. Provides a high level of security at a reasonable cost.

Usability – A compact form factor provides greater flexibility and ease of implementation in a variety of security applications. Palm vein technology supports a variety of banking scenarios:

1. ATMs
2. Walk-in customers
3. Internal branch security
4. Remote banking

The advantage of the approach in [5] is that the Hand Vein Verification System (HVVS) is accurate at low to medium security levels. This hand vein verification is purely system-based and puts more effort into the overall performance of the system: it evaluates the false acceptance rate (FAR, %) and the false rejection rate (FRR, %) at different threshold values to find the optimal threshold, as sketched below.
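A hedged Python sketch of such a threshold sweep follows; the match scores and threshold grid are invented for illustration, since [5] does not publish code.

import numpy as np

def far_frr(genuine_scores, impostor_scores, thresholds):
    # FAR: fraction of impostor attempts accepted (score >= threshold).
    # FRR: fraction of genuine attempts rejected (score < threshold).
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    for t in thresholds:
        far = 100.0 * np.mean(impostor >= t)
        frr = 100.0 * np.mean(genuine < t)
        print(f"threshold={t:.2f}  FAR={far:5.1f}%  FRR={frr:5.1f}%")

# Invented match scores; the optimal threshold is often taken where
# the FAR and FRR curves cross (the equal error rate).
far_frr([0.91, 0.88, 0.95, 0.79], [0.42, 0.55, 0.61, 0.70],
        np.linspace(0.4, 1.0, 7))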

V. CONCLUSION
Reliable personal recognition is critical to many applications in our day-to-day life.

Biometrics refers to automatic recognition of an individual based on his or her behavioural and/or physiological characteristics. It is obvious that any system assuring reliable personal recognition must necessarily involve a biometric component. This is not, however, to state that biometrics alone can deliver a reliable personal recognition component. Biometric-based systems also have some limitations that may have adverse implications for the security of a system. While some of the limitations of biometrics can be overcome with the evolution of biometric technology and careful system design, it is important to understand that foolproof personal recognition systems simply do not exist and perhaps never will. Security is a risk management strategy that identifies, controls, eliminates, or minimizes uncertain events that may adversely affect system resources and information assets. The security level of a system depends on the requirements (threat model) of an application and the cost-benefit analysis.

As biometric technology matures, there will be increasing interaction among the market, technology, and applications. This interaction will be influenced by the added value of the technology, user acceptance, and the credibility of the service provider. It is too early to predict where and how biometric technology will evolve and in which applications it will become embedded. But it is certain that biometric-based recognition will have a profound influence on the way we conduct our daily business.

REFERENCES
[1] N. Miura, A. Nagasaka, and T. Miyatake, "Extraction of Finger-Vein Patterns Using Maximum Curvature Points in Image Profiles," in Proceedings of the 9th IAPR Conf. on Machine Vision Applications (MVA2005), Tsukuba Science City, Japan, 2005, pp. 347–350.

[2] Bio-informatics Visualization Technology Committee, Bio-informatics Visualization Technology (Corona Publishing, 1997), p. 83, Fig. 3.2.

[3] "Palm Vein Authentication System: A Review", International Journal of Control and Automation, Vol. 3, No. 1, March 2010.

[4] "Palm vein authentication technology and its applications", in Proceedings of the Biometric Consortium Conference, September 19–21, 2005, Hyatt Regency Crystal City, Arlington, VA, USA.

[5] M. Shahin, A. Badawi, and M. Kamel, "Biometric Authentication Using Fast Correlation of Near Infrared Hand Vein Patterns", International Journal of Biological and Medical Sciences, Vol. 2, No. 1, Winter 2007, pp. 141–148.

[6] S. Zhao, Y. Wang and Y. Wang, "Extracting Hand Vein Patterns from Low-Quality Images: A New Biometric Technique Using Low-Cost Devices", Fourth International Conference on Image and Graphics, 2007.

[7] M. Watanabe, T. Endoh, M. Shiohara, and S. Sasaki, "Palm vein authentication technology and its applications", The Biometric Consortium Conference, September 19–21, 2005, USA, pp. 1–2.

[8] "Palm Vein Authentication Technology" white paper, Bioguard, Innovative Biometric Solutions, March 2007.

[9] Y. Ding, D. Zhuang and K. Wang, "A Study of Hand Vein Recognition Method", IEEE International Conference on Mechatronics & Automation, Niagara Falls, Canada, July 2005.

Indu Verma is working as an Assistant Professor in Ajay Kumar Garg Engineering College, Ghaziabad, U.P. (India). She obtained her M.Tech. (Computer Engineering) with Hons. from Shobhit University and MCA from U.P. Technical University, Lucknow (U.P.). She has been teaching for the last 9.5 years and has been a member of several academic and administrative committees. During her teaching tenure she has also coordinated a National Conference and many technical fests at college level. She has attended several seminars, workshops and conferences at various levels, and has papers published at national and international conferences. Her areas of research include biometric system authentication, computer networks, network security and databases.


PON TOPOLOGIES FOR DYNAMIC OPTICAL ACCESS NETWORKS

Sanjeev K. Prasad
Asst. Prof., MCA Department, AKGEC, Ghaziabad

[email protected]

Abstract– PON-based access networks envisage the demonstration of scalability to allow gradual deployment of time- and wavelength-multiplexed architectures in a single platform without changes in fiber infrastructure, as well as highly efficient bandwidth allocation for service provision and upgrade on demand. This is achieved by the application of coarse-fine grooming to route reflective ONUs of time and wavelength PONs collectively, and by the development of MAC protocols enabling the OLT to dynamically assign available time slots among ONUs and demonstrate efficient bandwidth and wavelength assignment.

Keywords: PON, coarse WDM (CWDM), dynamic bandwidth allocation (DBA), quality of service (QoS).

1. INTRODUCTION
The emergence of new bandwidth-intensive applications, articulated by distance learning, online gaming, Web 2.0 and movie delivery by means of high-definition video, has ultimately justified the necessity of upgrading the access network infrastructure to provide fat-bandwidth pipelines in close proximity to subscribers. Passive optical networks (PONs) currently offer more opportunities to communicate these services than ever before, with potential connection speeds of up to 100 Mbit/s in mind [1]. A scalable multi-PON access network architecture [2] has been investigated in that direction to provide interoperability among dynamic time division multiplexing (TDM) and wavelength division multiplexing (WDM) PONs through coarse WDM (CWDM) routing in the optical line terminal (OLT). To provide bandwidth on demand, a novel TDM dynamic minimum bandwidth (DMB) allocation protocol and an upgraded version have been proposed to achieve quality of service (QoS) at three different service levels and diverse network throughputs [3]. In addition, to allow for WDM-PON resource allocation and to overcome the inevitable network congestion of single-wavelength networks, the developed medium access control (MAC) protocols have been extended to implement logical point-to-point topologies based on general loop-back WDM-PON architectures [2], increasing service provisioning between reflective optical network units (ONUs) and the OLT by dynamically distributing network capacity simultaneously between the upstream and downstream.

2. NETWORK ARCHITECTURE
The network architecture in Fig. 1 exhibits a single 4x4 coarse array waveguide grating (AWG) in the OLT to route multiple TDM and WDM-PONs by means of a single tunable laser (TL1) and receiver (RX1), allowing for coarse-fine grooming to display smooth network upgrade. Proposed coarse AWG devices display 7 nm, 3 dB Gaussian passband windows [4], denoted in Fig. 1 by coarse ITU-T channels λ1 = 1530 nm and λ2 = 1550 nm, set to accommodate up to 16, 0.4 nm-spaced wavelengths to address a total of 16 ONUs per PON. In downstream, TL1 will optimally utilize λ1,9, placed at the centre of the AWG coarse channel λ1, to broadcast information to all ONUs of TDM-PON1. To address a WDM-PON, TL1 will switch on all 16 wavelengths, centered ±3.2 nm around coarse channel λ2, i.e. λ2,1–16, to address jointly all ONUs in WDM-PON4 [5].

Figure 1. Unlimited-capacity multi-PON access network architecture.

The established network interoperability is a key feature since it allows a smooth migration from single- to multi-wavelength optical access to address increasing bandwidth requirements. Reflective semiconductor optical amplifier (RSOA)-based ONUs are universally employed, avoiding the necessity of wavelength-specific, local optical sources. The use of multiple transceivers in a single OLT to serve all reflective PONs [5] allows for centralized control to distribute ONU capacity among upstream and downstream on demand and concurrently provide each PON with multiple wavelengths for enhanced bandwidth allocation flexibility. Finally, the network exhibits increased scalability since extra TLs can be directly applied at unused AWG ports in the OLT, e.g. TL2 at I/O port 2, to maintain high network performance at increased traffic load with a low OLT inventory count, since the ratio of subscriber numbers to OLT transceivers is comparatively high.

3. NETWORK BANDWIDTH MANAGEMENT
To allow centralized, dynamic bandwidth allocation among the architecture's TDM and WDM-PONs, the OLT, according to the developed algorithms, will assign varying frame time-slots, initially to each PON in order of demand, and subsequently arrange each PON's bandwidth among its ONUs based on their service level and individual bandwidth requirements [3, 6]. In that direction, the DMB protocol [3] provides ONUs in TDM-PONs with three service levels at different weights, Wt, to represent network accessing priority. Subsequently the algorithm automatically assigns to each ONU a guaranteed minimum bandwidth, Bt_min, from the overall network capacity, to satisfy their basic service requirements at the various service levels t, and apportions any unused bandwidth to ONUs according to their buffer queuing status. Following probable variations in network capacity, it is capable of readjusting the guaranteed minimum and unused bandwidths among ONUs to comply with subscriber contracts. For service level t, for example, the maximum allocated bandwidth Bmax_allocated for ONUi will be equal to the sum of Bt_min and the extra assigned bandwidth, Bex_assigned. Otherwise, if the bandwidth requirement, Ri, is smaller than that total, Bmax_allocated will be equal to the required bandwidth, Ri, as given by (1).
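Equation (1) itself did not survive reproduction here; the following is a hedged LaTeX reconstruction from the prose above, equivalent to taking the minimum of the two quantities:

B^{i}_{\mathrm{max\_allocated}} = \min\!\left( B^{t}_{\min} + B_{\mathrm{ex\_assigned}},\; R_i \right) \qquad (1)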

To further reduce the packet waiting time in the ONUs, the self-similarity characteristics of network traffic have been incorporated into the DMB protocol [6]. In addition, since the upstream and downstream channels are independent, the grant messages for subsequent polling cycles can be communicated before the last ONU has finished its upstream transmission. Consequently, the OLT in what is known as the advanced DMB (ADMB) protocol possesses the capability to automatically re-arrange the upstream transmission order by assigning the ONU with the longest upstream transmission period to the last upstream time slot, reducing the idle period and increasing the overall network throughput. Contrasting the ADMB protocol with published dynamic bandwidth assignment algorithms, simulation results have shown [6] substantial reductions in mean packet delay, particularly at high network load, in relation to the IPACT algorithm [7].
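The re-ordering rule is simple enough to sketch in a few lines of Python. This is a hedged illustration of the scheduling idea only; the ONU names and grant sizes are invented, and the actual protocol operates on reported queue states rather than a fixed dictionary.

def admb_order(grants):
    # ADMB-style re-ordering (an illustrative reading of the rule):
    # schedule ONUs in ascending order of granted upstream time, so
    # the ONU with the longest transmission occupies the last slot
    # and the grant messages for the next cycle can be sent while it
    # is still transmitting, shrinking the inter-cycle idle period.
    # `grants` maps ONU id -> granted upstream bytes (invented values).
    return sorted(grants, key=grants.get)

print(admb_order({"onu1": 12000, "onu2": 3500, "onu3": 40000}))
# -> ['onu2', 'onu1', 'onu3']  (onu3, the largest grant, goes last)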

4. LONG-REACH PONS
The scalability of the presented network architecture is currently being further explored by investigating its ability to demonstrate a long-reach, wide-splitting-ratio network shared by numerous TDM and WDM-PONs [8], offering significant cost savings in an integrated access-metro network infrastructure [8]. The current architecture is believed to be straightforwardly extendable to such a topology since it already serves a number of different physical PON locations with dedicated coarse wavelength channels. To that extent, extensive splitting ratios can be achieved by exploiting the decentralized coarse routing capability of the architecture to allow amplification units to be installed en route if necessary, and to allocate an increased number of dedicated wavelength channels to each physical PON using the multiple free spectral ranges (FSRs) of the CWDM AWG.

In view of the MAC layer of a typical 100 km long-reach TDM-PON, the direct implementation of the DMB protocol has displayed limited performance in terms of bandwidth utilization due to a recorded 500 μs increase in packet propagation time compared to 25 km PONs, exhibiting a total of 1000 μs-wide idle time-slots in each transmission cycle as a result of the report and grant packet polling times. To achieve acceptable channel throughput and packet delay performance, an innovative two-state DMB (TSD) protocol has been demonstrated that utilizes the idle time slots in the sense of virtual polling cycles, during which the ONUs can transmit data by means of a prediction method estimating their bandwidth requirement [9]. The amounts of virtual bandwidth allocated to ONUs are determined by the DMB algorithm, with each estimated ONU bandwidth requirement and the 1000 μs idle period regarded as the maximum polling period parameter [9]. As a result, simulation results have presented a significant 34% improvement in channel throughput performance as well as a reduction in packet delay and packet loss rate [9].

5. HYBRID WIRELESS OPTICAL NETWORKS
The need for convergence of wired and low-cost wireless technologies in the access network, providing end users with connection flexibility and mobility, is being explored by investigating the interoperability of the multi-PON architecture [2] with WiMAX. This could offer network resilience in case of fiber failure to individual ONUs through the use of overlapping WiMAX cells, while allowing for efficient dynamic resource allocation to base-station ONUs of a TDM-PON to provide additional WiMAX channels by means of a centralized signal processing approach in the OLT.

6. CONCLUSIONS
The access network architecture presented in this paper utilizes the coarse channels of an AWG in the OLT and reflective ONUs to demonstrate dynamic TDM and WDM-PONs through a single OLT with coarse-fine grooming features. The use of a single-AWG, single-TL OLT to address multiple reflective ONUs of a WDM-PON has demonstrated error-free routing of 16, 0.4 nm-spaced wavelengths over a single, 7 nm-wide Gaussian passband window of the AWG in the presence of PDW shifting. To manage packet transmission, several DBA protocols have been proposed to dynamically and efficiently arrange the bandwidth among ONUs. Depending on the physical-layer architecture, the innovative DMB protocol has successfully been modified into the ADMB and MDMB protocols for TDM and WDM-PONs respectively, to efficiently improve channel throughput and packet delay performance. Recent developments have concentrated on 100 km-reach networks to achieve network performance comparable to standard access PONs in terms of channel utilization rate, packet delay, and packet loss rate, at a 400% wider network coverage, by means of the TSD protocol. Notable initiatives have also been carried out to investigate the application of multiple-wavelength operation over standard splitter-based GPONs, by extending the dynamic bandwidth algorithms to include an additional dimension, that of wavelength, and the integration of WiMAX to terminate wireless users at base-station ONUs with the intention of providing flexibility in resource allocation among end users.

REFERENCES
[1] P.-F. Fournier, "From FTTH pilot to pre-rollout in France," presented at CAI Cheuvreux, France, 2007.

[2] Y. Shachaf, C.-H. Chang, P. Kourtessis, and J. M. Senior, "Multi-PON access network using a coarse AWG for smooth migration from TDM to WDM PON," OSA Optics Express, vol. 15, pp. 7840–7844, 2007.

[3] C.-H. Chang, P. Kourtessis, and J. M. Senior, "GPON service level agreement based dynamic bandwidth assignment protocol," Electronics Letters, vol. 42, pp. 1173–1174, 2006.

[4] J. Jiang, C. L. Callender, C. Blanchetière, J. P. Noad, S. Chen, J. Ballato, and D. W. Smith, "Arrayed Waveguide Gratings Based on Perfluorocyclobutane Polymers for CWDM Applications," IEEE Photonics Technology Letters, vol. 18, pp. 370–372, 2006.

[5] Y. Shachaf, P. Kourtessis, and J. M. Senior, "An interoperable access network based on CWDM-routed PONs," presented at the 33rd European Conference and Exhibition on Optical Communication (ECOC), Berlin, Germany, 2007.

[6] C.-H. Chang, P. Kourtessis, and J. M. Senior, "Dynamic bandwidth assignment for multi-service access in GPON," presented at the 12th European Conference on Networks and Optical Communications (NOC), Stockholm, Sweden, 2007.

[7] G. Kramer, B. Mukherjee, and G. Pesavento, "IPACT: a dynamic protocol for an Ethernet PON (EPON)," IEEE Communications Magazine, vol. 40, pp. 74–80, 2002.

[8] R. P. Davey and D. B. Payne, "The future of fiber access systems?" BT Technology Journal, vol. 20, pp. 104–114, 2002.

[9] C.-H. Chang, N. M. Alvarez, P. Kourtessis, and J. M. Senior, "Dynamic bandwidth assignment for multi-service access in long-reach GPON," presented at the 33rd European Conference and Exhibition on Optical Communication (ECOC), Berlin, Germany, 2007.

Sanjeev K. Prasad is an Assistant Professor in AKGEC, Ghaziabad, an affiliated college of Uttar Pradesh Technical University, Lucknow (India). He received an M.Tech. degree in Computer Science & Engineering from Uttar Pradesh Technical University, Lucknow (India), and is pursuing a Ph.D. in Computer Science from Gurukula Kangri Vishwavidyalaya, Haridwar (India). Before joining this college, he served at several other colleges affiliated to Uttar Pradesh Technical University, Lucknow (India). He has published 4 papers in international journals and 3 papers in international conferences. His areas of interest include Mobile Ad-Hoc Networks and Computer Graphics.



AN OVERVIEW OF SEMANTIC SEARCH SYSTEMS
Dr. Pooja Arora

Assistant Professor, MCA Department, AKGEC, Ghaziabad, India

[email protected]

Abstract– Research on semantic search aims to improve conventional information search and retrieval methods, and to facilitate information acquisition, processing, storage and retrieval on the semantic web. The past ten years have seen a number of implemented semantic search systems and various proposed frameworks. A comprehensive survey is needed to gain an overall view of current research trends in this field. This article reports such a survey and its findings, based on which a generalized semantic search framework is formalized. Further, issues with regard to future research in this area are described.

Keywords– Semantic Search, Knowledge Acquisition, Semantic Web, Information Retrieval.

1. INTRODUCTION
Research in the information retrieval (IR) community has developed a variety of techniques to help people locate relevant information in large document repositories. Besides classical IR models (i.e., the Vector Space and Probabilistic Models) [7], extended models such as Latent Semantic Indexing, machine learning based models (i.e., Neural Network, Symbolic Learning, and Genetic Algorithm based models) and Probabilistic Latent Semantic Analysis (PLSA) have been devised in the hope of improving the information retrieval process. However, the rapid expansion of the Web and its growing wealth of information pose increasing difficulties for retrieving information efficiently on the Web. To arrange more relevant results at the top of the retrieved sets, most contemporary Web search engines utilize ranking algorithms such as PageRank, HITS, and Citation Indexing that exploit link structures to rank the search results. Despite this substantial success, those search engines face perplexity in certain situations due to the information overload problem on one hand, and a superficial understanding of user queries and documents on the other. The semantic web is an extension of the current Web in which resources are described using logic-based knowledge representation languages for automated machine processing across heterogeneous systems. In recent years, its related technologies have been adopted to develop semantic-enhanced search systems. The significance of research in this area is clear for two reasons: it supplements conventional information retrieval by providing search services centered on entities, relations, and knowledge; and the development of the semantic web also demands enhanced search paradigms in order to facilitate acquisition, processing, storage, and retrieval of semantic information. This article provides a survey to gain an overall view of the current research status [1].

2. SEMANTIC SEARCH SYSTEMS
Conventional search techniques are developed on the basis of a word computation model and enhanced by link analysis. On one hand, semantic search extends the scope of the traditional information retrieval paradigm from mere document retrieval to entity and knowledge retrieval; on the other hand, it improves conventional IR methods by looking at a different perspective: the meaning of words, which can be formalized and represented in machine-processable format using ontology languages such as RDF and OWL. For example, an arbitrary resource or entity can be described as an instance of a class in an ontology, having attribute values and relations with other entities. With the logical representation of resources, a semantic search system is able to retrieve meaningful results by drawing inference on the query and knowledge base. As a simple example, the meaning of the query "people in School of Computer Science" will be interpreted by a semantic search system as individuals (e.g., professors and lecturers) who have relations (e.g., work for or affiliated with) with the school, as the sketch below illustrates. On the contrary, conventional IR systems interpret the query based on its lexical form. Web pages in which the words "people" and "computer science" co-occur are probably retrieved. The cost is that users have to extract useful information from a number of pages, possibly querying the search engine several times.
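The following is a minimal sketch of this semantic reading, assuming the Python rdflib library; the namespace, the individuals, and the property names (worksFor, affiliatedWith) are hypothetical stand-ins for a real ontology.

from rdflib import Graph, Namespace, RDF

# Hypothetical mini knowledge base (invented for illustration).
EX = Namespace("http://example.org/univ#")
g = Graph()
g.add((EX.alice, RDF.type, EX.Professor))
g.add((EX.alice, EX.worksFor, EX.SchoolOfCS))
g.add((EX.bob, RDF.type, EX.Lecturer))
g.add((EX.bob, EX.affiliatedWith, EX.SchoolOfCS))

# The semantic reading of "people in School of Computer Science":
# anyone related to the school via worksFor or affiliatedWith.
results = g.query("""
    PREFIX ex: <http://example.org/univ#>
    SELECT ?person WHERE {
        { ?person ex:worksFor ex:SchoolOfCS }
        UNION
        { ?person ex:affiliatedWith ex:SchoolOfCS }
    }""")
for row in results:
    print(row.person)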

2.1 Document-oriented Search
Document-oriented search can be thought of as an extension of the conventional IR approaches, where the primary goal is to retrieve documents such as web pages and text documents, document fragments, scientific publications and ontologies. In search systems of this category, documents are annotated using logic-based knowledge representation languages with respect to annotation and domain ontologies, to provide approximate representations of the topics or contents of the original documents. The retrieval process is carried out by matching user queries with the resulting semantic annotations [3]. The early work in the SHOE search system helps users construct constrained logical queries by specifying attribute values of ontology classes to retrieve precise results. Its main limitations are the manual annotation process and the lack of inference support. There has been considerable work on the significance of logical reasoning for semantic search systems. OWLIR adopts an integrated approach that combines logical inference and traditional information retrieval techniques. A document is represented using the original text and semantic markup. The semantic markup and the domain ontology are exploited by logical reasoners to provide enhanced search results. A conventional IR system based on the original text is integrated into the semantic search system to provide complementary results in case no result is returned by the semantic search.

2.2 Entity and Knowledge-oriented Search
The entity and knowledge-oriented search method expands the scope of conventional IR systems, which solely retrieve documents. Systems based on this method often model ontologies as directed graphs and exploit links between entities to provide exploration and navigation. Attribute values and relations of the retrieved entities are also shown to provide additional knowledge and rich user experiences. TAP is one of the first empirical studies of large-scale semantic search on the Web [4]. It improves traditional text search by understanding the denotation of the query terms, and by augmenting search results using a broad-coverage knowledge base with an inference mechanism based on graph traversal. Recently a number of systems such as CS-Aktive, RKB Explorer, and SWSE have been implemented based on the principles of the "open world" and "linked data", which coincide with the spirit of the semantic web of being "distributed".

2.3 Multimedia Information Search
In ontology-based image annotation and search systems, resources are annotated by domain experts or using text around images in documents. Early works adopt a manual approach, which limits scalability. Subsequent work performs query expansion to retrieve semantically related images by drawing logical inference on its domain ontology (e.g., using the subsumption relation between concepts). Falcon-S annotates images with metadata by crawling and parsing pages on the Web. Disambiguation of resources with the same labels is resolved using context information derived from user queries. Squiggle is a semantic framework to help build domain-specific semantic search applications for indexing and retrieving multimedia items. Its knowledge representation model is based on the SKOS vocabulary, which enables the system to suggest meanings of queries through a simple inference process, e.g., suggesting alternative labels or synonyms for an image. As a result, images annotated with one label can be retrieved using the image's alternative labels.

2.4 Relation-centered Search
The relation-centered semantic search approach pays special attention to relations between query terms implicitly expressed by users. It usually performs an additional query pre-processing step through the use of external language lexicons, or an inference process using a knowledge base. AquaLog is a question-answering system. It implements similarity services for relations and classes to identify synonyms of verbs and nouns appearing in a query using WordNet. The obtained synonyms are used to match property and entity names in the knowledge base for answering queries. SemSearch supports natural language queries and translates them into formal queries for reasoning. An entity referred to by a keyword is matched against a subject, predicate or object in the knowledge base using combinations. The above two systems process entity relations using a language lexicon or word combinations.

2.5 Semantic Analytics
Semantic analytics, also known as semantic association analysis, was introduced by Sheth et al. to discover new insights and actionable knowledge from large amounts of heterogeneous content. It is essentially a graph-theoretic approach that represents, discovers and interprets complex relationships between resources [5]. By modeling an RDF database as a directed graph, relationships between entities in the knowledge base are represented as graph paths consisting of a sequence of links. Search algorithms for semantic associations, such as breadth-first and heuristics-based search, have been discussed in the literature; a breadth-first enumeration of such association paths is sketched below. Effective ranking algorithms are indispensable because the number of relationships between entities in a knowledge base might be much larger than the number of entities. In [2] the ranking evaluates a number of parameters: context, subsumption, trust, rarity, popularity, and association length.
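A hedged toy illustration of breadth-first association discovery (the graph representation and entity names are invented; real systems operate over RDF stores):

from collections import deque

def semantic_associations(graph, source, target, max_hops=4):
    # Breadth-first enumeration of paths (semantic associations)
    # between two entities in a directed, labelled graph.
    # `graph` maps entity -> list of (predicate, entity) edges,
    # a toy stand-in for an RDF store.
    queue = deque([(source, [source], 0)])
    while queue:
        node, path, hops = queue.popleft()
        if node == target:
            yield path
            continue
        if hops >= max_hops:
            continue
        for predicate, nxt in graph.get(node, []):
            if nxt not in path:  # avoid revisiting entities (cycles)
                queue.append((nxt, path + [predicate, nxt], hops + 1))

toy = {
    "PaperA": [("authoredBy", "Alice")],
    "Alice": [("memberOf", "LabX")],
    "LabX": [("partOf", "UnivY")],
}
for assoc in semantic_associations(toy, "PaperA", "UnivY"):
    print(" -> ".join(assoc))
# PaperA -> authoredBy -> Alice -> memberOf -> LabX -> partOf -> UnivY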

2.6 Mining-based Search
The effectiveness of semantic search depends largely on the quality and coverage of the underlying knowledge base. The search methodologies discussed so far utilise either explicit knowledge, which is asserted in the knowledge base, or implicit knowledge, which is derived using logical inference with rules [6]. Another kind of knowledge, which we refer to as "hidden knowledge", cannot be easily obtained using techniques such as information extraction, natural language processing, logical inference, and semantic analytics. Consider, for example, "Who are the experts in the semantic web research community?" or "Which institutions rank highly in the machine learning research area?". Such knowledge can only be derived from large amounts of data by using some sort of sophisticated data analysis technique. We refer to approaches that utilise such techniques to infer hidden knowledge as mining-based semantic search.

3. CONCLUSION AND FUTURE WORK
Research on semantic search aims to expand the scope and improve the retrieval quality of conventional IR techniques. I have investigated a number of existing systems and classified them into several categories in accordance with their methodologies, scope, functionalities, as well as their most distinctive features. The main finding is: though a variety of systems have been developed, a logical semantic search framework has not been formalized.

REFERENCES
[1] H. Alani, K. O'Hara and N. Shadbolt, "Ontocopi: Methods and tools for identifying communities of practice," Intelligent Information Processing 2002, Vol. 221, pp. 225–236.

[2] B. Aleman-Meza, C. Halaschek-Wiener, I. B. Arpinar, C. Ramakrishnan and A. P. Sheth, "Ranking complex relationships on the semantic web," IEEE Internet Computing, Vol. 9, No. 3, 2005, pp. 37–44.

[3] B. Aleman-Meza, M. Nagarajan, C. Ramakrishnan, L. Ding, P. Kolari, A. P. Sheth, I. B. Arpinar, A. Joshi and T. Finin, "Semantic analytics on social networks: experiences in addressing the problem of conflict of interest detection," WWW 2006, ACM, pp. 407–416.

[4] K. Anyanwu, A. Maduko and A. P. Sheth, "SemRank: ranking complex relationship search results on the semantic web," WWW 2005, ACM, pp. 117–127.

[5] K. Anyanwu and A. P. Sheth, "ρ-Queries: Enabling Querying for Semantic Associations on the Semantic Web," WWW 2003, pp. 690–699.

[6] D. Artz and Y. Gil, "A survey of trust in computer science and the semantic web," J. Web Sem., Vol. 5, No. 2, 2007, pp. 58–71.

[7] R. A. Baeza-Yates and B. A. Ribeiro-Neto, Modern Information Retrieval, ACM Press / Addison-Wesley, 1999.


A SOFT INTRODUCTION TO MACHINE LEARNING
Anurag Sharma

Student, MCA Department
[email protected]

Abstract– Machine Learning is an important area of Artificial Intelligence. It involves automating the procurement of the knowledge base required for the functioning of expert systems. In this paper we introduce machine learning in its softest form: what it is, why it is so important, and what its various applications are in today's scenario of the ongoing development of artificial intelligence.

INTRODUCTION
Machine Learning is the study of computational methods for improving performance by mechanizing the acquisition of knowledge from experience [1]. Evolved from the study of pattern recognition and computational learning theory in artificial intelligence, it has now grown into a subfield of computer science and an important area to understand if one wants to efficiently implement knowledge in a machine. A machine can be made to learn to respond to a problem in two ways: manually and automatically. To do it manually is to write programs which teach the machine how to respond to a particular situation; but wouldn't it be wonderful if the machine could understand it by itself and act accordingly? That's where machine learning plays its part.

In this paper we will learn about the algorithms used to implement machine learning, why it's so important, and what some of its real-world applications are.

DIFFERENT STYLES OF MACHINE LEARNING

Supervised Learning
Input data is called training data and has a known label or result, such as spam/not-spam or a stock price at a time. A model is prepared through a training process where it is required to make predictions and is corrected when those predictions are wrong. The training process continues until the model achieves a desired level of accuracy on the training data. Example problems are classification and regression. Example algorithms include Logistic Regression and the Back Propagation Neural Network.

Unsupervised Learning
Input data is not labelled and does not have a known result. A model is prepared by deducing structures present in the input data. This may be to extract general rules, it may be through a mathematical process to systematically reduce redundancy, or it may be to organize data by similarity. Example problems are clustering, dimensionality reduction and association rule learning. Example algorithms include the Apriori algorithm and k-Means.

Semi-Supervised Learning
Input data is a mixture of labelled and unlabelled examples. There is a desired prediction problem, but the model must learn the structures to organize the data as well as make predictions. Example problems are classification and regression. Example algorithms are extensions to other flexible methods that make assumptions about how to model the unlabelled data. [2]

Reinforcement Learning
The learner is not told which actions to take, as in most forms of machine learning, but instead must discover which actions yield the most reward by trying them. In the most interesting and challenging cases, actions may affect not only the immediate reward but also the next situation and, through that, all subsequent rewards. These two characteristics – trial-and-error search and delayed reward – are the two most important distinguishing features of reinforcement learning. [3]

ALGORITHMS OF MACHINE LEARNING
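As a small illustration of the first two styles described above, the sketch below trains the two example algorithms named there, Logistic Regression (supervised) and k-Means (unsupervised), on a toy data set. The use of the scikit-learn library and the toy numbers are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Supervised: labelled training data, learn to predict the label.
X = np.array([[1.0], [2.0], [3.0], [8.0], [9.0], [10.0]])
y = np.array([0, 0, 0, 1, 1, 1])          # known labels, e.g. not-spam/spam
clf = LogisticRegression().fit(X, y)
print(clf.predict([[2.5], [8.5]]))        # -> [0 1]

# Unsupervised: no labels, discover structure by similarity.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)                          # the two clusters found in X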

WHY IS MACHINE LEARNING SO IMPORTANT?
Things like growing volumes and varieties of available data, computational processing that is cheaper and more powerful, and affordable data storage have made machine learning important in today's scenario.

All of these things mean it's possible to quickly and automatically produce models that can analyze bigger, more complex data and deliver faster, more accurate results – even on a very large scale. The result? High-value predictions that can guide better decisions and smart actions in real time without human intervention.


One key to producing smart actions in real time is automated model building. Analytics thought leader Thomas H. Davenport wrote in The Wall Street Journal that with rapidly changing, growing volumes of data, "... you need fast-moving modeling streams to keep up." And you can do that with machine learning. He says, "Humans can typically create one or two good models a week; machine learning can create thousands of models a week."

APPLICATIONS OF MACHINE LEARNING
Some real-world applications of machine learning are:

- Face recognition
- Anti-virus
- Email filtering
- Genetics
- Signal denoising
- Weather forecasting
- Computer vision
- Internet fraud detection

CONCLUSION
Machine learning is a type of artificial intelligence that provides computers with the ability to learn without being explicitly programmed. Machine learning focuses on the development of computer programs that can teach themselves to grow and change when exposed to new data. The process of machine learning is similar to that of data mining. Both systems search through data to look for patterns. However, instead of extracting data for human comprehension – as is the case in data mining applications – machine learning uses that data to improve the program's own understanding. Machine learning programs detect patterns in data and adjust program actions accordingly. It was thus seen that machine learning not only plays an important part in making a machine more intelligent without human intervention, but is also a very elaborate yet interesting field of computer science. With the help of machine learning, we can not only make a system which can predict its own actions, we can make a system which is close to our own mind.

REFERENCES
[1] Pat Langley, "Applications of Machine Learning and Rule Induction", Stanford University, Stanford, California (USA), March 30, 1995, p. 1.

[2] Jason Brownlee, "A Tour of Machine Learning Algorithms", Machine Learning Mastery, November 25, 2013.

[3] Richard S. Sutton and Andrew G. Barto, "Reinforcement Learning: An Introduction", The MIT Press, Cambridge, London (England).

Anurag Sharma is pursuing MCA from Ajay Kumar Garg Engineering College, Ghaziabad, U.P. (India). He has completed his graduation from CCS University. His hobbies are traveling and reading.