
PROCEEDINGS OF NATIONAL CONFERENCE ON MODERN TRENDS IN PHYSICS

(NCMTP-2017)

MARCH 10-11, 2017

Under Technical Education Quality Improvement Programme

TEQIP-1.1 phase-II

Organized by

Department of Basic Science & Humanities

(Physics)

Narula Institute of Technology (An Autonomous Institute under MAKAUT)

81, Nilgunj Road, Agarpara,

Kolkata –700 109

Proceedings of National Conference on

MODERN TRENDS IN PHYSICS (NCMTP- 2017)

MARCH 10-11, 2017

UNDER TECHNICAL EDUCATION QUALITY IMPROVEMENT PROGRAMME

TEQIP-1.1, PHASE-II

Editor

Dr. Susmita Karan

TIC (Physics) & Asst. Professor, BS&HU

Narula Institute of Technology, Kolkata

Organized by

Department of Basic Science & Humanities

(Physics)

Narula Institute of Technology (An Autonomous Institute under MAKAUT)

81, Nilgunj Road, Agarpara,

Kolkata –700 109

2018

Ideal International E-Publication Pvt. Ltd., www.isca.co.in

427, Palhar Nagar, RAPTC, VIP-Road, Indore-452005 (MP) INDIA

Phone: +91-731-2616100, Mobile: +91-80570-83382

E-mail: [email protected], Website: www.isca.co.in

Title: Proceedings of National Conference on Modern Trends in Physics

Editor: Dr. Susmita Karan

Edition: First

Volume: I

© Copyright Reserved

2017

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the publisher.

ISBN: 978-93-86675-29-3

NCMTP-2017 4

Ideal International E- Publication

www.isca.co.in

JIS Group Educational Initiative is the endeavour of Sardar Jodh Singhji, Chairman, JIS Group. Through the years, with his enterprising zeal and vision, the empire of JIS Group has spanned the fields of education, dairy, telecommunication, transportation, infrastructure, logistics, healthcare and social service. His aspiration to serve society by imparting knowledge, education and employment culminated in JIS Group Educational Initiatives, one of the most ambitious entrepreneurial endeavours in Eastern India, creating facilities for higher education, research and industry, and generating jobs for thousands of people.

JIS Group Educational Initiatives has heralded new-age education in West Bengal by offering forward-looking undergraduate and postgraduate programmes. Spread across several sprawling campuses, it runs colleges in engineering, dental science, pharmaceutical sciences, management science and polytechnic studies. The objective has been to create opportunities for students from Eastern India by providing a high-standard education and research platform in engineering, dental science, pharmacy, hospitality management and allied fields.

The journey commenced with a mission

“Igniting Minds, Empowering Lives”

“Learning is the beginning of wealth.

Learning is the beginning of health.

Learning is the beginning of spirituality,

searching and learning is

where the miracle process all begins”

ABOUT JIS GROUP


Narula Institute of Technology is a leading engineering and management college located at Agarpara in West Bengal. It is approved by the All India Council for Technical Education (AICTE) and affiliated to Maulana Abul Kalam Azad University of Technology (MAKAUT). The college offers NBA-accredited degree programmes in engineering. The four-year B.Tech course is offered in streams such as CSE, ECE, EE, CE, IT, EIE and ME. The institute provides a strong platform for pursuing higher studies through PG courses such as M.Tech (CSE, ECE-Communication, EE-Power System, CE-Structural Engineering), MBA and MCA. It has expanded to include diploma programmes in EE, CE and ETC under the affiliation of the West Bengal State Council of Technical Education. The institute is eligible for Central assistance under recognition of sections 2(f) and 12(B) of the UGC Act, and is also accredited by the National Assessment and Accreditation Council (NAAC). The college has received the notable World Bank-assisted and MHRD-approved TEQIP (Phase II) grant for the advancement of technical education. Moreover, it is a proud moment for the institute that it currently figures among the top 150 private colleges in India in the NIRF ranking.

Academic success lays the foundation for the students, and the college therefore emphasizes quality academic delivery in its stride towards excellence. The college has also significantly reinforced its outreach initiatives by facilitating faculty development programmes and knowledge-exchange sessions, and by procuring funded projects from the Government to foster synergy between academia, business, industry and the community. The institute boasts a strong R&D cell with immense contribution from its scholarly faculty members, and an extensive repository of international and national journal publications that has drawn nationwide attention. The college collaborates with Oracle, Infosys, TCS, NIT Sikkim, IIT-KGP, AIT Bangkok and other organizations of repute. The students get an opportunity to interact with experts from across the globe through conferences and special teaching-learning sessions. The student chapters play a crucial role in organizing informative technical events within the campus. At present there are five student chapters in the college: the IETE student forum of the Electronics & Communication Engineering Department; ICE and ASCE of the Civil Engineering Department; CSI of the Computer Science Engineering, Information Technology and MCA Departments; and the Institute of Engineers chapter of the Electrical Engineering Department.

NIT is a one-stop venue for promoting a vibrant and sustainable atmosphere for teaching and learning. Besides academics, the students get exposure to a world of co-curricular activities which help them shape their personalities. The cornerstone of the successful evolution of Narula Institute of Technology thus lies in its meticulous tutoring and mentoring of the future professionals of industry and academia, and of citizens of society; the Institute's success has always been directly proportional to the success of its students.

ABOUT THE INSTITUTE


PREFACE

The Department of Basic Science and Humanities, Narula Institute of Technology, organized a two-day National Conference on Modern Trends in Physics to develop a better understanding of physics research and its applications in technology. The conference was held during March 10-11, 2017 at Narula Institute of Technology, Kolkata.

The field of physics has not only driven development across science and technology but has also contributed greatly to improving the quality of human life. All this has become possible through the discoveries and inventions leading to the development of various applications. The core aim of the conference, "Modern Trends in Physics", was to provide an opportunity for the delegates to meet, interact and exchange new ideas in the various areas of physics, adding a contribution to the conference series in the field. The conference conducted sessions on various fields of physics such as applied physics, condensed matter physics, atomic, molecular and optical physics, radio physics, astrophysics, quantum physics, and medical and chemical physics, which were the top tracks categorized at the conference. This nationwide meet provided a platform where information could be effectively exchanged between researchers from different disciplines.

We have the pleasure of welcoming the Guests of Honour, the eminent speakers and the many outstanding researchers from different universities and institutions of repute. We take proud privilege in thanking our Managing Director, Principal, Registrar, the HOD of the Department of BS&HU, the Organizing Chair of the conference, the organizing committee, the reviewers, all colleagues and friends, and the entire cast and crew who helped us organize this Conference.

March 2017, Kolkata

Dr. Susmita Karan

TIC (Physics) & Assistant Professor, Department of BS & HU


Table of Contents

MESSAGE FROM CHAIRMAN, JIS GROUP 8
MESSAGE FROM CHAIRMAN, BOG, JIS GROUP 8
MESSAGE FROM MANAGING DIRECTOR, JIS GROUP 9
MESSAGE FROM PRINCIPAL (CONFERENCE CHAIR) 9
MESSAGE FROM ORGANIZING CHAIR 10
MESSAGE FROM PROGRAMME COORDINATOR 10
LIST OF COMMITTEE MEMBERS 11
ARTICLES FROM INVITED SPEAKERS AND OTHER PRESENTERS 12


MESSAGE FROM CHAIRMAN, JIS GROUP

MESSAGE FROM CHAIRMAN, BOG, JIS GROUP

"Vision looks inward and becomes duty. Vision looks outward and becomes aspiration. Vision looks upward and becomes faith."

I always experienced a yearning to acknowledge my responsibilities and reciprocate by contributing to the growth and development of our society.

Years ago when I visited my son's school, I perceived that the best way to

advance society is by fostering education and it was at that moment that the

dream and vision of JIS Group Educational Initiatives was conceived. Now,

when this vision of duty, aspiration and faith has become a reality, it is a proud

moment for me and my team to see thousands of students pursuing higher

education in JIS Group of Colleges and equipping themselves to become

industry ready professionals for successful careers. In this process the Group

intends to unite all dimensions of Education from Undergraduate to Post

Graduate Programmes in Engineering and Technology, Computer

Applications, Dental Science, Pharmacy, Hospitality, diverse streams of

Management and so on under the same umbrella in order to comprehensively

and collectively optimize the reach of Educational Initiatives in every strata

and corner of society towards a better future. Our Educational Initiatives believe that creating an academic foundation for social, cultural, scientific, economic and technological development in our nation can mature into a global interface by opening the way to educational exchange in the international arena as well. Therefore, our focus is to achieve unparalleled excellence that will bring development to our society and mankind by optimizing their potential, thereby affirming the observation of the renowned journalist Sydney J. Harris that the purpose of education is to "turn mirrors into windows".

Sardar Jodh Singh

I am happy to observe that the Department of Basic Science and Humanities of NiT has been organizing the "National Conference on Modern Trends in Physics (NCMTP 2017)", 10-11th March, 2017. This

Conference in terms of its areas and tracks is a comprehensive one providing a

platform from multiple disciplines of engineering and technology to participate

and contribute. This Conference will definitely be a significant attempt to

assemble the leading experts and learners in the field. Understanding the difference between invention and innovation is the key to success in today's globalised, market-driven economy. It is important not only to invent ideas but also to be able to convert them into productive outcomes for the consumer society. Innovation and invention are quite different things: while invention is largely a personal pursuit, innovation is much more akin to a social pursuit. Innovation warrants attention because it contributes immensely to social and industrial development.

I am confident that this Conference will come up with new findings, strategies and innovations on the various issues laid out by the organizers, and will stimulate the minds of the participating researchers. I further expect that this Conference will identify the state-of-the-art and future directions in the mentioned areas, so as to ensure demand-driven and productive research that fulfils societal needs and desires, and that it will chart a path for transforming the concepts in the published papers into patented and commercialized products.

Prof. (Dr.) Sparsha Mani Chatterjee


MESSAGE FROM MANAGING DIRECTOR,JIS GROUP

MESSAGE FROM THE PRINCIPAL, CONFERENCE CHAIR

I am chasing a dream that my father (Sardar Jodh Singh) cherished, to

empower lives through knowledge and education. In this regard we have

established the JIS educational initiative which is now one of the leading

private educational service providers in India. The JIS educational initiative has 25 educational institutes to its credit, with around 25,000 students enrolled in diverse academic programmes. We have also created new standards in quality self-financed education and laid the foundation of JIS University.

I am extremely delighted to share through this message my enthusiasm about

the “National Conference on Modern Trends in Physics (NCMTP

2017)”, 10-11th March, 2017 at Narula Institute of Technology, Agarpara, Kolkata, India. The National Conference

promises to be a forum of research scholars and professionals from within the country and outside and will certainly

provide a platform for the sharing of experience and the exchange of opinions on technological advancements.

I am sure that this event will draw talent from all over the globe and create a great learning experience for all

participants, delegates and guests. I appreciate the efforts taken by the Organizing Committee of the NCMTP 2017

and all the eminent persons involved. I wish them great success.

Mr. Taranjit Singh

On behalf of the Organizing Committee, I welcome all to the "National Conference on Modern Trends in Physics (NCMTP 2017)", to be held during 10-11th March, 2017, at the campus of Narula Institute of Technology, and sponsored by TEQIP.

National Conference is a gathering of academicians, researchers and

students from several parts of our country in a single platform in order

to have the opportunity to interact and share ideas among themselves.

On behalf of the Organizing Committee, I would like to convey my

sincere thanks to TEQIP for sponsoring the event.

I also extend my sincere thanks to our Managing Director Mr.

Taranjit Singh for motivating us to organize the event successfully. I

would like to appreciate the collective efforts put in by the members of the different Committees and the staff members of the Institute in making NCMTP 2017 a grand success; without them it would have been very difficult for us to arrange the event.

I also offer my thanks to all the participants for their immense support and active participation with sincerity and

punctuality. I appreciate the effective assistance, direct and indirect, of every faculty and staff member of the institute in making NCMTP 2017 a grand success.

I hope, every individual will be satisfied and will enjoy the Conference to a great extent.

Prof. (Dr.) M. R. Kanjilal


MESSAGE FROM THE CONFERENCE CHAIR

MESSAGE FROM THE PROGRAMME COORDINATOR

I consider conducting NCMTP 2017 a very challenging job on behalf of the

Organizing Committee of the National Conference. The main aim of arranging this two-day National Conference is to bring academicians, researchers and students onto a single platform where they have the opportunity to interact and share ideas among themselves. To make the programme most fruitful, the

availability of suitable speakers was a major concern. We are really thankful that the speakers showed their enthusiasm and lent their valuable time to educate our participants at NCMTP 2017.

Eminent speakers from different disciplines were invited as resource persons to share their valuable research and ideas with students during the two-day Conference, to raise the students' interest in research activity.

Our Principal and the committee members of NCMTP 2017 gave their best efforts to ensure the smooth functioning of the two-day Conference. We find immense satisfaction in its successful completion, and we hope to organize such programmes in the future to benefit our students as well as the nation by nurturing future researchers. I hope every participant will be benefitted and will enjoy the Conference to the fullest.

Dr. Sumit Nandi
Organising Chair

I feel honoured and privileged to get the opportunity to propose a vote of thanks on

this grand inaugural occasion of the Conference of Physics, NCMTP 2017 at

Narula Institute of Technology. March 10, 2017 is indeed a very memorable day for

all the members of the Basic Science & Humanities department, as we usher in the opening of the National Conference in the presence of the honourable Principal and the dignitaries. I, on behalf of the Organizing Committee, convey deep regards and heartfelt thanks to the respected dignitaries, participants and fellow colleagues. I am

thankful to all the participants across West Bengal for coming to Narula Institute of

Technology to attend the Conference.

I am very much thankful to TEQIP for sponsoring this program, without whose

generous financial support, it would not have been possible to organize such an

event. I, on behalf of the entire team of organizing committee wish to extend a very

hearty vote of thanks and deep gratitude to our honourable Managing Director Mr.

Taranjit Singh for motivating us and giving us a platform to organize such an effective programme for the teaching and research fraternity. I extend my wholehearted vote of thanks and deep gratitude

to our friend, philosopher and guide, our honourable Principal, Prof. (Dr.) M. R. Kanjilal for extending her unfailing

support towards our initiative to organize this Conference. I am very much thankful to our HOD, Dr. Sumit Nandi

for his continuous support and advice, which have greatly helped towards the successful organization of NCMTP

2017. I would like to place on record our hearty thanks to our registrar Ms. Nidhi Singh for her perfect logistic

support towards organizing the Conference. I am thankful to the NCMTP 2017 steering committee members for

their whole hearted support and for working relentlessly for the past few weeks in order to achieve grand success in

NCMTP 2017. I thank the HODs of all the respective departments, the invited speakers, delegates and especially the

students for their enthusiastic participation in this Conference. I also convey my sincere thanks to all the people who

have given their precious time in organizing this grand occasion.

Dr. Susmita Karan

Programme Coordinator


LIST OF COMMITTEE MEMBERS

Chief Patron: Sardar Jodh Singh (Chairman, JIS Group, India)

Patrons: Sardar Taranjit Singh (MD, JIS Group)

Prof. (Dr.) S. M. Chatterjee, Chairman, BOG

Mr. S. S. Duttagupta (Director, JIS Group)

Prof. (Dr.) Asit Guha (Advisor, JIS Group)

Mr. U. S. Mukherjee (Deputy Director, JIS Group)

Mr. Simarpreet Singh (Director, JIS Group)

Mrs. Manpreet Kaur (CEO, JIS Group)

Ms. Jaspreet Kaur (Trustee Member, JIS Group)

Conference Chair: Prof.(Dr.) M. R. Kanjilal (Principal, NiT, Agarpara)

Advisory Committee: Prof. Anuradha De, NITTTR, Kolkata

Prof. S. K. De, IACS, Kolkata

Prof. L. C. Tribedi, TIFR

Prof. A. K. Singh, CSIR-CIMFR, Dhanbad

Prof. V. K. Singh, IIT (ISM) Dhanbad

Prof. Prasanta Kumar Mukherjee, Belur Ramkrishna Mission

Prof. D. D. Tripathi, CSIR-CIMFR, Dhanbad

Prof. Shanta Bandopadhyay, Bethune College

Prof. Pankaj Mishra, IIT (ISM) Dhanbad

Prof. Kakali Mukherhee, Behala College

Prof. Aparajita Nag, B.K.C. College

Prof. J.K. Das, Advisor NIT, Agarpara

Programme Committee Organizing Chair: Dr. Sumit Nandi, Associate Professor & HOD, BS & HU, NiT Agarpara

Programme Coordinator: Dr. Susmita Karan, TIC (Physics) & Asst. Professor, BS&HU (Physics)

Convener: Dr. Indrani Sarkar, Asst. Professor, BS&HU (Physics)

Secretary: Dr. Tapan Kumar Mukhopadhyay, Associate Professor, BS&HU (Physics)

Dr. Dhananjay Kumar Tripathi, Asst. Professor, BS&HU (Physics)

Treasurer: Dr. Rupa Bhattacharyya, Asst. Prof., BS&HU

Organizing Committee: Dr. Debjani Chakrabarti, TIC (Mathematics) & Asst. Prof., BS&HU

Dr. Nikhilesh Sil, Asst. Prof., BS&HU

Dr. Pijush Basak, Asst. Prof., BS&HU

Dr. Raju Dutta, Asst. Prof., BS&HU

Dr. Sarbani Ganguly, TIC (Chemistry) & Asst. Prof., BS&HU

Dr. Leena Sarkar Bhaduri, TIC (English) & Asst. Prof., BS&HU

Ms. Rajasi Ray, Asst. Prof., BS&HU

Ms. Sharmistha Basu, Asst. Prof., BS&HU

Ms. Nidhi Singh, Registrar, NiT

Mr. Debtosh Panda, NiT

Mr. Karuna Ketan Karan, NiT

Mr. Ratan Das, Site Supervisor, NiT

Mr. Nilanjan Mitra, Accounts, NiT

Mr. Partha Das, TEQIP Cell, NiT


NAME OF THE ARTICLE, AUTHOR AND PAGE NO.

Numerical Study in Optical Wave Guide Communication: A Different Approach, by Sucharita Bhattacharyya (p. 13)
Taguchi-Based Optimization and Numerical Modelling of Surface Roughness in CNC Turning Operation, by Arghya Gupta
Monsoonal Rainfall of Sub-Himalayan West Bengal: Empirical Modelling and Forecasting, by Pijush Basak
Big Data Analysis Framework in Healthcare, by Sanjay Goswami, Indrani Sarkar, Anwesha Das, Kaustav Bhattacharya & Disha Dasgupta
Role of Several Non-linear Filters in Image Processing, by Sriparna Banerjee, Sangita Roy & Sheli Sinha Chaudhuri
A Solar Power Generation System with a Seven Level Inverter, by Susmita Dhar & Sanchari Kundu
Reversible Combinational Circuit Design Using QCA in VLSI Design, by Kunal Das, Shubhendu Banerjee, Soumya Bhattacharyya & Ashifuddin Mondal
GSM Based Earth Fault & Over Voltage Detector, by Papendu Das, Avipsa Basak, Supravat Roy, Souvik Kundu, Aritra Kumar Basu, Archita Chakraborty, Preet Singh & Susmita Karan
Electronic Eye Based Security System, by Rituparna Dey, Atri Mondal, Kushal Bhowmick, Alivia Bose, Uday Sankar Saha, Anshuman Bhowmick & Susmita Karan
Li-Fi Technology: Transfer of Data through Visible Light, by Abhijit Seth, Nistha Gupta, Poonam Verma, Trinanjana De, Pritha Majumder, Prakash Kr Thakur, Shyamal Mishra & Susmita Karan
Automated Car Controller, by Aparna Das, Manish Guha, Attrayee Sinha, Sayantan Mitra, Vijitaa Das, Swati Nayna & Susmita Karan
Literacy Tracking Device, by Sukanya Sen
Forecast Model Improvement with ECMWF, by Sohini Bairagi, Priyanka Roy, Ranit Basack & Amritandu Saha
Application of Power Electronics on Wind Turbine, by Molay Kumar Laha, Dipu Mistry, Sreeja Chakraborty, Nilkantha Nag, Soumya Das & Pallav Dutta
Biophysics in Disease Analysis, by Eisita Basak
The Trappist, by Ranit Dey, Subhankar Roy & S. Uma Harshini
Human Skin as Touch Screen, by Aniket Dhar and Ishitri Mukherjee
5G Connectivity: The Beginning of a New Era, by Harshita Jain, Sagar Gupta, Rishav Roy & Nazia Hassan
Spectroscopy, by Aindrila Mukhopadhyay
Fingerprint Sensor on Touchscreens, by Sourav Sadhukhan, Debarati Mitra, Saatwik Bhowmick & Arindam Majhi
Blue Energy: Wave Energy Converter Using MATLAB, by Nilkantha Nag, Sreeja Chakraborty, Molay Kumar Laha, Sudhangshu Sarkar & Soumya Das
Car Density Detection Technique Using Webcam and MATLAB, by Dibyendu Sur, Vijitaa Das, Rituparna Dey and Swati Nayna
Design Rectangular Micro Strip Patch Antenna at 2.4 GHz, by Shuvam Banerjee
Preparation of Poly Vinyl Alcohol Cellulose Nanocomposites Using Ionic Liquid, by Susmita Karan & Sumanta Karan
Structure-based Drug Design: Nucleic Acid as Anti-cancer Drug Target, by Indrani Sarkar
Application of Artificial Neural Networks and Genetic Algorithm in Drug Discovery, by Indrani Sarkar & Sanjay Goswami (p. 89)
Free Energy: A Potential Solution to the Energy Crisis, by Dr. Santa Bandyopadhyay (Rajguru)


Numerical Study in Optical Wave Guide Communication: A Different Approach

Sucharita Bhattacharyya

Dept. of Applied Science & Humanities, Guru Nanak Institute of Technology

Sodepur, Kolkata 700114, India

_______________________________________________________________________________________________________

Abstract

Here a comprehensive study of optical waveguide modelling is accomplished, where the concept of modal index is applied to refractive index profiles for various rib wave guide structures through a Higher (Fourth) Order Compact (HOC) Finite Difference Method (FDM) followed by a Conjugate Gradient (CG) iteration scheme. Numerical results verify the validity of the approximations used, for stability and convergence with the least computational time. The values obtained for the normalized and modal indices are used to optimize the structure parameters of the wave guides in GaAs/GaAlAs and GeSi material systems for efficient wave propagation, and hence to confirm the confinement of the propagating waves through such structures, in complete agreement with the results obtained by other workers.

Keywords: Optical Wave Guide; Finite Difference Method; Conjugate Gradient Iteration; Higher Order Compact; Structure Parameter; Optimization

_______________________________________________________________________________________________________

1. Introduction
Semiconductor optical waveguides are widely used as fundamental components in photonics and integrated optics, where transmission and processing of signals are carried out using an optical beam as carrier; proper modelling is the basic step of any such communication system. Well-defined refractive index profiles and geometric shapes help in understanding the propagation properties of such wave guides by evaluating the structural design performance through its wave guiding and confinement capability [1]. Also, the high cost incurred during the fabrication process can be reduced by optimizing the technique and the design parameters best suited to the initial requirements.

As no precise analytic solution is available for these types of structures, different numerical methods have been adopted for their study in terms of modal analysis [2]. The Finite Difference Method (FDM) [3, 4] and the Finite Element Method (FEM) [5] are the two most rigorous methods adopted for the purpose. When the governing equations and their boundary conditions are given, FDM is easier to implement than FEM. The commonly used second order FDM suffers computational instability with non-physical oscillation in the solution domain, and conventional higher order FDMs do not allow direct iterative techniques; the compact type is the exception, allowing direct application of various iteration schemes.

So, to study the refractive index profiles of GaAs and GeSi rib waveguide structures for efficient wave propagation, a Higher (here fourth) Order Compact (HOC) Finite Difference Method (FDM), commonly used in fluid dynamics, is used in combination with the Conjugate Gradient (CG) iteration technique, compared against the conventional Successive Over Relaxation (SOR) and Steepest Descent Approximation (SDA) [6] schemes, to show their effect on the transmission properties of the guided waves through fundamental TE-polarised solutions of the Helmholtz wave equation. The resulting coefficient matrix system is non-symmetric with complex eigenvalues, and the CG scheme proves the more effective with the optimization algorithm of the problem, giving the best solution in minimum simulation time, converging to the optimum while reducing the number of iterations and the complexity. The obtained results for the modal index and normalized index [7, 8], and their variations with wave guide structure parameters, identify the optimized parameter values, and the corresponding polarized E-field results clearly show the material dependence of the transmitted waves.

Following the introduction, the theoretical concepts used and the numerical simulations are given in Section 2. In Section 3, the propagating field is analyzed in terms of a run-time comparison of the different iteration methods and the structure parameter optimization of the rib wave guides used, including comparison with conventional approaches. A brief conclusion is given in Section 4.

2. Theoretical Reviews and Numerical Computation
The configuration of the optical rib waveguide structure used here is shown in Fig. 1, with rib width w and inner rib height H. The rib outer region has a thickness d, h is the height of the rib, and λ is the free-space optical wavelength. The three regions have refractive indices n_C, n_G and n_S at the chosen value of λ. Now, considering harmonic wave propagation in the z direction along the rib waveguide, and following the usual procedure [2], the Helmholtz wave equation for quasi-TE modes (with E_x continuous across horizontal interfaces but discontinuous across vertical interfaces) is obtained, whose eigen solutions satisfy

    [∇_T² + k²(x, y)] E_x = β² E_x,    (1)

with ∇_T² the Laplacian operator acting on x and y, and

    k(x, y) = ω [µ ε(x, y)]^(1/2)    (2)

the total propagation constant. Here ω is the angular frequency, n(x, y) is the refractive index, and β² is the eigenvalue corresponding to the eigenfunction E_x. The dielectric constant ε(x, y) is piecewise constant throughout the solution domain, the permeability µ is constant, and x and y represent the coordinates of the transverse section of the waveguide.
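As a concrete illustration of such a piecewise-constant profile, the sketch below builds n(x, y) for a rib geometry like that of Fig. 1 on a uniform grid. The grid sizes, the substrate-top position y0 and the sample index values are illustrative assumptions of this sketch, not the paper's actual parameters.

```python
import numpy as np

def rib_index_profile(nx, ny, Lx, Ly, w, H, d, n_C, n_G, n_S, y0):
    """Piecewise-constant refractive index n(x, y) for a rib wave guide:
    substrate (n_S) below y0, a guiding slab (n_G) of outer thickness d,
    a central rib of width w rising to height H, and cover (n_C) above."""
    x = np.linspace(-Lx / 2.0, Lx / 2.0, nx)
    y = np.linspace(0.0, Ly, ny)
    X, Y = np.meshgrid(x, y, indexing="ij")
    n = np.full((nx, ny), n_C)                 # cover everywhere to start
    n[Y < y0] = n_S                            # substrate region
    n[(Y >= y0) & (Y < y0 + d)] = n_G          # outer slab of thickness d
    rib = (np.abs(X) <= w / 2.0) & (Y >= y0) & (Y < y0 + H)
    n[rib] = n_G                               # rib of width w, height H
    return n

# Illustrative call (air cladding over a GaAs-like guide; values assumed)
n = rib_index_profile(nx=81, ny=60, Lx=8.0, Ly=6.0, w=3.0, H=1.5, d=0.5,
                      n_C=1.0, n_G=3.44, n_S=3.40, y0=2.0)
```

Such a map supplies the ε(x, y) = n²(x, y) values that the finite difference discretization samples at each grid point.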


Fig. 1: Rib wave guide structure.
Fig. 2: Cell structure for the finite difference grid.

In order to solve Eq. (1) numerically by the compact fourth order finite difference method, we decompose the rectangular solution domain into uniform grid points whose x and y spacings are h_x and h_y respectively, as shown in Fig. 2. Then, following the scheme developed in [2, 5], a simultaneous system of linear algebraic equations with a nine-diagonal coefficient matrix is obtained as

    Σ_{k,l = −1,0,1} A^{k,l}_{i,j} E_{i+k, j+l} = 0,    (3)

which can be rewritten in the usual form

    A_TE E_TE = β²_TE E_TE,    (4)

where A_TE is a real non-symmetric band matrix, β²_TE is the TE propagation eigenvalue, and E_TE is the corresponding normalized eigenvector representing the field profile E_x. Here we have used a zero-field outer boundary condition.
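A minimal sketch of how such a nine-diagonal system can be assembled, assuming a generic per-point 3x3 stencil callback. The coefficients shown are those of the classical 9-point compact Laplacian on a uniform grid, standing in for the paper's actual HOC Helmholtz coefficients (which also fold in the k² term):

```python
from scipy.sparse import lil_matrix, csr_matrix

def assemble_nine_diagonal(nx, ny, stencil):
    """Assemble the matrix of Eq. (3) on an nx-by-ny grid:
    sum over k, l in {-1, 0, 1} of A^{k,l}_{i,j} E_{i+k, j+l} = 0.
    `stencil(i, j, k, l)` returns A^{k,l}_{i,j}; neighbours outside the
    grid are dropped (zero-field outer boundary)."""
    n = nx * ny
    M = lil_matrix((n, n))
    idx = lambda i, j: i * ny + j          # row-major grid numbering
    for i in range(nx):
        for j in range(ny):
            for k in (-1, 0, 1):
                for l in (-1, 0, 1):
                    ii, jj = i + k, j + l
                    if 0 <= ii < nx and 0 <= jj < ny:
                        M[idx(i, j), idx(ii, jj)] += stencil(i, j, k, l)
    return csr_matrix(M)

# Placeholder stencil: classical 9-point compact Laplacian, unit spacing
def compact_laplacian(i, j, k, l):
    if k == 0 and l == 0:
        return -20.0 / 6.0
    if k == 0 or l == 0:
        return 4.0 / 6.0
    return 1.0 / 6.0
```

Each interior grid point then couples to its eight neighbours, which is exactly what produces the nine diagonals of A_TE.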

Iterative Method & Rayleigh Quotient Solution (RQS) of matrix eigen value problem Now to find out modal eigen value β

2 contained in Eq. (4), classical Rayleigh Quotient Solution [7] of the matrix eigen value

problem is used as 2T

TE TE TE

T

TE TE

E A E

E E

(5)

where the superscript T denotes the transpose of a matrix. After an estimation of β², the field profile ETE is updated from an iterative solution of the system of linear algebraic equations (4) by the Conjugate Gradient (CG) method. The well-known Steepest Descent (SD) and Successive Over-Relaxation (SOR) iteration schemes [6], belonging to the Newton–Krylov subspace category, are also studied here, and their performances are evaluated and compared in terms of CPU time for various grid sizes for two specific rib waveguide structures. The CG scheme proves to be the most effective for the non-symmetric coefficient matrix system used, requiring the least computational time and converging fastest while producing similar simulation results.
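The estimate-then-update loop described above can be sketched as a Rayleigh-quotient iteration. This is an illustrative sketch only: a dense NumPy solve stands in for the CG inner solver used on the banded system in the paper, and the small symmetric test matrix is ours, not a waveguide matrix.

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, tol=1e-10, max_iter=50):
    """Alternate a Rayleigh-quotient eigenvalue estimate (cf. Eq. 5) with an
    inverse-iteration update of the field vector."""
    x = x0 / np.linalg.norm(x0)
    lam = x @ A @ x                      # Rayleigh quotient (x has unit norm)
    for _ in range(max_iter):
        try:
            y = np.linalg.solve(A - lam * np.eye(A.shape[0]), x)
        except np.linalg.LinAlgError:    # shifted matrix became singular
            break
        x = y / np.linalg.norm(y)
        lam_new = x @ A @ x
        if abs(lam_new - lam) < tol:
            return lam_new, x
        lam = lam_new
    return lam, x

# Illustrative 3x3 test matrix (not from the paper); the start vector is
# biased toward the eigenvalue 10, so the iteration converges there.
A = np.diag([1.0, 2.0, 10.0])
lam, vec = rayleigh_quotient_iteration(A, np.array([0.1, 0.2, 1.0]))
```

In the paper's setting the inner solve would be replaced by the CG iteration on the nine-diagonal system, which is where the CPU-time differences of Table 2 arise.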

Normalized Index
The effective index concept is applied to the guiding region of the rib waveguide: since discontinuities exist at the dielectric interfaces, an average value of the refractive index is needed to reduce a layered waveguide to an equivalent uniform one [7]. This effective index value can be used to calculate the normalized index

b = (neff² − ns²) / (nG² − ns²),

which indicates how far a mode is from cut-off and therefore plays a very important role as far as propagation through a waveguide is concerned. For propagation to be possible, it is required that 0 ≤ b ≤ 1.
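As a quick numerical check of this definition, the snippet below (helper name ours) evaluates b for the S1 GaAs structure using the indices from Tables 1 and 3 (guiding index 3.44, substrate index 3.34, neff = 3.3902787); it reproduces the tabulated b ≈ 0.4991.

```python
def normalized_index(n_eff, n_guide, n_sub):
    """b = (neff^2 - ns^2) / (nG^2 - ns^2); guided modes require 0 <= b <= 1."""
    b = (n_eff**2 - n_sub**2) / (n_guide**2 - n_sub**2)
    if not 0.0 <= b <= 1.0:
        raise ValueError("mode is not guided: b lies outside [0, 1]")
    return b

b_s1 = normalized_index(3.3902787, 3.44, 3.34)   # S1 structure, GaAs
```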

A code is therefore developed [1] for our nine-diagonal coefficient matrix system to determine the electric field profile, effective index (neff), normalized index (b), and confinement factor (Γ) through structure parameter optimization of various rib waveguide cross sections, in order to gain deeper insight. The corresponding TE field distributions are shown in the form of contour plots.

3. Results and Discussion
The developed scheme [1] is used to calculate the effective refractive index and normalized index for two different dielectric rib waveguides widely used in optoelectronic integrated circuits [3,4,5]: the GeSi–Si heterostructure and the GaAs/GaAlAs system, both with air cladding. The geometrical and optical parameters of four rib waveguide structures S1, S2, S3 and S4 [4] in GaAs and GeSi are shown in Table 1.

As already mentioned, three iterative methods are studied for the system of equations obtained, to determine electric field profiles for structures S2 and S4, which have nearly the same optical properties but different geometries. The


corresponding computational (CPU) times for various grid sizes are calculated and shown in Table 2. All numerical results shown here are obtained using MATLAB on a computer with an Intel Core i5 3550 @ 3.3 GHz processor. From the table, it is found that the CG method takes the least computational time of the three and shows regular convergence, as expected [6]. The CG algorithm is therefore used as the most appropriate iterative scheme to solve (4), and the corresponding MATLAB code has been developed for the nine-diagonal coefficient matrix system to determine the electric field profile, effective index and normalized index using structure parameter optimization. The values of modal index neff and normalized index b calculated for the first three rib waveguide structures in GaAs are shown in Table 3, where it is found that at λ = 1.55 µm the normalized index b is largest for S1 and decreases in order for S2 and S3, which

Table 1 Optical and geometrical structure parameters (in µm) of the rib waveguides used

Structure | r.i. of guiding zone (GaAs / GeSi) | r.i. of substrate (GaAs / GeSi) | r.i. of cladding (GaAs / GeSi) | H | h | d | w | λ
S1 | 3.44 / 3.6 | 3.34 / 3.5 | 1.0 / 1.0 | 1.3 | 1.1 | 0.2 | 2 | 1.55
S2 | 3.44 / 3.6 | 3.36 / 3.5 | 1.0 / 1.0 | 1.0 | 0.1 | 0.9 | 3 | 1.55
S3 | 3.44 / 3.6 | 3.435 / 3.5 | 1.0 / 1.0 | 6.0 | 2.5 | 3.5 | 4 | 1.55
S4 | 3.44 / 3.6 | 3.40 / 3.5 | 1.0 / 1.0 | 1.0 | 0.4-1 | 1.0-h | 3 | 1.15

Table 2 Comparison of CPU time (s) for various iterative methods in the S2 and S4 structures

hx | hy | SOR (S2) | SDM (S2) | CGM (S2) | SOR (S4) | SDM (S4) | CGM (S4)
0.058 | 0.09 | 5079.29895 | 331.595725 | 8.86085679 | 5292.30272 | 831.984533 | 3.40082179
0.06 | 0.09 | 4514.87174 | 250.678006 | 13.4784863 | 733.984704 | 99.7158391 | 3.32282130
0.07 | 0.09 | 3641.09454 | 265.966104 | 47.22150270 | 563.569212 | 38.6414476 | 9.84366310
0.08 | 0.092 | 2804.47677 | 291.721870 | 4.83603099 | 671.631105 | 593.623887 | 12.5580804
0.078 | 0.09 | 3130.26926 | 3.74402399 | 1.65361059 | 612.288324 | 565.61282 | 13.7124878

signifies a smaller vertical spread of the TE propagation mode for the strongly guiding S1 structure [3] compared to S2 and S3. These results are compared with other available data [8] obtained by other methods and show very close agreement (within 1%) for both indices. The only exceptions are the values obtained by Stern [3] for the strongly guiding S1 structure, where a 7% deviation is encountered, showing the sensitivity of the index values for this particular structure. The smaller b value obtained in [3] indicates a propagation mode closer to cut-off, which is not expected for a strongly guiding structure. The degree of guidance of the propagating wave may be related to the corresponding confinement factor value obtained by the present method, shown in Table 4 for the different structures (in GaAs), which further verifies the stronger guiding capacity of the S1 structure. The corresponding contour plots of these structures are shown in Figs. 3–5. In all three figures, hx = 0.09 and hy = 0.09.
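The confinement factor values of Table 4 are, in essence, the fraction of modal power lying inside the guiding region. A minimal sketch of that post-processing step on a discretised field is shown below; the function name and the uniform-grid assumption are ours, not from the paper.

```python
import numpy as np

def confinement_factor(E, guide_mask):
    """Fraction of |E|^2 inside the guiding region on a uniform grid.
    E: 2-D array of field samples; guide_mask: boolean array, True where
    the grid point lies inside the guiding (rib) region."""
    power = np.abs(E) ** 2
    return power[guide_mask].sum() / power.sum()

# Toy example: a field entirely concentrated in the "core" of a small grid
E = np.zeros((8, 8))
E[2:6, 2:6] = 1.0
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
gamma = confinement_factor(E, mask)
```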

Now, in the design of waveguide devices, a variety of factors are considered when optimizing the structure parameters. In this work, the variation of the normalized and modal indices with rib outer-region thickness d and rib width w is studied to find their optimized values for a specific structure, which can play a very significant role during the fabrication of such waveguides. The range of variation of d is chosen from 0.6 to 1 µm, with corresponding changes in rib height h from 0.4 to 1.0 µm, for the GaAs


rib waveguide of structure S4. The corresponding neff and b are shown in Table 5. It is clear from the table that as d increases, neff and b increase; other workers have reported similar results. As already mentioned, a higher b value indicates that the corresponding propagating mode is far from cut-off, signifying a decreased vertical spread and better confinement of the propagating wave. This can be related to the fact that an increase in d in the guiding region of the waveguide, which has the higher refractive index, enhances the confinement of the propagating e.m. wave in that region.

Table 3 Comparison of modal and normalized index values for different geometrical structures obtained by different methods in the GaAs material system

Method | S1: neff, b | S2: neff, b | S3: neff, b
Present Method | 3.3902787, 0.4991 | 3.3958327, 0.4450 | 3.4367243, 0.3446
Stern [3] | 3.3869266, 0.4655 | 3.3953942, 0.4395 | 3.4366674, 0.3333
FFM [8] | 3.3904487, 0.5008 | 3.3948874, 0.4332 | 3.4367238, 0.3446
FDM [8] | 3.3901687, 0.4980 | 3.3954802, 0.4406 | 3.4367068, 0.3412
BPM [8] | 3.3902687, 0.4990 | 3.3944707, 0.4280 | 3.4368358, 0.3670
VPM [8] | 3.3905686, 0.5020 | 3.3950156, 0.4348 | 3.4367468, 0.3492

Table 4. Confinement Factor values for different structures

Structure | S1 | S2 | S3
Confinement Factor | 0.99928628 | 0.99729114 | 0.99049938

Dimensionless contour plots of the electric field profiles for rib waveguide structures S1, S2, and S3 at λ = 1.55 µm in the GaAs material system are shown below.

Fig. 3 S1 structure
Fig. 4 S2 structure
Fig. 5 S3 structure

Table 5 Variation of Normalised Index and Modal Index with rib thickness d and comparison with other available data for S4 of

GaAs waveguide


d | VFEM [7]: neff, b | FEM [7]: neff, b | Stern [5]: neff, b | Present Method: neff, b
0.6 | 3.41357, 0.338 | 3.41364, 0.339 | 3.41353, 0.3368 | 3.41361, 0.339
0.7 | 3.41405, 0.35 | 3.41410, 0.351 | 3.41406, 0.3503 | 3.41400, 0.348
0.8 | 3.41473, 0.367 | 3.41483, 0.369 | 3.41472, 0.3665 | 3.41487, 0.370
0.9 | 3.41557, 0.388 | 3.41559, 0.388 | 3.41554, 0.3872 | 3.41590, 0.396
1.0 | 3.41713, 0.427 | 3.41710, 0.426 | 3.41716, 0.4275 | 3.41726, 0.430

Then, to assess the material dependence of the propagating waves, optimization of the waveguide structure parameters (in terms of d and w) is studied for GaAs and GeSi rib waveguides, and the resulting neff and b values are shown in Tables 6 and 7 respectively for the S4 structure. In Table 6, the different b values obtained for the same d and the same λ = 1.15 µm in the GaAs and GeSi waveguides clearly show the material dependence of the propagating waves, which holds equally well at other wavelengths. Most importantly, it is also noted that for d ≤ 0.8 µm the b values for GaAs are smaller than those for GeSi in each case, but the situation is reversed for d ≥ 0.9 µm, where a smaller vertical spread (larger b) is obtained for GaAs than for GeSi, showing better confinement capability for the former within the specified range.

Table 6 Comparison of modal and normalized index values for the S4 structure in the GaAs and GeSi material systems, obtained by varying d

Similarly, Table 7 presents the variation of b and neff with the rib width w in the range 2.7–4.0 µm, where it is found that the b values for GaAs are smaller than those for GeSi for w ≤ 2.85 µm, but for w ≥ 3 µm the normalized index b increases for GaAs relative to GeSi, showing better confinement capability for GaAs in this range. It may be noted that too small a width w will reduce the restriction of the rib structure on the guided mode, whereas too large a width will be difficult to handle in system integration (where the waveguide bends and branches), so the width w of the rib waveguide is kept within the specified range.

It is interesting to note that the optimized values chosen by Stern [3] are d = 0.9 µm and w = 3 µm for GaAs. The optimized results obtained using the present method for the normalized index b within the specified ranges of d and w for GaAs thus agree completely with Stern's prediction, although the results for GeSi could not be verified due to unavailability of data.

Table 6:
d | GaAs: neff, b | GeSi: neff, b
0.6 | 3.413616522, 0.3391 | 3.535552057, 0.3522
0.7 | 3.414009233, 0.3489 | 3.537570667, 0.3724
0.8 | 3.414874639, 0.3705 | 3.539040821, 0.3870
0.9 | 3.415904027, 0.3962 | 3.539954058, 0.39616
1.0 | 3.417261383, 0.4301 | 3.540537082, 0.40197


Table 7 Comparison of modal and normalized index values for the S4 structure in the GaAs and GeSi material systems, obtained by varying w

4. Conclusion
A simple, accurate higher-order compact (HOC) method in combination with the conjugate gradient (CG) iteration scheme has been presented. It accurately determines TE field solutions of the Helmholtz wave equation, taking full account of the discontinuities at the vertical interfaces within a rib waveguide structure. The values obtained for the effective index, normalized index and confinement factor for the GaAs and GeSi material systems in various structures are found to be in close agreement with other published results. The method also demonstrates the significant role of structure parameter optimization in the efficient propagation of the e.m. wave, which must be taken into account during the fabrication process to make such waveguides cost-effective.

Acknowledgements
The author is very grateful to the University Grants Commission (Grant No. F.PSW-180/13-14(ERO)), Govt. of India, for providing the research funding for this work, and to the JIS group for providing the necessary infrastructure.

References
[1] Thander, A.K. and Bhattacharyya, S., 2017. Study of optical modal index for semiconductor rib waveguides using higher order compact finite difference method. Optik, vol. 131, pp. 775-784.
[2] Thander, A.K. and Bhattacharyya, S., 2016. Optical confinement study of different semiconductor rib waveguides using higher order compact finite difference method. Optik, vol. 127, pp. 2116-2120.
[3] Stern, M.S., 1995. Finite difference analysis of planar optical waveguides. Progress in Electromagnetics Research, PIER, vol. 10, pp. 123-186.
[4] Stern, M.S., 1988. Semivectorial polarised finite difference method for optical waveguides with arbitrary index profiles. IEE Proceedings-Optoelectronics, vol. 135, pp. 56-63.
[5] Rahman, B.M.A., 1995. Finite element analysis of optical waveguides. Progress in Electromagnetics Research, PIER, vol. 10, pp. 187-216.
[6] Thander, A.K. and Bhattacharyya, S., 2015. Optical waveguide analysis using higher order compact FDM in combination with Newton Krylov subspace method. IEEE International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN 2015), pp. 66-71.
[7] Thander, A.K. and Bhattacharyya, S., 2015. Rib waveguide propagation through modal index study using higher order compact (HOC) finite difference method (FDM). 6th International Conference on Computers and Devices for Communications (CODEC-2015), IEEE.
[8] Huang, W. and Haus, H.A., 1991. A simple variational approach to optical rib waveguides. Journal of Lightwave Technology, vol. 9, no. 1, pp. 56-61.

Table 7:
w | GaAs: neff, b | GeSi: neff, b
2.70 | 3.410146433, 0.2525 | 3.538635337, 0.3830
2.85 | 3.412706582, 0.3163 | 3.538783837, 0.3844
3.00 | 3.414874639, 0.3705 | 3.539040821, 0.3870
3.10 | 3.415854368, 0.3949 | 3.539152238, 0.3881
3.50 | 3.419069424, 0.4752 | 3.539511329, 0.3917
4.00 | 3.42244980, 0.5598 | 3.539866498, 0.3952


Taguchi-Based Optimization and Numerical Modelling of Surface Roughness in CNC Turning Operation

Arghya Gupta

Department of Mechanical Engineering, Narula Institute of Technology, Agarpara, Kolkata, West Bengal- 700109

Abstract
In any machining process, in order to reduce cost and to achieve the desired product standard and quality, it is of utmost importance to determine the optimum values of the input machining parameters. In this work, the effects of major input cutting parameters, namely cutting speed, feed and depth of cut, on average surface roughness (Ra) have been examined with aluminium (LM6) as the work material and a single-point cutting tool with an indexable tungsten carbide insert on a CNC lathe. A solution of 20% Balmerol Protocool SL and 80% demineralised water is used as the coolant. Optimization of the input cutting parameters is carried out using the Taguchi method, with the experimental set-up designed according to Taguchi's L16 orthogonal array. Experiments have been carried out with the values tabulated through the L16 orthogonal array, and the average surface roughness (Ra) values for all 16 observations were measured with a "Talysurf" tester. The results were analysed using the signal-to-noise ratio (S/N ratio) and the "Main effects plot for S/N ratios" to obtain the optimum values of the input cutting parameters. This work also aims at determining empirical relationships, both linear and exponential, between the average surface roughness (Ra) and the different input cutting parameters.

Keywords: Surface Roughness, Cutting Speed, Feed, Depth of Cut, CNC Lathe, Taguchi's L16 orthogonal array, S/N Ratio, Empirical relationship

1. Introduction

1.1 Objective
The objective of the present work is to develop efficient linear and exponential empirical equations for the average surface roughness (Ra) based on input cutting parameters such as cutting speed ("v" in m/min), depth of cut ("d" in mm) and feed ("f" in mm/rev). The developed equations are validated against the experimental results to identify the more accurate model. The Taguchi method is also used to find the optimal cutting parameters.

1.2 Literature review
Ahilan et al. [1] developed neural network models for the prediction of machining parameters in CNC turning. Results from experiments based on Taguchi's Design of Experiments (DoE) were used to develop neuro-based hybrid models, and ANOVA was used to determine the influence of the process parameters so that minimum power consumption and maximum productivity could be achieved. Risbood et al. [2] used a neural network to predict surface finish, taking the acceleration of the radial vibration of the tool holder as feedback. Benardos et al. [3] presented various methodologies and approaches based on machining theory, experimental investigation, designed experiments and artificial intelligence, with their drawbacks, to avoid any re-processing of the machined workpiece. Nalbant et al. [4] carried out experimental studies on artificial neural networks (ANN): the cutting tool, feed rate and cutting speed values were used in the input layer of the ANNs, while the surface roughness values were used at the output layer. Kwon et al. [5] used a fuzzy adaptive modelling technique, which adapts the membership functions in accordance with the magnitude of the process variations, to predict surface roughness. Krishankant et al. [6] designed a Taguchi orthogonal array with three levels of turning parameters with the help of MINITAB 15. They measured the initial and final weight of the workpiece (EN24) and the machining time to calculate the MRR in two sets of experiments (first run and second run); the S/N ratio was calculated on a larger-the-better basis, and the optimal levels of the machining parameters (speed, feed, depth of cut) were obtained. Quazi et al. [7] employed Taguchi orthogonal arrays, the S/N ratio and the analysis of variance (ANOVA) to analyse the effect of the turning parameters. Lazarevic et al. [8] analysed different cutting parameters against average surface roughness on the basis of a standard Taguchi orthogonal array with the help of MINITAB; the optimal cutting-parameter settings were determined from the analysis of means (ANOM) and analysis of variance (ANOVA). Vipindas et al. [9] performed experiments on Al 6061 material based on a Taguchi orthogonal array and observed that feed is the significant factor at a 95% confidence level. Durai et al. [10] took three levels of process parameters to optimise for minimum energy consumption with the help of Taguchi's orthogonal array; they showed that as the material removal rate increases, power demand increases and energy consumption decreases. Nalbant et al. [11] optimised three cutting parameters, namely insert radius, feed rate and depth of cut, with respect to surface roughness, employing the orthogonal array, the S/N ratio and ANOVA to study the performance characteristics in turning operations on AISI 1030 steel bars using TiN-coated tools. Asilturk et al. [12] used an AISI 304 austenitic stainless steel workpiece and a carbide-coated tool under dry conditions to study the influence of cutting speed, feed and depth of cut on surface roughness (Ra and Rz); the adequacy of the developed model was proved by ANOVA and response-surface 3D plots.

2. Experiment / Analysis
The experimental work was performed on a CNC lathe with aluminium (LM6) as the work material and a carbide-tipped HSS cutting tool. The recommended cutting speed for aluminium with an HSS tool is 75 to 105 m/min [18]. The initial diameter (D) of the cylindrical aluminium workpiece is 38 mm; hence the spindle speed in RPM can be calculated from the formula v = π·D·N (v in m/min and N in RPM). Four levels each of spindle speed, depth of cut and feed have been chosen (Table 1).
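The spindle-speed calculation above can be sketched as follows; the helper name is ours, and we assume D is converted from mm to m so that v stays in m/min.

```python
import math

def spindle_rpm(v_m_per_min, diameter_mm):
    """N = v / (pi * D): cutting speed v in m/min, workpiece diameter in mm
    converted to metres."""
    return v_m_per_min / (math.pi * diameter_mm / 1000.0)

# For the 38 mm workpiece, v = 75 m/min corresponds to roughly 628 RPM
n = spindle_rpm(75.0, 38.0)
```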

In the next step, with the combinations of input parameters generated through the Taguchi orthogonal array, experiments have


Table 1: Experimental results for surface roughness and determination of S/N ratios

been carried out and surface roughness values measured with a "Talysurf" tester; instrument calibration is required before conducting the experiment. The turning operation was performed for 16 observations with combinations of the input cutting parameters, and the surface roughness was measured for each run. Then, with the help of the "MINITAB 16" software, S/N ratios were calculated and the "Main effects plot for S/N ratios" was plotted (Fig. 1) to obtain the optimal input cutting parameter values. In Fig. 1 the maximum values are the optimum values, i.e., spindle speed 76 m/min, depth of cut 1.1 mm and feed rate 0.1 mm/rev.

Fig. 1 Main effects plot for S/N ratios (data means): mean of S/N ratios versus spindle speed (75, 76, 77, 80 m/min), depth of cut (0.5, 0.8, 0.9, 1.1 mm) and feed (0.10, 0.15, 0.20, 0.25 mm/rev). Signal-to-noise: smaller is better.
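The S/N values in Table 1 follow the standard Taguchi smaller-the-better form, S/N = −10·log10(mean of y²); with a single Ra reading per run this reduces to −20·log10(Ra). A minimal sketch (helper name ours) reproduces observation 1 of Table 1, where Ra = 0.294 µm gives about 10.63 dB.

```python
import math

def sn_ratio_smaller_is_better(values):
    """Taguchi smaller-the-better S/N ratio in dB: -10 * log10(mean(y_i^2))."""
    mean_sq = sum(y * y for y in values) / len(values)
    return -10.0 * math.log10(mean_sq)

sn_row1 = sn_ratio_smaller_is_better([0.294])   # observation 1 of Table 1
```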

3. Result and Discussion
As already mentioned, 4 levels of each input cutting parameter are given as input, and the orthogonal array generated is shown in Table 1. In this work the most appropriate array is determined as L16 (from the 4³ = 64 possible combinations) in order to obtain the optimal input cutting parameters and their effects. The "MINITAB 16" software is used for this approach.
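The balance property that makes the L16 design of Table 1 "orthogonal" can be checked mechanically: for every pair of factor columns, each of the 4 × 4 level combinations must occur exactly once. A sketch of that check, with the 16 runs of Table 1 encoded as level indices (the encoding is ours):

```python
from itertools import combinations

# Table 1 runs as (speed, depth, feed) level indices:
# speeds 75/76/77/80 -> 0..3, depths 0.5/0.8/0.9/1.1 -> 0..3,
# feeds 0.10/0.15/0.20/0.25 -> 0..3
RUNS = [(0, 0, 0), (0, 1, 1), (0, 2, 2), (0, 3, 3),
        (1, 0, 1), (1, 1, 0), (1, 2, 3), (1, 3, 2),
        (2, 0, 2), (2, 1, 3), (2, 2, 0), (2, 3, 1),
        (3, 0, 3), (3, 1, 2), (3, 2, 1), (3, 3, 0)]

def is_orthogonal(runs, levels=4):
    """True if every pair of columns covers each level combination exactly once."""
    n_cols = len(runs[0])
    full = sorted((a, b) for a in range(levels) for b in range(levels))
    for c1, c2 in combinations(range(n_cols), 2):
        if sorted((r[c1], r[c2]) for r in runs) != full:
            return False
    return True
```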

3.1 Mathematical Formulation
The following two models are used to develop the relationship between surface roughness (Ra) and cutting speed (v), depth of cut (d) and feed rate (f).

A linear empirical model of the following type (after Ahilan et al. [1]):

Sl. No. | Spindle Speed (m/min) | Depth of cut (mm) | Feed (mm/rev) | Surface Roughness (µm) | S/N Ratio (dB)

1 75 0.5 0.1 0.294 10.633

2 75 0.8 0.15 0.578 4.761

3 75 0.9 0.2 0.797 1.9708

4 75 1.1 0.25 0.744 2.568

5 76 0.5 0.15 0.695 3.1603

6 76 0.8 0.1 0.286 10.8726

7 76 0.9 0.25 0.649 3.7551

8 76 1.1 0.2 0.689 3.2356

9 77 0.5 0.2 0.759 2.3951

10 77 0.8 0.25 0.71 2.974

11 77 0.9 0.1 0.413 7.680

12 77 1.1 0.15 0.53 5.514

13 80 0.5 0.25 0.701 3.085

14 80 0.8 0.2 0.758 2.4066

15 80 0.9 0.15 0.86 1.3103

16 80 1.1 0.1 0.304 10.342

NCMTP-2017 21

Ideal International E- Publication

www.isca.co.in

Ra = A + a·v + b·d + c·f    (1)

An exponential model of the following type, as developed by Fang et al. [13]:

Ra = X·v^a·d^b·f^c    (2)

where A, X, a, b, c are constants.

3.2 Determination of empirical models (linear and exponential)
Table 1 shows that the cutting speed remains constant at 75 m/min for observations 1 to 4, 76 m/min for observations 5 to 8, 77 m/min for observations 9 to 12 and 80 m/min for observations 13 to 16. Hence an empirical relationship is derived for each particular cutting speed. It is clear from Table 1 that 4 sets of equations can be formed for the 4 levels of cutting speed, so 16 equations can be generated, and ultimately 4 empirical relationships each of linear and exponential type can be obtained.

A comparative study has been carried out between the experimental surface roughness values and the surface roughness values from the linear and exponential models with a ±20% error band (according to Risbood et al. [2], a ±20% error is reasonable). It was found that the following linear equation (Eq. 3) gives values closest to the experimental ones. For the exponential model (Eq. 4) it was observed that not all percentage errors lie within the ±20% range, so Eq. 4 is not reliable for calculating surface roughness theoretically.

Ra = 0.64419 + 0.1216 v − 0.444 d + 1.684 f    (3)

Ra = 0.39806 …    (4)

3.3 Verification
Fig. 2 shows the experimental surface roughness values versus the surface roughness values from the linear model, i.e., Ra = 0.64419 + 0.1216 v − 0.444 d + 1.684 f. A line inclined at 45° and passing through the origin is also drawn in the figure. For perfect prediction all points should lie on this line; here most of the points lie close to it, so the linear model for surface roughness provides reliable prediction. A comparative study between the experimental surface roughness values and the values from the linear model (Eq. 3) has been carried out in Fig. 3, and the experimental values are found to be close to the model values. In 11 out of 16 cases the percentage error is within ±20%. It can be concluded that Eq. 3 can be used to calculate surface roughness theoretically.
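The ±20% acceptance test used above amounts to a one-line percentage-error check (function name ours):

```python
def within_tolerance(measured, predicted, tol_pct=20.0):
    """True if the prediction deviates from the measured Ra by at most tol_pct %."""
    return abs(100.0 * (predicted - measured) / measured) <= tol_pct
```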


Figure 2: Surface roughness experimental values versus Surface roughness values from linear model (Eq. 3).


4. Conclusion
In this experimental work the Taguchi method has been used to develop an orthogonal array with 4 levels of the input cutting parameters: cutting speed ("v" in m/min), depth of cut ("d" in mm) and feed ("f" in mm/rev). For the combinations of input cutting parameters, the surface roughness (Ra) has been measured with a "Talysurf" surface roughness tester. The output values (Ra) were converted into S/N ratios using the "MINITAB 16" software, and the "Main Effects Plot for S/N Ratios" was plotted for surface roughness to obtain the optimal input cutting parameter values. Linear and exponential models have also been developed to identify the more accurate model.

References
[1] C. Ahilan, Somasundaram Kumanan, N. Sivakumaran, J. Edwin Raja Dhas. "Modelling and prediction of machining quality in CNC turning process using intelligent hybrid decision making tools." Applied Soft Computing 13 (2013) 1543-1551.
[2] K.A. Risbood, U.S. Dixit, A.D. Sahasrabudhe. "Prediction of surface roughness and dimensional deviation by measuring cutting forces and vibrations in turning process." Journal of Materials Processing Technology 132 (2003) 203-214.
[3] P.G. Benardos, G.C. Vosniakos. "Predicting surface roughness in machining: a review." International Journal of Machine Tools & Manufacture 43 (2003) 833-844.
[4] M. Nalbant, H. Gokkaya, I. Toktas, G. Sur. "The experimental investigation of the effects of uncoated, PVD and CVD coated cemented carbide inserts and cutting parameters on surface roughness in CNC turning and its prediction using artificial neural networks." Robotics and Computer-Integrated Manufacturing 25 (2009) 211-223.
[5] Yongjin Kwon, Gary W. Fisher, Tzu-Liang (Bill) Tseng. "Fuzzy neuron adaptive modelling to predict surface roughness under process variations in CNC turning." Journal of Manufacturing Systems, vol. 21, no. 6 (2002) 440-450.
[6] Krishankant, Jatin Taneja, Mohit Bector, Rajesh Kumar. "Application of Taguchi method for optimizing turning process by the effects of machining parameters." International Journal of Engineering and Advanced Technology (IJEAT), ISSN: 2249-8958, vol. 2, issue 1, October 2012, 263-274.
[7] T.Z. Quazi, Pratik More, Vipul Sonawane. "A case study of Taguchi method in the optimization of turning parameters." International Journal of Engineering and Advanced Technology (IJEAT), vol. 3, issue 2, February 2013, 616-626.
[8] D. Lazarevic, M. Madic, P. Jankovic, A. Lazarevic. "Cutting parameters optimization for surface roughness in turning operation of polyethylene (PE) using Taguchi method." Tribology in Industry, vol. 34, no. 2 (2012) 68-73.
[9] M.P. Vipindas, P. Govindan. "Taguchi-based optimization of surface roughness in CNC turning operation." International Journal of Latest Trends in Engineering and Technology (IJLTET), vol. 2, issue 5, July 2013, 454-463.
[10] Durai Matinsuresh Babu, Mouleeswaran Senthil Kumar, Jothiprakash Vishnuu. "Optimization of cutting parameters for CNC turned parts using Taguchi's technique." International Journal of Engineering, Tome X (2012), Fascicule 3 (ISSN 1584-2673), 493-496.
[11] M. Nalbant, H. Gokkaya, G. Sur. "Application of Taguchi method in the optimization of cutting parameters for surface roughness in turning." Materials and Design 28 (2007) 1379-1385.
[12] Ilhan Asilturk, Suleyman Neseli. "Multi response optimisation of CNC turning parameters via Taguchi method-based response surface analysis." Measurement 45 (2012) 785-794.
[13] X.D. Fang, H. Safi-Jahanshaki. "A new algorithm for developing a reference model for predicting surface roughness in finish turning of steels." International Journal of Production Research 35 (1997) 179-197.
[14] W. Hongxiang, L. Dan, D. Shen. "Surface roughness prediction model for ultraprecision turning of aluminium alloy with a single crystal diamond tool." Chinese Journal of Mechanical Engineering 15 (2002) 153-156.
[15] G. Taguchi. "Introduction to Quality Engineering." Asian Productivity Organisation, Tokyo (1990).
[16] G. Taguchi, E.A. Elsayed, T.C. Hsiang. "Quality Engineering in Production Systems." McGraw-Hill, New York, NY, USA (1989).
[17] R.A. Fisher. "Statistical Methods for Research Workers." Oliver and Boyd, London (1925).
[18] https://en.wikipedia.org/wiki/Speeds_and_feeds


Monsoonal Rainfall of Sub-Himalayan West Bengal: Empirical Modelling and Forecasting

Pijush Basak

Dept. of Mathematics, Narula Institute of Technology

Abstract
The South West Monsoon rainfall data of the Sub-Himalayan West Bengal subdivision is modelled as a non-linear time series. As observed through the rainfall data of the region, more than 60% of the inter-annual variability of rainfall is accounted for by the proposed non-linear model. The proposed model is capable of forecasting South West Monsoon rainfall one year in advance in the respective regions, indicating high/low rainfall on the basis of antecedent data. For the year 2001, the predicted South West Monsoon rainfall in Sub-Himalayan West Bengal is 168.21 ± 21.36 cm.

Keywords: South West Monsoon rainfall, Lognormal distribution, Skill of forecast, modelling, Climatic normal

_______________________________________________________________________________________________________

1. Introduction
The occurrence of South West Monsoon (SWM) rainfall, or summer monsoon rainfall, over the meteorological subdivision of Sub-Himalayan West Bengal (SHWB: No. 5) is an important phenomenon, and its impact on agriculture, the economy and society is well known. Efforts have long been made to quantify the variability of the monsoonal phenomenon and to forecast it at various temporal and spatial scales. A modelling effort ought to be focussed on understanding the variability of the historical data. A considerable literature is available on the analysis of the variability of SWM rainfall data; one may refer to the foundational works of Mooley and Parthasarathy [1], Gregory [2], Thapliyal [3], Iyengar and Basak [4], Iyengar and Raghukanth [5] and Basak [6].

The basic characteristic of SWM rainfall data is its non-Gaussianity on several temporal and spatial scales. In fact, the non-Gaussianity on weekly, monthly and seasonal scales persists even though such data can be treated as the sum of a large number of random variables. Linear time-series models based on past rainfall nevertheless reflect the behaviour reasonably well.

Kedem and Chiu [10], however, propose that on a small time scale the rain rate has to be a Lognormal random variable, and indicate that the lognormal distribution is a natural outcome of the law of proportionate effect applied to the rain rate, namely

Rj+1 − Rj = εj Rj    (1)

where the εj's are independent, identically distributed random variables, independent of the Rj's.
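The law of proportionate effect in Eq. (1) can be illustrated by simulation: multiplying many independent relative shocks makes log R a sum of i.i.d. terms, so R tends toward a lognormal shape. A sketch follows; the shock distribution and seeds are chosen only for illustration, not fitted to rainfall data.

```python
import math
import random

def simulate_proportionate_effect(r0=100.0, steps=500, rng=None):
    """Iterate R_{j+1} = R_j * (1 + eps_j) with i.i.d. eps_j, per Eq. (1)."""
    rng = rng or random.Random(42)
    r = r0
    for _ in range(steps):
        r *= 1.0 + rng.uniform(-0.05, 0.05)  # illustrative shock distribution
    return r

# An ensemble of independent runs: log(R) is a sum of i.i.d. terms,
# hence roughly Gaussian by the central limit theorem.
sample = [simulate_proportionate_effect(rng=random.Random(seed))
          for seed in range(200)]
logs = [math.log(r) for r in sample]
```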

The present work establishes this logic for the SWM rainfall (June-September) of the SHWB subdivision. It is shown that equation (1) can be systematically extended to account for the year-to-year and long-term relationships known to persist in monsoonal rainfall. The applicability of the model is demonstrated consistently on the SWM rainfall of the subdivision, and the proposed model behaves as a useful tool for statistical forecasting of monsoonal rainfall. The detailed approach is presented in the subsequent sections, and a forecast for a recent year is also presented.

2. Data
The SWM rainfall data of the SHWB subdivision is extracted from the IMD database available at the IITM-IMD site (http://www.tropmet.res.in/); the details of the data assembly are available on the same website.

3. Modelling
The initial statistical properties of the subdivisional SWM rainfall are presented in Table 1. The mean µR is the long-term time average of the data and may be recognised as the 'climatic normal'. The standard deviation σR reflects the year-to-year variability of the SWM rainfall with respect to the long-term average.

The autocorrelation and other salient features helpful for modelling are extracted. These statistics have been computed from the available single sample under the assumption that the process is ergodic. The inter-annual correlations are usually studied and tested as if the data were Gaussian. In the present study, for SWM rainfall series of subdivisions with moderate or strong non-Gaussianity, such steps may not be valid, except for the long-term averages as useful reference quantities. With this in view, we do not refer to concepts connected with Gaussian processes, such as standard variates, but treat the data series as generalised lognormal data.

Equation (1) is now generalised into a simple lognormal model equation, namely,

Rj+1 / Rj = f(Rj) + εj (2)

where f(Rj) is an appropriate function of Rj.

It may be observed that equation (2) reduces to equation (1) when f(Rj) = 1. It indicates that, given the jth-year rainfall Rj, the annual change is proportional to an unknown function of Rj itself.

In Fig. 1, the relation between (Rj+1/Rj) and Rj for the subdivisional SWM rainfall of SHWB is presented for the period 1871-1990; the figure indicates a clear, discernible trend that can be expressed as a cubic polynomial of the form

f(R) = aR^3 + bR^2 + cR + d (3)

The cubic equation is selected over the quadratic and linear alternatives because it yields the least variance of the error ε, obtained as

σ^2 = (1/(N - 1)) Σ εj^2 (4)

where εj is a random time series of unknown structure that internally drives (traces) the system.
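The order selection described above can be sketched numerically; the rainfall series below is a synthetic stand-in (the IMD data are not reproduced in this paper), and `np.polyfit` plays the role of the least-squares fit:

```python
import numpy as np

# Hypothetical illustration: choose the polynomial order for f(R) by
# comparing the residual error variance (equation 4) of linear,
# quadratic and cubic least-squares fits of r_j = R_{j+1}/R_j on R_j.
rng = np.random.default_rng(0)
R = 19.9 + 2.0 * rng.standard_normal(121)   # synthetic rainfall series
r = R[1:] / R[:-1]                          # annual ratios R_{j+1}/R_j
x = R[:-1]

def error_variance(order):
    coeffs = np.polyfit(x, r, order)        # least-squares polynomial fit
    eps = r - np.polyval(coeffs, x)         # residual series epsilon_j
    return eps.var(ddof=1)                  # sigma^2 with N-1 denominator

variances = {order: error_variance(order) for order in (1, 2, 3)}
# nested least-squares models: the cubic cannot fit worse than the
# quadratic, nor the quadratic worse than the linear fit
assert variances[3] <= variances[2] + 1e-9
assert variances[2] <= variances[1] + 1e-9
```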

Equation (2) is now a non-linear difference equation for the data, of the form

NCMTP-2017 24

Ideal International E- Publication

www.isca.co.in

Rj+1 = Rj.f(Rj) + εj.Rj = Rj(aRj^3 + bRj^2 + cRj + d) + εj.Rj (5)

If εj is taken to be independent of Rj, it follows that the conditional expectation of Rj+1 given Rj would be

<Rj+1> = Rj(aRj^3 + bRj^2 + cRj + d) (6)

The above expression defines the natural point predictor for the rainfall in year (j+1) when only the previous year's value is known. The error in the predictor is the conditional standard deviation of Rj+1, namely,

σ(Rj+1) = σ.Rj (7)

For the SWM rainfall of the subdivision analysed in this paper, only one real root is possible for the fixed-point equation f(R) = 1, and equation (6), on iteration, would ultimately stabilise at that fixed point. The parameters of the proposed cubic polynomial f(R) and the fixed point (Rc) of equation (6) are presented for the SWM data series in Table 2. It is understood that equation (6), or in general equation (1), extracts the climatic or long-term average of the data series reasonably well.

Thereafter, the fixed point Rc of Table 2 is compared with the mean value µR of Table 1. However, the variability band in any year (j+1) depends on the previous rainfall value Rj through equation (7); this band may be large even though σ is small. This implies that equation (6) in its current form would not yield a good one-year-ahead forecast, owing to the attraction towards the long-term mean value (fixed point) Rc. It indicates that the natural variability is not reflected by the form of f(Rj) in equation (3). The form of f(Rj) therefore has to be rectified, and it is of interest to measure the forecast skill of equation (6) so as to see an improvement in forecast.

4. Improvement in forecast skill
A relation in the form of equation (6) is utilised for forecasting the SWM rainfall in the next year, i.e. Rj+1, whenever the rainfall in the previous year Rj is known. The skill of the forecast can be measured with reference to the climatic mean µR and variance σR^2. If there were no inter-annual relationship in the SWM rainfall, then the Rj could be treated as independent samples of a random variable. In such a case, the only possible prediction for any year is the mean value µR, and the prediction error may be measured as (Rj+1 - µR)^2, which on average is the climatic variance σR^2. Consequently, any improvement in forecast has to be acquired by a reduction of variance with respect to σR^2. With this in view, the error between the observed rainfall and the value predicted by equation (6) is compared for each year with j = 2 to j = 120 (1872-1990). The percentage reductions in variance with respect to the climatic variance are computed and presented in Table 3.
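The skill measure described above can be sketched as follows; the observed series and the predictor here are synthetic stand-ins, since the IMD data are not reproduced in this paper:

```python
import numpy as np

# Assumed form of the Table 3 skill metric: the percentage reduction in
# error variance of a predictor relative to always forecasting the
# climatic mean mu_R.
def percent_reduction_in_variance(observed, predicted):
    climatic_var = np.mean((observed - observed.mean()) ** 2)
    model_var = np.mean((observed - predicted) ** 2)
    return 100.0 * (1.0 - model_var / climatic_var)

rng = np.random.default_rng(1)
obs = 199.1 + 20.1 * rng.standard_normal(119)   # synthetic SWM series (cm)
pred = obs + 18.0 * rng.standard_normal(119)    # a weak hypothetical predictor

skill = percent_reduction_in_variance(obs, pred)
# a perfect predictor scores exactly 100; the climatic mean scores ~0
assert percent_reduction_in_variance(obs, obs) == 100.0
```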

In fact, the small reduction in variance observed in Table 3 indicates that equation (6), which incorporates only the consecutive yearly connection between Rj+1 and Rj, has to be improved by incorporating longer inter-annual relations. This may be executed by finding correlations between (Rj+1/Rj) and lagged yearly values. It has previously been reported by investigators (Parthasarthy et al.1, Iyenger9, and Iyenger and Basak4) that SWM rainfall possesses significant correlations at several lags in a number of regions. For the present paper, these inter-relations are investigated for the subdivisional SWM rainfall. As the correlations are small numbers and the series concerned are non-negative and non-Gaussian, the statistical significance level for Gaussian series may not be valid; the appropriate level would be less than that for a Gaussian series (Johnson and Kotz10). Also, owing to the positivity property, the random variables (different series) may be jointly exponentially distributed, and thus the significant linear correlations for those variables would be considerably less than for a Gaussian series (Johnson and Kotz10). Small correlations that would be rejected for a Gaussian series at the 5% level (±0.20 for a data series of length around 120) may still be valid for the non-Gaussian series relevant to our study. Thus, for the non-Gaussian series in our investigation, for a sample size of 120, the significant value at the 5% level is accepted as ±0.13.

The above discussion provides a clue for the improvement of model (6): incorporating lagged terms up to the twentieth lag, namely Rj-1, Rj-2, ..., Rj-19, Rj-20. As the length of the data is limited to 120 as discussed, correlations up to lag 20 are considered in our study.
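The screening of lagged correlations can be sketched as below; the pairing of (Rj+1/Rj) with Rj-k is our assumption of the intended alignment, the series is synthetic, and the ±0.13 threshold is the significance level quoted above:

```python
import numpy as np

# Correlation of the annual ratio r_j = R_{j+1}/R_j with the lagged
# rainfall R_{j-k}, k = 0..20, as screened in Table 4.
def lagged_correlations(R, max_lag=20):
    r = R[1:] / R[:-1]                      # r_j for j = 1..N-1
    corrs = {}
    for k in range(max_lag + 1):
        # pair r_j with R_{j-k}; the first k ratios lack a lagged partner
        corrs[k] = np.corrcoef(r[k:], R[: len(r) - k])[0, 1]
    return corrs

rng = np.random.default_rng(2)
R = 19.9 + 2.0 * rng.standard_normal(120)   # synthetic series, length 120
corrs = lagged_correlations(R)
significant = {k for k, v in corrs.items() if abs(v) > 0.13}
assert len(corrs) == 21
assert all(-1.0 <= v <= 1.0 for v in corrs.values())
```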

In Table 4, the lagged correlations between (Rj+1/Rj) and rainfall are significant at the 6th and 20th lags for SHWB, suggesting a model of the form

Rj+1 = Rj(aRj^3 + bRj^2 + cRj + d + g6.Rj-6 + g20.Rj-20) + δj.Rj (8)

It may be noted that the data series under consideration possesses significant lag-correlations in the range of 14-20 years. Upon minimisation of the mean square value of the error δj, the coefficients of equation (8) are computed and presented in Table 5.

In the last row of Table 5, the standard deviation σδ is presented for the subdivisional SWM rainfall. It may be observed, interestingly, that σδ is less than the previous error value σ (Tables 2 and 5). Naturally, the significance of adding lagged terms to model (6) is established.

Equation (8) defines a non-linear map of higher dimension. Initial conditions of 20 past values are required for solving equation (8); by iterating a sufficient number of times, one can find the steady state (stable fixed point) of the model. This process possesses considerable variation, controlled by the cubic polynomial f(R) in equation (8). The skill of the new model in forecasting is verified by finding the reduction in variance as discussed earlier. The reductions in variance in year-to-year forecasts achieved by equation (8), along with those of equation (6), are presented in Table 3. It is observed that the percentage reduction in variance for equation (8) is always higher than for equation (6).

5. Test Forecasting
The model (8) proposed for the SWM rainfall of the subdivision has a clear-cut deterministic part and a random part. The deterministic part <Rj+1> is a point predictor for the rainfall in year (j+1), conditioned on all the previous values. The statistical variability band about this predictor is ±σδ.Rj. Consequently, as a forecast, we understand that the rainfall in year (j+1) will lie with 67% probability in the interval [<Rj+1> - σδ.Rj, <Rj+1> + σδ.Rj]. Obviously, it is expected that σδ.Rj is less than σR. It is pointed


out that the rainfall data are available over the period 1871-2001. In the prediction exercise, for the first prediction year, 1991, the previous model has been directly used; for the subsequent 10 years (1992-2001) as well, the actual rainfall of the previous year has been utilised. It is observed that most of the predicted values are well within the range of prediction. In Table 6, the predictions of the SWM rainfall are presented. In total, 11 predictions have been made. The expected value of each prediction is reported with its 1-sigma band (67% probability). The observed number of correct predictions is 8. Regarding statistical significance, the null hypothesis (H0) that correct prediction would be purely due to chance is tested with a Chi-square test. If the correct predictions were purely due to chance, the expected number of correct predictions would be 10.5. However, the number of correct predictions is 8, which is about 73% of the total. For the concerned frequencies, the Chi-square is 7.36, compared with the tabulated Chi-square of 3.84 at the 5% level of significance. Consequently, H0 is rejected and the modelling capability of the model is verified.

6. Discussion
The model developed in this paper carries importance in the sense that the SWM rainfall of the subdivision is modelled as a generalised lognormal random variable. It is, of course, a generalisation of the basic law of proportion leading to the basic lognormal distribution. It is shown that the long-term climatic mean is essentially controlled by the one-step annual connection. This model accounts for nearly 60% of the inter-annual variability of the subdivisions of all the regions. The inter-relations between the present rainfall value and past values are utilised to shape the model for prediction purposes.

The impact of previous rainfall values on future values has been introduced into the model. It is stressed that the present and future rainfall values are the cumulative effect of the present and past values, although the basic law of proportion is the main consideration of the model.

The SWM rainfall forecasts of the subdivision are presented in Table 6 as an expected value along with its standard deviation. The actual value is expected to lie within the 1-σ prediction interval with 67% probability. As shown in the table, the actual values of SWM rainfall for the prediction exercise period (1991-2001) are, in general, within the 1-σ interval. The prediction variation is generally less than the climatic variation.

7. Summary and Conclusion
The year-to-year forecasting ability of the model is verified to be effective on an independent sample of 11 years. For the year 2001, the forecast for SHWB is 111.34±23.39 (cm) with a probability of 67%. The capacities of the model beyond the regions mentioned are under investigation. It would be interesting to see how the present random error part can be decomposed into the intra-seasonal variability in the regions concerned. It is hoped that such an effort would further improve the forecasting ability of the present model.

References
1. Ajay Mohan R S and Goswami B N 2000, A common spatial mode for intra-seasonal and inter-seasonal variation and predictability of the Indian summer monsoon, Curr. Sci. 79 1106-111.
2. Gadgil S, Srinivasan J and Nanjundiah R S 2002, On forecasting the Indian summer monsoon: the intriguing season of 2000, Curr. Sci. 83 394-403.
3. Gregory S 1989, Macro-regional definition and characteristics of Indian summer monsoon rainfall 1871-1975, Int. J. Climatol. 9 465-483.
4. Hartmann D L and Michelson M L 1989, Intraseasonal periodicities in Indian rainfall, J. Atmos. Sci. 46 2838-2862.
5. Hastenrath S and Greischar L 1993, Changing predictability of Indian monsoon rainfall anomalies?, Proc. Ind. Acad. Sci. (Earth Planet Sci.) 102 35-47.
6. Iyenger R N and Basak P 1994, Regionalization of Indian monsoon rainfall and long-term variability signals, Int. J. Climatol. 14 1095-1114.

Table 1: SWM rainfall data (1871-2001)

Region   Area (sq. km)   µR         σR        Skewness   Kurtosis
SHWB     21625           199.1141   20.0709   0.1255     3.1025

SHWB = Sub-Himalayan West Bengal

Table 2: Parameters of f(R) and Rc

Region a b c d σ Rc

SHWB -0.0010 0.0655 -1.4225 11.4302 0.1858 199.1141

SHWB= Sub Himalayan West Bengal


Table 3: Percentage reduction in climatic variance

Region Equation (6) Equation (8)

SHWB 3.1163 3.2049

SHWB= Sub Himalayan West Bengal

Table 4: Correlation coefficients between (Rj+1/Rj) and SWM rainfall at lags Rj, Rj-1, ..., Rj-20

Lag-RF   SHWB
Rj        0.7096*
Rj-1     -0.0336
Rj-2      0.0823
Rj-3      0.0978
Rj-4     -0.1303
Rj-5     -0.0379
Rj-6      0.1559*
Rj-7     -0.1002
Rj-8     -0.0984
Rj-9      0.0244
Rj-10     0.0922
Rj-11    -0.0326
Rj-12    -0.0680
Rj-13     0.0211
Rj-14     0.0668
Rj-15     0.0171
Rj-16    -0.0564
Rj-17    -0.0684
Rj-18     0.0652
Rj-19     0.0361
Rj-20    -0.1573*

*Significant at the 5% level (±0.13)
SHWB = Sub-Himalayan West Bengal

Table 5: Coefficients of equation (8)

Coefficient   SHWB
a              0.0005
b             -0.0256
c              0.3606
d              1.3039e-3
g6             2.5144e-3
g20            2.6201e-3
σδ             0.1514

(g1-g5 and g7-g19 are not included in the model.)
SHWB = Sub-Himalayan West Bengal



Table 6: Test forecasting with independent data

Year   Actual   Prediction
1991   124.51   154.42±11.90
1992   132.26   154.94±19.13
1993   154.68   152.98±18.77
1994   197.20   154.18±19.04
1995   122.18   143.05±17.67
1996   157.05   151.19±18.67
1997   148.82   153.38±18.00
1998   130.16   153.38±18.00
1999   165.56   154.79±19.12
2000   156.24   153.50±18.96
2001   161.25   168.21±21.36

SHWB = Sub-Himalayan West Bengal

Fig 1. (Rj+1/Rj) vs. Rj plot of Sub-Himalayan West Bengal SWM rainfall, with cubic fit (Rj in units of 10 cm)


Big Data Analysis Framework in Healthcare

Sanjay Goswami1, Indrani Sarkar2, Anwesha Das1, Kaustav Bhattacharya1 and Disha Dasgupta1
1Department of Computer Applications
2Department of Basic Sciences and Humanities (Physics)
Narula Institute of Technology, Agarpara, Kolkata

Abstract
The proposed framework is intended to analyse healthcare data from a particular region in order to categorise people into groups based on gender, disease, city, symptoms and treatments availed. The huge amount of data involved can be handled and processed using a distributed data processing system such as Hadoop. The framework is able to predict epidemics, propose disease cures, suggest quality-of-living improvements and suggest ways to avoid preventable deaths. It can also be used to predict models of treatment delivery.

Keywords: Big Data, NoSQL Database, Hadoop, Healthcare Analytics, Temporal Event Analysis, Medical Records, MapReduce, K-Means Clustering.

___________________________________________________________________________________________________

1. Introduction
The healthcare industry has historically generated large amounts of data, driven by record keeping, compliance, regulatory requirements and patient care. While most data is stored in hard-copy formats, the current requirement is for rapid digitization of these large amounts of data. Pushed by mandatory requirements and the need to improve the quality of healthcare delivery while keeping costs low, this massive quantity of data (known as "Big Data") needs to be utilised in a constructive manner in order to improve a wide variety of medical and healthcare services, including clinical decision support, disease surveillance and population health management.

In this new era of Big Data, healthcare can also be modernised by properly analysing healthcare-related data to deduce vital information regarding the groups, genders or regions that a particular disease attacks the most. The gigantic size of the data involves heavy analytics and large-scale computation. This can be achieved by using a distributed processing application such as Hadoop. MapReduce is a popular framework on top of Hadoop for performing large-scale processing of large data sets [1].
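The MapReduce pattern mentioned above can be imitated in-process without a Hadoop cluster; the record fields and disease names below are hypothetical:

```python
from collections import defaultdict

# Illustrative sketch of the MapReduce pattern: count patient records
# per disease. Field names and values are invented for illustration.
records = [
    {"city": "Kolkata", "gender": "F", "disease": "dengue"},
    {"city": "Kolkata", "gender": "M", "disease": "malaria"},
    {"city": "Agarpara", "gender": "F", "disease": "dengue"},
]

def mapper(record):
    # emit (key, 1) pairs, as a Hadoop streaming mapper would print them
    yield record["disease"], 1

def reducer(key, values):
    # sum the counts for one key after the shuffle phase
    return key, sum(values)

# in-process "shuffle": group the mapped pairs by key
groups = defaultdict(list)
for rec in records:
    for key, value in mapper(rec):
        groups[key].append(value)

counts = dict(reducer(k, vs) for k, vs in groups.items())
assert counts == {"dengue": 2, "malaria": 1}
```

On a real cluster the same mapper and reducer logic would run on many nodes, with Hadoop performing the shuffle between them.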

The following section explains the general framework of Big Data application in Healthcare Analytics.

2. Architecture of the Framework
The overall system can be understood from the following abstract model.

Figure 1. Abstract architectural framework for Big Data Analytics in Healthcare

The overall architecture and operations are presented in Figure 1. The stages are briefly described below:

1. Data Acquisition: In this stage, healthcare and medical data are collected from various sources. The sources may include diagnostic tests, therapeutic treatments, sales patterns of particular drugs in different areas, medical follow-up trends, etc. The information is collected from doctors and hospitals. It may be patient-wise, gender-wise, region-wise or survival-status-wise.

2. Data Mining: A huge amount of irrelevant and redundant data is also acquired in the previous stage. Not all data is of use to an application developer focussing on a particular disease in a particular area. Relevant data may need to be mined out of the huge stack of data collected from the various sources. Association rules are generally designed and applied to fetch relevant data for further processing.

3. Feature Extraction: Once the relevant data are found, key features are extracted that represent each data form in a canonical form for the data analysis algorithms. Canonical features are feature sets of reduced dimension that represent the data conveniently in the data analysis phase.

4. Data Analysis: In this stage, the information is represented in the form of K-means cluster maps in which we can differentiate between diseases, patients, genders, regions, etc.

5. Decision Making: Based on the data segregated in the previous stage, domain experts can take decisions about designing their therapeutic services for society.

3. Proposed Application of the Framework



The proposed framework is intended to analyse healthcare data from a particular region in order to categorise people into groups based on gender, disease, city, symptoms and treatments. The gigantic data can be handled and processed using a distributed processing system such as Hadoop. The framework is expected to predict epidemics, propose disease cures, suggest quality-of-life improvements and help avoid preventable deaths. This framework can also be used to predict models of treatment delivery.

Diseases and their possible symptomatic data are grouped together and analysed to provide a clear picture of the scenario. A clustering algorithm such as K-means can be applied to group the data and analyse them as per the following requirements:

1. Symptom wise disease prediction: Users can provide symptoms and the system will be able to predict the disease by matching them against the diseases related to those symptoms, so patients can take precautions early.

2. Disease wise treatment determination: According to the disease predicted from the symptoms, a suitable treatment

plan can be determined and suggested to the patients.

3. Disease wise survival ratio: The system can show the disease-wise survival ratio by analysing historical data. This will help a patient's family decide on a treatment plan early, avoiding the complications that follow late-stage detection.

4. Gender and age wise disease statistics: Same kinds of diseases can affect men and women differently and at different

rates and ages. Based on gender wise disease statistics, men and women can be suggested to take precautions regarding

various diseases at different ages.

5. Region wise disease statistics: Some diseases spread very fast in particular regions. Region-wise statistics can thus help people residing in those areas take additional precautions to avoid the spread of those diseases and reduce casualties.

6. Identifying and Developing the Next Generation of Healthcare Treatments: By studying different statistics,

diseases can be predicted and prevented by suggesting precautions and advance treatment plans to the people based on age,

gender and region based data.

The current project deals with existing disease data and performs analysis based on that data. A decision tree is built using the analysis algorithm [2][3][4]. A decision tree builds classification or regression models in the form of a tree structure. It breaks a dataset down into smaller and smaller subsets while, at the same time, an associated decision tree is incrementally developed. The final result is a tree with decision nodes and leaf nodes. A decision node (e.g., disease, symptoms, age, region) has two or more branches representing a classification or decision. The topmost decision node in a tree, which corresponds to the best predictor, is called the root node. Decision trees can handle both categorical and numerical data. An example decision tree is depicted in Figure 2.

Figure 2. Decision Tree for Healthcare Data Analysis
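A hand-built tree in the spirit of Figure 2 can be represented and traversed as below; the attribute names, branch values and disease labels are invented for illustration only:

```python
# Minimal decision tree: the root splits on the recorded symptom,
# an inner node splits on the age band, leaves are disease labels.
# All names and thresholds here are hypothetical.
tree = {
    "attribute": "symptom",
    "branches": {
        "fever": {
            "attribute": "age",
            "branches": {"child": "Disease1", "adult": "Disease2"},
        },
        "cough": "Disease3",
    },
}

def classify(record, node):
    # an inner node is a dict naming the attribute to test;
    # a leaf is a plain label string
    while isinstance(node, dict):
        node = node["branches"][record[node["attribute"]]]
    return node

assert classify({"symptom": "fever", "age": "adult"}, tree) == "Disease2"
assert classify({"symptom": "cough"}, tree) == "Disease3"
```

In practice such a tree would be induced from the mined records by a splitting criterion (e.g. information gain) rather than written by hand.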

All modules and user interfaces are proposed to be built using Python and Java Swing. Local databases may be developed using MySQL, and backend big-data processing can be done using Hadoop.

4. Algorithm Used
K-means clustering [5]: Given a set of observations (x1, x2, ..., xn), where each observation is a d-dimensional real vector, k-means clustering aims to partition the n observations into k (≤ n) sets S = {S1, S2, ..., Sk} so as to minimize the within-cluster sum of squares (i.e. variance). Formally, the objective is to find

argmin_S Σ_{i=1..k} Σ_{x ∈ Si} ||x - μi||^2

where μi is the mean of the points in Si. This is equivalent to minimizing the pairwise squared deviations of points in the same cluster:

argmin_S Σ_{i=1..k} (1/(2|Si|)) Σ_{x,y ∈ Si} ||x - y||^2



Because the total variance is constant, this is also equivalent to maximizing the squared deviations between points in

different clusters (between-cluster sum of squares).
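The objective above is usually minimised heuristically with Lloyd's algorithm, which alternates an assignment step and an update step. A minimal NumPy sketch (not the distributed Hadoop version proposed here):

```python
import numpy as np

# Lloyd's algorithm for k-means: alternately assign each point to its
# nearest centroid, then move each centroid to the mean of its cluster.
def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # update step: recompute each non-empty cluster's mean
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# two well-separated blobs must end up in different clusters
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels, _ = kmeans(X, k=2)
assert labels[0] == labels[1] and labels[2] == labels[3]
assert labels[0] != labels[2]
```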

5. Conclusions
This system implements group-wise analysis of enormous patient data in Hadoop. The method can give patients and doctors brief information about the rate of increase of diseases and their treatment. This can help doctors as well as patients get a brief idea about diseases, their symptoms, treatments and the necessary precautions.

References:
[1] Shanjiang Tang, Bu-Sung Lee, Bingsheng He, "DynamicMR: A Dynamic Slot Allocation Optimization Framework for MapReduce Clusters", IEEE Transactions, Vol. 2, No. 3, Sep 2013, pp. 333-345.
[2] Wullianallur Raghupathi and Viju Raghupathi, "Big data analytics in healthcare: promise and potential", Health Information Science and Systems, pp. 1-10, 2014.
[3] Aditi Bansal, Balaji Bodkhe, Priyanka Ghare, Seema Dhikale, Ankita Deshpande, "Healthcare Data Analysis using Dynamic Slot Allocation in Hadoop", International Journal of Recent Technology and Engineering, Vol. 3, Issue 5, November 2014, pp. 15-18.
[4] Divyakant Agrawal (UC Santa Barbara), Philip Bernstein (Microsoft), Elisa Bertino (Purdue Univ.) et al., "Big Data" white paper, Nov 2011 to Feb 2012.
[5] https://en.wikipedia.org/wiki/K-means_clustering


A Solar Power Generation System with a Seven Level Inverter
Susmita Dhar & Sanchari Kundu
Narula Institute of Technology, Agarpara, Kolkata, India

Abstract
This paper describes a solar power generation system composed of a dc/dc power converter and a seven-level inverter. The dc/dc power converter integrates a boost converter to convert the output voltage of the solar cell array. A capacitor selection circuit between the boost converter and the seven-level inverter converts the two output voltage sources of the dc/dc power converter into a three-level dc voltage, and a full-bridge converter further converts this three-level dc voltage into a seven-level ac voltage. The proposed system generates a sinusoidal output current that is in phase with the utility voltage and is fed to the load.

Keywords: boost converter; capacitor; multi-level inverter; pulse-width modulated (PWM); solar panel.

___________________________________________________________________________________________________

1. Introduction
Solar power is nowadays preferred over fossil fuels, which are conventionally burned to generate power, because it is pollution-free; fossil fuel is the main source of greenhouse gases. As long as the sun exists, solar radiation is available to us. Photovoltaic panels collect this radiation and convert it into electricity; the panel output is then processed by a dc/dc converter, which boosts the voltage to match the dc-bus voltage of the inverter.

A multi-level inverter forms a sinusoidal output-voltage waveform. Its output current has a good harmonic profile, and the stress on the electronic elements is low because of the decreased voltages; switching losses are lower than those of conventional two-level inverters, the filter size is reduced, and electromagnetic interference is lower. For these reasons it is cheaper, lighter in weight and more compact. The types of multi-level inverter in use include the diode-clamped, flying-capacitor, multi-cell cascaded H-bridge, and modified H-bridge multilevel inverters. A multi-level inverter in a solar power system enables each photovoltaic (PV) source to be controlled separately. The cascaded H-bridge inverter is most commonly used, owing to its structural arrangement, which differs from the others.

Multilevel inverter modulation can be classified according to switching frequency; here we consider methods working with low switching frequencies. In low-switching-frequency applications, staircase modulation is very popular for cascade multilevel inverters: the basic idea is to connect each cell of the inverter at specific angles to generate the multilevel output waveform. In the cascade H-bridge multilevel inverter, asymmetric voltage technology is used to allow more levels of output voltage, so the cascade H-bridge multilevel inverter is suitable for applications requiring increased voltage levels. Three H-bridge inverters with dc bus voltages in multiple relationships can be connected in cascade to produce a single-phase seven-level inverter using twelve power electronic switches, and three dc capacitors are used to construct the three voltage levels. A modified seven-level inverter and converter configuration for solar power generation is described in this paper.

2. Circuit Configuration
The solar power generation system consists of a solar photovoltaic panel, a dc-dc power converter and a modified seven-level inverter. The solar cell array is connected to the dc-dc power converter, which is a boost converter. The dc-dc power converter converts the output power of the solar cell array into independent voltage sources with closed-loop control, which are supplied to the seven-level inverter. The proposed solar power generation system is shown in Figure 1.


Fig.1. Block diagram of proposed system

The seven-level inverter is composed of a capacitor selection circuit (the converter capacitors) and a full-bridge power converter, connected in cascade. The power electronic switches of the capacitor selection circuit determine whether the two capacitors are discharged individually or in series. The multiple relationship between the voltages of the dc capacitors gives a three-level dc voltage, and the full-bridge power converter further converts this three-level dc voltage to a seven-level ac voltage. The proposed solar power generation system generates a sinusoidal output current that is in phase with the utility voltage.



3. Modeling of PV Array
As we know, solar energy is a non-conventional energy source. The model does not take into account the internal losses of the current. The equivalent circuit of a solar cell is a current source in parallel with a diode, the diode being connected in anti-parallel with the light-generated current source. The output of the current source is directly proportional to the light falling on the cell (the photocurrent Iph). During darkness the solar cell is not an active device; it works as a diode, i.e. a p-n junction, and produces neither a current nor a voltage. However, if it is connected to an external supply (large voltage), it generates a current ID, called the diode (D) current or dark current. The diode determines the V-I characteristics of the cell.

Fig. 2. Circuit diagram of the PV model

The output current Io is obtained by Kirchhoff's law:

Io = Iph - Id (1)

where Iph is the photocurrent and Id is the diode current, which is proportional to the saturation current.

Id = Irr*(Tak/Trk)^3*exp[(Eg*q/(k*A))*(1/Trk - 1/Tak)] (2)

Here Id is the PV module saturation current, Tak is the module operating temperature in Kelvin and Trk is the reference temperature in Kelvin. The ideality factor is represented by A; its value is 1.3 and depends on the PV cell technology. Irr is the reverse saturation (leakage) current of the diode, k is the Boltzmann constant (1.381×10^-23 J/K), q is the electron charge (1.602×10^-19 C) and Eg is the band gap of silicon, taken as 1.12 eV.

Irr = Iscr/[exp(q*Voc/(k*Ns*A*Trk)) - 1] (3)

Ns is the number of PV cells connected in series; here 36 PV cells are connected in series. VT is called the thermal voltage because of its exclusive dependence on temperature.

Ipv = [Iscr + Ki*(Tak - Trk)]*S/1000 (4)

Ipv is the light-generated current of the PV array and S is the PV module illumination in W/m². Ki is the short-circuit current temperature coefficient at Iscr, 0.0013 A/°C.

Io = Np*Ipv - Np*Id*{exp[(q/(Ns*A*k*Tak))*(Vo + Io*Rs)] - 1} (5)

Rs is the series resistance of the PV module, which is 0.45 Ω.
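Equations (2)-(5) can be evaluated numerically as sketched below. The physical constants are those quoted in the text; the short-circuit current, open-circuit voltage, temperatures and irradiance are assumed sample values, since they are not all specified here:

```python
import math

# Constants quoted in the text
k = 1.381e-23            # Boltzmann constant (J/K)
q = 1.602e-19            # electron charge (C)
A = 1.3                  # diode ideality factor
Eg = 1.12                # band gap of silicon (eV)
Ns, Np = 36, 1           # cells in series / strings in parallel
Rs = 0.45                # series resistance (ohm)
Ki = 0.0013              # short-circuit temperature coefficient (A/deg C)

# Assumed operating conditions (illustrative, not from the paper)
Iscr, Voc = 3.75, 21.0   # short-circuit current (A), open-circuit voltage (V)
Trk, Tak = 298.0, 310.0  # reference / operating temperature (K)
S = 1000.0               # illumination (W/m^2)

Irr = Iscr / (math.exp(q * Voc / (k * Ns * A * Trk)) - 1.0)   # eq. (3)
Id = Irr * (Tak / Trk) ** 3 * math.exp(
    (Eg * q / (k * A)) * (1.0 / Trk - 1.0 / Tak))             # eq. (2)
Ipv = (Iscr + Ki * (Tak - Trk)) * S / 1000.0                  # eq. (4)

def output_current(Vo, iters=50):
    # eq. (5) is implicit in Io; a few fixed-point sweeps converge here
    Io = 0.0
    for _ in range(iters):
        Io = Np * Ipv - Np * Id * (
            math.exp((q / (Ns * A * k * Tak)) * (Vo + Io * Rs)) - 1.0)
    return Io

# near short circuit the output current approaches the light current
assert abs(output_current(0.0) - Ipv) < 0.5
```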

4. Design of Boost Converter
A boost converter is a switch-mode dc-to-dc converter whose output voltage is greater than its input voltage; it is also called a step-up converter. The key principle that drives the boost converter is the tendency of an inductor to resist changes in current by building and collapsing a magnetic field. A schematic of the boost power stage is shown in Figure 3.

Fig 3. Diagram of boost converter

The basic operation of the boost converter consists of two distinct states:
- In the on-state, the switch S is closed, resulting in an increase in the inductor current.
- In the off-state, the switch is open and the only path offered to the inductor current is through the flyback diode D, the capacitor C and the load R.


The input current is the same as the inductor current, so it is not discontinuous as in the buck converter, and the requirements on the input filter are relaxed compared with a buck converter.
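For reference, the steady-state conversion ratio of an ideal boost converter follows from volt-second balance on the inductor, Vin*D*T = (Vout - Vin)*(1 - D)*T; this standard textbook relation is assumed here, as the paper does not state it explicitly:

```python
# Ideal boost converter in continuous conduction: Vout = Vin / (1 - D),
# where D is the switch duty cycle (0 <= D < 1).
def boost_output_voltage(vin, duty):
    assert 0.0 <= duty < 1.0, "duty cycle must be below 1"
    return vin / (1.0 - duty)

# e.g. a hypothetical 24 V panel bus boosted with a 0.6 duty cycle
assert abs(boost_output_voltage(24.0, 0.6) - 60.0) < 1e-9
```

The relation also shows why the duty cycle cannot reach 1 in practice: losses in the inductor and switch limit the achievable step-up ratio.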

5. Design of Multi-level Inverter
The seven-level inverter is composed of a capacitor selection circuit and a full-bridge power converter connected in cascade, as shown in Figure 4. The operation of the seven-level inverter can be divided into the positive half cycle and the negative half cycle of the utility. The power electronic switches and diodes are assumed to be ideal, while the voltages of the capacitors C1 and C2 in the capacitor selection circuit are constant and equal to Vdc/3 and 2Vdc/3, respectively. The operation of the seven-level inverter in the positive half cycle of the utility can be further divided into four modes.

Mode 1: In mode 1, both SS1 and SS2 of the capacitor selection circuit are OFF, so C1 is discharged through D1 and the output voltage of the capacitor selection circuit is Vdc/3. S1 and S4 of the full-bridge power converter are ON. At this point, the output voltage of the seven-level inverter is directly equal to the output voltage of the capacitor selection circuit, i.e. Vdc/3.

Mode 2: In mode 2, SS1 is OFF and SS2 is ON, so C2 is discharged through SS2 and D2 and the output voltage of the capacitor selection circuit is 2Vdc/3. S1 and S4 of the full-bridge power converter are ON. At this point, the output voltage of the seven-level inverter is 2Vdc/3.

Fig. 4. Single phase cascaded multilevel inverter

Mode 3: In mode 3, SS1 is ON. Since D2 is reverse biased when SS1 is ON, the state of SS2 cannot affect the current flow; therefore SS2 may be left ON or OFF, avoiding unnecessary switching of SS2. Both C1 and C2 are discharged in series and the output voltage of the capacitor selection circuit is Vdc. S1 and S4 of the full-bridge power converter are ON. At this point, the output voltage of the seven-level inverter is Vdc.

Mode 4: In mode 4, both SS1 and SS2 of the capacitor selection circuit are OFF.

The output voltage of the capacitor selection circuit is Vdc/3. Only S4 of the full-bridge power converter is ON. Since the output current of the seven-level inverter is positive and passes through the filter inductor, it forces the anti-parallel diode of S2 to switch ON for continuous conduction of the filter inductor current. At this point, the output voltage of the seven-level inverter is zero. Therefore, in the positive half cycle, the output voltage of the seven-level inverter has four levels: Vdc, 2Vdc/3, Vdc/3, and 0. In the negative half cycle, the output current of the seven-level inverter is negative and the same four levels appear with opposite sign.
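The four modes and the sign inversion of the full bridge can be tabulated compactly. The sketch below (an illustration, not the paper's control code) enumerates the seven distinct output levels:

```python
from fractions import Fraction

# Capacitor-selection-circuit output per mode, as a fraction of Vdc,
# following the mode descriptions above
MODE_LEVEL = {
    1: Fraction(1, 3),  # SS1, SS2 OFF: C1 discharges through D1
    2: Fraction(2, 3),  # SS2 ON: C2 discharges through SS2 and D2
    3: Fraction(1),     # SS1 ON: C1 and C2 discharge in series
    4: Fraction(0),     # freewheeling: output clamped to zero
}

def inverter_level(mode, positive_half):
    """The full bridge passes the level unchanged in the positive half
    cycle and inverts its sign in the negative half cycle."""
    return MODE_LEVEL[mode] if positive_half else -MODE_LEVEL[mode]

levels = sorted({inverter_level(m, h)
                 for m in MODE_LEVEL for h in (True, False)})
print(len(levels))  # 7 distinct levels: -Vdc, -2Vdc/3, -Vdc/3, 0, Vdc/3, 2Vdc/3, Vdc
```

The zero level is shared by both half cycles, which is why four modes per half cycle yield seven, not eight, levels.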

6. Result and Discussion The proposed solar power generation system is verified with a controller-based prototype. The prototype was tested on a single-phase utility of 110 V, 50 Hz. The output current of the seven-level inverter is sinusoidal and in phase with the utility voltage.


Fig. 5. Characteristics of: (a) P-V; (b) I-V

7. Conclusion This paper presents a solar power generation system that converts the dc energy generated by a solar cell array into ac energy fed into the load. The solar panel output voltage is boosted and fed to the multi-level inverter. The proposed system is checked using different single-phase loads. The multi-level inverter reduces the switching power loss and improves the power efficiency. The voltages of the two dc capacitors in the proposed seven-level inverter are balanced automatically, so the control circuit is simplified. The proposed solar power generation system can effectively track the maximum power of the solar cell array.

References [1] Ravula Sateesh, Venugopal Reddy, 2015. "Design and Implementation of Seven Level Inverter with Solar Energy Generation System". In: International Journal of Emerging Trends in Electrical and Electronics (IJETEE, ISSN: 2320-9569), Vol. 11, pp. 195-200.

[2] M.Vanathi, A., P.M.Manikandan, B., M.Muruganandam, C., S.Saravanan, C., 2015. “ A Modified Seven Level

Inverter for Dynamic Varying Solar Power Generation”. In: International Journal of Innovative Research in Science,

Engineering and Technology,Vol. 4, pp. 1182-1191.

[3] Habbati Bellia, A., Ramdani Youcef, B., Moulay Fatima, C., 2014. “A detailed modeling of photovoltaic module using

MATLAB”. In: NRIAG Journal of Astronomy and Geophysics, pp. 53-61.

[4] P.Sathya, A., G.Aarthi , B., 2013. “Modelling and Simulation of Solar Photovoltaic array for Battery charging

Application using Matlab-Simulink”. In:IJESRT, pp. 3111-3115.

[5] S. Uday, Sk. Shakir Hussain, Chettumala Ch Mohan Rao, 2015. "A Seven-Level Inverter using SOLAR Power Generation System". In: IJMETMR, Vol. 2, pp. 2128-2131.

[6] Zhong Du, Leon M. Tolbert, 2009, "Fundamental Frequency Switching Strategies of a Seven-Level Hybrid Cascaded H-Bridge Multilevel Inverter". In: IEEE, Vol. 24, No. 1.

[7] Tolbert, L.M.; Peng, F.Z, 2000, “Multilevel converters as a utility interface for renewable energy systems”. In:

Proceedings of the Power Engineering Society Summer Meeting, Seattle, WA, USA, pp. 1271–1274.

[8] Walker, G.R. Sernia, P.C, 2002, “ Cascaded DC–DC converter connection of photovoltaic modules”. In: Proceedings

of the 33rd Annual Power Electronics Specialists Conference, Cairns, Queensland, Australia, pp. 24–29.

[9] Rodriguez, J., Lai, J.S., Peng, F.Z, 2002, “ Multilevel inverters: A survey of topologies, controls, and applications” In:

IEEETrans. Ind. Electron, 49, pp. 724–738.

[10] L.A.C Lopes, Lienhardt, A.-M, 2003, “A simplified nonlinear power source for simulating PV panels”. In: IEEE 34th

Annual Conference on, Volume 4, pp. 1729- 1734.

[11] J. Mei, B. Xiao, K. Shen, L. M. Jian Yong Zheng, 2013, “Modular multilevel inverter with new modulation method

and its application to photovoltaic grid-connected generator,” In: IEEE Trans. Power Electron., vol. 28, no. 11, pp. 5063–

5073.

[12] J. M. Shen, H. L. Jou, J. C. Wu, 2012, “Novel transformer-less grid-connected power converter with negative

grounding for photo voltaic generation system”. In: IEEE Trans. Power Electron., vol. 27, no. 4, pp. 1818– 1829.


Role of several non-linear filters in image processing

Sriparna Banerjee1, Sangita Roy2, Sheli Sinha Chaudhuri1

1ETCE Department, Jadavpur University, Kolkata 700032, India

2ECE Department, Narula Institute of Technology, Kolkata, India

Abstract

Filters play a very important role in the image processing field. Filters may be broadly classified into two types: a) linear filters and b) non-linear filters. Each type of filter has its own area of application. In this work we discuss in what ways non-linear filters are better than linear filters on the basis of their applications in the image processing field, and also how several non-linear filters are used for noise removal, image smoothing, edge preservation and navigation in image processing.

_________________________________________________________________________________

1. Introduction Superposition and shift-invariance are the two basic properties that need to be satisfied by a filter in order for it to be a linear filter.

1. Superposition property:- This property states that if a linear system produces output y1[n] for input x1[n] and output y2[n] for input x2[n], then for the input a*x1[n] + b*x2[n] its output should be a*y1[n] + b*y2[n].

2. Shift-invariance property:- This property states that if y[n] is obtained as an output from a linear shift-invariant system when it is given an input x[n], then y[n-n0] should be the output of the system when it is given an input x[n-n0].

Besides these, there are two more properties which are usually required of a practical filter:

3. Causality

4. Stability

Stability is the condition in which both the filter's input and output do not exceed finite limits. This condition is called the Bounded Input Bounded Output (BIBO) condition.

The causality condition states that the output at any instant depends only on past and present inputs and does not depend on future inputs.

In digital image processing the causality property is required for systems where future inputs are not available, such as video streaming.

Filters which satisfy all the above properties are called linear filters. Filters which do not satisfy these properties are called non-linear filters.

Here we will discuss the role of several non-linear filters in image processing.

2. Analysis Among the several non-linear filters used in image processing, we will begin with the simplest: the min filter.

a) Min filter-This is the simplest type of non-linear filter used in the image processing field. The min filter is a sliding-window spatial filter and works on the following principle:

Let us take an example of min filtering by considering a single 3X3 window.

a b

a. Pixel values before min-filtering, b. Pixel values after min-filtering.

The pixel value which is encircled with a red border represents the pixel at which the filter is centred. We can see how the centre pixel's value changes after min-filtering.

The basic principle behind the min-filtering method is the following: the centre pixel's value, along with the pixel values within the neighbourhood, are sorted in ascending order. After filtering, the centre pixel's value is replaced with the least value obtained after sorting.

b) Median filter- This filter is a simple and the most popular non-linear filter used in the image processing field. Like the min filter, it is also a sliding-window spatial filter. Let us take the same example as in the case of the min filter to discuss the working principle of the median filter and how it differs from the min filter.

a)
4 15 9
1 5 7
18 6 8

b)
4 15 9
1 1 7
18 6 8


a b

a. Pixel values before median-filtering, b. Pixel values after median-filtering.

Here the basic operation is similar to that of min filtering, but instead of replacing the centre pixel's value with the least value, the centre pixel's value is replaced with the median value computed after sorting is performed.
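Both window operations can be expressed in a few lines. The sketch below applies them to the same 3X3 example window (centre value 5):

```python
window = [4, 15, 9,
          1, 5, 7,
          18, 6, 8]   # the 3x3 example window, centre value 5

def min_filter_centre(w):
    """Min filter: sort the window and take the smallest value."""
    return sorted(w)[0]

def median_filter_centre(w):
    """Median filter: sort the window and take the middle value."""
    s = sorted(w)
    return s[len(s) // 2]

print(min_filter_centre(window))     # 1, as in the min-filter example
print(median_filter_centre(window))  # 7, as in the median-filter example
```

In a full image both filters simply slide this window over every pixel and repeat the same replacement.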

c) Entropy filter- The entropy filter also works on a pre-defined neighborhood just like the min and median filters, except that here the value of the center pixel is replaced by the information entropy computed from the neighboring pixels. It is a texture analysis filter.

d) Range filter-Edges are the most important feature in an image.

The range filter, like the entropy filter, is a texture analysis filter. It is mainly used for highlighting the edges of objects in images.

Like the other filters, the range filter also works on a pre-defined neighborhood. In the case of the range filter, the center pixel is replaced by the difference between the maximum and minimum values of its neighborhood pixels.
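On the same example window, the range filter's max-minus-min rule can be sketched as:

```python
def range_filter_centre(w):
    """Range filter: the centre pixel becomes max - min of the
    neighbourhood, so flat regions give small values and edges large ones."""
    return max(w) - min(w)

print(range_filter_centre([4, 15, 9, 1, 5, 7, 18, 6, 8]))  # 18 - 1 = 17
```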

e) Bilateral filter-This is a non-linear filter which is mainly used for preserving the edges of objects in images. Here the centre pixel's value is replaced with a weighted average of the pixel values of the neighbourhood pixels along with the centre pixel. The weights are determined using a Gaussian distribution and depend on the radiometric differences of the pixels as well as their Euclidean distance.
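The bilateral weight is thus the product of a spatial Gaussian (over Euclidean distance) and a range Gaussian (over the radiometric difference). A minimal sketch; the sigma values are illustrative, not from the text:

```python
from math import exp

def bilateral_weight(d2, dv, sigma_s, sigma_r):
    """Weight = spatial Gaussian over squared distance d2 times
    range Gaussian over intensity difference dv."""
    return exp(-d2 / (2 * sigma_s**2)) * exp(-dv**2 / (2 * sigma_r**2))

def bilateral_centre(window, coords, centre, sigma_s=1.0, sigma_r=10.0):
    """Weighted average of the window; nearby, similar pixels dominate."""
    num = den = 0.0
    for (dx, dy), v in zip(coords, window):
        w = bilateral_weight(dx*dx + dy*dy, v - centre, sigma_s, sigma_r)
        num += w * v
        den += w
    return num / den

coords = [(dx, dy) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
window = [4, 15, 9, 1, 5, 7, 18, 6, 8]   # same example window, centre 5
print(bilateral_centre(window, coords, 5))
```

Because the bright outliers 15 and 18 get small range weights, the result stays closer to the centre value 5 than a plain average would.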

f) Extended Kalman filter-This filter is used when estimation of the state of a system is required. It is used for tracking the complex movement of objects.

g) Fuzzy filters-Fuzzy filters give much better results than classical filters when used in image processing tasks. Fuzzy filters deal with uncertainties more efficiently and can restore a highly degraded image where a lot of uncertainty is involved. In fuzzy filters, a fuzzy membership value is associated with every pixel, and different types of fuzzy rules restore the neighborhood-related information along with other information, and also help to eliminate noise with blurry edges.

Fuzzy filters also help to preserve edges and to smooth noisy images.

h) Running median filter- This filter is a more advanced version of the median filter. It can remove salt and pepper noise from a corrupted image with much greater efficiency.

i) Rolling guidance filter-This filter works in an iterative manner, invoking a joint filter a few times. The joint filter may be a guided filter, a bilateral filter, etc.

3. Results and Discussion a) Min filter-

We can see that the digit '11', which is corrupted with noise and almost faded away, is restored using min-filtering, because min filters are used to find the darkest points in the image.

b) Median filter-


a)
4 15 9
1 5 7
18 6 8

b)
4 15 9
1 7 7
18 6 8


Figure 1 a) Image before min-filtering, b) Image after min-filtering


Figure 2 (a) Image corrupted with salt & pepper noise, b) Image obtained after median filtering



c) Range filter- Edges are the most important feature of an image. Edges in an image can be highlighted with the help of the range filter.

d) Bilateral filter- The bilateral filter is also used for image smoothing and edge preservation.

Figure 3 (a) Original image, (b) Image obtained after bilateral filtering

The bilateral filter can produce a cartoon effect on an image.

Figure 4 (a) Original image

(b) Image obtained after applying cartoon effect

e) Rolling guidance filter- This filter is used for separating different scale structures without any prior knowledge of

texture, noise etc.

Figure 5 (a) Original image

(b) Image obtained after filtering using rolling

guidance filter

4. Conclusion In this work we have shown several non-linear filters and how their application areas differ from one another. We have also seen how non-linear filters give better results than linear filters for image smoothing, texture analysis and edge preservation, because in linear filters the centre pixel's value is replaced by the value obtained after performing an averaging operation on all the neighborhood pixels including the centre pixel. However, the hardware implementation of non-linear filters is much more complex than that of linear filters, so future work can address reducing the hardware complexity of non-linear filters.

Acknowledgements The entire work was carried out using MATLAB 2012b.

References

[1] R. Lukac, B. Smolka, K. N. Plataniotis, and A. N. Venetsanopoulos, "Generalized Entropy Vector Filters," 4th EURASIP Conference focused on Video/Image Processing and Multimedia Communications, EC-VIP-MC 2003, 2-5 July 2003, Zagreb, Croatia.

[2] Rong Zhu, Yong Wang, "Application of Improved Median Filter on Image Processing," Journal of Computers, Vol. 7, No. 4, April 2012.

[3] C. Mythili, V. Kavitha, "Efficient Technique for Color Image Noise Reduction," The Research Bulletin of Jordan ACM, Vol. II (III).

[4] Nodes, T. A. and Gallagher, N. C., "Median filters: some modifications and their properties," IEEE Trans. Acoust., Speech, Signal Process., Vol. 30 (1982), pp. 739-746.


Reversible Combinational Circuit Design Using QCA in VLSI Design

Kunal Das1, Shubhendu Banerjee1, Soumya Bhattacharyya2, Ashifuddin Mondal1

1Department of Computer Science & Engineering, Narula Institute of Technology, Agarpara, Kolkata-109. 2Department of Information Technology, Narula Institute of Technology, Agarpara, Kolkata-109

Abstract Quantum-dot cellular automata (QCA) is a novel step in VLSI design towards replacing traditional CMOS technology. It might bypass the transistor paradigm for molecular-scale or nanoscale computing. In the QCA paradigm, information is represented by the polarization of quantum dots in a QCA cell. A very low power stimulus, such as a photon of light or a very small voltage, can change the polarity of a QCA cell. In reversible logic circuits, both combinational and sequential, the total computation is done with less power dissipation. Reversible logic synthesis in QCA has proved to be a very efficient low-power design approach for nanotechnology. In this paper, we propose a Reversible Logic Gate which is proved to be a universal reversible logic gate by implementing all basic logic (AND, OR, NOT, NOR, NAND, XOR and XNOR) gates. An important metric for evaluating reversible circuits is the garbage count, so the garbage minimization issue is also addressed in our proposal. We show that our URLG implements all logic gates with reduced garbage output compared to previous proposals. Logic synthesis of a 4*4 RLG using the URLG is discussed and compared with the previously reported HNG gate.

Keywords: QCA, Majority Voter (MV), Reversible Logic Gate, URLG, Garbage count.

___________________________________________________________________________________________________

1. Introduction Low power design has become the primary goal of Very Large Scale Integration (VLSI). A traditional classical logic circuit, which is an 'irreversible logic circuit', dissipates heat energy of the order of KTln2 joules per bit of information that is lost, where K is Boltzmann's constant and T is the absolute temperature at which the computation is performed. Bennett showed that in reversible logic computation this KTln2 joules of energy is not dissipated [23]. Hence reversible logic design naturally gets priority for designing combinational as well as sequential circuits. In this low-power design era, reversible circuit design is applicable to VLSI in CMOS, quantum computing, DNA computing, as well as quantum-dot cellular automata (QCA). In QCA, limited progress has been made and only a few proposals have been found. In quantum computing there are many proposals on Reversible Logic Gate (RLG) design, like the Fredkin gate [18], Feynman gate [16] and Toffoli gate [17]. Very recently Haghparast and Navi proposed the NFT gate for nanotechnology-based systems [19]. That proposal also addressed the 'garbage minimization' problem for implementing all basic logic gates, and the HNG gate [19] was proposed as a 4*4 reversible logic gate. In this paper, we explore a 'Universal Reversible Logic Gate' (called URLG) that implements all basic gates in QCA: AND, OR, NOT, NAND, NOR, XOR and XNOR. It also addresses the 'garbage minimization' problem. We make a comparison with NFT in the context of garbage count; as a result we show the maximum minimization of garbage output ever reported. We also propose an effective design of a 4*4 RLG using the 3*3 URLG as a basic building block.

2. Basic QCA A quantum-dot cellular automaton consists of four quantum dots positioned at the four corners of a cell and two mobile electrons confined within the cell. In QCA, the logic state is determined by the polarization of the electrons rather than by a voltage level as in CMOS technology. The two stable polarizations P = +1.00 and P = -1.00 of a QCA cell represent logic '1' and logic '0' respectively, as shown in Fig. 1. QCA requires a four-phase clocking signal. The four phases are relax, switch, hold and release. In the relax phase there is no inter-dot barrier. In the switch phase, the barrier slowly becomes high and the cell attains a definite polarity depending on the input; the electrons are polarized due to Coulombic effects. The polarity is retained in the hold phase. The barrier is then slowly lowered and the cell releases its polarity in the release phase.

3. Reversible Logic Gate To avoid the energy dissipation of irreversible logic gates, RLGs have proved to be a promising area of study [23]. Several proposals have been made: (A) the Feynman gate (FG) [16], (B) the Toffoli gate (TG) [17], (C) the Fredkin gate (FRG) [18] and (D) the NFT gate [19], commonly used as reversible logic gates and shown in Fig. 2 to Fig. 5.

Definition 1: If a reversible gate has k inputs and therefore k outputs, the input vector Iv is mapped to the output vector Ov such that the mapping is bijective, i.e. one-to-one between Iv and Ov. The corresponding reversible gate is known as a k*k RLG.

Definition 2: Garbage outputs are the outputs added to make an n*k function reversible that are not used for further computation.

Definition 3: A constant input is a preset input value that is added to an n*k function to make it reversible.

Example 1: The 2-input 2-output function given by (X, Y) → (X', X ⊕ Y), with truth vector [2, 3, 1, 0], is reversible.
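Reversibility of such a truth vector can be checked mechanically: a k-input gate is reversible exactly when its truth vector is a permutation of 0..2^k - 1. A small sketch for Example 1:

```python
def is_reversible(truth_vector):
    """Bijective iff the truth vector is a permutation of 0..2^k - 1."""
    return sorted(truth_vector) == list(range(len(truth_vector)))

# (X, Y) -> (X', X xor Y): encode each output pair as a 2-bit number
tv = [((1 - x) << 1) | (x ^ y) for x in (0, 1) for y in (0, 1)]
print(tv)                 # [2, 3, 1, 0], the truth vector from Example 1
print(is_reversible(tv))  # True
```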


Fig. 1. QCA polarized cell. Fig. 2. Feynman Gate. Fig. 3. Toffoli Gate. Fig. 4. Fredkin Gate.

Fig. 5. NFT gate

4. Scope of Work As discussed earlier, reversible logic gate design is an emerging technology in low-power computing. Landauer showed in [12] that in irreversible circuit design each bit of information lost generates KTln2 joules of heat energy. Bennett showed that in reversible circuit design this energy is not dissipated [23]. Several proposals on reversible logic gate design have been made in quantum computing, a few of which are [12][13][14], but very few proposals exist in QCA [19][20]. Very recently Haghparast and Navi proposed the NFT quantum gate [19], focusing on an important metric in reversible logic design, the garbage count, for designing basic gates. The garbage count issue is applicable to quantum computing as well as QCA design, so garbage minimization must be a focus of the research. Haghparast and Navi also reported the 4*4 HNG gate [19]. Hence there is scope to design a universal reversible gate (3*3) which addresses the garbage minimization issue and can also synthesize a 4*4 or any k*k RLG in QCA.

5. Proposed URLG Reversible logic gate design has proved to be a promising technique in the low-power era. Power dissipation due to irreversible circuits was addressed in [12]. In this paper our centre of attention is to design a molecular QCA gate which is proposed to be a universal reversible gate (called URLG). Using our proposed URLG we can design all basic reversible logic gates, such as AND, OR, NOT, NAND, XOR and XNOR. The important metric for reversible logic design, garbage count, was addressed in [19]; we show that the best garbage minimization is achieved with respect to all counterparts. We also demonstrate a reversible latch using the proposed URLG.

A. Characterization of URLG

The proposed 3*3 universal reversible logic gate is defined by a mapping function f: M → M that is one-to-one, i.e. bijective. The input vector is Iv = (A, B, C) and the output vector is Ov = (P = C ⊙ (A·B), Q = B, R = C ⊙ (A+B)), where '⊙' denotes XNOR. The truth table of the URLG is shown in Table 1. The URLG is designed using 8 MVs and 2 NNIs. Four clocking zones are required for designing our URLG in QCA. The block diagram of the URLG gate with clocking zones D0 to D3 is shown in Fig. 14 and the symbol diagram is shown in Fig. 6.

B. Implementation of Combinational Basic Reversible Gates

The URLG can be used as a building block for implementing different basic logic gates. Fig. 7 shows that one URLG can implement both the AND and OR gates with only one garbage output: with constant input C = +1.00, output P acts as AND and output R acts as OR. If the constant input C is changed to -1.00 (i.e. binary 0), the circuit acts as NAND and NOR gates: output P acts as NAND and output R acts as NOR, as shown in Fig. 8. This implies that practically we can achieve four basic gates using a single URLG by altering the constant input C. The garbage output is only one for the four basic gates, which is better than the existing proposals. Fig. 9 and Fig. 10 show the implementation of the XNOR and XOR gates. For XNOR we require only one URLG, with two garbage outputs and constant input B = -1.00 (i.e. binary 0). On the other hand, XOR can be implemented using two URLGs as shown in Fig. 10, with three constant inputs polarized as described in the figure and four garbage outputs. The comparison with the recent proposal by Haghparast and Navi [19] is shown in Table 2. The result, in terms of garbage minimization and the number of URLGs used, is better than the existing counterpart.



Table 1 Truth table for 3*3 URLG

Input of 3*3 URLG Output of 3*3 URLG

A B C P Q R

0 0 0 1 0 1

0 0 1 0 0 0

0 1 0 1 1 0

0 1 1 0 1 1

1 0 0 1 0 0

1 0 1 0 0 1

1 1 0 0 1 0

1 1 1 1 1 1
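Table 1 can be checked directly against the mapping Ov = (P = C ⊙ (A·B), Q = B, R = C ⊙ (A+B)). The sketch below enumerates all eight inputs and confirms that the map is bijective, i.e. a valid 3*3 reversible gate:

```python
def xnor(a, b):
    return 1 - (a ^ b)

def urlg(a, b, c):
    """P = C xnor (A and B), Q = B, R = C xnor (A or B)."""
    return xnor(c, a & b), b, xnor(c, a | b)

# Enumerate all eight input combinations
table = {(a, b, c): urlg(a, b, c)
         for a in (0, 1) for b in (0, 1) for c in (0, 1)}

print(table[(0, 0, 0)])          # (1, 0, 1), the first row of Table 1
print(len(set(table.values())))  # 8: all outputs distinct, so bijective
```

With the constant input C = 1 the outputs reduce to P = A·B and R = A+B, which is exactly the AND/OR configuration described for Fig. 7.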

Fig. 6. 3*3 URLG; '⊙' denotes the XNOR operation. Fig. 7. Reversible AND, OR gate using URLG

Fig. 8. Reversible NAND, NOR gate using URLG Fig. 9. Reversible XNOR gate using URLG

Fig. 10. Reversible XOR gate using URLG Fig. 11. Reversible NOT gate using URLG

6. Implementation of 4*4 RLG We introduce 4*4 RLG synthesis using the URLG, as shown in Fig. 12. The input vector is Iv = (A, B, C, D) and the output vector is Ov = (P = C ⊙ (A·B), Q = B, R = C ⊙ (A+B), S = C ⊙ D). Hence we show that our proposed URLG is truly a universal reversible gate.



Fig. 12. 4*4 RLG synthesis using URLG; '⊙' denotes the XNOR operation.

7. Simulation Result The design of the URLG was verified with the simulator QCADesigner V2.0.3 [22]. In the bistable approximation we used the following parameters: QCA cell size 18 nm * 18 nm, dot size = 5 nm, number of samples = 42800, convergence tolerance = 0.001000, radius of effect = 41 nm, relative permittivity = 12.9, clock high = 9.8e-22 J, clock low = 3.8e-23 J, layer separation = 11.5000 nm. The simulation results verified our proposed URLG layout.

Fig. 13. Simulation output of URLG.

8. Result and Discussion The proposed universal reversible logic gate is a more efficient and effective gate design in QCA. First, we compared it with the earlier reported NFT gate [19] with respect to garbage count and the number of gates required to implement all basic gates; the results are shown in Table 2. We found a 37.5% improvement in garbage minimization compared with NFT, which is the maximum minimization of garbage ever reported in the QCA literature. The URLG is also proved to be a useful building block for designing a 4*4 reversible gate, as shown in Fig. 12; similarly, a reversible gate of any size (k*k) is achievable. Hence the evaluation suggests that the 'Universal Reversible Logic Gate' is truly a universal gate.

9. Conclusion In this paper, we explore a testable universal reversible logic gate (URLG) which is proved to be a universal gate. We compared it with the very recently proposed NFT and achieved a 37.5% improvement in garbage minimization with respect to NFT, as shown in Table 2. The URLG is also applicable to sequential circuit design and is a basic building block for the 4*4 RLG. Hence we conclude that our proposed testable URLG design is a promising step towards the goal of low-power design in nanotechnology.


Fig.14. 3*3 URLG Implemented with QCADesigner [22]

Table 2. Comparison with our Proposed URLG and NFT

Basic Logic        Using NFT [19]               Using URLG                 Improvement
Gate Design     NFT   MVs   Garbage         URLGs   MVs   Garbage
                            outputs                       outputs
AND + OR(1)      2     26      4              1      10      1          Overall 25%
NAND + NOR       3     39      6              1      10      1          improvement in the
EXOR             1     13      2              2      20      4          number of RLGs and
EXNOR            2     26      4              1      10      2          37.5% improvement
NOT             -(!)    -      -              1      10      2          in garbage count
Total            8    104     16              6      60     10

(1) Both gates are implemented with a single gate, hence no extra URLG is required. (!) The NOT gate is implemented with a NAND gate.

References

1) A.O. Orlov, I. Amlani, G.H. Bernstein, C.S. Lent, G.L. Snider, “Realization of a Functional Cell for Quantum-Dot

Cellular Automata,” Science, Vol 277, pp 928-930, 1997.

2) C. S. Lent et al., “Quantum cellular automata”, Nanotechnology, vol. 4 pp. 49-57, 1993.

3) C. S. Lent, P. D. Tougaw, “A device architecture for computing with quantum dots,” Proc. IEEE, vol. 85, no. 4, pp.

541-557, 1997.

4) W. Porod, “Quantum-dot devices and quantum-dot cellular automata,” Inter. J. Bifurcation and Chaos, vol. 7, no. 10

pp. 2199-2218, 1997.

5) I. Amlani, A. O. Orlov, G. Toth, C. S. Lent, G. H. Bernstein and G. L. Snider, “Digital logic gate using quantum-dot

cellular automata,” Applied Physics Letters, Vol. 74, pp. 2875, 1999.

6) G. Toth and C. S. Lent, “Quasiadiabatic switching for metal island quantum dot cellular automata,” J. Appl. Phys.,

85(5): 2977–2984, March 1999.

7) I. Amlani et al., “Experimental demonstration of a leadless quantum-dot cellular automata cell,” Appl. Phys. Lett.,

vol. 77, no. 5, pp. 738-740, 2000

8) K. Hennessy and C. S. Lent, “Clocking of molecular quantum dot cellular automata,” J. Vac. Sci. Technol. B, 19(5):

1752– 1755, 2001.

9) C. S. Lent and B. Isaksen, “Clocked molecular quantum-dot cellular automata,” IEEE Trans. on Electron Dev.,

50(9): 1890– 1895, September 2003.

10) M. T. Niemier, M. J. Kontz and P. M. Kogge, “A design of and design tools for a novel quantum dot based

microprocessor,” In 27th Ann. Design Automation Conference, pages 227–232, June 2000.

11) A. Vetteth, K. Walus, V. Dimitrov and G. Jullien, “Quantum dot cellular automata carry-look-ahead adder and

barrel shifter,” IEEE Emerging Telecommunications TechnologiesConference, September 2002.

12) Landauer R., “Irreversibility and heat generation in the computing process,” IBM J. Research and Development,

5(3): 183-191(1961).


13) Bennett C. H., “Logical reversibility of computation,” IBM J. Research and Development, 17: 525-532, 1973.

14) P. D. Tougaw and C. S. Lent, "Logical devices implemented using quantum cellular automata," Journal of Applied Physics, 75:1818, 1994.

15) G. Snider, A. Orlov, C. Lent, G. Bernstein, M. Lieberman, T. Fehlner, “Implementation of Quantum-dot Cellular

Automata,” ICONN 2006, pp 544-547.

16) Feynman R., Quantum mechanical computers. Optics News, 11, 1985 pp. 11-20.

17) Toffoli T., “Reversible computing,” Tech Memo MIT/LCS/TM-151, MIT Lab for Computer Science, 1980.

18) Fredkin E. and T. Toffoli, “Conservative logic,” Int’l J. Theoretical Physics, 21, pp. 219-253.1982.

19) M.Haghparast and K.Navi, “A Novel Fault Tolerance Reversible Gate for Nanotechnology Based System”, Am.J.

Applied Sci., 5(5):519-523.

20) K.Das and D.De “Characterization, Test and Logic Synthesis of Novel Conservative & Reversible Logic Gates for

QCA,” Int. Journal of Nanoscience, World Scientific, 9, 2 (2010).

21) S. Ray and B. Saha, “Minority Gate Oriented Logic Design with Quantum-dot Cellular Automata,” LNCS4173,

pp.646-656(2006).

22) K. Walus et al., 'ATIPS Laboratory QCADesigner,' ATIPS Laboratory, University of Calgary, Canada, 2002. Homepage: http://www.atips.ca/projects/qcadesigner.

23) Bennett C. H., IBM J. Research and Development, 17, pp 525-532, 1973.


GSM Based Earth Fault & Over Voltage Detector

Papendu Dasa, Avipsa Basaka, Supravat Roya, Souvik Kundua, Aritra Kumar Basua, Archita Chakrabortya, Preet Singha and Susmita Karanb

aDepartment of Electronics & Instrumentation Engineering, Narula Institute of Technology, Kolkata, India bDepartment of Basic Science and Humanities, Narula Institute of Technology, Kolkata, India

Abstract

In today’s world electricity is one of our basic needs, and using it safely with proper precautions is of utmost importance. In this work we deal not only with earth fault detection but also with identifying the type of fault, since detection alone is of little use in a modern, automated society. Fault detection itself is nothing new; what has been implemented in our project is an audio-visual signal together with an automatic phone call. The circuit takes in analog voltages, converts them into digital signals and provides the necessary warnings.

Here, we have used a mechanism that immediately sends a call or a message if an earth fault or any over-voltage is detected in the power supply system; consequently, it also saves delicate instruments from hazardous conditions. The first and foremost advantage is that a phone call can alert every individual and protect him or her from a severe shock due to an electric fault. The system also saves connected devices from over-voltage conditions. It has various practical applications in medical instruments, industrial instruments and household devices. The earth fault detection circuit is thus not only a household electrical precaution but can also act as a life saver in dangerous situations.

Keywords: Earth fault detection; Automatic power shut down; Message or Call; Over voltage detection; Protecting sensitive devices

___________________________________________________________________________________________________

1. Introduction In today’s world, electricity is one of our basic needs and having proper safety precautions is of utmost importance. We deal not only with earth fault detection but also with over-voltage protection. Without automated warning and protection, the purpose of the project would be defeated; to fulfil this need, we have used Arduino programming and a GSM module kit.

2. Experiment / Analysis In this experiment the following components are used: Arduino Uno, GSM module SIM900A, 225K 400 V capacitor, 470 µF 25 V electrolytic capacitor, 1N4007 bridge diodes, 470 kΩ, 120 Ω, 100 kΩ, 82 kΩ, 56 kΩ, 33 kΩ, 6.8 kΩ, 1 kΩ and 560 Ω resistors, 1N4734A zener diode, LM358 comparator IC, LEDs, buzzer, 6 V relay, 2N2222 NPN transistor.

3. Methodology: The earth fault detector circuit measures the 220 V between phase and earth, converts the 230 V AC to 5 V DC using bridge diodes, a filter capacitor and the 1N4734A zener, and supplies it to the Arduino. When the 5 V at pin A0 goes low due to an earth fault, an audio-visual alarm sounds and a signal is sent to the GSM module, which places a phone call to a particular mobile number as programmed; simultaneously a relay mechanism, activated by the low level at pin A0, automatically cuts off the power supply. Similarly, when an over-voltage (more than 240 V) is sensed by the LM358 comparator circuit, an analog voltage signal is sent to pin A1 of the Arduino; consequently the audio-visual alarm turns on, a signal is sent to the GSM module for a phone call, and the relay mechanism automatically cuts off the power supply, thereby protecting any connected appliances from getting shorted.

Circuit Diagram
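The warning flow described above can be sketched in software. The following is a hypothetical Python simulation of the decision logic only (the paper does not reproduce its actual Arduino sketch); the pin names and the 240 V threshold come from the Methodology section, while the action names are illustrative.

```python
# Hypothetical simulation of the detector's decision logic (not the
# project's Arduino code). Thresholds follow the Methodology section.

def check_supply(a0_high: bool, mains_voltage: float):
    """Return the list of actions the circuit would take.

    a0_high       -- True while the rectified 5 V is present at pin A0;
                     it goes low on an earth fault.
    mains_voltage -- AC line voltage seen by the LM358 comparator stage
                     feeding pin A1.
    """
    actions = []
    if not a0_high:                      # earth fault: A0 pulled low
        actions += ["alarm", "gsm_call", "relay_cutoff"]
    elif mains_voltage > 240:            # over-voltage sensed via A1
        actions += ["alarm", "gsm_call", "relay_cutoff"]
    return actions
```

Both triggers fire the same three actions; modelling them as one function makes that symmetry of the described circuit explicit.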


Block Diagram:

Snapshot of the Project:

4. Advantages of this project: The circuit is simple and effective in its performance. It is moderate in cost and can be life saving: in case of an earth fault, where an individual might otherwise receive a severe shock from a faulty earth connection, it provides an immediate audio-visual alarm, a phone call and automatic power cut-off.

5. Disadvantages of this project: The GSM module makes the circuit less cost effective. Short-circuit protection is not built in, so a separate MCB must be used if needed. The circuit as designed cannot handle voltages above 400 V.

6. Results and Discussion: We have built this project around a GSM module and an Arduino. We also learned about earth fault detection, and the system acts as a safeguard against electric shock.

7. Conclusion The project is very useful for protecting sensitive devices from earth faults and over-voltage supply. It can provide suitable protection for various sensitive medical instruments, measuring instruments and household appliances.

References
1. N. Klingerman and L. Wright, “Field experience with detecting an arcing ground fault on a generator neutral point.”
2. A. Heskitt and H. Mitchell, “Ground fault protection for an ungrounded system.”

[Block diagram labels: Earth fault detector, Over voltage detector → Arduino Uno → GSM module, Audio-visual indication]


Electronic Eye Based Security System

Rituparna Deya, Atri Mondala, Kushal Bhowmicka, Alivia Bosea, Uday Sankar Sahaa, Anshuman Bhowmicka & Susmita Karanb

aDepartment of EIE, Narula Institute of Technology, Kolkata
bDepartment of Basic Science & Humanities, Narula Institute of Technology, Kolkata

Abstract In this paper an electronic-eye based security system has been built to reduce burglary attempts that occur in vulnerable areas in the absence of the owner. The system uses an LDR (light dependent resistor): if an intruder uses a light to finish the task at hand, the light is detected by the LDR, the circuit is activated, and power is given to the buzzer and the GSM module to inform the neighbours and the owner of the house respectively.

Keywords: LDR; Burglary

___________________________________________________________________________________________________

1. Introduction In an era of advanced technology, where every single day carries us one step forward in the dimension of automation, our project stands to illustrate this fact itself. The electronic eye can be considered a watchful device, as it keeps an eye on vulnerable areas in the absence of the owner. A typical scenario: a thief arrives and happens to use a light to finish his task at hand. The circuit is activated as soon as light falls on the device and energises the relay; the relay in turn signals the buzzer and the GSM kit, informing the neighbours and the owner of the house in his or her absence.

2. Experiment This paper describes a security system based on a photo-sensing arrangement. A power supply is regulated by a fixed 5 V regulator IC. When light falls on the LDR, its resistance decreases and allows current to flow through the terminal. The output signal is amplified by two transistors and the amplified voltage is fed to the relay. The pole of the relay connects to the “NO” terminal, the green LED (which indicates that no light has reached the LDR) changes to red, and a buzzer is activated, generating a sound. Simultaneously 5 V is given to the Arduino Uno, which activates the GSM module, and it makes a phone call to the owner informing about the burglary.
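The trigger behaviour described above can be summarised as a small state function. This is a hypothetical Python sketch, not the project’s firmware; the 10 kΩ switching threshold is an assumed value for illustration, since the paper does not specify one.

```python
# Hypothetical sketch of the electronic-eye trigger: light on the LDR
# lowers its resistance, the amplified signal pulls in the relay, and
# the buzzer and GSM call are triggered.

DARK_RESISTANCE = 10_000  # ohms; assumed switching threshold (illustrative)

def eye_state(ldr_resistance_ohms: float):
    """Map the LDR resistance to the indicator and alarm state."""
    light_detected = ldr_resistance_ohms < DARK_RESISTANCE
    return {
        "relay": "NO" if light_detected else "NC",   # pole moves to NO on trigger
        "led": "red" if light_detected else "green",
        "buzzer": light_detected,
        "gsm_call": light_detected,
    }
```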

3. Results and Discussion In this circuit a 12 V input voltage is applied and regulated by a 5 V regulator IC to power the circuit. If any light is detected by the circuit, the relay is activated and signals the buzzer and the GSM kit.


4. Conclusion

This system can be installed in homes, shops or any other necessary areas where the owner is absent for some reason. By installing this system, the owner can provide the necessary protection to his or her valuables, at very low cost, and the system is very easy to use and handle.

References
1. http://www.electronicsforu.com
2. http://www.google.com


Li-Fi Technology - Transfer of Data through Visible Light

Abhijit Setha, Nistha Guptaa, Poonam Vermaa, Trinanjana Dea, Pritha Majumdera, Prakash Kr Thakura, Shyamal Mishraa & Susmita Karanb

aDepartment of Electronics and Instrumentation, Narula Institute of Technology
bDepartment of Basic Science and Humanities, Narula Institute of Technology

___________________________________________________________________________________________________

Abstract Li-Fi transfers data through illumination, sending data via an LED light bulb that varies in intensity faster than the human eye can follow. Wi-Fi is great for general wireless coverage within buildings, whereas Li-Fi is ideal for high density wireless data coverage in a confined area and for relieving radio interference issues. Li-Fi has already achieved extremely high speeds in the laboratory, and it provides better bandwidth, efficiency, availability and security than Wi-Fi communication. In the future Li-Fi technology can be useful to all sectors.

Keywords: Li-Fi, Wi-Fi, Bandwidth, Efficiency, Security

___________________________________________________________________________________________________

1. Introduction The concept of Li-Fi is presently attracting a great deal of interest, not least because it offers a legitimate substitute for radio frequency waves. As the population grows and present-day gadgets and appliances access the wireless internet, the airwaves are becoming progressively congested and inadequate for every device, making it more difficult to get a worthwhile and reliable high-speed signal. Wireless communication today uses radio waves, but as technology advances, the radio spectrum becomes increasingly unable to cater to this need. To address these issues of scalability, availability and security, a new approach has emerged: transmitting data wirelessly through light using LEDs, which is referred to as Li-Fi and builds on visible light communication (VLC) technology.

2. Methodology Li-Fi runs on visible light: it is a form of visible light communication. This means it uses a photo detector to receive light signals and a signal processing element to decode them. Light reaches everywhere, so it becomes easy to communicate through the visible portion of the electromagnetic spectrum, and Li-Fi even provides wireless indoor communication. The technology transmits data through an LED bulb whose illumination varies in intensity faster than the human eye can follow. Li-Fi is great for high density wireless data coverage in confined areas and for resolving radio interference issues, whereas Wi-Fi is ideal for general wireless coverage within building premises. Li-Fi ensures better security, efficiency, connectivity, bandwidth and availability, and has already achieved blisteringly high speeds in the laboratory. By leveraging the extremely low cost of LEDs and lighting units, applications range from public internet access through street lamps to auto-piloted cars that communicate through their headlights. Harald Haas envisions a future where data for laptops, smart phones and tablets will be transmitted through visible light.

Li-Fi is mainly implemented using a light bulb as the downlink transmitter. Normally the bulb glows under a constant current supply; however, fast and subtle variations in the current can be made to produce the optical output. Since only light is involved, it can easily be applied in aircraft, hospitals or any area where radio frequency communication is problematic.

3. Working principle of Li-Fi The operation of Li-Fi is quite simple: if the LED is ON, we transmit a digital 1; if it is OFF, we transmit a digital 0. The LEDs are switched on and off very quickly, providing an excellent opportunity to transmit data. All that is required is some LEDs and a controller that codes data into those LEDs; the flickering depends on the particular data we want to encode. The more LEDs in a lamp, the more data it can process. Consider an IR remote that sends a stream of data at 10,000-20,000 bps; now replace the IR LED with a light box containing a large LED array capable of sending thousands of such streams quickly and reliably. LEDs are found in traffic and street lights, car brake lights, remote control units and countless other applications. So visible light communication not only solves the problem of limited spectrum space but also enables many applications, because this spectrum is unused and unregulated and can therefore be used for communication at very high speeds. The method of using these rapid pulses of light to transmit information wirelessly is technically referred to as “Visible Light Communication”; it has the potential to compete with Wi-Fi and hence inspired the name Li-Fi.
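The on-off keying idea described above (LED ON = 1, OFF = 0) can be illustrated with a toy encoder and decoder. This is a simplified sketch for intuition only; a practical VLC link would additionally need clocking, framing, dimming support and error handling.

```python
# Toy on-off keying: each bit of a byte stream maps directly to an LED
# state (1 = ON, 0 = OFF), sent most-significant bit first.

def encode_ook(data: bytes):
    """Expand bytes into the LED on/off sequence, MSB first."""
    states = []
    for byte in data:
        for bit in range(7, -1, -1):
            states.append((byte >> bit) & 1)
    return states

def decode_ook(states):
    """Recover the byte stream from sampled LED states (8 per byte)."""
    out = bytearray()
    for i in range(0, len(states), 8):
        byte = 0
        for s in states[i:i + 8]:
            byte = (byte << 1) | s
        out.append(byte)
    return bytes(out)
```

A receiver that samples the photo detector at the transmitter’s bit rate would see exactly this sequence of states and invert the mapping.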

4. Transfer medium of Li-Fi Fibre optic cables transmit data through extremely thin threads of glass or plastic. The relation between fibre optics and Li-Fi is that signals travel through these fibres in the form of light and are then translated into 0s and 1s, the data. Fibre optics are extremely expensive, but their massive bandwidth availability may soon let them replace most existing wired cables, and the required change has already started.


5. Applications Li-Fi finds enormous applications in the fields of health technology, airlines, power plants, undersea working and more. It is also important for information delegation, learning and GPS usage.

6. Conclusion Possibilities for future utilization are abundant. Every light bulb could be converted into a Li-Fi transceiver to transfer data, and we could proceed towards a cleaner, safer and greener future. As the airwaves get more clogged day by day, Li-Fi can offer all of us a genuine alternative. Li-Fi is enabled by advanced digital transmission technologies; optical cell networks based on Li-Fi are the link between future energy-efficient illumination and cellular communication. They can also harness the unregulated, unused and vast amount of electromagnetic spectrum and can even enable smaller cells without the aid of new infrastructure.

Acknowledgement We would like to extend our thanks to our respective parents for their help and support.

References
[1] Engineering Research, ISSN 0973-4562, Vol. 7, No. 11 (2012).
[2] http://edition.cnn.com/2012/09/28/tech/lifi-haasinnovation
[3] http://www.dvice.com/archives/2012/08/lifi-ten-waysi.php
[4] wifi-it-s-lifi-internetthrough-lightbulbs
[5] http://www.lifi.com/pdfs/techbriefhowlifiworks.pdf
[6] http://www.ispreview.co.uk/index.php/2013/01/tiny-ledlights-set-to-deliver-wifi-style-internetcommunications.html
[7] http://www.newscientist.com/article/mg21128225.400-will-lifi-be-the-new-wifi.html
[8] http://groupivsemi.com/working-lifi-could-be available-soon/
[9] http://www.lifi.com/pdfs/techbriefhowlifiworks.pdf
[10] http://ledlightingtips.overblog.com/2014/03/technology-how-do-lifi-light-sourcework.html
[11] www.google.com


Automated Car Controller

Aparna Dasa, Manish Guhaa, Attrayee Sinhaa, Sayantan Mitraa, Vijitaa Dasa, Swati Naynaa and Susmita Karanb

aDepartment of Electronics & Instrumentation Engineering, Narula Institute of Technology, Kolkata, India
bDepartment of Basic Science and Humanities, Narula Institute of Technology, Kolkata, India

Abstract

In the upcoming era of technology, for the ease of human living, industrial automation with less human effort and all other possible application areas are enriched by the ever-growing features of technical advancement in the field of robotic mechanisms. In this project the effort of humankind is lessened by building an automated car controller that uses ultrasonic obstruction detection and relocation of the vehicle to avoid collision. The incorporation of Arduino programming merged with an intelligent design approach using ultrasonic sensors is the main attraction of the presented work. Further steps are being taken in this project to make it digitally globalized. This gives the idea of a creation in the technical field where hardware and software interfacing is done tactfully.

Keywords: Car Controller; Ultrasonic detecting Sensor; Relocation of vehicle; Arduino Programming; Automation

___________________________________________________________________________________________________

1. Introduction An autonomous car (driverless car or self-driving car) is a vehicle that is capable of sensing its environment and navigating

without human input. Autonomous cars use a variety of techniques to detect their surroundings, such as radar, laser light,

GPS, odometry, and computer vision. Advanced control systems interpret sensory information to identify appropriate

navigation paths, as well as obstacles and relevant signage. Autonomous cars have control systems that are capable of

analysing sensory data to distinguish between different cars on the road, which is very useful in planning a path to the

desired destination. Among the potential benefits of autonomous cars are a significant reduction in traffic collisions, the resulting injuries, and related costs, including a lower need for insurance.

2. Experiment / Analysis In our circuit, we have used three ultrasonic sensors which are directly connected to the Arduino via specific pin numbers. Based on a car model, we have placed the ultrasonic sensors in three different positions: one at the front, one at the right and one at the left of the car. We have used four DC motors which act as the four wheels of the car.

Condition:

If there is a car or any object at the front and right of our car, the ultrasonic sensors will sense the obstacle and the car will move towards the left. If the sensors detect an object at the left and front, it will move towards the right. If there are cars in all three directions, the sensors will detect all of them and our car will keep moving in the same direction, but its speed will drop. If there are cars only to the right and left, the car will continue to move at the same speed.

Techniques used in Programming:

Considering the front wheels, which are actually DC motors: if there are no cars on the left, the wheel on the left stops (in the program it is set low) while all the other wheels are set high, so the car changes direction towards the left and continues moving there. When there are cars in every direction, the car should drop its speed, so we have used a delay in the program to reduce the speed.
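The steering rules listed above can be written as a single decision function. This is a hypothetical Python rendering of the stated conditions, not the project’s Arduino code; the direction and speed labels are illustrative.

```python
# Hypothetical encoding of the stated steering rules. Each flag is True
# when the corresponding ultrasonic sensor reports an obstacle in range.

def steer(front: bool, left: bool, right: bool):
    """Return (direction, speed) for one control step."""
    if front and right and not left:
        return ("left", "normal")        # front+right blocked -> go left
    if front and left and not right:
        return ("right", "normal")       # front+left blocked -> go right
    if front and left and right:
        return ("straight", "slow")      # all blocked -> keep heading, slow down
    return ("straight", "normal")        # e.g. only left/right blocked
```

The motor driver layer would then translate the direction into low/high levels on the L293D input pins, as described in the Results section.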

3. Results and Discussion We have used three ultrasonic sensors in our circuit, which help us detect objects in three different directions. We are using four DC motors driven by the L293D driver IC, which connects the DC motors to the Arduino. The motors move according to the commands given through the Arduino program. Each motor has three connections with the Arduino: two input pins and one enable pin. The enable pin activates the DC motor.

Block Diagram & Circuit:

4. Conclusion Our society will be highly benefited by this; it will greatly reduce traffic accidents and minimize the effort of drivers. It will be helpful for learner drivers too, so overall it is a major step towards automation. In future, to remove the limitation of a fixed detection range, we can use a webcam for image processing, detecting cars and then changing the direction of the vehicle.



Literacy Tracking Device

Sukanya Sen

Department of Computer Science & Engineering, Narula Institute of Technology

Abstract Each and every person should be literate enough at least to meet the daily requirements of life, but this is not the case in some remote places in some countries, and not every person is rich enough to afford a proper education. So here is an idea: while preparing the census report, we record the educational status of every single person in a family along with the family’s yearly income. On the basis of the report, we segregate individuals who have the same educational status and arrange the required courses on their cable channel (in different time slots). For those who do not have a TV, a special theatre is to be made. At the end of each course, code words are given. Strict rules are made such that every Sunday a literacy tracking test is organised at the nearest grounds, required by law, that asks people for the code words. If they succeed in saying the correct code word, they qualify for the next level or class; otherwise they are taken to the nearby cinema halls by force and given lectures on their due courses.

The literacy tracking device will be nothing but an electronic gadget with a voice sensor. It will also have a virtual typing pad, so that those who are able to type can type their respective code words, while those who are unable to type will utter the code words, which will be recorded by the voice sensor and matched against the code words already saved in its memory. Since different people have different educational statuses, before typing or saying the code there will be an option to select which class of education one currently belongs to, and obviously the codes will be different for different classes. An important point is that all these activities will happen in a secret, secure room, like the one we use during elections, so that the code words are not leaked to people belonging to the same class.

Thus, a good use of electronics can be for the sake of the country, i.e. to help ensure 100% literacy worldwide.

Keywords: education; codewords; device; gadgets; literacy.

1. Introduction In today’s competitive world, literacy and education have always been primary needs. But even in this 21st century, people are not literate enough to compete, especially in India. India is a densely populated country; the majority of this population falls under the labour class, and so the illiteracy rates are high. It is perhaps due to the lack of facilities that the Indian government fails to supply to such an overcrowded population. Many people are so poor that they cannot afford a basic education. Statistics put the literacy rates in Asian countries at: Sri Lanka 98.1%, China 95.1%, Malaysia 93.1%, Indonesia 92.8%, Myanmar 92.7% and India 74.4%. Thus the illiteracy rate in India is considerably higher than in the other countries mentioned here, and illiteracy dominates the Indian subcontinent. Our sole objective is to find a solution to this.

2. Analysis The plan is quite simple. The prerequisites for this project are: voice sensors; keyboards / optical typing pads; memory elements (e.g. flip-flops); public co-operation.

Here is the plan. While preparing the census report, we record the educational status of every single person in a family along with the family’s yearly income. On the basis of the report, we segregate individuals who have the same educational status and arrange the required courses on their cable channel (in different time slots). For those who do not have a TV, a special theatre is to be made. At the end of each course, code words are given. Strict rules are made such that every Sunday a literacy tracking test is organised at the nearest grounds, required by law, that asks people for the code words. If they succeed in saying the correct code word, they qualify for the next level or class; otherwise they are taken to the nearby cinema halls by force and given lectures on their due courses. The literacy tracking device will be nothing but an electronic gadget with a voice sensor. It will also have a virtual typing pad, so that those who are able to type can type their respective code words, while those who are unable to type will utter the code words, which will be recorded by the voice sensor and matched against the code words already saved in its memory. Since different people have different educational statuses, before typing or saying the code there will be an option to select which class of education one currently belongs to, and the codes will be different for different classes. All these activities will happen in a secret, secure room, like the one we use during elections, so that the code words are not leaked to people belonging to the same class. Only those who are able to type or utter the correct code words will be provided with the next videos; those who fail will be taken to certain educational centres and taught with special attention. Thus, with public co-operation, we will be able to build up a strong country within a few years; with education and literacy, people will be able to apply for jobs and, as a result, the country’s economy will also be empowered.
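The code-word check at the heart of the proposed device can be sketched as follows. This is a hypothetical Python illustration: the class-to-code-word table is invented for the example, and a real device would store code words securely rather than in plain text and would place speech recognition in front of this comparison.

```python
# Hypothetical code-word verification for the literacy tracking device.
# The table below is purely illustrative; the paper names no code words.

CODE_WORDS = {  # class level -> current code word (assumed values)
    1: "lotus",
    2: "river",
}

def verify(level: int, answer: str) -> bool:
    """Return True if the typed or spoken answer matches the stored
    code word for the selected class level."""
    expected = CODE_WORDS.get(level)
    # Normalise whitespace and case, since typed and spoken input differ.
    return expected is not None and answer.strip().lower() == expected
```

A passing check would unlock the next level’s videos; a failing check would flag the person for the remedial lectures described above.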

3. Results and Discussion This project has some advantages along with some disadvantages. Advantages: easy access (people can sit back and relax at home while watching the educational videos on their own cable channels); every person becomes well acquainted with technology and digitized study, helping us build a digital India. Disadvantages: it needs an adequate supply of electronic gadgets, and we have to convince each and every person, especially those with a traditional mentality, so it is a demanding task for the organizing committee.


4. Conclusion In spite of all the difficulties and drawbacks, it may be concluded that if every Indian citizen co-operates in this, India could achieve 100% literacy within the next 10-20 years!

Acknowledgements I express my gratitude towards my teachers who encouraged me to participate in the event. I would also like to thank the TEQIP platform, without which all these efforts would have had no meaning.

References [1] Img1- http://www.rotaryteach.org/images/literacy-rate-comparision.jpg


Forecast Model Improvement with ECMWF

Sohini Bairagi, Priyanka Roy, Ranit Basack, Amritandu Saha

Department Electronics and Communication Engineering, Narula Institute of Technology

81, Nilgunj Road, Agarpara, Kolkata, West Bengal 700109

___________________________________________________________________________________________________

Abstract Weather forecasting is part of our daily life and very useful. Many forecast models have been developed; among these, the model of the European Centre for Medium-Range Weather Forecasts (ECMWF) in Reading, UK is very popular. ECMWF produces a number of different forecast products every day as part of its operational suite. One product is the global ensemble forecasting system: ensemble modelling takes advantage of the uncertainty in initial conditions and model formulation to produce 51 possible forecasts. Work is needed on how best to use this model output in operational forecasting environments and to understand its strengths and weaknesses, particularly for small-scale, high-impact weather phenomena such as thunderstorms. Some organizations use the ECMWF model output to initialize their own limited-area models. How best to use these data and construct the best forecast product remains an active area of research. This model has reduced the errors and bias in forecasts.

Keywords:ECMWF model; Weather forecast; Level pressure; Satellite; ERA-Interim data server

_________________________________________________________________________________

1. Introduction: Weather forecasting is the application of science and technology to predict the state of the atmosphere for a given location; human beings have attempted to predict the weather informally for millennia and formally since the nineteenth century. A forecast model is built by collecting quantitative data about the current state of the atmosphere at a given place and applying scientific understanding of atmospheric processes. All of ECMWF’s operational forecasts aim to assess the most likely forecast and also the degree of confidence one can have in that forecast. To do this the Centre carries out an ensemble of predictions which individually are full descriptions of the evolution of the weather, but which collectively assess the likelihood or probability of a range of possible future weather. ECMWF provides the products, tools, documentation and information needed to make the best use of its data.

2. Project Discussion: NWP requires input of meteorological data, collected by satellites and Earth-observation systems such as automatic and manned stations, aircraft, ships and weather balloons. Assimilation of these data produces an initial state of a computer model of the atmosphere, from which an atmospheric model is used to forecast the weather. These forecasts are typically: medium-range forecasts, predicting the weather up to 15 days ahead; monthly forecasts, predicting the weather on a weekly basis 30 days ahead; and seasonal forecasts up to 12 months ahead. The ERA-Interim data server surface archive has a mixture of analysis fields, forecast fields and fields available from both the analysis and the forecast. The other daily archives have only analysis data. If step 0 is chosen, then only analysed fields, which are produced for 0000, 0600, 1200 and 1800 UTC, are available. If step 3, 6, 9 or 12 is selected, then only forecast fields, which are produced from forecasts beginning at 0000 and 1200 UTC, are available. The sea-level pressure field can be used to find low- and high-pressure systems as well as the location of cold fronts. The highs and lows can be located by the white H and L symbols on the map. Cold fronts will generally start from areas of low pressure and follow the trough of low pressure south and to the west of the low. Rain and/or snow are likely in the regions directly around the low and somewhat less likely along the cold front. This model runs to 7 days and provides data at 24-hour intervals for sea-level pressure, 850 mb winds and temperatures, and 500 mb heights. These plots are generated once a day at 8:45 PM EST.
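The step rules above can be captured in a few lines. The function below is only a sketch of that logic; the field labels and return format are our own, not part of the ECMWF data server API.

```python
def available_fields(step):
    """Map a forecast step (hours) to the field type and base times (UTC hours)."""
    if step == 0:
        # Analysed fields are produced for 0000, 0600, 1200 and 1800 UTC
        return ("analysis", [0, 6, 12, 18])
    if step in (3, 6, 9, 12):
        # Forecast fields come from forecasts beginning at 0000 and 1200 UTC
        return ("forecast", [0, 12])
    raise ValueError("step must be 0, 3, 6, 9 or 12 in this archive")

print(available_fields(0))   # ('analysis', [0, 6, 12, 18])
print(available_fields(9))   # ('forecast', [0, 12])
```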

3. Conclusion ECMWF's monthly and seasonal forecasts provide early predictions of events such as heat waves, cold spells and droughts, as well as their impacts on sectors such as agriculture, energy and health. Since ECMWF runs a wave model, there are also predictions of coastal waves and storm surges in European waters which can be used to provide warnings. The Centre provides twice-daily global numerical weather forecasts, air quality analysis, atmospheric composition monitoring, climate monitoring, ocean circulation analysis and hydrological prediction. Overall, the ECMWF model is very useful for obtaining forecasts easily and accurately.

Acknowledgement: We would like to express our special thanks and gratitude to our teacher Dr. Susmita Karan as well as our principal Dr. Maitreyi Ray Kanjilal, who gave us the golden opportunity to do this wonderful project on the topic Forecast Model Improvement with ECMWF, which also helped us in doing a lot of research through which we came to know about so many new things; we are really thankful to them. Secondly, we would also like to thank our parents and friends, who helped us a lot in finalizing this project within the limited time frame.

NCMTP-2017 56

Ideal International E- Publication

www.isca.co.in

Reference: 1. www.ecmwf.int/en/forecasts

2. weather.unisys.com/ecmwf/

3. https://en.wikipedia.org/.../European_Centre_for_Medium-Range_Weather_Forecasts

4. www.ecmwf.int/.../forecasts/.../mean-sea-level-pressure-wind-speed-850-hpa-and-geo


Application of Power Electronics on Wind Turbine Molay Kumar Laha, Dipu Mistry, Sreeja Chakraborty, Nilkantha Nag, Soumya Das and Pallav Dutta

Electrical Engineering Department, Narula Institute of Technology,

Agarpara, Kolkata-700109, India

Abstract Fossil fuels are limited, and energy consumption studies show global power consumption rising rapidly. Renewable energy is the answer to fossil fuels, and power electronics is the answer to rising energy consumption. This paper describes how wind energy is changing from a minor source of energy production into an important power source in renewable energy systems with the help of power electronics.

Keywords–Renewable energy; Wind energy; Power electronics

1. Introduction Wind power is one of the oldest, most attractive and still-working renewable energy sources. It started with Charles Brush in 1888, who built the first large-size wind electricity generation system (a 12 kW generator), and now Denmark already produces about 30% of the electricity in its major cities with the help of wind. According to the Global Wind Energy Council, last year (2016) a total of 54.6 GW (gigawatts) of wind capacity was installed across the planet, bringing total global installed capacity to almost 487 GW.

The main advantages of renewable energy sources are that they are clean, widely distributed, produce no greenhouse gas emissions during operation and use little land. Their net effect on the environment is far less problematic than that of non-renewable power sources, and one of the most accepted renewable sources is wind power. In earlier days wind turbines were technically based on a squirrel-cage induction generator directly connected to the grid. Back then there was no technology to control active and reactive power, which are typically very important control parameters for balancing frequency and voltage. At present, the size and power range of wind turbines have increased, and with the help of power electronics these control parameters (active and reactive power) are easy to manipulate. So, it is important to introduce power electronics as an interface between the wind turbine and the grid.

This paper will discuss different wind turbine configurations, both aerodynamic and electrical, and will also explain various control mechanisms for wind turbines.

Figure 1. Converting wind power to electrical power in a wind turbine

2. Methodology

Wind turbine energy conversion: a wind turbine uses the power of the wind to obtain its required mechanical power. The specially aerodynamically designed blades of the wind turbine convert raw wind power into rotating mechanical power. The most productive way to convert this low-speed, high-torque rotational mechanical power into electrical power is to use a gearbox and a standard fixed-speed generator, as shown in Fig. 1; if a multi-pole generator system is used, the gearbox is optional. The electrical output from the generator is transferred to the power converter and power control section, where the interface between turbine and grid occurs.

3. Experiment/analysis Different wind turbine configurations: as mentioned before, the aerodynamically designed wind turbine blades are solely responsible for the conversion of wind power into mechanical power. The power of the wind scales with the cube of the wind speed, so it is very important to be able to control and limit the converted mechanical power at higher wind speeds. The limitation is done by stall control (fixed blade position, but stall of the wind appears along the blade at higher wind speed), pitch control (blades are turned out of the wind at higher wind speed) or active stall (blade angles are manipulated in such a way as to create stall along the blades). There are three types of wind turbine. The first type is the wind turbine without power electronics. The second type is the wind turbine with partially rated power electronics, and the last type is the full-scale power-electronic-interfaced wind turbine system.
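The cube law mentioned above is worth a quick numerical check: doubling the wind speed multiplies the available power by eight, which is why the converted mechanical power must be limited above the rated wind speed. The sketch below uses the standard formula P = ½ρA·Cp·v³; the rotor radius, power coefficient and rated power are illustrative assumptions, not values from the paper.

```python
import math

def mechanical_power(v, radius=40.0, rho=1.225, cp=0.45, p_rated=2.0e6):
    """Converted mechanical power (W), capped at rated power by aerodynamic control."""
    area = math.pi * radius ** 2                # swept rotor area, m^2
    p = 0.5 * rho * area * cp * v ** 3          # P = 1/2 * rho * A * Cp * v^3
    return min(p, p_rated)                      # stall/pitch/active stall limits output

for v in (5, 10, 15):
    print(f"{v:2d} m/s -> {mechanical_power(v) / 1e6:.2f} MW")
```

Below rated speed the output follows the cube of the wind speed; above it, the cap here stands in for the stall, pitch or active-stall limiting described in the text.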

Figures 2, 3 and 4 represent the first type of wind turbine, where the power is controlled aerodynamically and the speed of the turbine is fixed.


Figures 5, 6 and 7 represent the power characteristics of fixed-speed wind turbines.

Figure 2. Stall control; Figure 5. Stall control power characteristics

Figure 3. Pitch control; Figure 6. Pitch control power characteristics

Figure 4. Active stall control; Figure 7. Active stall control power characteristics

In these figures, power is limited by stall, pitch or active stall control. All three systems use a reactive power compensator to cut down (almost eradicate) the reactive power drawn by the system. This is done by switching capacitor banks (in 5-25 steps). All these systems use a soft-starter.

Figures 8 and 9 represent wind turbines with partially rated power electronics. In Figure 8 a wound-rotor induction generator is connected to a resistance-controlling power converter. This power converter controls the rotor resistance of the induction generator, and a speed variation of up to 30% around synchronous speed can be achieved. The power converter also helps to control both active and reactive power, which gives prominent grid control. So, a wind turbine implemented with power electronics can act as a powerful, productive power source for the grid.

Figure 8. Wind turbine with partially rated power electronics; Figure 9. DFIG-based partially rated power electronics wind turbine

Figure 9 represents the DFIG (Doubly Fed Induction Generator) configuration, which generally uses a back-to-back converter consisting of two bi-directional converters sharing a common DC link. One converter is connected to the rotor and the other to the grid. These two converters are able to vary the speed of the generator and to control both active and reactive power. This DFIG system does not need a soft-starter or a reactive power compensator.
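As a numeric illustration of the variable-speed operation discussed above: the synchronous speed follows from grid frequency and pole count (ns = 120·f/p in rpm), and the rotor converter then allows operation some fraction either side of it. The 50 Hz grid, 4-pole machine and 30% slip range below are assumptions for the example (the 30% figure follows the speed variation quoted earlier in the text), not machine data from the paper.

```python
def dfig_speed_window(freq_hz=50.0, poles=4, slip_range=0.30):
    """Synchronous speed (rpm) and the speed window a rotor converter permits."""
    n_sync = 120.0 * freq_hz / poles                 # ns = 120 f / p
    return n_sync, n_sync * (1.0 - slip_range), n_sync * (1.0 + slip_range)

ns, n_min, n_max = dfig_speed_window()
print(f"synchronous {ns:.0f} rpm, window {n_min:.0f}-{n_max:.0f} rpm")
```

For these assumed values the generator would run between roughly 1050 and 1950 rpm around a 1500 rpm synchronous speed.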

Figures 10 and 11 represent full-scale power electronics interfaced with the wind turbine.

Figure 10 represents a synchronous generator with a gearbox to control the wind turbine speed; it needs a small power converter for field excitation.

Figure 11 represents a multi-pole system with a synchronous generator. These models do not need a gearbox and are a lot cheaper than other models, so they are very popular in the industry.

Fig 10. Wind turbine with synchronous generator; Fig 11. Wind turbine with multi-pole synchronous generator

4. Control of wind turbine A wind turbine needs both fast and slow control, which can be achieved aerodynamically and with the help of power converters. The aim is maximum power production from the available wind power. Figure 12 represents an example of the control scheme of a wind turbine with a DFIG system. Whenever the wind speed is low, the turbine will be fixed at the maximum possible slip so as not to suffer over-voltage. The pitch-angle controller limits the power in case the turbine reaches its nominal power. A generator-side converter is connected to the DFIG in order to control the generated electrical power. The grid-side converter is used to keep the DC-link voltage fixed.

Figure 12. Control scheme of a wind turbine with DFIG system

5. Result Figure 13 shows the global cumulative installed wind capacity and how wind energy generation has increased over the last couple of years. This remarkable growth would not have been possible without the help of power electronics.

Figure 13. Global cumulative installed wind capacity.

6. Conclusion This paper discusses power electronics applications in wind turbine systems. It briefly discusses how wind power is converted to electrical energy, describes various kinds of aerodynamically controlled wind turbine systems, and shows how power electronics help achieve maximum power production by controlling active and reactive power. It can therefore be said that power-electronic scaling of wind turbines is very important in order to reduce the cost of energy.


Acknowledgement We would like to extend our sincere thanks to TEQIP for its immense support. We are indebted to the Head of the Department, Electrical Engineering, and the other faculty members for giving us an opportunity to learn and present this software-based paper. Without the above-mentioned people, this paper would never have been completed successfully.

References

(i) A. D. Hansen, F. Iov, F. Blaabjerg, L. H. Hansen, "Review of Contemporary Wind Turbine Concepts and their Market Penetration", Journal of Wind Engineering, Vol. 28, No. 3, 2004, pp. 247-263.

(ii) M. Liserre, R. Cardenas, M. Molinas, J. Rodriguez, "Overview of Multi-MW Wind Turbines and Wind Parks", IEEE Transactions on Industrial Electronics, Vol. 58, No. 4, April 2011, pp. 1081-1095.

(iii) F. Blaabjerg, Z. Chen, S. B. Kjaer, "Power Electronics as Efficient Interface in Dispersed Power Generation Systems", IEEE Transactions on Power Electronics, Vol. 19, No. 4, 2004, pp. 1184-1194.

(iv) Z. Chen, J. M. Guerrero, F. Blaabjerg, "A Review of the State of the Art of Power Electronics for Wind Turbines", IEEE Transactions on Power Electronics, Vol. 24, No. 8, Aug. 2009, pp. 1859-1875.

(v) Z. Chen, E. Spooner, "Wind Turbine Power Converters: A Comparative Study", Proc. of PEVD '98, 1998, pp. 471-476.

(vi) P. Rodriguez, A. Timbus, R. Teodorescu, M. Liserre, F. Blaabjerg, "Reactive Power Control for Improving Wind Turbine System Behavior Under Grid Faults", IEEE Transactions on Power Electronics, Vol. 24, No. 7, July 2009, pp. 1798-1801.

(vii) D. Xiang, Li Ran, P. J. Tavner, S. Yang, "Control of a Doubly Fed Induction Generator in a Wind Turbine During Grid Fault Ride-Through", IEEE Transactions on Energy Conversion, Vol. 21, No. 3, Sept. 2006, pp. 652-662.


Biophysics in Disease Analysis Eisita Basak

Narula Institute of Technology

___________________________________________________________________________________________________

Abstract Biophysics is one of the most important parts of biomedical science. The electrocardiogram is used to analyse the activity of a person's heart. Magnetic resonance imaging is used to create detailed images of the human body. Both processes have physics behind their working.

___________________________________________________________________________________________________

1. Introduction Biophysics is the application of the concepts, theories and methods of physics to enhance healthcare. This branch is responsible for the technical foundation of radiology, radiation oncology and nuclear medicine. It deals with ionizing and non-ionizing radiation. Imaging and therapy are done using radiation technology. Imaging is required to verify and adapt the treatment according to the findings. Examples: CT scan, USG, ECG, MRI, etc.

2. Analysis The electrocardiogram (ECG) is a representation of the electrical events of the cardiac cycle. Each event has a distinctive waveform, and the study of these waveforms can lead to greater insight into a patient's cardiac pathophysiology.

The working principle of the ECG depends on the movement of the electrical impulse. The electrical impulse (a wave of depolarization) is picked up by placing electrodes on the patient. The voltage change is sensed by measuring the change in potential across two electrodes, a positive electrode and a negative electrode. If the electrical impulse travels towards the positive electrode, this results in a positive deflection; if the impulse travels away from the positive electrode, this results in a negative deflection. A conventional ECG is given in Fig 1.

Fig. 1

A magnetic resonance imaging (MRI) machine uses a powerful magnetic field, radio-frequency pulses and a computer. It works on the principle of NMR, or nuclear magnetic resonance, and creates detailed images of the human body.

The human body contains about 65% water. The hydrogen nuclei in free water molecules are single protons. These protons precess around their own axes with an angular frequency ω0, called the Larmor frequency. Because of this precession, a rotating magnetic field is produced around each proton. When the human body is placed in the scanner, if the frequency of the applied radio-frequency field becomes equal to the Larmor frequency of precession, the protons absorb energy. As a result, the spins of the protons flip accordingly and the protons show resonance. Different body tissues return to their normal spin states at different rates, and the relaxation rate of defective body tissue differs from that of normal body tissue. The emitted radiation is then captured by the scanner. A conventional MRI scan is given in Fig 2.

Fig. 2
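The precession described above obeys the Larmor relation f = gamma-bar × B0, where gamma-bar is the gyromagnetic ratio of the nucleus divided by 2π (about 42.58 MHz per tesla for the hydrogen nucleus). The field strength below is an assumption typical of clinical scanners, added purely for illustration.

```python
GAMMA_BAR_1H = 42.58e6  # Hz per tesla: 1H gyromagnetic ratio / (2*pi)

def larmor_frequency_hz(b0_tesla):
    """Precession frequency of hydrogen protons in a static field of b0_tesla."""
    return GAMMA_BAR_1H * b0_tesla

print(larmor_frequency_hz(1.5) / 1e6, "MHz")  # ~63.9 MHz at an assumed 1.5 T
```

The scanner's RF pulse must be tuned to this frequency for the protons to absorb energy and show resonance, as described in the text.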

3. Results and Discussion For disease analysis and treatment, many procedures are used that are based on the working principles of physics. The ECG is a direct application of pulse measurement based on the change in the flow of current. MRI is an application of nuclear magnetic resonance for the betterment of human life.


4. Conclusion Much research is still going on to increase the quality of healthcare through biomedical physics. Some such technologies are dual-energy X-ray absorptiometry, molecular imaging, electrical impedance tomography, diffuse optical imaging and optical coherence tomography.

Acknowledgements I would like to thank the teachers of the Physics Department for their enormous support. I would also like to thank my group members, Sayantan Saha, Subham Seth, Atalantic Panda, Sayan Das and Srinjan Ghosh, for their cooperation.

References 1. Donald L. Pavia, Introduction to Spectroscopy, pp. 105-110, 4th edition, Brooks/Cole.

2. http://www.livescience.com/39074-what-is-an-mri.html

3. http://www.cyberphysics.co.uk/topics/medical/MRI.htm

4. http://www.brainandspine.org.uk/mri-scans

5. http://hyperphysics.phyastr.gsu.edu/hbase/Biology/ecg.html

6. http://www.webmd.com/heart-disease/electrocardiogram

7. http://www.medicine-on-line.com/html/ecg/e0001en.htm


The Trappist Ranit Dey, Subhankar Roy and S. Uma Harshini

Department of Electronics and Communication Engineering, Narula Institute of Technology,

81, Nilgunj Road, Agarpara, Kolkata – 700109, West Bengal

Abstract The telescope is one of the instruments that has long helped humans observe remote objects by collecting data, and a remarkable observation was recently made by the Belgian robotic optical telescope TRAPPIST, which is situated high in the Chilean mountains at ESO's La Silla Observatory. The curiosity of humans, along with the versatility of new equipment, led to the discovery of a new planetary system, TRAPPIST-1, observed by TRAPPIST, the TRAnsiting Planets and PlanetesImals Small Telescope. A team of astronomers used the telescope to observe the ultra-cool dwarf star 2MASS J23062928-0502285, now also known as TRAPPIST-1. By utilising the principles of transit photometry, they recently discovered a total of seven Earth-sized planets orbiting this star in the constellation of Aquarius, just 40 light years away. TRAPPIST has remotely measured the orbital periods of the planets of TRAPPIST-1, and related data such as the sizes of the planets and the eccentricities with which they move have also been predicted. The exact times at which the planets transit also provide a means to measure their masses, which leads to their densities and thereby informs us about their bulk properties. The planets are thought to be consistent with a rocky composition. TRAPPIST has also been used to study the climates of terrestrial worlds beyond our Solar system, which leads mankind to a bigger leap. The TRAPPIST-1 worlds are providing humanity its first opportunity to discover evidence of biology beyond the Solar system, and they help with the subsequent investigations required to learn about foreign atmospheres. This enormously improves our capacity to discover planets with the transit method. Finding planets orbiting ultra-cool dwarfs means finding planets similar to our Earth in several aspects, but different in several others. For instance, the amount and type of light the planets receive is not the same as what we receive on Earth. Also, the proximity of the TRAPPIST-1 planets to their star means that they are likely to be tidally locked, which signifies that there is a permanent day side and a permanent night side. What the effects on climate will be remains mostly unknown. Planets like those of TRAPPIST-1 will open the study of what appears to our eyes like exotic climates, but which may in fact be some of the most usual climates outside the Solar system. Ultra-cool stars provide a platform to study the most common planets that exist, which is essential to understanding the formation of Earth-like planets and is also crucial in order to one day establish with what frequency biology has emerged in the Cosmos.

Keywords– TRAPPIST; TRAPPIST-1; transit photometry; ultra-cool dwarf; Cosmos.

___________________________________________________________________________________________________

1. Introduction Remote sensing instruments are at the core of atmospheric and observational physics, and TRAPPIST is one of those which has made a remarkable discovery in the 21st century. The TRAnsiting Planets and PlanetesImals Small Telescope, popularly known as TRAPPIST, discovered a new planetary system with a set of seven planets orbiting a star, reminiscent of our own solar system. It is situated high in the Chilean mountains at ESO's La Silla Observatory and came online in 2010. TRAPPIST consists of a pair of 60 cm robotic telescopes dedicated to the study, detection and characterization of planets beyond our Solar system. On 9 January 2011, the first observation of an ultra-cool dwarf was made by TRAPPIST, in preparation for SPECULOOS (Search for Planets EClipsing ULtra-cOOl Stars). In 2015, intense photometric monitoring of the star (2MASS J23062928-0502285, or TRAPPIST-1a) began. Later, on 1 May 2016, TRAPPIST was joined by UKIRT in Hawaii, the William Herschel Telescope in La Palma, the Liverpool Telescope and the Very Large Telescope in Chile for collecting data, and together they gave clear indications that there were more than 3 planets orbiting the star. Soon the other orbiting planets were confirmed. On 23 February 2017 the discovery of the planetary system was published worldwide.

2. Analysis The observed details of the host star, TRAPPIST-1, are that its mass and radius are approximately 8% and 11% of those of the Sun, and that it is about 3228 K cooler than the Sun. The method of transit photometry led to the discovery of the TRAPPIST-1 planetary system. The planets were detected by measuring the minute dimming of the star as the orbiting planets passed between the Earth and the star. The dimming of the star directly reflects the size ratio between the star and the planet. This method of photometry gives a size estimate but not the mass, whereas the spectroscopic method gives an estimate of the mass; a combination of the two methods can be used to calculate the density of a planet. The regular and repeated shadows cast during transit led to the discovery of these planets. The other bulk properties can be measured by analysing the exact transit times of the planets combined with radial velocity data. TRAPPIST has also remotely studied the atmospheric composition of these planets by monitoring the depth of transit at different wavelengths. Some of the measured and calculated properties of these planets are listed in the table below.

Table I: The TRAPPIST-1 planetary system

Name          Radius               Mass              Eccentricity
TRAPPIST-1b   1.085 ± 0.035 Re     0.85 ± 0.72 Me    <0.081
TRAPPIST-1c   1.056 ± 0.035 Re     1.38 ± 0.61 Me    <0.083
TRAPPIST-1d   0.772 ± 0.030 Re     0.41 ± 0.27 Me    <0.070
TRAPPIST-1e   0.918 ± 0.039 Re     0.62 ± 0.58 Me    <0.085
TRAPPIST-1f   1.045 ± 0.038 Re     0.68 ± 0.18 Me    <0.063
TRAPPIST-1g   1.127 ± 0.041 Re     1.34 ± 0.88 Me    <0.061
TRAPPIST-1h   0.715 ± 0.043 Re     Unknown           Unknown

(Re is the radius of the Earth and Me is the mass of the Earth.)
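The two relations described above can be checked against Table I. The transit depth goes as the square of the planet-to-star radius ratio, and the bulk density follows from mass and radius; the sketch below plugs in the TRAPPIST-1b row together with the quoted stellar radius (about 11% of the Sun's). The physical constants are standard values, and the central mass value carries the large uncertainty shown in the table.

```python
import math

R_EARTH = 6.371e6    # m
R_SUN = 6.957e8      # m
M_EARTH = 5.972e24   # kg

def transit_depth(r_planet, r_star):
    """Fractional dimming of the star while the planet transits."""
    return (r_planet / r_star) ** 2

def bulk_density(mass, radius):
    """Mean density (kg/m^3) from a planet's mass and radius."""
    return mass / ((4.0 / 3.0) * math.pi * radius ** 3)

r_b = 1.085 * R_EARTH          # TRAPPIST-1b radius (Table I)
r_star = 0.11 * R_SUN          # host star: ~11% of the Sun's radius
print(f"transit depth: {transit_depth(r_b, r_star):.4%}")
print(f"bulk density:  {bulk_density(0.85 * M_EARTH, r_b):.0f} kg/m^3")
```

The dip of a little under 1% is what the photometry actually measures, and a density of a few thousand kg/m³ is what makes the rocky-composition claim plausible.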

Primary analysis states that the sizes and masses of the planets are comparable to those of the Earth and Venus and are consistent with a rocky composition. It is assumed that the planets may be tidally locked, which would make the development of life-forms more challenging. The permanent day side and permanent night side may lead to strong winds circling the planets. Moreover, it is estimated that TRAPPIST-1b and TRAPPIST-1c do not have hydrogen envelopes and may have lost enough water to compromise habitability, but TRAPPIST-1d may have kept enough water to sustain life. Of the seven, TRAPPIST-1e, 1f and 1g are predicted to contain liquid water on their surfaces. The mass, density and eccentricity of TRAPPIST-1h are still unclear, and the planet is under observation.

3. Results and Discussion TRAPPIST and several satellites are focusing on the new system so as to increase the number of transit-timing measurements for each of the planets. This study will provide more accurate masses and orbital eccentricities and will also help confirm whether the planets possess any volatile matter (like water) along with a rocky surface. During transit, the portion of the starlight that passes through the atmosphere of a planet is transformed by the chemical composition of the atmosphere and by its vertical structure, so this provides a key to remotely studying the climates of terrestrial worlds beyond our Solar system. From Table I it can be observed that the radii and masses of the planets are approximately comparable to the radius and mass of the Earth, which hints that Earth will have friends soon.

The resultant survey suggests that three of the planets are in the Goldilocks zone (habitable zone), which is very significant for survival and very remarkable for humans. Planets which eclipse ultra-cool dwarf stars are essential for understanding the formation of Earth-like planets, and crucial in order to establish whether, and when, biology has emerged in the Universe. From the inter-orbital spacing of the planets it is also assumed that someone standing on the surface of any planet in the system would probably be able to view the other planets of the system orbiting the star.

4. Conclusion TRAPPIST and the TRAPPIST-1 worlds are both providing humanity with many opportunities to study extra-terrestrial worlds beyond our solar system. Science fiction may turn into reality if some extra-terrestrial life is found on those worlds; already 10 billion radio signals have been sent for SETI (Search for Extra-Terrestrial Intelligence), and still nothing has been reciprocated. But as usual, hope and curiosity will never cease, and further research will surely reveal something more exciting in the upcoming years.

Acknowledgements We would like to take this opportunity to express our profound gratitude and deep regard to our teacher in the Physics Department for her exemplary guidance, valuable feedback and constant encouragement throughout the completion of this paper. Her perceptive criticism kept us working to make this paper much better. We would also like to give our sincere gratitude to our co-workers Ashmita Das, Debasish Ghosh, Amit Maiti and Soumita Das, who supported us with their valuable suggestions and without whom this paper would have been incomplete.

References [1] www.trappist.one

[2] en.m.wikipedia.org/wiki/TRAPPIST-1

[3] en.m.wikipedia.org/wiki/Active_SETI

[4] www.planetary.org/explore/space-topics/exoplanets/transit-photometry.html


Human Skin as Touch Screen Aniket Dhar and Ishitri Mukherjee

Narula Institute of Technology, 81, Nilgunj Road, Agarpara, Kolkata-700109

Abstract Electronics is the branch of science that deals with the study and control of the flow of electrons, and with their behaviour and effects in vacuums, gases and semiconductors, together with the devices that use them. This control of electrons is accomplished by devices that resist, carry, select, steer, switch, store, manipulate and exploit electrons. One such electronic device is the mobile phone.

Mobile communication technology has come a long way since the initial analog phones. In the last few decades, mobile wireless communication networks have experienced tremendous change. Each generation has standards, capacities, techniques and new features which differentiate it from the previous one, and the new standard is the touch screen mobile. A touch screen is an important input and output device, normally layered on top of the electronic visual display of an information processing system. The touch screen enables the user to interact directly with what is displayed, rather than using a mouse, touchpad or any other intermediate device. The mobile industry has brought many revolutionary changes: from big, bulky handsets to small, portable sets, and from keypad operation to the touch screen. This project aims at using human skin, on the arm, palm or leg, as a touch screen panel. All we need to do is wear a band on our wrist, which will display all the data from our mobile onto our skin, and we can use it as a touch screen. To execute further actions we just need to type the command on our skin; with the aid of an acoustic sensor, the commands are read from our skin and executed. The acoustic sensor analyses the precise tissue density and other biometric data from our skin in order to decide which command has been given.

Keywords: Acoustic sensor; Skinput.

1. Introduction Call it an effort of the human mind or a miracle from the human heart, but this is all happening with technology. We live in an era where everything that can possibly be thought of can also be put into practice, and quite reasonably too. Just move your hand or fingers over a thing and it works: it is an interactive gesture-based technology we are talking about. As outlined in the abstract, this project aims at using human skin, on the arm, palm or leg, as a touch screen panel, with a wristband that displays the data from our mobile onto our skin. We build this with two new technologies: 1) Skinput technology, or bio-acoustic sensing, and 2) Sixth Sense technology. Skinput is a technology that appropriates the human body for acoustic transmission, allowing the skin to be used as an input surface. SixthSense is a wearable gestural interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information.

2. Analysis Skinput is an input technology that uses bio-acoustic sensing to localize finger taps on the skin. When augmented with a pico-projector, the device can provide a direct-manipulation graphical user interface on the body. Skinput represents one way to decouple input from electronic devices, with the aim of allowing devices to become smaller without simultaneously shrinking the surface area on which input can be performed. Skinput employs acoustics, which take advantage of the human body's natural sound-conductive properties. This allows the body to be annexed as an input surface without the need for the skin to be invasively instrumented with sensors, tracking markers or other items.

Skinput has been publicly demonstrated as an armband which sits on the biceps. This prototype contains ten small cantilevered piezo elements configured to be highly resonant, sensitive to frequencies between 25 and 78 Hz.

The gadget effectively turns your arm into a touch screen surface by picking up various ultra-low sounds produced when

you tap different areas.

Different skin locations are acoustically distinct because of bone density and the filtering effect from soft tissues and joints.

The team then used software that matched sound frequencies to specific skin locations.
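The frequency-matching step mentioned above can be sketched as a nearest-signature classifier: each tap location gets a stored acoustic signature, and a new tap is assigned to whichever signature it is closest to. The feature vectors below are invented for illustration and are not the actual Skinput training data, which used a trained classifier rather than this toy distance rule.

```python
def classify_tap(tap_features, signatures):
    """Return the skin location whose stored signature is nearest (Euclidean)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(signatures, key=lambda loc: distance(tap_features, signatures[loc]))

# Hypothetical per-location energies in three low-frequency bands (25-78 Hz)
signatures = {
    "wrist":   [0.9, 0.3, 0.1],
    "forearm": [0.4, 0.8, 0.2],
    "palm":    [0.2, 0.3, 0.9],
}
print(classify_tap([0.85, 0.35, 0.15], signatures))  # wrist
```

The reason such matching works at all is the point made in the text: bone density and the filtering effect of soft tissue and joints make each location acoustically distinct.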

Fig 1. Transverse wave propagation

Fig 2. Longitudinal wave propagation

We've evolved over millions of years to sense the world around us. When we encounter something, someone or some place, we use our five natural senses to perceive information about it; that information helps us make decisions and choose the right actions to take. But arguably the most useful information that can help us make the right decision is not naturally perceivable with our five senses, namely the data, information and knowledge that mankind has accumulated about

everything, and which is increasingly all available online. Although the miniaturization of computing devices allows us to carry computers in our pockets, keeping us continually connected to the digital world, there is no link between our digital devices and our interactions with the physical world. Information is traditionally confined to paper, or digitally to a screen. Sixth Sense bridges this gap, bringing intangible digital information out into the tangible world and allowing us to interact with it via natural hand gestures. Sixth Sense frees information from its confines by seamlessly integrating it with reality, thus making the entire world your computer.

The SixthSense prototype is comprised of a pocket projector, a mirror and a camera. The hardware components are coupled

in a pendant like mobile wearable device. Both the projector and the camera are connected to the mobile computing device

in the user’s pocket. The projector projects visual information enabling surfaces, walls and physical objects around us to be

used as interfaces; while the camera recognizes and tracks user's hand gestures and physical objects using computer-vision

based techniques. The software program processes the video stream data captured by the camera and tracks the locations of

the coloured markers (visual tracking fiducials) at the tip of the user’s fingers using simple computer-vision techniques. The

movements and arrangements of these fiducials are interpreted into gestures that act as interaction instructions for the

projected application interfaces. The maximum number of tracked fingers is only constrained by the number of unique

fiducials, thus Sixth Sense also supports multi-touch and multi-user interaction.
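As a toy example of turning fiducial movements into interaction instructions, the sketch below reads the change in distance between two tracked finger markers across frames as a hypothetical pinch gesture. The rule, the tolerance and the coordinates are assumptions for illustration, not the actual SixthSense gesture engine.

```python
# Interpret two colour-marker (fiducial) positions in consecutive frames.
def pinch_gesture(prev, curr, tol=5.0):
    """prev/curr: ((x1, y1), (x2, y2)) fiducial positions in pixels."""
    def spread(pair):
        (x1, y1), (x2, y2) = pair
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    change = spread(curr) - spread(prev)
    if change > tol:
        return "zoom-in"    # fingers moving apart
    if change < -tol:
        return "zoom-out"   # fingers moving together
    return "none"

print(pinch_gesture(((0, 0), (100, 0)), ((0, 0), (60, 0))))  # → zoom-out
```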

The wave propagation through the skin can be of two types: 1) longitudinal wave propagation, and 2) transverse wave propagation. Both the camera and the projector are connected to the mobile phone or computer via Bluetooth. When we wear the SixthSense device on the arm or palm, the pico-projector projects the screen of the mobile or computer onto the hand. When we touch this projected screen, the movement and the wave propagation generated by the finger tap are automatically sensed by the Skinput device.

Fig 3. Acoustic response on different skin location

Fig 4. Finger tapping on screen which is shown by projector

Fig 5. Skinput device on arm

Fig 6. Wave propagation in the arm and wrist during finger tapping


Fig 7.Skinput device with sixth sense

3. Conclusion From this project we arrive at an advanced device. We do not require a mobile screen or computer display to operate our devices; just by touching the hand or arm we can operate them. We do not require any kind of keyboard, touch screen, bulky wiring or high-power supply. With this technology, users can manage audio devices, play games, make phone calls and navigate hierarchical browsing systems. In future we may access many more things: we may obtain information (i.e. physical dimensions, temperature, shape etc.) about the objects that we touch, and we may exchange information about a person simply by shaking hands.

References [1] "Telepointer: Hands-Free Completely Self Contained Wearable Visual Augmented Reality without Headwear and

without any Infrastructural Reliance", IEEE International Symposium on Wearable Computing (ISWC00), pp. 177, 2000,

Los Alamitos, CA, USA.

[2] "WUW – wear Ur world: a wearable gestural interface", Proceedings of CHI EA '09 Extended Abstracts on Human

Factors in Computing Systems Pages 4111-4116, ACM New York, NY, USA.

[3] IEEE Computer, Vol. 30, No. 2, February 1997, Wearable Computing: A First Step Toward Personal Imaging, pp. 25–32.

[4] Wearable, tetherless computer–mediated reality, Steve Mann. February 1996. In Presentation at the American

Association of Artificial Intelligence, 1996 Symposium; early draft appears as MIT Media Lab Technical Report 260,

December 1994.

[5] "Cyborg: Digital Destiny and Human Possibility in the Age of the Wearable Computer", Steve Mann with Hal

Niedzviecki, ISBN 0-385-65825-7 (Hardcover), Random House Inc, 304 pages, 2001.

[6] The Body Electric: An Anatomy of the New Bionic Senses, by James Geary, 2002, 214 pp.

[7] "Skinput: Appropriating the Body as an Input Surface". Microsoft Research Computational User Experiences Group. Retrieved 26 May 2010.

[8] Harrison, Chris; Tan, Desney; Morris, Dan (10–15 April 2010). "Skinput: Appropriating the Body as an Input Surface" (PDF). Proceedings of the ACM CHI Conference 2010.

[9] Dillow, Clay (3 March 2010). "Skinput Turns Any Bodily Surface Into a Touch Interface". Popular Science.

[10] "Technology: Skin Used as an Input Device" (interview transcript). National Public Radio. 4 March 2010.


5G Connectivity-The Beginning of New Era Harshita Jain, Sagar Gupta, Rishav Roy and Nazia Hassan

Narula Institute of Technology, 81Nilgunj Road Kolkata-700058, India

Abstract Present cell phones have it all: small size, large storage, speed dialling and so on. The fifth generation (5G) mobile system model is an IP-based model for wireless and mobile network interoperability which is capable of fulfilling the demands of the cellular communications market. The All-IP Network (AIPN) uses packet switching, and its continuous evolution provides optimized performance and cost. The 5G network architecture consists of a user terminal and a number of independent autonomous radio access technologies (RATs). 5G promises large broadcasting capacity, up to gigabit rates, supporting almost 65,000 connections at a time. Moreover, applications will be combined with artificial intelligence, so that human life will be surrounded by sensors communicating with mobile phones. 5G includes the latest technologies such as cognitive radio, SDR, nanotechnology and cloud computing, everything based on an IP platform. It is expected that the initial Internet philosophy of keeping the network as simple as possible and giving more functionalities to the end nodes will become reality with the future generation of mobile networks, referred to as 5G.

Keywords: Interoperability; AIPN; Packet Switching; Cognitive Radio; SDR; Nano-technology; Cloud Computing.

___________________________________________________________________________________________________

1. Introduction 5th Generation mobile networks, or 5G in abbreviated form, is the next step for wireless network systems. 5G has higher capacity and density than 4G and other networks. One of the key objectives for mobile networks is to provide an excellent end-user experience to satisfy the ever-growing demand for data, but the need for more capacity is just one driver for mobile networks to evolve towards 5G. Fifth generation technology will fulfil all the requirements of customers who always want advanced features in cellular phones. 5G wireless technology has many hopes and promises associated with it, including delivering up to 10 Gbps of throughput per user with much denser networks and lower latency. The 5th Generation technologies offer various new advanced features which make them powerful and in huge demand for the future.

2. Analysis 2.1 Evolution of networks:

1G emerged in the 1970s. 1G uses analog cellular technology, with FDMA (Frequency Division Multiple Access) multiplexing and a data bandwidth of 2 kbps. It is a circuit-switching network; the core network is the PSTN. 2G emerged in the 1990s. 2G uses digital cellular technology, with TDMA (Time Division Multiple Access) multiplexing and an average bandwidth of 64 kbps. 2G is also a circuit-switching network like 1G; the core network is the PSTN. 3G emerged in 2004. 3G uses broad-bandwidth CDMA/IP technology, with CDMA (Code Division Multiple Access) multiplexing at a speed of 2 Mbps. 3G is a packet- as well as circuit-switching network; the core network for 3G is a packet network. 4G emerged in 2015. 4G uses unified IP and a seamless combination of LAN/WAN/WLAN/PAN, with CDMA technology. 4G is a packet-switching network; the core network for 4G is the Internet. 5G is expected to be launched in the year 2020. It will use the 4G + WWWW multiplex system for higher speed in comparison to the other network systems, with a speed of more than 10 Gbps. Using CDMA technology it will be a totally packet-switching network; the core network for 5G, as decided, will be the Internet.


2.2 Architecture of 5G:

The 5G network architecture is all-IP based: mobile applications use packet switching, and services are offered via Cloud Computing Resources (CCR). Cloud computing is a model for convenient on-demand network access to configurable computing resources; it allows consumers to use applications without installation and to access their personal data from any computer with Internet access. The fifth generation wireless mobile multimedia Internet network can be completely wireless communication without limitation. 5G makes a perfect wireless real world – the World Wide Wireless Web (WWWW).

3. Results and Discussion

5G offers very high speed, high capacity, and low cost per bit. Due to high error tolerance it offers high quality of service. The technology offers a carrier-class gateway with unparalleled consistency, high resolution for demanding cell-phone users, and bi-directional large-bandwidth shaping. The uploading and downloading speeds of 5G technology are very high. It provides large broadcasting capacity, up to gigabit rates, supporting almost 65,000 connections at a time.

4. Conclusion The key to rolling out 5G will be ensuring it can accommodate mobile broadband growth. But in order to make 5G a success, it must also deliver on the vision of efficiently enabling the wireless interconnection of machines to the cloud. In addition, 5G needs to balance this with the need for greater efficiency, which is best when capability is centralized. It is expected that the initial Internet philosophy of keeping the network as simple as possible, and giving more functionalities to the end nodes, will become reality in the future generation of mobile networks, here referred to as 5G.

Acknowledgements The authors are pleased to acknowledge the BS&HU Department of Narula Institute of Technology for all its support in writing this paper. The authors also acknowledge the support and encouragement of other contributors, helpers and mentors in preparing such a paper, and would like to express their special gratitude to the teachers who gave them the golden opportunity to prepare it.

References 1. IEEE, "IEEE 802.11ac-2013, Wireless LAN".

2. 5G Mobile and Wireless Communications Technology Cambridge University Press.

3. 2G-5G Networks: Evolution of Technologies, Standards, and Deployment.

4. IEEE Transactions on Wireless Communication.

5. IEEE Communications Surveys & Tutorials.

6. Toni Janevski, 5G Mobile Phone Concept.

7. IEEE Communications Magazine.


Spectroscopy Aindrila Mukhopadhyay

Narula Institute of Technology ___________________________________________________________________________________________________

Abstract Many physical techniques are used in the study of solid polymers, and some, such as NMR spectroscopy, can give information about a wide variety of features of the structure or properties. Nuclear magnetic resonance (NMR) is a spectroscopic technique that detects the energy absorbed by changes in the nuclear spin state. The application of NMR spectroscopy to the study of proteins and nucleic acids has provided unique information on the dynamics and chemical kinetics of these systems. One important feature of NMR is that it provides information, at the atomic level, on the dynamics of proteins and nucleic acids over an exceptionally wide range of time scales, ranging from seconds to picoseconds. In addition, NMR can also provide atomic-level structural information on proteins and nucleic acids in solution.

Keywords: NMR Spectroscopy

___________________________________________________________________________________________________

1. Introduction Spectroscopy involves gaining information from the absorption, emission, or reflection of light from a sample. There are many examples of spectroscopic applications in our experience; three familiar real-life examples are: (1) X-rays: dense bone absorbs X-ray radiation. (2) Grocery-store scanners: a monochromatic laser is either absorbed (black bar) or reflected (white bar). The simple black-or-white lines with their yes-or-no absorption-or-reflection response essentially produce a binary code, from which products and prices can be determined. (3) Stop lights: a lens is adjusted at timed intervals to enable emission of green, red, or yellow light.

2. Analysis The fundamental principles of chemical spectroscopy are illustrated below. Spectroscopy involves having quantized energy

levels. You are familiar with the concept of quantized energy levels for electrons (1s, 2s, 2p, 3s, 3d etc.) and electron spins

(spin up or spin down), but other things are also quantized (vibrational energies, rotational energies, …). Given that there is

an exact energy gap between two quantized energy states, a photon of precise energy must be absorbed in order to excite a

molecule from the ground state. When an excited state relaxes back to the ground state, that same photon is released. By

measuring the exact frequencies of photons that are either absorbed or emitted, we can measure ∆E. The quantity of

photons can tell us about how much material is absorbing or emitting. The chemist must then be able to interpret what the

frequencies of the photons mean in terms of chemical structure. NMR spectroscopy is an analytical chemistry technique used in quality control and research for determining the content and purity of a sample as well as its molecular structure. Subatomic particles (electrons, protons and neutrons) can be imagined as spinning on their axes. In many atoms (such as 12C) these spins are paired against each other, such that the nucleus of the atom has no overall spin. However, in some atoms (such as 1H and 13C) the nucleus does possess an overall spin. The rules for determining the net spin of a

nucleus are as follows: If the number of neutrons and the number of protons are both even, then the nucleus has NO spin. If

the number of neutrons plus the number of protons is odd, then the nucleus has a half-integer spin (i.e. 1/2, 3/2, 5/2). If the

number of neutrons and the number of protons are both odd, then the nucleus has an integer spin (i.e. 1, 2, 3). The overall

spin, I, is important. Quantum mechanics tells us that a nucleus of spin I will have 2I + 1 possible orientations. A nucleus

with spin 1/2 will have 2 possible orientations. In the absence of an external magnetic field, these orientations are of equal

energy. If a magnetic field is applied, then the energy levels split. Each level is given a magnetic quantum number, m.

3. Discussion

Calculating transition energy

The nucleus has a positive charge and is spinning. This generates a small magnetic field. The nucleus therefore possesses a magnetic moment, μ, which is proportional to its spin, I:

μ = γ I h / 2π

The constant, γ, is called the magnetogyric ratio and is a fundamental nuclear constant which has a different value for every nucleus; h is Planck's constant.

The energy of a particular energy level is given by

E = − (γ h / 2π) m B

where B is the strength of the magnetic field at the nucleus and m is the magnetic quantum number of the level.

The difference in energy between adjacent levels (the transition energy) can be found from

∆E = γ h B / 2π

This means that if the magnetic field, B, is increased, so is ∆E. It also means that if a nucleus has a relatively large magnetogyric ratio, then ∆E is correspondingly large.


The absorption of radiation by a nucleus in a magnetic field

Imagine a nucleus of spin 1/2 lying in the lower energy level in a magnetic field (i.e. its magnetic moment does not oppose the applied field). The nucleus is spinning on its axis. According to the classical view of the behaviour of the nucleus, this axis of rotation will precess around the direction of the magnetic field:

The frequency of precession is termed the Larmor frequency, which is identical to the transition frequency.
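As a quick worked check of the relation between field and transition frequency, ν = γB/2π, the sketch below computes the proton Larmor frequency. γ(1H) is a standard tabulated value; the 9.4 T field is chosen here only because it corresponds to a common "400 MHz" spectrometer.

```python
import math

gamma_1H = 2.675e8   # magnetogyric ratio of 1H, rad s^-1 T^-1 (tabulated)
B = 9.4              # applied magnetic field, tesla (illustrative choice)

nu = gamma_1H * B / (2 * math.pi)   # Larmor (= transition) frequency, Hz
print(f"{nu / 1e6:.0f} MHz")        # → 400 MHz
```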

The potential energy of the precessing nucleus is given by:

E = − μ B cos θ

where θ is the angle between the direction of the applied field and the axis of nuclear rotation.

If energy is absorbed by the nucleus, then the angle of precession, θ, will change. For a nucleus of spin 1/2, absorption of radiation "flips" the magnetic moment so that it opposes the applied field (the higher energy state).

It is important to realize that only a small proportion of "target" nuclei are in the lower energy state (and can absorb

radiation). There is the possibility that by exciting these nuclei, the populations of the higher and lower energy levels will

become equal. If this occurs, then there will be no further absorption of radiation. The spin system is saturated. The

possibility of saturation means that we must be aware of the relaxation processes which return nuclei to the lower energy

state.
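How small that population difference is can be checked with the Boltzmann distribution: N_upper/N_lower = exp(−∆E/kT), with ∆E = hν. The frequency and temperature below are illustrative choices; the constants are standard values.

```python
import math

h  = 6.626e-34    # Planck constant, J s
kB = 1.381e-23    # Boltzmann constant, J K^-1
nu = 400e6        # 1H transition frequency, Hz (illustrative)
T  = 298.0        # temperature, K

ratio = math.exp(-h * nu / (kB * T))           # N_upper / N_lower
print(f"N_upper/N_lower = {ratio:.6f}")        # barely below 1
print(f"population excess = {1 - ratio:.1e}")  # only a few parts in 10^5
```

This tiny excess in the lower level is why NMR is an inherently insensitive technique and why saturation is easy to reach.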

4. Conclusion

Apart from normal uses, NMR spectroscopy has several other new applications. Protein-ligand interactions are at the heart

of drug discovery research. NMR spectroscopy is an excellent technology to identify and validate protein-ligand

interactions. "Pure shift" NMR spectroscopy (also known as broadband homonuclear decoupling) has been developed for

disentangling overlapped proton NMR spectra.

References [1] Wiley Online Library

[2] Understanding NMR Spectroscopy – James Keeler

[3] An Introduction to Polymer Physics – David I. Bower


Fingerprint Sensor on Touchscreens Sourav Sadhukhan,Debarati Mitra,Saatwik Bhowmick and Arindam Majhi

Department of Electronics and Communication Engineering

Narula Institute of Technology, 81Nilgunj Road, Agarpara -700109, Kolkata, West Bengal, India

___________________________________________________________________________________________________ Abstract At present, most fingerprint sensors have to sit above the glass, necessitating a cut-out in the face of the phone or a dedicated button that houses the sensor. That is the case on market-leading handsets. At present a dedicated part of every mobile is set aside for this sensor; if the sensor could instead be used through the screen, that would be more beneficial. An affiliate of a renowned company announced that it had developed a new fingerprint sensor, named the Ultrasonic Fingerprint Sensor, which is very thin and can work under any thin material, allowing device designers to incorporate fingerprint readers without dedicated buttons, pads, or other exposed elements. It was also stated that fingerprint recognition rates on the under-glass module are comparable to button-type sensors and should reduce smartphone malfunctions, with the added bonus of allowing phones to be made waterproof or scratch-resistant, or simply designed to look sleeker. We intend to bring forth a phone that uses exactly this type of embedded fingerprint reader, with an ultrasonic fingerprint reader built into the glass. Ultrasonic imaging of fingerprints not only allows the reader to sit beneath the glass, but is even more accurate than the capacitive sensor used in Touch ID.

Keywords: Fingerprint sensors; Touch screen; Smartphone; Ultrasonic fingerprint reader.

__________________________________________________________________________________

1. Introduction Smartphones started sporting fingerprint sensors years ago, but the technology was still too early to make a big impact on

the experience. After Apple introduced Touch ID on the iPhone, Android OEMs came back to fingerprint reader tech with

renewed interest. Thanks to improved hardware, it has become a feature people actually want on Android. Recently we

have seen the rise of fingerprint scanners. All started from flagships and now, even on budget smartphones, you can find

them. Fingerprints and mass are similar in the sense that 'neither can be created nor destroyed'. This feature of fingerprints makes them unique and highly secure. The use of fingerprints as a technology is no less than a boon in a world where almost anything can be hacked. It is even more exciting to see almost every smartphone company dive into the foray of using fingerprint scanners for security purposes. The introduction of these highly refined fingerprint scanners gives us the much-craved privacy we need in this jeopardized world. With this technology, all kinds of hacking will decline rapidly. But currently the smartphones available in the market have the fingerprint sensor at the back or at the home button of the phone, where a dedicated part is given to the sensor. Our concept is to develop a touch screen which will also work as a fingerprint sensor, so that we do not need to provide a dedicated part of the phone for the sensor. This will allow the smartphone to look sleeker, and it can be made fully waterproof and dustproof.

2. Project Discussion We mainly use capacitive fingerprint sensors and capacitive touch screens in our smartphones, but there is also a more advanced sensor: the ultrasonic fingerprint sensor. To actually capture the details of a fingerprint, the hardware consists of both an ultrasonic transmitter and a receiver. An ultrasonic pulse is transmitted against the finger placed over the scanner. Some of this pulse is absorbed and some of it is bounced back to the sensor, depending on the ridges, valleys, pores and other details that are unique to each fingerprint. In this way the sensor creates a 3-D map of the fingerprint. This ultrasonic sensor can work under any thin material, so if it is placed beneath the touchscreen, we can scan our fingerprint from anywhere on the screen using suitable applications.

Fig 1. A Sample of Fingerprint

Fig 2. Working of Ultrasonic Fingerprint Sensor
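The absorb-versus-reflect principle can be reduced to a toy classifier: a ridge in contact with the sensor surface absorbs more of the pulse, while a valley (air gap) reflects more of it back. The reflection fractions and the 0.5 cutoff below are invented for illustration; a real sensor images a dense 2-D grid of such points.

```python
def classify_point(reflected_fraction, cutoff=0.5):
    """Label one scan point from the fraction of pulse energy bounced back."""
    return "valley" if reflected_fraction > cutoff else "ridge"

# A 1-D scan line across a fingerprint (illustrative reflection fractions).
scan = [0.9, 0.8, 0.3, 0.2, 0.85, 0.25]
print("".join(classify_point(r)[0].upper() for r in scan))  # → VVRRVR
```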


Fig 3. Scanned Fingerprint through Touchscreen

Fig 4. Fingerprint Sensor Working on Touchscreen

3. Conclusion We can conclude that using an ultrasonic fingerprint scanner under the touchscreen of a smartphone will bring forth a better way of using our mobile phones. The design will be better and the user interface will be smoother.

Acknowledgements We would like to express our special thanks of gratitude to our teacher Dr. Susmita Karan as well as our principal Dr. M. R. Kanjilal, who gave us the golden opportunity to do this wonderful project on the topic Fingerprint Sensor on Touchscreen, which also helped us in doing a lot of research through which we came to know about so many new things; we are really thankful to them. Secondly, we would also like to thank our parents and friends, who helped us a lot in finalizing this project within the limited time frame.

References

[1] http://www.pcworld.com/article/3149960/components/synaptics-has-a-new-fingerprintsensor-that-will-mean-smoother-

phone-screens.htm

[2] http://www.techradar.com/news/phone-and-communications/lg-fixes-a-fingerprint-readerunder-your-smartphone-s-

screen-1320178

[3] https://simple.m.wikipedia.org/wiki/Touchscreen


Blue Energy: Wave Energy Converter Using MATLAB Nilkantha Nag, Sreeja Chakraborty , Molay Kumar Laha, Sudhangshu Sarkar and Soumya Das

Department of Electrical Engineering, Narula Institute of Technology,Agarpara,Kolkata-700109,India

Abstract The aim of this paper is to evaluate the feasibility of wave energy to produce power for commercial purposes. Such renewable energies are always recognised as vital inputs for sustainability, and hence encouraging their growth is important. The subject is the generation of power from ocean waves. Wave energy is discussed here because of the distinctive advantages of wave power: large energy fluxes are available, and the predictability of wave conditions over periods of days has increased. The possible power take-off systems are identified in this paper, followed by a consideration of some of the control strategies to enhance the efficiency of point absorber-type WECs.

Keywords: MATLAB/SIMULINK; Wave Energy Converter; Power Take Off.

1. Introduction Wave power refers to the power generated from the energy of ocean surface waves and the trapping of that energy to do useful work. Sea waves are a very promising renewable energy source among other renewable power sources, since they carry an enormous amount of energy in almost all geographical regions. There are several reviews of wave energy converter (WEC) concepts (for example, see references [1], [2], [3], and [4]). These show that many wave energy devices have been investigated, but many are at the R&D stage, with only a small range of devices having been tested at large scale and deployed in the oceans. The LIMPET shoreline oscillating water column (OWC), installed at Islay, Scotland, in 2000, is one system that is currently producing power for the National Grid [5].

2. Analysis The device that converts ocean wave energy to electricity is called a Wave Energy Converter (WEC). It comes in various shapes and sizes. The one modelled in this work is a so-called point absorber: it has a buoy that moves with the wave motion. The buoy is connected to an energy converter called the Power Take Off (PTO), so that the movement of the buoy is transmitted to the PTO. Inside the PTO, conversion of mechanical energy to electrical energy takes place using a generator, and the electricity is eventually transmitted to the grid [6].

A mathematical model of a product, in this case a WEC, assists during the development process. With the mathematical model, the behaviour of the product can be tested before a physical product exists. The objective of this work is to develop a simulation tool for a WEC [6].

Here, Simulink has been used to model and simulate the WEC [9]. Simulink is preferred over regular MATLAB code since it gives an overview of the model, which in turn makes it easy for other users to understand the model and develop it further. To build a GUI (graphical user interface), the MATLAB user interface tool GUIDE [7] has been used, which creates a link with the Simulink model. For solving the numerical problems, ode45 has been used, since it is both robust and relatively fast; it is provided by MATLAB and is based on the Dormand-Prince Runge-Kutta (4,5) formula [8].

Fig 1. WEC model using Simulink blocks
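To make the modelling idea concrete, here is a minimal stand-alone sketch in Python of the same structure: the buoy plus PTO reduced to a driven mass-spring-damper, m z'' = -k z - c z' + F(t), integrated with a fixed-step classical Runge-Kutta step in place of ode45. Every coefficient and the sinusoidal wave force are invented for illustration and are not values from the paper or from WAMIT.

```python
import math

m, k, c = 1000.0, 5000.0, 800.0          # buoy mass, hydrostatic stiffness, PTO damping
F = lambda t: 2000.0 * math.sin(1.0 * t) # idealised wave excitation force, N

def deriv(t, s):
    """State s = (heave position z, velocity v); returns (z', v')."""
    z, v = s
    return (v, (F(t) - k * z - c * v) / m)

def rk4_step(s, t, dt):
    """One classical Runge-Kutta 4 step (fixed-step stand-in for ode45)."""
    def nudge(base, slope, h):
        return tuple(x + h * y for x, y in zip(base, slope))
    k1 = deriv(t, s)
    k2 = deriv(t + dt / 2, nudge(s, k1, dt / 2))
    k3 = deriv(t + dt / 2, nudge(s, k2, dt / 2))
    k4 = deriv(t + dt, nudge(s, k3, dt))
    return tuple(x + dt / 6 * (a + 2 * b + 2 * g + d)
                 for x, a, b, g, d in zip(s, k1, k2, k3, k4))

state, t, dt, energy = (0.0, 0.0), 0.0, 0.01, 0.0
while t < 60.0:                          # simulate 60 s from rest
    state = rk4_step(state, t, dt)
    energy += c * state[1] ** 2 * dt     # PTO absorbs power c*v^2
    t += dt
print(f"mean PTO power over 60 s: {energy / 60.0:.1f} W")
```

The PTO damping force c z' dissipates power c z'^2, which is the quantity a phase-control strategy tries to maximize.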

3. Results and Discussion Initially, the buoy motion modelled without the PTO is shown. The PTO is then added to the model to see the motion of the complete WEC. This is followed by checks on the model approximations and numerical accuracy. Lastly, the effects of Phase Control on the power output are shown.


4. Conclusion The WEC model uses linearized hydrodynamic forces. WAMIT linearizes these approximations around the buoy equilibrium point; thus the simulation results are most reliable for small buoy amplitudes. The WEC model was successfully implemented in Simulink. The model calculates the buoy and the PTO positions along with the velocities and power. The Phase Control algorithm successfully optimizes the buoy motion, thereby increasing the power output.

5. Future Scope The Phase Control algorithm could be improved with wave prediction. This can be done by collecting wave data before the wave reaches the buoy and then estimating the wave elevation at the buoy position using a Kalman filter.

Acknowledgements We would like to extend our sincere thanks to TEQIP for its immense support. We are indebted to the Head of the Department, Electrical Engineering, and other faculty members for giving us an opportunity to learn and present this software-based paper. If not for the above-mentioned people, our paper would never have been completed successfully.

References

[1] Salter, S. H. Wave power. Nature, 1974, 249(5459),720–724.

[2] Thorpe, T. W. A brief review of wave energy, Technical report no. R120, EnergyTechnology Support Unit (ETSU).A

report produced for the UK Department of Trade and Industry, 1999.

[3] Clément, A., McCullen, P., Falcão, A., Fiorentino, A., Gardner, F., Hammarlund, K., Lemonis, G., Lewis, T., Nielsen, K., Petroncini, S., Pontes, M.-T., Schild, B.-O., Sjöström, P., Sørensen, H. C., and Thorpe, T. Wave energy in Europe: current status and perspectives. Renew. Sust. Energy Rev., 2002, 6(5), 405–431.

[4] Previsic, M. Offshore wave energy conversion devices,Technical report E21 EPRI WP-004-US-Rev 1, Electrical Power

Research Institute, 2004.

[5] Wavegen. Available from http://www.wavegen.co.uk/what_we_offer_limpet.htm (access date 24 June 2008).

[6] J. Falnes. A review of wave-energy extraction. Marine Structures, Volume 20, 2007.

[7] MATLAB. version 8.1 (R2013a). The MathWorks Inc, Natick, Massachusetts,2013.

[8] Lawrence F. Shampine and Mark W. Reichelt. The MATLAB ODE suite.SIAM Journal on Scientific Computing,

Volume 18, 1997.

[9] Simulink. version 8.1 (R2013a). The MathWorks Inc., Natick, Massachusetts,2013.


Car Density Detection Technique Using Webcam and MATLAB Dibyendu Sur, Vijitaa Das, Rituparna Dey and Swati Nayna

Department of Electronics and Instrumentation Engineering, Narula Institute of Technology, Agarpara, Kolkata, India

Abstract The day-to-day increase of traffic in modern cities is one of the major concerns of today's world. The huge density of vehicles causes prolonged traffic jams and even road accidents. An approach has been designed and reported in this paper to monitor real-time traffic conditions and suitably alert people. Video monitoring helps to limit excessive traffic flow as well as deter unlawful driving. A system based on video monitoring has been implemented to reduce road accidents. The aim of this paper is to introduce a video-based determination of road density due to four-wheelers with the help of Gaussian Mixture Models (GMMs). The increase of traffic density beyond a suitable threshold value is captured by a CCTV camera, and the image is processed digitally in MATLAB. A suitable audio-visual as well as text alert has been designed. A method has also been implemented to alert the traffic control room or any registered passenger about the increase of vehicles through phone calls or messages using the GSM technique.

Keywords: Webcam; Car; MATLAB; GSM; Arduino Uno

1. Introduction The increasing number of road accidents in modern cities is a great matter of concern. Uncontrolled traffic systems and the day-by-day increase of vehicles make roads congested and dangerous for common people. Continuous monitoring by on-the-spot traffic police becomes necessary, yet monitoring every corner and every lane while taking instant decisions is very hectic and demanding for one person. An automated monitoring and traffic density indication system is therefore highly desirable. In this paper, the development of a system capable of monitoring car density and generating an audio-visual alarm is reported. Video footage is continuously captured and processed by MATLAB, and the decision can be sent to any mobile phone using a GSM signal.

2. Block Diagram and Explanation The block diagram has been shown in Fig. 1.

Fig. 1. Block diagram of the circuit.

The webcam takes images at a predetermined number of frames per second and constructs a video of a certain duration. This video is then fed to MATLAB. The processing of this video is done by the Computer Vision Toolbox present in the latest versions of MATLAB. A suitable program is written with different modules, each performing a different job. The primary job is to count the number of four-wheelers (mainly cars, taxis etc.) in a single frame and display the number on a real-time basis. A threshold number of cars per single frame is given to the program. The second job of the program is to compare the present number of cars with the threshold value in real time. If the real-time number of cars exceeds the threshold value, a monotonic alarm and a text message are generated by MATLAB. The pitch and amplitude can easily be changed by programming, and the text to be displayed can also be modified. An Arduino Uno interface followed by GSM SIM900A interfacing is incorporated to send phone calls or text messages to any part of the world.

3. MATLAB Programming and Computer Vision Toolbox
The MATLAB program can be divided into three modules.

(1) The program detects the number of cars per single frame and displays it.

(2) The number of cars is compared with the threshold number.

(3) An alarm is generated with a certain frequency and a text message is displayed by the program.
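The three modules above can be sketched as a minimal control flow. This is a hypothetical Python analogue of the MATLAB program, not the authors' code: the car detector itself is stubbed out and all names are illustrative.

```python
def count_cars(frame):
    """Module 1 stub: return the number of four-wheelers in a frame.
    In the paper this is done by the MATLAB Computer Vision Toolbox;
    here a frame is just a dict carrying a precomputed count."""
    return frame["cars"]

def check_threshold(n_cars, threshold):
    """Module 2: compare the real-time count with the threshold."""
    return n_cars > threshold

def make_alert(n_cars, threshold):
    """Module 3: build the alarm/text decision for one frame."""
    if check_threshold(n_cars, threshold):
        return f"ALERT: {n_cars} cars exceed threshold {threshold}"
    return None

# One pass over a short synthetic "video" (a list of frames):
video = [{"cars": 3}, {"cars": 5}, {"cars": 9}]
alerts = [make_alert(count_cars(f), threshold=6) for f in video]
```

Only the frame that exceeds the threshold produces an alert string; in the real system that string would trigger the audio alarm and the GSM call.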

The Computer Vision Toolbox processes any colour or grayscale image and extracts patterns and features using already-written, built-in code. Using different algorithms for image pattern detection, it can compare images and draw conclusions. The success rate of this toolbox may vary from specific conditions to generalized systems. In 3-D computer vision, extended features such as webcam calibration and stereo vision are also available. Any feature or pattern can be learned by machine learning or neural network techniques. This is extensively used to detect human faces, facial parts, cars, text etc.

NCMTP-2017 77

Ideal International E-Publication

www.isca.co.in
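The GMM-based foreground extraction used here for vehicle counting can be illustrated with a per-pixel running-Gaussian background model (a simplified single-Gaussian stand-in for the full mixture model, not the toolbox's actual implementation; all names are illustrative):

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel background model: mean and variance updated online.
    One Gaussian per pixel -- a simplified stand-in for the Gaussian
    Mixture Model (GMM) mentioned in the paper."""
    def __init__(self, shape, alpha=0.05, k=2.5):
        self.mean = np.zeros(shape, dtype=float)
        self.var = np.full(shape, 15.0 ** 2)  # assumed initial variance
        self.alpha = alpha                    # learning rate
        self.k = k                            # Mahalanobis threshold
        self._initialised = False

    def apply(self, frame):
        """Return a boolean foreground mask and update the model."""
        frame = frame.astype(float)
        if not self._initialised:
            self.mean[:] = frame
            self._initialised = True
        d = frame - self.mean
        foreground = d ** 2 > (self.k ** 2) * self.var
        # update statistics only where the pixel looks like background
        bg = ~foreground
        self.mean[bg] += self.alpha * d[bg]
        self.var[bg] += self.alpha * (d[bg] ** 2 - self.var[bg])
        return foreground

# Synthetic demo: learn a flat background, then present a bright "car" blob.
model = RunningGaussianBackground((20, 20))
background = np.full((20, 20), 100.0)
for _ in range(10):
    model.apply(background)
frame = background.copy()
frame[5:10, 5:10] = 200.0          # a bright object enters the scene
mask = model.apply(frame)          # True where the pixel is foreground
```

Connected regions of the foreground mask would then be counted as candidate vehicles.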

4. Arduino Uno and SIM900A GSM Module
Arduino Uno is an embedded system based on the ATmega328P microcontroller. It is a microcomputer system, compatible with a PC for programming. It comprises fourteen digital I/O pins and six analog inputs; six of the digital I/O pins can be used as PWM channels. It also includes a 16 MHz quartz crystal, a USB connection for PC programming, a power supply connection, a reset switch and an ICSP header. Programs written on the PC are executed in the microcontroller of the Arduino Uno to perform the desired task. SIM900A is a complete dual-band GSM/GPRS transmitter/receiver. With suitable interfacing and programming, the Arduino Uno can transmit, receive or detect a phone call or prewritten message from the PC. This feature is used to place phone calls to desired, already stored phone numbers (numbers stored in the Arduino Uno program) whenever the number of cars in the real-time video exceeds the threshold number stored in the MATLAB program. A suitable MATLAB-to-Arduino Uno interface must be installed in order to run the program continuously.
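The SIM900A is driven over serial with standard GSM AT commands (ATD for voice dialing, AT+CMGF and AT+CMGS for text-mode SMS). The sketch below only builds the command strings the firmware would emit; the helper names and phone number are illustrative, not the authors' firmware.

```python
def dial_command(number):
    """Voice-dial a stored number: ATD<number>; terminated by CR."""
    return f"ATD{number};\r"

def sms_command_sequence(number, text):
    """Text-mode SMS: select text mode, address the recipient, then
    send the body terminated by Ctrl+Z (0x1A)."""
    return [
        "AT+CMGF=1\r",               # select SMS text mode
        f'AT+CMGS="{number}"\r',     # recipient phone number
        text + "\x1a",               # message body + Ctrl+Z terminator
    ]

# Example strings for an alert call and message (number is made up):
call = dial_command("+911234567890")
sms = sms_command_sequence("+911234567890", "Traffic density threshold exceeded")
```

On the actual hardware these strings would be written to the module through the Arduino's serial port.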

5. Results
A snapshot of a call being forwarded to a pre-decided phone number after successful detection of the number of cars from a video frame is shown in Fig. 2.

Fig. 2. Snapshot of GSM call being forwarded.

Acknowledgements
The authors acknowledge the Electronics and Instrumentation Department of Narula Institute of Technology for the resources provided.

References
[1] Dibyendu Sur, Mugdha Mondal, Soumitaa Chatterjee, "Person Locator from a Real Time Video", International Journal of Scientific and Engineering Research (IJSER), Vol. 7, Issue 4, pp. 21-24, 2016.

[2] Dibyendu Sur, Mugdha Mondal, Sagar Patra, Arijita Das, Susmita Das, "Development of Home Intruder Tracking System Using Face Recognition", Fifth International Conference on Recent Trends in Information Technology (ICRTIT 2016), Department of Information Technology, Madras Institute of Technology, Anna University, Chennai 600044, sponsored by IEEE Madras Section and ISRO (Indian Space Research Organisation), 2016.

[3] B. Singh Mokha and S. Kumar, "A Review of Computer Vision System for the Vehicle Identification and Classification from Online and Offline Videos", Signal & Image Processing: An International Journal (SIPIJ), Vol. 6, No. 5, pp. 63-76, 2015.

[4] F. Zhang, D. Clarke and A. Knoll, "Vehicle Detection Based on LiDAR and Camera Fusion", 2014 IEEE 17th International Conference on Intelligent Transportation Systems (ITSC), Qingdao, China, pp. 1620-1625, 2014.

[5] S. Jhumat and R. K. Purwar, "Techniques to Estimate Vehicle Speed", International Journal of Advanced Research in Computer and Communication Engineering, Vol. 3, No. 6, pp. 6875-6878, 2014.


Design of Rectangular Microstrip Patch Antenna at 2.4 GHz
Shuvam Banerjee

ECE Department, Narula Institute of Technology, Kolkata

___________________________________________________________________________________________________

Abstract
A simple microstrip patch antenna consists of a metallic patch and a ground plane between which lies a dielectric medium called the substrate. Microstrip patch antennas are used for communication purposes, especially in military and civil applications. In this paper a simple rectangular microstrip patch antenna is designed in HFSS at a resonant frequency of 2.4 GHz.

___________________________________________________________________________________________________

I. Introduction
The antenna is a vital part of any wireless communication system. An antenna is a type of transducer that converts electrical energy into electromagnetic energy in the form of electromagnetic waves, and is used for coupling between the guided medium and free space. Microstrip patch antennas are widely used in wireless communication systems because of advantages such as small size, light weight, low cost and low power consumption. A major drawback of this antenna, however, is its narrow bandwidth. Many methods are used to increase this bandwidth, such as changing the feeding technique, increasing the substrate height, changing the substrate permittivity, using a multi-layer substrate and changing the patch shape [1].

A microstrip patch antenna consists of a radiating patch on one side of a dielectric substrate which has a ground plane on the other side, as shown in Figure 1 [2]. For good antenna performance, a thick dielectric substrate having a low dielectric constant is desirable, since this provides better efficiency, larger bandwidth and better radiation. It is also commonly called a "printed antenna".

Fig 1: Rectangular micro strip patch antenna

The different parameters of a microstrip antenna are the length of the microstrip patch element (L), the width of the microstrip patch element (W), the thickness of the patch (t) and the height of the dielectric substrate (H). The antenna can be characterized by its gain, directivity, return loss, radiation patterns and polarization.

Fig 2: Different Shapes of Micro-strip Patch Elements

2. Feed Techniques
A microstrip antenna can be fed by a variety of methods. The most popular feed techniques used are the microstrip line, coaxial probe, aperture coupling and proximity coupling.


3. The Design Specifications
The three essential parameters for the design of a rectangular microstrip patch antenna are as follows:

(i) Dielectric constant (εr) - The dielectric material selected for this design is Duroid, which has a dielectric constant of 2.2 [2]. The dielectric constant of the substrate (εr) is typically in the range 2.2 ≤ εr ≤ 12.

(ii) Resonant frequency (fr) - The resonant frequency of the antenna must be selected appropriately. Mobile communication systems use the frequency range 2100-5600 MHz, so the antenna designed must be able to operate in this range. The resonant frequency selected for this design is 2.4 GHz.

(1)

(iii) Height (h) - It is essential that the antenna is not bulky; hence the height of the dielectric substrate is taken as 3.2 mm. The height range is 0.003λo ≤ h ≤ 0.05λo.

Hence the essential parameters for the design are εr = 2.2, fr = 2.4 GHz and h = 3.2 mm.

3.1 Calculation of the Width

W = (c / 2f0) √(2 / (εr + 1)) (2)

Substituting c = 3 × 10^10 cm/sec, εr = 2.2 and f0 = 2.4 GHz we get W = 4.94 cm.

3.2 Effective dielectric constant

The effective dielectric constant of a patch is an important parameter in the design procedure of a microstrip patch antenna. The radiation travelling from the patch towards the ground passes partly through air and partly through the substrate (this is called fringing). Both the air and the substrate have different dielectric values, so to account for this we find the value of the effective dielectric constant, calculated using the following equation:

εreff = (εr + 1)/2 + ((εr − 1)/2) [1 + 12h/W]^(−1/2) (3)

Substituting εr = 2.2, W = 4.94 cm and h = 0.32 cm we get:

εreff = 2.05

3.3 Length

Calculation of the length correction due to fringing: owing to fringing, the electrical size of the antenna is increased by an amount ΔL. The increase in length (ΔL) of the patch is calculated using the following equation:

ΔL = 0.412 h (εreff + 0.3)(W/h + 0.264) / [(εreff − 0.258)(W/h + 0.8)] (4)

Substituting εreff = 2.05, W = 4.94 cm and h = 0.32 cm we get:

ΔL = 0.16 cm

The length (L) of the patch is now calculated using the equation

L = c / (2 f0 √εreff) − 2ΔL (5)

Substituting ΔL = 0.16 cm, f0 = 2.4 GHz and εreff = 2.05 we get L = 4.04 cm.

3.4 Effective length of the patch

Since the length of the patch is extended by ΔL on each side, the effective length of the patch is Leff = L + 2ΔL [2]. Substituting L = 4.04 cm and ΔL = 0.16 cm we get Leff = 4.36 cm.
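The design equations (2)-(5) can be checked numerically. The sketch below is an illustrative Python script, not part of the original work; it reproduces the paper's values (W = 4.94 cm, εreff = 2.05, ΔL ≈ 0.16-0.17 cm, L ≈ 4.04 cm) to within rounding.

```python
import math

C = 3e10  # speed of light in cm/s, as used in the paper

def patch_design(f0, eps_r, h):
    """Rectangular microstrip patch dimensions (all lengths in cm)
    from the transmission-line model equations (2)-(5)."""
    W = C / (2 * f0) * math.sqrt(2 / (eps_r + 1))                # eq. (2)
    eps_eff = (eps_r + 1) / 2 + \
              (eps_r - 1) / 2 / math.sqrt(1 + 12 * h / W)        # eq. (3)
    dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / \
         ((eps_eff - 0.258) * (W / h + 0.8))                     # eq. (4)
    L = C / (2 * f0 * math.sqrt(eps_eff)) - 2 * dL               # eq. (5)
    return W, eps_eff, dL, L

# Paper's design point: f0 = 2.4 GHz, Duroid (eps_r = 2.2), h = 0.32 cm
W, eps_eff, dL, L = patch_design(2.4e9, 2.2, 0.32)
```

Running this gives W ≈ 4.94 cm and εreff ≈ 2.05, matching the hand calculation above.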


4. Micro-Strip Antenna Design using HFSS

Fig 3: Rectangular patch antenna at 2.4 GHz    Fig 4: Wave port

Fig 5: Perfect E assigned to substrate    Fig 6: Perfect E assigned to patch

5. Result

Fig 7: Determination of center frequency using S-parameter

From this graph the bandwidth is found to be 50 MHz; the return loss of the antenna is minimum at 2.4 GHz.

Fig 8: Radiation pattern

From this graph the gain is found to be 7 dB.

6. Applications
Microstrip antennas are used in integrated phased array systems, in GPS (satellite navigation) technology and in mobile satellite communications. This type of antenna is also used for Direct Broadcast Satellite (DBS) systems and remote sensing.

7. Conclusion
The design of a patch antenna using ANSOFT HFSS software has been shown here. This design of a microstrip antenna is based on the microstrip coaxial-line feed technique.


Acknowledgement I wish to express my sincere gratitude to Prof. Abhijit Ghosh, Assistant Professor, ECE Department, Narula Institute of

Technology.

References
[1] Kiran Raheel, Shahid Bashir, Nayyer Fazal, "Review of Techniques for Designing Printed Antennas for UWB Application", International Journal of Engineering Sciences & Emerging Technologies, Vol. 1, Issue 2, pp. 48-60, Feb 2012, ISSN: 2231-6604.

[2] C. A. Balanis, "Antenna Theory, Analysis and Design", John Wiley & Sons, Inc., New York, 1997.

[3] J. G. Vera-Dimas, M. Tecpoyotl-Torres, P. Vargas-Chable, J. A. Damián-Morales, J. Escobedo-Alatorre and S. Koshevaya, "Individual Patch Antenna and Antenna Patch Array for Wi-Fi Communication", Center for Research of Engineering and Applied Sciences (CIICAp), Autonomous University of Morelos State (UAEM), 62209, Av. Universidad No. 1001, Col. Chamilpa, Cuernavaca, Morelos, México.


Preparation of poly vinyl alcohol cellulose nanocomposites using ionic liquid
Susmita Karan1 & Sumanta Karan2

1Department of Basic Science and Humanities, Narula Institute of Technology, 81 Nilgunj Road, Agarpara, Kolkata, West Bengal 700109
2Department of Materials Science, Indian Institute of Technology, Kharagpur, West Bengal 721392

___________________________________________________________________________________________________

Abstract
In this work cellulose was dissolved by treating with the ionic liquid 1-butyl-3-methylimidazolium chloride, and the poly vinyl alcohol (PVA) matrix was reinforced by in-situ generation of cellulose nanoparticles through a non-solvent precipitation technique, leading to the formation of cellulose nanocomposites. The PVA-cellulose nanocomposites were characterized by X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, field emission scanning electron microscopy (FESEM) and atomic force microscopy (AFM) analysis. XRD analysis showed significant changes in the crystalline nature of PVA due to the incorporation of cellulose nanoparticles at different loading percentages. Cellulose particles with dimensions around 7-9 nm were observed in the PVA-cellulose films under the FESEM. The nano form of the cellulose particles was also confirmed by AFM analysis, and the interaction of cellulose and PVA was confirmed by the FTIR study. An attempt was made to recover the ionic liquid, which was confirmed by 1H-NMR study.

Keywords: BMIMCl, Cellulose-Starch Composites

___________________________________________________________________________________________________

1. Introduction
Cellulose, the most abundant organic polymer in the biosphere, is the main constituent of wood and is a homopolysaccharide composed of β-1-4 glucopyranose units [1, 2]. The degree of polymerization (DP) of wood cellulose is around 10,000 to 15,000 [3]. Each repeating unit of cellulose contains three hydroxyl (-OH) groups. These hydroxyl groups and their ability to form hydrogen bonds play a major role in directing the crystal packing and in governing the physical properties of cellulose. In cellulosic plant fibre, cellulose is present in an amorphous state, but it also associates into crystalline domains through both inter-molecular and intra-molecular hydrogen bonding [3, 4]. Important properties such as good mechanical properties, low density, biodegradability and availability from renewable resources have contributed to the rising interest in this material [5]. It is predicted that nanocellulose reinforcements in a polymer matrix may provide value-added materials with superior performance and extensive applications for the next generation. In the world of science, nanotechnology is generally defined as the manipulation of materials in the range of 1 to 100 nm in at least one dimension [6].

Recently, cellulose-based nanocomposites have attracted great attention due to their high-value-added applications [7-10]. However, dissolution of cellulose in water and common organic solvents is difficult due to its strong inter- and intra-molecular hydrogen bonding. The solvents which dissolve cellulose include LiCl/DMAc [11, 12], N-methylmorpholine-N-oxide monohydrate [13, 14], NaOH/urea [15, 16], NaOH/H2O [17], LiOH/urea [18, 19], NaOH/(NH2)2CS/H2O [20, 21], NaOH/urea/(NH2)2CS [22, 23] and ionic liquids [24-27]. Among these solvents, ionic liquid is well known as a solvent for cellulose due to its high fluidity, low melting temperature, low toxicity, non-flammability, high ionic conductivity and, most important of all, no measurable vapour pressure [28-30]. There have been a few reports on the synthesis of cellulose-based composites using ionic liquids [31-37]. Composites of cellulose and a polystyrene-type polymeric ionic liquid were prepared in an imidazolium-type polymerizable ionic liquid [31]. A cellulose-starch composite gel was synthesized from an ionic liquid solution of 1-butyl-3-methylimidazolium chloride (BMIMCl) at room temperature over several days [32]. Cellulose/hydroxyapatite composite tissue engineering scaffolds were fabricated by a particulate leaching technique with poly(methyl methacrylate) particles as the porogen from BMIMCl solution [33]. Single-walled carbon nanotube (SWCNT)/cellulose composites with excellent biocompatibility have been prepared by the treatment of SWCNTs with a cellulose solution in an ionic liquid such as 1-butyl-3-methylimidazolium bromide [34]. It is worth pointing out that the main problem associated with making effective nanocomposites from inorganic materials is producing a homogeneous dispersion within a cellulose matrix. Therefore, developing new synthetic methods to control dispersion is of great importance for extensive applications of cellulose-based nanocomposites. The cellulose nanofibers/particles used in this study were derived from pure cotton. In our work, we prepared PVA-cellulose nanocomposites using the ionic liquid BMIMCl. The main challenge envisaged in this study is the difficulty of isolating the ionic liquid from the solution; ultimately we recovered the ionic liquid (IL) by using ethyl alcohol, which is a non-solvent for the PVA-cellulose nanocomposites. Cellulose nanoparticles were evaluated as reinforcing materials for a water-soluble polymer, poly vinyl alcohol (PVA), which has hydrophilic properties and excellent film-forming ability [35]. The hydrophilic properties of this matrix were also likely to enhance the interaction of PVA with the cellulose fibers/particles. The structural and thermal properties and the morphology of these nanocomposites were characterized by Fourier transform infrared spectroscopy (FTIR), X-ray powder diffraction (XRD), atomic force microscopy (AFM) and field emission scanning electron microscopy (FESEM). The chemical structure of the recycled ionic liquid was also confirmed by 1H-NMR study.


Fig1. Structure of BMIMCl [40]

Fig2. Dissolution of cellulose in ionic liquid [40]

2. Experimental work
2.1 Materials

Pure cotton, used as the source of cellulose (reinforcement), was collected from the local market, and the PVA used as the matrix was a Loba Chemical product. The ionic liquid BMIMCl (1-butyl-3-methylimidazolium chloride) (Sigma-Aldrich, Product Number: 94128, CAS No.: 79917-90-1) was used as the solvent for dissolving cellulose, and ethanol (Changshu Yangyuan Chemical product) was used as a non-solvent for PVA.

2.2 Dissolution of cellulose (cotton) using BMIMCl

The dissolution of cellulose was done using the ionic liquid BMIMCl, whose structure is shown in Fig1. At first the measured amount of cotton (0.5%, 1.0%, 1.5% cellulose loading) was mixed with a measured quantity of BMIMCl and heated to 100°C in a water bath for 5 hours [45]. Viscous gel-type solutions were obtained.

The dissolution mechanism of cellulose in ionic liquids involves the oxygen and hydrogen atoms of the cellulose -OH groups in the formation of electron donor-electron acceptor complexes which interact with the ionic liquid (Fig2). In this interaction, the cellulose oxygen atoms serve as electron pair donors and the hydrogen atoms act as electron acceptors [44].

2.3 Preparation of PVA-cellulose nanocomposites using BMIMCl

In the present study poly vinyl alcohol-cellulose nanocomposites were prepared at different cellulose loadings (0.5, 1.0 and 1.5 wt.%). At first a PVA solution (5 wt.%) was prepared by dissolving 0.5 gm of PVA in 10 ml of water at 60°C using a magnetic stirrer. Cellulose (cotton) was dissolved in BMIMCl in almost the same ratio for each sample (1:98); dissolution was done by heating in a water bath for 5 hrs at 100°C. The prepared cellulose solution was poured dropwise into the PVA solution with a syringe under continuous stirring. The cellulose precipitated in the PVA solution, as water is a non-solvent for cellulose. The cellulose-dispersed solution was then homogenized for 30 mins. Ethanol was then added to the homogenized solution, whereupon PVA/cellulose precipitated (ethanol being a non-solvent for PVA). The PVA-cellulose precipitate was then separated by centrifugation and filtration. The precipitate (mainly PVA and cellulose) was mixed with 10 ml water and heated to 60°C to make a solution of poly vinyl alcohol containing dispersed cellulose particles. This PVA/cellulose solution was cast onto Petri dishes for drying at room temperature to form films. All these samples contain the same amount of PVA, and the codes of the samples are given in Table1.

Table1: Code names of film samples

SAMPLE      CODE NAME OF THE SAMPLE
PVA         Poly vinyl alcohol
nPVAHC0.5   Poly vinyl alcohol (0.5% cellulose loading, homogenized)
nPVAHC1.0   Poly vinyl alcohol (1.0% cellulose loading, homogenized)
nPVAHC1.5   Poly vinyl alcohol (1.5% cellulose loading, homogenized)

n = composites almost free from IL.

2.4 Recovery of BMIMCl
The filtrate obtained in the last experiment was a clear solution containing BMIMCl, ethanol and water. The filtrate was continuously heated at 55°C until the ethanol and water evaporated and a yellow-coloured liquid was observed which contained mainly BMIMCl. The recovered BMIMCl was slightly different in colour from the pure ionic liquid. The recovered BMIMCl was confirmed by NMR study, which is discussed later.


Fig3: X-ray diffractograms of pure PVA, nPVAHC0.5, nPVAHC1.0 and nPVAHC1.5 (intensity vs. position 2θ, 0-60°)

2.5 Characterization

The prepared samples were initially characterized by X-ray diffraction (XRD) using an X'Pert PRO diffractometer with a scanning rate of 2°/min and a scanning range of 2θ = 0-60°, using CuKα radiation (wavelength 0.154443 nm) at 40 kV and 30 mA. The surface morphology of the film samples was observed through a FESEM instrument (JEOL, JSM6700F) operating at 2.5 to 5 kV; all samples in this respect were coated with a thin layer of platinum prior to observation. Fourier transform infrared spectroscopy (FTIR) was done with a Bruker Vector 22 FTIR spectrophotometer operating at 4000-400 cm-1. The surface morphology of the prepared samples was also examined by AFM [Nanonics Imaging Limited, Israel, Model: MV1000] in tapping mode with a glass tip of diameter 20 nm.

3. Result and discussion
3.1 Crystalline nature of PVA-cellulose nanocomposites
The crystallinity of the PVA and the PVA-cellulose nanocomposites at different cellulose loadings is studied by XRD measurement (Fig3). The crystallinity (XCR) of the samples is determined by the Segal method [36] as

Xcr = ((I200 − IAM) / I200) × 100% ……[1]

where I200 is the height of the (200) peak, representing both the crystalline and amorphous regions of PVA, and IAM is the approximate intensity of the amorphous region. The changes in the % crystallinity are summarized in Table2. The crystallinity of pure PVA is higher than that of the IL-free PVA-cellulose nanocomposites, and the crystallinity decreases slightly with increasing cellulose loading, implying that dissolution of amorphous cotton decreases the crystallinity of the PVA composite. XRD diffractograms of the films with several cellulose loading percentages are compared in Fig3. The PVA film showed a single scattering peak at 2θ = 19.17°, which is very close to the reported one [37]. The presence of a secondary peak at approximately 2θ = 22.3° confirms the formation of PVA-cellulose nanocomposites, and the shifting of the main PVA diffraction peak in the nanocomposites also confirms their formation. It is also observed that with increasing cellulose loading in the PVA matrix the main PVA peak slightly loses intensity. We have also calculated the crystallite size of the PVA using the Debye-Scherrer formula [38] as

L(h,k,l) = Kλ / (β cos θ) ……[2]

where K = 0.94 and β is the full width at half maximum [39]. For the sample films nPVAHC0.5, nPVAHC1.0 and nPVAHC1.5 the crystallite size of PVA is found to be 17.4 nm, 12.5 nm and 12.5 nm respectively.
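The Segal crystallinity and Scherrer crystallite-size calculations above can be sketched numerically. This is an illustrative Python helper; the peak height and FWHM values in the demo are assumed for illustration, not taken from the measured diffractograms.

```python
import math

def segal_crystallinity(i_200, i_am):
    """Segal method, eq. [1]: Xcr = ((I200 - IAM) / I200) * 100%."""
    return (i_200 - i_am) / i_200 * 100.0

def scherrer_size(beta_deg, two_theta_deg, wavelength_nm=0.154443, k=0.94):
    """Debye-Scherrer formula, eq. [2]: L = K*lambda / (beta * cos(theta)).
    beta is the peak FWHM in degrees; theta is half the 2-theta position."""
    beta = math.radians(beta_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Demo with assumed values: peak height 1000, amorphous intensity 350,
# and an FWHM of 0.485 deg for the PVA peak at 2-theta = 19.17 deg.
xcr = segal_crystallinity(1000.0, 350.0)   # percent crystallinity
size = scherrer_size(0.485, 19.17)         # crystallite size in nm
```

With these assumed inputs the crystallinity comes out as 65% and the crystallite size close to 17.4 nm, i.e. the same order as the values reported for the films.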

Table2: Percentage of crystallinity of pure PVA and PVA-cellulose nanocomposites

Sample code name    % crystallinity (XCR)
PVA                 65
nPVAHC0.5           64
nPVAHC1.0           63
nPVAHC1.5           63

3.2 Morphology by FE-SEM

The surface morphology of the composites has been examined by FE-SEM, and the size of the cellulose particles has also been determined by FE-SEM analysis. Fig5 shows the typical surface morphology of the PVA-cellulose nanocomposites, where the incorporation of cellulose nanoparticles is clearly visible in the polymer matrix compared to the pure PVA films in Fig4. From the higher-magnification figure we have calculated the average cellulose particle size, which is about 7-9 nm.


Fig 4(A) FE-SEM image of PVA (resolution ×30,000)

Fig 4(B) FE-SEM image of PVA (resolution ×1,00,000)

Fig 5(A) FE-SEM image of nPVAHC1.0 (resolution ×30,00,000)

Fig 5(B) FE-SEM image of nPVAHC1.0 (resolution ×10,00,000)

Fig 6A. AFM images of cellulose nanocomposites for

nPVAHC0.5 (Height Contrast)

Fig 6B. AFM images of cellulose nanocomposites for

nPVAHC0.5 (Phase Contrast)

Fig 6C. AFM images of cellulose nanocomposites

for nPVAHC1.0 (Height Contrast)

Fig 6D. AFM images of cellulose nanocomposites

for nPVAHC1.0 (Phase Contrast)

3.3 AFM analysis

The surface structure of the cellulose nanocomposites has been determined by AFM analysis. The nano form of cellulose in the PVA matrix (Fig6) is also confirmed by this analysis with respect to pure PVA. The images were scanned in tapping mode in air using silicon cantilevers. The AFM images were obtained at different percentages of cellulose loading (film samples) and compared to pure PVA films. The phase and height images of the cellulose nanocomposite films show the presence of different shapes of cellulose particles on the nanometre scale for different percentages of cellulose loading. The size of the cellulose nanoparticles, determined from the AFM height images using software that measures particle height, was in a range from about 5 nm to 60 nm. These results correlate well with the FESEM measurements.

Fig7. 1H-NMR spectrum of recovered BMIMCl for nPVAHC0.5

Fig8. 1H-NMR spectrum of pure BMIMCl

4. Recycling of ionic liquid
From the mixture of IL, PVA-cellulose, water and ethanol, the PVA-cellulose precipitate was taken out and the remaining mixture of IL, water and ethanol was separated into a test tube using a research centrifuge. The IL was then separated from the ethanol and water by evaporation at about 60°C; after evaporation only the IL remained at the bottom of the test tube. The chemical structure of the recycled IL was confirmed by its 1H-NMR spectrum (Fig7). The 1H-NMR spectrum (DMSO-d6 as the solvent, δ, ppm) consists of the following: 9.376 (1H, s, NCHN), 7.807 (1H, m, CH3NCHCHN), 7.831 (1H, m, CH3NCHCHN), 4.179-4.150 (2H, t, NCH2(CH2)2CH3), 3.739 (3H, s, NCH3), 1.721-1.707 (2H, m, NCH2CH2CH2CH3), 1.196-1.184 (2H, m, N(CH2)2CH2CH3), 0.819-0.805 (3H, t, N(CH2)3CH3). These values are close to the published data for BMIMCl [39] and match fairly well those of the starting BMIMCl (Fig8): 9.509 (1H, s, NCHN), 7.880 (1H, m, CH3NCHCHN), 7.803 (1H, m, CH3NCHCHN), 4.211-4.199 (2H, t, NCH2(CH2)2CH3), 3.716 (3H, s, NCH3), 1.756-1.744 (2H, m, NCH2CH2CH2CH3), 1.234-1.222 (2H, m, N(CH2)2CH2CH3), 0.863-0.856 (3H, t, N(CH2)3CH3). This confirms the good quality of the recycled ionic liquid BMIMCl, which can be reused for further cellulose dissolution.

5. Conclusion
The main objective of this work was to develop PVA-cellulose nanocomposites using an ionic liquid as the solvent for dissolution of cellulose. PVA composite films were prepared by reinforcing nanocellulose into a PVA matrix at different cellulose loading percentages. The structures of the PVA-cellulose nanocomposites have been discussed. From the AFM and FESEM analyses we also confirmed the generation of cellulose nanoparticles in the PVA matrix. The XRD results showed that the incorporation of cellulose nanofibers changes the crystallinity of the PVA matrix. We have also recovered the ionic liquid from the PVA-cellulose composites using ethanol (a non-solvent), which was confirmed by NMR analysis. This ionic liquid can also be used for further applications.

References

1. Nishino, T.; Matsuda, I.; Hirao, K. All-Cellulose Composite. Macromolecules, 2004, 37, 7683-7687.

2. Eichhorn, S. J.; Baillie, C. A.; Zafeiropoulos, N.; Mwaikambo, L. Y.; Ansell, M. P.; Dufresne, A. Review: Current

international research into cellulosic fibres and composites J. Mater. Sci.2001, 36, 2107-2131.

3. Fengel, D.; Wegner, G. “Wood-Chemistry, Ultrastructure, Reactions”, Walter de Gruyter, Berlin, New York, 1989.

4. Klemm, D.; Heublein, B.; Fink, H. P.; Bohn, A. Cellulose: Fascinating Biopolymer and Sustainable Raw Material

Angew Chem. Int. Ed., 2005, 44, 3358-3393.

5. Zimmerman, T.; Pöhler, E.; Schwaller, P. Mechanical and Morphological Properties of Cellulose Fibril Reinforced

Nanocomposites. Adv. Eng. Mater.,2005, 12, 1156-1161.

6. Azizi-Samir, A. S.; Alloin, F.; Paillet, M.; Dufresne, A. Tangling Effect in Fibrillated Cellulose Reinforced

Nanocomposites. Macromolecules. 2004, 37, 4313-4316.

7. Sureshkumar, M.; Siswanto, D. Y.; Lee, C. K. Magnetic antimicrobial nanocomposite based on bacterial cellulose and

silver nanoparticles. J Mater Chem. 2010, 20, 6948-6955.

8. Svagan, A.J.; Jensen, P.; Dvinskikh , S.V.; Furo, I.; Berglund, L. A. Towards tailored hierarchical structures in

cellulose nanocomposite biofoams prepared by freezing/freeze-drying, J Mater Chem, 2010, 20, 6646-6654.


9. Siro, I.; Plackett, D. Microfibrillated cellulose and new nanocomposite materials: a review, Cellulose, 2010, 17, 459-494.

10. Gindl, W.; Keckes J, All-cellulose nanocomposite, Polymer. 2005, 46, 10221-10225.

11. Ishii, D.; Tatsumi, D.; Matsumoto, T. Effect of solvent exchange on the supramolecular structure, the molecular

mobility and the dissolution behavior of cellulose in LiCl/DMAc, Carbohydr Res, 2008, 343, 919-928.

12. Chrapava, S.; Touraud, D.; Rosenau, T.; Potthast, A.; Kunz, W. The investigation of the influence of water and

temperature on the LiCl/DMAc/cellulose system, Phys Chem Chem Phys.2003, 5, 1842- 1847.

13. Rosenau, T.; Hofinger, A.; Potthast, A.; Kosma, P. On the conformation of the cellulose solvent Nmethylmorpholine-

N-oxide (NMMO) in solution, Polymer, 2003, 44, 6153-6158.

14. Kulpinski, P. Cellulose nanofibers prepared by the N-methylmorpholine-N-oxide method, J Appl Polym Sci, 2005, 98, 1855-1859.

15. Zhou, J.P.; Zhang, L.N.; Cai, J. Behavior of cellulose in NaOH/urea aqueous solution characterized by light scattering and viscometry, J Polym Sci Part B: Polym Phys, 2004, 42, 347-353.

16. Cai, J.; Zhang, L. N.; Zhou, J. P.; Qi, H. S.; Chen, H.; Kondo, T.; Chen, X.M.; Chu, B. Multifilament fibers based on

dissolution of cellulose in NaOH/urea aqueous solution: structure and properties, Adv Mater, 2007, 19, 821-825.

17. Roy, C.; Budtova, T.; Navard, P. Rheological properties and gelation of aqueous cellulose-NaOH solutions,

Biomacromolecules, 2003, 4, 259-264.

18. Cai, J.; Zhang, L. N. Rapid dissolution of cellulose in LiOH/urea and NaOH/urea aqueous solutions, Macromol Biosci,

2005, 5, 539-548.

19. Cai, J.; Liu, Y. T.; Zhang, L.N. Dilute solution properties of cellulose in LiOH/urea aqueous system, J Polym Sci Part B: Polym Phys, 2006, 44, 3093-3101.

20. Weng, L.H.; Zhang, L. N.; Ruan, D.; Shi, L. H.; Xu, J. Thermal gelation of cellulose in a NaOH/thiourea aqueous

solution, Langmuir, 2004, 20, 2086-2093.

21. Ruan, D.; Zhang, L. N.; Mao, Y.; Zeng, M.; Li, X. B. Microporous membranes prepared from cellulose in NaOH/thiourea aqueous solution, J Membrane Sci, 2004, 241, 265-274.

22. Zhang, S. A.; Li, F. X.; Yu, J. Y.; Gu, L. X. Novel fibers prepared from cellulose in NaOH/thiourea/urea aqueous

solution, Fiber Polym, 2009, 10, 34-39.

23. Jin, H. J.; Zha, C. X.; Gu, L. X. Direct dissolution of cellulose in NaOH/thiourea/urea aqueous solution, Carbohydr Res,

2007, 342, 851-858.

24. Swatloski, R. P.; Spear, S. K.; Holbrey, J. D. Rogers RD, Dissolution of cellulose with ionic liquids, J AmChem Soc,

2002, 124, 4974-4975.

25. Wu, J.; Zhang, J.; Zhang, H.; He, J. S.; Ren, Q.; Guo, M. Homogeneous acetylation of cellulose in a new ionic liquid,

Biomacromolecules, 2004, 5, 266-268.

26. Zhang, H.; Wu, J.; Zhang, J.; He, J. S. 1-Allyl-3- methylimidazolium chloride room temperature ionic liquid: a new and

powerful nonderivatizing solvent for cellulose, Macromolecules, 2005, 38, 8272- 8277.

27. Zhu, S. D.; Wu, Y. X.; Chen, Q. M.; Yu, Z. N.; Wang, C. W.; Jin, S. W.; Ding, Y. G.; Wu, G. Dissolution of cellulose

with ionic liquids and its application: a minireview, Green Chem, 2006, 8, 325-327.

28. Dupont, J.; de Souza, R. F.; Suarez, P. A. Z. Ionic liquid (molten salt) phase organometallic catalysis, Chem Rev, 2002,

102, 3667-3692.

29. Binnemans, K. Ionic liquid crystals, Chem Rev, 2005, 105, 4148-4204.

30. Smiglak, M.; Metlen, A.; Rogers, R. D. The second evolution of ionic liquids: from solvents and separations to

advanced materials-energetic examples from the ionic liquid cookbook, Acc Chem Res, 2007, 40, 1182-1192.

31. Kadokawa, J.; Murakami, M.; Kaneko, Y. A facile method for preparation of composites composed of cellulose and a

polystyrene-type polymeric ionic liquid using a polymerizable ionic liquid, Compos Sci Technol, 2008, 68, 493-498.

32. Kadokawa, J.; Murakami, M.; Takegawa, A.; Kaneko, Y. Preparation of cellulose-starch composite gel and fibrous

material from a mixture of the polysaccharides in ionic liquid, Carbohydr Polym, 2009, 75, 180-183.

33. Tsioptsias, C.; Panayiotou, C. Preparation of cellulose- nanohydroxyapatite composite scaffolds from ionic liquid

solutions, Carbohydr Polym, 2008, 74, 99-105.

34. Li, L.; Meng, L. J.; Zhang, X. K.; Fu, C. L.; Lu, Q. H. The ionic liquid-associated synthesis of a cellulose/SWCNT

complex and its remarkable biocompatibility, J Mater Chem, 2009, 19, 3612-3617.

35. Qua, E. H.; Hornsby, P. R.; S Sharma, H. S.; Lyons, G.; McCall, R. D. Preparation and Characterization of Poly(vinyl

alcohol) Naocomposites made from Cellulose Nanofibers. Appl. Polym. Sci. 2009, 113, 2238-2247.

36. Cheng, Q.; Wang, S.; Rials, T. G.; Lee, S. H. Physical and mechanical properties of polyvinyl alcohol and

polypropelene composites materials reinforced with fibril aggregates isolated from regenerated cellulose fibers.

Cellulose.2007, 14, 593-602.

37. Sriupayo, J.; Supaphol, P.; Blackwell, J.; Rujiravanit, R. Preparation and characterization of α-chitin whisker-

reinforced poly(vinyl alcohol) nanocomposite films with or without heat treatmentPolymer.2005, 46, 5637-5644.

38. Das, Kunal.; Ray, Dipa.; Bandyopadhyay, N. R.; Sengupta, Suparna. Study of the Properties of Microcrystalline

Cellulose Particles from Different Renewable Resources by XRD, FTIR, Nanoindentation, TGA and SEM.J Polym

Environ.2010, 18, 355-363.

39. The owner societies2001.

NCMTP-2017 88

Ideal International E- Publication

www.isca.co.in

Structure-based Drug Design: Nucleic Acids as Anti-cancer Drug Target
Indrani Sarkar
Department of Basic Science and Humanities
Narula Institute of Technology

Abstract
Structure-based drug design depends on the three-dimensional structure of the biological 'druggable' target responsible for the development of the disease, obtained through methods such as X-ray crystallography, NMR spectroscopy or homology modeling. After the target is identified, a potential drug candidate has to be designed based on its binding affinity and selectivity for the target molecule. There are two approaches to structure-based drug design: receptor-based drug design and pharmacophore-based drug design.

Keywords: Receptor-based drug design; Pharmacophore-based drug design; G-quadruplex

1. Introduction
Receptor-based drug design follows these steps:
1. The target is identified.
2. A library of compounds is screened.
3. A compound is identified that binds to the target and triggers a specific biological action.
4. The lead molecule is optimized.
5. Properties of the lead are tested with biological assays.
6. New molecules are designed based on the complementary structure of the receptor, the active-site volume, electrostatic parameters and ligand interactions.
7. The predicted compounds are synthesized to obtain the desired properties.

Pharmacophore-based drug design:
The molecular interactions between the target and a ligand can be utilized to support the binding of the current prototype into the active site of the receptor (pharmacophore-based drug design). Ligand-based drug design is followed when no 3D structure has been reported. If a sufficient number of reported active compounds (10-40) with diverse activity values is available, one can build a 3D pharmacophore model by overlapping these compounds and finding the features common to them. This ligand-based 3D model is then used to screen virtual libraries of compounds, yielding hits that are assessed by means of a fitness value to the model. These hits can then be taken forward for synthesis. One can also apply techniques such as 2D or 3D Quantitative Structure-Activity Relationship (QSAR) modeling to build a mathematical model with predictive power for the activity (IC50) against that particular enzyme.

2. Discussion
Why are nucleic acids an important drug target? Nucleic acids play vital roles in genetic information storage, replication, transcription, and translation directing protein synthesis. Targeting nucleic acids can selectively interrupt gene expression for treating various diseases, including cancers, at the genetic level. Nucleic acids are the molecular targets of many chemotherapeutic anticancer drugs. Affinity is controlled by non-directional interactions (hydrophobic interactions), while specificity is controlled by directional interactions: van der Waals interactions, covalent bonding, hydrogen bonding and electrostatic interactions (1, 2).

Drugs acting on DNA (Table 1)
1. Intercalating agents:
These drugs contain planar aromatic or heteroaromatic ring systems. The planar systems slip between the base pairs of the nucleic acid and disrupt the shape of the helix. Preference is often shown for the minor or major groove. Intercalation prevents replication and transcription, e.g. Adriamycin.
2. Topoisomerase II poisons:
Topoisomerase II relieves strain in the DNA helix by temporarily cleaving the DNA chain and crossing an intact strand through the broken strand; drugs that poison this enzyme block the process.
3. Alkylating agents:
These contain highly electrophilic groups and form covalent bonds to nucleophilic groups in DNA (e.g. the 7-N of guanine) to prevent replication and transcription. They are useful anti-tumour agents but have toxic side effects (e.g. alkylation of proteins), e.g. Mechlorethamine, Cisplatin.

Drugs acting on RNA:
1. Drugs acting on rRNA
Antibiotics such as Chloramphenicol, Streptomycin, Rifamycins and Erythromycin act on rRNA.
2. Drugs acting on mRNA
Antisense RNA therapy.
Drugs related to nucleic acid building blocks
Antiviral agents such as Azidothymidine (AZT) act as enzyme inhibitors. AZT is phosphorylated to a triphosphate in the body. The triphosphate has two mechanisms of action: it inhibits a viral enzyme (reverse transcriptase), or it is added to the growing DNA chain and acts as a chain terminator.

Table 1: Drugs binding with DNA and their mode of binding


Drug | Activity | Mode of Binding
Netropsin | Antitumor, Antiviral | Minor groove binding
Pentamidine | Active against P. carinii | Minor groove binding
Berenil | Antitrypanosomal | Minor groove binding
Guanyl bisfuramidine | Active against P. carinii | Minor groove binding
Distamycin | Antitumor, Antiviral | Minor groove binding
SN7167 | Antitumor, Antiviral | Minor groove binding
SN6999 | Active against P. falciparum | Minor groove binding
Nogalamycin | Antitumor | Intercalation
Menogaril | Antitumor, Topoisomerase II poison | Intercalation
Mithramycin | Anticancer antibiotic | Minor groove binding

Targeting DNA secondary structures: the G-quadruplex
G-quadruplexes are commonly found in the telomere region and in oncogene promoter regions, which are highly G-rich and dynamic in nature. They are formed by the stacking of G-tetrads (3). Each G-tetrad has four guanines arranged in a square planar manner, stabilized by Hoogsteen hydrogen bonds. DNA G-quadruplexes have recently emerged as a new class of molecular targets for anticancer drugs. A growing list of proteins has been identified that interact with DNA G-quadruplex structures, strongly pointing to the existence of quadruplex DNA in vivo. Recognition of the biological significance of DNA G-quadruplexes has intensified research and development of G-quadruplex-interactive compounds, some of which are prospective anticancer agents that display relatively low cytotoxicity. A G-quadruplex-targeting drug has entered Phase II clinical trials (4, 5).

3. Conclusion
The study of the energetics of binding of different drugs with nucleic acids is an important step towards understanding the action of these drugs. Theoretical studies complementing experimental techniques can be a useful tool for developing new compounds that can be used as effective drugs.

Acknowledgements
The author is thankful to the University Grants Commission, Government of India, for financial support.

References
1. Reynolds, C. H.; Merz, K. M.; Ringe, D., eds. Drug Design: Structure- and Ligand-Based Approaches (1st ed.). Cambridge, UK: Cambridge University Press, 2010. ISBN 978-0521887236.
2. Madsen, U.; Krogsgaard-Larsen, P.; Liljefors, T. Textbook of Drug Design and Discovery. Washington, DC: Taylor & Francis, 2002. ISBN 0-415-28288-8.
3. Neidle, S.; Parkinson, G. Telomere maintenance as a target for anticancer drug discovery. Nat Rev Drug Discov 2002, 1(5), 383-393. [PubMed: 12120414]
4. Punchihewa, C.; Yang, D. Therapeutic targets and drugs II: G-quadruplex and G-quadruplex inhibitors. In: Hiyama, K., editor. Cancer Drug Discovery and Development: Telomeres and Telomerase in Cancer. Humana Press; Totowa, NJ, USA: 2009, p. 251-280.
5. Qin, Y.; Hurley, L. H. Structures, folding patterns and functions of intramolecular DNA G-quadruplexes found in eukaryotic promoter regions. Biochimie 2008, 90(8), 1149-1171. [PubMed: 18355457]


Application of Artificial Neural Networks and Genetic Algorithm in Drug Discovery
Indrani Sarkar1, Sanjay Goswami2
1Department of Basic Science and Humanities (Physics), Narula Institute of Technology, Agarpara
2Department of Computer Application, Narula Institute of Technology, Agarpara

Abstract: The human brain learns from its experience. A neuron in the brain receives inputs from many external sources, processes them and makes a decision. Artificial Neural Network (ANN) modeling is a group of computer algorithms developed for modelling and pattern recognition. ANNs behave similarly to the neurons of the brain and act as a powerful tool to simulate various non-linear systems. ANNs have been used successfully to solve many problems in drug research. The development of a new drug is still a challenging process; computational methods such as neural networks and genetic algorithms are used to develop new drugs.

Keywords: Genetic Algorithm, Artificial Neural Networks, Drug design
___________________________________________________________________________________________________

1. Introduction
The discovery of a new drug is still a difficult, time-consuming and expensive process. Computational methods used in chemoinformatics can assist and speed up the drug discovery process. This article focuses on the use of neural networks and genetic algorithms in designing new drugs. Artificial neural networks have some advantages over classical statistical methods such as regression analysis or partial least squares (PLS) analysis, because ANNs can investigate complex, nonlinear relationships. They do not require rigidly structured experimental designs and can map functions using incomplete data. ANN methodology has potential applications in the pharmaceutical sciences (1). We give an account of the application of ANNs and genetic algorithms to various cases in theoretical drug design. Sequence alignment, variable selection in quantitative structure-activity relationship (QSAR) studies, 3D QSAR, design of combinatorial libraries, and docking of drug molecules to targets such as proteins and DNA are a few applications (2).

Neural networks can be applied to four basic types of applications:

• association;

• classification (clustering);

• transformation (different representation);

• modeling.

The chemical and biological activity of a drug is closely related to its properties, such as molecular weight, molar volume, electronegativity, hydrogen-donor count, hydrogen-acceptor count and molar refractivity. Our goal is to find a model that correlates the inputs (properties) with a target (biological activity) (3).

2. Methodology
Quantitative Structure-Activity Relationship (QSAR)
QSAR correlates the topological, physicochemical and quantum properties of compounds with their biological activities. These parameters include topological parameters, molecular weight, molar volume, electronegativity, logP, hydrogen-acceptor count, hydrogen-donor count and molar refractivity. ANNs have been shown to be an effective tool for establishing this type of relationship and predicting the activities of new compounds.
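The descriptor-to-activity mapping can be sketched as a tiny feed-forward network trained by backpropagation. This is an illustrative toy only: the two "descriptors" and the linear "activity" they map to are invented, not data from any QSAR study.

```python
import math
import random

def train_qsar_ann(data, hidden=4, lr=0.1, epochs=500, seed=0):
    """Tiny 2-descriptor, one-hidden-layer regression network (toy QSAR sketch)."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(hidden)]  # [w_d1, w_d2, bias]
    w2 = [rng.uniform(-1, 1) for _ in range(hidden + 1)]                  # [..., bias]
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))

    def forward(x):
        h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
        return h, sum(w2[j] * h[j] for j in range(hidden)) + w2[-1]

    def mse():
        return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

    initial = mse()
    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x)
            err = y - t
            # hidden-layer deltas use the output weights before they are updated
            dh = [err * w2[j] * h[j] * (1 - h[j]) for j in range(hidden)]
            for j in range(hidden):
                w2[j] -= lr * err * h[j]
                for k in range(2):
                    w1[j][k] -= lr * dh[j] * x[k]
                w1[j][2] -= lr * dh[j]
            w2[-1] -= lr * err
    return initial, mse(), lambda x: forward(x)[1]

# invented toy data: two scaled "descriptors" -> "activity" = 0.3*d1 + 0.7*d2
data = [((a / 4.0, b / 4.0), 0.3 * a / 4.0 + 0.7 * b / 4.0)
        for a in range(5) for b in range(5)]
initial_mse, final_mse, predict = train_qsar_ann(data)
```

In practice one would use far more descriptors, a held-out test set and an established library, but the loop above is the whole idea: adjust the weights until the predicted activities fit the measured ones.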

Virtual Screening (VS)
Virtual screening speeds up the drug discovery process. It applies computational methods to a database of molecules in order to identify (or "screen") compounds with known chemical structures that can then be tested for high biological activity. ANN-based QSAR models are preferred for predicting potential compounds in virtual screening.
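Once a scoring model exists, the screening step itself reduces to score-and-rank. A minimal self-contained sketch, with an invented four-compound library and a weighted-sum placeholder standing in for a trained ANN model:

```python
def virtual_screen(library, score, top_k=3):
    """Rank a compound library by predicted activity and keep the best hits."""
    ranked = sorted(library.items(), key=lambda kv: score(kv[1]), reverse=True)
    return [name for name, _ in ranked[:top_k]]

# invented library: compound name -> (descriptor_1, descriptor_2)
library = {
    "cmpd_A": (0.1, 0.9),
    "cmpd_B": (0.8, 0.2),
    "cmpd_C": (0.9, 0.9),
    "cmpd_D": (0.2, 0.1),
}
# placeholder "model": weighted sum of descriptors standing in for an ANN prediction
hits = virtual_screen(library, score=lambda d: 0.3 * d[0] + 0.7 * d[1], top_k=2)
# hits == ["cmpd_C", "cmpd_A"]
```

The only drug-discovery-specific ingredient is the scoring function; swapping the placeholder for a trained QSAR model turns this into the workflow the paragraph describes.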

Quantitative Structure Toxicity Relationship (QSTR)

ANNs have application in pharmaco-toxicology (QSTR) studies. QSTR is a mathematical relationship between the

quantitative molecular descriptors of the drug and its toxicological activities. QSTR models can be applied to predict the

toxicity of compounds. Similar to QSAR, the molecular descriptors of QSTR are derived from properties of the

compounds. These descriptors are then correlated with a toxicological response of interest through ANNs modeling.

Pharmacokinetics and Pharmacodynamics
ANN technology is used to study the complex interactions between a drug substance and the physiological system (pharmacodynamics).

A variety of artificial neural network methods have been developed. Here we present some of the on-line resources for neural networks.

List of artificial neural network resources

1. The Battelle Pacific Northwest National Laboratory (Richland, WA, USA) (descriptions of commercial neural network programs and shareware packages)
http://www.emsl.pnl.gov:2080/proj/neuron/neural/systems
2. Websites containing links to neural network resources
http://www.ccs.neu.edu/groups/honors-program/freshsem/19951996/cloder/myNNlinks.html
http://www-sci.sci.kun.nl/cac/www-neural-links.html
3. The USENET newsgroup (for discussion of neural networks)


http://groups.google.com/groups?q=comp.ai.neural-nets
4. Stuttgart Neural Network Simulator
http://www-ra.informatik.uni-tuebingen.de/SNNS
5. Online material relating to the book Neural Networks in Chemistry and Drug Design
http://www2.chemie.uni-erlangen.de/publications/ANN-book/index.html

Use of neural networks in drug design

Neural networks can be employed for the following applications in drug design:
• analysis of multi-dimensional data;
• prediction of biological activity and ADME-Tox (absorption, distribution, metabolism, excretion, and toxicity) properties;
• comparison/classification of drug libraries.

A method called comparative molecular surface analysis (CoMSA) combines the mapping of the mean electrostatic potential on the molecular surface (by a Kohonen self-organizing network) with a PLS analysis to establish a QSAR model.

Typical investigations in drug design that use neural networks include:
1. Prediction of the aqueous solubility of drugs.
2. Study of HIV-1 reverse transcriptase.
3. Application of Bayesian regularized neural networks to the development of QSAR models, and the classification and modelling of chemotherapeutic agents (anti-bacterial, anti-neoplastic and anti-fungal).
4. In genomics, ANNs are used for gene prediction and for performing homology searches on proteins.

Definition of a genetic algorithm
Genetic algorithms (GAs) mimic nature's evolutionary method of adapting to a changing environment. They are stochastic optimization methods and provide a powerful means to perform directed random searches in a large problem space, such as that encountered in drug design. Each individual in a population is represented by a chromosome. After initialization of the first generation, the fitness of each individual is evaluated by an objective function. In the reproduction step, the genetic operators of parent selection, crossover and mutation are applied, thereby providing the first offspring generation. Iteration is performed until the objective function converges (4).
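The initialize/evaluate/select/crossover/mutate loop just described can be sketched in a few lines. The bit-counting fitness used here is an invented stand-in for a real objective such as a PLS cross-validation score in variable selection:

```python
import random

def genetic_algorithm(fitness, n_bits=12, pop_size=30, generations=60,
                      p_mut=0.02, seed=1):
    """Minimal GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = [best[:]]  # elitism: always carry the best individual forward
        while len(nxt) < pop_size:
            # tournament selection of two parents
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_bits)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            # bit-flip mutation with small probability per gene
            child = [b ^ 1 if rng.random() < p_mut else b for b in child]
            nxt.append(child)
        pop = nxt
        best = max(pop, key=fitness)
    return best

# invented fitness: number of 1-bits (a stand-in for, e.g., a variable-selection
# score where each bit switches one molecular descriptor on or off)
best = genetic_algorithm(fitness=sum)
```

In a QSAR variable-selection run, each chromosome bit would enable one descriptor and `fitness` would return the cross-validated quality of the model built from the enabled descriptors.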

Applications of GAs in drug design
A QSAR model can be built by variable selection using a GA, with fitness evaluated by PLS (partial least squares) cross-validation. Pharmacophore perception for receptors with an unknown 3D structure can be carried out by comparing the spatial and electronic requirements of a set of ligands that are known to bind to the receptor of interest. Such a comparison is performed by the structural alignment of these ligands (5, 6, 7).

Genetic algorithm resources
1. The Genetic Algorithms Archive
http://www.aic.nrl.navy.mil/galist
2. The ILLiGAL home page
http://gal4.ge.uiuc.edu/index.html
3. The PGAPack Parallel Genetic Algorithm Library
ftp://ftp.mcs.anl.gov/pub/pgapack
4. GAlib (a C++ library of genetic algorithm components)
http://lancet.mit.edu/ga
5. Evolving Objects (a C++ library for evolutionary computation)
http://sourceforge.net/projects/eodev
6. Newsgroup discussions related to genetic algorithm research
http://groups.google.com/groups?q=comp.ai.genetic

3. Conclusion
Speeding up drug discovery and development is of central interest to all pharmaceutical companies. It has been shown that neural networks and genetic algorithms are powerful tools with a wide range of applications in the field of drug design. Hybrid algorithms that combine GAs and neural networks appear in the literature as promising methods for strengthening the impact of computational methods in drug design. However, the application of neural networks and GAs requires some essential knowledge of these methods before they can be properly employed.

Acknowledgements

The authors are thankful to the University Grants Commission, Government of India, for financial support.

References:

1. Rumelhart, D. E. et al. (1986) Learning representations by back-propagating errors. Nature 323, 533-536.
2. Zupan, J. and Gasteiger, J. (1999) Neural Networks in Chemistry and Drug Design (2nd edn), Wiley.
3. Peterson, K. L. (2000) Artificial Neural Networks and Their Use in Chemistry. In Reviews in Computational Chemistry (Lipkowitz, K. B. and Boyd, D. B., eds) (Vol. 16), pp. 53-140, Wiley.
4. Turner, D. B. and Willett, P. (2000) Evaluation of the EVA descriptor for QSAR studies: 3. The use of a genetic algorithm to search for models with enhanced predictive properties (EVA_GA). J. Comput.-Aided Mol. Design 14, 1-21.
5. Kimura, T. et al. (1998) GA strategy for variable selection in QSAR studies: GA-based region selection for CoMFA modeling. J. Chem. Inf. Comput. Sci. 38, 276-282.

6. Hu, L., Chen, G. and Chau, R. M. (2006) A neural networks-based drug discovery approach and its application for designing aldose reductase inhibitors. J. Mol. Graph. Model. 24, 244-253.
7. Myint, K. Z., Wang, L., Tong, Q. and Xie, X. Q. (2012) Molecular fingerprint-based artificial neural networks QSAR for ligand biological activity predictions. Mol. Pharm. 9, 2912-2923.


Free Energy: A potential solution to the energy crisis
Dr. Santa Bandyopadhyay (Rajguru)

___________________________________________________________________________________________________

Abstract
The active vacuum is a source of enormous seething energy. This energy is present everywhere in the world and could run all the world's machinery without the use of oil, coal or any other common fuels. This energy is called free energy. It is a clean energy causing no pollution and is an inexpensive, unlimited source of energy. It can be extracted from the active vacuum with the help of a dipole: the broken symmetry of the dipole with the active vacuum enables the exchange of free energy. A Motionless Electromagnetic Generator (MEG) incorporating this free energy from the vacuum has already been developed in the laboratory and exhibits a Coefficient of Performance (COP) greater than 1. The aim of the present article is to generate awareness about this research area so that mass production of the MEG becomes possible, for the benefit of mankind as well as the conservation of the environment of our mother Earth.

Keywords: Zero Point Energy, Longitudinal EM energy, Active Vacuum, MEG
___________________________________________________________________________________________________

1. Introduction
In the modern era, the need to produce huge amounts of electric power has been increasing exponentially day by day. Various energy sources are used to produce electric power for the daily use of the people of the world. Fig. 1 shows world energy consumption by different sources. It is observed that coal, oil and natural gas are used the most for the production of electric power; they are being exhausted day by day, and they also produce pollution that is harmful to mankind as well as the environment. Production of electric power from nuclear sources causes radiation hazards and also produces nuclear waste as a by-product, which is difficult to dispose of. Very little power is available from renewable energy sources, which are not suitable for large-scale consumption, are vulnerable to climate changes, and are also not cost-effective. What is the solution to the energy crisis? The solution may be "free energy". Free energy is clean energy causing no pollution and is also an inexpensive, unlimited source. It has been known to the scientific community since the 18th century. In 1905, Nikola Tesla said, "Electric power is everywhere present in unlimited quantities and can drive the world's machinery without the need of coal, oil, gas or any other of the common fuels."

Free energy can be extracted from the vacuum, as the vacuum is a source of seething energy. Richard Feynman said, "There is enough energy inside the space in this empty cup to boil all the oceans of the world." In 2000, Tom Bearden said, "At any point, and at any time, one can freely and inexpensively extract enormous electromagnetic (EM) energy from the active vacuum itself." The 1957 Nobel Prize in Physics was awarded to Lee and Yang for work substantiating the extraction process for this energy.

Theory
When all matter is removed from a volume of space it is known as a vacuum, but EM radiation can exist there even at absolute zero temperature; this is called zero-point energy. This enormous EM energy comes from the continuous creation and annihilation of virtual particles in the active vacuum, and thus the vacuum is always in a dynamic equilibrium state. In the space-time domain (4D space) this EM wave is longitudinal, but it can be transduced into our 3D world as a transverse wave, and this enormous energy could run the world's machines without the use of any common fuels. To transduce this energy into the world's systems a little dipole is required, in the form of two equal and opposite charges, or the two poles of a battery or a magnet. The dipole acts as a broken symmetry with respect to the active vacuum and in turn acts as a transducer. The process can be compared to placing a paddle wheel on a flowing river: the wheel acquires energy from the flowing river, but the river is not exhausted by it. Though this surprising fact has been known since the 18th century, no rigorous, systematic research was done in the field. Recently T. Bearden and J. L. Naudin developed a Motionless Electromagnetic Generator which incorporates this energy; having no moving parts, it suffers less wear and tear, which enhances longevity, and above all no fuel is necessary to operate the MEG, which yields COP greater than 1.
To understand the theory behind the MEG, we should go back to Maxwell's original 20 quaternion equations in some 20 variables. Heaviside reduced these to two vector equations without the variables separated. Electromagnetic systems may be divided into two classes: (1) Class I systems, which are in equilibrium with their active environment, and (2) Class II systems, which are in disequilibrium with their active vacuum environment. Class I systems rigorously obey classical thermodynamics and hence can never exhibit COP > 1.0. Class II systems do not obey classical thermodynamics, but obey the thermodynamics of systems far from equilibrium with their active vacuum environment. Class II systems (the source charge and the source dipole are examples) can exhibit five very important functions: they can (a) self-order, (b) self-oscillate or self-rotate, (c) output more energy than the operator inputs (the excess is freely received from the active vacuum environment), (d) power themselves and their loads (all the energy is freely received from the active vacuum environment), and (e) exhibit negentropy. Source charges and source dipoles perform all five of the above functions. Lorentz symmetrically regauged the Heaviside equations, further restricting the theory. This arbitrarily discards all Class II systems and retains only Class I systems; the process of Lorentz regauging discards all permissible Maxwellian COP > 1.0 systems (1).

"The charges on the surface of the wire provide two types of electric field. The charges provide the field inside the wire

that drives the conduction current according to Ohm's law. Simultaneously the charges provide a field outside the wire

that creates a Poynting flux. By means of this latter field, the charges enable the wire to be a guide (in the sense of a

railroad track) for electromagnetic energy flowing in the space around the wire. Intuitively one might prefer the notion

that electromagnetic energy is transported by the current, inside the wires. It takes some effort to convince oneself (and

one's students) that this is not the case and that in fact the energy flows in the space outside the wire."

Poynting (2) assumed from the beginning only that component of the energy flow in the space around the wire that strikes the circuit (surface charges) and gets diverged into the conductors to power the circuits. Heaviside (3, 4) also discovered that diverged Poynting component, but in addition discovered the remaining huge nondiverged component that does not strike the circuit at all, but simply passes off into space and is wasted.

"It [the energy transfer flow] takes place, in the vicinity of the wire, very nearly parallel to it, with a slight slope towards

the wire... . Prof. Poynting, on the other hand, holds a different view, representing the transfer as nearly perpendicular to

a wire, i.e., with a slight departure from the vertical. This difference of a quadrant can, I think, only arise from what seems

to be a misconception on his part as to the nature of the electric field in the vicinity of a wire supporting electric

current. The lines of electric force are nearly perpendicular to the wire. The departure from perpendicularity is usually

so small that I have sometimes spoken of them as being perpendicular to it, as they practically are, before I recognized the

great physical importance of the slight departure. It causes the convergence of energy into the wire."

If in a generator Win is the input power and Wout is the output power, then

Wout = Win + WH

where WH is the Heaviside nondivergent energy flow that misses the circuit and is wasted. Since WH >> Win,

Wout >> Win, and COP >> 1.

So it is true for every generator power system.

But later, Lorentz (8) regauged the theory of Poynting and Heaviside and simplified the formula, so that the gigantic Heaviside energy flow component was discarded from the knowledge of electromagnetism.

2. Experiment
The replicated version of the MEG by J. L. Naudin is shown in Fig. 2. The MEG extracts energy from a permanent magnet, with the energy replenished from the active vacuum. An electromagnetic generator without moving parts includes a permanent magnet and a magnetic core comprising first and second magnetic paths. A first input coil and a first output coil extend around portions of the first magnetic path, while a second input coil and a second output coil extend around portions of the second magnetic path.

The input coils are alternately pulsed to provide induced current pulses in the output coils. Driving electrical current through each of the input coils reduces the level of flux from the permanent magnet within the magnetic path around which the input coil extends. In an alternative embodiment of the electromagnetic generator, the magnetic core includes annular spaced-apart plates, with posts and permanent magnets extending in an alternating fashion between the plates. An output coil extends around each of these posts. Input coils extending around portions of the plates are pulsed to cause the induction of current within the output coils.
This device transforms the magnetic force of a permanent magnet into electricity and achieves over-unity. However, it does NOT run by itself; it is run by energy extracted from the vacuum, with COP > 1. One unit produces 2.5 kW; if more such units are joined in series, they will produce more power.

3. Conclusion
The active vacuum is a source of enormous energy, as virtual particles and anti-particles are continuously created and annihilated and it is always in dynamic equilibrium. This energy can flow into the world's machinery if there is a broken symmetry, which can be produced by any dipole, i.e. two equal and opposite charges or the two poles of a battery or a magnet. A motionless electromagnetic generator has already been developed using this energy, with COP > 1. This electric generator is very useful as it causes no pollution. It does not need any electric grid. At any place, at any point in time, one can operate the MEG with no recurring cost; only the initial equipment cost is needed. NASA has already made a space vehicle powered by extracting free energy from the vacuum. So there is high hope of solving the energy crisis of the world. But the challenge now is mass production of the MEG for large-scale use. The theory of free energy should also be developed, and more research is necessary in this new field. So physicists and electrical engineers should come forward and work together to shed new light on this area.

References
1. Jackson, J. D. Classical Electrodynamics, 2nd edition, Wiley, New York, 1975, pp. 219-221.
2. Poynting, J. H. "On the transfer of energy in the electromagnetic field," Phil. Trans. Roy. Soc. Lond., Vol. 175, Part II, 1885, pp. 343-361.
3. Heaviside, Oliver. "Electromagnetic Induction and Its Propagation," The Electrician, 1885, 1886, 1887, and later. A series of 47 sections, published section by section in numerous issues of The Electrician during 1885, 1886, and 1887.
4. Heaviside, Oliver. "On the Forces, Stresses, and Fluxes of Energy in the Electromagnetic Field," Phil. Trans. Roy. Soc. London, 183A, 1893, pp. 423-480.
5. Tesla, Nikola. My Inventions, Parts I-V, published in the Electrical Experimenter monthly magazine from Feb-June 1919. Reprint edition with an introductory note by Ben Johnson, New York, Barnes & Noble, 1982.
6. Evans, M. W.; Anastasovski, P. K.; Bearden, T. E. et al. "Explanation of the motionless electromagnetic generator with O(3) electrodynamics," Foundations of Physics Letters, 14 (Feb 2001), pp. 87-94.
7. Wu, C. S.; Ambler, E.; Hayward, R. W.; Hoppes, D. D.; Hudson, R. P. "Experimental Test of Parity Conservation in Beta Decay," Physical Review, Vol. 105, 1957, p. 1413. Reports the discovery that the weak interaction violates parity (spatial reflection); this also established the broken C symmetry of a dipole.
8. Lorentz, H. A. Vorlesungen über Theoretische Physik an der Universität Leiden, Vol. V, Die Maxwellsche Theorie (1900-1902), Akademische Verlagsgesellschaft M.B.H., Leipzig, 1931, "Die Energie im elektromagnetischen Feld," pp. 179-186. Figure 25 on p. 185 shows the Lorentz concept of integrating the Poynting vector around a closed cylindrical surface surrounding a volumetric element. This is the procedure which arbitrarily selects only a small component of the energy flow associated with a circuit, specifically the small Poynting component striking the surface charges and being diverged into the circuit to power it, and then treats that tiny component as the "entire" energy flow. Thereby Lorentz arbitrarily discarded the vast nondiverged Heaviside energy transport component, which does not strike the circuit at all and is simply wasted.
9. Lindeman, Peter. The Free Energy Secrets of Cold Electricity, 2001.
10. Tewari, Paramahamsa. Beyond Matter, Printwell Publications, Aligarh, India, 1984.

11. Tiwari P. ‘Violation of conservation of charge in space power generation phenomena’. The journal of Borderland

research, vol. 55 (5), Sept. - Oct. 1989.