Breast cancer prediction system using Data mining methods
Dr. C. Nalini¹, D. Meera²
¹Professor, ²M.Tech Student, Computer Science and Engineering,
BIST, BIHER, Bharath University, Chennai – 73.
ABSTRACT
Cancer is the second most common cause of death worldwide, with an estimated 7.9 million
deaths in 2007. This number is projected to rise further and reach 12 million deaths in 2030,
which makes cancer a major public health issue. Data mining is a process used to extract
usable information from a large data set, and one of its main areas is classification, or
supervised learning. In the health care industry, data mining is predominantly used for
disease prediction, and classification can be implemented through a number of different
approaches or algorithms. We have conducted a comparison between two such algorithms with the
help of WEKA (the Waikato Environment for Knowledge Analysis), an open-source tool that
contains different types of data mining algorithms. This paper discusses the Bayesian
network (Naïve Bayes) and J48 algorithms; to compare their results, we analyse classification
accuracy, execution time and various other parameters.
KEYWORDS
Classification, Decision tree [J48], Naïve Bayes, Breast cancer
1. INTRODUCTION
Breast cancer is the second leading cause of death among women, directly after lung cancer
[1-9]. The health and medical sector is increasingly in need of data mining. When suitable
data mining methods are used, valuable information can be extracted from large databases,
which helps medical practitioners take decisions and improve health services. Several
arguments support the use of data mining in the health sector for breast cancer, such as
early detection, early avoidance, indication-based medication and rectifying hospital data
errors [10-15]. WEKA is a powerful tool, as it contains both supervised and unsupervised
learning methods: classification, clustering, association mining, feature selection, data
visualization, etc. The main reason for using WEKA is that it helps researchers implement
and compare data mining techniques very easily on real or synthetic data. It is also well
suited for developing new machine learning techniques [5]. WEKA is open-source software
issued under the GNU General Public License [16-21].
International Journal of Pure and Applied Mathematics, Volume 119, No. 12, 2018, 10901-10911, ISSN: 1314-3395 (on-line version), url: http://www.ijpam.eu, Special Issue
Breast cancer is an uncontrolled growth of breast cells; it refers to a malignant tumor that
has developed from cells in the breast. Breast cancer constitutes a major public health issue
globally, with over 1 million new cases diagnosed annually, resulting in over 400,000 annual
deaths and about 4.4 million women living with the disease. It affects one in eight women
during their lives. It is the commonest site-specific malignancy affecting women and the most
common cause of cancer mortality in women worldwide. It is also found in men, but not very
commonly. Statistics available in Nigeria are largely unreliable because of many factors that
have not allowed adequate data collection and documentation; however, Dr Chinyere Akpanika
[22-29], a gynaecologist at the University of Calabar Teaching Hospital (UCTH), said Nigeria
recorded over 100,000 new cases of cancer annually, and added that early detection of the
disease increases the chance of survival of an affected person. According to her, 85 per cent
of women who have breast cancer do not have a family history of breast cancer. The Medical
Director of the Optimal Cancer Care Foundation, Dr Femi Olaleye [2], said breast cancer
killed one in every 25 Nigerian women.
Breast cancer is a malignant or benign tumor inside the breast, wherein cells divide and grow
without control [30-34]. Scientists have tried to identify the exact causes of breast cancer;
there are a few risk factors which increase the likelihood of a woman developing the disease.
Age, genetic risk and family history are some of the factors being considered [3]. Treatments
of breast cancer are divided into two types, local and systemic. Surgery and radiation are
local treatments, whereas chemotherapy and hormone therapy are examples of systemic
therapies. For the best results, both types of treatment are used together in different
combinations according to the patient and disease intensity [35-39]. The records are cleaned
and filtered so that irrelevant data are removed from the warehouse before the mining
process occurs.
The aim of this paper is to predict breast cancer effectively by applying an attribute subset
evaluator, namely the Consistency Subset Evaluator. Using the significant attributes it
selects, classification of the dataset is performed with Naïve Bayes and J48. The remainder
of the paper is organised as follows: Section 2 surveys related work, the proposed
methodology is given in Section 3, Sections 4 and 5 analyse the experimental results, and
Section 6 concludes.
2. LITERATURE SURVEY
Dr. Rajesh et al (2012) used the SEER dataset for the diagnosis of breast cancer with the
C4.5 classification algorithm. The algorithm was used to classify patients as either
pre-cancer stage or potential breast cancer cases. During the testing phase, the C4.5
classification rules were applied to a test sample, and the algorithm showed an accuracy of
92.2%, a sensitivity of 46.66% and a specificity of 97.4%. Future enhancement of the work
will require improvement of the C4.5 algorithm to achieve a greater classification rate.
Shajahan et al (2013) worked on the application of data mining techniques to model breast
cancer data using decision trees to predict the presence of cancer. The data collected
contained 699 instances (patient records) with 10 attributes and the output class as either
benign or malignant. The input attributes included sample code number, clump thickness, cell
size and shape uniformity, cell growth and other physical examination results [6]. The
results of the supervised learning algorithms applied showed that the random tree algorithm
had the highest accuracy of 100% with an error rate of 0, while CART had the lowest accuracy
with a value of 92.99%; Naïve Bayes had an accuracy of 97.42% with an error rate of 0.0258.
Chaurasia and Pal [6] applied three popular data mining algorithms, CART, ID3 (iterative
dichotomiser 3), and decision tree (DT), to diagnosing heart diseases, and the results
presented demonstrated that CART obtained higher accuracy in less time.
Chaurasia and Pal [5] compared the performance of supervised learning classifiers, such as
Naïve Bayes, SVM with RBF kernel, RBF neural networks, decision tree (J48), and simple
classification and regression tree (CART), to find the best classifier on breast cancer
datasets. The experimental results show that the SVM-RBF kernel is more accurate than the
other classifiers; it reaches an accuracy of 96.84% on the Wisconsin Breast Cancer (original)
dataset.
Qi Fan et al [6] focus on the SEER public-use data to predict breast cancer. They use a
pre-classification method and find a possible solution to discover the information in breast
cancer data. H. S. Hota [2] built a classification model using various intelligent techniques
such as ANN (artificial neural network), unsupervised artificial neural network, statistical
techniques and decision trees. Experimental results show a testing accuracy of 97.73%, which
highlights the efficiency of the ensemble model.
Ryan Potter compared classification algorithms on a breast cancer dataset to perform the
diagnosis of the patients.
3. METHODOLOGY
3.1 Dataset
The breast cancer dataset used in this comparative analysis contains 768 instances and eight
attributes.
Menstrual and reproductive history:
Early menstruation (before age 12), late menopause (after age 55), having your first child at
an older age, or never having given birth can also increase the risk of breast cancer.
Certain genes:
Breast cancer can be caused by mutations in certain genes, which can be detected through a
genetic test; an individual with such a gene mutation can pass it on to their children.
Dense breast tissue:
This can increase the risk of having breast cancer and makes lumps harder to detect.
Population
This study was performed on TMA images from 164 female patients with an invasive ductal
carcinoma of the breast, treated at the Oncology Institute of Vilnius University and
investigated at the National Center of Pathology during the period 2007 to 2009. The study
was approved by the Lithuanian Bioethics Committee, and the patients' consent to participate
in the study was obtained.
Tissue preparation
The TMAs were constructed, stained and scanned as described previously [3]. Briefly,
one-millimetre-diameter cores were punched from tumour areas randomly selected by the
pathologist, and paraffin sections were cut at 3 μm thickness [3].
Immunohistochemistry (IHC)
IHC for Ki67 was performed with a multimer-technology based detection system, ultraView
Universal DAB (Ventana, Tucson, AZ, USA). The Ki67 antibody (clone MIB-1; DAKO, Glostrup,
DK) was applied at a 1:200 dilution for 32 minutes, followed by the Ventana BenchMark XT
automated immunostainer (Ventana) standard Cell Conditioner 1 (CC1, a proprietary buffer) at
95°C for 64 minutes [5]. Finally, the sections were developed in DAB at 37°C for eight
minutes, counterstained with Mayer's hematoxylin and mounted.
Image acquisition
Digital images were captured using the Aperio ScanScope XT Slide Scanner (Aperio
Technologies, Vista, CA, USA) under 20x objective magnification (0.5 μm resolution). One TMA
spot image per patient was used for the study.
Quantification with stereology test grid
RV were obtained by marking Ki67-positive and negative tumour cell profiles, using a
stereological method for 2D object enumeration [4,5] implemented by the Stereology module
(ADCIS, Caen, France), with a test grid of systematically sampled frames (frame size 125
pixels, spacing of frames 250 pixels) overlaid on a spot image in ImageScope (Aperio
Technologies, USA).
Visual evaluation (VE)
A global subjective impression of the Ki67 LI on the same images was given by five
pathologists independently, who provided semi-quantitative values (Ki67-VE-1, 2, 3, 4 and 5)
expressed as the percentage of Ki67-positive tumour cell profiles. Counting was not included
in the procedure.
3.2 Classification
Classification is a process used for partitioning data into different classes according to
some constraints; it assigns each item in a dataset to one of a predefined set of classes or
groups. It is a supervised learning approach with known class categories. Major kinds of
classification algorithms include C4.5, J48, the k-nearest neighbour classifier, Naïve Bayes,
SVM, Apriori, and AdaBoost. In this research work, the Naïve Bayes and decision tree (J48)
algorithms are used to predict breast cancer.
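As an illustration of the classification task described above, the sketch below implements a minimal 1-nearest-neighbour classifier in plain Python. The feature vectors and class labels are hypothetical toy values, not records from the paper's dataset (the paper itself uses WEKA's Naïve Bayes and J48 implementations).

```python
# Minimal sketch of supervised classification: assign each new sample
# to one of a predefined set of classes using its nearest labelled record.

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(sample, training_data):
    """Assign `sample` the class label of its nearest training record."""
    _, label = min(training_data, key=lambda pair: euclidean(sample, pair[0]))
    return label

# Hypothetical (features, class) training records
training_data = [
    ((1.0, 1.0), "benign"),
    ((1.2, 0.8), "benign"),
    ((6.0, 7.0), "malignant"),
    ((7.5, 6.5), "malignant"),
]

print(classify((1.1, 0.9), training_data))  # → benign
print(classify((6.8, 7.2), training_data))  # → malignant
```

The same supervised-learning pattern underlies the WEKA classifiers used in this paper: a model is built from labelled training records and then assigns new records to one of the known classes.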
3.2.1 Naive Bayes
Naive Bayes is a machine learning algorithm for classification problems. It is based on
Thomas Bayes's probability theorem. It is primarily used for text classification, which
involves high-dimensional training datasets; a few examples are spam filtering, sentiment
analysis, and classifying news articles. It is known not only for its simplicity but also for
its effectiveness.
It is fast to build models and make predictions with the Naive Bayes algorithm, which learns
the probability of an object with certain features belonging to a particular group or class.
In short, it is a probabilistic classifier. The algorithm is called "naive" because it makes
the assumption that the occurrence of a certain feature is independent of the occurrence of
other features. The "Bayes" part refers to the statistician and philosopher Thomas Bayes,
after whom Bayes' theorem, the basis of the algorithm, is named. Bayes' theorem, alternatively
known as Bayes' rule or Bayes' law, gives us a method to calculate conditional probability,
i.e. the probability of an event based on previous knowledge about related events. More
formally, Bayes' theorem is stated as the following equation:
P(A|B) = P(B|A) P(A) / P(B)
• P(A|B): probability (conditional probability) of the occurrence of event A given that event B is true.
• P(A) and P(B): probabilities of the occurrence of events A and B, respectively.
• P(B|A): probability of the occurrence of event B given that event A is true.
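As a worked example of the equation above, the following Python sketch computes P(A|B) for a hypothetical screening scenario; the prevalence, sensitivity and false-positive rate used here are illustrative assumptions, not values from the paper.

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
# Hypothetical example: A = "patient has the disease",
#                       B = "screening test is positive".

p_a = 0.01              # prior P(A): assumed disease prevalence
p_b_given_a = 0.90      # likelihood P(B|A): assumed test sensitivity
p_b_given_not_a = 0.05  # assumed false-positive rate P(B|not A)

# Total probability of a positive test, P(B)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Posterior P(A|B) via Bayes' theorem
p_a_given_b = p_b_given_a * p_a / p_b

print(round(p_a_given_b, 4))  # → 0.1538
```

Even with a sensitive test, the posterior probability stays modest because the assumed prior P(A) is small, which is exactly the kind of update Bayes' theorem formalises.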
3.2.2 Decision Trees (J48)
The J48 decision tree classifier is a simple decision tree learning algorithm based on ID3,
which accepts only categorical data for building a model. The basic idea of ID3 is to
construct a decision tree by employing a top-down greedy search through the given set of
training data, testing an attribute at every node. It uses a statistical property known as
information gain to select which attribute to test at each node in the tree. Information gain
measures how well a given attribute separates the training samples according to their
classification.
J48 itself is suitable for handling both categorical and continuous data. A threshold value
is fixed such that all the values above the threshold are not taken into consideration. The
initial step is to calculate the information gain of each attribute; the attribute with the
maximum gain is preferred as the root node of the decision tree.
Given a set S of breast cancer cases, J48 first grows an initial tree using the
divide-and-conquer algorithm as follows:
• If all the cases in S belong to the same class, or S is small, the tree is a leaf labeled
with the most frequent class in S;
• Otherwise, choose a test based on a single attribute with two or more outcomes. Make this
test the root of the tree with one branch for each outcome of the test, partition S into
corresponding subsets S1, S2, …, Sn according to the outcome for each case, and apply the
same procedure recursively to each subset.
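The information-gain criterion described above can be sketched in a few lines of Python. The toy records and attribute positions below are hypothetical, chosen so that one attribute separates the two classes perfectly and the other not at all.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Entropy reduction obtained by splitting on the attribute at attr_index."""
    total = len(labels)
    gain = entropy(labels)
    # Group the class labels by the attribute's value
    by_value = {}
    for row, label in zip(rows, labels):
        by_value.setdefault(row[attr_index], []).append(label)
    # Subtract the weighted entropy of each branch
    for subset in by_value.values():
        gain -= (len(subset) / total) * entropy(subset)
    return gain

# Hypothetical toy records: attribute 0 separates the classes
# perfectly, attribute 1 carries no information.
rows = [("yes", "yes"), ("yes", "no"), ("no", "yes"), ("no", "no")]
labels = ["malignant", "malignant", "benign", "benign"]

print(information_gain(rows, labels, 0))  # → 1.0
print(information_gain(rows, labels, 1))  # → 0.0
```

A tree builder in the J48/ID3 style would pick attribute 0 here, since it has the maximum gain, and recurse on the resulting subsets.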
4. EXPERIMENTAL RESULTS
This work is implemented in the Weka tool. Weka is a collection of machine learning
algorithms for data mining tasks; the algorithms can either be applied directly to a dataset
or called from your own Java code. Weka contains tools for data pre-processing,
classification, regression, clustering, association rules, and visualization, and it is well
suited for developing new machine learning schemes. The experimental comparison of the
classification algorithms is done based on the performance measures of classification
accuracy and execution time.
4.1 Classification Accuracy
Accuracy:
Accuracy is defined as the number of correctly classified instances divided by the total
number of instances present in the dataset.
Accuracy = (TP + TN) / (TP + TN + FP + FN)
where TP = true positive, FP = false positive, TN = true negative, FN = false negative.
TP Rate:
The true-positive rate measures the classifier's ability to identify positive cases; it is
also called sensitivity.
TPR = TP / (TP + FN)
Precision:
Precision is the ratio of the number of modules correctly classified as fault-prone to the
total number of modules classified as fault-prone; it is the proportion of units correctly
predicted as faulty.
Precision = TP / (TP + FP)
F-Measure:
The F-measure combines precision and recall into a single score (their harmonic mean).
F-Measure = 2 × recall × precision / (recall + precision)
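The four measures defined above can be checked with a small Python sketch; the confusion-matrix counts below are hypothetical, chosen only to illustrate the formulas, not taken from the paper's results.

```python
# Hypothetical confusion-matrix counts for a binary classifier
tp, tn, fp, fn = 50, 40, 10, 20

accuracy = (tp + tn) / (tp + tn + fp + fn)
tpr = tp / (tp + fn)                      # true-positive rate / recall / sensitivity
precision = tp / (tp + fp)
f_measure = 2 * tpr * precision / (tpr + precision)

print(accuracy)               # → 0.75
print(round(tpr, 3))          # → 0.714
print(round(precision, 3))    # → 0.833
print(round(f_measure, 3))    # → 0.769
```

Note that accuracy and the F-measure can rank two classifiers differently, which is exactly what the comparison table below shows for Naïve Bayes and J48.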
Table: Accuracy measures for the classifier algorithms

Algorithm   | Correctly classified instances | Incorrectly classified instances | TP rate | F-measure | Precision | Recall | Accuracy (%)
Naïve Bayes | 215                            | 71                               | 0.85    | 0.81      | 0.78      | 0.85   | 64
J48         | 217                            | 69                               | 0.95    | 0.84      | 0.75      | 0.95   | 60
From the table, on the per-class performance measures (TP rate, F-measure, recall) the
decision tree (J48) performs better in the classification process than the Naïve Bayes
algorithm, while in accuracy Naïve Bayes is better than J48.
Figure: decision tree (WEKA Explorer)
Figure: the same flow in the WEKA Knowledge Flow environment
4.2 Execution Time

Algorithm   | Execution time (sec)
Naïve Bayes | 0
J48         | 0.1
From the table, Naïve Bayes takes less execution time than the J48 algorithm.
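The execution-time comparison can be reproduced outside WEKA with a simple wall-clock timer, as sketched below; `run_naive_bayes` and `run_j48` are hypothetical placeholders standing in for each classifier's train-and-evaluate routine, not actual WEKA calls.

```python
import time

def time_it(fn):
    """Return the wall-clock seconds taken by one call to fn()."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def run_naive_bayes():
    # Placeholder workload standing in for training + evaluating Naive Bayes
    sum(i * i for i in range(10_000))

def run_j48():
    # Placeholder workload standing in for training + evaluating J48
    sum(i * i for i in range(100_000))

elapsed_nb = time_it(run_naive_bayes)
elapsed_j48 = time_it(run_j48)
print(elapsed_nb >= 0 and elapsed_j48 >= 0)  # → True
```

A reported time of 0 seconds, as in the table, usually means the run finished below the timer's display resolution, so a higher-resolution timer such as `time.perf_counter` gives a fairer comparison.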
5. RESULT AND DISCUSSION
The simulation results are partitioned into several sub items for easier analysis and evaluation.
The algorithm will be compared by analyzing their performance; execution time.The algorithm
which has the higher accuracy with the minimum execution time has chosen as the best
algorithm. In this classification, naïve bayes has the maximum classification accuracy,
performance factors and minimum execution time. So it is considered as the best classification
algorithm. . From the results , it is very clear that data mining techniques can be used in
predicting breast cancer risks and that the J48 decision trees has a better accuracy than the naïve
bayes’ model which is a statistical tool.
6. CONCLUSION
In this paper, we applied two prediction models for breast cancer using two popular data
mining methods, Naive Bayes and J48. The comparison of the J48 and Naïve Bayes classification
algorithms is based on the performance factors of classification accuracy and execution time.
From the results, Naïve Bayes is the more accurate, hence it is considered the better
classifier when compared with the J48 algorithm; moreover, the Naïve Bayes classifier
classifies the data with the minimum execution time.
REFERENCES
1. Hameed Hussain, J., Sharavanan, R., Floor cleaning machine by remote control,
International Journal of Pure and Applied Mathematics, V-116, I-14 Special Issue, PP-
461-464, 2017
2. Hameed Hussain, J., Srinivasan, V., Extraction of polythene waste from domestic waste,
International Journal of Pure and Applied Mathematics, V-116, I-14 Special Issue, PP-
427-431, 2017
3. Hameed Hussain, J., Thirumavalavan, S., Flow analysis of copper tube for solar trough
collector without joint, International Journal of Pure and Applied Mathematics, V-116, I-
14 Special Issue, PP-541-544, 2017
4. Hanirex, D.K., Kaliyamurthie, K.P., Mining the financial multi-relationship with accurate
models, Middle - East Journal of Scientific Research, V-19, I-6, PP-795-798, 2014
5. Hemapriya, M., Meikandaan, T.P., Repair of damaged reinforced concrete beam by
externally bonded with CFRP sheets, International Journal of Pure and Applied
Mathematics, V-116, I-13 Special Issue, PP-473-479, 2017
6. Hemapriya, M., Meikandaan, T.P., Experimental study on changes in properties of
cement concrete using steel slag and fly ash, International Journal of Pure and Applied
Mathematics, V-116, I-13 Special Issue, PP-369-375, 2017
7. Hemapriya, M., Meikandaan, T.P., Experimental study on structural repair and
strengthening of RC beams with FRP laminates, International Journal of Pure and
Applied Mathematics, V-116, I-13 Special Issue, PP-355-361, 2017
8. Hemapriya, M., Meikandaan, T.P., Effect of high range water reducers on sorptivity and
water permeability of concrete, International Journal of Pure and Applied Mathematics,
V-116, I-13 Special Issue, PP-377-381, 2017
9. Hemapriya, M., Meikandaan, T.P., Strength and workability characteristics of super
plasticized concrete, International Journal of Pure and Applied Mathematics, V-116, I-13
Special Issue, PP-345-353, 2017
10. Hemapriya, M., Meikandaan, T.P., Potency and workability behavior of quality
plasticized structural material, International Journal of Pure and Applied Mathematics, V-
116, I-13 Special Issue, PP-363-367, 2017
11. Hussain, J.H., Manavalan, S., Optimization of properties of jatropha methyl Ester (JME)
from jatropha oil, International Journal of Pure and Applied Mathematics, V-116, I-18
Special Issue, PP-481-484, 2017
12. Hussain, J.H., Manavalan, S., Optimization and comparison of properties of neem and
jatropha biodiesels, International Journal of Pure and Applied Mathematics, V-116, I-17
Special Issue, PP-79-82, 2017
13. Hussain, J.H., Meenakshi, C.M., Simulation and analysis of heavy vehicles composite
leaf spring, International Journal of Pure and Applied Mathematics, V-116, I-17 Special
Issue, PP-135-140, 2017
14. Hussain, J.H., Nimal, R.J.G.R., Review: Investigation on mechanical properties of
different metal matrix composites in diffusion bonding method by using metal
interlayers, International Journal of Pure and Applied Mathematics, V-116, I-18 Special
Issue, PP-459-464, 2017
15. Jagadeeswari, P., Subashini, G., Basic results of probability, International Journal of Pure
and Applied Mathematics, V-116, I-17 Special Issue, PP-275-276, 2017
16. Janani, V.D., Kavitha, S., Conceptual level similarity measure based review spam
detection adversarial spam detection using the randomized hough transform-support
vector machine, International Journal of Pure and Applied Mathematics, V-116, I-9
Special Issue, PP-197-201, 2017
17. Jasmin, M., Beulah Hemalatha, S., Security for industrial communication system using
encryption / decryption modules, International Journal of Pure and Applied Mathematics,
V-116, I-15 Special Issue, PP-563-567, 2017
18. Jasmin, M., Beulah Hemalatha, S., VLSI-based frequency spectrum analyzer for low area
chip design by using yasmirub method, International Journal of Pure and Applied
Mathematics, V-116, I-15 Special Issue, PP-557-560, 2017
19. Jasmin, M., Beulah Hemalatha, S., RFID security and privacy enhancement, International
Journal of Pure and Applied Mathematics, V-116, I-15 Special Issue, PP-535-538, 2017
20. Jasmin, M., Beulah Hemalatha, S., Digital phase locked loop, International Journal of
Pure and Applied Mathematics, V-116, I-15 Special Issue, PP-569-574, 2017
21. Jeyalakshmi, G., Arulselvi, S., Community oriented configurations for WSN,
International Journal of Pure and Applied Mathematics, V-116, I-15 Special Issue, PP-
529-533, 2017
22. Jeyalakshmi, G., Arulselvi, S., Investigating file systems, International Journal of Pure
and Applied Mathematics, V-116, I-15 Special Issue, PP-517-521, 2017
23. Jeyalakshmi, G., Arulselvi, S., Methodology for the development of lambda calculus,
International Journal of Pure and Applied Mathematics, V-116, I-15 Special Issue, PP-
511-515, 2017
24. Jeyalakshmi, G., Arulselvi, S., Remote procedure calls in access points, International
Journal of Pure and Applied Mathematics, V-116, I-15 Special Issue, PP-523-526, 2017
25. Jeyanthi Rebecca, L., Anbuselvi, S., Sharmila, S., Medok, P., Sarkar, D., Effect of marine
waste on plant growth, Der Pharmacia Lettre, V-7, I-10, PP-299-301, 2015
26. Kaliyamurthie, K.P., Parameswari, D., Udayakumar, R., Malicious packet loss during
routing misbehavior-identification, Middle - East Journal of Scientific Research, V-20, I-
11, PP-1413-1416, 2014
27. Kanagavalli, G., Sangeetha, M., Intelligent trafficlight system for reducedfuel
consumption, International Journal of Pure and Applied Mathematics, V-116, I-15
Special Issue, PP-491-494, 2017
28. Kanagavalli, G., Sangeetha, M., GPS based blind pedestrian positioning and voice
response system, International Journal of Pure and Applied Mathematics, V-116, I-15
Special Issue, PP-479-484, 2017
29. Kanagavalli, G., Sangeetha, M., Detection of retinal abnormality by contrast
enhancement methodusingcurvelet transform, International Journal of Pure and Applied
Mathematics, V-116, I-15 Special Issue, PP-497-502, 2017
30. Kanagavalli, G., Sangeetha, M., Design of low power VLSI circuits for precharge logic,
International Journal of Pure and Applied Mathematics, V-116, I-15 Special Issue, PP-
505-509, 2017
31. Kanniga, E., Selvaramarathnam, K., Sundararajan, M., Kandigital bike operating system,
Middle - East Journal of Scientific Research, V-20, I-6, PP-685-688, 2014
32. Karthik, B., Arulselvi, Noise removal using mixtures of projected gaussian scale
mixtures, Middle - East Journal of Scientific Research, V-20, I-12, PP-2335-2340, 2014
33. Karthik, B., Arulselvi, Selvaraj, A., Test data compression architecture for lowpowervlsi
testing, Middle - East Journal of Scientific Research, V-20, I-12, PP-2331-2334, 2014
34. Karthikeyan, R., Michael, G., Kumaravel, A., A housing selection method for design,
implementation&evaluation for web based recommended systems, International
Journal of Pure and Applied Mathematics, V-116, I-8 Special Issue, PP-23-27, 2017
35. Khanaa, V., Thooyamani, K.P., Using lookup table circulating fluidised bed combustion
boiler by the method of sensor output linearization, Middle - East Journal of Scientific
Research, V-16, I-12, PP-1801-1806, 2013
36. Khanaa, V., Thooyamani, K.P., Face routing protocol using genetic algorithm in, Middle
- East Journal of Scientific Research, V-16, I-12, PP-1863-1867, 2013
37. Khanaa, V., Thooyamani, K.P., Udayakumar, R., Two factor authentication using mobile
phones, World Applied Sciences Journal, V-29, I-14, PP-208-213, 2014
38. Khanaa, V., Thooyamani, K.P., Udayakumar, R., Next major wave of it inovation, World
Applied Sciences Journal, V-29, I-14, PP-218-220, 2014
39. Khanaa, V., Thooyamani, K.P., Udayakumar, R., Traffic policing approach for wireless
video conference traffic, World Applied Sciences Journal, V-29, I-14, PP-200-207, 2014
40. Khanaa, V., Thooyamani, K.P., Udayakumar, R., Patient monitoring in gene ontology
with words computing using SOM, World Applied Sciences Journal, V-29, I-14, PP-195-
199, 2014
41. Khanaa, V., Thooyamani, K.P., Udayakumar, R., Load balancing in structured PEER to
PEER systems, World Applied Sciences Journal, V-29, I-14, PP-186-189, 2014
42. Khanaa, V., Thooyamani, K.P., Udayakumar, R., Impact of route stability under random
based mobility model in MANET, World Applied Sciences Journal, V-29, I-14, PP-274-
278, 2014
43. Khanaa, V., Thooyamani, K.P., Udayakumar, R., Modelling Cloud Storage, World
Applied Sciences Journal, V-29, I-14, PP-190-194, 2014
44. Khanaa, V., Thooyamani, K.P., Udayakumar, R., Elliptic curve cryptography using in
multicast network, World Applied Sciences Journal, V-29, I-14, PP-264-269, 2014
45. Khanaa, V., Thooyamani, K.P., Udayakumar, R., SRW/U as a lingua franca in managing
the diversified information resources, World Applied Sciences Journal, V-29, I-14, PP-
279-284, 2014