Computer Engineering and Intelligent Systems www.iiste.org
ISSN 2222-1719 (Paper) ISSN 2222-2863 (Online)
Vol 3, No.4, 2012
Development of a Writer-Independent Online Handwritten
Character Recognition System Using Modified Hybrid
Neural Network Model
Fenwa O. D.*
Department of Computer Science and Engineering,
Ladoke Akintola University of Technology, P.M.B 4000, Ogbomoso, Nigeria.
*E-mail of the corresponding author: [email protected]
Omidiora E. O.
Department of Computer Science and Engineering,
Ladoke Akintola University of Technology, P.M.B 4000, Ogbomoso, Nigeria.
E-mail: [email protected]
Fakolujo O. A.
Department of Computer Science and Engineering,
Ladoke Akintola University of Technology, P.M.B 4000, Ogbomoso, Nigeria.
E-mail: [email protected]
Ganiyu R. A.
Department of Computer Science and Engineering,
Ladoke Akintola University of Technology, P.M.B 4000, Ogbomoso, Nigeria.
E-mail: [email protected]
Abstract
Recognition of handwritten characters is a difficult problem because of the high variability and ambiguity in the character shapes written by individuals. Problems encountered by researchers include selection of an efficient feature extraction method, long network training time, long recognition time and low recognition accuracy. Many feature extraction techniques have been proposed in the literature to improve the overall recognition rate, although most of them use only one property of the handwritten character. This research focuses on developing a feature extraction technique that combines three characteristics of the handwritten character (stroke information, contour pixels and zoning) to create a global feature vector. A hybrid feature extraction algorithm was developed to alleviate the problem of poor feature extraction in online character recognition systems. This work also addresses the shortcomings of the standard backpropagation algorithm's error adjustment. A hybrid of a modified Counterpropagation and a modified Optical Backpropagation neural network model was developed to enhance the performance of the proposed character recognition system. Experiments were
performed with 6200 handwriting character samples (English uppercase, lowercase and digits) collected
from 50 subjects using G-Pen 450 digitizer and the system was tested with 100 character samples written
by people who did not participate in the initial data acquisition process. The performance of the system was
evaluated based on different learning rates, different image sizes and different database sizes. The
developed system achieved better performance with no recognition failure, 99% recognition rate and an
average recognition time of 2 milliseconds.
Keywords: Character recognition, Feature extraction, Neural Network, Counterpropagation, Optical
Backpropagation, Learning rate.
1. Introduction
The use of neural networks for handwriting recognition is a field attracting considerable attention. As computing technology advances, the benefits of using Artificial Neural Networks (ANN) for handwriting recognition become more obvious. Hence, new ANN approaches geared toward the task of
handwriting recognition are constantly being studied. Character recognition is the process of applying
pattern-matching methods to character shapes that have been read into a computer to determine which alphanumeric characters, punctuation marks, and symbols the shapes represent. The classes of recognition
systems that are usually distinguished are online systems for which handwriting data are captured during
the writing process (which makes available the information on the ordering of the strokes) and offline
systems for which recognition takes place on a static image captured once the writing process is over
(Anoop and Anil, 2004; Liu et al., 2004; Mohamad and Zafar, 2004; Naser et al., 2009; Pradeep et al.,
2011). The online methods have been shown to be superior to their offline counterparts in recognizing handwritten characters due to the temporal information available with the former (Pradeep et al., 2011).
Handwriting recognition systems can further be broken down into two categories: writer-independent recognition systems, which recognize a wide range of possible writing styles, and writer-dependent recognition systems, which recognize writing styles only from specific users (Santosh and Nattee, 2009). Online handwriting recognition attracts special interest today due to the increased usage of hand-held devices. Since incorporating a keyboard into hand-held devices is difficult, alternatives are needed, and in this respect the online method of giving input with a stylus is gaining popularity (Gupta et al., 2007).
Recognition of handwritten characters with respect to any language is difficult due to variability of writing
styles, state of mood of individuals, multiple patterns to represent a single character, cursive representation
of character and number of disconnected and multi-stroke characters (Shanthi and Duraiswamy, 2007).
Current technologies supporting pen-based input devices include the Digital Pen by Logitech, Smart Pad by
Pocket PC, Digital Tablets by Wacom and Tablet PC by Compaq (Manuel and Joaquim, 2001). Although
these systems with handwriting recognition capability are already widely available in the market, further
improvements can be made on the recognition performances for these applications.
The challenges posed by the online handwritten character recognition systems are to increase the
recognition accuracy and to reduce the recognition time (Rejean and Sargurl, 2000; Gupta et al., 2007). Various approaches have been used by researchers to develop character recognition systems; these include the template matching approach, statistical approach, structural approach, neural network approach and hybrid approach. The hybrid approach (combination of multiple classifiers) has become a very
active area of research recently (Kittler and Roli, 2000; 2001). It has been demonstrated in a number of
applications that using more than a single classifier in a recognition task can lead to a significant
improvement of the system's overall performance. Hence, the hybrid approach seems to be a promising way to improve the recognition rate and recognition accuracy of current handwriting recognition systems (Simon and Horst, 2004). However, selection of a feature extraction method is probably the single most important factor in achieving high recognition performance in a character recognition system (Pradeep et al., 2011). No matter how sophisticated the classifiers and learning algorithms, poor feature extraction will always lead to poor system performance (Marc et al., 2001). In furtherance, Fenwa et al. (2012)
developed a feature extraction technique for online character recognition system using hybrid of
geometrical and statistical features. Thus, through the integration of geometrical and statistical features,
insights were gained into new character properties, since these types of features were considered to be
complementary.
2. Research Methodology
The five stages involved in developing the proposed character recognition system are data acquisition, pre-processing, character processing (comprising feature extraction and character digitization), training and classification using the hybrid neural network model, and testing, as shown in Figure 2.2.
Experiments were performed with 6200 handwriting character samples (English uppercase, lowercase and
digits) collected from 50 subjects using G-Pen 450 digitizer and the system was tested with 100 character
samples written by people who did not participate in the initial data acquisition process. The performance
of the system was evaluated based on different learning rates, different image sizes and different database
sizes.
2.1 Data Acquisition
The data used in this work were collected using a digitizer tablet (G-Pen 450), shown in Figure 2.3. It has an electric pen with a sensing writing board. An interface was developed using C# to acquire data (character information such as stroke number, stroke pressure, etc.) from different subjects using the digitizer tablet. The character set comprises 26 uppercase (A-Z) and 26 lowercase (a-z) English letters and 10 digits (0-9), a total of 62 characters. 6,200 character samples (62 x 2 x 50) were collected from 50 subjects, as each individual was requested to write each of the characters 2 times (this allows the network to learn various possible variations of a single character and become adaptive in nature). These samples serve as the training data set fed into the neural network.
2.2 Data Preprocessing
Pre-processing is done prior to the application of feature extraction algorithms. It aims to produce clean character images that allow the recognition system to operate more accurately. The feature extraction stage relies on the output of this process. The pre-processing technique used in this work is grid resizing.
2.3 Resizing Grid
The interface provided no measurement to constrain how small or big the input character should be. Hence, each written character is resized to a matrix of size 5 by 7, 10 by 14 or 20 by 28. This is used to get the universe of discourse, which is the smallest matrix that fits the entire character skeleton. The universe of discourse is measured to easily obtain a uniform matrix size in multiples of 5 by 7, 10 by 14 and 20 by 28 respectively. Any character measured smaller than the required size is resized up to the nearest multiple of 5 by 7, 10 by 14 or 20 by 28; conversely, any character measured larger than the required size is resized down to that multiple. This implies that rows and columns of zeros are added to or removed from the resized image matrix to achieve the required multiple of 5 by 7, 10 by 14 and 20 by 28.
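The resizing step above can be sketched as follows. This is a minimal illustration in Python, not the authors' C# implementation; the exact padding and cropping policy (crop from the bottom/right, pad with zeros on the bottom/right) is an assumption, since the text only states that rows and columns of zeros are added or removed:

```python
# Sketch: force a binary character matrix to an exact grid size (here the
# 5 by 7 base grid) by cropping if too large and zero-padding if too small.
# Illustrative only; the paper's implementation details are not given.

def resize_to_grid(matrix, rows=7, cols=5):
    """Pad or crop `matrix` (list of lists of 0/1) to exactly rows x cols."""
    # Crop any rows/columns beyond the target size.
    trimmed = [row[:cols] for row in matrix[:rows]]
    # Pad each remaining row with zeros on the right.
    padded = [row + [0] * (cols - len(row)) for row in trimmed]
    # Pad with all-zero rows at the bottom.
    padded += [[0] * cols for _ in range(rows - len(padded))]
    return padded

small = [[1, 1], [1, 0]]      # a 2x2 character fragment
out = resize_to_grid(small)   # now exactly 7 rows x 5 columns
```

The same function applied with rows=14, cols=10 or rows=28, cols=20 covers the other two grid sizes mentioned above.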
2.4 Feature Extraction Development
The goal of feature extraction is to extract a set of features, which maximizes the recognition rate with the
least amount of elements. Many feature extraction techniques have been proposed to improve overall
recognition rate; however, most of the techniques used only one property of the handwritten character. This
research focuses on a feature extraction technique that combined three characteristics (stroke information,
contour pixels and zoning) of the handwritten character to create a global feature vector. Hence, a hybrid
feature extraction algorithm was developed using Geometrical and Statistical features as shown in Figure
2.4. Integration of Geometrical and Statistical features was used to highlight different character properties,
since these types of features are considered to be complementary.
2.4.1 The Developed Hybrid (Geom-Statistical) Feature Extraction Algorithm
The Hybrid feature extraction model adopted in this work is as shown in Figure 2.4. Stages of
development of the proposed hybrid feature extraction algorithm are as follow:
Stage 1: Get the stroke information of the input characters from the digitizer (G-Pen 450). This includes:
(i) the pressure used in writing the strokes of the characters;
(ii) the number of strokes used in writing the characters;
(iii) the number of junctions and their locations in the written characters;
(iv) the horizontal projection count of the character.
Stage 2: Apply Contour tracing algorithm to trace out the contour of the characters:
Stage 3: Develop a modified hybrid zoning algorithm and run it on the contours of the characters.
Two zoning algorithms were proposed by Vanajah and Rajashekararadhya in 2008 for the recognition of numerals in four popular Indian scripts (Kannada, Telugu, Tamil and Malayalam). These were the Image Centroid and Zone-based (ICZ) Distance Metric Feature Extraction System (Vanajah and Rajashekararadhya, 2008a) and the Zone Centroid and Zone-based (ZCZ) Distance Metric Feature Extraction System (Rajashekararadhya and Vanajah, 2008b). The two algorithms were modified in terms of:
(i) the number of zones being used (25 zones), as shown in Figure 2.1;
(ii) the measurement of the distances from both the image centroid and the zone centroid to the pixels present in each zone;
(iii) the area of application (uppercase (A-Z), lowercase (a-z) English alphabet and digits (0-9)).
Few zones are adopted in this research, but emphasis is laid on how to effectively measure the pixel densities in each zone. However, pixels pass through the zones at varied distances, which is why we measured five distances for each zone at an angle of 20°.
Hybrid of Modified ICZ and Modified ZCZ based Distance Metric Feature Extraction Algorithm.
Input: Pre-processed character image
Output: Features for classification and recognition
Method Begins
Step 1: Divide the input image into 25 equal zones.
Step 2: Compute the input image centroid.
Step 3: Compute the distance from the image centroid to each pixel present in the zone, measured at an angle of 20°.
Step 4: Repeat Step 3 for all the pixels present in the zone.
Step 5: Compute the average of these distances.
Step 6: Compute the zone centroid.
Step 7: Compute the distance from the zone centroid to each pixel present in the zone, measured at an angle of 20°.
Step 8: Repeat Step 7 for all the pixels present in the zone.
Step 9: Compute the average of these distances.
Step 10: Repeat Steps 3-9 sequentially for all the zones.
Step 11: Finally, 2 x n (= 50, for n = 25 zones) such features are obtained for classification and recognition.
Method Ends
Stage 4: Feed the outputs of the extracted features of the characters into the digitization stage in order to
convert all the extracted features into digital forms.
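The zoning procedure in Steps 1-11 can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: the zones are a plain 5 x 5 grid split of the image, and the 20° directional sampling of five distances per zone is simplified to an average over all pixels in the zone; empty zones are given zero features, which is an assumption the paper does not discuss:

```python
import math

# Sketch of the hybrid ICZ/ZCZ zoning idea: split a binary image into 25
# zones, then record (a) the average distance from the image centroid to
# the pixels in each zone (ICZ) and (b) the average distance from the
# zone's own centroid to those pixels (ZCZ), giving 2 x 25 = 50 features.

def centroid(pixels):
    xs = [p[0] for p in pixels]
    ys = [p[1] for p in pixels]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def avg_dist(point, pixels):
    return sum(math.dist(point, p) for p in pixels) / len(pixels)

def zone_features(image, grid=5):
    h, w = len(image), len(image[0])
    on = [(r, c) for r in range(h) for c in range(w) if image[r][c]]
    img_c = centroid(on)
    feats = []
    for zr in range(grid):
        for zc in range(grid):
            zone = [(r, c) for (r, c) in on
                    if zr * h // grid <= r < (zr + 1) * h // grid
                    and zc * w // grid <= c < (zc + 1) * w // grid]
            if not zone:                 # empty zone -> zero features (assumed)
                feats += [0.0, 0.0]
                continue
            feats.append(avg_dist(img_c, zone))            # ICZ distance
            feats.append(avg_dist(centroid(zone), zone))   # ZCZ distance
    return feats

image = [[1] * 10 for _ in range(10)]   # dummy all-on 10x10 "character"
features = zone_features(image)         # 50 features, 2 per zone
```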
2.4.2 Development of Hybrid Neural Network Model
In this research work, hybrid approach with serial combination scheme was adopted. Hybrid of modified
Counterpropagation and modified Optical Backpropagation neural network was developed as shown in
Figure 2.5. The architecture of the neural network, with three layers, is illustrated in Figure 2.5. The output of the ith layer is given by Equation (2.1), except for the output layer, which uses the softmax function:

a^i = logsig(W^i a^(i-1) + b^i)    (2.1)

where i = 2, 3; a^0 = P; and a^1 = E, where E is the Euclidean distance between the weight vector and the input vector;
W^i = weight matrix of the ith layer;
a^i = output of the ith layer;
b^i = bias vector of the ith layer.
The input vector P is represented by the solid vertical bar at the left. The dimensions of P are displayed as 35 x 1, indicating that the input is a single vector of 35 elements (i.e. the image size). These inputs go to weight matrix W^1, which has 86 rows (i.e. 86 neurons in the first hidden layer) and 35 columns. A constant 1 enters each neuron as input and is multiplied by a bias b^1. The net input to the transfer function (Euclidean distance) in the Kohonen (first hidden) layer is n^1, which is given as the Euclidean distance between the weight vector and the input vector P. The neurons' output a^1 serves as input to the second hidden layer. These inputs go to weight matrix W^2, which has 86 rows (i.e. 86 neurons in the second hidden layer) and 86 columns. A constant 1 enters each neuron as input and is multiplied by a bias b^2. The net input to the transfer function (log sigmoid) in the second hidden layer is n^2, which is the sum of the bias b^2 and the product W^2 a^1. The output a^2 is a single vector of 86 elements and serves as input to the output layer a^3. These inputs go to weight matrix W^3, which has 62 rows (26 uppercase + 26 lowercase + 10 digits) and 86 columns (i.e. 86 neurons from the second hidden layer). The net input to the transfer function (softmax) in the output layer is n^3, which is the sum of the bias b^3 and the product W^3 a^2. The neurons' output a^3 is a single vector of 62 elements and serves as the final output of the neural network.
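The forward pass just described can be sketched as follows. The layer sizes (35 -> 86 -> 86 -> 62) follow the text; the placeholder weights, the `logsig` helper and the softmax output are illustrative assumptions, not the trained network:

```python
import math

# Sketch of the described three-layer forward pass: a Kohonen-style first
# hidden layer (Euclidean distance to each weight row), a log-sigmoid
# second hidden layer, and a softmax output layer over 62 classes.

def logsig(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(P, W1, W2, b2, W3, b3):
    # Layer 1: Euclidean distance between each weight row and the input.
    a1 = [math.dist(row, P) for row in W1]
    # Layer 2: log-sigmoid of W2 a1 + b2.
    a2 = [logsig(sum(w * x for w, x in zip(row, a1)) + b)
          for row, b in zip(W2, b2)]
    # Output layer: softmax of W3 a2 + b3 (max-shifted for stability).
    n3 = [sum(w * x for w, x in zip(row, a2)) + b
          for row, b in zip(W3, b3)]
    m = max(n3)
    e = [math.exp(v - m) for v in n3]
    s = sum(e)
    return [v / s for v in e]

P = [0.0] * 35                            # one 35-element input image
W1 = [[0.1] * 35 for _ in range(86)]      # placeholder weights
W2 = [[0.01] * 86 for _ in range(86)]
W3 = [[0.01] * 86 for _ in range(62)]
out = forward(P, W1, W2, [0.0] * 86, W3, [0.0] * 62)   # 62 class scores
```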
This research work adopted the modified Counterpropagation network (CPN) developed by Jude, Vijila,
and Anitha (2010). The training algorithm involves the following two phases:
(i) Weight adjustment between the input layer and the hidden layer (Kohonen layer)
The weight adjustment procedure for the hidden layer weights is the same as that of the conventional CPN. It
follows the unsupervised methodology to obtain the stabilized weights. After convergence, the weights
between the hidden layer and the output layer are calculated.
(ii) Weight adjustment between the hidden layer and its output layer
The weight adjustment procedure employed in this work is significantly different from that of the conventional CPN. The weights are calculated in the reverse direction without any iterative procedures. Normally, the weights are calculated based on the criterion of minimizing the error. But in this work, a minimum error value is specified initially and the weights are estimated based on that error value. Thus, without any training methodology, the weight values are estimated. This technique accounts for a higher convergence rate since one set of weights is estimated directly. It is this output that serves as the input to the modified Optical
backpropagation algorithm.
2.4.2.1 Modified Optical Backpropagation Neural Network
In standard backpropagation, the error at a single output unit is defined according to Equation (2.2) as:

δo_pk = (Ypk - Opk) . fo'k(neto_pk)    (2.2)

where the subscript p refers to the pth training vector and k refers to the kth output unit, Ypk is the desired output value and Opk is the actual output from the kth unit; δo_pk then propagates backward to update the output-layer weights and the hidden-layer weights. In Optical Backpropagation (OBP), the error at a single output unit is adjusted according to Otair and Salameh (2005) as:

New δo_pk = (1 + e^((Ypk - Opk)^2)) . fo'k(neto_pk),   if (Ypk - Opk) >= 0    (2.3a)

New δo_pk = -(1 + e^((Ypk - Opk)^2)) . fo'k(neto_pk),  if (Ypk - Opk) < 0    (2.3b)
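The difference between the standard and optical error terms can be illustrated as follows. The log-sigmoid derivative used for fo'k is an assumption from context, and the function names `delta_standard` and `delta_optical` are hypothetical:

```python
import math

# Sketch of Equations (2.2)-(2.3): standard backpropagation uses the linear
# difference (Y - O), while optical backpropagation scales the derivative
# by 1 + e^((Y-O)^2), keeping the sign of (Y - O). The transfer function is
# assumed to be log-sigmoid, so f' = s(1 - s).

def logsig_deriv(net):
    s = 1.0 / (1.0 + math.exp(-net))
    return s * (1.0 - s)

def delta_standard(y, o, net):
    return (y - o) * logsig_deriv(net)            # Equation (2.2)

def delta_optical(y, o, net):
    mag = (1.0 + math.exp((y - o) ** 2)) * logsig_deriv(net)
    return mag if (y - o) >= 0 else -mag          # Equations (2.3a)/(2.3b)

d_std = delta_standard(1.0, 0.2, 0.5)
d_opt = delta_optical(1.0, 0.2, 0.5)   # larger magnitude than d_std
```

The optical term always has at least twice the magnitude of the derivative factor, which is what speeds up the weight updates relative to standard backpropagation.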
The error function defined in Optical Backpropagation (Otair and Salameh, 2005) earlier is proportional to
the square of the Euclidean distance between the desired output and the actual output of the network for a
particular input pattern. As an alternative, other error functions whose derivatives exist and can be
calculated at the output layer can replace the traditional square error criterion (Haykin, 2003). In this
research work, an error of the third order (cubic error) has been adopted to replace the traditional square error criterion used in Optical Backpropagation. The equation of the cubic error term is given as:

δo_pk = -3(Ypk - Opk)^2 . fo'k(neto_pk)    (2.4)

The cubic error term in Equation (2.4) was manipulated mathematically in order to maximize the error of each output unit, which is transmitted backward from the output layer to each unit in the intermediate layer. This gives Equations (2.5a) and (2.5b) below:

Modified δo_pk = 3((1 + e^t)^2 . fo'k(neto_pk)),   if (Ypk - Opk) >= 0    (2.5a)

Modified δo_pk = -3((1 + e^t)^2 . fo'k(neto_pk)),  if (Ypk - Opk) < 0    (2.5b)
The Optical Backpropagation algorithm was modified in terms of:
(i) the error signal function;
(ii) the area of application.
With the introduction of the cubic error function and momentum, the modified Optical Backpropagation algorithm is given as:
1. Apply the input example to the input units.
2. Calculate the net-input values to the hidden layer units.
3. Calculate the outputs from the hidden layer.
4. Calculate the net-input values to the output layer units.
5. Calculate the outputs from the output units.
6. Calculate the error term for the output units, using Equations (2.5a) and (2.5b).
7. Calculate the error term for the hidden units by applying Modified δo_pk:

Modified δh_pj = fh'j(neth_pj) . Σk (Modified δo_pk . Wo_kj)    (2.7)

8. Update the weights on the output layer:

Wo_kj(t+1) = Wo_kj(t) + ΔWo_kj(t) + η . Modified δo_pk . i_pj    (2.8)

9. Update the weights on the hidden layer:

Wh_ji(t+1) = Wh_ji(t) + η . Modified δh_pj . Xi    (2.9)

where η is the learning rate and ΔWo_kj(t) is the momentum term (the previous weight change).
Repeat Steps 1 to 9 until the error (Ypk - Opk) is acceptably small for each training vector pair. As in OBP, the proposed algorithm is stopped when the cubes of the differences between the actual and target values, summed over all units and all patterns, are acceptably small.
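The per-connection weight updates in Equations (2.8) and (2.9) can be sketched as follows. The learning rate eta and the momentum coefficient alpha are assumptions, since the corresponding Greek symbols did not survive in the text, and the function names are hypothetical:

```python
# Sketch of the updates in Equations (2.8)-(2.9) for a single connection:
# the output-layer update adds a momentum term (alpha times the previous
# weight change) to the usual learning-rate-scaled delta, while the
# hidden-layer update uses only the learning-rate-scaled delta.

def update_output_weight(w, prev_dw, delta_o, hidden_out, eta=0.1, alpha=0.9):
    """Equation (2.8): returns the new weight and the new weight change."""
    dw = alpha * prev_dw + eta * delta_o * hidden_out
    return w + dw, dw

def update_hidden_weight(w, delta_h, x, eta=0.1):
    """Equation (2.9): plain delta-rule update for a hidden weight."""
    return w + eta * delta_h * x

w_new, dw = update_output_weight(0.5, 0.02, 0.7, 0.9)
w_hid = update_hidden_weight(0.3, 0.4, 1.0)
```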
2.4.3 The Hybrid Neural Network Algorithm
This research work employed hybrid of modified Counterpropagation and modified Optical
Backpropagation neural networks for the training and classification of the input pattern. The training
algorithm involves the following two stages:
Stage A: Performs the training of the weights from the input nodes to the Kohonen hidden node.
Step 1: Weight adjustment between the input layer and the hidden layer
The weight adjustment procedure for the hidden layer weights is the same as that of the conventional CPN. It
follows the unsupervised methodology to obtain the stabilized weights. This process is repeated for a
suitable number of iterations and the stabilized set of weights are obtained. After convergence, the weights
between the Kohonen hidden layer and the output layer are calculated.
Step 2: Weight adjustment between the hidden layer and the output layer
The weight adjustment procedure employed in this work is significantly different from the
conventional CPN. The weights are calculated in the reverse direction without any iterative procedures.
Normally, the weights are calculated based on the criterion of minimizing the error. But in this work, a
minimum error value is specified initially and the weights are estimated based on the error value. The
detailed steps of the modified algorithm are given below.
Step 1: The stabilized weight values are obtained when the error value (target minus output) is equal to zero or a predefined minimum value. The error value used for convergence in this work is 0.1. The following procedure uses this concept for the weight matrix calculation.
Step 2: Supply the target vector t1 to the output layer neurons.
Step 3: For convergence,

(t1 - y1) = 0.1    (2.10)
The output of the output layer neurons is set equal to the target values as:

y1 = t1 - 0.1    (2.11)

Step 4: Once the output value is calculated, the weights are estimated from the sum of the weighted input signals. Thus, without any training methodology, the weight values are estimated. This technique accounts for a higher convergence rate since one set of weights is estimated directly.
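The non-iterative weight estimation can be sketched as follows. Fixing y1 = t1 - 0.1 follows Equations (2.10) and (2.11); the equal-sharing solve for the weights is an assumption, since the text does not spell out how the underdetermined system y = Σ wj aj is resolved:

```python
# Sketch of the reverse, non-iterative weight estimation: the output is
# fixed at y = t - 0.1 and weights are solved directly from
# y = sum(w_j * a_j). Sharing y equally across the weighted sum is one
# simple choice of solve; the paper does not specify this step.

def estimate_weights(target, hidden_out, err=0.1):
    y = target - err                      # Equation (2.11)
    total = sum(hidden_out)
    # Assumed solve: w_j = y / total for every j, so the weighted sum is y.
    return [y / total for _ in hidden_out]

hidden_out = [0.5, 0.25, 0.25]
w = estimate_weights(1.0, hidden_out)
y_check = sum(wi * ai for wi, ai in zip(w, hidden_out))   # recovers t - 0.1
```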
STAGE B: Performs the training of the weights from the second hidden node to the output nodes.
1. Calculate the net-input values from the Kohonen to the second hidden layer units.
2. Calculate the outputs from the second hidden layer.
3. Calculate the net-input values to the output layer units
4. Calculate the outputs from the output units
5. Calculate the error term for the output units, using Equations (2.5a) and (2.5b).
6. Calculate the error term for the hidden units by applying Modified δo_pk, as in Equation (2.7).
7. Update weights on the output layer using (2.8)
8. Update weights on the hidden layer using Equation (2.9)
Repeat Steps 1 to 8 until the error (Ypk - Opk) is acceptably small for each training vector pair. As in classical BP, the proposed algorithm is stopped when the cubes of the differences between the actual and target values, summed over all units and all patterns, are acceptably small.
The notation used in the training procedure is as follows:
Xpi : net input to the ith input unit
neth_pj : net input to the jth hidden unit
Wh_ji : weight on the connection from the ith input unit to the jth hidden unit
i_pj : output of the jth hidden unit
neto_pk : net input to the kth output unit
Wo_kj : weight on the connection from the jth hidden unit to the kth output unit
Opk : actual output of the kth output unit
3. Software Implementation
This section discusses the phases of developing the proposed character recognition system. All the algorithms were implemented using the C# programming language and run under the Windows 7 operating system on two Pentium (R) machines, one with 2.00 GB RAM and a 1.83 GHz processor and one with 4.00 GB RAM and a 2.13 GHz processor. Different interfaces representing the phases of the system development are shown in Figures 2.6 to 2.11.
3.1 Character Acquisition
The character acquisition interface, as in Figures 2.7 and 2.8, displays a drawing area that serves as a platform to acquire characters from users. Any character drawn on this platform must be saved into the developed system's database by clicking the Add button. The interface also captures the number of strokes and the pressure used in writing a character. A total of 6,200 character samples were acquired using the G-Pen 450 as shown in Figure 2.2
and stored in the database.
3.2 Network Training
Before the network can be trained, the network parameters such as the learning rate, epoch value, the quit
error and character image size must be specified in the Application settings interface. The next step is to specify the total number of characters to be loaded from the database for training. The database size can be varied by specifying the maximum number of characters to be selected from each character category
(uppercase, lowercase and digit). This can be accomplished with the interfaces shown in Figure 2.9 and
Figure 2.10. On clicking the Training phase button, the neural network trains itself with alphanumeric
characters A-Z, a-z and 0-9 by performing several iterations based on the error value, learning rate and the epoch value stipulated for the neural network. The epoch increases from 1 to 10,000 and the error
reduces from a particular value depending on the value specified by the user. The training is terminated
when either the epoch reaches 10,000 or the error reaches the minimum value specified. Network training
results such as the training time and the epoch value are displayed as shown in Figure 2.10.
3.3 Recognition Phase
Recognition of a character can be accomplished by clicking the Data acquisition/Recognition phase button. The user is required to write a character and click the Recognize button. The results, such as the detected character, recognition rate and recognition time, will be displayed as shown in Figure 2.11.
4. Performance Evaluation
The performance metrics adopted in evaluating the developed system are: different learning rate parameters,
different image sizes, different database sizes and system configurations. The results are as given in Figure
2.12 to Figure 2.17. Figure 2.12 shows the graph of learning rate versus epoch. Learning rate parameter variation has a positive effect on the network performance. The smaller the value of the learning rate, the lower the value with which the network updates its weights. This intuitively implies that the network is less likely to face the over-learning difficulty, since it updates its links slowly and in a more refined manner. However, this also implies that more iterations are required to reach the optimal state. Figure 2.13
indicates the graph of image size versus epoch. The image size is directly proportional to epoch. Usually,
the complex and large-sized input sets require a large-topology network with a larger number of iterations, and
this has an adverse effect on the recognition time. Figure 2.14 shows the graph of variation of database size
and recognition rates. Increase in database size has a positive proportionality relation to the recognition
rates due to the fact that network is able to attribute the test character to larger character samples in the
vector space. However, rate of increment in recognition rate with respect to database size is considerably
small. Moreover, Figure 2.15 shows the effect of the dimension of the input vector (image matrix size) on the recognition performance. Three different image sizes (5 by 7, 10 by 14 and 20 by 28) were considered; the results showed that the higher the image size, the higher the percentage of recognition, although the rate of change was small. Furthermore, an increase in image size also increases the recognition time.
The system configuration is another factor influencing the performance of the recognition system. Both the training time and the recognition time are measured in CPU (processor) seconds; the higher the speed of the system (processor speed), the lower the training time and the recognition time. This is because more time is needed by the 1.83 GHz system to reach the required epochs (iterations) than by the system with the 2.13 GHz processor. The result is as shown in Figure 2.16. Figure 2.17 is a graph showing a comparison of
performance of the developed system with related works in the literature. It can be seen from the graph that
the accuracy of the system developed by Muhammad et. al. (2005) is higher than that of Muhammad et. al.
(2006). This shows that the performance of Counterpropagation neural network is better than that of
Standard Backpropagation. The best recognition rate was achieved in the developed system with no
recognition failure. This means that the developed system was able to recognise characters irrespective of
the writing styles.
5. Conclusion and Future Work
In this paper, we have been able to develop an effective feature extraction technique for the proposed online
character recognition system using hybrid of geometrical and statistical features. The hybrid feature
extraction was developed to alleviate the problem of poor feature extraction algorithm of online character
recognition systems. In addition, a hybrid of a modified Counterpropagation and a modified Optical Backpropagation neural network model was developed for the proposed system. The performance of the online character recognition system under consideration was evaluated based on different learning rates, image sizes and database sizes. The results from the study were compared with some works in the literature using counterpropagation and standard backpropagation respectively. The results showed that counterpropagation performed better than standard backpropagation in terms of correct recognition, false recognition and recognition failure, with achieved recognition rates of 94% and 81% respectively. The
developed system achieved better performance with no recognition failure, 99% recognition rate and an
average recognition time of 2 milliseconds. Future work could be geared towards integrating an
optimization algorithm into the learning algorithms to further enhance the convergence of the neural
network.
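As a rough illustration of the zoning component of the hybrid feature extractor (Figure 2.1 partitions a character into 5 by 5 equal zones), the sketch below computes a 25-element pixel-density vector from a binary character image. The zone count and the density measure are assumptions for illustration, not the exact algorithm of the developed system:

```python
def zoning_features(image, zones=5):
    """Return a zones*zones vector of per-zone foreground-pixel densities.

    image: 2-D list (rows x cols) of 0/1 values, where 1 is a character pixel.
    The image dimensions are assumed divisible by `zones`.
    """
    rows, cols = len(image), len(image[0])
    zh, zw = rows // zones, cols // zones   # zone height and width
    features = []
    for zr in range(zones):
        for zc in range(zones):
            block = [image[r][c]
                     for r in range(zr * zh, (zr + 1) * zh)
                     for c in range(zc * zw, (zc + 1) * zw)]
            features.append(sum(block) / len(block))  # density in [0, 1]
    return features

# A 10x10 image whose top-left quarter is filled with character pixels
img = [[1 if r < 5 and c < 5 else 0 for c in range(10)] for r in range(10)]
f = zoning_features(img)
print(len(f))  # 25
```

In the full hybrid extractor this zoning vector would be concatenated with the stroke-information and contour-pixel features to form the global feature vector.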
References
Anoop, M.N. and Anil, K.J. (2004): Online Handwritten Script Recognition, IEEE Trans. PAMI, 26(1): 124-130.
Liu, C.L., Nakashima, K., Sako, H. and Fujisawa, H. (2004): Handwritten Digit Recognition: Investigation of Normalization and Feature Extraction Techniques, Pattern Recognition, 37(2): 265-279.
Mohamad, D. and Zafar, M.F. (2004): Comparative Study of Two Novel Feature Vectors for Complex Image Matching Using Counterpropagation Neural Network, Journal of Information Technology, FSKSM, UTM, 16(1): 2073-2081.
Naser, M.A., Adnan, M., Arefin, T.M., Golam, S.M. and Naushad, A. (2009): Comparative Analysis of
Radon and Fan-beam based Feature Extraction Techniques for Bangla Character Recognition, IJCSNS
International Journal of Computer Science and Network Security, 9(9): 120-135.
Pradeep, J., Srinivasan, E. and Himavathi, S. (2011): Diagonal Based Feature Extraction for Handwritten Alphabets Recognition using Neural Network, International Journal of Computer Science and Information Technology (IJCSIT), 3(1): 27-37.
Shanthi, N. and Duraiswamy, K. (2007): Performance Comparison of Different Image Sizes for Recognizing Unconstrained Handwritten Tamil Characters Using SVM, Journal of Computer Science, 3(9): 760-764.
Santosh, K.C. and Nattee, C. (2009): A Comprehensive Survey on Online Handwriting Recognition Technology and Its Real Application to the Nepalese Natural Handwriting, Kathmandu University Journal of Science, Engineering and Technology, 5(1): 31-55.
Simon G. and Horst B. (2004): Feature Selection Algorithms for the Generalization of Multiple Classifier
Systems and their Application to Handwritten Word Recognition, Pattern Recognition Letters, 25(11):
1323-1336.
Gupta, K., Rao, S.V. and Viswanath (2007): Speeding up Online Character Recognition, Proceedings of Image and Vision Computing New Zealand, Hamilton: 41-45.
Fonseca, M.J. and Jorge, J.A. (2001): Experimental Evaluation of an Online Scribble Recognizer, Pattern Recognition Letters, 22(12): 1311-1319.
Plamondon, R. and Srihari, S.N. (2000): On-line and Off-line Handwriting Recognition: A Comprehensive Survey, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(1): 63-84.
Kittler, J. and Roli, F. (2000): 1st International Workshop on Multiple Classifier Systems, Cagliari, Italy.
Kittler, J. and Roli, F. (2001): 2nd International Workshop on Multiple Classifier Systems, Cagliari, Italy.
Parizeau, M., Lemieux, A. and Gagné, C. (2001): Character Recognition Experiment using Unipen Data, Proceedings of ICDAR, Seattle: 10-13.
Muhammad, F.Z., Dzuulkifli, M., and Razib, M.O. (2006): Writer Independent Online Handwritten
Character Recognition Using Simple Approach, Information Technology Journal, 5(3): 476-484.
Freeman, J.A. and Skapura, D.M. (1992): Neural Networks: Algorithms, Applications and Programming Techniques, Addison-Wesley Publishing Company: 89-125.
Minai, A.A. and Williams, R.D. (1990): Acceleration of Backpropagation through Learning Rate and Momentum Adaptation, Proceedings of the International Joint Conference on Neural Networks: 1676-1679.
Riedmiller, M. and Braun, H. (1993): A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm, Proceedings of the IEEE International Conference on Neural Networks (ICNN), 1: 586-591, San Francisco.
Otair, M.A. and Salameh, W.A. (2005a): Online Handwritten Character Recognition using an Optical
Backpropagation Neural Network, Issues in Informing Science and Information Technology, 2: 787-797.
Rajashekararadhya, S.V. and Vanaja, P.R. (2008a): Handwritten Numeral Recognition of Three Popular South Indian Scripts: A Novel Approach, Proceedings of the Second International Conference on Information Processing (ICIP): 162-167.
Rajashekararadhya, S.V. and Vanaja, P.R. (2008b): Isolated Handwritten Kannada Digit Recognition: A Novel Approach, Proceedings of the International Conference on Cognition and Recognition: 134-140.
Fenwa, O.D., Omidiora, E.O. and Fakolujo, O.A. (2012): Development of a Feature Extraction Technique for Online Character Recognition System, Journal of Innovative Systems Design and Engineering, International Institute of Science, Technology and Education, 3(3): 10-23.
Figure 2.1: Character n in 5 by 5 (25 equal zones)
Figure 2.2: Block Diagram of the Developed Character Recognition System
Figure 2.3: The snapshot of Genius Pen (G-Pen 450) Digitizer for character acquisition
Figure 2.4: The Developed Hybrid Feature Extraction Model
a1 = E(W1P),  a2 = logsig(W2a1 + b2),  a3 = Maxsoft(W3a2 + b3)
Figure 2.5: The Hybrid Neural Network Model
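The layer equations accompanying Figure 2.5 describe a three-layer forward pass: a counterpropagation-style first layer, a log-sigmoid hidden layer, and a normalising output layer. The sketch below implements one plausible reading with NumPy; the exact forms of E(·) and "Maxsoft" are not fully specified in the text, so the winner-take-all competitive first layer and the softmax output used here are assumptions:

```python
import numpy as np

def logsig(x):
    """Log-sigmoid transfer function, 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    """Numerically stable softmax; outputs sum to 1."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def forward(P, W1, W2, b2, W3, b3):
    # Layer 1 (assumed): competitive layer -- the neuron whose weight
    # row lies closest to the input P wins, giving a one-hot a1.
    dists = np.linalg.norm(W1 - P, axis=1)
    a1 = np.zeros(W1.shape[0])
    a1[np.argmin(dists)] = 1.0
    # Layer 2: a2 = logsig(W2 a1 + b2)
    a2 = logsig(W2 @ a1 + b2)
    # Layer 3 (assumed softmax): a3 = softmax(W3 a2 + b3)
    return softmax(W3 @ a2 + b3)

rng = np.random.default_rng(0)
P = rng.normal(size=4)
out = forward(P, rng.normal(size=(6, 4)), rng.normal(size=(5, 6)),
              rng.normal(size=5), rng.normal(size=(3, 5)), rng.normal(size=3))
```

Because the output layer normalises its activations, `out` can be read directly as class scores over the character alphabet, with the largest entry taken as the recognised character.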
Figure 2.6: Graphic user interface of the developed Online Character Recognition System
Figure 2.7: Acquisition of character A using 3 strokes
Figure 2.8: Acquisition of character A using 2 strokes
Figure 2.9: Network parameter setting and loading data to be trained from the database
Figure 2.10: Network Training in progress
Figure 2.11: Result of recognition process displaying recognition status
Figure 2.12: Graph showing the effect of variation in Learning Rate and Database size on Epochs
Figure 2.13: Graph showing the effect of variation in Image size and Database size on Epochs
Figure 2.14: Graph showing the effect of variation in Database size and Epoch on the Recognition Rate
Figure 2.15: Graph showing the effect of variation in Image size and Database size on the Recognition Rate
Figure 2.16: Graph showing the effect of variation in system configurations and Database size on Epochs
CR = Correct Recognition; FR = False Recognition; RF = Recognition Failure
Figure 2.17: Graph showing performance evaluation of the developed system with related works