4.1 Abstract
4.2 Introduction
4.3 Proposed Algorithm
4.4 Unsupervised Learning Algorithm
4.5 Continuous Output Averaging Algorithm
4.6 Results
4.7 References
4.8 Published
4.1 Abstract

Feed Forward type multilayered Neural Networks are very popular for their generalization and feature-extraction properties. However, a Feed Forward type network initially needs supervised learning to train it. Self-organizing networks, on the other hand, are used in applications such as image classification, speech recognition and language translation. An algorithm is developed here to train a multilayered Feed Forward type network in both supervised and unsupervised modes. Unsupervised learning is achieved by enhancing the maximum output and simultaneously de-clustering the crowded classes. The proposed algorithm also allows forcing any output to learn a desired class using supervised training such as Back Propagation. Since the network uses a multilayered architecture, it classifies features, and the population of each class can be controlled with greater flexibility.
4.2 Introduction

Multilayered Feed Forward type Neural Networks (FFNN) are the most popular and are often used with supervised learning algorithms such as Back Propagation. These networks have excellent properties such as generalization, feature extraction and mapping of nonlinear boundaries. Once an FFNN is trained with a set of input-output pairs, it cannot respond to new features or patterns. For example, if an FFNN is trained with the speech of one person, it will not recognize the voices of others until it is trained again. It also needs precise knowledge of the desired output during training. It is often seen that multilayered network training can be accelerated by individually training the hidden layers in unsupervised mode. Complex networks are often hierarchical in nature, and the individual networks need generalized, unsupervised training to organize and classify inputs. Here a general-purpose algorithm is designed to organize the data using unsupervised learning. The algorithm is similar to the conventional Back Propagation technique, except that the error function does not use a desired output. The desired output is replaced by a combination of two vectors: (a) the maximum output element and (b) the average occurrence of the maximum output. The first vector is used as an excitatory element and the second as an inhibitory element. This algorithm does not use boundary convergence around the maxima as Kohonen's algorithm does; hence the convergence of class boundaries need not be synchronized with the convergence of the error.
4.3 Proposed Algorithm

Multilayered Feed Forward type networks are trained by minimizing the energy function of the output error in the form

E = ½ Σ_k (Y_k − D_k)^2    ...(1)

where D_k is the desired output and Y_k is the actual output of the network.
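Equation (1) can be checked numerically; the sketch below uses illustrative values, not data from the paper:

```python
import numpy as np

def energy(Y, D):
    """Sum-of-squares output error E = 0.5 * sum_k (Y_k - D_k)^2, eq. (1)."""
    return 0.5 * np.sum((Y - D) ** 2)

Y = np.array([0.9, 0.1, 0.2])  # actual network outputs Y_k
D = np.array([1.0, 0.0, 0.0])  # desired outputs D_k
E = energy(Y, D)               # 0.5 * (0.01 + 0.01 + 0.04) = 0.03
```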
Any network weight W_ij is modified in proportion to the first derivative dE/dW_ij. In the case of unsupervised training, the desired output D_k in equation (1) is unknown. To replace D_k in unsupervised mode, we assume the following:
- Only one output should be high for each set of inputs.
- Each output should be energized by an input subset, or class, which has similar properties or features.
- The output distribution is uniform: the number of input sets corresponding to each output is uniform, or follows a predefined profile.
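One plausible reading of how the two vectors from the Introduction combine into the pseudo-target A_k is sketched below. The names `T`, `F` and the decay constant `beta` are assumptions for illustration, as is the excitatory-minus-inhibitory combination:

```python
import numpy as np

def pseudo_target(Y, F, beta=0.01):
    """Build a stand-in for the desired output D_k: an excitatory
    winner-take-all vector T, inhibited by F, a running average of
    how often each output has been the maximum."""
    T = np.zeros_like(Y)
    T[np.argmax(Y)] = 1.0            # (a) only one output high per input set
    F = (1.0 - beta) * F + beta * T  # (b) average occurrence of the maximum
    A = T - F                        # excitatory minus inhibitory (assumed form)
    return A, F

Y = np.array([0.2, 0.8, 0.3])
A, F = pseudo_target(Y, np.zeros(3))  # winner at index 1 is reinforced
```

The inhibitory term F grows for outputs that win too often, de-clustering crowded classes as described in the abstract.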
To achieve the above criteria, we perform the following steps. Figure 1 shows the proposed two-layer classifier. The weights are updated using the back propagation algorithm as follows.
W_new = W_old − n * dE/dW

where n is the learning rate. The correction for the output layer is given by

W_jk = W_jk − n * (Y_k − A_k) * Y_k * (1 − Y_k) * Y_j

and for the hidden layer by

W_ij = W_ij − n * Y_i * Y_j * (1 − Y_j) * Σ_k (Y_k − A_k) * Y_k * (1 − Y_k) * W_jk
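The two correction rules can be sketched as a single training step. Logistic activations are assumed, consistent with the Y(1 − Y) derivative terms, and `A` is the pseudo-target standing in for the desired output:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(Yi, Wij, Wjk, A, n=0.2):
    """One update of both layers, W_new = W_old - n * dE/dW,
    with A_k standing in for the desired output D_k."""
    Yj = sigmoid(Yi @ Wij)             # hidden-layer outputs Y_j
    Yk = sigmoid(Yj @ Wjk)             # output-layer outputs Y_k
    dk = (Yk - A) * Yk * (1 - Yk)      # output deltas: (Y_k - A_k) Y_k (1 - Y_k)
    dj = Yj * (1 - Yj) * (dk @ Wjk.T)  # hidden deltas: sum over k of dk * W_jk
    Wjk -= n * np.outer(Yj, dk)        # W_jk correction: n * delta_k * Y_j
    Wij -= n * np.outer(Yi, dj)        # W_ij correction: n * Y_i * delta_j
    return Wij, Wjk, Yk
```

With a fixed target A this is ordinary gradient descent on equation (1); the unsupervised behavior comes from recomputing A from the outputs at each step.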
To organize the outputs in order, we make the following modification to the above algorithm: we add a small weighted random number, between −Kr and +Kr, to K_max. As a result, each output is spread to its neighbors, and the neighboring outputs acquire similar properties through the random variable Kr.
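The neighborhood-spreading step could be sketched as below. Reading the text as jittering the winning index K_max is an interpretation, and the value of Kr is an assumed constant:

```python
import numpy as np

def jitter_winner(Y, Kr=1.5, rng=np.random.default_rng(0)):
    """Spread each class to its neighbors: shift the winning index
    K_max by a random amount drawn from (-Kr, +Kr), so adjacent
    outputs are occasionally trained toward the same input."""
    k_max = int(np.argmax(Y))
    k = k_max + int(round(rng.uniform(-Kr, Kr)))
    return min(max(k, 0), len(Y) - 1)  # clamp to a valid output index
```

Because the shift is small and zero-centered, the true winner is still trained most often, while its immediate neighbors gradually acquire similar properties.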
4.6 Results

Fig. 1 shows the proposed network used to classify the ASCII character set: a two-layered network trained with the self-organization algorithm described above. The network consists of (a) 96 input neurons organized as an 8x12 character matrix, (b) 12 neurons in the hidden layer and (c) 32 neurons in the output layer, with 1536 connections in total. Fig. 2 shows the classification results. The input patterns are equally distributed into thirty-two classes having similar features. We also observe feature similarity in the neighboring outputs.
4.7 References

1. Kohonen T., "The 'Phonetic Typewriter'", Computer, 21(3):11-22, March 1988.
2. Kohonen T., Self-Organization and Associative Memory, 2nd edition, New York: Springer-Verlag, 1988.
3. Widrow B., Winter R. G., and Baxter R. A., "Layered Neural Nets for Pattern Recognition", IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. 36, No. 7, July 1988.
4. Lippmann R. P., "An Introduction to Computing with Neural Nets", IEEE ASSP Magazine, pp. 4-22, April 1987.
5. Rumelhart D. E., Hinton G. E., and Williams R. J., "Learning Representations by Back-Propagating Errors", Nature, Vol. 323, 1986.
Fig. 1: Multilayered Feed Forward type network with 8x12 binary image inputs, 12 hidden neurons and 32 output neurons. The network is used in self-learning mode.

Fig. 2: The classified outputs of the network of Fig. 1.