
3. Classification
Classification is a research area that was developed in the 1960s. It is the basic principle of many diagnosis support systems. It assigns a set of objects to a set of classes according to the description of those objects. This description is given through properties or specific conditions typical of these classes.
Objects are then classified according to whether or not they satisfy these conditions or properties. Classification methods can be supervised or unsupervised.
Supervised methods require the user to provide a description of the classes, whereas unsupervised methods are independent of the user: they are statistical grouping methods that sort objects according to their properties and form sets with similar characteristics.
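As a minimal illustration of this distinction (not taken from the paper; the points, labels, and class prototypes below are hypothetical), a supervised nearest-centroid rule uses labels supplied by the user, while an unsupervised k-means grouping forms sets from the properties of the objects alone:

```python
import numpy as np

# Hypothetical 2-D feature vectors describing objects.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [1.0, 0.9]])

# --- Supervised: the user supplies a description of the classes (labels). ---
y = np.array([0, 0, 1, 1])                       # labels provided by the user
centroids = np.array([X[y == c].mean(axis=0)     # one prototype per class
                      for c in np.unique(y)])

def classify(x):
    # Assign x to the class whose prototype is closest.
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

print(classify(np.array([0.15, 0.15])))          # -> 0

# --- Unsupervised: objects are grouped by similarity alone. ---
def kmeans(X, k=2, iters=10):
    centers = X[:k].copy()                       # naive initialisation
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels

print(kmeans(X))                                 # two statistical groups
```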
4. The Artificial Neural Networks
Artificial neural networks (ANN) are mathematical models inspired by the structure and behavior of biological neurons [12]. They are composed of interconnected units, called artificial neurons, capable of performing specific and precise functions [13]. ANN can approximate nonlinear relationships of varying degrees of complexity, which makes them well suited to the recognition and classification of data. Figure 1 illustrates this situation.
4.1. Architecture of Artificial Neural Networks
In an artificial neural network, each neuron is interconnected with other neurons to form layers, in order to solve a specific problem concerning the data presented to the network [14,15].
The input layer feeds data into the network; the role of its neurons is to transmit the data to be processed by the network. The output layer presents the results computed by the network for the input vector it was given. Between the network input and output, intermediate layers, called hidden layers, may be inserted. The role of these layers is to transform the input data in order to extract features that will subsequently be more easily classified by the output layer. In these networks, information is propagated from layer to layer, sometimes even within a layer, via weighted connections.
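As a sketch of this layer-to-layer propagation (the layer sizes, random weights, and sigmoid activation below are illustrative assumptions, not values from the paper), each layer applies its weighted connections to the activations of the previous layer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Hypothetical architecture: 4 inputs -> 3 hidden neurons -> 2 outputs.
sizes = [4, 3, 2]
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(size=m) for m in sizes[1:]]

def forward(x):
    # Propagate the input vector layer by layer through the weighted
    # connections; each layer transforms the data and passes the
    # extracted features on to the next layer.
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

print(forward(np.array([0.5, -1.0, 0.3, 0.8])))  # network output vector
```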
Figure 1. Black box of artificial neural networks.

A neural network operates in two consecutive phases: a design phase and a use phase. The first step is to choose the network architecture and its parameters: the number
of hidden layers and number of neurons in each layer.
Once these choices are fixed, we can train the network.
During this phase, the weights of network connections
and the threshold of each neuron are modified to adapt to
different input conditions. Once training is completed, the network enters its use phase and performs the work for which it was designed.
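This two-phase workflow can be written compactly with an off-the-shelf implementation; the scikit-learn MLPClassifier and the toy data below are illustrative choices, not the tools used by the authors.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training data: feature vectors and their known classes.
X_train = np.array([[0.0, 0.1], [0.1, 0.0], [0.9, 1.0], [1.0, 0.9]])
y_train = np.array([0, 0, 1, 1])

# Design phase: fix the architecture (here one hidden layer of 3 neurons).
# Training then adjusts the connection weights and biases.
net = MLPClassifier(hidden_layer_sizes=(3,), solver="lbfgs",
                    max_iter=2000, random_state=0)
net.fit(X_train, y_train)

# Use phase: the trained network performs the task it was designed for.
print(net.predict(np.array([[0.05, 0.05], [0.95, 0.95]])))  # -> [0 1]
```

The constructor call corresponds to the design phase, fit to the training phase, and predict to the use phase.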
4.2. Multilayer Perceptron
For a multilayer network, the number of neurons in the
input layer and output layer is determined by the problem
to be solved [14-16]. The architecture of this type of
network is illustrated in Figure 2. According to R. Lepage and B. Solaiman [14], the neural network has a single hidden layer whose number of neurons J is approximately equal to:

J ≈ (N + M)/2

where:
N: the number of input parameters;
M: the number of neurons in the output layer.
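Assuming this reconstructed heuristic J ≈ (N + M)/2, a small helper (with hypothetical input and output counts) gives the suggested hidden-layer size:

```python
import math

def hidden_neurons(n_inputs, n_outputs):
    # Heuristic size of the single hidden layer: roughly half the
    # sum of the input and output neuron counts, J = (N + M) / 2.
    return math.ceil((n_inputs + n_outputs) / 2)

# Hypothetical example: 9 input parameters and 4 output classes.
print(hidden_neurons(9, 4))  # -> 7
```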
4.3. Backpropagation Algorithm
The learning algorithm used is the gradient backpropagation algorithm [17]. This algorithm is used in feed-forward networks, that is, layered networks of neurons with an input layer, an output layer, and at least one hidden layer. There is no recursion in the connections and no connections between neurons of the same layer.
The principle of backpropagation is to present an input vector to the network and to compute the output by propagating it through the layers, from the input layer to the output layer via the hidden layers. This output is compared with the desired output, and an error is obtained. From this error, the gradient of the error is computed, which in turn is propagated from the output layer back to the input layer, hence the term backpropagation. This allows the weights of the network to be modified, and therefore learning to take place. The operation is repeated for each input vector until the stopping criterion is satisfied [18].
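The procedure just described (forward pass, output error, gradient propagated backwards, weight update, repetition until a stopping criterion) is sketched below for a single-hidden-layer network; the sigmoid activation, squared-error cost, learning rate, and XOR-style toy data are assumptions made for illustration, not details from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)

# Toy problem (XOR-like), purely illustrative.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])            # desired outputs

# One hidden layer: 2 inputs -> 3 hidden neurons -> 1 output.
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)
lr = 0.5

for epoch in range(5000):                         # stop criterion: fixed epoch count
    # Forward pass: propagate the inputs through the layers.
    H = sigmoid(X @ W1 + b1)                      # hidden activations
    Y = sigmoid(H @ W2 + b2)                      # network outputs

    # Error between the obtained and the desired output.
    E = Y - T

    # Backward pass: propagate the error gradient from output to input.
    dY = E * Y * (1.0 - Y)                        # output-layer delta
    dH = (dY @ W2.T) * H * (1.0 - H)              # hidden-layer delta

    # Update the connection weights and thresholds (biases).
    W2 -= lr * H.T @ dY
    b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH
    b1 -= lr * dH.sum(axis=0)

print(np.round(Y, 2).ravel())                     # should approach [0 1 1 0]
```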
4.4. Learning Algorithm
The objective of this algorithm is to minimize the maxi-
mum possible error between the outputs of the network
Figure 2. Architecture of a multilayer perceptron network.