Thursday, September 17, 2009

A16 Neural Networks

We are done with pattern recognition and classification by (1) the mean distance method and (2) probabilistic classification by LDA. Let us now try another classification technique: (3) probabilistic neural networks. A neural network is a mathematical model of how neurons in the brain work. Among the many functions and varied capabilities of the brain is classification.

Just like the brain, a neural network can perform tasks based on experience. It can carry out mapping or classification only after learning the patterns of the classification scheme, something it can be taught by giving examples, i.e., by training the program. Each example serves to improve the neural network program's recognition.

The architecture of this neural network is illustrated as:

Input layer - where the features of the set are fed. This first layer computes the distances from the input vector to the training input vectors and produces a vector whose elements indicate how close the input is to each training input.


Hidden layer - collects the contributions from the input layer, computes the summed probability for each class of inputs, and passes the results to the output layer.


Output layer - outputs the class to which the set of feature vectors belongs.

In Scilab, we use the ANN (Artificial Neural Network) toolbox to classify the set of features extracted in Activity 14. The program used was adapted from Cole's blog.



In Activity 14, the samples consisted of two classes of chips: Cheese-It, labeled (0), and Pillows, labeled (1). Each class has 12 samples, and each sample is described by 4 features, namely: hor*ver or area 1, area 2 (number of pixels), perimeter, and the ratio area2/perimeter.


Procedure:

1. Specify the number of neurons in the input layer (4), hidden layer (4), and output layer (1).


The ANN architecture

N = [4,4,1];

There are 4 (normalized) features extracted from each sample. These features are read into x:

x=read('data6train.txt',-1,4);
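A minimal sketch of this step (the max-per-column normalization and the final transpose are my assumptions; the ANN toolbox demos feed the patterns one per column):

// Read the 12 training samples (rows), 4 features each (columns)
x = read('data6train.txt', -1, 4);
// Normalize each feature column to [0,1] (assumed: divide by the column maximum)
for k = 1:4
    x(:,k) = x(:,k) / max(x(:,k));
end
// The ANN toolbox works with one pattern per column, so transpose to 4 x 12
x = x';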


2. Training

Twelve objects, 6 from each class, were used in the training process as the example classification. Their target labels are:

t=[0 0 0 0 0 0 1 1 1 1 1 1];


3. Learning

The learning rate and threshold are specified by:

lp = [0.1,0];

Here 0.1 is the learning rate and 0 is the threshold for the error tolerated by the network.

Training cycles or iterations:

T = 1000;


4. Train the network using ann_FF_Std_online and run the classification using ann_FF_run:


W = ann_FF_init(N);                    // initialize the weight hypermatrix first
W = ann_FF_Std_online(x,t,N,W,lp,T);   // train with standard online backpropagation

c = ann_FF_run(x,N,W);                 // run the trained network on the inputs
c = c'



5. Results and Discussion

The expected output values were supposed to be 0's and 1's. However, the actual results, as seen in column 1 of Table 1, were decimals. Still, the value for each sample was very close to 0 or 1, so we can say that the program has learned the proper grouping of each sample.
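To turn those decimal outputs into hard 0 or 1 labels, they can simply be rounded; a small sketch (the comparison against the target vector t is my addition):

labels = round(c);              // threshold the network outputs to 0 or 1
hits = sum(labels == t');       // count matches with the target labels
mprintf('%d of %d training samples correctly classified\n', hits, length(t));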

Next, I tested whether the program is now ready for the real classification.


This test worked well: the results show that the program achieved 100% correct classification.
Then, to see whether or not the program depends on the arrangement of the samples, I shuffled the order of the test samples.
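A minimal sketch of such a shuffle (the matrix name xtest, holding the test samples one per column, and the use of grand for the permutation are my assumptions):

idx = grand(1, 'prm', (1:24)');         // random permutation of the 24 sample indices
xs = xtest(:, idx);                     // reorder the test patterns
cs = round(ann_FF_run(xs, N, W)')       // classify the shuffled set with the trained weights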

Again the test worked well: the samples were classified correctly, showing that the program adapts to the arrangement of the inputs.
Then I tested the effect of varying the parameter T, the number of training cycles or iterations.

As expected, lowering the T value reduced the accuracy of the classification while increasing it improved the accuracy, although at the cost of a longer runtime. Still, the classifications for both cases were all correct.
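One way to script that comparison (a sketch; the particular T values and retraining from freshly initialized weights are my choices):

for T = [100 1000 5000]
    W = ann_FF_init(N);                        // fresh weights for each run
    W = ann_FF_Std_online(x, t, N, W, lp, T);  // retrain with this number of cycles
    disp(round(ann_FF_run(x, N, W)));          // show the resulting classification
end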


Afterwards I shuffled all 24 samples to test whether the program could still sort them into their respective classes, and then rearranged the output back in order.


These tests proved that the program has learned the classification pattern and is 100% accurate.

SOURCE: A16 Neural Networks Manual and Cole's blog

ASSESSMENT: 10/10 because I was able to use NN in Scilab and make it classify my samples into their respective classes correctly.

Wednesday, September 16, 2009

A17 Photometric Stereo

Fig1. Images I1,I2,I3 and I4


Based on the manual for this activity, from the point source locations (1) and the intensities obtained from the 4 images of the object, we can compute g via equation (2). Next we normalize g to obtain the normal vector (3), taking the component surface normals (nx, ny, nz) from each row.

We then relate these surface normals to the partial derivatives of f (4) and compute the elevation z = f(u,v) by integrating (5), implemented as a cumsum of (4) over x and y. The result can easily be plotted in 3D using mesh.
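A minimal Scilab sketch of those steps (the file name Photos.mat, the image variable names I1..I4, and the listed source directions are assumptions; substitute the values given in the manual):

// Load the four images taken under four different point sources
loadmatfile('Photos.mat');               // assumed to provide I1, I2, I3, I4
[r, c] = size(I1);
I = [I1(:)'; I2(:)'; I3(:)'; I4(:)'];    // stack each image as one row: 4 x (r*c)

// Source directions, one row per image (illustrative values only)
V = [0.085832  0.17365  0.98106;
     0.085832 -0.17365  0.98106;
     0.17365   0.0      0.98481;
     0.16318  -0.34202  0.92542];

g = inv(V'*V)*V'*I;                      // eq. (2): least-squares solution of I = V*g
gmag = sqrt(sum(g.^2, 'r'));             // magnitude of g at each pixel
n = g ./ (ones(3,1)*(gmag + 1e-10));     // eq. (3): unit surface normals (nx; ny; nz)

dfdx = -n(1,:) ./ (n(3,:) + 1e-10);      // eq. (4): partial derivatives of f
dfdy = -n(2,:) ./ (n(3,:) + 1e-10);
z = cumsum(matrix(dfdx, r, c), 2) + cumsum(matrix(dfdy, r, c), 1);   // eq. (5)
mesh(z);                                 // 3D plot of the reconstructed surface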

Fig 2. 3D plot of the object shape using mesh


SOURCE: A17 Photometric Stereo Manual


ASSESSMENT: 10 because I was able to reconstruct the surface using the directions in the manual :) and I was able to use mesh :)

Wednesday, September 2, 2009

A15 Probabilistic Classification

Linear discriminant analysis (LDA) is one of the techniques used in pattern recognition to classify two or more classes of objects, in this case the separation of the 2 kinds of chips using the 4 features extracted from the previous activities.

The classification works by finding the linear combination of features that best separates the classes. Assuming the objects' features are linearly separable, we can use the linear discriminant model (LDA) formula given as:
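For reference, the discriminant function from that tutorial (my reconstruction of the missing formula image) has the form

$f_i = \mu_i C^{-1} x_k^T - \tfrac{1}{2}\mu_i C^{-1}\mu_i^T + \ln(p_i)$

where $\mu_i$ is the mean feature (row) vector of class i, C is the pooled covariance matrix of the two classes, $x_k$ is the feature vector of the test sample, and $p_i$ is the prior probability of class i; the sample is assigned to the class with the larger $f_i$.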


SOURCE: http://people.revoledu.com/kardi/tutorial/LDA/LDA.html#LDA

This is a summary of the data used in activity 14:

Table 1.
Procedure:
1. LDA
After feeding the data in Table 1 as input x to the LDA formula implemented in Scilab, each sample gets a discriminant value f for each class (see the sketch after this procedure).
2. Determine if f1>f2:
By comparison, a large value of f1 relative to f2 means the object belongs to class 1, and to class 2 otherwise.
The LDA successfully assigned all 24 samples to their respective classes.
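A minimal Scilab sketch of the two steps above (the data file name, the equal priors, and the class ordering of the rows are my assumptions):

// x: 24 x 4 feature matrix; first 12 rows are class 1, last 12 are class 2
x = read('data6.txt', -1, 4);            // hypothetical file with all 24 samples
x1 = x(1:12, :);   x2 = x(13:24, :);
mu1 = mean(x1, 'r');  mu2 = mean(x2, 'r');     // class mean (row) vectors
// pooled (within-class) covariance matrix
c1 = (x1 - ones(12,1)*mu1)' * (x1 - ones(12,1)*mu1) / 12;
c2 = (x2 - ones(12,1)*mu2)' * (x2 - ones(12,1)*mu2) / 12;
C = (12*c1 + 12*c2) / 24;
p1 = 0.5;  p2 = 0.5;                           // equal priors for the two classes

for k = 1:24
    xk = x(k, :);
    f1 = mu1*inv(C)*xk' - 0.5*mu1*inv(C)*mu1' + log(p1);
    f2 = mu2*inv(C)*xk' - 0.5*mu2*inv(C)*mu2' + log(p2);
    if f1 > f2 then
        mprintf('sample %2d -> class 1\n', k);
    else
        mprintf('sample %2d -> class 2\n', k);
    end
end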

ASSESSMENT: 10/10 because I successfully implemented LDA in scilab and used EXCEL to summarize the data into tables :)