Wednesday 27 February 2013

Effect of changing the number of neurons









%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Code to create a neural network - MLP
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

tic
[P,T]=simplefit_dataset;

plot(P,T,'x')
grid; xlabel('time (s)'); ylabel('output'); title('simplefit dataset');

net=newff([0 10], [5,1], {'tansig','purelin'},'traingd');

net1 = train(net, P, T);

% (default parameters) 1000 iterations, gradient 1e-5

N= sim(net1,P);

plot( P,T, P,N, P,N-T); grid;
legend('Original function', 'Simulated function','Error');
title('Neural Network Simulated function with # neurons in the hidden layer');
xlabel('time (s)'); ylabel('output');
toc

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Effect of changing the number of neurons

Results with five neurons:

10 neurons
Elapsed time is 10.174713 seconds.

15 neurons
Elapsed time is 10.453801 seconds.

20 neurons
Elapsed time is 11.307679 seconds.

25 neurons
Elapsed time is 12.205114 seconds.

30 neurons
Elapsed time is 12.442988 seconds.

The difference between 25 and 30 neurons is negligible; increasing the number of neurons further gives no significant improvement in the gradient.
Time increases because the program has to do more calculations for each extra neuron.

Increasing the number of iterations improves performance; however, the time taken is longer.
For 30 neurons with 5000 iterations, elapsed time is 45.844242 seconds.


The training algorithm can be altered to improve learning, e.g. replacing traingd with trainlm, the Levenberg-Marquardt algorithm. It can give an output with very low error in fewer iterations and with fewer neurons in the hidden layer. Elapsed time is 15.018103 seconds. The time taken for 15 neurons at 500 iterations is longer than it would be for the traingd algorithm, as the calculations made to adjust the weights are more complex, but it is still quicker using this algorithm to get an accurate representation of the original function.
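For reference, the single weight update that traingd performs each epoch can be sketched in NumPy (a Python illustration, not the toolbox code; the shapes assume one input, a tansig hidden layer, and a purelin output, as in the newff call above, and the toy data at the bottom is made up):

```python
import numpy as np

def tansig(x):
    # MATLAB's tansig is the hyperbolic tangent
    return np.tanh(x)

def traingd_step(W1, b1, W2, b2, P, T, lr=0.01):
    """One batch gradient-descent update for a tansig/purelin MLP (MSE loss)."""
    n = P.shape[1]                       # number of training samples
    A1 = tansig(W1 @ P + b1)             # hidden layer activations
    A2 = W2 @ A1 + b2                    # purelin (identity) output layer
    E = A2 - T                           # output error
    d2 = 2.0 * E / n                     # dMSE/dA2; purelin derivative is 1
    d1 = (W2.T @ d2) * (1.0 - A1**2)     # backprop: tansig' = 1 - tansig^2
    W2 -= lr * d2 @ A1.T
    b2 -= lr * d2.sum(axis=1, keepdims=True)
    W1 -= lr * d1 @ P.T
    b1 -= lr * d1.sum(axis=1, keepdims=True)
    return np.mean(E**2)                 # MSE before the update

# Made-up toy run: 5 hidden neurons fitting y = x^2 on [0, 1]
rng = np.random.default_rng(0)
P = np.linspace(0.0, 1.0, 20).reshape(1, -1)
T = P**2
W1 = rng.normal(size=(5, 1)); b1 = np.zeros((5, 1))
W2 = rng.normal(size=(1, 5)); b2 = np.zeros((1, 1))
mse_start = traingd_step(W1, b1, W2, b2, P, T)
for _ in range(500):
    mse_end = traingd_step(W1, b1, W2, b2, P, T)
```

trainlm replaces this simple step with a Levenberg-Marquardt update, which uses second-order (Jacobian) information — more work per iteration, but far fewer iterations.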



Monday 11 February 2013

Function Approximation with AI methodologies


The first one is about using neural networks to solve a calculation in a complicated environment.
The title is 'Application of artificial neural networks to calculate the partial gas concentrations in a mixture'.
http://www.sciencedirect.com/science/article/pii/S0925400501007365

The second one is about using neural networks to forecast.
The title is 'Bankruptcy prediction using neural networks'.
http://www.sciencedirect.com/science/article/pii/0167923694900248

The final one is about how neural networks learn from experience.
http://books.google.co.uk/books?hl=zh-CN&lr=&id=FJblV_iOPjIC&oi=fnd&pg=PA181&dq=neural+networks+for+calculate&ots=zYIj3gKYQS&sig=VBXX2F_YjXnzNRSY4G5ptcJjpEs&redir_esc=y#v=onepage&q=neural%20networks%20for%20calculate&f=false

I think these three papers cover the main development directions of neural networks.


P= [-0.4 -0.5 0.6; 0.9 0 0.1];
T= [1 1 0];

net=newp([-1 1;-1 1],1);   % perceptron with two inputs in the range [-1,1]
net.trainParam.epochs = 20;
net=train(net,P,T);        % train on the three example vectors
Y=sim(net,P)               % outputs for the training inputs
E1=mae(Y-T)                % mean absolute error
Q=[0.6 0.9 -0.1; -0.1 -0.5 0.5];   % new, unseen inputs
Y1=sim(net,Q)              % classify the new inputs
figure;
plotpv(Q,Y1);              % plot the classified points
plotpc(net.iw{1},net.b{1}) % plot the decision boundary
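The perceptron steps above can be mirrored in plain Python/NumPy (an illustration of hardlim and the learnp rule, not the toolbox code; it uses the same P and T as the newp example):

```python
import numpy as np

def hardlim(x):
    # hard-limit activation: 1 if x >= 0, else 0
    return (np.asarray(x) >= 0).astype(float)

def train_perceptron(P, T, epochs=20):
    """Rosenblatt perceptron with the learnp rule: w <- w + e*p, b <- b + e."""
    w = np.zeros(P.shape[0])
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for p, t in zip(P.T, T):
            e = t - hardlim(w @ p + b)
            if e != 0:
                w += e * p
                b += e
                mistakes += 1
        if mistakes == 0:        # converged: every input classified correctly
            break
    return w, b

# Same training set as the newp example above
P = np.array([[-0.4, -0.5, 0.6],
              [0.9, 0.0, 0.1]])
T = np.array([1.0, 1.0, 0.0])
w, b = train_perceptron(P, T)
Y = hardlim(w @ P + b)   # reproduces T: these classes are linearly separable
```

Because the two classes are linearly separable, the rule converges in a handful of epochs and the final (w, b) define the same kind of decision boundary that plotpc draws.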



P=[0.5152 0.8173 1.0000 ;
     0.8173 1.0000 0.7308;
     1.0000 0.7308 0.1390;
     0.7308 0.1390 0.1087;
     0.1390 0.1087 0.3520;
     0.1087 0.3520 0.0000;]';
T=[0.7308 0.1390 0.1087 0.3520 0.0000 0.3761];
net=newff([0 1;0 1;0 1],[5,1],{'tansig','logsig'},'traingd'); % 3 inputs, 5 hidden neurons
net.trainParam.epochs=5000;
net.trainParam.goal=0.01;
net.trainParam.lr=0.1;   % learning rate (a bare LP.lr=0.1 has no effect on the network)
net=train(net,P,T);
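The P and T matrices above are a sliding window over a single time series; a small Python sketch (illustrative, with the window width of 3 read off the matrices) of how they are built:

```python
import numpy as np

# The series behind the P/T matrices above
series = [0.5152, 0.8173, 1.0000, 0.7308, 0.1390,
          0.1087, 0.3520, 0.0000, 0.3761]

def sliding_window(s, width=3):
    """Each column of P holds `width` consecutive samples; T is the next sample."""
    P = np.array([s[i:i + width] for i in range(len(s) - width)]).T
    T = np.array(s[width:])
    return P, T

P, T = sliding_window(series)
# First column [0.5152, 0.8173, 1.0000] with target 0.7308 matches the matrices above
```

So the network is being trained to predict the next value of the series from the previous three.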


newp(): create a perceptron
hardlim(): hard-limit activation function
learnp(): perceptron learning function
train(): neural network training function
sim(): neural network simulation function
mae(): mean absolute error performance function

Function Approximation with AI methodologies


5/2/13

I found out how to get the NN to display the function it has created. You use the sim command to simulate the results; for example:



x=0:0.05:2;
y=humps(x);

P=x; T=y;

plot(P,T,'x')

xlabel('time'); ylabel('output'); title('Original Function');

net = fitnet(3, 'trainlm');  % fitnet is a feedforward network: 'n' hidden neurons and a training function

% default settings: 1000 iterations
net = train(net,P,T);
view(net);      % diagram of network
y = net(x);     % evaluate the trained network
a = sim(net,P); % equivalent to y = net(x)

% Plot result and compare
plot(P,a-T,P,a, P,T); grid;
legend('error', 'NN function', 'Original');



If you increase the number of neurons you can see the NN function becoming closer to the original, and the error line getting closer to zero.

humps(x) is a demo function which is equivalent to :

y = 1 ./ ((x-.3).^2 + .01) + 1 ./ ((x-.9).^2 + .04) - 6;
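As a quick check, the same formula in Python (just an illustration; humps itself is a MATLAB demo function):

```python
def humps(x):
    # Same formula as MATLAB's humps: two sharp peaks near x = 0.3 and x = 0.9
    return 1.0/((x - 0.3)**2 + 0.01) + 1.0/((x - 0.9)**2 + 0.04) - 6.0

print(humps(0.3))   # peak value: 1/0.01 + 1/0.40 - 6, i.e. about 96.5
```

The sharp peaks are what make humps a good stress test for the network: more hidden neurons are needed to follow them closely.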

Alternatively you can use the data set:

[x,y] = simplefit_dataset;  % (no need for the x=0:0.05:2 line with this)

Friday 1 February 2013

To make the plan and know the aim of this project


data set website with data we can use to test our network:

http://kdd.ics.uci.edu/

matlab tutorial for function fitting

http://www.mathworks.co.uk/help/nnet/gs/fitting-a-function.html

The toolbox doesn't output a function that relates the input to the output... The neural network itself is the function!
This is how Dr Goulermas has explained it to me.
He says our project should be more to do with the explanation of the maths of neural networking. We can also determine the effects of changing the number of neurons for SLPs and MLPs (single and multi-layer perceptrons) mathematically, then verify that in Matlab.

PROJECT LOG


For function fitting, Matlab features a neural network fitting tool which uses neural networks to approximate the data's function. We're going to need to familiarise ourselves with neural network theory and this Matlab toolbox. I've uploaded the user guide, and it has some examples we can practise with (there are more on the Mathworks website).

There are a few lectures and tutorials on YouTube about neural networks; this one is good as a brief and basic introduction:

http://www.youtube.com/watch?v=DG5-UyRBQD4

Matlab code for linear regression (taken from the Mathworks website):

[x,t] = simplefit_dataset;
net = feedforwardnet(20);
net = train(net,x,t);
y = net(x);
[r,m,b] = regression(t,y);
plotregression(t,y)

Running this you will see the NNT training tool go through several iterations to learn the best weights and biases to give the line of best fit. The program then plots the data with the equation of the line along the left side of the graph.

The 'simplefit_dataset' is just some example data that Matlab provides.
You can put in your own data; for example, using x=1:10 and t=[1.2,2.5,2.9,3.1,4.8,5.6,7.2,8.8,9.2,10.0] will give a line y=0.8t + 2.1. The R=0.8082 describes how closely the data fits the line.
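What the regression(t,y) step computes can be sketched in Python with NumPy (illustrative only; polyfit gives the least-squares line and corrcoef the R value, and the sample points are made up):

```python
import numpy as np

def regression_fit(t, y):
    """Least-squares line y ~ m*t + b plus correlation coefficient R."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    m, b = np.polyfit(t, y, 1)     # slope and intercept of the best-fit line
    r = np.corrcoef(t, y)[0, 1]    # R = 1 would mean a perfect linear fit
    return r, m, b

# Made-up data: nearly collinear points give R close to 1
r, m, b = regression_fit([1, 2, 3, 4], [2.1, 4.0, 6.2, 7.9])
```

An R near 1 means the network output tracks the targets almost linearly; lower values, like the 0.8082 above, mean a looser fit.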


I also found past experimental results at:

http://year2projects.blogspot.co.uk/

There are three types of function approximation: the curve fitting toolbox, neural networks, and interpolation functions.

I read some Chinese theses about interpolation approximation. It is used when there is not enough data and more points are needed. It finds the regularity of the distribution and constructs a function connecting the known points, so values between any two known points can be estimated.

http://wenku.baidu.com/view/5fa11641be1e650e52ea99a1

I still need some time to learn how to write the code that implements interpolation approximation.
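As a starting point, here is a minimal Python/NumPy sketch of the idea (np.interp connects known points with straight lines; MATLAB's interp1 plays the same role, and the sample data here is made up):

```python
import numpy as np

# Known data points: samples of y = x^2
xk = np.array([0.0, 1.0, 2.0, 3.0])
yk = np.array([0.0, 1.0, 4.0, 9.0])

# Estimate an unknown value by joining the two neighbouring known points
y_mid = np.interp(1.5, xk, yk)   # linear estimate between (1, 1) and (2, 4)
print(y_mid)                     # 2.5, versus the true value 1.5**2 = 2.25
```

Piecewise-linear interpolation is the simplest case; spline interpolation fits smoother curves between the same known points.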