
Sunday, 3 March 2013

Began to design the poster

We began to design the poster. The poster aims to draw more attention to the results of our project. We wanted it to stand out from the others while still being academic enough. This is a master template that we will imitate.

Using the network to forecast

[input_con, target_con] = concrete_dataset;  % inputs: 8x1030, targets: 1x1030

% Samples are already stored one per column, so no transpose is needed.
p = input_con;
t = target_con;

net = fitnet(50);          % fitting network with 50 hidden neurons
net = train(net, p, t);
% view(net)
y = net(p);
plotregression(t, y);      % regression plot: targets vs. network outputs
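
To check the quality of the fit numerically, here is a minimal follow-up sketch (assuming net, p and t from the script above):

% Evaluate the trained network's error (assumes net, p, t from above)
y = net(p);
err = perform(net, t, y);   % mean squared error by default for fitnet
fprintf('MSE on the training data: %g\n', err);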



Friday, 1 March 2013

The effect of the learning rate.
P = [0.5152 0.8173 1.0000;
     0.8173 1.0000 0.7308;
     1.0000 0.7308 0.1390;
     0.7308 0.1390 0.1087;
     0.1390 0.1087 0.3520;
     0.1087 0.3520 0.0000]';
T = [0.7308 0.1390 0.1087 0.3520 0.0000 0.3761];
net = newff([0 1; 0 1; 0 1], [5, 1], {'tansig', 'logsig'}, 'traingd');
net.trainParam.epochs = 1000;
net.trainParam.goal = 0.01;
net.trainParam.lr = 0.01;   % learning rate (a bare LP.lr does not affect the network)
net = train(net, P, T);

If the learning rate is too large, training may oscillate between poor solutions.
If the learning rate is too small, learning will be very slow.

learning rate = 0.01: performance = 0.128
learning rate = 0.1:  performance = 0.185
learning rate = 0.2:  performance = 0.337
learning rate = 0.5:  performance = 0.308
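
A minimal sketch that sweeps the learning rate automatically (assuming the same P and T as above; exact performance values will vary from run to run):

% Sweep the learning rate and record the final training performance.
rates = [0.01 0.1 0.2 0.5];
perf = zeros(size(rates));
for i = 1:numel(rates)
    net = newff([0 1; 0 1; 0 1], [5, 1], {'tansig', 'logsig'}, 'traingd');
    net.trainParam.epochs = 1000;
    net.trainParam.goal = 0.01;
    net.trainParam.lr = rates(i);
    net.trainParam.showWindow = false;   % suppress the training GUI
    [net, tr] = train(net, P, T);
    perf(i) = tr.perf(end);              % final mean squared error
end
disp([rates; perf]);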

Wednesday, 27 February 2013

Effect of changing the number of neurons

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Code to create a neural network (MLP)
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

tic
[P,T]=simplefit_dataset;

plot(P,T,'x')
grid; xlabel('time (s)'); ylabel('output'); title('simplefit dataset');

net=newff([0 10], [5,1], {'tansig','purelin'},'traingd');

net1 = train(net, P, T);

%(default parameters) 1000 iterations, minimum gradient 1e-5

N= sim(net1,P);

plot( P,T, P,N, P,N-T); grid;
legend('Original function', 'Simulated function','Error');
title('Neural Network Simulated function with # neurons in the hidden layer');
xlabel('time (s)'); ylabel('output');
toc

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Effect of changing the number of neurons (elapsed training time):

10 neurons: 10.174713 seconds
15 neurons: 10.453801 seconds
20 neurons: 11.307679 seconds
25 neurons: 12.205114 seconds
30 neurons: 12.442988 seconds

The difference between 25 and 30 neurons is negligible: adding more neurons gives no significant improvement in the gradient.
The time increases because the program has to do more calculations for each extra neuron.

Increasing the number of iterations improves performance; however, the time taken is longer.
For 30 neurons with 5000 iterations, the elapsed time is 45.844242 seconds.
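
A minimal sketch that repeats the timing experiment for each hidden-layer size (assuming P and T from the simplefit script above):

% Time training for several hidden-layer sizes (assumes P, T from above).
sizes = [10 15 20 25 30];
for n = sizes
    net = newff([0 10], [n, 1], {'tansig', 'purelin'}, 'traingd');
    net.trainParam.showWindow = false;   % suppress the training GUI
    tic
    net1 = train(net, P, T);
    fprintf('%2d neurons: %.2f seconds\n', n, toc);
end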


The training algorithm can be altered to improve learning, e.g. replacing traingd with trainlm, the Levenberg-Marquardt algorithm. It can give an output with very low error in fewer iterations and with fewer neurons in the hidden layer. Elapsed time is 15.018103 seconds. The time taken for 15 neurons at 500 iterations is longer than it would be for the traingd algorithm because the calculations made to adjust the weights are more complex, but it is still quicker overall to get an accurate representation of the original function with this algorithm.
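
A minimal sketch of that swap (same simplefit data as above; only the training function and epoch count change):

% Same network as above, but trained with Levenberg-Marquardt (trainlm)
% instead of gradient descent; it typically needs far fewer iterations.
net = newff([0 10], [15, 1], {'tansig', 'purelin'}, 'trainlm');
net.trainParam.epochs = 500;
tic
net1 = train(net, P, T);
toc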



Monday, 11 February 2013

Function Approximation with AI methodologies


The first one is about using neural networks to solve a calculation in a complicated environment.
The title is 'Application of artificial neural networks to calculate the partial gas concentrations in a mixture'.
http://www.sciencedirect.com/science/article/pii/S0925400501007365

The second one is about using neural networks to forecast.
The title is 'Bankruptcy prediction using neural networks'.
http://www.sciencedirect.com/science/article/pii/0167923694900248

The final one is about how neural networks learn from experience.
http://books.google.co.uk/books?hl=zh-CN&lr=&id=FJblV_iOPjIC&oi=fnd&pg=PA181&dq=neural+networks+for+calculate&ots=zYIj3gKYQS&sig=VBXX2F_YjXnzNRSY4G5ptcJjpEs&redir_esc=y#v=onepage&q=neural%20networks%20for%20calculate&f=false

I think these three areas are the main development directions of neural networks.


P = [-0.4 -0.5 0.6; 0.9 0 0.1];     % training inputs (2 features, 3 samples)
T = [1 1 0];                         % target classes

net = newp([-1 1; -1 1], 1);         % perceptron with two inputs, one neuron
net.trainParam.epochs = 20;
net = train(net, P, T);
Y = sim(net, P)                      % outputs on the training set
E1 = mae(Y - T)                      % mean absolute error
Q = [0.6 0.9 -0.1; -0.1 -0.5 0.5];   % new test points
Y1 = sim(net, Q)                     % classify the new points
figure;
plotpv(Q, Y1);                       % plot test points with their classes
plotpc(net.iw{1}, net.b{1})          % draw the decision boundary



P = [0.5152 0.8173 1.0000;
     0.8173 1.0000 0.7308;
     1.0000 0.7308 0.1390;
     0.7308 0.1390 0.1087;
     0.1390 0.1087 0.3520;
     0.1087 0.3520 0.0000]';
T = [0.7308 0.1390 0.1087 0.3520 0.0000 0.3761];
net = newff([0 1; 0 1; 0 1], [5, 1], {'tansig', 'logsig'}, 'traingd');
net.trainParam.epochs = 5000;
net.trainParam.goal = 0.01;
net.trainParam.lr = 0.1;    % learning rate (set via trainParam, not a bare LP struct)
net = train(net, P, T);


newp():generate a perceptron
hardlim():hard limiter activation function
learnp():perceptron learning function
train():neural network training function
sim():neural network simulation function
mae():mean absolute error performance function
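
For example, hardlim() maps any input greater than or equal to zero to 1 and everything else to 0:

% hardlim returns 1 for inputs >= 0, otherwise 0
hardlim([-0.5 0 0.7])   % ans = 0 1 1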

Function Approximation with AI methodologies


Tuesday, 5 February 2013

I found out how to get the NN to display the function it has created: you use the sim command to simulate the results. Example:



x = 0:0.05:2;
y = humps(x);

P = x; T = y;

plot(P, T, 'x')
xlabel('time'); ylabel('output'); title('Original Function');

net = fitnet(3, 'trainlm');  % fitnet: feedforward fitting network with n neurons and a training function

% default settings: 1000 iterations
net = train(net, P, T);
view(net);         % diagram of the network
a = sim(net, P);   % simulate the network (equivalent to a = net(P))

% Plot result and compare
plot(P, a-T, P, a, P, T); grid;
legend('error', 'NN function', 'Original');



If you increase the number of neurons you can see the NN function becoming closer to the original; the error line also gets closer to zero.
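
A minimal sketch of that comparison (same humps data as above; the neuron counts are arbitrary choices):

% Compare fits with increasing hidden-layer sizes on the humps data.
x = 0:0.05:2; y = humps(x);
for n = [3 10 20]
    net = fitnet(n, 'trainlm');
    net.trainParam.showWindow = false;   % suppress the training GUI
    net = train(net, x, y);
    a = net(x);
    fprintf('%2d neurons: max abs error = %.4f\n', n, max(abs(a - y)));
end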

humps(x) is a demo function which is equivalent to:

y = 1 ./ ((x-.3).^2 + .01) + 1 ./ ((x-.9).^2 + .04) - 6;
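
You can verify the equivalence directly:

% Check that the formula matches MATLAB's humps
x = 0:0.05:2;
y1 = humps(x);
y2 = 1 ./ ((x-.3).^2 + .01) + 1 ./ ((x-.9).^2 + .04) - 6;
max(abs(y1 - y2))   % should be ~0 (floating-point rounding only)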

Alternatively, you can use the dataset
[x,y] = simplefit_dataset; (no need for x=0:0.05:2 with this)