
How can I train a neural network if I don’t have enough data?

I have created a neural network that recognizes images of the digits 1-5. The issue is that my database of 16×5 images is, unfortunately, not proving to be enough, and the network fails on the test set: it reaches roughly 90% accuracy on the training set but only about 50% on the test set. Are there ways to improve a neural network's performance without using more data?
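
One idea I have come across is to stretch the existing dataset with slightly jittered copies of the images rather than collecting new ones, but I am not sure whether that counts as "more data" or whether it would help here. A rough sketch of what I mean, assuming the 13×14 images and the `data` matrix built in the code below (the shift amount and noise level are just guesses, and the labels would need their own bookkeeping because `forward` derives the label from the column index):

% Sketch only: augment the flattened 13x14 images with shifted / noisy copies.
aug = data;                                    % start from the 182x85 matrix built below
for n = 1:columns(data)
  img = reshape(data(:,n), 13, 14);            % back to 2-D
  aug(:, end+1) = reshape(circshift(img, [0 1]), [], 1);   % shifted by one pixel
  aug(:, end+1) = data(:,n) + 0.05*randn(182,1);           % small additive noise
end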

Code:

clear
graphics_toolkit("gnuplot")
sigmoid = @(z) 1./(1 + exp(-z));            % logistic activation
sig_der = @(y) sigmoid(y).*(1-sigmoid(y));  % its derivative, taking the pre-activation as input


parse_image;   % External script that prepares the images (read back from /tmp/<i> below).

% Each image is 13x14 pixels (182 values). Select the files holding the digits 1-5
% and flatten each image into one column of `data`; the label is recovered later
% from the column index in forward().
num=0;
for i=1:166
  if mod(i-1,10)<=5 && mod(i-1,10) > 0
    num=num+1;
    data(:,num) = dlmread(strcat("/tmp/",num2str(i)))(:);
  end
end



function [cost, mid_layer, last_layer] = forward(w1,w2,data,sigmoid,i)
  % Hidden layer: weighted sums of the 182 input pixels, then the sigmoid (15 units).
  mid_layer(:,1)=sum(w1.*data(:,i));
  mid_layer(:,2)=sigmoid(mid_layer(:,1));
  % Output layer: weighted sums of the hidden activations, then the sigmoid (5 units).
  last_layer(:,1)=sum(mid_layer(:,2).*w2);
  last_layer(:,2)=sigmoid(last_layer(:,1));
  % Target: the columns of `data` cycle through the digits 1-5,
  % so the true digit is rem(i,5) with 0 mapped to 5.
  exp_res=rem(i,5);
  if exp_res==0
    exp_res=5;
  end
  exp_result=zeros(5,1); exp_result(exp_res)=1;   % one-hot encoding of the target digit
  cost = exp_result-last_layer(:,2);              % output error (target minus prediction)
end

function [w1, w2] = backprop(w1,w2,mid_layer,last_layer,data,cost,sig_der,sigmoid,i)
  delta(1:5) = cost;                   % output-layer error
  delta(6:20) = sum(cost' .* w2,2);    % error propagated back to the 15 hidden units
  % Gradient-descent weight updates with learning rate 0.05.
  w2 = w2 + 0.05 .* delta(1:5) .* mid_layer(:,2) .* sig_der(last_layer(:,1))';
  w1 = w1 + 0.05 .* delta(6:20) .* sig_der(mid_layer(:,1))' .* data(:,i);
end
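
For clarity, the w2 line above is meant to be the usual delta rule for a squared-error cost with learning rate η = 0.05, applied element-wise:

$$ w^{(2)}_{jk} \;\leftarrow\; w^{(2)}_{jk} + \eta\,\bigl(t_k - a^{\mathrm{out}}_k\bigr)\,\sigma'\!\bigl(z^{\mathrm{out}}_k\bigr)\,a^{\mathrm{hid}}_j, \qquad \eta = 0.05 $$

where $t$ is the one-hot target, $z^{\mathrm{out}}$/$a^{\mathrm{out}}$ are the output pre-activations/activations, $a^{\mathrm{hid}}$ are the hidden activations, and `cost` in the code is $t - a^{\mathrm{out}}$.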

% Random weight initialisation, values in (-0.5, 0.5).
% Architecture: 182 inputs -> 15 hidden units -> 5 outputs.
w1=rand(182,15)./2.*(rand(182,15).*-2+1);
w2=rand(15,5)./2.*(rand(15,5).*-2+1);

% Train for 10000 epochs, visiting the 85 training images in a random order each time.
for j=1:10000
  for i=randperm(85)
    [cost, mid_layer, last_layer] = forward(w1,w2,data,sigmoid,i);
    [w1, w2] = backprop(w1,w2,mid_layer,last_layer,data,cost,sig_der,sigmoid,i);
    cost_mem(j,i,:)=cost;   % keep the per-sample error for later inspection
  end
end
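
In case it is relevant, accuracy here means the fraction of images for which the output unit with the largest activation matches the true digit; my actual evaluation code is not included above, but it amounts to roughly this check (the test set is scored the same way with its own data matrix):

% Sketch: classification accuracy over the training images.
correct = 0;
for i = 1:85
  [~, ~, last_layer] = forward(w1, w2, data, sigmoid, i);
  [~, pred] = max(last_layer(:,2));   % predicted digit = unit with the largest activation
  truth = rem(i,5);                   % true digit, as in forward()
  if truth == 0
    truth = 5;
  end
  correct += (pred == truth);
end
printf("training accuracy: %.1f%%\n", 100*correct/85);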
