
Neural Network Learning Notes 2.2 -- Writing a Simple Convolutional Neural Network Image Classifier in MATLAB

2022-07-19 03:33:00 | Oreo is delicious

Companion video

10 Minutes to Learn MATLAB CNN Image Classification (Bilibili)

Complete code

Link: https://pan.baidu.com/s/1btnY-jZXMK9oj3ZQxDvz8g
Extraction code: k4v8

You can open the code alongside this article; I will explain it step by step: what each step means and how you should use it.

Contents

1. Basic concepts for understanding (skip ahead to the program if you already know them)
        1.1 Number of channels
        1.2 Fully connected layer
        1.3 Edge detection (the core of convolution)
        1.4 Dropout
        1.5 Softmax (multi-class classifier)
2. Implementing a simple convolutional neural network classifier in MATLAB
        2.1 Load the data
        2.2 Split the dataset
        2.3 Define the network architecture (watching the video at the top of the article is recommended)
        2.4 Train the network
        2.5 Test the network's prediction accuracy


1. Basic concepts for understanding (skip ahead to the program if you already know them)

1.1 Number of channels

Each picture is composed of red, green and blue; when it is imported it corresponds to a red, a green and a blue channel, i.e. three matrices. These three matrices hold the intensity values of the red, green and blue pixels respectively.

1.2 Fully connected layer

The final feature matrix produced by the convolutions is flattened into a one-dimensional vector, which then feeds the simplest layer-by-layer neural network model, with the layers connected by weights.

1.3 Edge detection (the core of convolution)

Edge detection mainly uses convolution kernels to extract image features; for example, there are kernels for vertical edge detection and kernels for horizontal edge detection.
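
To make this concrete, here is a small sketch I added (it is not part of the original example): a Sobel-style vertical-edge kernel applied with conv2 to a toy image that is dark on the left and bright on the right. The image and kernel values are my own choices.

% Toy example (my addition): vertical edge detection with conv2
I = [zeros(6,3) ones(6,3)];        % toy grayscale image: dark left half, bright right half
K = [-1 0 1; -2 0 2; -1 0 1];      % Sobel-style vertical-edge kernel
E = conv2(I, K, 'same')            % large-magnitude values appear along the vertical boundary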

1.4 Dropout

For a neural layer y = f(Wx + b), introduce a dropout function d(·) so that y = f(W d(x) + b).

During training, d(x) = m ⊙ x, where m ∈ {0,1}^d is the dropout mask, generated randomly from a Bernoulli distribution with parameter p.

(The figures and concepts above come from the regularization chapter of Teacher Qiu's neural network lecture slides.)

In short, dropout regularizes the network by not using all of its nodes at once; the purpose is to prevent the model from overfitting.
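
As a small illustration of the mask idea (my own sketch, not the article's code; in a real network the Deep Learning Toolbox provides dropoutLayer for this), the keep probability and input below are assumed values:

% Toy dropout mask (illustration only)
p = 0.5;                           % keep probability (assumed)
x = randn(8,1);                    % a hypothetical layer input
m = rand(size(x)) < p;             % Bernoulli(p) mask, entries in {0,1}
d_x = m .* x                       % d(x) = m .* x : some units are zeroed out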

1.5 Softmax (multi-class classifier)

If a Softmax regression classifier is used, it is equivalent to the last layer of the network having C neurons; passing their outputs through the Softmax normalization function gives the conditional probability of each class.

In short, it gives the probability of each possible output, and the category is then decided from these class probabilities.
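
Here is a short numeric sketch I added (the scores are made up) showing how softmax turns C raw scores into class probabilities that sum to 1, with the largest probability giving the predicted class:

% Toy softmax example (my addition)
z = [2.0; 1.0; 0.1];                            % hypothetical scores for C = 3 classes
p = exp(z - max(z)) ./ sum(exp(z - max(z)));    % numerically stable softmax, sums to 1
[~, predictedClass] = max(p)                    % index of the most probable class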

2. Implementing a simple convolutional neural network classifier in MATLAB

Below is the convolutional image-classification example that ships with MATLAB. I will explain in detail where each step of the model comes from, and how we can adjust it for our own dataset.

2.1 Load the data

Load the digit sample data as an image datastore.

The commented-out line here is where you can put your own file path; for example, writing 'c:' means the C: drive, and you fill each pair of quotation marks in turn with the parts of the path.

The imageDatastore function automatically labels the images according to the folder names.

digitDatasetPath = fullfile(matlabroot,'toolbox','nnet','nndemos', ...
    'nndatasets','DigitDataset');
% digitDatasetPath = fullfile('','','','','');

imds = imageDatastore(digitDatasetPath, ...
    'IncludeSubfolders',true, ...
    'LabelSource','foldernames');   

When you open the dataset it looks like this: the folder names are the labels, and inside each folder are the handwritten-digit images.

The labels do not have to be numbers. For example, when I did this, the labels were the English names of the corresponding fault classes. I have not tried Chinese labels, and I suspect most functions do not support them.
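
As a hedged sketch of what that looks like for your own data (the folder below is a placeholder path that I made up; replace it with your own), each subfolder name becomes one class label:

% Hypothetical variant for a custom dataset (placeholder path)
myDatasetPath = fullfile('C:','data','myImages');   % assumed location, replace with yours
myImds = imageDatastore(myDatasetPath, ...
    'IncludeSubfolders',true, ...
    'LabelSource','foldernames');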

2.2 Split the dataset

Divide the data into a training dataset and a validation dataset, so that each category in the training set contains 750 images and the validation set contains the remaining images of each label. splitEachLabel splits the image datastore into two new datastores, one for training and one for validation.

Here 750 is the number of training images per class. Note that this number must not exceed the smallest image count among the folders, otherwise an error is reported. The 'randomize' option that follows simply picks the images at random, which is convenient because you do not need to shuffle them by hand (a quick check is sketched after the code below).

numTrainFiles = 750;
[imdsTrain,imdsValidation] = splitEachLabel(imds,numTrainFiles,'randomize');
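
Here is the quick check mentioned above (my addition, not part of the original example): countEachLabel lists how many images each label has, so you can confirm numTrainFiles does not exceed the smallest class.

% Optional sanity check before splitting
tbl = countEachLabel(imds);        % table with one row per label: Label, Count
minCount = min(tbl.Count)          % numTrainFiles should not exceed this value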

2.3 Define the network architecture (watching the video at the top of the article is recommended)

Define the convolutional neural network architecture. Specify the image size in the network's input layer and the number of classes in the fully connected layer that precedes the classification layer. Each image is 28×28×1 pixels, and there are 10 classes.

inputSize = [28 28 1];
numClasses = 10;

layers = [
    imageInputLayer(inputSize)
    convolution2dLayer(5,20)
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];
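
As a hedged sketch of adapting this architecture to a different dataset (the image size, class count, and extra pooling layer below are my assumptions, not the article's), only the input size and the final fullyConnectedLayer have to match your data:

% Hypothetical variant for 64x64 RGB images and 5 classes
myInputSize = [64 64 3];           % assumed: 64x64 colour images, 3 channels
myNumClasses = 5;                  % assumed: 5 categories, i.e. 5 subfolders

myLayers = [
    imageInputLayer(myInputSize)
    convolution2dLayer(5,20)
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2)    % optional pooling to shrink the feature maps
    fullyConnectedLayer(myNumClasses)
    softmaxLayer
    classificationLayer];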

The MATLAB toolbox also lets you build the network structure here through a modular, drag-and-drop design, which is comfortable because you do not have to type any code:

deepNetworkDesigner

Enter this command in the Command Window and the editor will open.

The network structure from the code segment above looks like this in the designer.

Then click Generate Code at the top to obtain the network structure used in the code segment above.

Next come the training parameter settings.

Here you set the training parameters, such as the number of epochs and whether to plot the training process, and so on. If you have no special requirements, you can use the parameters I set here directly. If the training results are poor, and in particular if training always runs to the maximum number of epochs instead of stopping at a minimum error, you can increase the number of training epochs.

If you need other features, enter help trainingOptions in the command window to open the settings documentation, then use the Ctrl+F shortcut to find the feature you need and add it here in the 'Name','Value' format (an example with a few extra options is sketched after the code below).

options = trainingOptions('sgdm', ...
    'MaxEpochs',4, ...
    'ValidationData',imdsValidation, ...
    'ValidationFrequency',30, ...
    'Verbose',false, ...
    'Plots','training-progress');
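
As the example promised above (my addition; the values are assumptions, the option names come from help trainingOptions), a few commonly used extra name-value pairs look like this:

% Sketch with a few extra training options
moreOptions = trainingOptions('sgdm', ...
    'MaxEpochs',8, ...                   % train for more epochs
    'InitialLearnRate',0.01, ...         % starting learning rate
    'MiniBatchSize',128, ...             % images per iteration
    'Shuffle','every-epoch', ...         % reshuffle the training data each epoch
    'ValidationData',imdsValidation, ...
    'ValidationFrequency',30, ...
    'Verbose',false, ...
    'Plots','training-progress');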

2.4 Train the network

net = trainNetwork(imdsTrain,layers,options);

The arguments inside the brackets are (training data, network structure, training options).

A training-progress chart will then appear during training.

The upper plot shows the accuracy, and the lower plot shows the loss, i.e. the error.
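
If you want to keep the trained network, a small optional step I added (the file name is a placeholder) is to save it so it can be reused later without retraining:

% Optional: save and reload the trained network
save('trainedDigitNet.mat','net');
% s = load('trainedDigitNet.mat');  net = s.net;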

2.5 Test the network's prediction accuracy

YPred = classify(net,imdsValidation);
YValidation = imdsValidation.Labels;
accuracy = mean(YPred == YValidation)

These three lines of code, in turn, obtain the network's predictions, read the true labels of the validation set, and compare the predictions with the true labels to compute the accuracy.
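
Beyond a single accuracy number, an optional check I added is a confusion chart, which shows per class where the predictions disagree with the true labels:

% Optional: per-class view of the errors
confusionchart(YValidation, YPred);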


Copyright notice
This article was written by [Oreo is delicious]. Please include a link to the original when reposting. Thank you.
https://yzsam.com/2022/200/202207170101552275.html