Deep learning from getting started to giving up the 100 day challenge
2022-07-19 13:37:00 【Tight enough to get fat】
Written up front
For the justice and glory of humankind, I recently decided on a whim to update this blog every day with my deep-learning study notes, for one hundred days.
The utilitarian purpose is to push myself to learn, and to keep learning. So, let's start!
【1】 July 15, 2022 - Constructing a CNN
import keras
from keras.datasets import mnist                   # MNIST dataset
from keras.models import Sequential                # the Sequential model class
from keras.layers import Activation, Dense, Flatten, Conv2D, MaxPooling2D, Dropout  # common Keras layers
from sklearn.utils import shuffle                  # shuffle the data
from sklearn.preprocessing import StandardScaler   # feature standardization
from keras.layers import SpatialDropout2D          # a spatial variant of the Dropout layer
import time, os                                    # timing and filesystem utilities
from keras_flops import get_flops                  # FLOPs estimation (keras_flops package)
The structure of this first network is very simple. (Note: despite the convolutional imports above, the model below is actually a plain fully-connected network; Conv2D and MaxPooling2D are not used yet.)
n_hidden_1 = 64          # size of the first hidden layer
n_classes = 1            # single output unit (sigmoid, for binary classification)
training_epochs = 5      # number of training epochs
batch_size = 100         # samples per mini-batch

model = Sequential()
model.add(Dense(n_hidden_1, activation='relu', input_shape=(42,)))  # expects 42-feature input vectors
model.add(Dense(32, activation='relu'))
model.add(Dense(n_classes, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])
print(model.summary())   # inspect the network structure and parameter counts
flops = get_flops(model, batch_size=batch_size)   # estimate the model's FLOPs (computational complexity)
# Model training. X_train/y_train and X_test/y_test are assumed to be prepared
# elsewhere: a 42-feature binary-classification dataset (note: not the MNIST images imported above).
history = model.fit(X_train, y_train, batch_size=batch_size,
                    epochs=training_epochs, validation_data=(X_test, y_test))
model.evaluate(X_test, y_test)
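The snippet never defines X_train or y_train. As a minimal sketch (the synthetic data below is purely illustrative, standing in for any 42-feature binary-classification dataset), this is enough to make it run end to end:

import numpy as np
from sklearn.model_selection import train_test_split

# fabricate 1000 random samples with 42 features and binary labels
X = np.random.rand(1000, 42).astype('float32')
y = np.random.randint(0, 2, size=(1000,))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)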
【2】 July 16, 2022 - Explaining some parameters
Parameter explanations:
EPOCHS = 200: the number of complete passes over the training set
BATCH_SIZE = 128: the number of samples fed to the network at a time
VERBOSE = 1: how training progress is displayed (progress bar)
N_HIDDEN = 128: the number of neurons in the hidden layer
NB_CLASSES = 10: the number of output classes
VALIDATION_SPLIT = 0.2: the fraction of training data held out to validate training (the split ratio)
verbose: controls the logging display
verbose = 0: silent, no log output to the standard output stream
verbose = 1: output a progress-bar record
verbose = 2: output one line of record per epoch
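As a quick illustration (assuming the model, X_train, and y_train from the Day-1 snippet above), these constants plug straight into model.fit:

history = model.fit(X_train, y_train,
                    batch_size=BATCH_SIZE,               # 128 samples per gradient update
                    epochs=EPOCHS,                       # 200 passes over the training set
                    verbose=VERBOSE,                     # 1 = progress bar
                    validation_split=VALIDATION_SPLIT)   # hold out 20% of X_train for validation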
About binary_crossentropy (binary cross-entropy) and categorical_crossentropy (categorical cross-entropy):
binary_crossentropy:
Commonly used for binary classification problems; it usually needs to be paired with a sigmoid activation.
categorical_crossentropy:
Suitable for multi-class problems, with softmax as the output-layer activation.
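A minimal sketch of the two pairings (the 42-feature input shape and the 10 classes here are arbitrary choices for illustration):

from keras.models import Sequential
from keras.layers import Dense

# binary classification: a single sigmoid unit paired with binary_crossentropy
binary_model = Sequential([Dense(1, activation='sigmoid', input_shape=(42,))])
binary_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# multi-class: one softmax unit per class paired with categorical_crossentropy
multi_model = Sequential([Dense(10, activation='softmax', input_shape=(42,))])
multi_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])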
【On choosing a loss function: binary_crossentropy, categorical_crossentropy, sparse_categorical_crossentropy】
Based on this blog: https://blog.csdn.net/qq_35599937/article/details/105608354
Binary classification problems:
If it is a binary classification problem, i.e. the final result can only be one of two classes, use binary_crossentropy as the loss function.
Multi-class problems:
For multi-class problems, the choice of loss function mainly depends on how the labels are encoded:
1. If the labels are one-hot encoded, use categorical_crossentropy.
My understanding of one-hot encoding: it vectorizes the labels. Each label becomes an N-dimensional vector (you choose N, typically the number of classes) in which exactly one entry is 1 and the rest are 0. In other words, the integer index i is converted to a length-N binary vector whose i-th element is 1 and whose other elements are all 0.
Keras has a built-in method to vectorize labels:
from keras.utils.np_utils import to_categorical
one_hot_train_labels = to_categorical(train_labels)
one_hot_test_labels = to_categorical(test_labels)
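For example (a minimal sketch with made-up labels):

from keras.utils.np_utils import to_categorical

print(to_categorical([0, 2, 1], num_classes=3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]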
2. If the labels are integer-encoded, use sparse_categorical_crossentropy.
My understanding of integer encoding: the labels simply stay as one vector of integers, where each label corresponds to one value in that vector (e.g. class 3 is just the integer 3).
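A side-by-side sketch of the two options (assuming 10 classes and a model whose output layer is Dense(10, activation='softmax')):

import numpy as np
from keras.utils.np_utils import to_categorical

labels = np.array([3, 0, 7])   # integer-encoded labels

# Option 1: convert to one-hot, then use categorical_crossentropy
one_hot_labels = to_categorical(labels, num_classes=10)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Option 2: keep the integer labels and use sparse_categorical_crossentropy
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])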
Source: CSDN blogger 「Fu Huatao Fu」, under the CC 4.0 BY-SA license; reprints must include the original link and this statement.
Original link: https://blog.csdn.net/fu_jian_ping/article/details/107707780
【3】 July 17, 2022 - Getting clear on the two model styles in Keras: Sequential and Model
Source: https://blog.csdn.net/weixin_39916966/article/details/88049179
- Sequential makes it easier to define a network structure, but that simplicity also makes it ill-suited to building more complex networks.
- The Model (functional) style can build more complex networks, though it is a little harder to use.
Code examples:
Sequential
from keras.models import Sequential
from keras.layers import Dense, Activation
model = Sequential()
model.add(Dense(32, input_shape = (784,)))
model.add(Activation('relu'))
model.add(Dense(10))
model.add(Activation('softmax'))
Model
from keras.layers import Input, Dense
from keras.models import Model
# define the input layer and fix the input dimensionality
inputs = Input(shape=(784,))
# two hidden layers of 64 neurons each, with relu activation;
# each layer takes the previous layer's output as its argument
x = Dense(64, activation='relu')(inputs)
x = Dense(64, activation='relu')(x)
# output layer
y = Dense(10, activation='softmax')(x)
# define the model by specifying its inputs and outputs
model = Model(inputs=inputs, outputs=y)
# compile the model: specify the optimizer, loss function, and metrics
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
# fit (train) the model; data and labels are assumed to be prepared elsewhere
model.fit(data, labels)
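To actually run the functional example, placeholder data like this will do (random values, purely illustrative):

import numpy as np
from keras.utils.np_utils import to_categorical

data = np.random.random((1000, 784))
labels = to_categorical(np.random.randint(10, size=(1000,)), num_classes=10)
model.fit(data, labels, epochs=5, batch_size=32)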
【4】 July 18, 2022 - Define a simple neural network [based on TensorFlow 2.0]
【5】 July 19, 2022 - Improvement strategy 1 for yesterday's network's accuracy baseline
【6】 July 20, 2022 - Further improvement with dropout (random deactivation)
【7】 July 21, 2022 -
【8】 July 22, 2022 -
【9】 July 23, 2022 -