PyTorch Notes (2)
2022-07-19 07:53:00 【The soul is on the way】
PyTorch: implementing regression
In the fifth and sixth lectures, Mr. Liu covers linear regression and logistic regression. Interested readers can search for his course on Bilibili.
Linear regression

- Here $\hat{y}$ denotes the model's predicted value. The linear model is simply a function that computes $\hat{y}$ from the input; the chosen loss function then measures the error between this prediction and the actual value.
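For reference, with weight $\omega$ and bias $b$, the model and the loss used in the code below can be written as follows (the code passes size_average=False to MSELoss, so the squared errors are summed rather than averaged):

$$\hat{y} = \omega x + b, \qquad \mathrm{loss} = \sum_{n=1}^{N} \left(\hat{y}_n - y_n\right)^2$$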
Four steps
- 1. Prepare the dataset
- 2. Design the model
- 3. Construct the loss function and optimizer
- 4. Run the training loop (forward, backward, update)
# SGD and LinearModel
import torch
# Ignore warnings
import warnings
warnings.filterwarnings("ignore")

# 1. Prepare the dataset
x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[2.0], [4.0], [6.0]])

# 2. Design the model
class LinearModel(torch.nn.Module):
    # We inherit from torch.nn.Module and only define forward, not backward,
    # because Module builds the computation graph and autograd performs back-propagation automatically.
    def __init__(self):
        super(LinearModel, self).__init__()   # run the parent-class initialization
        self.linear = torch.nn.Linear(1, 1)   # linear layer y = Ax + b, with 1 input and 1 output feature

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred

model = LinearModel()

# 3. Construct the loss function and optimizer, then 4. train
def SGD_LIST_Loss():
    SDG_LIST_new = []   # collect the loss per epoch for the visualization below
    criterion = torch.nn.MSELoss(size_average=False)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    # optim offers many optimizers: Adadelta, Adagrad, Adam, etc.
    # To plot a different curve, simply replace SGD with the corresponding name.
    # Interested readers can consult the documentation: https://pytorch.org/docs/stable/optim.html
    for epoch in range(100):
        y_pred = model(x_data)
        loss = criterion(y_pred, y_data)
        print(epoch, loss.item())
        SDG_LIST_new.append(loss.item())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return SDG_LIST_new

SDG_LIST_new = SGD_LIST_Loss()
print("w = ", model.linear.weight.item())
print("b = ", model.linear.bias.item())
x_test = torch.Tensor([4.0])
y_test = model(x_test)
print("y_pred = ", y_test.data)
# Below is a simple piece of visualization code for readers to try
import matplotlib.pyplot as plt
plt.rcParams['font.sans-serif'] = ['SimHei']   # only needed if you keep Chinese labels
x = range(len(SDG_LIST_new))
plt.figure(figsize=(30, 10))
plt.title("Loss curve")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.plot(x, SDG_LIST_new, '-', color='c', label="SGD")
plt.legend()
plt.show()
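The comment in the training function suggests swapping SGD for other optimizers and comparing the curves. Below is a minimal sketch of one way to do that (my own addition, not from the original post; the helper name train_with is hypothetical), re-training a fresh model per optimizer and plotting the loss histories together:

# Hedged sketch: compare several optimizers on the same toy data.
import torch
import matplotlib.pyplot as plt

def train_with(optimizer_cls, epochs=100, lr=0.01):
    model = LinearModel()                            # fresh parameters for each optimizer
    criterion = torch.nn.MSELoss(reduction='sum')    # equivalent to size_average=False
    optimizer = optimizer_cls(model.parameters(), lr=lr)
    losses = []
    for _ in range(epochs):
        loss = criterion(model(x_data), y_data)
        losses.append(loss.item())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return losses

for name, cls in [("SGD", torch.optim.SGD),
                  ("Adam", torch.optim.Adam),
                  ("Adagrad", torch.optim.Adagrad)]:
    plt.plot(train_with(cls), label=name)
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.show()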
Logistic regression
Introduction

- Despite its name, logistic regression is not used to solve regression problems; it is used to solve classification problems.
- The difference from linear regression is that logistic regression adds a sigmoid function. Because the task is classification, what we ultimately need is a probability value, i.e. P(X = x).
- Why the sigmoid function? Because it has convenient built-in properties: its range is [0, 1], it is monotonically increasing, and it is a saturating function (once x moves far enough from 0, y approaches its extreme values).
- There are actually many sigmoid-type functions, but the one below is the most widely used and the most representative:
$$\mathrm{sigmoid}(x) = \frac{1}{1 + e^{-x}}$$

By convention, this is the function people mean when they say "the sigmoid function". Other sigmoid-type functions exist as well; two examples are shown below.
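For reference (my own examples, not taken from the original post), two other S-shaped, saturating functions are the hyperbolic tangent and the softsign function:

$$\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}, \qquad \mathrm{softsign}(x) = \frac{x}{1 + |x|}$$

Both are monotonically increasing and saturate, though their range is $(-1, 1)$ rather than $(0, 1)$.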
Loss function

- The loss function used in logistic regression is called BCE (binary cross-entropy); its form is given below.
- Later, when we train on small batches of data (Mini-Batch), the per-sample losses are summed and then averaged. Why the minus sign in the formula? For the same reason we descend rather than ascend the gradient: the log-likelihood term is something we want to be as large as possible, and adding the minus sign turns it into a quantity that is better when smaller, so it can be minimized.
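Written out, the standard mini-batch BCE loss described above is (with $\hat{y}_n$ the predicted probability for sample $n$):

$$\mathrm{loss} = -\frac{1}{N}\sum_{n=1}^{N}\Bigl[y_n \log \hat{y}_n + (1 - y_n)\log\bigl(1 - \hat{y}_n\bigr)\Bigr]$$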
# Logistic regression
import torchvision
# The root parameter is where the data will be saved. Running this line as-is may raise an error:
# either put the data in the project root directory or create a folder for it, and adjust the path accordingly.
train_set = torchvision.datasets.MNIST(root="./datasets/mnist", train=True, download=True)
test_set = torchvision.datasets.MNIST(root="./datasets/mnist", train=False, download=True)
# torchvision also ships other datasets with different kinds of images.
# The download here is quite slow: nearly 17 million pieces of data.
train_set_new = torchvision.datasets.CIFAR10(root="./datasets/mnist", train=True, download=True)
test_set_new = torchvision.datasets.CIFAR10(root="./datasets/mnist", train=False, download=True)
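After the download finishes, a quick sanity check can look like the following sketch (my own addition, not from the original post; the printed sizes assume the MNIST download succeeded):

# Each MNIST item is a (PIL image, integer label) pair when indexed directly.
print(len(train_set), len(test_set))   # 60000 and 10000 samples respectively
img, label = train_set[0]
print(img.size, label)                 # a (28, 28) image and its digit label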
The code below is essentially the same as the linear-regression code; only a few function names change.
# 1. Prepare the data
# 2. Determine the model
# 3. Build the loss and optimizer
# 4. Train
# If you run this directly, you may get an error caused by a conflict with the
# libiomp5md.dll library in Anaconda; adding the following two lines works around it.
import os
os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'

import torch.nn.functional as F
import torch
# Ignore warnings
import warnings
warnings.filterwarnings("ignore")

# Prepare the data
x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[0], [0], [1]])

# Design the model
class LogisticRegressionModel(torch.nn.Module):
    # As before, we inherit from torch.nn.Module and only define forward;
    # autograd takes care of back-propagation.
    def __init__(self):
        super(LogisticRegressionModel, self).__init__()   # run the parent-class initialization
        self.linear = torch.nn.Linear(1, 1)               # linear layer y = Ax + b, 1 input and 1 output feature

    def forward(self, x):
        y_pred = F.sigmoid(self.linear(x))   # squash the linear output into (0, 1)
        return y_pred

model = LogisticRegressionModel()

# Loss and optimizer
criterion = torch.nn.BCELoss(size_average=False)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Training loop
for epoch in range(1000):
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    # print(epoch, loss.item())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Visualize the learned probability curve
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 10, 200)
x_t = torch.Tensor(x).view((200, 1))
y_t = model(x_t)
y = y_t.data.numpy()
plt.plot(x, y)
plt.plot([0, 10], [0.5, 0.5], c='r')   # the 0.5 decision boundary
plt.xlabel('Hours')
plt.ylabel('Probability of Pass')
plt.grid()
plt.show()
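As a small hedged follow-up (my own addition, not from the original post), the trained model can also be queried for a single study-time value and the 0.5 threshold applied to get a class decision:

# Query the trained model for one input and threshold the probability at 0.5.
hours = torch.Tensor([[2.5]])
prob = model(hours).item()
print("P(pass | 2.5 h) =", round(prob, 3), "predicted class =", int(prob >= 0.5))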
