Porting Andrew Ng's Deep Learning Course 01 "Machine Learning and Neural Networks", Week 2 "Neural Network Basics" programming assignment (including the optional part) to PyCharm
2022-07-19 10:41:00 【--Jize--】
Contents
Environment configuration
windows10
anaconda3
python3.8
pycharm-community-2022.1.3
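Before running anything, it can help to confirm that the interpreter PyCharm points to actually has the packages this assignment uses. A minimal check, assuming the package list below matches what the complete code at the end imports (numpy, matplotlib, h5py, scikit-image) plus imageio:
# Print the version of each package the assignment relies on,
# so missing ones show up before the real script is run.
import importlib

for name in ["numpy", "matplotlib", "h5py", "skimage", "imageio"]:
    try:
        module = importlib.import_module(name)
        print(name, getattr(module, "__version__", "unknown version"))
    except ImportError:
        print(name, "is NOT installed in this interpreter")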
Problems
①imageio.imread
Source code
image = np.array(imageio.imread(fname))
Warning message
DeprecationWarning: Starting with ImageIO v3 the behavior of this function will switch to that of iio.v3.imread. To keep the current behavior (and make this warning dissapear) use `import imageio.v2 as imageio` or call `imageio.v2.imread` directly.
image = np.array(imageio.imread(fname))
Solution 1
image = np.array(imageio.v3.imread(fname))
Solution 2
image = np.array(plt.imread(fname))
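The warning text quoted above also points at a third fix: keep the old v2 behavior by importing the legacy API explicitly. A minimal sketch of that variant (fname is the image path, as in the original line):
import numpy as np
import imageio.v2 as imageio  # legacy API, keeps the pre-v3 imread behavior

image = np.array(imageio.imread(fname))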
②scipy.misc.imresize
Source code
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T
Error message
module 'scipy.misc' has no attribute 'imresize'
Solution
First, import the module:
from skimage.transform import resize
then change the original line to:
my_image = resize(image, output_shape=(num_px, num_px)).reshape((1, num_px * num_px * 3)).T
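One difference worth noting: skimage's resize converts a uint8 image to floats in [0, 1] by default, while the removed scipy.misc.imresize returned uint8 values in [0, 255]. Since the training inputs in the complete code below are divided by 255, the default float output is already on the matching scale. If you prefer to keep the original 0-255 values, a sketch of that variant (you then have to rescale my_image yourself):
# Keep the original 0-255 value range instead of the default [0, 1] floats;
# divide by 255 afterwards so it matches the normalized training data.
my_image = resize(image, output_shape=(num_px, num_px), preserve_range=True).reshape((1, num_px * num_px * 3)).T / 255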
Complete code
import numpy as np
from matplotlib import pyplot as plt
import h5py
import pylab
from skimage.transform import resize
def load_dataset():
    train_dataset = h5py.File(
        "F:/JupyterNotebook/Wu Enda's in-depth study assignment/01. Machine learning and neural networks/2. In the second week of Neural network basis/Programming operation/datasets/train_catvnoncat.h5", "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:])  # your train set features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:])  # your train set labels
    test_dataset = h5py.File(
        "F:/JupyterNotebook/Wu Enda's in-depth study assignment/01. Machine learning and neural networks/2. In the second week of Neural network basis/Programming operation/datasets/test_catvnoncat.h5", "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:])  # your test set features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:])  # your test set labels
    classes = np.array(test_dataset["list_classes"][:])  # the list of classes
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
index = 5
plt.imshow(train_set_x_orig[index])
pylab.show()
print("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode(
"utf-8") + "' picture.")
# Number of training set examples
m_train = train_set_x_orig.shape[0]
# Number of test set samples
m_test = test_set_x_orig.shape[0]
# The height of the training image is also equal to the width of the training image
num_px = train_set_x_orig.shape[1]
print(" Number of training set examples : m_train = " + str(m_train))
print(" Number of test set samples : m_test = " + str(m_test))
print(" Height of the image / Width : num_px = " + str(num_px))
print(" Image dimension : (" + str(num_px) + ", " + str(num_px) + ", 3)")
print("train_set_x dimension : " + str(train_set_x_orig.shape))
print("test_set_x dimension : " + str(test_set_x_orig.shape))
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
print("train_set_x_flatten dimension : " + str(train_set_x_flatten.shape))
print("train_set_y dimension : " + str(train_set_y.shape))
print("test_set_x_flatten dimension : " + str(test_set_x_flatten.shape))
print("test_set_y dimension : " + str(test_set_y.shape))
# The following two sentences are easier to understand by comparison reshape
# print(train_set_x_orig)
# print(" Reshaped inspection dimension : " + str(train_set_x_flatten[0:5, 0]))
train_set_x = train_set_x_flatten / 255
test_set_x = test_set_x_flatten / 255
def sigmoid(z):
    s = 1 / (1 + np.exp(-z))
    return s
# Test code
# print("sigmoid([0, 2]) = " + str(sigmoid(np.array([0, 2]))))
def initialize_with_zeros(dim):
    # Note: np.zeros takes the shape as a single tuple, so the row/column counts need an extra pair of parentheses.
    w = np.zeros((dim, 1))
    b = 0
    # assert checks a condition; if it fails, the program stops with an AssertionError.
    assert (w.shape == (dim, 1))
    # isinstance() checks whether a variable is of a given type.
    assert (isinstance(b, float) or isinstance(b, int))
    return w, b
# Test code
# dim = 2
# w, b = initialize_with_zeros(dim)
# print("w = " + str(w))
# print("b = " + str(b))
def propagate(w, b, X, Y):
    m = X.shape[1]
    # Forward propagation: compute the activation and the cross-entropy cost.
    A = sigmoid(np.dot(w.T, X) + b)
    cost = -1 / m * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))
    # Backward propagation: gradients of the cost with respect to w and b.
    dw = 1 / m * np.dot(X, (A - Y).T)
    db = 1 / m * np.sum(A - Y)
    assert (dw.shape == w.shape)
    assert (isinstance(db, float))
    cost = np.squeeze(cost)
    assert (cost.shape == ())
    grads = {
        "dw": dw,
        "db": db}
    return grads, cost
# # Test code
# w, b, X, Y = np.array([[1], [2]]), 2, np.array([[1, 2], [3, 4]]), np.array([[1, 0]])
# grads, cost = propagate(w, b, X, Y)
# print("dw = " + str(grads["dw"]))
# print("db = " + str(grads["db"]))
# print("cost = " + str(cost))
def optmize(w, b, X, Y, num_iterations, learning_rate, print_cost=False):
    costs = []
    for i in range(num_iterations):
        grads, cost = propagate(w, b, X, Y)
        dw = grads["dw"]
        db = grads["db"]
        # Gradient descent update.
        w = w - learning_rate * dw
        b = b - learning_rate * db
        if i % 100 == 0:
            costs.append(cost)
        if print_cost and i % 100 == 0:
            print("Cost after iteration %i: %f" % (i, cost))
    params = {
        "w": w,
        "b": b}
    grads = {
        "dw": dw,
        "db": db}
    return params, grads, costs
# Test code
# params, grads, costs = optmize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=False)
# print("w = " + str(params["w"]))
# print("b = " + str(params["b"]))
# print("dw = " + str(grads["dw"]))
# print("db = " + str(grads["db"]))
# print(costs)
def predict(w, b, X):
    m = X.shape[1]
    Y_prediction = np.zeros((1, m))
    w = w.reshape(X.shape[0], 1)
    A = sigmoid(np.dot(w.T, X) + b)
    for i in range(A.shape[1]):
        # Threshold the activation at 0.5 to get a 0/1 label.
        if A[0, i] <= 0.5:
            Y_prediction[0, i] = 0
        else:
            Y_prediction[0, i] = 1
    assert (Y_prediction.shape == (1, m))
    return Y_prediction
# print("prdictions = " + str(predict(w, b, X)))
def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.5, print_cost=False):
    w, b = initialize_with_zeros(X_train.shape[0])
    parameters, grads, costs = optmize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
    w = parameters["w"]
    b = parameters["b"]
    Y_prediction_test = predict(w, b, X_test)
    Y_prediction_train = predict(w, b, X_train)
    print("Train accuracy: {}%".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
    print("Test accuracy: {}%".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
    d = {
        "costs": costs,
        "Y_prediction_test": Y_prediction_test,
        "Y_prediction_train": Y_prediction_train,
        "w": w,
        "b": b,
        "learning_rate": learning_rate,
        "num_iterations": num_iterations}
    return d
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations=2000, learning_rate=0.005, print_cost=False)
index = 1
plt.imshow(test_set_x[:, index].reshape((num_px, num_px, 3)))
pylab.show()
print("y = " + str(test_set_y[0, index]) + ", you predicted that is a \"" + classes[
int(d["Y_prediction_test"][0, index])].decode("utf-8") + "\" picture.")
# Draw the relationship between the loss function and the number of iterations
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('costs')
plt.xlabel('iterations(per hundreds)')
plt.title("Learning rate = " + str(d["learning_rate"]))
plt.show()
# Compare different learning rates by plotting the cost against the number of iterations
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
    print("learning rate is: " + str(i))
    models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations=1500, learning_rate=i,
                           print_cost=False)
    print('\n' + "--------------------" + '\n')
for i in learning_rates:
    plt.plot(np.squeeze(models[str(i)]["costs"]), label=str(models[str(i)]["learning_rate"]))
plt.ylabel('costs')
plt.xlabel('iterations')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
fname = "F:/JupyterNotebook/Wu Enda's in-depth study assignment/01. Machine learning and neural networks/2. In the second week of Neural network basis/Programming operation/images/cat_in_iran.jpg"
image = np.array(plt.imread(fname))
my_image = resize(image, output_shape=(num_px, num_px)).reshape((1, num_px * num_px * 3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
pylab.show()
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[
int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
Note: the three hard-coded file paths above must be changed to your own paths.
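To avoid editing three separate strings, the paths can also be built from one base directory. A minimal sketch, where BASE_DIR and the datasets/images sub-folder names are placeholders that should match your own layout:
from pathlib import Path

# Point this at the folder that holds the "datasets" and "images" sub-folders.
BASE_DIR = Path("F:/JupyterNotebook")  # adjust to your own location

train_path = BASE_DIR / "datasets" / "train_catvnoncat.h5"   # pass to h5py.File(...)
test_path = BASE_DIR / "datasets" / "test_catvnoncat.h5"
fname = BASE_DIR / "images" / "cat_in_iran.jpg"              # pass to plt.imread(...)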