Developers share | Writing an operator by hand isn't that hard. Let me show you how to implement an adaptive average pooling operator with MindSpore!
2022-07-18 22:23:00 【Shengsi MindSpore】

Recently, while reproducing the Fast SCNN network with MindSpore, I found that it uses the adaptive average pooling operator "nn.AdaptiveAvgPool2d". However, the current version of MindSpore does not provide a corresponding operator, so I consulted some materials to understand how it is computed. Experiments show that the method below can substitute for the AdaptiveAvgPool2d operator, and the same approach can be migrated to other AI frameworks.
01
AdaptiveAvgPool2d
What AdaptiveAvgPool2d does is, simply put: the developer only needs to pass in the data to be processed and a target size, and the operator automatically works out values such as kernel_size and stride so that the shape of the output equals the target size.
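As a quick illustration of the interface (my own toy example, not from the original experiments):

import torch
import torch.nn as nn

pool = nn.AdaptiveAvgPool2d((6, 6))
x = torch.randn(1, 3, 17, 29)   # arbitrary spatial size
print(pool(x).shape)            # torch.Size([1, 3, 6, 6]), whatever H and W were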
But this understanding is not entirely correct, and is even misleading. It conveys a rough sense of what the operator does, but anyone who tries to reproduce the operator from it will walk straight into a trap.
A currently common way to reproduce it goes like this. We know that for an ordinary pooling operation, given the pooling layer's kernel_size, padding and stride together with the input tensor size input_size, the output tensor size output_size is:

output_size = (input_size + 2*padding - kernel_size) / stride + 1

(the calculation is simplified here; if the input tensor's row and column sizes differ, compute each dimension separately)

Then we find a way to back-solve kernel_size, stride and the other values from input_size and output_size, obtaining the data we need by inverting the calculation.
In fact, this method captures the operator's shape but not its spirit. It can force the output tensor to the desired target size, but the values inside can differ considerably from what nn.AdaptiveAvgPool2d computes. The reason is that the starting point itself is wrong.
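To make this concrete, here is a small illustrative example (mine, not from the original experiments). Back-solving for a pooling from input_size = 6 to output_size = 4 with no padding gives stride = 1 and kernel_size = 6 - (4 - 1) * 1 = 3; the result has the right shape but the wrong values:

import torch
import torch.nn as nn

x = torch.arange(1., 7.).reshape(1, 1, 1, 6)   # values 1..6 along the width

fixed = nn.AvgPool2d(kernel_size=(1, 3), stride=(1, 1))  # back-solved parameters
adaptive = nn.AdaptiveAvgPool2d((1, 4))

print(fixed(x))     # tensor([[[[2., 3., 4., 5.]]]])
print(adaptive(x))  # tensor([[[[1.5000, 2.5000, 4.5000, 5.5000]]]])

Both outputs have shape 1x1x1x4, yet every value differs.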
02
The computation principle of AdaptiveAvgPool2d
I read a good deal of material and finally found what I needed at https://discuss.pytorch.org/t/what-is-adaptiveavgpool2d/26897, where Thomas gives a fairly accurate explanation of AdaptiveAvgPool2d. I adapted the code shared there to the NCHW layout:
import torch.nn as nn
import torch

def torch_pool(inputs, target_size):
    # NCHW
    H = target_size[0]
    W = target_size[1]
    # Window boundaries: start = floor(i * in_size / out_size), end = ceil((i + 1) * in_size / out_size)
    s_p1 = (torch.arange(W, dtype=torch.float32) * (inputs.size(-1) / W)).long()
    e_p1 = ((torch.arange(W, dtype=torch.float32) + 1) * (inputs.size(-1) / W)).ceil().long()
    s_p2 = (torch.arange(H, dtype=torch.float32) * (inputs.size(-2) / H)).long()
    e_p2 = ((torch.arange(H, dtype=torch.float32) + 1) * (inputs.size(-2) / H)).ceil().long()
    pooled2 = []
    for i_H in range(H):
        pooled = []
        for i_W in range(W):
            # Average each window, keeping dims so the results can be concatenated
            res = torch.mean(inputs[:, :, s_p2[i_H]:e_p2[i_H], s_p1[i_W]:e_p1[i_W]], dim=(-2, -1), keepdim=True)
            pooled.append(res)
        pooled = torch.cat(pooled, -1)
        pooled2.append(pooled)
    pooled2 = torch.cat(pooled2, -2)
    return pooled2

if __name__ == '__main__':
    data = [[[[2, 3, 4, 5, 6, 9, 7, 8],
              [2, 3, 4, 5, 6, 9, 7, 8],
              [2, 3, 4, 5, 6, 9, 7, 8],
              [2, 3, 4, 5, 6, 9, 7, 8],
              [2, 3, 4, 5, 6, 9, 7, 8],
              [2, 3, 4, 5, 6, 9, 7, 8]],
             [[2, 3, 4, 5, 6, 9, 7, 8],
              [2, 3, 4, 5, 6, 9, 7, 8],
              [2, 3, 4, 5, 6, 9, 7, 8],
              [2, 3, 4, 5, 6, 9, 7, 8],
              [2, 3, 4, 5, 6, 9, 7, 8],
              [2, 3, 4, 5, 6, 9, 7, 8]]]]
    inputs = torch.tensor(data, dtype=torch.float32)
    print(inputs)
    print(inputs.size())
    print("*********************************")
    avgpool1 = torch_pool(inputs, (1, 3))
    avgpool2 = torch_pool(inputs, (2, 3))
    avgpool3 = torch_pool(inputs, (3, 3))
    avgpool6 = torch_pool(inputs, (6, 5))
    print(avgpool1)
    print("*********************************")
    print(avgpool2)
    print("*********************************")
    print(avgpool3)
    print("*********************************")
    print(avgpool6)

The result of the calculation is:
tensor([[[[2., 3., 4., 5., 6., 9., 7., 8.],
          [2., 3., 4., 5., 6., 9., 7., 8.],
          [2., 3., 4., 5., 6., 9., 7., 8.],
          [2., 3., 4., 5., 6., 9., 7., 8.],
          [2., 3., 4., 5., 6., 9., 7., 8.],
          [2., 3., 4., 5., 6., 9., 7., 8.]],

         [[2., 3., 4., 5., 6., 9., 7., 8.],
          [2., 3., 4., 5., 6., 9., 7., 8.],
          [2., 3., 4., 5., 6., 9., 7., 8.],
          [2., 3., 4., 5., 6., 9., 7., 8.],
          [2., 3., 4., 5., 6., 9., 7., 8.],
          [2., 3., 4., 5., 6., 9., 7., 8.]]]])
torch.Size([1, 2, 6, 8])
*********************************
tensor([[[[3., 6., 8.]],

         [[3., 6., 8.]]]])
*********************************
tensor([[[[3., 6., 8.],
          [3., 6., 8.]],

         [[3., 6., 8.],
          [3., 6., 8.]]]])
*********************************
tensor([[[[3., 6., 8.],
          [3., 6., 8.],
          [3., 6., 8.]],

         [[3., 6., 8.],
          [3., 6., 8.],
          [3., 6., 8.]]]])
*********************************
tensor([[[[2.5000, 4.0000, 5.5000, 7.3333, 7.5000],
          [2.5000, 4.0000, 5.5000, 7.3333, 7.5000],
          [2.5000, 4.0000, 5.5000, 7.3333, 7.5000],
          [2.5000, 4.0000, 5.5000, 7.3333, 7.5000],
          [2.5000, 4.0000, 5.5000, 7.3333, 7.5000],
          [2.5000, 4.0000, 5.5000, 7.3333, 7.5000]],

         [[2.5000, 4.0000, 5.5000, 7.3333, 7.5000],
          [2.5000, 4.0000, 5.5000, 7.3333, 7.5000],
          [2.5000, 4.0000, 5.5000, 7.3333, 7.5000],
          [2.5000, 4.0000, 5.5000, 7.3333, 7.5000],
          [2.5000, 4.0000, 5.5000, 7.3333, 7.5000],
          [2.5000, 4.0000, 5.5000, 7.3333, 7.5000]]]])

Compare and verify against PyTorch's own nn.AdaptiveAvgPool2d operator:
import torch.nn as nn
import torch

if __name__ == '__main__':
    data = [[[[2, 3, 4, 5, 6, 9, 7, 8],
              [2, 3, 4, 5, 6, 9, 7, 8],
              [2, 3, 4, 5, 6, 9, 7, 8],
              [2, 3, 4, 5, 6, 9, 7, 8],
              [2, 3, 4, 5, 6, 9, 7, 8],
              [2, 3, 4, 5, 6, 9, 7, 8]],
             [[2, 3, 4, 5, 6, 9, 7, 8],
              [2, 3, 4, 5, 6, 9, 7, 8],
              [2, 3, 4, 5, 6, 9, 7, 8],
              [2, 3, 4, 5, 6, 9, 7, 8],
              [2, 3, 4, 5, 6, 9, 7, 8],
              [2, 3, 4, 5, 6, 9, 7, 8]]]]
    x = torch.tensor(data, dtype=torch.float32)
    print(x)
    print(x.size())
    print("*********************************")
    avgpool1 = nn.AdaptiveAvgPool2d((1, 3))
    avgpool2 = nn.AdaptiveAvgPool2d((2, 3))
    avgpool3 = nn.AdaptiveAvgPool2d((3, 3))
    avgpool6 = nn.AdaptiveAvgPool2d((6, 5))
    print(avgpool1(x))
    print("*********************************")
    print(avgpool2(x))
    print("*********************************")
    print(avgpool3(x))
    print("*********************************")
    print(avgpool6(x))

The result of the calculation is:
tensor([[[[2., 3., 4., 5., 6., 9., 7., 8.],
          [2., 3., 4., 5., 6., 9., 7., 8.],
          [2., 3., 4., 5., 6., 9., 7., 8.],
          [2., 3., 4., 5., 6., 9., 7., 8.],
          [2., 3., 4., 5., 6., 9., 7., 8.],
          [2., 3., 4., 5., 6., 9., 7., 8.]],

         [[2., 3., 4., 5., 6., 9., 7., 8.],
          [2., 3., 4., 5., 6., 9., 7., 8.],
          [2., 3., 4., 5., 6., 9., 7., 8.],
          [2., 3., 4., 5., 6., 9., 7., 8.],
          [2., 3., 4., 5., 6., 9., 7., 8.],
          [2., 3., 4., 5., 6., 9., 7., 8.]]]])
torch.Size([1, 2, 6, 8])
*********************************
tensor([[[[3., 6., 8.]],

         [[3., 6., 8.]]]])
*********************************
tensor([[[[3., 6., 8.],
          [3., 6., 8.]],

         [[3., 6., 8.],
          [3., 6., 8.]]]])
*********************************
tensor([[[[3., 6., 8.],
          [3., 6., 8.],
          [3., 6., 8.]],

         [[3., 6., 8.],
          [3., 6., 8.],
          [3., 6., 8.]]]])
*********************************
tensor([[[[2.5000, 4.0000, 5.5000, 7.3333, 7.5000],
          [2.5000, 4.0000, 5.5000, 7.3333, 7.5000],
          [2.5000, 4.0000, 5.5000, 7.3333, 7.5000],
          [2.5000, 4.0000, 5.5000, 7.3333, 7.5000],
          [2.5000, 4.0000, 5.5000, 7.3333, 7.5000],
          [2.5000, 4.0000, 5.5000, 7.3333, 7.5000]],

         [[2.5000, 4.0000, 5.5000, 7.3333, 7.5000],
          [2.5000, 4.0000, 5.5000, 7.3333, 7.5000],
          [2.5000, 4.0000, 5.5000, 7.3333, 7.5000],
          [2.5000, 4.0000, 5.5000, 7.3333, 7.5000],
          [2.5000, 4.0000, 5.5000, 7.3333, 7.5000],
          [2.5000, 4.0000, 5.5000, 7.3333, 7.5000]]]])

You can see that both the output shapes and the values inside the tensors are identical. Moreover, no matter which part of NCHW you vary, enlarging any one of the N, C, H or W dimensions, the two computations remain consistent.
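The rule that produces those windows is: output position i averages the input positions from floor(i * input_size / output_size) up to, but not including, ceil((i + 1) * input_size / output_size). Working through the width dimension of the (6, 5) case above (8 columns pooled to 5) shows where a value like 7.3333 comes from:

i=0: columns [0, 2) = {2, 3} → 2.5
i=1: columns [1, 4) = {3, 4, 5} → 4.0
i=2: columns [3, 5) = {5, 6} → 5.5
i=3: columns [4, 7) = {6, 9, 7} → 7.3333
i=4: columns [6, 8) = {7, 8} → 7.5

Note that the windows overlap and vary in size; this is exactly what no single fixed kernel_size/stride combination can reproduce.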
03
A MindSpore version of AdaptiveAvgPool2d
Looking at the code above, to rewrite it for MindSpore we only need to replace the three main operators 'torch.arange', 'torch.mean' and 'torch.cat', and add a rounding operation. In MindSpore the counterparts are operators such as ops.ReduceMean(keep_dims=True) and P.Concat(axis=-1); just make the corresponding substitutions.
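As a quick sketch of that mapping (assuming the mindspore.ops API; the variable names are mine):

import mindspore.ops as ops

# torch.mean(..., dim=(-2, -1), keepdim=True)      ->  ops.ReduceMean(keep_dims=True)
reduce_mean = ops.ReduceMean(keep_dims=True)
# torch.cat(tensors, -1) and torch.cat(tensors, -2) ->  ops.Concat with the matching axis
concat_w = ops.Concat(axis=-1)
concat_h = ops.Concat(axis=-2)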
However, while rewriting the line res = torch.mean(inputs[:, :, s_p2[i_H]:e_p2[i_H], s_p1[i_W]:e_p1[i_W]], dim=(-2, -1), keepdim=True), I found that MindSpore throws an exception when slicing with variable indices; I am not sure whether my usage is at fault.
So I wrote a provisional version instead. For example, to pool NCx32x64 data down to NCx6x6, we can compute the slice indices in advance, which gives a working version of the code:
def _AvgPool2d6x6(self, x):
    # Indices precomputed for 64 -> 6 (width) and 32 -> 6 (height):
    # start = floor(i * in_size / 6), end = ceil((i + 1) * in_size / 6)
    s_p1 = [0, 10, 21, 32, 42, 53]
    e_p1 = [11, 22, 32, 43, 54, 64]
    s_p2 = [0, 5, 10, 16, 21, 26]
    e_p2 = [6, 11, 16, 22, 27, 32]
    pooled2 = []
    for i_H in range(6):
        pooled = []
        for i_W in range(6):
            res = self.reduceMean(x[:, :, s_p2[i_H]:e_p2[i_H], s_p1[i_W]:e_p1[i_W]], (-2, -1))
            pooled.append(res)
        pooled = self.concat1((pooled[0], pooled[1], pooled[2], pooled[3], pooled[4], pooled[5]))
        pooled2.append(pooled)
    pooled2 = self.concat2((pooled2[0], pooled2[1], pooled2[2], pooled2[3], pooled2[4], pooled2[5]))
    return pooled2
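For completeness, here is a minimal sketch of what the surrounding Cell might look like, assuming MindSpore's ops.ReduceMean and ops.Concat APIs; the helper pool_indices, the class name and the usage lines are my own additions, and the helper simply reproduces the hard-coded index lists above:

import numpy as np
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import Tensor

def pool_indices(in_size, out_size):
    # start = floor(i * in / out), end = ceil((i + 1) * in / out)
    starts = [int(np.floor(i * in_size / out_size)) for i in range(out_size)]
    ends = [int(np.ceil((i + 1) * in_size / out_size)) for i in range(out_size)]
    return starts, ends

class AdaptiveAvgPool6x6(nn.Cell):
    """Adaptive average pooling with precomputed indices: NCx32x64 -> NCx6x6."""
    def __init__(self):
        super(AdaptiveAvgPool6x6, self).__init__()
        self.reduceMean = ops.ReduceMean(keep_dims=True)
        self.concat1 = ops.Concat(axis=-1)   # joins the 6 windows along W
        self.concat2 = ops.Concat(axis=-2)   # joins the 6 rows along H
        # pool_indices(64, 6) / pool_indices(32, 6) yield the same lists as hard-coded above
        self.s_p1, self.e_p1 = pool_indices(64, 6)
        self.s_p2, self.e_p2 = pool_indices(32, 6)

    def construct(self, x):
        pooled2 = []
        for i_H in range(6):
            pooled = []
            for i_W in range(6):
                res = self.reduceMean(x[:, :, self.s_p2[i_H]:self.e_p2[i_H],
                                         self.s_p1[i_W]:self.e_p1[i_W]], (-2, -1))
                pooled.append(res)
            pooled2.append(self.concat1((pooled[0], pooled[1], pooled[2],
                                         pooled[3], pooled[4], pooled[5])))
        return self.concat2((pooled2[0], pooled2[1], pooled2[2],
                             pooled2[3], pooled2[4], pooled2[5]))

x = Tensor(np.random.rand(1, 2, 32, 64).astype(np.float32))
print(AdaptiveAvgPool6x6()(x).shape)  # (1, 2, 6, 6)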
MindSpore official resources
Official QQ group: 486831414
Official website: https://www.mindspore.cn/
Gitee: https://gitee.com/mindspore/mindspore
GitHub: https://github.com/mindspore-ai/mindspore
Forum: https://bbs.huaweicloud.com/forum/forum-1076-1.html