(PyTorch advanced road 5) RNN / LSTM / LSTMP / GRU
2022-07-18 23:15:00 【likeGhee】
A note up front: the plain RNN is too commonplace and simple to dwell on, so this is just kept as a reference file.
What is mainly used now are RNN variants: GRU or LSTM.
The model can be unidirectional, bidirectional, or a stack of several unidirectional/bidirectional layers. In the nn.RNN API, num_layers sets how many layers are stacked; if bidirectional, the output feature size is hidden_size * 2.
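A minimal sketch (arbitrary sizes) illustrating the stacking and bidirectional claims above with nn.RNN:

import torch
import torch.nn as nn

# num_layers stacks RNN layers; bidirectional=True doubles the feature size of output
bs, seq_len, i_size, h_size, layers = 2, 5, 4, 3, 2
rnn = nn.RNN(i_size, h_size, num_layers=layers, bidirectional=True, batch_first=True)
x = torch.randn(bs, seq_len, i_size)
output, h_n = rnn(x)
print(output.shape)  # torch.Size([2, 5, 6]) -> [bs, seq_len, 2 * hidden_size]
print(h_n.shape)     # torch.Size([4, 2, 3]) -> [2 * num_layers, bs, hidden_size]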
RNN
General idea of RNN
A recurrent neural network models a sequence by taking the past history into account when computing the representation at each step. The history is kept in a memory (hidden) unit: at every time step the memory unit stores the accumulated history, which assists the prediction at the current step.
A bidirectional RNN has two chains, a forward one (FL) and a backward one (BL). The FL output depends on the current input and the past memory unit; the BL output depends on the current input and the future memory unit.
RNN strengths: it handles variable-length sequences, the model size is independent of the sequence length, the amount of computation grows linearly with the sequence length, it uses historical information, it supports streaming output, and the weights are shared across time steps.
Its drawbacks: serial computation is slow, it cannot capture very long histories, and gradients vanish.
The per-step formula (nn.RNN with the default tanh nonlinearity): h_t = tanh(W_ih x_t + b_ih + W_hh h_{t-1} + b_hh)
h is the hidden state.
In the nn.RNN API the input is three-dimensional. With batch_first=True (as used in the code below), the input has shape (N, L, Hin), where
L = sequence length
N = batch size
D = 2 if bidirectional=True else 1
Hin = input_size
Hout = hidden_size
output (the hidden state at every time step) has shape [N, L, D * Hout]
h_n (the final hidden state) has shape [D * num_layers, N, Hout]

nn.RNN API
import torch
import torch.nn as nn

def test_rnn_api():
    input_size = 4
    hidden_size = 3
    num_layer = 1
    bs = 1
    seq_len = 2
    h_in = 4
    single_rnn = nn.RNN(input_size, hidden_size, num_layer, batch_first=True)
    inp = torch.randn(bs, seq_len, h_in)
    output, h_n = single_rnn(inp)
    print(output.shape, "# output.shape [bs, seq_len, h_s]")
    print(h_n.shape, "# h_n.shape [D=1 * num_layer, bs, h_s]")

test_rnn_api()
def test_bi_rnn_api():
    input_size = 4
    hidden_size = 3
    num_layer = 1
    bs = 1
    seq_len = 2
    h_in = 4
    bi_rnn = nn.RNN(input_size, hidden_size, num_layer,
                    bidirectional=True, batch_first=True)
    inp = torch.randn(bs, seq_len, h_in)
    output, h_n = bi_rnn(inp)
    print(output.shape, "# output.shape [bs, seq_len, 2*h_s]")
    print(h_n.shape, "# h_n.shape [D=2 * num_layer, bs, h_s]")
    print(output)
    print(h_n)

test_bi_rnn_api()
Unidirectional RNN implementation
input shape: [bs, seq_len, input_size]
weight_ih shape: [h_dim, input_size]
weight_hh shape: [h_dim, h_dim]
h_prev shape: [bs, hidden_size]
x shape: [bs, input_size]
Iterate over seq_len and apply the RNN formula at every step.
def rnn_impl(inp, weight_ih, weight_hh, bias_ih, bias_hh, h_prev):
    """
    inp is three-dimensional: [bs, seq_len, input_size]
    weight_ih: [h_dim, input_size]
    weight_hh: [h_dim, h_dim]
    h_prev: [bs, hidden_size]
    """
    bs, seq_len, input_size = inp.shape
    h_dim = weight_ih.shape[0]
    # initialize the output matrix
    h_out = torch.zeros(bs, seq_len, h_dim)
    # the RNN's complexity is linear in the sequence length
    for t in range(seq_len):
        x = inp[:, t, :]  # x shape = [bs, input_size]
        x = x.unsqueeze(dim=2)  # add one dimension -> [bs, input_size, 1]
        # expand the weights by copying them bs times -> [bs, h_dim, input_size]
        bw_ih = weight_ih.unsqueeze(dim=0).tile(bs, 1, 1)
        bw_hh = weight_hh.unsqueeze(dim=0).tile(bs, 1, 1)
        # per batch element: [h_dim, input_size] @ [input_size, 1] = [h_dim, 1]
        wih_times_x = torch.bmm(bw_ih, x).squeeze(-1)  # [bs, h_dim]
        whh_times_h = torch.bmm(bw_hh, h_prev.unsqueeze(2)).squeeze(-1)  # [bs, h_dim]
        h_prev = torch.tanh(wih_times_x + whh_times_h + bias_ih + bias_hh)
        h_out[:, t, :] = h_prev
    return h_out, h_prev.unsqueeze(0)  # the official h_n is 3-dimensional


def test_rnn_impl():
    bs, seq_len = 2, 3
    input_size, hidden_size = 2, 3
    # randomly initialize an input
    inp = torch.randn(bs, seq_len, input_size)
    # initial hidden state
    h_prev = torch.zeros(bs, hidden_size)
    rnn = nn.RNN(input_size, hidden_size, batch_first=True)
    res1, h_n1 = rnn(inp, h_prev.unsqueeze(dim=0))
    # inspect the parameters inside nn.RNN
    for name, parameter in rnn.named_parameters():
        print(name, parameter)
    print("=========================")
    weight_ih = rnn.weight_ih_l0
    weight_hh = rnn.weight_hh_l0
    bias_ih = rnn.bias_ih_l0
    bias_hh = rnn.bias_hh_l0
    res2, h_n2 = rnn_impl(inp, weight_ih, weight_hh, bias_ih, bias_hh, h_prev)
    print(res2)
    print(res1)
    print(torch.allclose(res1, res2))

test_rnn_impl()
Bidirectional RNN implementation
It can be built on top of the unidirectional RNN:
Initialize an output matrix; note that the hidden dimension is multiplied by 2.
Call the unidirectional RNN twice; for the backward RNN, flip the input along the sequence dimension.
Note that backward_output must be flipped back along seq so it lines up with the original sequence order.
Splice the forward and backward outputs into h_out.
Finally assemble h_n from the two directions' final states: h_n shape = [D * num_layers, bs, h_dim].
def bi_rnn_impl(inp, weight_ih, weight_hh, bias_ih, bias_hh, h_prev,
                weight_ih_reverse, weight_hh_reverse, bias_ih_reverse,
                bias_hh_reverse, h_prev_reverse):
    bs, seq_len, input_size = inp.shape
    h_dim = weight_ih.shape[0]
    # initialize the output matrix; the hidden dimension is multiplied by 2
    h_out = torch.zeros(bs, seq_len, h_dim * 2)
    # call the unidirectional RNN twice
    forward_output, _ = rnn_impl(inp, weight_ih, weight_hh,
                                 bias_ih, bias_hh, h_prev)
    # flip the input tensor along the seq_len dimension for the backward pass
    backward_output, _ = rnn_impl(torch.flip(inp, [1]), weight_ih_reverse,
                                  weight_hh_reverse, bias_ih_reverse,
                                  bias_hh_reverse, h_prev_reverse)
    # fill the forward and backward outputs into h_out
    h_out[:, :, :h_dim] = forward_output
    # backward_output must be flipped back along seq to match the original order
    h_out[:, :, h_dim:] = torch.flip(backward_output, [1])
    # final hidden states: the forward RNN's state at t = -1 and the backward RNN's
    # state after its last step (original t = 0); hn shape = [D * num_layers, bs, h_dim]
    hn = torch.stack([forward_output[:, -1, :], backward_output[:, -1, :]], dim=0)
    return h_out, hn
def test_bi_rnn_impl():
    bs, seq_len = 2, 3
    input_size, hidden_size = 2, 3
    # randomly initialize an input
    inp = torch.randn(bs, seq_len, input_size)
    # initial hidden state
    h_prev = torch.zeros(2, bs, hidden_size)
    bi_rnn = nn.RNN(input_size, hidden_size, bidirectional=True, batch_first=True)
    res1, h_n1 = bi_rnn(inp, h_prev)
    # inspect the parameters inside nn.RNN
    for name, parameter in bi_rnn.named_parameters():
        print(name, parameter)
    print("=========================")
    weight_ih = bi_rnn.weight_ih_l0
    weight_hh = bi_rnn.weight_hh_l0
    bias_ih = bi_rnn.bias_ih_l0
    bias_hh = bi_rnn.bias_hh_l0
    weight_ih_reverse = bi_rnn.weight_ih_l0_reverse
    weight_hh_reverse = bi_rnn.weight_hh_l0_reverse
    bias_ih_reverse = bi_rnn.bias_ih_l0_reverse
    bias_hh_reverse = bi_rnn.bias_hh_l0_reverse
    res2, hn2 = bi_rnn_impl(inp, weight_ih, weight_hh, bias_ih, bias_hh, h_prev[0],
                            weight_ih_reverse, weight_hh_reverse,
                            bias_ih_reverse, bias_hh_reverse, h_prev[1])
    print(res1)
    print(res2)
    print(torch.allclose(res1, res2))

test_bi_rnn_impl()
RNN Cell
nn.RNNCell is the single-step RNN module; the loop over time is written by hand:
import torch
import torch.nn as nn

def test_rnn_cell():
    input_size = 10
    hidden_size = 20
    bs = 3
    seq_len = 6
    rnn_cell = nn.RNNCell(input_size, hidden_size)
    inp = torch.randn(seq_len, bs, input_size)
    h_prev = torch.randn(bs, hidden_size)
    output = torch.zeros([seq_len, bs, hidden_size])
    for i in range(seq_len):
        h_prev = rnn_cell(inp[i], h_prev)
        output[i] = h_prev
    print(output.shape)

test_rnn_cell()
LSTM
General idea of LSTM
Compared with the RNN, the LSTM adds gates: an input gate, an output gate, a forget gate, and a memory cell.
Reading the cell from left to right: the forget gate f is multiplied by c_{t-1}, which filters the old cell state; the input gate i is multiplied by the candidate g, which filters the new content, and the product is added onto c; o is the output gate, and finally o multiplied by tanh(c_t) gives h_t.
i: input gate
f: forget gate
g: cell gate (candidate)
o: output gate
c: cell state (memory unit)
h: hidden state / output
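For reference, the per-step LSTM equations (the standard form used by nn.LSTM, matching the implementation below; * is element-wise multiplication):

i_t = sigmoid(W_ii x_t + b_ii + W_hi h_{t-1} + b_hi)
f_t = sigmoid(W_if x_t + b_if + W_hf h_{t-1} + b_hf)
g_t = tanh(W_ig x_t + b_ig + W_hg h_{t-1} + b_hg)
o_t = sigmoid(W_io x_t + b_io + W_ho h_{t-1} + b_ho)
c_t = f_t * c_{t-1} + i_t * g_t
h_t = o_t * tanh(c_t)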
Looking at the first four equations, i, f, g, and o each contain a W times x term, so the four input weight matrices can be stacked and multiplied by x in a single matmul; the same holds for the four hidden weight matrices (see the small check below).
In the fifth equation, f times c_{t-1} and i times g are element-wise products.
The sixth equation gives h_t, which is the LSTM output.
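A quick check of the stacking trick; the names below are made up for illustration and are not part of the original code:

import torch

# multiplying x once by the stacked [4*h, i_size] matrix equals four separate
# per-gate matmuls, which is why nn.LSTM stores weight_ih_l0 as one big matrix
torch.manual_seed(0)
h, i_size = 3, 5
w_i, w_f, w_g, w_o = (torch.randn(h, i_size) for _ in range(4))
x = torch.randn(i_size)
w_stacked = torch.cat([w_i, w_f, w_g, w_o], dim=0)    # [4*h, i_size]
stacked = w_stacked @ x                                # [4*h]
separate = torch.cat([w_i @ x, w_f @ x, w_g @ x, w_o @ x])
print(torch.allclose(stacked, separate))               # True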
The LSTM has two initial states: every variable with a t-1 subscript needs an initial value, so both h and c need initial states h0 and c0.
c_size = hidden_size
LSTMP: to reduce the amount of LSTM computation, LSTMP compresses h_t with a projection; compressing h_dim costs little in performance.
h_t = W_hr @ h_t
input: [N, L, Hin]
h_0: [D * num_layers, N, Hout]
output: [N, L, D * Hout]
h_n: [D * num_layers, N, Hout]
c_n: [D * num_layers, N, Hcell], where Hcell = hidden_size and Hout = proj_size if a projection is used, otherwise hidden_size
For a many-to-many (seq2seq) task, output is what you use: the LSTM output at every time step is needed.
If only h_n is needed, it is a many-to-one task: only the last step's representation stands for the whole sentence (a minimal sketch follows below).
If a projection is used, the mapping compresses h, so h_n has size proj_size rather than hidden_size.
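A minimal sketch of the many-to-one pattern described above; the linear classifier head, the class count, and all sizes are illustrative assumptions rather than part of the original:

import torch
import torch.nn as nn

bs, seq_len, i_size, h_size, num_classes = 2, 7, 8, 16, 4
lstm = nn.LSTM(i_size, h_size, batch_first=True)
head = nn.Linear(h_size, num_classes)    # hypothetical classification head
x = torch.randn(bs, seq_len, i_size)
output, (h_n, c_n) = lstm(x)             # output: [bs, seq_len, h_size]
logits = head(h_n[-1])                   # h_n[-1]: final state of the last layer, [bs, h_size]
print(logits.shape)                      # torch.Size([2, 4])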
W_ih is a linear transform of the input into the h_dim dimension; the four gate matrices are concatenated, so W_ih: [4 * h_dim, input_size].
Likewise W_hh is a linear transform of h into the h_dim dimension, W_hh: [4 * h_dim, h_dim].
bias: [4 * h_dim, ]
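A small sketch (arbitrary sizes) that prints the parameter shapes, including the projection case: with proj_size set, weight_hh_l0 becomes [4 * h_dim, proj_size] and an extra weight_hr_l0 of shape [proj_size, h_dim] appears:

import torch.nn as nn

i_size, h_size, proj = 4, 6, 2
lstmp = nn.LSTM(i_size, h_size, batch_first=True, proj_size=proj)
for name, p in lstmp.named_parameters():
    print(name, tuple(p.shape))
# weight_ih_l0 (24, 4)  -> [4 * h_dim, input_size]
# weight_hh_l0 (24, 2)  -> [4 * h_dim, proj_size]
# bias_ih_l0   (24,)
# bias_hh_l0   (24,)
# weight_hr_l0 (2, 6)   -> [proj_size, h_dim]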
API
Note that in the LSTM implemented by PyTorch the four gates cannot see c_{t-1}:
there is no extra peephole term W c_{t-1} + bias added inside the gates.
def test_lstm_api():
    bs, t, i_size, h_size = 2, 3, 4, 5
    inp = torch.randn(bs, t, i_size)
    # the initial states do not require training
    c0 = torch.randn(bs, h_size)
    h0 = torch.randn(bs, h_size)
    # call the official API
    lstm_layer = nn.LSTM(i_size, h_size, batch_first=True)
    output, (hn, cn) = lstm_layer(inp, (h0.unsqueeze(0), c0.unsqueeze(0)))
    print(output)
    print(hn)
    for k, v in lstm_layer.named_parameters():
        print(k, "# #", v.shape)

test_lstm_api()
Unidirectional LSTM implementation
W is the four gates' weight matrices stacked together, so take one quarter at a time: compute the i, f, g, o gates in turn, then compute c and h, and iterate over the time steps.
def lstm_forward(inp, initial_states, w_ih, w_hh, b_ih, b_hh):
    """ inp: [bs, T, input_size] """
    h0, c0 = initial_states
    bs, seq_len, i_size = inp.shape
    h_size = w_ih.shape[0] // 4  # w_ih: [4 * h_dim, i_size]
    bw_ih = w_ih.unsqueeze(0).tile(bs, 1, 1)  # [bs, 4 * h_dim, i_size]
    bw_hh = w_hh.unsqueeze(0).tile(bs, 1, 1)  # [bs, 4 * h_dim, h_dim]
    prev_h = h0  # [bs, h_dim]
    prev_c = c0
    output_size = h_size
    output = torch.randn(bs, seq_len, output_size)
    # iterate over time
    for t in range(seq_len):
        x = inp[:, t, :]  # [bs, input_size]
        # add a dimension to x for bmm: [bs, i_size, 1]
        w_times_x = torch.bmm(bw_ih, x.unsqueeze(-1)).squeeze(-1)  # [bs, 4*h_dim]
        w_times_h_prev = torch.bmm(bw_hh, prev_h.unsqueeze(-1)).squeeze(-1)  # [bs, 4*h_dim]
        # input gate: take the first quarter of the stacked matrices
        i = 0
        i_t = torch.sigmoid(w_times_x[:, h_size*i:h_size*(i+1)]
                            + w_times_h_prev[:, h_size*i:h_size*(i+1)]
                            + b_ih[h_size*i:h_size*(i+1)]
                            + b_hh[h_size*i:h_size*(i+1)])
        # forget gate
        i += 1
        f_t = torch.sigmoid(w_times_x[:, h_size*i:h_size*(i+1)]
                            + w_times_h_prev[:, h_size*i:h_size*(i+1)]
                            + b_ih[h_size*i:h_size*(i+1)]
                            + b_hh[h_size*i:h_size*(i+1)])
        # cell gate (candidate)
        i += 1
        g_t = torch.tanh(w_times_x[:, h_size*i:h_size*(i+1)]
                         + w_times_h_prev[:, h_size*i:h_size*(i+1)]
                         + b_ih[h_size*i:h_size*(i+1)]
                         + b_hh[h_size*i:h_size*(i+1)])
        # output gate
        i += 1
        o_t = torch.sigmoid(w_times_x[:, h_size*i:h_size*(i+1)]
                            + w_times_h_prev[:, h_size*i:h_size*(i+1)]
                            + b_ih[h_size*i:h_size*(i+1)]
                            + b_hh[h_size*i:h_size*(i+1)])
        # cell state
        prev_c = f_t * prev_c + i_t * g_t
        # hidden state
        prev_h = o_t * torch.tanh(prev_c)
        output[:, t, :] = prev_h
    return output, (prev_h, prev_c)
def test_lstm_impl():
    bs, t, i_size, h_size = 2, 3, 4, 5
    inp = torch.randn(bs, t, i_size)
    # the initial states do not require training
    c0 = torch.randn(bs, h_size)
    h0 = torch.randn(bs, h_size)
    # call the official API
    lstm_layer = nn.LSTM(i_size, h_size, batch_first=True)
    output, _ = lstm_layer(inp, (h0.unsqueeze(0), c0.unsqueeze(0)))
    for k, v in lstm_layer.named_parameters():
        print(k, "# #", v.shape)
    print("++++++++++++++++++++++++++++++++++++++")
    w_ih = lstm_layer.weight_ih_l0
    w_hh = lstm_layer.weight_hh_l0
    b_ih = lstm_layer.bias_ih_l0
    b_hh = lstm_layer.bias_hh_l0
    output2, _ = lstm_forward(inp, (h0, c0), w_ih, w_hh, b_ih, b_hh)
    print(torch.allclose(output2, output))
    print(output)
    print(output2)

test_lstm_impl()
Unidirectional LSTMP implementation
LSTMP compresses h on top of the LSTM: a linear transform projects h to a lower dimension.
output_size becomes projection_size.
def lstm_forward(inp, initial_states, w_ih, w_hh, b_ih, b_hh, w_hr=None):
    """
    inp: [bs, T, input_size]
    If w_hr is not None the layer has a projection, w_hr: [p_dim, h_dim]
    """
    h0, c0 = initial_states
    bs, seq_len, i_size = inp.shape
    h_size = w_ih.shape[0] // 4  # w_ih: [4 * h_dim, i_size]
    bw_ih = w_ih.unsqueeze(0).tile(bs, 1, 1)  # [bs, 4 * h_dim, i_size]
    bw_hh = w_hh.unsqueeze(0).tile(bs, 1, 1)  # [bs, 4 * h_dim, h_dim or p_dim]
    prev_h = h0  # [bs, h_dim], or [bs, p_dim] with a projection
    prev_c = c0
    if w_hr is not None:
        output_size = w_hr.shape[0]
        bw_hr = w_hr.unsqueeze(0).tile(bs, 1, 1)
    else:
        output_size = h_size
        bw_hr = None
    output = torch.randn(bs, seq_len, output_size)
    # iterate over time
    for t in range(seq_len):
        x = inp[:, t, :]  # [bs, input_size]
        # add a dimension to x for bmm: [bs, i_size, 1]
        w_times_x = torch.bmm(bw_ih, x.unsqueeze(-1)).squeeze(-1)  # [bs, 4*h_dim]
        w_times_h_prev = torch.bmm(bw_hh, prev_h.unsqueeze(-1)).squeeze(-1)  # [bs, 4*h_dim]
        # input gate: take the first quarter of the stacked matrices
        i = 0
        i_t = torch.sigmoid(w_times_x[:, h_size*i:h_size*(i+1)]
                            + w_times_h_prev[:, h_size*i:h_size*(i+1)]
                            + b_ih[h_size*i:h_size*(i+1)]
                            + b_hh[h_size*i:h_size*(i+1)])
        # forget gate
        i += 1
        f_t = torch.sigmoid(w_times_x[:, h_size*i:h_size*(i+1)]
                            + w_times_h_prev[:, h_size*i:h_size*(i+1)]
                            + b_ih[h_size*i:h_size*(i+1)]
                            + b_hh[h_size*i:h_size*(i+1)])
        # cell gate (candidate)
        i += 1
        g_t = torch.tanh(w_times_x[:, h_size*i:h_size*(i+1)]
                         + w_times_h_prev[:, h_size*i:h_size*(i+1)]
                         + b_ih[h_size*i:h_size*(i+1)]
                         + b_hh[h_size*i:h_size*(i+1)])
        # output gate
        i += 1
        o_t = torch.sigmoid(w_times_x[:, h_size*i:h_size*(i+1)]
                            + w_times_h_prev[:, h_size*i:h_size*(i+1)]
                            + b_ih[h_size*i:h_size*(i+1)]
                            + b_hh[h_size*i:h_size*(i+1)])
        # cell state
        prev_c = f_t * prev_c + i_t * g_t
        # hidden state
        prev_h = o_t * torch.tanh(prev_c)  # [bs, h_size]
        # compress h with the projection
        if w_hr is not None:
            prev_h = torch.bmm(bw_hr, prev_h.unsqueeze(-1)).squeeze(-1)  # [bs, p_size]
        output[:, t, :] = prev_h
    return output, (prev_h, prev_c)
def test_lstmp_impl():
    bs, t, i_size, h_size = 2, 3, 4, 5
    proj_size = 3
    inp = torch.randn(bs, t, i_size)
    # the initial states do not require training
    c0 = torch.randn(bs, h_size)
    h0 = torch.randn(bs, proj_size)
    # call the official API
    lstm_layer = nn.LSTM(i_size, h_size, batch_first=True, proj_size=proj_size)
    output, _ = lstm_layer(inp, (h0.unsqueeze(0), c0.unsqueeze(0)))
    for k, v in lstm_layer.named_parameters():
        print(k, "# #", v.shape)
    print("++++++++++++++++++++++++++++++++++++++")
    w_ih = lstm_layer.weight_ih_l0
    w_hh = lstm_layer.weight_hh_l0  # [4 * h_size, proj_size]; proj_size is smaller than h_dim
    b_ih = lstm_layer.bias_ih_l0
    b_hh = lstm_layer.bias_hh_l0
    w_hr = lstm_layer.weight_hr_l0
    output2, _ = lstm_forward(inp, (h0, c0), w_ih, w_hh, b_ih, b_hh, w_hr)
    print(torch.allclose(output2, output))
    print(output.shape)
    print(output2.shape)  # [bs, seq, p_size]

test_lstmp_impl()
GRU
General idea of GRU
The LSTM has four gates and a cell state c, but the GRU has only two gates, a reset gate r and an update gate z. The GRU has no c; only an initial state h is provided.
n_t can be seen as the candidate state, and the new hidden state interpolates between the candidate and the previous state:
h_t = (1 - z_t) * n_t + z_t * h_{t-1}
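For reference, the full per-step GRU equations (the standard form used by nn.GRU, matching the implementation below; * is element-wise multiplication):

r_t = sigmoid(W_ir x_t + b_ir + W_hr h_{t-1} + b_hr)
z_t = sigmoid(W_iz x_t + b_iz + W_hz h_{t-1} + b_hz)
n_t = tanh(W_in x_t + b_in + r_t * (W_hn h_{t-1} + b_hn))
h_t = (1 - z_t) * n_t + z_t * h_{t-1}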
Unidirectional GRU implementation
First get a few attribute variables: h_size is w_ih's 0th dimension divided by 3, or equivalently the second dimension of w_hh.
To use bmm (batched matrix multiplication), expand the weights with a batch dimension.
Allocate the output state matrix.
Iterate serially over time, computing the r and z gates, the candidate state n, and h.
def gru_forward(inp, h0, w_ih, w_hh, b_ih, b_hh):
    """ w_ih and w_hh are each a stack of three matrices """
    bs, seq, i_size = inp.shape
    h_size = w_ih.shape[0] // 3
    prev_h = h0  # [bs, h_dim]
    bw_ih = w_ih.unsqueeze(0).tile(bs, 1, 1)  # [bs, 3*h_dim, i_size]
    bw_hh = w_hh.unsqueeze(0).tile(bs, 1, 1)  # [bs, 3*h_dim, h_dim]
    output = torch.randn(bs, seq, h_size)
    for t in range(seq):
        x = inp[:, t, :]  # [bs, i_size]
        w_times_x = torch.bmm(bw_ih, x.unsqueeze(-1)).squeeze(-1)  # [bs, 3*h_dim]
        w_times_h = torch.bmm(bw_hh, prev_h.unsqueeze(-1)).squeeze(-1)  # [bs, 3*h_dim]
        # reset gate
        i = 0
        ind_l = h_size * i
        ind_r = h_size * (i + 1)
        r_t = torch.sigmoid(w_times_x[:, ind_l:ind_r] + w_times_h[:, ind_l:ind_r]
                            + b_ih[ind_l:ind_r] + b_hh[ind_l:ind_r])
        # update gate
        i += 1
        ind_l = h_size * i
        ind_r = h_size * (i + 1)
        z_t = torch.sigmoid(w_times_x[:, ind_l:ind_r] + w_times_h[:, ind_l:ind_r]
                            + b_ih[ind_l:ind_r] + b_hh[ind_l:ind_r])
        # candidate state
        i += 1
        ind_l = h_size * i
        ind_r = h_size * (i + 1)
        n_t = torch.tanh(w_times_x[:, ind_l:ind_r] + b_ih[ind_l:ind_r]
                         + r_t * (w_times_h[:, ind_l:ind_r] + b_hh[ind_l:ind_r]))
        prev_h = (1 - z_t) * n_t + z_t * prev_h
        output[:, t, :] = prev_h
    return output, prev_h


def test_gru_impl():
    bs, seq, i_size, h_dim = 2, 3, 4, 5
    inp = torch.randn(bs, seq, i_size)
    h0 = torch.randn(bs, h_dim)
    gru = nn.GRU(i_size, h_dim, batch_first=True)
    res1, _ = gru(inp, h0.unsqueeze(0))
    for k, v in gru.named_parameters():
        print(k, v.shape)
    w_ih = gru.weight_ih_l0
    w_hh = gru.weight_hh_l0
    b_ih = gru.bias_ih_l0
    b_hh = gru.bias_hh_l0
    res2, _ = gru_forward(inp, h0, w_ih, w_hh, b_ih, b_hh)
    print(torch.allclose(res1, res2))
    print(res1)
    print(res2)

test_gru_impl()