RNN
An RNN (recurrent neural network) is mainly used for processing and predicting sequence data.
class torch.nn.RNN(*args, **kwargs)

Parameters:
- input_size: the number of features in each input element
- hidden_size: the number of features in the hidden state
- num_layers: number of stacked recurrent layers, default: 1
- nonlinearity: the activation function, 'tanh' or 'relu', default: 'tanh'
- bias: if False, the layer does not use the bias weights b_ih and b_hh, default: True
- batch_first: if True, the input shape is (batch, seq_len, feature); default: False, i.e. (seq_len, batch, feature)
- dropout: default: 0
- bidirectional: if True, the RNN becomes bidirectional, default: False

Inputs: input, h_0
- input has shape (seq_len, batch, input_size)
- h_0 has shape (num_layers * num_directions, batch, hidden_size); num_directions is 2 if bidirectional is True, otherwise 1

Outputs: output, h_n
- output has shape (seq_len, batch, num_directions * hidden_size) and contains h_t for every time step
- h_n has shape (num_layers * num_directions, batch, hidden_size) and is the value of h_t at the last time step, t = seq_len
- with batch_first=False (and a single-layer, unidirectional RNN), output[-1, :, :] equals h_n
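The shapes above can be verified with a small sketch (the concrete sizes below are illustrative assumptions, not from the original post):

```python
import torch
import torch.nn as nn

# Single-layer, unidirectional RNN with batch_first=False (the defaults)
seq_len, batch, input_size, hidden_size = 5, 3, 10, 20

rnn = nn.RNN(input_size=input_size, hidden_size=hidden_size, num_layers=1)

x = torch.randn(seq_len, batch, input_size)   # (seq_len, batch, input_size)
h0 = torch.zeros(1, batch, hidden_size)       # (num_layers * num_directions, batch, hidden_size)

output, h_n = rnn(x, h0)
print(output.shape)  # torch.Size([5, 3, 20]) -> (seq_len, batch, num_directions * hidden_size)
print(h_n.shape)     # torch.Size([1, 3, 20]) -> (num_layers * num_directions, batch, hidden_size)

# For this configuration, the last time step of output equals h_n
print(torch.allclose(output[-1, :, :], h_n[0]))  # True
```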
Below is a simple RNN model built by hand, from the PyTorch official site:
import torch
import torch.nn as nn

class RNN(nn.Module):
    # you can also accept arguments in your model constructor
    def __init__(self, data_size, hidden_size, output_size):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        input_size = data_size + hidden_size
        self.i2h = nn.Linear(input_size, hidden_size)
        self.h2o = nn.Linear(hidden_size, output_size)

    def forward(self, data, last_hidden):
        input = torch.cat((data, last_hidden), 1)
        hidden = self.i2h(input)
        output = self.h2o(hidden)
        return hidden, output
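Since this cell processes one time step at a time, a sequence is handled by looping over it manually and carrying the hidden state forward. A sketch (the class is redefined here so the snippet runs standalone; the sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

# Same hand-built RNN cell as above, redefined so this snippet is self-contained
class RNN(nn.Module):
    def __init__(self, data_size, hidden_size, output_size):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.i2h = nn.Linear(data_size + hidden_size, hidden_size)
        self.h2o = nn.Linear(hidden_size, output_size)

    def forward(self, data, last_hidden):
        combined = torch.cat((data, last_hidden), 1)
        hidden = self.i2h(combined)
        output = self.h2o(hidden)
        return hidden, output

batch, data_size, hidden_size, output_size = 3, 10, 20, 5
model = RNN(data_size, hidden_size, output_size)

# Step through 4 time steps by hand, feeding each step's hidden state into the next
hidden = torch.zeros(batch, hidden_size)
for t in range(4):
    data_t = torch.randn(batch, data_size)
    hidden, output = model(data_t, hidden)

print(hidden.shape)  # torch.Size([3, 20])
print(output.shape)  # torch.Size([3, 5])
```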
LSTM

class torch.nn.LSTM(*args, **kwargs)

Parameters (same as for RNN, except there is no nonlinearity argument):
- input_size:
- hidden_size:
- num_layers:
- bias:
- batch_first:
- dropout:
- bidirectional:

Inputs: input, (h_0, c_0)
- input has shape (seq_len, batch, input_size)
- h_0 has shape (num_layers * num_directions, batch, hidden_size)
- c_0, the initial cell state, has the same shape as h_0
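The only interface difference from nn.RNN is that the hidden state is a tuple (h_0, c_0) on input and (h_n, c_n) on output. A minimal shape check (sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

seq_len, batch, input_size, hidden_size, num_layers = 5, 3, 10, 20, 2

lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size, num_layers=num_layers)

x = torch.randn(seq_len, batch, input_size)
h0 = torch.zeros(num_layers, batch, hidden_size)  # (num_layers * num_directions, batch, hidden_size)
c0 = torch.zeros(num_layers, batch, hidden_size)  # same shape as h_0

output, (h_n, c_n) = lstm(x, (h0, c0))
print(output.shape)  # torch.Size([5, 3, 20])
print(h_n.shape)     # torch.Size([2, 3, 20])
print(c_n.shape)     # torch.Size([2, 3, 20])
```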
