RNN

An RNN (recurrent neural network) is mainly used for processing and predicting sequence data.

class torch.nn.RNN(*args, **kwargs)

Parameters:
- input_size: number of features in each input x_t
- hidden_size: number of features in the hidden state h_t
- num_layers: number of stacked recurrent layers. Default: 1
- nonlinearity: activation function, 'tanh' or 'relu'. Default: 'tanh'
- bias: if False, the layer does not use the bias weights b_ih and b_hh. Default: True
- batch_first: if True, the input shape is (batch, seq_len, feature). Default: False, i.e. (seq_len, batch, feature)
- dropout: dropout probability applied between layers. Default: 0
- bidirectional: if True, becomes a bidirectional RNN. Default: False

Inputs: input, h_0
- input shape is (seq_len, batch, input_size)
- h_0 shape is (num_layers * num_directions, batch, hidden_size); num_directions is 2 if bidirectional is True, otherwise 1

Outputs: output, h_n
- output shape is (seq_len, batch, num_directions * hidden_size): the value of h_t at every time step
- h_n shape is (num_layers * num_directions, batch, hidden_size): the value of h_t at t = seq_len
- output[-1, :, :] equals h_n for a single-layer, unidirectional RNN with batch_first=False
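The shape conventions above can be checked with a minimal, framework-free sketch of the single-layer recurrence h_t = tanh(W_ih x_t + b_ih + W_hh h_{t-1} + b_hh), using plain nested lists and toy random weights (not PyTorch's initialization):

```python
import math
import random

def rnn_forward(x, h0, W_ih, W_hh, b_ih, b_hh):
    """Single-layer, unidirectional RNN over a whole sequence.
    x: (seq_len, batch, input_size) nested lists; h0: (batch, hidden_size)."""
    hidden_size = len(h0[0])
    output = []
    h = h0
    for x_t in x:                        # iterate over time steps
        h_next = []
        for b, x_b in enumerate(x_t):    # iterate over the batch
            h_b = []
            for j in range(hidden_size):
                s = b_ih[j] + b_hh[j]
                s += sum(W_ih[j][i] * x_b[i] for i in range(len(x_b)))
                s += sum(W_hh[j][k] * h[b][k] for k in range(hidden_size))
                h_b.append(math.tanh(s))
            h_next.append(h_b)
        h = h_next
        output.append(h)                 # output collects h_t for every t
    return output, h                     # h is now h_n, the last hidden state

random.seed(0)
seq_len, batch, input_size, hidden_size = 5, 2, 3, 4
x  = [[[random.uniform(-1, 1) for _ in range(input_size)]
       for _ in range(batch)] for _ in range(seq_len)]
h0 = [[0.0] * hidden_size for _ in range(batch)]
W_ih = [[random.uniform(-1, 1) for _ in range(input_size)] for _ in range(hidden_size)]
W_hh = [[random.uniform(-1, 1) for _ in range(hidden_size)] for _ in range(hidden_size)]
b_ih = [0.0] * hidden_size
b_hh = [0.0] * hidden_size

output, h_n = rnn_forward(x, h0, W_ih, W_hh, b_ih, b_hh)
print(len(output), len(output[0]), len(output[0][0]))  # (seq_len, batch, hidden_size): 5 2 4
print(output[-1] == h_n)                               # last time step equals h_n: True
```

This mirrors why, in the single-layer unidirectional case, output[-1, :, :] and h_n are the same values: h_n is just the hidden state at the final time step.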

Below is the simple RNN model built in the official PyTorch docs:

    import torch
    import torch.nn as nn

    class RNN(nn.Module):
        # you can also accept arguments in your model constructor
        def __init__(self, data_size, hidden_size, output_size):
            super(RNN, self).__init__()
            self.hidden_size = hidden_size
            input_size = data_size + hidden_size
            self.i2h = nn.Linear(input_size, hidden_size)
            self.h2o = nn.Linear(hidden_size, output_size)

        def forward(self, data, last_hidden):
            # concatenate the current input with the previous hidden state
            input = torch.cat((data, last_hidden), 1)
            hidden = self.i2h(input)
            output = self.h2o(hidden)
            return hidden, output
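The concat-then-Linear pattern in that model is equivalent to keeping two separate weight matrices: a Linear layer applied to cat((data, hidden)) computes W·[data; hidden] + b, which splits into W_data·data + W_hidden·hidden + b. A dependency-free check of that identity, with plain lists standing in for tensors:

```python
import random

def matvec(W, v):
    # matrix-vector product over nested lists
    return [sum(W[j][i] * v[i] for i in range(len(v))) for j in range(len(W))]

random.seed(1)
data_size, hidden_size = 3, 4
data   = [random.uniform(-1, 1) for _ in range(data_size)]
hidden = [random.uniform(-1, 1) for _ in range(hidden_size)]

# one fused matrix, as in self.i2h = nn.Linear(data_size + hidden_size, hidden_size)
W = [[random.uniform(-1, 1) for _ in range(data_size + hidden_size)]
     for _ in range(hidden_size)]
fused = matvec(W, data + hidden)          # Linear(cat((data, hidden)))

# the same matrix split into its data / hidden columns
W_data   = [row[:data_size] for row in W]
W_hidden = [row[data_size:] for row in W]
split = [a + b for a, b in zip(matvec(W_data, data), matvec(W_hidden, hidden))]

print(all(abs(a - b) < 1e-12 for a, b in zip(fused, split)))  # True
```

So the fused i2h layer plays the role of nn.RNN's separate weight_ih and weight_hh matrices in one matrix multiply.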

LSTM


class torch.nn.LSTM(*args, **kwargs)

Parameters (same as RNN, except there is no nonlinearity option):
- input_size, hidden_size, num_layers, bias, batch_first, dropout, bidirectional

Inputs: input, (h_0, c_0)
- input shape is (seq_len, batch, input_size)
- h_0 shape is (num_layers * num_directions, batch, hidden_size)
- c_0 shape is (num_layers * num_directions, batch, hidden_size)
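Unlike the plain RNN, an LSTM carries two states, the hidden state h_t and the cell state c_t, updated through input/forget/output gates. A minimal single-time-step sketch of the cell equations, stdlib only with toy weights (the gate layout i, f, g, o follows PyTorch's stacking convention):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_cell(x, h_prev, c_prev, W, b):
    """One LSTM time step for a single sample.
    W maps the concatenated [x; h_prev] to the 4 stacked gates (i, f, g, o)."""
    H = len(h_prev)
    v = x + h_prev
    z = [sum(W[j][i] * v[i] for i in range(len(v))) + b[j] for j in range(4 * H)]
    i = [sigmoid(z[j])         for j in range(H)]           # input gate
    f = [sigmoid(z[H + j])     for j in range(H)]           # forget gate
    g = [math.tanh(z[2*H + j]) for j in range(H)]           # candidate cell values
    o = [sigmoid(z[3*H + j])   for j in range(H)]           # output gate
    c = [f[j] * c_prev[j] + i[j] * g[j] for j in range(H)]  # new cell state
    h = [o[j] * math.tanh(c[j]) for j in range(H)]          # new hidden state
    return h, c

random.seed(2)
input_size, hidden_size = 3, 4
x  = [random.uniform(-1, 1) for _ in range(input_size)]
h0 = [0.0] * hidden_size   # h_0
c0 = [0.0] * hidden_size   # c_0
W = [[random.uniform(-1, 1) for _ in range(input_size + hidden_size)]
     for _ in range(4 * hidden_size)]
b = [0.0] * (4 * hidden_size)

h1, c1 = lstm_cell(x, h0, c0, W, b)
print(len(h1), len(c1))  # both have hidden_size entries: 4 4
```

This is why nn.LSTM takes and returns the pair (h, c) where nn.RNN uses h alone; c_0 has the same shape as h_0.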