SGD with momentum

Usage Example:

  // Constructs an SGD optimizer.
  std::shared_ptr<SGD> solver(new SGD);
  // Sets the params to be optimized in the model.
  solver->append(model->parameters());
  // Sets momentum and weight decay.
  solver->setMomentum(0.9f);
  solver->setWeightDecay(0.0005f);
  // Sets the regularization method; defaults to the L2 norm.
  solver->setRegularizationMethod(RegularizationMethod::L2);
  // Sets the learning rate.
  solver->setLearningRate(0.001);
  // Calculates the gradients based on the loss and updates the params.
  solver->step(loss);
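
For reference, the update rule sketched below is the textbook momentum-SGD form, with L2 weight decay folded into the gradient. The library's exact implementation may differ slightly (for example, in where weight decay enters), so treat this as an orientation sketch rather than the definitive formula. Here \mu is the momentum, \eta the learning rate, and \lambda the weight decay:

  % mu = setMomentum, eta = setLearningRate, lambda = setWeightDecay (assumed placement)
  v_t = \mu v_{t-1} + \eta \left( \nabla_w \mathcal{L}(w_{t-1}) + \lambda w_{t-1} \right)
  w_t = w_{t-1} - v_t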

ADAM

Usage Example:

  // Constructs an ADAM optimizer.
  std::shared_ptr<ADAM> solver(new ADAM);
  // Sets the params to be optimized in the model.
  solver->append(model->parameters());
  // Sets the ADAM momentums and weight decay.
  solver->setMomentum(0.9f);
  solver->setMomentum2(0.99f);
  solver->setWeightDecay(0.0005f);
  // Sets the regularization method; defaults to the L2 norm.
  solver->setRegularizationMethod(RegularizationMethod::L2);
  // Sets the learning rate.
  solver->setLearningRate(0.001);
  // Calculates the gradients based on the loss and updates the params.
  solver->step(loss);
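
As with SGD, the standard ADAM update is sketched below for orientation; \beta_1 corresponds to setMomentum, \beta_2 to setMomentum2, and \eta to the learning rate. Whether this implementation applies the bias-corrected terms \hat{m}_t and \hat{v}_t is an assumption, not something the API above confirms:

  % g_t = gradient (possibly with weight decay) at step t; epsilon is a small constant
  m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t
  v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2
  \hat{m}_t = m_t / (1 - \beta_1^t), \quad \hat{v}_t = v_t / (1 - \beta_2^t)
  w_t = w_{t-1} - \eta \, \hat{m}_t / \left( \sqrt{\hat{v}_t} + \epsilon \right)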

Loss

The loss functions currently supported are listed below; you can also define your own.

  VARP _CrossEntropy(Express::VARP predicts, Express::VARP oneHotTargets);
  VARP _KLDivergence(Express::VARP predicts, Express::VARP oneHotTargets);
  VARP _MSE(Express::VARP predicts, Express::VARP oneHotTargets);
  VARP _MAE(Express::VARP predicts, Express::VARP oneHotTargets);
  VARP _Hinge(Express::VARP predicts, Express::VARP oneHotTargets);
  VARP _DistillLoss(Express::VARP studentLogits, Express::VARP teacherLogits, Express::VARP oneHotTargets,
                    const float temperature, const float alpha);
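
A minimal sketch of how a loss plugs into an optimizer step is shown below. Only _CrossEntropy and solver->step appear earlier in this section; the names input and oneHotTargets, the model->forward call, and _Softmax are assumptions for illustration:

  // Hypothetical training step: `input` and `oneHotTargets` are assumed to be
  // prepared elsewhere, with `oneHotTargets` shaped [batch, numClasses].
  VARP logits = model->forward(input);   // assumed Module forward call
  VARP predicts = _Softmax(logits);      // assumed: turn logits into probabilities
  VARP loss = _CrossEntropy(predicts, oneHotTargets);
  // Computes gradients from `loss` and updates the appended params.
  solver->step(loss);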