Optimization Methods

Until now, you have always used gradient descent to update the parameters and minimize the cost. In this notebook you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value of the cost function. A good optimization algorithm can be the difference between a network that needs days of training and one that reaches a good result in just a few hours.

At each step of training, you update your parameters following a certain direction, trying to reach the lowest possible point.

Notation: as usual, $\frac{\partial J}{\partial a} = $ `da` for any variable `a`.

To get started, run the following code to import the libraries you will need.

    import numpy as np
    import matplotlib.pyplot as plt
    import scipy.io
    import math
    import sklearn
    import sklearn.datasets
    from opt_utils import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation
    from opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset
    from testCases import *

    %matplotlib inline
    plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
    plt.rcParams['image.interpolation'] = 'nearest'
    plt.rcParams['image.cmap'] = 'gray'

1 Gradient Descent

A simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples at each step, it is also called batch gradient descent.

Warm-up exercise: Implement the gradient descent update rule. For $l = 1, ..., L$, the gradient descent rule is:

$$W^{[l]} = W^{[l]} - \alpha \, dW^{[l]} \tag{1}$$

$$b^{[l]} = b^{[l]} - \alpha \, db^{[l]} \tag{2}$$

where $L$ is the number of layers and $\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$, so you need to shift `l` to `l+1` when coding.

    # GRADED FUNCTION: update_parameters_with_gd

    def update_parameters_with_gd(parameters, grads, learning_rate):
        L = len(parameters) // 2  # number of layers in the neural networks
        # Update rule for each parameter
        for l in range(L):
            parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
            parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]
        return parameters
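As a quick sanity check, the update can be exercised on small hand-made dictionaries. The sketch below is only illustrative and not part of the graded test cases; the parameter shapes and gradient values are made up.

    import numpy as np

    # Hypothetical 1-layer example: W1 of shape (2, 3), b1 of shape (2, 1)
    parameters = {"W1": np.ones((2, 3)), "b1": np.zeros((2, 1))}
    grads = {"dW1": 0.5 * np.ones((2, 3)), "db1": np.ones((2, 1))}

    parameters = update_parameters_with_gd(parameters, grads, learning_rate=0.1)
    print(parameters["W1"])  # every entry moves from 1.0 to 1.0 - 0.1 * 0.5 = 0.95
    print(parameters["b1"])  # every entry moves from 0.0 to 0.0 - 0.1 * 1.0 = -0.1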

A variant of this is stochastic gradient descent (SGD), which is equivalent to mini-batch gradient descent where each mini-batch has just one example. The update rule you just implemented does not change. What changes is that SGD computes gradients on just one training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent.

  • (Batch) Gradient Descent:

    X = data_input
    Y = labels
    parameters = initialize_parameters(layers_dims)
    for i in range(0, num_iterations):
        # Forward propagation
        a, caches = forward_propagation(X, parameters)
        # Compute cost.
        cost = compute_cost(a, Y)
        # Backward propagation.
        grads = backward_propagation(a, caches, parameters)
        # Update parameters.
        parameters = update_parameters(parameters, grads)

  • Stochastic Gradient Descent:

    X = data_input
    Y = labels
    parameters = initialize_parameters(layers_dims)
    for i in range(0, num_iterations):
        for j in range(0, m):
            # Forward propagation
            a, caches = forward_propagation(X[:,j], parameters)
            # Compute cost
            cost = compute_cost(a, Y[:,j])
            # Backward propagation
            grads = backward_propagation(a, caches, parameters)
            # Update parameters.
            parameters = update_parameters(parameters, grads)

In stochastic gradient descent, only one training example is used before each gradient update. When the training set is large, SGD can update faster, but the parameters will "oscillate" toward the minimum rather than converge smoothly. Here is an illustration:

[Figure 1: SGD vs GD]
"+" denotes a minimum of the cost. SGD leads to many oscillations before reaching convergence, but each SGD step is much faster to compute than a GD step, since it uses only one training example (versus the whole batch for GD).

Also note that implementing SGD requires three for-loops in total:
1. Over the number of iterations
2. Over the $m$ training examples
3. Over the layers (to update all parameters, from $(W^{[1]}, b^{[1]})$ to $(W^{[L]}, b^{[L]})$)

In practice, mini-batch gradient descent often gives faster results. With mini-batch gradient descent, you loop over the mini-batches instead of looping over individual training examples.

[Figure 2: SGD vs Mini-Batch GD]
"+" denotes a minimum of the cost. Using mini-batches in your optimization algorithm often leads to faster optimization.

What you should remember:

  • The difference between gradient descent, mini-batch gradient descent and stochastic gradient descent is the number of examples used to perform one update step.
  • You have to tune the learning rate hyperparameter $\alpha$.
  • With a well-tuned mini-batch size, mini-batch gradient descent usually outperforms either gradient descent or stochastic gradient descent (particularly when the training set is large).

2 Mini-Batch Gradient Descent

Let's learn how to build mini-batches from the training set (X, Y).

There are two steps:

  • Shuffle: create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. Note that the shuffling is done synchronously between X and Y, so that after shuffling the $i^{th}$ column of X is the example corresponding to the $i^{th}$ label in Y. The shuffling step ensures that examples are split randomly into different mini-batches.

[Figure: X and Y shuffled in unison, column by column]

  • Partition: partition the shuffled (X, Y) into mini-batches of size `mini_batch_size` (here 64). Note that the number of training examples is not always divisible by `mini_batch_size`. The last mini-batch might be smaller, but you don't need to worry about that. When the final mini-batch is smaller than the full `mini_batch_size`, it will look like this:

[Figure: the shuffled (X, Y) partitioned into mini-batches of size 64, with a smaller final mini-batch]

Exercise: Implement `random_mini_batches`. We have coded the shuffling part for you. To help you with the partitioning step, here is the code that selects the indexes of the 1st and 2nd mini-batches:

    first_mini_batch_X = shuffled_X[:, 0 : mini_batch_size]
    second_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size]
    ...

Note that the last mini-batch might end up smaller than `mini_batch_size = 64`. Let $\lfloor s \rfloor$ represent $s$ rounded down to the nearest integer (this is `math.floor(s)` in Python). If the total number of examples is not a multiple of `mini_batch_size = 64`, then there will be $\lfloor \frac{m}{mini\_batch\_size} \rfloor$ mini-batches with a full 64 examples, and the number of examples in the final mini-batch will be $m - mini\_batch\_size \times \lfloor \frac{m}{mini\_batch\_size} \rfloor$.
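For example (the numbers here are chosen only for illustration): with m = 148 examples and mini_batch_size = 64, there are ⌊148/64⌋ = 2 full mini-batches of 64 examples plus one final mini-batch of 148 − 2 × 64 = 20 examples. A quick check of that arithmetic, using nothing but the standard math module:

    import math

    m, mini_batch_size = 148, 64                      # illustrative values, not from the assignment
    num_complete = math.floor(m / mini_batch_size)    # number of full mini-batches
    last_size = m - mini_batch_size * num_complete    # size of the final, smaller mini-batch
    print(num_complete, last_size)                    # prints: 2 20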

    # GRADED FUNCTION: random_mini_batches

    def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
        """
        Creates a list of random minibatches from (X, Y)

        Arguments:
        X -- input data, of shape (input size, number of examples)
        Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
        mini_batch_size -- size of the mini-batches, integer

        Returns:
        mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
        """
        np.random.seed(seed)   # To make your "random" minibatches the same as ours
        m = X.shape[1]         # number of training examples
        mini_batches = []

        # Step 1: Shuffle (X, Y)
        permutation = list(np.random.permutation(m))
        shuffled_X = X[:, permutation]
        shuffled_Y = Y[:, permutation].reshape((1, m))

        # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
        num_complete_minibatches = math.floor(m / mini_batch_size)  # number of mini batches of size mini_batch_size in your partitionning
        for k in range(0, num_complete_minibatches):
            ### START CODE HERE ### (approx. 2 lines)
            mini_batch_X = shuffled_X[:, k * mini_batch_size : (k+1) * mini_batch_size]
            mini_batch_Y = shuffled_Y[:, k * mini_batch_size : (k+1) * mini_batch_size]
            ### END CODE HERE ###
            mini_batch = (mini_batch_X, mini_batch_Y)
            mini_batches.append(mini_batch)

        # Handling the end case (last mini-batch < mini_batch_size)
        if m % mini_batch_size != 0:
            ### START CODE HERE ### (approx. 2 lines)
            mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size : m]
            mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size : m]
            ### END CODE HERE ###
            mini_batch = (mini_batch_X, mini_batch_Y)
            mini_batches.append(mini_batch)

        return mini_batches
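A minimal usage sketch, assuming random_mini_batches is defined as above (the data here is random and its shapes are invented purely for illustration): with 148 examples and a mini-batch size of 64 you should get three mini-batches, the last one holding the remaining 20 examples.

    import numpy as np

    X = np.random.randn(12288, 148)                # hypothetical inputs: 12288 features, 148 examples
    Y = (np.random.randn(1, 148) > 0).astype(int)  # hypothetical 0/1 labels

    mini_batches = random_mini_batches(X, Y, mini_batch_size=64, seed=0)
    print(len(mini_batches))                                 # 3
    print([mb_X.shape[1] for mb_X, mb_Y in mini_batches])    # [64, 64, 20]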

What you should remember:

  • Shuffling and partitioning are the two steps required to build mini-batches.
  • Powers of two are often chosen as the mini-batch size, e.g. 16, 32, 64, 128.

3 Momentum

Because mini-batch gradient descent makes a parameter update after seeing only a subset of the examples, the direction of the update has some variance, so the path taken by mini-batch gradient descent will "oscillate" toward convergence. Using momentum can reduce these oscillations.

Momentum takes the past gradients into account to smooth out the update. We store the "direction" of the previous gradients in the variable $v$. Formally, this is the exponentially weighted average of the gradients of previous steps. You can also think of $v$ as the "velocity" of a ball rolling downhill, building up speed (and momentum) according to the direction of the slope of the hill.

[Figure 3: mini-batch gradient descent with momentum]
The red arrows show the direction taken by one step of mini-batch gradient descent with momentum. The blue points show the direction of the gradient (with respect to the current mini-batch) on each step. Rather than just following the gradient, we let the gradient influence $v$ and then take a step in the direction of $v$.

Exercise: Initialize the velocity. The velocity $v$ is a Python dictionary that needs to be initialized with arrays of zeros. Its keys are the same as those in the `grads` dictionary, that is, for $l = 1, ..., L$:

  1. v["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
  2. v["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])

Note: the iterator `l` starts at 0 in the for loop while the first parameters are v["dW1"] and v["db1"] (that's a "1" in the superscript). This is why we shift `l` to `l+1` inside the for loop.

    # GRADED FUNCTION: initialize_velocity

    def initialize_velocity(parameters):
        """
        Initializes the velocity as a python dictionary with:
                    - keys: "dW1", "db1", ..., "dWL", "dbL"
                    - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.

        Arguments:
        parameters -- python dictionary containing your parameters.
                        parameters['W' + str(l)] = Wl
                        parameters['b' + str(l)] = bl

        Returns:
        v -- python dictionary containing the current velocity.
                        v['dW' + str(l)] = velocity of dWl
                        v['db' + str(l)] = velocity of dbl
        """
        L = len(parameters) // 2  # number of layers in the neural networks
        v = {}

        # Initialize velocity
        for l in range(L):
            ### START CODE HERE ### (approx. 2 lines)
            v["dW" + str(l+1)] = np.zeros(parameters['W' + str(l+1)].shape)
            v["db" + str(l+1)] = np.zeros(parameters['b' + str(l+1)].shape)
            ### END CODE HERE ###

        return v

Exercise: Implement the parameter update with momentum. The momentum update rule is, for $l = 1, ..., L$:

$$\begin{cases}
v_{dW^{[l]}} = \beta v_{dW^{[l]}} + (1 - \beta) dW^{[l]} \\
W^{[l]} = W^{[l]} - \alpha v_{dW^{[l]}}
\end{cases}\tag{3}$$

$$\begin{cases}
v_{db^{[l]}} = \beta v_{db^{[l]}} + (1 - \beta) db^{[l]} \\
b^{[l]} = b^{[l]} - \alpha v_{db^{[l]}}
\end{cases}\tag{4}$$

where $L$ is the number of layers, $\beta$ is the momentum and $\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$ (that's a "1" in the superscript), so you need to shift `l` to `l+1` when coding.

    # GRADED FUNCTION: update_parameters_with_momentum

    def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
        """
        Update parameters using Momentum

        Arguments:
        parameters -- python dictionary containing your parameters:
                        parameters['W' + str(l)] = Wl
                        parameters['b' + str(l)] = bl
        grads -- python dictionary containing your gradients for each parameters:
                        grads['dW' + str(l)] = dWl
                        grads['db' + str(l)] = dbl
        v -- python dictionary containing the current velocity:
                        v['dW' + str(l)] = ...
                        v['db' + str(l)] = ...
        beta -- the momentum hyperparameter, scalar
        learning_rate -- the learning rate, scalar

        Returns:
        parameters -- python dictionary containing your updated parameters
        v -- python dictionary containing your updated velocities
        """
        L = len(parameters) // 2  # number of layers in the neural networks

        # Momentum update for each parameter
        for l in range(L):
            ### START CODE HERE ### (approx. 4 lines)
            # compute velocities
            v["dW" + str(l+1)] = beta * v["dW" + str(l+1)] + (1 - beta) * grads["dW" + str(l+1)]
            v["db" + str(l+1)] = beta * v["db" + str(l+1)] + (1 - beta) * grads["db" + str(l+1)]
            # update parameters
            parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * v["dW" + str(l+1)]
            parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * v["db" + str(l+1)]
            ### END CODE HERE ###

        return parameters, v
    parameters, v = update_parameters_with_momentum(parameters, grads, v, beta = 0.9, learning_rate = 0.01)

    W1 = [[ 1.62544598 -0.61290114 -0.52907334]
     [-1.07347112  0.86450677 -2.30085497]]
    b1 = [[ 1.74493465]
     [-0.76027113]]
    W2 = [[ 0.31930698 -0.24990073  1.4627996 ]
     [-2.05974396 -0.32173003 -0.38320915]
     [ 1.13444069 -1.0998786  -0.1713109 ]]
    b2 = [[-0.87809283]
     [ 0.04055394]
     [ 0.58207317]]
    v["dW1"] = [[-0.11006192  0.11447237  0.09015907]
     [ 0.05024943  0.09008559 -0.06837279]]
    v["db1"] = [[-0.01228902]
     [-0.09357694]]
    v["dW2"] = [[-0.02678881  0.05303555 -0.06916608]
     [-0.03967535 -0.06871727 -0.08452056]
     [-0.06712461 -0.00126646 -0.11173103]]
    v["db2"] = [[0.02344157]
     [0.16598022]
     [0.07420442]]

Note that:

  • The velocity is initialized with zeros, so the algorithm will take a few iterations to "build up" velocity and start taking bigger steps.
  • If $\beta = 0$, this just becomes standard gradient descent without momentum.

How do you choose $\beta$?

  • The larger the momentum $\beta$, the smoother the update, because the past gradients are taken into account more. But if $\beta$ is too big, it can also smooth out the updates too much.
  • Common values for $\beta$ range from 0.8 to 0.999. If you don't feel inclined to tune it, $\beta = 0.9$ is often a reasonable default.
  • Tuning the optimal $\beta$ for your model might require trying several values to see what works best in terms of reducing the value of the cost function $J$.
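To build intuition for how $\beta$ controls the smoothing, here is a small stand-alone sketch (not part of the assignment) that applies the same recursion $v = \beta v + (1 - \beta) g$ to a noisy sequence of scalar "gradients". The larger $\beta$ is, the smaller the step-to-step changes of the averaged value, i.e. the smoother the trace.

    import numpy as np

    np.random.seed(1)
    gradients = 1.0 + 0.5 * np.random.randn(100)  # noisy values centered around 1.0

    def ewa(values, beta):
        """Exponentially weighted average, the same recursion used by momentum."""
        v, trace = 0.0, []
        for g in values:
            v = beta * v + (1 - beta) * g
            trace.append(v)
        return np.array(trace)

    for beta in (0.5, 0.9, 0.98):
        trace = ewa(gradients, beta)
        # std of the step-to-step change shrinks as beta grows -> smoother updates
        print(beta, np.std(np.diff(trace)))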

What you should remember:

  • Momentum takes past gradients into account to smooth out the steps of gradient descent. It can be applied with batch gradient descent, mini-batch gradient descent or stochastic gradient descent.
  • You have to tune the momentum hyperparameter $\beta$ and the learning rate $\alpha$.

4 Adam

Adam is one of the most effective optimization algorithms for training neural networks. It combines ideas from RMSProp and Momentum.

How does Adam work?
1. It calculates an exponentially weighted average of past gradients and stores it in variables $v$ (before bias correction) and $v^{corrected}$ (with bias correction).
2. It calculates an exponentially weighted average of the squares of the past gradients and stores it in variables $s$ (before bias correction) and $s^{corrected}$ (with bias correction).
3. It updates parameters in a direction based on combining the information from "1" and "2".

For $l = 1, ..., L$, the update rule is:

$$\begin{cases}
v_{dW^{[l]}} = \beta_1 v_{dW^{[l]}} + (1 - \beta_1) \frac{\partial \mathcal{J}}{\partial W^{[l]}} \\
v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1 - (\beta_1)^t} \\
s_{dW^{[l]}} = \beta_2 s_{dW^{[l]}} + (1 - \beta_2) \left(\frac{\partial \mathcal{J}}{\partial W^{[l]}}\right)^2 \\
s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1 - (\beta_2)^t} \\
W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}}} + \varepsilon}
\end{cases}$$

where:

  • $t$ counts the number of steps taken by Adam
  • $L$ is the number of layers
  • $\beta_1$ and $\beta_2$ are hyperparameters that control the two exponentially weighted averages
  • $\alpha$ is the learning rate
  • $\varepsilon$ is a very small number used to avoid division by zero
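To see why the bias-correction terms $1 - (\beta_1)^t$ and $1 - (\beta_2)^t$ matter, here is a small illustrative calculation (not part of the graded code). With $v$ initialized at zero and a constant gradient of 1.0, the raw average $v$ badly underestimates the gradient during the first steps, while the corrected estimate is exact:

    beta1 = 0.9
    g = 1.0        # pretend the gradient is constantly 1.0
    v = 0.0        # initialized at zero, as in initialize_adam
    for t in range(1, 6):
        v = beta1 * v + (1 - beta1) * g
        v_corrected = v / (1 - beta1 ** t)
        print(t, round(v, 4), round(v_corrected, 4))
    # t=1 gives v = 0.1 but v_corrected = 1.0: without the correction,
    # the first Adam steps would be far too small.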

As usual, we store all parameters in the `parameters` dictionary.

Exercise: Initialize the Adam variables $v, s$, which keep track of the past information.

Instruction: the variables $v, s$ are Python dictionaries that need to be initialized with arrays of zeros. Their keys are the same as those of `grads`.

    # GRADED FUNCTION: initialize_adam

    def initialize_adam(parameters):
        """
        Initializes v and s as two python dictionaries with:
                    - keys: "dW1", "db1", ..., "dWL", "dbL"
                    - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.

        Arguments:
        parameters -- python dictionary containing your parameters.
                        parameters["W" + str(l)] = Wl
                        parameters["b" + str(l)] = bl

        Returns:
        v -- python dictionary that will contain the exponentially weighted average of the gradient.
                        v["dW" + str(l)] = ...
                        v["db" + str(l)] = ...
        s -- python dictionary that will contain the exponentially weighted average of the squared gradient.
                        s["dW" + str(l)] = ...
                        s["db" + str(l)] = ...
        """
        L = len(parameters) // 2  # number of layers in the neural networks
        v = {}
        s = {}

        # Initialize v, s. Input: "parameters". Outputs: "v, s".
        for l in range(L):
            ### START CODE HERE ### (approx. 4 lines)
            v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
            v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
            s["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
            s["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
            ### END CODE HERE ###

        return v, s

Exercise: Implement the parameter update with Adam. Recall that the general update rule is, for $l = 1, ..., L$:

$$\begin{cases}
v_{dW^{[l]}} = \beta_1 v_{dW^{[l]}} + (1 - \beta_1) \frac{\partial J}{\partial W^{[l]}} \\
v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1 - (\beta_1)^t} \\
s_{dW^{[l]}} = \beta_2 s_{dW^{[l]}} + (1 - \beta_2) \left(\frac{\partial J}{\partial W^{[l]}}\right)^2 \\
s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1 - (\beta_2)^t} \\
W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}}} + \varepsilon}
\end{cases}$$

Note: the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$, so you need to shift `l` to `l+1` when coding.

    # GRADED FUNCTION: update_parameters_with_adam

    def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01,
                                    beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8):
        """
        Update parameters using Adam

        Arguments:
        parameters -- python dictionary containing your parameters:
                        parameters['W' + str(l)] = Wl
                        parameters['b' + str(l)] = bl
        grads -- python dictionary containing your gradients for each parameters:
                        grads['dW' + str(l)] = dWl
                        grads['db' + str(l)] = dbl
        v -- Adam variable, moving average of the first gradient, python dictionary
        s -- Adam variable, moving average of the squared gradient, python dictionary
        learning_rate -- the learning rate, scalar.
        beta1 -- Exponential decay hyperparameter for the first moment estimates
        beta2 -- Exponential decay hyperparameter for the second moment estimates
        epsilon -- hyperparameter preventing division by zero in Adam updates

        Returns:
        parameters -- python dictionary containing your updated parameters
        v -- Adam variable, moving average of the first gradient, python dictionary
        s -- Adam variable, moving average of the squared gradient, python dictionary
        """
        L = len(parameters) // 2  # number of layers in the neural networks
        v_corrected = {}          # Initializing first moment estimate, python dictionary
        s_corrected = {}          # Initializing second moment estimate, python dictionary

        # Perform Adam update on all parameters
        for l in range(L):
            # Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v".
            ### START CODE HERE ### (approx. 2 lines)
            v["dW" + str(l+1)] = beta1 * v["dW" + str(l+1)] + (1 - beta1) * grads["dW" + str(l+1)]
            v["db" + str(l+1)] = beta1 * v["db" + str(l+1)] + (1 - beta1) * grads["db" + str(l+1)]
            ### END CODE HERE ###

            # Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected".
            ### START CODE HERE ### (approx. 2 lines)
            v_corrected["dW" + str(l+1)] = v["dW" + str(l+1)] / (1 - beta1 ** t)
            v_corrected["db" + str(l+1)] = v["db" + str(l+1)] / (1 - beta1 ** t)
            ### END CODE HERE ###

            # Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s".
            ### START CODE HERE ### (approx. 2 lines)
            s["dW" + str(l+1)] = beta2 * s["dW" + str(l+1)] + (1 - beta2) * (grads["dW" + str(l+1)] ** 2)
            s["db" + str(l+1)] = beta2 * s["db" + str(l+1)] + (1 - beta2) * (grads["db" + str(l+1)] ** 2)
            ### END CODE HERE ###

            # Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected".
            ### START CODE HERE ### (approx. 2 lines)
            s_corrected["dW" + str(l+1)] = s["dW" + str(l+1)] / (1 - beta2 ** t)
            s_corrected["db" + str(l+1)] = s["db" + str(l+1)] / (1 - beta2 ** t)
            ### END CODE HERE ###

            # Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters".
            ### START CODE HERE ### (approx. 2 lines)
            parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * (v_corrected["dW" + str(l+1)] / np.sqrt(s_corrected["dW" + str(l+1)] + epsilon))
            parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * (v_corrected["db" + str(l+1)] / np.sqrt(s_corrected["db" + str(l+1)] + epsilon))
            ### END CODE HERE ###

        return parameters, v, s
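A minimal usage sketch that ties initialize_adam and update_parameters_with_adam together (the parameter shapes and gradient values here are made up for illustration; the notebook's own test cases are what actually grade these functions):

    import numpy as np

    # Hypothetical 1-layer network: W1 of shape (2, 3), b1 of shape (2, 1)
    parameters = {"W1": np.random.randn(2, 3), "b1": np.zeros((2, 1))}
    grads = {"dW1": 0.1 * np.ones((2, 3)), "db1": 0.1 * np.ones((2, 1))}

    v, s = initialize_adam(parameters)
    for t in range(1, 11):    # pretend we take 10 Adam steps with the same gradient
        parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t,
                                                       learning_rate=0.01)
    # With a constant gradient, v_corrected / sqrt(s_corrected) is about 1, so each
    # step moves every entry by roughly learning_rate against the gradient sign.
    print(parameters["W1"], parameters["b1"])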

You now have three working optimization algorithms (mini-batch gradient descent, Momentum, Adam). Let's implement a model with each of these optimizers and observe the difference.

5 Comparing Models with Different Optimization Algorithms

We will use the "moons" dataset to test the different optimization methods. (The dataset is named "moons" because the data from each of the two classes looks a bit like a crescent moon.)

    train_X, train_Y = load_dataset()


[Plot: the "moons" dataset]

We have already implemented a 3-layer neural network. You will train it with:

  • Mini-batch Gradient Descent: it will call your function:
    - update_parameters_with_gd()
  • Mini-batch Momentum: it will call your functions:
    - initialize_velocity() and update_parameters_with_momentum()
  • Mini-batch Adam: it will call your functions:
    - initialize_adam() and update_parameters_with_adam()
    def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,
              beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True):
        """
        3-layer neural network model which can be run in different optimizer modes.

        Arguments:
        X -- input data, of shape (2, number of examples)
        Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
        layers_dims -- python list, containing the size of each layer
        learning_rate -- the learning rate, scalar.
        mini_batch_size -- the size of a mini batch
        beta -- Momentum hyperparameter
        beta1 -- Exponential decay hyperparameter for the past gradients estimates
        beta2 -- Exponential decay hyperparameter for the past squared gradients estimates
        epsilon -- hyperparameter preventing division by zero in Adam updates
        num_epochs -- number of epochs
        print_cost -- True to print the cost every 1000 epochs

        Returns:
        parameters -- python dictionary containing your updated parameters
        """
        L = len(layers_dims)  # number of layers in the neural networks
        costs = []            # to keep track of the cost
        t = 0                 # initializing the counter required for Adam update
        seed = 10             # For grading purposes, so that your "random" minibatches are the same as ours

        # Initialize parameters
        parameters = initialize_parameters(layers_dims)

        # Initialize the optimizer
        if optimizer == "gd":
            pass  # no initialization required for gradient descent
        elif optimizer == "momentum":
            v = initialize_velocity(parameters)
        elif optimizer == "adam":
            v, s = initialize_adam(parameters)

        # Optimization loop
        for i in range(num_epochs):

            # Define the random minibatches. We increment the seed to reshuffle differently the dataset after each epoch
            seed = seed + 1
            minibatches = random_mini_batches(X, Y, mini_batch_size, seed)

            for minibatch in minibatches:

                # Select a minibatch
                (minibatch_X, minibatch_Y) = minibatch

                # Forward propagation
                a3, caches = forward_propagation(minibatch_X, parameters)

                # Compute cost
                cost = compute_cost(a3, minibatch_Y)

                # Backward propagation
                grads = backward_propagation(minibatch_X, minibatch_Y, caches)

                # Update parameters
                if optimizer == "gd":
                    parameters = update_parameters_with_gd(parameters, grads, learning_rate)
                elif optimizer == "momentum":
                    parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
                elif optimizer == "adam":
                    t = t + 1  # Adam counter
                    parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,
                                                                   t, learning_rate, beta1, beta2, epsilon)

            # Print the cost every 1000 epoch
            if print_cost and i % 1000 == 0:
                print("Cost after epoch %i: %f" % (i, cost))
            if print_cost and i % 100 == 0:
                costs.append(cost)

        # plot the cost
        plt.plot(costs)
        plt.ylabel('cost')
        plt.xlabel('epochs (per 100)')
        plt.title("Learning rate = " + str(learning_rate))
        plt.show()

        return parameters

You will now run this neural network with each of the three optimization methods in turn.

5.1 Mini-Batch Gradient Descent

Run the following code to see how the model does with mini-batch gradient descent.

    # train 3-layer model
    layers_dims = [train_X.shape[0], 5, 2, 1]
    parameters = model(train_X, train_Y, layers_dims, optimizer = "gd")

    # Predict
    predictions = predict(train_X, train_Y, parameters)

    # Plot decision boundary
    plt.title("Model with Gradient Descent optimization")
    axes = plt.gca()
    axes.set_xlim([-1.5, 2.5])
    axes.set_ylim([-1, 1.5])
    plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)

    Cost after epoch 0: 0.690736
    Cost after epoch 1000: 0.685273
    Cost after epoch 2000: 0.647072
    Cost after epoch 3000: 0.619525
    Cost after epoch 4000: 0.576584
    Cost after epoch 5000: 0.607243
    Cost after epoch 6000: 0.529403
    Cost after epoch 7000: 0.460768
    Cost after epoch 8000: 0.465586
    Cost after epoch 9000: 0.464518

[Plot: cost curve for mini-batch gradient descent]

    Accuracy: 0.7966666666666666

[Plot: decision boundary learned with gradient descent optimization]

5.2 Mini-Batch Gradient Descent with Momentum

Run the following code to see how the model does with momentum. Because this example is relatively simple, the gains from using momentum are small, but for more complex problems you might see bigger gains.

    # train 3-layer model
    layers_dims = [train_X.shape[0], 5, 2, 1]
    parameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = "momentum")

    # Predict
    predictions = predict(train_X, train_Y, parameters)

    # Plot decision boundary
    plt.title("Model with Momentum optimization")
    axes = plt.gca()
    axes.set_xlim([-1.5, 2.5])
    axes.set_ylim([-1, 1.5])
    plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)

    Cost after epoch 0: 0.690741
    Cost after epoch 1000: 0.685341
    Cost after epoch 2000: 0.647145
    Cost after epoch 3000: 0.619594
    Cost after epoch 4000: 0.576665
    Cost after epoch 5000: 0.607324
    Cost after epoch 6000: 0.529476
    Cost after epoch 7000: 0.460936
    Cost after epoch 8000: 0.465780
    Cost after epoch 9000: 0.464740

[Plot: cost curve for mini-batch gradient descent with momentum]

    Accuracy: 0.7966666666666666

[Plot: decision boundary learned with momentum optimization]

5.3 Mini-Batch Gradient Descent with Adam

Run the following code to see how the model does with Adam.

    # train 3-layer model
    layers_dims = [train_X.shape[0], 5, 2, 1]
    parameters = model(train_X, train_Y, layers_dims, optimizer = "adam")

    # Predict
    predictions = predict(train_X, train_Y, parameters)

    # Plot decision boundary
    plt.title("Model with Adam optimization")
    axes = plt.gca()
    axes.set_xlim([-1.5, 2.5])
    axes.set_ylim([-1, 1.5])
    plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)

    Cost after epoch 0: 0.690552
    Cost after epoch 1000: 0.185501
    Cost after epoch 2000: 0.150830
    Cost after epoch 3000: 0.074454
    Cost after epoch 4000: 0.125959
    Cost after epoch 5000: 0.104344
    Cost after epoch 6000: 0.100676
    Cost after epoch 7000: 0.031652
    Cost after epoch 8000: 0.111973
    Cost after epoch 9000: 0.197940

[Plot: cost curve for mini-batch gradient descent with Adam]

    Accuracy: 0.94

[Plot: decision boundary learned with Adam optimization]

5.4 Summary

optimization method    accuracy    cost profile
Gradient descent       79.70%      oscillations
Momentum               79.70%      oscillations
Adam                   94%         smoother

Momentum usually helps, but given the small learning rate and the simplistic dataset, its impact here is almost negligible. Also, the huge oscillations you see in the cost come from the fact that some mini-batches are more difficult than others for the optimization algorithm.

Adam, on the other hand, clearly outperforms mini-batch gradient descent and Momentum. If you run the model for more epochs on this simple dataset, all three methods will lead to very good results; however, Adam converges a lot faster.

Some advantages of Adam include:

  • Relatively low memory requirements (though higher than gradient descent and gradient descent with momentum)
  • It usually works well even with little tuning of the hyperparameters (except $\alpha$)
