TensorFlow Tutorial

In this notebook, you will learn to do the following in TensorFlow:

  • Initialize variables
  • Create your own session
  • Train algorithms
  • Implement a neural network

Programming frameworks can not only shorten your coding time, but can sometimes also perform optimizations that speed up your code.

1 Exploring the Tensorflow Library

To start, import the library:

  import math
  import numpy as np
  import h5py
  import matplotlib.pyplot as plt
  # import tensorflow as tf
  import tensorflow.compat.v1 as tf
  from tensorflow.python.framework import ops
  from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict

  %matplotlib inline
  np.random.seed(1)

Now that you have imported the library, we will walk you through its different applications. You will start with an example in which we compute the loss of one training example:

$$loss = \mathcal{L}(\hat{y}, y) = (\hat{y}^{(i)} - y^{(i)})^2 \tag{1}$$

  tf.compat.v1.disable_eager_execution()           # This function may only be called before any graphs, operations, or
                                                   # tensors have been created. It can be used at the beginning of the
                                                   # program for complex migration projects from TensorFlow 1.x to 2.x.
  y_hat = tf.constant(36, name='y_hat')            # Define y_hat constant. Set to 36.
  y = tf.constant(39, name='y')                    # Define y. Set to 39
  loss = tf.Variable((y - y_hat)**2, name='loss')  # Create a variable for the loss
  init = tf.global_variables_initializer()         # When init is run later (session.run(init)),
                                                   # the loss variable will be initialized and ready to be computed
  with tf.Session() as session:                    # Create a session and print the output
      session.run(init)                            # Initializes the variables
      print(session.run(loss))                     # Prints the loss

loss = 9

Writing and running programs in TensorFlow has the following steps:

  1. Create tensors (variables) that are not yet executed/evaluated.
  2. Write operations between those tensors.
  3. Initialize your tensors.
  4. Create a session.
  5. Run the session. This will run the operations you had written above.

So when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run init = tf.global_variables_initializer() (via session.run(init)). That initialized the loss variable, and in the last line we were finally able to evaluate the value of loss and print it.

Now let us look at an easy example. Run the cell below:

  a = tf.constant(2)
  b = tf.constant(10)
  c = tf.multiply(a,b)
  print(c)   # Tensor("Mul:0", shape=(), dtype=int32)

As expected, you will not see 20! You got a tensor with an empty shape and type "int32". All you did was put the operation into the "computation graph"; you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.

  sess = tf.Session()
  print(sess.run(c))   # 20

Great! To summarize, remember to initialize your variables, create a session, and run the operations inside the session.

Next, you also have to know about placeholders. A placeholder is an object whose value you can only specify later.
To specify values for a placeholder, you pass in values using a "feed dictionary" (the feed_dict variable). Below, we created a placeholder for x; this allows us to pass in a number later, when we run the session.

  # Change the value of x in the feed_dict
  x = tf.placeholder(tf.int64, name = 'x')
  print(sess.run(2 * x, feed_dict = {x: 3}))   # 6
  sess.close()

When you first defined x, you did not have to specify a value for it. A placeholder is simply a variable that you assign data to only later, when running the session. In other words, you "feed data" to these placeholders when you run the session.

When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph.
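To make this concrete, here is a minimal sketch (the names a, b, and s are our own, not part of the assignment) of a graph with two placeholders that is evaluated twice with different feeds:

  a = tf.placeholder(tf.float32, name='a')
  b = tf.placeholder(tf.float32, name='b')
  s = a + b                                            # adds a node to the graph; nothing is computed yet
  with tf.Session() as sess:
      print(sess.run(s, feed_dict={a: 1.0, b: 2.0}))   # 3.0
      print(sess.run(s, feed_dict={a: 5.0, b: -1.0}))  # 4.0

The same graph node s is reused: only the fed values change between the two runs.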

1.1 Linear Function

Let's start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and $b$ is a random vector.

Exercise: Compute $WX + b$, where $W$, $X$, and $b$ are drawn from a random normal distribution. $W$ has shape (4,3), $X$ has shape (3,1), and $b$ has shape (4,1). As an example, here is how you would define a constant X with shape (3,1):

  X = tf.constant(np.random.randn(3,1), name = "X")

You might find the following functions helpful:

  • tf.matmul(…, …) to do a matrix multiplication
  • tf.add(…, …) to do an addition
  • np.random.randn(…) to initialize randomly
  # GRADED FUNCTION: linear_function

  def linear_function():
      """
      Implements a linear function:
          Initializes W to be a random tensor of shape (4,3)
          Initializes X to be a random tensor of shape (3,1)
          Initializes b to be a random tensor of shape (4,1)
      Returns:
      result -- runs the session for Y = WX + b
      """
      np.random.seed(1)

      ### START CODE HERE ### (4 lines of code)
      X = tf.constant(np.random.randn(3, 1), name = "X")
      W = tf.constant(np.random.randn(4, 3), name = "W")
      b = tf.constant(np.random.randn(4, 1), name = "b")
      Y = tf.add(tf.matmul(W, X), b)
      ### END CODE HERE ###

      # Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate
      ### START CODE HERE ###
      sess = tf.Session()
      result = sess.run(Y)
      ### END CODE HERE ###

      # close the session
      sess.close()

      return result

  print( "result = " + str(linear_function()))

Expected output:
result = [[-2.15657382]
[ 2.95891446]
[-1.08926781]
[-0.84538042]]

1.2 Computing the Sigmoid

Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions, such as tf.sigmoid and tf.softmax. For this exercise, let's compute the sigmoid of an input.

You will do this exercise using a placeholder variable x. When running the session, you should use the feed dictionary to pass in the input z. In this exercise, you will have to:
(i) create a placeholder x;
(ii) define the operations needed to compute the sigmoid using tf.sigmoid;
(iii) then run the session.

Exercise: Implement the sigmoid function below. You should use the following:

  • tf.placeholder(tf.float32, name = "...")
  • tf.sigmoid(...)
  • sess.run(..., feed_dict = {x: z})

Note that there are two typical ways to create and use sessions in tensorflow:

Method 1:

  sess = tf.Session()
  # Run the variables initialization (if needed), run the operations
  result = sess.run(..., feed_dict = {...})
  sess.close()   # Close the session

Method 2:

  with tf.Session() as sess:
      # run the variables initialization (if needed), run the operations
      result = sess.run(..., feed_dict = {...})
      # This takes care of closing the session for you :)

  # GRADED FUNCTION: sigmoid

  def sigmoid(z):
      """
      Computes the sigmoid of z
      Arguments:
      z -- input value, scalar or vector
      Returns:
      results -- the sigmoid of z
      """
      ### START CODE HERE ### (approx. 4 lines of code)
      # Create a placeholder for x. Name it 'x'.
      x = tf.placeholder(tf.float32, name="x")

      # compute sigmoid(x)
      sigmoid = tf.sigmoid(x)

      # Create a session, and run it. Please use the method 2 explained above.
      # You should use a feed_dict to pass z's value to x.
      with tf.Session() as sess:
          result = sess.run(sigmoid, feed_dict={x: z})
      ### END CODE HERE ###

      return result

  print ("sigmoid(0) = " + str(sigmoid(0)))
  print ("sigmoid(12) = " + str(sigmoid(12)))

Expected output:
sigmoid(0) = 0.5
sigmoid(12) = 0.9999938

To summarize, you now know how to:
1. Create placeholders.
2. Specify the computation graph corresponding to the operations you want to compute.
3. Create a session.
4. Run the session, using a feed dictionary if you need to specify the values of placeholder variables.

1.3 Computing the Cost

You can also use a built-in function to compute the cost of your neural network. So instead of writing code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i = 1...m:

$$J = -\frac{1}{m} \sum_{i=1}^{m} \left( y^{(i)} \log a^{[2](i)} + (1 - y^{(i)}) \log \left(1 - a^{[2](i)}\right) \right) \tag{2}$$

you can do it in one line of code in tensorflow!

Exercise: Implement the cross entropy loss. The function you will use is:

  • tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)

Your code should input z, compute the sigmoid (to get a), and then compute the cross entropy cost J. All of this can be done using one call to tf.nn.sigmoid_cross_entropy_with_logits, which computes:

$$-\frac{1}{m} \sum_{i=1}^{m} \left( y^{(i)} \log \sigma(z^{[2](i)}) + (1 - y^{(i)}) \log \left(1 - \sigma(z^{[2](i)})\right) \right) \tag{2}$$

  # GRADED FUNCTION: cost

  def cost(logits, labels):
      """
      Computes the cost using the sigmoid cross entropy
      Arguments:
      logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
      labels -- vector of labels y (1 or 0)
      Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
      in the TensorFlow documentation. So logits will feed into z, and labels into y.
      Returns:
      cost -- runs the session of the cost (formula (2))
      """
      ### START CODE HERE ###
      # Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
      z = tf.placeholder(tf.float32, name='z')
      y = tf.placeholder(tf.float32, name='y')

      # Use the loss function (approx. 1 line)
      loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=z, labels=y)

      # Create a session (approx. 1 line). See method 1 above.
      sess = tf.Session()

      # Run the session (approx. 1 line).
      cost = sess.run(loss, feed_dict={z: logits, y: labels})

      # Close the session (approx. 1 line). See method 1 above.
      sess.close()
      ### END CODE HERE ###

      return cost

  logits = sigmoid(np.array([0.2,0.4,0.7,0.9]))
  cost = cost(logits, np.array([0,0,1,1]))
  print ("cost = " + str(cost))

Expected output:
cost = [1.0053872 1.0366409 0.41385433 0.39956614]

1.4 Using One Hot Encodings

Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is 4, for example, then you might have the following y vector, which you will need to convert as follows:

[Figure: the label vector y = [1 2 3 0 2 1] converted into its 4 x 6 one-hot matrix]

This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code (a plain-numpy sketch follows the bullet below). In tensorflow, you can use just one line of code:

  • tf.one_hot(labels, depth, axis)
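For comparison, a minimal numpy version of the same conversion might look like this (a sketch only; one_hot_numpy is our own name, not part of the assignment):

  def one_hot_numpy(labels, C):
      # Build a (C, m) matrix of zeros, then set a 1 at (label, example index) for each example
      one_hot = np.zeros((C, labels.size))
      one_hot[labels, np.arange(labels.size)] = 1
      return one_hot

  print(one_hot_numpy(np.array([1, 2, 3, 0, 2, 1]), 4))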

Exercise: Implement the function below to take one vector of labels and the total number of classes C, and return the one hot encoding. Use tf.one_hot() to do this.

  # GRADED FUNCTION: one_hot_matrix

  def one_hot_matrix(labels, C):
      """
      Creates a matrix where the i-th row corresponds to the ith class number and the jth column
      corresponds to the jth training example. So if example j had a label i. Then entry (i,j)
      will be 1.
      Arguments:
      labels -- vector containing the labels
      C -- number of classes, the depth of the one hot dimension
      Returns:
      one_hot -- one hot matrix
      """
      ### START CODE HERE ###
      # Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
      C = tf.constant(C, name='C')

      # Use tf.one_hot, be careful with the axis (approx. 1 line)
      one_hot_matrix = tf.one_hot(labels, C, axis=0)

      # Create the session (approx. 1 line)
      sess = tf.Session()

      # Run the session (approx. 1 line)
      one_hot = sess.run(one_hot_matrix)

      # Close the session (approx. 1 line). See method 1 above.
      sess.close()
      ### END CODE HERE ###

      return one_hot

  labels = np.array([1,2,3,0,2,1])
  one_hot = one_hot_matrix(labels, C = 4)
  print (one_hot)

  # The function above also just calls tf.one_hot, so the result is the same.
  # Wrapping it in a function is probably meant to get you used to the tf workflow.
  one = tf.one_hot(labels, depth=4, axis=0)
  sess = tf.Session()
  ones = sess.run(one)
  sess.close()
  print(ones)

Expected output:
one_hot = [[0. 0. 0. 1. 0. 0.]
[1. 0. 0. 0. 0. 1.]
[0. 1. 0. 0. 1. 0.]
[0. 0. 1. 0. 0. 0.]]

1.5 Initializing with Zeros and Ones

Now you will learn how to initialize vectors of zeros and ones. The function you will be calling is tf.ones(). To initialize with zeros, you could use tf.zeros() instead. These functions take in a shape and return an array of that shape full of ones and zeros respectively.
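This section has no code cell, so here is a minimal sketch of such a helper (our own ones wrapper, following the same create-graph / run-session pattern used above):

  def ones(shape):
      # Create a tensor of ones, then evaluate it in a session
      ones = tf.ones(shape)
      sess = tf.Session()
      ones = sess.run(ones)
      sess.close()
      return ones

  print("ones([3]) = " + str(ones([3])))   # expected: ones([3]) = [1. 1. 1.]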

2 Building Your First Neural Network in Tensorflow

In this part of the assignment, you will build a neural network using tensorflow. Remember that there are two parts to implementing a tensorflow model:

  • Create the computation graph
  • Run the computation graph

Let's delve into the problem you are going to solve!

2.0 Problem Statement: the SIGNS Dataset

One afternoon, with some friends, we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that facilitates communication between speech-impaired people and people who don't understand sign language.

  • Training set: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number).
  • Test set: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number).

Note that this is a subset of the SIGNS dataset. The complete dataset contains many more signs.

Here are examples for each number, and how to interpret the labels. These are the original pictures, before we lowered the image resolution to 64 by 64 pixels.

[Image: example pictures of hand signs for each number, with their labels]

Figure 1: SIGNS dataset

Run the following code to load the dataset.

  # Loading the dataset
  X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
  X_train_orig.shape, Y_train_orig.shape

  ((1080, 64, 64, 3), (1, 1080))

Change the index below and run the cell to visualize some examples in the dataset.

  # Example of a picture
  index = 0
  plt.imshow(X_train_orig[index])
  print ("y = " + str(np.squeeze(Y_train_orig[:, index])))

  y = 5

[Image: the training example at index 0, labeled y = 5]

As usual, you flatten the image dataset and then normalize it by dividing by 255. On top of that, you convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.

  # Flatten the training and test images
  X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
  X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
  # Normalize image vectors
  X_train = X_train_flatten/255.
  X_test = X_test_flatten/255.
  # Convert training and test labels to one hot matrices
  Y_train = convert_to_one_hot(Y_train_orig, 6)
  Y_test = convert_to_one_hot(Y_test_orig, 6)

  print ("number of training examples = " + str(X_train.shape[1]))
  print ("number of test examples = " + str(X_test.shape[1]))
  print ("X_train shape: " + str(X_train.shape))
  print ("Y_train shape: " + str(Y_train.shape))
  print ("X_test shape: " + str(X_test.shape))
  print ("Y_test shape: " + str(Y_test.shape))

  number of training examples = 1080
  number of test examples = 120
  X_train shape: (12288, 1080)
  Y_train shape: (6, 1080)
  X_test shape: (12288, 120)
  Y_test shape: (6, 120)

Note that 12288 = 64 × 64 × 3: each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing.

Your goal is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as the one you previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one.

The model is LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX. The SIGMOID output layer has been converted to SOFTMAX; a SOFTMAX layer generalizes SIGMOID to more than two classes.

2.1 Creating Placeholders

Your first task is to create placeholders for X and Y. This will allow you to pass in your training data later, when you run your session.

Exercise: Implement the function below to create the placeholders in tensorflow.

  # GRADED FUNCTION: create_placeholders

  def create_placeholders(n_x, n_y):
      """
      Creates the placeholders for the tensorflow session.
      Arguments:
      n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)
      n_y -- scalar, number of classes (from 0 to 5, so -> 6)
      Returns:
      X -- placeholder for the data input, of shape [n_x, None] and dtype "float"
      Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float"
      Tips:
      - You will use None because it lets us be flexible on the number of examples for the placeholders.
        In fact, the number of examples during test/train is different.
      """
      ### START CODE HERE ### (approx. 2 lines)
      X = tf.placeholder(tf.float32, shape=[n_x, None])
      Y = tf.placeholder(tf.float32, shape=[n_y, None])
      ### END CODE HERE ###

      return X, Y

  X, Y = create_placeholders(12288, 6)
  print ("X = " + str(X))
  print ("Y = " + str(Y))

  X = Tensor("Placeholder:0", shape=(12288, None), dtype=float32)
  Y = Tensor("Placeholder_1:0", shape=(6, None), dtype=float32)

Expected output:
X = Tensor("Placeholder:0", shape=(12288, ?), dtype=float32)
Y = Tensor("Placeholder_1:0", shape=(6, ?), dtype=float32)

2.2 Initializing the Parameters

Your second task is to initialize the parameters in tensorflow.

Exercise: Implement the function below to initialize the parameters in tensorflow. Use Xavier initialization for weights and zero initialization for biases. The shapes are given below; as an example, for W1 and b1 you could use:

  W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
  b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())

Please use seed = 1 to make sure your results match ours.

Note: the TF2 replacement for tf.contrib.layers.xavier_initializer() is tf.keras.initializers.glorot_normal (Xavier and Glorot are two names for the same initializer algorithm); see the documentation.

  # GRADED FUNCTION: initialize_parameters

  def initialize_parameters():
      """
      Initializes parameters to build a neural network with tensorflow. The shapes are:
      W1 : [25, 12288]
      b1 : [25, 1]
      W2 : [12, 25]
      b2 : [12, 1]
      W3 : [6, 12]
      b3 : [6, 1]
      Returns:
      parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
      """
      tf.set_random_seed(1)   # so that your "random" numbers match ours

      ### START CODE HERE ### (approx. 6 lines of code)
      W1 = tf.get_variable('W1', [25,12288], initializer=tf.keras.initializers.glorot_normal(seed = 1))
      b1 = tf.get_variable('b1', [25,1], initializer=tf.zeros_initializer())
      W2 = tf.get_variable('W2', [12,25], initializer=tf.keras.initializers.glorot_normal(seed = 1))
      b2 = tf.get_variable('b2', [12,1], initializer=tf.zeros_initializer())
      W3 = tf.get_variable('W3', [6,12], initializer=tf.keras.initializers.glorot_normal(seed = 1))
      b3 = tf.get_variable('b3', [6,1], initializer=tf.zeros_initializer())
      ### END CODE HERE ###

      parameters = {"W1": W1,
                    "b1": b1,
                    "W2": W2,
                    "b2": b2,
                    "W3": W3,
                    "b3": b3}

      return parameters

  tf.reset_default_graph()
  with tf.Session() as sess:
      parameters = initialize_parameters()
      print("W1 = " + str(parameters["W1"]))
      print("b1 = " + str(parameters["b1"]))
      print("W2 = " + str(parameters["W2"]))
      print("b2 = " + str(parameters["b2"]))

  W1 = <tf.Variable 'W1:0' shape=(25, 12288) dtype=float32>
  b1 = <tf.Variable 'b1:0' shape=(25, 1) dtype=float32>
  W2 = <tf.Variable 'W2:0' shape=(12, 25) dtype=float32>
  b2 = <tf.Variable 'b2:0' shape=(12, 1) dtype=float32>

Expected output:
W1 = <tf.Variable 'W1:0' shape=(25, 12288) dtype=float32>
b1 = <tf.Variable 'b1:0' shape=(25, 1) dtype=float32>
W2 = <tf.Variable 'W2:0' shape=(12, 25) dtype=float32>
b2 = <tf.Variable 'b2:0' shape=(12, 1) dtype=float32>

As expected, the parameters haven't been evaluated yet.

2.3 Forward Propagation in Tensorflow

You will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and complete the forward pass. The functions you will use are:

  • tf.add(...,...) to do an addition
  • tf.matmul(...,...) to do a matrix multiplication
  • tf.nn.relu(...) to apply the ReLU activation

Question: Implement the forward pass of the neural network. We have commented the numpy equivalents for you so that you can compare the tensorflow implementation to the numpy one. It is important to note that forward propagation stops at z3. The reason is that in tensorflow, the output of the last linear layer is given as input to the function computing the loss. Therefore, you don't need a3!

  # GRADED FUNCTION: forward_propagation

  def forward_propagation(X, parameters):
      """
      Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX
      Arguments:
      X -- input dataset placeholder, of shape (input size, number of examples)
      parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
                    the shapes are given in initialize_parameters
      Returns:
      Z3 -- the output of the last LINEAR unit
      """
      # Retrieve the parameters from the dictionary "parameters"
      W1 = parameters['W1']
      b1 = parameters['b1']
      W2 = parameters['W2']
      b2 = parameters['b2']
      W3 = parameters['W3']
      b3 = parameters['b3']

      ### START CODE HERE ### (approx. 5 lines)   # Numpy Equivalents:
      Z1 = tf.add(tf.matmul(W1,X),b1)             # Z1 = np.dot(W1, X) + b1
      A1 = tf.nn.relu(Z1)                         # A1 = relu(Z1)
      Z2 = tf.add(tf.matmul(W2,A1),b2)            # Z2 = np.dot(W2, A1) + b2
      A2 = tf.nn.relu(Z2)                         # A2 = relu(Z2)
      Z3 = tf.add(tf.matmul(W3,A2),b3)            # Z3 = np.dot(W3,A2) + b3
      ### END CODE HERE ###

      return Z3

  tf.reset_default_graph()
  with tf.Session() as sess:
      X, Y = create_placeholders(12288, 6)
      parameters = initialize_parameters()
      Z3 = forward_propagation(X, parameters)
      print("Z3 = " + str(Z3))

  Z3 = Tensor("Add_2:0", shape=(6, None), dtype=float32)

Expected output:
Z3 = Tensor("Add_2:0", shape=(6, ?), dtype=float32)

You may have noticed that forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation.

2.4 Computing the Cost

As seen before, it is very easy to compute the cost using:

  tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))

Question: Implement the cost function below.

  • It is important to know that the "logits" and "labels" inputs of tf.nn.softmax_cross_entropy_with_logits are expected to be of shape (number of examples, number of classes). We have thus transposed Z3 and Y for you.
  • Besides, tf.reduce_mean takes the mean over all the examples.
  # GRADED FUNCTION: compute_cost

  def compute_cost(Z3, Y):
      """
      Computes the cost
      Arguments:
      Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
      Y -- "true" labels vector placeholder, same shape as Z3
      Returns:
      cost - Tensor of the cost function
      """
      # to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
      logits = tf.transpose(Z3)
      labels = tf.transpose(Y)

      ### START CODE HERE ### (1 line of code)
      cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
      ### END CODE HERE ###

      return cost

  tf.reset_default_graph()
  with tf.Session() as sess:
      X, Y = create_placeholders(12288, 6)
      parameters = initialize_parameters()
      Z3 = forward_propagation(X, parameters)
      cost = compute_cost(Z3, Y)
      print("cost = " + str(cost))

  cost = Tensor("Mean:0", shape=(), dtype=float32)

Expected output:
cost = Tensor("Mean:0", shape=(), dtype=float32)

2.5 Backward Propagation & Parameter Updates

All the backpropagation and the parameter updates can be done in one line of code, and it is very easy to incorporate this part into the model.

After you compute the cost function, you will create an "optimizer" object. You have to call this object along with the cost when running tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.

For instance, for gradient descent the optimizer would be:

  optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)

To make the optimization you would do:

  _ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})

This computes the backpropagation by passing through the tensorflow graph in reverse order, from cost back to inputs.

Note: When coding, we often use _ as a "throwaway" variable to store values that we won't need to use later. Here, _ takes on the evaluated value of optimizer, which we don't need (and c takes the value of the cost variable).

2.6 Building the Model

Now, you will bring it all together!

Exercise: Implement the full model by calling the functions you previously implemented.

  def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
            num_epochs = 1500, minibatch_size = 32, print_cost = True):
      """
      Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.
      Arguments:
      X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
      Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)
      X_test -- test set, of shape (input size = 12288, number of test examples = 120)
      Y_test -- test labels, of shape (output size = 6, number of test examples = 120)
      learning_rate -- learning rate of the optimization
      num_epochs -- number of epochs of the optimization loop
      minibatch_size -- size of a minibatch
      print_cost -- True to print the cost every 100 epochs
      Returns:
      parameters -- parameters learnt by the model. They can then be used to predict.
      """
      ops.reset_default_graph()     # to be able to rerun the model without overwriting tf variables
      tf.set_random_seed(1)         # to keep consistent results
      seed = 3                      # to keep consistent results
      (n_x, m) = X_train.shape      # (n_x: input size, m : number of examples in the train set)
      n_y = Y_train.shape[0]        # n_y : output size
      costs = []                    # To keep track of the cost

      # Create Placeholders of shape (n_x, n_y)
      # X = tf.placeholder(tf.float32, shape=[n_x, None])
      # Y = tf.placeholder(tf.float32, shape=[n_y, None])
      X, Y = create_placeholders(n_x, n_y)

      # Initialize parameters
      parameters = initialize_parameters()

      # Forward propagation: Build the forward propagation in the tensorflow graph
      Z3 = forward_propagation(X, parameters)

      # Cost function: Add cost function to tensorflow graph
      cost = compute_cost(Z3, Y)

      # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
      ### START CODE HERE ### (1 line)
      optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)
      ### END CODE HERE ###

      # Initialize all the variables
      init = tf.global_variables_initializer()

      # Start the session to compute the tensorflow graph
      with tf.Session() as sess:

          # Run the initialization
          sess.run(init)

          # Do the training loop
          for epoch in range(num_epochs):

              epoch_cost = 0.                            # Defines a cost related to an epoch
              num_minibatches = int(m / minibatch_size)  # number of minibatches of size minibatch_size in the train set
              seed = seed + 1
              minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)

              for minibatch in minibatches:

                  # Select a minibatch
                  (minibatch_X, minibatch_Y) = minibatch

                  # IMPORTANT: The line that runs the graph on a minibatch.
                  # Run the session to execute the "optimizer" and the "cost", the feedict should contain a minibatch for (X,Y).
                  ### START CODE HERE ### (1 line)
                  _ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
                  ### END CODE HERE ###

                  epoch_cost += minibatch_cost / num_minibatches

              # Print the cost every epoch
              if print_cost == True and epoch % 100 == 0:
                  print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
              if print_cost == True and epoch % 5 == 0:
                  costs.append(epoch_cost)

          # plot the cost
          plt.plot(np.squeeze(costs))
          plt.ylabel('cost')
          plt.xlabel('iterations (per tens)')
          plt.title("Learning rate =" + str(learning_rate))
          plt.show()

          # lets save the parameters in a variable
          parameters = sess.run(parameters)
          print ("Parameters have been trained!")

          # Calculate the correct predictions
          correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))

          # Calculate accuracy on the test set
          accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

          print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
          print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))

          return parameters

Run the following cell to train your model! On our machine it takes about 5 minutes. Check your "Cost after epoch 100": the original course notebook (with tf.contrib.layers.xavier_initializer) reports 1.016458, while the glorot_normal run shown below gives 1.355498. If your cost doesn't match, don't waste time: interrupt the training by clicking on the square (⬛) in the upper bar of the notebook and try to correct your code. If the cost is correct, take a break and come back in 5 minutes!

  parameters = model(X_train, Y_train, X_test, Y_test)

  Cost after epoch 0: 1.882638
  Cost after epoch 100: 1.355498
  Cost after epoch 200: 1.132938
  Cost after epoch 300: 0.942660
  Cost after epoch 400: 0.807396
  Cost after epoch 500: 0.743592
  Cost after epoch 600: 0.643909
  Cost after epoch 700: 0.582018
  Cost after epoch 800: 0.536528
  Cost after epoch 900: 0.492420
  Cost after epoch 1000: 0.467958
  Cost after epoch 1100: 0.428083
  Cost after epoch 1200: 0.405582
  Cost after epoch 1300: 0.387620
  Cost after epoch 1400: 0.376748

[Plot: training cost decreasing over iterations, titled "Learning rate = 0.0001"]
Expected output:
Train Accuracy: 0.9990741
Test Accuracy: 0.725

Nice! Your algorithm can recognize a sign representing a number between 0 and 5 with 72.5% test accuracy.

Insights:

  • Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting; see the sketch after this list.
  • Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session many times (1500 epochs) until you obtained well-trained parameters.
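As an illustration of the L2 option mentioned above, here is a sketch only, not part of the graded assignment: the weight names follow initialize_parameters, and lambd is a hypothetical regularization hyperparameter you would have to tune.

  def compute_cost_with_l2(Z3, Y, parameters, lambd=0.01):
      # Softmax cross-entropy term, exactly as in compute_cost
      logits = tf.transpose(Z3)
      labels = tf.transpose(Y)
      cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))

      # L2 penalty on the weight matrices (biases are usually left out)
      l2 = tf.nn.l2_loss(parameters["W1"]) + tf.nn.l2_loss(parameters["W2"]) + tf.nn.l2_loss(parameters["W3"])

      return cross_entropy + lambd * l2

Swapping this in for compute_cost in model() would penalize large weights during training; the optimizer line itself stays unchanged.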

2.7 Test with Your Own Image (optional exercise)

Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder.
3. Write your image's name in the following code.
4. Run the code and check if the algorithm is right!

  import scipy
  from PIL import Image
  from scipy import ndimage

  ## START CODE HERE ## (PUT YOUR IMAGE NAME)
  my_image = "thumbs_up.jpg"
  ## END CODE HERE ##

  # We preprocess your image to fit your algorithm.
  fname = my_image
  image = np.array(ndimage.imread(fname, flatten=False))
  my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T
  my_image_prediction = predict(my_image, parameters)

  plt.imshow(image)
  print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))

  /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:11: DeprecationWarning: `imread` is deprecated!
  `imread` is deprecated in SciPy 1.0.0.
  Use ``matplotlib.pyplot.imread`` instead.
    # This is added back by InteractiveShellApp.init_path()
  /opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:12: DeprecationWarning: `imresize` is deprecated!
  `imresize` is deprecated in SciPy 1.0.0, and will be removed in 1.3.0.
  Use Pillow instead: ``numpy.array(Image.fromarray(arr).resize())``.
    if sys.path[0] == '':
  Your algorithm predicts: y = 3
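As the warnings above note, ndimage.imread and scipy.misc.imresize are deprecated in recent SciPy versions. A sketch of the same preprocessing using Pillow instead (reusing fname, predict, and parameters from the cell above) could be:

  from PIL import Image

  image = np.array(Image.open(fname))                      # read the image as a numpy array
  resized = np.array(Image.open(fname).resize((64, 64)))   # resize to 64 x 64 with Pillow
  my_image = resized.reshape((1, 64*64*3)).T               # flatten into a (12288, 1) column vector
  my_image_prediction = predict(my_image, parameters)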

[Image: the "thumbs_up.jpg" picture]

You indeed deserved a "thumbs-up", although as you can see the algorithm seems to classify it incorrectly. The reason is that the training set doesn't contain any "thumbs-up" images, so the model doesn't know how to deal with it! We call this a "mismatched data distribution", and it is one of the topics covered in the next course, "Structuring Machine Learning Projects".


What you should remember:

  • Tensorflow is a programming framework used in deep learning.
  • The two main object classes in tensorflow are Tensors and Operators.
  • When you code in tensorflow, you have to take the following steps:
    - Create a graph containing Tensors (Variables, Placeholders, ...) and Operations (tf.matmul, tf.add, ...)
    - Create a session
    - Initialize the session
    - Run the session to execute the graph
  • You can execute the graph multiple times, as you have seen in model()
  • The backpropagation and optimization are automatically done when running the session on the "optimizer" object.
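As a compact recap of those steps, here is a minimal sketch (our own toy example, assuming the same tensorflow.compat.v1 setup used throughout this notebook):

  # Steps 1-2: create tensors and write operations between them (builds the graph)
  x = tf.placeholder(tf.float32, name='x')
  w = tf.Variable(2.0, name='w')
  y = tf.multiply(w, x)

  # Steps 3-5: create a session, initialize the variables, and run the graph
  with tf.Session() as sess:
      sess.run(tf.global_variables_initializer())
      print(sess.run(y, feed_dict={x: 3.0}))   # 6.0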