https://keras.io/guides/functional_api/
Common functions
model.compile()
from keras.models import Model
from keras.layers import Input, Dense
a = Input(shape=(32,))
b = Dense(32)(a)
model = Model(inputs=a, outputs=b)
model.compile(optimizer, loss=None, metrics=None, loss_weights=None, sample_weight_mode=None, weighted_metrics=None, target_tensors=None)
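A minimal example of filling in these arguments (the 'adam' / 'mse' / 'mae' choices here are illustrative placeholders, not from the original notes):

model.compile(optimizer='adam',   # optimizer, by name or as an instance
              loss='mse',         # loss to minimize during training
              metrics=['mae'])    # monitored during training but not optimized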
loss — the loss function
metrics — the evaluation metrics for the model
A metric is similar to a loss function, except that its results are not used to train the model.
Uses: 1. watch how the evaluation metric changes during training; 2. drive early stopping: stop training once the metric stops improving (see the sketch below).
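A minimal sketch of point 2, assuming the built-in EarlyStopping callback; x_train/y_train are placeholder arrays, and monitor='val_loss' with patience=3 are illustrative choices:

from tensorflow import keras

# Stop once the monitored quantity has not improved for 3 consecutive epochs.
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=3)
model.fit(x_train, y_train, validation_split=0.1, epochs=100,
          callbacks=[early_stop])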
Callbacks
keras.callbacks.Callback()
fit_generator
model.fit() is given a batch_size argument.
model.fit_generator() is given a steps_per_epoch argument instead; no batch_size is specified.
fit() loads the entire training set at once, so one full pass over it marks the end of an epoch.
A generator loads only part of the dataset at a time, so by itself it cannot tell when an epoch ends.
fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0)
steps_per_epoch = len(x_train) // batch_size
This specifies how many steps make up one epoch, i.e. after how many steps Keras should treat the epoch as finished (the value must be an integer, hence the floor division).
- Real-world datasets are often too large to fit into memory.
- They also tend to be challenging, requiring us to perform data augmentation to avoid overfitting and increase the ability of our model to generalize.
Since the generator function is intended to loop infinitely, Keras has no way to determine when one epoch ends and the next begins.
Therefore, we compute the steps_per_epoch value as the total number of training data points divided by the batch size. Once Keras hits this step count, it knows that a new epoch has begun (see the sketch below).
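A minimal sketch of such an infinite generator and the matching fit_generator call, assuming NumPy arrays x_train/y_train and an already-compiled model; batch_size=32 is an illustrative choice:

import numpy as np

def batch_generator(x, y, batch_size):
    """Yield shuffled (x, y) batches forever; Keras decides the epoch boundary."""
    n = len(x)
    while True:  # loop infinitely, as fit_generator expects
        idx = np.random.permutation(n)
        for start in range(0, n - batch_size + 1, batch_size):
            batch = idx[start:start + batch_size]
            yield x[batch], y[batch]

model.fit_generator(batch_generator(x_train, y_train, batch_size=32),
                    steps_per_epoch=len(x_train) // 32,
                    epochs=10)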
from tensorflow import keras

class LossHistory(keras.callbacks.Callback):
    """A Keras callback that records the training loss of every batch.

    1. Initialize a LossHistory object inside your agent.
    2. Pass callbacks=[self.loss_history] in the model.fit() call.
    """
    def __init__(self):
        super().__init__()
        self.losses = []

    def on_train_begin(self, logs=None):
        pass

    def on_batch_end(self, batch, logs=None):
        # logs holds the metrics of the batch that just finished
        self.losses.append((logs or {}).get('loss'))

    def losses_clear(self):
        self.losses = []
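Hedged usage sketch (model, x_train, and y_train are assumed to already exist):

loss_history = LossHistory()
model.fit(x_train, y_train, batch_size=32, epochs=5,
          callbacks=[loss_history])
print(loss_history.losses[:10])  # per-batch losses recorded so far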
Custom losses
import numpy as np
import tensorflow as tf
from keras import backend as K

def L1_Charbonnier_loss(y_true, y_pred):
    """L1 Charbonnier loss: a differentiable approximation of the L1 loss."""
    eps = 1e-6
    y_true = tf.convert_to_tensor(y_true, np.float32)
    y_pred = tf.convert_to_tensor(y_pred, np.float32)
    diff = y_true - y_pred
    error = K.sqrt(diff * diff + eps)  # eps keeps the gradient smooth near zero
    loss = K.sum(error)
    return loss
def PSNR(y_true, y_pred):
    """Peak signal-to-noise ratio, assuming pixel values in [0, 255]."""
    y_true = tf.convert_to_tensor(y_true, np.float32)
    y_pred = tf.convert_to_tensor(y_pred, np.float32)
    diff = y_true - y_pred
    rmse = K.sqrt(K.mean(diff * diff))
    return 20.0 * K.log(255.0 / rmse) / K.log(10.0)  # convert to base-10 log
-------------------------------------------------------------------------------------------
# training
model.compile(optimizer='adam',
              loss=L1_Charbonnier_loss,
              metrics=[PSNR])