Each data set is a pair of NumPy matrices.
For our experiment, the training hyper-parameter is the L2 regularization coefficient, passed to each layer via:

kernel_regularizer=keras.regularizers.l2(lmbda)

(Note that `lambda` is a reserved word in Python, so the coefficient must be stored under a different variable name, e.g. `lmbda`.)
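Conceptually, the L2 regularizer adds a penalty of lmbda times the sum of squared weights to the loss. A NumPy-only sketch of that computation (illustrative only, not the Keras implementation; the function name `l2_penalty` is hypothetical):

```python
import numpy as np

def l2_penalty(weights, lmbda):
    # Sum of squared weights, scaled by the regularization coefficient;
    # this is the quantity keras.regularizers.l2 adds to the loss
    return lmbda * np.sum(np.square(weights))

w = np.array([[1.0, -2.0],
              [0.5,  0.0]])
penalty = l2_penalty(w, lmbda=0.01)
# 0.01 * (1.0 + 4.0 + 0.25 + 0.0) = 0.0525
```

Larger lmbda pushes the weights toward zero more strongly, which is why it trades off training fit against generalization.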
Here is an example of setting up and using the callback:
early_stopping = keras.callbacks.EarlyStopping(monitor='val_loss',
                                               min_delta=0,
                                               patience=params['patience'],
                                               verbose=0,
                                               mode='auto',
                                               baseline=None,
                                               restore_best_weights=True)

model.fit(x=ins_training, y=outs_training,
          validation_data=(ins_val, outs_val),
          epochs=epochs,
          callbacks=[early_stopping])
Here is an example function that will do this:
def compute_fvaf(model, ins, outs):
    # Mean squared error over the full data set, evaluated as a single batch
    y_mse = model.evaluate(x=ins, y=outs, batch_size=outs.shape[0])
    # Fraction of variance accounted for (assumes numpy imported as np)
    fvaf = 1 - y_mse / np.var(outs)
    return fvaf
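To see what FVAF measures without needing a trained model, here is a NumPy-only sketch of the same quantity computed from raw predictions (the helper name `fvaf_from_predictions` is hypothetical): a perfect model yields FVAF = 1, while a model that always predicts the target mean yields FVAF = 0.

```python
import numpy as np

def fvaf_from_predictions(preds, outs):
    # Same quantity as compute_fvaf, but computed directly:
    # 1 - MSE / variance of the targets
    mse = np.mean((preds - outs) ** 2)
    return 1 - mse / np.var(outs)

outs = np.array([1.0, 2.0, 3.0, 4.0])

fvaf_from_predictions(outs, outs)                     # perfect predictions -> 1.0
fvaf_from_predictions(np.full(4, outs.mean()), outs)  # mean predictor -> 0.0
```

Negative FVAF values are possible and indicate a model that performs worse than predicting the mean.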
This function also takes as input the regularization parameter and the number of folds to use for training (Ntraining).
# Rotation r selects which Ntraining of the Nfolds folds are used for training
folds_training = (np.arange(Ntraining) + r) % Nfolds
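To illustrate how the rotation works, here is a small sketch with assumed values Nfolds = 5 and Ntraining = 3: each rotation r shifts the window of training folds, wrapping around past the last fold.

```python
import numpy as np

# Hypothetical setting: 5 folds total, 3 used for training
Nfolds = 5
Ntraining = 3

# r = 0 selects folds 0, 1, 2; r = 4 wraps around to folds 4, 0, 1
folds_r0 = (np.arange(Ntraining) + 0) % Nfolds   # array([0, 1, 2])
folds_r4 = (np.arange(Ntraining) + 4) % Nfolds   # array([4, 0, 1])
```

The remaining folds in each rotation are then available for validation and testing.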
np.argmax() can be useful here.
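For example, np.argmax() can pick out the hyper-parameter with the best validation performance. A sketch with made-up FVAF values (the arrays below are hypothetical, not real results):

```python
import numpy as np

# Hypothetical candidate regularization coefficients and their
# corresponding validation-set FVAF values
lambdas = np.array([0.0, 0.001, 0.01, 0.1])
val_fvaf = np.array([0.62, 0.71, 0.75, 0.58])

best_idx = np.argmax(val_fvaf)    # index of the highest validation FVAF
best_lambda = lambdas[best_idx]   # the corresponding coefficient, 0.01
```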
Last modified: Mon Feb 18 23:23:33 2019