This is a general question. I know how to write a custom loss function for TensorFlow that works the following way. Let's say we have a custom loss function L(y_true, y_pred), a neural network NN(x), and some batch {x_1, ..., x_n} with ground truth {y_true_1, ..., y_true_n}. TensorFlow would then minimize the following objective with gradient descent:

L(y_true_1, NN(x_1)) + ... + L(y_true_n, NN(x_n))
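For concreteness, here is a minimal sketch of the per-sample setup I mean (assuming the Keras API; `my_loss` and the model are placeholders, and a squared error just stands in for L):

```python
import tensorflow as tf

# Per-sample loss L(y_true, y_pred); squared error is only an example.
# Keras calls this with batch tensors of shape (batch_size, ...) and by
# default reduces the per-sample values to their mean over the batch.
def my_loss(y_true, y_pred):
    return tf.reduce_sum(tf.square(y_true - y_pred), axis=-1)

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss=my_loss)
```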
My question is: is it also possible (in TensorFlow, PyTorch, or wherever) to define a loss function L_2 on the whole batch, so that the following objective is minimized by gradient descent instead:

L_2((y_true_1, ..., y_true_n), (NN(x_1), ..., NN(x_n)))

So basically I don't want the sum from the first objective, and would rather define my own function over the batch.
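And a rough sketch of what I imagine L_2 could look like (assuming the loss function really does receive the full batch tensors at once; `my_batch_loss` is a made-up name, and the batch-mean comparison is just an arbitrary example of a loss that is not a per-sample sum):

```python
import tensorflow as tf

# Hypothetical batch-level loss L_2: it looks at the whole batch at once
# instead of summing independent per-sample terms. As an arbitrary example,
# compare the mean prediction of the batch with the mean ground truth.
def my_batch_loss(y_true, y_pred):
    return tf.square(tf.reduce_mean(y_pred) - tf.reduce_mean(y_true))

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss=my_batch_loss)
```

Thanks for the help!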