Custom Loss Function of Keras Model Giving Incorrect Answer

I am trying to write a custom loss function for a Keras NN model, but it seems like the loss function is outputting the wrong value. My Python loss function is:

def tangle_loss3(input_tensor):
    def custom_loss(y_true, y_pred):
        true_diff = y_true - input_tensor
        pred_diff = y_pred - input_tensor

        normalized_diff = K.abs(tf.math.divide(pred_diff, true_diff))
        normalized_diff = tf.reduce_mean(normalized_diff)

        return normalized_diff

    return custom_loss

Then I use it in this simple feed-forward network:

input_layer = Input(shape=(384,), name='input')
hl_1 = Dense(64, activation='elu', name='hl_1')(input_layer)
hl_2 = Dense(32, activation='elu', name='hl_2')(hl_1)
hl_3 = Dense(32, activation='elu', name='hl_3')(hl_2)
output_layer = Dense(384, activation=None, name='output')(hl_3)

optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

model = tf.keras.models.Model(input_layer, output_layer)
model.compile(loss=tangle_loss3(input_layer), optimizer=optimizer)

Then, to test whether the loss function is working, I created a random input and a random target vector and did the numpy calculation of what I expect, but this does not match the result from Keras:

X = np.random.rand(1, 384)
y = np.random.rand(1, 384)

np.mean(np.abs((model.predict(X) - X)/(y - X)))
# returns some number

model.test_on_batch(X, y)
# always returns 0.0
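As a sanity check, I verified that the inner custom_loss does compute the value I expect when it is called directly on concrete tensors in eager mode, outside of any model (a minimal sketch; the explicit float32 casts are mine):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K

def tangle_loss3(input_tensor):
    def custom_loss(y_true, y_pred):
        true_diff = y_true - input_tensor
        pred_diff = y_pred - input_tensor
        normalized_diff = K.abs(tf.math.divide(pred_diff, true_diff))
        return tf.reduce_mean(normalized_diff)
    return custom_loss

# concrete float32 arrays standing in for the input, target, and prediction
X = np.random.rand(1, 384).astype("float32")
y = np.random.rand(1, 384).astype("float32")
y_pred = np.random.rand(1, 384).astype("float32")

# evaluate the loss eagerly, with the input passed as a constant tensor
loss_fn = tangle_loss3(tf.constant(X))
tf_val = float(loss_fn(tf.constant(y), tf.constant(y_pred)))

# the same formula in plain numpy
np_val = np.mean(np.abs((y_pred - X) / (y - X)))
```

Here tf_val and np_val agree, so the formula itself seems right; the mismatch only appears when the loss goes through model.compile with the symbolic input_layer.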

Why does my loss function always return zero? And should these answers match?