
Custom loss function in TensorFlow for positive and negative examples

Python TensorFlow Building and Training a Simple Model: Exercise-9 with Solution

Write a Python program that creates a custom loss function in TensorFlow that penalizes errors differently for positive and negative examples.

Sample Solution:

Python Code:

import tensorflow as tf

# Custom loss function definition
def weighted_binary_crossentropy(y_true, y_pred):
    # Define custom weights for positive and negative examples
    positive_weight = 2.0  # Weight for positive examples
    negative_weight = 1.0  # Weight for negative examples
    
    # Calculate the binary cross-entropy loss with custom weights
    loss = - (positive_weight * y_true * tf.math.log(y_pred + 1e-10) +
              negative_weight * (1 - y_true) * tf.math.log(1 - y_pred + 1e-10))
    
    # Calculate the mean loss over the batch
    loss = tf.reduce_mean(loss)
    
    return loss

# Simulated ground truth and predicted values (for demonstration)
y_true = tf.constant([1.0, 0.0, 1.0, 0.0], dtype=tf.float32)
y_pred = tf.constant([0.8, 0.2, 0.7, 0.4], dtype=tf.float32)
# Calculate the custom loss using the defined function
loss = weighted_binary_crossentropy(y_true, y_pred)
# Print the custom loss value
print("Custom Loss:", loss.numpy())

Output:

Custom Loss: 0.47340152
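
As a quick sanity check, the printed value can be reproduced by hand with plain Python: apply the same weights to the same four (y_true, y_pred) pairs and average. The epsilon term is omitted here, since none of these predictions is close to 0 or 1.

import math

positive_weight, negative_weight = 2.0, 1.0
y_true = [1.0, 0.0, 1.0, 0.0]
y_pred = [0.8, 0.2, 0.7, 0.4]

# Per-example weighted binary cross-entropy
losses = [-(positive_weight * t * math.log(p) +
            negative_weight * (1 - t) * math.log(1 - p))
          for t, p in zip(y_true, y_pred)]

# Mean over the batch
print(sum(losses) / len(losses))  # ~0.4734015, matching the float32 result above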

Explanation:

In the exercise above:

  • Import the TensorFlow module.
  • Define a custom loss function "weighted_binary_crossentropy()" that takes two arguments: 'y_true' (ground truth labels) and 'y_pred' (predicted probabilities).
  • Inside the function, specify different weights for positive and negative examples ('positive_weight' and 'negative_weight').
  • Calculate the binary cross-entropy loss, applying each weight to its corresponding term: 'positive_weight' scales the y_true * log(y_pred) term, and 'negative_weight' scales the (1 - y_true) * log(1 - y_pred) term. To avoid numerical instability when a prediction approaches 0 or 1, add a small epsilon value (1e-10) inside each logarithm.
  • Average the per-example losses over the batch with "tf.reduce_mean()".
  • Finally, return the calculated loss (a sketch of training with this loss follows below).
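
In practice, a custom loss such as this is passed to "model.compile()", since Keras accepts any callable with the signature (y_true, y_pred) as a loss. The sketch below is a minimal illustration: the tiny two-layer model, the optimizer choice, and the random toy data are assumptions made for demonstration, not part of the exercise; only the loss function comes from the solution above.

import numpy as np
import tensorflow as tf

# weighted_binary_crossentropy is the custom loss defined above

# A small illustrative model; the sigmoid output keeps
# predictions in (0, 1), as the loss expects
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Pass the custom loss callable directly to compile()
model.compile(optimizer="adam", loss=weighted_binary_crossentropy)

# Hypothetical toy data, for illustration only
X = np.random.rand(32, 4).astype("float32")
y = np.random.randint(0, 2, size=(32, 1)).astype("float32")
model.fit(X, y, epochs=2, verbose=0)

Note that TensorFlow also ships a built-in "tf.nn.weighted_cross_entropy_with_logits()", which applies the same pos_weight idea but operates on raw logits rather than probabilities, making it more numerically stable than the epsilon workaround used here.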

