Implementing a categorical cross-entropy loss function in TensorFlow

Python TensorFlow Building and Training a Simple Model: Exercise-7 with Solution

Write a Python program that implements a categorical cross-entropy loss function using TensorFlow for a multi-class classification problem.

From Wikipedia –
In information theory, the cross-entropy between two probability distributions p and q over the same underlying set of events measures the average number of bits needed to identify an event drawn from the set if a coding scheme used for the set is optimized for an estimated probability distribution q, rather than the true distribution p.
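
Concretely, for a single sample with one-hot label y and predicted class probabilities p over K classes, the categorical cross-entropy reduces to loss = -sum_k y_k * log(p_k), which is simply -log of the probability the model assigns to the true class. As a minimal illustration (the label and probabilities below are made-up values for this sketch, not taken from the exercise):

import math

# One-hot label and predicted probabilities for a single 3-class sample (illustrative values)
y = [1.0, 0.0, 0.0]
p = [0.7, 0.2, 0.1]

# Categorical cross-entropy: -sum over classes of y_k * log(p_k)
loss = -sum(y_k * math.log(p_k) for y_k, p_k in zip(y, p))
print(loss)  # -log(0.7) ≈ 0.3567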

Sample Solution:

Python Code:

import tensorflow as tf

# Simulated ground truth (one-hot encoded) and predicted logits (for demonstration)
y_true = tf.constant([[1, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=tf.float32)
logits = tf.constant([[0.4, 2.0, 1.3], [1.2, 0.3, 0.2], [0.4, 0.5, 2.3]], dtype=tf.float32)

# Calculate the categorical cross-entropy loss.
# from_logits=True makes Keras apply softmax to the raw logits first;
# without it, the values would be treated (and silently rescaled) as probabilities.
loss = tf.keras.losses.categorical_crossentropy(y_true, logits, from_logits=True)

# Print the per-sample loss values
print("Categorical Cross-Entropy Loss:", loss.numpy())

Explanation:

In the exercise above –

  • Import the TensorFlow module.
  • Create two tensors, y_true and logits, holding the one-hot encoded ground-truth labels and the predicted logits, respectively.
  • Compute the categorical cross-entropy loss with TensorFlow's built-in tf.keras.losses.categorical_crossentropy function. Passing from_logits=True tells the function to apply softmax to the logits before computing the loss; omitting it would make Keras treat the raw logits as probabilities and rescale them, producing misleading values. (A sanity-check sketch follows this list.)
  • Finally, print the computed per-sample loss values using loss.numpy().
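
As a sanity check, the same per-sample values can be reproduced by applying the softmax yourself, or with the lower-level tf.nn.softmax_cross_entropy_with_logits helper. A minimal sketch, reusing y_true and logits from the solution above:

import tensorflow as tf

y_true = tf.constant([[1, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=tf.float32)
logits = tf.constant([[0.4, 2.0, 1.3], [1.2, 0.3, 0.2], [0.4, 0.5, 2.3]], dtype=tf.float32)

# Manual computation: softmax first, then -sum(y_true * log(probs)) per sample
probs = tf.nn.softmax(logits)
manual = -tf.reduce_sum(y_true * tf.math.log(probs), axis=-1)

# Fused helper that combines softmax and cross-entropy in one numerically stable op
fused = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=logits)

print(manual.numpy())  # should match the Keras result
print(fused.numpy())   # same values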

Output:

Categorical Cross-Entropy Loss: [2.1297348 1.5734899 2.0737359]

(The exact trailing digits may vary slightly across TensorFlow versions and hardware.)

Output (Explanation):

[2.1297348 1.5734899 2.0737359]: These are the per-sample loss values for the three data points in the batch. Each value is the categorical cross-entropy between a sample's one-hot label and the softmax of its logits, i.e. -log of the probability assigned to the true class.

  • The first value, 2.1297348, is the loss for the first sample: softmax([0.4, 2.0, 1.3]) assigns the true class (index 0) a probability of about 0.119, and -log(0.119) ≈ 2.13.
  • The second value, 1.5734899, is the loss for the second sample, the lowest of the three because its softmax assigns the most probability (about 0.21) to the true class.
  • The third value, 2.0737359, is the loss for the third sample (true-class probability about 0.126).
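
In a real training script you would normally not call the loss function by hand; instead you pass a loss object to model.compile() and leave the model's final layer without a softmax activation. A minimal sketch, assuming a small made-up model (the layer sizes and 4-feature input are illustrative, not part of the exercise):

import tensorflow as tf

# Illustrative model: 4 input features, 3 output classes, raw logits out
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3),  # no softmax: the loss applies it internally
])

# from_logits=True pairs the logit-producing model with the loss
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

If the labels are integer class indices rather than one-hot vectors, tf.keras.losses.SparseCategoricalCrossentropy is the drop-in alternative.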

