Python TensorFlow: Training Neural Networks with Adam Optimizer
Python TensorFlow Building and Training a Simple Model: Exercise-11 with Solution
Write a Python program that uses the Adam optimizer in TensorFlow to minimize the loss of a neural network.
Sample Solution:
Python Code:
import tensorflow as tf
import numpy as np
# Generate some random data for a simple regression problem
np.random.seed(0)
X = np.random.rand(100, 1)
y = 2 * X + 1 + 0.1 * np.random.randn(100, 1)
# Define the neural network model
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(1)
])
# Define the mean squared error (MSE) loss function
loss_function = tf.keras.losses.MeanSquaredError()
# Define the Adam optimizer with a specified learning rate
learning_rate = 0.01
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
# Compile the model with the optimizer and loss function
model.compile(optimizer=optimizer, loss=loss_function)
# Training loop
num_epochs = 100
for epoch in range(num_epochs):
    # Train the model on the data for one epoch and record the loss
    history = model.fit(X, y, epochs=1, verbose=0)
    loss = history.history['loss'][0]
    # Print the loss for monitoring
    print(f"Epoch {epoch+1}/{num_epochs}, Loss: {loss}")
# Get the final model parameters (weights and bias)
final_weights, final_bias = model.layers[0].get_weights()
print("Final Weights:", final_weights)
print("Final Bias:", final_bias)
Output:
Epoch 1/100, Loss: 5.959100246429443
Epoch 2/100, Loss: 5.667813301086426
Epoch 3/100, Loss: 5.3968706130981445
Epoch 4/100, Loss: 5.132824897766113
Epoch 5/100, Loss: 4.87064266204834
Epoch 6/100, Loss: 4.619353771209717
Epoch 7/100, Loss: 4.378358840942383
Epoch 8/100, Loss: 4.146295070648193
Epoch 9/100, Loss: 3.9255177974700928
...
Epoch 96/100, Loss: 0.0867009088397026
Epoch 97/100, Loss: 0.08598435670137405
Epoch 98/100, Loss: 0.08519038558006287
Epoch 99/100, Loss: 0.08445620536804199
Epoch 100/100, Loss: 0.08377636969089508
Final Weights: [[1.0589921]]
Final Bias: [1.4905064]
Explanation:
import tensorflow as tf
import numpy as np
Import TensorFlow and NumPy libraries.
-----------------------------------------------------
Generate Random Data:
np.random.seed(0)
X = np.random.rand(100, 1)
y = 2 * X + 1 + 0.1 * np.random.randn(100, 1)
This code generates random input data X and target data y for a simple linear regression problem: 100 samples drawn from the relationship y = 2x + 1 with a small amount of Gaussian noise added. Seeding NumPy's random generator makes the dataset reproducible.
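As a quick sanity check (not part of the exercise itself), you can print the shapes and a few samples to confirm the data match the intended relationship:

print(X.shape, y.shape)  # (100, 1) (100, 1)
print(X[:3], y[:3])      # inputs in [0, 1); targets near 2 * X + 1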
------------------------------------------------------
Define the Neural Network Model:
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(1)
])
Here, a simple neural network model is defined using TensorFlow's Keras API. It consists of a single dense (fully connected) layer. The 'Input' specification declares the shape of the input data (it does not count as a trainable layer), and the 'Dense' layer serves as the output layer.
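To verify the architecture, you can print a model summary; since the layer maps one input feature to one output unit, it should report exactly two trainable parameters (one weight and one bias):

model.summary()  # a single Dense layer with 2 trainable parameters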
---------------------------------------------------------
Define Loss Function and Optimizer:
loss_function = tf.keras.losses.MeanSquaredError()
learning_rate = 0.01
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
In the above code:
- The mean squared error (MSE) loss function is defined to measure the model's prediction error.
- The Adam optimizer is created with a learning rate of 0.01 and is used for model training (see the sketch of Adam's update rule below).
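For intuition, Adam maintains exponentially decaying averages of past gradients (first moment) and past squared gradients (second moment), and uses their bias-corrected values to scale each parameter update. The NumPy sketch below of a single Adam step is illustrative only; Keras implements this rule internally, and the defaults shown (beta1=0.9, beta2=0.999, eps=1e-7) match tf.keras.optimizers.Adam. It reuses the np import from the solution code:

def adam_step(param, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-7):
    # Update biased first- and second-moment estimates
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias-correct the estimates (t is the 1-based step count)
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Scale the step by the running gradient statistics
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v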
------------------------------------------------------------
Compile the Model:
model.compile(optimizer=optimizer, loss=loss_function)
The model is compiled with the specified optimizer and loss function, preparing it for training.
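As an aside, Keras also accepts string identifiers for common optimizers and losses. The explicit objects are used in this exercise so the learning rate can be set to 0.01; the shorthand below would fall back to Adam's default learning rate of 0.001:

# Equivalent shorthand with default hyperparameters:
# model.compile(optimizer='adam', loss='mse')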
------------------------------------------------------------
Training Loop:
num_epochs = 100
for epoch in range(num_epochs):
    history = model.fit(X, y, epochs=1, verbose=0)
    loss = history.history['loss'][0]
    print(f"Epoch {epoch+1}/{num_epochs}, Loss: {loss}")
In the above code:
- The training loop runs for a specified number of epochs (100 in this case).
- In each epoch, the model is trained using the model.fit() method with the input data X and target data y.
- The loss for each epoch is printed to monitor training progress (an equivalent single-call form is sketched below).
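Because history.history['loss'] records one value per epoch, the loop above is equivalent to a single fit() call; a minimal sketch of that more idiomatic form:

history = model.fit(X, y, epochs=num_epochs, verbose=0)
for epoch, loss in enumerate(history.history['loss'], start=1):
    print(f"Epoch {epoch}/{num_epochs}, Loss: {loss}")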
---------------------------------------------------------------
Get the Final Model Parameters:
final_weights, final_bias = model.layers[0].get_weights()
print("Final Weights:", final_weights)
print("Final Bias:", final_bias)
In the above code:
- After training, the final weight and bias of the model are retrieved using model.layers[0].get_weights(); since the Input specification does not appear in model.layers, index 0 refers to the Dense layer.
- The final model parameters are printed to the console; a prediction check against the true relationship is sketched below.
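Note that the learned weight (about 1.06) and bias (about 1.49) have not yet reached the true values of 2 and 1, so the model is still converging after 100 epochs. One way to inspect the fit is to compare predictions with the known generating function:

test_x = np.array([[0.0], [0.5], [1.0]])
print(model.predict(test_x, verbose=0))  # predictions from the trained model
print(2 * test_x + 1)                    # targets from the true relationship y = 2x + 1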
This code demonstrates a complete workflow for training a simple linear regression model using TensorFlow. It includes data generation, model definition, loss function, optimizer, training loop, and parameter retrieval. The Adam optimizer minimizes mean squared error loss during training.
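For readers curious about what model.fit() does internally, a single training step can also be written by hand with tf.GradientTape. This is a sketch of the standard TensorFlow 2 custom-training pattern, reusing the model, loss_function, and optimizer objects defined above:

with tf.GradientTape() as tape:
    predictions = model(X, training=True)  # forward pass
    loss = loss_function(y, predictions)   # mean squared error
# Compute gradients of the loss and let Adam apply the update
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))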