Training deep neural networks presents difficulties such as vanishing gradients and slow convergence. In 2015, Sergey Ioffe and Christian Szegedy introduced Batch Normalization as a powerful technique to tackle these challenges. This article explores Batch Normalization and how it can be used in Keras, a well-known deep learning framework.

What is meant by Batch Normalization in Deep Learning?

Batch Normalization is a technique used in deep learning to standardize the inputs of each layer, ensuring stable training by reducing internal covariate shift and accelerating convergence. It normalizes the activations using the mean and variance computed over each mini-batch, and then applies learnable parameters for scaling and shifting.

Applying Batch Normalization in Keras using BatchNormalization Class

The Syntax of BatchNormalization Class in Keras

keras.layers.BatchNormalization(
axis=-1,
momentum=0.99,
epsilon=0.001,
center=True,
scale=True,
beta_initializer="zeros",
gamma_initializer="ones",
moving_mean_initializer="zeros",
moving_variance_initializer="ones",
beta_regularizer=None,
gamma_regularizer=None,
beta_constraint=None,
gamma_constraint=None,
synchronized=False,
**kwargs)

BatchNormalization Class Parameters

Here’s a breakdown of its parameters:

- axis: the axis that should be normalized, typically the features axis (the default of -1 normalizes the last axis).
- momentum: momentum for the moving averages of the mean and variance that are used at inference time.
- epsilon: a small constant added to the variance to avoid division by zero.
- center: if True, adds the learnable offset beta to the normalized tensor.
- scale: if True, multiplies the normalized tensor by the learnable factor gamma.
- beta_initializer, gamma_initializer: initializers for the beta and gamma weights.
- moving_mean_initializer, moving_variance_initializer: initializers for the moving mean and moving variance.
- beta_regularizer, gamma_regularizer: optional regularizers for the beta and gamma weights.
- beta_constraint, gamma_constraint: optional constraints for the beta and gamma weights.
- synchronized: if True, synchronizes the batch statistics across all replicas during distributed training.
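To make the roles of epsilon, gamma (scale) and beta (center) concrete, here is a minimal NumPy sketch of the computation the layer performs on a mini-batch at training time. The helper name batch_norm_forward is hypothetical, and the sketch omits the moving averages that the real layer maintains (governed by momentum) for use at inference:

import numpy as np

def batch_norm_forward(x, gamma, beta, epsilon=1e-3):
    # Mean and variance are computed per feature over the mini-batch
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    # Standardize, then apply the learnable scale (gamma) and shift (beta)
    x_hat = (x - mean) / np.sqrt(var + epsilon)
    return gamma * x_hat + beta

x = np.random.randn(32, 10)  # a mini-batch of 32 samples with 10 features
gamma = np.ones(10)          # matches gamma_initializer="ones"
beta = np.zeros(10)          # matches beta_initializer="zeros"
out = batch_norm_forward(x, gamma, beta)
print(out.mean(axis=0).round(2))  # ~0 for every feature
print(out.std(axis=0).round(2))   # ~1 for every feature

With gamma fixed at one and beta at zero the layer is a plain standardizer; during training, gradient descent adjusts both so the network can recover whatever scale and shift serve the next layer best.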
These parameters allow for fine-tuning and customization of the Batch Normalization layer according to specific requirements and architectural considerations. For example, you can control whether to include the learnable parameters (beta and gamma), specify the initialization and regularization methods, and adjust the axis of normalization.

Implementing BatchNormalization Class in Keras

In this section, we cover all the steps required to implement Batch Normalization in Keras with the help of the BatchNormalization class. Let’s discuss the steps:

Step 1: Importing Libraries

import numpy as np
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization

Step 2: Create a dummy dataset

# Generate toy dataset
np.random.seed(0)
X = np.random.randn(1000, 10) # 1000 samples, 10 features
y = np.random.randint(2, size=(1000,))  # Binary labels

# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

Step 3: Define the Model

A sequential model is defined with two hidden Dense layers, each followed by a BatchNormalization layer, and a sigmoid output layer for binary classification:

# Define the model
model = Sequential()
model.add(Dense(64, input_shape=(10,), activation='relu'))
model.add(BatchNormalization())
model.add(Dense(32, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(1, activation='sigmoid'))

Step 4: Compiling and Training the Model

The model is compiled with the Adam optimizer and binary cross-entropy loss (the standard choices for a sigmoid binary classifier), then trained for 20 epochs:

# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train the model
model.fit(X_train, y_train, epochs=20, batch_size=32, validation_split=0.1)

Complete Implementation of Batch Normalization using Keras Library
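Putting the steps together gives the following script. The train/test split, the compile settings, and the final evaluate call are reasonable completions noted in the steps above rather than taken verbatim from the original; model.summary() prints the architecture shown in the output below:

import numpy as np
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization

# Generate toy dataset
np.random.seed(0)
X = np.random.randn(1000, 10)           # 1000 samples, 10 features
y = np.random.randint(2, size=(1000,))  # Binary labels

# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Define the model
model = Sequential()
model.add(Dense(64, input_shape=(10,), activation='relu'))
model.add(BatchNormalization())
model.add(Dense(32, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(1, activation='sigmoid'))

# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train the model
model.fit(X_train, y_train, epochs=20, batch_size=32, validation_split=0.1)

# Evaluate on the held-out test set and print the architecture
loss, accuracy = model.evaluate(X_test, y_test)
model.summary()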
Output:

Model: "sequential_1"
_________________________________________________________________
 Layer (type)                               Output Shape   Param #
=================================================================
 dense (Dense)                              (None, 64)     704
 batch_normalization (BatchNormalization)   (None, 64)     256
 dense_1 (Dense)                            (None, 32)     2080
 batch_normalization_1 (BatchNormalization) (None, 32)     128
 dense_2 (Dense)                            (None, 1)      33
=================================================================
Total params: 3201 (12.50 KB)
Trainable params: 3009 (11.75 KB)
Non-trainable params: 192 (768.00 Byte)

Best Practices for using BatchNormalization Class in Keras

When using Batch Normalization in Keras, several best practices can help ensure optimal performance and stability:

- Place BatchNormalization layers after Dense or convolutional layers; inserting them before the activation function follows the original paper, although applying them after the activation (as in the example above) is also common in practice.
- Use mini-batch sizes large enough (typically 32 or more) for the batch statistics to be reliable; with very small batches, consider setting synchronized=True in distributed training or using alternatives such as Layer Normalization.
- Remember that the layer behaves differently during training and inference: it normalizes with the current batch statistics during training and with the learned moving averages at inference (see the sketch after this list).
- Since Batch Normalization has a mild regularizing effect, you may be able to reduce other regularization such as dropout.
- Batch Normalization often permits higher learning rates and faster convergence, so revisit your learning-rate choice after adding it.
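As a quick illustration of the training-versus-inference point, the sketch below (assuming the TensorFlow backend, where layer outputs expose .numpy()) shows that a freshly created layer produces different outputs depending on the training flag, because its moving averages are still at their initial values (mean 0, variance 1):

import numpy as np
from keras.layers import BatchNormalization

bn = BatchNormalization()
x = (np.random.randn(64, 8) * 5.0 + 3.0).astype("float32")  # shifted, scaled data

y_infer = bn(x, training=False)  # uses the moving averages, so x is barely changed
y_train = bn(x, training=True)   # uses this batch's statistics, so output is standardized

print(np.std(y_infer.numpy()))   # close to 5, the input's standard deviation
print(np.std(y_train.numpy()))   # close to 1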
By following these best practices, practitioners can effectively leverage Batch Normalization in Keras to develop robust and efficient deep learning models.