As dataset sizes and model complexity grow day by day, traditional training methods often cannot keep up with the demands of contemporary tasks. This has given rise to the need for distributed training. In simple words, distributed training splits the computational workload across many devices or machines so that machine learning models can be trained more quickly and efficiently. In this article, we will discuss distributed training with TensorFlow and see how you can incorporate it into your AI workflows, along with best practices and useful tips for getting the most out of TensorFlow's capabilities.

What is Distributed Training?

Distributed training is a technique in machine learning where model training is carried out by splitting the computational workload across several devices at a time, with each device actively contributing to the overall training. In machine learning, data is the key to successfully building a model: the more quality data you have, the better your model can train. However, as the size of your dataset increases, your model's complexity and the amount of computation also increase, making training a time-consuming process. One of the main reasons distributed training is used is therefore to speed up the training of large-scale models. There are two common approaches to distributed training: data parallelism, where each device trains a full copy of the model on a different slice of the data, and model parallelism, where the model itself is split across devices.
Distributed Training with TensorFlow

TensorFlow offers significant advantages by allowing the training phase to be split over multiple machines and devices. The main goal of distributed training is to parallelize computations, which drastically cuts down the amount of time required to train a model. It also improves resource efficiency by spreading the task among several devices, and it supports scalability, because a growing dataset can be split between several devices for processing. TensorFlow uses a number of techniques to divide the computational load among distributed computing resources.

Distributed Strategy in TensorFlow

In TensorFlow, a distribution strategy acts as an interface between the training code and the various machines or devices. Two of the most widely adopted strategies are MirroredStrategy, which replicates the model across multiple GPUs on a single machine, and MultiWorkerMirroredStrategy, which extends the same approach across multiple machines.
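As a quick illustration, the two strategies can be instantiated as shown below. This is a minimal sketch; running MultiWorkerMirroredStrategy across a real cluster additionally requires a TF_CONFIG environment variable describing the workers.

```python
import tensorflow as tf

# Single machine, multiple GPUs: the model's variables are mirrored
# on every GPU and gradient updates are kept in sync.
mirrored = tf.distribute.MirroredStrategy()

# Multiple machines (workers), each possibly with several GPUs.
# A real multi-worker run needs a TF_CONFIG environment variable that
# describes the cluster; without it, this falls back to a single worker.
multi_worker = tf.distribute.MultiWorkerMirroredStrategy()
```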
Although these strategies are offered by TensorFlow, it is entirely up to us to distribute the work efficiently among the multiple devices.

How does Distributed Training work in TensorFlow?

Let's look at how to use TensorFlow's distribution strategies to train a large-scale model. We will use the MNIST dataset in this example for simplicity and easy understanding.

Step 1: Import TensorFlow and define the Model

First, we import the TensorFlow library, specifically the layers and models modules from the Keras API. Then we define a simple neural network. Since we are using the MNIST dataset, we create a simple convolutional neural network (CNN) with the Sequential API. The model consists of a convolutional layer, a max-pooling layer, a flatten layer, and two dense layers.
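A minimal sketch of this step might look as follows (the build_model helper name is our own choice for illustration):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model():
    # Small CNN for 28x28 grayscale MNIST images: one convolutional layer,
    # one max-pooling layer, a flatten layer, and two dense layers.
    return models.Sequential([
        layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation='relu'),
        layers.Dense(10, activation='softmax'),
    ])
```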
Step 2: Load and Preprocess the Dataset

The MNIST dataset consists of 60,000 training and 10,000 test images of handwritten digits, ranging from 0 to 9. In the following code, we reshape the images to have a single channel (since they are grayscale) and normalize the pixel values to the range [0, 1] by dividing by 255.
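A sketch of the loading and preprocessing step, assuming the standard Keras MNIST loader:

```python
# Load MNIST: 60,000 training and 10,000 test images of digits 0-9
(train_images, train_labels), (test_images, test_labels) = \
    tf.keras.datasets.mnist.load_data()

# Add a single grayscale channel and scale pixel values to [0, 1]
train_images = train_images.reshape((60000, 28, 28, 1)).astype('float32') / 255.0
test_images = test_images.reshape((10000, 28, 28, 1)).astype('float32') / 255.0
```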
Step 3: Initialize MirroredStrategy

Next, we initialize the MirroredStrategy for distributed training. This strategy implements data parallelism: it replicates the model across multiple GPUs, if available, for computation.
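A minimal sketch of this step:

```python
# MirroredStrategy detects the available GPUs automatically and
# falls back to the CPU if none are found.
strategy = tf.distribute.MirroredStrategy()
print('Number of devices:', strategy.num_replicas_in_sync)
```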
Step 4: Wrap Model Creation and Training

We use a with statement to create the model within the scope of the MirroredStrategy. This allows TensorFlow to distribute the computations for model creation and training across the available devices; any operations defined under this with statement are distributed accordingly. We then compile the model, specifying the desired optimizer, loss function, and metrics. In this example, we use the Adam optimizer, the sparse categorical crossentropy loss function (since the labels are integers), and accuracy as the evaluation metric.
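A sketch of this step, reusing the build_model helper defined above:

```python
# Creating and compiling the model inside strategy.scope() ensures
# that its variables are mirrored across all participating devices.
with strategy.scope():
    model = build_model()
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
```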
Step 5: Create a Dataset Object

Now we create a TensorFlow Dataset object from the training images and labels. This Dataset object lets us iterate over the training data efficiently during training. Here, we shuffle the dataset and batch it with a batch size of 32.
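A sketch of this step using the tf.data API:

```python
# Wrap the NumPy arrays in a tf.data.Dataset, then shuffle and batch.
train_dataset = (
    tf.data.Dataset.from_tensor_slices((train_images, train_labels))
    .shuffle(60000)
    .batch(32)
)
```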
Step 6: Train the Model

We call the fit() method to train the model for 5 epochs, passing in the dataset. As the model trains, TensorFlow distributes the computation across the available devices using the MirroredStrategy, and the gradient updates are synchronized across devices.
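A sketch of the training call:

```python
# MirroredStrategy splits each batch across the replicas and keeps the
# gradient updates synchronized; fit() is called the same way as in
# the single-device case.
model.fit(train_dataset, epochs=5)
```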
Output: Epoch 1/5