This article discusses how backpropagation works in TensorFlow, one of the most popular deep learning libraries. Let's first look at what backpropagation is and the ideas related to it.

Backpropagation

Backpropagation is a fundamental technique used in training neural networks: it optimizes the weights and biases of a model based on the error between the predicted output and the actual output. The basic idea is to calculate the gradient of the loss function with respect to each weight and bias in the model. The gradient tells us how much the loss will change if a weight or bias is nudged by a small amount. The goal is to reduce the loss, which is achieved by iteratively updating the weights and biases based on these gradients (typically w <- w - eta * dL/dw, where eta is the learning rate). Backpropagation consists of two phases: a feedforward pass, followed by a backward pass in which the weights and biases are updated.

Feedforward Pass

This is the first step in training a neural network: data flows from the input layer through the hidden layers to the output layer, undergoing the network's computations. Neurons in each layer compute a weighted sum of their inputs and apply an activation function, capturing intricate patterns in the data. The hidden layers transform the data into increasingly abstract features, which helps the network model complex structure. The process culminates at the output layer, which produces predictions or classifications.

Backward Pass

The backward pass is a critical phase in neural network training; it begins after the forward pass has produced predictions, and its purpose is to minimize errors and enhance accuracy. First, the disparity between actual and predicted values (the loss) is calculated. Then error information is propagated backwards from the output layer towards the input layer. The key objective is to compute the gradients of the loss with respect to the network's weights and biases. These gradients reveal how much each weight and bias contributes to the error, telling the network how to adjust each parameter to reduce the error systematically. The weights are then updated, and both passes run iteratively until the loss is sufficiently low.

Backpropagation in TensorFlow

TensorFlow is one of the most popular deep learning libraries and supports efficient training of deep neural networks. Let's look at how backpropagation works in TensorFlow. TensorFlow computes gradients using automatic differentiation, a technique in which we never derive the gradient formulas by hand. When we define a neural network, TensorFlow automatically builds a computational graph that represents the flow of data through the network; each node corresponds to a mathematical operation used in both the forward and backward passes. To optimize the weights and biases of the model and minimize the loss, we use TensorFlow's automatic differentiation capabilities to compute the gradient of the loss function with respect to the weights and biases.
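As a minimal, self-contained sketch of this mechanism (the variable w and the toy loss here are illustrative, not from the original article), tf.GradientTape records operations on trainable variables so gradients can be computed automatically:

```python
import tensorflow as tf

# A trainable variable: TensorFlow tracks it inside the tape's context.
w = tf.Variable(3.0, trainable=True)

with tf.GradientTape() as tape:
    loss = w * w  # toy loss L(w) = w^2

# Automatic differentiation: dL/dw = 2w = 6.0
grad = tape.gradient(loss, w)
print(grad.numpy())  # 6.0
```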
When a variable is defined, it takes a trainable argument which can be set to True; this tells TensorFlow to keep track of its value during training and to compute its gradient with respect to the loss function. Once we have the gradients, TensorFlow's optimizers, such as SGD, Adagrad, and Adam, can be used to update the weights accordingly.

Implementing Backpropagation

Installing the libraries

First, install TensorFlow on your system by entering the following command in your terminal:

pip install tensorflow

Importing Libraries
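The article's original import listing did not survive extraction; a plausible reconstruction covering everything used in the steps below is:

```python
import numpy as np
import tensorflow as tf
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
```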
Here, we import all the libraries needed to build the model.
Loading the dataset
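A sketch of this step, assuming the dataset is the scikit-learn copy of Iris (the article's own listing is missing):

```python
# Load the Iris dataset: 150 samples, 4 numeric features, 3 classes
iris = load_iris()
X = iris.data.astype(np.float32)  # feature matrix
y = iris.target                   # integer class labels: 0, 1, 2
```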
In this code, we gather the data and preprocess it. Preprocessing typically involves cleaning the data, removing outliers, and, if the numerical values span a large range, scaling them to a specific range. To evaluate the model, the prepared data is then split into training and testing sets.

Training and Testing the model
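A sketch of this step; the 80/20 ratio is stated in the article, while the value random_state=42 is an assumed placeholder:

```python
# 80% training / 20% testing; random_state fixes the shuffle for reproducibility
# (42 is an assumed placeholder, not necessarily the article's value)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
```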
Here, we divide the Iris dataset into a training set (80%) and a testing set (20%) to facilitate the development and evaluation of the model. The random_state argument is set for reproducibility, ensuring that the same split is obtained each time the code is run.

Defining a machine learning model
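A sketch of a model consistent with the article's description; the exact layer sizes and activations are assumptions, since the original listing is missing:

```python
# A small feedforward network: 4 input features -> hidden layer -> 3 class logits
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(3),  # raw logits; softmax is applied in the loss
])
model.summary()
```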
Output (summary truncated):

Model: "sequential_2"

Here, we define the model using TensorFlow's Keras API.
Loss function and optimizer
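A sketch of this step. The article mentions SGD, Adagrad, and Adam as possible optimizers; Adam with an assumed learning rate is used here, and the loss matches the integer labels and logit outputs above:

```python
# Integer labels + logit outputs -> sparse categorical cross-entropy
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Adam is one of several optimizers mentioned above; the learning rate is assumed
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
```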
Here, we define the loss function and the optimizer used for the model.
Backpropagation

Now we implement backpropagation for the model inside a training loop.
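A sketch of the training loop, consistent with the output shown below: tf.GradientTape records the forward pass, tape.gradient performs the backward pass, and the optimizer applies the updates.

```python
epochs = 1000
for epoch in range(epochs):
    with tf.GradientTape() as tape:
        predictions = model(X_train, training=True)  # forward pass
        loss = loss_fn(y_train, predictions)         # compute the loss
    # Backward pass: gradients of the loss w.r.t. every trainable parameter
    gradients = tape.gradient(loss, model.trainable_variables)
    # Optimizer step: update weights and biases from the gradients
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    if (epoch + 1) % 100 == 0:
        print(f"Epoch {epoch + 1}/{epochs}, Loss: {loss.numpy()}")
```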
Output:

Epoch 100/1000, Loss: 0.7505729794502258

The above code is a training loop for the neural network. It iterates through a specified number of epochs, computing predictions and the loss, and then updating the model parameters using backpropagation and the optimizer. Training progress is monitored by printing the loss every 100 epochs.

Advantages
Disadvantages