Neural networks are the foundation of modern artificial intelligence, giving machines the ability to learn from data and make human-like decisions. As computational models, they can perform tasks such as regression, classification, and generation after learning from input data. PyTorch is a popular open-source framework for creating and training neural networks in Python. In this tutorial, you will learn how to use PyTorch to build a basic neural network and classify handwritten digits from the MNIST dataset.

How to Create a Neural Network in PyTorch?

PyTorch offers two primary ways to build neural networks: the nn.Module class and the nn.Sequential container. To construct your own custom network, subclass nn.Module and implement the __init__ and forward methods: __init__ defines the network's layers and parameters, while forward specifies how input is passed through those layers and returned as output. Alternatively, the nn.Sequential container lets you define a network by supplying a list of layers as arguments; the layers are connected automatically in the order given. These and several other modules and utilities offered by PyTorch make implementing a neural network in Python simple. The main steps are walked through in the example below.
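To make the two approaches concrete, here is a minimal sketch of the same two-layer network defined both ways; the layer sizes (784, 512, 10) match the model built later in this tutorial.

Python

import torch
import torch.nn as nn

# Approach 1: subclass nn.Module and implement __init__ and forward
class TwoLayerNet(nn.Module):
    def __init__(self):
        super(TwoLayerNet, self).__init__()
        self.fc1 = nn.Linear(784, 512)
        self.fc2 = nn.Linear(512, 10)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

# Approach 2: pass the layers to nn.Sequential; they are joined in order
sequential_net = nn.Sequential(
    nn.Linear(784, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)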
Implementing a Feedforward Neural Network for MNIST

For a better understanding, let's see how to create a neural network in PyTorch step by step. Note that these are brief examples rather than comprehensive solutions; you can expand and adapt them to suit your needs. In this example, a simple feedforward neural network classifies handwritten digits from the MNIST dataset.
Step 1: Import the necessary libraries
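A minimal set of imports covering everything used in this tutorial might look like the following: torch and torch.nn for the model, torchvision for the dataset and transforms, and matplotlib for visualization.

Python

import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt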
Step 2: Define the hyperparameters and transformation

Next we define the hyperparameters and a transformation to apply to the images. Hyperparameters such as the batch size, learning rate, and number of epochs control how the model is trained, while the transformation converts each image to a tensor and normalizes its pixel values.
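A sketch of these definitions follows. The batch size and learning rate shown are assumed values; the epoch count of 10 matches the training log later in this tutorial, and the normalization constants are the commonly used MNIST mean and standard deviation.

Python

# Hyperparameters (batch size and learning rate are assumed values)
batch_size = 64
learning_rate = 0.01
num_epochs = 10

# Convert images to tensors and normalize pixel values
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,)),
])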
Step 3: Load and prepare the dataset

We then download the MNIST dataset, which consists of handwritten digit images and their corresponding labels. We initialize two datasets, train_dataset for training data and test_dataset for testing data, both configured with the transformation defined earlier so that images are converted to tensors and their pixel values normalized. Finally, we create two data loaders, train_loader and test_loader, to handle batching and shuffling of the data during the training and testing phases, respectively.
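A sketch of this setup, assuming the batch_size and transform defined in Step 2 (the root directory './' matches the ./MNIST/raw paths in the download log below):

Python

train_dataset = torchvision.datasets.MNIST(root='./', train=True,
                                           transform=transform, download=True)
test_dataset = torchvision.datasets.MNIST(root='./', train=False,
                                          transform=transform, download=True)

# Data loaders handle batching; only the training data is shuffled
train_loader = torch.utils.data.DataLoader(train_dataset,
                                           batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset,
                                          batch_size=batch_size, shuffle=False)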
Output:
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ./MNIST/raw/train-images-idx3-ubyte.gz
100%|██████████| 9912422/9912422 [00:00<00:00, 78077039.54it/s]
Extracting ./MNIST/raw/train-images-idx3-ubyte.gz to ./MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz to ./MNIST/raw/train-labels-idx1-ubyte.gz
100%|██████████| 28881/28881 [00:00<00:00, 65021843.17it/s]
Extracting ./MNIST/raw/train-labels-idx1-ubyte.gz to ./MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz to ./MNIST/raw/t10k-images-idx3-ubyte.gz
100%|██████████| 1648877/1648877 [00:00<00:00, 22545472.73it/s]
Extracting ./MNIST/raw/t10k-images-idx3-ubyte.gz to ./MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz to ./MNIST/raw/t10k-labels-idx1-ubyte.gz
100%|██████████| 4542/4542 [00:00<00:00, 12298598.30it/s]
Extracting ./MNIST/raw/t10k-labels-idx1-ubyte.gz to ./MNIST/raw
Step 4: Define the neural network model

We define a simple neural network class Net that subclasses nn.Module. The network has two fully connected layers: fc1 maps the flattened 784-pixel input to 512 hidden units, and fc2 maps those to 10 output classes, one per digit.
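A sketch of the model class; the layer sizes follow the model summary printed in Step 5, while the ReLU activation between the layers is an assumption.

Python

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(28 * 28, 512)  # hidden layer
        self.fc2 = nn.Linear(512, 10)       # output layer, one unit per digit

    def forward(self, x):
        x = x.view(-1, 28 * 28)      # flatten each 28x28 image into a vector
        x = torch.relu(self.fc1(x))  # hidden layer with ReLU activation
        return self.fc2(x)           # raw class scores (logits)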
Step 5: Define the loss function, the optimizer, and an instance of the model

The next code segment initializes the neural network model, moves it to the available device (either CPU or GPU), and defines the loss function along with the optimizer.
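A sketch of this setup. Cross-entropy loss is the standard choice for multi-class classification; the use of plain SGD with the learning rate from Step 2 is an assumption.

Python

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = Net().to(device)           # move the model to CPU or GPU
criterion = nn.CrossEntropyLoss()  # multi-class classification loss
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
print(model)                       # prints the summary shown below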
Output:
Net(
(fc1): Linear(in_features=784, out_features=512, bias=True)
(fc2): Linear(in_features=512, out_features=10, bias=True)
)
Step 6: Define the training and test loop
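A sketch of the two loops follows; the helper names train and test are illustrative, and the running statistics printed every 200 batches follow the format of the training log in Step 7.

Python

def train(epoch):
    model.train()
    running_loss, correct, total = 0.0, 0, 0
    for batch_idx, (images, labels) in enumerate(train_loader, start=1):
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()   # backpropagate the loss
        optimizer.step()  # update the weights

        running_loss += loss.item()
        _, predicted = outputs.max(1)
        correct += (predicted == labels).sum().item()
        total += labels.size(0)
        if batch_idx % 200 == 0:  # report running stats every 200 batches
            print(f'Epoch {epoch}, Batch {batch_idx}, '
                  f'Loss: {running_loss / batch_idx:.4f}, '
                  f'Accuracy: {correct / total:.4f}')

def test():
    model.eval()
    test_loss, correct, total = 0.0, 0, 0
    with torch.no_grad():  # no gradients needed for evaluation
        for images, labels in test_loader:
            images, labels = images.to(device), labels.to(device)
            outputs = model(images)
            test_loss += criterion(outputs, labels).item()
            _, predicted = outputs.max(1)
            correct += (predicted == labels).sum().item()
            total += labels.size(0)
    print(f'Test Loss: {test_loss / len(test_loader):.4f}, '
          f'Test Accuracy: {correct / total:.4f}')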
Step 7: Train and test the model, then visualize sample predictions

This code segment trains and tests the model for the specified number of epochs and then visualizes some sample test images along with the model's predictions.
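A sketch of the final loop and visualization, assuming the train and test helpers from Step 6; showing six images is an arbitrary choice.

Python

for epoch in range(1, num_epochs + 1):
    train(epoch)
    test()

# Visualize a few test images with the model's predictions
images, labels = next(iter(test_loader))
outputs = model(images.to(device))
_, predictions = outputs.max(1)

fig, axes = plt.subplots(1, 6, figsize=(12, 2))
for i, ax in enumerate(axes):
    ax.imshow(images[i].squeeze(), cmap='gray')
    ax.set_title(f'Pred: {predictions[i].item()}')
    ax.axis('off')
plt.show()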
Output:
Epoch 1, Batch 200, Loss: 1.1144, Accuracy: 0.7486
Epoch 1, Batch 400, Loss: 0.4952, Accuracy: 0.8739
Epoch 1, Batch 600, Loss: 0.3917, Accuracy: 0.8903
Epoch 1, Batch 800, Loss: 0.3515, Accuracy: 0.9042
Test Loss: 0.3018, Test Accuracy: 0.9155
Epoch 2, Batch 200, Loss: 0.3067, Accuracy: 0.9123
Epoch 2, Batch 400, Loss: 0.2929, Accuracy: 0.9168
Epoch 2, Batch 600, Loss: 0.2878, Accuracy: 0.9185
Epoch 2, Batch 800, Loss: 0.2735, Accuracy: 0.9210
Test Loss: 0.2471, Test Accuracy: 0.9314
Epoch 3, Batch 200, Loss: 0.2580, Accuracy: 0.9256
Epoch 3, Batch 400, Loss: 0.2442, Accuracy: 0.9301
Epoch 3, Batch 600, Loss: 0.2354, Accuracy: 0.9338
Epoch 3, Batch 800, Loss: 0.2281, Accuracy: 0.9359
Test Loss: 0.2130, Test Accuracy: 0.9403
Epoch 4, Batch 200, Loss: 0.2149, Accuracy: 0.9403
Epoch 4, Batch 400, Loss: 0.2055, Accuracy: 0.9441
Epoch 4, Batch 600, Loss: 0.2050, Accuracy: 0.9395
Epoch 4, Batch 800, Loss: 0.2018, Accuracy: 0.9425
Test Loss: 0.1860, Test Accuracy: 0.9465
Epoch 5, Batch 200, Loss: 0.1925, Accuracy: 0.9464
Epoch 5, Batch 400, Loss: 0.1850, Accuracy: 0.9473
Epoch 5, Batch 600, Loss: 0.1813, Accuracy: 0.9481
Epoch 5, Batch 800, Loss: 0.1753, Accuracy: 0.9503
Test Loss: 0.1691, Test Accuracy: 0.9517
Epoch 6, Batch 200, Loss: 0.1719, Accuracy: 0.9521
Epoch 6, Batch 400, Loss: 0.1599, Accuracy: 0.9557
Epoch 6, Batch 600, Loss: 0.1627, Accuracy: 0.9521
Epoch 6, Batch 800, Loss: 0.1567, Accuracy: 0.9562
Test Loss: 0.1549, Test Accuracy: 0.9547
Epoch 7, Batch 200, Loss: 0.1441, Accuracy: 0.9620
Epoch 7, Batch 400, Loss: 0.1474, Accuracy: 0.9587
Epoch 7, Batch 600, Loss: 0.1447, Accuracy: 0.9601
Epoch 7, Batch 800, Loss: 0.1426, Accuracy: 0.9580
Test Loss: 0.1404, Test Accuracy: 0.9602
Epoch 8, Batch 200, Loss: 0.1360, Accuracy: 0.9627
Epoch 8, Batch 400, Loss: 0.1359, Accuracy: 0.9620
Epoch 8, Batch 600, Loss: 0.1304, Accuracy: 0.9631
Epoch 8, Batch 800, Loss: 0.1322, Accuracy: 0.9634
Test Loss: 0.1308, Test Accuracy: 0.9624
Epoch 9, Batch 200, Loss: 0.1152, Accuracy: 0.9690
Epoch 9, Batch 400, Loss: 0.1188, Accuracy: 0.9674
Epoch 9, Batch 600, Loss: 0.1303, Accuracy: 0.9637
Epoch 9, Batch 800, Loss: 0.1236, Accuracy: 0.9645
Test Loss: 0.1234, Test Accuracy: 0.9633
Epoch 10, Batch 200, Loss: 0.1112, Accuracy: 0.9679
Epoch 10, Batch 400, Loss: 0.1120, Accuracy: 0.9707
Epoch 10, Batch 600, Loss: 0.1158, Accuracy: 0.9681
Epoch 10, Batch 800, Loss: 0.1138, Accuracy: 0.9688
Test Loss: 0.1145, Test Accuracy: 0.9665
Conclusion

In this post, we built a basic neural network in PyTorch and used it to classify handwritten digits from the MNIST dataset. Along the way, we saw how to use the nn.Module class, the nn.Sequential container, a loss function, an optimizer, and data loaders to build, train, and test a neural network. PyTorch is a powerful and adaptable framework for designing and experimenting with different neural network models, and the PyTorch website offers many more resources and tutorials.
Referred: https://www.geeksforgeeks.org