How do we print percentage accuracy for SVM in R?

The Support Vector Machine (SVM) is a powerful supervised machine learning algorithm for classification and regression tasks. This article covers the theory behind SVMs, the steps to implement them in R using the e1071 package, and a detailed example demonstrating how to calculate and print the percentage accuracy of an SVM model.

Overview of SVM

SVMs are designed to find the optimal hyperplane that separates different classes in a dataset. The goal is to maximize the margin between the closest points of the classes (the support vectors) and the hyperplane. This results in a robust classifier that generalizes well to new data. The main key concepts are:

  • Hyperplane: A decision boundary that separates different classes in the feature space.
  • Margin: The distance between the hyperplane and the nearest data points from either class. SVM aims to maximize this margin.
  • Support Vectors: Data points that lie closest to the hyperplane and influence its position and orientation.
  • Kernel Trick: A method to transform data into a higher-dimensional space to make it linearly separable when it is not in the original space. Common kernels include linear, polynomial, radial basis function (RBF), and sigmoid (see the code sketch after this list).
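
As a rough illustration of how these kernels come into play in practice, the sketch below shows the kernel argument of the svm() function from the e1071 package (introduced in the next section). The degree and gamma values here are arbitrary placeholders, not recommended settings.

R
# Illustrative only (assumes library(e1071) is loaded, as in Step 1 below);
# degree and gamma values are arbitrary placeholders, not recommendations
model_linear <- svm(Species ~ ., data = iris, kernel = "linear")
model_poly   <- svm(Species ~ ., data = iris, kernel = "polynomial", degree = 3)
model_rbf    <- svm(Species ~ ., data = iris, kernel = "radial", gamma = 0.5)
model_sig    <- svm(Species ~ ., data = iris, kernel = "sigmoid")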

Implementing SVM in R

R provides the e1071 package for implementing SVMs. The package offers a simple interface for training SVM models, making predictions, and evaluating their performance.

Step 1: Install and Load Necessary Libraries

First, you need to install and load the e1071 package, which contains the SVM implementation.

R
install.packages("e1071")
library(e1071)

Step 2: Prepare the Data

For this example, we will use the well-known iris dataset. We’ll split the dataset into training and test sets.

R
# Load the iris dataset
data(iris)

# Set a seed for reproducibility
set.seed(123)

# Split the dataset into training and test sets (70% training, 30% test)
sample_index <- sample(1:nrow(iris), 0.7 * nrow(iris))
train_data <- iris[sample_index, ]
test_data <- iris[-sample_index, ]
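
Before training, it can help to sanity-check the split. This optional snippet (not part of the core steps) confirms the row counts and the class mix in each subset:

R
# Optional check: sizes of the two subsets and their class distributions
nrow(train_data)           # roughly 105 rows (70% of 150)
nrow(test_data)            # roughly 45 rows
table(train_data$Species)  # class counts in the training set
table(test_data$Species)   # class counts in the test set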

Step 3: Train the SVM Model

Train the SVM model using the training data.

R
# Train the SVM model
svm_model <- svm(Species ~ ., data = train_data, kernel = "linear")
summary(svm_model)

Output:

Call:
svm(formula = Species ~ ., data = train_data, kernel = "linear")


Parameters:
   SVM-Type:  C-classification 
 SVM-Kernel:  linear 
       cost:  1 

Number of Support Vectors:  24

 ( 2 10 12 )


Number of Classes:  3 

Levels: 
 setosa versicolor virginica
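
If you want to look inside the fitted object, the optional snippet below inspects a few fields exposed by the e1071 svm object, such as which training rows became support vectors:

R
# Optional: inspect a few fields of the fitted e1071 svm object
svm_model$index     # row positions (within train_data) of the support vectors
head(svm_model$SV)  # the support vectors, on the scaled feature space
svm_model$cost      # the cost parameter used for training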

Step 4: Make Predictions

Use the trained SVM model to make predictions on the test data.

R
# Make predictions on the test set
predictions <- predict(svm_model, test_data)
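
Before reducing the results to a single accuracy number, it is often useful to cross-tabulate the predictions against the true labels. This optional confusion matrix uses base R's table():

R
# Optional: confusion matrix of predicted vs. actual species
confusion <- table(Predicted = predictions, Actual = test_data$Species)
print(confusion)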

Step 5: Calculate and Print Percentage Accuracy

Compare the predictions with the actual test labels and calculate the accuracy.

R
# Calculate the accuracy
accuracy <- sum(predictions == test_data$Species) / nrow(test_data)

# Print the percentage accuracy
percentage_accuracy <- accuracy * 100
print(paste("Percentage Accuracy:", round(percentage_accuracy, 2), "%"))

Output:

[1] "Percentage Accuracy: 97.78 %"

To recap the workflow:

  • The iris dataset is split into training (70%) and test (30%) sets to evaluate the model’s performance.
  • An SVM model is trained using the svm function from the e1071 package with a linear kernel.
  • The trained model is used to predict the species of flowers in the test set.
  • The accuracy is calculated as the proportion of correct predictions out of the total number of test samples.
  • The accuracy is then converted to a percentage and printed.

By following these steps, you can train an SVM model, make predictions, and calculate the percentage accuracy of your model in R. This process can be applied to other datasets and SVM configurations by adjusting the parameters and data accordingly.
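
As one example of adjusting the configuration, the sketch below uses e1071's tune() function to grid-search cost and gamma for an RBF kernel via cross-validation. The parameter grid is an arbitrary illustration, not a recommended setting.

R
# Illustrative tuning sketch: grid-search cost and gamma for a radial kernel
set.seed(123)
tuned <- tune(svm, Species ~ ., data = train_data,
              kernel = "radial",
              ranges = list(cost = c(0.1, 1, 10), gamma = c(0.01, 0.1, 1)))
summary(tuned)                                # cross-validated error per setting
best_preds <- predict(tuned$best.model, test_data)
mean(best_preds == test_data$Species) * 100   # accuracy of the tuned model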

Conclusion

Support Vector Machines (SVMs) are a powerful and versatile tool for classification tasks, and the e1071 package in R makes it straightforward to implement them. In this article, we covered the theoretical background of SVMs, including key concepts such as hyperplanes, margins, support vectors, and the kernel trick. We also provided a practical, step-by-step guide on how to train an SVM model using the iris dataset, make predictions, and calculate the percentage accuracy.



