Hill Climbing and Simulated Annealing for the Traveling Salesman Problem

The Traveling Salesman Problem (TSP) is a classic optimization problem in which a salesman must visit each city in a given set exactly once and return to the starting point while minimizing the total distance traveled.

This article explores two popular optimization algorithms—Hill Climbing and Simulated Annealing—and demonstrates their application to the TSP using Python.

Traveling Salesman Problem (TSP)

The Traveling Salesman Problem (TSP) is a well-known combinatorial optimization problem that has been extensively studied in operations research and computer science. The objective is to determine the shortest possible route that visits each city once and returns to the origin city. Due to its NP-hard nature, exact solutions are computationally expensive for large datasets, making heuristic and metaheuristic approaches like Hill Climbing and Simulated Annealing attractive alternatives.
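
To see why exact solutions become expensive, consider a brute-force solver that simply checks every possible tour. The sketch below is a hypothetical helper, not part of the implementations that follow; it assumes the cities are given as an (n, 2) NumPy array of coordinates, as in the code later in this article. It fixes the first city and enumerates the remaining permutations, and since the number of tours grows factorially with the number of cities, it is only feasible for very small instances.

Python
import itertools
import numpy as np

def brute_force_tsp(cities):
    # Fix the first city to avoid counting rotations of the same tour twice
    best_order, best_length = None, float('inf')
    for perm in itertools.permutations(range(1, len(cities))):
        order = [0] + list(perm) + [0]   # close the loop back to city 0
        tour = cities[order]
        length = np.sum(np.linalg.norm(np.diff(tour, axis=0), axis=1))
        if length < best_length:
            best_order, best_length = order[:-1], length
    return best_order, best_length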

Hill Climbing Algorithm

Hill Climbing is a local search algorithm that repeatedly moves to a better neighboring solution in the search space. Starting from a random initial route, it generates neighboring routes and moves to the one with the lowest total distance. The process repeats until no neighbor improves on the current route.

Steps for Implementing Hill Climbing

  1. Initialize the current route and calculate its distance.
  2. Iterate to find the best neighboring route until no improvement is found.
  3. Return the best route and its distance.

Hill Climbing Implementation

Here’s how the Hill Climbing algorithm is implemented:

Python
import numpy as np
import random

# Set seed for reproducibility
np.random.seed(42)
random.seed(42)

# Generate a set of cities
num_cities = 10
cities = np.random.rand(num_cities, 2)

# Distance function
def calculate_distance(route):
    route_extended = np.append(route, [route[0]], axis=0)  # Return to the starting point
    return np.sum(np.sqrt(np.sum(np.diff(route_extended, axis=0)**2, axis=1)))

# Function to create a random initial route
def create_initial_route(cities):
    return np.array(random.sample(list(cities), len(cities)))

# Function to create neighboring solutions by swapping every pair of cities
def get_neighbors(route):
    neighbors = []
    for i in range(len(route)):
        for j in range(i + 1, len(route)):
            neighbor = route.copy()
            # Swap rows i and j with fancy indexing; a plain tuple swap does not
            # work on NumPy rows because the right-hand side only holds views
            neighbor[[i, j]] = neighbor[[j, i]]
            neighbors.append(neighbor)
    return neighbors

def hill_climbing(cities):
    current_route = create_initial_route(cities)
    current_distance = calculate_distance(current_route)

    while True:
        neighbors = get_neighbors(current_route)
        next_route = min(neighbors, key=calculate_distance)
        next_distance = calculate_distance(next_route)
        
        if next_distance >= current_distance:
            break
        
        current_route, current_distance = next_route, next_distance

    return current_route, current_distance

Simulated Annealing Algorithm

Simulated Annealing (SA) is inspired by the annealing process in metallurgy. It is a probabilistic technique that explores the search space by allowing less optimal solutions at the beginning, gradually reducing this acceptance as the temperature decreases. This helps avoid local minima and increases the likelihood of finding a global minimum.

Steps for Implementing Simulated Annealing

  1. Initialize the current route and calculate its distance.
  2. Set the initial temperature and cooling rate.
  3. Iterate for a specified number of iterations:
    • Generate neighboring solutions.
    • Select a neighbor and calculate its distance.
    • Accept the neighbor based on a probability that decreases with temperature (see the worked example after this list).
    • Update the best solution found so far.
  4. Return the best route and its distance.
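
The acceptance rule in step 3 is the standard Metropolis criterion: a worse neighbor is accepted with probability exp((current_distance - next_distance) / temperature), which shrinks as the temperature drops. Here is a minimal sketch with made-up numbers (not output of the implementation below):

Python
import numpy as np

current_distance, next_distance = 5.0, 5.2   # the candidate route is worse by 0.2
print(np.exp((current_distance - next_distance) / 1.0))   # T = 1.0 -> about 0.82
print(np.exp((current_distance - next_distance) / 0.1))   # T = 0.1 -> about 0.14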

Simulated Annealing Implementation

Here’s the implementation of Simulated Annealing:

Python
def simulated_annealing(cities, initial_temp, cooling_rate, max_iterations):
    current_route = create_initial_route(cities)
    current_distance = calculate_distance(current_route)
    best_route = current_route.copy()
    best_distance = current_distance
    temperature = initial_temp

    for _ in range(max_iterations):
        neighbors = get_neighbors(current_route)
        next_route = random.choice(neighbors)
        next_distance = calculate_distance(next_route)

        if next_distance < current_distance or random.random() < np.exp((current_distance - next_distance) / temperature):
            current_route, current_distance = next_route, next_distance

            if current_distance < best_distance:
                best_route, best_distance = current_route, current_distance

        temperature *= cooling_rate

    return best_route, best_distance

Comparing Hill Climbing and Simulated Annealing

To evaluate the performance of both algorithms, we run them on the same set of randomly generated cities.

Python
# Parameters for Simulated Annealing
initial_temp = 100
cooling_rate = 0.99
max_iterations = 1000

# Run both algorithms
hc_route, hc_distance = hill_climbing(cities)
sa_route, sa_distance = simulated_annealing(cities, initial_temp, cooling_rate, max_iterations)

# Print results
print("Hill Climbing:")
print("Route:", hc_route)
print("Distance:", hc_distance)

print("\nSimulated Annealing:")
print("Route:", sa_route)
print("Distance:", sa_distance)

Output:

Each algorithm prints its best route as an ordered array of the ten city coordinates, followed by the total tour length. The exact routes and distances depend on the random seed and on which neighbors are examined, so they will vary from run to run.

Simulated Annealing often finds a route at least as short as the one Hill Climbing returns. This is due to its ability to escape local optima by occasionally accepting worse solutions, with an acceptance probability that decreases over time. The two algorithms also spend their effort differently: Hill Climbing evaluates every pairwise swap of the current route at each step and greedily takes the best one, while Simulated Annealing samples a single random neighbor per iteration, keeping each iteration much cheaper.

Hill Climbing, being more straightforward, may suffice for simpler problems with fewer local optima. However, for more rugged search spaces like the TSP, Simulated Annealing tends to provide a more robust solution.

Interview Questions: Hill Climbing and Simulated Annealing

Question 1: What are the main advantages of using simulated annealing over hill climbing?

The main advantages of simulated annealing over hill climbing are its ability to escape local optima and explore the solution space more thoroughly. The probabilistic acceptance of worse solutions, controlled by a temperature parameter, allows simulated annealing to avoid getting stuck in local optima and increases the chances of finding a global optimum.

Question 2: How does the temperature parameter affect the performance of simulated annealing?

The temperature parameter in simulated annealing controls the likelihood of accepting worse solutions. At higher temperatures, the algorithm is more likely to accept worse solutions, facilitating a broader search of the solution space. As the temperature decreases, the probability of accepting worse solutions diminishes, allowing the algorithm to converge towards an optimal solution. Properly scheduling the temperature decrease is crucial for balancing exploration and convergence.
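
To make the schedule concrete, here is a minimal sketch using the same geometric cooling as the implementation above and an assumed fixed worsening of 0.1 in tour length; it prints how the temperature, and with it the acceptance probability, decays over the iterations:

Python
import numpy as np

initial_temp, cooling_rate, delta = 100, 0.99, 0.1   # delta: how much worse the candidate is
for iteration in [0, 100, 500, 1000]:
    temperature = initial_temp * cooling_rate ** iteration
    acceptance = np.exp(-delta / temperature)
    print(f"iteration {iteration:4d}: T = {temperature:8.4f}, P(accept worse) = {acceptance:.3f}")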

Question 3: Can hill climbing be modified to overcome its limitations?

Yes, hill climbing can be modified to overcome its limitations by incorporating techniques such as random restarts or using a variant called stochastic hill climbing. Random restarts involve running the hill climbing algorithm multiple times from different initial solutions, which increases the chances of finding a global optimum. Stochastic hill climbing, on the other hand, allows occasional random moves to explore the solution space more broadly.
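
A minimal sketch of the random-restart idea, reusing the hill_climbing function defined above (the restart count is an arbitrary choice):

Python
def hill_climbing_with_restarts(cities, num_restarts=10):
    best_route, best_distance = None, float('inf')
    for _ in range(num_restarts):
        # Each call starts from a fresh random initial route
        route, distance = hill_climbing(cities)
        if distance < best_distance:
            best_route, best_distance = route, distance
    return best_route, best_distance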

Question 4: In what scenarios would you prefer hill climbing over simulated annealing?

Hill climbing is preferable in scenarios where the problem is simpler and unimodal, meaning it has a single peak or optimum. It is also suitable when computational resources are limited, as hill climbing is generally faster and less computationally intensive compared to simulated annealing.



