[PYTHON] About the traveling salesman problem

Recently, work has given me the chance to come into contact with "mathematical optimization." I found it interesting, so I'd like to post the results of some experimenting on my own time.

What I tried is a classic example of mathematical optimization: the "traveling salesman problem." Since it can be implemented with the free PuLP library, I set up the environment and gave it a go.

The problem I want to try is the following.

◆ 2.12 Traveling salesman problem https://www.msi.co.jp/nuopt/docs/v19/examples/html/02-12-00.html

I sketched it briefly below. A, B, C, and D are points on the map.

map.png

The problem of finding the route with the smallest total distance when you leave A, visit every point exactly once, and return to A is the optimization problem known as the "traveling salesman problem." It is NP-hard, and apparently quite difficult in general.
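For a tiny instance like this one, the answer can also be checked by brute force. The sketch below uses the four-point distance matrix from the linked problem and simply tries every order of visiting B, C, D:

```python
from itertools import permutations

# Distance matrix for points A, B, C, D (from the linked problem)
c = [[0, 6, 5, 5],
     [6, 0, 7, 4],
     [5, 7, 0, 3],
     [5, 4, 3, 0]]

best_len, best_tour = None, None
# Fix A (index 0) as start/end and try every order of the remaining points
for perm in permutations([1, 2, 3]):
    tour = (0,) + perm + (0,)
    length = sum(c[tour[k]][tour[k + 1]] for k in range(len(tour) - 1))
    if best_len is None or length < best_len:
        best_len, best_tour = length, tour

print(best_len, best_tour)  # → 18 (0, 1, 3, 2, 0)
```

Brute force is fine for 4 points (3! = 6 tours), but the number of tours grows factorially, which is exactly why a mixed-integer formulation like the one below becomes attractive.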

However, the PuLP modeler, which runs in a Python environment, is free and easy to try, so I referred to the following article.

◆ Mathematical optimization starting from the traveling salesman problem https://qiita.com/panchovie/items/6509fb54e3d53f4766aa

The great thing about it is that it covers everything from installing PuLP to sample code. Let's put the first formulation into code right away. Here is a first attempt that at least seems to run.
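As a note, PuLP can be installed with pip; it ships with the free CBC solver, so no separate solver setup should be needed:

```shell
pip install pulp
```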

import pulp as pp

# Define the optimization model
mip_model = pp.LpProblem("tsp_mip", pp.LpMinimize)

N = 4
BigM = 10000

c_ = [
      [0,6,5,5], # pointA > [pointA,pointB,pointC,pointD]
      [6,0,7,4], # pointB > [pointA,pointB,pointC,pointD]
      [5,7,0,3], # pointC > [pointA,pointB,pointC,pointD]
      [5,4,3,0], # pointD > [pointA,pointB,pointC,pointD]
      ]

# Variable definitions
# x[i][j] = 1 if the route goes directly from point i to point j
x = [[pp.LpVariable("x(%s,%s)"%(i, j), cat="Binary") for j in range(N)] for i in range(N)]
# u[i]: position of point i in the visiting order (used by the MTZ constraints)
u = [pp.LpVariable("u(%s)"%(i), cat="Continuous", lowBound=1.0, upBound=N) for i in range(N)]

# Define and register the objective (Equation (1))
objective = pp.lpSum(c_[i][j] * x[i][j] for i in range(N) for j in range(N) if i != j)
mip_model += objective

# Register constraint (2): leave each point exactly once
for i in range(N):
    mip_model += pp.lpSum(x[i][j] for j in range(N) if i != j) == 1

# Register constraint (3): enter each point exactly once
for i in range(N):
    mip_model += pp.lpSum(x[j][i] for j in range(N) if i != j) == 1

# Forbid self-loops
for i in range(N):
    mip_model += x[i][i] == 0
    
# Constraint (4) (MTZ constraints)
for i in range(N):
    for j in range(N):
        if i != j:
            mip_model += u[i] + 1.0 - BigM * (1.0 - x[i][j]) <= u[j]


# Run the optimization
status = mip_model.solve()

# Inspect the results
print("Status: {}".format(pp.LpStatus[status]))
print("Optimal Value [a.u.]: {}".format(objective.value()))

for i in range(N):
    for j in range(N):
        if i != j:
            print("x[%d][%d]:%f" % (i,j,x[i][j].value()))

for i in range(len(u)):
    print("u[%d] %f" % (i,u[i].value()))

Running it as is gives ...

Status: Infeasible
Optimal Value [a.u.]: 18.0
x[0][1]:0.000000
x[0][2]:1.000000
x[0][3]:0.000000
x[1][0]:0.999800
x[1][2]:0.000000
x[1][3]:0.000200
x[2][0]:0.000200
x[2][1]:0.000000
x[2][3]:0.999800
x[3][0]:0.000000
x[3][1]:1.000000
x[3][2]:0.000000

It seems the calculation did not go well: the solver reports the problem infeasible. After some investigation, the MTZ constraint formula as written turned out to be the culprit. Writing out the four inequalities that the x values above impose on u gives the following.

u[0] < u[2]
u[1] < u[0]
u[2] < u[3]
u[3] < u[1]

Satisfying all of these at once would require

u[0] < u[2] < u[3] < u[1] < u[0] < ...

that is, u would have to increase without bound; but since u is restricted to the range 1 to 4, this is a contradiction. So let's remove the constraints on the edges that return to point A and rebuild the inequalities as follows.

u[0] < u[2]
u[2] < u[3]
u[3] < u[1]

Reflecting this in the implementation changes the MTZ constraint loop to the following. (Excluding the starting point in this way is, in fact, how MTZ constraints are usually formulated.)

# Constraint (4) (MTZ constraints, starting point excluded)
for i in range(N):
    for j in range(1,N):
        if i != j:
            mip_model += u[i] + 1.0 - BigM * (1.0 - x[i][j]) <= u[j]

The full source code is summarized below.

sample_route.py


import pulp as pp

# Define the optimization model
mip_model = pp.LpProblem("tsp_mip", pp.LpMinimize)

N = 4
BigM = 10000

c_ = [
      [0,6,5,5], # pointA > [pointA,pointB,pointC,pointD]
      [6,0,7,4], # pointB > [pointA,pointB,pointC,pointD]
      [5,7,0,3], # pointC > [pointA,pointB,pointC,pointD]
      [5,4,3,0], # pointD > [pointA,pointB,pointC,pointD]
      ]

# Variable definitions
# x[i][j] = 1 if the route goes directly from point i to point j
x = [[pp.LpVariable("x(%s,%s)"%(i, j), cat="Binary") for j in range(N)] for i in range(N)]
# u[i]: position of point i in the visiting order (used by the MTZ constraints)
u = [pp.LpVariable("u(%s)"%(i), cat="Continuous", lowBound=1.0, upBound=N) for i in range(N)]

# Define and register the objective (Equation (1))
objective = pp.lpSum(c_[i][j] * x[i][j] for i in range(N) for j in range(N) if i != j)
mip_model += objective

# Register constraint (2): leave each point exactly once
for i in range(N):
    mip_model += pp.lpSum(x[i][j] for j in range(N) if i != j) == 1

# Register constraint (3): enter each point exactly once
for i in range(N):
    mip_model += pp.lpSum(x[j][i] for j in range(N) if i != j) == 1

# Forbid self-loops
for i in range(N):
    mip_model += x[i][i] == 0
    
# Constraint (4) (MTZ constraints, starting point excluded)
for i in range(N):
    for j in range(1,N):
        if i != j:
            mip_model += u[i] + 1.0 - BigM * (1.0 - x[i][j]) <= u[j]


# Run the optimization
status = mip_model.solve()

# Inspect the results
print("Status: {}".format(pp.LpStatus[status]))
print("Optimal Value [a.u.]: {}".format(objective.value()))

for i in range(N):
    for j in range(N):
        if i != j:
            print("x[%d][%d]:%f" % (i,j,x[i][j].value()))

for i in range(len(u)):
    print("u[%d] %f" % (i,u[i].value()))

The result is as follows.

Status: Optimal
Optimal Value [a.u.]: 18.0
x[0][1]:0.000000
x[0][2]:1.000000
x[0][3]:0.000000
x[1][0]:1.000000
x[1][2]:0.000000
x[1][3]:0.000000
x[2][0]:0.000000
x[2][1]:0.000000
x[2][3]:1.000000
x[3][0]:0.000000
x[3][1]:1.000000
x[3][2]:0.000000
u[0] 1.000000
u[1] 4.000000
u[2] 2.000000
u[3] 3.000000

It converged successfully. The output is not graphical, but

Point A → Point C → Point D → Point B → Point A

seems to be the shortest route, with a total distance of 18. Meanwhile, the answer listed on the problem page is

Point A → Point B → Point D → Point C → Point A

which also has a distance of 18, so both are shortest routes. (In fact, the two are the same tour traversed in opposite directions.)
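As a quick sanity check, the visiting order can be read directly off the u values (sort the points by u), and both routes can be re-measured against the distance matrix:

```python
# Distance matrix for points A, B, C, D (same as in the model above)
c = [[0, 6, 5, 5],
     [6, 0, 7, 4],
     [5, 7, 0, 3],
     [5, 4, 3, 0]]

# u values copied from the solver output above
u = {0: 1.0, 1: 4.0, 2: 2.0, 3: 3.0}
names = "ABCD"

# Sorting the points by u recovers the visiting order
order = sorted(u, key=u.get)
print(" -> ".join(names[i] for i in order) + " -> A")  # → A -> C -> D -> B -> A

def tour_length(tour):
    """Sum the leg distances along a closed tour of point indices."""
    return sum(c[a][b] for a, b in zip(tour, tour[1:]))

print(tour_length([0, 2, 3, 1, 0]))  # A->C->D->B->A → 18
print(tour_length([0, 1, 3, 2, 0]))  # A->B->D->C->A (reverse direction) → 18
```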

Mathematical optimization feels very fresh, being an area I had not touched before. Since there is an environment where anyone can try it for free, I will take this opportunity to study it further.
