Solving unsolvable problems with genetic algorithms


There are optimization problems so complex that no algorithm can solve them exactly in a reasonable time. Finding an optimal solution for a system that depends on two or three variables or conditions can be trivial, but what about a hundred variables? Or more? For these cases, genetic algorithms allow us to approximate, in an intelligent way, a valid solution.

What is a genetic algorithm?

Genetic algorithms are evolutionary algorithms that apply principles of biology, specifically Darwin's theory of evolution (mutation, natural selection, reproduction, etc.), to evolve a set of candidate solutions until one is good enough to be accepted as valid. The candidates are compared with each other, and those that obtain the best results are selected. The selected solutions are then mutated, by applying slight random variations, or crossed, simulating reproduction: from two parent solutions a child solution is derived that contains parameters from both. This new generation of solutions goes through the process again, being simulated, selected and mutated as many times as necessary, until an acceptable value is obtained.
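The loop just described can be sketched in a few lines of Python. The problem here is a toy one, maximising the number of 1-bits in a chromosome (the classic "OneMax" exercise, chosen only for illustration), and all function names are ours; the structure, however, is the general one: simulate, select, mutate, cross, repeat.

```python
import random

random.seed(42)

def fitness(solution):
    # Toy fitness: count the 1-bits ("OneMax").
    return sum(solution)

def mutate(solution, rate=0.1):
    # Slight random variation: flip each bit with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in solution]

def crossover(a, b):
    # Reproduction: the child inherits a prefix from one parent
    # and a suffix from the other.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=20, length=16, generations=50):
    # Start from a random population of candidate solutions.
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the better half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Refill with mutated offspring of random pairs of survivors.
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

Because the better half survives each generation, the best score never gets worse; after a few dozen generations the population converges on (or near) the all-ones chromosome.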

Application of genetic algorithms

Genetic algorithms can solve something as mundane as seating guests at a wedding (trying to minimize conflicts), or as complex as optimizing the performance of engines or supercomputers. However, a problem must meet a number of conditions before it can be tackled with a genetic algorithm:

  • It must be possible to encode a solution to the problem as a series of variables (known as a chromosome or phenotype, emulating genetics) that allow it to be mutated and/or crossed.
  • It must be possible to simulate the problem: a program can be written that, given each solution or phenotype, produces a result.
  • The results of the simulation must be measurable: an algorithm (the fitness function) can be written that determines whether one solution is better than another.
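These three conditions can be illustrated with a toy knapsack problem: which items to pack without exceeding a weight limit. All figures and names below are invented for the example.

```python
# Toy knapsack instance (illustrative values, not from the article).
WEIGHTS = [3, 5, 2, 4]    # weight of each item
VALUES = [4, 6, 3, 5]     # worth of each item
CAPACITY = 8              # maximum total weight

def simulate(chromosome):
    # Condition 2: a program that, given a candidate, produces a result.
    # The chromosome itself satisfies condition 1: bit i encodes whether
    # item i is packed.
    weight = sum(w for w, bit in zip(WEIGHTS, chromosome) if bit)
    value = sum(v for v, bit in zip(VALUES, chromosome) if bit)
    return weight, value

def fitness(chromosome):
    # Condition 3: a measurable score. Overweight solutions are invalid.
    weight, value = simulate(chromosome)
    return value if weight <= CAPACITY else 0
```

With these three pieces in place, any generic evolutionary loop can be pointed at the problem.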

NASA, for example, has used them to design antennas for applications where the design requirements were stringent, conflicting or unusual, for which none of the existing antenna types were suitable.

Other applications of evolutionary programming include scheduling and planning, space optimization in warehouses, containers, etc., and the design of water distribution topologies, printed circuits or computer networks.

Application example

To clarify the algorithm, let's work through a classic example of application: the Travelling Salesman Problem.

The Travelling Salesman Problem (TSP) answers the following question: given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?

Although there are many ways to solve this problem, to tackle it from an evolutionary approach we could follow these steps.

Generation of phenotypes

We generate a series of random solutions. Each of them is a list of cities, in order of visit.

See example:

a: 1 -> 4 -> 2 -> 3 -> 5 -> 1
b: 1 -> 5 -> 4 -> 3 -> 2 -> 1
c: 1 -> 2 -> 4 -> 5 -> 3 -> 1
[...]
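A sketch of this step in Python, using the city numbers from the example above (the function name is ours):

```python
import random

random.seed(1)

CITIES = [1, 2, 3, 4, 5]
ORIGIN = 1

def random_route():
    # Shuffle every city except the origin, then close the loop so the
    # route starts and ends at the origin city.
    middle = [city for city in CITIES if city != ORIGIN]
    random.shuffle(middle)
    return [ORIGIN] + middle + [ORIGIN]

population = [random_route() for _ in range(10)]
```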
Simulation

For each city list, we simulate the traveler's journey from city to city. As a result we get the total distance traveled for each random list of cities.

See example:

a: 3 + 3 + 4 + 3 + 6 = 19 km
b: 6 + 3 + 7 + 4 + 2 = 22 km
c: 2 + 3 + 3 + 3 + 6 = 17 km
[...]
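This simulation step can be sketched with the pairwise distances implied by the figures above (we mark the stretch between cities 2 and 5 as infinite, since the example treats it as impassable):

```python
# Pairwise distances consistent with the example figures above; the
# stretch between cities 2 and 5 is impossible, so it is infinite.
INF = float("inf")
DIST = {
    (1, 2): 2, (1, 3): 6, (1, 4): 3, (1, 5): 6,
    (2, 3): 4, (2, 4): 3, (2, 5): INF,
    (3, 4): 7, (3, 5): 3,
    (4, 5): 3,
}

def distance(a, b):
    # Distances are symmetric: d(a, b) == d(b, a).
    return DIST[(a, b)] if (a, b) in DIST else DIST[(b, a)]

def total_distance(route):
    # Sum the stretch between each consecutive pair of cities.
    return sum(distance(a, b) for a, b in zip(route, route[1:]))
```

Running it on the three example routes reproduces the totals above: 19, 22 and 17 km.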
Extinction and survival of the fittest

We can find invalid solutions: in this example, it is not possible to travel from city 2 to city 5, so any solution that includes that stretch would be invalid.

We would also eliminate the worst solutions, those with the greatest total distance, and keep the ones considered better: those with the shortest total distance travelled.

See example:

Best solutions:
a and c
Eliminated as less optimal:
b
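In code, this selection step might look like the sketch below. The distances are the ones computed above; route "d" is a hypothetical extra candidate, added for illustration, that contains the impossible stretch 2 -> 5 and therefore scores infinity.

```python
INF = float("inf")

# Total distance of each candidate, as obtained from the simulation step.
scored = {"a": 19, "b": 22, "c": 17, "d": INF}

def select_best(scores, survivors=2):
    # Discard invalid solutions, then keep the shortest routes.
    valid = {name: dist for name, dist in scores.items() if dist != INF}
    return sorted(valid, key=valid.get)[:survivors]

best = select_best(scored)
```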
Mutation

We make random changes in the solutions, for example changing the order of travel between some cities.

See example:

c: 1 -> 2 -> [4 -> 5] -> 3 -> 1
c': 1 -> 2 -> [5 -> 4] -> 3 -> 1
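A sketch of this mutation as a Python function, swapping two cities chosen at random (the function name is ours):

```python
import random

random.seed(7)

def mutate(route):
    # Pick two distinct positions in the middle of the route and swap
    # them; the origin city at both ends stays fixed, so the mutated
    # route still starts and ends at home.
    i, j = random.sample(range(1, len(route) - 1), 2)
    mutated = route[:]
    mutated[i], mutated[j] = mutated[j], mutated[i]
    return mutated

mutated = mutate([1, 2, 4, 5, 3, 1])
```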
Sexual reproduction

We take two solutions and combine them, so that the combined list of cities inherits sections from each of its parents.

See example:

a: [1 -> 4 -> 2] -> 3 -> 5 -> 1
+
c: 1 -> 2 -> 4 -> [5 -> 3 -> 1]
=
ac: [1 -> 4 -> 2] -> [5 -> 3 -> 1]

This process could be made more intelligent, for example by selecting the best sections of each parent to be inherited by the offspring.
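A naive splice can drop or duplicate cities; a standard TSP operator that always produces a valid child is order crossover, sketched below under our own naming (the fixed section indices are chosen only for the example, where a real implementation would pick them at random):

```python
def order_crossover(parent_a, parent_b):
    # Work on the middle of each route: the origin city stays at both ends.
    a, b = parent_a[1:-1], parent_b[1:-1]
    # Copy a fixed section of parent_a (indices chosen for the example)...
    start, end = 0, 2
    section = a[start:end]
    # ...then fill the remaining positions with the missing cities, in the
    # order in which they appear in parent_b. Every city ends up visited
    # exactly once, so the child is always a valid route.
    filler = [city for city in b if city not in section]
    middle = filler[:start] + section + filler[start:]
    return [parent_a[0]] + middle + [parent_a[0]]

child = order_crossover([1, 4, 2, 3, 5, 1], [1, 2, 4, 5, 3, 1])
```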

Repetition of the process

We simulate each of the new solutions again, whether they are the result of mutation, reproduction or both, and measure their results once more, again eliminating the worst and selecting the best.

Completion of the process

The process comes to an end, and we accept the best solution found so far, when one or more of the following conditions are met:

  • The time assigned to the process runs out: we can set a maximum number of generations, or a maximum processing time, to find a solution.
  • A plateau is detected: the process stagnates and new solutions stop improving the result.
  • An acceptable solution is reached: if we have a minimum criterion, we can finish the process as soon as a solution that meets it is found.
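The three stopping conditions can be combined into a single check; a minimal sketch, with function and parameter names of our own choosing:

```python
def should_stop(history, patience=5, max_generations=100, target=None):
    # history holds the best total distance of each generation so far,
    # most recent last. Defaults are illustrative.
    if len(history) >= max_generations:
        return True   # the time assigned to the process has run out
    if target is not None and history and history[-1] <= target:
        return True   # an acceptable solution has been reached
    if (len(history) > patience
            and min(history[:-patience]) <= min(history[-patience:])):
        return True   # plateau: no improvement in the last `patience` rounds
    return False
```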

In short

Genetic algorithms are a powerful tool that, by imitating natural selection, allow us to find an optimal, or at least acceptable, solution to optimization problems that, due to their complexity, cannot be solved with a traditional algorithm.
