Task 1: Fill in the Blanks
Key Concepts and Terms
1. Natural Computing
Natural computing draws inspiration from processes found in nature, such as biological evolution or quantum physics, to solve computational problems, for example by using evolutionary processes to optimize solutions.
2. Game of Life (John Conway)
- Garden of Eden: A pattern in the Game of Life that has no predecessor, i.e., no earlier configuration can evolve into it.
- Turing-complete: The Game of Life can simulate any computational algorithm, making it as powerful as any general-purpose computer (the basic update rule is sketched after this list).
- Ash: The stable or oscillating debris that remains once a pattern has settled and no new structures emerge on the Game of Life board.
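To make these terms concrete, here is a minimal sketch of a single Game of Life update step on a finite grid (treating cells outside the grid as dead is an assumption of this sketch; the rules themselves are Conway's standard birth/survival rules):

```python
def life_step(grid):
    """One Game of Life update on a finite 2D list of 0/1 cells.
    Cells outside the grid are treated as dead (a boundary assumption)."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count live neighbours among the 8 surrounding cells.
            live = sum(
                grid[r + dr][c + dc]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and 0 <= r + dr < rows
                and 0 <= c + dc < cols
            )
            if grid[r][c] == 1:
                nxt[r][c] = 1 if live in (2, 3) else 0  # survival with 2 or 3 neighbours
            else:
                nxt[r][c] = 1 if live == 3 else 0       # birth with exactly 3 neighbours
    return nxt
```

For example, a "blinker" (three live cells in a row) oscillates with period 2 under repeated calls to life_step.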
3. No Free Lunch Theorem
This theorem states that all optimization algorithms perform equally well when averaged across all possible problems. No single algorithm works best for every problem.
4. Evolutionary Algorithms
- Increasing Diversity: Helps avoid premature convergence, where the population of solutions becomes too similar, reducing effectiveness.
- Selection: Choosing individuals based on their fitness scores to breed the next generation.
- Crossover: Combining the genetic information of two parents to create offspring.
- Mutation: Random changes introduced to maintain diversity in the population and explore new solutions (how these three operators combine in one generation is sketched after this list).
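A minimal sketch of how selection, crossover, and mutation fit together in a simple evolutionary algorithm (the tournament selection, real-valued encoding, and parameter values are illustrative assumptions):

```python
import random

def evolve(population, fitness, generations=100, tournament_k=3, sigma=0.1):
    """Minimize `fitness` over a population of real-valued vectors (illustrative sketch)."""
    for _ in range(generations):
        offspring = []
        while len(offspring) < len(population):
            # Selection: a tournament picks the fitter of k randomly drawn individuals.
            p1 = min(random.sample(population, tournament_k), key=fitness)
            p2 = min(random.sample(population, tournament_k), key=fitness)
            # Crossover: each gene is copied from one of the two parents.
            child = [random.choice(genes) for genes in zip(p1, p2)]
            # Mutation: small Gaussian noise maintains diversity and explores new solutions.
            child = [g + random.gauss(0, sigma) for g in child]
            offspring.append(child)
        population = offspring
    return min(population, key=fitness)

# Example: minimizing the sphere function pulls the population toward the origin.
start = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(20)]
best = evolve(start, lambda v: v[0] ** 2 + v[1] ** 2)
```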
5. Quantum Computing
- Superposition: A quantum bit (qubit) can exist in multiple states simultaneously before measurement.
- Entanglement: Two qubits become correlated so that measuring one immediately determines the outcome statistics of the other. In the Bell state (|00⟩ + |11⟩)/√2, for example, if one qubit is measured as 0 the other is also 0, and similarly for 1 (see the short state-vector example below).
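A small state-vector calculation illustrating both ideas with plain NumPy (no quantum library assumed): a Hadamard gate puts one qubit into an equal superposition, and a CNOT gate then entangles it with a second qubit to form the Bell state (|00⟩ + |11⟩)/√2.

```python
import numpy as np

zero = np.array([1.0, 0.0])                     # |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate

# Superposition: H|0> = (|0> + |1>)/sqrt(2).
plus = H @ zero

# Build the two-qubit state and entangle with a CNOT (control = first qubit).
state = np.kron(plus, zero)                     # (|00> + |10>)/sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
bell = CNOT @ state                             # (|00> + |11>)/sqrt(2)

# Measurement probabilities over the basis |00>, |01>, |10>, |11>:
print(np.round(bell ** 2, 3))                   # [0.5 0.  0.  0.5]
```

Measuring both qubits therefore yields 00 or 11 with probability 0.5 each and never 01 or 10, which is exactly the correlation described above.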
Task 4: Optimization
Part B: Global Optimum and Convergence
Explanation:
We are asked to show whether bright side search always reaches the global optimum and whether it converges.
- Target Function: The target function is monotonic and attains its global optimum at a single point.
- Reaching the Global Optimum:
  - At each step, the bright side search algorithm either halves the current state or moves in the opposite direction, depending on which option decreases the value of the target function.
  - Since the function is monotonic, the algorithm consistently moves in the direction of improvement and therefore eventually reaches the global optimum.
- Why It Doesn't Converge:
  - Once the optimum is reached, the algorithm's update rule resets the state to a larger value, so it jumps away from the optimum again.
  - This restart prevents convergence: instead of staying at the global optimum, the algorithm keeps repeating the same cycle.
Part C: Is Bright Side Search Elitist?
Explanation:
An elitist algorithm ensures that the best solution found so far is carried over to the next iteration.
Bright side search is not elitist because:
- Each step depends only on the neighboring states of the current solution.
- It may move to a worse solution, because it does not store the best solution found in previous iterations.
- There’s no mechanism to ensure that the best solution seen so far is kept.
Part D: Is Bright Side Search Greedy?
Explanation:
A greedy algorithm makes decisions based on the locally optimal choice at each step, without considering long-term consequences.
Bright side search is greedy because:
- It makes a locally optimal decision at each step by comparing the immediate values of the candidate next states.
- While the step taken might not always be globally optimal, the decision made at each point is the best within that immediate neighborhood (a generic greedy local-search sketch is given below).
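The bright side search algorithm itself is defined in the exercise, but the greedy principle can be illustrated with a generic hill-climbing-style sketch (the neighbourhood and step size here are assumptions, not the exercise's update rule): at every step the algorithm moves to whichever candidate has the best immediate value.

```python
def greedy_descent(f, x, step=0.5, iterations=100):
    """Generic greedy local search: always take the locally best move (illustrative)."""
    for _ in range(iterations):
        neighbours = [x - step, x + step]      # assumed neighbourhood of the current state
        best = min(neighbours, key=f)
        if f(best) >= f(x):
            break                              # no neighbour improves the value
        x = best                               # greedy move: locally optimal choice
    return x
```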
Task 5:
5b: What is ?
- What is ?
  It gives all the possible reactions that can be applied to the elements in based on the reaction rules .
- Elements in :
- Applicable reaction rules:
  - → Rule and Rule (because of the presence of ).
  - → Rule and Rule (because of ).
- Result: The possible reactions are:
  - , Rule
  - , Rule
  - , Rule
  - , Rule
In short, this simply lists the reactions that can be applied to the elements in .
Task 6: Evolutionary Computing
Problem Description
We are given an evolutionary algorithm that minimizes a fitness function over a two-dimensional search space. The goal of the evolutionary algorithm is to find values of x and y within this space that minimize the function, which takes the maximum of two quadratic terms, as described below.
Why Does fitness_phi(x) Have Poor Fitness Values at the Center?
The fitness function fitness_phi(x) has poor fitness values near the center of the search space because of its structure:
- The function compares two terms:
  - The first term is an upward-opening parabola (a "bowl"-shaped graph) that reaches its minimum at the origin; it is smallest there and increases as you move away from the origin.
  - The second term is a downward-opening parabola (a "dome"-shaped graph) that reaches its maximum at the origin; it is largest there and decreases as you move away from the origin.
The fitness function takes the maximum of these two terms:
- At the center:
  - The dome-shaped term is at its maximum, so the fitness takes its highest value (and since the goal is to minimize, a high value is bad).
- Farther from the center:
  - As you move away from the origin, the bowl-shaped term increases and the dome-shaped term decreases. At some point the bowl-shaped term becomes larger and dominates the maximum.
- The global minima of the function are therefore not near the center but farther out, where the two terms are "balanced", i.e., where they intersect (a concrete sketch of a function with this shape is given below).
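As an illustration, a function with exactly this shape might look like the following sketch; the constant 50 and the example points are assumptions chosen to match the description above, not necessarily the exercise's actual definition.

```python
def fitness_phi(x):
    """Illustrative stand-in: the maximum of a 'bowl' and a 'dome' (constants assumed)."""
    bowl = x[0] ** 2 + x[1] ** 2             # upward-opening term, minimum at the origin
    dome = 50.0 - (x[0] ** 2 + x[1] ** 2)    # downward-opening term, maximum at the origin
    return max(bowl, dome)

# At the centre the dome dominates, so the value is high (bad for minimization);
# the minima lie where the two terms balance, i.e. on the circle x^2 + y^2 == 25.
print(fitness_phi((0.0, 0.0)))   # 50.0
print(fitness_phi((5.0, 0.0)))   # 25.0
print(fitness_phi((0.0, -5.0)))  # 25.0
```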
Part (a): Finding Two Global Optima
The fitness function has multiple global optima, meaning there are several points in the search space where the function achieves its smallest possible value. Two examples of global optima that minimize the fitness are:
- Optimum 1: a point where the fitness function evaluates to 25.
- Optimum 2: a second, distinct point where the fitness function also evaluates to 25.
These are points where the function reaches its lowest value of 25, and thus they are two different global optima.
Part (b): Technique to Find Multiple Optima
In evolutionary algorithms, the population of solutions tends to converge to a single solution over time, which may prevent the discovery of multiple global optima. One effective technique to address this is diversity-based selection: it favors individuals that differ from the rest of the population in fitness or position in the search space, which widens exploration and helps the algorithm find multiple global optima.
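One concrete way to realize diversity-based selection is fitness sharing: an individual's fitness is worsened in proportion to how many other individuals sit close to it, so crowded regions of the search space become less attractive. A minimal sketch for a minimization problem with non-negative fitness values (the sharing radius sigma is an assumption):

```python
import math

def shared_fitness(population, fitness, sigma=1.0):
    """Fitness sharing for minimization: crowded individuals get worse (larger) values."""
    shared = []
    for ind in population:
        # Niche count: how strongly `ind` is crowded by individuals within radius sigma.
        niche = sum(
            max(0.0, 1.0 - math.dist(ind, other) / sigma)
            for other in population
        )
        shared.append(fitness(ind) * niche)
    return shared
```

Selection then uses these shared values instead of the raw fitness, which keeps subpopulations alive around several of the global optima at once.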
Part (c): Recombination Functions
Consider the following two recombination functions given in Python. Note that the Python function random.random() returns a random floating point number between 0.0 (inclusive) and 1.0 (exclusive), i.e., a value in [0.0, 1.0).
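The listings are sketched below, assuming each individual is an (x, y) tuple; the exact bodies in the exercise may differ, but the behaviour matches the discussion that follows.

```python
import random

def recombine_avg(a, b):
    """Arithmetic recombination: the offspring is the component-wise average of the parents."""
    return tuple((ai + bi) / 2.0 for ai, bi in zip(a, b))

def recombine_xover(a, b):
    """Uniform crossover: each component is copied from one parent, chosen at random."""
    return tuple(ai if random.random() < 0.5 else bi for ai, bi in zip(a, b))
```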
Which of the two recombination functions, recombine_avg and recombine_xover, is better suited for the optimization problem given by fitness_phi?
Explanation:
The fitness function fitness_phi has poor fitness values near the center of the search space (around the origin), while the global optima are located closer to the edges of the space.
- recombine_avg: This function averages the values of the parents, so it tends to create offspring close to the midpoint between the parents. If the parents lie in different parts of the search space, the offspring are drawn toward the center, where the fitness is poor. Over multiple generations this concentrates the population near the center, which is not ideal for this problem.
- recombine_xover: This function randomly selects components from the parents, allowing for a wider distribution of offspring across the search space. This prevents the population from converging too quickly near the center and increases the likelihood of finding the global optima near the edges.
Conclusion:
recombine_xover is better suited for the optimization problem because it spreads offspring more evenly across the search space, avoiding the center where fitness is poor. recombine_avg is less ideal because it tends to pull offspring toward the center, potentially leading to suboptimal solutions.
Part (d): Mutation Function
The mutation function is required to have the following qualities:
- It does not return solution candidates that lie outside the search space’s boundaries.
- Its effects are random and not affected by fitness.
- Its effects work the same way in all of the individual’s dimensions.
- Its effects completely change the original individual only very rarely.
- There is no point within the search space that can never be reached via mutation, given an arbitrary individual as a starting point and infinite time.
A Python function along the following lines satisfies these requirements; the step size and the search-space bounds used in the sketch are assumptions.
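```python
import random

X_MIN, X_MAX = -5.0, 5.0    # assumed bounds of the search space
STEP = 0.5                  # assumed maximum perturbation per mutation

def mutate(ind):
    """Perturb one randomly chosen dimension by a small, bounded random amount."""
    x = list(ind)
    d = random.randrange(len(x))             # both dimensions are treated identically
    x[d] += random.uniform(-STEP, STEP)      # small change, independent of fitness
    x[d] = min(X_MAX, max(X_MIN, x[d]))      # clip so the result stays inside the space
    return tuple(x)
```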
This function randomly selects one dimension of the individual and perturbs its value by a small random amount, clipping the result so that it stays within the bounds of the search space. The change is random and independent of fitness, both dimensions are treated identically, a single mutation only rarely changes the individual drastically, and because arbitrarily many small steps can be chained together, every point of the search space remains reachable given enough time.
Part (e): Augmented Evolutionary Algorithm with Greedy Local Search
Consider that we now construct an evolutionary algorithm with local search, i.e., we augment the evolutionary algorithm by using a greedy gradient-based search algorithm on each individual before the fitness evaluation. Given the fitness function fitness_phi, what should we expect to happen when running this augmented evolutionary algorithm?
Explanation:
- Greedy gradient-based search will quickly drive each individual toward a nearby local or global minimum.
- This setup yields results very quickly, but it does not take full advantage of the evolutionary algorithm, because the local search dominates the optimization process.
- Therefore, this setup is not reasonable: the evolutionary aspect is essentially redundant, and we could just use the greedy local search by itself, since it would produce the same results without the additional complexity of the evolutionary algorithm (a sketch of the local-search step is given below).
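For illustration, the local-search step applied to each individual could be a few iterations of numerical gradient descent, sketched below; the step size, iteration count, and the fitness_phi stand-in from above are assumptions.

```python
def local_search(ind, fitness, step=0.01, iterations=50, eps=1e-6):
    """Greedy gradient-based refinement of a single individual (numerical gradient)."""
    x = list(ind)
    for _ in range(iterations):
        # Estimate the gradient with finite differences and take a small downhill step.
        grad = []
        for d in range(len(x)):
            shifted = list(x)
            shifted[d] += eps
            grad.append((fitness(shifted) - fitness(x)) / eps)
        x = [xi - step * gi for xi, gi in zip(x, grad)]
    return tuple(x)

# In the augmented algorithm, each individual would be replaced by
# local_search(individual, fitness_phi) before its fitness is evaluated, so the
# population collapses onto nearby minima almost immediately.
```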