In this paper, a novel population-based, nature-inspired optimization paradigm is proposed, called Harris Hawks Optimizer (HHO). The main inspiration of HHO is the cooperative behavior and chasing style of Harris' hawks in nature, known as the surprise pounce. In this intelligent strategy, several hawks cooperatively pounce on prey from different directions in an attempt to surprise it. Harris' hawks can reveal a variety of chasing patterns based on the dynamic nature of scenarios and the escaping patterns of the prey. This work mathematically mimics these dynamic patterns and behaviors to develop an optimization algorithm. The effectiveness of the proposed HHO optimizer is checked, through a comparison with other nature-inspired techniques, on 29 benchmark problems and several real-world engineering problems. The statistical results and comparisons show that HHO provides very promising and occasionally competitive results compared with well-established metaheuristic techniques.
- 🦅 Nature-Inspired: Mimics the cooperative hunting behavior of Harris' hawks.
- ⚡ Efficient: Strong at escaping local optima in high-dimensional search spaces.
- 💡 Versatile: Applicable to continuous, combinatorial, and multi-objective optimization.
- 🔄 Dynamic Exploration & Exploitation: Balances exploration and exploitation for robust performance.
The algorithm operates in three distinct phases that simulate the hawks' hunting strategy:
| Phase | Description | Purpose |
|---|---|---|
| 🦅 Exploration | Hawks search broadly across the space. | Discover new regions of the solution space. |
| 🔄 Transition | Hawks adjust between exploration and exploitation based on the prey's energy (problem difficulty). | Dynamically balance exploration and exploitation. |
| 💡 Exploitation | Hawks refine solutions near the best-known position. | Converge toward the optimal solution. |
- Initialization: Randomly initialize hawks (candidate solutions) across the search space.
- Fitness Evaluation: Evaluate each hawk's fitness based on a domain-specific function.
- Best Solution Tracking: Track the "rabbit" (best solution) found so far.
- Exploration vs Exploitation: Hawks decide whether to explore or exploit based on the prey's energy.
- Dynamic Position Updates: Hawks update their positions using encircling, sudden dives, or random jumps.
- Stopping Criterion: Continue until a stopping condition is met (e.g., max iterations, target fitness).
- Output: Return the best solution found as the final result.
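The first three steps above can be sketched in Python (an illustrative sketch only: the population size, bounds, and sphere objective are assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):
    # Simple benchmark objective: f(x) = sum(x_i^2), minimized at the origin.
    return float(np.sum(x ** 2))

# Step 1: randomly initialize hawks (candidate solutions) within the bounds.
n_hawks, dim, lb, ub = 20, 5, -10.0, 10.0
hawks = rng.uniform(lb, ub, size=(n_hawks, dim))

# Step 2: evaluate each hawk's fitness.
fitness = np.array([sphere(h) for h in hawks])

# Step 3: track the "rabbit" -- the best solution found so far.
rabbit = hawks[np.argmin(fitness)].copy()
rabbit_fitness = float(np.min(fitness))
```

The remaining steps repeat position updates and re-evaluation until the stopping criterion is met.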
- Objective: Explore the search space broadly to find promising regions.
- Mechanism: Hawks move randomly across the search space, promoting diversity and preventing early convergence.
Key Strategies:
- Random movement across unexplored regions.
- Ensures diverse candidate solutions.
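In the original paper, this phase uses two random perching rules chosen by a coin flip `q`: perch relative to a random flock member, or relative to the rabbit and the flock's mean position. A sketch (function and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def explore(hawk, hawks, rabbit, lb, ub):
    """One exploration move, following the two perching rules of the HHO paper."""
    q, r1, r2, r3, r4 = rng.random(5)
    if q >= 0.5:
        # Perch relative to a randomly chosen member of the flock.
        x_rand = hawks[rng.integers(len(hawks))]
        return x_rand - r1 * np.abs(x_rand - 2 * r2 * hawk)
    # Perch relative to the rabbit and the flock's mean position.
    x_mean = hawks.mean(axis=0)
    return (rabbit - x_mean) - r3 * (lb + r4 * (ub - lb))
```

Note that neither rule guarantees the new position stays inside `[lb, ub]`; implementations typically clip it back into the bounds afterward.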
- Objective: Adapt hawk behavior based on the problemโs difficulty.
- Mechanism: Hawks assess the energy state of the prey and switch between exploration and exploitation.
Key Strategies:
- Dynamic switching between global search and local refinement.
- Balances the two strategies for optimal performance.
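In the paper, this switching is driven by the prey's escaping energy, E = 2·E0·(1 − t/T) with E0 drawn from (−1, 1) each iteration: |E| ≥ 1 triggers exploration, |E| < 1 exploitation. A sketch:

```python
import random

random.seed(1)

def escaping_energy(t, t_max):
    """Escaping energy of the prey: E = 2*E0*(1 - t/T), with E0 in (-1, 1).

    |E| shrinks as iterations pass, shifting hawks from exploration to exploitation."""
    e0 = 2 * random.random() - 1  # initial energy, redrawn each iteration
    return 2 * e0 * (1 - t / t_max)

def phase(energy):
    # |E| >= 1: the prey is strong, so search globally; otherwise refine locally.
    return "explore" if abs(energy) >= 1 else "exploit"
```

Early on, |E| can exceed 1, so exploration dominates; in the second half of the run |E| < 1 always holds, forcing exploitation.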
- Objective: Refine solutions near the best-known solution.
- Mechanism: Hawks focus their search near the best solution using various fine-tuning techniques.
Key Strategies:
- Gradual Encircling: Hawks gradually close in on the prey (optimal solution).
- Sudden Dives: Hawks make abrupt moves to test nearby solutions.
- Direct Attack: Hawks converge aggressively when the prey is weak (low escaping energy).
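The paper formalizes the first and third strategies as "soft" and "hard" besiege, selected by the magnitude of the escaping energy. A sketch of the two basic rules (the rapid-dive variants with Lévy flights are omitted; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def besiege(hawk, rabbit, energy):
    """Soft vs. hard besiege from the HHO paper (without the rapid-dive variants)."""
    delta = rabbit - hawk
    if abs(energy) >= 0.5:
        # Soft besiege: the prey still has energy, so encircle it
        # with a random jump strength J = 2*(1 - r).
        jump = 2 * (1 - rng.random())
        return delta - energy * np.abs(jump * rabbit - hawk)
    # Hard besiege: the prey is exhausted -- dive straight toward the rabbit.
    return rabbit - energy * np.abs(delta)
```

With energy near zero, the hard-besiege rule collapses onto the rabbit itself, which is exactly the aggressive convergence described above.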
Below is a pseudo-code outline of the HHO algorithm:
1. Initialize hawk positions randomly in the search space.
2. Evaluate the fitness of each hawk.
3. Identify the best solution (rabbit).
4. Repeat until stopping criterion is met:
a. Update the energy of the prey.
b. Adjust hawk positions based on exploration or exploitation:
i. Random jumps for exploration.
ii. Gradual encircling or dives for exploitation.
c. Evaluate fitness and update the best solution if needed.
5. Return the best solution found.
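Putting the pseudo-code together, here is a compact and deliberately simplified Python implementation. It is a sketch, not the authors' reference code: the progressive rapid dives with Lévy flights from the full algorithm are omitted, and the default parameters are assumptions.

```python
import numpy as np

def hho(objective, dim, lb, ub, n_hawks=20, t_max=200, seed=0):
    """Simplified Harris Hawks Optimization: exploration, soft besiege, hard besiege.

    The Levy-flight rapid dives of the original paper are intentionally omitted."""
    rng = np.random.default_rng(seed)
    hawks = rng.uniform(lb, ub, size=(n_hawks, dim))       # step 1: initialize
    fitness = np.array([objective(h) for h in hawks])      # step 2: evaluate
    best = int(np.argmin(fitness))
    rabbit, rabbit_fit = hawks[best].copy(), float(fitness[best])  # step 3: rabbit

    for t in range(t_max):                                 # step 4: main loop
        e0 = 2 * rng.random() - 1                          # initial prey energy
        energy = 2 * e0 * (1 - t / t_max)                  # decays as the hunt proceeds
        for i in range(n_hawks):
            if abs(energy) >= 1:                           # 4b.i: exploration
                if rng.random() >= 0.5:
                    x_rand = hawks[rng.integers(n_hawks)]
                    hawks[i] = x_rand - rng.random() * np.abs(
                        x_rand - 2 * rng.random() * hawks[i])
                else:
                    hawks[i] = (rabbit - hawks.mean(axis=0)) - rng.random() * (
                        lb + rng.random() * (ub - lb))
            elif abs(energy) >= 0.5:                       # 4b.ii: soft besiege
                jump = 2 * (1 - rng.random())
                hawks[i] = (rabbit - hawks[i]) - energy * np.abs(
                    jump * rabbit - hawks[i])
            else:                                          # 4b.ii: hard besiege
                hawks[i] = rabbit - energy * np.abs(rabbit - hawks[i])
            hawks[i] = np.clip(hawks[i], lb, ub)           # keep hawks in bounds
            f = float(objective(hawks[i]))
            if f < rabbit_fit:                             # 4c: update the rabbit
                rabbit, rabbit_fit = hawks[i].copy(), f
    return rabbit, rabbit_fit                              # step 5: best solution

# Example: minimize the sphere function f(x) = sum(x_i^2) on [-10, 10]^5.
best_x, best_f = hho(lambda x: float(np.sum(x ** 2)), dim=5, lb=-10.0, ub=10.0)
```

Updating `hawks[i]` in place means later hawks in the same iteration see a partially updated flock mean; this is acceptable for a sketch, though some implementations update the whole population synchronously.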
Ali Asghar Heidari
PhD Research Intern, Department of Computer Science, School of Computing, National University of Singapore, Singapore
Exceptionally Talented Ph.D. candidate funded by Iran's National Elites Foundation (INEF), University of Tehran
Email:
If you use this algorithm in your research, please cite our paper as follows:
@article{Heidari2019HHO,
title={Harris hawks optimization: Algorithm and applications},
author={Ali Asghar Heidari and Seyedali Mirjalili and Hossam Faris and Ibrahim Aljarah and Majdi Mafarja and Huiling Chen},
journal={Future Generation Computer Systems},
volume={97},
pages={849--872},
year={2019},
publisher={Elsevier},
doi={10.1016/j.future.2019.02.028},
}
For further inquiries or collaborations, feel free to reach out to the author or contributors via the emails listed above. You can also follow Ali Asghar Heidari's work on his ResearchGate page.