Welcome to the A.I.Lab website

 

Welcome to the website of the Artificial Intelligence Laboratory (A.I.Lab) at the Department of Informatics and Artificial Intelligence, Faculty of Applied Informatics, Tomas Bata University in Zlin.


Ph.D. study opportunities in A.I.Lab:

Interested in pursuing a Ph.D. in A.I.Lab? Head to the Thesis topics section and start your career in Artificial Intelligence today!


Research spotlight:


PDF

Orthogonal Learning Firefly Algorithm

Tomas Kadavy, Roman Senkerik, Michal Pluhacek, Adam Viktorin

Logic Journal of the IGPL, Volume 29, Issue 2, April 2021, Pages 167–179

The primary aim of this original work is to provide a more in-depth insight into the relations between control parameter adjustments, learning techniques, inner swarm dynamics, and possible hybridization strategies for the popular swarm metaheuristic Firefly Algorithm (FA). In this paper, a proven method, orthogonal learning, is fused with FA, specifically with its hybrid modification, the Firefly Particle Swarm Optimization (FFPSO). The parameters of the proposed Orthogonal Learning Firefly Algorithm are also thoroughly explored and tuned. The performance of the developed algorithm is examined and compared with the canonical FA and the above-mentioned FFPSO. Comparisons were conducted on the well-known CEC 2017 benchmark functions, and the results were evaluated for statistical significance using the Friedman rank test.
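
For readers unfamiliar with the FA dynamics referenced above, the following minimal sketch shows the canonical firefly movement step that the hybrid builds on. It is not the authors' Orthogonal Learning Firefly Algorithm or the FFPSO hybrid, and the parameter defaults (alpha, beta0, gamma, bounds) are illustrative assumptions only.

import numpy as np

def firefly_step(pop, fitness, alpha=0.2, beta0=1.0, gamma=1.0, bounds=(-100.0, 100.0)):
    # One movement step of the canonical Firefly Algorithm (minimization).
    # Each firefly moves toward every brighter (better) firefly with an
    # attractiveness that decays with distance, plus a small random walk.
    n, dim = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        for j in range(n):
            if fitness[j] < fitness[i]:               # firefly j is brighter than i
                r2 = np.sum((pop[i] - pop[j]) ** 2)   # squared distance between fireflies
                beta = beta0 * np.exp(-gamma * r2)    # attractiveness term
                rand_step = alpha * (np.random.rand(dim) - 0.5)
                new_pop[i] = new_pop[i] + beta * (pop[j] - new_pop[i]) + rand_step
    return np.clip(new_pop, *bounds)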

Cite as:

Kadavy, T., Senkerik, R., Pluhacek, M., & Viktorin, A. (2021). Orthogonal learning firefly algorithm. Logic Journal of the IGPL, 29(2), 167–179. DOI: 10.1093/jigpal/jzaa044.


PDF
(Open Access)

How Does the Number of Objective Function Evaluations Impact Our Understanding of Metaheuristics Behavior?

Anezka Kazikova, Michal Pluhacek, Roman Senkerik

IEEE Access, Volume: 9, 2021

Comparing various metaheuristics based on an equal number of objective function evaluations has become standard practice. Many contemporary publications use a specific number of objective function evaluations given by the benchmark set definitions. Furthermore, many publications deal with the recurrent theme of late stagnation, which may lead to the impression that continuing the optimization process could be a waste of computational capabilities. But is it? Recently, many challenges, issues, and questions have been raised regarding fair comparisons and recommendations towards good practices for benchmarking metaheuristic algorithms. The aim of this work is not to compare the performance of several well-known algorithms but to investigate the issues that can appear in benchmarking and comparing metaheuristics' performance (no matter what the problem is). This article studies the impact of a higher evaluation number on a selection of metaheuristic algorithms. We examine the effect of a raised evaluation budget on the overall performance, mean convergence, and population diversity of selected swarm algorithms and IEEE CEC competition winners. Even though the final impact varies with the algorithm selection, it may significantly affect the final verdict of a metaheuristics comparison. This work picks an important benchmarking issue, provides an extensive analysis, and draws conclusions and possible recommendations for users working with real engineering optimization problems or researching metaheuristic algorithms. Especially nowadays, when metaheuristic algorithms are used for increasingly complex optimization problems and meet machine learning in AutoML frameworks, we conclude that the objective function evaluation budget should be considered another vital optimization input variable.
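
As a loose, self-contained illustration of treating the evaluation budget as an input variable (not the authors' experimental setup), the sketch below logs the best-so-far value of a placeholder optimizer at several budget checkpoints; the objective, the optimizer, and the checkpoint values are illustrative assumptions.

import numpy as np

def sphere(x):
    # Simple test objective, a stand-in for a CEC benchmark function.
    return float(np.sum(x ** 2))

def random_search(objective, dim, budget, bounds=(-100.0, 100.0), checkpoints=(1_000, 10_000, 100_000)):
    # Runs a trivial optimizer and records the best value found at several
    # evaluation budgets, so the same run can be judged under different budgets.
    best = np.inf
    history = {}
    for evals in range(1, budget + 1):
        x = np.random.uniform(*bounds, size=dim)
        best = min(best, objective(x))
        if evals in checkpoints:
            history[evals] = best
    return history

# The verdict about how well the run performed depends on which budget you read off.
print(random_search(sphere, dim=10, budget=100_000))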

Cite as:

A. Kazikova, M. Pluhacek and R. Senkerik, "How Does the Number of Objective Function Evaluations Impact Our Understanding of Metaheuristics Behavior?," in IEEE Access, vol. 9, pp. 44032–44048, 2021, DOI: 10.1109/ACCESS.2021.3066135.

PDF
(Open Access)

Why Tuning the Control Parameters of Metaheuristic Algorithms Is So Important for Fair Comparison?

Anezka Kazikova, Michal Pluhacek, Roman Senkerik

Mendel, Vol. 26, No. 2, 2020, Published: 2020-12-21

Although metaheuristic optimization has become a common practice, new bio-inspired algorithms often suffer from an a priori ill reputation. One of the reasons is a common bad practice in metaheuristic proposals. It is essential to pay attention to the quality of the conducted experiments, especially when comparing several algorithms among themselves. The comparisons should be fair and unbiased. This paper points to the importance of proper initial parameter configurations of the compared algorithms. We highlight the performance differences under several popular and recommended parameter configurations. Even though the parameter selection was mostly based on comprehensive tuning experiments, the algorithms' performance was surprisingly inconsistent across the various parameter settings. Based on the presented evidence, we conclude that paying attention to the metaheuristic algorithm's parameter tuning should be an integral part of the development and testing processes.
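
As a rough, self-contained illustration of why parameter settings can dominate a comparison, the sketch below runs the same toy optimizer (a simple stochastic hill climber, not an algorithm from the paper) with two step-size settings on a sphere function; every name and value here is an illustrative assumption.

import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def hill_climber(objective, dim, budget, step_size, bounds=(-100.0, 100.0), seed=None):
    # A (1+1)-style stochastic hill climber; step_size is its single control parameter.
    rng = np.random.default_rng(seed)
    x = rng.uniform(*bounds, size=dim)
    fx = objective(x)
    for _ in range(budget - 1):
        y = np.clip(x + rng.normal(scale=step_size, size=dim), *bounds)
        fy = objective(y)
        if fy < fx:
            x, fx = y, fy
    return fx

# The same algorithm under two parameter settings: the comparison verdict
# depends heavily on which configuration each competitor was granted.
for step in (0.1, 10.0):
    results = [hill_climber(sphere, dim=10, budget=5_000, step_size=step, seed=s) for s in range(10)]
    print(f"step_size={step}: mean best = {np.mean(results):.3e}")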

Cite as:

Kazikova, A., Pluhacek, M., & Senkerik, R. (2020, December). Why Tuning the Control Parameters of Metaheuristic Algorithms Is So Important for Fair Comparison? In Mendel (Vol. 26, No. 2, pp. 9–16). DOI: 10.13164/mendel.2020.2.009.