Welcome to the website of the Artificial Intelligence Laboratory at the Department of Informatics and Artificial Intelligence, Faculty of Applied Informatics, Tomas Bata University in Zlin.

Ph.D. study opportunities in A.I.Lab:

Interested in pursuing your Ph.D. in A.I.Lab? Head to the Thesis topics section and start your career in Artificial Intelligence today!

Research spotlight:


Impact of Boundary Control Methods on Bound-Constrained Optimization Benchmarking

Tomas Kadavy, Adan Viktorin, Anezka Kazikova, Michal Pluhacek, Roman Senkerik

IEEE Transactions on Evolutionary Computation, Volume 26, Issue 6, December 2022

Benchmarking various metaheuristics and their new enhancements, strategies, and adaptation mechanisms has become standard in computational intelligence research. Recently, many challenges and issues regarding fair comparisons, along with recommendations toward good practices for benchmarking metaheuristic algorithms, have been identified. This article addresses an important issue in metaheuristics design and benchmarking: boundary strategies, or boundary control methods (BCMs). This work investigates whether the choice of a BCM can significantly influence the performance of competitive algorithms. The experiments encompass the top three performing algorithms from the IEEE CEC 2017 and 2020 competitions combined with six different BCMs. We provide extensive statistical analysis and rankings, resulting in conclusions and recommendations for metaheuristics researchers and possibly also for the future direction of benchmark definitions. We conclude that the BCM should be considered another vital metaheuristics input variable, both for unambiguous reproducibility of benchmarking results and for a better understanding of population dynamics, since the BCM setting can impact the optimization method's performance.

Cite as:

Kadavy, T., Viktorin, A., Kazikova, A., Pluhacek, M., & Senkerik, R. (2022). Impact of boundary control methods on bound-constrained optimization benchmarking. IEEE Transactions on Evolutionary Computation.
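As an illustration of what a boundary control method does, the sketch below shows a few repair strategies widely used in the metaheuristics literature: clipping, reflection, toroidal wrapping, and random reinitialization. These are common textbook examples, not necessarily the exact six BCMs compared in the article.

```python
import random

def clip(x, lo, hi):
    """Clamp an out-of-bounds coordinate to the nearest bound."""
    return max(lo, min(hi, x))

def reflect(x, lo, hi):
    """Mirror the overshoot back into the feasible interval."""
    while x < lo or x > hi:
        x = lo + (lo - x) if x < lo else hi - (x - hi)
    return x

def wrap(x, lo, hi):
    """Toroidal boundary: leave one side, re-enter from the other."""
    return lo + (x - lo) % (hi - lo)

def random_reinit(x, lo, hi, rng=random):
    """Replace an infeasible coordinate with a uniform random one."""
    return rng.uniform(lo, hi) if (x < lo or x > hi) else x

# Repairing a candidate solution that left the box [-5, 5]:
candidate = [7.3, -6.1, 2.0]
repaired = [reflect(c, -5.0, 5.0) for c in candidate]
```

Each strategy produces a feasible point, but from different locations in the search space, which is exactly why the choice can alter population dynamics near the bounds.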


Slicing aided large scale tomato fruit detection and counting in 360-degree video data from a greenhouse

Alžběta Turečková, Tomáš Tureček, Peter Janků, Pavel Vařacha, Roman Šenkeřík, Roman Jašek, Václav Psota, Vit Štěpánek, Zuzana Komínková Oplatková

Measurement, Volume 204, 30 November 2022, 111977

This paper proposes an automated tomato fruit detection and counting process that requires no human intervention. First, wide images of whole tomato plant rows were extracted from a 360-degree video taken in a greenhouse. These images were used to create a new object detection dataset. The proposed tomato detection methodology uses a deep CNN model with slicing-aided inference. The process encompasses two stages: first, the images are cut into patches for object detection, and then the predictions are stitched back together. The paper also presents an extensive study of the post-processing parameters needed to stitch object detections correctly, especially at patch borders. The final results reach an F1 score of 83.09% on a test set, demonstrating the suitability of the proposed methodology for robotic farming.

Cite as:

Turečková, A., Tureček, T., Janků, P., Vařacha, P., Šenkeřík, R., Jašek, R., … & Oplatková, Z. K. (2022). Slicing aided large scale tomato fruit detection and counting in 360-degree video data from a greenhouse. Measurement, 204, 111977.

(Open Access)
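The stitching stage described above can be sketched in a few lines: patch-local boxes are translated into whole-image coordinates, and duplicate detections of the same fruit near overlapping patch borders are merged by greedy non-maximum suppression. The function names and detection format here are illustrative assumptions, not the paper's actual implementation.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def shift_to_global(box, patch_x, patch_y):
    """Translate a patch-local box into whole-image coordinates."""
    x1, y1, x2, y2 = box
    return (x1 + patch_x, y1 + patch_y, x2 + patch_x, y2 + patch_y)

def stitch(detections, iou_thr=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping duplicates."""
    detections = sorted(detections, key=lambda d: d["score"], reverse=True)
    kept = []
    for d in detections:
        if all(iou(d["box"], k["box"]) < iou_thr for k in kept):
            kept.append(d)
    return kept
```

For example, the same tomato detected in two adjacent patches yields two near-identical global boxes, which `stitch` collapses into one, keeping the count correct.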

How Does the Number of Objective Function Evaluations Impact Our Understanding of Metaheuristics Behavior?

Anezka Kazikova, Michal Pluhacek, Roman Senkerik

IEEE Access, Volume 9, 2021

Comparing various metaheuristics based on an equal number of objective function evaluations has become standard practice. Many contemporary publications use the specific number of objective function evaluations prescribed by benchmark set definitions. Furthermore, many publications deal with the recurrent theme of late stagnation, which may create the impression that continuing the optimization process would be a waste of computational resources. But is it? Recently, many challenges, issues, and questions have been raised regarding fair comparisons and good practices for benchmarking metaheuristic algorithms. The aim of this work is not to compare the performance of several well-known algorithms but to investigate the issues that can appear in benchmarking and comparing metaheuristics performance, regardless of the problem. This article studies the impact of a higher evaluation number on a selection of metaheuristic algorithms. We examine the effect of a raised evaluation budget on the overall performance, mean convergence, and population diversity of selected swarm algorithms and IEEE CEC competition winners. Even though the final impact varies with the algorithm selection, it may significantly affect the final verdict of a metaheuristics comparison. This work examines an important benchmarking issue through extensive analysis, resulting in conclusions and recommendations for users working on real engineering optimization problems and for researchers studying metaheuristic algorithms. Especially nowadays, when metaheuristic algorithms are applied to increasingly complex optimization problems and meet machine learning in AutoML frameworks, we conclude that the objective function evaluation budget should be considered another vital optimization input variable.

Cite as:

A. Kazikova, M. Pluhacek and R. Senkerik, "How Does the Number of Objective Function Evaluations Impact Our Understanding of Metaheuristics Behavior?," in IEEE Access, vol. 9, pp. 44032-44048, 2021, DOI: 10.1109/ACCESS.2021.3066135.
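The role of the evaluation budget can be illustrated with a toy experiment: a simple random search on the sphere function, recording the best-so-far value after every evaluation. This is only a hypothetical sketch; the article's experiments use competitive metaheuristics and CEC benchmark functions, not random search.

```python
import random

def sphere(x):
    """Classic separable test function: f(x) = sum of squares."""
    return sum(xi * xi for xi in x)

def random_search(budget, dim=5, lo=-5.0, hi=5.0, seed=42):
    """Return the best-so-far objective value after each evaluation."""
    rng = random.Random(seed)
    best = float("inf")
    history = []
    for _ in range(budget):
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        best = min(best, sphere(x))
        history.append(best)
    return history

short = random_search(1_000)
long = random_search(100_000)
```

With a fixed seed, the long run reproduces the short run's first 1,000 evaluations exactly, so any further improvement after that point is visible only under the extended budget; whether such late improvement occurs, and how strongly, is exactly what differs between algorithms.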