Welcome to the A.I.Lab website


Welcome to the website of the Artificial Intelligence Laboratory (A.I.Lab) at the Department of Informatics and Artificial Intelligence, Faculty of Applied Informatics, Tomas Bata University in Zlin.

Ph.D. study opportunities in A.I.Lab:

Interested in pursuing your Ph.D. in A.I.Lab? Head to the Thesis topics section and start your career in Artificial Intelligence today!

Research spotlight:


Slicing aided large scale tomato fruit detection and counting in 360-degree video data from a greenhouse

Alžběta Turečková, Tomáš Tureček, Peter Janků, Pavel Vařacha, Roman Šenkeřík, Roman Jašek, Václav Psota, Vit Štěpánek, Zuzana Komínková Oplatková

Measurement, Volume 204, 30 November 2022, 111977

This paper proposes an automated tomato fruit detection and counting process without the need for human intervention. First, wide images of whole tomato plant rows were extracted from a 360-degree video taken in a greenhouse. These images were used to create a new object detection dataset. The proposed tomato detection methodology uses a deep CNN model with slicing-aided inference. The process encompasses two stages: first, the images are cut into patches for object detection, and subsequently, the predictions are stitched back together. The paper also presents an extensive study of the post-processing parameters needed to stitch object detections correctly, especially at patch borders. The final results reach an F1 score of 83.09% on a test set, proving the suitability of the proposed methodology for robotic farming.
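The two-stage slice-and-stitch pipeline described above can be sketched roughly as follows. Note that the patch layout, box format, and greedy IoU merge are illustrative assumptions, not the paper's exact post-processing:

```python
def slice_image(width, height, patch, overlap):
    """Origins of overlapping square patches covering a wide image."""
    step = patch - overlap
    xs = range(0, max(width - overlap, 1), step)
    ys = range(0, max(height - overlap, 1), step)
    return [(x, y) for y in ys for x in xs]

def iou(a, b):
    """Intersection over union of two (x0, y0, x1, y1) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def stitch(patch_detections, iou_thresh=0.5):
    """Shift patch-local boxes to global coordinates, then merge the
    duplicates that appear on overlapping patch borders via greedy NMS.

    patch_detections: list of ((ox, oy), boxes), each box a
    (x0, y0, x1, y1, score) tuple in patch-local coordinates.
    """
    global_boxes = []
    for (ox, oy), boxes in patch_detections:
        for x0, y0, x1, y1, score in boxes:
            global_boxes.append((x0 + ox, y0 + oy, x1 + ox, y1 + oy, score))
    # Keep highest-scoring boxes; drop any box that overlaps a kept one.
    global_boxes.sort(key=lambda b: b[4], reverse=True)
    kept = []
    for box in global_boxes:
        if all(iou(box[:4], k[:4]) < iou_thresh for k in kept):
            kept.append(box)
    return kept  # len(kept) is the fruit count for the image
```

A CNN detector would run on each patch between the two steps; counting then reduces to taking the length of the stitched result.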

Cite as:

Turečková, A., Tureček, T., Janků, P., Vařacha, P., Šenkeřík, R., Jašek, R., … & Oplatková, Z. K. (2022). Slicing aided large scale tomato fruit detection and counting in 360-degree video data from a greenhouse. Measurement, 204, 111977.

(Open Access)

How Does the Number of Objective Function Evaluations Impact Our Understanding of Metaheuristics Behavior?

Anezka Kazikova, Michal Pluhacek, Roman Senkerik

IEEE Access, Volume: 9, 2021

Comparing various metaheuristics based on an equal number of objective function evaluations has become standard practice. Many contemporary publications use a specific number of objective function evaluations prescribed by benchmark set definitions. Furthermore, many publications deal with the recurrent theme of late stagnation, which may lead to the impression that continuing the optimization process would be a waste of computational capabilities. But is it? Recently, many challenges, issues, and questions have been raised regarding fair comparisons and recommendations for good practices in benchmarking metaheuristic algorithms. The aim of this work is not to compare the performance of several well-known algorithms but to investigate the issues that can appear in benchmarking and comparing metaheuristics, regardless of the problem at hand. This article studies the impact of a higher evaluation number on a selection of metaheuristic algorithms. We examine the effect of a raised evaluation budget on the overall performance, mean convergence, and population diversity of selected swarm algorithms and IEEE CEC competition winners. Even though the final impact varies with the algorithm selection, it may significantly affect the final verdict of a metaheuristics comparison. This work picks an important benchmarking issue, performs an extensive analysis, and draws conclusions and possible recommendations for users working on real engineering optimization problems or researching metaheuristic algorithms. Especially nowadays, when metaheuristic algorithms are applied to increasingly complex optimization problems and meet machine learning in AutoML frameworks, we conclude that the objective function evaluation budget should be considered another vital optimization input variable.

Cite as:

A. Kazikova, M. Pluhacek and R. Senkerik, "How Does the Number of Objective Function Evaluations Impact Our Understanding of Metaheuristics Behavior?," in IEEE Access, vol. 9, pp. 44032-44048, 2021, DOI: 10.1109/ACCESS.2021.3066135.

(Open Access)

Why Tuning the Control Parameters of Metaheuristic Algorithms Is So Important for Fair Comparison?

Anezka Kazikova, Michal Pluhacek, Roman Senkerik

Mendel, Vol 26 No 2, 2020, Published: 2020-12-21

Although metaheuristic optimization has become a common practice, new bio-inspired algorithms often suffer from an a priori poor reputation. One of the reasons is common bad practice in metaheuristic proposals. It is essential to pay attention to the quality of the conducted experiments, especially when comparing several algorithms among themselves. The comparisons should be fair and unbiased. This paper points to the importance of proper initial parameter configurations of the compared algorithms. We highlight the performance differences under several popular and recommended parameter configurations. Even though the parameter selection was mostly based on comprehensive tuning experiments, the algorithms' performance was surprisingly inconsistent across the various parameter settings. Based on the presented evidence, we conclude that attention to a metaheuristic algorithm's parameter tuning should be an integral part of the development and testing processes.
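A minimal sketch of the effect: the same toy (1+1) hill climber, with identical seed and budget, and only its single control parameter (the mutation step size) changed. This is a hypothetical illustration, not one of the paper's tuned algorithms:

```python
import random

def sphere(x):
    """Sum-of-squares objective, minimum 0 at the origin."""
    return sum(v * v for v in x)

def hill_climb(step_size, budget=2000, dim=5, seed=1):
    """Toy (1+1) hill climber; step_size is its only control parameter."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = sphere(x)
    for _ in range(budget - 1):
        # Gaussian mutation; accept the candidate only if it improves.
        y = [v + rng.gauss(0, step_size) for v in x]
        fy = sphere(y)
        if fy < fx:
            x, fx = y, fy
    return fx

# Same algorithm, same seed, same budget; only the parameter differs.
# A comparison that tunes one algorithm's parameters but not the
# other's really compares parameter choices, not algorithms.
well_set = hill_climb(step_size=0.5)
badly_set = hill_climb(step_size=50.0)
```

With a sensible step size the climber converges toward the optimum; with an oversized one it rarely accepts a move, so the "two algorithms" appear to differ wildly even though only one configuration knob changed.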

Cite as:

Kazikova, A., Pluhacek, M., & Senkerik, R. (2020, December). Why Tuning the Control Parameters of Metaheuristic Algorithms Is So Important for Fair Comparison?. In Mendel (Vol. 26, No. 2, pp. 9-16). DOI: 10.13164/mendel.2020.2.009.