Welcome to the website of the Artificial Intelligence Laboratory (A.I. Lab) at the Department of Informatics and Artificial Intelligence, Faculty of Applied Informatics, Tomas Bata University in Zlin.
Ph.D. study opportunities in the A.I. Lab:
Interested in pursuing your Ph.D. in the A.I. Lab? Head to the Thesis topics section and start your career in Artificial Intelligence today!
Slicing aided large scale tomato fruit detection and counting in 360-degree video data from a greenhouse
Alžběta Turečková, Tomáš Tureček, Peter Janků, Pavel Vařacha, Roman Šenkeřík, Roman Jašek, Václav Psota, Vit Štěpánek, Zuzana Komínková Oplatková
Measurement, Volume 204, 30 November 2022, 111977
This paper proposes an automated tomato fruit detection and counting process that requires no human intervention. First, wide images of whole tomato plant rows were extracted from a 360-degree video taken in a greenhouse. These images were used to create a new object detection dataset. The proposed tomato detection methodology uses a deep CNN model with slicing-aided inference. The process encompasses two stages: first, the images are cut into patches for object detection, and subsequently, the predictions are stitched back together. The paper also presents an extensive study of the post-processing parameters needed to stitch object detections correctly, especially at the patches’ borders. The final results reach an F1 score of 83.09% on a test set, proving the suitability of the proposed methodology for robotic farming.
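The two-stage process described above (patch-wise detection followed by stitching) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the patch size, overlap ratio, and the greedy NMS merge are illustrative assumptions, and the actual detector model is left out.

```python
# Sketch of slicing-aided inference: a wide image is cut into overlapping
# patches, a detector runs on each patch, patch-local boxes are shifted
# back to full-image coordinates, and duplicates on patch borders are
# merged with non-maximum suppression (NMS).

def make_patches(width, height, patch=640, overlap=0.5):
    """Return (x0, y0) offsets of overlapping patches covering the image."""
    step = int(patch * (1 - overlap))
    xs = list(range(0, max(width - patch, 0) + 1, step)) or [0]
    ys = list(range(0, max(height - patch, 0) + 1, step)) or [0]
    return [(x, y) for y in ys for x in xs]

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def stitch(detections, iou_thr=0.5):
    """Greedy NMS: keep the highest-scoring box among overlapping duplicates."""
    detections = sorted(detections, key=lambda d: -d[1])  # sort by score
    kept = []
    for box, score in detections:
        if all(iou(box, k[0]) < iou_thr for k in kept):
            kept.append((box, score))
    return kept

# Example: the same fruit detected twice near a patch border is merged.
merged = stitch([((0, 0, 10, 10), 0.9),   # detection from patch A
                 ((1, 1, 10, 10), 0.8),   # near-duplicate from patch B
                 ((50, 50, 60, 60), 0.7)])  # a distinct fruit
```

In a full pipeline, each box predicted inside a patch at offset `(x0, y0)` would be shifted by that offset before `stitch` runs, so that all detections live in the same full-image coordinate frame.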
Turečková, A., Tureček, T., Janků, P., Vařacha, P., Šenkeřík, R., Jašek, R., … & Oplatková, Z. K. (2022). Slicing aided large scale tomato fruit detection and counting in 360-degree video data from a greenhouse. Measurement, 204, 111977.
How Does the Number of Objective Function Evaluations Impact Our Understanding of Metaheuristics Behavior?
Anezka Kazikova, Michal Pluhacek, Roman Senkerik
IEEE Access, Volume: 9, 2021
Comparing various metaheuristics based on an equal number of objective function evaluations has become standard practice. Many contemporary publications use the specific number of objective function evaluations prescribed by the benchmark sets' definitions. Furthermore, many publications deal with the recurrent theme of late stagnation, which may lead to the impression that continuing the optimization process could be a waste of computational capabilities. But is it? Recently, many challenges, issues, and questions have been raised regarding fair comparisons and recommendations towards good practices for benchmarking metaheuristic algorithms. The aim of this work is not to compare the performance of several well-known algorithms but to investigate the issues that can appear in benchmarking and comparisons of metaheuristics performance (no matter what the problem is). This article studies the impact of a higher evaluation number on a selection of metaheuristic algorithms. We examine the effect of a raised evaluation budget on the overall performance, mean convergence, and population diversity of selected swarm algorithms and IEEE CEC competition winners. Even though the final impact varies based on the current algorithm selection, it may significantly affect the final verdict of a metaheuristics comparison. This work picks an important benchmarking issue and provides an extensive analysis, resulting in conclusions and possible recommendations for users working with real engineering optimization problems or researching metaheuristic algorithms. Especially nowadays, when metaheuristic algorithms are used for increasingly complex optimization problems and meet machine learning in AutoML frameworks, we conclude that the objective function evaluation budget should be considered another vital optimization input variable.
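The abstract's central point (that the evaluation budget is itself an input of the experiment) can be illustrated with a toy setup. The sketch below uses a simple (1+1)-style random search on the sphere benchmark as a hypothetical stand-in for the swarm algorithms studied in the paper; only the budget differs between the two runs.

```python
# Minimal sketch: the evaluation budget treated as an experimental input.
# A toy elitist random search (an illustrative stand-in, not one of the
# algorithms from the paper) is run twice with identical settings except
# for the number of objective function evaluations.
import random

def sphere(x):
    """Benchmark objective: sum of squares, global minimum 0 at the origin."""
    return sum(v * v for v in x)

def random_search(budget, dim=5, step=0.1, seed=0):
    """(1+1) random search; every call to the objective counts against the budget."""
    rng = random.Random(seed)
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    best_f = sphere(best)
    for _ in range(budget - 1):  # the initial evaluation is already spent
        cand = [v + rng.gauss(0, step) for v in best]
        f = sphere(cand)
        if f < best_f:           # elitist: keep the better solution
            best, best_f = cand, f
    return best_f

short_run = random_search(budget=1_000)
long_run = random_search(budget=100_000)
```

Because the search is elitist, the longer run can never be worse on the same seed; for non-elitist or diversity-driven algorithms, as the paper shows, a raised budget can reshuffle the ranking of whole algorithm comparisons.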
A. Kazikova, M. Pluhacek and R. Senkerik, "How Does the Number of Objective Function Evaluations Impact Our Understanding of Metaheuristics Behavior?," in IEEE Access, vol. 9, pp. 44032-44048, 2021, DOI: 10.1109/ACCESS.2021.3066135.