LLM-driven Automated Algorithm Design (WCCI 2026)

Tutorial at WCCI 2026

When & Where

When: TBA (during WCCI 2026)
Where: TBA (WCCI 2026 venue)
Duration: 1.5 hours
Target track: IJCNN, hybrid (with CEC)

Overview

Large language models (LLMs) are transforming how we design and automate AI techniques and algorithms. We are moving beyond hyperparameter tuning and automated algorithm selection towards fully automated algorithm design (AAD), including architecture search and end-to-end pipeline discovery, effectively closing the loop between ideation and evaluation.

This tutorial surveys the rapidly evolving landscape of LLM-driven algorithm discovery, including frameworks such as EASE, LLaMEA, LHNS, MCTS-AHD, PartEVO, AlphaEvolve, FunSearch, and emerging GenAI-based AI assistants.

We focus in particular on two complementary frameworks: EASE (Effortless Algorithmic Solution Evolution), a modular framework for iterative, closed-loop generation and evaluation of algorithms, code, text and graphics; and LLaMEA (Large Language Model Evolutionary Algorithm), an evolution-focused framework tightly integrated with benchmarking ecosystems such as IOH, together with its LLaMEA-HPO and LLaMEA-BO variants and the BLADE benchmarking suite for automated algorithm discovery.

Participants will see how these two “competing yet complementary” approaches can be used together, and how collaboration between different teams and frameworks can accelerate global deployment of AAD techniques.

Format

The tutorial is demo-driven (no hands-on coding). We will present:

  • Short videos and narrated walkthroughs of the EASE frontend and backend.
  • Demonstrations of LLaMEA-based workflows and benchmarking setups.
  • Illustrations of task setup, fitness/evaluation loops, and result inspection.
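The fitness/evaluation loop illustrated in the demos can be sketched in a few lines. This is a hedged, minimal sketch, not the actual EASE or LLaMEA API: `propose_variant` stands in for an LLM call that mutates a candidate algorithm, and the "algorithm" is reduced to a single tunable parameter so the loop runs self-contained.

```python
import random

def evaluate(candidate):
    """Fitness of a candidate (toy objective: optimum at x = 3)."""
    return -(candidate - 3.0) ** 2

def propose_variant(parent, rng):
    """Placeholder for an LLM-generated variation of the parent algorithm."""
    return parent + rng.gauss(0.0, 0.5)

def discovery_loop(generations=50, seed=0):
    """Closed ideate-evaluate loop with greedy (1+1)-style selection."""
    rng = random.Random(seed)
    best = rng.uniform(-10, 10)          # initial candidate "algorithm"
    best_fit = evaluate(best)
    for _ in range(generations):
        child = propose_variant(best, rng)   # ideation step (LLM in practice)
        fit = evaluate(child)                # evaluation-in-the-loop
        if fit > best_fit:                   # keep only improvements
            best, best_fit = child, fit
    return best, best_fit

best, fit = discovery_loop()
print(best, fit)
```

In the real frameworks the candidate is executable code, the evaluation is a benchmark run under guardrails (tests, time/resource caps), and the selection strategy is richer than this greedy loop.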

Attendees will receive links to EASE/LLaMEA repositories, documentation and benchmarking environments. We will highlight key guardrails (testing, analysis, time/resource caps) and best practices in evaluation and benchmarking.

Advanced Topics

Building on these frameworks, we will discuss the orchestration problem: how ensembles of small and large language models can cooperatively drive algorithmic discovery.

  • Coordination layers where smaller, specialized models perform constrained optimization, symbolic reasoning or surrogate evaluation.
  • Larger foundation models handling global synthesis and hypothesis generation.
  • A multi-agent AutoML orchestration paradigm integrating human oversight, modular LLM agents and evolutionary search in a closed feedback loop.
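The division of labour above can be sketched as a toy pipeline. All agent names and the candidate schema here are hypothetical illustrations, not any framework's real API: small specialized "agents" check constraints and produce a cheap surrogate score, and a synthesizer (standing in for a large foundation model) aggregates their outputs into a global ranking.

```python
def constraint_agent(candidate):
    """Small-model role: verify a hard constraint (here: budget cap)."""
    return candidate["budget"] <= 100

def surrogate_agent(candidate):
    """Small-model role: cheap surrogate quality score in [0, 1]."""
    return max(0.0, 1.0 - abs(candidate["step_size"] - 0.5))

def synthesizer(candidate, checks, score):
    """Large-model role: global verdict from the specialists' outputs."""
    return score if all(checks) else 0.0

def orchestrate(candidates):
    """Rank candidates by the synthesized verdict, best first."""
    ranked = []
    for c in candidates:
        verdict = synthesizer(c, [constraint_agent(c)], surrogate_agent(c))
        ranked.append((verdict, c["name"]))
    ranked.sort(reverse=True)
    return ranked

candidates = [
    {"name": "A", "budget": 80, "step_size": 0.4},
    {"name": "B", "budget": 200, "step_size": 0.5},  # violates the budget cap
    {"name": "C", "budget": 50, "step_size": 0.9},
]
print(orchestrate(candidates))
```

A human overseer would sit on top of this loop, inspecting the ranking and vetoing or redirecting the search, which is exactly the closed feedback loop the paradigm describes.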

We will show how human-in-the-loop co-discovery injects domain expertise and interpretability, keeping automated exploration guided, explainable and auditable.

Finally, we introduce a statistical validation layer that grounds algorithmic discovery in rigorous evidence, embedding Bayesian comparison, stochastic dominance tests and sequential evaluation directly into workflows. This turns LLM-driven discovery into a scientifically testable process and points towards Auto-Science systems that both generate and justify new algorithms.
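One building block of such a validation layer can be sketched as follows: a bootstrap estimate of how often algorithm A beats algorithm B over resampled pairs of runs. The run results are synthetic; a real workflow would draw them from benchmark data (e.g. BLADE/IOH runs) and use the stronger machinery named above (Bayesian comparison, stochastic dominance tests, sequential evaluation) rather than this simple estimate.

```python
import random

def prob_a_beats_b(runs_a, runs_b, n_boot=2000, seed=1):
    """Bootstrap estimate of P(A beats B) over resampled run pairs."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_boot):
        a = rng.choice(runs_a)
        b = rng.choice(runs_b)
        wins += a < b            # minimization: lower final error is better
    return wins / n_boot

runs_a = [0.9, 1.1, 1.0, 0.8, 1.2]   # synthetic final errors, algorithm A
runs_b = [1.4, 1.6, 1.3, 1.5, 1.7]   # synthetic final errors, algorithm B
print(prob_a_beats_b(runs_a, runs_b))
```

Embedding checks like this directly after each discovery step is what makes the pipeline's claims testable rather than anecdotal.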

Aims & Learning Objectives

The goal is to equip researchers and practitioners with conceptual tools and concrete workflows to responsibly adopt LLM-driven algorithm discovery. Participants will be able to:

  • Differentiate frameworks: understand when to use LLaMEA vs. EASE, including typical problem scopes and trade-offs.
  • Understand the architectures and workflows of EASE and LLaMEA.
  • Specify discovery tasks with measurable objectives and guardrails.
  • Benchmark automatically discovered solutions in a responsible, reproducible way.

Target Audience & Prerequisites

Target audience:

  • CI/EC/ML researchers and practitioners interested in automating algorithm or pipeline discovery and comparing frameworks.
  • Evolutionary computation researchers working on automated design of EC techniques.
  • AI and data scientists exploring LLM-in-the-loop workflows for optimization and model/algorithm design.

Prerequisite knowledge: basic machine learning and CI/metaheuristics, plus fundamentals of using LLMs.

Detailed Outline (1.5 hours)

  • Introduction & landscape: why automated algorithm discovery matters (10 min)
  • Core paradigms: LLaMEA, EASE, ReEvo, FunSearch, AlphaEvolve, LHNS, MCTS-AHD, PartEVO (10 min)
  • Deep dive into EASE architecture (10 min)
  • EASE: frontend/backend architecture + live examples (15 min)
  • LLaMEA architecture and variants: LLaMEA-HPO & LLaMEA-BO, BLADE (15 min)
  • Evaluation & safety, benchmarks (10 min)
  • Frontiers & open challenges: bias, compute cost, integration, orchestration of models (10 min)
  • Open Q&A (10 min)

Tutorial Co-authors

  • Roman Senkerik, A.I.Lab, Tomas Bata University in Zlín, Czech Republic
  • Niki van Stein, Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands
  • Michal Pluhacek, AI Center of Excellence, AGH University of Krakow, Poland
  • Swagatam Das, Electronics and Communication Sciences Unit, Indian Statistical Institute, Kolkata, India

Speaker Bios

Roman Senkerik

Roman Senkerik is Head of A.I.Lab at the Department of Informatics and Artificial Intelligence, Tomas Bata University in Zlín. His current focus is generative AI, especially LLM-driven automated design, evaluation-in-the-loop workflows and AutoML. He is co-architect of EASE (Effortless Algorithmic Solution Evolution), an open, modular framework that automates the creation and refinement of algorithms, code, text and images using LLMs and other generators. His broader research covers metaheuristics, adaptive strategies and parameter control in Differential Evolution, benchmarking and real-world optimization applications. He has authored over 300 peer-reviewed publications and has extensive experience organizing tutorials, special sessions and workshops at major international events.

Niki van Stein

Niki van Stein is an Associate Professor at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, specializing in Explainable Artificial Intelligence and Evolutionary Computing. Since 2022, she has led the XAI research group. Her work focuses on the intersection of AI and optimization, including automated algorithm design with applications in predictive maintenance, time-series analysis and engineering design. She obtained her PhD in Computer Science from Leiden University in 2018. With over 100 peer-reviewed publications and multiple best paper awards, she has made significant contributions to evolutionary computing and explainable AI.

Michal Pluhacek

Michal Pluhacek is the ARTIQ project leader and a professor at AGH University of Krakow. His research spans evolutionary computation, swarm intelligence and, more recently, applications of large language models. He has broad international experience and numerous publications at world-leading conferences and in respected journals. He received his PhD in Information Technologies in 2016 with a thesis on the development and modification of evolutionary computation techniques, and was awarded the title of associate professor in 2023.

Swagatam Das

Swagatam Das is a Professor at the Electronics and Communication Sciences Unit, Indian Statistical Institute, Kolkata, and Professor-in-Charge of the Computer and Communication Sciences Division. His research interests include deep learning and non-convex optimization, and he has published more than 400 research articles. He is founding Co-Editor-in-Chief of Swarm and Evolutionary Computation and has served as Associate Editor for several leading IEEE and Elsevier journals. With over 37,000 citations and an H-index of around 90, he is a prominent figure in swarm and evolutionary computation and is an ACM Distinguished Speaker.