
Adaptive Meta-Loss Networks: Learning Task-Agnostic Loss Functions via Evolutionary Optimization

  • Mirna Yunita
  • Xiabi Liu*
  • Zhaoyang Hai
  • Rachmat Muwardi

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Designing appropriate loss functions is critical to the success of supervised learning models. However, most conventional losses are fixed and manually designed, making them suboptimal for diverse and dynamic learning scenarios. In this work, we propose an Adaptive Meta-Loss Network (Adaptive-MLN) that learns to generate task-agnostic loss functions tailored to evolving classification problems. Unlike traditional methods that rely on static objectives, Adaptive-MLN treats the loss function itself as a trainable component, parameterized by a shallow neural network. To enable flexible, gradient-free optimization, we introduce a hybrid evolutionary approach that combines Genetic Algorithms (GA) for global exploration and Evolution Strategies (ES) for local refinement. This co-evolutionary process dynamically adjusts the loss landscape, improving model generalization without relying on analytic gradients or handcrafted heuristics. Experimental evaluations on synthetic tasks and the CIFAR-10 and MNIST datasets demonstrate that our approach consistently outperforms standard losses such as Cross-Entropy and Mean Squared Error in terms of accuracy, convergence, and adaptability.
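The core idea in the abstract — a loss function parameterized by a shallow network and tuned by a hybrid of GA-style selection/crossover and ES-style Gaussian refinement — can be sketched in a few dozen lines. This is not the authors' code: the network width, the population settings, and in particular the toy fitness (matching cross-entropy on sampled probability/label pairs, standing in for the paper's far more expensive train-then-validate fitness) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM_H = 8  # hidden width of the shallow meta-loss network (assumed)

def unpack(theta):
    """Split a flat parameter vector into the shallow net's weights."""
    W1 = theta[:2 * DIM_H].reshape(2, DIM_H)
    b1 = theta[2 * DIM_H:3 * DIM_H]
    W2 = theta[3 * DIM_H:4 * DIM_H].reshape(DIM_H, 1)
    b2 = theta[4 * DIM_H:]
    return W1, b1, W2, b2

def meta_loss(theta, p, y):
    """Per-sample loss generated by the meta-network.

    Input is (predicted probability, target); a softplus output layer
    keeps every generated loss value non-negative.
    """
    W1, b1, W2, b2 = unpack(theta)
    x = np.stack([p, y], axis=1)
    h = np.tanh(x @ W1 + b1)
    return np.logaddexp(0.0, h @ W2 + b2).ravel()

# Toy fitness (an assumption for this sketch): how closely the generated
# loss tracks binary cross-entropy on sampled (probability, label) pairs.
P = rng.uniform(0.01, 0.99, 256)
Y = rng.integers(0, 2, 256).astype(float)
CE = -(Y * np.log(P) + (1 - Y) * np.log(1 - P))

def fitness(theta):
    return -np.mean((meta_loss(theta, P, Y) - CE) ** 2)

n_params = 2 * DIM_H + DIM_H + DIM_H + 1
pop = rng.normal(0.0, 0.5, (20, n_params))

best_theta, best_fit, first_best = None, -np.inf, None
for gen in range(200):
    scores = np.array([fitness(t) for t in pop])
    if gen == 0:
        first_best = scores.max()
    i = scores.argmax()
    if scores[i] > best_fit:
        best_theta, best_fit = pop[i].copy(), scores[i]
    elite = pop[np.argsort(scores)[-10:]]               # GA: selection
    pa = elite[rng.integers(0, 10, 20)]
    pb = elite[rng.integers(0, 10, 20)]
    mask = rng.random((20, n_params)) < 0.5             # GA: uniform crossover
    children = np.where(mask, pa, pb)
    children += rng.normal(0.0, 0.05, children.shape)   # ES: Gaussian refinement
    pop = children
```

In the paper the fitness of a candidate loss is presumably the validation performance of a model trained with it, which makes each evaluation a full (short) training run; the cheap surrogate above only demonstrates the gradient-free GA+ES loop itself.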

Original language: English
Article number: 83
Journal: Computers, Materials and Continua
Volume: 87
Issue number: 2
DOIs
Publication status: Published - 2026
Externally published: Yes

Keywords

  • adaptive loss function
  • classification
  • evolutionary strategy
  • genetic algorithm
  • meta-learning
  • task-agnostic optimization
