Abstract
Uncertainty quantification in neural networks enables the assessment of predictive reliability in artificial intelligence systems, thereby reducing the risk of unsafe decisions. Existing approaches rely heavily on ensemble construction to sample the model parameter space and capture decision variability. Under realistic resource constraints, however, small-scale sampling yields too few evidence sources and inaccurate uncertainty estimates. In addition, the design of the uncertainty metric strongly influences estimation accuracy and may limit applicability across different types of machine learning (ML) tasks. In this paper, a Systematic Reusable Ensemble (SRE) framework is proposed for uncertainty quantification. The approach reuses and shares neural network components during retraining, efficiently generating multiple model instances within a single training process. A compounded ensemble pruning strategy is further introduced to promote more uniform sampling of the parameter space. A general fusion metric is then developed from evidence theory with a redesigned trust allocation mechanism. Experimental results demonstrate that the proposed framework systematically reduces ensemble construction overhead while improving the reliability of uncertainty estimation. The generalization capability of the SRE framework is further validated through its effectiveness in identifying high-risk decisions across at least five categories of ML tasks.
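The paper's fusion metric and redesigned trust allocation mechanism are not reproduced here. As a point of reference for the evidence-theoretic fusion the abstract builds on, the sketch below shows classical Dempster-Shafer combination of ensemble member predictions over singleton classes plus a full-frame ignorance mass. The fixed `ignorance` discount and the three-member toy ensemble are illustrative assumptions, not the paper's mechanism.

```python
import numpy as np

def to_mass(probs, ignorance=0.1):
    """Convert one member's softmax output into a Dempster-Shafer mass
    function: singleton masses plus a mass on the whole frame (ignorance).
    `ignorance` is a hypothetical fixed discount; the SRE trust allocation
    mechanism would instead set this per ensemble member."""
    b = (1.0 - ignorance) * np.asarray(probs, dtype=float)
    return b, ignorance

def dempster_combine(b1, u1, b2, u2):
    """Dempster's rule of combination, restricted to singleton sets
    plus the full frame of discernment."""
    # Conflict: mass jointly assigned to incompatible singleton pairs.
    conflict = np.sum(np.outer(b1, b2)) - np.sum(b1 * b2)
    scale = 1.0 - conflict
    b = (b1 * b2 + b1 * u2 + b2 * u1) / scale
    u = (u1 * u2) / scale
    return b, u

# Fuse predictions from three hypothetical ensemble members (3 classes).
members = [
    [0.70, 0.20, 0.10],
    [0.60, 0.30, 0.10],
    [0.15, 0.75, 0.10],  # dissenting member raises residual uncertainty
]
b, u = to_mass(members[0])
for probs in members[1:]:
    b2, u2 = to_mass(probs)
    b, u = dempster_combine(b, u, b2, u2)

print("fused singleton beliefs:", np.round(b, 3))
print("residual uncertainty u :", round(u, 3))
```

In this scheme, agreement among members concentrates belief on a single class, while disagreement (as with the third member above) is absorbed as conflict and surfaces as a larger residual uncertainty mass, which is the kind of signal used to flag high-risk decisions.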
| Original language | English |
|---|---|
| Article number | 104343 |
| Journal | Advanced Engineering Informatics |
| Volume | 71 |
| DOIs | |
| Publication status | Published - Apr 2026 |
Keywords
- Ensembles
- Evidence theory
- Neural networks
- Uncertainty quantification