TY - JOUR
T1 - Time-Efficient Ensemble Learning with Sample Exchange for Edge Computing
AU - Chen, Wu
AU - Yu, Yong
AU - Gai, Keke
AU - Liu, Jiamou
AU  - Choo, Kim-Kwang Raymond
N1 - Publisher Copyright:
© 2021 Association for Computing Machinery.
PY - 2021/8
Y1 - 2021/8
N2  - In existing ensemble learning algorithms (e.g., random forest), each base learner's model needs the entire dataset for sampling and training. However, this may not be practical in many real-world applications, and it incurs additional computational costs. To achieve better efficiency, we propose a decentralized framework, Multi-Agent Ensemble, which leverages edge computing to facilitate ensemble learning by balancing access restrictions (each learner sees only a small sub-dataset) against accuracy enhancement. Specifically, network edge nodes (learners) perform classification and prediction in our framework. Data is distributed to multiple base learners, which exchange samples via an interaction mechanism to improve prediction. The proposed approach relies on distributed training rather than conventional centralized learning. Findings from experimental evaluations on 20 real-world datasets suggest that Multi-Agent Ensemble outperforms other ensemble approaches in accuracy even though its base learners require fewer samples (i.e., a significant reduction in computation costs).
AB  - In existing ensemble learning algorithms (e.g., random forest), each base learner's model needs the entire dataset for sampling and training. However, this may not be practical in many real-world applications, and it incurs additional computational costs. To achieve better efficiency, we propose a decentralized framework, Multi-Agent Ensemble, which leverages edge computing to facilitate ensemble learning by balancing access restrictions (each learner sees only a small sub-dataset) against accuracy enhancement. Specifically, network edge nodes (learners) perform classification and prediction in our framework. Data is distributed to multiple base learners, which exchange samples via an interaction mechanism to improve prediction. The proposed approach relies on distributed training rather than conventional centralized learning. Findings from experimental evaluations on 20 real-world datasets suggest that Multi-Agent Ensemble outperforms other ensemble approaches in accuracy even though its base learners require fewer samples (i.e., a significant reduction in computation costs).
KW - Edge computing
KW - Multi-Agent Ensemble
KW - decentralized ensemble learning
KW - ensemble learning
UR - http://www.scopus.com/inward/record.url?scp=85114270093&partnerID=8YFLogxK
U2 - 10.1145/3409265
DO - 10.1145/3409265
M3 - Article
AN - SCOPUS:85114270093
SN - 1533-5399
VL - 21
JO - ACM Transactions on Internet Technology
JF - ACM Transactions on Internet Technology
IS - 3
M1 - 3409265
ER -