TY - JOUR
T1 - Guided Random Projection
T2 - A Lightweight Feature Representation for Image Classification
AU - Zhou, Shichao
AU - Wang, Junbo
AU - Wang, Wenzheng
AU - Tang, Linbo
AU - Zhao, Baojun
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2021
Y1 - 2021
N2 - Modern neural networks [e.g., Deep Neural Networks (DNNs)] have recently gained increasing attention for visible image classification tasks. Their success mainly results from their capability to learn a complex feature mapping of inputs (i.e., a feature representation) that carries the image manifold structure relevant to the task. Despite the current popularity of these techniques, they are costly to train with back-propagation (BP)-based iteration rules. Here, we advocate a lightweight feature representation framework termed Guided Random Projection (GRP), which is closely related to classical random neural networks and randomization-based kernel machines. Specifically, we present an efficient optimization method that explicitly learns the distribution of random hidden weights instead of relying on time-consuming fine-tuning or task-independent randomization configurations. Further, we analyze the underlying mechanisms of GRP with subspace theories. Experiments were conducted on visible image classification benchmarks to evaluate our claims. Results show that the proposed method achieves a reasonable accuracy improvement (more than 2%) at moderate training cost (on the order of seconds) compared with other randomization methods.
AB - Modern neural networks [e.g., Deep Neural Networks (DNNs)] have recently gained increasing attention for visible image classification tasks. Their success mainly results from their capability to learn a complex feature mapping of inputs (i.e., a feature representation) that carries the image manifold structure relevant to the task. Despite the current popularity of these techniques, they are costly to train with back-propagation (BP)-based iteration rules. Here, we advocate a lightweight feature representation framework termed Guided Random Projection (GRP), which is closely related to classical random neural networks and randomization-based kernel machines. Specifically, we present an efficient optimization method that explicitly learns the distribution of random hidden weights instead of relying on time-consuming fine-tuning or task-independent randomization configurations. Further, we analyze the underlying mechanisms of GRP with subspace theories. Experiments were conducted on visible image classification benchmarks to evaluate our claims. Results show that the proposed method achieves a reasonable accuracy improvement (more than 2%) at moderate training cost (on the order of seconds) compared with other randomization methods.
KW - Image classification
KW - feature representation
KW - guided random projection
KW - neural network
UR - http://www.scopus.com/inward/record.url?scp=85115149321&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2021.3112552
DO - 10.1109/ACCESS.2021.3112552
M3 - Article
AN - SCOPUS:85115149321
SN - 2169-3536
VL - 9
SP - 129110
EP - 129118
JO - IEEE Access
JF - IEEE Access
ER -