Tensor completion via a multi-linear low-n-rank factorization model

Huachun Tan*, Bin Cheng, Wuhong Wang, Yu Jin Zhang, Bin Ran

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

70 Citations (Scopus)

Abstract

The tensor completion problem is to recover a low-n-rank tensor from a subset of its entries. The main solution strategy has been based on extensions of the trace norm for minimizing the tensor rank via convex optimization. This strategy incurs the computational cost of the singular value decomposition (SVD), which becomes increasingly expensive as the size of the underlying tensor grows. To reduce this cost, we propose a multi-linear low-n-rank factorization model and apply a nonlinear Gauss-Seidel method that requires only the solution of a linear least squares problem per iteration. Numerical results show that the proposed algorithm reliably solves a wide range of problems at least several times faster than the trace norm minimization algorithm.
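The abstract outlines an alternating least-squares (nonlinear Gauss-Seidel) scheme over low-rank factorizations of the mode-n unfoldings. The following is a minimal NumPy sketch of that general idea, not the authors' implementation: the function names, equal mode weights, pseudoinverse-based updates, and mean-fill initialization are all illustrative assumptions.

import numpy as np

def unfold(T, n):
    # Mode-n unfolding: bring mode n to the front, then flatten the rest.
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def fold(M, n, shape):
    # Inverse of unfold: reshape and move mode n back into place.
    lead = (shape[n],) + shape[:n] + shape[n + 1:]
    return np.moveaxis(M.reshape(lead), 0, n)

def lowrank_tensor_completion(T_obs, mask, ranks, n_iters=200):
    # T_obs: tensor with observed entries filled in (others arbitrary)
    # mask:  boolean array, True where an entry is observed
    # ranks: assumed rank r_n of each mode-n unfolding (hypothetical input)
    shape = T_obs.shape
    N = T_obs.ndim
    alpha = np.full(N, 1.0 / N)                    # equal mode weights (assumption)
    Z = np.where(mask, T_obs, T_obs[mask].mean())  # crude initial fill (assumption)

    rng = np.random.default_rng(0)
    X = [rng.standard_normal((shape[n], ranks[n])) for n in range(N)]
    Y = [rng.standard_normal((ranks[n], Z.size // shape[n])) for n in range(N)]

    for _ in range(n_iters):
        Z_new = np.zeros(shape)
        for n in range(N):
            Zn = unfold(Z, n)
            # Each factor update is a linear least squares solve, here via pseudoinverse.
            X[n] = Zn @ np.linalg.pinv(Y[n])
            Y[n] = np.linalg.pinv(X[n]) @ Zn
            Z_new += alpha[n] * fold(X[n] @ Y[n], n, shape)
        # Keep the observed entries fixed; update only the missing ones.
        Z = np.where(mask, T_obs, Z_new)
    return Z

A toy call might look like lowrank_tensor_completion(T, mask, ranks=[2, 2, 2]) for a third-order tensor whose n-rank is estimated as (2, 2, 2); avoiding any SVD in the loop is what the abstract identifies as the source of the speedup over trace norm minimization.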

Original language: English
Pages (from-to): 161-169
Number of pages: 9
Journal: Neurocomputing
Volume: 133
DOI
Publication status: Published - 10 Jun 2014
