A Fused CP Factorization Method for Incomplete Tensors

Yuankai Wu, Huachun Tan*, Yong Li, Jian Zhang, Xiaoxuan Chen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

52 Citations (Scopus)

Abstract

Low-rank tensor completion methods have recently been advanced for modeling sparsely observed data with a multimode structure. However, low-rank priors alone may fail to capture the model factors of general tensor objects. The most common remedy is to combine the low-rank priors with additional regularizations. However, owing to the complex nature and diverse characteristics of real-world multiway data, using only a single regularization or a few regularizations remains insufficient, and there are few systematic experimental reports on the advantages of these regularizations for tensor completion. To fill these gaps, we propose a modified CP tensor factorization framework that simultaneously fuses the ℓ2-norm constraint, sparsity (ℓ1 norm), manifold, and smoothness information. The factorization problem is solved by combining Nesterov's optimal gradient method with block coordinate descent. Specifically, we construct smooth approximations to the ℓ1-norm and total-variation (TV) regularizers, and each tensor factor is then updated by the projected gradient method, with the step size determined by a Lipschitz constant. Extensive experiments on simulated data, visual data completion, intelligent transportation systems, and GPS data of user involvement confirm the efficiency of our method. Moreover, the results reveal, to a certain extent, the characteristics of these commonly used regularizations for tensor completion and provide experimental guidance on how to use them.
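The abstract outlines the optimization scheme: block coordinate descent over the CP factors, with each factor updated by Nesterov-accelerated projected-gradient steps whose step size is set from a Lipschitz constant, and with the nonsmooth regularizers replaced by smooth surrogates. Below is a minimal Python/NumPy sketch of that scheme, not the authors' implementation: the function and parameter names (fused_cp_completion, lam, mu, delta) are illustrative assumptions, a Frobenius-ball projection stands in for the ℓ2-norm constraint, and the manifold and TV terms are omitted for brevity.

# A minimal sketch (not the authors' implementation) of CP tensor completion by
# block coordinate descent, where each factor matrix is updated with a few
# Nesterov-accelerated projected-gradient steps using a 1/L step size for a
# Lipschitz constant L.  The sparsity (l1) term uses a Huber-type smooth
# surrogate; the l2-norm constraint is imitated by a Frobenius-ball projection;
# the manifold and TV terms from the abstract are omitted for brevity.
# All names and parameter values (lam, mu, delta, ...) are illustrative assumptions.
import numpy as np


def khatri_rao(U, V):
    # Column-wise Khatri-Rao product of U (I x R) and V (J x R) -> (I*J x R).
    return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])


def smooth_l1_grad(X, mu):
    # Gradient of the Huber-smoothed l1 norm with smoothing parameter mu.
    return np.clip(X / mu, -1.0, 1.0)


def fused_cp_completion(T, mask, rank=3, lam=0.1, mu=1e-3, delta=100.0,
                        n_outer=50, n_inner=10, seed=0):
    # Complete a 3-way tensor T observed where mask == 1.
    rng = np.random.default_rng(seed)
    dims = T.shape
    factors = [rng.standard_normal((d, rank)) for d in dims]

    for _ in range(n_outer):
        for mode in range(3):                      # block coordinate descent
            # Unfold the data and the mask along the current mode.
            Tm = np.moveaxis(T, mode, 0).reshape(dims[mode], -1)
            Mm = np.moveaxis(mask, mode, 0).reshape(dims[mode], -1)
            others = [factors[m] for m in range(3) if m != mode]
            KR = khatri_rao(others[0], others[1])  # matches the unfolding order
            # Lipschitz constant of the smooth part of the block objective.
            L = np.linalg.norm(KR.T @ KR, 2) + lam / mu
            A = factors[mode]
            Y, t = A.copy(), 1.0                   # Nesterov extrapolation state
            for _ in range(n_inner):
                grad = (Mm * (Y @ KR.T - Tm)) @ KR + lam * smooth_l1_grad(Y, mu)
                A_new = Y - grad / L
                nrm = np.linalg.norm(A_new)        # project onto a Frobenius ball
                if nrm > delta:
                    A_new *= delta / nrm
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                Y = A_new + (t - 1.0) / t_new * (A_new - A)
                A, t = A_new, t_new
            factors[mode] = A

    # Reconstruct the full tensor from the CP factors.
    return np.einsum('ir,jr,kr->ijk', *factors)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A, B, C = rng.random((10, 3)), rng.random((12, 3)), rng.random((8, 3))
    T_true = np.einsum('ir,jr,kr->ijk', A, B, C)
    mask = (rng.random(T_true.shape) < 0.4).astype(float)
    T_hat = fused_cp_completion(T_true * mask, mask, rank=3)
    miss = 1.0 - mask
    err = np.linalg.norm(miss * (T_hat - T_true)) / np.linalg.norm(miss * T_true)
    print(f"relative error on missing entries: {err:.3f}")

Under these assumptions, the inner loop is a FISTA-style update per factor block; the paper's manifold and smoothness (TV) regularizers would contribute additional terms to the gradient and to the Lipschitz constant.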

Original language: English
Article number: 8421043
Pages (from-to): 751-764
Number of pages: 14
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 30
Issue number: 3
Publication status: Published - Mar 2019
Externally published: Yes

