Robust Lifelong Multi-task Multi-view Representation Learning
Sun G (孙干) [1,2,3]; Cong Y (丛杨) [1,2]; Li, Jun [4]; Fu, Yun [4]
Department: Robotics Laboratory
Conference Name: 2018 IEEE International Conference on Big Knowledge (ICBK)
Conference Date: November 17-18, 2018
Conference Place: Singapore
Source Publication: 2018 IEEE International Conference on Big Knowledge (ICBK)
Publisher: IEEE
Publication Place: New York
Publication Date: 2018
Pages: 91-98
Indexed By: EI; CPCI (ISTP)
EI Accession Number: 20190706502773
WOS ID: WOS:000468072300012
Contribution Rank: 1
ISBN: 978-1-5386-9125-0
Keywords: Lifelong machine learning; Multi-view learning; Multi-task learning; Online learning
Abstract: State-of-the-art multi-task multi-view (MTMV) learning tackles the scenario in which multiple tasks are associated with each other via multiple shared feature views. However, in online practical scenarios where the learning tasks have heterogeneous features collected from multiple views (e.g., multiple sources), state-of-the-art single-view methods cannot work well. To tackle this issue, we propose a Robust Lifelong Multi-task Multi-view Representation Learning (rLM2L) model to accumulate knowledge from online multi-view tasks. More specifically, we first design a set of view-specific libraries to maintain the intra-view correlation information of each view, and further impose an orthogonality-promoting term to enforce the libraries to be as independent as possible. When a new multi-view task arrives, the rLM2L model decomposes all views of the new task into a common view-invariant space by transferring the knowledge of the corresponding library. In this view-invariant space, capturing the underlying inter-view correlation and identifying task-specific views for the new task are jointly performed via a robust multi-task learning formulation. The view-specific libraries are then refined over time to keep improving across all tasks. For model optimization, the proximal alternating linearized minimization (PALM) algorithm is adopted to optimize our nonconvex model alternately and thereby achieve lifelong learning. Finally, extensive experiments on benchmark datasets show that the proposed rLM2L model outperforms existing lifelong learning models, while discovering task-specific views from sequential multi-view tasks with less computational burden.
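To make the scheme concrete, below is a minimal NumPy sketch of the kind of alternation the abstract describes: each arriving task's views are factored through view-specific libraries into one shared, view-invariant code, and an orthogonality-promoting penalty keeps each library near-orthonormal. This is an illustrative toy under an assumed objective sum_v ||X_v - S L_v^T||_F^2 + lam ||L_v^T L_v - I||_F^2, optimized with plain alternating gradient steps rather than the authors' PALM solver; the function and parameter names (rlm2l_sketch, k, lam) are hypothetical, not from the paper.

import numpy as np

def rlm2l_sketch(tasks, k=10, lam=0.1, lr=1e-3, n_iter=300, seed=0):
    """Toy alternating-minimization sketch (NOT the paper's exact model/solver).
    tasks : list of tasks arriving sequentially; each task is a list of
            view matrices X_v of shape (n_t, d_v), one per view.
    Learns one view-specific library L_v (d_v x k) per view and, for each
    task, a single view-invariant code S (n_t x k) shared by all views."""
    rng = np.random.default_rng(seed)
    n_views = len(tasks[0])
    L = [0.01 * rng.standard_normal((X.shape[1], k)) for X in tasks[0]]
    I = np.eye(k)
    for X in tasks:                          # lifelong: tasks come one by one
        S = 0.01 * rng.standard_normal((X[0].shape[0], k))
        for _ in range(n_iter):
            # (1) code step: gradient of sum_v ||X_v - S L_v^T||_F^2 w.r.t. S
            grad_S = 2 * sum((S @ L[v].T - X[v]) @ L[v] for v in range(n_views))
            S -= lr * grad_S
            # (2) library step: reconstruction gradient plus the gradient of
            #     the orthogonality-promoting term lam * ||L_v^T L_v - I||_F^2
            for v in range(n_views):
                grad_L = (2 * (S @ L[v].T - X[v]).T @ S
                          + 4 * lam * L[v] @ (L[v].T @ L[v] - I))
                L[v] -= lr * grad_L
    return L

# Usage on synthetic data: 3 sequential tasks, two views of widths 20 and 30.
rng = np.random.default_rng(1)
tasks = [[rng.standard_normal((50, d)) for d in (20, 30)] for _ in range(3)]
libraries = rlm2l_sketch(tasks)

Because the libraries persist across the task loop while each task's code S is discarded, knowledge accumulated from earlier tasks transfers to later ones, which is the lifelong-learning behavior the abstract describes.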
Language: English
Document Type: Conference Paper
Identifier: http://ir.sia.cn/handle/173321/23858
Collection: Robotics Laboratory
Corresponding Author: Cong Y (丛杨)
Affiliation1.State Key Laboratory of Robotics, Shenyang Institute of Automation
2.Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
3.University of Chinese Academy of Sciences, Beijing, China
4.Department of Electrical and Computer Engineering, Northeastern University, USA
Recommended Citation (GB/T 7714):
Sun G, Cong Y, Li J, et al. Robust Lifelong Multi-task Multi-view Representation Learning[C]. New York: IEEE, 2018: 91-98.
File in This Item:
Robust lifelong multi-task multi-view representation learning.pdf (2490 KB), Adobe PDF, Conference Paper, Open Access (CC BY-NC-SA)
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.