Robust Lifelong Multi-task Multi-view Representation Learning
Author | Sun G (孙干)1,2,3
Department | Robotics Laboratory
Conference Name | 2018 IEEE International Conference on Big Knowledge (ICBK) |
Conference Date | November 17-18, 2018 |
Conference Place | Singapore |
Source Publication | 2018 IEEE International Conference on Big Knowledge (ICBK) |
Publisher | IEEE |
Publication Place | New York |
Publication Date | 2018
Pages | 91-98 |
Indexed By | EI ; CPCI(ISTP) |
EI Accession number | 20190706502773 |
WOS ID | WOS:000468072300012 |
Contribution Rank | 1 |
ISBN | 978-1-5386-9125-0 |
Keyword | Lifelong machine learning; Multi-view learning; Multi-task learning; Online learning
Abstract | State-of-the-art multi-task multi-view (MTMV) learning tackles the scenario where multiple tasks are associated with each other via multiple shared feature views. However, in practical online scenarios where the learning tasks have heterogeneous features collected from multiple views, e.g., multiple sources, state-of-the-art single-view methods do not work well. To tackle this issue, we propose a Robust Lifelong Multi-task Multi-view Representation Learning (rLM2L) model to accumulate knowledge from online multi-view tasks. More specifically, we first design a set of view-specific libraries to maintain the intra-view correlation information of each view, and further impose an orthogonality-promoting term to enforce the libraries to be as independent as possible. When a new multi-view task arrives online, the rLM2L model decomposes all views of the new task into a common view-invariant space by transferring the knowledge of the corresponding library. In this view-invariant space, capturing the underlying inter-view correlation and identifying task-specific views for the new task are jointly performed via a robust multi-task learning formulation. The view-specific libraries are then refined over time to keep improving across all tasks. For model optimization, the proximal alternating linearized minimization (PALM) algorithm is adopted to optimize our nonconvex model alternately, achieving lifelong learning. Finally, extensive experiments on benchmark datasets show that the proposed rLM2L model outperforms existing lifelong learning models, while it can discover task-specific views from sequential multi-view tasks with less computational burden.
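Note: the record contains no equations or code for the PALM scheme the abstract cites. Below is a minimal sketch of generic PALM applied to a stand-in library/coding objective, min_{L,S} 0.5·||X − LS||_F² + λ·||S||₁; the objective, the function `palm_factorize`, and all parameter names are illustrative assumptions, not the paper's actual rLM2L formulation (which additionally involves view-specific libraries, an orthogonality-promoting term, and robust multi-task terms).

```python
import numpy as np

def soft_threshold(Z, tau):
    """Proximal operator of tau * ||.||_1 (entrywise soft-thresholding)."""
    return np.sign(Z) * np.maximum(np.abs(Z) - tau, 0.0)

def palm_factorize(X, k, lam=0.1, n_iter=200, seed=0):
    """PALM sketch for min_{L,S} 0.5*||X - L S||_F^2 + lam*||S||_1.

    Alternates a proximal linearized gradient step on each block,
    with step sizes set from the Lipschitz constant of each partial
    gradient of the smooth coupling term.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    L = rng.standard_normal((d, k))
    S = rng.standard_normal((k, n))
    for _ in range(n_iter):
        # Block L: grad of 0.5||X - LS||_F^2 wrt L is (LS - X) S^T,
        # Lipschitz constant ||S S^T||_2; its penalty is zero, so the
        # proximal step reduces to a plain gradient step.
        lip_L = np.linalg.norm(S @ S.T, 2) + 1e-12
        L = L - ((L @ S - X) @ S.T) / lip_L
        # Block S: grad wrt S is L^T (LS - X), Lipschitz ||L^T L||_2;
        # the l1 penalty makes the proximal step a soft-threshold.
        lip_S = np.linalg.norm(L.T @ L, 2) + 1e-12
        S = soft_threshold(S - (L.T @ (L @ S - X)) / lip_S, lam / lip_S)
    return L, S
```

Each block update is a gradient step on the smooth coupling term followed by the proximal map of that block's nonsmooth penalty, which is the defining structure of PALM for nonconvex problems of this kind.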
Language | English
Document Type | Conference paper
Identifier | http://ir.sia.cn/handle/173321/23858 |
Collection | Robotics Laboratory
Corresponding Author | Cong Y(丛杨) |
Affiliation | 1. State Key Laboratory of Robotics, Shenyang Institute of Automation; 2. Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China; 3. University of Chinese Academy of Sciences, Beijing, China; 4. Department of Electrical and Computer Engineering, Northeastern University, USA
Recommended Citation GB/T 7714 | Sun G, Cong Y, Li Jun, et al. Robust Lifelong Multi-task Multi-view Representation Learning[C]. New York: IEEE, 2018: 91-98.
Files in This Item:
File Name/Size | DocType | Version | Access | License
Robust lifelong mult(2490KB) | Conference paper | | Open Access | CC BY-NC-SA
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.