SIA OpenIR > Robotics Laboratory (机器人学研究室)
Lifelong robotic visual-tactile perception learning
Dong JH(董家华)1,2,3; Cong Y(丛杨)1,2; Sun G(孙干)1,2; Zhang T(张涛)1,2,3
Source Publication: Pattern Recognition
Indexed By: SCI; EI
EI Accession Number: 20213110703281
WOS ID: WOS:000701148300015
Contribution Rank: 1
Funding Organization: National Key Research and Development Program of China under Grant 2019YFB1310300; National Natural Science Foundation of China under Grants 61821005 and 62003336; National Postdoctoral Innovative Talents Support Program (BX20200353); Natural Science Foundation of Liaoning Province of China under Grant 2020-KF-11-01
Keywords: Lifelong machine learning; Robotics; Visual-tactile perception; Cross-modality learning; Multi-task learning

Lifelong machine learning can learn a sequence of consecutive robotic perception tasks by transferring previous experience. However, 1) most existing lifelong-learning-based perception methods exploit only visual information for robotic tasks, neglecting tactile sensing, another important modality for capturing discriminative material properties; 2) they cannot explore the intrinsic relationships across different modalities or the common characterization among the different tasks of each modality, owing to the large divergence between heterogeneous feature distributions. To address these challenges, we propose a new Lifelong Visual-Tactile Learning (LVTL) model for continuous robotic visual-tactile perception tasks, which fully explores the latent correlations in both intra-modality and cross-modality aspects. Specifically, a modality-specific knowledge library is developed for each modality to explore common intra-modality representations across different tasks, while narrowing the intra-modality mapping divergence between semantic and feature spaces via an auto-encoder mechanism. Moreover, a sparse-constraint-based modality-invariant space is constructed to capture underlying cross-modality correlations and identify the contribution of each modality to newly arriving visual-tactile tasks. We further propose a modality consistency regularizer that efficiently aligns heterogeneous visual and tactile samples, ensuring semantic consistency between the different modality-specific knowledge libraries. After deriving an efficient model optimization strategy, we conduct extensive experiments on several representative datasets to demonstrate the superiority of our LVTL model. Evaluation experiments show that the proposed model significantly outperforms existing state-of-the-art methods, with improvements of about 1.16%–15.36% under different lifelong visual-tactile perception scenarios.
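The abstract's core mechanism can be illustrated with a minimal toy sketch. This is not the authors' LVTL implementation; all names, dimensions, and the soft-threshold sparsity step are illustrative assumptions. It shows the general idea of projecting heterogeneous visual and tactile features into a shared modality-invariant space via sparse projections, then measuring a simple modality-consistency penalty between the two modalities' embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for heterogeneous features: 5 visual samples (dim 8)
# and 5 tactile samples (dim 6) describing the same objects.
X_v = rng.normal(size=(5, 8))   # visual features
X_t = rng.normal(size=(5, 6))   # tactile features

d = 4                           # dimension of the shared modality-invariant space
W_v = rng.normal(size=(8, d))   # visual projection (hypothetical library stand-in)
W_t = rng.normal(size=(6, d))   # tactile projection

def soft_threshold(W, lam=0.5):
    """Proximal step for an l1 penalty: shrinks small weights to zero (sparsity)."""
    return np.sign(W) * np.maximum(np.abs(W) - lam, 0.0)

# Sparse constraint: zero out weak projection weights in each modality.
W_v_sparse = soft_threshold(W_v)
W_t_sparse = soft_threshold(W_t)

Z_v = X_v @ W_v_sparse          # visual samples in the shared space
Z_t = X_t @ W_t_sparse          # tactile samples in the shared space

# A simple consistency penalty: squared distance between the mean
# visual and mean tactile embeddings in the shared space.
consistency = float(np.linalg.norm(Z_v.mean(axis=0) - Z_t.mean(axis=0)) ** 2)
print(consistency)
```

In the paper this alignment is learned jointly with the task losses; here the projections are random and the penalty is merely evaluated, which is enough to show how heterogeneous feature dimensions (8 vs. 6) become directly comparable in the shared space.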

WOS Subject: Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic
WOS Research Area: Computer Science; Engineering
Funding Project: National Key Research and Development Program of China [2019YFB1310300]; National Natural Science Foundation of China [61821005]; National Natural Science Foundation of China [62003336]; National Postdoctoral Innovative Talents Support Program [BX20200353]; Natural Science Foundation of Liaoning Province of China [2020-KF-11-01]
Document Type: Journal article
Corresponding Author: Cong Y(丛杨)
Affiliation:
1. State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
2. Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110016, China
3. University of Chinese Academy of Sciences, Beijing 100049, China
Recommended Citation
GB/T 7714: Dong JH, Cong Y, Sun G, et al. Lifelong robotic visual-tactile perception learning[J]. Pattern Recognition, 2022, 121: 1-12.
APA: Dong JH, Cong Y, Sun G, & Zhang T. (2022). Lifelong robotic visual-tactile perception learning. Pattern Recognition, 121, 1-12.
MLA: Dong JH, et al. "Lifelong robotic visual-tactile perception learning". Pattern Recognition 121 (2022): 1-12.
Files in This Item:
File Name/Size | DocType | Version | Access | License
Lifelong robotic visual-tactile perception learning.pdf (1841KB) | Journal article | Published version | Open access | CC BY-NC-SA
File name: Lifelong robotic visual-tactile perception learning.pdf
Format: Adobe PDF

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.