SIA OpenIR
Visual Tactile Fusion Object Clustering
Zhang T(张涛)1,2; Cong Y(丛杨)1; Sun G(孙干)1,2; Wang QQ(王倩倩)3; Ding ZM(丁正明)4
Department: Robotics Laboratory (机器人学研究室)
Conference Name: Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)
Conference Date: February 7-12, 2020
Conference Place: New York
Source Publication: Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)
Publisher: AAAI Press
Publication Place: Palo Alto, California, USA
Publication Date: 2020
Pages: 10426-10433
Contribution Rank: 1
ISSN: 2159-5399
ISBN: 978-1-57735-835-0
Abstract: Object clustering, aiming at grouping similar objects into one cluster with an unsupervised strategy, has been extensively studied among various data-driven applications. However, most existing state-of-the-art object clustering methods (e.g., single-view or multi-view clustering methods) only explore visual information, while ignoring one of the most important sensing modalities, i.e., tactile information, which can help capture different object properties and further boost the performance of the object clustering task. To effectively exploit both visual and tactile modalities for object clustering, in this paper we propose a deep Auto-Encoder-like Non-negative Matrix Factorization framework for visual-tactile fusion clustering. Specifically, deep matrix factorization constrained by an under-complete Auto-Encoder-like architecture is employed to jointly learn hierarchical representations of the visual-tactile fusion data and preserve the local structure of the data-generating distributions of the visual and tactile modalities. Meanwhile, a graph regularizer is introduced to capture the intrinsic relations of data samples within each modality. Furthermore, we propose a modality-level consensus regularizer to effectively align the visual and tactile data in a common subspace, mitigating the gap between the two modalities. For model optimization, we present an efficient alternating minimization strategy to solve the proposed model. Finally, we conduct extensive experiments on public datasets to verify the effectiveness of our framework.
Language: English
Document Type: Conference Paper
Identifier: http://ir.sia.cn/handle/173321/27997
Collection: Robotics Laboratory (机器人学研究室)
Corresponding Author: Ding ZM(丁正明)
Affiliation:
1. State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences
2. University of Chinese Academy of Sciences
3. Xidian University
4. Indiana University-Purdue University Indianapolis, USA
Recommended Citation (GB/T 7714):
Zhang T, Cong Y, Sun G, et al. Visual Tactile Fusion Object Clustering[C]. Palo Alto, California, USA: AAAI Press, 2020: 10426-10433.
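The abstract above describes deep matrix factorization with an under-complete Auto-Encoder-like constraint, per-modality graph regularizers, and a modality-level consensus regularizer, optimized by alternating minimization. The exact objective and update rules are given in the paper itself; the following is only a minimal single-layer sketch of that general idea, assuming NumPy, projected-gradient updates, cosine-similarity affinity graphs, and hypothetical names (visual_tactile_nmf, build_laplacian). It is not the authors' implementation.

```python
import numpy as np


def build_laplacian(X):
    """Unnormalized graph Laplacian L = D - S from a cosine-similarity
    affinity between the columns (samples) of X. Illustrative choice only."""
    Xn = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)
    S = np.clip(Xn.T @ Xn, 0.0, None)   # keep non-negative affinities
    np.fill_diagonal(S, 0.0)
    return np.diag(S.sum(axis=1)) - S


def visual_tactile_nmf(X_vis, X_tac, k=10, n_iter=500,
                       lam_graph=0.1, lam_cons=1.0, lr=1e-3, seed=0):
    """Toy single-layer analogue of a visual-tactile fusion objective:
        sum_m 0.5*||X_m - W_m H_m||_F^2 + lam_graph*tr(H_m L_m H_m^T)
              + 0.5*lam_cons*||H_vis - H_tac||_F^2,   all factors >= 0,
    minimized by alternating projected-gradient steps on W_m and H_m."""
    rng = np.random.default_rng(seed)
    d_v, n = X_vis.shape
    d_t, _ = X_tac.shape
    W_v, H_v = rng.random((d_v, k)), rng.random((k, n))
    W_t, H_t = rng.random((d_t, k)), rng.random((k, n))
    L_v, L_t = build_laplacian(X_vis), build_laplacian(X_tac)

    for _ in range(n_iter):
        # Basis updates use the reconstruction term only.
        R_v, R_t = W_v @ H_v - X_vis, W_t @ H_t - X_tac
        W_v = np.maximum(W_v - lr * (R_v @ H_v.T), 0.0)
        W_t = np.maximum(W_t - lr * (R_t @ H_t.T), 0.0)
        # Coefficient updates add the graph and consensus gradients.
        R_v, R_t = W_v @ H_v - X_vis, W_t @ H_t - X_tac
        g_v = W_v.T @ R_v + 2.0 * lam_graph * (H_v @ L_v) + lam_cons * (H_v - H_t)
        g_t = W_t.T @ R_t + 2.0 * lam_graph * (H_t @ L_t) + lam_cons * (H_t - H_v)
        H_v = np.maximum(H_v - lr * g_v, 0.0)
        H_t = np.maximum(H_t - lr * g_t, 0.0)

    # Fused sample representation (columns = objects), ready for clustering.
    return 0.5 * (H_v + H_t)
```

In practice one would run k-means on the columns of the returned fused representation; the learning rate, regularization weights, and graph construction here are arbitrary placeholders.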
Files in This Item:
Visual Tactile Fusion Object Clustering.pdf (1443 KB), Conference Paper, Open Access, License: CC BY-NC-SA
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.