DeepHMap++: Combined Projection Grouping and Correspondence Learning for Full DoF Pose Estimation
Authors: Fu ML (付明亮)1,2,3; Zhou WJ (周维佳)1,2
Source Publication: Sensors (Switzerland)
Year: 2019
Volume: 19, Issue: 5, Pages: 1-19
Indexed By: SCI; EI
EI Accession Number: 20191306716057
WOS ID: WOS:000462540400051
Contribution Rank: 1
Funding Organization: National Natural Science Foundation of China
Keywords: 6D pose estimation; partial occlusion; projection grouping; correspondence evaluation
Abstract: In recent years, estimating the 6D pose of object instances with convolutional neural networks (CNNs) has received considerable attention. Depending on whether intermediate cues are used, the relevant literature can be roughly divided into two broad categories: direct methods and two-stage pipelines. For the latter, intermediate cues such as 3D object coordinates, semantic keypoints, or virtual control points, rather than pose parameters, are regressed by a CNN in the first stage. Object pose can then be solved from correspondence constraints constructed with these intermediate cues. In this paper, we focus on the postprocessing of a two-stage pipeline and propose to combine two learning concepts for estimating object pose under challenging scenes: projection grouping on one side, and correspondence learning on the other. We first employ a local-patch-based method to predict projection heatmaps, which encode the confidence distribution of the projections of the 3D bounding box's corners. A projection grouping module is then proposed to remove redundant local maxima from each layer of heatmaps. Instead of directly feeding 2D–3D correspondences to the perspective-n-point (PnP) algorithm, multiple correspondence hypotheses are sampled from each local maximum and its neighborhood and ranked by a correspondence-evaluation network. Finally, correspondences with higher confidence are selected to determine the object pose. Extensive experiments on three public datasets demonstrate that the proposed framework outperforms several state-of-the-art methods.
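To make the pipeline described above concrete, the sketch below illustrates one plausible reading of the postprocessing stage in Python with OpenCV: peaks are extracted from per-corner projection heatmaps, correspondence hypotheses are sampled from each peak and its neighborhood, scored, and the winning 2D–3D pairs are passed to PnP. This is a minimal sketch under stated assumptions, not the authors' implementation: local_maxima, estimate_pose, score_fn, the 4-neighborhood sampling, and the per-corner selection are illustrative stand-ins for the paper's learned projection grouping module and correspondence-evaluation network; only cv2.solvePnP is a standard library call.

import numpy as np
import cv2

def local_maxima(heatmap, threshold=0.5, window=5):
    # Collect pixel coordinates whose confidence is above `threshold`
    # and maximal within a (window x window) neighborhood.
    r = window // 2
    h, w = heatmap.shape
    peaks = []
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = heatmap[y - r:y + r + 1, x - r:x + r + 1]
            if heatmap[y, x] >= threshold and heatmap[y, x] == patch.max():
                peaks.append((x, y))
    return peaks

def estimate_pose(heatmaps, corners_3d, camera_matrix, score_fn):
    # heatmaps:   one confidence map per 3D bounding-box corner (8 total)
    # corners_3d: (8, 3) corner coordinates in the object frame
    # score_fn:   stand-in for the correspondence-evaluation network; maps
    #             a (3D corner, 2D candidate) pair to a confidence score
    best = {}
    for idx, (hm, corner) in enumerate(zip(heatmaps, corners_3d)):
        for (x, y) in local_maxima(hm):
            # Sample hypotheses from the peak and its 4-neighborhood.
            for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
                cand = (x + dx, y + dy)
                s = score_fn(corner, cand)
                # Keep the highest-scoring candidate per corner, a crude
                # surrogate for projection grouping plus hypothesis ranking.
                if idx not in best or s > best[idx][0]:
                    best[idx] = (s, cand)
    obj_pts = np.float32([corners_3d[i] for i in best])
    img_pts = np.float32([best[i][1] for i in best])
    if len(obj_pts) < 4:  # PnP needs at least 4 correspondences
        return None
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, camera_matrix, None)
    return (rvec, tvec) if ok else None

In a cluttered scene with spurious peaks, cv2.solvePnPRansac would be a natural drop-in replacement for cv2.solvePnP to reject remaining outlier correspondences.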
Language: English
WOS Subject: Chemistry, Analytical; Electrochemistry; Instruments & Instrumentation
WOS Research Area: Chemistry; Electrochemistry; Instruments & Instrumentation
Funding Project: National Natural Science Foundation of China [51505470]
Citation Statistics
Cited Times (WOS): 1
Document Type: Journal article
Identifier: http://ir.sia.cn/handle/173321/24248
Collection: Space Automation Technology Research Laboratory (空间自动化技术研究室)
Corresponding Author: Fu ML (付明亮)
Affiliation:
1. State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
2. Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110016, China
3. University of Chinese Academy of Sciences, Beijing 100049, China
Recommended Citation
GB/T 7714: Fu ML, Zhou WJ. DeepHMap++: Combined Projection Grouping and Correspondence Learning for Full DoF Pose Estimation[J]. Sensors (Switzerland), 2019, 19(5): 1-19.
APA: Fu ML, & Zhou WJ. (2019). DeepHMap++: Combined Projection Grouping and Correspondence Learning for Full DoF Pose Estimation. Sensors (Switzerland), 19(5), 1-19.
MLA: Fu ML, et al. "DeepHMap++: Combined Projection Grouping and Correspondence Learning for Full DoF Pose Estimation". Sensors (Switzerland) 19.5 (2019): 1-19.
Files in This Item:
File Name: DeepHMap++_ Combined Projection Grouping and Correspondence Learning for Full DoF Pose Estimation.pdf (2957 KB)
Format: Adobe PDF
DocType: Journal article, published version
Access: Open Access
License: CC BY-NC-SA
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.