Learning mobile manipulation through deep reinforcement learning
Wang C(王聪)1,2,3,4; Zhang QF(张奇峰)1,2; Tian QY(田启岩)1,2; Li S(李硕)1,2; Wang XH(王晓辉)1,2; Lane, David4; Petillot, Yvan4; Wang, Sen4
Department: Underwater Robotics Laboratory (水下机器人研究室)
Source Publication: SENSORS
ISSN: 1424-8220
Year: 2020
Volume: 20  Issue: 3  Pages: 1-18
Indexed By: SCI ; EI
EI Accession Number: 20200808193162
WOS ID: WOS:000517786200363
Contribution Rank: 1
Funding Organization: Natural Science Foundation of China under grant 51705514 ; National Key Research and Development Program of China under grant 2016YFC0300401 ; EPSRC ORCA Hub (EP/R026173/1)
Keywords: mobile manipulation ; deep reinforcement learning ; deep learning
Abstract

Mobile manipulation has a broad range of applications in robotics. However, it is usually more challenging than fixed-base manipulation due to the complex coordination of a mobile base and a manipulator. Although recent works have demonstrated that deep reinforcement learning is a powerful technique for fixed-base manipulation tasks, most of them are not applicable to mobile manipulation. This paper investigates how to leverage deep reinforcement learning to tackle whole-body mobile manipulation tasks in unstructured environments using only on-board sensors. A novel mobile manipulation system that integrates state-of-the-art deep reinforcement learning algorithms with visual perception is proposed. It uses an efficient framework that decouples visual perception from deep reinforcement learning control, which enables generalization from simulation training to real-world testing. Extensive simulation and real-world experiments show that the proposed system grasps different types of objects autonomously in various scenarios, verifying its effectiveness.
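
To make the decoupling concrete, below is a minimal, self-contained Python sketch of the pipeline the abstract describes: a perception module converts on-board camera data into a compact target-pose observation, and a DRL policy trained in simulation maps that observation, together with proprioception, to whole-body commands for the base and the manipulator. This is illustrative only, not the authors' code; all names (PerceptionModule, WholeBodyPolicy, control_step) and the observation/action dimensions are assumptions.

# Minimal sketch of the decoupled perception / DRL-control pipeline.
# Hypothetical names and dimensions; not the paper's implementation.

import numpy as np


class PerceptionModule:
    """Hypothetical vision front end: camera image -> target pose estimate."""

    def estimate_target_pose(self, rgb_image: np.ndarray) -> np.ndarray:
        # Placeholder: a real system would run detection / pose estimation
        # here. Returns a dummy 6-DoF pose (x, y, z, roll, pitch, yaw).
        return np.zeros(6)


class WholeBodyPolicy:
    """Hypothetical DRL policy acting on the compact state.

    Because the policy never sees raw pixels, weights trained in simulation
    can be reused on the real robot as long as the perception module reports
    poses in the same frame and units (the sim-to-real idea in the abstract).
    """

    def __init__(self, obs_dim: int = 12, act_dim: int = 9, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Stand-in for trained weights of a small policy network.
        self.w = rng.standard_normal((obs_dim, act_dim)) * 0.1

    def act(self, observation: np.ndarray) -> np.ndarray:
        # Action = base velocities (3) + manipulator joint velocities (6),
        # squashed to [-1, 1] as in typical continuous-control policies.
        return np.tanh(observation @ self.w)


def control_step(perception: PerceptionModule,
                 policy: WholeBodyPolicy,
                 rgb_image: np.ndarray,
                 proprioception: np.ndarray) -> np.ndarray:
    """One control cycle: perceive, build the state, query the policy."""
    target_pose = perception.estimate_target_pose(rgb_image)
    observation = np.concatenate([target_pose, proprioception])
    return policy.act(observation)


if __name__ == "__main__":
    perception = PerceptionModule()
    policy = WholeBodyPolicy()
    fake_image = np.zeros((480, 640, 3), dtype=np.uint8)
    fake_proprio = np.zeros(6)  # e.g., six manipulator joint angles
    action = control_step(perception, policy, fake_image, fake_proprio)
    print("whole-body command:", action)
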

Language: English
WOS Subject: Chemistry, Analytical ; Engineering, Electrical & Electronic ; Instruments & Instrumentation
WOS Keyword: ROBOTICS
WOS Research Area: Chemistry ; Engineering ; Instruments & Instrumentation
Funding Project: Natural Science Foundation of China [51705514] ; National Key Research and Development Program of China [2016YFC0300401] ; EPSRC ORCA Hub [EP/R026173/1]
Citation statistics
Cited Times (WOS): 5
Document Type: Journal article
Identifier: http://ir.sia.cn/handle/173321/26303
Collection: Underwater Robotics Laboratory ; Shenyang Institute of Automation
Corresponding Author: Zhang QF(张奇峰)
Affiliation:
1. State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
2. Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110016, China
3. University of Chinese Academy of Sciences, Beijing 100049, China
4. School of Engineering & Physical Sciences, Heriot-Watt University, Edinburgh EH14 4AS, United Kingdom
Recommended Citation
GB/T 7714: Wang C, Zhang QF, Tian QY, et al. Learning mobile manipulation through deep reinforcement learning[J]. SENSORS, 2020, 20(3): 1-18.
APA: Wang C., Zhang QF., Tian QY., Li S., Wang XH., ... & Wang, Sen. (2020). Learning mobile manipulation through deep reinforcement learning. SENSORS, 20(3), 1-18.
MLA: Wang C, et al. "Learning mobile manipulation through deep reinforcement learning". SENSORS 20.3 (2020): 1-18.
Files in This Item:
File Name/Size: Learning mobile manipulation through deep reinforcement learning.pdf (26225KB)
DocType: Journal article
Version: Published version
Access: Open access
License: CC BY-NC-SA
File name: Learning mobile manipulation through deep reinforcement learning.pdf
Format: Adobe PDF

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.