SIA OpenIR > Digital Factory Department
Path Planning Method With Improved Artificial Potential Field-A Reinforcement Learning Perspective
Yao QF(么庆丰)1,2,3; Zheng ZY(郑泽宇)1,2,3; Qi, Liang4; Yuan, Haitao5,6; Guo, Xiwang7; Zhao M(赵明)1,2,3; Liu Z(刘智)1,2,3; Yang TJ(杨天吉)1,2
Department: Digital Factory Department
Source Publication: IEEE ACCESS
ISSN: 2169-3536
Year: 2020
Volume: 8, Pages: 135513-135523
Indexed By: SCI; EI
EI Accession Number: 20203509117504
WOS ID: WOS:000552682600001
Contribution Rank: 1
Funding Organization: National Key Research and Development Program of China [2018YFF0214704]; Liaoning Revitalization Talents Program [XLYC1907166]; Liaoning Province Department of Education Foundation of China [L2019027]; Liaoning Province Dr. Research Foundation of China [20170520135]; National Natural Science Foundation of China [61903229, 61973180, 61802015]; Natural Science Foundation of Shandong Province [ZR2019BF004, ZR2019BF041]
Keyword: Path planning; Learning (artificial intelligence); Gravity; Potential energy; Mobile agents; Real-time systems; Reinforcement learning; neural network; potential field; path planning
Abstract

The artificial potential field approach is an efficient path planning method. However, handling the local-stable-point problem in complex environments requires modifying the potential field, which increases the algorithm's complexity. This study combines an improved black-hole potential field with reinforcement learning to solve local-stable-point scenarios. The black-hole potential field serves as the environment in a reinforcement learning algorithm: agents automatically adapt to the environment and learn how to use basic environmental information to find targets. Moreover, trained agents adapt to varying environments through curriculum learning. A visualization of the avoidance process demonstrates how agents avoid obstacles and reach the target. Our method is evaluated in static and dynamic experiments; the results show that agents automatically learn to escape local stability points without prior knowledge.
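To make the local-stable-point problem concrete, the classical artificial potential field scheme the paper builds on can be sketched as follows. This is an illustrative sketch only, not the paper's improved black-hole potential field or its reinforcement learning component; the gain and radius parameters (`k_att`, `k_rep`, `rho0`) and the scenario are assumptions chosen for the example.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=10.0, rho0=2.0, step=0.1):
    """One fixed-size gradient step on a classical artificial potential field.

    The goal exerts an attractive force; every obstacle closer than the
    influence radius rho0 exerts a repulsive force. Where the two forces
    cancel before the goal is reached, the agent is trapped -- the
    local-stable-point problem the paper's RL agent learns to escape.
    """
    force = k_att * (goal - pos)  # negative gradient of 0.5*k_att*||pos-goal||^2
    for obs in obstacles:
        diff = pos - obs
        rho = np.linalg.norm(diff)
        if 0.0 < rho < rho0:
            # negative gradient of 0.5*k_rep*(1/rho - 1/rho0)**2
            force += k_rep * (1.0 / rho - 1.0 / rho0) * (diff / rho**3)
    # Move a constant distance along the net force direction.
    return pos + step * force / max(np.linalg.norm(force), 1e-9)

# Steer past a single off-axis obstacle toward the goal.
pos = np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
obstacles = [np.array([5.0, 0.5])]
for _ in range(200):
    pos = apf_step(pos, goal, obstacles)
```

With a symmetric head-on obstacle (e.g. directly on the line to the goal), the attractive and repulsive terms can cancel and the agent stalls; the paper's contribution is letting an RL agent learn such escapes instead of hand-modifying the field.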

Language: English
WOS Subject: Computer Science, Information Systems; Engineering, Electrical & Electronic; Telecommunications
WOS Keyword: MOBILE; OPTIMIZATION
WOS Research Area: Computer Science; Engineering; Telecommunications
Funding Project: National Key Research and Development Program of China [2018YFF0214704]; Liaoning Revitalization Talents Program [XLYC1907166]; Liaoning Province Department of Education Foundation of China [L2019027]; Liaoning Province Dr. Research Foundation of China [20170520135]; National Natural Science Foundation of China [61903229]; National Natural Science Foundation of China [61973180]; National Natural Science Foundation of China [61802015]; Natural Science Foundation of Shandong Province [ZR2019BF004]; Natural Science Foundation of Shandong Province [ZR2019BF041]
Document Type: Journal Article
Identifier: http://ir.sia.cn/handle/173321/27477
Collection: Digital Factory Department
Corresponding Author: Zheng ZY (郑泽宇); Qi, Liang; Guo, Xiwang
Affiliation:
1. Department of Digital Factory, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
2. Institutes for Robotics and Intelligent Manufacturing, Shenyang 110016, China
3. University of Chinese Academy of Sciences, Beijing 100049, China
4. College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
5. Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ 07029, USA
6. School of Software Engineering, Beijing Jiaotong University, Beijing, China
7. College of Computer and Communication Engineering, Liaoning Shihua University, Fushun, Liaoning 113001, China
Recommended Citation
GB/T 7714
Yao QF, Zheng ZY, Qi, Liang, et al. Path Planning Method With Improved Artificial Potential Field-A Reinforcement Learning Perspective[J]. IEEE ACCESS, 2020, 8: 135513-135523.
APA Yao QF., Zheng ZY., Qi, Liang., Yuan, Haitao., Guo, Xiwang., ... & Yang TJ. (2020). Path Planning Method With Improved Artificial Potential Field-A Reinforcement Learning Perspective. IEEE ACCESS, 8, 135513-135523.
MLA Yao QF, et al. "Path Planning Method With Improved Artificial Potential Field-A Reinforcement Learning Perspective". IEEE ACCESS 8 (2020): 135513-135523.
Files in This Item:
File Name/Size: Path Planning Method (1998KB)
DocType: Journal article, published version
Access: Open access
License: CC BY-NC-SA
Google Scholar
Similar articles in Google Scholar
[Yao QF(么庆丰)]'s Articles
[Zheng ZY(郑泽宇)]'s Articles
[Qi, Liang]'s Articles
Baidu academic
Similar articles in Baidu academic
[Yao QF(么庆丰)]'s Articles
[Zheng ZY(郑泽宇)]'s Articles
[Qi, Liang]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[Yao QF(么庆丰)]'s Articles
[Zheng ZY(郑泽宇)]'s Articles
[Qi, Liang]'s Articles
File name: Path Planning Method With Improved Artificial Potential Field-A Reinforcement Learning Perspective.pdf
Format: Adobe PDF

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.