1. Aggarwal, S. and Kumar, N. Path planning techniques for unmanned aerial vehicles: A review, solutions, and challenges. Computer Communications, 149, pp.270-299, 2020.
2. Hart, P.E., Nilsson, N.J. and Raphael, B. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 4(2), pp.100-107, 1968.
3. Wang, Q. Research on rapidly-exploring random trees based global path planning and its application. National University of Defense Technology, 2014.
4. Zhao, Y., Zheng, Z. and Liu, Y. Survey on computational-intelligence-based UAV path planning. Knowledge-Based Systems, 158, pp.54-64, 2018.
5. Shibata, T. and Fukuda, T. Robotic motion planning by genetic algorithm with fuzzy critic. Transactions of the Society of Instrument and Control Engineers, 30(3), pp.337-344, 1994.
6. Karaboga, D. An idea based on honey bee swarm for numerical optimization (Vol. 200, pp. 1-10). Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
7. Xia, C. and Yudi, A. Multi-UAV path planning based on improved neural network. In 2018 Chinese Control and Decision Conference (CCDC) (pp. 354-359). IEEE, 2018.
8. Zhao, M., Lu, H., Yang, S. and Guo, F. The experience-memory Q-learning algorithm for robot path planning in unknown environment. IEEE Access, 8, pp.47824-47844, 2020.
9. Gautam, S.A. and Verma, N. Path planning for unmanned aerial vehicle based on genetic algorithm & artificial neural network in 3D. In 2014 International Conference on Data Mining and Intelligent Computing (ICDMIC) (pp. 1-5). IEEE, 2014.
10. Zhang, B., Mao, Z., Liu, W. and Liu, J. Geometric reinforcement learning for path planning of UAVs. Journal of Intelligent & Robotic Systems, 77(2), pp.391-409, 2015.
11. Yan, Y., Wang, H. and Chen, X. Collaborative path planning based on MAXQ hierarchical reinforcement learning for manned/unmanned aerial vehicles. In 2020 39th Chinese Control Conference (CCC) (pp. 4837-4842). IEEE, 2020.
12. Xu, J., Guo, Q., Xiao, L., Li, Z. and Zhang, G. Autonomous decision-making method for combat mission of UAV based on deep reinforcement learning. In 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC) (Vol. 1, pp. 538-544). IEEE, 2019.
13. Liu, Q., Shi, L., Sun, L., Li, J., Ding, M. and Shu, F. Path planning for UAV-mounted mobile edge computing with deep reinforcement learning. IEEE Transactions on Vehicular Technology, 69(5), pp.5723-5728, 2020.
14. Yan, C., Xiang, X. and Wang, C. Towards real-time path planning through deep reinforcement learning for a UAV in dynamic environments. Journal of Intelligent & Robotic Systems, 98(2), pp.297-309, 2020.
15. Pateria, S., Subagdja, B., Tan, A.H. and Quek, C. Hierarchical reinforcement learning: A comprehensive survey. ACM Computing Surveys (CSUR), 54(5), pp.1-35, 2021.
16. Dietterich, T.G. Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research, 13, pp.227-303, 2000.
17. Sun, Y., Ran, X., Zhang, G., Xu, H. and Wang, X. AUV 3D path planning based on the improved hierarchical deep Q network. Journal of Marine Science and Engineering, 8(2), p.145, 2020.
18. Low, E.S., Ong, P. and Cheah, K.C. Solving the optimal path planning of a mobile robot using improved Q-learning. Robotics and Autonomous Systems, 115, pp.143-161, 2019.