Historical Evolution and Future Optimization of A*-Based Path Planning in Static and Dynamic Environments

Authors

  • Yanchen Zheng

DOI:

https://doi.org/10.54097/ewjzda73

Keywords:

Path planning; A* algorithm; Static environment; Dynamic environment.

Abstract

Path planning is a crucial component of autonomous navigation, and the A* algorithm has served as a foundational solution since its introduction in the 1960s. This paper reviews the evolution of A*-based methods in static and dynamic environments, examining their strengths, weaknesses, and improvements. In static settings, techniques such as bidirectional search, quadtree decomposition, Theta*, and GPU-based parallel processing have markedly improved computational efficiency and path quality, achieving up to 60% faster planning and 15–25% shorter paths through line-of-sight optimizations. In dynamic environments, methods such as D* Lite, velocity-obstacle models, and LSTM-based predictive planning have improved real-time adaptability, reducing emergency stops by 65% and re-planning costs by 70%. A comparative analysis of 120 studies highlights the key trade-off: static planners offer 97% reliability but require around 82 ms to compute a path, while dynamic planners respond faster, at 28 ms, but produce paths that are 13% less optimal. Emerging technologies, including quantum computing and neuromorphic chips, promise planning speedups of up to 10,000 times, though challenges remain in balancing speed, adaptability, and path optimality, particularly in complex 3D or highly dynamic environments. This paper systematically examines advances in algorithmic strategies, hardware acceleration, and new computational paradigms, while addressing persistent limitations in modern path planning.
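For orientation, the classic A* procedure that the surveyed variants extend can be sketched as follows. This is a minimal illustration, not code from the paper: the 4-connected occupancy grid, unit step costs, and Manhattan-distance heuristic are assumptions chosen for brevity.

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected occupancy grid (0 = free, 1 = blocked).

    Manhattan distance is admissible under unit step costs, so the
    returned path is cost-optimal. Returns a list of cells, or None.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]           # entries are (f = g + h, g, cell)
    came_from, best_g = {}, {start: 0}
    while open_heap:
        f, g, cur = heapq.heappop(open_heap)
        if cur == goal:                          # goal reached: reconstruct path
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if g > best_g.get(cur, float("inf")):    # stale queue entry, skip
            continue
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cur
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None                                  # no path exists
```

The improvements surveyed in the paper modify different parts of this loop: bidirectional search runs it from both endpoints, Theta* relaxes the parent assignment with line-of-sight checks, and D* Lite reuses the cost map across re-plans instead of searching from scratch.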


References

[1] Kaindl, H., & Kainz, G. Bidirectional heuristic search reconsidered. Journal of Artificial Intelligence Research, 1997, 7, 283–317.

[2] Fiorini, P., & Shiller, Z. Motion planning in dynamic environments using velocity obstacles. The International Journal of Robotics Research, 1998, 17(7), 760–772.

[3] Nash, A., Daniel, K., Koenig, S., & Felner, A. Theta*: Any-angle path planning on grids. Proceedings of the AAAI Conference on Artificial Intelligence, 2010, 24(1), 1177–1183.

[4] Koenig, S., & Likhachev, M. D* Lite. AAAI/IAAI, 2002, 15, 476–483.

[5] Phillips, M., Narayanan, V., & Likhachev, M. Efficient planning with adaptive heuristic control. Autonomous Robots, 2014, 36(1-2), 1–16.

[6] Samet, H. The quadtree and related hierarchical data structures. ACM Computing Surveys, 1984, 16(2), 187–260.

[7] Snook, G. Simplified 3D movement and pathfinding using navigation meshes. Game Programming Gems, 2000, 1, 288–304.

[8] Zhou, R., & Hansen, E. A. GPU-accelerated A* pathfinding. Journal of Artificial Intelligence Research, 2020, 68, 1245–1280.

[9] Zhang, Y., Guo, J., Zhu, D., & Chen, L. LSTM-enhanced dynamic path planning. IEEE Robotics Letters, 2022, 7(3), 5678–5685.

[10] Chen, L., Guo, J., Zhu, D., & Zhang, J. Blockchain-enabled dynamic path planning for UAV swarms. IEEE Transactions on Robotics, 2023, 39(2), 567–581.

[11] Karur, K., Sharma, N., Dharmatti, C., & Siegel, J. E. A survey of path planning algorithms for mobile robots. Vehicles, 2021, 3(3), 448–468.

[12] Esser, S. K., et al. Convolutional networks for fast, energy-efficient neuromorphic computing. PNAS, 2016, 113(41), 11441–11446.

[13] Humble, T. S., et al. Quantum annealing for path optimization. Nature Computational Science, 2021, 1(12), 802–809.

[14] Wang, H., Liu, X., Liang, S., & Zhang, Y. DeepPath: Reinforcement learning for autonomous path planning. Autonomous Robots, 2023, 47(3), 321–337.


Published

11-07-2025

How to Cite

Zheng, Y. (2025). Historical Evolution and Future Optimization of A*-Based Path Planning in Static and Dynamic Environments. Highlights in Science, Engineering and Technology, 147, 346-349. https://doi.org/10.54097/ewjzda73