Optimizing Autonomous Landing Systems with Deep Q-Networks: A Lunar Lander Simulation

Noorul Sama Sahel

Co-Presenters: Individual Presentation

College: The Dorothy and George Hennings College of Science, Mathematics and Technology

Major: Computer Science

Faculty Research Mentor: Israel Curbelo

Abstract:

The Lunar Lander simulation, part of the OpenAI Gym toolkit, provides a challenging testbed for reinforcement learning algorithms. This project explores the use of Deep Q-Networks (DQN) to train an autonomous agent capable of safely landing a spacecraft while optimizing fuel consumption and stability. The agent learns by interacting with an eight-dimensional state space, selecting from four possible actions to control its descent. A structured reward system encourages smooth landings while penalizing crashes and inefficient fuel use.

Through extensive training, the DQN model achieved a safe-landing success rate above 90%, demonstrating improved decision-making and fuel efficiency over time. Performance metrics, such as increasing average rewards and stable Q-value loss, indicate effective learning convergence. These results highlight the potential of reinforcement learning for autonomous control in complex, dynamic environments. By applying DQN to the Lunar Lander challenge, this research showcases how AI can enhance decision-making in robotics, space exploration, and other real-world applications requiring precise, adaptive control.
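The core DQN interaction loop described above can be sketched in miniature. The sketch below is illustrative only: it stands in a linear function for the project's neural Q-network, and the discount factor, epsilon value, and reward value are assumed hyperparameters, not figures from this work. Only the state and action dimensions (8 and 4) come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, N_ACTIONS = 8, 4   # LunarLander: 8-d state, 4 discrete actions
GAMMA = 0.99                  # discount factor (assumed, not from the abstract)

# Linear stand-in for the deep Q-network: Q(s) = W s + b
W = rng.normal(scale=0.1, size=(N_ACTIONS, STATE_DIM))
b = np.zeros(N_ACTIONS)

def q_values(state):
    """Estimated action values Q(s, a) for all four actions."""
    return W @ state + b

def epsilon_greedy(state, eps=0.1):
    """Explore with probability eps, otherwise pick the greedy action."""
    if rng.random() < eps:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(state)))

def td_target(reward, next_state, done):
    """Bellman target used in DQN training: r + gamma * max_a' Q(s', a')."""
    if done:
        return float(reward)
    return float(reward + GAMMA * np.max(q_values(next_state)))

# One illustrative step with a random state and an assumed shaping reward
s = rng.normal(size=STATE_DIM)
a = epsilon_greedy(s)
y = td_target(reward=-0.3, next_state=rng.normal(size=STATE_DIM), done=False)
```

In the full agent, `y` would serve as the regression target for the network's Q-value at the taken action, with the loss minimized over minibatches drawn from a replay buffer.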
