Rethinking Visual Perception for Spacecraft Autonomy: Towards End-to-End Terrain Relative Navigation

Bibliographic Details
Published in: ProQuest Dissertations and Theses (2025)
Main Author: Chase, Timothy, Jr.
Published: ProQuest Dissertations & Theses
Subjects:
Online Access: Citation/Abstract
Full Text - PDF
Description
Abstract: Exploring deep-space objects such as planets, comets, moons, and asteroids involves ambitious and increasingly complex scientific pursuits, requiring spacecraft to land on or maneuver in close proximity to highly irregular and hazardous terrain. This poses a significant navigation challenge, as communication latency is often too great to permit any Earth-based assistance through radiometric tracking, real-time planning and control, or precise GPS positioning. More recently, these challenges have been addressed by optically tracking prominent surface features to provide Terrain Relative Navigation (TRN). With compute power limited by radiation-tolerant hardware, current approaches to TRN rely on template matching and correlation against static maps and features that are collected and constructed a priori with extensive human involvement. Although proven effective on recent missions, this two-stage approach limits adaptability and generalization, increases mission costs and timelines, and reduces the range of viable deployment scenarios. In contrast, terrestrial robotics has demonstrated the efficiency of one-stage navigation solutions such as Simultaneous Localization and Mapping (SLAM) for nearly two decades. By dynamically constructing the map and localizing within it at runtime, this "show up and navigate" paradigm offers greater flexibility, but its deployment in space is hindered by visual-perception challenges unique to celestial environments, including a lack of rich, diverse textures, dynamic illumination conditions, and the computational complexity of image-processing algorithms. To that end, this work proposes numerous improvements to perception in space, striving toward end-to-end visual understanding for spacecraft TRN. We begin by quantifying the feature complexities found in space environments and present interest-point improvements that include state-informed matching and uncertainty-aware feature reasoning. We then address the applicability of visual deep learning on spacecraft processors and introduce advancements to learning-based solutions in the presence of sparse training labels, including sim-to-real terrain detection and multi-view attention for distinctive description. Through rigorous evaluation, we demonstrate how the proposed techniques mitigate the failure modes of traditional space vision, establishing a new state of the art in extraterrestrial image processing and fostering a cohesive, unified TRN perception pipeline.
ISBN:9798293833955
Source: ProQuest Dissertations & Theses Global