IEEE Aerospace Conference
At the Yellowstone Conference Center in Big Sky, Montana, March 07 – March 14, 2026

  • Keyur Patel (NASA Jet Propulsion Lab) & Steven Arnold (Johns Hopkins University/Applied Physics Laboratory)
    • Kristen Brown (NASA - Goddard Space Flight Center) & Stephen Schmidt (NASA GSFC) & James Graf (Jet Propulsion Laboratory) & Keyur Patel (NASA Jet Propulsion Lab)
      • 02.0102 Pathway to Mission Success: Lessons Learned from the SPHEREx AI&T and V&V Program
        William Hart (NASA Jet Propulsion Laboratory), Farah Alibay (Jet Propulsion Laboratory), Jennifer Rocca (Jet Propulsion Laboratory), Heather Bottom (Jet Propulsion Laboratory), Sara Susca. Presentation: William Hart
        On March 11, 2025, SPHEREx, the Spectro-Photometer for the History of the Universe, Epoch of Reionization and Ices Explorer, was successfully launched on a Falcon 9 rocket from Vandenberg Space Force Base. Selected in 2019 as part of NASA's Medium Explorer program, SPHEREx will perform the first all-sky spectral survey at wavelengths between 0.75 μm and 5 μm. When completed, the spacecraft will map 1.4 trillion spectral-spatial elements (known as voxels) over the entire sky in each of four independent surveys, taking place over a 25-month period. These data will help scientists study the early universe, the history of galaxies, and the prevalence of life-sustaining molecules in planet-forming regions of space. This paper discusses the implementation and early operations phase of SPHEREx, extending from the project Critical Design Review in March 2022 until launch. It encompasses spacecraft integration, environmental verification, and the launch campaign. The paper also details lessons learned from a spacecraft build, test, and operations campaign that heavily involved multiple external partners, such as BAE Systems and Caltech.
      • 02.0103 Design and Development of the Multi-Angle Imager for Aerosols (MAIA) Earth Venture Instrument
        Kon-Sheng Wang (California Institute of Technology), Stacey Boland, Kevin Burke (JPL), David Diner (Jet Propulsion Laboratory, California Institute of Technology), Saagar Patel (NASA Jet Propulsion Laboratory), Amanda Steffy. Presentation: Kon-Sheng Wang
        This paper describes the design and development of the Multi-Angle Imager for Aerosols (MAIA) instrument from concept through delivery. Selected in 2016 in response to the third NASA Earth Venture Instrument (EVI-3) opportunity, MAIA enables the investigation of links between different types of airborne particulate matter (PM) and adverse human health impacts such as cardiovascular disease, respiratory disease, pre-term delivery, and low birth weight. The instrument optical system features a push-broom UV/VNIR/SWIR spectropolarimetric camera with a passively cooled focal plane module and a pair of novel photo-elastic modulators to measure the radiance and polarization of sunlight scattered by atmospheric aerosols, from which the abundance and characteristics of ground-level PM are derived. The camera is mounted on a two-axis gimbal that allows multi-angle pointing, frequent target revisits, and in-flight calibration capabilities. The EVI approach presented unique challenges to instrument formulation such as designing for spacecraft “hostability,” using standardized requirements in the absence of defined spacecraft interfaces and environments, and adopting simple approaches to operability and fault protection. Implementation challenges included managing schedule and logistics during the SARS-CoV-2 pandemic, late changes to the gimbal design, completing an extensive camera calibration campaign, and expediting hardware and software modifications once the host spacecraft was selected. Several key trades and a descope undertaken during the design process will also be described. An overview is provided of instrument integration and test, which was completed in 2022. In 2023, NASA and the Italian Space Agency (Agenzia Spaziale Italiana or ASI) agreed to implement the MAIA mission as a joint NASA-ASI partnership with ASI contributing the spacecraft as well as the launch vehicle. 
The MAIA mission is currently planned to launch on a Vega-C launch vehicle from the Guiana Space Center in 2026.
      • 02.0104 Europa Clipper Cruise Phase: From Integration and Test to Early Operations
        Andres Rivera (NASA Jet Propulsion Lab), Thaddeus Para (NASA Jet Propulsion Lab), May Chong Chan (Jet Propulsion Laboratory), Erisa Stilley. Presentation: Andres Rivera
        Jupiter's icy moon Europa is a prime target in our exploration of potentially habitable worlds beyond Earth. The combination of a subsurface liquid water layer in contact with a rocky seafloor may yield an ocean rich in the elements and energy needed for the emergence of life, and for potentially sustaining life through time. Europa may hold the clues to one of NASA's long-standing quests - to determine whether we are alone in the universe. The Europa Clipper mission will characterize Europa's habitability as the first step in the search for potential life at the Jovian moon by conducting approximately four dozen flybys. Europa Clipper launched on October 14, 2024, aboard a SpaceX Falcon Heavy, beginning its 5.5-year journey to the Jovian system. This paper describes the mission’s transition from the implementation phase—highlighting the execution of system-level Verification and Validation—into launch and early cruise operations. It covers key milestones such as the flight system characterization campaign, the first low data rate operational period, the first gravity assist maneuver at Mars, and the initiation of Earth-pointed inner cruise operations. The paper focuses on the major challenges during planning and implementation, the strategies adopted to address them, and the lessons learned to inform future deep space missions.
      • 02.0105 Design and Implementation of the NASA Psyche Discovery Mission’s Science Data Center
        Ernest Cisneros (Arizona State University), James Bell (Arizona State University). Presentation: Ernest Cisneros
        The NASA Psyche Discovery Mission is currently cruising towards a rendezvous with the asteroid (16) Psyche, the largest M-class asteroid in the main asteroid belt [1]. The spacecraft instrument suite consists of a magnetometer, gamma ray and neutron spectrometers, multispectral imagers, and radio science experiments, all designed to unravel the history of (16) Psyche [2,3]. The resulting data products generated by the mission (e.g., images; spectroscopic, magnetic, and gravity field data; shape models; geologic maps, etc.) will be delivered to the Small Bodies Node of the NASA Planetary Data System (PDS) for long-term archiving [4]. The Psyche mission’s Science Data Center (SDC), part of the JPL Psyche Mission System, is the point of contact for all data sharing and archiving activities. Here we describe the design and implementation of the SDC, which was guided by many factors: supporting mission requirements related to data warehousing; performing data dissemination and archiving; adhering to federal, NASA, and ASU cybersecurity guidelines; and following industry best practices. Given the long baseline of the mission (launch in October 2023 and arrival at the asteroid in mid-2029), the system needs to be easily maintainable and upgradable during the mission’s lifetime. The SDC is located on the Tempe campus of ASU, with connections to the mission’s Ground Data System at JPL, and is available to the Psyche team via a web portal utilizing purpose-built tools. The SDC leverages heritage tools, concepts, and lessons learned from previous ASU instrument operations, such as for the Lunar Reconnaissance Orbiter Cameras on LRO, the Mastcam cameras on the Mars Science Laboratory rover, and the Mastcam-Z cameras on the Mars 2020 rover. 
We describe the heritage, principles, and requirements that guided the design and initial development of the Psyche Science Data Center, including real-world examples of the SDC's data portal, RESTful interface, in-house scripts, and early data products. The design of the SDC is centered around a relational database, with a schema to model the many files that are ingested and tracked, as well as their relationships to the PDS bundles being aggregated and delivered. The SDC disseminates data products to the Psyche Team for science investigations through a web-based portal, which also includes a RESTful interface that allows team members to upload, download, and search data using in-house scripts. The three instrument teams make heavy use of the RESTful interface for uploading their PDS products. The web portal also makes mosaics and individual image products available to the team using Web Mapping Service technology. Most of the tools developed for the SDC are written in Python, using virtual environments to minimize the need to configure and maintain Python at the system level. These adhere to the UNIX philosophy of software development: make each program do one thing well, expect output (when generated) to become input to another tool, and test early and refactor as needed. References: [1] Elkins-Tanton+, Space Sci. Rev., 218, 2022. [2] Dibb+, AGU Advances, 5, doi:10.1029/2023AV001077, 2024. [3] Polanskey+, Space Sci. Rev., submitted, 2025. [4] https://pds-smallbodies.astro.umd.edu
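The relational design described above — files ingested and tracked as products, with explicit relationships to the PDS bundles being aggregated — can be sketched in miniature. The table and column names below are hypothetical illustrations (including the example PDS logical identifier), not the actual SDC schema:

```python
import sqlite3

# Hypothetical slice of a product-tracking schema in the spirit the abstract
# describes: products are ingested files; bundle_member models the
# many-to-many relationship between products and PDS bundles.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE product (
    id         INTEGER PRIMARY KEY,
    path       TEXT UNIQUE NOT NULL,   -- location of the ingested file
    instrument TEXT NOT NULL,          -- e.g. imager, magnetometer
    ingested   TEXT NOT NULL           -- ISO-8601 ingest timestamp
);
CREATE TABLE bundle (
    id  INTEGER PRIMARY KEY,
    lid TEXT UNIQUE NOT NULL           -- PDS logical identifier
);
CREATE TABLE bundle_member (
    bundle_id  INTEGER NOT NULL REFERENCES bundle(id),
    product_id INTEGER NOT NULL REFERENCES product(id),
    PRIMARY KEY (bundle_id, product_id)
);
""")
con.execute("INSERT INTO product VALUES (1, '/data/img_0001.fits', 'imager', '2024-01-15T12:00:00Z')")
con.execute("INSERT INTO bundle VALUES (1, 'urn:nasa:pds:psyche_imager')")
con.execute("INSERT INTO bundle_member VALUES (1, 1)")

# Count products delivered in a given bundle via the join table.
n, = con.execute("""SELECT COUNT(*) FROM product p
                    JOIN bundle_member m ON m.product_id = p.id
                    JOIN bundle b ON b.id = m.bundle_id
                    WHERE b.lid = 'urn:nasa:pds:psyche_imager'""").fetchone()
```

The join table keeps product-to-bundle membership many-to-many, so a single product can appear in multiple delivered bundles without duplication.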
      • 02.0107 GRACE Continuity Project Overview
        Neil Dahya (NASA Jet Propulsion Laboratory). Presentation: Neil Dahya
        The GRACE Continuity (GRACE-C) mission has entered its detailed design and implementation phase for the spacecraft and science instruments, targeting a launch date in December 2028. GRACE-C will continue the essential environmental data record of Earth system mass change initiated by the GRACE (2002–2017) and GRACE-FO (2018–present) missions. Observations of monthly to decadal global mass changes and transports in the Earth system provide unique insights into how water is stored and redistributed across the planet. This has broad implications for society, especially regarding water availability. Mass Change data support monitoring of flood potential and droughts, tracking of groundwater and aquifer volume changes, and assessments of freshwater availability, irrigation, and data-driven agriculture practices in the US and globally. Monitoring changes in ice sheets, glaciers, near-surface and underground water storage, large lakes and rivers, sea level, and ocean currents provides an integrated view of Earth’s evolving global water cycle and energy balance, with important, valuable applications for everyday life. The GRACE-C mission is a NASA directed mission. The 2017–2027 US National Academy of Sciences Decadal Survey for Earth Science and Applications from Space identified Mass Change as one of five foundational Designated Observables needed to better understand the Earth system over the next decades and to supply critical data for applications, adaptation, and mitigation. Like the previous GRACE missions, GRACE-C builds on successful past partnerships among NASA/JPL, GFZ (German Research Centre for Geosciences), DLR (German Aerospace Center), and ONERA (French Aerospace Lab) to ensure mission success on a fast-paced schedule with minimum cost impact to NASA. The GRACE-C mission completed its Critical Design Review in May 2025, just 12 months after the project PDR, and is working towards a System Integration Review scheduled for October of this year.
Due to the compressed schedule, both the spacecraft subsystem components and the instrument components are completing their delivery reviews. This paper describes the status of the project design, analyses, hardware build, and measurement system performance assessments, and also discusses the key hardware issues and how the project has managed them while maintaining the appropriate cost and schedule risk posture.
      • 02.0108 Verification and Validation of the Europa Clipper Launch Phase and Autonomous Behavior
        Erisa Stilley, Carolyn Brennan (Jet Propulsion Laboratory), Jean Francois Castet (Jet Propulsion Laboratory), Amanda Donner (Jet Propulsion Laboratory), Thaddeus Para (NASA Jet Propulsion Lab). Presentation: Erisa Stilley
        The Europa Clipper mission, NASA’s most recent flagship mission and part of the Ocean Worlds Exploration Program, will explore Jupiter’s moon Europa, one of the most promising places in our solar system to search for signs of life. Europa is believed to harbor a vast subsurface ocean beneath its icy crust, and the Clipper spacecraft will conduct nearly 50 flybys to study the moon’s surface, interior, and potential habitability. Clipper started its 5.5-year journey to the Jovian system on October 14, 2024, launching from Cape Canaveral on a SpaceX Falcon Heavy rocket. A few short hours after liftoff, the spacecraft separated from the launch vehicle and began a series of autonomous actions via a complex software behavior intended to establish a capable spacecraft ready to receive commands from the ground. This behavior needed to accomplish mission critical actions such as deploying the solar arrays while also detecting and responding to a myriad of potential issues, known and unknown. To ensure success, the Launch Phase test program included testing the interfaces to a new launch vehicle system, detailed flight system and subsystem verification and validation testing, and testing of the ground system and operations team. The test and analysis scope, venues, and processes utilized are discussed in detail as well as overall lessons learned and conclusions.
    • Michael Gross (NASA Jet Propulsion Lab) & Alex Austin (Jet Propulsion Laboratory)
      • 02.0201 Carbon-I: Expanding the Frontiers of Carbon Cycle Science
        Andrew Thorpe (Jet Propulsion Laboratory, California Institute of Technology). Presentation: Andrew Thorpe
        Earth’s dynamic carbon cycle—unique among known planets—drives the growth of our food, sustains the oxygen we breathe, and generated the fossil fuels that powered the Industrial Revolution. The Carbon Investigation (Carbon-I) mission provides unprecedented clarity on this vital cycle by delivering high-resolution measurements of the main carbon-bearing molecules in the atmosphere: methane (CH4), carbon dioxide (CO2), and carbon monoxide (CO). Carbon-I directly addresses multiple NASA Decadal Survey objectives by closing the tropical data gap, focusing on natural emissions sources (wetlands, permafrost, agriculture), measuring fire emissions, and providing robust source attribution across scales. The Carbon-I instrument builds on more than 40 years of imaging spectrometer technology development at the Jet Propulsion Laboratory. This Dyson imaging spectrometer provides a ~100 km swath that delivers monthly global land coverage at ≤400 m with ~10 times finer sampling for high-priority areas. This spatial detail is combined with high resolution atmospheric spectroscopy in the 2.04–2.37 µm shortwave infrared range to capture absorption lines of the primary gases (CH4, CO2, CO, N2O) at 0.7 nm spectral sampling interval combined with a spectral response function of <2.5 nm full width at half maximum. This ensures unambiguous discrimination of gas absorption lines, <0.25% CH4 single measurement precision, and clear separation from surface albedo variations. The Carbon-I instrument flies on the LM400 bus, a proven mid-sized spacecraft architecture with significant flight heritage for all subsystems and components, including 3-axis stabilization and low jitter capabilities for high spatial sampling science needs. The payload data processor transmitter stores up to 8 Tbit of data which is downlinked via a commercial Ka-band communication network, providing 1.46 Tb daily data (>100 million spectra per day). 
Carbon-I delivers actionable trace gas results at the local to regional scale to a diverse set of stakeholders and scientific communities. Beyond its core objectives, Carbon-I also contributes to water-cycle research by measuring H2O/HDO to quantify evapotranspiration and study tropical land–atmosphere interactions. A Science Enhancement Option expands Carbon-I’s scope to enhanced geologic mapping (critical minerals and rare earth elements), vegetation health (lignocellulose features for fire risk), and pollution monitoring (agricultural plastics, oil spills). Carbon-I is a NASA Earth System Explorer Mission Step 2 concept.
      • 02.0204 Mission Concept to Track Convective Cloud Systems with Satellite-Based Phased-Array Radars
        Jean Ghantous (University of Colorado, Boulder), Christopher Williams (University of Colorado Boulder). Presentation: Jean Ghantous
        Mid-latitude convective storm systems are among the most hazardous and economically damaging weather phenomena worldwide. Despite their importance, many aspects of their internal dynamics remain poorly understood, primarily due to the challenges associated with direct observation. In recent years, there has been increasing interest in leveraging spaceborne platforms to improve our understanding of these storms. NASA’s Jet Propulsion Laboratory (JPL)-led Investigation of Convective Updrafts (INCUS) will deploy a constellation of three nadir-pointing radar-equipped satellites to observe storm evolution at temporal intervals of 0, 30, and 120 seconds. While this configuration offers valuable snapshots of convective development, it is inherently limited by sparse temporal sampling due to the strictly nadir-pointing geometry of each satellite overpass. This study proposes an alternative mission architecture designed to overcome these limitations by enabling near-continuous sampling of convective systems during a single satellite overpass. The concept leverages a phased-array radar system capable of electronically steering its beam to track storm evolution in real time, with the potential to monitor multiple storm cells simultaneously. The scientific potential of this architecture is substantial. Continuous observation of storm life cycles during a single flyover would enable real-time measurements of internal convective processes, offering unprecedented insight into storm structure and evolution. Furthermore, this approach reduces the number of spacecraft required in formation for the same visibility requirements. To assess the feasibility of this system, performance was evaluated under nadir-pointing constraints similar to those used in INCUS. A sample target storm and orbit scenario were simulated, and the radar design process was closed to obtain performance metrics over a single pass.
Results indicate that such a design would enable single spacecraft equipped with a phased-array radar to continuously track a storm for approximately 200 seconds—representing a significant improvement over the temporal resolution offered by INCUS. Future work will address outstanding challenges, including power efficiency and mitigation of ground clutter interference. Additionally, future studies may explore the deployment of multiple phased-array systems in formation, combining the benefits of electronic beam steering with extended tracking durations and enhanced agility in observing multiple storm systems.
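The ~200-second tracking window quoted above is consistent with a back-of-envelope geometric estimate. The sketch below is not from the paper; the orbital altitude, maximum steering angle, and ground-track speed are illustrative assumptions, and Earth curvature is ignored:

```python
import math

def along_track_dwell_s(alt_km, max_steer_deg, ground_speed_km_s=6.9):
    """Flat-Earth estimate of how long a LEO radar can keep one ground
    target in view by electronically steering its beam fore and aft.
    All three parameters are assumed values, not mission figures."""
    # One-sided along-track reach of the steerable beam at the surface.
    reach_km = alt_km * math.tan(math.radians(max_steer_deg))
    # Target stays visible while it lies within +/- reach of the nadir point.
    return 2.0 * reach_km / ground_speed_km_s

dwell = along_track_dwell_s(alt_km=400.0, max_steer_deg=60.0)
```

With these assumed numbers the dwell comes out near 200 s, matching the order of magnitude reported for the single-spacecraft phased-array concept.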
      • 02.0205 Origami Based Self Deployable Solar Sail with Shape Memory Alloy Actuators
        Barani L (Indian Institute of Technology Madras, India). Presentation: Barani L
        This project presents a novel deployment method for solar sails designed for spacecraft in Earth orbit and interplanetary missions. Traditional solar sail systems rely on bulky mechanical booms, which add mass and reduce adaptability. To address this limitation, the proposed concept integrates origami-inspired folding techniques with shape memory alloy (SMA) actuators, offering a lightweight and compact alternative. The system is based on a hexagonal solar sail folded into a compact configuration using origami principles. Deployment is initiated by heating the SMA actuators, which expand to unfold the sail smoothly and reliably. For retraction, heat is removed from the actuators, while a central spiral spring mechanism rotates the core hexagon to assist folding. This dual mechanism ensures controlled deployment and furling, improving reliability and reusability. To withstand the harsh space environment, the SMA actuators are enclosed in flexible hollow ribs, which both protect the actuators and guide the deployment sequence. These ribs are directly linked to the spiral spring at the core, enhancing structural resilience while maintaining low mass. By combining origami folding with SMA-based actuation and spring-assisted retraction, this approach delivers a mass-efficient, adaptive, and reusable solar sail deployment system. The concept significantly reduces mechanical complexity, improves packing efficiency, and offers new possibilities for sustainable propulsion in future orbital and interplanetary missions.
    • Clara O'Farrell (Jet Propulsion Laboratory) & Ian Clark (Jet Propulsion Laboratory)
      • 02.0301 A Stochastic Approach to Terrain Maps for Safe Lunar Landing
        Anja Sheppard (University of Michigan), Christopher Reale (Charles Stark Draper Laboratory), Katherine Skinner (University of Michigan). Presentation: Anja Sheppard
        Safely landing on the lunar surface is a challenging task, especially in the heavily-shadowed South Pole region where traditional vision-based methods are not reliable. Due to the potential existence of valuable resources at the lunar South Pole, landing in that region has become a high priority for many space agencies and commercial companies. Relying on a LiDAR for hazard detection during descent is also risky, as this technology is fairly untested in the lunar environment. However, there exists a rich log of lunar surface mapping data from the Lunar Reconnaissance Orbiter (LRO) which could be used to create informative prior maps of the surface before descent. In this work, we propose a method for generating stochastic elevation maps from LRO data using Gaussian Processes, which are a powerful Bayesian framework for non-parametric modeling that produces interpretable uncertainty estimates. In high-risk environments such as autonomous spaceflight, provably-correct estimates of terrain uncertainty are critical. Previous approaches to stochastic elevation mapping with Gaussian Processes have focused primarily on designing the appropriate covariance function for the expected terrain distribution, which models the spatial correlation between points on the map. However, none of these existing methods have taken LRO Digital Elevation Model (DEM) confidence maps into account, despite this data containing key information about the quality of the DEM in different areas. To address this gap, we introduce a two-stage Gaussian Process model in which a secondary Gaussian Process learns spatially varying noise characteristics from DEM confidence data. This heteroscedastic information is then used to inform the noise parameters for the primary Gaussian Process which models the lunar terrain. We use variational Gaussian Processes to enable scalable training. 
By leveraging Gaussian Processes, we are able to more accurately model the impact of heteroscedastic sensor noise on the resulting elevation map. As a result, our method produces more informative terrain uncertainty which can be used for downstream tasks such as hazard detection and safe site selection. We compare against several stochastic mapping baselines using real-world LRO Narrow Angle Camera data at the lunar South Pole.
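The two-stage idea above can be illustrated with a toy 1-D sketch: a secondary GP smooths a (synthetic) log noise variance field derived from a confidence map, and its output sets the per-point noise of the primary terrain GP. This uses exact GP regression for brevity rather than the variational GPs the abstract describes, and the confidence model, kernel settings, and data are all invented for illustration:

```python
import numpy as np

def rbf(a, b, length=1.0, amp=1.0):
    """Squared-exponential covariance between column vectors a (n,1), b (m,1)."""
    return amp * np.exp(-0.5 * (a - b.T) ** 2 / length**2)

def gp_predict(x_tr, y_tr, x_te, noise_var):
    """Exact GP regression with a per-point (heteroscedastic) noise vector."""
    k = rbf(x_tr, x_tr) + np.diag(noise_var)
    ks = rbf(x_tr, x_te)
    chol = np.linalg.cholesky(k)
    alpha = np.linalg.solve(chol.T, np.linalg.solve(chol, y_tr))
    mean = ks.T @ alpha
    v = np.linalg.solve(chol, ks)
    var = rbf(x_te, x_te).diagonal() - np.sum(v**2, axis=0)  # latent variance
    return mean, var

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, (80, 1))
conf = 1.0 / (1.0 + 0.3 * x[:, 0])        # stand-in for a DEM confidence map
noise_sd = 0.05 + 0.4 * (1.0 - conf)      # low confidence -> noisy elevation
y = np.sin(x[:, 0]) + rng.normal(0.0, noise_sd)  # synthetic "terrain" samples

# Stage 1: secondary GP smooths the log noise variance implied by confidence.
log_nv_tr, _ = gp_predict(x, np.log(noise_sd**2), x, np.full(len(x), 1e-4))

# Stage 2: primary GP models terrain, with stage-1 output as its noise model.
x_q = np.linspace(0.0, 10.0, 100)[:, None]
mean, var = gp_predict(x, y, x_q, np.exp(log_nv_tr))
```

Working in log variance keeps the stage-1 output positive after exponentiation, which is the usual trick in heteroscedastic GP models.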
      • 02.0302 AI-enhanced Vision-based Hazard Detection Operations in Lunar Landing Scenario
        Mohamed El Awag (University of Rome, La Sapienza), Ludovica Cavalieri (University of Rome, La Sapienza), Simone Andolfo (University of Rome, La Sapienza), Fabio Valerio Buonomo ("La Sapienza" University of Rome), Antonio Genova (University of Rome, La Sapienza). Presentation: Mohamed El Awag
        Future lunar exploration, driven by both scientific and commercial objectives, requires advanced platforms capable of enabling the soft landing of both crewed and uncrewed missions with extremely high precision and autonomy. Landing near hazardous features such as lunar pits or within permanently shadowed regions poses significant challenges due to complex terrain and low sun angles that reduce surface visibility, particularly at higher lunar latitudes. Accurate hazard assessment during descent is critical to mission success and often relies on high-resolution digital terrain models (DTMs). State-of-the-art techniques perform pixel-wise slope and roughness analysis within probabilistic frameworks that account for sensor uncertainty, the lander’s shape, and tolerances associated with its trajectory evolution. While robust, these approaches are computationally intensive and struggle to meet real-time constraints as DTM spatial resolution increases and the size of hazards to be detected diminishes. To address these limitations, we present the design, implementation, and validation of a real-time vision-based hazard detection and avoidance (HDA) module for lunar landing missions enhanced by deep learning. This system reframes hazard map generation as a pixel-wise regression task, using a convolutional neural network to infer slope and roughness maps from monocular grayscale images, maintaining robustness to varying illumination and viewing geometries. Multiple U-Net-like architectures are evaluated to optimize the trade-off between inference speed and accuracy. The application is deployed in a hybrid fashion, employing the neural block to ease slope and roughness computation. The resulting pixel-wise distribution of these parameters is provided to a traditional processing module, which estimates the safety probability in a probabilistic framework.
Training is supported by a custom Blender-based rendering pipeline developed by our research group that augments public terrain datasets with artificial obstacles and resolution-enhanced features. This software incorporates complex data processing techniques during dataset preparation: by computing all relevant geometrical properties relative to the landing plane for each possible touchdown orientation at each candidate landing site, pixel-level hazard annotations are retrieved to support the network training. By integrating the learning-based HDA system into the onboard GNC pipeline, all computationally intensive operations are efficiently performed by the trained neural network, thereby enabling real-time execution. Nevertheless, given the absence of formal guarantees regarding the robustness of the neural network component, the inferred hazard map is subjected to further control, including additional outlier rejection and consistency checks. The hazard map is then converted into a safety probability map to be provided to the Hazard Avoidance subsystem, which ranks candidate landing sites based on computed metrics, including safety and proximity to the original target. The resulting information can be subsequently employed by the onboard guidance system to perform retargeting of the original landing site. Monte Carlo numerical simulations are carried out to validate the robustness of the approach under varying initial conditions and predefined target area. By combining deep learning with probabilistic modeling, the proposed framework significantly advances onboard autonomy, improving the safety and reliability of future lunar landing missions.
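The classical pixel-wise slope and roughness screening that the learned network approximates can be sketched directly on a DTM. This is a generic baseline under assumed thresholds, not the authors' probabilistic module; the window size and slope/roughness limits are illustrative:

```python
import numpy as np

def hazard_maps(dtm, res_m, slope_max_deg=15.0, rough_max_m=0.2, win=3):
    """Pixel-wise slope/roughness screening of a DTM with illustrative
    thresholds. Returns slope [deg], roughness [m], and a boolean safety map."""
    gy, gx = np.gradient(dtm, res_m)                 # elevation gradients [m/m]
    slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
    pad = win // 2
    padded = np.pad(dtm, pad, mode="edge")
    rough_m = np.empty_like(dtm)
    rows, cols = dtm.shape
    for i in range(rows):                            # local elevation std-dev
        for j in range(cols):
            rough_m[i, j] = padded[i:i + win, j:j + win].std()
    safe = (slope_deg <= slope_max_deg) & (rough_m <= rough_max_m)
    return slope_deg, rough_m, safe

flat = np.zeros((8, 8))                              # level terrain: all safe
ramp = np.tile(np.arange(8.0), (8, 1))               # 45-deg slope: all unsafe
_, _, safe_flat = hazard_maps(flat, res_m=1.0)
_, _, safe_ramp = hazard_maps(ramp, res_m=1.0)
```

The cost of this dense, per-pixel screening grows with DTM resolution, which is precisely the bottleneck the abstract's neural regression is designed to relieve.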
      • 02.0303 HAVEN: Hazard-Response Aero-Deployable Vehicle for Emergency Nominal Re-entry
        Christopher Kwon, Berfin Ataman (Massachusetts Institute of Technology), Yuhan Wang (Harvard University), Shantelle Ortiz (Massachusetts Institute of Technology), Cody Paige (Massachusetts Institute of Technology), Jeffrey Hoffman (Massachusetts Institute of Technology), Skylar Tibbits (MIT). Presentation: Christopher Kwon
        As activity in low Earth orbit (LEO) expands and the International Space Station (ISS) nears retirement, there is an emergent need for rapidly deployable emergency return vehicles. Conventional rigid capsules, though proven, impose mass and volume penalties ill-suited to modular habitats. Advances in inflatable structures, demonstrated in habitat modules and hypersonic inflatable aerodynamic decelerators, enable new approaches to entry, descent, and landing (EDL) architectures. The Hazard-response Aero-deployable Vehicle for Emergency Nominal Re-entry (HAVEN) integrates a hybrid rigid-inflatable system: a rigid core for propulsion, avionics, and life support, surrounded by inflatable habitat and heat shield elements. The shield deploys to a large diameter before atmospheric entry, reducing heating and loads during reentry, while a metallic sinusoidal lattice in the habitat provides post-landing structural integrity, overcoming limitations of softgoods-only designs. Guidance, navigation, and control are fully autonomous, with multiple redundancies, and parachute recovery is anchored to the rigid core with a staged deployment sequence for survivable landing. Prototyping and finite element analysis confirm feasibility of the deployment kinematics and load-bearing performance of the lattice structure. HAVEN’s design addresses critical gaps in current space safety infrastructure by enabling rapid, autonomous evacuation to predetermined, politically neutral landing sites, independent of crew rotation schedules or specific station architectures. By synthesizing advances in inflatable materials, autonomous systems, and human-centered design, this work establishes a new paradigm for space emergency vehicles — one that is lightweight, adaptable, and future-proofed for the evolving landscape of orbital habitation. 
The results demonstrate the viability of hybrid rigid-inflatable architectures for crew safety and operational flexibility, with broad implications for the sustainability and resilience of human spaceflight.
      • 02.0308 Optimized Lunar Descent Guidance: A Fusion of Circular and Bang–Bang Control
        Sreeja S (College of Engineering Trivandrum), Amrutha V P (College of Engineering Trivandrum), Sreerenjana R (APJ Abdul Kalam Technological University). Presentation: Sreeja S
        This work addresses the autonomous guidance problem for the approach phase of lunar landing, spanning altitudes from approximately 7 km down to 100 m above the surface. During this segment of descent, the lander must achieve a precise terminal state while minimizing fuel expenditure and ensuring computational feasibility for real-time implementation. Two distinct guidance algorithms are developed and demonstrated for this problem: an indirect optimal-control formulation based on Pontryagin’s Maximum Principle, and a geometric circular guidance law augmented with analytical corrections. Each provides an independent solution framework aligned with specific mission objectives. The first algorithm is based on an indirect optimal control approach, yielding a bang–bang thrust profile governed by the bilinear tangent steering law. Building on theoretical foundations that characterize minimum-fuel solutions through structured thrust switching, this work permits two switching events in thrust magnitude: an initial maximum-thrust phase to overcome gravity, a subsequent coast phase to conserve fuel, and a final thrust phase to satisfy terminal conditions. Both the switching times and the steering law parameters are treated as unknowns and resolved through an iterative Newton–Raphson procedure to enforce terminal boundary conditions. The resulting trajectory is consistent with the necessary conditions of optimality and provides a fuel-optimal reference solution for the approach-phase descent problem. The second algorithm employs a geometric circular guidance law, originally adapted from missile–target engagement problems to planetary descent scenarios. The lander is guided along a notional circular arc terminating at the target point, with convergence ensured by a quadratic acceleration correction term applied opposite to the instantaneous tangential velocity. 
The coefficients of this correction are derived from the geometry of the circular trajectory and the current velocity state, yielding a law that is algebraically simple and computationally efficient. This makes it suitable for onboard real-time implementation, particularly in systems with limited computational resources. Extensive simulation studies are carried out across the entire approach-phase profile. The bang–bang optimal control algorithm consistently achieves minimum-fuel trajectories, validating its role as a rigorous benchmark solution for the descent problem. Meanwhile, the circular guidance law demonstrates robust performance with low computational overhead, successfully steering the lander toward terminal conditions with acceptable accuracy. While the optimal-control approach is best suited for offline trajectory generation and reference planning, the geometric guidance law provides a practical onboard alternative that maintains robustness and responsiveness in real time. In summary, this study introduces two independent approaches for the approach-phase guidance of lunar landers. The indirect optimal-control method ensures strict fuel optimality under thrust and terminal constraints, while the geometric circular law ensures feasibility, robustness, and computational efficiency for onboard guidance. Together, these approaches establish a framework in which offline optimal solutions can inform onboard real-time guidance, striking a balance between fuel efficiency and practical implementation. Such a dual-strategy paradigm enhances the autonomy and reliability of future lunar landing missions. Keywords: lunar landing, autonomous guidance, approach phase, optimal control, bang–bang thrust, circular guidance, real-time implementation.
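In the indirect method described above, the switching times and steering-law parameters are unknowns resolved by enforcing terminal boundary conditions through Newton–Raphson iteration. A minimal sketch of such a shooting-style solver follows; the `residual` callable, which in the paper's formulation would propagate the lander dynamics through the thrust/coast/thrust phases to the terminal state, is left abstract here:

```python
import numpy as np

def newton_raphson(residual, x0, tol=1e-8, max_iter=50, eps=1e-6):
    """Solve residual(x) = 0 for the unknowns (e.g. switching times and
    bilinear-tangent steering parameters) via Newton-Raphson with a
    finite-difference Jacobian. Assumes as many residuals (terminal
    boundary conditions) as unknowns, so the Jacobian is square."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = np.asarray(residual(x), dtype=float)
        if np.linalg.norm(r) < tol:
            return x
        # Finite-difference Jacobian, one column per unknown.
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (np.asarray(residual(x + dx), dtype=float) - r) / eps
        x = x - np.linalg.solve(J, r)  # Newton update toward the root
    return x
```

This is an illustrative sketch only; the actual residual evaluation requires integrating the descent dynamics, which is mission-specific.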
    • Joseph Bowkett (Jet Propulsion Laboratory) & Richard Volpe (Jet Propulsion Laboratory) & Paul Backes (Jet Propulsion Laboratory)
      • 02.0401 Terrain-Adaptive Strategies to Prevent and Recover from Rover Wheel-Slip
        Jasper Grant (Dalhousie University), Mae Seto (Dalhousie University), Paul Grouchy () Presentation: Jasper Grant - -
        Wheel slip in planetary rovers creates localization error, wasted power, worn wheels, and occasionally mission failure. One example is the Spirit rover, reclassified as a stationary research platform after it could not exit a sand pit despite months of attempts by NASA ground control. Another is the Apollo 15 lunar roving vehicle, which experienced wheel slip in soft soil until it was ultimately freed by astronaut effort (NASA-TR-R-401, 1972, pg. 52). Wheel-slip estimation and sensing have advanced, but active online compensation for wheel slip has received less attention. Existing approaches use real-time terrain measurements or proprioceptive feedback to respond with adjusted wheel speeds and torques. There is therefore potential to improve proprioceptive-only strategies by leveraging not only wheel speeds but also steering angles and active suspension to actively respond to slip. A solution that uses these additional inputs is not confined to conventional driving and can consider unconventional gaits like “walking” or “inch-worm” locomotion. The Lunokhod rovers established a terrain-slope limit of 25-27° for conventional driving; unconventional gaits can extend the range of navigable slopes beyond this. Despite their improved performance, unconventional gaits still rely on terrain parameter knowledge, and existing physics-based simulation models likewise require knowledge of the soil properties of the navigated terrain. The proposed two-stage (offline then online) learning solution generates wheel-slip compensation controllers that are not confined to conventional gaits and iterates until a controller is identified as performant on the terrain at hand without directly sensing the terrain parameters. This is the contribution of the reported work. The created system uses the Project Chrono physics-based simulator to construct environments for both the offline and online learning stages.
Each created environment represents an interaction between a VIPER rover model and deformable terrain governed by a specific soil contact model. In the offline learning stage, a quality-diversity algorithm, MAP-Elites, predicts high-performing solutions based on prior performance across a high-dimensional search space. Performant solutions are incrementally added to an “archive” of elites. Each solution is a matrix of gains mapping rover inputs to outputs; a controller’s performance is simulated and, if performant and unique, the controller is added to the archive. In the online learning stage, the Map-Based Bayesian Optimization Algorithm (MBOA) is initialized with the archive created in the first stage. The expected highest-performing solutions are re-evaluated and used to predict the performance of other controllers nearby in the search space. The terrain encountered can differ from that seen in offline training, simulating novel terrain encountered in situ. Initial results show MBOA online adaptation successfully converging on controllers that perform well on the simulated terrain without directly sensing it. Additionally, unconventional gait solutions were observed in initial archives, indicating their performance and uniqueness. The contribution includes a software framework to evaluate and validate rover wheel-slip compensation solutions on realistic deformable terrain. This platform is extensible to multiple adaptive controllers and soil simulation models. Future work will trial this system on a rover model in real terrain.
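The offline stage follows the standard MAP-Elites pattern: keep the best-performing solution found in each cell of a behaviour-descriptor grid, and generate new candidates by mutating existing elites. A minimal sketch, with hypothetical `evaluate`, `descriptor`, `random_gains`, and `mutate` callables standing in for the Project Chrono simulation and the gain-matrix encoding:

```python
import random

def map_elites(evaluate, descriptor, random_gains, mutate,
               n_iters=1000, n_init=100):
    """Minimal MAP-Elites sketch: the archive maps each descriptor
    cell to its best-known (fitness, solution) pair. Callable names
    are illustrative, not the paper's API."""
    archive = {}  # cell -> (fitness, solution)

    def try_add(sol):
        cell, fit = descriptor(sol), evaluate(sol)
        # Keep the newcomer only if its cell is empty or it is better.
        if cell not in archive or fit > archive[cell][0]:
            archive[cell] = (fit, sol)

    for _ in range(n_init):          # seed with random controllers
        try_add(random_gains())
    for _ in range(n_iters):         # mutate random elites to explore
        _, parent = random.choice(list(archive.values()))
        try_add(mutate(parent))
    return archive
```

In the online stage, MBOA would then treat this archive as the prior over controller performance when adapting to terrain not seen offline.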
      • 02.0404 Data-driven Tracking Control for Origami-Tensegrity Robotic Structures
        Connie Liou (Stanford University), Manan Arya (Stanford University) Presentation: Connie Liou - -
        Robotic systems that can be flat-folded for compact stowage are becoming increasingly relevant for planetary and lunar exploration missions with strict mass and volume constraints. By utilizing flat-foldable designs, we can leverage the additional unused space onboard landers and potentially enable low-cost multi-robot exploration of these surfaces. Origami-inspired forms and structures are widely used in space due to their ultralight and compact properties. NASA’s origami-inspired robot, PUFFER (Pop-Up Flat Folding Explorer Robot) by Karras et al., demonstrates a chassis which can be flat-packed, greatly reducing the required payload volume. Its flexible structure also allows for shape adaptation on-site, enabling obstacle maneuvering techniques by lowering its center of gravity. Another approach has employed cable-tensioned folding robotics, combining origami, the art of paper folding, and tensegrity, a structural principle using a network of prestressed cables to achieve stability. Ochalek et al. have introduced a novel flat-foldable origami-tensegrity form based on Miura tubes that can change shape and stiffness in-situ. Origami-tensegrity structures combine the versatility of rigid-panel origami with the compliant actuation of tensegrity systems, offering a pathway toward compact, lightweight, and modular robotic mechanisms. While these flexible robots are capable of adapting and responding to unknown environments, their many degrees of freedom pose a challenging control problem. Prior work, such as that by Wang et al., has leveraged data-driven models to learn and develop differentiable physics engines for cable-driven tensegrity-only robots, an approach proven successful in capturing the complex dynamics of these systems. Our work extends these capabilities to systems with both rigid plate bending mechanics and tensioned cable networks.
In this work, we will demonstrate closed-loop tracking control of a cable-actuated, elephant-trunk structure based on the Miura-ori tube. To address the modeling and control challenges, we leverage a finite element analysis framework for origami-tensegrity statics developed by Ma et al., which captures the nonlinear static equilibrium behavior of such systems. Using this data, we train a multilayer perceptron to develop a scalable, data-driven surrogate model which will enable real-time model predictive control for robotic applications. We will validate our approach by implementing tracking control of the Miura-ori structure along reference trajectories in three-dimensional space. We will compare the real-time performance of our proposed control pipeline with simulation-in-the-loop alternatives. Our results will demonstrate accurate control of integrated origami-tensegrity structures, which will open up the design of robotic limbs and chassis with morphing capabilities. Our learning-based approach offers a scalable strategy for controlling these complex, compliant structures in real time, enabling the design of new adaptive and reconfigurable robots for exploration.
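One way a learned surrogate enables real-time model predictive control is by making predictions cheap enough to evaluate many candidate commands per control step. The sketch below is a hedged illustration of that idea, not the authors' pipeline: `surrogate` stands in for the trained multilayer perceptron mapping cable commands to tip position, and the random-perturbation candidate scheme is an assumption.

```python
import numpy as np

def mpc_step(surrogate, target, u_prev, n_samples=256, du=0.05, rng=None):
    """One step of sampling-based MPC around a learned surrogate:
    perturb the previous command vector, predict each candidate's
    resulting tip position with the surrogate, and return the command
    that minimizes tracking error to the target point."""
    rng = rng if rng is not None else np.random.default_rng()
    candidates = u_prev + du * rng.standard_normal((n_samples, u_prev.size))
    preds = np.array([surrogate(u) for u in candidates])
    errs = np.linalg.norm(preds - target, axis=1)  # tracking error per candidate
    return candidates[np.argmin(errs)]
```

Because the surrogate replaces a full finite-element solve, each control step costs only `n_samples` forward passes, which is what makes the real-time loop feasible.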
      • 02.0410 Distobee: Design and System Review of a Mobile Platform for Lunar Regolith Excavation
        Mateusz Wójcik (AGH University of Kraków), Adam Michalik (AGH University of Kraków) Presentation: Mateusz Wójcik - -
        This work presents the design, development, and partial validation of a mobile robotic platform for the excavation and transportation of lunar regolith. The system was developed by members of the student research group AGH Space Systems as part of their participation in the Space Resources Challenge, an initiative launched by the European Space Agency (ESA) and the European Space Resources Innovation Centre (ESRIC) in 2021. The competition aims to foster challenge-driven innovation by involving academic and industrial teams in solving practical problems relevant to the future of lunar exploration. The proposed platform was designed with a strong emphasis on simplicity, modularity, and mechanical robustness to ensure reliable operation in simulated lunar conditions, which include high dust levels, a vacuum environment, and reduced gravity. The paper outlines the system architecture, highlighting both mechanical and electronic subsystems, and details the design decisions and trade-offs made throughout the development process. A major engineering focus was the development of a custom wheel and drivetrain system, tailored to maintain traction and maneuverability on loose regolith while minimizing dust dispersion during movement. Various geometries and tread patterns were explored, taking into account constraints on mass, mechanical complexity, and ease of manufacturing. The suspension system was also optimized to ensure stability during regolith acquisition, enabling smooth operation on uneven terrain. One of the significant challenges addressed was the reliable detection of the fill level inside the onboard excavation-storage container. Given the harsh environmental conditions expected during operation, a comparative evaluation of multiple sensor technologies—including capacitive sensors and radar-based solutions—was conducted to determine the most suitable approach.
Energy management represented another critical aspect of the project, as the competition imposed strict limits on available electrical power. A hybrid energy management system was implemented, combining commercial off-the-shelf components with custom-designed modules to optimize energy consumption while maintaining system performance. The prototype developed within this project serves as a functional research platform for future iterations and scientific testing. The system will undergo final testing and validation in October 2025 at the LUNA analog lunar simulation facility, where its performance will be assessed under realistic Moon-like conditions.
      • 02.0414 Quarry Bot: Mobile Cable Robot for Lunar Excavation with Dragline Approach
        Zahir Castrejon (University of Nevada, Las Vegas), Nathan Kassai (University of Nevada, Las Vegas) Presentation: Zahir Castrejon - -
        Motivated by NASA’s Artemis Space Program, Quarry Bot is a cable robot developed for excavation tasks that support lunar site preparation without human intervention. The system employs a four-cable configuration with mobile anchors, allowing reconfigurability during digging to adapt to different terrain conditions. A test rig with Dynamixel actuators and load-cell tension sensing was constructed around a sandbox to execute excavation cycles between mobile, vertically translating anchor points. The control framework combined Lyapunov-based tracking with Non-Negative Least Squares (NNLS) tension allocation to maintain positive cable forces during excavation phases. Experimental trials showed that the platform could sustain excavation cycles with stable motion and consistent soil removal, while tests without allocation resulted in slack and cable entanglement. These findings demonstrate the viability of mobile-anchor cable robots as a platform for repetitive excavation tasks in future lunar construction efforts.
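The NNLS tension-allocation step — distributing a commanded wrench across cables while keeping every tension positive — can be sketched with SciPy's `nnls` solver. The pretension offset `t_min` and the shifting scheme below are illustrative assumptions, not values or methods from the paper:

```python
import numpy as np
from scipy.optimize import nnls

def allocate_tensions(A, wrench, t_min=1.0):
    """Tension allocation sketch for a cable robot: A maps cable
    tensions to the end-effector wrench (columns are cable directions).
    Solve A @ t = wrench with t >= t_min by substituting
    t = t_min + dt and solving for the non-negative excess dt via NNLS,
    so no cable command falls below the pretension and goes slack."""
    b = wrench - A @ (t_min * np.ones(A.shape[1]))
    dt, _residual = nnls(A, b)  # least-squares with dt >= 0
    return t_min + dt
```

A one-dimensional example with two opposing cables, `A = [[1, -1]]` and a commanded force of 2, yields tensions of 3 and 1: the net force is met and both cables stay above the pretension.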
    • Christopher Green (NASA - Goddard Space Flight Center) & Elena Adams (Johns Hopkins University/Applied Physics Laboratory)
      • 02.0502 Mars Deep Subsurface Exploration with an Integrated Drill and Instrument Suite
        Joseph Palmowski (Honeybee Robotics), Kris Zacny (Honeybee Robotics Spacecraft Mechanisms Corporation), Kathryn Bywaters (Honeybee Robotics), Kevin Hubbard (Honeybee Robotics Spacecraft Mechanisms Corporation), Robert May (Honeybee Robotics Spacecraft Mechanisms Corporation), Nicholas Naclerio (Honeybee Robotics Spacecraft Mechanisms Corporation), Leo Stolov (Honeybee Robotics) Presentation: Joseph Palmowski - -
        Planetary subsurface access has long presented significant challenges that have constrained scientific discovery on both the Moon and Mars. Even the most successful lunar drilling missions achieved limited penetration depths, with Apollo 15, 16, and 17, Soviet Luna 24, and China's Chang'e 5 all restricted to the upper 3 meters of the lunar subsurface. Current Mars technology faces similar constraints: NASA’s Curiosity and Perseverance rovers can drill core samples up to 5 centimeters deep, while the InSight probe, despite being designed for up to 5-meter-deep penetration, was only able to reach 35 centimeters. These limitations have significantly constrained our ability to explore and understand the subsurface environments of the Moon and Mars. IMPACT (Investigating Mars via Penetration and Analysis with Coiled Tubing) is an integrated drill and subsurface instrument suite for Mars, designed to penetrate 10 meters and beyond below the surface for in situ science investigation and/or in-situ resource utilization (ISRU) applications. IMPACT consists of a coiled tube and injector subsystem for facilitating deep subsurface penetration, pneumatic excavation for clearing cuttings from the borehole and collecting them at the surface, and an advanced downhole subsystem that integrates rotary-percussive drilling capabilities with a suite of science instruments for downhole analysis. IMPACT leverages substantial progress achieved in existing Honeybee Robotics technologies that have been developed to high technology readiness levels (TRL).
This includes LISTER, a TRL 9 pneumatic drill that successfully reached a depth of 1 meter in the lunar subsurface during Firefly’s Blue Ghost Mission 1; RedWater, a TRL 6 coiled-tube pneumatic drill designed for Mars ISRU missions and capable of reaching depths up to 25 meters; and SMART, a TRL 6 rotary-percussive drill featuring integrated downhole instrumentation for subsurface analysis up to 1 meter on the Moon or Mars, developed as part of NASA SSERVI’s RESOURCE project. By building on these proven, high-TRL systems, IMPACT benefits from established engineering heritage and demonstrated performance in relevant environments, significantly streamlining the path toward mission readiness. By enabling deeper subsurface access on Mars, IMPACT supports collection of essential data on the geological history, the presence and distribution of volatiles, and the potential resources that Mars has to offer. This advancement will not only enhance our scientific understanding of Mars, but also lay the groundwork for future exploration efforts, long-term operations, and eventual human missions to Mars.
      • 02.0503 A Thematic Approach to Robotic Path Planning on the Moon
        Ryan Navarre (Michigan Technological University), Zane Almquist (Michigan Tech), Rich Chase (Michigan Technological University) Presentation: Ryan Navarre - -
        Path planning in remote spaces presents a notable challenge, requiring significant pre-mission time and effort due to the multitude of inputs that must be considered. This challenge is especially difficult on the Moon due to highly complex terrain and the sparsity of detailed remote sensing information. Importantly, paths must fulfill engineering, scientific, and operational constraints in order to ensure the successful outcome of the mission. We take a thematic STV (Science - Traverse - Survive) approach to this problem, which allows for the optimization of scientific, traversability, and survivability outcomes. STV consists of three overarching metrics: the scientific merit or reward (‘Science’), the complexity and difficulty of the terrain (‘Traverse’), and inputs that damage the system and lead to catastrophic failure (‘Survive’). Rather than contend with multiple variables individually, the STV approach requires tracking only three primary metrics to derive an optimized path that meets the scientific goals of the mission while remaining within engineering and operational constraints. To implement this approach, lunar remote sensing data sources were surveyed and aggregated from available data repositories, with additional data derived from these existing datasets using geographic information systems (GIS) processes. Terrain-based elements such as slope, aspect, curvature, and ruggedness were derived from a digital elevation model (DEM). Other datasets, such as albedo and gravitational field strength, were obtained and indicate potential targets of scientific value. Each of these datasets is assessed for its impact on each of the STV metrics, weighted, and combined via various distribution functions. GIS software and processes were also employed in the data management strategy.
Each data layer is standardized and segmented into a tile-based scheme and subsequently stacked together to promote data accessibility and cooperation. Our method allows for the fine-tuning of weights and thresholds for each dataset, as well as each metric of STV, to produce cost surfaces and visualizations in either a mission planning or real-time context. This approach is demonstrated across a selected portion of the Endurance mission concept path near the South Pole, and includes visualization of the STV metrics using lunar remote sensing data and GIS methods.
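The weight-and-combine step that produces a cost surface from the stacked data layers can be sketched as a simple raster blend. This is an illustrative scheme under assumed min-max normalization and a binary 'Survive' mask; the paper's actual distribution functions and weights are not reproduced here:

```python
import numpy as np

def stv_cost_surface(layers, weights, hazard_mask=None):
    """Blend normalized raster layers into one cost surface: each
    layer (e.g. slope, ruggedness, negated science reward) is min-max
    normalized and weighted; cells flagged by the 'Survive' hazard
    mask are made impassable with infinite cost."""
    cost = np.zeros_like(next(iter(layers.values())), dtype=float)
    for name, layer in layers.items():
        lo, hi = layer.min(), layer.max()
        # Normalize to [0, 1]; a constant layer contributes nothing.
        norm = (layer - lo) / (hi - lo) if hi > lo else np.zeros_like(layer, dtype=float)
        cost += weights.get(name, 0.0) * norm
    if hazard_mask is not None:
        cost = np.where(hazard_mask, np.inf, cost)
    return cost
```

Tuning the per-layer weights then reshapes the cost surface without re-deriving any of the underlying GIS products, which is what makes the fine-tuning described above cheap.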
      • 02.0504 Path Planning in Dynamic Spatio-Temporal Space for the Lunar Surface
        Meryl Spencer (Michigan Technological University), Ryan Navarre (Michigan Technological University), Zane Almquist (Michigan Tech), Tyler Doiron (Michigan Technological University), Rich Chase (Michigan Technological University) Presentation: Meryl Spencer - -
        Upcoming NASA missions to the Moon and other celestial bodies will increasingly rely on autonomous and semi-autonomous robotic systems. Initial mission planning for these systems often relies on coarse-resolution remote sensing data. This data can include constraints on the robots, such as rough terrain, as well as important scientific sites, such as areas with increased mineral content. In addition to these fixed constraints, mission planners must also take into account time-varying constraints, such as time in sun and shadow for solar-powered components and communication windows based on satellite positions. Here we present a tool to help mission planners determine safe, efficient, and robust paths for robotic missions based on the TATERS (Tools for Autonomous Terrain Exploration of Remote Spaces) toolbox. The mission planning component of TATERS contains: (1) methods for generating multiscale adaptive graphs of the environment based on remotely sensed data; (2) methods for finding the best paths through time-varying environments; and (3) methods for assessing path risk based on the uncertainty of remotely sensed data. In this paper, we define the algorithms underlying each of these methods. In addition, we demonstrate example usage based on a lunar traverse near the south pole using current coarse-resolution lunar data and nominal communication windows based on a single satellite in orbit.
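Path-finding through a time-varying environment, as in method (2) above, is commonly handled by searching over (location, time) states so that edge costs can depend on when the edge is traversed (shadowed vs. sunlit, comm window open or closed). A minimal sketch of that idea — not the TATERS implementation — with time advancing one step per move:

```python
import heapq

def time_varying_shortest_path(neighbors, cost, source, goal, t0=0, t_max=100):
    """Dijkstra over (cell, time) states. `cost(a, b, t)` is the cost
    of moving a -> b departing at time t; returning float('inf')
    models an edge impassable at that time (e.g. closed comm window).
    Returns the cheapest cost to reach `goal`, or inf if unreachable
    within t_max steps."""
    pq = [(0.0, source, t0)]        # (cost-so-far, cell, time)
    best = {}                        # best known cost per (cell, time)
    while pq:
        g, node, t = heapq.heappop(pq)
        if node == goal:
            return g
        if best.get((node, t), float('inf')) <= g or t >= t_max:
            continue
        best[(node, t)] = g
        for nxt in neighbors(node):
            c = cost(node, nxt, t)   # time-dependent traversal cost
            if c < float('inf'):
                heapq.heappush(pq, (g + c, nxt, t + 1))
    return float('inf')
```

Adding a self-loop in `neighbors` would model waiting in place for a window to open, at the price of a larger state space.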
      • 02.0506 Cryo-Compatible Robotic Actuators with Use of Superconducting Materials
        Daniel Chavez-Clemente (Jet Propulsion Laboratory), Asad Aboobaker (Jet Propulsion Laboratory) Presentation: Daniel Chavez-Clemente - -
        This paper outlines a conceptual design for a superconducting electric motor to enable actuation of small robotic vehicles and manipulators in Permanently Shadowed Regions (PSRs) of the Moon. The design takes advantage of high-temperature superconductors (HTS) whose critical temperature is higher than the seasonal range encountered in lunar PSRs, eliminating the need for a cryocooler. The paper focuses on a 40 N·m direct-drive motor for a rover comparable in size to the Mars Exploration Rovers (MER), enabling wheel actuation without a gearbox at ground speeds encompassing MER-like and Endurance operations. It is shown that a superconducting motor is viable for this application, offering continuous driving times of about 4 hours at full torque, and in excess of 44 hours at medium to low torques, with winding temperature as the limiting factor. This performance is compatible with long-distance lunar exploration. The HTS motor is about 30% lighter than a copper-wound equivalent motor, and operates at efficiencies as high as 99.89%. A discussion of focus areas for motor control electronics and system-level trades is also included.
      • 02.0507 It’s a Dirty Job: Defining and Mitigating Dust and Sand Hazards on the Dragonfly Mission to Titan
        Ralph Lorenz (Johns Hopkins University/Applied Physics Laboratory) Presentation: Ralph Lorenz - -
        Spacecraft are assembled in clean rooms, and the interactions of hardware with ‘dirt’ are generally unfamiliar to spacecraft engineers. In-situ missions on planetary surfaces confront designers with a bewildering range of possible adverse situations, a challenge made even more complicated when the materials involved are not the usual suite of more-or-less understood minerals on the terrestrial planets but are cryogenic ices and organic materials. A disciplined approach on cost-constrained programs is necessary to identify risks that must be mitigated and those possible-but-unlikely or inconsequential risks that must be accepted. A first step is to recognize the different threat mechanisms, as these have distinct probabilities, effects, and mitigation and verification approaches. These include abrasion (the removal of hardware material by energetic contact with sand/gravel), mechanical occlusion (blockage of mechanisms or flow paths), and obscuration (blockage of optical windows by dust). These interactions may occur by virtue of transport of the Titan material by natural processes (wind, airfall dust) and by the lander-induced environment (drilling operations, or downwash from the rotors). By decomposing the interactions this way, appropriate tests and designs can be formulated in each case. Mitigations have included labyrinth seals, dust filters, encapsulation of soft insulation foam in protective materials, and electrically-conductive, non-stick window coatings. In selected cases, resilience to the environment has been verified by tests tailored from MIL-STD-850 (e.g. air-blasted abrasives, swirling dust deposition chamber, etc.). In some cases it is appropriate to use Titan simulants that have some chemical fidelity (i.e. 
similar molecular structures yielding similar microphysical properties like surface energy, important in controlling adhesion of dust to surfaces) whereas in others, such as mechanism blockage assessments by coarser material, the composition is relatively unimportant and only macroscopic properties like density and strength need to be considered, allowing for example simulants such as Arizona Road Dust or ground walnut shells to be used. Similar physics-based judgements are applied to the adequacy of room temperature vs cryogenic tests (e.g. a robotic arm was used to repeatedly drag a skid-mounted sensor over an abrasive surface at liquid nitrogen temperatures). This presentation will survey the various threats identified on Dragonfly and the design and verification approaches adopted to manage them.
      • 02.0508 Low-Voltage, Repairable EDS Coating for Lunar Dust Mitigation on EVA Spacesuit Textiles
        Keerthana Srinivasan (Princeton University) Presentation: Keerthana Srinivasan - -
        In recent decades, renewed interest in lunar exploration has emphasized the need for effective extravehicular activity (EVA) spacesuits, particularly in addressing the challenges posed by lunar dust on outer spacesuit textiles, or ortho-fabric. During the Apollo missions (1968-1972), abrasive lunar dust severely compromised EVAs, as its jagged, microscopic particles easily adhered to ortho-fabric, posing significant health risks to the eyes and lungs. A well-known lunar dust mitigation technique is electrodynamic dust shielding (EDS), which allows electrostatically charged lunar dust to be lifted and removed from a surface. Conventional EDS implementations on rigid surfaces such as solar arrays have demonstrated removal efficiencies near 90% under laboratory conditions, but translating EDS systems to flexible substrates such as ortho-fabric presents distinct challenges. Flexible EDS electrodes have shown promising removal efficiencies in the range of ~85–95% in lab demonstrations, yet typically require high actuation voltages of 0.8-1 kV and raise insulation/safety concerns for use on wearable systems. To address these constraints, we present a low-voltage, repairable EDS electrode based on an aromatic thermosetting copolyester/multi-walled carbon nanotube/polytetrafluoroethylene (ATSP/MWCNT/PTFE) composite, engineered for direct application to ortho-fabric through spray deposition. The nitrogen-assisted spray process cures at 40°C in 30 minutes, suggesting in-situ repairability without specialized infrastructure. The composite simultaneously delivers mechanical strength (ATSP), enhanced surface conductivity for 3-phase pulsating square-wave actuation (MWCNT), and low coefficients of friction for dust repellency (PTFE). Potential health hazards are overcome through the insulation offered by our ATSP/PTFE matrix and the biocompatibility of ATSP/PTFE composites, as seen through its use in prosthetics in recent years. 
Tribological testing of our electrode on ortho-fabric with LMS-D1 simulant from Exolith Labs demonstrates a 56% reduction in the coefficient of friction compared to uncoated ortho-fabric, significantly lowering mechanical dust adhesion. Using a boundary element method (BEM) simulation implemented in Julia, we modeled the lunar dust particle trajectory under a three-phase pulsating electric field generated by our spiral electrode geometry. The results predict suspension and horizontal transport of particles down to 0.1 μm in radius at an applied voltage of 0.3 kV, which is attributable to both enhanced surface conductivity from MWCNTs and electric-field concentration from the spiral geometry. Future work will focus on demonstrating the suspension of lunar dust in real-time and further durability testing in vacuum chambers.
      • 02.0510 NiMEx: Smart Rover Swarms for Concurrent Mars Missions and Adaptive Role Reassignment
        Nidhi Mandrekar (Girls in Robotics (GiR)) Presentation: Nidhi Mandrekar - -
        This research presents NiMEx (Next-Generation Intelligent Miniature Explorers), a swarm-based robotic system designed to enhance the scientific yield and operational efficiency of future Mars exploration missions. NiMEx consists of ten autonomous miniature rovers, each under 30 inches in size. The rovers are grouped into three specialized sub-categories: Aquabot, AstroBiobot, and Geobot. Each rover is equipped with mission-specific instruments such as spectrometers, ground-penetrating radar, laser-induced fluorescence systems, and rotary drills, which perform targeted tasks such as mineral analysis, biosignature detection, and geological sampling. The swarm architecture is optimized using distributed autonomy with real-time inter-rover communication and a localization strategy. Localization is achieved using beacon-enabled triangulation and ultra-wideband (UWB) inter-rover tracking along with Simultaneous Localization and Mapping (SLAM) techniques. One rover serves as a navigation leader, generating terrain maps and broadcasting positional data to the swarm. Task allocation is governed by decentralized algorithms that dynamically assign roles based on rover capabilities, location, and energy reserves. Data transmission is facilitated through a multi-hop mesh network with priority queuing, compression, and redundancy protocols to ensure reliable delivery to the base station. The key advantage of NiMEx is its ability to execute concurrent missions, adaptive role specialization, and autonomous decision-making, offering a scalable and resilient framework for planetary exploration. This approach significantly reduces latency, increases coverage, and enhances the likelihood of detecting scientifically valuable phenomena across diverse Martian terrains.
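Capability-, location-, and energy-aware task allocation of the kind described can be sketched with a simple greedy bidding rule. The field names and the bid formula below are illustrative assumptions, not the NiMEx design:

```python
def assign_tasks(rovers, tasks):
    """Greedy allocation sketch: each task goes to the capable rover
    with the lowest bid, where the bid combines Manhattan distance to
    the task and the energy cost relative to remaining reserves.
    The winner's energy and position are updated so later tasks see
    its changed state (a stand-in for decentralized re-bidding)."""
    assignments = {}
    for task in tasks:
        bids = []
        for r in rovers:
            if task["type"] in r["capabilities"] and r["energy"] > task["energy_cost"]:
                dist = (abs(r["pos"][0] - task["pos"][0])
                        + abs(r["pos"][1] - task["pos"][1]))
                bids.append((dist + task["energy_cost"] / r["energy"], r["id"]))
        if bids:
            _, winner = min(bids)
            assignments[task["id"]] = winner
            for r in rovers:
                if r["id"] == winner:
                    r["energy"] -= task["energy_cost"]
                    r["pos"] = task["pos"]
    return assignments
```

In a true decentralized scheme each rover would compute its own bid locally and the winner would be resolved over the mesh network rather than in one loop.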
    • Terry Hurford () & Xiang Li (NASA Goddard Space Flight Center) & Jacob Graham (NASA Goddard Space Flight Center)
      • 02.0601 Hot Cathode Ionization in Space: Characterization and Improved Beam Shaping for the NIM MS on JUICE
        Samuel Wyler (University of Bern), Robin Bonny (University of Bern), Lorenzo Obersnel (University of Bern), Rico Fausch (University of Bern), Andre Galli (), Audrey Vorburger (University of Bern), Peter Wurz (University of Bern) Presentation: Samuel Wyler - -
        On April 14, 2023, the Jupiter Icy Moons Explorer (JUICE) spacecraft was launched to the Jovian system to study the emergence of potentially habitable worlds around gas giants. The Neutral and Ion Mass Spectrometer (NIM), developed by the University of Bern in Switzerland, will characterize the atmospheres of the Galilean moons and analyze subsurface material ejected by Europa’s plumes, if present and encountered. NIM uses a power-efficient hot-cathode filament to create an electron beam that ionizes incident atoms and molecules for mass spectrometric analysis. This study compares the performance of the space-qualified cathodes in the Protoflight Model during post-launch (in-cruise) commissioning with both expected performance metrics and laboratory-tested cathodes in the Flight Spare instrument. The heating current drawn by the cathode is shown to increase over the long term due to degradation of the coating. In contrast, after air exposure, cathode performance improves with use for up to 100 hours. We optimized the beam-shaping electrode potentials to efficiently ionize and store ions while minimizing thermal stress on the cathodes. Experimental results show that the uniformity of the electron beam is more important than maximizing its transmission in the ionization region, and we support these results with simulations.
      • 02.0602 Spectrum Scoring and Adaptive Swarm Optimization for the Ion Optics of the NIM TOF MS on JUICE
        Samuel Wyler (University of Bern), Peter Keresztes Schmidt (University of Bern), Andre Galli (), Robin Bonny (University of Bern), Rico Fausch (University of Bern), Audrey Vorburger (University of Bern), Peter Wurz (University of Bern) Presentation: Samuel Wyler - -
        The Neutral and Ion Mass Spectrometer (NIM) aboard ESA’s Jupiter Icy Moons Explorer (JUICE) will receive an enhanced signal-processing chain and autonomous optimization package for its ion optics and ion source in early 2026. The newly developed algorithms are designed for the mission’s limited onboard computational resources and the specific requirements of an efficient Adaptive Particle Swarm Optimization (APSO) approach to hardware optimization. In the case of NIM, the optimization problem space can extend to 19 dimensions, each representing a cathode, electrode, or ion-optical lens voltage that is tuned to maximize signal intensity and mass resolution. The processing chain begins with high-pass filtering of acquired spectra for baseline adjustment. Spectrum quality is then quantified using four newly developed, lightweight yet diverse signal-scoring functions. These quality metrics feed the NIM APSO algorithm, which has been reviewed and adapted for space-flight mass spectrometry applications. In the first part, we present the four scoring algorithms, including their properties, advantages, and disadvantages, with the intention of providing a baseline for scoring-function development for other instruments. The algorithms employ four distinct scoring strategies: highest-peak detection using either a direct (A) or a Gaussian-moment analysis (B), a highest-scoring-peak detection method (C), or a global spectrum analysis (D) approach. The first three strategies are based on determining the full width at half maximum (FWHM) of the selected peak. In the second part, the improved NIM APSO algorithm is presented in detail. A degeneracy-avoidance (DA) feature as well as three different Elitist Learning Strategies (ELS) are introduced, and the impact of selected optimization parameters on efficiency and outcomes is examined.
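A minimal stand-in for a direct highest-peak FWHM metric (in the spirit of strategy A) can be sketched as follows; the half-maximum walk and the height/FWHM score are illustrative assumptions, not the flight algorithm:

```python
import numpy as np

def score_highest_peak(spectrum, dx=1.0):
    """Score a spectrum by its tallest peak, favoring tall, narrow peaks.

    Finds the global maximum, walks outward to the half-maximum
    crossings to estimate the FWHM, and returns height / FWHM.
    """
    y = np.asarray(spectrum, dtype=float)
    i = int(np.argmax(y))
    half = y[i] / 2.0
    # walk left and right from the apex until the signal drops below half-max
    left = i
    while left > 0 and y[left] > half:
        left -= 1
    right = i
    while right < len(y) - 1 and y[right] > half:
        right += 1
    fwhm = max(right - left, 1) * dx
    return y[i] / fwhm
```

Because the score divides peak height by width, the optimizer is rewarded for both signal intensity and mass resolution, matching the two quantities the APSO tunes the 19 voltages against.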
      • 02.0603 The Lunar Capillary Absorption Spectrometer (LUCAS) for Characterization of Lunar Volatiles
        Isabel King (Honeybee Robotics), Frank Sheeran (Honeybee Robotics Spacecraft Mechanisms Corporation), Sara Mayne (Honeybee Robotics), Jason Kriesel (OKSI), Andrew Fahrland (OKSI), Jennifer Stern (NASA Goddard Space Flight Center) Presentation: Isabel King - -
        Locally sourced water is a critical resource for sustaining a human presence on the Moon as intended under NASA’s Artemis program. However, the composition of the volatile mixture containing this water ice is not yet well characterized, which makes the design of water-harvesting systems challenging. The only ground-truth data to date come from NASA’s Lunar Crater Observation and Sensing Satellite (LCROSS) experiment, which detected a variety of volatiles in addition to water ice. To address this analysis gap, we propose the Lunar Capillary Absorption Spectrometer (LuCAS): a novel end-to-end isotope and trace-gas analyzer system for studying lunar volatiles. LuCAS comprises a Sample Handling System (SHS) developed by Honeybee Robotics and a CAS instrument developed by OKSI. The instrument can measure the abundance and the H, O, and C stable isotopic ratios of H2O and CO2 in regolith, as well as the abundance of other volatile species observed during the LCROSS experiment, such as H2S. Although LuCAS is compatible with a variety of sample collection tools, the team has baselined Honeybee Robotics’ Planetary Volatiles Extractor (PVEx) for its ability to extract and collect the volatiles expected in the near-subsurface of the lunar poles. LuCAS development work has been funded by SBIR Phase I and Phase II projects in addition to a recently awarded DALI. Through these opportunities, the team has designed, built, and performed end-to-end testing of the LuCAS system. Accurately characterizing the abundance and isotopic ratio of individual volatile species in a mixture of volatiles is a challenging process. Specifically, the challenge lies in minimizing memory and isotope-fractionation effects on the measurement. This includes optimizing the material properties of surfaces exposed to the sample and controlling flow through a sample acquisition system like the combined PVEx and SHS. In lab demonstrations, water standards (i.e., isotopically labeled water) are used to characterize the extent of memory and isotope-fractionation effects. This paper summarizes the lessons learned in addressing these challenges, and the results that can be achieved by using the LuCAS system as part of a landed mission on the lunar surface.
      • 02.0604 Planetary Volatiles Extractor (PVEx) for Sample Delivery on the Moon
        Frank Sheeran (Honeybee Robotics Spacecraft Mechanisms Corporation), Isabel King (Honeybee Robotics), Benjamin Collins (Honeybee Robotics), Sara Mayne (Honeybee Robotics), Andrew Fahrland (OKSI), Jason Kriesel (OKSI), Jennifer Stern (NASA Goddard Space Flight Center), Kris Zacny (Honeybee Robotics Spacecraft Mechanisms Corporation) Presentation: Frank Sheeran - -
        The Planetary Volatiles Extractor (PVEx) is an end-to-end instrument that combines mining and extraction into a single step for collecting resources on the Moon and other planetary bodies. With a storied history in ISRU, PVEx has effectively demonstrated volatile collection to an inline cold trap by heating regolith trapped inside the auger after drilling. Once collected, water and other resources are delivered to a wide range of end applications, from ISRU processing plants to science instruments including a Capillary Absorption Spectrometer (CAS). PVEx consists of a 1-meter-long, 1.5-inch-ID coring drill with a thin flex heater mounted to the inside walls of the auger. During operation, PVEx drills to depth before heating the frozen regolith inside the core to slowly sublimate volatiles. These gases then travel up the drill, through a custom swivel joint, and flow to a Ricor cryocooler which acts as a cold trap. The deposition of volatiles on the cold trap creates a small pressure gradient between the drill string and the cold trap, which promotes volatile flow from the auger to the cold trap. Once collection is complete, the cold trap is closed off from the auger and heated to re-sublimate the volatiles for instrument delivery. In this phase of development, PVEx work focuses on characterizing auger diameters against core retention and comparing drill parameters against collection efficiency for low weight-percent lunar simulant samples. These developments ready PVEx for flight missions with the express intent of delivering small volatile samples to highly sensitive instruments such as the CAS. Initial results indicate that a 0.75-inch ID shows poor core retention across all drill parameters, while larger IDs such as 1.5 and 2.0 inches show drastically increased coring efficiency at decreased auger and feed speeds. These results and others developed under this study demonstrate PVEx’s versatility across a wide range of ISRU applications, with improved mass and power requirements over state-of-the-art solutions.
      • 02.0606 Prototype of a Laser-based Mass Spectrometer for In-situ Dating of Rocks on Planetary Surfaces
        Peter Wurz (University of Bern), Marek Tulej (University Bern), Rico Fausch (University of Bern) Presentation: Peter Wurz - -
        Determining ages of planetary materials in situ remains a central, yet unresolved, objective of planetary science. So far, sample-return missions are the established route to dating planetary materials, but they are logistics-intensive, expensive, and therefore scarce. Present instruments for in-situ dating typically exceed the allocated mass, power, or volume of flight payloads. Several dating schemes are suitable for in-situ dating on planetary surfaces. We selected the Rb–Sr system, which has been used for dating a wide range of terrestrial and extraterrestrial materials, including Martian meteorites. Rb and Sr are also relatively immobile compared to other radiogenic systems proposed for flight, and are present at practical abundances in most igneous, metamorphic, and sedimentary rocks. However, separating the 87Rb and 87Sr isotopes requires extremely high mass resolution. To avoid this requirement we use resonant ionization of Rb and Sr, respectively. We present a laser-based compact mass spectrometry prototype instrument that provides the accuracy of isotope measurement needed for the Rb–Sr system. The first step is pulsed laser ablation of material from the rock (Laser Ablation Mass Spectrometry, LAMS). Using the already-ionized species in the ablation plume, we determine the chemical composition of the rock, from which we infer its mineralogy; this provides the context for the subsequent dating measurements. The second step uses the neutral species in the ablation plume (Laser Ablation Resonance Ionization Mass Spectrometry, LARIMS): the Rb and Sr atoms are selectively ionized by additional lasers at resonant wavelengths, with a temporal offset. This offset separates the 87Rb and 87Sr isotopes in time, so they are easily measured by time-of-flight mass spectrometry despite their almost identical mass. We report on the realized prototype instrument, with example measurements in the LAMS and LARIMS modes.
The instrument combines a laser-ablation ion source with resonance-ionization mass spectrometry in a shared, single-reflection off-axis time-of-flight (TOF) analyzer. It operates in two seamlessly switchable modes. In LAMS-only mode, the instrument rapidly maps the chemical composition in 2D, enabling identification of mineralogical phases of interest. In dating mode, LARIMS proceeds in two steps: ions from the ablation plume are first deflected from reaching the detector, then lasers ionize neutral Rb and Sr atoms in the plume using wavelengths tuned to resonant transitions of the respective atoms. Resonant ionization selectively ionizes only the species of interest, thereby suppressing isobaric interferences in the mass spectra, and thus yields precise 87Rb–87Sr ages. Tests with the prototype system demonstrate (i) the LAMS and LARIMS modes within an unmodified geometry and setup, (ii) mass resolutions of 500 for both modes, and (iii) detection limits commensurate with low-ppm rubidium and strontium. With a characteristic length below 30 cm, this novel type of instrument represents a practicable solution for landed in-situ dating and has been selected for NASA’s Dating an Irregular Mare Patch with a Lunar Explorer (DIMPLE) experiment, as part of the Payloads and Research Investigations on the Surface of the Moon (PRISM) and Commercial Lunar Payload Services (CLPS) programs.
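The scale of the isobar problem can be checked with the standard linear TOF relation t = L·sqrt(m/(2qU)). The drift length and accelerating potential below are assumed round numbers for illustration, not instrument values:

```python
import math

U = 3000.0             # assumed accelerating potential, volts
L = 0.30               # assumed drift length, m (instrument < 30 cm)
q = 1.602176634e-19    # elementary charge, C
u = 1.66053906660e-27  # atomic mass unit, kg

def tof(mass_u):
    """Flight time of a singly charged ion in a linear TOF section."""
    return L * math.sqrt(mass_u * u / (2 * q * U))

t_rb87 = tof(86.909180)  # 87Rb atomic mass, u
t_sr87 = tof(86.908877)  # 87Sr atomic mass, u
print(t_rb87, t_rb87 - t_sr87)
```

Under these assumptions both isotopes arrive within picoseconds of each other over a microseconds-long flight, which is why the instrument separates them by offsetting the resonant-ionization laser pulses in time rather than by raw mass resolution.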
      • 02.0607 A Novel Laser-ablation Resonance-ionization Mass Spectrometer for in Situ Dating of Lunar Rocks
        Rico Fausch (University of Bern), Audrey Aebi (University of Berne), Amanda Alexander (Southwest Research Institute), F. Scott Anderson (Southwest Research Institute), Amy Fagan (Western Carolina University), Sierra Ferguson (Southwest Research Institute), Mary Hanson (Southwest Research Institute), James Head (Brown University), Katherine Joy (University of Manchester), Viliam Klein (), Jonathan Levine (), Steve Osterman (Southwest Research Institute), John Pernet-Fisher (The University of Manchester), Vishaal Singh (Southwest Research Institute), Romain Tartese (The University of Manchester), Tina Teichmann (Southwest Research Institute), Peter Wurz (University of Bern), Marcella Yant () Presentation: Rico Fausch - -
        In situ geochemical and geochronological measurements on the Moon are essential for reconstructing the thermo-magmatic evolution of the Earth–Moon system. Key unresolved problems include pinning down the final episode of lunar volcanism, deciphering the emplacement mechanism of irregular mare patches such as Ina—the largest of its class—and reconciling a potentially young Ina with prevailing models of the Moon’s thermal and volcanic evolution. To address these questions, NASA has selected the Dating an Irregular Mare Patch with a Lunar Explorer (DIMPLE) payload on a forthcoming Commercial Lunar Payload Services (CLPS) lander destined for Ina. DIMPLE combines a rover for sample acquisition, a robotic arm with saw for lithological preparation, and the Chemistry, Organics and Dating Experiment (CODEX), a dual-mode mass spectrometer customized for rubidium–strontium geochronology and geochemistry analysis. CODEX integrates laser-ablation mass spectrometry (LAMS) for geochemistry with element-selective laser-ablation resonance-ionization mass spectrometry (LARIMS) in a compact, single-reflection off-axis time-of-flight analyzer. In LAMS mode, a laser rasterizes the cut surface, generating a plasma plume whose ions are analyzed at m/Δm ≈ 350 (FWHM) in the mass range between m/z 1 and 250. The resulting major- and trace-element maps (sub-ppm-level detection limits) establish petrologic context, identify mineral phases, and guide subsequent isotopic spots. In dating mode (LARIMS), the initial ions of the plume are electrodynamically suppressed to prevent detector saturation, while tunable lasers ionize the neutral species of the plume. This two-step scheme eliminates isobaric interferences from 87Rb on 87Sr, allowing accurate 87Sr/86Sr and 87Rb/86Sr ratios at a mass resolving power of only m/Δm ≈ 190—three orders of magnitude lower than that demanded by conventional LAMS chronometers. 
Three-dimensional ion-optical modelling with SIMION yielded a single vented ion-source geometry that accommodates both user-selectable analytical modes. By integrating a non-hygroscopic micro-channel-plate detector, the design maintains high sensitivity while eliminating the costly T₀ nitrogen-purge requirement. The mass spectrometer is fully compliant with the General Environmental Verification Standard (GEVS), thanks to a monolithic AlBeMet structure, radiation tolerant electronics following a New Space approach, and a robust opto-mechanical design—all within a payload envelope of < 300 mm largest dimension. The opto-mechanical design employs a strict tolerance hierarchy, assigning each subsystem—ion-optical, laser-optical, and structure—to a class defined by its required order-of-magnitude precision. The laser beam trains and the ion-optical assembly are aligned against a common datum-reference frame, while interchangeable shim packs absorb the small mechanical deltas between hardware builds. The program employs the standard Qualification Model, Flight Model, Flight-Spare chain, augmented by an Engineering Development Unit fitted with a flight-representative ion-optic assembly for risk reduction and design reference mission rehearsals. This model philosophy ensures the flight instrument will meet the stringent requirements necessary to achieve in situ rubidium-strontium geochronometry while simultaneously generating the bulk- and trace-element datasets needed to place those ages in their full geochemical context. By delivering crystallization ages for Ina’s local rocks, CODEX improves our knowledge of the potentially youngest episode of lunar volcanism and may refine the late-stage thermal history of the Moon.
      • 02.0609 Venus Atmospheric Structure Investigation on DAVINCI - Preliminary Design and Operations Concept
        Ralph Lorenz (Johns Hopkins University/Applied Physics Laboratory) Presentation: Ralph Lorenz - -
        The DAVINCI probe was selected for flight in 2021 and is presently under development for launch in 2031. This presentation will review how the VASI’s measurements of pressure, temperature, and wind, far superior in resolution and/or quantity to those of previous missions, may improve our understanding of Venus and complement DAVINCI’s composition measurements and imaging. Additionally, VASI measures the dynamics of the vehicle during hypersonic entry and during parachute and freefall descent, supporting the NASA Engineering Science Investigation. During the extended Phase B/Risk Reduction Phase, VASI requirements have been refined, a sensor suite selected, and sensor accommodation designed. The only near-surface temperature/pressure profile of the atmosphere of our twin planet, Venus, was obtained in 1985 by the VEGA-2 lander. The handful of other probe missions have very limited vertical resolution, or suffered sensor failures in the lowest few km. Unlike altitudes above 40 km, which have been relatively well surveyed by radio occultation profiles from orbiter missions, the fine temperature structure of the lowest part of the Venus atmosphere must be interrogated by direct measurement. This structure is important in several respects. First, the structure and composition reflect the interactions between the surface and atmosphere of an ‘exoplanet in our back yard’ which may be much more typical than those of Earth. Secondly, there are indications that particularly interesting phenomena may occur on Venus that are not seen in the atmospheres of Earth, Mars, or Titan (but are analogous to aspects of ocean stratification on Earth): the VEGA-2 profile is impossible to reconcile with a profile that is both convectively stable and compositionally uniform. A favored hypothesis is that the lowest few kilometers are compositionally denser (lower N2). The supercritical thermodynamics of carbon dioxide add to the rich possibilities in this region.
The exchange of angular momentum between the retrograde, slowly-rotating Venus and its dense atmosphere is reflected in the wind profile, which can now be interpreted by global circulation models. Again, while cloud-top (60-70km) winds are now well-known from Akatsuki and preceding missions, very little data exist on winds in the hidden lowest 40km. Doppler tracking, turbulence measurements, and trajectory reconstruction from descent imaging will shed unprecedented light on the lower atmospheric dynamics.
      • 02.0610 Using a Single Imager and Structured Light for High-Resolution Mapping on Mars
        David Klevang (Technical University of Denmark), Mathias Benn (Danish Technical University), Morgan Cable (NASA Jet Propulsion Laboratory) Presentation: David Klevang - -
        The navigational sensor onboard PIXL, the Micro Context Camera, is equipped with an active LED floodlight and structured-light capabilities. The floodlight ensures a homogeneously lit scene for capturing the optical context of the surface. The structured light consists of two illumination sources, each projecting its own designed dot pattern onto the surface: one designed to maintain the optimal standoff distance of the PIXL instrument during a terrain scan, and one designed for broad coverage, enabling safe assessment of the terrain. By triangulation, the distance to the terrain is measured to accuracies of ~50 microns. While the structured light offers highly accurate absolute range measurements, the projected pattern is sparse, leaving much of the terrain uncovered. The aim here is to reconstruct the terrain at high resolution while exploiting the accurate structured-light range measurements. Passive reconstruction typically resolves the terrain only up to scale, producing a disparity map. The scale factor could in principle be recovered from the actual motion between two observations, i.e. the baseline; however, significant positional drift caused by the thermal environment on Mars makes this unreliable, so the structured light provides the scale factor instead. We thus merge the best of both worlds, applying highly accurate range measurements to a high-resolution disparity map. This approach makes use of the motion capabilities of PIXL, using the hexapod actuator or the robotic arm of the rover, but does not depend on any motion metrology knowledge. Furthermore, the approach can be utilized on any single-imager system with structured light and actuator capabilities.
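The scale-factor recovery described above can be sketched generically: a disparity map fixes depth only up to a constant, and the sparse but accurate structured-light ranges pin that constant down. The inverse-depth model and the averaging step are illustrative assumptions, not the PIXL pipeline:

```python
import numpy as np

def scaled_depth(disparity, sl_pixels, sl_ranges):
    """Convert a relative disparity map to metric depth.

    Stereo-style depth follows z = k / d with an unknown scale k.
    The structured-light dots, which land on known pixels with
    accurate absolute ranges, determine k; the full-resolution
    disparity map then becomes metric everywhere.
    """
    d = np.asarray(disparity, dtype=float)
    rows, cols = zip(*sl_pixels)
    d_sl = d[np.array(rows), np.array(cols)]
    z_sl = np.asarray(sl_ranges, dtype=float)
    # z = k / d  ->  k = z * d; average over the dots (exact if noise-free)
    k = float(np.mean(z_sl * d_sl))
    return k / d
```

A handful of accurate dots is enough: the scale is a single scalar, so the sparse pattern does not limit the resolution of the final depth map.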
      • 02.0611 Miniaturised French Instruments for In-situ Missions
        Gabriel PONT (CNES (French Space Agency)) Presentation: Gabriel PONT - -
        The last decade has witnessed a tremendous expansion of surface missions to solid bodies of the Solar System, in particular to the Moon, Mars, asteroids, and comets. CNES has been involved in many of those, in cooperation with a varied range of partners. On June 3rd 2024, the DORN instrument, developed by the French laboratory IRAP and dedicated to measuring radon, was deployed successfully on the far side of the Moon by the Chang’e 6 lander, in cooperation with CNSA. The ChemCam LIBS, partially built by IRAP, recently executed its millionth laser shot on Mars aboard NASA’s Curiosity rover. Its follow-up, the instrument suite SuperCam, is also giving excellent results on Perseverance. The mass spectrometer suite SAM on Curiosity also includes a gas chromatograph (SAM-GC) provided by the French laboratory LATMOS. InSight’s seismometer, SEIS, developed by the French laboratory IPGP, measured for the first time seismic vibrations of Mars and meteorite impacts, between 2019 and 2022. These measurements provided unprecedented understanding of the planet’s internal structure and activity. In addition, CNES developed with DLR the small MASCOT lander, which was deployed by JAXA’s Hayabusa2 orbiter on the surface of asteroid Ryugu and carried an IR hyperspectral microscope, MicrOmega, developed by the French laboratory IAS. Over the same period, cubesats have spread out in Low Earth Orbit, and their technologies are now available for deep-space missions. In addition, many private actors are getting involved in Moon surface missions, notably within the frame of NASA’s CLPS Program, and several of those initiatives incorporate small landers and rovers. Miniaturisation and cost reduction have been identified by CNES as essential assets to enable ground-breaking scientific observatories and support future exploration of the Moon, Mars, and small bodies.
A new generation of the instruments listed above is being developed, targeting masses from a fraction of a kilogram to a few kilograms. These instruments will be available for small surface missions, static or mobile, but are also applicable to larger infrastructures or to the investigation tool set of astronauts. We will elaborate on these new miniaturised versions, which encompass: compact visible and short-wavelength infrared cameras based on advanced imaging technologies; IR hyperspectral imagers; µLIBS; microchip gas chromatographic columns; ground-penetrating radars; and compact seismometers based on geophone sensors.
    • Leonard Felicetti (Cranfield University) & Giovanni Palmerini (Sapienza Universita' di Roma) & Ryan Woolley (Jet Propulsion Laboratory)
      • 02.0701 Robust Trajectory Optimization against Missed Thrust Events Using Sequential Convex Programming
        Hirotaka Sekine (University of Tokyo), Kenshiro Oguri (Purdue University), Yosuke Kawabata (University of Tokyo), Ryu Funase (University of Tokyo) Presentation: Hirotaka Sekine - -
        All spacecraft equipped with propulsion systems are inherently subject to the risk of missed thrust events, in which the planned thrust operations fail due to anomalies such as unexpected safe mode entry or propulsion system failure. This risk is particularly critical for deep space missions. Even short interruptions in maneuver execution can lead to the loss of key mission opportunities, such as gravity assists, and potentially result in mission failure or spacecraft loss. The maximum allowable duration of coasting away from the nominal trajectory is referred to as the missed thrust margin. Previous studies have proposed optimizing or constraining this margin to ensure the robustness of the nominal trajectory. However, these approaches typically focus on the worst-case margin over the entire trajectory. As a result, it becomes difficult to account for the time-varying probability of missed thrust events. For example, the probability of failure can vary depending on the mission phase, such as being higher during the early operation phase. Moreover, for missions with severe fuel constraints, such as those involving micro-spacecraft, it is often too conservative or even infeasible to ensure a sufficient margin throughout the whole trajectory. We propose a probability-aware low-thrust trajectory optimization framework that accounts for robustness to missed thrust events using sequential convex programming. Assuming a predefined distribution of missed thrust scenarios, the nominal and recovery trajectories are simultaneously optimized to minimize the expected fuel consumption across the scenario distribution. This assumption enables the method to flexibly reflect time-varying risk levels, such as the increased likelihood of missed thrust events during mission-critical phases. Sequential convex programming efficiently solves this non-convex optimization by iteratively solving convex subproblems. 
As a result, our method enables tractable consideration of multiple missed-thrust scenarios even in highly nonlinear dynamical regions such as multi-body systems. We validate the proposed method in the circular restricted three-body problem for the Earth–Moon system, highlighting the potential of sequential convex programming in dealing with the high nonlinearity of cislunar space. We assume a direct lunar transfer trajectory from Earth, one of the critical operations in cislunar space, where a short period of missed thrust can result in an unintended lunar flyby and loss of contact. At the same time, the time available for checking out the spacecraft system before the first lunar flyby is limited. We further assume a secondary payload mission inserted into this transfer trajectory, as micro-spacecraft are likely to experience missed thrust events due to their limited reliability heritage. Specifications and a potential separation condition are taken from the micro-spacecraft mission GEO-X, which is under development at the University of Tokyo and planned for launch in 2027. Preliminary numerical simulation results indicate that the nominal trajectory changes when robustness to missed-thrust scenarios is considered, although this requires only a small amount of additional fuel.
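The scenario-weighted objective described above (expected fuel over a missed-thrust distribution) has a simple structure, sketched here with hypothetical probabilities and delta-v costs; the actual method optimizes the nominal and recovery trajectories themselves, not just this bookkeeping:

```python
def expected_fuel(nominal_dv, scenarios):
    """Expected delta-v over a missed-thrust scenario distribution.

    scenarios: list of (probability, recovery_dv) pairs; the remaining
    probability mass is the no-failure case flying the nominal plan.
    Time-varying risk (e.g. higher failure odds during early operations)
    enters simply through the per-scenario probabilities.
    """
    p_fail = sum(p for p, _ in scenarios)
    return (1.0 - p_fail) * nominal_dv + sum(p * dv for p, dv in scenarios)

# hypothetical numbers: 10% early-ops outage, 5% later outage
j = expected_fuel(10.0, [(0.10, 15.0), (0.05, 20.0)])
print(j)  # 11.0
```

Because each scenario contributes linearly, weighting early-mission outages more heavily shifts the optimizer toward trajectories whose recovery options are cheap precisely when failures are most likely.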
      • 02.0702 Early-Phase Design of Distributed Space Antennas and Constellations for D2D Communications
        Seang Shim (), Ryusei Komatsu (Interstellar technologies inc.), Yuta Takahashi (Tokyo Institute of Technology), Hideki Yoshikado (), Harunobu Kobayashi (), Sumio Morioka (Interstellar Technologies Inc.) Presentation: Seang Shim - -
        This paper proposes a framework for the early-phase feasibility analysis of Distributed Space Antenna (DSA) constellations designed for Direct-to-Device (D2D) communication. DSAs, which form a large virtual antenna by controlling a cluster of small satellites, are a promising solution to overcome the physical aperture limitations of conventional monolithic antennas. For long-life D2D communication infrastructures, we employ Electromagnetic Formation Flight (EMFF), a propellantless control method that utilizes magnetic forces to maintain the formation. This approach allows for sustainable operation by using standard magnetic torquers (coils). The design of such constellations, however, involves complex, hierarchical trade-offs between the performance of a single DSA and the overall cost and capability of the entire constellation. Failure to resolve these conflicting requirements in the initial design phase can lead to the discovery of system-level infeasibility in later stages, resulting in costly rework and schedule delays. To mitigate this risk, our framework formulates the end-to-end design process as a multi-stage optimization problem. The first stage focuses on the feasibility of a single DSA. Here, we extend previous work by incorporating a critical new constraint: the thermal feasibility of the satellite. This is essential because the continuous power consumed to counteract the dominant J2 orbital disturbance acts as a constant internal heat source. This internal heat needs to be balanced against external thermal inputs to keep the satellite within its operational temperature limits. By solving this optimization, we derive the relationship between the number of constituent satellites and key DSA performance metrics, such as Equivalent Isotropic Radiated Power (EIRP) and total mass. In the second stage, these performance metrics are used to evaluate the design of the entire constellation. 
We formulate a multi-objective optimization problem that balances the total constellation cost, defined as the required number of rocket launches, against the communication performance. By solving this problem, the framework identifies the Pareto-optimal set of design parameters. This allows us to determine not only whether a feasible design solution exists for a given set of requirements but also to quantitatively understand the trade-offs between cost and performance. To demonstrate the effectiveness of this framework, a numerical simulation is conducted to present a concrete design example. This integrated methodology provides an early-phase evaluation of DSA constellations, thereby facilitating their development for future D2D applications.
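Identifying the Pareto-optimal set in the second stage amounts to filtering out dominated designs. A generic sketch with hypothetical (launch-count, performance) pairs, not the paper's actual solver:

```python
def pareto_front(designs):
    """Return the non-dominated (cost, performance) designs.

    Lower cost (e.g. number of rocket launches) and higher performance
    are better. A design is dominated if another design is at least as
    good in both objectives and strictly better in at least one.
    """
    front = []
    for c, p in designs:
        dominated = any(c2 <= c and p2 >= p and (c2 < c or p2 > p)
                        for c2, p2 in designs)
        if not dominated:
            front.append((c, p))
    return sorted(front)

designs = [(1, 10), (2, 30), (2, 20), (3, 30), (4, 50)]
print(pareto_front(designs))  # [(1, 10), (2, 30), (4, 50)]
```

The surviving set is exactly the cost-performance trade curve the framework reports: every point on it buys more communication performance only at the price of more launches.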
      • 02.0703 Space-Interferometry Formation Design Using Relative Orbit Elements: The STARI Mission
        Antonio Rizza (Stanford University), Ethan Foss (Stanford University), James Cutler (University of Michigan), Simone D'Amico (Stanford University) Presentation: Antonio Rizza - -
        The STarlight Acquisition and Reflection toward Interferometry (STARI) mission is a NASA-funded technology demonstrator designed to advance key technologies for future space-based interferometry. It consists of two 6U CubeSats flying in close formation in a Sun-synchronous terminator orbit, effectively simulating one arm of a distributed interferometer. STARI will, for the first time, demonstrate the reflection and beam transfer of starlight from one spacecraft to another, as well as its injection into a single-mode optical fiber. The mission design, currently in Phase A, is led by the University of Michigan; the Space Rendezvous Laboratory at Stanford University is responsible for the trajectory design and for the development of the onboard Guidance, Navigation and Control (GNC) system. This paper presents the mission requirements, concept of operations, and swarm trajectory design for the mission, together with a preliminary definition of the GNC architecture and operating modes. Unlike current state-of-the-art approaches to space interferometry, the proposed solution is based on a representation of observation requirements in the phase space defined by relative orbital elements. This enables high duty cycles for astronomical observations with very limited fuel consumption under navigation, control, and dynamic uncertainties. Moreover, the methodology provides a clear and elegant geometrical interpretation of the relative orbital state as the configuration parameter defining the inertial direction of observation. The science relative orbit, which allows nearly uninterrupted viewing of an inertially fixed target, is used during the observation phase and is exchanged for a standby orbit during engineering tasks such as reaction-wheel desaturation, ground communication, and science replanning.
Because the two spacecraft lack radial-normal separation in their science orbit, this transfer restores passive safety during routine engineering tasks and contingencies. The proposed mission design leverages closed-form impulsive control schemes to achieve station-keeping and swarm reconfiguration with fuel-optimal solutions. Finally, the choice of a Sun-synchronous absolute orbit enables scanning of different portions of the sky over the mission lifetime. An optimization pipeline is proposed to define the fuel-optimal sequence of observations given a set of inertial targets. The STARI CubeSats require an unprecedented level of navigation and control accuracy to achieve mission objectives, driving the need for a fully autonomous six-degrees-of-freedom GNC system. A preliminary design of the GNC architecture for STARI is provided here, highlighting the main challenges expected in the next design phases together with the strategies being put in place to overcome them. The success of the STARI mission will accelerate the development of space interferometry missions, paving the way for accessible and systematic exoplanet detection by means of distributed space systems, and enable novel capabilities for spacecraft proximity operations.
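The relative-orbital-element parameterization referred to above can be sketched with the standard quasi-nonsingular ROE definition; the mapping below is a generic textbook formulation under assumed element conventions, not the STARI GNC code:

```python
import math

def roe(chief, deputy):
    """Quasi-nonsingular relative orbital elements of deputy w.r.t. chief.

    Each input is (a, e_x, e_y, i, raan, u), angles in radians, with
    e_x = e*cos(omega), e_y = e*sin(omega) and u the mean argument of
    latitude. Returns the dimensionless ROE vector
    (da, dlambda, de_x, de_y, di_x, di_y).
    """
    a_c, ex_c, ey_c, i_c, O_c, u_c = chief
    a_d, ex_d, ey_d, i_d, O_d, u_d = deputy
    da = (a_d - a_c) / a_c                       # relative semi-major axis
    dlam = (u_d - u_c) + (O_d - O_c) * math.cos(i_c)  # relative mean longitude
    return (da, dlam,
            ex_d - ex_c, ey_d - ey_c,            # relative eccentricity vector
            i_d - i_c, (O_d - O_c) * math.sin(i_c))   # relative inclination vector
```

The appeal of this phase space is that safety and observation geometry become geometric statements about the relative eccentricity and inclination vectors, rather than constraints on raw Cartesian states.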
      • 02.0704 Exploring Advanced Propulsion Systems for CubeSats for Now & for the Future
        Abhinava Nookala (Nanyang Technological University) Presentation: Abhinava Nookala - -
        CubeSats have significantly impacted the landscape of modern space applications due to their scalability, cost-efficiency, modularity, and rapid development time. Technological advancements have expanded their capabilities, leading to the design of mission profiles in which propulsion, whether active or passive, has evolved from an optional subsystem into a mission-critical requirement. Onboard propulsion enables mobility, transitioning small satellites from passive orbiters into agile, maneuverable assets. This capability supports key mission functions, including orbital correction, formation flying, drag compensation, and proximity operations, thereby enhancing mission scope and responsiveness. To provide CubeSats with spatial maneuverability, propulsion engineers have developed a diverse set of approaches. Some systems are miniaturized versions of conventional technologies such as cold-gas, solid, liquid, or hybrid thrusters. Others (electrostatic, electromagnetic, or propellant-less) represent novel alternatives designed specifically for the CubeSat form factor. This paper presents a state-of-the-art review of propulsion technologies designed for CubeSats, with a focus on their mass, volume, power consumption, and compatibility with various missions. Special emphasis is placed on differential drag, a highly efficient passive method that leverages atmospheric drag through surface-area modulation for orbital control. The study focuses on propulsion strategies for Low Earth Orbit (LEO, <2000 km) and Very Low Earth Orbit (VLEO, <450 km) while providing broader insights into small-satellite propulsion. It also highlights systems that are currently flight-ready or near-term viable for rapid mission integration.
      • 02.0707 Uncertainty Quantification for Low-Thrust GTO to Lunar Transfers Using Monte Carlo Simulation
        Godwin Shitta () Presentation: Godwin Shitta - -
        Low-thrust transfers are attractive for commercial and scientific missions due to their high propellant efficiency, but they are highly sensitive to a variety of uncertainties, including initial state variations, thrust magnitude and direction deviations, and perturbing forces from other planetary bodies. This study presents a Monte Carlo-based uncertainty quantification framework to support the design of future low-thrust Earth–Moon transfers initiated from geostationary transfer orbit (GTO), using 1,000 simulated trajectories. The methodology applies a high-fidelity propagator that models the gravity of the Earth, Moon, and Sun; continuous low thrust with mass depletion; and user-defined distributions for uncertainties in initial position, velocity, thrust direction, and thrust magnitude. The system of equations includes variable-mass dynamics with third-body perturbations and solar radiation pressure, numerically integrated over a representative transfer duration. By applying realistic uncertainty models to the sampled parameters, the Monte Carlo framework generates statistical distributions of key mission outcomes such as transfer duration, propellant consumption, and final orbital insertion accuracy. The results identify critical sensitivities to thrust pointing and initial injection errors, highlighting mission-design risk factors that could degrade transfer accuracy or mission success. Key results show that dispersions in lunar arrival conditions grow significantly even for small thrust-direction errors if not corrected with closed-loop guidance, with up to 28% of simulated trajectories missing lunar capture under worst-case scenarios. The paper also compares its framework against public data from the ESA SMART-1 mission, providing quantitative evidence of how uncertainties expand the reachable set around nominal trajectories.
The paper discusses the implications for mission-design margins in low-thrust cislunar transfers, providing mission designers with statistical confidence and insight into potential dispersions. It provides a reproducible and extensible methodology for assessing risk in low-thrust lunar transfers originating from commercial GTO rideshares, a rapidly growing application area. It also offers a comparative baseline for analyzing future mission trajectories in light of uncertainties, leveraging lessons learned from SMART-1 and applying them to current and next-generation low-thrust electric propulsion spacecraft. This Monte Carlo approach supports engineering decision-making in transfer design, including trade studies for propulsion margins, navigation tolerances, and capture strategies in the lunar environment. The results emphasize the necessity of robust guidance and navigation strategies for future low-thrust Earth–Moon transfers.
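The sampling-and-aggregation loop at the core of such a Monte Carlo framework can be sketched as follows; the distributions, the surrogate sensitivity model, and the capture radius below are illustrative assumptions for the sketch, not the paper's actual propagator or values.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_dispersions(n=1000):
    """Draw illustrative uncertainty samples (all distributions are assumed)."""
    return {
        "pos_err_km": rng.normal(0.0, 1.0, n),          # initial position error
        "vel_err_mps": rng.normal(0.0, 0.5, n),         # initial velocity error
        "thrust_dir_err_deg": rng.normal(0.0, 0.1, n),  # thrust pointing error
        "thrust_mag_err_pct": rng.normal(0.0, 1.0, n),  # thrust magnitude error
    }

def arrival_miss_km(s):
    """Toy linear sensitivity model mapping input errors to arrival miss distance.

    A real study would replace this with the high-fidelity propagator; the
    gain coefficients here are placeholders chosen only for illustration.
    """
    return np.abs(1e3 * np.deg2rad(s["thrust_dir_err_deg"])
                  + 0.8 * s["vel_err_mps"]
                  + 0.1 * s["pos_err_km"]
                  + 0.5 * s["thrust_mag_err_pct"])

samples = sample_dispersions()
miss = arrival_miss_km(samples)
capture_radius_km = 5.0  # hypothetical capture-corridor radius
print(f"mean miss: {miss.mean():.2f} km, "
      f"capture failures: {100 * np.mean(miss > capture_radius_km):.1f}%")
```

Replacing `arrival_miss_km` with a call into the actual propagator turns this sketch into the full framework; the statistics of `miss` then directly yield dispersion and capture-failure estimates.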
      • 02.0709 Expanding Venus Entry Accessibility through Resonant Transfer
        Maxwell Jacobson (Worcester Polytechnic Institute), Ye Lu (Worcester Polytechnic Institute) Presentation: Maxwell Jacobson - -
        Aerial platforms and landers have been popular choices for Venus exploration. However, many constraints limit accessibility to specific latitudes and longitudes, such as heating and deceleration loads, the launch window, direct-to-Earth communication, and the time of Venus day at arrival for science observations. These accessibility restrictions are especially constraining for direct entry following an Earth–Venus interplanetary transfer. Although an orbit insertion prior to entry may provide more flexibility, it inevitably requires substantially more propulsion-system mass, which is otherwise minimal for a direct entry. In this work, we consider a mission with a resonance transfer after the first Venus encounter following Earth launch. The resonance transfer ensures that the spacecraft will encounter Venus a second time, permitting a much wider range of arrival geometries and therefore much broader access to Venus. The methodology consists of ballistic interplanetary transfers within one Earth–Venus synodic period, resonance-transfer geometry that matches the V-infinities of the interplanetary transfer, and a preliminary entry analysis that considers the peak deceleration limit. Preliminary results show that, even with a Venus–Earth line-of-sight constraint at the entry point and a 50-g deceleration load limit, the mission with a 1:1 resonance transfer has much more flexible global access for an arbitrarily chosen two-month launch window, whereas direct entry has black-out regions and a very low percentage of access over most other parts of Venus. We will report the complete analysis, including the heating-rate limit and sunlit constraints.
    • Lembit Sihver (TU Wien and NPI of the CAS) & Ondrej Ploc (Nuclear Physics Institute of the Czech Academy of Sciences)
      • 02.0801 Decadal Advancements in Technologies Supporting Active Magnetic Radiation Shields
        Joseph Hesse-Withbroe (University of Colorado, Boulder), Katya Arquilla (University of Colorado, Boulder) Presentation: Joseph Hesse-Withbroe - -
        The galactic cosmic ray (GCR) and solar energetic particle (SEP) radiation environments encountered in deep space present one of the biggest challenges for future long-duration crewed spaceflight missions beyond LEO. Chronic exposure to GCR radiation causes a variety of detrimental physiological effects, including carcinogenesis, cataractogenesis, and degenerative cardiovascular and renal diseases, while intense SEP radiation can cause acute radiation syndrome. Estimated dose equivalents for a conjunction-class Mars mission occurring during solar maximum exceed 1000 mSv, while NASA's career permissible exposure limit is 600 mSv. Passive radiation shielding based on aluminum, polyethylene, and water has been shown to be incapable of lowering GCR doses for a conjunction-class Mars exploration mission below NASA career permissible exposure limits without excessive mass penalties on the order of 1000 t. Active magnetic radiation shields based on Lorentz deflection of charged GCR particles have previously been investigated as an alternative technical countermeasure but, as of 2014, were not considered feasible due to technological limitations of superconductors and other key technologies. However, major advancements in superconductor performance over the last decade, driven by strong commercial pressures from the fusion power industry, have significantly improved the potential mass-normalized performance of active magnetic shields. In this work, we present the results of a technology survey in which the present state-of-the-art capabilities of key technologies underlying magnetic shields (superconductors, power provision, thermal management, and structural support) are determined and their effect on shield performance quantified. These new technologies are incorporated into a magnetic shield evaluation model to quantify their effect on system mass and dose-reduction capabilities.
Preliminary results indicate that the technology advancements of the last decade enable mass savings on the order of 60% relative to a state-of-the-art 2014 design. These results suggest that a shield capable of reducing the solar-minimum annual dose equivalent below 250 mSv/yr could be launched to trans-lunar injection (TLI) by a single SLS Block II Cargo.
      • 02.0803 Effects of Radiation on Photosensitive Instruments While Traversing the Van Allen Belts
        Aiden McCollum (Embry-Riddle Aeronautical University), Emelia Kelly (Embry Riddle Aeronautical University), Daniel Lopez (Embry-Riddle Aeronautical University), Troy Henderson (Embry-Riddle Aeronautical University) Presentation: Aiden McCollum - -
        In September 2024, the Polaris Dawn mission, a privately crewed spaceflight aboard a SpaceX Dragon capsule, launched into an elliptical Earth orbit with an apogee of 1,400 kilometers, marking the farthest distance traveled by a crewed spacecraft since NASA's Apollo missions. During their transit, the crew passed through regions of the Van Allen radiation belts, zones characterized by elevated levels of charged-particle radiation. In collaboration with SpaceX and the Polaris program, Embry-Riddle Aeronautical University's Space Technologies Lab integrated a payload, named LLAMAS, onto the interior wall of the Dragon capsule to capture footage of the Polaris Dawn crew throughout the mission. The LLAMAS payload is a small cuboid housing two Ximea camera sensors with Fujinon 186-degree lenses, recording footage of the commercial astronauts in a format that can be examined through virtual-reality goggles. After the completion of the mission, the LLAMAS payload was de-integrated from the Dragon capsule and returned to the Space Technologies Lab for post-flight examination and research. This paper details the measured effects of radiation on the LLAMAS camera sensors over the duration of the spaceflight and provides insight into how to correct for these effects when analyzing imagery captured in space. The effects of radiation on digital camera sensors can be assessed by analyzing the Bayer filter pattern of each video frame and correlating the frequency and density of hot pixels with the radiation measurements recorded inside the Dragon capsule. The resulting information can be used to correct for radiation effects on other deep-space missions heavily reliant on digital photographic sensors, such as planetary surface cameras or star trackers. Additional data from the mission will be provided in the final paper.
To accurately quantify the frequency and density of radiation strikes on LLAMAS' camera sensors, identified as hot pixels within the Bayer filter pattern, custom analysis software developed in Python and Rust was used to process thousands of video frames collected during the Polaris Dawn mission. The analysis tools were designed to filter out false-positive hot pixels, such as those caused by bright lights or reflections, while maximizing the accurate detection of true radiation-induced hot pixels. Rust was used to handle the computationally intensive mathematical and logical components of the analysis, while Python was used for image pre-processing and data aggregation. Once the imagery has been fully processed, the persistence, density, and quantity of hot pixels will be used in tandem with the radiation data collected within the Dragon capsule to characterize how the cameras were affected. This paper also presents research on on-board correction methods for radiation effects on photosensitive instruments in spaceflight environments, with direct applications to small spacecraft that rely on digital imaging sensors for autonomous navigation and location-finding.
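A persistence-based hot-pixel screen of the kind described above can be sketched as follows; the function name, threshold, and persistence fraction are hypothetical choices for illustration, not the LLAMAS team's actual Python/Rust pipeline.

```python
import numpy as np

def detect_hot_pixels(frames, threshold=200, persistence=0.9):
    """Flag pixels that are bright in a large fraction of frames.

    Radiation-induced hot pixels persist at a fixed sensor location across
    frames, while bright lights and reflections move or vary; requiring a
    pixel to exceed `threshold` in at least `persistence` of frames filters
    many false positives. Both parameter values here are illustrative.
    """
    frames = np.asarray(frames)   # shape (n_frames, H, W)
    above = frames > threshold    # per-frame bright mask
    frac = above.mean(axis=0)     # fraction of frames each pixel is bright
    return frac >= persistence    # boolean hot-pixel map

# Toy example: 10 dark 8x8 frames with one persistently saturated pixel.
frames = np.zeros((10, 8, 8), dtype=np.uint8)
frames[:, 3, 5] = 255   # persistent hot pixel at row 3, column 5
frames[4, 1, 1] = 255   # transient glint (appears in one frame only)
hot = detect_hot_pixels(frames)
print(hot[3, 5], hot[1, 1])  # → True False
```

Requiring persistence across frames is what separates radiation-induced hot pixels, which stay fixed on the sensor, from scene-driven brightness.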
      • 02.0806 Earth’s Magnetopause Positions in Proton Density-Velocity Space from 1963-2025
        Scott Carpenter (MarsB Collaboration and 7EDU Impact Academy), Remy Xie (MarsB Collaboration), Sophia Hu (7EDU Impact Academy ), Matthew Qi (UC Berkeley), William Sun (), Zehan He (), David Wu (), Elena Yu (MarsB Collaboration), Skyler Mao (), Siu Wa Yang () Presentation: Scott Carpenter - -
        In our prior work, we developed and successfully benchmarked the Portable Magnetopause (MP) Model against spacecraft-measured MP-position data for (1) magnetic fields and MPs at Earth and at the other solar-system dipole-bearing planets, and (2) interplanetary coronal mass ejection (ICME) buffeting of Earth's MP. Here, we use the Portable MP Model to plot scaled spacecraft measurements of solar-wind proton density (independent x-axis) and proton velocity (independent y-axis) to visualize time-duration MP standoff distances (dependent z-axis contours) for Earth. First, we describe the Portable MP Model, its derivation, uses, and advantages. The main significance of the Portable MP Model is that it accurately computes MP position for a variety of cases, including (i) planetary MPs in the 'far field,' such as those of Earth, Jupiter, Saturn, Uranus, and Neptune; and (ii) planetary MPs in the 'near field,' such as that of Mercury, proposed artificial MPs around Mars and other solar bodies, and MPs generated by current rings placed at Lagrange-1 co-orbits. Additionally, the Portable MP Model is a first-principles model rather than a statistical model, so its terms are human-understandable and interpretable. Second, we generate a contour space of spacecraft solar-wind proton density and velocity onto which historical ICMEs and the associated MP-standoff positions are plotted and visualized. Such visualization facilitates discussion of the effects of conjectured past and future super-ICMEs on Earth's MP-standoff positions. Third, the simplification of the model's math to basic algebra opens research into MP science to a wide audience: space-science and engineering professionals, math and science students from sixth grade to university graduate levels, and the industrious public.
    • James Kinnison (JHU-APL) & Yasin Abul-Huda (Johns Hopkins Univ Applied Physics Lab (JHU APL))
      • 02.0901 "SPACE LAW. What Will We Do with Space Debris? an Analysis Based on Selected Legal Examples"
        Mikołaj Mazan (War Studies University) Presentation: Mikołaj Mazan - -
        Currently, approximately 13,000 tons of material—including defunct satellites, rocket remnants, and various technical fragments—are orbiting the Earth. This accumulation, collectively termed space debris, represents a mounting challenge, not only technologically but also legally and politically. The growing density of orbital objects is increasingly present in public discourse and policy agendas, both national and international. Emerging legislative initiatives and visible lobbying efforts underscore the urgent need for regulatory codification in this area. Concurrently, technological advancements are unlocking new possibilities for the processing and removal of orbital debris, thereby necessitating suitable legal frameworks. In light of these developments, the legal order—both national and international—is undergoing a gradual transformation, with outer space emerging as a strategically significant domain in the construction of future regulatory architecture. Purpose: This study aims to analyze existing legislative approaches to space debris and to identify innovative legal responses emerging globally. The research focuses on jurisdictions that have either adopted targeted regulations, introduced draft laws, or amended existing frameworks to include orbital debris. The objective is to trace the trajectory of legal evolution in this field and assess the prospects for comprehensive regulatory codification. Methodology: A comparative legal analysis was conducted, drawing on selected normative acts from various legal systems. The study emphasizes provisions directly addressing space debris, while also reviewing draft legislation under consultation or subject to lobbying.
To enrich the analysis, the broader socio-political context was examined through governmental documents, official statements, and media discourse, offering insight into public perceptions and the impetus for regulatory action. Results: The analysis reveals diverse approaches to the legal treatment of space debris, reflecting regional, strategic, and geopolitical specificities. Notable differences emerge between, for instance, European and Asian perspectives. Both immediate and longer-term initiatives were identified, often attempting to match legal solutions with the rapid evolution of space technologies and operational practices. Conclusions: An increasing number of states are integrating space debris regulation into their legal orders. The heterogeneity of approaches reflects varying legal traditions and strategic priorities. The second space race is marked by the entrance of new actors—often states without prior space activity—offering novel legal and technological perspectives. Law, by its nature, must respond to changing conditions; yet in the context of space, the pace of innovation requires legal systems to exhibit exceptional flexibility and foresight. The management of space debris is thus emerging as both a regulatory challenge and an opportunity to redefine the role of law in a new, extraterrestrial frontier.
      • 02.0903 Epidemics in Orbit: Modeling Space Debris Dynamics to Alleviate the Orbital Traffic Jam
        Rachel Sholder (Johns Hopkins University Applied Physics Lab (JHU/APL)) Presentation: Rachel Sholder - -
        The accumulation of space debris in Earth’s orbit poses an escalating threat to satellite infrastructure, human spaceflight, and the viability of future space missions. As orbital congestion increases due to expanding satellite constellations and human activity in low Earth orbit (LEO), the risk of collision-induced fragmentation grows, potentially triggering cascading chain reactions such as the Kessler Syndrome. To address these dynamics, this study introduces the Space Junk Accumulation Model (JAM)—a novel adaptation of the classical compartmental SIR (Susceptible–Infectious–Removed) epidemiological model to capture the dynamic spread of space debris in LEO. In this formulation, Susceptible (S) objects are intact satellites not yet involved in collisions, Infectious (I) objects are debris-producing bodies or fragments that actively generate new collision risks, and Removed (R) objects represent satellites and debris naturally eliminated through orbital decay or atmospheric burn-up, as well as debris removed via active debris removal (ADR). The system of ordinary differential equations underlying Space JAM provides a tractable, deterministic framework for exploring debris proliferation and evaluating mitigation strategies, incorporating parameters such as launch rates, collision rates, orbital decay, and ADR effectiveness. While SIR models are fundamental in the mathematical modeling of infectious diseases and have been extended to other domains, there is no precedent for their application to space debris. This interdisciplinary contribution demonstrates how epidemic-style models can capture cascade thresholds in orbital environments, offer interpretable levers for policy and engineering intervention, and lay the groundwork for future extensions such as multi-layer SIR networks across orbital regimes. 
Ultimately, Space JAM provides a novel, accessible toolkit for anticipating debris proliferation, guiding remediation efforts, and informing policies that aim to alleviate the growing orbital traffic jam.
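The compartmental dynamics described above can be sketched as a small ODE system; the functional form of the collision term and every coefficient below are illustrative assumptions, not the actual Space JAM parameterization.

```python
from scipy.integrate import solve_ivp

def space_jam(t, y, launch=100.0, beta=2e-6, decay=0.02, adr=0.05):
    """SIR-style debris dynamics (form and coefficients are illustrative).

    S: intact satellites; I: debris-generating objects; R: removed objects.
    Collisions convert satellites into debris producers at rate beta*S*I;
    orbital decay and active debris removal (ADR) feed the removed class.
    """
    S, I, R = y
    dS = launch - beta * S * I - decay * S
    dI = beta * S * I - (decay + adr) * I
    dR = decay * (S + I) + adr * I
    return [dS, dI, dR]

# Propagate 100 years from a hypothetical initial population.
sol = solve_ivp(space_jam, (0.0, 100.0), [8000.0, 500.0, 0.0], max_step=1.0)
S, I, R = sol.y[:, -1]
print(f"after 100 yr: S={S:.0f}, I={I:.0f}, R={R:.0f}")
```

In this form the cascade threshold appears as the condition beta*S > decay + adr, the orbital analogue of an epidemic's basic reproduction number exceeding one.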
      • 02.0904 Enhancing Orbital Debris Remediation with Reconfigurable Space-Based Laser Constellations
        David Williams Rogers (West Virginia University), Hang Woon Lee (West Virginia University) Presentation: David Williams Rogers - -
        Orbital debris poses an escalating threat to space missions and the long-term sustainability of Earth's orbital environment. Recent literature proposes various approaches for orbital debris remediation, including the use of multiple space-based lasers that collaboratively engage debris targets. While the proof of concept for this laser-based approach has been demonstrated, critical questions remain about its scalability and responsiveness as the debris population continues to expand rapidly. This paper introduces constellation reconfiguration as a system-level strategy to address these limitations. Through coordinated orbital maneuvers, laser-equipped satellites can dynamically adapt their positions to respond to evolving debris distributions and time-critical events. We formalize this concept as the Reconfigurable Laser-to-Debris Engagement Scheduling Problem (R-L2D-ESP), an optimization framework that determines the optimal sequence of constellation reconfigurations and laser engagements to maximize debris remediation capacity, which quantifies the constellation's ability to nudge, deorbit, or perform just-in-time collision avoidance maneuvers on debris objects. To manage the computational complexity of this combinatorial optimization problem, we employ a receding horizon approach. Our computational experiments reveal that reconfigurable constellations significantly outperform static configurations, achieving greater debris remediation capacity and successfully deorbiting substantially more debris objects. Additionally, our sensitivity analyses identify the key parameters that most strongly influence remediation performance, providing essential insights for future system design. These findings demonstrate that constellation reconfiguration represents a promising advancement for laser-based debris removal systems, offering the adaptability and scalability necessary to enhance this particular approach to orbital debris remediation.
      • 02.0908 Development and Validation Status of High-fidelity Re-entry Analysis Tool LS-DARC
        Keiichiro Fujimoto (Japan Aerospace Exploration Agency), Shinjiro Tsuji (Japan Aerospace Exploration Agency), Shuto Yatsuyanagi (Japan Aerospace Exploration Agency), Takanobu Kamiya (Japan Aerospace Exploration Agency) Presentation: Keiichiro Fujimoto - -
        Space development has been diversifying in recent years, with efforts underway to enhance rocket development and operational systems to accommodate a wide range of missions and frequent launches. In addition, the expansion of satellite constellations for communication and Earth-observation services, as well as growing activity in lunar and deep-space exploration, is becoming more prominent. On the other hand, the increasing amount of orbital debris has become a serious issue, raising the importance of safety assessments for rocket upper stages and spacecraft during post-mission atmospheric re-entry. To minimize ground risk, highly accurate predictions of destructive re-entry behavior and detailed analyses of thermal and structural destruction are essential. In this study, continuous research efforts have been made to understand the complicated physical mechanisms through high-fidelity predictions of the destruction process and to minimize public-safety risks through design-for-demise. A high-fidelity physics-based model named LS-DARC (Destructive Atmospheric Re-entry Code) has been developed, which enables detailed analysis of the melting and destruction of complicated, realistic debris geometries. This report presents the current development status of LS-DARC; the strategies and overall framework for efficient and appropriate uncertainty quantification; and the validation status of the physical models through comparisons with actual flight data and high-enthalpy wind-tunnel test data. Furthermore, parametric analyses of representative space debris have been conducted and their results discussed, in order to confirm the importance of high-accuracy predictions and to evaluate the impact of uncertainties in initial conditions and in the accuracy of the heat-flux and aerodynamics models on the analysis results.
Finally, remaining research challenges and future prospects toward the practical application of LS-DARC as a safety assessment tool are discussed.
      • 02.0913 Uncertainty in Orbit: Bayesian Spatiotemporal Modeling of Orbital Debris Proliferation
        Rachel Sholder (Johns Hopkins University Applied Physics Lab (JHU/APL)) Presentation: Rachel Sholder - -
        The accumulation of orbital debris threatens the resilience of satellite infrastructure, human spaceflight, and the sustainability of future missions. Existing environment models provide valuable deterministic forecasts but often lack explicit treatment of uncertainty and spatial heterogeneity—factors critical for risk-informed decision making. To address this gap, this study introduces a Bayesian spatiotemporal framework for orbital debris forecasting using the Integrated Nested Laplace Approximation (INLA). INLA enables efficient inference for latent Gaussian models, providing a scalable alternative to simulation-based Bayesian methods while yielding full posterior distributions. INLA is particularly well-suited to problems like space debris modeling, where data may be sparse, uncertain, or unevenly distributed in space and time. By discretizing orbital space (e.g., altitude, inclination, RAAN) into a spatial lattice and modeling temporal dynamics as random-walk components, INLA captures heterogeneity while correcting for observational gaps and biases. The framework also integrates heterogeneous data sources—including satellite catalogs (TLEs), fragmentation event records, and environment model outputs. INLA’s probabilistic forecasting of debris accumulation and collision risk yields uncertainty-aware projections critical for decision-making. While INLA has been successfully applied in epidemiology and ecology, this work represents its first application to orbital debris. By quantifying uncertainty, identifying risk hot spots, and producing exceedance probabilities, INLA establishes a novel probabilistic toolkit to support mitigation planning, active debris removal prioritization, and the sustainability of near-Earth space.
      • 02.0914 A Labeled Multi-Bernoulli Filter with RK4 Propagator for Orbital Debris Tracking
        Atri Bhattacharjee () Presentation: Atri Bhattacharjee - -
        The proliferation of space debris, particularly in the lethal non-trackable (LNT) 1-10 cm diameter range, poses a critical and growing threat to the sustainability of space operations. Ground-based tracking is fundamentally limited by atmospheric interference, making a space-based solution imperative. An autonomous, space-based sensor constellation is a promising solution, but it requires a robust estimation engine capable of maintaining an accurate catalog of numerous, closely spaced objects from sparse and noisy data. This paper describes the design, implementation, and validation of a high-fidelity Labeled Multi-Bernoulli (LMB) filter built specifically for the orbital dynamics of space debris. We integrate a fourth-order Runge-Kutta (RK4) method as the core orbital propagator within the filter's predict-update cycle to ensure high-precision state estimation. The filter's performance is rigorously evaluated in a simulated environment using a ground-truth database generated by the high-fidelity MASTER debris model. We demonstrate through Monte Carlo simulations that the filter successfully converges from a state of high initial uncertainty and maintains a stable, low-error track. Performance is quantified using the Optimal Sub-Pattern Assignment (OSPA) distance, which shows a rapid decrease to a low steady-state value, validating the filter's accuracy and robustness. This validated filter serves as the foundational estimation layer for a proposed end-to-end autonomous sensor-tasking framework that utilizes a Graph Neural Network. While the full training of the decision-making agent was beyond the scope of this work, the successful development of this orbital LMB filter is a critical and enabling first step.
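The RK4 propagation step at the core of the filter's predict cycle can be sketched for point-mass two-body dynamics; the force model, orbit, and step size below are minimal assumptions for illustration and omit the perturbations a debris-tracking filter would carry.

```python
import numpy as np

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def two_body(state):
    """Two-body derivative: state = [x, y, z, vx, vy, vz] in m and m/s."""
    r, v = state[:3], state[3:]
    a = -MU * r / np.linalg.norm(r) ** 3
    return np.concatenate([v, a])

def rk4_step(state, dt):
    """Single classical fourth-order Runge-Kutta step of the orbit state."""
    k1 = two_body(state)
    k2 = two_body(state + 0.5 * dt * k1)
    k3 = two_body(state + 0.5 * dt * k2)
    k4 = two_body(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Propagate a circular 700 km LEO orbit for one period and check closure.
r0 = 6378.137e3 + 700e3
v0 = np.sqrt(MU / r0)
state = np.array([r0, 0.0, 0.0, 0.0, v0, 0.0])
period = 2 * np.pi * np.sqrt(r0 ** 3 / MU)
dt = period / 1000.0
for _ in range(1000):
    state = rk4_step(state, dt)
err = np.linalg.norm(state[:3] - np.array([r0, 0.0, 0.0]))
print(f"position error after one orbit: {err:.3f} m")
```

Inside the LMB filter, a step like `rk4_step` would propagate each Bernoulli component's state estimate between measurement updates.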
    • Paul Chodas (Jet Propulsion Laboratory) & Michael Werth (The Boeing Company) & Jeffery Webster (NASA / Caltech / Jet Propulsion Laboratory)
      • 02.1001 Challenges in Finding Large Near-Earth Objects
        Richard Wainscoat (University of Hawaii) Presentation: Richard Wainscoat - -
        Near-Earth Object surveys such as Pan-STARRS, Catalina, and ATLAS have been searching the sky for Near-Earth Objects for many years. Despite these efforts, the inventory of large (>1 km) NEOs is not complete, and the inventory of NEOs with diameter 140 meters or larger is only approximately 43% complete. Although impacts by objects with diameter >100 meters are rare, impacts by larger objects can cause devastation and loss of life over large areas. Impact energy is proportional to the mass of the NEO and therefore scales with the cube of the diameter. New facilities such as the Rubin Observatory, and later the NEO Surveyor mission, will improve the discovery rate of large NEOs, but some NEOs will remain difficult to discover. A wide variety of reasons why discovering some NEOs can be challenging will be discussed. These include long orbital periods, periods close to an integer number of years with repeating unfavorable apparitions, weather, the Galactic Plane, and small aphelion distances. Recent discoveries of larger NEOs will be used to highlight some of these factors.
      • 02.1003 Exploring Rapid-Response Flyby Recon Mission Architectures Enabling Asteroid Mass Estimation
        Evan Smith (Johns Hopkins University/Applied Physics Laboratory), Justin Atchison (Johns Hopkins University Applied Physics Lab), Pegah Pashai (Johns Hopkins University/Applied Physics Laboratory), Collette Gillaspie (Texas A&M University), Taejoo Lee (Johns Hopkins University/Applied Physics Laboratory), Matthew Rodriguez (Johns Hopkins University/Applied Physics Laboratory), Rachel Sholder (Johns Hopkins University Applied Physics Lab (JHU/APL)), Joseph Linden (JHU/APL), Elisabeth Abel (Johns Hopkins University/Applied Physics Laboratory), Rylie Bull ( Johns Hopkins University/Applied Physics Laboratory), Michelle Chen (Johns Hopkins University Applied Physics Laboratory), Ronald Daly (Johns Hopkins APL), Thomas Ruekgauer (Johns Hopkins Applied Physics Laboratory), Connor Thompson (Johns Hopkins University/Applied Physics Laboratory) Presentation: Evan Smith - -
        Flyby missions are the fastest, cheapest, and sometimes the only means of obtaining critical measurements for Planetary Defense (PD) scenarios. Past missions, primarily focused on asteroid science, have measured the orbit, binarity, shape, and rotation of asteroids. Mass, one of the most important asteroid characteristics for PD scenarios, has unfortunately proven much more difficult to measure via flyby, being possible only for certain combinations of large targets and slow flybys. An adaptation of relative tracking, a measurement technique previously used to map the gravitational fields of the Earth and Moon, has the potential to substantially increase flyby mass-measurement sensitivity. A relative tracking scheme adapted for a small-body flyby is envisioned to consist of two spacecraft conducting simultaneous flybys of a target. If at least one of the spacecraft passes very close to the asteroid, the change in its motion induced by the small body's gravity can be observed through relative measurements of the other spacecraft. This paper begins by establishing high-level mission objectives for a rapid reconnaissance architecture that includes a mass estimate. High-level trades pertaining to the flight-system architecture, relative-tracking payload selection, and imaging payload configuration are then considered. In conclusion, a concept of operations and a flight system capable of measuring the most critical hazardous-asteroid characteristics, including mass, are presented.
      • 02.1006 Using Impact Flash to Initiate Nuclear Explosive Devices in Planetary Defense Missions
        Mark Boslough (University of New Mexico), Russell TerBeek (Sandia National Laboratories) Presentation: Mark Boslough - -
        The problem of asteroid impact mitigation has many solutions, and the optimal answer depends on circumstances. For some missions, a kinetic impact or gentle continuous force (e.g. ion beam or gravity tractor) will suffice. However, if the warning time is too short, or if the asteroid is too large, then only a nuclear detonation will impart the required change in velocity to safely steer the asteroid away from Earth or robustly disrupt it in a way that minimizes the probability of impact by dangerous fragments. In this paper, we propose a novel method of reliably fuzing the nuclear explosive device (NED) to prepare it for detonation. This relies upon the phenomenon of impact flash: when an impactor hits a target at high speeds (often exceeding 2 km/s), it produces a high-intensity pulse of light that can be detected at considerable distance. Many laboratory experiments have been performed that characterize the emissions produced as the impact generates high-temperature ejecta and shock waves traveling through both the impactor and target [e.g. Boslough (1985); Ang (1990)]. This body of work establishes the credibility of using an impact flash trigger in a real-world scenario. In practice, the impactor would be a separate spacecraft from the NED module. We can use lidar transmitters and receivers on the impact module, which can be controlled with small thrusters to ensure that it stays a nominal distance ahead of the NED and that it remains at rest in its reference frame. As an example, if the closing speed of the asteroid is roughly 10 km/s, and if the two craft are 100 meters apart, then the impact flash would be detected by the NED 10 ms before it would strike the target. This would provide sufficient buffer time for the NED to count down and detonate.
The brightness and rapid rise time of an impact flash would be difficult to mistake for any other source, and its magnitude can be estimated from theory and tests in order to calibrate the NED’s receiver. The advantage of this approach over other systems is that the high impact speeds are guaranteed to generate an impact flash between the impactor and the target. This could be crucial if there is a debris field surrounding the asteroid. It also does not require passive information, such as sunlight collected by optical cameras. Furthermore, this approach would allow the height of burst to be effectively prescribed in situ. If the asteroid turns out to be larger than predicted, or if a reconnaissance mission cannot be flown, then this mission type could function as its own reconnaissance. The optimal height of burst could be achieved by the impactor craft’s thrusters putting it a little closer to, or a little farther from, the NED, or by resetting the countdown clock interval to prescribe the time between impact and detonation.
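The timing margin quoted in the abstract follows directly from the separation and closing speed; a minimal sketch:

```python
def flash_buffer_time(separation_m, closing_speed_m_s):
    """Time between detection of the impactor's flash and the trailing
    NED reaching the target, assuming the NED holds a fixed separation
    behind the impactor along the closing velocity vector."""
    return separation_m / closing_speed_m_s

# Numbers from the abstract: 100 m separation at a 10 km/s closing
# speed leaves a 10 ms countdown window for the NED.
t_buffer = flash_buffer_time(100.0, 10.0e3)
```

Adjusting either the separation (via the impactor's thrusters) or the countdown interval changes the effective height of burst, as described above.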
      • 02.1008 Modeling Atmospheric Breakup and Burst of Meteoroids Using High-Energy Equation of State Frameworks
        Raul Gutiérrez-Zalapa (Universidad Nacional Autónoma de México), Mario Rodriguez-Martinez (Universidad Nacional Autónoma de México) Presentation: Raul Gutiérrez-Zalapa - -
        The atmospheric entry of meteoroids and asteroids poses a critical challenge for planetary defense, mission planning, and high-fidelity risk assessment. In particular, the breakup and burst phenomena of these high-velocity bodies are inherently nonlinear, involving complex material response under extreme pressure, temperature, and strain-rate conditions. Traditional continuum models often lack the ability to represent the rapid thermodynamic transitions during atmospheric fragmentation. In the present study, we explore the application of high-energy and thermal-explosion equations of state (EOS), specifically the Jones–Wilkins–Lee (JWL) and Tillotson models, for the numerical simulation of meteoroid fragmentation and airburst phenomena during atmospheric entry. We focus on the application of the Tillotson EOS, originally developed for hypervelocity impact simulations, to model the mechanical and thermodynamic evolution of a meteoroid composed of silicate material (e.g., basalt). The Tillotson formulation accommodates a smooth transition between compressed solid, partially vaporized, and fully vaporized states, making it well-suited for capturing the physics of shock-induced breakup and thermal burst events. The simulation framework is implemented in a two-dimensional axisymmetric domain with spatially varying energy deposition to mimic aerodynamic heating and shock focusing. We first evaluate a controlled atmospheric entry scenario in which a meteoroid of 10 m diameter experiences increasing thermal energy concentrated at its leading hemisphere. The local energy increases induce internal pressures that are computed using the Tillotson EOS. A rupture threshold is defined at 1 GPa to determine the onset of mechanical fragmentation. Subsequently, we introduce a localized "burst" phase, modeled as a transient injection of internal energy in the core, simulating either shock focusing or structural failure followed by an explosive expansion of gas and material. 
The results demonstrate that the Tillotson EOS successfully captures key features of atmospheric breakup, including the development of internal pressure gradients, progressive fragmentation zones, and the amplification of thermodynamic variables during burst. We further compare the pressure fields before and after burst, highlighting how the EOS-based model transitions from quasi-static compression to explosive expansion. Visualizations of 2D pressure and energy fields reveal the spatial localization of damage and correlate well with analytical expectations of burst radius and energy scaling. In the context of planetary defense and mission safety, this approach enables physics-based prediction of fragment dispersion, overpressure zones, and energy deposition altitudes. Unlike empirical models or simplified hydrodynamics, EOS-based methods offer a pathway toward unified simulation environments where mechanical integrity, thermochemical response, and fragmentation can be consistently described. Finally, we discuss potential extensions of the model, including coupling with damage mechanics, adaptive mesh refinement for real-time breakup tracking, and the use of JWL EOS for modeling detonation-driven secondary explosions in engineered deflection scenarios. The use of high-fidelity EOS models in aerospace simulations represents a critical step forward in understanding and mitigating the risk of high-energy atmospheric entry events.
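For readers unfamiliar with the Tillotson form, a minimal sketch of its compressed-state branch follows; the default constants are representative basalt values from the impact literature and are not necessarily those used by the authors.

```python
def tillotson_compressed_pressure(rho, E, rho0=2700.0, A=2.67e10,
                                  B=2.67e10, a=0.5, b=1.5, E0=4.87e8):
    """Tillotson EOS, compressed (rho >= rho0) branch:
    P = [a + b / (E/(E0*eta^2) + 1)] * rho * E + A*mu + B*mu^2,
    with eta = rho/rho0 and mu = eta - 1. Defaults are representative
    basalt constants in SI units."""
    eta = rho / rho0
    mu = eta - 1.0
    return (a + b / (E / (E0 * eta * eta) + 1.0)) * rho * E \
        + A * mu + B * mu * mu

# A 10% compression with modest specific internal energy already
# exceeds the 1 GPa rupture threshold used in the study.
P = tillotson_compressed_pressure(rho=2970.0, E=1.0e5)
```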
      • 02.1009 Review, Design, and Comparative Analysis of Microspine Gripper Technologies for Asteroid Attachment
        Michael Akers (Clarkson University), Megan Michaud (), Michael C. F. Bazzocchi (York University) Presentation: Michael Akers - -
        Numerous methods for lander attachment to the surface of an asteroid have been proposed and developed, each of which has unique advantages and disadvantages. Microspine gripper technology is a promising solution as a nondestructive attachment option capable of comparatively high reaction forces on most asteroid surfaces. However, most current microspine designs fail under substantial compressive and torsional loads, which constrains their use in asteroid detumbling or redirection procedures. For example, detumbling an asteroid requires the landers to generate considerable torques—resulting in torsional reaction forces on the landers—whereas, for asteroid redirection, landed thrusters have been proposed to generate a change in asteroid trajectory—leading to significant compressive forces. Under torsional loads, the lander rotates, and the spines on the side of the lander in compression fail. Further, the microspines on the side in tension are overexerted and either rupture or slip out of their asperities. This paper reviews microspine gripper technology, investigates different design aspects, analyzes alternative designs, and introduces a novel configuration that may increase the engagement of microspines under torsional and compressive load conditions, furthering microspine capabilities, especially towards asteroid detumbling or redirection. A total of six unique alternative designs were compared using six key criteria for successful asteroid exploration missions. The alternative designs were developed to vary in their leg and truss configurations, carriage placements, and microspine organization. The comparison criteria were mass, volume, cost, torsional force, tensile force, and the power required to engage the microspines on the asperities. Design attributes were assigned to each alternative using various metrics for each criterion, derived through geometric analysis, ordinal ranking, force-moment analysis, and power considerations.
A comparative analysis was performed using the Analytic Hierarchy Process, and the results were normalized to find the best alternative design. A consistency index was computed to ensure the reliability of the multi-criteria decision-making process. The final selected design was a multi-phalanx configuration with radially symmetric leg sets around the lander. Each leg set consisted of a main leg and carriage as well as a secondary, slightly smaller, telescoping leg in the reverse orientation relative to the main leg and carriage. This multi-phalanx design was selected primarily because of the low mass and volume and the potentially high strength when subjected to torsional and compressive loads. The opposing carriages of each leg can create individual reaction forces. By increasing the tension in the second leg, the component forces horizontal to the surface of the asteroid of each carriage can be increased, thus lowering the angle of the resultant force.
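The Analytic Hierarchy Process step can be sketched compactly. The pairwise comparison matrix below is a made-up 3x3 example, not the paper's six-criteria matrix, and the geometric-mean weighting is one common AHP approximation.

```python
from math import prod

def ahp_weights(M):
    """Approximate AHP priority weights from a pairwise comparison
    matrix via the row geometric mean, normalized to sum to 1."""
    n = len(M)
    gm = [prod(row) ** (1.0 / n) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(M, w):
    """Saaty consistency ratio: CI = (lambda_max - n) / (n - 1),
    divided by the tabulated random index RI for size-n matrices."""
    n = len(M)
    Mw = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Mw[i] / w[i] for i in range(n)) / n  # lambda_max estimate
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]
    return ci / ri

M = [[1.0, 3.0, 5.0],
     [1 / 3.0, 1.0, 3.0],
     [1 / 5.0, 1 / 3.0, 1.0]]
w = ahp_weights(M)
cr = consistency_ratio(M, w)  # below 0.1 -> acceptably consistent
```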
    • David Sternberg (NASA Jet Propulsion Laboratory) & Kenneth Cheung (NASA - Ames Research Center)
      • 02.1101 Learning Safety-Guaranteed, Non-Greedy Control Barrier Functions Using Reinforcement Learning
        Minduli Wijayatunga (University of Illinois at Urbana-Champaign), Roberto Armellin () Presentation: Minduli Wijayatunga - -
        This work introduces a reinforcement learning (RL) framework to learn a non-greedy control barrier function (CBF) for safe and fuel-efficient control. Traditional CBF approaches often take a greedy stance, prioritizing immediate satisfaction of safety constraints without considering longer-term efficiency. In contrast, our method learns a CBF that minimizes total thrust consumption and/or other criteria while ensuring that safety and invariance conditions are met over the trajectory. The method works as follows: the RL agent outputs a candidate CBF value, and the gradient of this CBF with respect to the state is obtained via automatic differentiation. The thrust input is then computed by solving a quadratic program (QP) with both Control Lyapunov Function (CLF) and CBF constraints. This thrust is held constant under a zero-order hold policy, and the system is forward propagated accordingly. The reward function penalizes violations of the original safety constraints after propagation, as well as unnecessary fuel usage. We also observed that by including the propagated state, the Lie derivatives of the learned CBF, the distance to the target (when a CLF is used), and the value of the original safety function in the RL observation space, we obtain an RL agent that accounts for future behavior, enabling long-term planning and more efficient thrust use. We evaluate our approach on two benchmark problems from Panageou et al. (2021): (i) a simple cruise control task and (ii) a spacecraft rendezvous scenario involving docking. Our method consistently demonstrates lower fuel consumption compared to the baseline greedy-CBF approach, validating the effectiveness of learning a non-greedy CBF while retaining safety.
Additionally, we apply our framework to an inspection task, where the goal is to learn a CBF that maximizes an inspection metric that accounts for lighting and distance conditions while maintaining safety by respecting keep-in and keep-out zone constraints. This demonstrates the flexibility of our framework to adapt to task-specific objectives beyond reaching a fixed target. Our results highlight the potential of reinforcement learning as a tool not only for autonomous safe control but also for embedding long-term cost-awareness directly into the structure of the control barrier itself.
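The QP step in this pipeline reduces to a closed-form projection when only a single CBF constraint is active. The sketch below covers that simplifying special case only (the paper's QP also carries CLF constraints); `a` and `b` are stand-ins for the linearized constraint terms.

```python
def cbf_qp_single_constraint(u_nom, a, b):
    """Solve min ||u - u_nom||^2 subject to a . u >= b, the
    one-constraint special case of the CBF quadratic program
    (a plays the role of the CBF gradient term Lg_h; b collects
    the drift and class-K terms -alpha*h - Lf_h)."""
    a_dot_u = sum(ai * ui for ai, ui in zip(a, u_nom))
    slack = b - a_dot_u
    if slack <= 0.0:
        return list(u_nom)  # nominal input already satisfies safety
    # Minimum-norm correction along the constraint normal.
    scale = slack / sum(ai * ai for ai in a)
    return [ui + scale * ai for ui, ai in zip(u_nom, a)]

# Nominal thrust [1, 0] violates a . u >= 0.5 with a = [0, 1]; the
# projection adds the smallest thrust correction along a.
u_safe = cbf_qp_single_constraint([1.0, 0.0], [0.0, 1.0], 0.5)
```

A greedy CBF filter applies this projection at every step; the paper's contribution is shaping `h` itself (via RL) so that repeated projections stay fuel-efficient over the whole trajectory.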
      • 02.1104 A General Purpose Method for Autonomous Interception of Non-Cooperative Moving Targets
        Tanmay Patel (University of Toronto), Erica Tevere (Jet Propulsion Laboratory) Presentation: Tanmay Patel - -
        This paper reports on our mathematical framework for an autonomous, vision-based interception algorithm for non-cooperative targets, its implementation on three separate types of mobility platforms, and experimental results from field testing. The three types of mobility platforms include an unmanned aerial vehicle, a four-wheeled skid-steered ground vehicle, and a 700-pound spacecraft simulator that moves on a flat floor on air bearings using eight thrusters. We use only a monocular camera stream with fiducials for target tracking. The same approach is validated on the three types of platforms to understand generalizability by addressing platform-specific idiosyncrasies and dynamics. Our algorithmic approach is based on (1) a nonlinear Extended Kalman Filter to estimate the relative target pose amid intermittent measurements, (2) an uncertainty-aware motion predictor that propagates a plausible target trajectory conditioned on its history, and (3) a receding-horizon planner that solves a constrained sequential convex program in real-time to guarantee a kinematically feasible and time-efficient interception path. It makes minimal assumptions about the environment, operates entirely in the local observer frame without global information, and relies on a single sensing modality—a monocular camera stream. The operating regime also assumes limited observability due to partial fields of view, sensor dropouts, and target occlusions. For target detection, we deploy a classical approach for nominal cases and a learning-based approach to address off-nominal image quality. We incorporate platform-specific details such as degrees of freedom and vehicle dynamics through modular interfaces. Ground rover experiments involve (1) two rovers rendezvousing and stopping at a prescribed distance and orientation and (2) leader-follower scenarios with a passive follower rover maintaining a prescribed relative pose offset from its leader.
The unmanned aerial vehicle experiment demonstrates a single UAV identifying a dynamic target fiducial and landing on it. The experiments with the spacecraft simulators involve two such bodies using thrusters for dynamic rendezvous and station keeping. We evaluate and compare interception pose errors, success rates under different target motion profiles, and computational latency on multiple embedded processors (an Nvidia Jetson Orin, a ModalAI VOXL2, and a Raspberry Pi 5). In each scenario the target’s motion is dynamic and non-cooperative, exhibiting stochastic behavior. Results show excellent performance, for example, with the ground vehicles demonstrating 3-5 centimeter errors over tens of meters of relative travel on natural terrain. The paper will also discuss the impacts of environmental factors and steps taken to mitigate these issues.
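The filtering idea behind item (1), tolerating intermittent measurements, can be illustrated with a one-dimensional constant-velocity Kalman filter. The paper uses a full nonlinear EKF on relative pose; this scalar sketch and its noise values are illustrative assumptions only.

```python
def kf_predict(x, P, dt, q=1e-3):
    """Constant-velocity predict: state x = [position, velocity],
    F = [[1, dt], [0, 1]], process noise q added on the diagonal."""
    p00, p01, p10, p11 = P[0][0], P[0][1], P[1][0], P[1][1]
    x = [x[0] + dt * x[1], x[1]]
    P = [[p00 + dt * (p01 + p10) + dt * dt * p11 + q, p01 + dt * p11],
         [p10 + dt * p11, p11 + q]]
    return x, P

def kf_update(x, P, z, r=1e-2):
    """Position-only measurement update (H = [1, 0]); skipped entirely
    on frames where the fiducial is occluded or the sensor drops out."""
    s = P[0][0] + r
    k0, k1 = P[0][0] / s, P[1][0] / s
    y = z - x[0]
    x = [x[0] + k0 * y, x[1] + k1 * y]
    P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
         [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x, P

# Track a target moving at 1 m/s; every fifth measurement is dropped,
# and the filter coasts on its prediction through the gaps.
x, P, dt = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], 0.1
for k in range(1, 201):
    x, P = kf_predict(x, P, dt)
    if k % 5 != 0:                        # simulate intermittent dropouts
        x, P = kf_update(x, P, z=k * dt)  # noiseless position, for clarity
```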
      • 02.1107 Reinforcement Learning for Closed-loop Whole-Body Control of a Debris Capture Satellite
        Vighnesh Vatsal (Tata Consultancy Services), Nijil George (Tata Consultancy Services), Rolif Lima (Tata Consultancy Services), Kaushik Das (TCS) Presentation: Vighnesh Vatsal - -
        The proliferation of space debris has become a matter of grave concern for new and existing orbiting assets, as it can lead to collisions and losses. One approach for mitigating the impact of space debris involves satellites with robotic manipulator arms mounted on them. Capturing debris with such a system involves entering a rendezvous orbit with a previously identified debris object, using a vision-based grasp planner to find an approach trajectory for the robotic arm, and finally controlling the attitude of the satellite along with the joints of the robotic arm and gripper to execute a successful grasp. Building upon our previous work on grasp planning for satellite-mounted manipulators, we propose a whole-body control strategy, applied after the orbit rendezvous stage, that uses reinforcement learning (RL) to control the combined satellite and robotic arm system. In order to execute debris capture in a closed-loop manner, we employ a lightweight vision-based grasp planner that allows for real-time visual servoing, as well as tactile sensing on the robotic arm’s end-effector to prevent the target object from slipping. The reinforcement learning model, once trained, would be able to operate with lower computation costs than classical approaches such as model predictive control (MPC). We demonstrate the efficacy of this closed-loop RL-based controller for debris capture in a high-fidelity physics simulation environment and compare its performance with classical controllers.
      • 02.1114 Grasp Optimization for Space Manipulation Using Multiple Underactuated Spacecraft
        M. Reza Emami (University of Toronto), Jun Yang Li (University of Toronto) Presentation: M. Reza Emami - -
        Multiple underactuated spacecraft agents can rigidly attach to a large object and cooperatively use their actuators, e.g., thrusters and reaction wheels, for tasks such as in-space assembly. This process, dubbed fractionated manipulation, allows for the use of cost-effective miniaturized spacecraft, offering large workspaces and enhanced robustness compared to a single space manipulator. For a manipulation task, it is desirable that the agents attach to the object in an optimal grasp configuration, i.e., the location and attitude of the agents relative to the object. Specifically, the agents are positioned and oriented such that the actuators are able to collectively produce the net wrench, i.e., force and moment, required to drive the object along a desired trajectory, while minimizing the expended energy. Given a large number of candidate attachment poses, i.e., positions and orientations, the evaluation of grasp configurations by brute force is prohibitively expensive to compute. This paper provides a method to geometrically generate the optimal grasp configuration for a specific desired trajectory, which is also suitable for general-purpose space manipulations. The thruster efficiency is prioritized over the efficiency of reaction wheels, due to the scarcity of fuel in space; thus the grasp configuration is optimized as a function of thrust vectors. The net thrust required for the desired trajectory is modelled as a point-cloud in three-dimensional space. The point-cloud must lie within the set of all attainable net forces, represented by a zonotope whose generators are the thrust vectors. Semi-nonnegative matrix factorization is used to obtain the generators of the minimal zonotope. Additional generators, if available, are used to increase the volume of the zonotope, ensuring that the system is under force closure, while improving the system's robustness and maneuverability. Once the zonotope is generated, thrust directions of the agents are obtained.
Next, the attachment locations are determined. Naturally, thrusters should point away from the object to avoid plume impingement. Further, it is desirable that the thrust vectors go through the center of mass of the object, so that the agents avoid applying moments inadvertently. Hence, for each zonotope generator vector, an outward search is conducted from the point where the generator vector is incident upon the surface of the object until an attachment position is found. The position is such that the local geometry allows for the attachment at the desired orientation without obstruction. The proposed method is verified through simulations. A case study on orbit and attitude manipulation of a noncooperative object is performed, using underactuated CubeSats, each equipped with a single unidirectional thruster and three reaction wheels. The desired trajectory and required net wrench profile are predetermined. The proposed method, based on zonotope and semi-nonnegative matrix factorization, is used to obtain the optimal grasp configuration, which is compared against the results obtained using a combinatorial method with sparse candidate grasp configurations.
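The condition that thrust lines pass through the object's center of mass can be checked with a simple wrench summation; the attachment geometry below is a toy example, not the paper's case study.

```python
def cross(a, b):
    """Cross product of 3-vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def net_wrench(attachments):
    """Sum the force and moment (about the object's COM at the origin)
    produced by agents attached at positions r exerting thrusts f."""
    F, M = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
    for r, f in attachments:
        m = cross(r, f)
        F = [Fi + fi for Fi, fi in zip(F, f)]
        M = [Mi + mi for Mi, mi in zip(M, m)]
    return F, M

# Two agents whose thrust lines pass through the COM: each f is
# antiparallel to its attachment vector r, so r x f = 0 and no
# inadvertent moment is applied to the object.
F, M = net_wrench([([1.0, 0.0, 0.0], [-2.0, 0.0, 0.0]),
                   ([0.0, 1.0, 0.0], [0.0, -1.0, 0.0])])
```

An attachment whose thrust line misses the COM would show up here as a nonzero residual moment, which the outward search described above is designed to avoid.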
      • 02.1115 Agile Tradespace Exploration for Space Rendezvous Mission Design via Transformers
        Yuji Takubo (Stanford University), Daniele Gammelli (Stanford University), Marco Pavone (Stanford University), Simone D'Amico (Stanford University) Presentation: Yuji Takubo - -
        Spacecraft rendezvous enables on-orbit servicing, debris removal, and crewed docking, forming the foundation for a scalable space economy. Designing such missions requires rapid exploration of the tradespace between control cost and flight time across multiple candidate targets. However, multi-objective optimization in this setting is challenging, as the underlying constraints are often highly nonconvex, and mission designers must balance accuracy (e.g., solving the full problem) with efficiency (e.g., convex relaxations), slowing iteration and limiting design agility. To address these challenges, this paper proposes an AI-powered framework that enables agile mission design for a wide range of Earth orbit rendezvous scenarios. Given the orbital information of the target spacecraft, boundary conditions, and a range of flight times, this work proposes a Transformer-based architecture that generates, in a single parallelized inference step, a set of near-Pareto optimal trajectories across varying flight times, thereby enabling rapid mission trade studies. The model is further extended to accommodate variable flight times and perturbed orbital dynamics, supporting realistic multi-objective trade-offs. Validation on chance-constrained rendezvous problems with passive safety constraints demonstrates that the model generalizes across both flight times and dynamics, consistently providing high-quality initial guesses that converge to superior solutions in fewer iterations. Moreover, the framework efficiently approximates the Pareto front, achieving runtimes comparable to convex relaxation by exploiting parallelized inference. Together, these results position the proposed framework as a practical surrogate for nonconvex trajectory generation and mark an important step toward AI-driven trajectory design for accelerating preliminary mission planning in real-world rendezvous applications.
  • Glenn Hopkins (Georgia Tech Research Institute) & James Hoffman (Kinemetrics)
    • Glenn Hopkins (Georgia Tech Research Institute)
      • 03.0101 Mechanical Design of GTRI's X-band Polarization-diverse AESA Testbed (XPAT)
        Maxwell Tannenbaum (Georgia Tech Research Institute) Presentation: Maxwell Tannenbaum - -
        The Department of Defense (DoD) is increasingly in need of advanced radar and electronic warfare (EW) systems, which require a software-defined radio frequency (RF) aperture that can be reconfigured to meet a variety of system requirements and mission sets. This paper details the mechanical design approach for a fully integrated, novel, compact X-band polarization-diverse active electronically scanned array (AESA) developed by the Georgia Tech Research Institute (GTRI). The system was designed to suit airborne and ground-based AESA applications that require high output power on transmit. It consists of custom transmit/receive (T/R) modules, liquid-cooled cold plates optimized to reduce thermal gradients between high power amplifiers, power/control electronics, electro-magnetic interference (EMI) lids for improved isolation between channels, and a 64-element radiator. In order to achieve full polarization diversity, a dual-linear radiator element design was employed, supported by double the number of T/R components typically found in an AESA. This required the integration of two 12-Watt high power amplifiers (HPAs) per element in an X-band lattice of 0.57λ at 11 GHz. This lattice spacing allows the system to achieve ±60° scan angle capability. The result is a power-dense and modular physical AESA architecture that leverages commercial off-the-shelf (COTS) chipsets to reduce cost. Each of the T/R modules includes an RF board with supporting active components for eight dual-channel radiating elements, on-board analog beamforming from dedicated RF input/output (I/O) ports, and local processing via a field programmable gate array (FPGA) for command, control, and timing of critical functions.
In addition to relaying polarization, amplitude, and beam position commands, the FPGAs sequence and monitor power conditions in conjunction with a separate power board, which is integrated into each module and provides power for all RF and control interface components on the RF board. The technical accomplishments described herein represent the synthesis of multiple state-of-the-art technologies to create a modern and capable high-power AESA for a wide range of DoD applications. Milestone achievements in the areas of electronics packaging, thermal management, and additive manufacturing (AM) for RF systems were conceptualized and enacted on this project. Aspects of the mechanical design could be leveraged in future AESA architectures for enhanced field-of-view applications ranging from S through Ku-band frequencies.
      • 03.0104 Untethered Interference Cancellation by a Simultaneous Jamming and Communication Terminal
        Bruce McGuffin (MIT Lincoln Laboratory) Presentation: Bruce McGuffin - -
        A radio was designed to support simultaneous co-channel jamming and communications using an in-band Simultaneous Transmit and Receive (STAR) antenna array [1] followed by two layers of adaptive estimator/subtractors. These cancellers use a reference signal obtained by tapping off a small fraction of the transmitted jamming signal. The estimator/subtractors can suppress other locally transmitted signals if they are tethered to the receiver to provide reference signals. The output of the implemented local interference cancelling front-end resembles a four-channel array, and array interference-nulling techniques can be applied to the four-channel output to suppress untethered interference signals. Because of the preceding adaptive cancellers, the untethered interference nulling algorithm must not require array calibration. The received communication signal has a known format and contains known symbol sub-sequences that can be used as training data. The proposed algorithm for untethered interference cancellation with signal training data and without array calibration appeared as an intermediate result in [2]. The algorithm maximizes the output signal-to-interference-plus-noise ratio using the training sequence to estimate Minimum Variance Distortionless Response (MVDR) type array weights without knowing the signal Angle-of-Arrival (AoA). Here we call it MVDR for Uncalibrated Arrays (MUA). This paper describes the receiver design and presents simulation results for the MUA algorithm applied to the four-channel output. It is shown that with ideal self-interference cancelling, untethered interference nulling performance is good whenever the input Interference-to-Noise Ratio is high enough for the algorithm to implicitly estimate the array interference response, and not so high that the algorithm cannot reliably detect signal training data, provided the interference and signal AoAs are at least 45° apart.
Signal to Interference Ratio improvements as high as 50 dB have been observed when untethered interference is strong. References: [1] K. E. Kolodziej, B. T. Perry and J. S. Herd, "In-Band Full-Duplex Technology: Techniques and Systems Survey," in IEEE Transactions on Microwave Theory and Techniques, vol. 67, no. 7, July 2019, pp. 3025-3041 [2] Keith Forsythe, “Utilizing Waveform Features for Adaptive Beamforming and Direction Finding with Narrowband Signals”, Lincoln Laboratory Journal, Vol. 10, No. 2, 1997, pp 99-125. DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited. This material is based upon work supported by the Department of the Air Force under Air Force Contract No. FA8702-15-D-0001 or FA8702-25-D-B002. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Department of the Air Force.
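For context, the core MVDR weight computation is shown below for a two-element array. This is the textbook calibrated form, not the paper's training-data-driven MUA variant, and the covariance and response values are invented for illustration.

```python
import cmath

def mvdr_weights_2elem(R, a):
    """MVDR weights w = R^-1 a / (a^H R^-1 a) for a 2-element array;
    R is the 2x2 Hermitian interference-plus-noise covariance and a is
    the array's response vector for the desired signal."""
    det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
    Rinv = [[ R[1][1] / det, -R[0][1] / det],
            [-R[1][0] / det,  R[0][0] / det]]
    Ra = [Rinv[0][0] * a[0] + Rinv[0][1] * a[1],
          Rinv[1][0] * a[0] + Rinv[1][1] * a[1]]
    denom = a[0].conjugate() * Ra[0] + a[1].conjugate() * Ra[1]
    return [Ra[0] / denom, Ra[1] / denom]

R = [[2.0 + 0.0j, 0.0 + 0.5j],
     [0.0 - 0.5j, 2.0 + 0.0j]]      # Hermitian covariance (invented)
a = [1.0 + 0.0j, cmath.exp(0.7j)]   # signal response vector (invented)
w = mvdr_weights_2elem(R, a)
# Distortionless constraint: w^H a = 1 for the desired signal.
gain = w[0].conjugate() * a[0] + w[1].conjugate() * a[1]
```

The MUA algorithm described above replaces the known response vector `a` with an estimate derived from the training sequence, which is what removes the array-calibration requirement.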
    • James Hoffman (Kinemetrics) & Thomas Williamson (Georgia Tech Research Institute)
      • 03.0201 Deployable Offset Fed Antenna System for Earth Weather Observation
        Jenna Commisso (Opterus R&D) Presentation: Jenna Commisso - -
        Opterus’ innovative deployable offset fed antenna stows compactly during launch, replacing the large, rigid reflectors commonly used in polarimetric radiometer payloads. In development with Weather Stream (formerly Orbital Micro Systems), this technology will transform space-based Earth weather observation through the significant cost reductions enabled by a compact payload. Utilizing concepts from Opterus’ center-fed Spiral Wrapped Antenna Technology (SWATH), the offset fed reflector is efficiently manufactured using a mold-based composite materials process. The thin shell reflector folds up tightly around a central hub based on origami principles and deploys once in orbit. The reflector is positioned away from the spacecraft with a boom that also provides precise setting of RF-transparent tensioned spokes on the front and back sides of the reflector. These provide stiffness to the reflector, which allows the system to meet the desired scanning rate with minimal surface deformations. This advancement enables new frontiers in cost-effective high frequency RF performance, as the cost per satellite is greatly reduced due to the launch volume and manufacturing cost savings associated with the molded, deployable reflector technology. Design, analysis, and test results from the ongoing development effort with Weather Stream will be discussed.
    • James Hoffman (Kinemetrics) & Christopher Edmonds (Georgia Tech Research Institute)
      • 03.0301 High-Power Radio Frequency Anti-Satellite (ASAT) Weapons – Technical Aspects of an Emerging Threat
        Marian Lanzrath (), Grzegorz Lubkowski (Fraunhofer INT), Michael Suhrke (Fraunhofer INT), Martin Schwarz () Presentation: Marian Lanzrath - -
        Over the last decades, the importance of satellite systems and constellations has continuously increased. Today, they are indispensable not only for their originally military functions but also for a wide range of civilian applications, including GNSS (Global Navigation Satellite Systems), SatCom (Satellite Communication), and Earth Observation. The persistent availability and integrity of satellite-based services are critical for both military operations and modern societies, supporting everything from navigation and communication to disaster management and environmental monitoring. There are several thousand satellites in different orbits (LEO – Low Earth Orbit, GEO – Geostationary Earth Orbit, MEO – Medium Earth Orbit, and elliptical orbits). State-of-the-art anti-satellite (ASAT) weapons include kinetic-kill systems such as ASAT missiles, which physically destroy satellites but also generate vast clouds of orbital debris, posing long-term risks to other space assets. In addition to these, directed-energy weapons such as high-energy lasers are being developed and tested to temporarily dazzle or permanently damage satellite optical systems without creating debris. An emerging threat to satellite systems comes from high-power radio frequency (HPRF) weapons, which can disrupt, degrade, or potentially destroy satellite electronics through intense electromagnetic pulses. Different technologies are available for such powerful HPRF sources, based either on solid-state power amplifiers or on microwave tube technology. Depending on their power and size, HPRF systems can be deployed from ground-based platforms, airborne systems, or even space-based assets, targeting both satellites in orbit and terrestrial ground stations. Consequently, a key research question is the examination of the vulnerability of satellites to HPRF threats.
A first step in assessing HPRF threats to satellites is the technological evaluation of their electronic components, including the payload, with regard to implemented mitigation measures (e.g., EMC protective measures), in order to derive EMC requirements for the satellite as well as potential coupling paths (e.g., antennas or apertures). This fundamental analysis provides first results on satellite vulnerability to HPEM attacks. For more refined information, susceptibility tests on such satellites are needed to obtain reliable data on their vulnerability. In addition, this paper addresses the emerging threat posed by HPRF weapons. It includes an analysis of feasible HPRF technologies and introduces an assessment methodology for evaluating the potential impact of ground-based, airborne, and space-based HPRF systems on satellites. This methodology enables a preliminary estimation of threat-relevant field amplitudes based on available experimental HPRF sources. Furthermore, the study considers the influence of orbital debris in HPRF engagement scenarios, including its potential effects on HPRF propagation.
      • 03.0302 Snow Accumulation Sounding Using C3 Class Hexacopter Integrated FMCW Radar
        Lee Taylor (University of Kansas), Fernando Rodriguez-Morales (University of Kansas), Carlton Leuschen (University of Kansas), Jilu Li (The University of Kansas) Presentation: Lee Taylor - -
        This paper presents snow sounding images acquired with a nadir-looking 2-8 GHz ultra-wideband Frequency Modulated Continuous Wave (FMCW) radar integrated on an Aurelia C3-class X6 hexacopter. Multiple passes of the same flight path were flown as a measurement of opportunity before and after a significant snowfall event during February 2025 in Lawrence, Kansas, USA. Image processing was performed on geolocated data, with before and after images presented and compared. The before and after snowfall dataset comparison successfully demonstrates snow layer interface detection near the theoretical limit for range resolution, measuring approximately 5 cm of snowfall using the cost-effective, small-form-factor radar integrated with a commercially available Unmanned Aerial System (UAS) from an altitude of 100 m Above Ground Level (AGL). The significance of this observation is in showing that low Size, Weight, and Power (SWaP) ultra-wideband FMCW radar offers increased cost-effectiveness and versatility for snow depth retrieval compared to competing technologies, including relatively low-cost traditional fixed-wing-integrated FMCW snow radars, while maintaining vertical resolution and performance comparable to that of higher-SWaP systems.
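As a back-of-envelope check of the resolution figure quoted in this abstract, the free-space range resolution of an FMCW radar is c / (2B); inside snow the wave slows by the square root of the relative permittivity. The permittivity value below is an assumed illustrative number, not one taken from the paper.

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz: float, eps_r: float = 1.0) -> float:
    """Range resolution in metres for a chirp of the given bandwidth,
    inside a medium of relative permittivity eps_r (1.0 = free space)."""
    return C / (2.0 * bandwidth_hz * eps_r ** 0.5)

free_space = range_resolution(6e9)          # 2-8 GHz sweep -> 6 GHz bandwidth
in_snow = range_resolution(6e9, eps_r=1.5)  # assumed dry-snow permittivity
print(f"{free_space * 100:.1f} cm free space, {in_snow * 100:.1f} cm in snow")
```

With the full 6 GHz sweep this gives roughly 2.5 cm in free space, consistent with resolving the ~5 cm snowfall layer reported.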
      • 03.0303 Probabilistic Evaluation of NGSO Interference on GEO-FSS Links Using a Monte Carlo Approach
        Remi Plazinski (UPVD) Presentation: Remi Plazinski - -
        The advent of "New Space," characterized by the democratization of access to space and the emergence of satellite mega-constellations, is leading to a saturation of orbit-spectrum resources. Indeed, space is now widely accessible to emerging countries as well as private actors, according to data from the United Nations Office for Outer Space Affairs (UNOOSA) and the Committee on the Peaceful Uses of Outer Space (COPUOS) [1]. At the same time, the regulation of the use of these resources is based on the principle of not exceeding interference threshold parameters. With the arrival of a large number of potential sources of interference, it is necessary to reduce the computation time of these parameters to ensure better coordination. The objective of this work is, first, to reconstruct the quantities used to evaluate interference in the context of this study, based on the recommendations of the International Telecommunication Union. Then, to develop a probabilistic method aimed at reducing the computation time of interference simulations, using a Monte Carlo–type approach. The analysis focuses on the Fixed Satellite Service (FSS) in the Ku band (11–14 GHz) and Ka band (20–30 GHz). It takes into account both existing and planned Non-Geostationary Satellite Orbit (NGSO) constellations. The results are valid for a downlink from a Geostationary Earth Orbit (GEO) satellite to a ground station. The methodology is based on a Monte Carlo simulation approach aimed at statistically evaluating the impact of secondary lobes emitted by NGSO satellites—particularly those from mega-constellations—on a ground station operating in downlink with a GEO satellite within the FSS framework. The approach involves randomly generating constellation configurations, simulating the associated interference indicators, and comparing the results to the regulatory thresholds defined by the ITU. 
NGSO satellites are randomly positioned within an error corridor centered on their nominal orbit, in accordance with the tolerances defined by the ITU. This process makes it possible to reproduce the uncertainty in the actual positions of the satellites, as well as the local density induced by large-scale constellations. For each satellite, the antenna gain in the direction of the GEO station is calculated, taking into account secondary lobes based on realistic antenna patterns, when available. Based on these gains, several key interference metrics are estimated: the Equivalent Power Flux Density (EPFD) [2][3], the Carrier-to-Interference ratio (C/I) [4], as well as the equivalent noise temperature at the input of the receiving station [5]. The results from the various iterations are then analyzed statistically. Histograms are constructed to represent the distribution of the simulated values, and kernel density estimation is used to obtain a smoothed approximation of the cumulative distribution functions, allowing for a more precise assessment of exceedance risks. Finally, the resulting indicators are compared against the regulatory thresholds defined by the ITU, particularly those in the Radio Regulations. Any exceedance of these thresholds is interpreted as a case of harmful interference for the GEO station.
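The Monte Carlo procedure this abstract describes can be sketched in a few lines: perturb NGSO satellite positions inside an error corridor, accumulate the aggregate equivalent power flux density (EPFD) at the ground station through sidelobe gains, and estimate the probability of exceeding a threshold. All numbers (EIRP, sidelobe gain, altitude, corridor width, threshold) are illustrative placeholders, not the paper's parameters.

```python
import math
import random

def epfd_dbw(sat_ranges_m, eirp_dbw, offaxis_gain_db):
    """Aggregate EPFD (dBW/m^2) from per-satellite slant ranges and
    off-axis (sidelobe) gains toward the victim ground station."""
    total = 0.0
    for d in sat_ranges_m:
        # EIRP times relative sidelobe gain, spread over the sphere at range d
        total += 10 ** ((eirp_dbw + offaxis_gain_db) / 10) / (4 * math.pi * d * d)
    return 10 * math.log10(total)

def exceedance_probability(n_sats, threshold_dbw, trials=2000, seed=1):
    """Fraction of random constellation draws whose EPFD exceeds the threshold."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # nominal 550 km slant range with a +/-30 km position corridor (assumed)
        ranges = [550e3 + rng.uniform(-30e3, 30e3) for _ in range(n_sats)]
        if epfd_dbw(ranges, eirp_dbw=10.0, offaxis_gain_db=-30.0) > threshold_dbw:
            hits += 1
    return hits / trials
```

A real evaluation would draw orbital geometries, apply ITU-R antenna patterns, and smooth the resulting distribution with kernel density estimation, as the abstract describes; this sketch only shows the sampling-and-thresholding skeleton.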
    • Mark Bentum (Eindhoven University of Technology) & Melissa Soriano (Jet Propulsion Laboratory)
      • 03.0402 Assessment of GRAIL Radio Science Signal Noise Patterns in Support of Lunar Radio Occultation
        Yu-Ming Yang (NASA Jet Propulsion Lab), Kamal Oudrhiri (Jet Propulsion Laboratory), Paul Withers (), Daniel Erwin (University of Southern California), Dustin Buccino (Jet Propulsion Laboratory) Presentation: Yu-Ming Yang - -
        Over the last few decades, many planetary missions have relied on Radio Science signals and observations to probe the planets and improve our understanding of the solar system. Recently, JPL has utilized NASA's Gravity Recovery and Interior Laboratory (GRAIL) radio science as Signals of Opportunity (SoOP) to map the moon's ionosphere and investigate the measurements for the future Lunar Radio Occultation (RO) mission, which aims to study lunar dust and surrounding plasma interactions. This research will provide an assessment of the GRAIL radio science signal noise patterns corresponding to different locations in Geocentric Solar Ecliptic (GSE) coordinate, solar plasmas conditions, and orbit dynamics of radio occultation experiments for the new scientific observations derived using GRAIL RO data; the statistical analysis of the noise patterns, estimated using the actual GRAIL RO data, will be in comparison with the uncertainties derived using analytical models. The research findings provide a reference for the scientific community to design future lunar Radio Occultation Missions, improving the understanding of constraints and errors in these new scientific measurements made by GRAIL-like Radio Science instruments.
  • Kar Ming Cheung (Jet Propulsion Laboratory) & John Enright (Toronto Metropolitan University)
    • Behzad Koosha (The George Washington University ) & Shervin Shambayati (Aerospace Corporation)
      • 04.0101 A Survey on Autonomous eVTOL Communications: Challenges, Solutions, and Opportunities
        Ying Wang (Stevens Institute of Technology), Bing Mak (Aspen Consulting Group), Ishan Aryendu () Presentation: Ying Wang - -
        Low Earth Orbit (LEO) satellite constellations are essential for extending 5G Non-Terrestrial Network (NTN) services, but they introduce significant challenges in maintaining high throughput and stable links due to rapid elevation changes, Doppler shifts, and limited coverage durations. This paper presents an integrated elevation-aware link adaptation framework for downlink communications in 5G NTN systems. It dynamically adjusts the Modulation and Coding Scheme (MCS) using Channel Quality Indicator (CQI) feedback, Block Error Rate (BLER) trends, and elevation angle tracking. Unlike conventional CQI-only schemes, our approach leverages both instantaneous and historical link conditions to achieve more responsive and robust performance. The system adheres to 3GPP Release 17 NTN standards and simulates slot-level SNR variations driven by orbital slant range dynamics. We incorporate realistic link budget modeling based on 3GPP Case 9, accounting for Doppler, path loss, and atmospheric effects. Seamless satellite handover and elevation-guided beam switching are integrated to preserve link continuity as satellites move in and out of view. This is particularly important in urban environments where tall buildings frequently obstruct line of sight, requiring adaptation strategies that respond to elevation and signal quality to maintain throughput. Rule-based link adaptation, though simple, is often insufficient under the rapidly changing link dynamics of LEO satellite systems, including fluctuating SNR, sporadic CQI accuracy, and beam transition boundaries. To address these limitations, we propose exploring machine-learning or heuristic-driven strategies that infer optimal MCS behavior from observed feedback rather than relying on static thresholds. Reinforcement learning and online decision policies have outperformed fixed rules in terrestrial 5G, and similar benefits are expected in NTN scenarios. 
Our framework surfaces CQI trends, BLER history, and elevation angle as input features to support future integration of adaptive learning agents. This direction is particularly relevant for ultra-reliable and low-latency applications such as ISR, airborne terminals, and remote-edge operations, where service guarantees must be maintained despite orbital variability. We validate the proposed system through both MATLAB-based simulation and an srsRAN-based Over-the-Air (OTA) demonstration comprising over 720,000 time slots across a 12-minute satellite pass using 15 kHz subcarrier spacing. Experimental results show more than 30 percent improvement in throughput over a CQI-only baseline, faster convergence to optimal modulation settings, and smoother transitions during handover. The adaptive MCS controller supports more aggressive use of high coding rates in favorable SNR intervals while maintaining acceptable error rates. This work contributes to the advancement of space communication systems by addressing the challenges of highly dynamic LEO environments using standards-compliant, service-aware adaptation. The proposed methods are compatible with future satellite constellations and scalable to multi-beam, multi-user spaceborne architectures. By combining physical-layer modeling with responsive control logic, our framework offers a practical and extensible solution for resilient 5G NTN operations. It supports scalable quality-of-service delivery across both civilian and mission-critical scenarios in dynamic LEO environments.
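The elevation-aware adaptation idea in this abstract can be illustrated with a hypothetical controller: the MCS step combines instantaneous CQI with a BLER trend and the satellite elevation angle, instead of thresholding CQI alone. All thresholds and step sizes below are invented for illustration; they are not the paper's actual parameters.

```python
MAX_MCS = 27  # 3GPP-style MCS index range, 0..27

def next_mcs(current_mcs: int, cqi: int, bler_trend: float,
             elevation_deg: float) -> int:
    """Return the next MCS index from combined link feedback."""
    step = 0
    if bler_trend > 0.10:          # error rate rising: back off aggressively
        step -= 2
    elif cqi >= 12:                # strong channel: push one step up
        step += 1
    if elevation_deg < 20.0:       # low elevation: longer slant range and
        step -= 1                  # deeper fades, so stay conservative
    return max(0, min(MAX_MCS, current_mcs + step))
```

A CQI-only scheme would ignore the last two inputs; the elevation term is what lets the controller pre-emptively de-rate near the horizon rather than waiting for BLER to degrade.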
      • 04.0102 O2O Architecture and Test Systems
        Peter Rossoni (NASA/GSFC) Presentation: Peter Rossoni - -
        The Orion-Artemis 2 Optical Communications (O2O) payload provides high-rate data transfer for NASA's first crewed moonshot since the Apollo era. O2O's purpose and scope stress incorporating the operational utility of a high-performance communication package (up to 260 megabits per second) into an astronaut mission with excellent Size, Weight and Power efficiency. This paper describes O2O's operational architecture, based on the optical C-band wavelength (1550 nm), and its test system. Of particular note are the intricacies of integrating such a high-performance system into existing infrastructure. The method of integration and implementation, the architecture, and certification/qualification testing are all central to what will be an essential component of future space missions, both robotic and crewed.
      • 04.0105 Inter-Satellite Link Configuration for Fast Delivery in Low-Earth-Orbit Constellations
        Arman Mollakhani () Presentation: Arman Mollakhani - -
        End-to-end latency in large low-Earth-orbit (LEO) constellations is dominated by propagation delay, making the network diameter—the longest shortest-path in hops—a critical performance metric. While LEO networks promise unprecedented low-latency global connectivity, this potential is fundamentally constrained by the inter-satellite link (ISL) topology. Current ISL layouts, however, have rarely been optimized to explicitly minimize network diameter while simultaneously satisfying stringent physical and operational constraints, including maximum link distance, line-of-sight, per-satellite hardware limits, and long-term link viability over orbital periods. This study formulates the selection and assignment of inter-plane ISLs as a constrained diameter-minimization problem. We model a Starlink-inspired Walker–Delta constellation where each satellite is equipped with two fixed intra-plane links and may activate up to two inter-plane links. Beginning with a baseline feasible topology, our approach employs an iterative local-search algorithm that progressively refines the network. This procedure intelligently replaces or reinforces links to shrink the diameter, guided by a multi-objective evaluation that prioritizes hop-count reduction while considering broadcast efficiency and link stability. The resulting optimized ISL configurations are sparse, fully connected, and adhere to all geometric and hardware limits. Critically, we empirically measure the trade-off between performance and operational robustness. A snapshot-only optimization yields the minimal network diameter by utilizing links that can be dynamically reconfigured. In contrast, enforcing a viability constraint produces a topology with complete link stability over multiple orbital periods, which is suitable for static routing and results in a competitively small network diameter. 
Simulations demonstrate that both configurations achieve substantial reductions in worst-case latency compared to non-optimized layouts. The developed framework and quantitative findings offer concrete guidance for future LEO network deployments, providing a practical method for designing diameter-aware topologies that balance the competing demands of ultra-low latency and long-term operational stability.
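The network diameter that this study minimizes can be measured with repeated breadth-first search. A bare ring of n satellites (intra-plane ISLs only) has diameter n // 2 hops; adding a few cross-plane chords cuts it down, which is the effect the local search exploits. The sketch below uses unweighted hop counts and an arbitrary 12-node ring; it is an illustration of the metric, not the paper's algorithm.

```python
from collections import deque

def bfs_ecc(adj, src):
    """Eccentricity of src: hops to the farthest reachable node."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

def diameter(adj):
    return max(bfs_ecc(adj, s) for s in adj)

def ring(n):
    """Single orbital plane: each satellite links to its two neighbors."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

g = ring(12)
print(diameter(g))              # 6 hops around a bare 12-node ring
for u, v in [(0, 6), (3, 9)]:   # two cross chords, like inter-plane ISLs
    g[u].append(v); g[v].append(u)
print(diameter(g))              # 4: the chords cut the worst-case path
```

An optimizer in the spirit of the paper would repeat this evaluation after each candidate link swap, keeping changes that shrink the diameter while respecting distance and hardware constraints.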
    • Shervin Shambayati (Aerospace Corporation) & Behzad Koosha (The George Washington University )
      • 04.0202 Security Challenges in Space-Based Delay Tolerant Networks
        Mohammad Salam (Chicago State University), Robert Short (NASA Glenn Research Center) Presentation: Mohammad Salam - -
        Space-based Delay Tolerant Networks (DTNs) are crucial for communication in extreme environments such as interplanetary missions and satellite constellations. These networks face significant challenges due to long communication delays, intermittent connectivity, and changing network topologies. This paper explores key security issues in space-based DTNs, including data authentication, confidentiality, integrity, trust management, and defense against denial-of-service (DoS) attacks. Traditional authentication methods are insufficient for space-based DTNs, requiring delay-tolerant schemes to maintain data integrity over long periods. Data confidentiality is critical, as sensitive information must be protected from interception, though traditional encryption can be resource intensive. Data integrity is safeguarded through technologies like hash chains and blockchain-based verification. Trust management is difficult in intermittent communication scenarios, leading to the development of reputation-based models. DoS attacks also pose a threat to these resource-limited networks, with rate-limiting and traffic management systems proposed as countermeasures.
      • 04.0203 TCP Performance Analysis for Cislunar Networks
        Qinqing Zhang (Peraton Labs) Presentation: Qinqing Zhang - -
        The Gateway, a vital component in NASA’s Artemis deep space exploration program, will be placed near the Moon, in the region known as cislunar space. It serves as a command, control, and communication hub for cislunar operations, as well as a relay between lunar assets and mission control centers on Earth. In this new cislunar network architecture, reliable and efficient data communication is critical to the success of the Gateway flight system operations. There has been strong interest in utilizing Internet-type protocols (e.g., the TCP/IP protocol suite) for space communication networks. However, TCP/IP protocols, while widely deployed in terrestrial networks, do show performance degradation in space communication networks. Delay Tolerant Networking (DTN) is a viable alternative that performs more efficiently in the space environment. NASA has taken steps to adopt both IP and DTN (as a sustainable mode in later phases) to develop, deploy, and operate the cislunar network. In this paper, we analyze and evaluate TCP/IP performance over cislunar links, which are characterized by a large bandwidth-delay product (BDP). The original TCP flow control and congestion control algorithms were designed to be robust and adaptive to different network conditions. The algorithms have evolved over the past decades, with many new TCP congestion control algorithm variants (without changing the TCP protocol structure) defined in IETF standards and implemented in commercial off-the-shelf (COTS) products, e.g., the Linux OS kernel and the VxWorks OS kernel. We examined and selected several TCP variants in our analysis, e.g., TCP NewReno, TCP LinuxReno, and HTCP, which are most suitable for cislunar links. 
We also looked into TCP options and extensions such as “Window Scale and Timestamps” [IETF RFC 1323], “Selective Acknowledgment (SACK)” [IETF RFC 2018], “Proportional Rate Reduction (PRR)”, and “Explicit Congestion Notification (ECN)”, and made recommendations on using these options. The performance analysis and evaluation are based on network simulation using the well-known NS3 simulation tool to generate quantitative results. We modified the NS3 TCP module implementation and evaluated different congestion control algorithms and TCP options. We enabled packet tracing and flow monitoring to generate packet trace files, and recorded the key TCP algorithm parameters and performance metrics. The detailed simulation results help us gain insight into parameter configurations and algorithm selection, consistent with the IP stack implementations in the OS kernel.
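A quick calculation shows why the Window Scale option matters on cislunar links: the bandwidth-delay product far exceeds the classic 65,535-byte TCP receive window. The rate and round-trip time below (10 Mb/s, ~2.6 s Earth-Moon RTT) are illustrative assumptions, not figures from the paper.

```python
import math

def bdp_bytes(rate_bps: float, rtt_s: float) -> float:
    """Bytes that must be in flight to keep the pipe full."""
    return rate_bps * rtt_s / 8.0

def window_scale_shift(bdp: float, base_window: int = 65_535) -> int:
    """Smallest RFC 1323 window-scale shift whose window covers the BDP."""
    return max(0, math.ceil(math.log2(bdp / base_window)))

bdp = bdp_bytes(10e6, 2.6)            # ~3.25 MB in flight
print(int(bdp), window_scale_shift(bdp))
```

At these assumed figures the required window is roughly 50 times the unscaled maximum, so a scale shift of 6 (a 64x multiplier) is the minimum that avoids window-limited throughput.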
      • 04.0204 Hierarchical Community Detection and Routing on Networks with Forman-Ricci Curvature and Flow
        Oliver Chiriac (Aalyria Technologies) Presentation: Oliver Chiriac - -
        Modern communication networks are trending towards increasing size and heterogeneity, with emerging networks (such as in 5G/6G non-terrestrial networks) comprising constellations in low-Earth orbit (LEO), airborne platforms, maritime assets, and terrestrial systems. These networks are often highly dynamic, with fast-changing topologies that pose a challenge to even state-of-the-art network orchestration systems. In this work, we introduce a curvature-informed, hierarchical framework for network orchestration. Our approach leverages the discrete geometry of the underlying graph -- specifically, the Forman-Ricci curvature, its higher-order augmentations, and associated geometric flows -- to recursively partition the network into a hierarchy of nested communities, enabling parallelizable routing across interconnected subnetworks. The solving framework consists of two key components: (i) a community detection module, which determines the inter-community and intra-community links at each level of the hierarchy; and (ii) a routing module, which computes the optimal paths for service requests between nodes. This provides a general and scalable method for reducing the complexity of large-scale topology and routing problems into more tractable subproblems, giving operators enhanced visibility and control at multiple levels of the network. Experiments on both synthetic and real-world satellite constellations show that our hierarchical framework achieves substantial performance gains in routing efficiency and scalability. In particular, we demonstrate that Forman-Ricci curvature and flow-based community detection consistently outperform baseline clustering algorithms across a range of heterogeneous network topologies. On large-scale networks, our approach significantly reduces computation time while maintaining fulfillment performance comparable to flat solvers. 
These results provide evidence for the benefits of curvature-based methods in hierarchical routing and contribute to the development of robust, scalable architectures for space networking systems. Finally, we conclude with a discussion of future research directions.
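For readers unfamiliar with the geometric quantity this abstract builds on, the unweighted Forman-Ricci curvature of an edge (u, v) is F = 4 - deg(u) - deg(v), and the triangle-augmented variant adds 3 per triangle the edge supports. Strongly negative edges behave like inter-community "bridges", which is what a recursive partitioning can exploit. This is a textbook-style sketch, not the paper's implementation.

```python
def forman_ricci(adj, u, v, augmented=False):
    """Forman-Ricci curvature of edge (u, v) in an unweighted graph."""
    f = 4 - len(adj[u]) - len(adj[v])
    if augmented:
        # each common neighbor closes a triangle containing the edge
        triangles = len(set(adj[u]) & set(adj[v]))
        f += 3 * triangles
    return f

# two triangles {a,b,c} and {d,e,f} joined by the bridge edge (c, d)
adj = {
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
    "d": {"c", "e", "f"}, "e": {"d", "f"}, "f": {"d", "e"},
}
print(forman_ricci(adj, "c", "d"))                  # -2: the bridge edge
print(forman_ricci(adj, "a", "b", augmented=True))  # 3: deep inside a community
```

Cutting the most negatively curved edges first separates the two triangles, which is the intuition behind curvature-driven community detection.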
    • Claudio Sacchi (University of Trento) & Tommaso Rossi (University of Rome Tor Vergata)
      • 04.0301 An NTN Uplink Radio Interface Based on Constant-Envelope Multicarrier Modulations and NOMA
        Claudio Sacchi (University of Trento) Presentation: Claudio Sacchi - -
        The evolution of the radio interface design in 6G Non-Terrestrial Networks (NTNs) is going toward adopting new waveforms and non-standard multiple access methodologies. Recent studies, carried out in the framework of the Next Generation EU RESTART ITA-NTN project, evidenced that a modular SDR-based PHY-layer design can be profitably exploited to make available a wide choice of programmable waveforms obtained using specific processing blocks inserted in a global lattice-based framework. Among these, Constant-Envelope (CE) multicarrier modulations (CE-OFDM and CE-SC-OFDM) might play a key role in future NTN transmissions thanks to their immunity to nonlinear distortion and their augmented diversity against frequency-selective multipath propagation. Such techniques enable orthogonal multiple access (OMA) in the downlink, with uniform user performance. On the other hand, OMA is not allowed in the uplink, because the multiple carriers attributed to a user are nonlinearly “packed” by the phase modulation into a radio-frequency single-carrier signal, which cannot be slotted in the numerical Discrete Fourier Transform (DFT) domain, as it happens for OFDMA and SC-FDMA. Despite this, CE multicarrier modulations look very suitable for the NTN uplink thanks to: i) 0dB Peak-to-Average Power Ratio (PAPR) that does not require any input-backoff (IBO) applied to nonlinear power amplification, ii) performance improvement (expressed in terms of reduced Eb/N0) achieved as compared to conventional OFDM and SC-OFDM in the presence of dispersive multipath channels. To exploit the advantages of CE multicarrier modulations in NTN uplinks, we propose the use of Non-Orthogonal Multiple Access (NOMA) as a multi-user transmission scheme. We believe that the combination of CE multicarrier waveforms and NOMA can offer a novel, flexible, robust, and spectrally efficient NTN uplink radio interface targeted to broadband applications. 
In our paper, we shall first introduce the motivation for the use of CE-OFDM and CE-SC-OFDM in NTN scenarios, discussing their potential advantages along with their tradeoffs. Then, we shall consider some power-domain NTN NOMA scenarios of technical interest, combined with single-rate and multi-rate CE-OFDM and CE-SC-OFDM data transmission. Simulation results, obtained with simulated 3GPP NTN multipath channel profiles, will illustrate the influence of the various degrees of freedom of CE multicarrier modulations (modulation index, oversampling factor, use of nonlinear arctangent demodulation vs. approximated linear demodulation, introduction of phase unwrapping in the RX chain, etc.) on the NOMA system performance. The conclusion of the paper will summarize the outcomes of the analysis, looking ahead to the viability of the proposed uplink transmission solution in the framework of 6G NTN PHY-layer standardization.
      • 04.0303 Reconfigurable Metasurfaces as Enablers for Adaptive and Modular Space Communication Architectures
        Ivan Iudice (CIRA Italian Aerospace Research Center), Domenico Pascarella (CIRA (Italian Aerospace Research Centre)), Sonia Zappia (CIRA, Italian Aerospace Research Centre), Vincenzo Galdi (University of Sannio) Presentation: Ivan Iudice - -
        The increasing demand for agile, energy-efficient, and software-defined space communication systems has brought intelligent electromagnetic surfaces into the spotlight as key enabling technologies. This paper explores the role of reconfigurable and intelligent metasurfaces in shaping next-generation space architectures. By passively manipulating wavefronts through programmable sub-wavelength elements, metasurfaces can enhance link quality, provide dynamic coverage steering, and reduce onboard processing and energy demands—particularly beneficial in small satellite and distributed Low-Earth Orbit (LEO) constellations. We discuss architectural scenarios where metasurfaces support modular and adaptive connectivity in space-to-space and space-to-ground systems. Furthermore, we explore how metasurfaces, when controlled via Artificial Intelligence (AI) or embedded logic, contribute to the vision of interoperable, software-defined, and updateable space infrastructures. The paper concludes with a roadmap for integrating intelligent metasurfaces into space digital twins, onboard optimization loops, and standardized interfaces for future cognitive space systems.
      • 04.0304 Distributed Reinforcement Learning for Resource Management in Satellite Edge Computing Systems
        Camilo Rojas (University of Genoa), Nour Badini (), Fabio Patrone (University of Genoa), Juan Fraire (), Mario Marchese (University of Genoa, Italy) Presentation: Camilo Rojas - -
        The emergence of large-scale Low Earth Orbit (LEO) satellite constellations has renewed attention to satellite networks, with a vision to deliver global connectivity and performance levels comparable to terrestrial infrastructures. Despite this progress, such constellations are often treated as standalone systems rather than being fully integrated into next-generation communication architectures. To bridge this gap, extensive research has been devoted to the development of Non-Terrestrial Networks (NTN), aiming to enable unified operation across terrestrial and space segments within cellular infrastructures. Nonetheless, realizing this vision remains challenging due to persistent issues in latency, dynamic scheduling, and resource management. To tackle these challenges, we present the development of a Space Cloud, where Multi-access Edge Computing (MEC) services are deployed across satellite nodes and interconnected through Inter-Satellite Links (ISL), forming a distributed space-based data center. By enabling in-orbit computation, the Space Cloud reduces dependence on terrestrial infrastructure, as processing no longer needs to be offloaded to ground-based data centers. This architectural shift is key to meeting the low-latency requirements of modern, latency-sensitive applications. This paper proposes a distributed Reinforcement Learning (RL) approach to manage computational resources in space-based MEC environments. The system leverages a scalable actor-critic framework, where neural network-based actors and critics are deployed on individual satellites. Each node operates autonomously and in a distributed manner, enabling the network to optimize actions locally while considering both instant rewards and long-term impact. The RL model leverages historical data and processing patterns to control the activation of on-orbit servers, aiming for efficient resource utilization. 
We evaluate the proposed strategy using a synthetic satellite constellation built with the MeteorNet framework. Moreover, through Pareto-efficient analysis across key performance indicators (KPI), we benchmark our approach against conventional baselines and assess its applicability under the constraints of specific space missions. The results demonstrate that the RL controller achieves comparable task failure and latency rates to the baselines, while significantly reducing resource consumption.
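The trade-off an on-orbit controller faces can be illustrated with a toy reward: activating more MEC servers lowers queuing latency but costs power. The reward shape, the unit server capacity, and the weights below are invented for illustration; the paper's actor-critic agents learn this trade-off from feedback rather than using a closed-form rule.

```python
def reward(active_servers: int, offered_load: float,
           w_latency: float = 1.0, w_energy: float = 0.2) -> float:
    """Negative cost: queuing-delay proxy plus energy for active servers."""
    if active_servers * 1.0 <= offered_load:     # one load unit per server
        return float("-inf")                     # overload: tasks fail
    latency = 1.0 / (active_servers - offered_load)  # M/M/c-style delay proxy
    return -w_latency * latency - w_energy * active_servers

def greedy_policy(offered_load: float, max_servers: int = 8) -> int:
    """Server count maximizing the immediate reward (no lookahead)."""
    return max(range(1, max_servers + 1), key=lambda n: reward(n, offered_load))
```

An RL agent generalizes this greedy rule by also weighing long-term effects, e.g. anticipating load from orbital passes before activating servers.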
    • Patrick Stadter (The Aerospace Corporation) & David Copeland (Johns Hopkins University/Applied Physics Laboratory)
      • 04.0402 Designing a Networking System for Lunar Exploration over the Moon–Earth Link
        Shunsuke Higuchi (KDDI Research, Inc. ), Tetsu Joh (KDDI Research, Inc.), Masaki Suzuki (), Chikara Sasaki (KDDI Research, Inc.), Atsushi Tagami (KDDI Research, Inc.), Yu Morinaga (Japan Aerospace Exploration Agency), Kiyohisa Suzuki (Japan Aerospace Exploration Agency) Presentation: Shunsuke Higuchi - -
        Recent advancements in lunar exploration underscore the necessity for continuous terrain monitoring and responsive teleoperation of robotic systems deployed on the Moon. Given the high latency and intermittent availability of Earth–Moon communication links, it is essential to assess communication performance requirements for such systems quantitatively. In this study, we derive concrete throughput and latency requirements for lunar robotic missions based on realistic operational parameters, including rover velocity, light detection and ranging (LiDAR) sensor resolution, field-of-view, and round-trip communication delay. Assuming the use of delay/disruption-tolerant networking (DTN) with the Bundle Protocol (BP) for the Earth–Moon segment, and that endpoint devices remain IP-based, we evaluate a system architecture in which IP–BP–IP gateways mediate communication across the cislunar link. This architecture is consistent with current recommendations from the Consultative Committee for Space Data Systems (CCSDS) and does not require modification of existing endpoint hardware. We implement and compare three gateway designs that differ in the open systems interconnection (OSI) model layer—data link (Layer 2), network (Layer 3), or transport (Layer 4)—at which protocol conversion occurs. These designs are evaluated using simulated lunar network conditions and application-level traffic patterns representative of LiDAR data streaming and robotic teleoperation. Our results clarify how protocol termination strategies impact system-level metrics such as throughput, latency, and control responsiveness. We examine the trade-offs between reliability, complexity, and responsiveness to inform future system designs. This study makes three main contributions. First, it derives concrete communication requirements for point-cloud transmission and teleoperation latency, grounded in realistic operational assumptions for lunar robotic systems. 
Second, it identifies viable system architectures that can satisfy these requirements, showing that either a combination of UDP with application-layer reliability or the use of TCP with Layer 4 termination is sufficient. Third, it provides system-level design insights that clarify where reliability enforcement—whether at the transport or application layer—should be located, and how this choice influences implementation complexity in lunar communication systems.
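The kind of requirement derivation this abstract describes can be reduced to two small calculations: point-cloud throughput from scan rate and bytes per point, and a teleoperation reaction budget from rover speed and round-trip time. Every input number below is an assumed placeholder for illustration, not a value from the paper.

```python
def required_throughput_bps(points_per_scan: int, scans_per_s: float,
                            bytes_per_point: int = 16) -> float:
    """Raw bit rate needed to stream an uncompressed LiDAR point cloud."""
    return points_per_scan * scans_per_s * bytes_per_point * 8

def reaction_distance_m(speed_m_s: float, rtt_s: float,
                        operator_delay_s: float = 0.5) -> float:
    """Distance a rover travels before an Earth operator's stop command
    takes effect, given the round-trip delay plus human reaction time."""
    return speed_m_s * (rtt_s + operator_delay_s)

print(required_throughput_bps(100_000, 10))   # 128 Mb/s raw point stream
print(reaction_distance_m(0.3, 2.6))          # ~0.93 m at 0.3 m/s
```

Numbers like these bound both the link budget (streaming rate) and the safe rover velocity for a given Earth-Moon round-trip time, which is the quantitative grounding the paper's first contribution provides.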
      • 04.0405 Validation of Joint Doppler and Ranging with GNSS in an Earth-based Hardware Demonstration
        William Jun (NASA Jet Propulsion Laboratory), Dennis Ogbe (Jet Propulsion Laboratory), Paul Carter (Jet Propulsion Laboratory) Presentation: William Jun - -
        Joint Doppler and Ranging (JDR) is a novel geometric constraint on one-way radiometric measurements between a user and a station. When navigation infrastructure is sparse, such as for a lunar south pole user during the development of the LunaNet architecture, a surface station providing JDR and other traditional differential corrections can drastically improve PNT performance for nearby users. Previous literature on JDR demonstrates nearly a factor of two improvement on the state-of-the-art (differential GNSS techniques on the Moon) in estimating real-time user position, velocity, and timing. However, all previous JDR analyses have been performed in simulation. This work aims to increase the technology readiness level of JDR by demonstrating real-time position estimation using JDR in a hardware demonstration with GNSS observables. For the demonstration, we set up an outdoor mobile kit that deploys a real-time kinematic (RTK) GNSS receiver. This kit acts as a station and a duplicate kit acts as the user. The software defined radio on the station logs its true position and its range and Doppler shift measurements from the RTK GNSS receiver and transmits this data to the user’s radio. The user deploys a real-time position estimation filter with measurements from its own RTK GNSS receiver and implements JDR using the station’s range and Doppler measurements. We artificially limit the GNSS satellites available to both the user and station to mimic a sparse navigation infrastructure. The RTK GNSS receivers provide ground truth for error analysis and a separate navigation filter running on the user’s radio estimates navigation performance without the aid of a station (without JDR). Ultimately, this hardware demonstration increments the technology readiness level of JDR to 4 and serves as the hardware foundation for future demonstrations of sparse infrastructure navigation. 
Using this hardware setup, future demonstrations of a surface station-enhanced PNT architecture will include a one-way radiometric range measurement between the station and user’s radios, and additional kits to emulate radiometric measurements from a sparse LunaNet constellation.
      • 04.0406 Observability Analysis of Simultaneous Localization & Calibration for Cooperative Radio Navigation
        Alexis Fernando Marino Salguero (German Aerospace Center (DLR)), Robert Poehlmann (German Aerospace Center (DLR)), Emanuel Staudinger (German Aerospace Center - DLR) Presentation: Alexis Fernando Marino Salguero - -
        Future robotic surface exploration missions, e.g. on Mars, demand robust and accurate navigation solutions to ensure safe autonomous operation, avoid hazards, and achieve mission objectives. Since no GNSS-like infrastructure exists on Mars, accurate navigation remains challenging. Cooperative radio navigation has emerged as a key technology in such scenarios, enabling precise localization in infrastructure-free environments by exchanging radio signals among networked nodes to estimate inter-node ranges. Methods based on signal propagation time, such as round-trip time ranging, require sub-nanosecond Time-of-Flight (ToF) measurement accuracy. However, environmental factors, e.g. temperature changes, can affect the group delays of radio transceivers, introducing ranging bias. Transceiver group delays therefore require precise calibration; pre-launch calibration alone is insufficient for extraterrestrial environments, making in-situ online calibration essential. To address this need, Simultaneous Localization and Calibration (SLAC) for cooperative radio navigation has been proposed to estimate the ranging bias as a calibration parameter and apply online corrections. However, the SLAC framework has not yet been thoroughly analyzed. A proper design should account for the system’s observability, i.e. when the calibration parameter can be estimated and how accurately. Accounting for observability is not only critical for ensuring reliable performance; it also allows optimization of measurement strategies. In this paper, we present an observability analysis of SLAC for cooperative radio navigation, aimed at determining under which conditions the node positions and ranging biases can be estimated. This analysis yields necessary and sufficient conditions for the SLAC problem to be observable. 
Specifically, we exploit the equivalence between the full rank of the Fisher Information Matrix and the full column rank of the Jacobian of the observation model to determine the minimum number of static and mobile nodes required for SLAC, and to identify the geometric configurations that yield strong, weak, or non-observable systems. Our results show that, in the absence of cooperation between nodes, the system is inherently non-observable, making SLAC infeasible. Based on these theoretical findings, we propose practical solutions: in non-cooperative scenarios, at least one node should be equipped with means for accurate self-calibration to enable the estimation of calibration parameters for the remaining nodes. In scenarios with poor observability, geometry-aware deployment strategies, such as avoiding collinear configurations, should be adopted to improve estimation performance. The observability analysis is validated through extensive simulations considering different configurations with varying numbers of static and mobile nodes. For each configuration, the Cramér–Rao Bound (CRB) is computed to quantify how the number of nodes influences the system’s observability. In addition, real-world experiments were conducted using radio nodes integrated into both static and mobile entities. The experiments confirmed the theoretical result that geometric configurations close to collinearity lead to poor or non-existent observability. Conversely, with appropriate geometric arrangements and a sufficient number of nodes, the experiments confirmed observability of SLAC in realistic conditions, enabling accurate estimation of both positions and ranging biases.
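As a toy illustration of this rank condition (a hypothetical 2D geometry, not a configuration from the paper), consider one node with unknown position and unknown ranging bias, measured against three anchors. The Fisher Information Matrix J^T J loses rank exactly when the measurement Jacobian J loses column rank, which happens for collinear geometry:

```python
import math

def jacobian(node, anchors):
    """Jacobian of biased range measurements r_i = ||p - a_i|| + b with
    respect to the unknowns (p_x, p_y, b): each row is [unit vector, 1]."""
    rows = []
    for ax, ay in anchors:
        dx, dy = node[0] - ax, node[1] - ay
        r = math.hypot(dx, dy)
        rows.append([dx / r, dy / r, 1.0])
    return rows

def det3(m):
    """Determinant of a 3x3 matrix; zero means the Jacobian (and hence the
    Fisher Information Matrix J^T J) is rank-deficient, i.e. unobservable."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

node = (0.0, 0.0)
collinear = [(-1.0, 0.0), (1.0, 0.0), (2.0, 0.0)]  # all on one line with the node
general = [(1.0, 0.0), (0.0, 2.0), (-1.0, -1.0)]   # well-spread geometry
print(det3(jacobian(node, collinear)))  # 0.0: singular, bias not estimable
print(det3(jacobian(node, general)) != 0)
```

The collinear case is singular because every unit vector points along the same line, so the position and bias columns become linearly dependent; spreading the anchors restores full rank.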
      • 04.0407 Analysis of Cost-Effective Lunar Positioning Constellation with Heterogeneous Satellites
        Sae Ogoshi (the University of Tokyo), Yosuke Kawabata (University of Tokyo), Kenshiro Oguri (Purdue University), Satoshi Ikari (University of Tokyo), Ryu Funase (University of Tokyo), Shinichi Nakasuka (University of Tokyo) Presentation: Sae Ogoshi - -
        This study proposes a hybrid navigation system to enhance the cost-effectiveness and positioning accuracy of lunar positioning satellite constellations. The Lunar Navigation Satellite System (LNSS) concept currently proposed in Japan requires all eight satellites to be equipped with Global Navigation Satellite System (GNSS) receivers and is prohibitively expensive: installing large high-gain antennas requires larger satellite buses, which significantly increases system cost. This study addresses that limitation by proposing a hybrid navigation architecture that selectively integrates GNSS observations with non-GNSS methods such as optical navigation (OPNAV) and satellite-to-satellite tracking (SST). Our approach uses a constellation of eight satellites in Elliptical Lunar Frozen Orbits (ELFO), dividing them into two types: large satellites equipped with high-gain GNSS antennas for precise orbit determination, and small satellites that rely on SST and optical navigation. This configuration aims to achieve high overall positioning accuracy while significantly reducing costs compared to an all-GNSS-equipped constellation. We use an Extended Kalman Filter (EKF) to integrate GNSS, SST, and optical navigation measurements for precise orbit determination. GNSS measurements include pseudorange and pseudorange rate. SST measurements, both two-way and one-way, account for link budget and lunar occultation to ensure realistic measurement feasibility. Optical navigation estimates the line-of-sight vector to celestial bodies like the Moon and Earth by using an on-board camera to detect their elliptical outlines, fitting an ellipse, and applying geometrical constraints. Our previous simplified study, which used a 10-minute observation interval for GNSS and SST and a 5-minute interval for optical navigation, demonstrated that a higher number of large satellites improved the orbit determination accuracy of the small satellites. 
The results showed that with six or more large satellites, the accuracy of the small satellites surpassed that of the large satellites. To address a key limitation of the previous study—the long observation intervals that led to larger absolute positioning errors—we perform an updated, more detailed simulation. We simulate all satellite combinations with a more practical one-second observation interval for all components to achieve a more realistic precision evaluation and to assess the hybrid navigation architecture's ability to balance accuracy, cost-efficiency, and mission feasibility. The results also suggest that an optimal number of large satellites exists in the range of two to six, as this configuration minimizes discrepancies while maintaining an accuracy comparable to that of the large satellites. This updated study provides a more robust and comprehensive assessment of the proposed hybrid system, advancing the field toward a practical and affordable solution for lunar positioning.
      • 04.0408 A Mission Concept to Demonstrate Surface Station Enhanced Navigation at the Lunar South Pole
        William Jun (NASA Jet Propulsion Laboratory), Paul Carter (Jet Propulsion Laboratory), Dennis Ogbe (Jet Propulsion Laboratory), Mark Panning (Jet Propulsion Laboratory), Rodney Anderson (Jet Propulsion Laboratory / California Institute of Technology), Kar Ming Cheung (Jet Propulsion Laboratory) Presentation: William Jun - -
        A surface station near the lunar south pole can work alongside the planned LunaNet infrastructure to significantly improve the navigation performance of lunar surface users. This surface station-enhanced position, navigation, and timing (SSE PNT) service provides differential corrections, Joint Doppler and Ranging corrections, and an additional pseudorange observable to nearby users to supplement a lunar relay network’s PNT services during the initial phases of the constellation’s deployment. A lunar station providing these services is essential during early missions to the south pole that require real-time, highly accurate PNT (<10 m) but only have access to 2 or 3 navigation orbiters. We propose a mission to demonstrate a surface station-enhanced PNT architecture on the Moon from a single payload on a commercial lunar payload service (CLPS) lander. The payload would consist of two receivers capable of receiving LunaNet’s one-way pseudorange code: the augmented forward signal (AFS). One receiver acts as the surface station and the other as a user. The antennas for these receivers are mounted separately on the lander to create a baseline of ≥2 meters. The receivers gather AFS measurements from an early LunaNet constellation, likely with only 3 total orbiters. The station receiver utilizes its known ground truth position to estimate corrections and transfers the corrections to the user receiver, which deploys two real-time navigation filters: one filter utilizing SSE services and the other using traditional range and Doppler-based positioning with only AFS measurements. Although typical baselines for a SSE PNT service would be on the order of kilometers, this small baseline design allows the SSE demonstration to be completely contained on a single mission. 
Navigation simulations estimate that SSE positioning of the user receiver with a three-satellite LunaNet constellation results in around 1 m positioning accuracy at 3 sigma, an order-of-magnitude improvement on traditional positioning with LunaNet’s AFS (15 m at 3 sigma). When implemented, this SSE demonstration mission would be the first asset on the Moon that self-positions to a real-time accuracy of <10 m at 3 sigma using a sparse LunaNet constellation of three orbiters. This mission would demonstrate the performance gains of an SSE PNT service, increment its technology readiness level, and prepare the SSE PNT concept for future implementation in lunar and Martian environments.
      • 04.0409 Lunar Relay Ground Stations
        Alexander Ford () Presentation: Alexander Ford - -
        Terrestrial military communication systems are qualified to U.S. Military Standards (MIL-STDs) for durability, EMI/EMC, safety, and reliability, but these baselines do not fully address lunar hazards, operational constraints, or networking realities. This paper compares relevant MIL-STDs to lunar relay ground station needs, identifies technical gaps, and frames a cost-forward path. It considers a standard that reuses defense-industrial supply chains and existing test infrastructure to minimize non-recurring engineering (NRE), shorten schedules, and reduce verification cost and variance. We compare these terrestrial baselines against lunar conditions and operational constraints, including lunar dust, cosmic and solar radiation, thermal swings, micrometeoroids, reduced gravity, seismic activity, sparse maintenance opportunities, prolonged power interruptions, and high-latency/intermittent data links. We then examine a path toward hybrid civil-military standards that merge MIL-STD robustness with spaceflight-proven methods, aligned with emerging U.S. cislunar strategies for interoperable, resilient communications and navigation. By sharing parts and protocol stacks across civil, commercial, and defense users, the approach increases vendor competition, enables part reuse, stabilizes acceptance criteria, and lowers life-cycle cost.
    • Jaime Esper (NASA - Goddard Space Flight Center) & Mazen Shihabi (Jet Propulsion Laboratory)
      • 04.0501 Orbital Optimization of a Distributed Heliocentric Relay Network for Mars-Earth Communications
        Jules Pénot (Massachusetts Institute of Technology), Hamsa Balakrishnan () Presentation: Jules Pénot - -
        The Deep Space Network (DSN) has become a critical bottleneck for deep-space communications, limiting both the downlink time and bandwidth available to spacecraft operating beyond cislunar space. Even new technologies such as optical communication systems, which promise significantly higher bandwidths, do not address other challenges such as communication interruptions during solar conjunctions. Such capacity and reliability issues reduce the scientific productivity of autonomous missions through the Solar System, and pose a significant risk to the safety of astronauts during crewed missions beyond Earth orbit. As an alternative to this direct-to-Earth (DTE) communications paradigm, we propose the creation of a hyper-distributed network of radio-frequency (RF) relay satellites in Solar orbit for communications between Earth and Mars. Such a distributed network presents several advantages over DTE RF communications, including reduced free-space path loss, built-in redundancy, lower size, weight and power (SWaP) of future Martian communications systems, and freeing DSN capacity for other spacecraft. This paper presents three main areas of investigation. First, we develop an approach for computing the number of relay satellites deliverable to a particular orbit with a single launch based on parameters such as relay antenna diameter, the launch vehicle’s payload mass and volume constraints, and the satellites’ orbital insertion Delta-V. We then develop an efficient algorithm for evaluating the performance of a given satellite constellation, relying on the network’s minimum spanning tree as a heuristic to find the maximum-bandwidth path between Earth and Mars at any simulation timestep. To evaluate the performance of a network, we propagate the orbits of Mars, Earth, and all the satellites, over a five-year simulation period, and take the average data rate from Mars to Earth over that five-year period as our measure of performance. 
Finally, we implement a modular genetic algorithm to find the optimal Mars-Earth relay network subject to a fixed number of satellite launches. We identify six categories of modular satellite formations (e.g., elliptical rings of satellites and planar Mutually Orbiting Groups (MOGs)) that maintain consistent inter-satellite spacing, and encode them into the design space under a unified genome. We assume the continuous use of one DSN 34 m antenna on Earth, the presence of relay satellites in Martian orbit, and the use of SpaceX’s Starship launch vehicle for deploying the relay satellites. The constellations identified have an average Mars-Earth data rate that increases substantially with the number of launches. Our results show that it is possible to achieve Mars-Earth data rates of over 143 Mbps with 25 Starship launches, 223 Mbps with 50 launches, and 335 Mbps with 75 launches. These data rates represent an over 100-fold increase in throughput compared to the current DTE system between the DSN and the Mars Reconnaissance Orbiter, while using only technologically-mature Ka-band communications systems.
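The maximum-bandwidth path used to score a constellation is a classic widest-path problem; the paper approximates it with a minimum-spanning-tree heuristic, but a direct Dijkstra-style sketch over an invented toy topology (link rates in Mbps, purely illustrative) conveys the idea:

```python
import heapq

def widest_path(graph, src, dst):
    """Maximum-bottleneck-bandwidth path: a Dijkstra variant where a path's
    'width' is the minimum link rate along it, and we maximize that minimum.
    graph: {node: {neighbor: link_rate}}."""
    best = {src: float('inf')}
    heap = [(-float('inf'), src)]
    while heap:
        neg_w, u = heapq.heappop(heap)
        w = -neg_w
        if u == dst:
            return w                      # widest bottleneck reaching dst
        if w < best.get(u, 0):
            continue                      # stale heap entry
        for v, rate in graph[u].items():
            cand = min(w, rate)
            if cand > best.get(v, 0):
                best[v] = cand
                heapq.heappush(heap, (-cand, v))
    return 0.0

# Toy relay graph; nodes and rates are invented for illustration only.
g = {
    "Earth": {"R1": 200, "R2": 120},
    "R1": {"R3": 90, "Mars": 40},
    "R2": {"R3": 150, "Mars": 60},
    "R3": {"Mars": 110},
    "Mars": {},
}
print(widest_path(g, "Earth", "Mars"))  # 110, via Earth-R2-R3-Mars
```

Running this over every simulation timestep and averaging the returned bottleneck rate mirrors the five-year performance metric described above.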
    • Kar Ming Cheung (Jet Propulsion Laboratory) & Alessandra Babuscia (NASA Jet Propulsion Laboratory)
      • 04.0602 Cryogenic Testbed for Passive Optical Data Links on Planetary Surface
        Lin Yi (Jet Propulsion Laboratory, California Institute of Technology) Presentation: Lin Yi - -
        As aerospace missions extend to smaller, power-constrained craft operating in low-temperature environments, especially those on deep-space planetary surfaces collecting large amounts of data, traditional radio communication may consume more power than is affordable. Modulating Retroreflectors (MRRs) offer a passive, high-bandwidth alternative: they implement an optical communication architecture by modulating externally supplied laser light, eliminating the need for onboard laser transmitters. MRRs have previously been demonstrated to support long-distance optical links at MBps data rates with low power consumption. However, these systems have largely been evaluated at room temperature. This work presents an experimental cryogenic optical testbed designed to evaluate the viability of MRR-based systems at low temperature (150 K). The MRR testbed consists of a fiber-coupled 1550 nm laser routed through a collimator, an acousto-optic modulator (AOM), and a planar retroreflecting mirror within a vacuum cryogenic chamber, which retroreflects the modulated beam to a detector outside of the chamber. A thermal regulation system, using heating resistors, was implemented to maintain the temperature of the AOM electronics at 280 K. A custom Python feedback loop maintains temperature stability using resistive heaters and calibrated sensors. Initial alignment and performance verification were conducted on an optical bench at room temperature, and the testbed had a measured field of view of 0.5°. Modulation tests were performed at temperatures ranging from 250 K to 150 K and at modulation frequencies between 1 kHz and 60 kHz. At room temperature, the modulated component of the total returned signal at 1 kHz was measured to be approximately 10%. As the temperature was lowered to 150 K, the modulation depth decreased to about 3%. Additionally, modulation slightly decreased as the frequency increased; however, modulation was still detectable at all of the tested frequencies. 
Despite this reduction, the modulated signal remained detectable, confirming that MRR communication is feasible under cryogenic conditions. This research has resulted in the successful development of a robust cryogenic testbed and measurement framework, as well as initial experimental results for continued research and testing of MRR-based communication systems. The infrastructure lays the groundwork for advancing the maturity of cryogenically operable optical communication systems and supporting future exploration missions requiring reliable, low-power data links in harsh environments.
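For context, the quoted modulation percentages are consistent with a simple AC/DC depth metric; a minimal sketch of that metric using synthetic square-wave returns (an assumed waveform, not the measured data):

```python
def modulation_depth(samples):
    """AC/DC ratio of a returned optical signal: half the peak-to-peak swing
    divided by the mean returned power."""
    mean = sum(samples) / len(samples)
    return (max(samples) - min(samples)) / (2 * mean)

# Synthetic square-wave returns: a 10% swing (room temperature) versus a
# 3% swing (150 K), echoing the order of the values reported above.
room = [1.1, 0.9] * 50
cryo = [1.03, 0.97] * 50
print(round(modulation_depth(room), 3), round(modulation_depth(cryo), 3))
```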
      • 04.0603 Elevation-Aware BLER-Centric Link Adaptation for Dynamic LEO 5G NTN Systems
        Ying Wang (Stevens Institute of Technology), Eric Forbes () Presentation: Ying Wang - -
        Low Earth Orbit (LEO) satellite constellations are essential for extending Fifth Generation (5G) Non-Terrestrial Network (NTN) services, but they introduce significant challenges in maintaining high throughput and stable links due to rapid elevation changes, Doppler shifts, and limited coverage durations. This paper presents an integrated elevation-aware link adaptation framework for downlink communications in 5G NTN systems. It dynamically adjusts the Modulation and Coding Scheme (MCS) using Channel Quality Indicator (CQI) feedback, Block Error Rate (BLER) trends, and elevation-angle tracking. Unlike conventional CQI-only schemes, our approach leverages both instantaneous and historical link conditions to achieve more responsive and robust performance. The system adheres to the Third Generation Partnership Project (3GPP) Release 17 NTN standards and simulates slot-level Signal-to-Noise Ratio (SNR) variations driven by orbital slant-range dynamics. We incorporate realistic link budget modeling based on 3GPP Case 9, accounting for Doppler, path loss, and atmospheric effects. Seamless satellite handover and elevation-guided beam switching are integrated to preserve link continuity as satellites move in and out of view. This is particularly important in urban environments where tall buildings frequently obstruct line of sight, requiring adaptation strategies that respond to elevation and signal quality to maintain throughput. Rule-based link adaptation, though simple, is often insufficient under the rapidly changing link dynamics of LEO satellite systems, including fluctuating SNR, sporadic CQI accuracy, and beam transition boundaries. To address these limitations, we propose exploring machine learning or heuristic-driven strategies that infer optimal MCS behavior from observed feedback rather than relying on static thresholds. 
Reinforcement learning and online decision policies have outperformed fixed rules in terrestrial 5G, and similar benefits are expected in NTN scenarios. Our framework surfaces CQI trends, BLER history, and elevation angle as input features to support future integration of adaptive learning agents. This direction is particularly relevant for ultra-reliable low-latency communication (URLLC) applications such as Intelligence, Surveillance, and Reconnaissance (ISR), airborne terminals, and remote edge operations where service guarantees must be maintained despite orbital variability. We validate the proposed system through both MATLAB-based simulation and software-defined-radio-based over-the-air (OTA) demonstration comprising over 720,000 time slots across a 12-minute satellite pass using 15 kilohertz (kHz) subcarrier spacing. Experimental results show more than 30% improvement in throughput over a CQI-only baseline, faster convergence to optimal modulation settings, and smoother transitions during handover. The adaptive MCS controller supports more aggressive use of high coding rates in favorable SNR intervals while maintaining acceptable error rates. This work contributes to the advancement of space communication systems by addressing the challenges of highly dynamic LEO environments using standards-compliant, service-aware adaptation. The proposed methods are compatible with future satellite constellations and scalable to multi-beam, multi-user spaceborne architectures. By combining physical-layer modeling with responsive control logic, our framework offers a practical and extensible solution for resilient 5G NTN operations. It supports scalable quality-of-service (QoS) delivery across both civilian and mission-critical scenarios in dynamic LEO environments.
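A minimal sketch of an elevation- and BLER-aware MCS adjustment rule in the spirit of the framework described above; every threshold, step size, and the 0..27 MCS index clamp here is an illustrative assumption, not the paper's actual control logic:

```python
def adapt_mcs(cqi_mcs, bler_window, elevation_deg, rising):
    """Adjust the CQI-suggested MCS index using BLER history and the
    elevation trend. bler_window: recent block error rates in [0, 1];
    rising: True while the satellite's elevation is increasing."""
    avg_bler = sum(bler_window) / len(bler_window)
    mcs = cqi_mcs
    if avg_bler > 0.10:
        mcs -= 2          # assumed 10% BLER target exceeded: back off
    elif avg_bler < 0.01 and elevation_deg > 30 and rising:
        mcs += 1          # clean link, improving geometry: probe a higher rate
    return max(0, min(27, mcs))  # clamp to an assumed 0..27 MCS index range

print(adapt_mcs(12, [0.20, 0.15, 0.18], 25, False))  # 10: backs off
print(adapt_mcs(12, [0.00, 0.005, 0.00], 45, True))  # 13: probes upward
```

Replacing the fixed thresholds with a learned policy over the same inputs (CQI trend, BLER history, elevation) is exactly the machine-learning direction the abstract proposes.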
      • 04.0604 Development Status of the Inter-Spacecraft Optical Communicator
        Jose Velazco (Chascii Inc.) Presentation: Jose Velazco - -
        Chascii is currently developing the inter-spacecraft omnidirectional optical communicator (ISOC) to provide fast connectivity and navigation information to small spacecraft forming a swarm or a constellation in LEO and cislunar space. The ISOC operates at 1550 nm and employs a dodecahedron body that holds six optical telescopes and 20 external arrays of detectors for determining the angle of arrival of the incoming beams. Additionally, the ISOC features six fast avalanche photodetector receivers for high-speed data rate connectivity. The ISOC should be suitable for distances ranging from a few kilometers to a few thousand kilometers. It will provide ubiquitous coverage and gigabit connectivity among smallsats forming a constellation around the Moon. It will also offer continuous positional information among these spacecraft, including bearing, elevation, and range. We also expect the ISOC to provide fast, low-latency connectivity to assets on the lunar surface, such as landers, rovers, instruments, and astronauts. Chascii is currently developing a lunar ISOC, including all its transceivers, optics, and processing units. We are developing the ISOC as a key candidate to enable LunaNet. In this paper, we will present the development status of the ISOC as well as link budget calculations for operations around the Moon using pulse-position modulation. We will present experimental results of angle-of-arrival testing using various experimental apparatus. We will also present connectivity results obtained with two ISOCs, including measured bit-error-rate tests under different conditions. We will discuss our ISOC development roadmap, which includes LEO, GEO, and lunar missions, spanning the 2026-2029 timeframe. We believe the ISOC, once fully developed, will provide commercial, high-data-rate connectivity to future scientific, military, and commercial missions in LEO, cislunar space, and beyond.
      • 04.0605 PULSE-A: Polarization-Modulated Optical Communications at the CubeSat Form Factor
        Logan Hanssler (University of Chicago), Juan Prieto Asbun (The University of Chicago), Seth Knights (University of Chicago), Sofia Mansilla (University of Chicago), Everette Shelton (University of Chicago), Catherine Todd (), Elizabeth Rosario (The University of Chicago), Graydon Schulze-Kalt (University of Chicago), Leah Vashevko (), Daniel Lee (), Robert Pitu (The University of Chicago), Rodrigo Spinola e Castro (The University of Chicago), Aidan Etterer (), Akash Piya (), Vidya Suri (), Lauren Ayala (The University of Chicago), Rohan Gupta (University of Michigan, Ann Arbor), Mason McCormack (The University of Chicago), Vincent Redwine (), Danielle Riekse (University of Chicago), Tian Zhong (University of Chicago) Presentation: Logan Hanssler - -
        Recent advancements in sensing and observation technology have created an unprecedented need for space-to-ground bandwidth. While traditional radio frequency (RF) technology has innovated to meet this increasing requirement, most available options present significant size, weight, power, and cost (SWaP+C) restrictions, often forming a bottleneck for mission and science operations. Optical communication provides a promising alternative, capable of meeting many of the same SWaP+C requirements while providing orders of magnitude improvement in available bandwidth. The Polarization-modUlated Laser Satellite Experiment (PULSE-A) is a CubeSat mission currently under development at the University of Chicago aiming to demonstrate space-to-ground optical downlink at a data rate of up to 10 Mbps. Along with meeting the intense SWaP+C requirements of a university CubeSat mission, PULSE-A’s optical payload is designed to use circular polarization shift keying (CPolSK), making it the first demonstration of polarization modulation for space-to-ground optical communication. In addition to probing the viability and possible advantages of CPolSK for optical downlink, PULSE-A works to make optical communication technology more accessible. The project is entirely open-source and relies heavily on hardware designed and built by students to support both its technical and educational mission. The PULSE-A team is developing the mission’s 1U optical payload, 3U CubeSat bus, portable optical ground station (OGS), and RF ground station (RFGS) almost entirely in-house, making use of commercial off-the-shelf components alongside student-designed hardware to reduce cost and development overhead. The optical payload functions as a CPolSK-based, transmit-only optical communications terminal. Two lasers emitting orthogonal linearly polarized light at 1550 nm are driven in alternation, producing a linear polarization-modulated beam when combined. 
The beam is then amplified and converted to CPolSK states in free-space, where the data beam is then transmitted out of the spacecraft and toward the OGS. To meet the stringent pointing requirements of optical communications, PULSE-A makes additional use of a fine-steering mechanism informed by a 1064 nm ground-to-space beacon laser to account for body pointing error from a COTS Attitude Determination and Control System. The transmission beam is collected via an 11” amateur telescope at the OGS and then separated by polarization state. Detection is accomplished with high gain avalanche photodiodes, and a field programmable gate array (FPGA) demodulates and decodes the received data. In this paper, we will provide an overview of the design and expected operations for PULSE-A’s payload, bus, OGS, and RFGS. We will highlight design considerations and analyses made to support polarization modulation of the optical downlink as well as pointing, acquisition, and tracking.
    • Marc Sanchez Net (Jet Propulsion Laboratory)
      • 04.0702 A Comparative Analysis of Measured and Simulated Multipath Effects on Satellite Tracking Error
        Rodrigo Negri de Azeredo (University of Bordeaux), Stéphane Victor (Université de Bordeaux) Presentation: Rodrigo Negri de Azeredo - -
        Satellite tracking systems widely use the monopulse technique due to its greater accuracy and relatively straightforward implementation compared to alternative position measurement strategies. However, certain operational conditions can subject the system to multipath effects, which impact tracking capability and wide-bandwidth satellite communication. Ground stations developed by Safran Data Systems are designed to operate across diverse environments, including land and maritime settings, and employ antennas with different reflector sizes. Consequently, these variations give rise to different levels of multipath interference, which can, in some cases, influence the system's functioning. This paper presents an analytical approach for developing a model of multipath effects in satellite tracking systems over different types of terrestrial surfaces. The model combines the specular (coherent) and diffuse (non-coherent) components of multipath with the direct-path received signal. Experimental data were collected by tracking low Earth orbit satellites at different frequencies. The geometry of the problem takes into account the curvature of the Earth and the orbital parameters of the satellite to calculate the trajectory and potential signal reflection points, particularly when tracking is performed at low elevation angles (near the horizon). In this case, regions of interest for reflection points can extend up to tens of kilometers from the receiving antenna, introducing uncertainty into the practical test conditions, especially regarding surface roughness. The model estimates the impact of multipath on tracking position errors, which are essential for proper system operation. The measured test data and the introduced multipath model exhibit comparable signatures for position errors, with practical and theoretical values agreeing in order of magnitude. 
The numerical results obtained using the modeling approach can identify and reproduce the effects of multipath in different scenarios. Accordingly, simulations may be performed under a variety of performance-degrading scenarios to support the development of compensation and mitigation strategies addressing the adverse effects of multipath interference on satellite communication signals.
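As an illustration of the specular-plus-direct composition described above, the following sketch (not the authors' model; all parameter values are assumed) combines a direct ray with a roughness-attenuated specular ray on a simplified flat-earth geometry:

```python
import numpy as np

def multipath_field(h_ant, h_sat, d, wavelength, gamma=-0.9, sigma_h=0.05):
    """Direct + specular two-ray field magnitude, flat-earth simplification.

    h_ant, h_sat : antenna / (projected) source heights [m]
    d            : ground distance [m]
    gamma        : nominal surface reflection coefficient (assumed)
    sigma_h      : surface roughness standard deviation [m] (assumed)
    """
    r_direct = np.hypot(d, h_sat - h_ant)
    r_reflect = np.hypot(d, h_sat + h_ant)      # image-source path length
    grazing = np.arctan2(h_sat + h_ant, d)      # grazing angle at the surface
    # Rayleigh roughness factor attenuates the coherent (specular) component
    g = 4 * np.pi * sigma_h * np.sin(grazing) / wavelength
    rho_s = np.exp(-0.5 * g**2)
    dphi = 2 * np.pi * (r_reflect - r_direct) / wavelength
    field = 1.0 + rho_s * gamma * np.exp(-1j * dphi) * r_direct / r_reflect
    return np.abs(field)
```

The paper's model additionally accounts for Earth curvature, orbital geometry, and the diffuse (non-coherent) component, which are omitted here; a rougher surface drives the coherent specular term toward zero, leaving only the direct field.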
      • 04.0703 Modeling and Analysis of Air-to-Ground Cellular KPIs in a 5G Testbed Using Android Smartphones
        Simran Singh (North Carolina State University), Anil Gurses (NC State University), Ozgur Ozdemir (NCSU), Ram Asokan (Wireless Research Center of North Carolina), Mihail Sichitiu (NC State University), Ismail Guvenc (), Magreth Mushi (North Carolina State University), Rudra Dutta (North Carolina State University) Presentation: Simran Singh - -
        The integration of cellular communication with Unmanned Aerial Vehicles (UAVs) extends the range of command and control (C2) and payload communications of autonomous UAV applications. Accurate modeling of this air-to-ground wireless environment aids UAV mission planning. Models built on, and insights obtained from, real-life experiments closely capture the variations in air-to-ground link quality with UAV position, offering more fidelity for simulations and system design than generic theoretical models designed for ground scenarios or ray-tracing simulations. In this work, we conduct multiple aerial flights at the AERPAW Lake Wheeler testbed site to study the variation in key performance indicators (KPIs) of a private 4G/5G cellular base station (BS) with the UAV's altitude, distance from the BS, and elevation and azimuth relative to the BS. Variations in both 4G and 5G physical layer KPIs (signal strength, received signal quality, channel rank, modulation and coding scheme, and channel quality indicator) and application layer throughput are logged and analyzed using two Android smartphones: a Keysight Nemo device with enhanced KPI access, obtained through a rooted operating system (OS), and a standard smartphone running a custom application that utilizes open-source Android APIs. The BS, provided by Ericsson, consists of two sectors, such that handover events occur during UAV flights. The observed signal strength measurements are compared to theoretical predictions from free space path loss models that incorporate the cell tower's antenna radiation patterns. Furthermore, mathematical model parameters for polynomial curve approximations are derived to fit the observed data. Lightweight machine learning approaches, namely random forests and gradient boosting regressors, are also used to model KPI behavior as a function of UAV position relative to the BS. 
Model performance is evaluated using metrics such as goodness-of-fit, mean absolute error (MAE), and root mean square error (RMSE). The insights and models generated from real-life experiments in this study can serve as accurate and valuable tools in the design, simulation, and deployment of cellular communication-based UAV systems.
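Polynomial curve fits of the kind mentioned above can be sketched as follows; the data here are synthetic stand-ins, since the actual AERPAW KPI traces are not reproduced in this abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for logged RSRP samples vs. 3D distance to the BS
d = rng.uniform(50.0, 2000.0, 500)                               # meters
rsrp = -40.0 - 22.0 * np.log10(d) + rng.normal(0, 2.0, d.size)   # dBm

# Low-order polynomial in log10(distance), in the spirit of the curve fits
x = np.log10(d)
coeffs = np.polyfit(x, rsrp, deg=2)
rsrp_hat = np.polyval(coeffs, x)

# Evaluation metrics named in the abstract
mae = np.mean(np.abs(rsrp - rsrp_hat))
rmse = np.sqrt(np.mean((rsrp - rsrp_hat) ** 2))
```

The random forest and gradient boosting variants follow the same fit/evaluate pattern with the UAV's 3D position as the feature vector.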
    • Claudio Sacchi (University of Trento) & David Taggart () & Len Yip (Aerospace Corporation)
      • 04.0801 Performance Analysis of Signals Integration Approaches in Digital Communication Systems
        Ashraf Abdel Aziz (Al-Baha University, Saudi Arabia) Presentation: Ashraf Abdel Aziz - -
        Signals integration is used in digital communication systems with data fusion to enhance performance and reduce multipath effects. The two main approaches to signals integration in digital communication systems with data fusion are full and semi-full signals integration. In full signals integration systems, multiple receivers produce a very large number of bits, and the overall system closely resembles analog multiple-receiver implementations. This approach achieves optimum performance at the expense of high cost and complexity. In semi-full signals integration systems, only a few bits are used after preliminary processing of the signals at each individual receiver. This method reduces system complexity and cost at the expense of some overall performance degradation. This paper provides a performance analysis of the full and semi-full signals integration approaches in digital communication systems for the case of non-coherent frequency shift keying (NCFSK) receivers with Gaussian noise and a Rician fading stochastic model. The performance loss due to semi-full signals integration is analyzed for different numbers of information bits.
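The full vs. semi-full tradeoff can be illustrated with a toy Monte-Carlo model. This is not the paper's analysis (which covers Rician fading): it is a minimal nonfading sketch in which "semi-full" is represented by 1-bit per-receiver quantization of the square-law statistic:

```python
import numpy as np

def fused_ber(n_rx, n_sym, snr_db, one_bit=False, seed=1):
    """Monte-Carlo bit error rate for noncoherent binary FSK with
    multi-receiver fusion (illustrative toy model, no fading).

    one_bit=False : 'full' integration -- raw square-law statistics summed.
    one_bit=True  : 'semi-full' -- each receiver forwards only a 1-bit
                    hard decision, which the fusion center sums.
    """
    rng = np.random.default_rng(seed)
    a = 10 ** (snr_db / 20)                      # per-branch amplitude, unit noise
    bits = rng.integers(0, 2, n_sym)
    noise = lambda: (rng.normal(size=(n_rx, n_sym))
                     + 1j * rng.normal(size=(n_rx, n_sym))) / np.sqrt(2)
    e0 = np.abs(a * (bits == 0) + noise()) ** 2  # tone-0 square-law outputs
    e1 = np.abs(a * (bits == 1) + noise()) ** 2  # tone-1 square-law outputs
    stat = e1 - e0
    if one_bit:
        stat = np.sign(stat)                     # coarse per-receiver report
    dec = (stat.sum(axis=0) > 0).astype(int)
    return float(np.mean(dec != bits))
```

With the same channel realizations, full integration should never do worse than the 1-bit variant, which is the qualitative point of the abstract.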
      • 04.0803 Extending the SCaN Program Operation Cloud (SPOC) to the Solar System Internet
        Alan Hylton (NASA) Presentation: Alan Hylton - -
        The SCaN Program Operation Cloud (SPOC) is intended to provide a cloud infrastructure to NASA missions to unify scheduling, data transport, and management in a manner that integrates government and commercial service providers. Recent advances in Delay Tolerant Networking (DTN) have proven that enterprise container deployment, management, and overall orchestration can operate over space links without software modification. In particular, Kubernetes was successfully demonstrated over NASA's Laser Communications Relay Demonstration (LCRD) -- which has a 4 second round trip time -- using High-rate DTN (HDTN). In this paper we propose using Kubernetes and HDTN to extend the reach of the SPOC to include spacecraft directly, transcending the traditional model where space assets are treated as endpoints to one where they become integral nodes within a unified network architecture. Traditional approaches to spacecraft management will struggle to scale with the projected increase in space assets. On Earth, Kubernetes' efficacy in this regard is well-known, and the SPOC's usage of it suggests organizational buy-in within NASA for container-based solutions. Combined with the recent LCRD tests providing technical and operational validation for extending these capabilities to space assets, we can now consider how to fundamentally re-architect space communications. This work demonstrates that DTN extends beyond basic file transfer and streaming to enable higher-layer enterprise services. We emphasize that selective deployment of DTN across heterogeneous network assets should be considered a best practice. We recall the benefits of containerization, including modular and flexible deployment of capabilities and unified interfaces for software management and configuration. Resilience and autonomy features of Kubernetes, such as self-healing, can reduce operational overhead and enable distant missions. 
The benefits of standardization are also discussed, especially from the user's perspective. Security in DTN and Kubernetes is also considered. A curated collection of Kubernetes pods designed for present and near-future satellite computing capabilities is proposed to bring these benefits to new missions while lowering the barrier to entry. Finally, we consider the computational overhead of running Kubernetes in space environments and propose a phased implementation approach to mature this architectural vision. We conclude with a discussion on the next steps to realizing this fundamental shift toward a true Solar System Internet where space assets operate as first-class network constituents rather than remote endpoints, with a particular application to LunaNet.
      • 04.0804 Towards Algebro-geometric Foundations for Temporospatial Network Modeling
        Alan Hylton (NASA) Presentation: Alan Hylton - -
        Space communication networks present unique modeling challenges due to their inherent temporospatial nature, characterized by variable delays, disruptions, and mobility. These challenges are fundamentally geometric in nature, requiring data structures that capture sufficient information to enable decision-making—such as forwarding choices—while discarding unnecessary complexity. The obvious choice is graphs, which are highly effective for modeling network connections precisely because they distill essential geometric relationships, yet they fall short in capturing network dynamics over time. Even sequences of graphs have been shown incapable of adequately representing the evolution of temporospatial networks. However, graphs do represent a good starting point as they are intuitive and well-established, and by transforming them into algebraic objects, we create a bridge between the algebraic and geometric worlds in a way that can be extended to include time. This paper proposes a novel approach to modeling space networks through the lens of algebraic geometry, providing a mathematically rigorous foundation that balances expressive power with computational tractability. We begin by establishing the mapping between communication networks and algebraic varieties, and in particular, we recall the construction of the so-called graph varieties. This construction illuminates natural connections between graphs and algebraic constructs. Prior work by the authors has uncovered the fundamental research focus: the resulting Zariski topology is too coarse to support routing algorithms. Moreover, there is no interpretation of the regular functions on graph varieties within the networking context. Despite the initial limitations, there are several reasons to pursue algebraic geometry. One is the aforementioned computational aspect. 
Another powerful reason is that these algebro-geometric objects can be combined, suggesting rigorous ways of modeling multiple network domains and routing across their boundary. We approach these problems simultaneously, as they inform one another, by extending beyond varieties to schemes. We explore two coarse refinements of the Zariski topology, seeking the coarsest refinement that supports path-finding sheaves. This formulation allows us to apply powerful results from algebraic geometry to networking problems by providing computationally meaningful representations. We work with and provide code written in Macaulay2 that transforms network graph descriptions into corresponding algebraic objects, enabling practical experimentation with these theoretical constructs and providing an accessible entry point for theoretical exploration. Our implementation works with static network snapshots, and we offer two considerations for adding the time dimension. These include stacks and moduli spaces, and are augmented with an appeal to analogs of curvature in the algebro-geometric setting, thereby laying groundwork for fully temporospatial modeling. The resulting approach offers a promising mathematical foundation for modeling space networks (and indeed other temporal systems) at appropriate levels of abstraction, capturing essential network properties while avoiding the excessive complexity and rigidity of systems of differential equations. The resulting framework supports algorithm development directly, and because the resulting algebraic objects can be combined, it also supports advances in network architecture design and analysis.
      • 04.0805 Closed-Loop Modeling of Phase-Shifted Full-Bridge Converter with Current Doubler Output
        Kasemsan Siri (), Markell Hardee (The Aerospace Corporation), Natan Ranjbar (University of Colorado, Colorado Springs) Presentation: Kasemsan Siri - -
        To support low-voltage, high-power applications in RF communication, a large-signal averaged model is developed for a zero-voltage-switched (ZVS), phase-shifted full-bridge converter with current doubler output (PS-FB-CDO). Analytically included in the model derivations are the large-signal effects due to the leakage inductance of the converter’s isolation transformer for autonomous operation in both continuous and discontinuous conduction modes (CCM & DCM). Its fundamental small-signal frequency-response characteristics are also extracted from the developed model, which can be easily implemented in a circuit analysis program (PSPICE/LTspice) for both CCM and DCM. Furthermore, the closed-loop model of the PS-FB-CDO converter is derived to include the large-signal average model of both the outer voltage-regulation control loop and the inner current control loop using current-mode control (CMC), which includes active slope (RAMP) compensation, guaranteeing sub-harmonic-free operation across CCM and DCM boundaries. The average model of the closed-loop PS-FB-CDO converter is further validated against an as-is pulse-by-pulse switching model of the same dual-closed-loop control converter. Additional lessons learned from this analytical modeling and simulation are (1) the discovery of non-uniform current sharing within the two output inductors in the conventional CDO stage under transient closed-loop operation, and (2) the discovery of a practical solution employing two synchronously controlled rectifier switches in addition to the two conventional output rectifiers. Included in the as-is switching model of the closed-loop PS-FB-CDO converter, the added synchronous rectifiers (SR) restore uniform current sharing within the two CDO inductors, ensuring symmetric utilization of the B-H magnetic characteristics of the transformer’s core material. As a result, potential saturation of magnetic flux density in the transformer is eliminated.
      • 04.0806 Real-World LoRaWAN Performance & Propagation Modeling Using UAV, Helikite, and Vehicle Measurements
        Sergio Vargas Villar (North Carolina State University), Simran Singh (North Carolina State University), Ozgur Ozdemir (NCSU), Mihail Sichitiu (NC State University), Ismail Guvenc () Presentation: Sergio Vargas Villar - -
        This paper presents a field-based evaluation of LoRaWAN signal propagation conducted at two locations within the Aerial Experimentation and Research Platform for Advanced Wireless (AERPAW) testbed: Lake Wheeler Field and NC State University’s Centennial Campus. Three distinct transmission platforms were deployed: a ground vehicle, a multirotor drone at 50 meters, and a helikite at a steady altitude of 150 meters. These platforms enabled a comparative study on how altitude, mobility, and terrain influence wireless signal reception across a fixed gateway network. We analyze received signal strength (RSSI) and signal-to-noise ratio (SNR) as functions of distance and spreading factor (SF). Three complementary metrics are visualized: SNR versus distance with demodulation thresholds, probability of successful reception, and SNR boxplots grouped by distance bins. These plots reveal link degradation patterns and demonstrate the role of adaptive SF selection in maintaining communication reliability. To characterize propagation behavior, we apply a log-distance path loss model to empirical data from the ground vehicle experiment, which encompassed both rural and urban non-line-of-sight (NLOS) conditions. Model parameters are optimized through error minimization techniques. Our results show that the helikite platform, due to its stable high-altitude position, provided the most reliable and consistent link performance. Conversely, the drone and vehicle exhibited higher variability due to movement, obstructions, and terrain-induced multipath. These findings demonstrate the influence of platform dynamics and altitude on LoRaWAN reception performance, providing support for future aerial network planning efforts.
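The log-distance path loss fit described above reduces to linear least squares once distance is taken in log scale. A minimal sketch, using synthetic data in place of the AERPAW measurements (the parameter values are assumed):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for measured path loss vs. link distance
d0 = 1.0                                     # reference distance [m]
d = rng.uniform(100.0, 5000.0, 400)
n_true, pl0_true, sigma = 2.7, 40.0, 3.0     # assumed rural-NLOS-like values
pl = pl0_true + 10 * n_true * np.log10(d / d0) + rng.normal(0, sigma, d.size)

# Log-distance model PL(d) = PL0 + 10*n*log10(d/d0): linear least squares
A = np.column_stack([np.ones_like(d), 10 * np.log10(d / d0)])
(pl0_hat, n_hat), *_ = np.linalg.lstsq(A, pl, rcond=None)
```

The residual standard deviation of such a fit estimates the shadowing term, which is where the rural vs. urban NLOS conditions in the study would show up.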
      • 04.0807 Geometric Interpretation of Quantum-Optimum Detection Operators for Weak Optical Signals
        Victor Vilnrotter () Presentation: Victor Vilnrotter - -
        The mathematical structure of quantum-optimum detection operators for optical signals is derived and approximated in a three-dimensional number-state basis for the case of weak optical signals. It is shown that mathematical rotation of the received signal states in Hilbert space achieves quantum-optimum detection performance for optical signals such as BPSK (binary phase-shift keying), OOK (on-off keying), and PPM (pulse-position modulation) formats originating from interplanetary or even greater interstellar distances. The approximate three-dimensional number-state model is interpreted to help identify the physical operations required to implement the quantum-optimum receiver with well-known field-preparation and signal-processing techniques for various modulation formats, and the increase in complexity required to achieve the theoretical limits is discussed. Graphs demonstrating the practical performance of receivers approximated by three-dimensional detection operators are evaluated and compared with the theoretical limit on ideal receiver performance dictated by the laws of quantum mechanics.
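For context, the quantum-optimum (Helstrom) error probability for binary coherent states has a well-known closed form, which can be compared against the classical homodyne-detection limit for the same BPSK signal set:

```python
import math

def helstrom_bpsk(n_bar):
    """Quantum-optimum (Helstrom) error probability for equiprobable BPSK
    coherent states |alpha>, |-alpha> with mean photon number n_bar = |alpha|^2."""
    overlap_sq = math.exp(-4.0 * n_bar)          # |<alpha|-alpha>|^2
    return 0.5 * (1.0 - math.sqrt(1.0 - overlap_sq))

def homodyne_bpsk(n_bar):
    """Classical homodyne-detection error probability for the same signal set."""
    return 0.5 * math.erfc(math.sqrt(2.0 * n_bar))
```

The gap between these two curves in the weak-signal regime is what the approximated three-dimensional detection operators aim to close in practice.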
      • 04.0808 Kolmogorov–Arnold Networks for UAV Air-to-Ground Path Loss Modelling
        Kürşat Tekbıyık (İstanbul Technical University), Gunes Karabulut Kurt (Polytechnique Montréal), Antoine Lesage-Landry (Polytechnique Montréal) Presentation: Kürşat Tekbıyık - -
        Accurate modeling of air-to-ground (A2G) wireless channels for unmanned aerial vehicles (UAVs) is essential for designing reliable and robust aerial communication systems. Traditional path loss models are widely used due to their simplicity. Still, they lack the flexibility to incorporate UAV-specific parameters like orientation (yaw, pitch, roll) and real-time mobility dynamics. These limitations hinder their applicability in practical UAV scenarios, where channel conditions can change rapidly. Recent efforts have explored deep learning models to better capture the nonlinear relationships between UAV parameters and wireless channel behavior [1]. While these models, including those based on neural networks, can offer strong predictive performance, they often act as black boxes with little interpretability, making them less suitable for real-time system design or engineering analysis. To address this, the study introduces the use of Kolmogorov–Arnold Networks (KANs) for UAV A2G wireless channel modeling. KANs are a class of neural architectures inspired by the Kolmogorov–Arnold representation theorem, which shows that any continuous multivariate function can be represented through sums and compositions of univariate functions. Unlike traditional deep learning models, KANs offer a modular and interpretable structure, enabling a more transparent understanding of how individual UAV parameters affect signal propagation. The research utilizes a publicly available dataset from a rural UAV flight experiment, as described in [2]. The proposed KAN-based model is trained to map multivariate inputs such as UAV position, altitude, orientation, and frequency to received signal strength (RSS) or path loss values. The model pipeline includes data normalization, training with a loss function that balances accuracy and smoothness, and evaluation against baseline models like polynomial regression, multilayer perceptrons (MLPs), and 3GPP models. 
In summary, the study presents a novel application of KANs in wireless communications, specifically for modeling UAV A2G channels. The key contributions include: C1) The first application of KANs for UAV wireless channel modeling; C2) An interpretable framework that decomposes path loss into UAV-specific univariate functions; C3) Empirical validation using real-world data, demonstrating improved performance and explainability compared to existing models. This work highlights a promising direction for explainable and data-driven modeling of dynamic aerial communication channels. REFERENCES [1] S. Hussain, S. F. N. Bacha, A. A. Cheema, B. Canberk, and T. Q. Duong, “Geometrical features-based mmWave UAV path loss prediction using machine learning for 5G and beyond,” IEEE Open Journal of the Communications Society, vol. 5, pp. 5667–5679, 2024. [2] A. Gurses and M. L. Sichitiu, “Air-to-ground channel modeling for UAVs in rural areas,” in IEEE 100th Vehicular Technology Conference (VTC2024-Fall), 2024, pp. 1–6.
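The additive univariate decomposition at the heart of the KAN idea can be illustrated with a toy surrogate; this is a plain least-squares additive model on synthetic data, not the authors' spline-based KAN, and the feature set and coefficients are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic UAV features: link distance, altitude, pitch (toy data)
d = rng.uniform(100, 3000, 600)
h = rng.uniform(20, 120, 600)
pitch = rng.uniform(-0.3, 0.3, 600)
pl = 40 + 21 * np.log10(d) + 0.02 * (100 - h) + 5 * pitch**2 \
     + rng.normal(0, 1.0, 600)

def basis(x, deg=3):
    """Standardized polynomial basis standing in for a learned univariate fn."""
    xs = (x - x.mean()) / x.std()
    return np.column_stack([xs**k for k in range(1, deg + 1)])

# Additive model: path loss as a sum of univariate functions of each input,
# in the spirit of the Kolmogorov-Arnold decomposition
A = np.column_stack([np.ones_like(d), basis(np.log10(d)), basis(h), basis(pitch)])
w, *_ = np.linalg.lstsq(A, pl, rcond=None)
rmse = np.sqrt(np.mean((A @ w - pl) ** 2))
```

Because each input gets its own univariate curve, the fitted per-feature functions can be plotted and inspected directly, which is the interpretability argument made in the abstract.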
      • 04.0810 CE-Multicarrier Modulations vs. Multicarrier Modulations in Non-Terrestrial Network Scenarios
        Claudio Sacchi (University of Trento) Presentation: Claudio Sacchi - -
        Some recent work has highlighted the potential advantages of constant-envelope (CE) multicarrier modulations (CE-OFDM and CE-SC-OFDM) in various wireless scenarios. The strengths of CE-multicarrier modulations have been recognized in terms of immunity to non-linear distortion and improved multipath resilience, thanks to the augmented diversity against frequency selectivity. As far as the tradeoffs are concerned, one should point out the increased bandwidth occupation of the RF signal, the increased computational complexity due to oversampling, the nonlinear effects introduced by arctangent demodulation of the received phase-modulated multicarrier signal and, finally, the fact that orthogonal multi-carrier multiple access (OFDMA, SC-FDMA) can be exploited only in the downlink. During the activities of the Integrated Terrestrial-Non-Terrestrial Network (ITA-NTN) project, funded by the EU in the framework of the NextGenerationEU work programme, a wide range of waveforms has been considered for NTN scenarios, including CE multicarrier ones. The aim of our paper is to analyze this typology of waveforms in detail and to contextualize it in some NTN application scenarios considered by the ITA-NTN project. We then analyze the performance of these waveforms in various NTN channels (aerial, satellite) at various operational frequencies (S-band, Ka-band, EHF). The comparison is made with conventional multicarrier techniques (OFDM, SC-OFDM) so as to conclude the paper with some guidelines on the employment of CE multicarrier waveforms in future space communication applications.
      • 04.0811 AI for Dynamic Environments and Exploration Using Deep Reinforcement Learning and FPGAs
        John Porcello () Presentation: John Porcello - -
        Deep Reinforcement Learning (DRL) has achieved remarkable performance on many tasks. However, a key challenge is designing and implementing DRL for dynamic environments. Dynamic environments exhibit state and/or reward changes that require the agent to adjust its actions to maximize reward. The majority of real-world environments are dynamic and may exhibit nonstationarity or unpredictable, random-like behavior, or may be only partially observable to the agent. This paper considers techniques for determining environment changepoints through relevant features, statistics, and similar means that allow the agent to detect and adapt to the change in environment. Furthermore, in the context of the exploration-versus-exploitation tradeoff in DRL, this paper examines the exploration task of an agent in such a dynamic environment and techniques to improve its effectiveness. FPGA-based DRL systems represent a high-performance, practical, field-deployable AI solution offering several key advantages, such as relatively low SWaP-C, scalability, full reconfigurability, high throughput, and low latency. Design data for such high-performance DRL systems to support AI applications using FPGAs is provided herein. The implementation discussion and details in this paper focus on high-performance, FPGA-based DRL systems for complex tasks.
      • 04.0812 Linearized UC1875 PWM Controller in Phase-Shifted Full-Bridge Current-Doubler-Output Converters
        Natan Ranjbar (University of Colorado, Colorado Springs), Kasemsan Siri () Presentation: Natan Ranjbar - -
        Stability analysis of space-class phase-shifted full-bridge (PSFB) converters is hindered by the absence of a linear dynamic macro-model for the UC1875 pulse-width-modulation (PWM) controller, as the UC1875-SP variant is the only QML-V radiation-hardened phase-shifted PWM controller available for use in space-qualified PSFB power topologies. A frequency-domain macro-model suitable for loop-stability analysis has, however, been absent. This work develops a linearized model of the UC1875 PWM controller that can be embedded directly in SPICE as a subcircuit. Implemented with the phase-shift macro-model is a large-signal averaged model of the PSFB power stage utilizing a current-doubler output (CDO). Performance of the developed averaged macro-model was validated against a 1.2-kW rated closed-loop pulse-by-pulse switching model operating in Continuous Conduction Mode (CCM), enabling Middlebrook loop-break AC injection into the outer voltage-control feedback loop for direct Bode-plot extraction and loop-stability analysis. The result is a reusable schematic-form model that closes a critical modeling gap for electrical design engineers designing radiation-hardened PSFB systems driven by UC1875-family controllers.
      • 04.0813 A Robust Feedforward Symbol Timing Recovery Algorithm for Burst-Mode Satellite Communications
        Len Yip (Aerospace Corporation) Presentation: Len Yip - -
        In this paper, a robust feedforward symbol timing recovery algorithm is presented for a burst-mode digital receiver. Differences between the clocks used in the satellite and terminals, as well as Doppler drift, can introduce clock drift between the transmitter and receiver in a satellite communication link. Our proposed algorithm estimates both the symbol rate and the timing offset, followed by an interpolator to reconstruct the data symbols. A simulation example is given to compare the performance of the proposed method with the conventional feedback loop method.
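The paper's algorithm is not spelled out in the abstract, but the classic feedforward square-law timing estimator of Oerder and Meyr gives a sense of the approach; the half-sine test waveform below is an illustrative assumption, not the paper's signal model:

```python
import numpy as np

def oerder_meyr_timing(x, sps=4):
    """Classic feedforward square-law timing estimator (Oerder & Meyr):
    returns the timing phase as a fraction of a symbol in [-0.5, 0.5)."""
    n = np.arange(len(x))
    c = np.sum(np.abs(x) ** 2 * np.exp(-2j * np.pi * n / sps))
    return float(-np.angle(c) / (2.0 * np.pi))

def half_sine_signal(bits, sps, delay):
    """Toy shaped waveform (half-sine pulses) with an integer-sample delay;
    |x|^2 then carries a clock tone whose phase encodes the delay."""
    t = np.arange(len(bits) * sps)
    env = np.abs(np.sin(np.pi * (t - delay) / sps))
    return np.repeat(2 * bits - 1, sps) * env
```

Being feedforward, the estimate comes from a single block of samples with no acquisition transient, which is what makes this family of estimators attractive for burst-mode reception.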
      • 04.0815 Trajectory-Aware Handover Strategies for Drone Deliveries and Air Taxis
        Siri Geddam (National Institute of Technology, Warangal), Sai Sruthi Ilapuram (University of North Texas), Kamesh Namuduri (University of North Texas), K L V Sai Prakash Sakuru (National Institute of Technology Warangal) Presentation: Siri Geddam - -
        Reliable wireless communications enable unmanned aerial systems (UAS) and urban air mobility (UAM) platforms, such as electric vertical takeoff and landing (eVTOL) vehicles, to operate safely. Continuous command-and-control (C2) and telemetry connectivity is required to satisfy necessary safety standards, but maintaining such links across varying altitudes and heterogeneous infrastructures presents significant challenges. Existing cellular infrastructures are optimized for ground users, resulting in fragmented coverage, unnecessary handovers, and susceptibility to interference in aerial environments. This research focuses on horizontal handover (HHO) and vertical handover (VHO) mechanisms in two scenarios: (1) drone delivery services that fly at 400 ft and connect via terrestrial base stations (TBS), and (2) passenger-carrying air taxis flying at 4000 ft that use a hybrid architecture involving TBS during takeoff and landing, and satellite communications while cruising. Low-altitude delivery drones primarily rely on TBS, but dense deployments, fragmented coverage, and urban obstacles lead to frequent HHOs. These frequent transitions increase latency, cause packet losses, and heighten the risk of temporary disconnections. For UAM platforms, the problem extends beyond frequent terrestrial handovers, as air taxis must also transition between TBS and satellite links when they climb or descend, while handling multiple HHOs across satellite spot beams during cruise. These procedures must strictly satisfy latency requirements to ensure uninterrupted C2 links. A mechanism for each scenario is proposed to address these challenges. For drones, the integration of trajectory-aware predictive handovers and proactive network-assisted association reduces the number of handover triggers. 
For air taxis, VHOs are supported through beam-aware buffering and make-before-break techniques, while dual-connectivity provides redundant access to TBS and satellite networks to enhance reliability. These strategies leverage cross-layer design principles, incorporating flight trajectory context into mobility management to optimize performance. This study highlights the importance of trajectory-awareness in next-generation air-mobility management frameworks. By integrating trajectory knowledge and cross-layer awareness into handover processes, reliable connectivity can be ensured for both drones and air taxis, paving the way for safe and scalable UAS and UAM operations within emerging 5G and 6G ecosystems.
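A trajectory-aware association rule of the kind proposed for drones can be sketched as follows (toy free-space signal proxy; `lookahead` and `margin_db` are assumed parameters, not values from the paper):

```python
import numpy as np

def plan_handovers(traj, bs_xy, lookahead=5, margin_db=3.0):
    """Toy trajectory-aware association: switch cells only when a candidate
    is stronger than the serving cell by a margin, averaged over the next
    `lookahead` trajectory points, suppressing ping-pong handovers."""
    def rss(p):
        # Free-space-like strength proxy in dB (distance-only, no fading)
        return -20.0 * np.log10(np.linalg.norm(bs_xy - p, axis=1) + 1.0)
    serving = int(np.argmax(rss(traj[0])))
    plan, handovers = [serving], 0
    for k in range(1, len(traj)):
        window = traj[k:k + lookahead]           # known future waypoints
        gains = np.mean([rss(p) for p in window], axis=0)
        best = int(np.argmax(gains))
        if best != serving and gains[best] > gains[serving] + margin_db:
            serving, handovers = best, handovers + 1
        plan.append(serving)
    return plan, handovers
```

Averaging over the known future trajectory is the "trajectory-aware" ingredient: a purely instantaneous strongest-cell rule would toggle repeatedly near cell boundaries.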
    • Genshe Chen (Intelligent Fusion Technology, Inc.) & Eugene Grayver (Aerospace Corporation)
      • 04.1001 Scalable Unsupervised RF Modulation Classification via Statistics-Guided Design-Space Exploration
        Joseph Krozak (.Krozak Information Technologies) Presentation: Joseph Krozak - -
        We address unsupervised radio-frequency (RF) modulation classification in contested, congested, and dynamic spectral environments central to signals intelligence (SIGINT), electronic warfare (EW), and measurement and signature intelligence (MASINT) missions. We present a statistics-guided design-space exploration (DSE) framework that converts effect-size–based discriminability measurements into search constraints. For targeted grouping contexts, discriminability indices select a pinned discriminative feature spine and enforce rollup-aware budgets across feature families. Low-discrepancy sampling (Sobol/LHS) then selects remaining features, projection pipelines (e.g., PCA→UMAP variants), and clusterers under admissibility/save gates. The system emits interpretable System Knowledge Maps (grand confusion matrices with alignment) and size, weight, and power (SWaP) telemetry (per-signal runtime/memory), supporting dataset-scale deployment planning. We evaluate on RadioML-2018.01A (24 modulations) and TorchSig Narrowband Overlap—TNO (19 modulations overlapping RadioML). In the ALLMODS setting—K-way clustering where K equals the dataset’s modulation catalog (K=24 or K=19)—guided DSE driven by discriminability analyses improves Adjusted Rand Index (ARI) versus unguided baselines by ~+25% on RadioML and ~+73% on TNO, with consistent gains across a range of signal-to-noise ratios (SNR) and strongest lifts at moderate-to-high SNR. Within contested semantic subspaces on TNO, guided DSE yields even larger relative improvements: Phase-Shift Keying (PSK) ~+325%, Frequency Modulation / 2-Gaussian Minimum Shift Keying (FM/2-GMSK) ~+220%, and Quadrature Amplitude Modulation (QAM) ~+298%. Elite configurations operate at sub-millisecond per-signal CPU latency with sub-kilobyte memory, and residual errors align with separability-map adjacency seams, indicating that disentanglement—not brute-force search—drives performance. 
This paper establishes the classification core—a reproducible path from discriminability measurement to guided search and interpretable outputs. Planned follow-ons extend to Particle Swarm Optimization (PSO), distance-aware ensembles with semantic-subspace specialists, tiered architectures, and real-time pipelines (GPU acceleration, streaming signal ingestion, and software-defined radio (SDR) integration, e.g., Ettus USRP B210).
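The effect-size-based discriminability measurements that guide the search can be illustrated with Cohen's d on toy feature data (the actual feature families and datasets are as described in the abstract):

```python
import numpy as np

def cohens_d(a, b):
    """Effect size of one feature between two modulation classes."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

rng = np.random.default_rng(4)
# Toy feature matrices for two classes: column 0 discriminative, column 1 not
cls_a = np.column_stack([rng.normal(2.0, 1.0, 300), rng.normal(0, 1, 300)])
cls_b = np.column_stack([rng.normal(0.0, 1.0, 300), rng.normal(0, 1, 300)])
d_scores = [abs(cohens_d(cls_a[:, j], cls_b[:, j])) for j in range(2)]
ranked = np.argsort(d_scores)[::-1]   # top-ranked features form the "spine"
```

In the framework above, scores like these become search constraints: high-effect-size features are pinned into the configuration, and the design-space exploration samples only the remainder.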
      • 04.1002 AI-Driven Design of Stacked Intelligent Metasurfaces for Software-Defined Radio Applications
        Ivan Iudice (CIRA Italian Aerospace Research Center), Giacinto Gelli (Università degli Studi di Napoli Federico II), Donatella Darsena () Presentation: Ivan Iudice - -
        The integration of reconfigurable intelligent surfaces (RIS) into future wireless communication systems offers promising capabilities in dynamic environment shaping and spectrum efficiency. In this work, we present a consistent implementation of a stacked intelligent metasurface (SIM) model within NVIDIA’s AI-native framework Sionna for 6G physical layer research. Our implementation allows simulation and learning-based optimization of SIM-assisted communication channels in fully differentiable and GPU-accelerated environments, enabling end-to-end training for cognitive and software-defined radio (SDR) applications. We describe the architecture of the SIM model, including its integration into the TensorFlow-based pipeline, and showcase its use in closed-loop learning scenarios involving adaptive beamforming and dynamic reconfiguration. Benchmarking results are provided for various deployment scenarios, highlighting the model’s effectiveness in enabling intelligent control and signal enhancement in Non-Terrestrial-Network (NTN) propagation environments. This work demonstrates a scalable, modular approach for incorporating intelligent metasurfaces into modern AI-accelerated SDR systems and paves the way for future hardware-in-the-loop experiments.
      • 04.1003 Resilient UAV Data Mule via Adaptive Sensor Association under Timing Constraints
        Md Sharif Hossen (NC State University), Ozgur Ozdemir (NCSU), Mihail Sichitiu (NC State University), Ismail Guvenc () Presentation: Md Sharif Hossen - -
        Unmanned aerial vehicles (UAVs) can be critical for time-sensitive data collection missions, yet existing research often relies on simulations that fail to capture real-world complexities. Many studies assume ideal wireless conditions or focus only on path planning, neglecting the challenge of making real-time decisions in dynamic environments. To bridge this gap, we address the problem of adaptive sensor selection for a data-gathering UAV, considering both the buffered data at each sensor and realistic propagation conditions. We introduce the Hover-based Greedy Adaptive Download (HGAD) strategy, designed to maximize data transfer by intelligently hovering over sensors during periods of peak signal quality. We validated HGAD using both a digital twin (DT) and a real-world (RW) testbed at the NSF-funded AERPAW platform. Our experiments show that HGAD significantly improves download stability and successfully meets per-sensor data targets. Compared with a traditional Greedy approach that simply follows the strongest signal, HGAD downloads more cumulative data. This work demonstrates the importance of integrating SNR-aware and buffer-aware scheduling with DT and RW signal traces to design resilient UAV data-mule strategies for realistic deployments.
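The hover-and-download decision HGAD makes can be caricatured in a few lines. The sketch below is a toy model with invented field names (`buffer_mb`, `peak_snr_db`) and a simplified download model, not the authors' implementation:

```python
def hgad_schedule(sensors, snr_threshold=10.0):
    """Toy hover-based greedy download: visit sensors in order of buffered
    backlog, and hover only where predicted SNR exceeds a quality threshold.

    sensors: list of dicts with 'id', 'buffer_mb', 'target_mb', 'peak_snr_db'.
    Returns the visit order and total data downloaded (simplified model).
    """
    plan, downloaded = [], 0.0
    # Greedy: largest outstanding buffer first; skip low-SNR sensors,
    # since hovering there wastes limited flight time.
    for s in sorted(sensors, key=lambda s: s["buffer_mb"], reverse=True):
        if s["peak_snr_db"] < snr_threshold:
            continue
        take = min(s["buffer_mb"], s["target_mb"])
        downloaded += take
        plan.append(s["id"])
    return plan, downloaded
```

The real strategy additionally times the hover to the peak-SNR window along the trajectory; this sketch only captures the joint SNR-aware and buffer-aware selection the abstract emphasizes.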
      • 04.1004 Testset for Cis-Lunar Communications and Navigation: System Design
        Eugene Grayver (Aerospace Corporation), David Lee (Aerospace Corporation), Eric McDonald (The Aerospace Corporation), Jon Verville (NASA Goddard Space Flight Center) Presentation: Eugene Grayver - -
        The NASA Lunar Communications Relay and Navigation System (LCRNS) will provide reliable high-rate communications and navigation services to future lunar missions. LCRNS provides data and positioning, navigation, and timing (PNT) services. The data services include real-time and non-real-time (delay-tolerant networking) delivery. Both types of data services can be delivered over a relatively low-rate S-band link or a high-rate Ka-band link. The S-band link supports data rates from a few kbps to a few Mbps, as well as an 'emergency' mode at 15 bps. The Ka-band link supports data rates from 1 to 50 Mbps. LCRNS also provides one-way and multi-way ranging, position, and timing using a signal derived from the terrestrial GPS L1C signal structure. This signal also carries low-rate broadcast messages (the in-phase channel is for PNT, the quadrature is for data). This complex system will be designed, launched, and operated by a commercial entity – Intuitive Machines – and provided as a service to NASA. The system is being developed at a remarkably fast rate, with just two years between NSN Services contract award and launch. The commercial procurement model does not provide NASA with the same level of insight into the development and verification of the hardware as was common on previous missions. NASA is working with The Aerospace Corporation to create a testset that will be used to verify LCRNS requirements compliance. The Aerospace Corporation is building the testset hardware and developing the software on an extremely compressed schedule. The modems are based on software-defined radio to allow rapid addition of features and insertion of test observation points. The testset is designed to verify the entire communications and navigation stack – from RF to applications. The application interface allows for true end-to-end testing using actual ground mission operations software.
A government-provided data services software element allows representative flight user data to be exercised through the testset, including command and telemetry flows, CCSDS CFDP, and simultaneous multi-user data flows to exercise the required networking functions. This paper provides an in-depth overview of the testset system architecture, including the software and IT elements (a previous paper [1] described the hardware). The testset is very flexible and can be adapted to future missions. The goal of this and subsequent papers is to accelerate development of testsets by adopting a common design paradigm. To that end, we describe low-level details, from the IT infrastructure and software development approach to the mapping of functional blocks to appropriate computer languages. The test definition and design leverage concise YAML descriptions rather than a proprietary test script. This approach gives up some flexibility in exchange for significantly faster test development. A full Python interface is provided to implement unusual or complex tests. The Python interface has access to the entire testset 'state' as well as direct control over all the signal processing blocks.
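The YAML-versus-Python trade-off described above can be illustrated with a dict standing in for a parsed YAML test description. Everything here (step names, metrics, the `FakeTestset` stand-in) is hypothetical, not the actual LCRNS testset API:

```python
def run_test(spec, testset):
    """Execute a declarative test spec (as parsed from YAML) against a
    testset object exposing configure/measure methods. Step and metric
    names are illustrative only."""
    results = {}
    for step in spec["steps"]:
        if step["action"] == "configure":
            testset.configure(**step["params"])
        elif step["action"] == "measure":
            value = testset.measure(step["metric"])
            lo, hi = step["limits"]
            results[step["metric"]] = lo <= value <= hi  # pass/fail vs limits
    return results

class FakeTestset:
    """Stand-in for the real hardware: records config, returns canned data."""
    def __init__(self):
        self.config = {}
    def configure(self, **kw):
        self.config.update(kw)
    def measure(self, metric):
        return {"ber": 1e-6, "evm_pct": 3.2}[metric]

# What a parsed YAML test description might look like after loading.
spec = {"steps": [
    {"action": "configure", "params": {"band": "Ka", "rate_mbps": 50}},
    {"action": "measure", "metric": "ber", "limits": [0.0, 1e-5]},
]}
```

The declarative spec covers the common case cheaply; an escape hatch like the paper's full Python interface would handle tests this simple loop cannot express.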
      • 04.1005 Digital Regenerative Ranging Implementation with Accelerated Acquisition
        Eugene Grayver (Aerospace Corporation), Michael Varner (The Aerospace Corporation), Jason Zheng (The Aerospace Corporation) Presentation: Eugene Grayver - -
        Satellite ranging is a very mature and somewhat niche field. The approach to regenerative ranging presented in this paper ignores the constrained computational resources available on satellites. The code acquisition uses 'brute-force' FFT-based correlation over the entire code period. This technique requires substantial memory and compute power but allows acquisition with dramatically less dwell time. Shorter acquisition time is desirable for shared ground stations that support multiple satellites. The PN code regeneration is implemented using a tracking despreader that was originally designed for processing DSSS data links. The despreader is used in a novel configuration where the roles of the 'input' and 'reference' signals are flipped. The system is intended for simulation, test, and ground stations, and is implemented in real-time software executing on a CPU.
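The acquisition step can be sketched with plain circular correlation: the FFT-based version the paper describes computes exactly these lag values in O(N log N) as IFFT(FFT(rx)·conj(FFT(code))). A dependency-free O(N²) sketch with a short made-up ±1 code:

```python
def circular_correlate(rx, code):
    """Circular correlation of received samples against a local PN replica.
    Direct O(N^2) form; the FFT-based version produces the same lag values."""
    n = len(code)
    return [sum(rx[(k + lag) % n] * code[k] for k in range(n))
            for lag in range(n)]

def acquire(rx, code):
    """Brute-force acquisition: return the code-phase lag with the
    strongest correlation peak over the entire code period."""
    corr = circular_correlate(rx, code)
    return max(range(len(corr)), key=lambda lag: corr[lag])
```

Because every lag is tested in one pass over a single dwell, the peak is found without the serial search that drives conventional acquisition time, which is the trade the abstract describes.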
      • 04.1006 Neuro-Symbolic Foundation Agent for Hyperparameter Tuning in Multi-UAS Positioning
        Yajie Bao (Intelligent Fusion Technology, Inc.), Deeraj Nagothu (Intelligent Fusion Technology Inc), Dan Shen (Intelligent Fusion Technology, Inc), Genshe Chen (Intelligent Fusion Technology, Inc.), Erik Blasch (Air Force Research Laboratory), Khanh Pham (Air Force Research Laboratory) Presentation: Yajie Bao - -
        This paper presents a design of a foundation model (FM) agent for hyperparameter tuning (HT) under the neuro-symbolic (NS) framework. Hyperparameters are critical to algorithm performance, and finding optimal values can consume substantial human and computational resources. While many methods have been proposed for HT, human involvement is still inevitable for these methods to work. FMs have opened up possibilities for developing intelligent agents to perform tasks that require human participation. In this work, we design a workflow for an FM agent to tune hyperparameters and achieve the desired performance. By incorporating NS reasoning into the workflow, our design enhances FM reasoning by specifying the logic rules for generating responses and enabling human verification and intervention. Experiments on the HT of the range-only positioning algorithm for multiple unmanned aerial systems (UASs) in a Global Navigation Satellite System (GNSS)-denied environment are provided to demonstrate that the proposed design can achieve comparable performance to human experts with improved efficiency.
    • Evan Ward (U.S. Naval Research Laboratory) & Lin Yi (Jet Propulsion Laboratory, California Institute of Technology) & John Enright (Toronto Metropolitan University)
      • 04.1203 First Flight Results of RTK-Based Relative Navigation on the SNUGLITE-III CubeSat
        Hanjoon Shim (Seoul National University), Yonghwan Bae (Seoul National University), Jae Woong Hwang (Seoul National University), Changdon Kee (Seoul National University) Presentation: Hanjoon Shim - -
        This paper presents the first in-orbit demonstration of a real-time kinematic (RTK) based relative navigation system conducted during the commissioning phase of the SNUGLITE-III CubeSat mission. SNUGLITE-III consists of two identical 3U CubeSats named Hana, which is the target satellite, and Duri, which serves as the chaser satellite. The two satellites were launched together in a combined 6U configuration. The mission aims to demonstrate propellant-free autonomous rendezvous and docking, formation flight, and GPS radio occultation data collection using a dual-satellite system. A key enabling technology for achieving these mission goals is precise relative navigation. High-accuracy relative positioning is essential for safe rendezvous and docking maneuvers, where centimeter-level precision is required to prevent collision risks and ensure operational safety. While many previous CubeSat missions have relied primarily on optical sensors, with GPS playing only a secondary role, the increasing availability of GPS receivers on CubeSats and the favorable signal conditions in low Earth orbit make high-precision GPS-based navigation increasingly viable. To enable real-time operation on resource-constrained CubeSat platforms, efficient navigation algorithms are critical. The system described in this study achieves centimeter-level relative navigation by applying a single-baseline RTK solution. Unlike conventional systems that rely on a ground-based reference station, this system uses GPS measurements transmitted between the two satellites. It adopts the RTK framework typically used in ground-to-user applications but applies it in a space-to-space context through inter-satellite communication. This represents the first demonstration of this type of RTK-based relative navigation using single-frequency GPS receivers onboard CubeSats in low Earth orbit.
In this relative navigation system, the target satellite (Hana) transmits raw GPS measurements at one-second intervals to the chaser satellite (Duri) through a low-power inter-satellite communication link based on LoRa technology. Duri processes the received data onboard to compute the relative navigation solution. A differential GPS (DGPS) technique is first applied to narrow the ambiguity search space. Then, a recursive ambiguity filter (RAF) improves the float solution, and the LAMBDA method is used to resolve integer ambiguities and calculate the final RTK solution. The SNUGLITE-III satellites were launched aboard the Nuri rocket from the Naro Space Center in South Korea in November 2025. After deployment from a P-POD in their combined 6U form, the satellites remained mechanically coupled for approximately one month during the commissioning phase. During this period, attitude control was handled by the Duri satellite, which first completed angular rate damping and then maintained a nadir-pointing attitude using reaction wheels. The RTK-based relative navigation system was validated while the satellites remained mechanically connected, prior to their separation. Because the relative position between the two satellites was fixed and known during this phase, the configuration provided an ideal environment for verifying the performance of the onboard navigation algorithms. Using housekeeping data recorded at ten-second intervals, the performance of the DGPS, RAF, and RTK techniques was analyzed. The results confirm the effectiveness of the system and demonstrate its readiness for operational deployment in the mission’s subsequent phases, as well as its potential utility for future CubeSat formation flight missions.
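The clock-bias cancellation that motivates the DGPS differencing step can be shown with simple arithmetic. A toy sketch with invented numbers, not the SNUGLITE-III data:

```python
def pseudorange(geom_m, rx_clk_m, sat_clk_m):
    """Simplified pseudorange: geometric range plus receiver and satellite
    clock biases (all in meters; atmospheric terms omitted)."""
    return geom_m + rx_clk_m + sat_clk_m

def single_difference(rho_chaser, rho_target):
    """Between-receiver difference to one satellite:
    the common satellite clock bias cancels."""
    return rho_chaser - rho_target

def double_difference(sd_sat_j, sd_sat_ref):
    """Between-satellite difference of single differences:
    the common receiver clock bias cancels too, leaving geometry
    (plus integer ambiguities and noise in the carrier-phase case)."""
    return sd_sat_j - sd_sat_ref
```

With the clock terms eliminated, what remains is the geometry-plus-ambiguity model that the recursive ambiguity filter and LAMBDA then resolve to integers.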
      • 04.1204 From Quaternions to Möbius Transformations: A Relativistic Framework for Navigation & Star Tracking
        Andrew Tennenbaum (Astrobotic Technology Inc), John Crassidis (University at Buffalo - SUNY) Presentation: Andrew Tennenbaum - -
        This paper introduces a mathematical framework that generalizes classic Euclidean spacecraft navigation algorithms to account for relativistic effects using Möbius transformations. Traditional star tracker algorithms rely on representations of celestial references as 3D vectors on the unit sphere and use tools such as quaternions, interstar angles, and the Davenport Q-method to estimate spacecraft attitude. These approaches implicitly assume Euclidean geometry and fail to accommodate Lorentz boosts, which become non-negligible in high-velocity or high-accuracy regimes. We propose a novel correspondence between this classical toolkit and an alternative formulation grounded in complex analysis and special relativity, where the celestial sphere is treated as the Riemann sphere via stereographic projection. In this relativistic setting, Möbius transformations (2x2 complex matrices with unit determinant, encoding both attitude and velocity) serve as analogs to quaternions, and cross-ratios replace interstar angles as projective invariants. We present the Davenport M-method, MOBEST, and M-TRIAD: relativistic counterparts to the Q-method, QUEST, and TRIAD, respectively. These algorithms operate directly in the complex domain and estimate spacecraft pose via least-squares optimization over Möbius transformations. We provide a detailed error analysis of the cross-ratio under focal length and pixel perturbations and demonstrate its robustness relative to interstar angles, particularly in constraining the star identification search space. Furthermore, we derive a bijective mapping between quaternion+velocity pairs and Möbius transformations, offering a bridge between Euclidean and relativistic kinematic models. This work establishes a foundational mathematical and algorithmic framework for relativistic navigation and star tracking, potentially enabling higher-accuracy attitude estimation in next-generation missions.
All algorithms are implemented in MATLAB and available open-source.
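The cross-ratio's role as a projective invariant is easy to check numerically: applying any invertible Möbius transformation to four points leaves their cross-ratio unchanged. A minimal sketch with four hypothetical star projections on the complex plane (not the paper's MATLAB code):

```python
def mobius(M, z):
    """Apply the Mobius transformation ((a, b), (c, d)) to complex z."""
    (a, b), (c, d) = M
    return (a * z + b) / (c * z + d)

def cross_ratio(z1, z2, z3, z4):
    """Projective invariant that replaces interstar angles on the
    Riemann sphere (one common convention of several)."""
    return ((z1 - z3) * (z2 - z4)) / ((z1 - z4) * (z2 - z3))
```

Because a Lorentz boost acts on the projected celestial sphere as a Möbius transformation, this invariance is what lets cross-ratios constrain star identification even for a moving observer.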
      • 04.1208 Detecting Craters in Images Using Classical and Machine Learning Techniques for Lunar Navigation
        Kyle Smith (), Trupti Mahendrakar (Florida Institute of Technology), Max Marshall (NASA - Johnson Space Center), Shaurya Gupta (NASA - Johnson Space Center) Presentation: Kyle Smith - -
        Loss of ground-based communication is an increasing concern for crewed and uncrewed spacecraft as cislunar traffic grows. To address navigation challenges in lunar environments, a Target & Range-adaptive Optical Navigation system (TRON) was developed to enable onboard navigation independent of ground communication. One component of the TRON suite is a technique designed specifically for terrain relative navigation (TRN) through the identification of lunar craters from orbital altitudes. The effectiveness of this method is directly tied to crater detection performance, as many false positives or false negatives can deteriorate the navigation results downstream. This paper presents a novel combination of machine learning and traditional computer vision methods for crater identification with built-in fault prevention. The approach is evaluated on synthetic imagery from Gateway’s NRHO orbit and real lunar imagery collected during the Artemis I mission, showing promising results and establishing a clear path toward future flight demonstrations.
      • 04.1209 Extended Linear Quadratic Regulator Informed RRT*
        Xavier Kipping (The University of Alabama), Jordan Larson (University of Alabama) Presentation: Xavier Kipping - -
        Autonomous guidance is critical for future space missions, particularly for cislunar satellite rendezvous operations, where communication delays and complex dynamics present significant challenges. This paper introduces the Extended Linear Quadratic Regulator-informed optimal Rapidly-exploring Random Tree (ELQR-RRT*) search, a novel sampling-based guidance algorithm designed for obstacle avoidance in nonlinear environments. We validate the ELQR-RRT* by solving a fuel-optimal rendezvous problem in an Earth-Moon 9:2 L2 Southern Near Rectilinear Halo Orbit (NRHO). Its performance is benchmarked against two established methods: a shooter-based RRT* (S-RRT*) and a direct optimization solution from the Astrodynamics Software and Science Enabling Toolkit (ASSET). Our comparative analysis demonstrates that ELQR-RRT* generates a smooth, near-optimal trajectory, successfully navigating the complex multi-body dynamics of the cislunar regime while avoiding obstacles. The results affirm the algorithm’s capability to deliver high-quality guidance solutions, highlighting its potential for real-time autonomous operations.
    • Mark Cockburn (US Department of Transportation) & Jason Glaneuski (US DOT / RITA / Volpe Center)
      • 04.1301 A Novel Grid-Based Conflict Detection Algorithm for Trajectory Based Operations
        Alexandra Davidoff (Embry-Riddle Aeronautical University), Sarah Reynolds (Embry Riddle Aeronautical University ), Luke Newcomb (), Omar Ochoa (Embry-Riddle Aeronautical University) Presentation: Alexandra Davidoff - -
        The management of the National Airspace System (NAS) for commercial flight operations has traditionally relied on waypoint-based frameworks. However, increasing air traffic complexity and a persistent shortage of qualified Air Traffic Control Officers (ATCOs) have necessitated the shift towards Trajectory Based Operations (TBO). TBO offers a more dynamic and data-driven approach, enabling precise management of flight paths by continuously predicting and adjusting aircraft trajectories. Despite its benefits, implementing TBO introduces considerable computational challenges, such as reliably predicting and detecting conflicts among numerous aircraft trajectories in both real-time and pre-flight planning phases. To address these challenges, advanced computational approaches are required to support the transition towards TBO without overwhelming ATCOs or compromising operational safety. Conflict detection algorithms play an important role in ensuring that aircraft maintain separation, thereby preserving airspace safety and operational efficiency. Traditional conflict detection methods, such as pair-wise checking, suffer from scalability limitations, exhibiting O(N²) computational complexity, which causes the algorithms to become increasingly infeasible as air traffic volume grows. These algorithms are therefore unsuitable for the large-scale operations of TBO. After establishing the state of the art in conflict detection for air traffic, this work introduces a novel grid-based conflict detection algorithm designed specifically with scalability and efficiency in mind. Grid-based methods are commonly used for conflict detection within air traffic management as they outperform brute-force methods, providing a scalable solution to the conflict detection problem.
The proposed centralized algorithm improves upon previous research by spatially partitioning the airspace of the continental United States into discrete grid cells and performing systematic local conflict checks, significantly reducing computational complexity. A conflict is defined based on established FAA thresholds and occurs when two aircraft are within five nautical miles laterally and 1000 feet vertically at approximately the same time. Additionally, the algorithm focuses on the cruise phase of flight in Class A airspace above 18,000 feet. The solution detects elusive conflicts by interpolating between trajectory points when a conflict is possible, thereby only performing rigorous searches when necessary. By locally searching for conflicts within each cell and its neighbors, the grid-based method achieves a computational complexity of O(N log N), making it suitable for implementation in TBO. In this paper, we show how the algorithm is validated and verified through experiments on historical commercial flight data obtained from the NASA Sherlock database in which conflicts are systematically injected. Experimentation includes pressure testing on an increasing number of flights from 2020. Between several hundred and several thousand flights operating at approximately the same time are evaluated. The experiments both verify the algorithm's ability to detect conflicts in dense airspace, avoiding false positives, and validate the algorithm's applicability to large-scale operations within TBO. This research therefore fills a critical gap in the literature by producing an accurate centralized conflict detection algorithm capable of real-time operations at scale.
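The cell-plus-neighbors pattern described above can be sketched as follows. The snapshot-based check is a simplification (the paper interpolates along trajectories over time), but the FAA thresholds and local-search structure match the description:

```python
from collections import defaultdict
from itertools import combinations

LATERAL_NM = 5.0      # FAA lateral separation threshold (nautical miles)
VERTICAL_FT = 1000.0  # FAA vertical separation threshold (feet)

def detect_conflicts(aircraft, cell_nm=5.0):
    """Hash each aircraft state (x_nm, y_nm, alt_ft) into a grid cell sized
    to the lateral threshold, then check pairs only within each cell and
    its 8 neighbors: local checks instead of all N^2 pairs."""
    grid = defaultdict(list)
    for i, (x, y, _alt) in enumerate(aircraft):
        grid[(int(x // cell_nm), int(y // cell_nm))].append(i)
    conflicts = set()
    for (cx, cy), _members in grid.items():
        # Gather occupants of this cell and its 8 neighbors.
        nearby = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nearby.extend(grid.get((cx + dx, cy + dy), []))
        for i, j in combinations(sorted(set(nearby)), 2):
            xi, yi, ai = aircraft[i]
            xj, yj, aj = aircraft[j]
            lateral = ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5
            if lateral < LATERAL_NM and abs(ai - aj) < VERTICAL_FT:
                conflicts.add((i, j))
    return conflicts
```

With cells sized to the lateral threshold, any conflicting pair must share a cell or lie in adjacent cells, so no conflicts are missed while pair checks stay local.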
      • 04.1302 A Hybrid Centralized and Decentralized Framework for Traffic Management in Advanced Air Mobility
        Alexandra Davidoff (Embry-Riddle Aeronautical University), Omar Ochoa (Embry-Riddle Aeronautical University) Presentation: Alexandra Davidoff - -
        The dynamic and safe routing of aircraft within Unmanned Aircraft Systems Traffic Management (UTM) is a key issue for the development of autonomous Advanced Air Mobility (AAM). The future framework of AAM will involve many autonomous vehicles, such as electric Vertical Takeoff and Landing (eVTOL) vehicles, occupying the same airspace with differing levels of priority, requiring scalable, efficient, and safe solutions. Machine Learning (ML) is a key candidate for this area, specifically through the usage of Reinforcement Learning (RL) for dynamically routing aircraft and resolving potential spatial and temporal conflicts that occur. However, to incorporate a system utilizing ML into the safety-critical environment of AAM, rigorous guarantees of safety are necessary to maintain the operational safety of the National Airspace System. Formal Methods (FM), encompassing rigorous mathematical techniques to reason about the safety of software, may be used to build a safety harness around RL models. This work provides a brief survey of current solutions in the area of RL for UTM, Safe RL, and Safe RL for UTM. After establishing the current state of the art, a hybrid centralized and decentralized framework for UTM in AAM is proposed, integrating formal methods into Deep RL to dynamically and safely route autonomous AAM traffic. This design provides multiple layers of redundancy by combining a centralized trajectory planning system with a decentralized conflict detection and resolution mechanism. Both systems integrate FM into their architectures to guarantee safety throughout the entire flight operation lifecycle. A procedural and deterministic conflict detection algorithm is further integrated into the centralized system to provide an additional redundant check on the airspace. The centralized component handles high-level flight trajectory planning, resolving obvious conflicts during the tactical planning stage.
This component utilizes Deep Q Learning and integrates a Product MDP, a reactive synthesis technique, into the policy learning stage to integrate safety into trajectory generation. The centralized component monitors the current traffic situation, generates safe trajectories upon receiving requests from UAVs, and dynamically detects and resolves obvious conflicts that occur. The decentralized component utilizes Multi-Agent Deep RL, specifically Proximal Policy Optimization, to enable the autonomous vehicles to perform fine-grained conflict detection and resolution. Because aircraft frequently have a more complete picture of the immediate traffic situation than centralized air traffic control might, the decentralized system allows UAVs to independently make flight planning decisions to resolve less obvious and immediate conflicts. The UAVs make fine-grained adjustments to planned trajectories, such as small increases or decreases in altitude or lateral position to avoid separation violations. Safety is enforced through the integration of shielding, a reactive synthesis technique, to prevent UAVs from taking unsafe actions during both training and deployment. Actions are then reported to the centralized system. In this paper we show how the framework is validated through a proof-of-concept study within a novel AAM simulation environment. The developed simulation environment models UAVs flying within multiple configurations of an AAM airspace consisting of corridors, intersections, merge points, and split points. Simulation experiments demonstrate the feasibility and safety of the proposed UTM framework.
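The shielding mechanism described above can be sketched as an action filter applied between the policy and the environment. The action set and safety predicate below are invented toys, not the paper's synthesized shield:

```python
def shield(state, proposed_action, safe_actions_fn):
    """Reactive shield: pass the RL policy's action through only if it is
    safe in the current state; otherwise substitute a safe fallback."""
    safe = safe_actions_fn(state)
    if proposed_action in safe:
        return proposed_action
    # Deterministic fallback: prefer the neutral action when available.
    return "hold" if "hold" in safe else min(safe)

def safe_actions(state, floor_ft=1000, step_ft=100):
    """Toy safety predicate: forbid descending below a floor altitude."""
    actions = {"climb", "hold", "descend"}
    if state["alt_ft"] - step_ft < floor_ft:
        actions.discard("descend")
    return actions
```

Because the filter runs during both training and deployment, the learner never experiences (or executes) a shielded-out action, which is the safety-harness idea the abstract describes.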
      • 04.1303 Mobility-Resilient Datalink Protocol for Next-Generation Aeronautical Communication
        Raouf Hamzaoui (De Montfort University), Sergun Ozmen (Turkish Airlines), Hassna Louadah (De Montfort University), Feng Chen (De Montfort University), Fulya Aybek Cetek (Eskisehir Technical University), Rosa Arnaldo Valdés (Universidad Politécnica de Madrid), Pedro Reinaldos Manzanares (), Guillermo Martín Vicente (Universidad Politécnica de Madrid), Michael Regnault (ENAC) Presentation: Raouf Hamzaoui - -
        This paper presents the design and analysis of a mobility-resilient datalink communication protocol to meet the demands of next-generation aeronautical communication over the Aeronautical Telecommunication Network/Internet Protocol Suite (ATN/IPS). The protocol was developed under the Horizon Europe SESAR JU-funded ATMACA (Air Traffic Management and Communication over ATN/IPS) project. The project responds to the increasing demand for seamless, resilient, and interoperable air-ground communication in environments that involve frequent changes in geography, connectivity, and operational roles. The protocol introduces new capabilities that overcome limitations in existing mobility or session management protocols, such as Proxy Mobile IPv6 and Session Initiation Protocol (SIP). It includes mechanisms for adaptive routing, fault recovery, and network reconfiguration. These mechanisms follow software-defined networking principles and support the delivery of uninterrupted air-ground communication, including Controller–Pilot Data Link Communications (CPDLC) messages, across changing conditions. The architecture of the protocol consists of three integrated logical layers: transport, session, and context. Each layer addresses a specific aspect of mobility. The transport layer establishes physical connections, the session layer maintains logical application-level continuity, and the context layer tracks user state, roles, and operational conditions. Together, these layers provide support for terminal, session, service, and user mobility. The architecture includes several software-defined roles. These roles include Air Traffic Management (ATM) Servers, Air Traffic Control (ATC) Agents, and Context Management Agents. Each role has clearly defined responsibilities related to provisioning, mobility coordination, and session management. 
The network architecture is organized hierarchically across sector, facility, area, and flight information region levels, supporting both local and cross-regional coordination. This hierarchy supports distributed control and coordination across the entire airspace. A core feature of the protocol is the DataLink Information eXchange (DIX) format. This binary message format supports structured communication between agents, clients, and services. It encodes application data, session identifiers, context metadata, and mobility events. The format reduces signaling overhead and allows fast propagation of events and updates. The structure of DIX messages supports asynchronous communication and real-time adaptability. This approach ensures compatibility with legacy systems while enabling integration with new air traffic management services. The protocol uses a hierarchical addressing model that assigns operational meaning to each network identifier. This model allows intelligent routing and scalable service discovery. The addressing scheme maps directly to roles and responsibilities in airspace operations. Each node receives a unique address that reflects its location, organizational affiliation, and function. These identifiers enable precise routing and facilitate cross-domain interoperability. The protocol supports both direct peer-to-peer connections and logical session-based communication. This dual-mode capability allows flexibility in deployment. The protocol works across satellite, terrestrial, and airport-based communication systems. It also supports transitions between different access technologies without interrupting ongoing services. The network can operate in centralized or federated modes, depending on the needs of the air traffic service provider. The design also improves scalability and simplifies integration with other SESAR and ICAO systems. In conclusion, the ATMACA protocol provides a robust and scalable communication framework. 
It addresses existing shortcomings in datalink systems and supports future development in global air traffic management infrastructure.
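A binary message format such as DIX typically pairs a fixed header with a variable payload. The field layout below is invented for illustration (the abstract does not publish the actual DIX wire format):

```python
import struct

# Hypothetical fixed header: version, message type, session id, context
# flags, payload length. Illustrative only, not the real DIX layout.
HEADER_FMT = ">BBIHI"  # big-endian: u8, u8, u32, u16, u32
HEADER_LEN = struct.calcsize(HEADER_FMT)

def pack_msg(version, msg_type, session_id, flags, payload: bytes):
    """Serialize header fields and append the payload bytes."""
    header = struct.pack(HEADER_FMT, version, msg_type, session_id,
                         flags, len(payload))
    return header + payload

def unpack_msg(frame: bytes):
    """Parse the fixed header, then slice out the payload."""
    version, msg_type, session_id, flags, length = struct.unpack(
        HEADER_FMT, frame[:HEADER_LEN])
    return version, msg_type, session_id, flags, frame[HEADER_LEN:HEADER_LEN + length]
```

Carrying the session identifier and context flags in a compact fixed header is what keeps signaling overhead low while still letting agents route and react to mobility events quickly.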
      • 04.1304 Air-to-Air Channel Characterization for UAV Communications at 3.4 GHz
        Anil Gurses (NC State University), John Kesler (North Carolina State University), Mihail Sichitiu (NC State University) Presentation: Anil Gurses - -
        The proliferation of Uncrewed Aerial Vehicles (UAVs) in applications such as flying ad-hoc networks (FANETs), precision agriculture, disaster response, and future 6G integrated networks necessitates the development of accurate and robust Air-to-Air (A2A) wireless communication systems. While existing research has predominantly focused on Air-to-Ground (A2G) links, the A2A channel remains significantly under-characterized, especially in the sub-6 GHz frequency bands critical for reliable data exchange. Current A2A models often oversimplify the channel, relying on static assumptions that neglect the profound impact of the UAVs' three-dimensional mobility and the physical characteristics of the aerial platforms themselves. This paper addresses this research gap by presenting a preliminary set of measurements for the 3.4 GHz A2A channel. We have developed a lightweight, reconfigurable, open-source channel sounder using USRP B210 Software-Defined Radios (SDRs) and a high-precision Global Navigation Satellite System-disciplined oscillator (GNSS-DO), deployed on two UAVs. We conducted a measurement campaign at the Aerial Experimentation and Research Platform for Advanced Wireless (AERPAW) Lake Wheeler testbed, an ideal, instrumented rural environment for UAV experimentation. The campaign featured a spherical flight trajectory around the second UAV, designed to capture the dynamic channel characteristics during maneuvers, including circular orbits, different altitudes, and elevation angles. From this data, we present a thorough analysis of the fundamental channel characteristics. We extract and model the fading parameters from the channel measurements, including channel impulse response (CIR), and analyze their dependence on link geometry. We also characterize the fading statistics, providing insights into the RMS delay spread for A2A links in this environment.
This foundational channel measurement dataset provides a more realistic and validated tool for the design, development, emulation, and performance evaluation of physical and MAC layer protocols for next-generation UAV communication networks.
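The RMS delay spread reported from the CIR measurements is the square root of the second central moment of the power delay profile. A minimal stdlib sketch (illustrative taps, not the campaign's data):

```python
import math

def rms_delay_spread(taus_ns, powers_lin):
    """RMS delay spread of a power delay profile: standard deviation of
    path delay weighted by linear path power.

    taus_ns:    path delays in nanoseconds
    powers_lin: corresponding path powers (linear, not dB)
    """
    total = sum(powers_lin)
    mean = sum(t * p for t, p in zip(taus_ns, powers_lin)) / total
    second = sum(t * t * p for t, p in zip(taus_ns, powers_lin)) / total
    return math.sqrt(second - mean * mean)
```

Note that measured powers are usually reported in dB and must be converted to linear scale (10**(dB/10)) before applying this formula.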
    • Patrick Morrissey (Collins Aerospace) & Krishna Sampigethaya (Embry-Riddle Aeronautical University)
      • 04.1402 Protecting against Space Invaders: An AI-driven Framework for Proactively Defending Space Systems
        Kendra Cook () Presentation: Kendra Cook - -
        The increasing reliance on satellite systems for communication, navigation, finance, and national defense has elevated the importance of securing space-based infrastructure against cyber threats. Cyber incidents such as the ROSAT, NASA Terra EOS, and ViaSat attacks have demonstrated that adversaries, including state-sponsored actors and independent hackers, are actively targeting space assets for espionage, disruption, and warfare. A comparative analysis of vulnerabilities in terrestrial IT systems versus space systems reveals that while some vulnerabilities overlap, operating in space introduces distinctive vulnerabilities and challenges that require tailored solutions. Through this research, a structured review of existing defensive strategies and AI applications in cybersecurity is conducted, identifying gaps in their adaptation for space systems. The literature review highlights that while terrestrial cybersecurity has benefited from significant advances in AI-driven intrusion detection, anomaly detection, and autonomous threat response, similar applications remain underexplored in space environments. The lack of a standardized framework for AI-based space cybersecurity is a critical shortcoming. To address these issues, this study proposes a practical framework that leverages artificial intelligence (AI) to enhance proactive security measures for space systems, including both onboard and ground-based implementations. The proposal also provides recommendations for developing, training, and implementing the AI model, as well as ensuring model data integrity. Ultimately, this project aims to bridge the knowledge and implementation gap between terrestrial and space-based security practices, providing a forward-looking, AI-enhanced defense framework to protect space systems from present and future threats.
      • 04.1403 The Current State of Satellite Constellation Security and Design Moving Forward
        Trey Burks (Tennessee Technological University) Presentation: Trey Burks - -
        Satellite constellations and swarms are rapidly becoming more common in the world of space systems. Organizations ranging from governments to private internet providers are starting to utilize multiple satellites working together to accomplish their missions, often with the ability to communicate with one another in orbit. This survey provides a brief overview of the current landscape of satellite systems and constellations and the current state of their security, using open-source information. Focus is placed on systems currently in development or deployed, real-world cyber attacks against satellites, research on satellite testbeds for cybersecurity, and research on theoretical attacks against satellites. Systems covered include Starlink, Kuiper, ViaSat, GNSS, those from the NRO, NASA, and SDA, and several more. Each system is briefly compared, and its security posture is analyzed from the perspectives of an attacker and a defender. The survey finds that several of these systems offer robust security, but others may contain gaps that attackers can exploit. Additionally, some of the experimental systems may offer new attack vectors, and thought is given to how those could impact the security of the constellations. The goal of this survey is to highlight the significance of cybersecurity in satellites and where attackers might direct their attention in the future. It also briefly provides advice and resources for cybersecurity researchers entering the field, with the hope of providing avenues for researchers to get started in a very challenging field.
      • 04.1405 Deep Learning Based Anomaly Detection for Securing ADS-B in NextGen Aviation
        Suleman Khan (Linköping University), Andrei Gurtov (Linköping University) Presentation: Suleman Khan - -
        The Next Generation Air Transportation System (NextGen) employs Automatic Dependent Surveillance-Broadcast (ADS-B) to manage congested airspace and optimize air traffic operations. ADS-B delivers precise aircraft location information through satellite navigation, improving air traffic management (ATM). Despite its benefits, the plain-text nature of ADS-B makes it vulnerable to various cyber attacks, including evasion, injection, alteration, replay, jamming, and spoofing. These attacks can mislead aircraft pilots and air traffic control (ATC) personnel, posing a serious threat. To ensure the security of air-ground communication, we propose a cutting-edge anomaly detection framework for the ADS-B protocol. Our proposed framework employs a three-stage deep learning approach, which includes Spatial Graph Convolution Networks (GCN) and a deep auto-regressive generative model. The first stage classifies the data across the operating aircraft airspace as either normal or under attack using GCN. In the second stage, the system analyzes the state sequences of airspace to identify anomalies using a generative WaveNet model and outputs the attacked features. The final stage comprises an aircraft classification module that utilizes each aircraft's unique RF transmitter signal characteristics, enabling ground station operators to scrutinize incoming messages. Experimental results demonstrate that the proposed framework achieves a detection accuracy of up to 99.99% on decoded ADS-B data and 99.25% on raw IQ samples, with a False Alarm Rate (FAR) as low as 0.25%. Furthermore, the integrated RF fingerprinting module achieves 87.2% accuracy in distinguishing legitimate aircraft from spoofed entities. These findings confirm the robustness and effectiveness of our approach in securing ADS-B against sophisticated cyber threats.
  • Alex Austin (Jet Propulsion Laboratory) & Catherine Venturini (The Aerospace Corporation)
    • Young Lee (Jet Propulsion Laboratory) & Benjamin Donitz (NASA Jet Propulsion Laboratory) & Lee Jasper (Space Dynamics Laboratory)
      • 05.0101 Initial In-Orbit Operation and Commissioning Results of the SNUGLITE-III CubeSat Mission
        Yonghwan Bae (Seoul National University), Hanjoon Shim (Seoul National University), Jae Woong Hwang (Seoul National University), Changdon Kee (Seoul National University) Presentation: Yonghwan Bae - -
        SNUGLITE-III is a CubeSat mission developed by the GNSS Laboratory at Seoul National University. It consists of two identical 3U CubeSats designed to demonstrate several mission capabilities in low Earth orbit, including autonomous formation flight, GNSS radio occultation (GNSS-RO) data collection, and an autonomous rendezvous and docking operation in the final phase. The mission relies on non-propulsive aerodynamic differential drag for relative orbit control, enabling fuel-free maneuvers suitable for small satellite platforms. The satellites are equipped with mission-specific subsystems that enable proximity operations without real-time ground intervention. The two satellites are integrated into a combined 6U configuration and are scheduled to be launched aboard the Korean KSLV-II (Nuri) launch vehicle from the Naro Space Center in November 2025. After deployment from the P-POD, the satellites will remain mechanically connected for an initial one-month commissioning phase. This period is intended to verify the health and functionality of each satellite’s subsystems, including power, communication, ADCS, GNSS-based navigation, and payload units. This paper introduces the overall system architecture of SNUGLITE-III and presents the results from the early stages of in-orbit operation. Key findings from the commissioning phase include measurements of post-deployment detumbling performance, verification of inter-satellite communications, and the initial activation of the GNSS-RO system. Subsystem-level assessments such as battery charging behavior, onboard computer performance, and execution of ground-commanded functional tests are also discussed. In addition, the planned sequence for satellite separation and transition to proximity formation flight is described, including safety constraints and timing logic implemented in the flight software. 
These results confirm the operational readiness of the SNUGLITE-III platform and provide insights into system-level integration and risk management strategies for multi-satellite CubeSat missions. The findings serve as a reference for future developments of cooperative satellite operations involving autonomous navigation, docking, and scientific data collection with limited ground support.
      • 05.0102 An Open-Source Analytical Method for Determining Optimal Satellite Orbits for Region Targeting
        Ella Shepherd (California State Polytechnic University, Pomona), Jordy Samaniego (), Angel Reyna (Bronco Space), Jacob Showman (Bronco Space ICON Lab, California Polytechnic University, Pomona), Lukas Sandau (Cal Poly Pomona), Matthew Chang (Cal Poly Pomona), Michael Pham (Cal Poly Pomona) Presentation: Ella Shepherd - -
        The CADENCE – SWANS (Continuous Autonomous Detection Enabling Networked Collaboration Explorers of a Space Weather Anomaly Notification System) mission is a pathfinder for using proliferated space assets to autonomously monitor space weather phenomena. This is accomplished by demonstrating the use of a low-cost coarse energy spectrometer in Low Earth Orbit (LEO) that could be deployed as a low SWaP-C secondary sensor on spacecraft traveling in all orbit domains. One of the challenges encountered when designing a proliferated observation system is determining the optimal set of orbits in which to place these satellites, maximizing the time spent monitoring a region of interest. Solving this challenge requires an analytical method capable of efficiently searching through all feasible orbits for a reference mission, with maximum time spent in the region of interest as the primary metric. The CADENCE – SWANS Mission Ops Team presents an investigation of methodologies for optimizing these orbits using open-source tools, such as NASA’s General Mission Analysis Tool 2025 (GMAT), with the South Atlantic Anomaly (SAA) selected as a representative region of interest. Analysis was conducted using both a brute-force search for optimal orbits and a survey of existing satellites, to provide mission designers with options for deploying instruments on new dedicated space assets and on reflights or rideshares with existing space assets. The curated analysis interface allows the user to examine customized output reports presenting the full range of data and to alter the GMAT code, aided by user-friendly explanations of how the tracked variables are identified. This paper demonstrates the feasibility and value of adopting sophisticated open-source resources for the growing number of university-driven SmallSat and CubeSat programs.
CADENCE – SWANS will perform these tasks as part of a technology demonstration for the Department of Defense and Air Force Research Laboratories (AFRL) through the University Nanosatellite Program (UNP) and the Bronco Space Lab at Cal Poly Pomona.
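The brute-force portion of the search above can be illustrated with a far cruder model than GMAT: a two-body circular orbit over a spherical Earth, scored by the fraction of time its ground track spends inside a latitude/longitude box. The SAA box bounds, the 500 km altitude, and the inclination grid below are illustrative assumptions, not values from the paper:

```python
import math

# Toy brute-force search for the inclination that maximizes dwell time over a
# region of interest, here a rectangle roughly covering the South Atlantic
# Anomaly (bounds are illustrative, not the mission's actual definition).
MU = 398600.4418          # km^3/s^2, Earth gravitational parameter
R_E = 6378.137            # km, Earth radius
W_E = 7.2921159e-5        # rad/s, Earth rotation rate
LAT_MIN, LAT_MAX = math.radians(-50.0), math.radians(0.0)
LON_MIN, LON_MAX = math.radians(-90.0), math.radians(40.0)

def dwell_fraction(alt_km, inc_rad, raan_rad, t_span=86400.0, dt=10.0):
    """Fraction of t_span the subsatellite point spends inside the box
    (two-body circular orbit, spherical Earth, no J2)."""
    a = R_E + alt_km
    n = math.sqrt(MU / a**3)                 # mean motion, rad/s
    inside, steps = 0, int(t_span / dt)
    for k in range(steps):
        t = k * dt
        u = n * t                            # argument of latitude
        lat = math.asin(math.sin(inc_rad) * math.sin(u))
        lon_i = math.atan2(math.cos(inc_rad) * math.sin(u), math.cos(u)) + raan_rad
        lon = (lon_i - W_E * t + math.pi) % (2 * math.pi) - math.pi  # Earth-fixed
        if LAT_MIN <= lat <= LAT_MAX and LON_MIN <= lon <= LON_MAX:
            inside += 1
    return inside / steps

# Brute-force grid over inclination (RAAN and altitude held fixed here).
best_inc, best_frac = max(
    ((i, dwell_fraction(500.0, math.radians(i), 0.0)) for i in range(0, 100, 10)),
    key=lambda p: p[1])
print(best_inc, round(best_frac, 3))
```

In a real trade the dwell metric would be evaluated with a full-force propagator such as GMAT, sweeping RAAN and altitude as well as inclination.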
      • 05.0103 The EXACT X-Ray Positioning, Navigation, and Timing Mission
        Mel Nightingale (University of Minnesota - Twin Cities), Kyle Houser (), John Sample (Montana State University), Demoz Gebre Egziabher (University of Minnesota, Twin Cities Campus) Presentation: Mel Nightingale - -
        This paper presents the Experiments in X-Ray Characterization and Timing (EXACT) mission, an on-orbit demonstration of pulsar-based positioning, navigation, and timing. EXACT will validate the Hard and Fast X-ray spectrometer (HaFX), a sensor designed to generate phase and Doppler measurements from X-ray pulsar observations. Deployed on the International Space Station as part of the Space Test Program’s H-12 mission, HaFX will observe and tag the arrival time of photons from the Crab pulsar (PSR B0531+21). The time tagging will be synchronized with the GPS pulse-per-second. The performance of HaFX will be quantified in post-process by comparing phase (range) and Doppler (velocity) estimates derived from it with estimates from GNSS. The paper outlines the mission design, payload design, and expected results from the on-orbit experiment.
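The core of pulsar-based timing is recovering pulse phase from individual photon arrival times. A minimal sketch, assuming a synthetic photon stream rather than HaFX data, is the classic epoch-folding search: fold arrival times at trial periods and keep the period that maximizes the non-uniformity (chi-square) of the phase histogram. The period value, pulse shape, and counts below are all illustrative:

```python
import math, random

# Illustrative epoch-folding period search: simulate photon arrival times from
# a Crab-like pulsar plus uniform background, then recover the spin period by
# maximizing the chi-square of the folded phase histogram. (Synthetic toy,
# not the HaFX processing pipeline.)
random.seed(0)
P_TRUE = 0.0336         # s, roughly the Crab pulsar spin period
T_OBS = 10.0            # s of simulated observation

# Pulsed photons cluster near phase 0.2 of each cycle; background is uniform.
times = [(random.randrange(int(T_OBS / P_TRUE)) + random.gauss(0.2, 0.02)) * P_TRUE
         for _ in range(1000)]
times += [random.uniform(0.0, T_OBS) for _ in range(1000)]

def folded_chi2(ts, period, nbins=16):
    """Chi-square of the phase histogram obtained by folding ts at period."""
    counts = [0] * nbins
    for t in ts:
        counts[int(((t / period) % 1.0) * nbins) % nbins] += 1
    mean = sum(counts) / nbins
    return sum((c - mean) ** 2 / mean for c in counts)

# Trial grid fine enough to resolve phase drift over the observation span.
trials = [0.0330 + k * 5e-5 for k in range(25)]
best_period = max(trials, key=lambda p: folded_chi2(times, p))
print(best_period)
```

A flight implementation would additionally correct arrival times to the solar-system barycenter and fit phase and Doppler jointly, which this sketch omits.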
      • 05.0104 CubeSat Enabled Differential Absorption Radar for Lunar Water and Ice Mapping
        Aman Chandra (FreeFall Aerospace) Presentation: Aman Chandra - -
        Observations of the Moon from recent missions such as the NASA/DLR Stratospheric Observatory for Infrared Astronomy (SOFIA) have independently detected widespread hydration on the lunar surface. Analysis of spectral signatures confirms molecular water in abundances ranging from 100–400 µg g−1 H2O. The University of Arizona is developing a Ka-band chirp radar system with an integrated 2.5-meter inflatable membrane deployable reflector. The payload packages into a 6U volume, making it a suitable payload for deployment from a 12U CubeSat. This system has the potential to provide a game-changing ~10 cm vertical and <2 km cross-range resolution, thus enabling 3-D mapping of water on the lunar surface. The 12U spacecraft design also includes a passive radiometer for independent water line detection using limb sounding against the sun. The mission, named LunaCat, has been proposed to NASA’s 2024 CSLI program and derives heavily from the CatSat 6U inflatable antenna program currently operational in LEO. LunaCat's launch is nominally planned for early-to-mid 2027, with a 2-year mission lifetime including a 1-year science operations phase. This paper describes LunaCat's detailed mission design and the development of the chirp radar payload hardware.
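The ~10 cm vertical resolution quoted above is ultimately set by the transmitted chirp bandwidth through the generic radar relation Δr = c / 2B. A quick sketch (generic physics, not the LunaCat waveform design) inverts that relation:

```python
# Range (vertical) resolution of a chirp radar is set by the transmitted
# bandwidth: delta_r = c / (2 * B). This back-of-the-envelope sketch inverts
# that relation for the ~10 cm figure quoted in the abstract (illustrative
# only; the actual LunaCat waveform and link budget are not reproduced here).
C = 299_792_458.0  # m/s, speed of light

def range_resolution(bandwidth_hz):
    """Achievable range resolution (m) for a given chirp bandwidth (Hz)."""
    return C / (2.0 * bandwidth_hz)

def required_bandwidth(delta_r_m):
    """Chirp bandwidth (Hz) needed for a desired range resolution (m)."""
    return C / (2.0 * delta_r_m)

bw = required_bandwidth(0.10)          # bandwidth implied by 10 cm resolution
print(bw / 1e9)                        # roughly 1.5 GHz of chirp bandwidth
```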
      • 05.0105 The Design and Performance of the BAE Systems FASTER Hyperspectral Microwave Radiometer
        David Newel (BAE Systems) Presentation: David Newel - -
        BAE Systems is conducting the R1 mission on the International Space Station (ISS), which includes a hyperspectral radiometer named the Full Atmosphere Sounding Radiometer (FASTER). The R1 mission is an internally funded activity to develop and demonstrate a CubeSat-compatible hyperspectral radiometer and advance the technology readiness level (TRL) of this hardware through space operation. Hyperspectral radiometry is being used in the design of next-generation microwave instruments that exceed the performance of current operational sensors while reducing size and mass to a CubeSat-compatible form factor. For the R1 mission, the selected application of this next-generation capability is hyperspectral sampling across the oxygen absorption line from 55 to 58 GHz to provide Atmospheric Vertical Temperature Profile (AVTP) Environmental Data Records (EDRs). This paper gives an overview of the R1 mission and the FASTER instrument and provides the on-orbit performance. The FASTER instrument comprises an antenna, an RF receiver, and a digital receiver. An overview of the antenna design and the RF receiver is provided. The digital receiver consists of a high-speed analog-to-digital converter (ADC) board and an FPGA-based digital processor board. The ADC operates at 6.4 giga-samples-per-second, providing 3.2 GHz of bandwidth, with all the sampled data flowing to the FPGA board. The FPGA board ingests the raw sampled data and converts it to spectral bins, integrating and aggregating the sampled signals. Weighting functions are provided for FASTER and for heritage instruments. Atmospheric weighting functions for the FASTER instrument show the fine vertical spacing and coverage into the upper atmosphere provided by the FASTER hyperspectral instrument. On-orbit performance results are summarized. AVTP retrievals are summarized and show good correlation to Numerical Weather Products (NWP) and to ATMS. Comparisons between NWP and FASTER for an entire collect are shown.
To determine a quantitative performance estimate, each of these collects is averaged to generate a profile showing the bias and precision relative to NWP as a function of altitude. These results are presented and show performance within the current operational instrument performance requirements. With months of data taken, the overall performance of FASTER relative to NWP is presented and trends are analyzed. Since the ISS orbit crosses the ATMS orbit multiple times per day, a large number of FASTER samples co-located with ATMS have been collected. Comparisons of FASTER performance to ATMS are presented for multiple co-locations. Performance through the middle of the atmosphere, where both FASTER and ATMS have channels, is shown to be nearly identical. The benefit of having additional channels with weighting functions in the upper atmosphere is shown, and future work in this area is identified. The FASTER instrument has demonstrated a hyperspectral radiometer with reduced mass and envelope compatible with CubeSat missions. Even with the limitations of the R1 technology demonstration mission, FASTER has demonstrated EDR performance equaling that of the operational ATMS instrument, illustrating the capability of a hyperspectral instrument.
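The digital receiver's "raw samples → spectral bins → integrate" flow can be sketched in miniature. The channel count, frame length, and tone placement below are arbitrary illustrations, and a naive DFT stands in for whatever channelizer the FASTER FPGA actually implements:

```python
import math

# Minimal sketch of a digital spectrometer back end: sampled frames are
# transformed into spectral bins and bin powers are accumulated (integrated)
# across frames. Illustrative only; the FASTER FPGA design is not public
# in this abstract, and a naive DFT replaces its real channelizer.
N = 64                      # samples per frame = 2 * spectral channels
FRAMES = 8                  # frames accumulated per output dump

def dft_power(frame):
    """Naive DFT power spectrum (first N//2 bins) of one real-valued frame."""
    half = N // 2
    spec = []
    for k in range(half):
        re = sum(frame[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
        im = -sum(frame[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
        spec.append(re * re + im * im)
    return spec

# Simulated IF input: a tone centered in channel 13 (arbitrary test bin).
TONE_BIN = 13
acc = [0.0] * (N // 2)
for f in range(FRAMES):
    frame = [math.cos(2 * math.pi * TONE_BIN * (f * N + n) / N) for n in range(N)]
    for k, p in enumerate(dft_power(frame)):
        acc[k] += p                      # integrate power across frames

peak = max(range(N // 2), key=lambda k: acc[k])
print(peak)
```

At the instrument's 6.4 GSa/s rate this work is done in FPGA fabric, typically with a pipelined FFT or polyphase filter bank rather than a direct DFT.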
      • 05.0106 The Farside Seismic Suite: Status Update and Plans
        Asad Aboobaker (Jet Propulsion Laboratory) Presentation: Asad Aboobaker - -
        The Farside Seismic Suite (FSS) is a lunar surface payload for studying seismic activity on the Moon to inform questions of lunar seismicity and deep internal structure, local surface structure near its landing site, and micrometeorite impact rates. Delivered to the far side of the Moon on the deck of a commercial lander, FSS is designed to take seismic data continuously for over 4 months, including throughout the lunar night, from its landing site in Schrödinger Basin. FSS contains two sensitive seismometers as well as power, communications, and thermal control systems that allow it to outlast its host lander and operate independently for its planned mission duration. Here we report on the status of FSS’ development and plans for future work ahead of its expected launch in 2027. We describe aspects of FSS’ integration and test campaign, including functional and performance testing, vibration testing, thermal vacuum testing, and testing of the FSS radio. We also describe an issue that was discovered with an electronic part on FSS’ battery management card. The state of FSS’ flight software and flight sequence development is also described. Finally, the plans for integration with the host lander prior to launch are outlined at a high level.
    • Young Lee (Jet Propulsion Laboratory) & Laura Jones-Wilson (Jet Propulsion Laboratory) & Dexter Becklund (The Aerospace Corporation)
      • 05.0201 A Prototype Low-Cost Hybrid-Actuated Mobile Spherical Terrain Exploration Rover
        Winnie Gao (California Polytechnic State University, San Luis Obispo), YouYou Luo (Cal Poly SLO), Drew Schlauch (California Polytechnic State University), Stephen Kwok Choon (California Polytechnic State University of San Luis Obispo) Presentation: Winnie Gao - -
        This paper presents the design, development, and testing of a low-cost Hybrid-Actuated Mobile Spherical Terrain Exploration Rover (HAMSTER) intended for use on a lunar surface exploration mission. As current technologies develop and push the boundaries of space exploration, the need for robotic systems capable of maneuvering across various obstacles and terrains is expected to increase. This paper explores the design considerations, manufacturing, and testing completed. HAMSTER uses a pendulum-based actuation method for steering and an actuated internal shaft that propels the vehicle forward. The internal structure includes a two-tiered central case comprising: the navigation control module, accelerometer, motor controller, battery, voltage regulators, Raspberry Pi, pendulum servo for steering, and a motor shaft. An internally mounted camera is being considered to aid navigation; however, this has presented challenges that are discussed. Additive manufacturing through 3D printing was selected for rapid prototyping of the HAMSTER chassis and exterior shell. This has allowed for the proof-of-concept phase of this design sequence, where the shape, form, and function of the vehicle are refined through iteration. In conclusion, this paper outlines the considerations found in the design, prototyping, fabrication, and assembly, with a discussion of proposed work to be completed by Winter 2026 that includes integration and preliminary testing of the HAMSTER vehicle. Introduction: Spherical robots have been explored and used in many terrestrial applications; this project focuses on how this type of locomotion mechanism can work within a challenging terrain such as the Moon's surface. The increasing demand for small robotic systems capable of overcoming rough terrain presents an opportunity to explore the use of rolling locomotion. Several similar prototypes have been explored, such as the Roball and Rosphere.
HAMSTER aims to develop a low-cost, spherical robot to explore the capabilities of rolling locomotion for a lunar terrain mission. This paper details the design, development, fabrication, and preliminary testing of the vehicle. Included is a description of the motor actuation performance, exterior shell design, and low-cost considerations that allow for rapid prototyping. Also included is a preliminary description of the missions in which such a vehicle could be utilized. In conclusion, this paper presents a low-cost, accessible approach to the development of a spherical rover, with in-field testing to verify and validate its performance.
      • 05.0202 AstroDOME: Standardized, Mass-Producible, Small NASA Astrophysics Observatories for Big Missions
        Matthew Marcus (NASA Goddard Space Flight Center) Presentation: Matthew Marcus - -
        NASA’s Goddard Space Flight Center is investigating the use of small, commercial off-the-shelf (COTS) spacecraft to enable game-changing astrophysics research at radically reduced cost over previous generations of missions. This paper will focus on the mission systems perspective of this investigation. It will cover the spacecraft-to-instrument interface and lessons learned in early concept payload development. The aforementioned low-cost spacecraft have proliferated in the last ~5 years. They represent an order-of-magnitude reduction in the cost of spacecraft bus development and acquisition. They are enabled by the low-cost smallsat launch options of the immediately preceding years, and are seeded by recent disaggregated missions in other sectors such as telecommunications megaconstellations (e.g. Starlink, OneWeb, Project Kuiper) and ongoing Space Development Agency (SDA) disaggregated architecture developments. Our study was tasked with developing a standardized, mass-producible astrophysics payload that is reconfigurable to different wavelength bands while maintaining a single optics bench design and a standardized mechanical, electrical, data, and thermal interface to the spacecraft bus, and that may be reconfigured for multiple COTS spacecraft offerings. In this paper we will discuss lessons learned from our case study. We will also present identified keys to mission success for performing cutting-edge astrophysics observations while taking advantage of economies of scale with a disaggregated, standardized architecture.
      • 05.0203 HexSense Lunar Mapping: Deployable 360 Cameras for Panoramic Inspection & 3D Reconstruction
        Fangzheng Liu (MIT Media Lab), Nicolas STAS (Massachusetts Institute of Technology), Ariel Ekblaw (Massachusetts Institute of Technology), Joseph Paradiso (MIT) Presentation: Fangzheng Liu - -
        The HexSense is a type of low-cost, miniature wireless sensor node that can be ballistically deployed from a rover or lander to an area of interest on the lunar surface. Upon landing, each HexSense can automatically stand upright to achieve a reasonably well-determined orientation, enabling better wireless communication. With a modular design, each HexSense can carry different sensor payloads for different scientific applications. In this paper, we show a novel approach for in-situ panoramic inspection and 3D mapping of an area of interest on the lunar surface using HexSense nodes equipped with custom-designed 360-degree cameras. The unique ballistic deployment mechanism of HexSense allows sensor nodes to be scattered across challenging terrains and automatically orient for optimal data collection. The captured images can be stitched together into panoramic images for in-situ omniview inspection. The images from multiple HexSenses can also be used to reconstruct a 3D model of the area of interest on the lunar surface. This method addresses the challenges of mapping hard-to-reach or dangerous areas for a slow-moving rover or a non-movable lander. We describe the system design, time synchronization approach, localization approach, panoramic inspection, and 3D reconstruction results, highlighting the potential for scalable, distributed mapping to support future lunar exploration and scientific research.
    • Rachit Bhatia (West Virginia University) & Ashwati Das-Stuart (NASA Jet Propulsion Lab) & Ryan Woolley (Jet Propulsion Laboratory)
      • 05.0301 Decentralized Allocation of Observation Tasks for Cooperating Agile Spacecraft Clusters
        Nicholas Niziolek (University of Colorado, Boulder), Nolan Ales (University of Colorado, Boulder), Eric Frew (University of Colorado, Boulder), Jonathan Chan (Lockheed Martin), Michael Jacobs (), Moses Chan (Lockheed Martin) Presentation: Nicholas Niziolek - -
        The rapid growth in the number of large-scale Earth-observing satellite constellations has made autonomous task allocation and observation scheduling problems increasingly relevant. This work focuses on an approach to the task allocation problem of distributing mission-critical targets among commanding satellites to make the best use of the available sensing satellites. Many current approaches to this problem are centralized, relying on a single commanding agent for allocation, and are susceptible to single-point failures, communication network fracture, and the computational limits of solving these large problems. This work focuses on a decentralized approach, requiring inter-satellite communication and consensus to distribute targets from a globally known list. To perform task allocation, this work utilizes the decentralized consensus-based bundle assignment (CBBA) algorithm. The CBBA algorithm uses a greedy approach to generate task bundles and a market-based consensus approach to eliminate conflicts. One of the challenges with task allocation is that the value of an assignment is a function of how the task can be scheduled by a spacecraft cluster. Thus, in order to determine the score for each potential task bundle, a lightweight scheduling algorithm based on a mixed integer linear program (MILP) is used to provide an approximation of the final schedule given the assignment. This MILP is formulated to optimize pointing directions for each satellite at each time within a discrete set of possible directions. The resulting objective function value is a heuristic for covariance and error-based mission objectives, approximated by a diminishing return function using the target observation counts. Because this scoring method is not submodular, a variation of CBBA was implemented, called bid-warped CBBA (BWCBBA), allowing the same global consensus properties of CBBA with non-submodular scoring methods.
Simulation experiments were used to demonstrate and assess the task allocation algorithm. The experiments simulated a constellation of satellites with narrow field-of-view sensors and instantaneous pointing capabilities. The algorithm was compared against the performance of the MILP as solved over the whole constellation, providing a centralized baseline. Each algorithm was scored using the sum of rewards for each target, as calculated with a diminishing return function based on the observations from the resulting MILP schedules. For the centralized case, all observations of each target are used in scoring, whereas only observations from assigned agents are considered in the decentralized case. This assumption ignores the possibility of opportunistic observations of targets assigned to other agents, which were common in simulation. This assumption significantly impacts the final scoring, where allowing opportunistic measurements was shown to increase the final score. Additional results provide further analysis of the mission completion performance and scalability of the investigated algorithm.
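The greedy bundle-building step at the heart of CBBA-style allocation, paired with a diminishing-return reward on observation counts, can be sketched as follows. This toy replaces the MILP-based scheduler with raw observation counts and the market-based consensus phase with a sequential greedy pass, so it illustrates only the scoring structure, not the paper's algorithm:

```python
import math

# Toy version of greedy bundle building for CBBA-style task allocation:
# each agent repeatedly adds the task with the highest marginal reward until
# its bundle is full. The reward on a target's observation count is a
# diminishing-return function, as in the abstract; the consensus phase and
# the MILP-based scheduler are omitted for brevity.
def reward(count):
    """Diminishing return in the number of observations of one target."""
    return 1.0 - math.exp(-count)

def marginal_gain(counts, task):
    return reward(counts[task] + 1) - reward(counts[task])

def build_bundles(num_agents, tasks, capacity):
    counts = {t: 0 for t in tasks}           # global observation counts
    bundles = [[] for _ in range(num_agents)]
    for agent in range(num_agents):          # sequential pass (consensus stand-in)
        for _ in range(capacity):
            task = max(tasks, key=lambda t: marginal_gain(counts, t))
            bundles[agent].append(task)
            counts[task] += 1
    return bundles, counts

bundles, counts = build_bundles(num_agents=3, tasks=list(range(5)), capacity=2)
print(bundles, counts)
```

Because the marginal gain shrinks as a target accumulates observations, the greedy pass naturally spreads agents across targets before doubling up, which is the behavior the diminishing-return scoring is meant to induce.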
      • 05.0302 Optimal Design of Distributed Transport Architectures with Multimodal On-Orbit Servicers
        Shanmurugan Selvamurugan (Georgia Institute of Technology), Edgar Lightsey (Georgia Tech) Presentation: Shanmurugan Selvamurugan - -
        Propellant capacity remains a central bottleneck in spacecraft operations, making on-orbit servicing (OOS)—through propellant depots and refuellable servicing spacecraft (servicers)—a promising pathway to extend the lifespan of orbital assets. Servicers can maneuver payload satellites to higher orbits using their own fuel, allowing the payloads to conserve propellant for mission-specific tasks. One approach to an OOS-enabled infrastructure involves a distributed network of space-resident transport vehicles operating between low-Earth orbit (LEO), medium-Earth orbit (MEO), and geostationary orbit (GEO), enabling enhanced flexibility and adaptability to diverse mission profiles. Multimodal propulsion systems, combining high-thrust, low-Isp and low-thrust, high-Isp modes, amplify the benefits provided by such architectures by enabling fuel-efficient transfers under strict time constraints. This paper investigates the design of both time-constrained fuel-optimal and time-optimal multi-vehicle transport architectures using servicers equipped with multimodal propulsion. Analytical methods are developed to estimate time of flight (TOF) and ΔV costs for transfers between circular inclined orbits with differing right ascensions of the ascending node (RAAN). These models incorporate J2 perturbations, employing two-burn impulsive transfers with combined semi-major axis and inclination changes for high-thrust maneuvers, and a piecewise-constant thrust control approach for low-thrust maneuvers. To solve both formulations of the transport logistics problem, a hybrid optimization framework is adopted: a genetic algorithm (GA) is used to explore the design space for optimal depot placement, staging and switching orbits, and properties of the high-thrust and low-thrust transfer legs, while sequential quadratic programming (SQP) refines the solutions.
Monte Carlo simulations over varied initial RAAN phasing between network nodes are used to assess how servicer mass influences fuel efficiency and transfer duration in a dual-servicer architecture, assuming a baseline propellant mass fraction estimated from design heuristics and data from comparable spacecraft.
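The hybrid global-then-local pattern (GA seeding SQP) can be shown in miniature on a classic sub-problem: splitting a 28.5° plane change between the two burns of a LEO-to-GEO Hohmann transfer. Random search stands in for the GA and a golden-section search for SQP; the orbit radii and inclination are representative textbook values, not the paper's scenario:

```python
import math, random

# The GA + SQP pattern in miniature: a cheap global search seeds a local
# refinement. The toy objective is how to split a 28.5 deg plane change
# between the two burns of a LEO-to-GEO Hohmann transfer (random search
# stands in for the GA, golden-section search for SQP; numbers are
# representative, not from the paper).
MU, R1, R2 = 398600.4418, 6778.0, 42164.0      # km^3/s^2; LEO and GEO radii, km
A_T = 0.5 * (R1 + R2)                          # transfer ellipse semi-major axis
V1 = math.sqrt(MU / R1)                        # circular speed at LEO
V2 = math.sqrt(MU / R2)                        # circular speed at GEO
VT1 = math.sqrt(MU * (2.0 / R1 - 1.0 / A_T))   # transfer perigee speed
VT2 = math.sqrt(MU * (2.0 / R2 - 1.0 / A_T))   # transfer apogee speed
DI = math.radians(28.5)

def total_dv(s):
    """Total delta-v (km/s) with fraction s of the plane change at burn 1."""
    dv1 = math.sqrt(V1**2 + VT1**2 - 2 * V1 * VT1 * math.cos(s * DI))
    dv2 = math.sqrt(V2**2 + VT2**2 - 2 * V2 * VT2 * math.cos((1 - s) * DI))
    return dv1 + dv2

random.seed(0)
# Stage 1: global exploration (GA stand-in).
s_best = min((random.random() for _ in range(200)), key=total_dv)
# Stage 2: local refinement by golden-section search around the incumbent.
lo, hi = max(0.0, s_best - 0.1), min(1.0, s_best + 0.1)
g = (math.sqrt(5) - 1) / 2
for _ in range(60):
    c, d = hi - g * (hi - lo), lo + g * (hi - lo)
    if total_dv(c) < total_dv(d):
        hi = d
    else:
        lo = c
s_opt = 0.5 * (lo + hi)
print(round(s_opt, 4), round(total_dv(s_opt), 4))
```

As expected, most of the plane change ends up at the high, slow apogee burn, so the optimal first-burn fraction is small; the real framework applies the same two-stage pattern to a far larger design vector.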
      • 05.0303 Autonomous Propellant-Free CubeSat Formation Control: Toward Flight Validation on SNUGLITE-III
        Jae Woong Hwang (Seoul National University), Hanjoon Shim (Seoul National University), Yonghwan Bae (Seoul National University), Changdon Kee (Seoul National University), Jaegang Kim (Sejong University) Presentation: Jae Woong Hwang - -
        Distributed space systems with small satellites have been gaining increasing attention in recent years due to their flexibility, scalability, and potential for cost-effective space missions. One of the key challenges in such systems is formation control—specifically, how to control the satellites' relative motion and maintain precise formations. While thruster-based approaches have traditionally been used, their application to CubeSats is often limited by strict mass and volume constraints. Consequently, various propellant-free control techniques have been actively investigated as viable alternatives. Among these, differential aerodynamic forces—namely, drag and lift—have received attention in Low Earth Orbit (LEO), where they are among the dominant perturbation forces acting on satellites. Formation control using these forces has been extensively studied and successfully demonstrated in several CubeSat missions. However, those missions generally lacked key onboard capabilities—such as real-time relative navigation and inter-satellite communication—required for fully autonomous control. As a result, previous maneuvers relied heavily on ground-based support, including orbit determination, maneuver planning, and command uplink. While effective for maintaining coarse formations, this ground-dependent approach is limited in both responsiveness and precision, restricting its applicability to close-range formation flying or rendezvous and docking. This paper presents the design, implementation, and simulation of an autonomous formation control system developed for the SNUGLITE-III (Seoul National University GNSS Laboratory satellite-III) mission. SNUGLITE-III consists of two identical 3U CubeSats, scheduled to be launched as a joint 6U unit in November 2025. After deployment, the two satellites are planned to passively separate in orbit and perform close-range formation flying within a 1 km separation range, followed by autonomous rendezvous and docking.
Each satellite is equipped with a GPS-based relative navigation system and an inter-satellite communication link, enabling onboard computation of attitude commands to generate differential drag and lift. The proposed system is evaluated through high-fidelity simulations covering the full operational sequence—from initial separation to formation acquisition and maintenance. These findings highlight the system’s potential as a practical solution for future autonomous formation flying and rendezvous operations in LEO.
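The idea of steering with differential drag can be illustrated (this is not the SNUGLITE-III flight algorithm) by treating the along-track separation error as a double integrator whose bounded control authority is a differential-drag-scale acceleration, driven to zero by a minimum-time bang-bang law:

```python
import math

# Minimal sketch of propellant-free along-track station-keeping: the relative
# along-track error is modeled as a double integrator and driven to zero by a
# bang-bang (minimum-time) law, with the control authority set to a
# differential-drag-scale acceleration. Illustration of the idea only, not
# the SNUGLITE-III flight algorithm.
A_MAX = 1e-5         # m/s^2, representative differential-drag authority in LEO
DT = 1.0             # s, integration step
x, v = 1000.0, 0.0   # initial along-track separation error (m) and rate (m/s)

def control(x, v):
    """Minimum-time bang-bang law for a double integrator, with a deadband."""
    if abs(x) < 1.0 and abs(v) < 1e-3:
        return 0.0                               # coast near the origin
    # Switching function: distance from the minimum-time braking curve.
    sigma = v + math.copysign(math.sqrt(2.0 * A_MAX * abs(x)), x)
    return -math.copysign(A_MAX, sigma)

for _ in range(40000):                           # ~11 hours of simulated time
    u = control(x, v)
    v += u * DT
    x += v * DT

print(round(x, 2), round(v, 5))
```

In reality differential drag acts through semi-major-axis decay rather than as a direct along-track force, so flight implementations plan high-drag versus low-drag attitude profiles over many orbits instead of switching every second.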
      • 05.0304 Low-Control Satellite Formations for Enhanced GNSS-R Resolution in Perturbed Orbits
        Angel Pan Du (University of Luxembourg), Lawanya Awasthi (University of Luxembourg), M. Amin Alandihallaj (University of Luxembourg), Andreas Hein (SnT, University of Luxembourg) Presentation: Angel Pan Du - -
        High-fidelity Earth observations for measuring geophysical parameters have become indispensable for areas such as climate science and resource management. In recent years, this demand has driven a shift toward formations of small, cost-effective satellites rather than single large platforms. These distributed systems enable simultaneous, multi-point measurements across wide areas, improving temporal coverage and resilience. Global Navigation Satellite System Reflectometry (GNSS-R) exploits navigation signals reflected from the Earth's surface to retrieve geophysical parameters. However, its low spatial resolution is a limiting factor, and no practical high-resolution implementations currently exist, owing to the need for widely spatially distributed measurements in space. One approach to overcome this limitation is a mission flying a synchronized formation of CubeSats (satellites built from 10x10x10 cm units) as a distributed antenna array. By combining reflected-signal measurements from multiple distributed satellites, a larger aperture is effectively synthesized, yielding finer ground sampling than achievable with single-satellite systems. This article presents a novel framework for enhancing GNSS-R spatial resolution by exploring how various CubeSat formation geometries affect beamforming performance and station-keeping requirements. Optimal configurations are identified to balance instantaneous resolution, ground-illumination stability, and control effort. The formation would target a nominal resolution of under 1 km, an order-of-magnitude improvement over the 10 km resolution achieved by the finest GNSS-R mission to date. The effectiveness depends critically on the formation geometry, the control used to maintain it, and the desired signal coherence over time.
Some formation geometries can maintain near-constant ground illumination, but perturbations induce drift in relative positions, which degrades the synthesized beam pattern and spatial resolution over time. The presented approach characterizes satellite formation-keeping accounting for orbital perturbations, such as Earth non-sphericity, solar radiation pressure, aerodynamic drag, and third-body effects, in order to quantify their impact on the resulting beamforming performance. Given the constraints of CubeSats, including limited volume, mass, and power, active station-keeping via propellant-less control methods is particularly appealing for enabling scalability and a longer mission lifetime. One such technique combines solar radiation pressure and aerodynamic forces, a method well suited to deploying, maintaining, and reconfiguring a large formation of small satellites. These methods require detailed modelling of the satellite dynamics, including the effects of aerodynamic forces (caused by the residual atmosphere in LEO) and solar radiation pressure. A preliminary comparative analysis demonstrates that the satellite configuration affects beamforming performance under orbital perturbations. Different formation geometries are evaluated, such as linear, circular, Y-shaped, and spiral configurations, against performance metrics such as sidelobe suppression and spatial resolution degradation over time. While Y-shaped formations provide high instantaneous resolution, they exhibit spatial inconsistencies along the orbit that are aggravated by perturbation-induced drift.
In contrast, spiral formations, owing to their elliptical and distributed structure, maintain more consistent ground illumination over time with low control requirements, needing only periodic compensation for orbital perturbations to preserve beam-pattern stability and spatial resolution; this makes them more suitable for sustained mission performance.
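The resolution gain from a synthesized aperture follows the usual diffraction scaling: cross-baseline ground resolution is roughly lambda * R / B for wavelength lambda, observation range R, and baseline B. The sketch below uses illustrative assumed values (a 550 km altitude and the ~19 cm GNSS L1 wavelength) to show why formation baselines, and their drift under perturbations, map directly to spatial resolution:

```python
def ground_resolution(wavelength_m, range_m, baseline_m):
    """First-order cross-baseline resolution of a synthesized aperture:
    delta_x ~ lambda * R / B (diffraction limit of the sparse array)."""
    return wavelength_m * range_m / baseline_m

wavelength = 0.19   # GNSS L1-band wavelength, m (~1575 MHz)
altitude = 550e3    # assumed LEO altitude, m

# A single CubeSat antenna (~0.1 m) versus formation-scale baselines
for baseline_m in (0.1, 100.0, 1000.0):
    res_km = ground_resolution(wavelength, altitude, baseline_m) / 1000.0
    print(f"baseline {baseline_m:7.1f} m -> resolution {res_km:10.3f} km")
```

Under these assumptions a baseline of a few hundred meters already reaches the sub-kilometer target, which is why perturbation-induced baseline drift translates directly into resolution degradation over the orbit.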
      • 05.0305 Small Formation-Flying Swarm for Remote Sensing through SAR from Very Low Earth Orbit
        Jose Pedro Ferreira (University of Southern California), Aaron Pereira (University of Adelaide), Jose Velazco (Chascii Inc.) Presentation: Jose Pedro Ferreira - -
        Synthetic Aperture Radar (SAR) imagery utilizes the emission and reception of electromagnetic waves along with the relative motion of the antenna to map the environment based on object reflectivity. This motion over a target area creates a synthetic aperture, leveraging high altitudes to image extensive strips of terrain. Operation at radio frequencies makes SAR independent of lighting conditions, enabling nighttime imagery, and only minimally affected by weather phenomena. Space-borne SAR offers several advantages, including all-weather, day-to-night operation and data collection of ground features. Depending on the frequency of operation, various levels of performance are achievable for Earth Observation (EO), from a few meters of subsurface penetration at P-Band to high-definition imaging at Ka-Band. Legacy space-based SAR systems were launched on proven satellite platforms that are large, bulky, and expensive, in order to accommodate the large antennas and solar panels that support the required duty cycle of the RF transmitter. In recent years, commercial operators like ICEYE, Capella Space, and Umbra have introduced X-Band imaging solutions for the broader market. A novel capability enabled by an inter-spacecraft optical communicator (ISOC) between swarm members is aperture synthesis across distributed sensors. This has the potential to significantly improve remote sensing performance through radar cross-range resolution improvement using separate radar transceivers. To produce a coherent beam from both satellites, we will interconnect them via their optical communicators to synchronize their VHF transmissions. Very low-Earth orbit (VLEO), situated at altitudes between 250-350 km, offers a unique vantage point for cost-effective Earth observation missions by improving revisit times and increasing spatial resolution.
Compared to higher orbits, SAR in VLEO offers distinct advantages including improved spatial resolution, reduced power requirements, better signal-to-noise ratio, cost efficiency, and optimized revisit times, complementing the infrastructure deployed at higher and lower altitudes. Challenges such as increased atmospheric drag lead to shorter satellite lifespans and more complex orbital maintenance techniques. The lower orbital altitude also reduces ground coverage; a reconfigurable antenna with steering and shaping capabilities compensates for this disadvantage. This mission also uses high-speed optical links for precision timing and ranging to synthesize flexible and dynamic SAR apertures using distributed VLEO platforms. Such distributed bistatic SAR architectures are resilient, adaptable, and effective. Small launchers will support these platforms, enabling the high launch cadence critical for national security and disaster monitoring applications. The capability is enabled by the ISOC mounted on each satellite platform, forming a small formation-flying swarm linked to the ground using high-throughput optical terminals that enable rapid data downlinking. Using cost-effective, distributed VLEO platforms with inter-satellite optical links enables bistatic SAR to deliver unmatched performance and capability in a small-satellite form factor for EO and national security architectures, especially in high-latitude regions.
      • 05.0306 Autonomous Visual-Inertial Navigation for Networked Satellites
        Frederik Markus (Carnegie Mellon University), Zachary Manchester (Carnegie Mellon University) Presentation: Frederik Markus - -
        Current orbit-determination methods rely heavily on Earth-based infrastructure like ground-based radars or Global Navigation Satellite Systems (GNSS). This paper introduces a joint orbit-and-attitude-determination method using angular velocity data from a gyroscope, one or more low-cost RGB cameras and ranging data from other satellites. Machine-vision algorithms are used to identify known landmarks in images captured by the spacecraft. These landmark locations are fused with gyro data and satellite ranging measurements and combined with a spacecraft dynamics model in an Iterated Extended Kalman Filter (IEKF) that estimates the position, velocity, and attitude of the spacecraft, as well as the gyroscope bias. This full pose estimation is done autonomously, without the use of any exogenous, Earth-based inputs. We explore the benefits of extending the number of networked satellites and investigate different network topologies. We validate the performance of the approach in simulation using satellite imagery.
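The iterated measurement update at the heart of an IEKF relinearizes the nonlinear measurement model about each successive state iterate rather than only about the prior. The sketch below applies that idea to a toy 2-D problem with range measurements to known landmarks; all values are hypothetical, and the paper's filter additionally fuses gyro data and estimates velocity, attitude, and gyro bias.

```python
import numpy as np

def iekf_update(x0, P, landmarks, ranges, R_meas, iters=5):
    """Iterated measurement update: relinearize the range model about the
    current iterate, x_{i+1} = x0 + K (z - h(x_i) - H (x0 - x_i))."""
    x = x0.copy()
    for _ in range(iters):
        diffs = x - landmarks                  # (n, 2) offsets to landmarks
        pred = np.linalg.norm(diffs, axis=1)   # predicted ranges h(x_i)
        H = diffs / pred[:, None]              # Jacobian of h at x_i
        S = H @ P @ H.T + R_meas               # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x0 + K @ (ranges - pred - H @ (x0 - x))
    P_new = (np.eye(2) - K @ H) @ P
    return x, P_new

# Hypothetical landmarks and true position (illustrative only)
landmarks = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
x_true = np.array([3.0, 4.0])
ranges = np.linalg.norm(x_true - landmarks, axis=1)   # noiseless ranges

x0 = np.array([1.0, 1.0])       # prior mean
P = np.eye(2) * 4.0             # prior covariance
R_meas = np.eye(3) * 1e-4       # measurement noise covariance

x_est, _ = iekf_update(x0, P, landmarks, ranges, R_meas)
```

The re-iteration matters precisely when the prior is far from the truth, which is the situation an autonomous spacecraft faces without ground-based initialization.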
    • Michael Swartwout (Saint Louis University) & Bruce Yost (NASA - Ames Research Center) & John Samson (Morehead State University )
      • 05.0402 Mission Life Enhancement Results for the Imaging X-ray Polarimetry Explorer (IXPE)
        Bill Kalinowski (BAE Systems) Presentation: Bill Kalinowski - -
        Almost 4 years into its mission, the Imaging X-ray Polarimetry Explorer (IXPE) continues to produce award-winning astrophysics discoveries with its three imaging polarimeters. Launched in December 2021, IXPE now operates beyond its 2-year design life and 3-year extended mission. Over the last several years, the IXPE team studied operational adjustments that could enhance the lifetime of the single-string spacecraft, enabling continued scientific data collection. The team focused its efforts on the most notable life-limiting element of the IXPE spacecraft, the lithium-ion battery. The wet chemistry of IXPE’s lithium-ion battery degrades over time as a function of its temperature, the number of charge and discharge cycles, and the level of charge and discharge. In identifying key operational changes that could extend the battery life, the IXPE team updated thermal and power system models. The thermal model update included re-correlating the pre-launch model using on-orbit data, as well as a more refined battery thermal model that incorporated the endo- and exothermic properties of the battery during charge and discharge cycles. Additionally, the power system model was refined using detailed data provided by the battery supplier. The team devised and implemented operational changes that decreased the depth of discharge of the battery on each orbit. The changes were modeled and predicted to add years' worth of charge/discharge cycles to the battery life, thus allowing the mission to be extended. The changes included modifications to battery heater setpoints during eclipse and scheduling ground station passes only during sunlit periods of the orbit. This paper describes the operational changes as well as the observed improvement of battery performance following these changes.
IXPE was executed as a NASA Class D Small Explorer Mission led by Marshall Space Flight Center with Ball Aerospace as the prime contractor responsible for providing the spacecraft, payload elements, and performing system level assembly, integration, and test. The mission-enabling imaging polarimeters are an international contribution from the Italian Space Agency through the Italian National Institute of Nuclear Physics (INFN), the Institute of Astrophysics and Space Planetology (IAPS) and National Institute of Astrophysics (INAF). The mission collaboration includes Stanford University, McGill University, Massachusetts Institute of Technology (MIT), and the University of Colorado Laboratory for Atmospheric and Space Physics (LASP).
      • 05.0403 Revectoring a Mission Post-Launch: The AFRL's XVI Mission, Operations and Lessons Learned
        Erika Chavez (Space Dynamics Lab), Lee Jasper (Space Dynamics Laboratory ) Presentation: Erika Chavez - -
        The Air Force Research Lab’s (AFRL) Small Satellite Portfolio (SSP) produces end-to-end smallsat missions. The portfolio has designed, launched, and operated four satellites in the last five years and aims to continue to provide the warfighter with impactful and enabling technologies. One of these four missions was the AFRL XVI CubeSat, which was launched on SpaceX’s Transporter 8 in June of 2023. XVI’s primary objectives were to demonstrate that Link-16 network participation was possible from Low Earth Orbit (LEO) and to provide the warfighter with a beyond-line-of-sight capability without modifications to existing network hardware or software. XVI pioneered the research, technology, and testing for Link-16 participation from LEO. The Space Development Agency (SDA) became XVI’s transition partner, given the use of the Link-16 payloads in LEO on the SDA Transport Layer satellites. XVI’s mission design, testing, and operations are discussed. The focus of this paper is to describe lessons learned and explain how the AFRL team repurposed the mission post-launch after regulatory delays led to hardware component failures. Some of the valuable artifacts that emerged from the revectoring include the following: utilizing the unclassified operations center to train audiences ranging from high school students to military generals, providing on-orbit data for artificial intelligence/machine learning (AI/ML) models, developing ground software automation, and advancing Very Low Earth Orbit (vLEO) operations and ground tracking. The collaboration and achievements with the Space Force’s Space Test Course are also discussed. XVI’s educational outreach has expanded since launch and has allowed the AFRL’s University Nanosat Program (UNP) students to receive on-orbit operational experience and training. XVI’s revectoring post-launch demonstrates AFRL’s capability to adapt to new mission challenges.
    • Jin S. Kang (U.S. Naval Academy) & Michael Swartwout (Saint Louis University)
      • 05.0502 Solar Sail Racing Championship (SRC): A Competition-Driven Platform for Workforce Development
        Jessica McGrath (Solar Racing Federation) Presentation: Jessica McGrath - -
        The Solar Sail Racing Championship (SRC) is a university-focused, competition-style constellation of large CubeSats that race from the geosynchronous graveyard orbit to Mars using only solar-sail propulsion. Conceived as a recurring, prize-driven event, SRC transforms each spacecraft into an application-based classroom and every race season into an intensive, end-to-end mission design practicum. By requiring teams to design, build, launch, navigate, and autonomously control an interplanetary solar sail smallsat within two years, SRC delivers an immersive training pipeline that bridges traditional coursework and the skill set demanded by today’s deep-space exploration programs. Each SRC team integrates a flight-qualified, ultra-light solar sail, an attitude-determination and control system, a telemetry beacon, and an imaging payload into a small satellite to be deployed from an ESPA-type geosynchronous transfer vehicle. After a rideshare to GEO, craft undertake a multi-phase race: Earth-orbit spiral, lunar pass, heliocentric cruise, and Mars fly-by, while continuously modulating sail attitude for thrust and navigation. These phases expose participants to structural design for large deployables, low-thrust trajectory optimization, autonomous guidance, and deep-space communications, compressing the learning curve typically encountered only in national space projects. Although undergraduate involvement is encouraged, SRC is structured to support multidisciplinary graduate research groups mentored by industry engineers. Students will develop and test novel boom mechanisms, autonomous optical navigation algorithms, and unique attitude control methods, and contend with strained link budgets. These research topics are likely to evolve into master’s theses or doctoral dissertations. 
Faculty advisors leverage SRC as a scaffold for engineering curricula, the space community gains through the free exchange of ideas, while corporate mentors gain early insight into emerging talent and technology. SRC exemplifies a “small mission for workforce development” by pairing affordable CubeSat hardware with a compelling, media-facing race that sustains student enthusiasm and stakeholder investment. Graduates exit the program with flight-proven designs, publications, and experience in cross-functional teamwork, attributes that are immediately transferable to the commercial, civil, and defense space sectors. Importantly, the technologies and methods developed through this effort are directly applicable to the future of deep space commerce. The presentation will summarize the inaugural SRC campaign, highlight lessons learned in mentoring structures and curriculum integration, and outline pathways for other institutions or companies to field a team in forthcoming race cycles.
    • Michael O'Connor (United States Space Force) & Rashmi Shah (Jet Propulsion Laboratory/California Institute of Technology) & Laila Kazemi (Star Forge Consulting )
      • 05.0601 Enabling Suborbital Ionospheric Science with Low-Cost, Power-Efficient Boom Systems
        Regan O'Neill (Clemson University), Lawrence Coleman (Clemson University), James Davis (Clemson University), Andrew Hodges (), Stephen Kaeppler (Clemson University), Matthew Hall (Clemson University) Presentation: Regan O'Neill - -
        Sounding rockets face tight volume, mass, and power constraints, making cost-effective deployment of ionospheric instruments essential for high-impact atmospheric science. To address this challenge, we developed two low-cost, mechanically simple deployment mechanisms—AURA (Antenna Unfurling Release Assembly) and VECTR (Velocity-Enabled Controlled Telescoping Rod)—designed specifically for suborbital missions. These systems extend scientific sensors beyond the payload envelope, reducing electromagnetic interference, overcoming the Debye sheath, and maximizing data quality. Commercially available boom and antenna systems often exceed $10,000, limiting accessibility for student-led and budget-constrained missions. AURA and VECTR reduce this cost to approximately $100 per unit using off-the-shelf and custom-machined components. Both mechanisms require no continuous power and support rapid iteration and repair. AURA passively deploys an electrically short antenna by releasing a preloaded coiled tape measure using a nichrome burn wire. VECTR uses a servo-actuated cam gear to release a spring-loaded telescoping boom supporting dual fixed-bias spherical Langmuir probes. While the release is motor-driven, the boom then passively extends in under one second, minimizing power consumption and mechanical complexity. VECTR offers a low-cost, customizable alternative to conventional deployable booms such as STACERs, which are widely used in aerospace applications but are often cost-prohibitive for short-duration or student-led missions. These instruments measure plasma density, temperature, and Faraday rotation in the ionosphere’s E-region (90–150 km altitude). Each system fits within a 10-inch diameter envelope and has an integrated mass under 6.8 kg, with VECTR extending about 15 inches and AURA about 24 inches from the rocket body. 
The systems have undergone extensive lab testing, including full mission simulations, and incorporate remove-before-flight procedures to prevent accidental deployment. Deployment is triggered by a timer event line onboard the sounding rocket, with data acquisition integrated via NASA telemetry for real-time monitoring and post-flight analysis. Qualification testing is underway at NASA Wallops Flight Facility in July 2025, including electrical sequencing, RF interference testing, moment of inertia characterization, GPS rollout, and stage separation compatibility. Vibration qualification includes sine testing from 10–144 Hz at ≤3 in/s along the thrust axis and random vibration up to 10 Grms axially and 7.6 Grms laterally (20–2000 Hz). While these systems simplify deployment, ongoing testing will evaluate their performance under flight conditions. Two AURA units and one VECTR unit will fly on the NASA GHOST mission in November 2025 from Andøya Space Center aboard a Terrier-Improved Malemute sounding rocket. Unlike conventional flop-down booms—which are common but impose design constraints—these systems use lateral deployment, enabling greater flexibility in payload layout. This flexibility is critical on suborbital platforms, where NASA engineers have only two viable boom mounting locations. Predicted measurements from GHOST will quantify plasma density (10⁹–10¹² particles/m³) and temperature (250–1500 K), demonstrating the systems’ ability to support high-fidelity ionospheric science. By significantly lowering deployment cost and complexity, AURA and VECTR reduce barriers to entry for suborbital science missions, enabling more frequent launches and broader atmospheric measurement campaigns.
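For context on the measurement itself, a fixed-bias spherical Langmuir probe in electron saturation relates collected current to density via I = n e A sqrt(kTe / (2 pi m_e)) in the thin-sheath approximation. The sketch below uses illustrative values, not GHOST flight parameters, but lands in the abstract's quoted 10⁹–10¹² particles/m³ range:

```python
import math

E_CHARGE = 1.602e-19    # elementary charge, C
M_ELECTRON = 9.109e-31  # electron mass, kg
K_BOLTZ = 1.381e-23     # Boltzmann constant, J/K

def electron_density(i_sat, area, te_kelvin):
    """Electron density from the electron saturation current of a spherical
    probe (thin-sheath approximation): I = n e A sqrt(kTe / 2 pi m_e)."""
    v_th = math.sqrt(K_BOLTZ * te_kelvin / (2.0 * math.pi * M_ELECTRON))
    return i_sat / (E_CHARGE * area * v_th)

# Illustrative values (assumed, not flight numbers)
area = 4.0 * math.pi * 0.01 ** 2   # 1 cm radius sphere, m^2
te = 1000.0                        # electron temperature, K
i_sat = 1e-6                       # saturation current, A

n_e = electron_density(i_sat, area, te)
print(f"electron density: {n_e:.2e} m^-3")
```

Extending the probes beyond the spacecraft sheath, as AURA and VECTR do, is what keeps this simple relation valid for the E-region measurements.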
      • 05.0602 Earth-observing Photonic Integrated Circuit (EPIC) Instruments Using Interferometric Imaging
        Mate Adamkovics (Lockheed Martin Space Systems Company) Presentation: Mate Adamkovics - -
        The Earth-observing Photonic Integrated Circuit (EPIC) instrument concept suite has been developed to enable extremely low cost and compact instrumentation for Earth science applications. The platform includes both a multispectral aerosol polarimeter (EPIC MAP) as well as a greenhouse gas spectrometer (EPIC GHG). The photonic sensors are novel interferometric imagers. Light is collected by millimeter-sized apertures (e.g., lenslets or micro-mirrors) on the device, which couple to waveguides in the photonic circuit, where signal phases and amplitudes are measured. When coupling into the device with waveguide-grating couplers the apertures are polarization selective. Pairs of apertures form interferometer baselines that sample spatial frequencies in two dimensions. Multiple baselines on the device are used for reconstructing images of the intensity distribution in the field of regard. Silicon nitride and lithium niobate structures are hybridized for a photonic device that is compact, low-noise, and stable. Here we describe two instrument concepts, detail models and methods for interferometric imaging, and provide status and laboratory demonstration of the interferometric imager development.
      • 05.0604 Integrated Attitude Control Architecture for Manipulator-Equipped SmallSats
        Laila Kazemi (Star Forge Consulting ) Presentation: Laila Kazemi - -
        As interest in on-orbit servicing, assembly, and debris removal grows, there is a pressing need to adapt small satellite platforms for contact-based operations using robotic manipulators. These emerging mission profiles require Attitude Determination and Control Systems (ADCS) that can handle new challenges not present in traditional smallsat missions. Specifically, when a robotic arm is mounted on a small satellite, its mass and inertia can be comparable to the spacecraft itself, resulting in significant dynamic coupling that can severely impact stability and pointing accuracy. This research focuses on the design of an ADCS architecture and control algorithm tailored for small satellites performing complex manipulator-based operations. We present a digital twin environment that integrates a multibody dynamics model of the satellite-manipulator system with a full ADCS suite, including reaction wheels, a star tracker, gyroscope, accelerometer, and a quaternion-based PID control law. To address the manipulator-induced internal disturbances, a feedforward control layer is implemented that anticipates and compensates for reaction forces and torques at the base of the robotic arm. To validate the approach, three representative mission scenarios are simulated: 1. Free motion tracking – The robotic arm moves along a predefined trajectory without interacting with external bodies. 2. Cooperative capture – The satellite captures and synchronizes with an object of known inertia in orbit. 3. Non-cooperative object detumbling – The system attempts to stabilize a captured object with unknown inertial properties, representing active debris removal. The ADCS design is evaluated for each scenario in terms of stability, control effort, and responsiveness. Preliminary results highlight several key findings: • The internal disturbances introduced by manipulator motion often dominate external environmental torques, necessitating dedicated compensation strategies. 
• The feedforward algorithm significantly improves pointing accuracy during slow manipulator movements but increases actuator workload. • Controller gains must be adapted in real-time based on manipulator pose and post-capture dynamics to maintain performance. • The geometry of capture events must be considered in mission planning to avoid exceeding angular momentum limits of the platform. By simulating and iterating within the digital twin, we demonstrate how ADCS design can be made closely integrated with manipulator behavior and operational timelines. This approach enables the development of robust, cost-effective control systems for a new class of small satellite missions that involve interaction with external objects. The work supports the broader objective of extending the capabilities of low-cost platforms into domains traditionally reserved for larger spacecraft, while ensuring system-level performance and reliability.
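The control structure described above, a quaternion-error attitude law plus feedforward cancellation of the predicted manipulator base reaction torque, can be sketched compactly. This is a generic illustration, not the authors' implementation: the gains, the scalar-first quaternion convention, and the omission of the integral term are all assumptions.

```python
import numpy as np

def quat_error_vec(q_des, q_est):
    """Vector part of the error quaternion q_err = conj(q_des) * q_est
    (scalar-first Hamilton convention); a small-angle attitude error."""
    w1, x1, y1, z1 = q_des[0], -q_des[1], -q_des[2], -q_des[3]  # conjugate
    w2, x2, y2, z2 = q_est
    return np.array([
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def control_torque(q_des, q_est, omega, tau_arm_pred, kp=0.8, kd=2.0):
    """PD attitude law with feedforward cancellation of the predicted
    manipulator base reaction torque (integral term omitted for brevity)."""
    e = quat_error_vec(q_des, q_est)
    return -kp * e - kd * omega - tau_arm_pred

# At the desired attitude and at rest, the commanded torque is purely the
# feedforward term countering the arm's reaction torque:
q = np.array([1.0, 0.0, 0.0, 0.0])
tau = control_torque(q, q, np.zeros(3), np.array([0.1, 0.0, 0.0]))
```

In a full implementation, tau_arm_pred would come from the multibody model of the arm trajectory, which is exactly the coupling the digital twin is built to capture.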
      • 05.0605 Fabrication, Calibration, and Performance Evaluation of a Color-Coded Aperture Camera
        Timothy Setterfield (Jet Propulsion Laboratory, California Institute of Technology) Presentation: Antonio Teran Espinoza - -
        Whereas typical depth sensors require active elements or large form factors and frequent recalibration, depth from defocus can be performed using a single camera. Depth from disparity using a color-coded aperture is particularly promising, since computationally inexpensive, existing stereo block-matching algorithms can be used. Previous assessments of the utility of color-coded aperture cameras have mainly been qualitative in nature. For deployment in robotic or small satellite applications, quantitative measures of performance are required. In this paper, the fabrication of a three-hole color-coded aperture is outlined in detail. A novel radiometric and geometric calibration procedure is used to characterize filter crosstalk and the relationship between depth and disparity. Finally, simulated and real-world experiments are conducted to assess the robustness and accuracy of the color-coded aperture camera method.
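The channel-to-channel disparity that a color-coded aperture induces can be recovered with ordinary block matching. The 1-D sketch below is a hypothetical illustration: synthetic Gaussian blobs stand in for the red and green channel intensities, and an SSD search finds their relative shift. The disparity-to-depth mapping itself would come from the geometric calibration the paper describes.

```python
import numpy as np

def best_disparity(ref, tgt, block=5, max_disp=10):
    """1-D SSD block matching between two color channels; the inter-channel
    shift encodes defocus, and hence depth, for a coded aperture."""
    center = len(ref) // 2
    patch = ref[center - block // 2 : center + block // 2 + 1]
    costs = []
    for d in range(-max_disp, max_disp + 1):
        shifted = tgt[center - block // 2 + d : center + block // 2 + 1 + d]
        costs.append(np.sum((patch - shifted) ** 2))
    return int(np.argmin(costs)) - max_disp

# Synthetic example: the "green" channel is the "red" channel shifted by
# 3 px, mimicking the chromatic disparity a color-coded aperture produces.
x = np.arange(64, dtype=float)
red = np.exp(-0.5 * ((x - 32) / 2.0) ** 2)
green = np.exp(-0.5 * ((x - 35) / 2.0) ** 2)

d = best_disparity(red, green)
# Depth then follows from a calibrated disparity-to-depth mapping,
# e.g. Z = a / (d + b) with a, b fit during calibration (hypothetical form).
```

Because the matching cost is the same SSD used by standard stereo block matchers, existing optimized implementations can be reused, which is the computational appeal noted in the abstract.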
      • 05.0606 WESAT Mission: Solar Irradiance and UV Index Variability in Space vs Ground in Trivandrum, India
        Lizy Abraham (South East Technological University (SETU), Waterford, Ireland), Resmi R (Kerala Technological University), Surya Jayakumar () Presentation: Lizy Abraham - -
        WESAT (Women Engineered SATellite) is India’s first women-engineered satellite, launched on January 1, 2024, at 9:10 AM IST with India’s 60th PSLV mission PSLV-C58 from Satish Dhawan Space Centre (SDSC), Sriharikota. The payload is crafted to measure the intensity of Ultraviolet (UV) Radiations and Solar Irradiance in space, tracing the PSLV Orbital Experiment Module (POEM) trajectory at 350 km above sea level. The WESAT payload is designed to measure these critical parameters, allowing for the calculation of the Ultraviolet Index (UVI), which facilitates comparison between space-based data and corresponding ground measurements taken from Earth’s surface. By bridging these datasets, the mission aims to enhance our understanding of environmental variability and contribute to improved climate modeling. Normally, UV Radiation Intensity data are obtained from multispectral satellite metadata (spectroradiometers) from meteorological satellites such as MODIS, TEMIS and NOAA, where UV intensity is indirectly estimated by filtering the corresponding spectrum bands. With the launch of WESAT, exclusive point-source data is received for UV Radiation Intensity, corresponding to the direct measurement of UVI in space, which is the first of its kind. Unlike conventional satellites that require batteries or external power, WESAT operates entirely on the photovoltaic effect, enabling its instruments to convert incident sunlight into measurement power directly. Due to this feature, the payload did not have to wait until POEM reached its orbital path at 350 km to start functioning. As soon as the PSLV heat shield separated at 113.68 km (approximately 3 minutes after launch), the payload was exposed to the external environment and automatically began transmitting telemetry data. 
Thus, WESAT has provided continuous UV Radiation Intensity data from 113.68 km upward to 650 km (the orbital altitude of XPoSat, the primary satellite of PSLV-C58) and subsequently down to 350 km (the POEM orbit). This altitude-resolved dataset is one of a kind, since no other payload begins data transmission prior to orbital insertion. Complementing the satellite, a Ground Monitoring Station has been established at Thiruvananthapuram, Kerala, India. The station records real-time UV Radiation, Solar Irradiance, Temperature, and Humidity. For meaningful comparison, the space data was timestamped against Earth-based measurements, ensuring synchronization. However, the raw telemetry received from WESAT contained multiple anomalies, necessitating rigorous data cleaning and preprocessing. Anomalous values were filtered, noise was reduced, and only validated datasets were retained for analysis. This cleaning process was critical to derive accurate correlations between space and ground observations. Correlations between different parameters, including UV Radiation, Solar Irradiance, Temperature, Humidity, and altitude, were analyzed. Also, time series analysis was performed to study temporal variations of UV intensity and solar irradiance in both space and ground data. The analysis identifies patterns and circumstances under which UVI is elevated, such as post-rainfall clear skies with minimal atmospheric interference, and quantifies the relationships between environmental parameters and UV intensity. By integrating space-based and ground-based datasets, the WESAT mission elucidates the influence of atmospheric filtering on UV intensity, characterizes conditions leading to extreme UVI values, and provides an understanding of solar radiation variability in the lower atmosphere.
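For reference, the UV Index is defined as 40 m²/W times the erythemally weighted irradiance, with the CIE (McKinlay-Diffey) action spectrum supplying the weights. The sketch below shows that calculation; the coarse three-point spectrum used in the example is an illustrative assumption, not WESAT's sensor response.

```python
def erythemal_weight(lam_nm):
    """CIE (McKinlay-Diffey) erythemal action spectrum."""
    if lam_nm <= 298.0:
        return 1.0
    if lam_nm <= 328.0:
        return 10.0 ** (0.094 * (298.0 - lam_nm))
    if lam_nm <= 400.0:
        return 10.0 ** (0.015 * (140.0 - lam_nm))
    return 0.0

def uv_index(spectral_irradiance, wavelengths_nm):
    """UVI = 40 m^2/W x erythemally weighted irradiance, integrated over
    wavelength with the trapezoid rule (irradiance in W/m^2/nm)."""
    e_ery = 0.0
    for i in range(1, len(wavelengths_nm)):
        dl = wavelengths_nm[i] - wavelengths_nm[i - 1]
        lo = erythemal_weight(wavelengths_nm[i - 1]) * spectral_irradiance[i - 1]
        hi = erythemal_weight(wavelengths_nm[i]) * spectral_irradiance[i]
        e_ery += 0.5 * (lo + hi) * dl
    return 40.0 * e_ery
```

A direct point-source measurement in space avoids the band-filtering step this weighting otherwise imposes on multispectral satellite retrievals, which is the novelty the mission claims.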
      • 05.0607 Accelerating FPGA-Based SoC Development for Reaction Wheels Using Open-Source Tools
        Yari Nys (Arcsec Space) Presentation: Yari Nys - -
        This work presents an approach for accelerating the development of FPGA-based System-on-Chip (SoC) architectures for small-size reaction wheels using open-source tools. In the context of the next generation Zyra reaction wheels, the goal is to enhance the performance of motor control algorithms by leveraging FPGA and FPGA-based SoC platforms. The control algorithms are implemented as modular VHDL blocks, which are integrated using the LiteX framework to provide a Wishbone or AXI interconnect with a RISC-V softcore processor. For verification, the VHDL modules are simulated using cocotb with the NVC compiler, while GTKWave enables waveform visualization. This toolchain significantly reduces development cost and time while maintaining design flexibility, enabling rapid prototyping and deployment of efficient motor control systems for space applications.
      • 05.0608 From Ground Tests to Orbit: ECSS Error Characterization of SentinelTRAC
        Laila Kazemi (Star Forge Consulting), Colin Huber (Redwire Space) Presentation: Laila Kazemi - -
        Redwire’s SentinelTRAC star tracker has achieved significant flight heritage, with more than 100 first-generation units currently operating on orbit across a variety of missions. This large on-orbit population, combined with extensive ground qualification and simulation using the BlackLynx star field simulator, has enabled a detailed assessment of performance against the ECSS-E-ST-60-20C framework. Results are presented for low-frequency spatial error (LFSE), high-frequency spatial error (HFSE), and temporal noise, derived from both controlled ground campaigns and on-orbit telemetry. Comparisons between expected and measured performance highlight the influence of factors such as straylight, detector readout noise, star magnitude distribution, and multi-star field effects. The analysis demonstrates how individual contributors manifest differently in ground versus flight conditions, and how blended solutions can mitigate variability between star fields and sensor heads. By aligning observed behaviors with ECSS-defined error categories, a clear quantitative baseline of SentinelTRAC’s performance is established. These results not only validate the robustness of the first-generation design but also reveal areas requiring refinement. Building on this foundation, a roadmap is outlined for the SentinelTRAC third generation, ensuring that lessons learned from both laboratory analysis and flight operations directly inform future performance improvements and resilience in diverse mission environments.
      • 05.0609 Transforming a Technology Demonstration into a Rapid Reusable Platform for Sensing Maturation
        Tyler Hoover (Sandia National Laboratories), Steven Brown () Presentation: Tyler Hoover - -
        On-orbit demonstration and validation of sensor technology, data collection, and data processing is a critical step in maturing sensing systems for high-rigor missions. Traditionally, demonstrators have been multi-year campaigns to design and test custom solutions based on application-specific platforms for optimal performance. However, an ever-evolving national security environment demands that new technologies be matured at a faster cadence for spaceflight while improving efficiency, capability, compatibility, and ease of integration, all at reduced cost. In response to these objectives, Sandia National Laboratories (SNL) has developed a versatile hosted payload platform that reduces development time and accelerates time-to-market for new sensor technologies through integration efficiencies. Starting with an early demonstrator from 2021, LEONIDAS, the payload platform has been iterated for standardization and boasts several key features: a modular chassis design to support multi-sensor systems; a sealed envelope to mitigate outgassing concerns of COTS components; a custom compute platform designed around the AMD Versal ACAP for flexible, reconfigurable, high-performance heterogeneous computing at the edge; common IO to simplify internal/external cabling; and standardized test fixtures. The transformation of LEONIDAS has enabled SNL to rapidly pursue follow-on missions, including one successful mission and two missions targeting operations in 2026. Improvements are evident in increased payload capability achieved in less time and at a fraction of the cost compared to the original LEONIDAS mission. SNL aims to further refine this method to provide off-the-shelf reuse capabilities to answer unfilled flight manifests on future platforms as technology maturation opportunities. SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525.
      • 05.0610 Novel Suite of Sensors for Recording Spatial, Temporal, and Spectral Optical Lightning Signals
        Grant Soehnel (Sandia National Laboratories), Garrett Giddings (), Grace Dean (Sandia National Labs), Steven Brown () Presentation: Grant Soehnel - -
        Lightning is by far the most common fast transient optical signal seen by space-based Earth observing sensors. Dedicated optical lightning sensors in space have historically been limited in their geographical coverage, spatial resolution, temporal resolution, and/or spectral resolution. Sandia National Laboratories has designed and built a low-cost multi-sensor payload to address the gaps in available optical lightning data as observed from space. This payload includes a set of 16 spectral band-pass radiometers, a high-speed framing camera, and an event-based camera that is triggered by a fast-changing optical signal in the wide-band radiometer. All radiometers and cameras are synchronized to collect simultaneous data. The payload is housed inside an air-tight enclosure at 1 atm to reduce the risk to COTS components in the space environment. The first mission will be a three-month flight on the International Space Station. The radiometer sample rate is 100 kHz with spectral bands ranging from 400−1650 nm. The framing camera will run at 600 Hz and provide 0.8 km spatial resolution on the ground. The Prophesee EVK4 event-based camera has an effective temporal resolution approaching 10 kHz and spatial resolution of 0.25 km on the ground. The optical and opto-mechanical hardware for the radiometers consists mostly of commercially available off-the-shelf components. A unique testbed has been built to calibrate the radiometer responses to known input flux as a function of field angle. This mostly automated process involves holding the entire payload with a robotic arm, centering each radiometer aperture to the light source, and tilting the payload to sweep the field-of-view angle. This capability allows a payload with uncertain alignment tolerances to still record radiometrically accurate data. SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525.
    • John Dickinson (Sandia National Laboratories) & Michael Mclelland (Southwest Research Institute) & Dimitris Anagnostou (Heriot Watt University)
      • 05.0702 A Novel Inclination Bias Approach for Mitigating Ground Track Drift in Indian Nanosatellites
        Parthiban P () Presentation: Parthiban P - -
        Ground track repeatability is critical for nanosatellites in sun-synchronous low Earth orbits (SSO), particularly for imaging, communication, and Earth observation payloads. However, orbital decay due to atmospheric drag leads to longitudinal drift in the ground track over time, demanding active orbit maintenance—which is often infeasible for small satellites with limited propulsion budgets. This paper presents a novel inclination bias method to passively mitigate ground track drift without requiring any post-launch maneuvers. The proposed approach introduces a deliberate, small inclination offset at the time of orbit injection by the launch vehicle. This bias modifies the nodal regression rate, allowing the natural evolution of the orbital plane to counteract the drift induced by semi-major axis decay. Through high-fidelity, full-force propagation simulations including atmospheric drag, solar flux, and geopotential harmonics, the method is validated for a range of injection altitudes (561 km to 666 km) and mission durations up to 180 days. Results demonstrate that, with properly tuned inclination offsets, ground track deviation can be limited to within ±300 km across the entire mission span—without performing a single maneuver. The approach is entirely passive, requires no onboard system modifications, and leverages existing launch vehicle injection capability, offering a robust and low-complexity design strategy for small satellite missions where propulsion resources are absent or highly constrained.
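The mechanism the abstract relies on is the J2 secular regression of the ascending node, whose sensitivity to inclination is what the injected bias exploits. A minimal sketch of that rate calculation follows; it is not the authors' high-fidelity full-force propagation, and the 600 km altitude, 97.79° nominal inclination, and +0.05° bias are illustrative values chosen here, not taken from the paper:

```python
import math

# Earth constants (standard values)
MU = 3.986004418e14      # gravitational parameter, m^3/s^2
RE = 6378137.0           # equatorial radius, m
J2 = 1.08262668e-3       # second zonal harmonic

def nodal_regression_deg_per_day(a_m, inc_deg, ecc=0.0):
    """J2 secular drift rate of the ascending node (deg/day)."""
    n = math.sqrt(MU / a_m**3)                     # mean motion, rad/s
    omega_dot = (-1.5 * n * J2 * (RE / a_m)**2 *
                 math.cos(math.radians(inc_deg)) / (1 - ecc**2)**2)
    return math.degrees(omega_dot) * 86400.0

# Sun-synchronous condition: node must advance ~0.9856 deg/day eastward
SSO_RATE = 360.0 / 365.2422

# Nominal 600 km SSO inclination vs. a small +0.05 deg injection bias;
# the bias shifts the nodal rate, which over months counteracts the
# ground track drift caused by drag-driven semi-major axis decay
a = RE + 600e3
rate_nominal = nodal_regression_deg_per_day(a, 97.79)
rate_biased  = nodal_regression_deg_per_day(a, 97.84)
```

Because the biased rate differs from the sun-synchronous rate by only a fraction of a millidegree per day, the accumulated node offset stays small while still compensating the decay-induced drift over a 180-day span.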
      • 05.0703 Iterative Repair for Small Satellite Power, Attitude and Mission Task Schedule Management
        Brooklyn Beck (Virginia Tech), Ella Atkins (Virginia Tech) Presentation: Brooklyn Beck - -
        This paper investigates the application of a lightweight iterative repair task scheduling algorithm to a small Low Earth Orbit (LEO) space science satellite mission. The work is inspired by the deployment of the Jet Propulsion Laboratory’s CASPER to Earth Observing 1, the Intelligent Payload Experiment (IPEX), and the Mars Perseverance Rover. Iterative repair is a rescheduling method that begins with a working schedule updated based on new information or emerging constraint conflicts. One centralized planner communicates task data between spacecraft subsystems including power management, science payload, and guidance navigation and control (GNC). This paper’s target mission is a space science CubeSat in a highly elliptical Molniya Orbit with two onboard processors and a solar energy recharging capability. A Molniya Orbit requires significant slewing between different target celestial bodies, and the availability of two onboard processors supports partitioning scheduled task execution from real-time iterative repair execution, communication, and other activities not managed by the task scheduler. This research implements the published GERRY iterative repair algorithm that considers task deadlines, release times, and temporal constraints. In an initial use case, a science task must be rescheduled due to insufficient data quality in the first acquisition attempt. In a second use case, a slewing maneuver requires more execution time than allocated due to unwinding a momentum wheel nearing saturation. The full paper will pose energy management, slewing, and science data models, constraints, and irregularities that trigger iterative repair. This work assumes communication and computing subsystems operate without anomalies.
During real-time mission simulation in Systems Tool Kit (STK) networked to our C-based GERRY implementation, delays and faults are communicated to the centralized planner that in turn reschedules tasks as needed to meet temporal, energy, observation window, and system performance constraints, checking all impacted tasks and constraints to maintain arc consistency. In the most common scenarios, task execution delays accumulate due to small deviations above allocated execution times requiring only modest timeline updates. If a science task must be moved forward in the schedule to meet an observation window, the planner must also account for state constraints such as satellite orientation and power availability. More significant delays or task execution failures can result in iterative repair asking for assistance, coordinating with mission operators to define a new schedule beyond the scope of what iterative repair can manage. Iterative repair case studies are evaluated by rescheduling runtime and success rate and benchmarked against full rescheduling baselines. Onboard plan repair is critical to next generation small spacecraft, especially those increasingly tasked with higher bandwidth data acquisition and machine learning augmented processing pipelines that in turn place higher demands on power and GNC support. Iterative repair is efficient onboard as it remains in a monitor-only state until constraint violation triggers activity. Future work will interface task scheduling and rescheduling algorithms with power system and data management optimizers essential to mission success, especially for future missions requiring machine learning and multi-spacecraft coordination activities.
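The repair loop described above can be illustrated with a deliberately simplified sketch: tasks are pushed later until release, deadline, and overlap conflicts clear, and the planner escalates (here, returns None) when a deadline cannot be met locally. This is not the GERRY implementation; task names and times are hypothetical, and the real algorithm also reasons over energy, attitude, and other state constraints:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    start: float      # scheduled start, s
    duration: float
    release: float    # earliest allowed start
    deadline: float   # latest allowed finish

def iterative_repair(tasks, max_iters=100):
    """Locally shift tasks right until release/deadline/overlap
    conflicts clear.  Returns the repaired schedule, or None when a
    deadline cannot be met -- the point at which an onboard planner
    would escalate to ground operators."""
    tasks = sorted(tasks, key=lambda t: t.start)
    for _ in range(max_iters):
        conflict = False
        prev_end = 0.0
        for t in tasks:
            new_start = max(t.start, t.release, prev_end)
            if new_start != t.start:
                t.start = new_start          # repair: push task right
                conflict = True
            if t.start + t.duration > t.deadline:
                return None                  # beyond local repair
            prev_end = t.start + t.duration
        if not conflict:
            return tasks
    return None

# A slew overruns its allocation; the science task is pushed right
# but still meets its observation-window deadline.
sched = iterative_repair([
    Task("slew",    start=0.0,   duration=130.0, release=0.0,   deadline=200.0),
    Task("science", start=100.0, duration=60.0,  release=100.0, deadline=300.0),
])
```

The key property, as the abstract notes, is that nothing happens until a conflict appears; an untouched schedule passes through the loop in one monitoring pass.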
      • 05.0704 Design of a Cold-Gas Propulsion System for the STARI Mission
        Althea Noonan (), Edgar Lightsey (Georgia Tech) Presentation: Althea Noonan - -
        The STarlight Acquisition and Reflection toward Interferometry (STARI) mission is a NASA-funded joint mission led by the University of Michigan with Georgia Institute of Technology, Stanford University, and Rensselaer Polytechnic Institute as participants. The mission aims to demonstrate the first transfer of starlight between spacecraft by utilizing two 6U CubeSats that will be developed by the universities. The satellites orbiting in Low Earth Orbit will be separated by approximately 100 meters, allowing for the higher angular resolution needed to effectively search for nearby planets that reside in habitable zones around their stars, as Earth does. As the propulsion system provider, the Georgia Tech Space Systems Design Lab (GT SSDL) will design, manufacture, and deliver cold-gas propulsion units for each spacecraft. This project builds upon Georgia Tech’s experience of working on similar formation flying missions such as the SunRISE, VISORS and SWARM-EX missions. In particular, the Virtual Super-Resolution Optics using Reconfigurable Swarms (VISORS) mission features two 6U formation flying CubeSats, and as such, it is the initial baseline for the STARI propulsion system design. Utilizing the design heritage of the VISORS mission aids the STARI mission in both timeline and budget constraints while providing an opportunity to make further improvements upon the VISORS propulsion system design. The main design change that the team is exploring during Phase A of the STARI mission is the addition of a heater to the cold-gas propulsion system. The GT SSDL’s cold-gas propulsion systems, such as the VISORS propulsion system, traditionally feature a two-tank design that allows fluid to completely vaporize before exiting the nozzles. Historically, the propulsion system has been subject to variable thrust based on environmental conditions that must be compensated for by the control system. 
The goal of the heater in the STARI propulsion unit is to improve upon the heritage design by providing a more consistent and repeatable thrust and impulse. This paper presents the design process for the STARI propulsion system to highlight the development of major performance parameters such as delta V, required propellant mass, and specific impulse. It will also examine the design changes from the VISORS propulsion system and discuss lessons learned from prior missions.
      • 05.0708 Trussed Deployable Structures That Enable Rendezvous & Proximity Operations
        Ryan Rickerson (Southwest Research Institute), Toby Harvey () Presentation: Ryan Rickerson - -
        This paper describes the use of a folding truss structure to increase the stiffness of deployed structures on satellites. Satellites designed for rendezvous, proximity operations, and docking may require deployed structures that exhibit high stiffness and high natural frequencies for their attitude determination and control systems. The nature of large thin solar array panels with simple hinge mechanisms does not lend itself well to stiff deployed structures. Truss structures achieve high specific stiffness by increasing a structure's effective depth and utilizing lightweight and optimized members that are designed to only react axial loads. The truss structure described in this paper replaces simple hinges between honeycomb composite solar array panels with a deployable truss structure to meet deployed natural frequency requirements for two different spacecraft (e.g. 2 Hz for a 2 kW solar array wing and 5.7 Hz for a 350 W wing). The design is extremely modular, enabling different design configurations depending on mission requirements. Pultruded composite members are utilized throughout the structure for buckled battens, tensioned diagonals, and longeron elements. In addition to the truss members, synchronization linkages enforce deployment constraints on the system, creating a single-degree-of-freedom deployment which can be damped with a single rotational damper. Most systems that provide a passive deployment (i.e. no motors) result in a kinematically indeterminate system design, which can lead to heightened risk of overswing, mechanical stress, and potential impact with the spacecraft. To control the deployment of kinematically indeterminate structures, dampers should be added at each hinge line. Dampers may require thermal control hardware that requires additional avionics and harnessing which increases the mass and cost of the total system.
On the other hand, fully controlled deployments typically utilize motors, which add additional power, cost, and mechanism risks to the deployment. The system described in this paper incorporates truss elements to create both a passive kinematically determinate deployment system and a high stiffness low mass support structure.
      • 05.0710 Autonomous Detection of Magnetometer Misalignment and Magnetorquer Polarity for Small Satellites
        Srianish Vutukuri () Presentation: Srianish Vutukuri - -
        Earth magnetic field-based sensing and actuation devices, namely magnetometers and magnetorquers, play a crucial role in small satellite missions during the launch and early orbit phase. These components are often the primary choice for attitude control in such missions due to constraints in size, mass, and cost that limit the use of alternative actuators like reaction wheels or thrusters. Before initiating attitude stabilization via detumbling control, it is essential to validate the orientation and configuration of these components, as potential errors during assembly or telecommand delivery affect their mounting information and can destabilize the satellite. This work presents an autonomous onboard methodology for validating the mounting of both magnetometers and magnetorquers in two distinct stages. In the first stage, the mounting configuration of the magnetometer is verified in a torque-free condition, i.e., with magnetorquers switched off, and without requiring knowledge of the spacecraft attitude. After injection, although the orbit is approximately known, the satellite attitude remains unknown. The only spacecraft state reliably available in this phase is the body rate, obtained from rate sensors. This work introduces a novel method that utilizes body rate data and magnetic field measurements in the satellite body frame to detect and correct misalignments in the sensor mounting. The algorithm is designed for onboard implementation and leverages the predictable nature of the rate of change of the geomagnetic field in the inertial frame along the satellite orbit. During torque-free motion, the temporal variation of the local geomagnetic field exhibits characteristics directly related to the variation of the angle between the magnetic field vector and the inertially-fixed angular momentum vector.
It is shown that only the correct sensor mounting produces an angle profile consistent with this expected behavior, while incorrect configurations result in inconsistent or erratic profiles. Both analytical and simulation studies confirm the effectiveness of the method across various orbital locations and initial rate conditions. The possible limitations of the methodology are highlighted along with strategies to overcome them. Additionally, the approach demonstrates robustness to typical noise and bias present in magnetic field measurements during early mission phases. In the second stage, the polarity of magnetic torquers is verified. A modified detumbling control law is applied sequentially along individual body axes. The observed response in the rate sensor data alone is then used to infer and correct polarity errors in magnetorquers. This strategy has been validated through simulation for multiple initial angular velocity conditions, demonstrating its reliability and adaptability. Together, these autonomous procedures provide a lightweight, reliable solution to sensor and actuator configuration validation during early orbit operations, enhancing the robustness and autonomy of small satellite missions.
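The first-stage consistency check can be illustrated with synthetic data: since rotations preserve angles, the angle between the body-frame field measurement and the body-frame angular momentum should vary only as slowly as the geomagnetic field itself, while a mis-mounted sensor produces a profile modulated at the spin rate. This sketch is not the paper's algorithm; the axisymmetric spin model, the x/z axis-swap fault, and the roughness metric are all illustrative assumptions introduced here:

```python
import math

def angle_deg(u, v):
    """Angle between two 3-vectors, in degrees."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

def roughness(angles):
    """Mean absolute change between consecutive samples (deg)."""
    diffs = [abs(b - a) for a, b in zip(angles, angles[1:])]
    return sum(diffs) / len(diffs)

spin = 0.5                   # body spin about the momentum axis, rad/s
dt, n = 0.1, 600
h_body = (0.0, 0.0, 1.0)     # angular momentum axis in the body frame

good, bad = [], []
for k in range(n):
    t = k * dt
    # slowly varying angle between B and the inertial momentum vector
    theta = math.radians(30.0 + 10.0 * math.sin(2 * math.pi * t / 300.0))
    bi = (math.sin(theta), 0.0, math.cos(theta))      # field, inertial
    c, s = math.cos(-spin * t), math.sin(-spin * t)   # spin rotation
    b_body = (c * bi[0] - s * bi[1], s * bi[0] + c * bi[1], bi[2])
    b_swapped = (b_body[2], b_body[1], b_body[0])     # x/z mis-mounting
    good.append(angle_deg(b_body, h_body))
    bad.append(angle_deg(b_swapped, h_body))
```

The correctly mounted case reproduces the smooth field-to-momentum angle profile; the swapped mounting injects spin-rate oscillation, which a simple roughness threshold separates cleanly.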
      • 05.0711 A Lyapunov-Based Magnetorquer-Only Sun-Pointing Controller
        Paulo Fisch (Carnegie Mellon University), Zachary Manchester (Carnegie Mellon University), Pedro Rocha Cachim (Carnegie Mellon University) Presentation: Paulo Fisch - -
        We present a control scheme that can simultaneously spin-stabilize and sun-point a spacecraft using only magnetorquers. The controller is derived from a novel Lyapunov function that combines both spin-axis stabilization and sun-pointing components into a single, unified feedback law with a guarantee of almost-global asymptotic stability. We evaluate our controller's performance through both extensive Monte Carlo simulations and on orbit as part of the recent PY4 mission. In all cases, the controller points within ten degrees of the sun and achieves a 100% success rate.
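Independent of the particular Lyapunov function, any magnetorquer-only law must contend with the fact that the achievable torque m × B always lies in the plane perpendicular to the local field. The sketch below shows the standard dipole-projection step, not the paper's controller; the field and desired-torque values are hypothetical:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dipole_for_torque(tau_des, b):
    """Dipole command m = (B x tau_des) / |B|^2.

    The achieved torque m x B equals the component of tau_des
    perpendicular to B; the component along B is unachievable, which
    is why magnetorquer-only laws need stability arguments over the
    orbit-varying field rather than pointwise torque control."""
    b2 = sum(x * x for x in b)
    return tuple(x / b2 for x in cross(b, tau_des))

b = (20e-6, 0.0, 30e-6)        # local magnetic field, tesla (example)
tau_des = (1e-5, 2e-5, 0.0)    # desired torque, N*m (example)
m = dipole_for_torque(tau_des, b)
tau_actual = cross(m, b)       # realizable part of the desired torque
```

The underactuation along B at any instant is what makes the almost-global stability guarantee of a unified spin-and-point law nontrivial.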
      • 05.0712 Trajectory Optimization for Free-Flying Robots in Microgravity Using Convex Obstacle Representation
        André Teixeira (Universidade de Lisboa - Instituto Superior Tecnico), Rodrigo Ventura (Instituto Superior Técnico) Presentation: André Teixeira - -
        Planning collision-free and dynamically feasible trajectories is crucial to enable autonomous robotic operations. For free-flying robots in microgravity environments, such as the International Space Station, this problem is particularly challenging due to the complexity and variability of the environment, which makes it difficult to maintain a reliable model. We propose a two-step solution to this problem. In the first step, a collision-free geometric path is computed without dynamic constraints. In the second step, this path is optimized into a time-parameterized trajectory that is dynamically feasible, using a convexified representation of the environment. Unlike most prior works that rely on non-convex free space decomposition into unions of convex sets, our method convexifies obstacles through planar approximations of relevant obstacles. This approach reduces computational cost both during obstacle computation in new maps and during the optimization stage. We validate our method in environments of increasing complexity, demonstrating its efficiency and its ability to represent multiple environments through planar approximation.
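The planar approximation reduces each obstacle to a half-space constraint n · x ≥ d on the trajectory. The sketch below shows only that geometric primitive, as a projection of violating waypoints onto the feasible side; it is not the authors' optimizer, and the plane and path coordinates are hypothetical:

```python
def project_to_halfspace(point, normal, offset):
    """Project a waypoint onto the feasible side of a separating plane.

    The plane is {x : n . x = d}; points with n . x >= d are
    collision-free with respect to the convexified obstacle behind it.
    In a full trajectory optimizer this inequality would enter as a
    convex constraint at every knot point rather than a projection."""
    n_dot_x = sum(a * b for a, b in zip(point, normal))
    n_norm2 = sum(a * a for a in normal)
    if n_dot_x >= offset:
        return point                       # already feasible
    scale = (offset - n_dot_x) / n_norm2   # minimal correction along n
    return tuple(p + scale * a for p, a in zip(point, normal))

# Obstacle approximated by the plane x = 1 (keep-out region: x < 1)
normal, offset = (1.0, 0.0, 0.0), 1.0
path = [(0.2, 0.0, 0.0), (0.8, 0.5, 0.0), (1.5, 1.0, 0.0)]
safe = [project_to_halfspace(p, normal, offset) for p in path]
```

Because each obstacle contributes one linear inequality instead of a union of convex free-space sets, both the map-processing and optimization stages stay cheap, which is the efficiency argument made above.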
    • Nicole Fondse (Aerospace Corporation) & Kara O'Donnell (Aerospace Corporation)
      • 05.0801 HERMES: A Satellite Surrogate and GS Communication Testbed for CubeSats Using SDR and OpenC3 COSMOS
        Luiz Felipe Bromfman (Virginia Tech), Gustavo Gargioni (Virginia Tech), Zachary Leffke (Virginia Tech) Presentation: Luiz Felipe Bromfman - -
        This paper details the design, verification, and validation (V&V) of the Hardware Electronic Real-time Message Exchange System (HERMES), a satellite surrogate communication testbed (SSCT). HERMES leverages commercial off-the-shelf (COTS) hardware to emulate a deployed satellite communication subsystem. The system comprises a small, low-cost ground-station (GS) testbed for evaluating the data-link communication between the satellite and GS. HERMES leverages the capable open-source ground-station user interface OpenC3 COSMOS for sending packets while offering firmware that emulates the responses from the satellite flight computer. The results show that HERMES offers excellent value to student-led CubeSat programs. One advantage of HERMES is the possibility of testing the communication between the satellite and GS without waiting for a functioning engineering model of the satellite and the onboard computer (OBC) subsystem firmware. This is achieved because the SSCT testbed offers open-source surrogate satellite firmware deployed to the COTS microprocessors that emulate the satellite's OBC behavior. Another advantage of HERMES lies in its standardization of communications, which uses standard packetization. This standard lays the foundation for the early adoption of mission packet definitions adaptable to various mission objectives and payloads. Therefore, while HERMES can receive and decode uplink commands sent from COSMOS/GS and handle the packets to produce and send downlink telemetry back to the COSMOS/GS, it enables the GS development to advance in parallel with the satellite hardware and flight software. The radio-frequency communication is transmitted between two SDRs, while an embedded microprocessor emulates the flight radio using GNU Radio software. The experiment results demonstrate that all packets sent to and from HERMES are received and correctly interpreted for all configurations.
The latency of the packet communication process between COSMOS and the GS hardware was calculated and saved as part of the meta-data for each experiment. Furthermore, HERMES was evaluated using a flight unit of the OBC from Virginia Tech's upcoming CubeSat UTProSat-1, slated to launch in 2025. The overall comparison results demonstrate the potential of HERMES as a surrogate COTS satellite subsystem and ground-station testbed for communication V&V with OpenC3 COSMOS.
  • Jordan Evans (Jet Propulsion Laboratory) & Darin Dunham (Lockheed Martin)
    • Travis Imken (Jet Propulsion Laboratory) & Bogdan Oaida (Jet Propulsion Laboratory, California Institute of Technology) & Maria De Soria Santacruz Pich (Jet Propulsion Laboratory)
      • 06.0101 Spectral Mapping of Mars at the Context Scale
        Peter Sullivan (NASA Jet Propulsion Lab), Christine Bradley (Jet Propulsion Laboratory), Joseph Ewing (JPL), Olya Filimonova (JPL), Abigail Fraeman (Jet Propulsion Laboratory), Bryant Mueller (), David Ting (California Institute of Technology), Robert Green (Jet Propulsion Laboratory) Presentation: Peter Sullivan - -
        We present a concept for a dedicated orbiter to achieve shortwave infrared (SWIR) spectral mapping of Mars with 6 m spatial sampling, matching that of the Mars Reconnaissance Orbiter (MRO) Context Camera. Complementary to the SWIR spectrometer, and sharing the same telescope, is a thermal infrared (TIR) spectrometer sampling at 12 m. The SWIR channel covers 600 to 3600 nm in wavelength with 7 nm sampling, and the TIR channel covers 7 to 12 microns with 0.1 micron sampling. The orbital altitude is chosen for minimal telescope mass, and on-board calibration mechanisms are eliminated in favor of low-drift detectors and spacecraft pointings. Nodding the spacecraft over 6×6 km target areas allows sufficient integration time to achieve a signal-to-noise ratio >100 at 20% reflectance in the SWIR channel and noise-equivalent differential temperature >1 K for a 200 K scene in the TIR channel. From a sun-synchronous orbit and lifetime of one Mars year, the mission has nearly global access to these sensitivity levels. The spacecraft has the capability to directly downlink an average of two targets per orbit, accumulating over 17,000 targeted observations of high-value areas.
      • 06.0102 The Engineering Challenges of Putting a Cubesat into GTO: The Lessons of GTOSat
        Larry Kepko (NASA GSFC), Dakotah Rusley (NASA - Goddard Space Flight Center), Pavel Galchenko (NASA - Goddard Space Flight Center) Presentation: Larry Kepko - -
        GTOSat is a novel 6U cubesat mission that will conduct breakthrough science of the Earth’s outer radiation belts. The low perigee and high eccentricity of the geosynchronous transfer orbit (GTO), combined with a high radiation environment and a rapidly changing orbital debris landscape, brought many unique challenges that we present here. Designed for a low inclination GTO, GTOSat will measure electron spectra and pitch angles of both the seed and the energized electron populations simultaneously using a compact, high-heritage Relativistic Electron Magnetic Spectrometer (REMS) built by The Aerospace Corporation. A boom-mounted Fluxgate Magnetometer (FMAG), developed by NASA GSFC, will provide 3-axis knowledge of the ambient local magnetic field. The 6U bus is sun-pointed and spin-stabilized, with double deployable 3U solar array wings. Mitigation of radiation effects is accomplished through a multi-pronged systems approach consisting of spot shielding of vulnerable components, a ‘vault’ that reduces the total dose for a 1-year orbit to less than 10 krad, and a 6U lid made of Z-shielding. The attitude determination and control system (ADCS) consists of reaction wheels, magnetorquer bars, fine sun sensors, and an inertial measurement unit. Communication is achieved via an S-Band SDR transceiver working primarily through the Near-Space Network (NSN) with real-time radiation belt monitoring enabled through the Space Network (SN). GTOSat contains a new scalable radiation tolerant command and data handling (C&DH) system that could be used for future missions, and paves the way for highly reliable, capable SmallSat constellations and missions beyond low earth orbit (LEO). We conclude with a discussion of the programmatic issues that affect the launch of GTOSat, and which can impact future similar missions.
    • Michael Lisano (Jet Propulsion Laboratory) & Keith Rosette (Jet Propulsion Laboratory) & Ryan Sorensen (NASA Jet Propulsion Lab)
      • 06.0204 A Compact, Autonomous, Submersible Holographic Microscope for Passive In-Situ Microbial Sensing
        Alexander Ramirez (California Institute of Technology), J. Kent Wallace (Jet Propulsion Laboratory) Presentation: Alexander Ramirez - -
        The deep ocean remains one of Earth's most extreme and least understood environments, serving as a compelling analog to the icy ocean worlds beyond our planet. A major challenge in studying marine microbial communities is the difficulty of capturing and maintaining pressurized samples for laboratory analysis—a technique that becomes infeasible when operating in distant oceans across the solar system. To address this, we present a fully autonomous, submersible digital holographic microscope (DHM) designed for in-situ microbial imaging at depths down to 1000 meters. This compact, mechanically passive instrument contains no moving parts, enabling a robust, low-power architecture capable of operating reliably in harsh, remote environments. The internal electronics system is built around a Raspberry Pi 4 single-board computer that manages data acquisition, control, and sensor logging. Power is supplied by a 7000 mAh lithium-polymer battery and regulated by a Mini-box OpenUPS. Custom internal electronics further filter the power to reduce high-frequency noise before delivery to sensitive components such as the laser diode and Raspberry Pi, ensuring stable performance for noise-sensitive optical systems. The submersible supports both single and multi-wavelength DHM configurations. A collimated light beam is formed using a fiber-pigtailed laser diode, which can be replaced with a multi-wavelength setup using a multimode or multi-diode fiber-coupled combiner. The base system operates at 405 nm, whereas multi-wavelength configurations include 450 nm, 520 nm, and 638 nm to enable spectral absorption-based characterization. Interchangeable monochrome camera modules are supported, including a polarization-sensitive camera that enhances contrast and enables detection of birefringent structures, providing key metrics for analyzing microorganisms with complex internal morphology. 
With all components integrated into a compact package, the instrument supports fully autonomous operation, with onboard software managing timing, environmental logging, and image capture. Initial deployments have demonstrated the successful performance of the custom housing and passive sample chamber at depths up to 350 meters. Ongoing pressure and thermal testing is validating the structural and electronic integrity of the system at its 1000-meter target depth. Additionally, bio-safe 3D-printed resin components are being evaluated to optimize flow within the sample chamber, which functions without active pumping by leveraging natural ocean currents to direct microbial samples through the imaging region. The complete software stack will be open-source, leveraging Python for control and logging, and C++ for time-critical tasks such as camera triggering and data acquisition. This architecture supports future integration of machine learning capabilities for event-triggered imaging, adaptive sampling, and real-time microbial detection, thereby equipping the system to operate autonomously in environments where human intervention is not possible. Looking ahead, we plan to incorporate an internal thermal management system that sinks heat from high-power compute modules to the external housing and the surrounding ocean. This will enable the use of more computationally intensive platforms such as the Raspberry Pi with Hailo AI acceleration or NVIDIA Jetson modules, unlocking high-performance edge inference and onboard classification capabilities. This complete system represents a foundational step toward autonomous, high-resolution biological analysis in environments where sample return is impractical and microbial life is most likely to be present.
      • 06.0205 Progress towards a Near-Unity Fill Factor FIR Thermopile Detector for Space-Based Remote Sensing
        Ricardo Braga Nogueira Branco (NASA Jet Propulsion Lab), Byeong Eom (), Brian Pepper (Jet Propulsion Laboratory), Matthew Kenyon (Jet Propulsion Laboratory) Presentation: Ricardo Braga Nogueira Branco - -
        High-priority science can be accomplished by using thermal detectors for thermal imaging and spectroscopy of planets, satellites, and primitive bodies throughout the Solar System. Thermopile detectors are uncooled thermal detectors that are reliable and can be packaged within a small instrument for use in cubesats, smallsats, and rovers. We demonstrate in this work the latest advancements in JPL’s thermopile detector technology. Thermopile detectors do not yet offer the large format sizes available in competing technologies such as bolometers, nor a design architecture optimized for far-infrared (FIR) remote sensing. The next-generation thermopile detector being developed at JPL will have 16x the number of pixels compared to the current state-of-the-art thermopile, diffraction-limited pixels, and near-unity fill factor that will enable a compact instrument with hyperspectral imaging capability for demanding future missions for Earth science, planetary science, and Moon exploration.
      • 06.0206 Compact Coronagraph 2 (CCOR-2) Program Overview with Integration and Test Lessons Learned
        Timothy BABICH (US Naval Research Lab) Presentation: Timothy BABICH - -
        The Compact Coronagraph 2 (CCOR-2) is a novel white-light solar coronal imager built by the US Naval Research Laboratory (NRL) on behalf of the National Oceanic and Atmospheric Administration (NOAA) to provide operational space weather observations. The first compact coronagraph (CCOR-1) launched on the GOES-19 spacecraft to geosynchronous orbit on June 25, 2024. CCOR-2 launched on the Space Weather Follow-On Lagrange 1 (SWFO-L1) spacecraft on September 24, 2025. The compact design of these coronagraphs was critical to enabling their accommodation on the GOES-19 and SWFO-L1 missions. These instruments are considered the first operational coronagraph assets. A third CCOR instrument is under development for a European Space Agency mission called Vigil. In this paper, we provide an overview of the CCOR program and the design and operations concept of the CCOR instruments, and discuss lessons learned from the integration and test process with a focus on the second instrument, CCOR-2. CCOR-2 is the primary optical payload on NOAA’s SWFO-L1 mission, a key element of future space weather infrastructure and the study of solar physics. The SWFO-L1 spacecraft will operate at the first Lagrange point (L1) between the Sun and Earth. CCOR-2 will be one of NOAA’s primary data sources with which to detect the emergence of coronal mass ejections (CMEs), the primary drivers of space weather and geomagnetic storms here on Earth. CCOR-2 images will be used to estimate CME speed, trajectory, and mass.
    • Mohamed Abid (Jet Propulsion Laboratory / NASA) & Peter Sullivan (NASA Jet Propulsion Lab)
      • 06.0301 Environmental Testing of Thales LPT Cryocoolers
        Ian McKinley (Jet Propulsion Laboratory) Presentation: Ian McKinley - -
        Infrared instruments often use mechanical cryocoolers to provide cooling to the cryogenic temperatures necessary for their operation. These instruments often also have limited budgets that require using commercial off-the-shelf coolers and leveraging qualification test programs performed by previous spaceflight instruments. Over the last ten years, numerous environmental tests have been performed on Thales LPT cryocoolers for various spaceflight applications. Tests include multiple random vibration tests and extensive thermal vacuum temperature cycling tests performed on LPT9510, LPT9310 and LPT9310-HP cryocoolers. This paper describes the various environmental tests performed in order to document the broad range of environments for which these coolers have been qualified for spaceflight. The number of cold starts and hot-to-cold excursions is discussed for each cooler, as well as operating hours at various temperatures in vacuum. Various random vibration environmental test levels are presented, as well as measurements of cold tip resonant frequency and damping as a function of supported mass. Overall, none of the coolers tested to these environments experienced any degradation in thermodynamic performance as a result of these tests. In addition, historical requirements for lifetime and radiation as well as typical spaceflight fracture control requirements for cryocoolers will be discussed. Electromagnetic interference requirements, along with tests performed on Thales LPT coolers to show compliance with these requirements, are also presented.
      • 06.0302 The Hyperspectral Thermal Emission Spectrometers - Characteristics and Performance
        Simon Hook () Presentation: Simon Hook - -
        HyTES represents a new generation of airborne TIR imaging spectrometers with much higher spectral resolution and a wide swath. HyTES-1 is a pushbroom imaging spectrometer with 512 spatial pixels over a 50-degree field of view. HyTES includes many key enabling state-of-the-art technologies including a Dyson-inspired spectrometer and a high-performance convex diffraction grating. The Dyson optical design allows for a very compact and optically fast system (F/1.6) and minimizes cooling requirements since a single monolithic prism-like grating design can be used which allows baffling for stray light suppression. The monolithic configuration eases mechanical tolerancing requirements, which are a concern since the complete optical assembly is operated at cryogenic temperatures (~100K). HyTES-1 originally used a Quantum Well Infrared Photodetector (QWIP) and had 256 spectral channels between 7.5μm and 12μm. In 2021 this was upgraded to a Barrier InfraRed Detector (BIRD) array with 284 spectral channels. The first science flights with the QWIP were conducted in 2013 and the first science flights with the BIRD in 2021. A second BIRD array with improved performance has now been completed and is being integrated into HyTES in early 2025. Work is also underway on a second instrument referred to as HyTES-2. HyTES-2 has extended wavelength coverage between 3 and 12 μm and 512 spectral channels. HyTES-2 is expected to begin flights in 2027/8. Many flights have been conducted with HyTES-1, and the instrument can now be deployed on a Twin Otter, Gulfstream or ER-2 aircraft, allowing a variety of pixel sizes depending on flight altitude. In 2023 a joint campaign was conducted with the European Space Agency to acquire data for simulating and evaluating data from several upcoming thermal infrared missions including the ASI/NASA SBG-TIR, the ESA LSTM and ISRO/CNES TRISHNA missions.
This paper will describe the instruments and their development together with some results for a variety of applications such as mineral mapping or gas mapping.
      • 06.0303 GLAMR: A NIST-Traceable Facility for Spectral and Radiometric Calibration and Characterization
        Julia Barsi (NASA - Goddard Space Flight Center), Brendan McAndrew (NASA - Goddard Space Flight Center) Presentation: Julia Barsi - -
        The Goddard Laser for Absolute Measurement of Radiance (GLAMR) is a mobile spectral and radiometric sensor characterization and calibration facility based at NASA/Goddard Space Flight Center. Based on NIST’s traveling Spectral Irradiance and Radiance Calibration using Uniform Sources (SIRCUS), GLAMR consists of a system of tunable lasers to generate quasi-monochromatic energy between 310 and 2500nm, a large integrating sphere, wavemeters, a control system to automate operations and a data system to record and serve telemetry. GLAMR provides an unpolarized, full field of view, spectrally and radiometrically uniform field for an instrument under test to view. GLAMR’s measurement approach combines spectral and radiometric characterization into one measurement sweep, allowing for calculation of an instrument’s Absolute Spectral Response (ASR), from which responsivity, spectral band metrics, out-of-band response, stray light and linearity, among other parameters, can be derived. GLAMR radiometric uncertainties range from 0.15 to 0.4% (k=1) depending on the spectral region and the spectral uncertainty is in the range of tens of picometers. This paper will provide a description of the GLAMR system, operation, traceability and the performance for recent GLAMR measurement campaigns for space-flight and airborne Earth observation missions, including key instrument results and lessons learned.
    • Thomas Backes (Georgia Institute of Technology) & Donnie Smith (Waymo)
      • 06.0402 Side-Looking SAR Using 12-18 GHz FMCW Radar Integrated with C3 Class Hexacopter
        Lee Taylor (University of Kansas), Fernando Rodriguez-Morales (University of Kansas), Carlton Leuschen (University of Kansas), Jilu Li (The University of Kansas) Presentation: Lee Taylor - -
        This paper presents initial images from a side-looking Synthetic Aperture Radar (SAR) collected using a 12-18 GHz ultra-wideband Frequency Modulated Continuous Wave (FMCW) radar integrated with a C3-class Aurelia X6 hexacopter. SAR processing was performed and a geolocated image is presented for FMCW radar data collected during April 2025 in Lawrence, KS, USA. The hexacopter operated at 100 m Above Ground Level (AGL) with a 45° incidence angle. A preliminary SAR image was successfully generated and compared to visible-spectrum satellite imagery of the parking lot scene at the Clinton International Model Airport. This paper will focus on data processing considerations leading to the SAR images. The significance of this investigation is demonstrating the ability to collect high-resolution SAR images using a cost-effective FMCW radar with ultra-wideband capabilities integrated with a commercially available Unmanned Aerial System (UAS). The system design reduces cost and improves versatility for surface information collection compared to traditional fixed-wing high-resolution imaging systems. Continued efforts are being made to collect repeat passes using a two-receive-channel version of the FMCW radar to evaluate single-pass data collection capabilities compatible with Interferometric SAR (InSAR) processing.
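The range relations governing such an FMCW system are standard textbook results rather than material from this paper. The sketch below assumes an illustrative 1 ms sweep time (not stated in the abstract) to show how the 12-18 GHz sweep sets range resolution and how a dechirped beat tone maps to range.

```python
# Standard FMCW radar relations (textbook, not from the paper).
C = 3.0e8  # speed of light, m/s


def range_resolution(bandwidth_hz):
    """Theoretical range resolution of an FMCW sweep: c / (2B)."""
    return C / (2.0 * bandwidth_hz)


def beat_to_range(f_beat_hz, sweep_slope_hz_per_s):
    """Range of a target whose dechirped beat frequency is f_beat."""
    return C * f_beat_hz / (2.0 * sweep_slope_hz_per_s)


B = 6.0e9                        # 12-18 GHz sweep -> 6 GHz bandwidth
T = 1.0e-3                       # assumed 1 ms sweep time (illustrative)
slope = B / T                    # chirp slope, Hz per second
dr = range_resolution(B)         # theoretical resolution, meters
r = beat_to_range(4.0e6, slope)  # range of a 4 MHz beat tone, meters
```

With the 6 GHz bandwidth, the theoretical resolution is 2.5 cm, consistent with the abstract's high-resolution claim; under the assumed slope, a 4 MHz beat tone corresponds to a 100 m slant range.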
      • 06.0405 Maneuver Detection Based Adaptive Transition Probability Matrix for Improved IMM Estimate
        Bibhabasu Mondal (LRDE, DRDO), Rajbabu Velmurugan (Indian Institute of Technology Bombay), Viji Panakkal (Central Research Laboratory) Presentation: Bibhabasu Mondal - -
        A tracking system can efficiently track a target if the motion model used by the system matches the target’s motion model. The Interacting Multiple Model (IMM) algorithm is commonly employed in tracking systems for this purpose. IMM uses a set of models to represent the possible evolution of the target’s state. Since the target’s motion can switch between different modes, the IMM algorithm must switch accordingly. The switching between models is governed by the Transition Probability Matrix (TPM), which plays a key role in determining both the estimation accuracy and the response time of the tracker. In the conventional Interacting Multiple Model (CIMM) algorithm, the TPM is predefined and set heuristically. The diagonal elements of the TPM represent the probability that a model will continue in its current state, which directly affects the accuracy of the estimate. A higher value for the diagonal elements typically leads to more accurate estimates in the case of model-matched filtering. However, larger diagonal elements also result in slower model switching, as the off-diagonal elements determine the speed at which models switch during changes in the target’s motion. This creates a trade-off between accuracy and responsiveness, limiting the performance of the CIMM algorithm. To address this limitation, this paper introduces a new approach that enhances the adaptability of the TPM through maneuver detection and correction. In the proposed method, during the onset and termination of a maneuver, the TPM is reset to its original value, and the target’s state is re-estimated from ‘n’ scans back to the current scan. Carrying out estimation from ‘n’ scans back with the original TPM enables faster model adaptation and leads to improved estimation accuracy.
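The accuracy/responsiveness trade-off can be made concrete with a toy sketch. This is our illustration, not the authors' algorithm: the two-model setup, TPM values, likelihoods, and the simple "swap TPM on maneuver detection" rule are all invented for demonstration.

```python
# Toy IMM model-probability update with an adaptively selected TPM.
# Illustrative sketch only; not the paper's implementation.

def update_model_probs(mu, tpm, likelihoods):
    """One IMM model-probability update: mix with the TPM, then weight
    each model by its measurement likelihood and renormalize."""
    n = len(mu)
    mu_pred = [sum(tpm[i][j] * mu[i] for i in range(n)) for j in range(n)]
    unnorm = [mu_pred[j] * likelihoods[j] for j in range(n)]
    s = sum(unnorm)
    return [u / s for u in unnorm]


def select_tpm(base_tpm, sticky_tpm, maneuver_detected):
    """Use the original, more responsive TPM around maneuver onset or
    termination; keep the high-diagonal TPM for steady-state accuracy."""
    return base_tpm if maneuver_detected else sticky_tpm


# Two models: constant velocity (CV) and coordinated turn (CT).
base = [[0.90, 0.10], [0.10, 0.90]]    # responsive (faster switching)
sticky = [[0.99, 0.01], [0.01, 0.99]]  # accurate while model-matched
mu = [0.8, 0.2]                        # currently confident in CV

# A maneuver is detected and the CT model fits the data far better:
mu = update_model_probs(mu, select_tpm(base, sticky, True), [0.1, 2.0])
```

With the responsive TPM, a single well-matched measurement is enough to flip the model probabilities toward the turning model.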
    • Craig Agate (Toyon Research Corporation) & Dan Harris (Northrop Grumman Corporation)
      • 06.0502 Hybrid Hard/Soft Data Association in Multi-target Tracking
        Stefano Coraluppi (Systems & Technology Research), Matt Henry (Systems and Technology Research) Presentation: Stefano Coraluppi - -
        This paper describes an approach to multi-target tracking (MTT) that includes both hard data association decisions for kinematic tracking as in multiple-hypothesis tracking (MHT) and soft data association inferencing for object type and identity estimation as in probabilistic data association filtering (PDAF). MHT is based on maximum a posteriori (MAP) estimation on the space of global data association hypotheses. A weakness of MHT is that it tends to be optimistic, in the sense that we condition over the MAP data association solution (established over the current reasoning window) and neglect competing solutions, without reflecting the inherent data association uncertainty in our solution. In practice, the optimism is short-lived in the sense that, in kinematic tracking, the impact of past association decisions on current state estimates degrades rapidly over time due to target process noise. On the other hand, the impact of erroneous association decisions will persist for those target states with no process noise. This concern is precisely the matter of interest here. We address the MTT problem for targets that have both a kinematic state that includes process noise, as well as a type (or identity) state that does not. Correspondingly, we adopt a soft data association approach to capture association ambiguity, solely for target-type (or target-identity) inferencing. This constitutes a hybrid scheme: (i) hard data association for track management and kinematic tracking, and (ii) soft data association for type/identity inferencing. Target identity is a related but distinct concept from target type. Identity is a unique identifier of a specific target of a known type. This distinction introduces differences between the statistical characteristics of type inferencing and identity inferencing, as the latter problem exhibits a stronger inherent coupling among objects: if one target is observed to have a specific identity, other targets necessarily will not. 
Thus, our approach to identity estimation includes an additional rebalancing step of marginal distributions over identity. Improved performance of our hybrid MHT-JPDAF solution approach over conventional MHT is demonstrated with simulated multi-target datasets.
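The exclusivity of identities (if one track has identity A, no other track does) suggests enforcing consistent marginals over an identity-probability matrix. The sketch below uses a Sinkhorn-style alternating normalization as our stand-in for the paper's rebalancing step; the rule and numbers are our own illustration, not the authors' algorithm.

```python
# Hypothetical identity-rebalancing sketch: rows are tracks, columns are
# identities. Alternating row/column normalization pushes the matrix
# toward consistent marginals, reflecting identity exclusivity.
# This Sinkhorn-style rule is our illustration, not the paper's method.

def rebalance(P, iters=50):
    """Alternately normalize rows (each track has some identity) and
    columns (each identity belongs to roughly one track)."""
    n_rows, n_cols = len(P), len(P[0])
    P = [row[:] for row in P]
    for _ in range(iters):
        for i in range(n_rows):               # row normalization
            s = sum(P[i])
            P[i] = [p / s for p in P[i]]
        for j in range(n_cols):               # column normalization
            s = sum(P[i][j] for i in range(n_rows))
            for i in range(n_rows):
                P[i][j] /= s
    return P


# Two tracks both lean toward identity 0; rebalancing forces them apart,
# so each track ends up favoring a distinct identity.
P = rebalance([[0.9, 0.1], [0.6, 0.4]])
```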
    • Laura Bateman (Johns Hopkins University/Applied Physics Laboratory) & William Blair (Georgia Tech Research Institute)
      • 06.0604 Near Real-Time Georectification of Satellite Imagery for Insights
        Paulo Fisch (Carnegie Mellon University), Punarjay Chakravarty (Planet Labs), Ravi teja Nallapu (Planet Labs R&D), Kiruthika Devaraj (Planet Labs), Zachary Manchester (Carnegie Mellon University) Presentation: Paulo Fisch - -
        Georectification is a foundational step in many satellite-imagery workflows, supporting applications such as Earth science, disaster response, and agricultural monitoring. These domains often depend on rapid and accurate geolocation to extract timely insights. However, conventional georectification methods typically rely on ground-control points, detailed camera models, and large external datasets. These approaches can be computationally intensive and may require several hours to complete, making them impractical for scenarios that demand near-real-time analysis—such as decision-making within a single orbital period (~90 minutes). To address this limitation, we propose a fast, two-stage geolocation and georectification pipeline that aligns unlocalized images to previously georeferenced ones using image content alone. The first stage performs coarse localization using a particle filter informed by ResNet-50 image embeddings. By comparing the embedding of a newly captured image to a database of georeferenced imagery in cosine similarity space, the particle filter identifies the region of highest visual correspondence. Once the filter converges, we apply SIFT keypoint matching between the localized image and the matched reference to perform fine registration. The resulting correspondences yield a homography transformation that maps the new image onto known geographic coordinates. To account for the satellite's motion during acquisition, we model its trajectory as linear over the region of interest, enabling sequential refinement of the pose estimate. This approach removes the need for metadata, external pose priors, or camera calibration, significantly reducing system complexity. We validated the method using Level-0 imagery and publicly available GeoTIFFs from a conventional processing pipeline.
The particle filter achieves initial geolocation within 200 meters, and the subsequent SIFT-based refinement reduces the georectification error to 13 pixels RMS, which for our dataset translates to 6.5 meters RMS—on par with conventional methods but with significantly lower latency. The full pipeline runs in approximately one minute per collect, making it suitable for time-constrained processing. The primary contributions of this work are: (1) a novel particle filter-based pipeline for rapid coarse localization of unreferenced satellite imagery, and (2) an efficient fine-alignment method using classical keypoint matching to achieve sub-ten-meter georectification accuracy. Together, these techniques enable fast, model-free image alignment with accuracy sufficient for a wide range of downstream applications.
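At its core, the coarse-localization stage is nearest-neighbor search in cosine-similarity space. A minimal sketch under our own simplifications: toy 3-element vectors stand in for ResNet-50 embeddings, and a plain dictionary stands in for the reference database.

```python
# Minimal sketch of embedding-based coarse localization (our toy
# stand-in for the paper's ResNet-50 + particle filter stage).
import math


def cosine(a, b):
    """Cosine similarity of two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def coarse_localize(query, database):
    """Return the reference tile whose embedding best matches the query."""
    return max(database, key=lambda tile: cosine(query, database[tile]))


# Hypothetical database: (lat, lon) tile -> reference embedding.
db = {
    (34.0, -118.2): [0.9, 0.1, 0.0],
    (38.9, -95.2):  [0.1, 0.8, 0.3],
    (40.7, -74.0):  [0.0, 0.2, 0.9],
}
best = coarse_localize([0.12, 0.75, 0.35], db)  # nearest in cosine space
```

In the actual pipeline, a particle filter aggregates such similarity scores over many location hypotheses before SIFT-based fine registration; this sketch shows only the scoring step.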
      • 06.0605 Cross-Region Mineral Mapping with Hyperspectral data-Trained Multispectral Data Features
        Tsubomatsu Hideki (Ibaraki University), Satoru Yamamoto (Advanced Industrial Science and Technology), Hideyuki Tonooka (Ibaraki University) Presentation: Tsubomatsu Hideki - -
        Hyperspectral (HS) remote sensing provides superior spectral resolution for mineral identification compared to multispectral (MS) data. However, HS imagery from sensors such as AVIRIS, EMIT, and HISUI often has limited spatial coverage, while MS imagery (e.g., from ASTER) offers more complete regional coverage. In many operational scenarios, MS images cover the full area of interest, whereas HS data are available only for subregions. This mismatch motivates the development of reliable MS-based classifiers that can be trained or validated using partial HS data. This study proposes a Physics-Informed Feature Engineering (PIFE) approach to enhance the generalization of MS-based mineral classification across different regions. The method identifies domain-invariant "master features" through a multi-step process. First, a comprehensive set of band-ratio and band-math indices is generated. From these, key indices that maintain high importance across both training and target regions are statistically selected. These are used to identify spectrally similar "representative pixels" across sites. The representative pixels guide an offset correction to reduce systematic brightness differences between regions. The feature set is then enriched with spectral descriptors such as spectral slopes (differences between adjacent bands) and band depth, which reflect absorption strength in key mineralogical bands. A Normalized Difference Vegetation Index (NDVI) mask is applied to exclude vegetated areas and focus the analysis on exposed rock and soil. Feature selection is performed to eliminate redundant or low-importance variables, and Principal Component Analysis (PCA) is applied when appropriate for dimensionality reduction. We validate the PIFE framework using a reciprocal transfer experiment involving three arid sites in the southwestern United States: Cuprite (training site), and Bodie Hills and Silver Peak Range (validation sites). 
These sites were selected due to their well-mapped occurrences of alteration minerals such as alunite, kaolinite, and muscovite. MS data from ASTER are used for classification, while HS data from AVIRIS, EMIT, and HISUI serve as ground-truth references. HS-based mineral maps are generated using both the Tetracorder and Hourglass methods (HISUI evaluated with Hourglass only) to provide validation labels. Results demonstrate that the PIFE method significantly improves the transferability and robustness of MS-based classifiers across geographically and spectrally diverse regions. This supports the operational use of widely available MS archives for large-scale mineral mapping, especially when limited HS data are available for calibration or validation.
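Two of the ingredients above, band-math indices and the representative-pixel offset correction, can be sketched in a few lines. The functions, band values, and the per-band shift rule are our illustrative assumptions, not the exact PIFE procedure.

```python
# Illustrative sketch of two PIFE-style steps (not the paper's exact
# algorithm): a normalized band-ratio feature, and an offset correction
# that removes a systematic brightness difference between regions using
# spectrally similar "representative pixels".

def band_ratio(pixel, i, j):
    """Normalized band-difference index; PIFE generates many candidates."""
    return (pixel[i] - pixel[j]) / (pixel[i] + pixel[j])


def offset_correction(target_pixels, rep_target, rep_train):
    """Shift target-region pixels, band by band, so the target region's
    representative pixel matches the training region's."""
    offsets = [t - s for t, s in zip(rep_target, rep_train)]
    return [[b - o for b, o in zip(px, offsets)] for px in target_pixels]


rep_train = [0.30, 0.50, 0.40]   # representative pixel, training site
rep_target = [0.35, 0.55, 0.45]  # same material, brighter target site
corrected = offset_correction([[0.36, 0.56, 0.46]], rep_target, rep_train)
r = band_ratio(corrected[0], 1, 0)
```

After correction, the same band-ratio feature computed at either site refers to comparable reflectance levels, which is the property that lets an MS classifier trained at one site transfer to another.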
      • 06.0608 Dynamic Bias Estimation Offset Determination for TDOA Geolocation
        Daniel Johnson (Georgia Tech Research Institute), Jeffery Hurley (Georgia Tech Research Institute), David Alvord (Georgia Tech Research Institute) Presentation: Daniel Johnson - -
        Time difference of arrival (TDOA) geolocation techniques typically work with the assumption that sensor uncertainty will be Gaussian. In practice, the noise observed often varies over time in non-Gaussian ways, especially when one or more sensors are out of calibration and produce static offsets, which are particularly problematic. The resulting errors lead to covariance values which produce ellipsoids that are less likely to contain the actual solution. A bias estimation technique is proposed for TOA and TDOA values that minimizes the ellipsoid volume over candidate offset values via a solver to find the most likely offset. The ellipsoid volume is generally well-behaved around the true offset, and the determination of any offset is amenable to a hill-climbing solver, provided that the possibility of other proximal local minima is handled. The technique was demonstrated to be effective in the lab and in the field with known truth values. The technique can be used to correct TDOA errors and improve solutions, as well as to discover whether sensors are out of calibration.
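A minimal rendition of the search idea (our sketch, not the authors' solver): hill-climb over a candidate bias while minimizing a scalar objective, with the sum of squared TDOA residuals standing in for the ellipsoid-volume criterion.

```python
# Illustrative 1-D hill climb for a static TDOA sensor bias. The
# residual-sum-of-squares objective is our stand-in for the paper's
# ellipsoid-volume criterion; the synthetic data are invented.

def hill_climb(objective, x0, step=1.0, min_step=1e-6):
    """Shrinking-step 1-D hill climb; assumes the objective is
    well-behaved near the true offset, as the paper observes."""
    x, fx = x0, objective(x0)
    while step > min_step:
        improved = False
        for cand in (x - step, x + step):
            fc = objective(cand)
            if fc < fx:
                x, fx, improved = cand, fc, True
        if not improved:
            step /= 2.0  # no neighbor improves: refine the search
    return x


# Synthetic data: measured TDOAs = true TDOAs + constant bias of 3.2.
true_tdoas = [10.0, -4.0, 7.5, 1.0]
measured = [t + 3.2 for t in true_tdoas]


def objective(bias):
    resid = [m - bias - t for m, t in zip(measured, true_tdoas)]
    return sum(r * r for r in resid)


est = hill_climb(objective, x0=0.0)  # recovers the injected bias
```

As the abstract notes, proximal local minima must be handled in practice; a fielded version would restart the climb from several seeds and keep the best result.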
    • John Glass (RTX) & John Grimes (BAE Systems, Inc)
      • 06.0703 Replacing Subspace Tracking Methods with Dynamic Mode Decomposition
        Efrain Gonzalez (), Carole Hall (Stony Brook University), Cove Kramer (), Curtis Madsen (Sandia National Laboratories), Craig Vineyard (), Jason Adams (Sandia National Laboratories) Presentation: Efrain Gonzalez - -
        Dynamic mode decomposition (DMD) is a physically interpretable algorithm used for modeling systems whose state changes over time. Originally, DMD was used as a way of extracting dynamic modes from fluid flow data, but it has since seen successes in the fields of neuroscience, epidemiology, finance, and video processing. A key problem in the video processing domain is the ability to separate the background and foreground in a scene. Several algorithms, ranging from the supervised to the unsupervised, have been developed to tackle this problem. Among them are a few streaming DMD methods which have been created because of DMD's ability to extract spatio-temporal modes and its computational efficiency. Several subspace tracking methods have also been developed to address the background modeling problem. Subspace tracking methods are often associated with principal component analysis (PCA). Previous studies have indicated that DMD methods have an advantage over PCA-based methods in several application domains due to PCA's limited ability to model temporal dynamics effectively. Our interest is to study whether there is a benefit to using streaming DMD based methods over subspace tracking methods for background modeling. Among the PCA-related algorithms that have been used for this purpose are the fast approximated power iteration (FAPI) algorithm and the fast data projection method (FDPM). FAPI and FDPM have been shown to be computationally efficient methods for developing a background model in real-time and FAPI is a part of the backbone of a method that has been popularized for background modeling called ARCS FAPI. Despite its many successes, ARCS FAPI is known to be limited in its ability to handle platform motion and clouds. In this work, we study whether streaming DMD methods hold an advantage over a version of ARCS FAPI and FDPM using two methods of comparison on simulated datasets. 
We test their ability to model background in real-time by comparing the absolute percent error across all video frames and then compare the ability of each algorithm to differentiate between pixels associated with the background and those associated with the targets by using an F1 score. We found that both FAPI and FDPM performed similarly and were much more computationally efficient than the DMD methods. However, one of the DMD methods was shown to outperform those algorithms on a dataset with platform motion and jitter. Our results indicate that there is potential for DMD methods to be successful in remote sensing applications.
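To convey the flavor of DMD-based background modeling, here is a deliberately tiny per-pixel analogue. This is our simplification, not any of the streaming DMD methods compared in the paper: fit the one-step linear model x[t+1] = a*x[t] to each pixel's time series and use the fit residual to separate background pixels (a stable mode fits well) from target pixels (a transient breaks the fit).

```python
# Toy per-pixel "dynamic mode" fit: a scalar analogue of DMD.
# Our simplification for illustration, not a streaming DMD algorithm.

def fit_mode(series):
    """Least-squares a for x[t+1] = a*x[t], plus the fit residual."""
    x, y = series[:-1], series[1:]
    a = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    resid = sum((yi - a * xi) ** 2 for xi, yi in zip(x, y))
    return a, resid


# Invented pixel time series: a steady background pixel, and a pixel
# crossed by a bright transient target.
background_pixel = [100.0, 100.5, 99.8, 100.2, 100.1, 99.9]
target_pixel = [100.0, 100.2, 180.0, 181.0, 100.3, 100.1]

a_bg, r_bg = fit_mode(background_pixel)  # a ~ 1, small residual
a_tg, r_tg = fit_mode(target_pixel)      # transient -> large residual
is_target = r_tg > 10 * r_bg             # residual-based separation
```

Real DMD extracts many coupled spatio-temporal modes via an SVD of stacked frames; the single-mode fit above only illustrates why a stable, near-unity mode characterizes background.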
    • Erik Blasch (Air Force Research Laboratory) & Paul Schrader (Air Force Research Laboratory Information Directorate)
      • 06.0802 Hierarchical Out-of-Distribution Detection with Topological Data Analysis
        Derek Deblieck (University of Nebraska-Lincoln), HONGZHI GUO (University of Nebraska-Lincoln), Paul Schrader (Air Force Research Laboratory Information Directorate) Presentation: Derek Deblieck - -
        Automatic Target Recognition (ATR) systems powered by machine learning (ML) face significant challenges in reliably detecting out-of-distribution (OOD) inputs during real-time operation. Deep neural networks (DNNs), when trained on in-distribution data (IDD), tend to produce overconfident predictions on OOD samples, which can lead to critical failures in downstream defense, security, and surveillance applications for aerial, terrestrial, maritime, and celestial environments. Topological Data Analysis (TDA) offers a promising solution by capturing global structural features of data. Since OOD samples often exhibit distinct topological characteristics from IDD samples, these features can be leveraged for effective OOD detection. Additionally, DNNs reduce the topological complexity of input data as it propagates through layers, which is an effect observed when processing IDD, but not OOD data. This paper proposes a hierarchical OOD detection framework that exploits both these TDA-based insights. First, a probabilistic clustering approach in topological space identifies low-complexity OOD samples. Second, the cosine distance between topological feature vectors is used to quantify the reduction in topological complexity across the DNN, where OOD samples typically exhibit less reduction compared to IDD. We next introduce a novel, synergistic implementation of these two approaches for enhanced OOD detection performance. This system is initially evaluated on a measured unimodal image dataset. Then it is validated in a measured multi-modal setting using the US Air Force Research Laboratory’s ESCAPE II dataset, which includes electro-optical (EO) and infrared (IR) ground and aerial imagery, as well as their data fusion aggregates. The reported experimental results demonstrate the robustness and accuracy of the proposed approach in detecting OOD samples across single- and multi-modal scenarios from several relevant environments.
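The second stage of the hierarchy can be caricatured as a cosine-distance test between topological feature vectors computed early and late in the network: in-distribution inputs simplify markedly through the layers (large distance), while OOD inputs do not (small distance). The vectors, threshold, and decision rule below are our invention for illustration only.

```python
# Illustrative cosine-distance OOD test on topological feature vectors.
# The vectors and threshold are made up; only the "less topological
# reduction for OOD" intuition comes from the abstract.
import math


def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)


def flag_ood(early_feats, late_feats, threshold=0.2):
    """Small change in topological summary across the DNN -> likely OOD."""
    return cosine_distance(early_feats, late_feats) < threshold


# IDD: topological complexity collapses through the network.
idd = flag_ood([0.9, 0.8, 0.7], [0.9, 0.01, 0.0])
# OOD: topological summary barely changes between layers.
ood = flag_ood([0.9, 0.8, 0.7], [0.85, 0.75, 0.72])
```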
      • 06.0803 State Space Quantification in Reinforcement Learning
        Rodney Sanchez (Rochester Institute of Technology), Paul Schrader (Air Force Research Laboratory Information Directorate), Jamison Heard (Rochester Institute of Technology) Presentation: Rodney Sanchez - -
        Neural network architecture design is currently understood as an art, where the network's hyperparameters are largely formulated through a synthesis of scientific intuition and investigative methodologies. This process is further complicated in reinforcement learning (RL), where informative representations (e.g., t-distributed Stochastic Neighbor Embedding or t-SNE plots) are often unavailable to validate the efficiency of an agent's feature extractor. In the context of RL, this makes it difficult to determine whether an algorithm's performance is a byproduct of learning more descriptive "states" or of improvements to the network policy. As a potential solution, we introduce a metric to robustly quantify the number and the rate of change of agent states. This metric is defined using Modern Hopfield Networks, yielding an associative space for each input. First, the metric's validity is demonstrated by comparing policy convergence to the rate of change in the associative space within simple Gymnasium environments and multi-modal complex Atari environments. Next, the metric is assessed on intrinsic forms of feature extraction, such as topological data abstraction. The incorporation of topology is hypothesized to aid the agent in producing succinct state spaces. Synergistically, these methods potentially provide an informative quantitative measure of how an RL agent's feature extractor performs, whether working with temporal difference learning or outside the constraints of independent, identically distributed state space representations.
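A toy rendition of the state-counting idea: map each observation to the stored pattern it retrieves and count distinct attractors visited, plus how often consecutive steps switch attractors. The single-step softmax retrieval, patterns, and trajectory are our simplification of Modern Hopfield retrieval, not the paper's metric.

```python
# Toy state counting via one-step Modern-Hopfield-style retrieval.
# Patterns, trajectory, and beta are invented for illustration.
import math


def retrieve(query, patterns, beta=4.0):
    """One softmax-weighted retrieval step over stored patterns; returns
    the index of the dominant attractor."""
    scores = [beta * sum(q * p for q, p in zip(query, pat))
              for pat in patterns]
    m = max(scores)
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    return max(range(len(patterns)), key=lambda i: w[i] / z)


def count_states(trajectory, patterns):
    """Number of distinct attractors visited and the switch count."""
    visited = [retrieve(q, patterns) for q in trajectory]
    n_states = len(set(visited))
    switches = sum(1 for a, b in zip(visited, visited[1:]) if a != b)
    return n_states, switches


patterns = [[1.0, 0.0], [0.0, 1.0]]  # two stored "agent states"
traj = [[0.9, 0.1], [0.8, 0.3], [0.2, 0.9], [0.1, 0.95]]
n_states, switches = count_states(traj, patterns)
```

Tracking `n_states` and `switches` over training would give the kind of quantitative signal the abstract describes: whether the feature extractor is carving out new states or the policy is merely improving over existing ones.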
      • 06.0805 Leveraging Interferometric Techniques for a Cooperative Space-Based Space Domain Awareness
        Nicholas Wondra (Sandia National Laboratories) Presentation: Nicholas Wondra - -
        The increasing number and density of trackable space objects has made the need for timely and accurate Space Domain Awareness (SDA) acute. A directed, distributed space-based SDA system can likely complement the Space Surveillance Network (SSN) by using cooperative sensing techniques. A novel system is envisioned to perform space domain awareness from a constellation of dedicated space-based platforms by leveraging interferometric SAR techniques for batch processing of space objects. The performance of three test cases is evaluated for coverage in executing uncued space observation, with a sun-synchronous plane of SDA spacecraft exhibiting strengths in system architecture and commercial viability. The system’s resilience, distributed nature, and potential availability in contested and wartime scenarios constitute compelling advantages for supplementing the SSN with additional space-based observation.
  • John Dickinson (Sandia National Laboratories) & Patrick Phelan (Southwest Research Institute)
    • Robert Merl (Los Alamos National Laboratory) & Jamal Haque (Lockheed Martin Space Systems Company)
      • 07.0101 A Data-Driven Surrogate Modeling and Sensor/Actuator Placement Framework for Flexible Spacecraft
        Matthew Hilsenrath (Colorado State University Department of Systems Engineering), Daniel Herber (Colorado State University) Presentation: Matthew Hilsenrath - -
        Flexible spacecraft structures present significant challenges for physical and control system design due to nonlinear dynamics, mission constraints, environmental variables, and changing operational conditions. This paper presents a data-driven framework for constructing reduced-order surrogate models of flexible spacecraft using the method of Dynamic Mode Decomposition (DMD), followed by optimal sensor/actuator pair placement. High-fidelity simulation data from a nonlinear flexible spacecraft model, including coupled rigid-body and elastic modes, are captured by defining a mesh of nodes over the spacecraft body. The data-driven methods are then used to construct a modal model from the time histories of these node points. Optimal sensor/actuator placement for controllability and observability is performed via a nonlinear programming technique that maximizes the singular values of the Hankel matrix. Finally, the sensor placement and dynamics modeling approach is iterated to account for changes in the dynamic system introduced by sensor/actuator physical mass. The proposed methodology enables initialization of physical modeling without requiring a direct analytical model and provides a practical solution for onboard implementation in model-based control and estimation systems. Results demonstrate optimal design methodology with substantial model-order reduction while preserving dynamic fidelity, and provide insight into effective sensor-actuator configurations for estimation and control.
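The placement criterion can be sketched in miniature: build the Hankel matrix of output samples for each candidate sensor location and pick the highest-scoring location. The paper maximizes Hankel singular values via nonlinear programming; in this hedged sketch the Frobenius norm serves as a cheap stand-in for that criterion, and the candidate responses are invented.

```python
# Hedged sketch of Hankel-based sensor placement. The Frobenius-norm
# score is our proxy for the paper's Hankel-singular-value criterion;
# candidate impulse responses are invented.

def hankel(samples, rows, cols):
    """Hankel matrix H[i][j] = samples[i + j] of impulse-response data."""
    return [[samples[i + j] for j in range(cols)] for i in range(rows)]


def frobenius_sq(H):
    """Squared Frobenius norm: sum of squared Hankel singular values."""
    return sum(v * v for row in H for v in row)


def best_sensor(candidates, rows=3, cols=3):
    """candidates: {location: impulse-response samples at that node}."""
    return max(candidates,
               key=lambda loc: frobenius_sq(hankel(candidates[loc],
                                                   rows, cols)))


candidates = {
    "tip":  [1.0, 0.8, 0.6, 0.5, 0.4, 0.3],    # strong modal response
    "root": [0.2, 0.1, 0.1, 0.05, 0.05, 0.0],  # nearly unobservable node
}
choice = best_sensor(candidates)
```

The squared Frobenius norm equals the sum of squared Hankel singular values, so it rewards the same observability structure the paper optimizes, though it cannot distinguish one dominant mode from several balanced ones the way individual singular values can.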
      • 07.0103 SCALES: A Dual Computing Architecture for Deploying Edge Computing Resources on Small Spacecraft
        Michael Pham (Cal Poly Pomona), Zachary Gaines (California Polytechnic State University, Pomona), Matthew Alexander Mariano (Bronco Space ICON Lab, California Polytechnic University, Pomona), Kelly Williams (California State Polytechnic University, Pomona), Luca Lanzillotta (California Polytechnic State University), Tanya Patel (California State University), John Pollak (Cal Poly Pomona), Giselle Revolorio (), Phyllis Nelson (California State Polytechnic University Pomona) Presentation: Michael Pham - -
        SCALES (Spacecraft Compartmentalized Autonomous Learning and Edge Computing System) will provide an integrated hardware/software system for safely deploying Machine Learning (ML) algorithms on small spacecraft. This is accomplished through a dual computing architecture that features a high-performance edge computer, the NVIDIA Jetson Orin, supervised by radiation-tolerant watchdog circuits, alongside a radiation-tolerant FD-SOI (Fully Depleted Silicon On Insulator) flight processor. By partitioning the riskier high-performance computing tasks to a dedicated processor, SCALES provides a safe environment for experimental deployments of ML algorithms with minimal risk of avionics downtime due to functional interruptions. The SCALES flight processor uses the NASA JPL F Prime (F’) flight software framework, an open-source, reusable, component-based framework that has been deployed on a variety of small satellites and the Mars Ingenuity Helicopter. SCALES links this flight software with a flexible ML environment on the Jetson. With this reusable and open-source component-based framework, flight software stacks already using F’ in other parts of the spacecraft may be able to integrate seamlessly into a single software deployment with SCALES. The project provides an example deployment of F’ on our chosen hardware with a Yocto Linux build system, several ML model implementations, and an improved F Prime Python toolchain. SCALES includes a radiation-tolerant power system based on the flight-proven OreSat CubeSat architecture and voltage-regulator designs tested for use in the CERN ATLAS Small Wheel. SCALES also employs an Ethernet-based payload and peripheral interface system. These integrated systems minimize the engineering overhead required to integrate SCALES as a primary flight computer or dedicated payload processor on a small satellite.
Our paper discusses the hardware architecture for SCALES, the design rationale for the selection of the NVIDIA Jetson Orin and its NXP i.MX 8X companion processor, and initial benchtop, benchmark, and radiation testing results for the selected hardware at both the component and system level.
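The supervision concept, a watchdog that restarts the edge computer when heartbeats stop, can be sketched in software; the class below is a hypothetical illustration, not SCALES flight code:

```python
import time

class Watchdog:
    """Toy heartbeat supervisor. A stand-in for SCALES's radiation-tolerant
    watchdog circuit: if the edge computer misses its heartbeat deadline,
    the supervisor "power-cycles" it (here, it just counts a reset)."""

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_beat = time.monotonic()
        self.resets = 0

    def heartbeat(self):
        """Called by the supervised payload while it is healthy."""
        self.last_beat = time.monotonic()

    def poll(self):
        """Called periodically by the supervisor loop."""
        if time.monotonic() - self.last_beat > self.timeout_s:
            self.resets += 1                   # would toggle a power rail
            self.last_beat = time.monotonic()  # give the payload time to boot
            return "RESET"
        return "OK"
```

The key property mirrored here is that a hang in the experimental processor produces a bounded outage (one reset cycle) rather than indefinite avionics downtime.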
      • 07.0104 RADSoM Launchpad
        Srabanti Roy (Johns Hopkins University/Applied Physics Laboratory), Owen Pochettino (Johns Hopkins University/Applied Physics Laboratory) Presentation: Srabanti Roy - -
        RADSoM is a compact, highly reconfigurable, reusable, radiation-tolerant Field Programmable Gate Array (FPGA) System on Module (SoM) suitable for the space environment. Since its introduction, it has been used successfully in several processor-based programs and projects, some concentrating on memory-intensive applications and others on DSP workloads. Users value features such as the powerful, reusable PolarFire FPGA (from Microchip) and the on-board high-speed DDR3 memories. RADSoM provides a portable component/processor IP block useful in spearheading new spacecraft hardware design efforts, and offers a low-cost, flexible approach to electrical design that permits a “plug-in” approach to FPGA designs. Its flexible, portable, low-cost, low-risk, user-friendly, and easily adaptable nature makes it a strong candidate for space applications. Leveraging these attractive features, an effort has been underway to take RADSoM a step further and create Board Support Packages (BSPs) for the RADSoM product families. This will be highly beneficial for both the existing first-generation (V1) RADSoMs and the imminent next-generation (V2) RADSoMs. Developing a BSP for the V2 RADSoM family looks ahead, ensuring its availability and readiness when the need arises. RADSoM V2 uses Microchip’s new PolarFire SoC FPGA with DDR4 memories and controller and an on-board quad-core RISC-V processor; serving the RADSoM community with a V2 BSP is essential to satisfy the growing interest in the RISC-V processor architecture. The main purpose of this effort to develop BSPs for RADSoM V1/V2 is to address the community’s long-standing desire for one. Although RADSoM V1 has been utilized in various programs and projects, one thing has always been lacking: an available BSP and set of reference designs for the RADSoM products, which would have given users an easy start integrating RADSoMs into their designs.
Lacking BSPs, these programs initially spent a significant amount of time getting their RADSoM-based designs interfacing to a processor up and running. Each program also established its RADSoM baseline design with its own methodology, so there was no consistency between projects. BSPs for V1 and V2 will therefore enable easy baseline integration of RADSoMs, particularly on programs that use soft-core processor-based subsystems, providing a low-cost, less complex, flexible solution for users to integrate RADSoMs into their applications. The end result is reduced schedule impact, a low-cost solution, and a firm hardware and software foundation, while elevating the opportunities to utilize RADSoM's attractive features. All of this makes RADSoM, together with its BSPs, a technology enabler. The BSPs will include a set of PolarFire FPGA (V1) and PolarFire SoC FPGA (V2) reference designs. These designs will include support for multi-core soft-core processors such as LEON3 and RISC-V on V1 and the hard-core RISC-V processor on V2. The BSPs will support peripherals and components including SpaceWire, JTAG, custom UARTs, SPI, I2C, and AMBA interfaces, high-speed I/Os, and RADSoM’s on-board high-speed DDR3/DDR4 memories.
      • 07.0105 Tiled Plate Solving and Distortion Correction for Robust Star Tracking in WFOV Imagery
        Gabriela Gavilánez Gallardo (Embry-Riddle Aeronautical University), Daniel Lopez (Embry-Riddle Aeronautical University), Troy Henderson (Embry-Riddle Aeronautical University) Presentation: Gabriela Gavilánez Gallardo - -
        Accurate orientation is essential in autonomous missions, where success depends entirely on the reliability of the information provided by the system’s onboard perception methods. Among these, Wide Field-of-View (WFOV) cameras play a key role, supporting applications ranging from autonomous driving to spacecraft navigation due to their ability to capture large portions of the environment. They offer a compact, single-sensor solution for situational awareness, increasing the likelihood of capturing a sufficient number of reference features, particularly stars, in the context of celestial navigation and star tracking. However, wide-angle optics introduce significant nonlinear distortion, particularly towards the edges of the frame, posing challenges for accurate perception and navigation. This distortion produces geometric inconsistencies that directly impair the accuracy of star-based attitude estimation methods. Traditional mitigation approaches rely on global nonlinear distortion models, such as the Brown–Conrady or division models, which apply a uniform polynomial correction across the entire image field. Although these methods effectively address low to moderate distortion levels, they assume smooth and continuous radial and tangential deformation and thus struggle under conditions involving local artifacts, such as partial obstructions, vignetting, lens asymmetry, or misalignment. For these cases, global plate solving often fails to match observed star patterns with the celestial reference database, resulting in degraded or invalid orientation estimates. Considering these limitations, this work introduces a novel three-stage pipeline that integrates a nonlinear distortion correction method with tile-wise plate solving to improve the reliability and accuracy of star tracking for wide-angle imagery. The first proposed stage applies a nonlinear fisheye distortion model to rectify the raw image.
The undistorted image is then subdivided into overlapping tiles for the second stage, each covering a smaller field where distortion is minimal and more uniform. Each tile is independently plate-solved by matching detected stars to a known star catalog, enabling localized estimation of pointing vectors. For the final stage, a filtering and fusion step determines valid tile solutions, returning a global orientation estimation. The implementation of this framework is being developed for onboard deployment on EagleCam 2, a 1U CubeSat currently in the development phase for a future Nova-C lunar lander mission. Preliminary testing suggests that the proposed method offers a practical and robust solution for coarse attitude estimation in challenging conditions, providing resilience to severe distortion, optical artifacts, and partial obstructions commonly encountered in wide-angle imaging scenarios.
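The tile-and-fuse stages described above can be sketched as follows; the tile geometry, function names, and simple vector-averaging fusion are illustrative assumptions, not the EagleCam 2 implementation:

```python
import numpy as np

def overlapping_tiles(height, width, tile, overlap):
    """Yield (row, col, h, w) windows that cover a height x width image
    with the given tile size and overlap. Each window would then be
    plate-solved independently against the star catalog."""
    step = tile - overlap
    for r in range(0, max(height - overlap, 1), step):
        for c in range(0, max(width - overlap, 1), step):
            yield r, c, min(tile, height - r), min(tile, width - c)

def fuse_pointing(tile_vectors, tile_valid):
    """Fuse per-tile pointing estimates: keep only tiles whose plate solve
    succeeded, average their unit vectors, and renormalize."""
    v = np.asarray(tile_vectors, dtype=float)[np.asarray(tile_valid)]
    mean = v.mean(axis=0)
    return mean / np.linalg.norm(mean)
```

Because each tile is solved independently, a tile corrupted by an obstruction or residual distortion simply fails validation and is excluded from the fused estimate.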
      • 07.0108 LandSat Next Instrument Image Processing & Compression Pipeline – from Photons to Data Cubes
        Patrick Phelan (Southwest Research Institute), Didier Keymeulen (Jet Propulsion Laboratory), Michael Koets (Southwest Research Institute) Presentation: Patrick Phelan - -
        The ongoing development of payload electronics at Southwest Research Institute (SwRI) and RTX to support the LandSat Next Instrument Suite (LandIS) as part of the long-tenured USGS/NASA LandSat mission has resulted in the development of a novel image processing and compression pipeline to move 1.8 Gbps of scientific data to the host spacecraft. This pipeline contains functions to communicate with three different focal planes, to apply data corrections and aggregations on incoming pixels, and to compress the final image products for transmission to the ground. This real-time, near-lossless data compression implementation on FPGA, conformant with CCSDS 123.0-B-2, was developed by the Jet Propulsion Laboratory (JPL). The image processing pipeline used is part of a larger science "observation" campaign envisaged as a series of "studies" that are scheduled using a macro execution engine maintained by Flight Software (FSW). This paper discusses the design and architecture of the LandIS image processing and compression pipeline hardware and associated controlling firmware and software employed to meet the various engineering and payload science requirements of the LandSat Next instrument.
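To illustrate the near-lossless idea only (a bounded per-sample reconstruction error), here is a toy predictive coder; it is not the CCSDS 123.0-B-2 algorithm, whose adaptive 3-D predictor and entropy coder are far more sophisticated:

```python
def near_lossless_encode(samples, max_err):
    """Toy predictive near-lossless coder: previous-sample prediction with
    uniform quantization of the residual, guaranteeing |error| <= max_err
    per integer sample. The decoder reproduces `recon` from `indices` by
    tracking the same running reconstruction."""
    q = 2 * max_err + 1              # quantizer bin width
    indices, recon, prev = [], [], 0
    for s in samples:
        k = round((s - prev) / q)    # quantized residual index (to transmit)
        prev += k * q                # reconstruction the decoder also tracks
        indices.append(k)
        recon.append(prev)
    return indices, recon
```

With `max_err=0` the coder degenerates to lossless differential coding, which is the same knob the standard exposes to trade rate against a hard error bound.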
      • 07.0109 RAD-TECH: A Research Portfolio to Enable State-of-the-Art Electronics in Radiation Environments
        John Dickinson (Sandia National Laboratories), Collin Burt (Sandia National Laboratories), Joseph D'Amico (Sandia National Laboratories), Colin McKay (Sandia National Laboratories), David Hughart (Sandia National Laboratories), David Lee (Sandia National Laboratories) Presentation: John Dickinson - -
        Sandia National Laboratories is kicking off a Research Mission Campaign, RAD-TECH (Radiation Assured Design and Testing for Electronics and Computational Hardening), consisting of a portfolio of research studies over the next seven years to evaluate and help enable the use of State-of-the-Art (SotA) microelectronics in radiation environments, including for space applications. The research campaign is divided into three thrust areas: 1) discover novel radiation effects in SotA components; 2) mature assessment methodology to enable better component testing, modeling, and simulation; and 3) harden electronics to be resilient in these environments through techniques scaling from the transistor/semiconductor nanoscale to the component/board/system-level macroscale. Four research projects were completed in FY25 on topics including: 1) assessment of the latest SRAM devices; 2) radiation testing of networks-on-chip (NoCs) for advanced System-on-Chip (SoC) semiconductors; 3) nonvolatile memory control logic hardening, specifically for Magnetoresistive Random Access Memory (MRAM); and 4) extremely rapid power-cutoff circuitry to prevent destructive latch-up from radiation. All projects yielded discoveries that improved Sandia's radiation assessment methodologies and pointed the way to, or directly produced, novel hardening techniques. In addition to continued research in MRAM control logic hardening and SRAM characterization, the FY26 portfolio includes projects on five additional topics: 1) exploration of angular effects on varying transistor types in heavy-ion irradiation; 2) investigation of new techniques to harden modern SoC architectures with built-in blocks; 3) novel methods of rapid damage annealing in state-of-the-art nodes; 4) hardening of embedded Phase-Locked Loops (PLLs) for improved Focal Plane Array (FPA) performance; and 5) demonstration of high-performance processing monitored by radiation-hardened watchdog circuitry.
Over the next seven years, the research campaign will continue to build on these inquiries with the ultimate goal of producing a functional design for a national security component to enable the latest in high-performance computing in applications subject to extreme radiation environments.
      • 07.0110 Analysis of GEMM Implementations Using MIMD-based Processor for Multi-Function ISAC Systems
        Saquib Siddiqui (Arizona State University) Presentation: Saquib Siddiqui - -
        Objectives: General Matrix-Matrix Multiplication (GEMM) and General Matrix-Vector Multiplication (GEVM) operations are critical in multifunction space communications applications, ranging from free-space optical integrated sensing and communications (ISAC) to ISAC-enabled satellite systems. The underlying components of such systems include beamforming, channel estimation, interference mitigation, and adaptive filtering, among others. This paper aims to develop an efficient implementation of GEMM and GEVM using a Multi-Instruction Multi-Datapath (MIMD)-based Domain Adaptive Processor (DAP). The goal is to achieve high throughput, low latency, and energy-efficient computation while maintaining scalability and reconfigurability. Methods: The DAP integrates a runtime-reconfigurable systolic array within the DASH SoC platform. Unlike SIMD processors, the MIMD architecture enables fine-grained control, parallel execution, and tailored dataflow. A row-stationary, column-streaming strategy is introduced to reduce memory overhead and support pipelined computation across processing elements (PEs). A case study examines GEMM for radar-communication interference mitigation, focusing on a 4×64 by 64×4 cross-covariance matrix mapped to an 8×8 PE array. Analytical modeling estimates latency, compute time, throughput, power, and energy efficiency across varying array sizes, geometries, and matrix dimensions. Execution time is broken down into instruction loading, data loading, and computation/routing costs. Results: Scaling PEs improves latency up to 3× (with diminishing returns beyond 16 PEs). Computational cost grows nearly linearly, showing ∼2× improvement at 16 PEs. Throughput saturates at 8 PEs but increases steadily (≈2×) with more columns or PEs. Power consumption remains stable (≈1.08–1.16× growth under higher workloads), highlighting efficient streaming dataflow.
Geometry affects performance: with 8 PEs, latency gains range from 1.25× (4×2) to 4× (1×8); with 16 PEs, gains range from 1.16× (4×4) to 2.11× (2×8). Diagonal mappings introduce higher overhead due to extensive PE usage. Scalability analysis shows consistent 1.9–2× gains in latency and throughput when input dimensions double, while energy scales linearly with workload. Conclusion: The proposed MIMD-based DAP achieves significant improvements in GEMM/GEVM performance, delivering up to 8048.7 GOPS/W, surpassing contemporary accelerators such as Google TPU and MIT Eyeriss. Results validate the advantages of domain-specific, reconfigurable architectures for multifunction RF systems, combining high performance, scalability, and energy efficiency.
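The case-study mapping can be sketched numerically; the block below partitions the 4×64 by 64×4 product across an 8×8 grid of notional PEs using a simple output-stationary split (the DAP's actual row-stationary, column-streaming dataflow differs):

```python
import numpy as np

def pe_grid_gemm(A, B, grid=(8, 8)):
    """Partition C = A @ B across a grid of processing elements, each PE
    owning one output block. Returns C plus the multiply-accumulate count
    per PE, which exposes how work spreads across the array."""
    m, k = A.shape
    _, n = B.shape
    rows_per_pe = np.array_split(np.arange(m), grid[0])
    cols_per_pe = np.array_split(np.arange(n), grid[1])
    C = np.zeros((m, n))
    macs = np.zeros(grid, dtype=int)
    for i, rows in enumerate(rows_per_pe):
        for j, cols in enumerate(cols_per_pe):
            if rows.size and cols.size:
                C[np.ix_(rows, cols)] = A[rows] @ B[:, cols]
                macs[i, j] = rows.size * cols.size * k
    return C, macs

# The case study's 4x64 by 64x4 cross-covariance product on an 8x8 array
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 64))
B = rng.standard_normal((64, 4))
C, macs = pe_grid_gemm(A, B)
```

The MAC-count map makes the geometry effect in the paper tangible: at this matrix size only a 4×4 corner of the 8×8 array does work, which is why PE-array shape matters as much as PE count.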
      • 07.0111 Flexible FPGA Accelerator for Real-Time Neuromorphic Optical Flow in Space Applications
        Linus Silbernagel (NSF SHREC Center - University of Pittsburgh), Daniel Stumpp (University of Pittsburgh), Alan George (University of Pittsburgh) Presentation: Linus Silbernagel - -
        Event-based vision sensors have become popular due to their high temporal resolution, low power consumption, and high dynamic range. These properties make the sensors attractive compared to traditional cameras for many computer-vision tasks that require low latency in resource-constrained environments such as space. An important component of many computer-vision tasks is optical flow, which is the estimation of an object's perceived motion in the scene. Traditional approaches use frame-by-frame displacement of an object for estimating optical flow. However, with event-based sensors, the paradigm has shifted to using pixel-wise event information. The new paradigm enables algorithms to take advantage of the higher temporal resolution of the sensors while avoiding the inefficiency of frame-based algorithms on spatially sparse data. These algorithms need to achieve high performance to keep up with the low-latency event streams from the sensors. Field-programmable gate arrays (FPGAs) are commonly used as accelerator platforms because their reconfigurable hardware can be tailored to the algorithmic implementation. Existing solutions for accelerating event-based optical flow are either too computationally expensive to run on embedded platforms or sacrifice precision in favor of enhanced speed. This research presents an FPGA-based acceleration architecture of a plane-fitting algorithm using a Savitzky-Golay filter for event-based optical-flow calculations. Unlike traditional methods that store a sensor frame, the proposed architecture uses a small history of recent events. This optimization enables the architecture to be expanded to higher-resolution sensors without the memory limitation of storing all frame data on-chip. The developed architecture has multiple parameters that can be configured to allow for a tailored FPGA implementation. The architecture was tested on two standard event-based datasets and a recently released space-based dataset.
Our experimental results show that this design reaches real-time performance on an embedded platform and achieves a throughput of 506.51 Kevts/s. This architecture demonstrates the scalability of event-based optical flow to future high-resolution sensors, delivering low-latency, precise computer-vision capabilities for embedded platforms like those used in space.
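The plane-fitting principle behind the architecture can be sketched as below; batch least squares stands in for the streaming Savitzky-Golay formulation, and the moving-edge event data are synthetic:

```python
import numpy as np

def plane_fit_flow(events):
    """Fit the plane t = a*x + b*y + c to a neighborhood of events (x, y, t)
    by least squares; the inverse of the fitted time gradient gives the
    local optical flow v = (a, b) / (a^2 + b^2)."""
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    G = np.column_stack([x, y, np.ones_like(x)])
    (a, b, _), *_ = np.linalg.lstsq(G, t, rcond=None)
    g2 = a * a + b * b
    return np.array([a, b]) / g2 if g2 > 1e-12 else np.zeros(2)

# An edge sweeping in +x at 2 px per unit time triggers events along its path
rng = np.random.default_rng(2)
ts = np.linspace(0.0, 1.0, 100)
events = np.column_stack([2.0 * ts, rng.uniform(0, 10, ts.size), ts])
flow = plane_fit_flow(events)
```

Because the fit needs only the events in a small spatiotemporal neighborhood, the hardware can keep a short event history instead of a full frame, which is the memory argument made above.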
      • 07.0112 LDPC Decoding Acceleration Architectures on Versal AI Edge for Space Platforms
        Noah Perryman (Voyager Technologies) Presentation: Noah Perryman - -
        A future communications network routing data through multiple space nodes can significantly improve information mobility and communications resilience for space applications. This future system, composed of both radio frequency (RF) and free-space optical (FSO) communication links, requires satellites to provide communication services that perform demodulation and forward error correction (FEC) decoding of the received signal before re-encoding and forwarding the signal. This regenerative capability prevents the accumulation of errors as communications are routed throughout the network; however, it incurs a significant increase in the design complexity of communication satellites, especially for FEC decoding on high data-rate communications. Achieving regenerative communications with high-throughput FEC decoding requires both an effective FEC algorithm and a high-performance processor architecture. First, low-density parity-check (LDPC) codes, used for FEC in advanced communications standards such as Wi-Fi (802.11n) and 10G Ethernet (IEEE 802.3an), are excellent candidates for their high throughput and error-correction capabilities, and the iterative and parallelizable processes in LDPC decoding are highly amenable to acceleration. Second, the Versal Adaptive System-on-Chip (SoC), a next-generation space-qualified processor, features an adaptive, heterogeneous architecture enabling high-performance onboard processing. In this paper, we propose a high-throughput LDPC decoder architecture accelerated on the AI Engine (AIE) tiles of the Versal Adaptive SoC, achieving up to 4.147 Gbps for a single iteration, as a potential solution to the future communications network.
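To show the iterative, parallel check/bit update structure that makes LDPC decoding amenable to acceleration, here is a minimal hard-decision bit-flipping decoder, run on a small Hamming parity-check matrix for brevity; the accelerated design itself uses soft-decision decoding on much larger codes:

```python
import numpy as np

def bit_flip_decode(H, received, max_iter=20):
    """Hard-decision bit-flipping decoder: each iteration evaluates all
    parity checks in parallel, then flips the bits touching the most
    failed checks. H is the parity-check matrix over GF(2)."""
    c = received.copy()
    for _ in range(max_iter):
        syndrome = H @ c % 2
        if not syndrome.any():
            return c, True                    # every parity check satisfied
        fails = syndrome @ H                  # failed checks touching each bit
        c = (c + (fails == fails.max())) % 2  # flip the worst offenders
    return c, False

# (7,4) Hamming parity-check matrix; corrupt one bit of the zero codeword
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)
received[2] = 1
decoded, converged = bit_flip_decode(H, received)
```

Each iteration is two sparse matrix-vector products plus an elementwise update, exactly the kind of regular, data-parallel kernel that maps well onto an array of AIE tiles.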
      • 07.0113 Enabling Fault-Tolerant Autonomous Lunar Habitats with High-Performance Spaceflight Computing
        Sarkis Mikaelian (NASA Armstrong Flight Research Center) Presentation: Sarkis Mikaelian - -
        The lunar surface presents unfavorable constraints and harsh living conditions. To address these challenges, autonomous habitats will require complex integrated systems that combine advanced software, high-performance hardware, and cutting-edge sensors to ensure sustainability, safety, and operational efficiency. Consequently, maintaining a sustainable presence on the Moon requires reliable infrastructure and efficient development, precise monitoring, and utilization of resources within a lunar installation. These elements are essential not only to ensure that lunar settlement can be long-term, self-sustaining, and resource-efficient, but also to serve as a foundation for future missions and eventual human habitation on Mars. Humans are not native to the Moon; therefore, our survival and ability to thrive will depend on autonomous systems that can foster safety and resilience through high-availability architectures, graceful degradation, and highly fault-tolerant spaceflight hardware capable of continuing operation during failures. This requires advanced human-rated distributed systems architectures with specialized electronics, scalable capabilities, and an integrated design approach. Unlike current practices focused on short-term missions and regularly maintained components, permanent lunar compute systems must be designed for extended operations beyond mission durations. Each component must ensure the habitat’s immediate survival and functionality, while accounting for prolonged exposure to heightened radiation and extreme thermal cycling, where resources for maintenance are limited or non-existent. This paper explores the necessity of transitioning toward fault-tolerant, highly autonomous hardware systems designed for multi-year missions. Such systems must implement (1) subsystem-level redundancy, (2) autonomous fault management, (3) integrated health management, and (4) prognostics. 
It also identifies critical subsystems that require high levels of autonomy, supported by radiation-hardened processors able to withstand extreme thermal loads, which are essential to mitigate long-term degradation and ensure sustainable lunar habitation. Finally, the paper aligns with NASA’s identified Civil Space Shortfalls, particularly in high-performance onboard computing (IDs 1550/1551/1554/1555), advanced data acquisition (ID 1549), extreme-environment avionics (ID 1552), radiation monitoring and countermeasures (IDs 1519/1526/1527), and autonomous health management (ID 1535). It proposes NASA’s new High-Performance Spaceflight Computing (HPSC) processor as a turnkey solution, delivering 100 times the performance-per-watt of legacy rad-hard CPUs and enabling onboard AI, edge computing, and fault-tolerant features essential for sustained lunar autonomy and beyond.
    • Mark Post (University of York) & Michael Epperly (Southwest Research Institute) & Patrick Phelan (Southwest Research Institute)
      • 07.0203 The SpaceVNX+ Standard for Small Form-Factor, Modular Electronics for Space Applications
        Steve Parkes (STAR-Dundee Ltd.) Presentation: Steve Parkes - -
        SpaceVNX+ [1] is a standard for small form factor equipment modules and units that is being designed specifically for space applications. SpaceVNX+ is a development of the military/aerospace standard VNX+ [2], an emerging VITA standard for small form factor systems based on the earlier VNX standard. VNX is widely used in aerospace and military terrestrial applications, and the ANSI/VITA standard for VNX+ is expected to be published in 2025. VNX+ is designed with a very small form factor: a board size of 84 x 73 mm. However, spaceflight components are significantly larger than commercial, industrial, and automotive components, so a larger board is required. The SpaceVNX+ board will have the same height as a VNX+ board (84 mm) but a length of 120 mm. This will almost double the board area and the permitted power consumption (thanks to a longer thermal interface) of a SpaceVNX+ module compared to a VNX+ module. SpaceVNX+ provides a standard platform for the implementation of the entire range of avionics applications on board a spacecraft, from simple remote terminal units to high-performance payload data-handling units. SpaceVNX+ is intended to be complementary to larger form factor standards such as VITA SpaceVPX [3] and ESA’s ADHA [4]. To support a complete range of space applications, SpaceVNX+ will support various forms of redundancy. Single-string units, which efficiently support both spacecraft-level redundancy and cross-strapped unit-level redundancy, will be facilitated. Redundancy inside a unit will also be supported, allowing dual-redundant modules and 1-of-M redundant modules within a unit. SpaceVNX+ provides high-performance interconnect between modules which can carry SpaceWire, SpaceFibre, Ethernet (Base-X) and/or PCIe. The principal drivers for SpaceVNX+ are: the space environment, small form factor, modularity, scalability both within and outside units, versatility, and performance and availability supported by fault tolerance.
This paper introduces SpaceVNX+, describes its architecture and modules, and the types of system that can be constructed with those modules. It also explores the decisions made as SpaceVNX+ was developed and explains the rationale behind those decisions. [1] VITA, https://vita.com, last visited June 2025. [2] VITA, “VNX+ Base Standard, DRAFT – Not Approved”, VITA 90.0, Revision 0.0, 16th June 2025. [3] ANSI/VITA, “SpaceVPX System Standard”, ANSI/VITA 78.0-2022, August 2022. [4] ESA, “Advanced Data Handling Architecture (ADHA)”, ADHA-2 Workshop, ESTEC, November 2022.
      • 07.0204 SpaceFibre Evolution: 100 Gbps with Low-Overhead SoC Integration
        Alberto Gonzalez Villafranca (STAR-Barcelona SL), Steve Parkes (STAR-Dundee Ltd.), Dave Gibson (STAR-Dundee Ltd.) Presentation: Alberto Gonzalez Villafranca - -
        SpaceFibre (ECSS-E-ST-50-11C) is the latest generation of spacecraft onboard data-handling networks, evolving from the widely adopted SpaceWire standard to meet the increasing demands for higher data rates, enhanced reliability, and robust quality-of-service (QoS) capabilities. SpaceFibre reached TRL-9 in April 2021. Recent developments have further enhanced the capabilities of SpaceFibre, addressing challenges associated with advanced space missions. A higher-speed variant of SpaceFibre surpasses current solutions based on 8b/10b encoding, which can achieve lane rates on the order of 10 Gbit/s, resulting in 40 Gbit/s link rates on a quad-lane link. A novel and efficient encoding scheme, combined with integrated Forward Error Correction (FEC), enables lane speeds of 25 Gbit/s in current space-qualified technology with minimal protocol stack modifications. An initial implementation demonstrated aggregate data rates of 100 Gbit/s using a quad-lane configuration and was successfully validated on an AMD Versal FPGA during heavy-ion radiation testing. To support these high speeds efficiently, a SpaceFibre Processor Endpoint has been developed to integrate SpaceFibre networks with processor-based System-on-Chips (SoCs) running operating systems such as Linux. Performance results across multiple SoCs confirm the effectiveness of this integration strategy, demonstrating substantial benefits for representative spacecraft data-handling systems. This hardware-accelerated endpoint interface achieves data rates into user-space memory at tens of Gbit/s with minimal processor overhead. This paper presents the new high-speed variant of SpaceFibre, then introduces the SpaceFibre Processor Endpoint and software, demonstrates implementations in different SoCs, and analyses their performance results. Collectively, these advancements significantly enhance the throughput, efficiency, and integration flexibility of SpaceFibre.
They open new possibilities for spacecraft data-handling architectures, enabling unprecedented levels of performance and reliability.
    • Chris Iannello (University of Central Florida) & Thomas Cook (Voyager Space)
      • 07.0403 Maximum Power Point Tracking (MPPT) Energy Extraction without a Power Converter
        Samuel Kerem (Johns Hopkins University/Applied Physics Laboratory) Presentation: Samuel Kerem - -
        The MPPT technique is most commonly used to maximize energy from solar panels charging a battery. It can also be applied to thermophotovoltaics, optical power transmission, wind turbines, and other sources of harvested energy. The efficiency of power transfer depends on the appropriate matching of the source's output voltage and current with the battery's (load's) input voltage and current. The source and load conditions vary independently, so a continuously variable buck-boost converter with adjustable Pulse-Width Modulation is conventionally used to effect the matching. The paper presents a novel approach to achieving optimal source-battery voltage and current matching by reconfiguring the internal connections of the battery's module-built cells. The battery input state can be modified by reconfiguring the series and/or parallel connections of the battery's cells to tune the battery's input current-sinking properties to match the source's output I-V curve. A similar approach can be used to recharge a rotorcraft's high-voltage (90V-100V) battery module from a Radioisotope Thermoelectric Generator (RTG), which produces a nominal output of around 30V. The rotorcraft's battery is a combination of multiple Li-Ion cells connected in series and parallel to deliver high operational power. Instead of using a high-frequency switching DC-DC converter to boost the RTG output and constantly manage the converter's duty cycle, the battery's internal structure is reorganized into several independent sections, each of which is directly connected to the RTG. Additionally, the status of each section is monitored through Coulomb counting to estimate its state of charge under various external scenarios, thereby delivering a specific charging profile that can also be used for a Machine Learning database. Directly charging the battery through separate sections, which are serially reconnected after recharge, reduces the mass and volume of the recharging module by a factor of ten.
Eliminating the intermediate power conversion stage yields a charging efficiency of over 99.9%. The battery reconfiguration is implemented using static switches, eliminating the need for transformers and inductors, which results in minimal electromagnetic and RF emissions, as well as negligible heat dissipation and power noise, thus creating an ideal environment for highly sensitive instruments. Machine Learning enables failure prediction, leading to enhanced reliability and longevity, as well as higher end-of-life battery capacity, which is crucial for long-duration missions. These features would benefit programs such as cislunar and deep space missions, air-gliding bodies, Unmanned Aerial and Underwater Vehicles, electrical propulsion, and directed energy applications, and in general any application where efficient energy conversion and the volume and mass of the power source play an essential role.
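The reconfiguration idea can be sketched as a search over series/parallel splits against a linear source I-V model; the model and all numbers below are illustrative, not flight parameters:

```python
def best_configuration(n_cells, v_cell, source_voc, source_r, i_cell_max):
    """Search series/parallel splits of a cell bank for the one that draws
    the most power from a source with a linear I-V curve V = Voc - I*R
    (an RTG-like model). Reconfiguring the pack voltage replaces the
    converter's duty-cycle adjustment as the matching knob."""
    best_power, best_split = 0.0, None
    for s in range(1, n_cells + 1):
        if n_cells % s:
            continue                          # only even splits considered
        p = n_cells // s
        v = s * v_cell                        # pack voltage seen by the source
        i = (source_voc - v) / source_r       # current the source delivers at v
        i = min(max(i, 0.0), p * i_cell_max)  # per-string charge-current limit
        if v * i > best_power:
            best_power, best_split = v * i, (s, p)
    return best_power, best_split

# 24 cells at 3.6 V against a ~30 V open-circuit RTG-like source
power, split = best_configuration(24, 3.6, 30.0, 2.0, 2.0)
```

In this toy example the search picks 4 cells in series (14.4 V), which sits close to the linear source's maximum power point at Voc/2 = 15 V, illustrating how discrete reconfiguration approximates continuous MPPT.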
      • 07.0404 Backpack Battery: Auxiliary Power for Dragonfly during Entry, Descent, and Landing
        Tonle Bloomer (Johns Hopkins University/Applied Physics Laboratory), Matthew Rodriguez (Johns Hopkins University/Applied Physics Laboratory) Presentation: Tonle Bloomer - -
        Detailed is the design of a switching power electronics module, the EDL Battery Controller (EBC), which provides auxiliary power to Dragonfly only during the Entry, Descent, and Landing (EDL) phase of the mission. NASA's Dragonfly mission is a rotorcraft designed for science exploration on Saturn's moon Titan. The mission's cruise stage lasts 6.7 years before reaching Titan and beginning the 5–7 hour EDL stage of the mission. The lander battery (energy and mass) is sized to support rotorcraft flight and science operations after the EDL mission phase. Therefore, the lander battery alone had potentially insufficient capacity for a safe EDL-phase landing across worst-case conditions. To ensure the lander battery meets state-of-charge (SOC) requirements at first landing without incurring a lander battery redesign (and excessive mass penalties), additional power from an auxiliary source was needed. The EDL Primary Batteries (EPB) and the EDL Battery Controller (EBC) were added to the EDL Assembly to meet this need. The EPBs provide the additional watt-hours required, while the EBC regulates the flow of current to the lander during EDL to maintain a safe lander SOC. The EBC also completely isolates the primary batteries from the rest of the spacecraft to minimize leakage current during the cruise phase and any other non-charging operations. The EBC's primary function is to convert the EDL primary battery voltage down to the Dragonfly lander battery voltage using a buck converter topology with a constant-current, constant-voltage charging method. Instantaneous power throughput is designed for ~850W. The buck converter is designed using peak current control, where the current loop is handled with analog electronics and discrete ICs. These PCB-implemented circuits create the ramp-compensated PWM regulation that provides constant output current. The peak current setpoint itself is commanded via digital control provided by a Microsemi FPGA.
The FPGA closes the loop on the lander battery voltage, a proxy for state of charge, and then programs the output current that the peak current loop regulates to. To create system-level redundancy, two separate converters run simultaneously in parallel. However, to achieve output current sharing (as opposed to one channel dominating the net charge current), the digital controllers must pass their separate lander battery voltage senses to each other. Charging the Dragonfly lander's lithium-ion battery from primary lithium batteries imposes critical mission-safety protection requirements: the EBC must protect against EPB over-temperature, EPB undervoltage, lander battery overvoltage, and EBC input and output overcurrent. The EBC also implements autonomous restart behavior; if a fault clears, the channel resumes charging without software intervention. The EBC engineering model (EM) hardware has been built and tested, including some environmental testing. The EBC EM was also successfully tested in an end-to-end configuration with the lander test battery and EM EPBs. The EBC has passed its Engineering Design Review and is on track to support the success of the Dragonfly mission.
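The constant-current, constant-voltage charge logic described above can be sketched as a simple outer loop: the digital controller picks a current setpoint from the sensed lander battery voltage, and the inner analog peak-current loop (abstracted away here) is assumed to track that setpoint. All voltages, currents, and gains below are hypothetical, not flight values:

```python
# Illustrative CC/CV outer-loop sketch. The inner peak-current loop is
# assumed ideal; v_target, i_max, and kp are made-up numbers.

def cc_cv_setpoint(v_batt, v_target=28.0, i_max=30.0, kp=50.0):
    """Return the charge-current command for one control step."""
    if v_batt < v_target:
        return i_max                          # constant-current region
    # constant-voltage region: fold the current back proportionally
    return max(0.0, i_max - kp * (v_batt - v_target))

# Below the voltage target the full current is commanded; above it,
# the command folds back toward zero to hold the battery at v_target.
assert cc_cv_setpoint(24.0) == 30.0
assert 0.0 <= cc_cv_setpoint(28.5) < 30.0
```

In the flight design this role is split between the FPGA (voltage loop, setpoint command) and the analog peak-current circuitry; the sketch collapses both into one function for clarity.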
      • 07.0406 Structural and Thermal Analysis of a Lithium Battery for Interceptor Missile AFTS Application
        Steven Karpov (Space Information Laboratories), Timothy Anderson (Signet Technologies) Presentation: Steven Karpov - -
        This paper presents the structural and thermal analysis of a lithium battery system designed to power an autonomous flight termination system (AFTS) integrated into an interceptor missile. The battery, developed by Space Information Laboratories (SIL), is a 3.7 Ah configuration and was subjected to qualification testing based on the thermal, shock, and vibration environments projected for the missile's flight profile. Structural and thermal assessments were performed using Siemens SimCenter 3D to ensure the battery meets performance requirements outlined in program specifications. The structural analysis evaluated the battery's ability to withstand launch-induced dynamic loading, modeling glued contact interactions between aluminum structural supports and the battery module. The model incorporated foam encapsulants and detailed mechanical features of the housing and PCBA. The thermal analysis accounted for heat loads generated by onboard electronics, with discrete modeling of components dissipating 0.030 W or more and smeared load distribution for lower-power components. Thermal contact resistance and detailed PCB stack-ups were included in the model. A steady-state solution was used to analyze PCBA temperatures, while a transient thermal solution over a 4500-second mission duration assessed the battery cell temperature response. Results from both analyses confirmed that the battery design satisfies all structural and thermal requirements for the interceptor application. The comprehensive modeling approach ensures that the battery will perform reliably under the demanding conditions of flight, contributing to the overall safety and effectiveness of the AFTS system.
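The transient cell-temperature analysis mentioned above runs in a full finite-element tool; the underlying idea can be illustrated with a single-node lumped-capacitance model. The one-node simplification, time step, and every parameter value here are assumptions for illustration only, not values from the paper:

```python
# Minimal lumped-capacitance sketch of a transient cell-temperature run
# over a 4500 s mission, like the case described above. Illustrative only.

def transient_cell_temp(t0_c, q_w, m_kg, cp, ua, t_env_c, duration_s, dt=1.0):
    """Explicit-Euler integration of dT/dt = (Q - UA*(T - T_env)) / (m*cp)."""
    t = t0_c
    for _ in range(int(duration_s / dt)):
        t += (q_w - ua * (t - t_env_c)) / (m_kg * cp) * dt
    return t

# A hypothetical 0.5 kg cell dissipating 2 W, weakly coupled (UA = 0.05 W/K)
# to a 20 C environment, warms over the 4500 s mission but stays well below
# its steady-state limit of T_env + Q/UA = 60 C.
t_final = transient_cell_temp(20.0, 2.0, 0.5, 900.0, 0.05, 20.0, 4500.0)
```

The real analysis resolves many nodes, contact resistances, and PCB stack-ups; the single-node version only shows why a transient (rather than steady-state) solution is needed for the cells: the mission ends before thermal equilibrium is reached.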
      • 07.0407 Power Electronic Building Block (PEBB) as a Plug and Play Universal Power Convertor for Spacecrafts
        Awais Karni (North Dakota State University), Omid Beik (Michigan State University) Presentation: Awais Karni - -
        The advancement of space exploration missions, particularly those involving deep-space human exploration and megawatt-scale nuclear electric propulsion, demands highly reliable and versatile power electronics. Spacecraft power systems rely heavily on power management and distribution (PMAD) units and power processing units (PPUs) to regulate and deliver power to electric propulsion thrusters, spacecraft buses, and various auxiliary loads. These systems typically incorporate multiple converters, including ac-dc, dc-ac, and dc-dc topologies, to support the diverse voltage and current requirements of spacecraft subsystems. Given the extreme operational environment of space, characterized by high radiation, vacuum, and extreme temperatures, any failure in these converters can severely disrupt mission operations. The inability to perform extensive in-space repairs further underscores the need for robust, reliable, and easily replaceable power electronics solutions. To address this critical challenge, a universal Power Electronic Building Block (PEBB) is proposed, based on a neutral point clamped (NPC) full-bridge converter. The NPC topology is well known in terrestrial power electronics for its high power density, multilevel output, and reduced voltage and current stress on semiconductor switches. Traditionally applied to ac-dc or dc-ac conversion, this work innovatively extends the NPC topology to perform dc-dc conversion as well, creating a fully versatile, plug-and-play converter suitable for spacecraft applications. The PEBB is designed with two configurable ports, Port I and Port II, which can act as inputs or outputs depending on the type of conversion. Eight distinct operational modes, leveraging combinations of switches, clamping diodes, and capacitors, enable precise generation of unipolar and bipolar voltages.
This multilevel control not only facilitates multiple conversion types but also reduces total harmonic distortion (THD) and minimizes filtering requirements, thereby enhancing overall efficiency and power quality. The universal PEBB can be constructed using radiation-hardened (rad-hard) semiconductor devices and enclosed within a vacuum-sealed, shielded box to protect against radiation, vacuum-induced breakdowns, and other space environment hazards. This design ensures reliable operation under harsh conditions and maintains plug-and-play capability, allowing for immediate replacement of faulty units without interrupting spacecraft operations. Such an approach significantly enhances fault tolerance in PMAD and PPU architectures, which are composed of numerous converters where individual failures can compromise overall mission performance. Simulation studies validate the PEBB’s performance across dc-ac, dc-dc, and ac-dc conversions. For dc-ac conversion, the PEBB produces a three-level alternating output waveform with improved quality and lower THD compared to conventional two-level inverters. In dc-dc conversion, the converter generates multilevel DC outputs, reducing filtering requirements and extending the functionality of NPC topology beyond its conventional use. During ac-dc operation, the anti-parallel diodes of the switches allow the PEBB to function as a controlled rectifier, converting ac inputs into stable dc outputs. In conclusion, the proposed NPC-based universal PEBB represents a robust, flexible, and fault-tolerant solution for spacecraft power systems. Its ability to seamlessly perform ac-dc, dc-ac, and dc-dc conversions, combined with radiation protection and plug-and-play design, ensures uninterrupted operation of critical spacecraft subsystems. This innovation addresses key challenges in deep-space power electronics, supporting the reliable operation of megawatt-scale spacecraft and advancing the development of long-duration missions.
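The three-level output that distinguishes the NPC leg from a two-level inverter can be shown with a toy quantizer: the leg output connects to +Vdc/2, 0, or -Vdc/2, so a sine reference maps onto three levels instead of two, which is what reduces THD and filtering needs. The switching details, mode selection, and all numbers below are illustrative, not from the paper:

```python
# Toy sketch of three-level NPC output synthesis. Real modulation uses
# carrier-based or space-vector PWM; this coarse quantizer only shows the
# three available output levels. All values are illustrative.
import math

def three_level(ref, vdc=100.0, band=0.33):
    """Quantize a normalized reference (-1..1) onto the three NPC levels."""
    if ref > band:
        return vdc / 2.0
    if ref < -band:
        return -vdc / 2.0
    return 0.0

wave = [three_level(math.sin(2 * math.pi * k / 64)) for k in range(64)]
levels = sorted(set(wave))   # the output visits exactly three levels
```

Stepping through an intermediate zero level is what lets the multilevel waveform approximate the sine more closely than a two-level square wave of the same switching frequency.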
      • 07.0408 BOOST Converter Performance Optimization for Microsatellite Power Systems Using MPPT Fuzzy Control
        ABDELKADER HADJ-DIDA (Algerian Space Agency ASAL - Satellite Development Centre CDS) Presentation: ABDELKADER HADJ-DIDA - -
        In the field of space technology, microsatellites have become essential due to their flexibility, low cost, and wide range of applications, from Earth observation to telecommunications. One of the major challenges in ensuring the reliable operation of these spacecraft is the efficient management of their Electrical Power System (EPS), mainly based on solar energy. Future space missions require power systems capable of delivering high performance while overcoming constraints such as limited mass, volume, and the harsh space environment. To supply, store, and distribute electrical energy to the various microsatellite subsystems, a variety of DC-DC converters are used. Design specifications and simulation parameters of a Boost converter for various microsatellite applications are identified in this work. Achieving optimal power conversion efficiency to enhance the microsatellite's EPS energy performance is a critical design objective, due to the limited energy availability and strict weight constraints of space systems. Maximizing the energy extracted from the microsatellite's solar panels under varying environmental conditions depends on advanced Maximum Power Point Tracking (MPPT) techniques. Mastery of these techniques is therefore essential to optimize energy performance and ensure the microsatellite's autonomy throughout its mission. This paper presents the performance optimization of a DC-DC boost converter designed for microsatellite power systems using a Fuzzy Logic-based MPPT controller. The proposed approach enhances the efficiency and reliability of the EPS, contributing to the successful accomplishment of the microsatellite mission in orbit. Power analysis and control strategies of the proposed converter were carried out and simulated within the MATLAB/Simulink environment. All design objectives, analytical calculations, and simulation results of the Boost converter are thoroughly analyzed, verified, and validated in this paper.
Keywords: Microsatellite - Electrical Power System (EPS) - DC-DC Boost Converter - MPPT - Fuzzy Logic Controller - Performance Optimization.
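A drastically simplified sketch of the fuzzy MPPT step described in the abstract: the inputs (power change and panel-voltage change) are fuzzified into coarse linguistic classes, a small rule table selects a duty-cycle step direction, and the boost converter duty cycle is nudged toward the maximum power point. The membership thresholds, rule table, and step size below are illustrative assumptions, not the paper's controller:

```python
# Toy fuzzy-style MPPT step for a boost converter. A real fuzzy controller
# uses graded memberships and defuzzification; this crisp 3x3 version only
# shows the rule-table structure. All numbers are illustrative.

def fuzzify(x, dead=1e-3):
    return "NEG" if x < -dead else "POS" if x > dead else "ZERO"

# Rule table: (sign of dP, sign of dV) -> duty-step direction. E.g. if power
# rose while panel voltage rose, keep decreasing duty (raising panel voltage).
RULES = {
    ("POS", "POS"): -1, ("POS", "NEG"): +1,
    ("NEG", "POS"): +1, ("NEG", "NEG"): -1,
    ("ZERO", "POS"): 0, ("ZERO", "NEG"): 0,
    ("POS", "ZERO"): 0, ("NEG", "ZERO"): 0, ("ZERO", "ZERO"): 0,
}

def mppt_step(duty, dp, dv, step=0.005):
    direction = RULES[(fuzzify(dp), fuzzify(dv))]
    return min(max(duty + direction * step, 0.0), 0.95)

# Power increased as panel voltage increased: reduce duty to keep climbing.
assert abs(mppt_step(0.50, dp=0.8, dv=0.2) - 0.495) < 1e-9
```

The advantage of the fuzzy formulation over plain perturb-and-observe is that graded memberships let the step size shrink near the maximum power point, reducing steady-state oscillation; that refinement is omitted here.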
    • Mohammad Mojarradi (Jet Propulsion Laboratory) & Andrew Kirby (Los Alamos National Laboratory)
      • 07.0501 Design and Evaluation of a SiGe BiCMOS Operational Amplifier for Wide-Temperature Applications
        Alex Seaver (University of Tennessee), Steven Corum (University of Tennessee - Knoxville), Benjamin Blalock (The University of Tennessee) Presentation: Alex Seaver - -
        The operational amplifier is a cornerstone component of analog electronics design, serving critical roles in innumerable applications such as sensing, signal conditioning and filtering, data acquisition, analog computation, communications, and control systems. Owing to their ubiquity, it is inevitable that engineers wish to use operational amplifiers in extreme environments such as space exploration, where temperatures can range from nearly absolute zero (i.e., 0 K, or about −273°C) to well over 125°C. However, designing amplifiers to operate reliably over such a wide temperature range requires more care than is commonly given to the design of commercial off-the-shelf amplifiers, which are typically designed to operate over temperature ranges no wider than −55°C to 125°C. While CMOS-based amplifiers can be designed to operate down to deep cryogenic temperatures, it is desirable to turn toward bandgap-engineered materials to overcome challenges specific to such wide-temperature design, such as reduced precision, decreased long-term reliability, and degraded radiation tolerance at extreme temperatures. For space exploration applications, it is beneficial to develop a modular library of mixed-signal integrated circuits (ICs) known to be reliable in extreme environments in order to enable speedy design of integrated electronic systems for space missions. This work develops the design of an integrated two-stage operational amplifier for use in such a modular extreme-environment IC library. The amplifier is designed in a commercially available 90nm BiCMOS process featuring silicon-germanium heterojunction bipolar transistors (HBTs) and is evaluated across a wide temperature range from −180°C to 125°C, with in situ tests also demonstrating successful operation at 4.2 K (−269°C) as part of a linear voltage regulator (LVR).
      • 07.0502 A Review of SiGe BiCMOS Devices for Ocean Worlds Exploration: Leveraging SiGe HBT and pFET Devices
        Md Omar Faruk (University of Tennessee Knoxville), Steven Corum (University of Tennessee - Knoxville), Zakaraya Hamdan (University of Tennessee), Alex Seaver (University of Tennessee), Benjamin Blalock (The University of Tennessee), Jeffrey Teng (The Aerospace Corporation), John Cressler (Georgia Tech), Linda Del Castillo (Jet Propulsion Laboratory), Mohammad Mojarradi (Jet Propulsion Laboratory), Travis Graham () Presentation: Md Omar Faruk - -
        Space exploration that seeks to find biosignatures of extraterrestrial life beyond Earth requires robust and reliable electronic systems for sensing, data processing, controlling motors/actuators, and communication while surviving extreme environments like Europa. Commercial-off-the-shelf (COTS) components are designed for commercial or military-specification (mil-spec) temperature ranges. Wide-temperature applications therefore require a “Warm-Electronics-Box” to keep COTS components operational. However, COTS components are also designed with little to no regard for radiation tolerance, and size, weight, and power (SWaP) constraints may limit the use of “Warm-Electronics-Box” based approaches. Electronic system designs must be tailored to the extreme conditions under which they will operate. Extreme cold temperatures and high radiation adversely affect CMOS and BJT device parameters over time, compromising electronic system reliability. This paper overviews device performance degradation induced by high ionizing irradiation and extreme cold, and promotes a strategic design philosophy that exclusively uses Silicon-Germanium (SiGe) Heterojunction Bipolar Transistors (HBTs) and silicon P-channel MOSFETs (pFETs) to capitalize on their favorable performance in cryogenic and high-radiation environments. This approach avoids nFET devices, due to their degradation under similar conditions.
      • 07.0503 Design Considerations for Wide-Temperature Analog LVRs with Emitter Follower SiGe HBT Pass Devices
        Steven Corum (University of Tennessee - Knoxville), Alex Seaver (University of Tennessee), Benjamin Blalock (The University of Tennessee) Presentation: Steven Corum - -
        As the demand for space exploration continues to grow within both the scientific community and commercial enterprises, the need for extreme-environment electronics must similarly grow to support such efforts. Two destinations of current interest are the Lunar surface and “Ocean Worlds” such as Europa's surface, both of which are considered extreme with respect to temperature. The Lunar surface has the more extreme temperature profile of the two destinations, 25 K to 398 K, and a total ionizing dose (TID) profile up to 100 krad. Europa, with its 77 K surface temperature, has the more extreme TID profile of 5 Mrad. A ubiquitous circuit for these missions is the analog linear voltage regulator (LVR), which provides stable supply-voltage regulation to analog, RF, and mixed-signal systems under a wide range of loading conditions. The LVR's pass device is responsible for delivering supply current to the load and plays a critical role in the stability of the regulator. The heterojunction bipolar transistor (HBT) is an attractive option for a pass device in extreme environments due to the HBT's inherent immunity to hot-carrier effects and TID. The HBT also enjoys an improvement in performance parameters, such as current gain β, Early voltage (V_A), and transition frequency (f_T), at cold temperatures. This work provides an analysis of the emitter follower HBT pass device LVR topology, discusses design trade-offs of using an HBT pass device, and discusses design considerations specific to the HBT emitter follower pass device amplifier stage. An LVR is designed using an integrated two-stage SiGe BiCMOS operational amplifier with an HBT emitter follower pass device in a commercially available 90-nm SiGe BiCMOS process. Characterization of the LVR is performed from −175°C (98.15 K) to 125°C (398.15 K). The LVR is functionally validated down to 4.2 K while supplying a VCC of 2.0 V to a 10 GHz RF low-noise amplifier (LNA).
    • Neil Dahya (NASA Jet Propulsion Laboratory) & Didier Keymeulen (Jet Propulsion Laboratory)
      • 07.0603 Mitigating SEFIs in the Peripheral Circuitry of Non-Volatile Memories in Radiation Environments
        Daniel Puckett (Sandia National Laboratories), Henri Malahieude (Sandia National Laboratories), Joseph D'Amico (Sandia National Laboratories), Joshua Joffrion (Sandia National Laboratories) Presentation: Daniel Puckett - -
        Non-Volatile Memories (NVMs) are essential to computing systems, including systems present in radiation environments. Previous radiation tests on NVMs have revealed failure cases where the peripheral circuitry of an NVM is affected such that the NVM is inaccessible but no data is corrupted. We experienced this failure case, which we call lock-up, on an NVM we tested. Lock-up occurs for many microseconds at a time, which can lead to several kilobytes of data corruption. In this paper, we present a two-pronged mitigation strategy for lock-up to improve NVMs' reliability and availability and evaluate our mitigation strategy through simulations. First, we address NVMs' reliability by designing and evaluating several detection policies, which can detect when lock-up begins and ends consistently and with low overhead. Second, we address NVMs' availability by designing the Cache Window algorithm, which pre-loads data into a radiation hardened cache, enabling the processor to make forward progress while the NVM is in lock-up.
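The Cache Window idea above can be sketched with a toy model: a small radiation-hardened cache is pre-loaded with the next window of NVM addresses, so reads can still be served while the NVM is locked up. The class, interface, and window policy below are illustrative assumptions, not the paper's actual design:

```python
# Hedged sketch of a Cache Window mechanism. The backing "NVM" is a dict,
# lock-up is a boolean flag, and the window policy is a fixed-size prefetch.

class CacheWindowNVM:
    def __init__(self, nvm, window=8):
        self.nvm = nvm            # backing store: address -> data
        self.window = window
        self.cache = {}           # radiation-hardened cache contents
        self.locked = False       # models a lock-up SEFI in the periphery

    def preload(self, start):
        """Pre-load the next `window` addresses into the rad-hard cache."""
        for addr in range(start, start + self.window):
            if addr in self.nvm:
                self.cache[addr] = self.nvm[addr]

    def read(self, addr):
        if self.locked:           # NVM inaccessible: serve from cache only
            return self.cache.get(addr)
        return self.nvm.get(addr)

nvm = CacheWindowNVM({a: a * 2 for a in range(32)})
nvm.preload(4)                    # processor is about to stream 4, 5, ...
nvm.locked = True                 # lock-up begins
assert nvm.read(5) == 10          # forward progress continues from cache
assert nvm.read(20) is None       # outside the window: would stall
```

The real algorithm must also pair this with the detection policies the paper describes, so the controller knows when to switch between direct NVM reads and cached reads.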
    • Matthew Lashley (GTRI) & Leena Singh (MIT Lincoln Laboratory) & John Enright (Toronto Metropolitan University)
      • 07.0702 Solving Geodesic Equations with Composite Bernstein Polynomials for Optimal Trajectory Planning
        Nick Gorman (University of Iowa), Gage MacLin (), Venanzio Cichella (University of Iowa) Presentation: Nick Gorman - -
        This work presents a trajectory planning method based on composite Bernstein polynomials, designed for autonomous systems navigating complex environments. The method is implemented within a symbolic optimization framework that allows for smooth, continuous path representations and precise control over trajectory shape. Trajectories are planned over a cost surface that encodes obstacle information as continuous fields, rather than discrete boundaries. Areas near obstacles are assigned higher costs, naturally encouraging the trajectory to maintain a safe distance while still allowing for efficient routing through constrained spaces. This smooth cost encoding supports gradual changes in path curvature, improving navigation in tight or cluttered regions. The optimization is subject to several constraints: (1) the geodesic equations of the cost surface, which ensure the path follows the most efficient direction relative to the cost surface; (2) boundary constraints to enforce fixed start and end conditions; and (3) a cost inequality constraint to ensure minimum clearance from obstacles by limiting the allowable cost at points along the trajectory. The use of composite Bernstein polynomials allows the trajectory to maintain global smoothness while enabling fine control over local curvature to meet geodesic constraints. The symbolic representation supports exact derivatives, improving both the efficiency and accuracy of the optimization process. The method is applicable in both two- and three-dimensional environments and is well suited for ground, aerial, or underwater systems. Demonstrations show that the approach can efficiently generate smooth, collision-free paths in scenarios with multiple obstacles, maintaining clearance without requiring extensive sampling or post-processing. 
Compared to traditional planners such as graph-based (e.g., A*) or sampling-based (e.g., RRT, PRM) methods, this technique offers several advantages: it produces continuous polynomial trajectories; it avoids the discretization artifacts of discrete-space or sampling-based methods by solving over a continuous domain; and it significantly reduces optimizer overhead by using symbolic computations to provide exact derivatives, avoiding costly numerical approximations. This approach can serve as a standalone planner or as an initializer for more complex motion planning pipelines. Future extensions may include integration of system dynamics, time-varying obstacle fields, or onboard implementation for real-time applications. Overall, the method provides a flexible and mathematically grounded solution to trajectory generation, balancing optimality, safety, and computational efficiency in challenging autonomous navigation tasks.
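The trajectories above are built from Bernstein polynomials; a minimal sketch of evaluating one Bernstein (Bezier) segment with de Casteljau's algorithm shows why the representation is convenient: evaluation is numerically stable and the endpoints interpolate the first and last control points, which makes boundary constraints trivial to impose. The control points below are arbitrary illustrations, not from the paper:

```python
# Evaluate a Bernstein-basis curve by de Casteljau's recursion. Works in
# any dimension; control points here are illustrative 2-D values.

def de_casteljau(ctrl, t):
    """Evaluate the curve with control points `ctrl` at parameter t in [0,1]."""
    pts = [list(p) for p in ctrl]
    n = len(pts)
    for r in range(1, n):                  # repeated linear interpolation
        for i in range(n - r):
            pts[i] = [(1 - t) * a + t * b for a, b in zip(pts[i], pts[i + 1])]
    return pts[0]

ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
# Endpoint interpolation: the curve starts and ends exactly at the first and
# last control points, so fixed start/end conditions become point constraints.
assert de_casteljau(ctrl, 0.0) == [0.0, 0.0]
assert de_casteljau(ctrl, 1.0) == [4.0, 0.0]
```

A composite trajectory strings several such segments together with continuity constraints at the joints, which is what gives the planner global smoothness with local curvature control.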
      • 07.0704 Release Sequence Design for Fuel-Free Formation Initialization Based on Recursive Safety Evaluation
        Hideki Yoshikado (), Yuta Takahashi (Tokyo Institute of Technology) Presentation: Hideki Yoshikado - -
        This paper presents a probabilistic design framework to determine the capture probability for magnetically actuated satellite swarms. Future space missions will increasingly rely on large distributed spacecraft systems for applications like broadband relay and radar imaging. For these missions, Electromagnetic Formation Flight (EMFF) is a promising propellant-free control technology. However, the effectiveness of EMFF is fundamentally limited by its control forces, which decay rapidly with distance. Consequently, minor deployment errors from a dispenser and a subsequent uncontrolled drift period can become critical threats to mission success. Our framework directly addresses this challenge by rigorously quantifying the probability of a successful swarm capture under these initial uncertainties. Existing deployment strategies focus on collision avoidance but fail to predict the subsequent controllability of the swarm. While these strategies, such as eccentricity/inclination (e/i) vector separation, effectively ensure safety during the drift phase, they offer no guarantee of entering the tight EMFF capture region. This critical knowledge gap forces mission designers to rely on ad-hoc safety factors and overly conservative assumptions. Such an approach can lead to unnecessarily constrained hardware or limited operational flexibility, hindering the optimization of mission parameters and robust planning for the critical initial phase. Our proposed methodology quantitatively assesses capture probability by propagating initial uncertainties through high-fidelity dynamics. The framework takes statistical distributions of initial release errors and a planned uncontrolled drift time as primary inputs. It then propagates these statistical uncertainties from the initial deployment state to the moment of control initiation, using an orbital model that includes Earth’s oblateness (J2 effect) and atmospheric drag. 
The resulting probabilistic state distribution is evaluated against a rigorously derived, Lyapunov-based capture boundary. This analytical boundary provides a definitive criterion for guaranteed stability, directly linking hardware specifications and operational timelines to a single mission success probability. We validated the proposed framework through two complementary studies: an along-track analytical model and a high-fidelity planar simulation. First, an along-track analytic model was developed to reveal the fundamental trade-off surface between mechanical tolerances and the allowable commissioning window, clarifying the physical limits of EMFF deployment. Second, a high-fidelity planar simulation extended this analysis to realistic three-degree-of-freedom releases. These end-to-end simulations assess the entire operational sequence, from the initial separation impulse and uncompensated drift to active capture and passive stability maintenance. This comprehensive approach allows for the simultaneous assessment of communication links, collision avoidance, and sustained controllability. The results confirm the framework's predictive accuracy and demonstrate its practical value for mission design. The proposed framework reveals critical design trade-offs, such as the balance between dispenser spring tolerance, post-deployment drift time, and required magnetic control authority. This provides actionable data that mission designers can use in early trade studies to set precise separation tolerances and establish robust timeline margins with confidence. The principles of this method are broadly applicable across various orbit classes and formation scales. In conclusion, this work significantly lowers the inherent risk in the vulnerable initial phase of distributed spacecraft missions, enhancing their overall feasibility and success.
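A toy Monte Carlo version of the capture-probability idea above: sample release errors, propagate an uncontrolled drift, and count the fraction of samples that end inside a capture boundary. The linear drift model, the spherical capture region, and every number below stand in for the paper's high-fidelity J2/drag dynamics and Lyapunov-derived boundary:

```python
# Illustrative Monte Carlo capture-probability estimate. Real propagation
# uses orbital dynamics; here drift is linear and the boundary is a sphere.
import random

def capture_probability(n, sigma_v, drift_s, r_capture, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # release velocity error (m/s per axis), then uncontrolled drift
        dv = [rng.gauss(0.0, sigma_v) for _ in range(3)]
        pos = [v * drift_s for v in dv]
        if sum(x * x for x in pos) ** 0.5 <= r_capture:
            hits += 1
    return hits / n

# The core trade-off: tighter dispenser tolerance (smaller sigma) raises the
# probability of ending inside the capture region after the same drift time.
p_loose = capture_probability(2000, sigma_v=0.01, drift_s=600.0, r_capture=10.0)
p_tight = capture_probability(2000, sigma_v=0.002, drift_s=600.0, r_capture=10.0)
```

Even this toy version exposes the trade surface the abstract describes: probability falls as either the release dispersion or the allowed drift time grows, for a fixed control authority (capture radius).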
      • 07.0705 ARISE: Port–Hamiltonian Passivity-Based Damping of ESPA Injection Jitter for High-Velocity Flybys
        Harish Vernekar (University of Arizona), Leonard Vance (University of Arizona), Jekan Thangavelautham (University of Arizona) Presentation: Harish Vernekar - -
        The paper discusses Layer 1 of ARISE (Autonomous Reconfigurable Infrastructure for Swarm-based Exploration), where we model both the carrier ESPA and eight 6U deputies as mechanical systems on SE(3) and shape a convex total energy around the desired relative pose. By injecting collocated torque damping through the body port, and making the closed loop strictly passive, the shaped Hamiltonian acts as a Lyapunov storage function and must decrease. The layer is implemented in software at 50 Hz using a bilinear (Tustin) discretization that preserves passivity. A passivity observer/controller (PO/PC) scales commands to handle a fixed 50 ms delay, and a bounded nonnegative least-squares allocator maps the commanded wrench to reaction wheels under saturation limits. The paper also gives compact proofs for continuous-time passivity, sampled-time passivity with delay, and a gain condition showing that damping dominates saturation error. We also specify a guarded, dwell-time handoff to a Modified Rodrigues Parameter controller for the rest of the mission timeline. In the Apophis case, separation is a half-sine push of 0.30 s at t = 10.0 s; the port-Hamiltonian layer engages at t_eng = 10.30 s and ramps its storage gain over 6 s. The carrier settles from 0.5°/s and 0.3 m/s within 4 min. Each deputy settles from 4°/s and 0.10 m/s to 0.05°/s and 2 mm/s or less within 2 min. These results show that a minimal, fixed-axis realization can robustly suppress separation transients despite thrust dispersion, inertia variation, and delay, delivering the swarm to geometry setting, imaging, and redocking with low effort and clear guarantees.
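The collocated-damping argument above can be illustrated in one dimension: with torque u = -d*omega injected at the body port, the shaped energy H = 0.5*J*omega^2 is a storage function and can only decrease. The sketch below uses a plain Euler step for brevity (the paper's implementation uses a passivity-preserving Tustin discretization), and J, d, and the rates are arbitrary illustrative values:

```python
# 1-D illustration of collocated damping making the shaped Hamiltonian a
# decreasing storage function. Euler discretization and all values are
# illustrative; the flight design uses a Tustin discretization on SE(3).

def damp(omega0, J=1.0, d=0.5, dt=0.02, steps=500):
    omega, energy = omega0, []
    for _ in range(steps):
        u = -d * omega                     # collocated damping torque
        omega += (u / J) * dt              # rigid-body rate update
        energy.append(0.5 * J * omega * omega)
    return energy

E = damp(omega0=0.5)
# The storage decreases monotonically toward zero: the Lyapunov argument.
assert all(a >= b for a, b in zip(E, E[1:]))
```

On SE(3) the same structure holds with the shaped Hamiltonian replacing the kinetic energy and the wheel allocator supplying the damping wrench, but the scalar case already shows why strict passivity forces the transient to settle.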
      • 07.0706 Inflight Performance of Pose and Position Vision Based Sensor Enabling Sub-mm Formation Flight
        Mathias Benn (Danish Technical University) Presentation: Mathias Benn - -
        ESA's PROBA3 mission consists of two space segments, the Coronagraph spacecraft and the Occulter spacecraft, operated at a distance between 25 m and 300 m and co-aligned with the Sun direction. In addition to the in-orbit Formation Flying demonstration, one science objective is to achieve science observation of the Sun's corona close to the solar rim at a specific inter-satellite distance of 150 m. This configuration creates challenging illumination conditions for the optical instruments needed to control the formation in various operations scenarios, e.g. when cameras on the Sunward Occulter are pointed at the Coronagraph with the fully sunlit Earth in the background. This work presents the initial performance evaluations achieved during the LEOP phase of the PROBA3 mission. The Vision Based Sensor (VBS) extension of the micro Advanced Stellar Compass (µASC) enables general optical rendezvous and docking navigation between formation-flying satellites, in both cooperative and non-cooperative modes. The robustness of the cooperative pose and position determination method of the VBS system relies on high-accuracy synchronization between the camera system, located on the observing spacecraft, and the pulse-powered LED mires located on the target spacecraft. Based on knowledge of the LED mires' locations on the target spacecraft, the VBS system provides high-precision pose and position information. We discuss the design aspects of this system, outline its performance envelope, and present the in-flight performance achieved by the VBS system during the LEOP phase of PROBA3.
      • 07.0712 Reinforcement Learning–Based Singularity Management for Variable-Speed Control Moment Gyroscopes
        Ali Elmorshedy (Beni-Suef), Mohamed Okasha (), HAITHAM ELSHIMY (United Arab Emirates University), Shamma Jamali (United Arab Emirates University) Presentation: Ali Elmorshedy - -
        This paper presents a reinforcement learning (RL)–based strategy for singularity management in spacecraft attitude control using variable-speed control moment gyroscopes (VSCMGs) in a pyramid configuration. Singularity avoidance is a critical challenge in CMG-based actuation, as near-singular configurations degrade torque authority and controllability. To address this, the spacecraft rigid-body dynamics and actuator models are implemented in Simscape and integrated with the MATLAB Reinforcement Learning Toolbox, enabling high-fidelity closed-loop training. A velocity-based steering law with a weighted pseudo-inverse is employed to generate the primary torque command for attitude regulation. In parallel, an RL agent operates in the null-motion space of the control influence matrix, producing supplementary gimbal motions that reduce its condition number while preserving the commanded torque. Training is conducted with realistic spacecraft parameters and initial conditions, including randomized gimbal angles at the start of each episode. Simulation results are then compared with a baseline weighted pseudo-inverse-only method. The RL-enhanced controller is shown to effectively bound the condition number, prevent divergence in ill-conditioned regions, and drive gimbal angles away from singular configurations, while maintaining actuator redundancy. These findings highlight the capability of integrating RL with physics-based spacecraft models to improve robustness in attitude control. The proposed framework demonstrates how learning-based methods can complement classical steering strategies, and motivates future research directions including momentum management, actuator saturation handling, and potential onboard implementation for real spacecraft missions.
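The null-motion idea underlying the abstract can be demonstrated directly: for a control influence matrix A (3 torque axes by 4 gimbals), any gimbal-rate component in the null space of A produces zero net torque, so it is free to reshape the gimbal configuration (e.g. improve the condition number) without disturbing the commanded torque. The matrix below is a random stand-in, not a VSCMG pyramid model:

```python
# Demonstrate torque-preserving null motion for a redundant 3x4 influence
# matrix. The matrix and commanded torque are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))          # 3x4: one redundant degree of freedom

# Weighted pseudo-inverse solution for a commanded torque (weights = I here)...
tau = np.array([0.1, -0.2, 0.05])
rates_primary = np.linalg.pinv(A) @ tau

# ...plus a null-space component, here the basis vector from the SVD.
_, _, vt = np.linalg.svd(A)
null = vt[-1]                            # A @ null is (numerically) zero
rates = rates_primary + 0.3 * null       # the RL agent would pick this scale

# The null motion changes the gimbal rates but not the delivered torque.
assert np.allclose(A @ rates, tau)
```

In the paper's framework the RL agent's job is precisely to choose the magnitude and sign of this null-space component, using the condition number of A as part of its reward.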
      • 07.0717 Multi-agent Angle-Only Relative Orbit Determination via Distributed Filtering
        Giovanni Romagnoli (University of Pisa), Giordana Bucchioni (University of Pisa), Giusy Falcone (University of Michigan) Presentation: Giovanni Romagnoli - -
        Modern space missions are increasingly oriented toward the deployment of distributed satellite systems, such as swarms and constellations. These architectures promise enhanced spatial coverage, scalability, and mission robustness, while distributing the sensing, computational, and power burdens across multiple agents. Such systems represent a growing trend in present and future space exploration and on-orbit servicing. This work addresses the problem of relative orbit determination (ROD) in the presence of unknown or passive space objects, using angle-only measurements collected by multiple agents in a satellite swarm. The scenario considers a group of satellites travelling along nearby orbits. When an object of interest passes through their vicinity, the agents cooperatively observe it using onboard vision-based sensors and communicate their data across the network. No maneuvers are performed, and it is assumed that all satellites are capable of synchronizing measurements and exchanging information with their peers. Among relative orbit determination methods, the angle-only approach stands out for its hardware efficiency. Vision-based sensors, such as star trackers and cameras, are ubiquitous on modern spacecraft and provide passive, lightweight, and low-power angular measurements over large dynamic ranges. This makes angle-only navigation particularly attractive for small satellite missions, where constraints on mass, volume, and power consumption are especially stringent. However, angle-only ROD using a single observer suffers from structural observability limitations.
Conventional solutions to this problem include: (1) executing frequent maneuvers, (2) employing stereoscopic vision systems - though these are range-constrained and ineffective beyond short distances, or (3) leveraging high-fidelity dynamical models, which impose significant computational burdens and require extended observation windows to overcome nonlinear estimation challenges. All these approaches demand either expensive, high-precision sensors or computationally intensive optimization methods to produce reliable state estimates. To address these limitations, this work proposes a distributed multi-agent approach to angle-only relative orbit determination. The method exploits inter-satellite collaboration to resolve geometric ambiguities while reducing reliance on prior knowledge of the target’s motion. Initially, agents perform a distributed triangulation procedure, combining their angular measurements to estimate the relative position and its uncertainty using multiple viewpoints. This geometric initialization is efficient, parallelizable, and naturally resolves range ambiguity. These initial estimates serve as the foundation for a distributed Kalman filtering process that combines local state propagation with cooperative consensus-based fusion. Each agent autonomously maintains and updates its state estimate using a linearized dynamical model while incorporating sequential angular measurements. The filter's distributed nature is achieved through dynamic average consensus, where agents iteratively exchange and merge information with neighbors to cooperatively refine both state estimates and covariance matrices. The resulting framework is scalable, robust, and computationally efficient. It distributes processing across the agents and reduces dependence on complex onboard hardware. 
By leveraging the spatial diversity and collaboration of multi-satellite systems, this approach enables accurate relative orbit estimation of unknown targets with minimal assumptions, making it a promising solution for future autonomous, resource-constrained missions.
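The distributed triangulation initialization described above can be illustrated with a standard least-squares intersection of bearing rays (a sketch assuming noiseless unit line-of-sight vectors and a common reference frame; not the authors' filter):

```python
import numpy as np

def triangulate(positions, bearings):
    """Least-squares triangulation from multiple angle-only observations.

    positions : list of observer positions, each shape (3,)
    bearings  : list of unit line-of-sight vectors toward the target, each shape (3,)
    Returns the point minimizing the summed squared distance to all rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, u in zip(positions, bearings):
        u = u / np.linalg.norm(u)
        P = np.eye(3) - np.outer(u, u)   # projector onto the plane normal to the ray
        A += P
        b += P @ p
    return np.linalg.solve(A, b)
```

In the cooperative setting, each agent contributes one projector term, and the sums can be accumulated across the network (for example by consensus) before any single agent solves the small 3×3 system.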
    • William Jackson (L3Harris Technologies) & Michael Mclelland (Southwest Research Institute)
      • 07.0802 Distributed Data Acquisition and Control System for Rocket Engine Static-Fire Test Stands
        Piotr Slawecki (AGH University of Science and Technology), Piotr Słonka (AGH University of Science and Technology), Hubert Kowalczyk (AGH University of Science and Technology), Artur Brdej (AGH University of Krakow), Mikołaj Sala (AGH University of Cracow) Presentation: Piotr Slawecki - -
        The authors propose a modular and scalable infrastructure for rocket engine testing equipment and facilities. The system features universal master boards with PoE, Ethernet and PTP support, paired with specialized slave boards for various control and data acquisition tasks. A centralized Rust-based server and custom data processing and analysis software provide robust, real-time control and monitoring. This flexible and fault-tolerant architecture is designed for rapid iteration and is adaptable for broader space-adjacent applications. The project's architecture prioritizes flexibility, reliability, and performance. It consists of a main station, centralizing and managing data flow from various modules, customized and optimized for specific applications, such as sensor data acquisition and actuator control. The use of Ethernet guarantees a robust, widely adopted, and easily scalable infrastructure. When paired with Precision Time Protocol (PTP), it creates a rigorous framework, achieving industry-grade device synchronization, which is a crucial concern during rocket engine testing. The main station, equipped with high-performance forwarding software and a finely tuned application-layer protocol, efficiently handles a significant amount of data and manages packet flow whilst providing monitoring and diagnostic capabilities. Altogether it creates a performant, flexible, and robust system, well-tailored for application in a rocket engine test stand. Designed with adaptability in mind, the system is composed of modular, independent subsystems to provide an easily scalable interface for test stand equipment. Each module comprises a universal master board, featuring PoE power supply and Ethernet connectivity, and a slave board with functionalities varying based on the controlled equipment, including current-loop receivers, RS485 transceivers, and relay arrays. Hardware modularity enables targeted debugging such as EMI diagnostics, and seamless subsystem upgrades.
Combined with the iterative design approach of the project, this architecture enhances fault tolerance, maintainability, and design flexibility. The architecture is inherently scalable toward space-adjacent applications such as launch pad automation, fueling control, and advanced embedded ground support systems. The software architecture comprises a centralized communication server implementing a Layer-2 user space overlay network proxy built in Rust, designed to act as the center point of infrastructure enabling seamless system control and expansion. The solution operates over Transmission Control Protocol, facilitating reliable, rapid switching of protocol frames through user-defined routing rules and origin-based classification, allowing for sophisticated quality of service mechanisms within complex configurations. The modular overlay proxy architecture supports interconnected deployments, creating highly scalable and reliable networking topologies suited for demanding operational environments. The data analysis component leverages bespoke mission control solutions developed in Python and Rust utilizing modern web technologies, providing operators with comprehensive situational awareness, real-time telemetry validation, and command execution capabilities. The introduced architecture offers scalability and modularity, allowing for easy adaptation of various testing facilities to the needs of a specific application. The system is being developed by the authors and associated persons, and is expected to undergo real-life testing in late September 2025. Initial tests are expected to be carried out at small scale, facilitating a static fire test campaign of a 4000 N liquid-propellant rocket motor.
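The PTP synchronization the architecture depends on reduces, for each two-way exchange, to a simple timestamp computation (a textbook sketch under the symmetric-path assumption, not the project's implementation):

```python
def ptp_offset_delay(t1, t2, t3, t4):
    """One PTP two-way exchange (all times in seconds):
    t1: master sends Sync         (master clock)
    t2: slave receives Sync       (slave clock)
    t3: slave sends Delay_Req     (slave clock)
    t4: master receives Delay_Req (master clock)
    Returns (slave clock offset, one-way path delay), assuming symmetric paths.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay
```

Each slave applies the computed offset to discipline its clock; path asymmetry appears directly as a residual offset error, which is why cabling and switch behavior matter on a test stand.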
      • 07.0804 Microchannel Plate Detectors for High-Resolution Remote Sensing and Space Surveillance Applications
        Camden Ertley (Southwest Research Institute), Oswald Siegmund (Sensor Sciences LLC) Presentation: Camden Ertley - -
        Photon-counting microchannel plate (MCP) detectors continue to advance the state of the art for optical sensing and tracking applications requiring high spatial and temporal resolution. These detectors offer broadband sensitivity (UV to NIR), sub-nanosecond timing, high local count rates, and low background operation without cryogenic cooling. This makes them ideal for optical space surveillance, high-frame-rate 3D imaging LIDAR, and transient event detection in both ground-based and spaceborne systems. Recent developments across all detector components (photocathodes, MCP substrates, and electronic readouts) have enabled performance improvements and new mission capabilities. Atomic layer deposition (ALD) of resistive and emissive films on borosilicate microcapillary arrays now allows scalable, radiation-tolerant MCPs with low background rates (<0.05 events/s/cm²), uniform gain, and extended lifetime. These ALD MCPs support sealed tube configurations up to 200×200 mm² and are compatible with high-efficiency photocathodes spanning the 120–900 nm range. New fabrication techniques, including nano-scale additive manufacturing, allow customization of pore structure and bias angle at the individual pore level. This opens pathways to tailored MCP architectures for specific imaging and timing requirements. When combined with advanced anode readouts such as cross-strip (XS) designs or event-driven ASICs (e.g., Timepix), these detectors support event rates greater than 5 MHz with timing resolution better than 100 ps and spatial resolution below 40 µm. These developments are driving adoption in next-generation systems for space situational awareness, laser ranging, and high-speed transient detection. The ability to tag single photons by time and position enables flexible post-acquisition reconstruction for tracking, background suppression, and 3D scene recovery. These capabilities are central to optical surveillance and SDA.
MCP detectors remain key to enabling low-SWaP, high-performance payloads for ground-based, airborne, and orbital platforms.
      • 07.0806 Near-Zero-Volume Conformal Printed Sensor Circuit for Curved Rocket Structures
        Donghun Park (Lab for Physical Sciences/Univ. of Maryland College Park), Jason Fleischer (LPS) Presentation: Donghun Park - -
        Printed electronics offer a pathway to embed sensing and interconnects directly onto spacecraft structures, enabling higher functional density for small platforms. We present results from a SubTEC-9 sounding-rocket experiment conducted from NASA’s Wallops Flight Facility, in which hybrid printed circuits carrying temperature and humidity sensors were fabricated directly on the payload bay door and on two attached panels and operated at the edge of space. The circuits—designed and printed at the Laboratory for Physical Sciences using printers capable of sub-hairline features (∼30 µm trace spacing)—leveraged humidity-sensing inks developed at NASA Marshall and were deposited on both curved metal substrates and flexible Kapton. During the flight, the printed sensor network recorded environmental data throughout the mission and successfully downlinked telemetry to ground, demonstrating survivability and functional fidelity under ascent, vacuum/thermal extremes, and re-entry conditions. Beyond environmental sensing, the approach addresses known reliability and packaging limitations of conventional wire-bonded interconnects and opens opportunities for conformal RF architectures: printing antennas and RF routing on three-dimensional exterior surfaces can expand field-of-view and simplify integration. In this talk, we present the end-to-end workflow—materials selection, design rules, printing on complex geometry, and integration—together with pre-flight characterization and in-flight performance. We further discuss manufacturing lessons learned (e.g., adhesion on curved metals, mask strategies, print resolution/registration), outline preliminary design guidelines for mission adoption, and consider extensions to conformal antennas and deployable sub-payloads where volume and mass margins are minimal. The successful SubTEC-9 demonstration supports printed electronics as a viable, scalable option for future near-Earth and deep-space missions.
      • 07.0807 Near-Space Durability of Additively Manufactured Planar and Conformal Wi-Fi Antennas
        Erica Lee (University of Maryland), Philip Li (Laboratory for Physical Sciences), Donghun Park (Lab for Physical Sciences/Univ. of Maryland College Park) Presentation: Erica Lee - -
        Additive manufacturing (AM) is opening new directions for RF systems by allowing conformal antennas, lightweight satellite parts, and on-demand repairs of communication hardware. These advances are particularly relevant to aerospace and defense operations, where drones and satellites demand compact, reliable, and rapidly deployable RF solutions. This study investigates the durability of additively manufactured RF antennas in near-space, exploring their potential for drone communication and small-satellite payloads. We developed a Wi-Fi communication system consisting of a printed Planar Inverted-F Antenna (PIFA) and a printed Conformal Inverted-F Antenna (CIFA). Antennas were fabricated with different AM modalities, including 2-axis and 5-axis aerosol jet printing as well as 2-axis and 5-axis syringe printing, to compare performance and suitability. An Arduino-based system with NRF24L01+ transceivers was used to establish a communication link. The antennas were mounted on aluminum enclosures that provided thermal control for the electronics, while the antennas themselves remained exposed to ambient conditions throughout the flight. Ground testing confirmed stable communication and thermal regulation at both room temperature and freezing conditions during repeated five-hour trials. The antennas, flown during NASA’s Balloon Program Summer 2025 campaign, reached ~120,000 feet. Despite mechanical stresses and extreme temperature swings of -60°C to 30°C, communication between the printed antennas was successful during the flight. This work demonstrates that AM-produced RF antennas can function reliably in extreme near-space environments. The ability to fabricate conformal, lightweight, and repairable RF components suggests promising applications for UAV-based communications and the rapid repair of antennas in remote areas.
    • Douglas Carssow (Naval Research Laboratory) & Matthew Spear (Air Force Research Laboratory)
      • 07.0901 Digital Micromirror Devices as COTS Actuators for Propellant Free Attitude Control
        Jonathan Messer (University of Southern California) Presentation: Jonathan Messer - -
        The growing emphasis on mission sustainability and cost containment is driving spacecraft designers to seek reliable, low-mass alternatives to traditional thruster-based momentum management. Solar radiation pressure (SRP) is an abundant, continuous force that can be harnessed for propellant-free torque generation, provided it can be precisely and discretely modulated. This paper presents the first closed-loop laboratory demonstration of SRP attitude control using an unmodified, commercial-off-the-shelf (COTS) Digital Micromirror Device (DMD) as the SRP torque actuator. Originally developed for high-volume projection displays, Texas Instruments’ DLP product line of DMDs comprises hundreds of thousands of bistable micromirrors that tilt to fixed angles exceeding ten degrees. By commanding mirror patterns, we achieve quasi-analog control of the reflected photon momentum and, consequently, the torque applied to all three spacecraft body axes. Space-qualification studies have already demonstrated that COTS DMDs can endure launch vibration, along with proton and heavy-ion fluence representative of the Sun-Earth L2 environment, as well as operate in cryogenic conditions for astronomical instruments. By leveraging this space-suitability research, we evaluate DMDs as SRP trim surfaces that can extend geosynchronous or interplanetary mission lifetimes by minimizing thruster-based momentum dump requirements. To demonstrate this, a COTS Pico DMD is paired with a wireless microcontroller and integrated with a vacuum-suspended torsion pendulum. A light-chopped, five-watt, 630 nm LED laser serves as the radiation pressure source, while a lock-in amplifier and optical lever provide closed-loop feedback to the wireless microcontroller. Three experiments are executed: baseline oscillation under periodic optical forcing, active damping using DMD mirror modulation, and active amplification, which deliberately increases the resultant radiation pressure.
This work demonstrates that flight-grade attitude trimming can be achieved with commercially mature hardware, providing a compelling COTS pathway toward propellant-free, extended-life spacecraft. Follow-on research includes characterizing DMD SRP characteristics at large angles of attack and demonstrating in-plane torque control.
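For scale, the radiation-pressure force on a normally illuminated flat surface is P/c for the absorbed power plus ρP/c for the reflected fraction (a back-of-the-envelope sketch; the reflectivity and lever arm below are placeholders, not measured values from the experiment):

```python
C = 299_792_458.0  # speed of light, m/s

def srp_force(power_w, reflectivity=1.0):
    """Radiation-pressure force (N) on a normally illuminated flat surface.
    Perfect absorber: P/c; perfect mirror: 2P/c."""
    return (1.0 + reflectivity) * power_w / C

def srp_torque(power_w, lever_arm_m, reflectivity=1.0):
    """Torque (N·m) when the illuminated surface sits at a lever arm."""
    return srp_force(power_w, reflectivity) * lever_arm_m
```

For the five-watt source described above, a perfect reflector sees roughly 3.3×10⁻⁸ N, which is why a vacuum-suspended torsion pendulum and lock-in detection are needed to resolve the effect.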
      • 07.0902 Towards Reliable Memory in Space: SEU-Tolerant Design Using a Novel Error Correction Code
        Youcef Bentoutou (Satellite Development Center - Algerian Space Agency) Presentation: Youcef Bentoutou - -
        Mitigating single-event upsets (SEUs) is essential for reliable memory operation in the space radiation environment. Error Detection And Correction (EDAC) codes, such as Hamming, quasi-cyclic, and Reed-Solomon codes, are widely used to protect memory against SEUs. However, these codes face critical limitations: Hamming codes correct only single errors, quasi-cyclic codes fail to handle triple-adjacent errors, and Reed-Solomon codes incur high area, power, and latency overheads due to their complexity. This paper presents a novel EDAC code optimized for mitigating SEUs in memory systems, outperforming the conventional block codes and the EDAC code of US Patent US6604222B1. The patented code corrects single errors, double-adjacent errors, triple-adjacent errors, and double-almost-adjacent errors. However, it cannot correct double non-adjacent errors that frequently occur during SEU events. This limits its reliability in memory applications. The proposed EDAC code corrects all single errors, double errors (both adjacent and non-adjacent), and triple-adjacent errors while maintaining comparable memory and area overheads compared to the patented approach. Experimental results demonstrate that the proposed EDAC code surpasses conventional codes in terms of error detection and correction capability. Additionally, it exhibits smaller delays, requires less area, and has lower power overheads compared to various state-of-the-art EDAC schemes.
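For context, the single-error-correct, double-error-detect behavior that these memory codes extend can be seen in a minimal extended Hamming (8,4) example (a standard textbook construction, not the proposed or patented code):

```python
def hamming84_encode(d):
    """Extended Hamming (8,4): 4 data bits -> 8-bit SECDED codeword."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    c = [p1, p2, d[0], p3, d[1], d[2], d[3]]
    c.append(c[0] ^ c[1] ^ c[2] ^ c[3] ^ c[4] ^ c[5] ^ c[6])  # overall parity bit
    return c

def hamming84_decode(c):
    """Return (data, status): status is 'ok', 'corrected', or 'double'."""
    # Syndrome gives the 1-based position of a single flipped bit
    s = (c[0] ^ c[2] ^ c[4] ^ c[6]) \
        + 2 * (c[1] ^ c[2] ^ c[5] ^ c[6]) \
        + 4 * (c[3] ^ c[4] ^ c[5] ^ c[6])
    overall = c[0] ^ c[1] ^ c[2] ^ c[3] ^ c[4] ^ c[5] ^ c[6] ^ c[7]
    if s == 0 and overall == 0:
        status = 'ok'
    elif overall == 1:                  # single error (s == 0 means the parity bit itself)
        idx = s - 1 if s else 7
        c = c[:]
        c[idx] ^= 1
        status = 'corrected'
    else:
        status = 'double'               # detected but uncorrectable
    return [c[2], c[4], c[5], c[6]], status
```

The paper's contribution lies precisely where this baseline fails: correcting double errors (adjacent and non-adjacent) and triple-adjacent errors without the overheads of Reed-Solomon.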
      • 07.0903 Resource and Power Overhead of Automated TMR for SRAM FPGAs in Space
        Nicholas Schmidt (University of Pittsburgh), Alan George (University of Pittsburgh) Presentation: Nicholas Schmidt - -
        With the rapid growth of interest in employing commercial-off-the-shelf (COTS) devices in space, static random-access memory (SRAM) field-programmable gate arrays (FPGAs) have been used extensively across a wide range of space applications. While some COTS devices are more tolerant to radiation than others, FPGAs are susceptible to the effects of radiation-induced soft errors. The configuration memory of SRAM FPGAs is a major vulnerability, as single-event upsets (SEUs) here can corrupt the functionality of the logic design. Overcoming these effects to develop a more reliable system is achievable by triple modular redundancy (TMR). Although the concept is quite simple, the process of applying TMR manually is tedious and error-prone. Third-party synthesis tools, such as Synopsys’ Synplify, are capable of automatically implementing TMR within a design. This investigation evaluates the cost of implementing Synplify’s high-reliability features, including block TMR and distributed TMR. Leveraging various kernels and applications, such as matrix operations and image processing, helps to illustrate how varying logic resources affect the performance of the two TMR techniques. Analysis of these techniques is considered in terms of resource utilization and power consumption. The results gathered from matrix multiplication show a 3.05× and 21.05× increase in LUTs of block TMR and distributed TMR, respectively, over the non-triplicated design. In terms of power, block TMR consumed 1.05× more Watts than the non-triplicated design while distributed TMR consumed 1.30× more Watts.
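Independent of how the synthesis tool inserts it, the core of TMR is a bitwise majority voter; the threefold-plus LUT increase reported above comes from triplicating the protected logic and adding voters like this one (an illustrative model, not FPGA code):

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority vote across three redundant copies.
    A single-event upset in any one copy is masked in the output."""
    return (a & b) | (a & c) | (b & c)
```

Block TMR votes at coarse module boundaries, while distributed TMR inserts voters throughout the logic, which explains the much larger resource multiplier observed for the distributed variant.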
      • 07.0904 A Software-Based Approach to Radiation Mitigation for Planetary Missions
        Vandi Verma (NASA JPL-Caltech), Jeremy Nash () Presentation: Vandi Verma - -
        The Perseverance Mars rover is in the process of deploying a new onboard global localization capability that significantly improves long-distance autonomous navigation. It allows the rover to precisely determine its position by comparing panoramic images from its NavCams with high-resolution orbital maps. This paper outlines the development and successful flight deployment of this system, a critical technology for enabling extended autonomous drives. Leveraging a fast, commercial off-the-shelf (COTS) Snapdragon 801 processor from the Ingenuity Helicopter Base Station, the system performs complex image-processing tasks to accurately determine the rover's position relative to orbital maps. This approach overcomes the limitations of the rover's slower, radiation-hardened processor, and is a paradigm shift in planetary robotics. The main focus of this paper will be on the implementation and mitigation strategies required to use a non-hardened COTS processor in the harsh space environment. We will describe the software-based memory checking and fault isolation techniques developed to address memory corruption potentially caused by aging or radiation effects. Furthermore, we will present the performance and operational results from the deployed system on Mars, including its accuracy and robustness. The paper will also discuss the lessons learned during the development and flight deployment process and outline how this technology will fundamentally change future planetary navigation by enabling more ambitious and efficient autonomous missions.
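The software-based memory checking described above can be approximated in miniature: keep a checksum beside each protected block and verify it before use (a hedged sketch; the rover's actual fault-isolation scheme is more involved and flight-specific):

```python
import zlib

class ScrubbedBlock:
    """Software memory-integrity check (illustrative): store a CRC32
    alongside each data block and verify it before the data is consumed."""

    def __init__(self, data: bytes):
        self.data = bytearray(data)
        self.crc = zlib.crc32(data)

    def verify(self) -> bool:
        """Recompute the checksum; False indicates corruption (e.g. an SEU)."""
        return zlib.crc32(bytes(self.data)) == self.crc
```

On detection, a system like this can discard the block, reload it from protected storage, or isolate the offending memory region rather than act on corrupted state.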
  • Lisa May (Lockheed Martin Space) & Greg Chavers (NASA)
    • Dimitry Ignatov (Booz Allen Hamilton) & Chel Stromgren (Binera, Inc.) & Kevin Post (Booz Allen Hamilton)
      • 08.0103 Historical Perspective on Artemis II Trajectory Design
        Alisha Crawley () Presentation: Alisha Crawley - -
        NASA’s Artemis 2 (AR02) mission will mark the first time in more than 50 years astronauts have visited cislunar space. To achieve this important milestone, AR02 will draw upon the success of Artemis 1 (AR01) and again utilize the Space Launch System (SLS) launch vehicle. The original directive for the creation of the SLS did not specify a mission, and as a result the vehicle has gone through numerous design iterations reflecting changes to both the vehicle and the mission profile as it was developed. The trajectory design for the AR02 mission was the result of these various design and analysis cycles which included different vehicle and mission changes. Beginning with the initial Senate directive, a trajectory profile was developed that could meet initial goals and be used in concepts of operation as the mission was further defined. The initial Senate directive did not include a specific launch vehicle or mission profile, which allowed for the development of a high-level trajectory design that could be modified and refined as the program continued to develop. This original trajectory profile was used in the development of the launch vehicle that would one day become the SLS. Other launch vehicle designs were considered before the SLS, including the Ares vehicles and the National Launch System. Each of these launch vehicle designs resulted in changes to the trajectory profile, and an iterative process developed where changes to the trajectory would in turn impact the vehicle design. The trajectory developed for these previous launch vehicles was used as a starting point for creating the SLS trajectories which would eventually become part of the Artemis missions.
The same iterative process that was used for previous launch vehicles was also seen in the development of SLS and Artemis through a series of analysis cycles. Multiple iterations of each cycle type were performed to capture changes at various levels of the program in the trajectory development. Each of these cycles would include changes to different elements of the trajectory based on various vehicle and programmatic changes, such as updated vehicle masses or changes to atmospheric and wind databases used. Each design change would result in differences to the optimal designed trajectory, and several design parameters were identified as major drivers for the trajectory based on the updates through different analysis cycles. These major design drivers were closely monitored throughout the trajectory development to ensure that all aspects of the AR02 mission could be satisfied. The ascent portion of the trajectory for AR02 was created using the Program to Optimize Simulated Trajectories II (POST2). POST2 is a targeting and optimization program used to perform three-degree-of-freedom (3DOF) point-mass trajectory design. POST2 was used to create the optimal trajectory profile, which was then simulated in a six-degree-of-freedom (6DOF) environment.
      • 08.0105 Composite Structure Enabling an Efficient Lunar Surface Habitat
        Matthew Ziglar (The Boeing Company) Presentation: Matthew Ziglar - -
        The establishment of sustainable habitats on the lunar surface is paramount for advancing human exploration and scientific endeavors beyond Earth and enables NASA's Sustained Lunar Evolution in the Moon to Mars Architecture. This paper delves into the innovative design and application of composite structures for the Lunar Surface Habitat (LSH), highlighting their significant advantages in terms of weight efficiency and an enlarged configuration design space. A key focus of this study is the composite primary structure of the LSH, which constitutes only 17% of the total mission mass, compared to a typical module’s structure comprising 30%–50% of the total mass. This remarkable weight reduction is achieved through the use of carbon fiber reinforced polymers, which offer a high strength-to-weight ratio essential for space applications. The lightweight nature of the composite structure not only facilitates the launch and transportation of the habitat but also enhances its overall performance during lunar operations. The design process incorporates automated fiber placement (AFP) manufacturing techniques, which ensure precision and consistency in the construction of complex geometries. This method allows for the optimization of structural performance while minimizing mass, thereby addressing one of the critical challenges in habitat design for extraterrestrial environments. The AFP process enables a continuous layup from end-dome to end-dome of the module, eliminating the mass penalties and complexities of joints in the pressure shell. Through an innovative design and manufacturing flow, an internal airlock is also accomplished by embedding it inside the LSH module on the first floor. A significant advantage of utilizing composite structures is the expanded design space it affords. The reduced mass of the composite primary structure enables the development of an innovative two-story module configuration for the LSH.
This design not only maximizes the use of internal volume but also strategically locates hatches low to the lunar surface, facilitating easier ingress and egress for crew members during extravehicular activities (EVAs) and the mating of pressurized vehicles to its port. The two-story layout allows for a clear separation of functional areas, enhancing operational efficiency and crew comfort. This configuration is particularly advantageous for supporting simultaneous activities, thereby optimizing the habitat's functionality. The paper details how all the defined Ground Rules and Assumptions (GRAs) were met or exceeded for a four-crew, 30-day mission. Chief among the Ground Rules was the mass target, which was met by implementing the very efficient structure and through the use of a deferred-items approach where equipment not immediately required is brought up and installed on the first mission via a pressurized logistics carrier. The findings underscore the potential of composite technology to revolutionize habitat construction for deep space applications. The successful implementation of a lightweight composite primary structure not only enhances the feasibility of lunar habitation but also sets a precedent for future missions to Mars and beyond. The study demonstrates that the strategic use of composite materials in the LSH design significantly contributes to the mission's success by optimizing mass, enhancing structural performance, and enabling innovative configurations that improve crew operations.
    • Erica Rodgers (NASA - Headquarters) & James Johnson (Colorado School of Mines) & Tara Polsgrove (NASA Marshall Space Flight Center)
      • 08.0202 Composite Habitat: An Incremental Development Path from Propellant Tank to Crewed Habitat
        Matthew Ziglar (The Boeing Company) Presentation: Matthew Ziglar - -
        This paper presents an evolutionary path for the development of a composite habitat aimed at enhancing space exploration capabilities. The proposed strategy adopts a phased approach, beginning with simple structures and progressively advancing to more complex systems. The initial phase focuses on the creation of a propellant tank, which serves as the foundational building block, leveraging lightweight composite materials that offer increased strength and stiffness while reducing mass compared to traditional aerospace metals. This foundational phase is critical for establishing the Technology Readiness Level (TRL) and Manufacturing Readiness Level (MRL) necessary for subsequent developments. The second phase involves the design and construction of a composite logistics module, which builds upon the knowledge gained from the propellant tank. This module incorporates additional functionalities such as internal and external logistics, docking capabilities, and crew accommodations, thereby enhancing mission performance and operational efficiency. The composite logistics module serves as a vital step in preparing for more complex builds, including crewed transit habitats. The third phase introduces a crewed Mars fly-by mission, akin to Apollo 8, which aims to validate critical technologies and systems required for long-duration space travel. This mission will test the composite habitat's ability to support crew life, including essential life support systems, logistics, and crew accommodations, while providing invaluable experience in operating a spacecraft beyond Earth's orbit. The final phase culminates in the development of a long-duration crewed surface habitat, primarily targeting Mars but also applicable to lunar missions. This habitat must integrate advanced systems for sustainable living, including dust mitigation, airlocks, and resupply capabilities, ensuring the safety and well-being of astronauts in a partial gravity environment. 
The lifecycle of this habitat is projected to span 15 years, highlighting the need for durability and adaptability in extraterrestrial conditions. Throughout this evolutionary path, the paper emphasizes the importance of incremental development, risk management, and technology integration. By breaking down the complex campaign into manageable components, teams can gather feedback and make necessary adjustments, thereby enhancing the quality and reliability of the final product. This iterative process not only fosters innovation but also allows for the seamless incorporation of emerging technologies, ensuring that the habitat benefits from the latest advancements. Moreover, the strategy underscores the significance of partnership and collaboration among various stakeholders, including government agencies, private companies, and research institutions. Such collaborations facilitate resource sharing and knowledge exchange, accelerating progress and leading to innovative solutions that enhance overall project efficiency. A long-term vision is also crucial for guiding the incremental composite module development efforts, ensuring that each phase aligns with overarching goals and objectives. This strategic framework enables teams to navigate challenges effectively while capitalizing on opportunities, ultimately contributing to sustainable advancements in space exploration. This paper outlines a robust framework for developing a composite habitat for space exploration through a systematic, phased approach. By focusing on incremental development, technology integration, risk management, and collaborative partnerships, the proposed strategy aims to establish a human presence on celestial bodies and pave the way for future exploration and habitation.
      • 08.0204 In-Space Manufacturing Viability of Nanocomposites for the Extreme Space Environment
        Palak Patel (MIT), Brady Cruse (), Valerie Wiesner (NASA Langley Research Center), Joseph Koo (The Univ. of Texas at Austin), Brian Wardle (MIT) Presentation: Palak Patel - -
        As human space exploration moves towards a permanent presence on the Moon and towards Mars, in-space manufacturing can help reduce launch mass and cost, while allowing for customizable production to target specific use cases to improve mission durability. As we push the boundaries of human spaceflight, innovative technologies such as novel lightweight materials that can withstand the extreme space environment are imperative. Nanocomposites that contain nanomaterials such as carbon nanotubes (CNTs) and boron nitride nanotubes (BNNTs) provide advantageous properties such as enhanced mechanical performance owing to their high strength-to-weight ratios and multifunctionalities such as high thermal emissivity and electrical conductivity for CNTs and good radiation shielding properties for BNNTs. These properties can enable lighter-weight materials for space applications such as ablative thermal protection systems for re-entry vehicles, ionizing radiation shielding for astronauts, and dust mitigation for the lunar surface. While the current state-of-the-art shows less than 15 vol% of nanotubes in nanocomposites, to take advantage of the nanotubes’ properties, it is important to have a larger volume fraction without defects such as agglomerations and voids, which degrade mechanical performance. The novel bulk nanocomposite laminating (BNL) process has incorporated 40-50 vol% of CNTs or BNNTs in high-quality nanocomposites, which have demonstrated superior properties compared to the state-of-the-art. The BNL process involves synthesizing and densifying aligned nanotubes, introducing a matrix material, and providing heat and pressure to manufacture the final nanocomposite. Integrating these nanotubes into matrix materials utilized to build spacecraft, spacesuits, and habitats can improve their properties. 
Manufacturing of such nanocomposites during spaceflight can allow the customization of the combination of the nanotubes and matrix materials to meet evolving mission needs. In-space manufacturing of nanocomposites like these can enable timely repairs, such as restoring heat shield integrity before re-entry, thereby enhancing mission resilience and crew safety. To have this capability, it is important to demonstrate that the BNL process of manufacturing nanocomposites can be performed in a reduced gravity environment, such as on a spacecraft or a lunar/Martian habitat. To determine whether nanocomposites can be manufactured in space, the gravity-dependent step in the BNL process, i.e., infusion of the matrix material within the densified nanotubes, was performed in microgravity on a Zero-G parabolic flight. Under Earth’s surface gravity of 1 g, this step relies on both gravitational forces and capillary pressure for infusion. On the Zero-G flight, infusion of the matrix material was performed to determine whether capillary pressure would be sufficient in the absence of gravity. The infusion of polysiloxane, an ablative ultra-high-temperature resin used in thermal protection systems for re-entry vehicles, into densified CNTs and BNNTs was studied. The microgravity manufactured samples were characterized via scanning electron microscopy and micro-computed tomography post-flight for comparison with samples manufactured in Earth’s gravity.
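As a rough illustration of the physics this flight experiment probes, the Young-Laplace capillary pressure in a nanotube forest's inter-tube pores can be compared with the hydrostatic pressure that assists infusion at 1 g. All values below are hypothetical placeholders, not the study's measured data:

```python
from math import cos, radians

def capillary_vs_hydrostatic(gamma_n_m, contact_deg, pore_radius_m,
                             rho_kg_m3, g_m_s2, depth_m):
    """Compare the Young-Laplace capillary pressure in a cylindrical pore,
    P_cap = 2*gamma*cos(theta)/r, against the hydrostatic pressure of the
    resin column, P_hyd = rho*g*h, to gauge whether capillarity alone
    could plausibly drive infusion as g -> 0. Returns (P_cap, P_hyd) in Pa."""
    p_cap = 2.0 * gamma_n_m * cos(radians(contact_deg)) / pore_radius_m
    p_hyd = rho_kg_m3 * g_m_s2 * depth_m
    return p_cap, p_hyd

# Hypothetical resin/forest values: 20 mN/m surface tension, fully wetting,
# 50 nm inter-tube pore radius, 1100 kg/m^3 resin, 5 mm infusion depth.
p_cap, p_hyd = capillary_vs_hydrostatic(0.020, 0.0, 50e-9, 1100.0, 9.81, 0.005)
```

With nanometer-scale pores the capillary term exceeds the hydrostatic term by several orders of magnitude, which is why capillary-driven infusion in microgravity is a plausible hypothesis worth testing in flight.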
      • 08.0205 Behavior Analysis of Charged Lunar Regolith Simulants Lifted by Metal Wheel in Vacuum Environment
        Tadashi Matsuura (Toyota Motor Corporation), Hiroaki Kawamura (Toyota Motor Corporation), Nobutaka Tanishima (Japan Aerospace Exploration Agency), Shinichiro Noda (Toyota Motor Corporation), Natsu Fujioka () Presentation: Tadashi Matsuura - -
        This paper presents a comprehensive investigation into the behavior of regolith dust in a vacuum environment, utilizing the lunar soil simulant LHS-1 through a combination of experimental and numerical analyses. The behavior of regolith dust is a critical concern in lunar mobility, particularly due to the potential adverse effects that dust adhesion can have on equipment functionality. Such impacts can significantly impair the performance of rovers and other exploration devices, necessitating careful consideration of this phenomenon. On the lunar surface, various factors contribute to the behavior of regolith dust, including natural lofting due to sunlight and the ejection caused by Rocket Plume Surface Interaction (PSI). In particular, rovers traversing the lunar surface face additional challenges, with the wheel kick-up phenomenon being a critical factor. Understanding how this kick-up contributes to dust dispersion is essential for effective mission planning and equipment design. Additionally, the charging of dust particles during wheel kick-up has significant implications for dust behavior and subsequent adhesion to equipment. However, there is currently a dearth of existing research focusing on dust behavior during kick-up in a vacuum environment, particularly with regard to the effects of charging. In light of this context, the present study aims to elucidate the behavior of regolith dust uplifted by rotating wheels through both experimental and numerical analyses, employing the Discrete Element Method (DEM) for the numerical simulation. To accurately represent the physical behavior of particles in DEM, it is necessary to identify parameters based on experimental results. Accordingly, this study conducted parameter identification tests, including uniaxial compression tests and dynamic angle of repose tests under vacuum conditions, along with numerical analyses of both tests. 
The uniaxial compression tests measured the unconfined yield strength (UYS), while the dynamic angle of repose tests assessed the dynamic angle of repose and dynamic avalanche angle. The parameters obtained from these results were subsequently used to perform numerical analyses of the behavior of metallic wheels as they uplift lunar regolith simulant. To validate the results of the numerical analysis, a scale model testing apparatus for metallic wheels was constructed, and experiments were conducted under vacuum conditions, similar to those of the particle characteristic tests. The tests on metallic wheels also measured the charging conditions during wheel operation, allowing for an exploration of the influence of charging on dust uplift dynamics. The findings of this research are anticipated to provide critical insights for future lunar exploration activities, contributing to a better understanding of dust dynamics and enhancing the design and operation of lunar rovers. By addressing the complexities of dust behavior in a vacuum environment, this study aims to inform the development of strategies for mitigating dust-related challenges in upcoming lunar missions.
      • 08.0207 In-Situ Manufacturing of Conformal Fillers for High-Stress Underground Arch Restoration
        Madison Feehan (Space Copy Inc.), Timothy Anderson (Signet Technologies) Presentation: Madison Feehan - -
        In highly stressed brittle ground, excavation-induced bulking, ravelling, and overbreak can decouple preinstalled arches and trusses from the rock mass, degrading support performance and constraining constructability. Kaiser’s Muir Wood Lecture (2016) emphasizes that effective support at depth must both control bulking and mobilize confinement, which maintains contact so the rock’s self-supporting capacity can be engaged, rather than allowing “raining” rock and “flying” steel sets that lose load-bearing function until voids are backfilled. In practice, this requires a nuanced understanding of time-dependent deformation and fracture propagation in brittle media, where stress redistribution following overbreak can trigger progressive debonding of the support interface. Early detection of these gaps, before large-scale instability develops, can be the difference between routine maintenance and a significant rehabilitation effort. Field experience in both mining and civil tunnelling demonstrates that localized loss of contact often precedes global support failure, making targeted, rapid remediation technologies a critical enabler for long-term stability. Building on those principles, and on practitioner training in lunar subsurface architecture and drilling system selection, this paper proposes a sensing-to-print remediation workflow that (1) rapidly scans the cavity above ground supports; (2) computes a parametric, conformal “plug” geometry to re-establish continuous contact; and (3) fabricates and installs the plug in situ via additive manufacturing. For terrestrial mines and civil tunnels, the plug would be produced from locally available cementitious or metallic feedstocks; for the Moon, we target Wire Arc Additive Manufacturing (WAAM) of regolith to print meter-scale stiffeners and plugs that restore arch contact while surviving thermal and vacuum extremes. 
The enabling platform leverages a multi-purpose device that integrates materials processing, real-time data analysis, printing, and robotic quality assurance, designed for remote, harsh environments and solar-compatible operation, an architecture aligned with emerging in-situ logistics and manufacturing concepts intended to reduce mass-dependent supply chains and enable infrastructure on Earth and off-world. We frame the mechanics of “contact restoration” by linking plug stiffness and interface pressure to reductions in radial strain demand and bulking factors in the inner shell, with design targets derived from deformation-based support guidance (e.g., achieving confining effects sufficient to suppress ravelling and limit tangential strain). We outline candidate sensing (LiDAR/structured-light with SLAM), geometric processing (voxel morphologies with tolerance to scan noise), interlock and anchorage details compatible with steel arches and lattice trusses, and a control loop that tunes print parameters for irregular, low-cohesion media. A validation plan is provided: laboratory coupons in regolith simulants, full-scale analog demonstrations in brittle ground with instrumented arches, and radiation-shielding considerations for lunar tunnels. The proposed approach aims to convert a recurrent failure mode concerning the loss of arch contact due to overbreak and material crumbling into a field-serviceable maintenance operation, improving stand-up time, personnel safety, and mission logistics on both Earth and the Moon, demonstrating the capacity of this study as key for advancing knowledge in dual-use operations for long-duration missions.
    • Randy Williams (The Aerospace Corporation) & Melissa Sampson (Orbit Fab)
      • 08.0301 An Integrated Network for Commercial Spaceports: Enhancing Global Space Access and Collaboration
        Wanjiku Chebet Kanjumba (University of Florida) Presentation: Wanjiku Chebet Kanjumba - -
        The anticipated increase in the volume and diversity of space activities, from satellite constellations and space tourism to lunar exploration and beyond, demands a coordinated approach to commercial spaceport development on Earth. The primary purpose of this paper is to examine the possible creation of an Integrated Network for Commercial Spaceports (INCS), a global framework aimed at unifying the operations, standards, and resources of commercial spaceports worldwide to meet the growing demands of the space sector, while fostering sustainability in spaceport development and utilisation. The INCS would provide a collaborative platform and network where commercial spaceports share standards, resources, and expertise to support diverse activities such as satellite launches, space tourism, cargo transportation, and research missions, enabling seamless operations and equitable access to space. This paper investigates the possible challenges of establishing the INCS, including regulatory alignment, technological, and economic challenges, emphasising the need for international collaboration and alignment with existing space governance frameworks. Drawing lessons from existing global networks, such as the International Civil Aviation Organisation (ICAO), the INCS seeks to provide a scalable and adaptable model for international collaboration in the spaceport sector. By fostering connectivity, innovation, and fair representation, the INCS will serve as a cornerstone for humanity’s transition to a thriving, interconnected spacefaring civilisation, paving the way for a new era of space commerce and exploration.
      • 08.0303 Performance Evaluation of TSN-based Communication System for the Next Generation Launch Vehicle
        Francesco Giacinto Lavacca (Link Campus University), Tiziana Fiori (La Sapienza Università di Roma), Vincenzo Eramo (University of Roma Sapienza) Presentation: Francesco Giacinto Lavacca - -
        - Objective: The objective of the paper is to propose and evaluate the performance of a Time Sensitive Network architecture applied to the Next Generation Launch Vehicle in order to interconnect the terminals (on-board computer, telemetry apparatus, Inertial Reference System, ...) performing the Guidance, Navigation and Control (GNC) operations and producing/receiving telemetry traffic. The work described in the paper was funded by the European Space Agency (ESA) within the research frame of the National Recovery and Resilience Plan (NRRP) System Integrated Avionic (SIA) technological development. - Research Motivation: The aerospace sector has shown increasing interest in adopting deterministic Ethernet-based technologies to replace legacy solutions such as the MIL-STD-1553B bus, which suffers from limited bandwidth and high maintenance costs. Time Sensitive Networking (TSN), a recent evolution of standard Ethernet developed under the IEEE 802.1 working group, offers a promising open alternative. Designed to support time-, mission-, and safety-critical traffic, TSN introduces intrinsic mechanisms such as traffic scheduling, time synchronization, and stream reservation directly into the Ethernet protocol stack. These features make TSN suitable for industrial and aerospace domains requiring hard real-time guarantees. The motivation to investigate TSN in the context of launcher avionics lies in its unique combination of technical and economic advantages: (i) TSN is standardized by IEEE and supported by multiple vendors, ensuring sustainability and interoperability across devices; (ii) the standardization of key components (e.g., IEEE 802.1Qbv, 802.1Qcc, and 802.1AS) promotes vendor-neutral solutions and long-term maintainability; (iii) TSN builds on decades of Ethernet evolution, promising future extensibility; (iv) open-source TSN implementations offer flexibility and customization potential, in contrast to closed proprietary solutions. 
- Research Contribution: The main contributions of the paper are as follows: (i) the design of a TSN-based network architecture tailored for small launchers within the framework of the NRRP System Integrated Avionic (SIA) technological development, capable of supporting services with deterministic delay requirements; (ii) the formulation of an optimization problem to compute message transmission offsets, aiming both to avoid link contention and to minimize overall bandwidth usage; (iii) the implementation of a simple yet representative testbed, based on TSN device kits arranged in a daisy-chain topology with redundant communication paths; (iv) the performance evaluation based on the Inter Frame Space Jitter (IFSJ) metric, using a realistic traffic scenario representative of the Next Generation Launch Vehicle (NGLV); (v) results demonstrate that the proposed TSN architecture meets stringent determinism requirements, achieving IFSJ values below 50 ns.
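The offset-assignment idea in contribution (ii) can be sketched as a toy greedy scheduler: for each periodic message sharing a link, pick the smallest offset whose transmissions collide with none already placed over the hyperperiod. This is an illustrative simplification (single link, harmonic periods, unit time grid, no wraparound handling), not the paper's optimization formulation:

```python
from functools import reduce
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def schedule_offsets(messages):
    """Greedily assign transmission offsets to periodic messages sharing
    one link so that no two transmissions overlap within the hyperperiod.
    `messages` is a list of (period, duration) in arbitrary time units;
    returns one offset per message, in input order."""
    hyper = reduce(lcm, (p for p, _ in messages))
    busy = []  # (start, end) intervals already reserved on the link
    offsets = []
    for period, dur in messages:
        offset = 0
        while True:
            # All transmission windows of this message over the hyperperiod.
            slots = [(offset + k * period, offset + k * period + dur)
                     for k in range(hyper // period)]
            # Intervals (s, e) and (s2, e2) are disjoint iff e<=s2 or s>=e2.
            if all(e <= s2 or s >= e2 for s, e in slots for s2, e2 in busy):
                busy.extend(slots)
                offsets.append(offset)
                break
            offset += 1
            if offset >= period:
                raise ValueError("no feasible offset for this message")
    return offsets
```

For example, two messages of period 10 and one of period 20, each lasting 2 units, pack onto offsets 0, 2, and 4 with no contention. A real formulation would additionally minimize bandwidth usage and respect gate schedules per IEEE 802.1Qbv.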
      • 08.0304 Multi-physics Simulation Code LS-LUCA for Innovative Space Transportation System Design
        Keiichiro Fujimoto (Japan Aerospace Exploration Agency), Kaname Kawatsu (JAXA), Hiroaki Amakawa (Japan Aerospace Exploration Agency) Presentation: Keiichiro Fujimoto - -
        In recent years, securing international competitiveness in space development has increasingly required the construction of space transportation systems capable of flexibly accommodating diverse missions and high-frequency launches. In particular, the design and analysis of technically advanced systems such as reusable rockets and human spaceflight vehicles demand more precise, systematic, and comprehensive design evaluations than ever before. This study presents the development of a multi-physics simulation code, LS-LUCA (Leading Unified Code for Analysis), which enables cross-disciplinary analysis across multiple physical domains. LS-LUCA supports high-fidelity and exhaustive system evaluations during the early design phase of space transportation systems. The code integrates aerodynamic analysis for complex geometries, attitude and trajectory simulations, control logic coupling, structural integrity, and thermal protection assessments, thereby facilitating efficient exploration of design and operational options and evaluation of system feasibility. In Japan, research and government-led investment toward the realization of an indigenous human spaceflight system are accelerating. This trend necessitates system-level feasibility assessments that include hazard event predictions such as destruction or explosion during anomalies, launch abort feasibility, and injury risk evaluations for crew members due to shock loads. LS-LUCA is capable of addressing these human spaceflight-specific safety evaluations, enabling comprehensive design studies for both reusable and crewed launch vehicles within a single framework. To achieve the design capabilities required for increasingly complex and large-scale space transportation systems, it is essential to formalize expert tacit knowledge into shareable models and accelerate design iteration cycles through high-speed computational analysis. 
LS-LUCA addresses the challenge of increased computational cost associated with high-fidelity multi-physics simulations by employing reduced-order models that maintain necessary accuracy while ensuring practical computation times. This presentation outlines the features and development status of LS-LUCA, including strategies and software design considerations to enhance its versatility and extensibility. Application examples to actual rocket system feasibility evaluations are also introduced. Furthermore, we discuss the integration of large language model (LLM) technologies into LS-LUCA as engineering agents, aiming to reduce the reliance on expert knowledge and complement human limitations in complex system design tasks. The prototype development and potential of LLMs to enhance engineering workflows are also explored.
    • Ryan Stephan ()
      • 08.0402 Commercial Orbit Transfer Services to Facilitate Lunar and Martian Exploration
        Matt Costello (Impulse Space) Presentation: Matt Costello - -
        Lunar and Martian missions require significant propulsive capability to reach operational orbits. Orbital transfer vehicles (OTVs) offer a cost-effective, efficient, and flexible solution compared to traditional launch services. Impulse Space conducted detailed trajectory and performance analyses with two OTVs: Mira, a highly propulsive spacecraft bus, and Helios, a high-energy kick stage. Two design reference missions (DRMs) were evaluated for each vehicle as part of a study for the National Aeronautics and Space Administration (NASA) Launch Service Program (LSP) with the goal of demonstrating the current and near-term utility of OTVs to inform the development of a commercial approach to deliver payloads to exo-geosynchronous (xGEO) orbits that are not easily served by traditional launch services. The OTV capability would be procured through the VADR (Venture-Class Acquisition of Dedicated and Rideshare) launch services contract. For Mira, payload mass performance was evaluated to LLO (Low Lunar Orbit) and an ELFO (Elliptical Lunar Frozen Orbit). Results show that Mira is able to deliver 100 kg of payload to LLO through the completion of a Lunar Orbit Insertion (LOI) burn and an orbit reduction maneuver. For the ELFO mission, Mira performs a midcourse targeting maneuver to achieve the desired approach geometry for direct insertion into the ELFO, delivering 48 kg of payload mass. For many missions, it may be more effective to use Mira not only as the transfer vehicle but also as the long-term hosting platform, leveraging its robust payload support capabilities while eliminating the need for separate deployers. The DRMs employed relatively direct lunar transfer orbits. Alternative approaches, such as ballistic lunar transfers, may further increase payload performance but were not covered in this study. For Helios, a heliocentric and LLO DRM were evaluated. 
Results from the heliocentric orbit DRM demonstrate Helios’ significant utility in augmenting the performance of medium LVs to provide more access to higher energy escape orbits. The Helios LLO mission requires upgrades over the current version of the OTV in the form of onboard power generation and improvements to the thermal management of cryogenic propellants. The evaluation of the Helios LLO DRM factors in these potential upgrades and the effect that boiloff has on propellant mass during the transfer. The 2,800 kg of payload mass deliverable by Helios to LLO creates significant opportunities for future lunar missions including both orbiters and landers. These results highlight the current and near-term utility of commercial OTVs for lunar and deep-space missions, enabling new methods of payload delivery and mission architectures to the Moon, Mars, and beyond.
    • Jessica Marquez (NASA Ames Research Center) & Kevin Duda (Draper Laboratory)
      • 08.0501 Updates to the Injury Modes and Effects Analysis for Suited Lunar Surface Operations and Training
        Aaron Drake (KBR), Tessa Reiber (KBR), Keegan Yates (KBR), Linh Vu (), Nathaniel Newby (KBR) Presentation: Aaron Drake - -
        The Injury Modes and Effects Analysis (IMEA) is a tool that categorizes and ranks musculoskeletal and traumatic injury risks associated with the operation of an extravehicular activity (EVA) spacesuit with a focus on lunar surface operations and associated training activities. The IMEA is based on the concept of a Failure Modes and Effects Analysis (FMEA), a process employed in engineering fields to analyze system component failures and identify their effects on system operations. The IMEA assesses injuries and discomforts associated with the operation of an EVA spacesuit and their impacts on crew health and the completion of mission objectives. To achieve this, scenarios with the potential to cause injury are scored based on a combination of their likelihood to occur and the consequence should they occur and are then ranked by descending level of risk. Since the release of the original IMEA, new information relevant to suited injury has become available, including the announcement of NASA vendor contracts and the release of revised Concept of Operation (ConOps) guidelines for lunar surface exploration. The IMEA was updated to reflect this new information with modifications to risk scores, inclusion of additional injury inducing scenarios, and consideration for factors contributing to injury. The structure of the IMEA was also modified to improve top-down organization by recategorizing injury precursors into Injury Scenarios and Injury Contributors. Members of the Anthropometry, Injury Biomechanics, and Ergonomics Lab (AIBEL) at NASA’s Johnson Space Center convened regularly for approximately 6 months to reevaluate the IMEA with up-to-date knowledge on suited injury. Several Injury Scenarios were rescored or redefined (renamed, split into multiple, recategorized as Injury Contributors), new Injury Scenarios were added, and some were removed. The updates were presented to a broader group of experts to achieve consensus. 
Forty Injury Contributors were identified based on historical evidence or surmised based on knowledge of upcoming missions and training. Top Injury Contributors were assigned to each Injury Scenario based on the results of a survey distributed amongst suited injury stakeholders within NASA’s Human Health and Performance Contract (HHPC). For example, the survey results identified “Suit Fit Issues” and “Contact Stresses” as the top Injury Contributors for the “Glove Use” Injury Scenario which aligns with historical evidence. Upon completion of the IMEA update, the three highest ranking Injury Scenarios were High Cadence EVA (training & mission), NBL Training (planetary g), and Glove Use. Results from the IMEA update were presented to internal and external subject matter experts at the second Suited Injury Summit to share information, discuss injury characterization tools, and brainstorm injury mitigation strategies. Performing periodic updates to the IMEA has resulted in a versioned tool that can be used by NASA and its partners to evaluate suited injury risks in numerous scenarios. It will continue to be updated with evolving information and can be applied to future suited endeavors including Martian surface exploration.
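The FMEA-style ranking the IMEA performs can be sketched in a few lines: each scenario receives a risk score from ordinal likelihood and consequence ratings and is sorted highest risk first. The rating values below are illustrative placeholders, not the IMEA's actual scores:

```python
def rank_injury_scenarios(scenarios):
    """Rank injury scenarios FMEA-style: score = likelihood x consequence
    (ordinal scales, e.g. 1-5), sorted by descending risk.
    `scenarios` maps scenario name -> (likelihood, consequence)."""
    return sorted(((name, lik * con) for name, (lik, con) in scenarios.items()),
                  key=lambda item: item[1], reverse=True)

# Illustrative ratings only; the published IMEA scores are not reproduced here.
ranked = rank_injury_scenarios({
    "High Cadence EVA": (5, 5),
    "NBL Training": (5, 4),
    "Glove Use": (4, 4),
})
```

With these made-up ratings the ordering happens to match the abstract's three highest-ranked scenarios; the real IMEA uses expert-consensus scores and a richer scenario taxonomy.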
      • 08.0502 Review of Advanced Informatics and Display Systems for Extravehicular Activity Operations
        Joshua Elston (Texas A&M University), Jacob Keller (Amentum, NASA Johnson Space Center), Matthew Miller (Jacobs/NASA JSC), Susannah Paletz (University of Maryland, College Park), Lauren Landon (NASA (KBR)), Paromita Mitra (), John Karasinski (NASA - Ames Research Center), Jessica Marquez (NASA Ames Research Center) Presentation: Joshua Elston - -
        Extravehicular Activity (EVA) represents a complex and mission-critical component of human space exploration, necessitating precise operational execution within exceptionally arduous environments. Historically, spacesuit information management has been characterized by rudimentary displays and a significant reliance on terrestrial ground control for real-time situational awareness and procedural guidance. This model, while demonstrably adequate for Low Earth Orbit (LEO) missions, is progressively recognized as insufficient for forthcoming deep-space endeavors, particularly lunar and Martian surface operations, where substantial communication latencies and bandwidth constraints intrinsically mandate a fundamental shift towards augmented astronaut autonomy. This paper outlines a comprehensive literature review designed to systematically analyze the transformative evolution of spacesuit advanced informatics, display, and command and control designs, concurrently examining the technological implications for the supporting ground infrastructure. The methodology for this review involves a systematic identification and synthesis of publicly available scientific and technical literature compiled over the past four decades. Sources will encompass peer-reviewed journals, proceedings from prominent aerospace and human factors conferences, technical reports issued by international space agencies, and relevant industry publications. The analytical approach will center upon tracing the progression of key technological advancements, delineating recurring human factors challenges, and identifying pivotal milestones in the development of sophisticated human-machine interfaces tailored for extreme environments. The review delves into several interconnected thematic areas. 
Firstly, it investigates the evolution of on-suit informatics, particularly focusing on the integration of advanced display technologies such as Augmented Reality (AR) and Head-Up Displays (HUDs), as exemplified by systems like the NASA JSC’s Joint AR and Collins Aerospace’s Information Technologies and Informatics Subsystem (IT IS). These advancements are crucial for providing astronauts with comprehensive, real-time data directly within their visual field, thereby minimizing cognitive load and enhancing in-situ decision-making capabilities. Secondly, the paper explores the integration of wearable sensor arrays for proactive health and system management, highlighting the strategic shift from reactive monitoring to predictive diagnostics achieved through the rigorous fusion of biometric, environmental, and suit telemetry data. Thirdly, it examines the development of command-and-control interfaces, detailing the progression from traditional physical controls to intuitive hands-free modalities, including natural language voice commands and foot-operated sensors, alongside advanced physical controllers meticulously designed for optimal dexterity within pressurized gloves. Crucially, the review addresses the profound impact of these on-suit technological advancements on the broader support ecosystem. Increased astronaut autonomy, primarily driven by the exigencies of communication delays and bandwidth limitations, fundamentally reconfigures the role of ground control teams from direct command to supervisory oversight, strategic planning, and sophisticated anomaly resolution. This necessitates an evolution in ground-based tools, workflows, and decision support systems, encompassing advanced data visualization platforms, predictive analytics, and increased automation within Mission Control Centers. 
The review draws parallels with the historical transformation of aviation cockpits, where the shift to integrated digital displays redefined pilot roles, to rigorously contextualize the analogous changes anticipated for EVA operations and their respective support teams. The synthesis of these interconnected technological and operational shifts is anticipated to offer valuable insights for the future trajectory of human space exploration.
      • 08.0503 Evaluation of Crew Emergency Return Capability during Lunar Terrain Excursions
        Harry Litaker (The Aerospace Corporation) Presentation: Harry Litaker - -
        The Lunar Terrain Vehicle (LTV) is an unpressurized lunar rover capable of carrying two crewmembers and equipment during lunar surface exploration traverses. During these traverses, mission operations must protect for a worst-case contingency of an incapacitated crewmember needing emergency return to a safe haven such as the habitat or crew lander. The LTV can traverse up to 10 kilometers (km) away from the nearest safe haven; it is therefore imperative for mission planners to understand how quickly the crew can drive the vehicle back in the event of an emergency. The purpose of this study was to determine the vehicle speeds achievable, and the time required, for an LTV operator to complete such a rapid return traverse. The traverse was simulated using the Johnson Space Center’s Systems Engineering Simulator (SES) motion table facility, which pairs a six-degree-of-freedom motion table with a virtual-reality-headset-based simulation of lunar South Pole lighting and terrain. The study team collected vehicle speeds, traverse duration, rock collisions with the vehicle, and vibration and jerk data from the simulator for six test subjects completing the same emergency return traverse. This study is an early look at what is achievable from a vehicle control and safety perspective, providing an understanding of likely rescue times as the National Aeronautics and Space Administration (NASA) prepares concepts of operations and timelines for Artemis lunar surface missions.
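The emergency-return question above reduces to distance-over-speed arithmetic. As a hedged illustration only (the average speeds below are assumed for the sketch, not results from the study), the scaling can be shown in a few lines of Python:

```python
# Back-of-envelope emergency-return times for a 10 km LTV traverse.
# The candidate average speeds are illustrative assumptions, not study data.
def return_time_minutes(distance_km: float, avg_speed_kph: float) -> float:
    """Time to cover distance_km at a constant average speed, in minutes."""
    return distance_km / avg_speed_kph * 60.0

for speed in (4.0, 8.0, 12.0):  # assumed average speeds, km/h
    print(f"{speed:4.1f} km/h -> {return_time_minutes(10.0, speed):5.1f} min")
```

Even this crude model shows why measured achievable speeds over rough terrain dominate any rescue-time estimate.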
      • 08.0504 Examination of the Joint Augmented Reality Navigation System for Lunar Surface Operations
        Amanda Smith (KBR), Andrew Nakushian (), Sarosh Nandwani (NASA - Johnson Space Center), Briana Krygier (), Paromita Mitra (), Matthew Miller (Jacobs/NASA JSC) Presentation: Amanda Smith - -
        Delays in communication between ground support in Mission Control and Extravehicular Activity (EVA) crew will be a defining characteristic of future deep-space exploration missions. This inherent communication lag necessitates a substantial increase in crew autonomy to ensure mission success and crew safety. Addressing this critical need, the Joint Augmented Reality (AR) team at NASA's Johnson Space Center (JSC) has developed a novel concept for an augmented reality informatics display specifically designed for integration into EVA suits. This advanced system aims to empower crew members to make independent decisions using local resources, aligning with NASA's objectives to enhance both crew autonomy and interaction with ground teams in future human planetary exploration missions. The present pilot study compared the performance and subjective experience of five crew members from the astronaut office utilizing this prototype Joint AR display against traditional paper map products for EVA navigation tasks. The investigation employed a within-subjects design, allowing each participant to experience both navigation modalities in a simulated environment. Key performance metrics collected included navigation time, waypoint accuracy, traverse path efficiency, and traverse heading variability. Additionally, subjective data were gathered on usability, perceived workload, ease of navigation, confidence in reaching destinations, and geographical awareness, assessed through distance estimation tasks. The quantitative results demonstrated clear advantages for the Joint AR condition. Participants using the AR display spent significantly less time navigating, reached their intended waypoints more accurately, traversed slightly more efficient paths, and showed reduced heading variability compared to the paper map condition. 
These objective improvements suggest that the AR system provides more precise and streamlined navigational guidance, potentially reducing crew fatigue and consumable usage. Subjectively, the benefits of the Joint AR system were equally compelling. Participants reported higher usability scores, a noticeable decrease in perceived workload, and found navigation considerably easier with the AR display. Furthermore, they expressed greater confidence in confirming they had reached their intended destinations and demonstrated improved geographical awareness, as evidenced by more accurate distance estimations. These qualitative findings underscore the intuitive and supportive nature of the AR interface, enhancing crew confidence and situational understanding during complex EVAs. In conclusion, this study provides compelling evidence that the Joint AR informatics display represents a significant advancement in supporting crew autonomy for future exploration missions. By seamlessly integrating critical navigation and task information into the crew's field of view, the AR system has the potential to dramatically improve EVA efficiency, effectiveness, and safety. The findings advocate for the continued development and eventual implementation of such advanced AR solutions as a foundational element of next-generation spacesuit capabilities, enabling crews to operate more independently and productively in environments with delayed communication.
      • 08.0508 Development of a Ground Test Protocol for Evaluations of Drivable LTV Engineering Development Units
        Harry Litaker (The Aerospace Corporation), Omar Bekdash (Aerospace Corporation) Presentation: Harry Litaker - -
        The National Aeronautics and Space Administration’s (NASA) Lunar Terrain Vehicle (LTV) program seeks commercial contractors to develop the next-generation unpressurized rover to support two crewmembers and equipment on lunar surface expeditions. NASA research engineers, in collaboration with subject matter experts, utilized NASA Johnson Space Center’s Ground Test Unit (GTU) vehicle to develop a set of ground-based driving tasks and build an evaluation protocol for assessing crewed operations with commercial LTVs. The purpose of the protocol is to provide a controlled, test-track-like environment in which to methodically evaluate the crewed capabilities of an LTV and provide recommendations for design improvements. Areas evaluated through human-in-the-loop (HITL) testing include vehicle operations, operator workload, human-machine interfaces, and seated comfort, among others. After reviewing prior HITL vehicle testing from the military, automotive, and aerospace industries and from NASA, a task battery was designed that encompasses all expected driving operations and environments in which the LTV will need to operate. This included driving on flat sections and sloped surfaces, overcoming obstacles such as rocks and craters, and navigating through challenging terrain that requires precise driving. These terrain types and scenarios were demonstrated in the JSC Rockyard, an outdoor facility with simulated craters, a hill, and a boulder field. All protocol-development testing was performed in shirtsleeves; future testing will evaluate drivable LTVs in pressurized suits. This paper summarizes the methods, metrics, and lessons learned from the initial shirtsleeve testing using the GTU.
    • Andrew Abercromby (X-3PO: Extreme Physiology, Performance, Protection & Operations) & Ana Diaz Artiles (Texas A&M University)
      • 08.0603 Development of a Novel Decompression Profile and Physiological Response to Polaris Dawn's EVA
        Marissa Rosenberg (SpaceX) Presentation: Marissa Rosenberg - -
        INTRODUCTION: Polaris Dawn completed the first commercial extravehicular activity (EVA), taking the SpaceX Dragon vehicle and four astronauts to vacuum. To mitigate the risk of decompression sickness (DCS), SpaceX developed a novel decompression profile, which was validated in collaboration with NASA using a human-rated altitude chamber at NASA’s Johnson Space Center (the “20 ft Chamber”). Development continued after the validation study, with modifications aimed at additional risk reduction. Data on the ground-validated and flight profiles are presented.
METHODS: Profile Validation. Eight participants, four flight crew and four age-, gender-, and BMI-matched volunteers, underwent the test protocol. Conditions were flight-matched by limiting ambulation pre-EVA and by a simulated EVA task paradigm targeting expected metabolic rates, performed while supine. The validation profile comprised:
Phase 1: 24 h @ 11.8 psi, 21% O2
Phase 2: 19 h @ 9.5 psi, 26.5% O2
O2 pre-breathe: 6 min @ 9.5 psi, 100% O2
EVA: 90 min @ 4.5 psi, 100% O2
During the simulated EVA, cardiac apical four-chamber views were acquired via ultrasonography to monitor for venous gas emboli (VGE), and symptoms were queried by a hyperbaric physician. The appearance of any left atrial or left ventricular VGE, or any symptoms concerning for DCS, was the criterion for subject removal and potential treatment.
Flown Profile. After initial validation, the flight profile was iterated further for greater risk reduction. The number of phases, their environmental setpoints, and the suit pressure were modified to further enhance nitrogen washout and to keep barometric pressure above the estimated tissue nitrogen pressure during EVA. The target EVA length was also shortened for reasons unrelated to the decompression profile. The flight profile comprised:
Phase 1: 5 h 20 m @ 11.80 psi, 25.7% O2
Phase 2: 5 h 30 m @ 10.95 psi, 27.7% O2
Phase 3: 5 h 00 m @ 10.05 psi, 30.1% O2
Phase 4: 3 h 35 m @ 9.25 psi, 32.8% O2
Phase 5: 19 h 00 m @ 8.65 psi, 37.5% O2
O2 pre-breathe: 13.5 min @ 8.65 psi, 100% O2 prior to vent initiation (20m > 6 psi)
EVA: 33 min @ 5.07 psi, 100% O2
As every change from the validated protocol was risk-reducing, SpaceX opted not to pursue additional human-subjects testing prior to flight. Physiological data were collected for all crewmembers in flight: heart rate, O2 saturation, and respiratory rate during the decompression profile; heart rate and core body temperature during the EVA.
DISCUSSION: We characterized two novel decompression profiles to reduce the risk of DCS on Polaris Dawn. No DCS symptoms were observed in either the chamber study or flight. These results are a promising indicator that this type of decompression profile, which reduces consumable utilization via a shorter pre-breathe, is operationally feasible. An EVA suit and cabin environment that minimize DCS risk by preventing bubble formation with minimal pre-breathe enable resource-efficient protocols for future exploration. Testing a larger subject pool with increased EVA duration and suit pressure are promising avenues for future studies.
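To see why the staged flight profile drives nitrogen washout, the ambient nitrogen partial pressure implied by each quoted phase can be computed directly from the abstract's numbers (ppN2 = total pressure × nitrogen fraction; this is simple arithmetic on the quoted setpoints, not an analysis from the paper):

```python
# Ambient nitrogen partial pressure for each flown cabin phase
# (pressures in psi and O2 fractions as quoted in the abstract).
phases = [
    ("Phase 1", 11.80, 0.257),
    ("Phase 2", 10.95, 0.277),
    ("Phase 3", 10.05, 0.301),
    ("Phase 4",  9.25, 0.328),
    ("Phase 5",  8.65, 0.375),
]
for name, p_total, fio2 in phases:
    ppn2 = p_total * (1.0 - fio2)  # ppN2 = P_total * (1 - FiO2)
    print(f"{name}: ppN2 = {ppn2:.2f} psi")
# The monotonically falling ppN2 (about 8.77 psi down to 5.41 psi) is what
# drives tissue nitrogen washout ahead of the 5.07 psi, 100% O2 EVA.
```

Note that the final-phase ppN2 sits below the 5.07 psi EVA suit pressure, consistent with the stated goal of keeping barometric pressure above estimated tissue nitrogen pressure.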
      • 08.0604 Development of an Immersive Orion Docking Simulator with Vestibular Perturbations
        John Hayes (Texas A&M University), Nancy Currie-Gregg (Texas A&M University) Presentation: John Hayes - -
        Neurovestibular disturbances pose a major challenge for astronauts following g-transitions. According to Einstein’s equivalence principle, a linear accelerometer cannot distinguish gravity from translational linear acceleration. To segregate tilt and translation, the brain must integrate inputs from the otolith organs – the brain’s linear accelerometers – with numerous other sensory inputs. Following a g-transition, the brain’s model for interpreting these inputs degrades. Conflicting signals from different sensory inputs can cause motion sickness, vertigo, disorientation, and postural/manual control decrements. The brain can adapt to these new conditions, but this adaptation process takes time, during which astronauts remain susceptible to these symptoms/performance decrements. NASA has identified manual control decrements associated with these vestibular disturbances following g-transitions as a major risk factor for future deep-space missions. Significant research is needed to characterize the operational impacts of these manual control decrements, but conducting meaningful research in these contexts can be very challenging, due to the numerous constraints of in-flight research (limited time and sample size, mass and volume constraints). Ground-based analogs must be developed to enable controlled research studies, countermeasure development/testing, and preflight training. This paper describes the development of an orbital docking simulator, in which subjects pilot a virtual reality (VR) Orion spacecraft using physical hand controllers (a rotational hand controller and translational hand controller) and are required to dock to the Lunar Gateway. The system is capable of collecting numerous task performance metrics (e.g., control inputs, spacecraft state information, fuel usage), physiological data (eye tracking, HR/V), and subjective measures (e.g., workload, motion sickness). 
The VR simulation is integrated with a motion platform capable of 360-degree rotation in three axes. While motion cues from this platform cannot fully simulate microgravity, the goal is to use motion cueing to partially replicate in-flight sensations/perceptions, specifically with respect to vestibular sensory conflict. Two different seating conditions have been adopted: an upright position and a supine position. The supine position aims to partially replicate the fluid shift associated with microgravity exposure, and to decouple otolith and semicircular canal inputs associated with certain motions (e.g., roll). The motion cueing algorithm uses the linear and angular accelerations of the simulated Orion spacecraft and calculates a quaternion which is used to command the motion base. Validation and tuning of the motion cueing algorithm, using both computational models of vestibular perception and human subjects testing, is ongoing. To further replicate the vestibular disruption associated with entry into microgravity, the simulator is also equipped with galvanic vestibular stimulation (GVS), an electrical stimulation technique previously demonstrated to replicate post-flight vestibular challenges. The potential applications for this simulator are vast. It can be used as an analog in controlled research studies to characterize performance decrements and test countermeasures. Further, it has potential as a preflight training tool to inform crew members’ expectations for the disruptive sensations they might experience during flight and even possibly to expedite adaptation upon entry into space. This work was supported by a NASA Space Technology Graduate Research Opportunity.
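The abstract does not specify the cueing algorithm beyond "accelerations in, quaternion out," so the following is purely an illustrative sketch of the classic tilt-coordination idea used in flight simulators (our assumption, not the authors' implementation): sustained specific force is reproduced by tilting the seat so gravity supplies the felt acceleration, and the resulting attitude is encoded as a quaternion command.

```python
import math

G = 9.81  # m/s^2

def tilt_quaternion(ax: float, ay: float):
    """Tilt-coordination sketch. ax: longitudinal, ay: lateral simulated
    acceleration (m/s^2). Returns a unit quaternion (w, x, y, z) for a
    combined roll-about-x / pitch-about-y tilt (yaw omitted)."""
    pitch = math.asin(max(-1.0, min(1.0, ax / G)))   # tilt back to mimic surge
    roll  = math.asin(max(-1.0, min(1.0, -ay / G)))  # tilt sideways for sway
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    return (cr * cp, sr * cp, cr * sp, -sr * sp)

q = tilt_quaternion(1.0, 0.0)  # gentle 1 m/s^2 surge command for the base
```

A real implementation would add washout filtering and rate limits so the tilt stays below the rider's detection threshold; those refinements are omitted here.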
      • 08.0605 Olfactory Brain-on-Chip in Microgravity for Accelerated Aging and Drug Discovery
        Parnian Lak (Olfera) Presentation: Parnian Lak - -
        We present the first planned deployment of a patient-derived human olfactory brain-on-chip (OChip) aboard the International Space Station (ISS) to model Parkinson’s disease (PD) progression and therapeutic intervention under microgravity. OChip recreates the human olfactory (smell) pathway’s flow of signals and molecules, from the olfactory epithelium to the olfactory bulb and then to the cerebral cortex, in a tricompartmental platform with unidirectional axonal connectivity. Olfactory epithelium neurons are obtained via minimally invasive nasal swabbing from early-stage PD patients with hyposmia, or reduced sense of smell. These neurons carry endogenous pathological proteins, enabling the study of known disease hallmarks as well as novel early disease signatures absent in the stem cell and organoid models previously studied on the ISS. This study will assess to what degree microgravity accelerates neuronal aging and pathological protein propagation, whether disease characteristics can be observed on OChip, and whether therapeutics such as prasinezumab (a PD drug in clinical trials) and our proprietary olfactory drug can slow the progression of pathological proteins. Parallel flight and ground OChips will be analyzed by daily brightfield imaging, compartment-resolved biochemical assays, and liquid chromatography-mass spectrometry-based drug distribution tracking. Based on prior research by other groups, we hypothesize that microgravity-accelerated aging will compress disease-relevant biological processes into weeks and that our therapeutic interventions will reduce propagation of pathological PD proteins, validating OChip in space as a translational platform for drug discovery.
    • Brian McCarthy (The Aerospace Corporation)
      • 08.0702 Structural and Thermal Design and Testing of the MATCH Payload aboard the Chang’e 7 Lunar Orbiter
        Popefa Charoenvicha (National Astronomical Research Institute of Thailand (NARIT)), Thanayuth Panyalert (National Astronomical Research Institute of Thailand), Peerapong Torteeka (National Astronomical Research Institute of Thailand (NARIT)) Presentation: Popefa Charoenvicha - -
        The Moon-Aiming Thai-Chinese Hodoscope (MATCH) payload, set to enter lunar orbit in 2026, represents Thailand’s first scientific instrument to explore deep space beyond Earth’s orbit — a significant milestone for the nation’s space industry. As one of seven international scientific payloads onboard the Chang’e 7 (C'E7) lunar orbiter, MATCH is tasked with advanced particle detection, supporting studies of the lunar radiation environment by analyzing the directionality and energy levels of high-energy cosmic rays. Its bi-directional detector design enables the measurement of both primary cosmic rays arriving directly from space and secondary albedo particles reflected from the lunar surface. To ensure mission success in this no-fail scenario, the payload must withstand severe mechanical loads during launch and orbit insertion, as well as extreme thermal conditions throughout its operational lifespan. This paper presents the structural and thermal design and analysis of the MATCH payload for the C'E7 lunar orbiter. The payload structure is fabricated from a high-strength, lightweight magnesium alloy with more than 95% Mg content, providing an optimal balance of stiffness, mass reduction, and thermal performance. Comprehensive finite element analyses (FEA) were performed to evaluate mechanical integrity under quasi-static, vibration, and shock loading conditions. Detailed thermal analyses, including in-orbit transient simulations and coupled thermal-structural stress analyses, were conducted to ensure thermal stability and compliance with operational limits throughout all mission phases. In addition to simulation-based analyses, the MATCH payload underwent extensive real-world qualification testing, including thermal vacuum (TVAC) and vibration table tests, to experimentally verify structural integrity and thermal performance. 
An iterative design approach was adopted, in which redesigns, material changes, and structural modifications were implemented based on analysis and test results to further enhance reliability and robustness. The results confirm the design’s ability to maintain required safety margins and stable operation throughout the mission lifespan. The MATCH payload not only contributes to advancing frontier research, but also demonstrates Thailand’s growing expertise in space engineering and international collaboration.
      • 08.0705 Thermal Testing and Analysis of a Modular Box Prototype for the Dragonfly Lander
        Marisa Teti (Johns Hopkins University/Applied Physics Laboratory) Presentation: Marisa Teti - -
        Dragonfly is a NASA New Frontiers mission that will send a rotorcraft lander to Titan, Saturn’s largest moon. Titan has low gravity and a dense atmosphere, making flight an ideal way to traverse its surface. Operation of the rotorcraft poses thermal challenges due to the extremely cold Titan surface temperature of -180°C and an atmosphere at approximately 1.5 times Earth's atmospheric pressure. The rotorcraft lander will be insulated with foam to maintain and control the thermal environment within the lander. In order to choose a foam that can survive Titan conditions, extensive testing of foam on a modular box, one-third the length of the flight lander, has been conducted in the Johns Hopkins University Applied Physics Laboratory (JHU/APL) Space Simulation Laboratory (SSL) using the Titan Pressure Environmental Chamber (TPEC), which mimics the temperature and pressure the lander will experience on Titan. This paper will showcase the data and analysis from these thermal chamber tests that have driven the design of Dragonfly’s lander foam insulation.
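For intuition about why the foam choice matters, a one-line Fourier conduction estimate suffices. All values here are assumed for illustration (the paper's measured Titan-condition conductivities will differ, and Titan's dense atmosphere adds convective effects the simple formula ignores):

```python
# Illustrative steady-state conduction through a lander foam panel.
def heat_loss_w(k: float, area_m2: float, dT: float, thickness_m: float) -> float:
    """Fourier conduction: Q = k * A * dT / t."""
    return k * area_m2 * dT / thickness_m

# Assumed figures: k = 0.03 W/m-K, a 1 m^2 panel, interior at 20 C vs the
# -180 C Titan surface (dT = 200 K), and 5 cm of foam.
q = heat_loss_w(0.03, 1.0, 200.0, 0.05)  # ~120 W per square metre of panel
```

Even small changes in effective conductivity or thickness scale directly into heater power the lander must supply, which is why the chamber data drive the insulation design.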
      • 08.0706 Europa Clipper Mechanical Subsystem Response to Canary Box and Plume Impingement Late Findings
        Jon Hamel (Jet Propulsion Laboratory) Presentation: Jon Hamel - -
        The Europa Clipper mechanical team had to respond to two significant late findings on the project. The first was the addition of what is referred to as the Canary Box, an entire subsystem added to address the late revelation that certain electrical components throughout the spacecraft were not sufficiently radiation-shielded for the mission. The second was the discovery of thruster plume impingement on the stowed magnetometer instrument. The Canary Box involved the addition of an electronics box to the exterior of the spacecraft top panel. The box is designed with unique thermal and radiation shielding properties to mimic different environments within the spacecraft, in support of triaging the health of the affected electronics. From a system perspective, the box is mass added after finite element model correlation, system test, and final coupled loads analysis; thus, great care had to be taken to ensure the box and necessary harnessing did not cause inadvertent harm to the spacecraft and its dynamics. The thruster plume impingement was addressed by the addition of a plume shield mounted on the canister that supports the deployable boom hosting the magnetometer. There were severe constraints on the design and implementation of the plume shield. The shield had to be sufficiently sized and placed to protect the instrument, yet could only be mounted to existing fastener locations on nearby structure. It must survive extreme temperature swings: both the heating from deflecting the thruster plume and the cold of deep space at the outer planets. It had very specific structural stiffness requirements given how it is mounted, and it had to be simple to install onto the fully populated spacecraft. This paper will discuss the mechanical subsystem's solutions to these two issues and how they were implemented satisfactorily while respecting the principle of “do no harm” to the spacecraft.
    • Erica Deionno (The Aerospace Corporation) & Steve Snyder (Jet Propulsion Laboratory)
      • 08.0803 Development of a Structural Thermal Model for a Water-Based Propulsion System on the GEO-X Mission
        Ryo Minematsu (University of Tokyo), Ten Arai (), Koki Ikeda (), Masaki Fujii (), Taiki Achiha (), Sanguk Jeong (), Natsumi Hirota (), Minwoo Han (), Hayato Yoshida (), Kenshin Yamakawa (), Isamu Moriai (), Hiroyuki Koizumi (), Mariko Akiyama (), Ryota Fuse (), Ryu Funase (University of Tokyo), Yuichiro Ezoe () Presentation: Ryo Minematsu - -
        A water-based propulsion system is scheduled to be integrated into the GEO-X (Geospace X-ray image) mission. The GEO-X mission aims to visualize the Earth’s magnetosphere through X-ray imaging and to demonstrate deep-space access using the spacecraft’s own propulsion system. A structural thermal model (STM) of the GEO-X spacecraft has been designed and developed in preparation for the planned launch in 2027. The STM includes the propulsion system and is used to evaluate the performance of the system. The propulsion system comprises a resistojet thruster for momentum unloading and delta-V maneuvers, as well as a gridded ion thruster for delta-V operations in deep space. Water, employed as the propellant for all thrusters, offers the advantage of liquid-phase storage, thereby facilitating a more compact tank design compared to conventional noble-gas propellants. Although the reduced performance of water-based thrusters relative to noble-gas propellants can result in longer orbital transfer times and greater operations effort, this mission avoids that increase in effort by employing an autonomous operation system with the ion thruster. The autonomous control system adjusts thruster operation by modifying the propellant flow rate and accommodating variations in the surrounding thermal environment, in response to inputs specifying the target delta-V and orbital transfer time. Furthermore, to maintain thermal control, a water circulation system is incorporated. This circulation system serves a dual purpose: it provides cooling for the water propulsion system and also functions as the water propellant source. Simulations indicate that the system achieves a thermal conductance of 1 W/K. The water circulation system is also planned to be incorporated as a temperature control parameter within the autonomous control system for the ion thruster.
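The quoted 1 W/K conductance translates directly into heat moved per kelvin of temperature difference across the circulation loop; a trivial sketch (the 20 K temperature difference is an assumed example, not a mission figure):

```python
# Heat carried by the water circulation loop at a given conductance.
def heat_transfer_w(conductance_w_per_k: float, dT_k: float) -> float:
    """Q = G * dT for a lumped thermal link."""
    return conductance_w_per_k * dT_k

# e.g. a 20 K drop between thruster components and the propellant tank
# (assumed figure) with the simulated 1 W/K conductance:
q = heat_transfer_w(1.0, 20.0)  # 20 W of waste heat carried into the propellant
```

The dual use is apparent here: every watt rejected by the thrusters preheats the water that will later be expelled as propellant.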
      • 08.0804 The Space Gas Station
        John Slough (MSNW LLC) Presentation: John Slough - -
        Liquid oxygen (LOX) based propellant depots in Low Earth Orbit (LEO) have been identified as a potential method for reducing launch costs, with NASA, along with various aerospace contractors, investing in fuel storage and transfer technologies. In this scenario, however, the LOX is still transported from Earth at considerable fuel cost, requiring 22 kg of launch propellant for every kilogram placed in orbit. The proposed Space Gas Station (SGS) would address this inefficiency by enabling in-situ acquisition of oxygen. Initially, the SGS would capture and store neutral oxygen atoms found in LEO at an altitude of roughly 300 km, thereby significantly decreasing fuel launch requirements, as oxygen constitutes nearly 90% of total propellant mass. Ultimately, the SGS is intended to operate at altitudes of 800 - 900 km within the ionosphere, utilizing highly inflatable magnetic focusing coils to capture hydrogen and oxygen ions, which would then be stored as water ice. For drag make-up at 300 km altitude, a basic magnetic cusp dipole configuration would be employed to direct the ambient oxygen ions into a gridded ion accelerator similar to those used in ion propulsion systems. Utilizing the in-situ oxygen ions eliminates complications and losses associated with ionization processes. The focusing magnetic fields are below a millitesla and require less than 1.2 kWe to power. Given that the neutral oxygen flow exerts just 0.12 N of drag force on the five-meter-radius conical collector, the structural demands for the collector are minimal. A thin membrane supported by stays, similar to an umbrella, can serve as the collector and be retracted to a much smaller footprint at launch. Likewise, the magnetic cusp coils and tethers can be readily compacted and stowed for launch. The missions considered for the SGS include a human expedition to Mars, which would provide adequate lead time for technology maturation as well as LOX collection and processing. 
The SGS would supply fuel for the Reaction Control System thrusters essential for aerocapture on the Entry, Descent, and Landing (EDL) vehicle (~5 tons LOX) and for the Mars Ascent Vehicle (MAV), which requires ~30 tons of LOX. Preliminary analysis suggests that a prototype SGS could harvest up to 2 tons of O₂ per year, contingent on system scale and orbital altitude. Deploying multiple SGS units could fulfill the demands of both mission phases in a timely manner. The successful implementation of SGS technology would enable significantly larger payloads on all missions reliant on LOX propellant, substantially reduce costs, and expand capabilities for deep space missions. This would encompass not only crewed Mars missions but also unmanned exploratory and sample-return missions. While the advantages of in-situ resource utilization (ISRU) are clear, ISRU approaches on distant planetary bodies face significant challenges due to hostile environments and the need for autonomous mining, chemical processing, and storage operations. The SGS presents a simpler, more accessible, and more maintainable solution in LEO. It can also be employed for both the outbound and return portions of the mission.
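Two of the abstract's figures can be cross-checked with back-of-envelope arithmetic. The orbital speed and year length below are our assumptions; the 0.12 N drag, the 22 kg-per-kg launch ratio, and the 2 t/yr harvest target are quoted from the abstract:

```python
# Back-of-envelope checks on the SGS numbers (illustrative only).
V_ORBIT = 7730.0          # m/s, circular orbit near 300 km (assumed)
SECONDS_PER_YEAR = 3.156e7

# 1) Momentum-balance bound from the quoted 0.12 N drag: the drag force
#    equals at most the captured mass flow times the flow speed.
mdot = 0.12 / V_ORBIT                 # kg/s captured, upper bound
per_year = mdot * SECONDS_PER_YEAR    # ~0.5 t/yr for this collector case;
                                      # "up to 2 t/yr" implies a larger
                                      # scale or different altitude.

# 2) Launch propellant avoided at the quoted 22 kg per kg-to-orbit,
#    if the full 2 t/yr of O2 were harvested instead of launched:
avoided = 2000.0 * 22.0               # kg of launch propellant per year
```

The momentum bound is an upper limit (not every intercepted atom is stored), so it usefully brackets the harvest rates any given collector size can claim.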
    • Christofer Whiting (NASA - Glenn Research Center) & Vincent Bilardo (Intuitive Machines LLC)
      • 08.0901 Radioisotope Power Systems (RPS) Deployment and Integration into Future Missions
        Young Lee (Jet Propulsion Laboratory) Presentation: Young Lee - -
        Young H. Lee, John Elliott, Benjamin Marti, Michael Fifield (Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Dr., Pasadena, CA 91109; Young.H.Lee@jpl.nasa.gov); Allen T. Guzik, Salvatore M. Oriti, Scott D. Wilson, Curtis A. Flack (NASA Glenn Research Center, 21000 Brookpark Rd., Cleveland, OH 44135); Stephen G. Johnson (Idaho National Laboratory, 1955 Fremont Ave., Idaho Falls, ID 83415); Steven R. Vernon (11100 Johns Hopkins Rd., Laurel, MD 20723); June F. Zakrajsek (The Aerospace Corporation, 2310 E. El Segundo Blvd., El Segundo, CA 90245); Paul Schmitz (Power Computing Solutions, Inc., 468 Newport Ct., Avon Lake, OH 44012)
This paper describes a study investigating options for the deployment and integration of Radioisotope Power Systems (RPS) for future space missions, expanding on prior research and comparing RPS technologies for an example lunar rover mission concept [1]. The intent of the study is to identify the key steps and factors involved in successful integration of existing and upcoming RPS technologies, including MMRTG, Next Gen RTG, and two conceptual Stirling RPS variants, into future mission architectures. A three-pronged parallel assessment informs the study method, which considers the views of the rover and spacecraft team, the RPS development team, and the launch vehicle integration team. From the rover and spacecraft team’s perspective, the study addresses the impacts of RPS technologies on operational conditions, RPS accommodation requirements, and the development of new processes for integration and testing, including risk mitigation strategies. From the RPS development team’s viewpoint, the study investigates the effects of the lunar and induced environments on the RPS, significant risk areas, necessary process updates for RPS integration and testing, processing constraints, and DOE-led nuclear safety and security assessments. 
Lastly, the launch vehicle integration team’s perspective covers accommodation requirements, including fairing and payload processing, and the certification process for a launch vehicle carrying an RPS for future missions. The findings of this study include a comprehensive framework for evaluating the feasibility of implementing new RPS technologies in future missions, and a detailed report on how this would impact requirements and implementation from all perspectives. The significance of this research lies in its potential to streamline the integration of advanced power systems, crucial for extended and ambitious robotic and human missions, by proactively identifying challenges and developing robust RPS deployment and integration strategies. This study provides critical insights for mission planners and system developers, ensuring successful deployment of next-generation RPS technologies. The study results are pre-decisional information for planning and discussion purposes only.
[1] M. Clark, Y. Lee, T. Hudson, J. Elliott, A. Davis, M. Chodas, A. Guzik, J. Zakrajsek, and P. Schmitz, “Comparison of Radioisotope Power Systems to Enable the Endurance Mission Concept,” IEEE Aerospace Conference, 2025.
      • 08.0902 Development of Radioisotope Stirling Generators for High Efficiency Space Power Generation
        Ernestina Wozniak (NASA - Glenn Research Center), Matthew Stang (NASA - Glenn Research Center), Hannah Sargeant (University of Leicester) Presentation: Ernestina Wozniak - -
        Radioisotope power systems have been powering space missions for decades, historically using radioisotope thermoelectric generators to convert heat to usable electricity. Teams at NASA’s Glenn Research Center have been studying and testing free-piston Stirling convertors, which provide order-of-magnitude efficiency gains for space power generation compared to state-of-the-art thermoelectric power conversion. The Stirling convertors have been validated as stand-alone units with extensive environmental and extended-lifetime testing. Recently, several efforts have integrated multiple Stirling convertors in generator configurations representing the interfaces and geometry that would be present in a flight system. Three separate radioisotope Stirling generator testbeds have been designed: the Plutonium-238 Stirling Generator Testbed, an internal NASA effort; the Americium-241 Stirling Generator Testbed, an international collaboration with a team from the University of Leicester; and the Heat Source Agnostic Stirling Generator Testbed, a conceptual study. The Plutonium-238 Stirling Generator Testbed (originally named the Dynamic Radioisotope Power System Testbed) began the initial design process in 2019, with more resources added in 2021 for design, analysis, and drawing creation. First testing occurred in 2023 and resulted in investigation of some anomalies, with follow-on testing occurring in 2024 and 2025. The Americium-241 Stirling Generator Testbed began as a conceptual design in 2022, which showed promise and led to an international collaboration effort to build hardware and test the design in a laboratory environment. The first integration and testing of the Americium-241 Testbed occurred in 2025 at NASA Glenn Research Center, was successful, and validated thermal models of the system. Further efforts are ongoing to implement improvements to the design and test a second version in 2026.
        The Heat Source Agnostic Stirling Generator Testbed effort began as an idea to adapt to the changing environment surrounding fuel production and planned usage. Debates have been ongoing about the best mission options for the available radioactive material, and in this dynamic environment a Stirling generator that can accommodate various heat sources is attractive. Recently, the Heat Source Agnostic Stirling Generator Testbed received funding for a conceptual feasibility study, with the potential to lead to a hardware build, integration, and testing campaign, contingent on analysis results and additional funding opportunities. Design and analysis efforts for the Heat Source Agnostic Stirling Generator Testbed show the feasibility and limitations of a Stirling generator design that can accommodate a variety of heat sources.
      • 08.0904 High Temperature Advanced Closed Brayton Converter Cycle Analysis and Conceptual Design
        Gregory Daines (GE Aerospace) Presentation: Gregory Daines - -
        This paper focuses on the conceptual design of an Advanced Closed Brayton Converter (ACBC) with high operating temperatures using Helium-Xenon (He-Xe) as the working fluid. This conceptual design effort is intended to explore the feasibility of high operating temperatures and to estimate the mass and efficiency of the Brayton converter. Sustained human space exploration will require a stable and scalable supply of electrical power. While nuclear fission power is promising for providing a stable, efficient, and power-dense energy source for deep space missions, and has a long history of use both terrestrially and in space, efficient power converters are needed to convert the reactor’s heat into useful electricity. The current study explores advancing the state of the art of Brayton converters by targeting a high turbine inlet temperature of 1700 K, a high compressor inlet temperature of 400 K, a power output of 25 kWe, a specific mass below 10 kg/kWe, an exergy efficiency surpassing 35%, and a maintenance-free service life of at least 10 years. High operating temperatures permit miniaturization of the heat rejection system while maintaining good conversion efficiency. Based on these system-level requirements, optimal cycle operating conditions were identified, which informed the conceptual design of the turbomachine, heat exchangers, and balance of plant. The proposed ACBC design leverages several novel technologies to achieve its aggressive performance targets, including advanced actively cooled turbine blades, high-temperature materials, and additive manufacturing of superalloys. This work lays the foundation for future advanced power generation systems that would enable exploration of the Moon, Mars, and deep space.
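        As a rough feasibility check of the stated temperature targets, a bare non-recuperated Brayton cycle can be sketched in a few lines. This is an illustrative simplification only, not the paper's ACBC model; the pressure ratio and component efficiencies below are assumed values.

```python
# First-cut Brayton cycle sketch (not the paper's ACBC model): ideal-gas
# cycle with assumed pressure ratio and component efficiencies, and no
# recuperation. gamma ~ 5/3 approximates a monatomic He-Xe mixture.

def brayton_efficiency(t_turbine_in, t_compressor_in, pressure_ratio,
                       gamma=1.667, eta_turb=0.90, eta_comp=0.85):
    k = (gamma - 1.0) / gamma
    tau = pressure_ratio ** k                             # ideal temperature ratio
    w_turb = eta_turb * t_turbine_in * (1.0 - 1.0 / tau)  # turbine work / cp
    w_comp = t_compressor_in * (tau - 1.0) / eta_comp     # compressor work / cp
    t_comp_exit = t_compressor_in + w_comp
    q_in = t_turbine_in - t_comp_exit                     # heat added / cp
    return (w_turb - w_comp) / q_in

carnot = 1.0 - 400.0 / 1700.0    # ~0.76 upper bound at the stated temperatures
eta = brayton_efficiency(1700.0, 400.0, pressure_ratio=2.0)
```

        The 1700 K / 400 K targets give a Carnot bound near 76%, yet a bare cycle like this lands well below the 35% goal, which suggests why the recuperative heat-exchanger and cycle-optimization work described in the abstract matters.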
      • 08.0905 Fission Surface Power Technology Maturation Needs - an Overview
        Vincent Bilardo (Intuitive Machines LLC) Presentation: Vincent Bilardo - -
        Successful development of a nuclear Fission Surface Power (FSP) system, at any power level, will require a period of focused investment in maturing a broad suite of technologies prior to initiating the flight development program. Starting with the nuclear micro-reactor that generates the sustained heat at the heart of the system, each key subsystem features units which today sit at a Technology Readiness Level (TRL) of 3 or 4, far short of the TRL 6 typically required by a system Preliminary Design Review milestone. This gap is often called the technology “valley of death” due to the budget, schedule, and degree of difficulty typically required to traverse it. This paper identifies these hardware units, lists their current TRL, and proposes a summary-level technology maturation approach addressing the key FSP subsystems: Reactor (including fuel forms, moderators, and control devices); Power Conversion (both Brayton and Stirling); Power Management and Distribution (up/down converters, rectifiers, and power transmission cables); Heat Rejection (radiators, fluid pumps); and Instrumentation, Controls and Electronics (including hardening against the unique fission-reactor neutron and gamma radiation environment, flight software, and control architectures). Maturation targets, realistic time frames, and cost estimates are also discussed.
      • 08.0909 Coupled Multi-Unicouple Modeling for RTG Performance Prediction under Material Uncertainty
        Eden Brunner (University of Pittsburgh), Matthew Barry (University of Pittsburgh) Presentation: Eden Brunner - -
        Radioisotope Thermoelectric Generators (RTGs) provide reliable power for sub-surface and deep-space missions through the conversion of heat from plutonium-238 dioxide decay into electrical power through thermoelectric effects. Current uncertainty quantification methods in the RTG community focus on single-unicouple analysis, while RTG systems contain hundreds of interconnected unicouples operating under coupled thermal and electrical conditions. This study extends previous Monte Carlo uncertainty propagation work to converter-level systems, addressing the critical need to understand how material property uncertainties cascade through electrically coupled unicouple arrays. We propose a mathematical framework incorporating differential equations for coupled heat transfer and electrical current flow between unicouples, enabling system-level performance prediction under material uncertainty. Monte Carlo methods will be used to quantify uncertainty propagation across multiple unicouples with varying material properties, with the goal of determining optimal unicouple arrangements for robust performance. The proposed approach aims to provide prediction intervals for system-level performance metrics and identify configurations that minimize sensitivity to material property variations. Within our procedure, Dirichlet thermal boundary conditions for the cold and hot sides of the unicouples will remain invariant. This work seeks to advance RTG design capabilities by enabling engineers and scientists to account for system-level uncertainty effects in thermoelectric converter arrays, with potential applications for improving mission reliability in future deep space exploration.
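        The converter-level Monte Carlo idea can be illustrated with a deliberately simplified sketch: sample one uncertain material property (here the Seebeck coefficient) for each unicouple in a series string, hold the Dirichlet hot- and cold-side temperatures fixed, and collect a prediction interval on delivered power. Every numerical value below is an illustrative assumption, not from the study.

```python
import random
import statistics

# Illustrative Monte Carlo sketch (not the authors' model): propagate a
# Seebeck-coefficient uncertainty through a series string of unicouples
# with fixed (Dirichlet) hot- and cold-side temperatures.

def sample_string_power(n_couples=100, t_hot=810.0, t_cold=480.0,
                        seebeck_mean=4.0e-4, seebeck_sd=2.0e-5,  # V/K (assumed)
                        r_internal=0.01, r_load=1.0):            # ohms (assumed)
    dT = t_hot - t_cold
    # Each unicouple's open-circuit voltage varies with its sampled property.
    v_oc = sum(random.gauss(seebeck_mean, seebeck_sd) * dT
               for _ in range(n_couples))
    r_total = n_couples * r_internal + r_load
    current = v_oc / r_total
    return current * current * r_load  # power delivered to the load, W

random.seed(0)
powers = sorted(sample_string_power() for _ in range(2000))
lo, hi = powers[50], powers[-51]       # ~95% prediction interval
mean_p = statistics.mean(powers)
```

        The system-level version the abstract proposes additionally couples unicouples thermally and electrically, so uncertainties would no longer sum independently as they do in this sketch.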
      • 08.0910 Establishing a Self-Sustaining Lunar Data Economy through Commercial Data Markets
        Jacob Matthews (University of Dayton Research Institute) Presentation: Jacob Matthews - -
        NASA's Artemis program requires comprehensive lunar surface characterization generating 100 GB to multiple terabytes of data per candidate site, far exceeding current collection capabilities. While the Commercial Lunar Payload Services (CLPS) program provides initial data acquisition infrastructure, sustainable lunar development demands a self-reinforcing data economy that extends beyond government funding cycles. Current orbital assets provide insufficient resolution for mission-critical operations. The Lunar Reconnaissance Orbiter (LRO) and its Lunar Orbiter Laser Altimeter (LOLA) achieve 10–60 m horizontal resolution while the Lunar Reconnaissance Orbiter Camera (LROC) stereo imaging enables 2–4 m/pixel terrain models, yet safe landing operations require 0.5–2 m/pixel hazard identification with cm-scale site preparation details. Surface operations demand comprehensive geotechnical characterization (bearing strength, thermal conductivity, dust properties) and subsurface structure mapping to tens of meters depth. Resource assessment requires high-resolution synthetic aperture radar and gamma-ray spectrometry across polar regions with extensive in-situ validation. At current data return costs exceeding $1M per gigabyte, comprehensive site characterization remains economically prohibitive for most stakeholders. This cost barrier limits data collection to narrow, mission-specific campaigns rather than the systematic surveys needed for sustained lunar operations. This paper proposes a market-driven approach to lunar data collection anchored initially by CLPS missions but evolving into a self-sustaining commercial data marketplace. Government anchor customers establish initial demand through data purchase agreements, creating economic incentives for persistent surface data collection. 
As commercial lunar activities expand to mining prospecting, infrastructure development, and scientific research, diverse stakeholders generate sustained demand for standardized, accessible datasets. This data marketplace model transforms episodic government missions into continuous commercial operations. Persistent data collection platforms, enabled by advanced power systems that survive lunar night for years, reduce unit costs through extended operational duration and multi-customer data sharing. Revenue from early customers funds expanded data collection capabilities, creating a positive feedback loop where increased data availability drives broader market participation. The proposed architecture establishes three key market mechanisms: standardized data products enabling price discovery and quality comparison; subscription-based access models providing predictable revenue streams for data collectors; and open marketplace platforms connecting data producers with diverse customer segments including government agencies, commercial ventures, and academic institutions. Success metrics include data cost reduction from >$1M/GB to <$10K/GB through commercial radioisotope power systems, establishment of standardized lunar data products supporting multiple use cases, and creation of sustainable revenue streams independent of government appropriations. This market-driven approach accelerates comprehensive lunar characterization while building economic foundations for permanent lunar presence. The transition from government-funded data collection to commercial data markets represents a fundamental shift enabling scalable, sustainable lunar operations. By establishing economic incentives for persistent data collection, this approach supports both immediate Artemis objectives and long-term space economy development.
      • 08.0911 Status of Current L3Harris RTG Programs
        Leo Gard (Aerojet Rocketdyne) Presentation: Leo Gard - -
        L3Harris Technologies (formerly Aerojet Rocketdyne) is currently working on two major flight Radioisotope Power System (RPS) contracts. This paper discusses the current status of those two RPS programs: the Multi-Mission Radioisotope Thermoelectric Generator (MMRTG) and the Next Generation RTG (NGRTG). The MMRTG has powered two missions to Mars and will be utilized on the upcoming mission to Saturn’s moon Titan (the Dragonfly mission). The Next Gen RTG is based on the previously flown GPHS-RTG. The MMRTG is designed to support missions in atmospheric environments, while the Next Gen RTG is designed for deep-space vacuum missions; because of these differences in environmental requirements, the Next Gen RTG has a higher power-to-weight ratio than the MMRTG. The MMRTG program is in the process of completing the generator to be used on the Dragonfly mission. The Next Gen RTG program is currently testing the power conversion unicouples and demonstrating the manufacturing and assembly operations.
  • Tom Mc Ateer (NAVAIR) & Christopher Elliott (CMElliott Applied Science LLC)
    • Will Goins (William Goins P.E. ) & Richard Hoobler (University of Texas at Austin) & Christopher Elliott (CMElliott Applied Science LLC)
      • 09.0101 In-Flight Power Generation for UAVs Using a Deployable Savonius Turbine
        Aaditya Mathur () Presentation: Aaditya Mathur - -
        This study explores the feasibility of generating in-flight power using a deployable vertical-axis wind turbine (VAWT) mounted on a lightweight unmanned aerial vehicle (UAV). The goal is to determine whether the power generated can offset the added drag and increase overall flight endurance. Airborne Wind Energy (AWE) systems offer a promising alternative to ground-based turbines by accessing higher-altitude winds and reducing material and infrastructure demands. Three VAWT configurations (Savonius, Helical Darrieus, and H-Type Darrieus) were prototyped and tested under controlled wind conditions. The Savonius turbine, demonstrating superior voltage output, was selected for further study. To evaluate the trade-off between power generation and aerodynamic performance, the turbine was integrated into a deployable system and tested both experimentally and through Computational Fluid Dynamics (CFD) simulations. Results showed that the turbine generated an average of 21.59 watt-hours at 3.9 rad/s (30 m/s), producing approximately 1.47 N of thrust. However, this came at a cost: the average lift-to-drag ratio dropped from 21.17 to 9.56, reducing glide distance from 2582.74 meters to 1165.56 meters and glide time from 86.05 to 38.86 seconds. Despite these aerodynamic penalties, both simulations and testing confirmed the feasibility of harvesting meaningful electrical power during non-powered or low-powered segments of flight. The energy generated could support onboard electronics, communication modules, or sensor payloads. Although glide efficiency was reduced by more than 50%, selective deployment during descent or loitering can mitigate this trade-off. This regenerative approach offers a novel method to extend UAV endurance and enable persistent sensing or energy storage for ground-based systems.
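        The reported glide figures are internally consistent with a steady glide from a fixed release altitude, since ground distance in a steady glide is approximately altitude times L/D. Assuming a release altitude of roughly 122 m (inferred here; the abstract does not state it):

```python
# Back-of-envelope consistency check of the reported glide numbers.
# The ~122 m release altitude is inferred, not stated in the abstract.

ALTITUDE_M = 122.0  # assumed release altitude

def glide_distance(l_over_d, altitude_m):
    # Steady glide: ground distance ~ altitude * (L/D).
    return altitude_m * l_over_d

clean = glide_distance(21.17, ALTITUDE_M)     # turbine stowed
deployed = glide_distance(9.56, ALTITUDE_M)   # turbine deployed

# Dividing each reported distance by its reported time recovers the
# glide speed, which matches the 30 m/s test condition in both cases.
speed_clean = 2582.74 / 86.05
speed_deployed = 1165.56 / 38.86
```

        The same assumed altitude reproduces both reported distances to within about a meter, which supports reading the two glide cases as the same release condition with and without the turbine deployed.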
      • 09.0102 MH-60R Helicopter Desktop Two-Seat Crew Simulator and Trainer
        Robert Richards (Stottler Henke Associates, Inc. (SHAI)) Presentation: Robert Richards - -
        The US Navy, in conjunction with Stottler Henke, has developed and deployed a 2-seat crew simulator and training tool used for training on the US and international versions of the MH-60R helicopter. Since the MH-60R helicopter can be configured differently for different nations, the crew simulator is also adapted to the specific versions of the helicopter. Nations flying or soon to be flying the MH-60R include Australia, Denmark, Greece, Norway, Spain, India, and South Korea. The crew simulator, called the Operator Machine Interface Assistant (OMIA), is primarily an expandable, easily modifiable, low-cost PC-hosted desktop 2-seat crew simulator in use by US and non-US Navy training squadrons and by fleet squadrons at port and at sea. The OMIA crew simulator allows the front-seat airborne tactics officer (ATO) to work in coordination with the Sensor Operator (SO) to prosecute a mission. The user interface emulates how the helicopter interface changes based on the inputs of both operators. OMIA provides training in most aspects of flight operations except flying, including but not limited to navigation operations, radio operations, emergency operations, RADAR, ISAR, ESM, FLIR, and both active and passive acoustics. The two crew members can work together and perform these operations in a coordinated fashion as they would during actual flights. The tool is used not only for training but also for tactics development and documentation by Navy test squadrons, which develop tactics details using the simulator when a full flight simulator or helicopter is not needed. In some cases, because simulator upgrades lag the aircraft, certain functionality may only be available in helicopters, the Manned Flight Simulator (MFS) facility at PAX River, and OMIA. As of the time of this paper, the modeling and simulation of the digital magnetic anomaly detection (DMAD) functionality is only available in a few helicopters, at MFS, and in OMIA.
        This paper demonstrates the significant training benefit OMIA provides for both single-seat and 2-seat training for the navies that fly the MH-60R, as well as the potential for low-cost portable simulators in general to provide high-fidelity training both on land and at sea and to support future tactics development.
      • 09.0103 An Open Synthetic Operational Aircraft Performance Model (SOAP)
        Lance Bays (Vortex Control Technologies) Presentation: Lance Bays - -
        Modern airliners rely on Aircraft Performance Monitoring (APM) systems to detect fuel inefficiencies from engine deterioration and increased airframe drag. Developers of these systems face significant obstacles: restricted access to operational data due to proprietary concerns, insufficient data volume and variety for model training and testing, and a lack of ground truth to verify thrust and drag predictions. To address these challenges, this paper introduces SOAP, a synthetic operational aircraft performance model that generates simulated datasets to support APM development. The model aligns with a growing trend of using synthetic data as a surrogate for testing avionic systems where real data are restricted, lack “interesting” properties, or fall short in volume. SOAP combines industry-standard physics-based aerodynamic and engine models with stochastic parameterization to represent aircraft condition variability and operational diversity. Drag and engine deviations are explicitly included in the physical formulation for individual aircraft, enabling evaluation of an APM method’s ability to identify root causes of performance variation. The model focuses on data from the cruise phase of flight, where most fuel is consumed and the near equilibrium in forces makes performance variations most identifiable. Diversity in operational flight conditions is achieved through stochastic sampling. The model assigns realistic distributions for altitude, weight, speed, ambient conditions, and geographic parameters based on typical airline operations. Each parameter follows statistical distributions that capture real-world variability while maintaining physical constraints, such as the relationship between aircraft weight and achievable cruise altitude. Aircraft-specific variations differentiate individual tail numbers within the fleet.
The model assigns unique characteristics to each aircraft, including airframe drag deviations, empty weight variations, instrumentation biases, and per-engine fuel consumption offsets. These tail-specific parameters create realistic performance variation while providing ground truth for root causes, a feature absent in real operational data. An example implementation demonstrates the model for a fleet of 35 narrowbody aircraft in the 737-800/A320 class. The simulation generates cruise flight data using parameters tailored to the aircraft type and informed by realistic operational patterns observed from ADS-B tracking. Analysis confirms that operational parameters show consistent distributions across the fleet, while performance metrics display distinct, separable variations by tail number, essential for evaluating APM algorithms. The dataset and open-source model are publicly available, allowing researchers with proprietary airline data to refine the probabilistic inputs for specific fleets or operations. The synthetic datasets enable rigorous testing of APM methods under controlled conditions with known performance baselines. Researchers can adjust fleet size, operational complexity, and degradation levels to evaluate algorithm robustness across diverse scenarios. The model is computationally inexpensive and enables rapid generation of large, diverse datasets needed for developing advanced APM systems, including machine learning and Bayesian methods. By providing ground truth and unlimited data volume, SOAP facilitates research that could detect subtle performance degradations that cost airlines billions annually.
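        The tail-specific parameterization described above can be sketched in a few lines; every distribution and constant below is an illustrative placeholder, not SOAP's published values.

```python
import random

# Minimal sketch of the tail-specific parameterization described above.
# All distributions and constants are illustrative placeholders, not SOAP's.

def make_fleet(n_tails=35, seed=1):
    rng = random.Random(seed)
    return {
        f"TAIL{i:03d}": {
            "drag_deviation_pct": rng.gauss(0.0, 1.5),    # airframe drag offset
            "empty_weight_dev_pct": rng.gauss(0.0, 0.5),  # empty-weight variation
            "fuel_flow_bias_pct": rng.gauss(0.0, 1.0),    # per-engine offset
        }
        for i in range(n_tails)
    }

def sample_cruise_point(rng, tail):
    weight = rng.uniform(55_000.0, 75_000.0)  # kg
    # Crude physical constraint: heavier aircraft cruise lower.
    max_alt = 41_000.0 - (weight - 55_000.0) / 1_000.0 * 400.0  # ft
    return {
        "weight_kg": weight,
        "altitude_ft": rng.uniform(33_000.0, max_alt),
        "mach": rng.uniform(0.76, 0.80),
        "fuel_flow_factor": 1.0 + tail["fuel_flow_bias_pct"] / 100.0,
    }

fleet = make_fleet()
rng = random.Random(2)
points = [sample_cruise_point(rng, fleet["TAIL000"]) for _ in range(100)]
```

        The key design point the abstract makes survives even in this toy version: the sampled per-tail offsets are recorded alongside the generated data, giving APM algorithms a ground truth that real airline data cannot provide.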
      • 09.0105 Three-Dimensional Aerodynamic Analysis of a Wing–Rotor Configuration for a Folding Wing UAV
        Seung hyun Choi (Korea Aerospace University), LEE YooSang (Korea Aerospace University), Byeong geun Cho (Korea Aerospace University), Hui chan Choi (Korea Aerospace University), Kim Young run (Korea Aerospace University), Dongwon Jung (Korea Aerospace University) Presentation: Seung hyun Choi - -
        Urban Air Mobility (UAM) platforms have recently gained attention as promising alternatives for next-generation aerial transport. To support compact and efficient vertical operations, multiple airframe types, including Lift & Cruise, Tilt-Wing, and Tilt-Rotor, are currently under active development. Among these, the variable-folding wing Vertical Take-Off and Landing (VTOL) design features a unique mechanical system. Each wing can independently pivot around its own axis in three-dimensional space, enabling geometric transformation during flight. These geometric variations cause shifts in the center of gravity (CG) and changes in the moment of inertia (MOI), leading to nonlinear characteristics in aerodynamic forces and moments. Consequently, they increase the complexity of the modeling process required for accurate trim analysis. To capture such nonlinear aerodynamic behavior, wind tunnel experiments are typically conducted; however, comprehensive testing across a wide range of flight configurations is required, demanding significant time. This becomes a critical limitation during the early design phase, where rapid evaluation is essential. To address the limitations associated with physical testing, this paper proposes a dynamics modeling method for trim analysis of a variable-folding wing VTOL. The method encompasses multicopter, transition, and fixed-wing flight modes without relying on wind tunnel experiments. Propeller thrust and torque are computed using a three-dimensional Blade Element Momentum Theory (BEMT), which considers both axial and normal inflow velocity components relative to the blade elements. The model also accounts for changes in blade orientation induced by wing pivoting. To capture non-uniform inflow caused by rotor wake, a three-dimensional strip theory is applied along the wing span.
Localized inflow conditions at each wing segment are governed by the wing pivot angle, angular velocities of the wing and body, and propeller thrust. These variations enable accurate estimation of angle of attack and aerodynamic forces. Additionally, angular velocity components generated by wing articulation are included to enhance the fidelity of the dynamics model. Based on this method, a six-degree-of-freedom (6-DOF) flight dynamics model is developed to simulate the complete flight envelope of the variable-folding wing VTOL. The dynamics model proposed in this paper effectively captures variations in CG, aerodynamic coefficients, and thrust characteristics resulting from the geometric reconfiguration of the variable-folding wing VTOL. To validate the theoretical method, a prototype vehicle was developed and flight-tested. Observed altitude stability across different wing configurations verified the model’s thrust and aerodynamic force predictions. By integrating BEMT with three-dimensional strip theory, the model accurately captures unsteady aerodynamic effects that occur during the transition phase. Trim conditions for multicopter, transition, and fixed-wing modes were analytically derived using MATLAB/Simulink. Simulation results confirmed the effectiveness of the proposed model for early-stage dynamic analysis of the variable-folding wing VTOL.
      • 09.0106 UrbanNav: A Comprehensive Simulation Framework for Urban Air Mobility Research
        Aadit Khan (Iowa State University ), Adam Haroon (Iowa State University), Yasser Shoukry (), Cody Fleming (Iowa State University) Presentation: Aadit Khan - -
        Urban Air Mobility (UAM) promises to revolutionize urban transportation through autonomous aerial vehicles operating in complex urban environments. However, safe UAM integration requires sophisticated collision avoidance and regulatory compliance capabilities that must be validated in realistic environments before deployment. Current UAM simulation platforms suffer from critical limitations: they employ oversimplified airspace models without realistic restricted zones, lack integration with actual geographic data, provide insufficient support for diverse control algorithms, and fail to model complex interactions between autonomous vehicles, ground infrastructure, and air traffic management systems. This paper presents UrbanNav, an open-source simulation framework that addresses these limitations through comprehensive real-world modeling and an extensible modular architecture. The framework incorporates OpenStreetMap data to model any urban landscape and its infrastructure, including hospitals, airports, and other no-fly zones. Our modular framework supports the design of multiple UAV types with interchangeable dynamics models, controller implementations, and sensor configurations. UrbanNav provides comprehensive collision avoidance through algorithms such as Optimal Reciprocal Collision Avoidance (ORCA), in addition to robust virtual sensor design, high-fidelity six-degree-of-freedom UAV dynamics modeling, and robust air traffic management capabilities. Additionally, UrbanNav includes a thin Gymnasium environment wrapper enabling integration with modern reinforcement learning algorithms. We demonstrate the framework's capabilities through multiple UAVs operating simultaneously in realistic urban environments. Experimental validation shows robust simulation performance while maintaining accurate physics modeling, comprehensive logging, and advanced visualization capabilities.
We further validate the framework's reinforcement learning integration by successfully training autonomous agents for goal-directed navigation. UrbanNav provides the research community with an innovative tool that bridges theoretical UAM research and practical implementation challenges, offering an extensible platform for accelerating development of safe, efficient, and scalable urban air transportation solutions.
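        The "thin Gymnasium env wrapper" pattern mentioned above amounts to translating simulator state into the standard reset/step API. A toy, dependency-free sketch of that shape follows; ToySimulator and NavEnv are illustrative stand-ins, not UrbanNav classes.

```python
# Hypothetical sketch of a thin Gymnasium-style env wrapper: the wrapper
# only translates simulator state into the reset()/step() API.
# ToySimulator and NavEnv are illustrative stand-ins, not UrbanNav classes.

class ToySimulator:
    """Stand-in for the underlying UAV simulator."""
    def __init__(self):
        self.pos = [0.0, 0.0]
        self.goal = [5.0, 5.0]

    def apply(self, vx, vy, dt=0.1):
        self.pos[0] += vx * dt
        self.pos[1] += vy * dt

class NavEnv:
    """Gymnasium-style interface: reset() -> (obs, info), step() -> 5-tuple."""
    def __init__(self):
        self.sim = ToySimulator()
        self.steps = 0

    def _obs(self):
        return [self.sim.goal[0] - self.sim.pos[0],
                self.sim.goal[1] - self.sim.pos[1]]

    def reset(self, seed=None):
        self.sim = ToySimulator()
        self.steps = 0
        return self._obs(), {}

    def step(self, action):
        self.sim.apply(*action)
        self.steps += 1
        dx, dy = self._obs()
        dist = (dx * dx + dy * dy) ** 0.5
        terminated = dist < 0.5        # goal reached
        truncated = self.steps >= 200  # episode time limit
        return self._obs(), -dist, terminated, truncated, {}

env = NavEnv()
obs, info = env.reset()
obs, reward, term, trunc, _ = env.step((1.0, 1.0))
```

        Keeping the wrapper this thin is what lets a framework swap in different dynamics models and reward definitions while off-the-shelf reinforcement learning libraries see one uniform interface.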
      • 09.0107 Hybrid BEMT - Vortex Theory Model to Predict Amphibious Propeller Performance
        Md Umar Farooq ( Indian Institute of Technology Mandi ), Sayani Biswas (Indian Institute Of Engineering Science and Technology, Shibpur), Jagadeesh Kadiyam () Presentation: Md Umar Farooq - -
        Accurate modelling of propeller performance is essential for predicting thrust, torque, and efficiency across diverse operating conditions, enabling the design of efficient and reliable rotor systems. This study presents a hybrid modelling approach that integrates Blade Element Momentum Theory (BEMT) with Vortex Theory to improve performance predictions of amphibious propellers operating in dual-media environments, namely air and water. Classical BEMT, while widely used, suffers from inherent limitations such as inaccurate handling of high axial induction factors, stall behaviour, and tip losses, particularly under transitional flow regimes. To address these shortcomings, the study proposes a modified BEMT framework incorporating high-induction factor corrections by Shen and Spera, along with the Viterna–Corrigan stall model to account for post-stall airfoil behaviour. In addition, vortex-based modelling is employed to capture tip vortex dynamics, which classical BEMT fails to represent accurately. The hybrid model combines sectional blade discretization with velocity-based transition criteria to determine the onset of stall and dynamic effects. The methodology is validated through comparison against an extensive set of experimental data from APC propellers operating under various conditions, including multiple propeller diameters, rotational speeds, and fluid velocities. Results indicate that the hybrid model significantly outperforms both pure BEMT and vortex-only models, especially in predicting performance across transitional regimes where air-water interaction dominates. Detailed error analysis shows a substantial reduction in average and maximum prediction errors for thrust and torque coefficients. The study concludes that the hybrid modelling approach provides a more reliable framework for amphibious propeller design, particularly in applications involving marine and aerial robotics. 
Future work is proposed to further refine cross-medium transition models and expand the framework to fully unsteady and three-dimensional flow conditions.
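        Of the corrections the abstract cites, the high-induction-factor fix is the most compact to show. A minimal sketch of Spera's linear continuation of the momentum thrust curve (the Shen and Viterna-Corrigan models are not shown, and the critical induction factor a_c = 0.2 is an assumed, typical value):

```python
# Sketch of Spera's high-induction-factor correction cited above.
# The critical induction factor a_c = 0.2 is an assumed, typical value.

def thrust_coefficient(a, tip_loss=1.0, a_c=0.2):
    """Momentum-theory thrust coefficient with Spera's correction."""
    if a <= a_c:
        # Classical momentum theory, valid at low induction.
        return 4.0 * a * (1.0 - a) * tip_loss
    # Above a_c the classical curve is unphysical (it turns over near
    # a = 0.5), so it is replaced by a linear continuation.
    return 4.0 * (a_c * a_c + (1.0 - 2.0 * a_c) * a) * tip_loss
```

        The two branches meet continuously at a_c, and the linear branch keeps the thrust coefficient monotonically increasing where classical momentum theory would falsely predict falling thrust at high loading.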
    • Nathaniel Hamilton (University of Dayton) & Will Goins (William Goins P.E. ) & Kerianne Hobbs (Air Force Research Laboratory)
      • 09.0206 Collaborative LLM-Based Agents for Autonomous Multi-UAV Mission Execution
        Jeremy Ludwig (Stottler Henke), Daniel Tuohy (Stottler Henke Associates, Inc (SHAI)), Forest Finnigan (Stottler Henke Associates, Inc (SHAI)) Presentation: Jeremy Ludwig - -
        Recent advances in large language models (LLMs) have unlocked new potential for integrating natural language understanding into autonomous systems. This paper presents research conducted under the Collaborative LLM-based Agents (COLA) project, which investigated the feasibility of deploying collaborative, LLM-driven software agents to control and coordinate unmanned aerial vehicles (UAVs) in various mission contexts. Specifically, the study focused on low-latency decision-making, operational reliability, and seamless human-agent interaction. A proof-of-concept (PoC) system was developed that leverages a modular agentic architecture powered by LLMs to coordinate multi-agent unmanned systems. Agents communicate with each other and with human operators using natural language alongside formatted messages, and their behaviors are shaped through structured prompt engineering and predefined mission plans. The system was built around a rapid development workflow for generating LLM-based agents, a validation and verification (V&V) framework to assess their safety and reliability, and an operator-centric user interface (UI) designed to support trust and effective supervision. This architecture enables dynamic, context-aware responses during mission execution without requiring pre-scripted behaviors. The PoC system was evaluated in both simulated and live operational environments using a collaborative multi-drone search task. Each agent, including a centralized mission supervisor and multiple drone agents, was responsible for managing flight operations such as takeoff, waypoint navigation, battery monitoring, and return-to-launch decisions. A key feature of the system is its ability to execute handoffs of control between human pilots and agents using natural language confirmation protocols. Simulations enabled rapid iteration and exploration of collaborative behaviors, including dynamic task reallocation and heterogeneous agent team coordination. 
Live flight exercises validated system performance in real-world conditions, revealing emergent agent behavior and adaptability in the face of unexpected instructions and environmental perturbations. The study identified insights and limitations inherent to integrating LLMs with autonomous systems. Notably, issues such as prompt instability, environmental variability, and communications intermittency (i.e., Denied, Disrupted, Intermittent, and Limited [DDIL] environments) underscore the need for robust safety constraints, fallback behaviors, and hard-coded decision boundaries. The live flight demonstration revealed real-world challenges—such as inconsistent telemetry, UI stress responses, and agent hesitancy in ambiguous states—that are not fully captured in simulation-based testing. The COLA project demonstrated the viability of collaborative autonomy via LLM-based agents, laying a solid foundation for scalable, generalizable mission capabilities. The work highlights the importance of integrated V&V, resilient human-machine interfaces, and iterative field validation in transitioning LLM-enabled autonomy from concept to operational reality. Future work aims to expand the agentic architecture, automate safety validation pipelines, and support larger-scale autonomous operations, advancing the deployment of collaborative AI agents in critical aerospace applications.
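The natural-language handoff confirmation described above can be pictured as a small gate between the flight stack and the agents. The class, state fields, and message strings below are illustrative assumptions, not the COLA implementation; the point is the hard-coded decision boundary the abstract calls for, which no free-form LLM output can bypass.

```python
# Hypothetical sketch of a natural-language handoff confirmation protocol:
# control transfers only after an explicit confirmation phrase, and anything
# else aborts back to the current holder (a hard-coded safety fallback).
class HandoffProtocol:
    def __init__(self):
        self.holder = "human_pilot"   # who currently has control
        self.pending = None           # requested new holder, if any

    def request_handoff(self, to):
        self.pending = to
        return f"Requesting handoff to {to}. Confirm?"

    def confirm(self, message):
        # Confirmation gate: only an explicit "confirm"/"yes" completes it.
        if self.pending and message.strip().lower() in {"confirm", "yes"}:
            self.holder, self.pending = self.pending, None
            return f"Handoff complete: {self.holder} has control."
        self.pending = None
        return f"Handoff aborted: {self.holder} retains control."

p = HandoffProtocol()
p.request_handoff("drone_agent_1")
msg = p.confirm("confirm")
```

In a deployed system an LLM would map free-form operator replies onto this gate; keeping the gate itself rule-based is one way to realize the "hard-coded decision boundaries" the study recommends.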
      • 09.0207 Vision Language Model-informed Relative Navigation for GPS-denied Environments
        Alexander Pasqualina (United States Air Force Academy), Caleb Craddock (United States Air Force Academy), Alison Peckham (), Katelyn Winkleblech (), Sneha Laxminarayan (), Harshang Chhaya (University of Florida), Sandip Sharan Senthil Kumar (University of Florida), Jared Paquet (University of Florida), J. Humberto Ramos (University of Florida), Michael Anderson (United States Air Force Academy) Presentation: Alexander Pasqualina - -
        This paper presents a relative localization framework for autonomous agents operating in GPS-denied environments, in which one agent maintains access to global positioning information while contextual visual information assists the GPS-denied agents with localization, decision-making, and guidance adaptation. The system estimates the global pose of the GPS-denied agents by leveraging relative range and bearing measurements to the GPS-aware agent through a Kalman filter. In addition to the localization pipeline, we integrate a Vision-Language Model (VLM) into the agents to support mission-level guidance and target recognition tasks. One can query the VLM with flexible, high-level prompts such as “Detect dangerous objects, and get closer to them,” “Look for strange or out-of-place items,” or “This is a reconnaissance mission; search for relevant features.” These queries allow the agent to extract mission-specific meaning from its visual environment, enabling it to highlight areas of interest, flag unusual scenes, or identify specific targets. In combination with the GPS-enabled agent, such recognized regions or targets can be globally localized when seen by GPS-denied teammates. This approach is validated in both simulation and real-world environments. In simulation, the approach is tested in cluttered indoor scenes with varying object distributions and visual complexity. In real-world experiments, a laboratory motion capture system (MoCap) acts as a surrogate for GPS, providing trusted external localization. In this testing, a MoCap-aware robot and a MoCap-denied partner coordinate using ultra-wideband (UWB) ranging and onboard cameras. The VLM is queried periodically for semantic feedback during exploration, which helps the agents prioritize regions or objects of interest for follow-up actions while pinpointing these detections in global coordinates.
This work shows that even in GPS-denied settings, agents can benefit from the semantic reasoning capabilities of modern neural network architectures, giving rise to contextually adaptive behavior and decision-making. Future work will explore real-time interaction with operators through natural-language mission updates and scaling this approach to a higher number of interacting agents.
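The range-and-bearing fusion described above can be sketched as a minimal 2D extended Kalman filter update. The two-state position model, anchor location, noise levels, and random-walk prediction are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ekf_range_bearing_update(x, P, z, anchor, R):
    """One EKF measurement update fusing a range and bearing measurement
    taken from a GPS-aware anchor at a known global position."""
    dx, dy = x[0] - anchor[0], x[1] - anchor[1]
    r = np.hypot(dx, dy)
    h = np.array([r, np.arctan2(dy, dx)])        # predicted measurement
    H = np.array([[dx / r,      dy / r],
                  [-dy / r**2,  dx / r**2]])     # measurement Jacobian
    y = z - h                                    # innovation
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

# GPS-denied agent truly at (4, 3); GPS-aware anchor at the origin.
rng = np.random.default_rng(0)
true_pos, anchor = np.array([4.0, 3.0]), np.array([0.0, 0.0])
R = np.diag([0.05**2, 0.02**2])                  # range/bearing noise covariance
x, P = np.array([1.0, 1.0]), np.eye(2) * 25.0    # poor initial guess
for _ in range(50):
    d = true_pos - anchor
    z = np.array([np.hypot(d[0], d[1]) + rng.normal(0, 0.05),
                  np.arctan2(d[1], d[0]) + rng.normal(0, 0.02)])
    P = P + np.eye(2) * 1e-3                     # random-walk process noise
    x, P = ekf_range_bearing_update(x, P, z, anchor, R)
```

With a single anchor, range plus bearing fully determines the planar position, so the estimate converges near (4, 3) after a few updates; a moving platform would add a motion model in the prediction step.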
      • 09.0209 A Framework for Scenario Generation, Training, and Evaluation of Neuro-Symbolic AI for Autonomy
        Hambisa Keno (STR), Nicholas Pioch (Systems & Technology Research) Presentation: Hambisa Keno - -
        Application of Unmanned Aerial Vehicles (UAVs) in search and rescue, emergency management, and law enforcement has gained traction with the advent of low-cost platforms and sensor payloads. The emergence of hybrid neural and symbolic AI approaches for complex reasoning is expected to further push the boundaries of these applications with decreasing levels of human intervention. However, current UAV simulation environments lack semantic context suited to this hybrid approach. To address this gap, HAMERITT (Hybrid Ai Mission Environment for RapId Training and Testing) provides a simulation-based autonomy software framework that supports the training, testing, and assurance of neuro-symbolic algorithms for autonomous maneuver and perception reasoning. Developed under DARPA's Assured Neuro-Symbolic Reasoning and Learning (ANSR) program, HAMERITT includes scenario generation capabilities that offer mission-relevant contextual symbolic information in addition to raw sensor data. Scenarios for area search and moving target pursuit include symbolic descriptions for entities of interest and their relations to scene elements, as well as spatial-temporal constraints in the form of time-bounded areas of interest with prior probabilities and restricted zones within those areas. HAMERITT also features support for training distinct algorithm threads for UAV maneuver vs. perception within an end-to-end mission. The most recent work includes improving scenario realism and scaling symbolic context generation through automated workflows.
      • 09.0210 Cooperative Area Coverage with High Altitude Balloons Using Multi-Agent Reinforcement Learning
        Adam Haroon (Iowa State University), Tristan Schuler (U.S. Naval Research Laboratory) Presentation: Adam Haroon - -
        High Altitude Balloons (HABs) can leverage opposing stratospheric wind layers for energy-efficient maneuvering to support persistent missions including surveillance and reconnaissance, high fidelity in-situ atmospheric sensing, and communication networks. The ability for a single HAB to maneuver is completely dependent on the local wind characteristics, which vary highly depending on geographic region and season. Coordination between multi-agent teams of HABs can lead to improved persistent station keeping and area coverage. In our prior work, we developed RLHAB, an open-source simulation environment for training a single HAB agent to station-keep using deep reinforcement learning. We introduced realistic uncertain flow fields by aggregating radiosonde profiles to generate synthetic forecasts and used European Centre for Medium-Range Weather Forecasts (ECMWF) Complete ERA5 Reanalysis as the observable forecast. This work extends the RLHAB framework to perform cooperative multi-agent area coverage in small to medium-sized regions. Current research on area coverage with teams of HABs is primarily focused on global constellations of HABs using Voronoi Partitioning and Extremum Seeking Control with thousands of balloons. However, these techniques frequently do not perform well for coverage of smaller areas, or with a smaller team of HABs. We have adapted the RLHAB simulation environment to support multi-agent deep reinforcement learning through the PettingZoo parallel_env interface, enabling multiple altitude-controllable agents to operate and communicate simultaneously in shared, stochastic flow fields. We train and evaluate using multiple state-of-the-art multi-agent reinforcement learning (MARL) algorithms, such as QMIX, MADDPG, and MAPPO. 
We also evaluate and discuss the performance of different observation spaces and reward functions that allow for shared knowledge of the environment and encourage persistent spatial coverage, minimal redundancy, and robustness to forecast uncertainty. This cooperative approach enables scalable teams of HABs to autonomously distribute across large regions, enhancing mission resilience and spatial-temporal coverage in dynamic stratospheric environments.
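The multi-agent setup described above follows the PettingZoo parallel interface, in which every agent submits an action and receives an observation and reward each step. The skeletal environment below mirrors that dict-keyed convention without depending on the PettingZoo package; the synthetic layered winds, altitude-index action, and nearest-teammate coverage reward are illustrative assumptions, not the RLHAB code.

```python
import numpy as np

class MultiBalloonEnv:
    """Skeletal stand-in for a PettingZoo-style parallel env: each HAB
    agent picks an altitude layer, the layer's wind advects it, and the
    reward rewards spreading out (coverage) rather than clustering."""

    def __init__(self, n_agents=3, n_layers=4, seed=0):
        self.agents = [f"hab_{i}" for i in range(n_agents)]
        self.rng = np.random.default_rng(seed)
        # One (vx, vy) wind vector per altitude layer (synthetic flow field).
        self.winds = self.rng.uniform(-1, 1, size=(n_layers, 2))

    def reset(self):
        self.pos = {a: self.rng.uniform(-1, 1, 2) for a in self.agents}
        return {a: self.pos[a].copy() for a in self.agents}

    def step(self, actions):
        # actions: dict agent -> altitude-layer index (the only control).
        for a, layer in actions.items():
            self.pos[a] = self.pos[a] + 0.1 * self.winds[layer]
        obs = {a: self.pos[a].copy() for a in self.agents}
        rewards = {}
        for a in self.agents:
            others = [self.pos[b] for b in self.agents if b != a]
            # Reward = distance to nearest teammate (penalizes redundancy).
            rewards[a] = min(np.linalg.norm(self.pos[a] - o) for o in others)
        return obs, rewards

env = MultiBalloonEnv()
obs = env.reset()
obs, rewards = env.step({a: i % 4 for i, a in enumerate(env.agents)})
```

A full PettingZoo `parallel_env` would also return terminations, truncations, and infos; this sketch keeps only the pieces needed to show where a QMIX/MADDPG/MAPPO learner plugs in.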
      • 09.0211 Time-Frequency Wavelet Transformer Forecasting for Hypersonic Glide Vehicle Trajectory Prediction
        Howard Li () Presentation: Howard Li - -
        This study presents a novel Wavelet Transformer Forecasting (WTF) framework for predicting hypersonic glide vehicle (HGV) trajectories with lateral maneuvers. HGVs present unique challenges for trajectory prediction due to their high velocity, complex maneuvering capabilities, and nonlinear flight dynamics. Traditional methods struggle with the multiscale nature of HGV trajectories, where both high-frequency maneuvers and low-frequency glide phases must be modeled simultaneously. Our approach combines wavelet decomposition with transformer-based sequence modeling to capture both temporal and frequency-domain characteristics of complex HGV trajectories. The proposed architecture decomposes trajectory signals into multi-resolution wavelet coefficients, processes them through attention-based neural networks, and reconstructs accurate future trajectories. Experimental results demonstrate significant improvements in prediction accuracy compared to traditional methods, particularly for trajectories with complex maneuvering patterns.
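The wavelet front end of such a framework can be illustrated with a minimal Haar decomposition in NumPy: a slow glide trend separates into the coarse approximation band while a maneuver burst lands in the detail bands, and the transform is lossless. The Haar choice, two-level depth, and synthetic trajectory are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform:
    approximation (low-pass) and detail (high-pass) coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """Inverse single-level Haar transform (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

# Synthetic trajectory: slow glide trend plus a high-frequency maneuver burst.
t = np.linspace(0, 1, 256)
traj = 10.0 * (1 - t) + 0.3 * np.sin(40 * np.pi * t) * (t > 0.5)

# Two-level decomposition: coarse glide lives in a2, maneuvers in d1/d2.
a1, d1 = haar_dwt(traj)
a2, d2 = haar_dwt(a1)

# Each band could feed its own attention stream; here we just verify
# that the multi-resolution representation is lossless.
recon = haar_idwt(haar_idwt(a2, d2), d1)
```

In a WTF-style model, the per-band coefficient sequences would be the tokens consumed by the transformer, and reconstruction maps predicted coefficients back to a trajectory.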
      • 09.0212 Analytical Estimation of Off-Nominal Volumes under UA Intent Uncertainty in UTM BVLOS Operations
        Vincent Kuo (Metis Technology Solutions, Inc.), Seungman Lee (MCT/NASA Ames Research Center), Priyank Pradeep (AMA Inc), Jose Ignacio De Alvear Cardenas (San Jose State University Research Foundation) Presentation: Vincent Kuo - -
        According to the Unmanned Aircraft System (UAS) Traffic Management (UTM) standard established by the American Society for Testing and Materials (ASTM), each Unmanned Aircraft (UA) must remain within its operational intent volumes, which encompass the planned flight path, for at least 95% of the flight time. Sharing the compliance status of UA with these volumes enables service providers to maintain consistent situational awareness and take timely, risk-mitigating actions, such as replanning, when off-nominal situations arise. However, uncertainties inherent in UA operations Beyond Visual Line of Sight (BVLOS) often lead to deviations from planned trajectories. This introduces significant operational risk, particularly when the airspace an off-nominal aircraft may traverse is not well defined, and the off-nominal situation remains unknown to other operators. To enhance operational safety, it is essential to generate off-nominal volumes that encompass the range of trajectories an off-nominal aircraft might reach, so that service providers can notify relevant operators of the off-normality. The composition of off-nominal volumes is not defined in the standard specification for UTM services. As a result, current practices rely either on operator-defined volume sizes or heuristic methods. Under intent uncertainty, these practices may result in off-nominal volumes that are either excessively large, triggering unnecessary avoidance maneuvers and increasing airspace complexity, or too small, failing to prevent potential mid-air collisions. Additionally, a method for defining a spatial safety zone, which is conceptually similar to off-nominal volumes, has been proposed for commercial aircraft using differential game theory and optimal control to ensure safety outside the zone.
However, this approach depends on the precise execution of time-varying optimal maneuvers, which require intensive numerical computation and accurate implementation, posing challenges for real-time application. This study presents an analytical solution for efficiently and accurately generating off-nominal volumes under uncertain operational conditions. Using time-varying nonlinear equations, the methodology estimates off-nominal volumes in the horizontal plane by analyzing pairwise encounter geometry. The line of sight between the off-nominal and nominal aircraft is modeled under the assumption of planar collision encounters, incorporating the dynamic intent of the off-nominal aircraft. A Lyapunov-like qualitative analysis is performed, leading to a rigorous derivation of a closed-form solution that defines the contour of the off-nominal volume. This solution characterizes the maximum region within which an unmanned aircraft could potentially conflict with a nominal aircraft, based on its last known state and performance constraints, without requiring knowledge of the nominal aircraft’s future intent. The proposed approach is computationally efficient, minimally reliant on intent information, and effectively identifies regions of elevated operational risk. As a result, it enhances situational awareness and supports proactive collision risk mitigation, particularly for unmanned aircraft operating beyond visual line of sight in the presence of off-nominal traffic. Finally, the accuracy of the analytical solution was evaluated against off-nominal trajectories generated by three different types of trajectory predictors over a look-ahead time of approximately 300 seconds. Among these, the analytical solution showed the best containment performance for trajectories generated by the probabilistic trajectory predictor, while the time-varying worst-case trajectory predictor resulted in the poorest containment.
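The paper's closed-form contour is not reproduced in the abstract, so it cannot be shown here; as a point of reference, the sketch below computes only the naive conservative outer bound that any tighter analytical contour must improve upon: a disc of radius v_max · t around the last known position. The speed, look-ahead time, and discretization are illustrative assumptions.

```python
import numpy as np

# Naive reachability baseline (NOT the paper's analytical solution): from
# the last known state, an off-nominal aircraft flying at most v_max can
# reach no point farther than v_max * lookahead. The paper's closed-form
# contour, which folds in turn dynamics and intent, is a subset of this disc.
def off_nominal_disc(last_pos, v_max, lookahead, n=64):
    """Conservative horizontal off-nominal volume: a disc of radius
    v_max * lookahead centred on the last known position."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    r = v_max * lookahead
    return last_pos + r * np.column_stack([np.cos(theta), np.sin(theta)])

# 20 m/s for the abstract's ~300 s look-ahead gives a 6 km radius bound.
contour = off_nominal_disc(np.array([0.0, 0.0]), v_max=20.0, lookahead=300.0)
```

The gap between this disc and the analytical contour is exactly the airspace the paper's method returns to other operators.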
      • 09.0213 Hybrid-Constrained Multi-Agent Reinforcement Learning for Collaborative Dual-Aircraft Scenarios
        Hao Liang (), Yang Gao (), Hui Chang (), ZhiJun Zhao (), Shaojia Xi () Presentation: Hao Liang - -
        In recent years, multi-agent reinforcement learning (MARL) has emerged as a promising approach for multi-unmanned combat aerial vehicle (UCAV) autonomous countermeasure systems. However, conventional end-to-end MARL methods often lack expert knowledge guidance, leading to low training efficiency, which poses challenges for simulation-to-reality (sim2real) transition. In this study, we focus on the scenario of cooperative beyond-visual-range (BVR) aerial engagement involving multiple UCAVs. To address these challenges, we propose the hybrid-constrained multi-agent proximal policy optimization (HC-MAPPO) algorithm. First, we design a rule filter mechanism, where expert rules dictate agent behavior in well-understood states to ensure predictable and interpretable maneuvers, while the neural policy is applied otherwise. Second, we formulate the multi-agent aerial combat problem as a Constrained Markov Decision Process (CMDP) and incorporate a cost-critic network into the actor-critic architecture, which enables explicit estimation of long-term constraint costs and decouples penalty from task rewards. Third, we develop a bilevel optimization framework for constrained policy search, which provides theoretical convergence guarantees and demonstrates improved training stability over traditional Lagrangian-based methods. Empirical results demonstrate that HC-MAPPO achieves a superior success rate compared to existing MARL baselines. Ablation studies further confirm the necessity of both constraints: removing either one leads to performance degradation.
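The rule filter mechanism described above reduces to a short dispatch: expert rules are checked first, and the learned policy acts only when no rule fires. The specific rules, state fields, and action names below are hypothetical stand-ins, not the paper's rule set.

```python
# Rule-filter sketch: expert rules dictate behavior in well-understood
# states (predictable, interpretable), and the learned policy acts
# everywhere else. Rules, state fields, and actions are hypothetical.
def rule_filtered_action(state, expert_rules, policy):
    for condition, action in expert_rules:
        if condition(state):
            return action
    return policy(state)

expert_rules = [
    (lambda s: s["fuel"] < 0.1, "return_to_base"),
    (lambda s: s["missile_warning"], "evasive_maneuver"),
]
policy = lambda s: "pursue"   # stand-in for the trained actor network

a_low_fuel = rule_filtered_action(
    {"fuel": 0.05, "missile_warning": False}, expert_rules, policy)
a_nominal = rule_filtered_action(
    {"fuel": 0.8, "missile_warning": False}, expert_rules, policy)
```

Because the rules sit outside the network, they double as hard safety constraints during sim2real transfer: the policy gradient never has to relearn them.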
    • Andrew Lynch (Tactical Air Support Inc.) & Tom Mc Ateer (NAVAIR) & Thomas Fraser (Lockheed Martin Corp)
      • 09.0305 Experimental and Numerical Analysis of a Test Rig for Structural Testing of Aircraft Wing
        Hessam Ghasemnejad (Cranfield University) Presentation: Hessam Ghasemnejad - -
        Testing an aircraft structure involves complexities, notably its size, which is essential for determining cost and feasibility. This paper focuses on the novelty of investigating the use of a scaled-down version of a full-scale aircraft wing and a test rig to predict the test rig's structural response to the wing's loading condition, thereby improving design and modelling techniques for the test rig. The paper also presents a methodology for sizing the test rig and the aircraft wing to perform structural tests on the wing. Numerical and experimental models subjected to various load cases are compared. This study starts by justifying the implemented technique used to perform the finite element analysis (FEA) for relevant parts of the test rig. A detailed explanation of the sizing method and its overall effect on the test rig is also provided. The study's results indicate a good similarity between the numerical and experimental models in terms of the stresses and deformation of the test specimen.
      • 09.0306 Rotor Dynamics and Sprayer Efficiency in Agricultural UAVs
        Harrison Dean (), Srikanth Bashetty (University of Oklahoma) Presentation: Harrison Dean - -
        The use of rotary-wing Unmanned Aerial Vehicles (UAVs) to spray pesticides and other farming products offers significant benefits to agricultural practices. These systems offer advantages through their ability to target specific areas with different application rates, reduce chemical waste, and adapt to different field conditions. They can also be used to carry out supplementary tasks such as field mapping. However, UAV performance in this setting is dependent on design choices that affect stability and maneuverability. This then has an influence on the efficiency of the sprayer system as rotor-induced airflow can disrupt droplet dispersion, making the pesticide less effective. This research uses simulation and experimentation to evaluate how changes in the design of an agricultural UAV can affect and ultimately improve its performance. The project began by conducting a literature review, which guided the simulation and experimentation setup and identified the technical requirements of UAVs integrated with sprayer systems. This research also highlighted the benefits of implementing UAV sprayer systems in agriculture. Based on this analysis, it was determined that an effective UAV would have a hexacopter frame due to the lift capacity and stability it provided. A sprayer system capable of distributing water or pesticides would then be designed to be integrated with the UAV, and computational simulations could be run to model the rotor-induced airflow and predict the distribution. This could then be compared to experimental results and inform future design changes. The design of the UAV was divided into two systems, the UAV system and the attached sprayer system. The two were developed alongside each other to ensure that the sprayer system was structurally compatible with that of the UAV. 
However, to ensure proper integration, the sprayer system components were finalized after those of the UAV system, which is currently being assembled and is nearly flight-ready. The UAV design features a carbon fiber frame because of the higher strength-to-weight ratio it provides, allowing it to support a payload of 1 L and an estimated maximum takeoff weight of 6 kg. The other UAV system components, including the motors, ESCs, flight controller, and battery, were chosen with the goal of maintaining a thrust-to-weight ratio of 2 and a flight time of 10 minutes while ensuring their electrical compatibility. Future work will involve integrating the sprayer system into the UAV frame to enable flight testing and allow for data collection to begin. The experimentation will primarily revolve around measuring the uniformity of the sprayer system under different flight conditions, with changes being made to the setup of the UAV depending on how the results compare to those of the simulations. This will provide insight into how the characteristics of the system can affect UAV performance and inform best practices for their implementation in agriculture, demonstrating their value in agricultural applications.
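The mass and thrust-to-weight figures quoted above imply a quick per-motor sizing check; the arithmetic below adds only standard gravity and the hexacopter motor count already stated in the abstract.

```python
# Quick sizing check for the hexacopter figures quoted above
# (6 kg estimated MTOW, target thrust-to-weight ratio of 2).
g = 9.81                      # m/s^2, standard gravity
mtow = 6.0                    # kg, estimated maximum takeoff weight
twr = 2.0                     # target thrust-to-weight ratio
n_motors = 6                  # hexacopter frame

total_thrust = twr * mtow * g            # N the airframe must produce at full throttle
thrust_per_motor = total_thrust / n_motors
hover_throttle = 1.0 / twr               # fraction of max thrust needed to hover
```

So each motor/propeller combination must deliver roughly 19.6 N (about 2 kgf), and the craft hovers at about half throttle, leaving margin for gusts and sprayer-induced disturbances.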
      • 09.0308 Geometry Adaptive Deep Q-Network for UAV-Based Emitter Localization in Cluttered RF Environments
        Christopher Peters (Southern Methodist University), Michael Watts (Southern Methodist University ), Mitchell Thornton (Southern Methodist University) Presentation: Christopher Peters - -
        This paper describes the design and evaluation of a prototype cooperative multi-agent reinforcement learning (MARL) framework enabling a swarm of unmanned aerial vehicles (UAVs) to autonomously locate RF emitters whose positions are unknown in dense multipath environments. Each UAV serves as an antenna array element enabling the array to dynamically vary its geometry resulting in improved fields-of-view for subsequent measurements yielding high emitter location accuracies. Building upon the Location on a Conic Axis (LOCA) algorithm for sensor time-difference-of-arrival (TDoA) localization, a deep Q-network (DQN) iteratively guides UAV repositioning such that few measurements are required to achieve a localization accuracy in terms of an operator-specified 3D spherical error probable (SEP) goal. Because very few consecutive measurements are required, the SEP goal is quickly achieved. A robust digital twin of the swarm and environment, including multipath, weather effects, and tolerances such as UAV on-board positioning and clock jitter, validates our performance. To our knowledge, this is the first RF emitter location system implemented within an autonomous UAV swarm exploiting dynamic spatial array diversity through customized reinforcement learning. The composite reward function balances current and future array geometry, current emitter location accuracy, and minimal UAV repositioning path length. This strategy provides rapid convergence to a low localization error, typically within ten subsequent measurements, each with a different array configuration. A two-phase control strategy guides the swarm into a LOCA-optimized formation: first, a 2D azimuthal-plane localization stage rapidly reduces the circular error probable (CEP) area, then elevation is refined through 3D localization by leveraging array geometry diversity to improve SEP position and volume. 
This approach typically achieves an order-of-magnitude reduction in localization uncertainty, enhancing mission effectiveness in cluttered RF terrains. The novel architecture integrates adaptively-scaled UAV repositioning by prioritizing spatial formations that reduce SEP using evenly-spaced column lattice formations with corrections if UAV agents exceed a radius threshold. A sequential extended Kalman filter (EKF) filters TDoA measurements at each timestep, refining current localization estimates and integrating prior estimates. The DQN policy scales to varying swarm sizes without hardware or software modifications, leading to even lower localization error and earlier convergence. Using our Cyber Autonomy Range (CAR) facility, our performance is evaluated with realistic digital twins of the external environment and the localization system. Our results include many test scenarios with varying simulation parameters allowing our prototype to be exercised much more broadly than if a physical test range were used. Our digital twins are implemented at the physical layer with physics simulation engines for RF propagation, fading, scattering, kinematic dynamics with gravity field interactions, and weather effects using physical layer accurate protocols including 5G and IEEE 802.11 standards. The system twin includes dominant sensor hardware error sources such as local clock inaccuracies and UAV drift. The environmental twin is implemented using Keysight’s EXata high-fidelity urban environment within the CAR. With these realistic constraints, our functional evaluations confirm the system’s low localization error and SEP under degraded RF conditions, demonstrating its viability for aerospace and defense applications.
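The LOCA algorithm itself is not reproduced in the abstract; as a generic stand-in, the sketch below solves the underlying TDoA measurement model with Gauss-Newton iterations for one fixed array geometry. The square sensor layout, noise-free measurements, and unit propagation speed are illustrative assumptions.

```python
import numpy as np

def tdoa_residuals(p, sensors, tdoa):
    """Residuals between measured and predicted range differences,
    taken relative to sensor 0 (propagation speed folded into tdoa)."""
    ranges = np.linalg.norm(sensors - p, axis=1)
    return (ranges[1:] - ranges[0]) - tdoa

def locate_emitter(sensors, tdoa, p0, iters=25):
    """Gauss-Newton solve of the TDoA equations (a generic stand-in,
    not the LOCA algorithm itself)."""
    p = p0.astype(float)
    for _ in range(iters):
        r = tdoa_residuals(p, sensors, tdoa)
        u = (p - sensors) / np.linalg.norm(p - sensors, axis=1)[:, None]
        J = u[1:] - u[0]                 # Jacobian of range differences
        p = p - np.linalg.lstsq(J, r, rcond=None)[0]
    return p

# Four UAV receivers in one array geometry, observing a hidden emitter.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
emitter = np.array([6.0, 3.0])
ranges = np.linalg.norm(sensors - emitter, axis=1)
tdoa = ranges[1:] - ranges[0]            # noise-free range differences
est = locate_emitter(sensors, tdoa, p0=np.array([5.0, 5.0]))
```

With noise, the residual covariance of this solve depends strongly on the sensor geometry, which is precisely the quantity the DQN-driven repositioning manipulates to shrink the SEP.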
      • 09.0311 Human as an Actuator Dynamic Model Identification
        Matthew Kirchner (Auburn University), Harrison Bonner (Auburn University) Presentation: Matthew Kirchner - -
        When a human interacts with a physical vehicle, e.g., an aircraft or space vehicle, the pilot influences the dynamics of the system. This effectively adds a form of actuator dynamics, as the pilot cannot react instantaneously to directions or other environmental stimuli. A motivating example is that of computing an "H-V" diagram for a helicopter directly from the dynamics of the vehicle. In this case, the H-V diagram alerts the pilot to safe and unsafe flight regimes: an engine failure in an unsafe zone would make autorotation, and hence a safe landing, impossible. Under the assumption that a pilot is tasked with performing an emergency autorotation, the reaction times and responses the pilot is capable of executing directly impact the size of the unsafe region. This makes characterizing the human pilot response as a form of actuator critical in calculating these regions. Presented in this work is a method for estimating the human dynamic model. Unlike previous works in this area, we do not attempt to estimate in the frequency domain and instead solve in the time domain using a form of trajectory optimization. This allows multiple data collections to be used jointly to form a single, common dynamic model and also allows for generalization to nonlinear model identification, if needed. We also make use of simulators to collect the data. While past work has focused on simple point tracking on a screen, mostly resulting in a dynamic model identification of just the joystick movement, we create scenarios that reflect compound flight maneuvers, for example executing a coordinated turn, to mimic actual flight operations. Finally, the proposed method allows for more realistic multi-input, multi-output model identification that more accurately models a human response during actual flight operations.
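A common "human as actuator" parameterization is a first-order lag plus a pure reaction delay; the sketch below generates synthetic response data from assumed true parameters and recovers them by a time-domain fit. The grid search is a crude stand-in for the paper's trajectory-optimization formulation, and all parameter values are illustrative.

```python
import numpy as np

def simulate_pilot(u, dt, K, tau, delay_steps):
    """First-order-lag-plus-delay 'human as actuator' model:
    tau * y' = K * u(t - delay) - y, integrated with forward Euler."""
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        u_del = u[k - delay_steps] if k >= delay_steps else 0.0
        y[k] = y[k - 1] + dt * (K * u_del - y[k - 1]) / tau
    return y

# Synthetic "flight data" from assumed true pilot parameters.
dt, n = 0.01, 1000
t = np.arange(n) * dt
u = (t > 1.0).astype(float)          # step command at t = 1 s
y_meas = simulate_pilot(u, dt, K=1.0, tau=0.35, delay_steps=20)
y_meas = y_meas + np.random.default_rng(3).normal(0, 0.01, n)

# Time-domain fit by grid search over (tau, delay) -- a crude stand-in
# for the trajectory-optimization formulation described above.
best = min(
    ((tau, d, float(np.sum((y_meas - simulate_pilot(u, dt, 1.0, tau, d))**2)))
     for tau in np.linspace(0.1, 0.6, 26) for d in range(0, 40, 5)),
    key=lambda c: c[2],
)
```

Fitting in the time domain as above is what lets several separate runs be stacked into one joint residual, and the simulation model can be swapped for a nonlinear one without changing the fitting machinery.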
      • 09.0312 Gentle Perching of Multirotor Drone Systems on Vertical Tree Trunks with Microspines
        Julia Di () Presentation: Julia Di - -
        Perching allows unmanned aerial vehicles (UAVs) to reduce energy consumption, remain anchored for surface sampling operations, or stably survey their surroundings. Previous efforts for perching on vertical surfaces have predominantly focused on lightweight mechanical design solutions with relatively scant system-level integration. Furthermore, perching strategies for vertical surfaces commonly require high-speed, aggressive landing operations that are dangerous for a surveyor drone with sensitive electronics onboard. This work presents the preliminary investigation of a perching approach suitable for larger drones that gently perches on vertical tree trunks and reacts to and recovers from perch failures. The system in this work consists of a vision-based perch site detector, an IMU (inertial-measurement-unit)-based perch failure detector, an attitude controller for soft perching, an optical close-range detection system, and a fast active elastic gripper with microspines. We validated this approach on a modified 1.2 kg commercial quadrotor with component and system analysis. Initial human-in-the-loop autonomous indoor flight experiments achieved a 75% perch success rate on a real oak tree segment across 20 flights, and 100% perch failure recovery across 2 flights with induced failures.
      • 09.0313 Aircraft System Identification Using Equation Error Formulation in Ordinary Least Squares Framework
        Hassan Akmal (SAMI) Presentation: Hassan Akmal - -
        Despite the traditional availability of airframe configuration parameters obtained through either Computational Fluid Dynamics (CFD) simulations or through wind tunnel testing (WTT), aircraft system identification through aerodynamic parameter estimation plays a crucial role in verifying the theoretical predictions of CFD as well as in interpreting the experimental results of WTT. Moreover, aircraft identification assists greatly in developing more accurate, more comprehensive airframe mathematical models for flight simulation purposes as well as for stability and flight control design purposes. For small fixed-wing UAVs with relatively short and low-cost-oriented design cycles, system identification techniques offer much more affordable alternatives for characterizing the system, especially when compared with conventional CFD or WTT methods. Several schemes to estimate aerodynamic parameters have been proposed in the literature, including the Extended Kalman Filter (EKF), iterated EKF (IEKF), the Maximum Likelihood Method, and Fourier-transform-based frequency domain methods. Each scheme has its own merits and demerits. EKF and IEKF, for instance, are recursive in nature, produce time-histories of parameter estimates, and involve requisite linearization which can cause convergence problems. Similarly, Maximum Likelihood Methods are fundamentally based on the Fisher model structure, which considers measurement noise to be a random vector with a certain probability density. This requires initial estimates of the error covariance matrix for successful and timely convergence of the estimator. This work examines the ordinary least squares (OLS) estimation method in an equation error formulation in the time domain to accurately estimate an aircraft's aerodynamic coefficients.
First, a true aircraft model for a fixed-wing UAV is postulated in the linear framework for the lateral-directional and longitudinal dynamics, and the dimensional stability and control derivatives of the aircraft are estimated. Next, a non-linear, six-degrees-of-freedom (6DOF) true aircraft model is developed for decoupled motion dynamics, and all dimensionless stability and control derivatives are estimated, both for the lateral-directional and longitudinal systems. Noise sequences similar to those usually observed in actual flight are made part of the simulated aircraft motion. Estimation results are validated against the true model values through statistical analytical measures including the estimated parameter covariance matrix, standard errors of estimated parameters, t-statistic hypothesis testing, and the coefficient of determination (R2). Local time-domain smoothing techniques are applied to the noisy flight data for smoothed numerical differentiation of the angular accelerations experienced by the aircraft during design maneuvers. Finally, the open-loop Dutch roll and short period modes of the aircraft are analyzed after obtaining accurate estimates of the aircraft's stability and control derivatives. The estimation results demonstrate the superiority of the proposed linear regression-based statistical scheme compared to the aforementioned methodologies, since it essentially offers a non-recursive, simpler, single-shot linear algebra-based solution. It is also shown that the scheme is robust, adaptable in nature, and flexible enough to be applied to different aircraft model structures. Its cost-effectiveness, computational ease, and implementation straightforwardness make this technique an ideal candidate for comprehensive and full-blown identification of any aircraft system.
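The equation-error OLS idea can be shown on one channel: regress the measured pitch acceleration on its explanatory states and recover the derivatives in a single shot. The true derivative values, excitation signals, and noise level below are illustrative assumptions, not the paper's aircraft model.

```python
import numpy as np

# Equation-error OLS sketch for the pitch-moment channel: recover assumed
# derivatives [M_alpha, M_q, M_de] from simulated, noisy flight data.
rng = np.random.default_rng(1)
N = 2000
alpha = 0.1 * np.sin(np.linspace(0, 20, N))         # angle of attack [rad]
q = 0.2 * np.cos(np.linspace(0, 20, N))             # pitch rate [rad/s]
de = 0.05 * np.sign(np.sin(np.linspace(0, 5, N)))   # elevator doublets [rad]

theta_true = np.array([-4.0, -1.5, -6.0])           # assumed true derivatives
X = np.column_stack([alpha, q, de])                 # regressor matrix
qdot = X @ theta_true + rng.normal(0, 0.02, N)      # "measured" pitch accel

# Single-shot, non-recursive solution plus coefficient of determination.
theta_hat, *_ = np.linalg.lstsq(X, qdot, rcond=None)
r2 = 1 - np.sum((qdot - X @ theta_hat)**2) / np.sum((qdot - qdot.mean())**2)
```

No linearization or iteration is involved, which is the contrast with EKF/IEKF drawn above; standard errors and t-statistics follow directly from the residual variance and (XᵀX)⁻¹.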
      • 09.0314 Ad-Hoc UCV Resource Provisioning for UA Missions: A KPI Assessment Tool
        Flavio Souza (), Andrei Gurtov (Linköping University), Zelong Wang (Linköping University), Lourenco Pereira (Aeronautics Institute of Technology), Carlos Ribeiro (Instituto Tecnológico de Aeronáutica) Presentation: Flavio Souza - -
        Unmanned Aerial Systems (UAS) operating Beyond Visual Line of Sight (BVLOS) require assured Command and Control (C2) and mission data connectivity under dynamic and sometimes contested conditions. In line with the purpose of Open RAN (O-RAN), we investigate an architecture in which an Unmanned Aircraft (UA) Control Vehicle (UCV) is deployed as an ad-hoc entity to provision local access, edge computing, and mission-centric situational awareness. The UCV unit hosts control functions (such as UAS Traffic Management - UTM) and service data functions (such as UAS Service Supplier - USS), in addition to allocating resources per user/flow via a Software-Defined Network (SDN). Given the mission dynamics and changing application context, events frequently force resource reallocation. SDN is a promising approach to maintaining end-to-end Quality of Service (QoS) in the face of mobility and topology changes, as it provides a network-wide view that enables rapid route steering, load balancing, and retuning of priorities for User Equipment (UE) links. In parallel, O-RAN guidance emphasizes per-UE and Quality of Experience (QoE)-driven adaptation, allowing application feedback to drive policy changes when measured performance deviates from expectations. However, the key question remains: can an ad-hoc, edge-deployed SDN UCV consistently meet aviation-aligned Key Performance Indicator (KPI) targets—such as C2 latency/jitter, continuity, and update rates—amid expected and unexpected events? Our study is designed to test exactly this, using the UCV scenario where control traffic is low-bandwidth/low-latency and payload video is uplink-heavy and variable. To investigate this question, we present a lightweight simulation/emulation toolchain that lets us run real-time experiments without a full RAN. PX4 SITL and QGroundControl generate realistic C2/telemetry and payload flows, while Mininet emulates the SDN data plane and edge control.
Thin adapters bridge UDP between the flight stack and the emulated network, maintaining realistic timing and traffic behavior. This design follows the emulation rationale seen in Mininet-RAN, which utilizes real software stacks and traffic for closed-loop trials and KPI collection at a low cost with high flexibility, making it suitable for the rapid and reproducible evaluation of control policies at the edge. Our current implementation abstracts the radio, keeping the architecture intentionally extensible and straightforward (e.g., adding new KPIs, policies, or external controllers later), so that future work can plug in more detailed RAN elements, RAN Intelligent Controller (RIC)-style control/telemetry, or SDR components as needed. Looking ahead, the tool enables progressively richer investigations: from SDN-based control toward O-RAN-style policies, and complementary stacks such as Advanced Airspace Availability Protocol (A3P) and IETF DRIP to strengthen situational awareness, trust, and dual broadcast/network dissemination under degraded links. It also supports systematic KPI evaluation grounded in a defense-oriented taxonomy of UA communication requirements—availability/continuity, latency/jitter, bandwidth, security, and range—so results can be read directly against mission and regulatory targets. Finally, by keeping the architecture extensible, we can embed capability-aware policies and capability-based planning (where mission goals and risk posture drive per-UE QoS/QoE) as future work, thereby closing the loop from mission context to resource allocation to KPI outcomes for increasingly complex scenarios.
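The KPI gating described in this abstract can be sketched as a small check over a window of per-packet C2 latency samples. The target values and the jitter definition below are illustrative placeholders, not figures from the paper or from any aviation standard:

```python
import statistics

# Hypothetical aviation-aligned KPI targets for the C2 link
# (illustrative values only, not drawn from the paper or a standard).
C2_TARGETS = {"p95_latency_ms": 140.0, "jitter_ms": 30.0}

def evaluate_c2_kpis(latencies_ms, targets=C2_TARGETS):
    """Check a window of per-packet C2 latencies against KPI targets.

    Jitter is taken here as the population standard deviation of latency
    over the window; the 95th percentile comes from the sorted samples.
    """
    ordered = sorted(latencies_ms)
    idx = min(len(ordered) - 1, int(round(0.95 * (len(ordered) - 1))))
    p95 = ordered[idx]
    jitter = statistics.pstdev(latencies_ms)
    return {
        "p95_latency_ms": p95,
        "jitter_ms": jitter,
        "meets_targets": (p95 <= targets["p95_latency_ms"]
                          and jitter <= targets["jitter_ms"]),
    }
```

A monitor like this, run per UE/flow at the edge, is the kind of trigger that would drive the SDN controller's route steering and priority retuning when measured performance deviates from targets.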
      • 09.0315 CubeSat Drone: The Development of a CubeSat Experimental Testbed
        Miguel Nunes (University of Hawaii), Pedro Casau (University of Aveiro) Presentation: Miguel Nunes - -
        This paper presents a hybrid control architecture for global trajectory tracking of an aerial vehicle with the form factor of a 3U CubeSat. The vehicle is equipped with ten bidirectional propellers mounted on facets of its frame. We derive a dynamical model that accounts for the propeller layout and design a proportional-derivative (PD) feedback law combined with a hybrid supervisor to globally asymptotically stabilize the vehicle's pose along arbitrary (bounded) reference trajectories. To solve the control allocation problem, we implement a minimum control effort algorithm. Simulation results demonstrate the convergence of position and orientation errors, producing physically plausible rotor speed commands.
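The two pieces named above, a PD feedback law and a minimum-control-effort allocation, can be sketched in miniature. This reduces the problem to two axes and four propellers (the paper's vehicle has ten, and its PD law acts on the full pose); the gains and effectiveness matrix are invented for illustration. The least-norm allocation u = Bᵀ(BBᵀ)⁻¹w is the standard minimum-effort solution when B has full row rank:

```python
def pd_law(err, err_rate, kp, kd):
    # One-axis PD feedback; hypothetical gains, not the paper's.
    return -kp * err - kd * err_rate

def min_effort_allocation(B, wrench):
    """Least-norm allocation u = B^T (B B^T)^-1 w for a 2-axis sketch
    of the propeller layout (the 2x2 inverse is written out by hand).
    B is 2 x n: rows are axes, columns are propellers."""
    n = len(B[0])
    # G = B B^T (2x2 Gram matrix)
    G = [[sum(B[i][k] * B[j][k] for k in range(n)) for j in range(2)]
         for i in range(2)]
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    Ginv = [[G[1][1] / det, -G[0][1] / det],
            [-G[1][0] / det, G[0][0] / det]]
    y = [sum(Ginv[i][j] * wrench[j] for j in range(2)) for i in range(2)]
    return [sum(B[i][k] * y[i] for i in range(2)) for k in range(n)]
```

For B = [[1,0,1,0],[0,1,0,1]] and wrench [2, 4], the allocation returns [1, 2, 1, 2]: the effort is spread evenly over the redundant propellers, which is exactly the minimum-norm property.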
    • Richard Hoobler (University of Texas at Austin) & Nikolaus Ammann (DLR (German Aerospace Center)) & Christopher Elliott (CMElliott Applied Science LLC)
      • 09.0403 The First Open-Source Framework for Learning Stability Certificates from Data
        Zhe Shen (Stable Robotics Ltd., UK) Presentation: Zhe Shen - -
        Before 2025, no open-source system existed that could learn Lyapunov stability certificates directly from noisy, real-world flight data. No tool could answer the critical question: is this controller still stabilizable, especially when its closed-loop system is a total black box? We broke that boundary. This year, we released the first open-source framework that can learn Lyapunov functions from trajectory data under realistic, noise-corrupted conditions. Unlike statistical anomaly detectors, our method does not merely flag deviations; it directly determines whether the system can still be proven stable. Applied to public data from the 2024 SAS severe turbulence incident, our method revealed that, within just 60 seconds of the aircraft's descent becoming abnormal, no Lyapunov function could be constructed to certify system stability. Moreover, this is the first known data-driven, stability-theoretic method applied to a civil airliner accident. And our approach works with zero access to the controller logic, a breakthrough for commercial aircraft where control laws are proprietary and opaque.
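The falsification idea (if no certificate can be found from the data, stability cannot be certified) can be conveyed with a deliberately crude stand-in: a grid search over diagonal quadratic Lyapunov candidates, each checked for strict decrease along every consecutive pair of trajectory samples. The grid, the diagonal restriction, and the strict-decrease test are all simplifying assumptions; the released framework is far more general:

```python
import itertools

def certify_from_data(traj):
    """Crude stand-in for learning a Lyapunov certificate from data.

    Grid-searches a diagonal quadratic V(x) = p1*x1^2 + p2*x2^2 and
    accepts a candidate only if V strictly decreases across every
    consecutive sample pair. Returns the certifying (p1, p2), or None
    if no candidate on the grid certifies stability.
    """
    grid = [0.25, 0.5, 1.0, 2.0, 4.0]
    for p1, p2 in itertools.product(grid, grid):
        def V(x):
            return p1 * x[0] ** 2 + p2 * x[1] ** 2
        if all(V(b) < V(a) for a, b in zip(traj, traj[1:])):
            return (p1, p2)
    return None
```

On a decaying trajectory this returns a certificate; on a diverging one it returns None, mirroring the paper's finding that no Lyapunov function could be constructed once the descent became abnormal.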
      • 09.0408 Reliable Maintenance Policy for Distributed Affine Formation Control of UAVs
        Zhonggang Li (Delft University of Technology), Hongyu Zhou (Delft University of Technology), Raj Thilak Rajan (Delft University of Technology) Presentation: Zhonggang Li - -
        Distributed affine formation control (AFC) enables unmanned aerial vehicle (UAV) swarms to achieve coordinated motion while maintaining a desired geometric configuration, thereby enhancing their maneuverability. However, conventional leader-follower distributed AFC remains vulnerable to dynamic changes in the network topology, where UAVs may be temporarily unavailable due to maintenance in real-world applications. In this work, we propose a reliable maintenance policy that enables individual UAVs to detach from the swarm without compromising formation stability. Our policy introduces an agent homogenization strategy that replaces the conventional leader UAVs with virtual leaders, thereby ensuring that all operational UAVs are followers and thus eligible for maintenance. A relative affine localization (RAL) technique is employed, which allows the remaining UAVs to estimate the relative positions of missing neighbors by leveraging the known formation geometry. Our proposed framework is validated through a series of experiments with a swarm of Crazyflie quadrotors in an indoor environment, which demonstrate the effectiveness of our policy: individual UAVs can be removed and returned in sequence while the rest of the swarm maintains its target configuration with high accuracy. Our proposed maintenance policy enables robust and long-duration deployments of UAV swarms in inaccessible and harsh environments.
      • 09.0409 Convex Stability Can Stabilize Dynamic Systems, Declared as Unstabilizable by Current Methods
        Rama Yedavalli (Robust Engineering Systems, LLC, Dublin, OH 43017) Presentation: Rama Yedavalli - -
        At the 2024 IEEE Aerospace Conference, we introduced a new concept, labeled Convex Stability, with superior stability-robustness properties for the inevitable real-life perturbations in flight vehicle control. This concept was in turn the result of altogether new, innovative approaches labeled the Transformation Allergic (TA) and Phase Variable Allergic (PVA) Approaches. At this 2026 conference, we apply controllers based on this Convex Stability concept to show that we can stabilize any dynamic system, which may possess multiple equilibrium points (states) and may be either time invariant or time varying. In particular, for linear time-invariant systems, Convex Stability based controllers can easily stabilize any LTI state-space system that is abandoned by the current literature's eigenvalue-based control design methods as uncontrollable, unstabilizable, unobservable, or undetectable. This is possible because Convex Stability based controllers do not use eigenvalues at all; instead they use the always-real Transformation Allergic Indices (TAIs) and Phase Variable Allergic Indices (PVAIs). Thus the TA and PVA Approaches and the related Convex Stability based controllers enable Static Output Feedback Stabilization via the Distributed Actuation and Sensing capabilities that are completely new to our proposed TAI and PVAI philosophy. It is well known in the current literature that Static Output Feedback Stabilization has been a long-standing unsolved problem in linear systems theory, but our proposed Convex Stability concept is shown to solve this problem via the always-real TAIs and PVAIs. In other words, with Convex Stability based controllers, the output follows the input exactly, however fast and time varying that input command is. 
Thus Convex Stability based controllers do not suffer from saturation effects when high control gains are used, in stark contrast to the current literature's eigenvalue-based designs. Convex Stability based controllers also possess extremely high stability robustness to real parameter variations and to under-modeled and over-modeled dynamics, and work well with dynamic systems that possess multiple equilibrium points. The Convex Stability concept has shown that the Mapping Theorem (which connects the Hurwitz/Lyapunov stability of continuous-time systems with the Schur stability of discrete-time and sampled-data systems) becomes irrelevant for Convex Stability, and even incorrect in eigenvalue-based designs, because eigenvalue properties prevent the coexistence of negative-real-part, eigenvalue-based Hurwitz stability, with the zero state as its equilibrium state, and Schur stability, whose Schur equilibrium state has unity magnitude. This is why Convex Stability based controllers can easily stabilize non-minimum-phase systems, as well as systems declared by eigenvalue methods to be uncontrollable or unstabilizable. In the full paper, we provide elaborate design steps that demonstrate how a system deemed unstabilizable by the current literature's eigenvalue-based methods can be easily stabilized via the Distributed Actuation and Sensing philosophy associated with the Convex Stability concept. Examples include both space vehicles (satellite control) and atmospheric flight vehicle control.
      • 09.0410 Envelope-Aware, Cascaded Altitude Controller for VTOL via Reinforcement Learning
        Wan Faris Aizat Wan Aasim (United Arab Emirates University), Mohamed Okasha (), Abdullah Hayat (UAEU) Presentation: Wan Faris Aizat Wan Aasim - -
        A vertical-axis altitude controller for a vertical takeoff and landing unmanned aerial vehicle (VTOL) is presented in this study. The controller consists of a proximal policy optimization (PPO) agent in the outer loop and a finite-time Lyapunov controller (FTLC) in the inner loop. The agent generates a reference for the vertical speed, and the FTLC tracks it at the acceleration level. This allows the inner loop to maintain its classical stability while simultaneously enabling learning to plan long-range, goal-directed motion. Envelope-aware reference shaping is active throughout the training process, including stop-feasible brake caps, action bounds, and soft/hard speed limits. The training follows a four-stage curriculum: near-hover, mid-range travel, far-range travel, and silent hold. We evaluate using task-level key performance indicators: time-to-settle as a function of initial altitude error; residual time beyond the ideal travel time predicted by distance divided by the vertical cruise speed; adherence in the near-target zone; overshoot (median and 95th percentile); soft-overspeed exposure; actuator capping; and hold quality measured by percent time within the success band and the root-mean-square change in the reference vertical speed over the last ten seconds. In a head-to-head simulation versus a cascaded FTLC-outer and FTLC-inner baseline that was tuned with envelope awareness, the PPO-outer controller matched travel efficiency, preserved zero observed overshoot, and achieved near-silent hold while still obeying all limits. Based on these findings, it can be concluded that a safety-aware, modular learning-plus-control architecture has the potential to produce competitive and interpretable performance in simulation, as well as providing a viable path towards flight-ready behavior.
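The envelope-aware reference shaping described above can be sketched as a clamp applied to the agent's vertical-speed command. The stop-feasible brake cap |v| ≤ √(2·a_max·d) guarantees the vehicle can still stop at the goal; the acceleration and speed limits below are invented for illustration and are not the paper's actual envelope values:

```python
import math

def shape_vz_reference(vz_cmd, dist_to_goal,
                       a_max=2.0, v_soft=3.0, v_hard=4.0):
    """Envelope-aware shaping of a vertical-speed reference (sketch).

    1) Stop-feasible brake cap: |v| <= sqrt(2*a_max*d), so the vehicle
       can still decelerate to a stop at the goal.
    2) Hard speed limit, applied as a clip.
    3) Soft speed limit, applied by shrinking (not clipping) the excess,
       so the learning signal above the soft limit is not flattened.
    All limit values here are illustrative assumptions.
    """
    brake_cap = math.sqrt(2.0 * a_max * abs(dist_to_goal))
    cap = min(brake_cap, v_hard)
    vz = max(-cap, min(cap, vz_cmd))
    if abs(vz) > v_soft:
        vz = math.copysign(v_soft + 0.5 * (abs(vz) - v_soft), vz)
    return vz
```

Far from the goal the hard limit dominates; close to the goal the brake cap takes over, which is what produces the stop-feasible approach behavior.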
      • 09.0411 Development of an UAS with Geomatics and Bioinspired Features for ISR Missions
        Rodrigo Rangel (BRVANT / BRV UAV & Flight Systems) Presentation: Rodrigo Rangel - -
        This paper describes the development of a multifunctional navigation system with bio-inspired features that uses management software and a Geomatics system, updated by Artificial Intelligence (A.I.) routines, to define, monitor, and update the mission trajectory and mission procedures of a customized UAS. The development applies the concepts of Geomatics and aerial mobility to Intelligence, Surveillance and Reconnaissance (ISR) missions as an alternative way to collect field data, identifying and learning the characteristics of the field through A.I. routines to update the navigation system of the aerial platform, in terms of geolocation and IMU parameters, when the GPS signal fails or is denied. The aerial platform is a fixed-wing aircraft, based on COTS equipment, capable of operating in VLOS or BVLOS mode with on-board processing of Artificial Intelligence and Computer Vision routines that reproduce some biologically inspired characteristics of flying animals. Basically, the airborne mission system is a dedicated computer with interface boards for data processing and sensor integration (e.g., GPS, EO, NIR, and others), capable of performing onboard processing of bioinspired features such as: a horizon-based procedure to determine the vehicle's attitude, used for stabilization (as in birds); an echolocation feature used for obstacle avoidance (as in bats); and a bioinspired navigation algorithm with solar direction estimation. In this context, given the intrinsic characteristics of the field and the profile of ISR missions, the system can be used to collect image data from different frequency spectrums, obtaining geographical positions after onboard processing and/or post-processing, where the aerial platform scans the field to identify targets, or known points, through pattern recognition (mimicking migratory birds). 
Pattern recognition is achieved by A.I. routines, which perform object segmentation on the locations and the definition and classification of each object to create the respective pattern. Once created, the pattern is used as a reference point to update the navigation process of the airborne platform in case of GPS failure or denial. The management software, a complementary application running within the Command and Control Station (C2S), displays real-time situational analysis of predefined mission areas and allows the operator to optimize and/or update maps or navigation procedures. A.I.-enhanced learning is envisaged to simulate the mission profiles and “understand” pattern changes on the maps. The described development illustrates the feasibility of combining aeronautical requirements and low-cost off-the-shelf hardware with Artificial Intelligence, Computer Vision, and Geomatics routines to develop UAS with bioinspired navigation and control subsystems. The result is a “next-generation” UAS, in terms of bioinspired A.I. and computer vision capabilities, for possible use in Intelligence, Surveillance and Reconnaissance (ISR) applications.
      • 09.0413 Laser-Guided Control for Multicopter UAVs: Preliminary Design and Testing
        Ruaa Nakkar (King Fahd University of Petroleum and Minerals) Presentation: Ruaa Nakkar - -
        This paper presents a preliminary study of a laser-guided, vision-based framework for evaluating and tuning multicopter trajectory tracking. A floor line and a UAV-projected laser dot are captured by a downward-facing onboard camera, and an offline image-processing pipeline computes the signed lateral deviation between them for proportional–integral–derivative (PID) tuning. We compare the vision-derived error directly against OptiTrack on straight X/Y trajectories at fixed altitude. Experiments show that the vision metric gives more repeatable and less variable lateral-error signals than OptiTrack, with mid-range proportional and derivative gains yielding the most stable tracking, while excessive integral action degrades performance. Overall, the method delivers a practical, vision-based means for controller tuning and lays the groundwork for future real-time visual control of UAVs.
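Once the floor line and laser dot have been extracted from the image, the signed lateral deviation is plain 2D geometry: the perpendicular distance from the dot to the line, with the sign given by the cross product. This sketch assumes both features are already available as pixel coordinates; the paper's pipeline extracts them from the camera frames first:

```python
import math

def signed_lateral_deviation(line_a, line_b, dot):
    """Signed perpendicular distance (in pixels) from the laser dot to
    the floor line through points a and b. The sign of the 2D cross
    product tells which side of the line the dot lies on.
    """
    ax, ay = line_a
    bx, by = line_b
    px, py = dot
    dx, dy = bx - ax, by - ay
    cross = dx * (py - ay) - dy * (px - ax)
    return cross / math.hypot(dx, dy)
```

Fed into a PID error term frame by frame, this single scalar is all the tuning procedure needs from the vision pipeline.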
  • Virgil Adumitroaie (Jet Propulsion Laboratory) & Kristin Wortman (Johns Hopkins University Applied Physics Laboratory)
    • Tiberiu Barbat (Virtual-Ing) & Virgil Adumitroaie (Jet Propulsion Laboratory)
      • 10.0102 Modeling Stochastic Process Uncertainties in Spacecraft Dynamics: A New Capability in Basilisk
        Juan Garcia Bonilla (University of Colorado Boulder), Hanspeter Schaub (University of Colorado) Presentation: Juan Garcia Bonilla - -
        Spacecraft are subject to dynamic uncertainties that cannot be adequately represented by static random parameters, such as fluctuations in atmospheric density, solar flux variability, and thruster noise. These effects are more naturally modeled as stochastic processes governed by stochastic differential equations (SDEs). This paper introduces a modular framework for SDE-based uncertainty modeling that enables users to assign noise-driven dynamics to arbitrary simulation parameters. The approach is demonstrated within the open-source Basilisk astrodynamics toolkit, which now supports stochastic states driven by additive and multiplicative noise, and includes numerical SDE integrators. Representative processes, including Ornstein-Uhlenbeck and Gauss-Markov models, are reviewed and demonstrated. A de-orbit case study with stochastic atmospheric density demonstrates the importance of modeling uncertainty with SDEs, showing how process choice and correlation time fundamentally alter orbital lifetime predictions. By embedding stochastic modeling into a widely used astrodynamics toolkit, this work advances simulation fidelity and provides new capabilities for mission design, operations, and autonomy validation.
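The kind of stochastic state discussed here, for example an Ornstein-Uhlenbeck correction attached to atmospheric density, can be sketched with a plain Euler-Maruyama integrator. This is a minimal illustration of dx = -(x - mean)/tau dt + sigma dW only; Basilisk's actual API and integrators differ:

```python
import math
import random

def simulate_ou(x0, mean, tau, sigma, dt, steps, rng=None):
    """Euler-Maruyama integration of an Ornstein-Uhlenbeck process:
        dx = -(x - mean)/tau dt + sigma dW.
    tau is the correlation time; sigma scales the Wiener increments,
    which are drawn as N(0, sqrt(dt)). Sketch only, not Basilisk's API.
    """
    rng = rng or random.Random(0)
    xs = [x0]
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        xs.append(xs[-1] - (xs[-1] - mean) / tau * dt + sigma * dw)
    return xs
```

With sigma = 0 the process reduces to deterministic exponential relaxation toward the mean, which makes the role of the correlation time tau easy to see: it is exactly the knob the de-orbit case study varies.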
      • 10.0104 Developing Reduced-Order Models to Predict Supersonic Shock-Induced Performance Metrics in Scramjets
        Rehaan Kadhar () Presentation: Rehaan Kadhar - -
        Hypersonic flight led by scramjet engines has the potential to revolutionize the commercial and defense industries, but the development and production of engines for this technology can be challenging. To optimize their designs pre-production, developers use Computational Fluid Dynamics (CFD) simulations to model factors such as fluid flow characteristics and supersonic combustion. However, these simulations require substantial computational power, time, and resources, making them inaccessible to many. This study focuses on the development of Machine-Learning (ML) algorithms powered by artificial neural networks to assist in optimizing scramjet engine development. The algorithm was trained on CFD analyses of internal flow-field characteristics in scramjet engines performed in Ansys Fluent and was paired with a self-developed Reduced-Order Model (ROM) to predict and visually represent fluid flow within the engine geometry. The design process involved creating multiple 2D geometries, discretizing them, and sampling data points using a Latin-Hypercube Sampling method. 250 supersonic CFD simulations were performed using accurate boundary conditions for each geometry. Data was processed and collected through a combination of integration, differentiation, and inverse trigonometry. Performance efficiency was calculated by solving for the entropy flux and was used to develop Python algorithms that were enhanced using TensorFlow libraries. With a mean squared error (MSE) of 0.206, this compiled model acts as a cost-effective tool, avoiding the need for power-heavy simulations. The final model can provide numerical outputs, varied graphs, and visual contours similar to those of high-fidelity computational simulations. By expanding upon this idea, future work will have easier access to design optimization, eventually leading to well-developed hypersonic aircraft.
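The Latin-Hypercube Sampling step mentioned above can be sketched in a few lines: each dimension is split into n strata, each stratum is hit exactly once, and the strata are shuffled independently per dimension. The parameter names and ranges below are invented for illustration; the paper does not publish its sampling bounds:

```python
import random

def latin_hypercube(n_samples, bounds, rng=None):
    """Latin-hypercube sampling over named parameter ranges.

    `bounds` maps a (hypothetical) parameter name to its (lo, hi) range.
    For each dimension, each of the n strata is used exactly once, with
    a uniform draw inside the stratum; strata are shuffled per dimension.
    """
    rng = rng or random.Random(42)
    cols = {}
    for name, (lo, hi) in bounds.items():
        strata = list(range(n_samples))
        rng.shuffle(strata)
        cols[name] = [lo + (hi - lo) * (s + rng.random()) / n_samples
                      for s in strata]
    return [{k: cols[k][i] for k in bounds} for i in range(n_samples)]
```

Compared with plain random sampling, this guarantees even marginal coverage of every parameter, which is what makes a small CFD budget (such as 250 runs per geometry) go further when training a surrogate.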
      • 10.0109 Attitude Propagator for Geostationary Flexible Satellite – Comparative Study
        SI MOHAMMED Mohammed Arezki (Satellite Development Center - CDS) Presentation: SI MOHAMMED Mohammed Arezki - -
        Attitude prediction is a fundamental task in satellite dynamics, as it allows anticipating the future orientation of a spacecraft and ensuring mission success. It relies on dynamic models capable of extrapolating the past attitude history into the future. A precise prediction is particularly important for geostationary flexible satellites, where external disturbances and structural flexibility can significantly affect stability and pointing accuracy. The main objective of this paper is to develop and assess a methodology for predicting the attitude of a geostationary flexible satellite, expressed both in terms of Euler angles and modal displacements, while accounting for the influence of external perturbations. To this end, we present an attitude propagator derived from a linearized dynamic model, constructed around an equilibrium point under the assumptions of small librations and small elastic deformations. This formulation provides a balance between analytical simplicity and physical realism. To evaluate the effectiveness of the presented approach, a comparative study based on numerical simulations was carried out. The results demonstrate that the model provides consistent and reliable predictions of the satellite’s behavior. In particular, the predicted magnitudes of the main variables are of the order of 0.14 deg for attitude, 46 millidegrees per second for angular velocity, and approximately 7×10⁻³ m for modal displacement. These findings highlight the capability of the presented linearized model to capture the essential dynamics of geostationary flexible satellites under external disturbances, offering a practical and computationally efficient tool for attitude prediction in real operational scenarios.
      • 10.0110 Model Rocket Performance Analysis Platform
        Aziz DURMUŞ (KYRENIA UNIVERSITY), Onur Aygün (University of Kyrenia), Berkay Doğan (University of Kyrenia) Presentation: Aziz DURMUŞ - -
        This research presents a comprehensive computational platform for model rocket performance analysis, integrating multiple advanced analytical modules to address critical aspects of rocket design and flight dynamics. The platform implements trajectory analysis using Newton's laws of motion with atmospheric density interpolation, Monte Carlo simulation methodology for probabilistic landing area analysis, and design optimization utilizing the Nelder-Mead algorithm for multi-parameter optimization. The system incorporates six-degrees-of-freedom analysis, thermal heating calculations, composite material property computations, and drag coefficient estimations. Static stability analysis employs Mach-dependent aerodynamic coefficient calculations to determine stability margins and neutral point positions. Built on Angular framework with TypeScript, the platform provides real-time visualization through Chart.js integration and generates comprehensive PDF reports for each analysis module. The platform's modular architecture enables independent analysis of aerodynamic, structural, and performance parameters, facilitating evidence-based design decisions for model rocket development. Validation studies demonstrate an overall accuracy of 87.2% across all computational modules, with trajectory analysis showing the highest precision. This computational tool represents a significant advancement in accessible rocket science education and research, providing professional-grade analysis capabilities previously limited to specialized aerospace software. The platform serves as an educational resource for students and researchers while offering practical design optimization tools for amateur rocket enthusiasts and engineering professionals.
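The Monte Carlo landing-area module described above can be conveyed with a deliberately simple dispersion model: drift equals wind speed times descent time, at a random wind draw per trial. All numbers and the single-perturbation model are illustrative; the platform's module also perturbs thrust, mass, and drag:

```python
import random

def landing_dispersion(n_trials, apogee_m, descent_rate_ms,
                       wind_mean_ms, wind_sd_ms, rng=None):
    """Monte Carlo landing-area sketch.

    Each trial draws a wind speed from N(mean, sd) (clipped at zero) and
    computes drift = wind * descent time. Returns the radius containing
    roughly 95% of the simulated landing points.
    """
    rng = rng or random.Random(7)
    t_descent = apogee_m / descent_rate_ms
    radii = sorted(max(0.0, rng.gauss(wind_mean_ms, wind_sd_ms)) * t_descent
                   for _ in range(n_trials))
    return radii[int(0.95 * (n_trials - 1))]
```

For a 300 m apogee, a 5 m/s descent rate, and a 4 ± 1 m/s wind, the 95% radius comes out near 340 m, the sort of single number a recovery-planning report would surface to the user.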
    • Jeremiah Finnigan (Johns Hopkins University/Applied Physics Laboratory) & Ronnie Killough (Southwest Research Institute)
      • 10.0201 Insights from Three Decades of Operating and Modernizing a Multi-Mission Distributed Object Store
        Jenette Sellin (NASA Jet Propulsion Laboratory), Janine Huang (Jet Propulsion Laboratory), Elliot Trapp (Jet Propulsion Laboratory), Gus Razo (), Richard Rangel () Presentation: Jenette Sellin - -
        The Distributed Object Manager (DOM) is a general-purpose, extensible, customizable, high-performance, object-oriented cataloging system that is used for data operations by dozens of flight missions at NASA’s Jet Propulsion Laboratory. DOM has been actively supporting missions since its inception in 1993, when it was first developed to support JPL space mission operations. DOM's lasting relevance provides a unique context from which to draw valuable lessons in software architecture, maintenance, and real time operations as JPL looks forward to the next generation of multi-mission data management systems. DOM predates the widespread adoption of modern cloud-based data management technologies by more than a decade, while providing much of the same functionality. DOM empowers missions to define custom schemas and object types, manage user permissions and privileges, and customize notification services; this allows DOM to meet mission-specific needs throughout entire mission lifecycles, many of which span multiple decades. Additionally, DOM features lightweight, distributable catalog servers and provides command line interface (CLI) tools and graphical user interface (GUI) clients to access the different object store servers enabling a full suite of create, read, update, and delete (CRUD) database operations. DOM continues to support critical uplink and downlink data transactions via the Deep Space Network for many flagship missions from Voyager through Europa Clipper. Over the past three decades, DOM has undergone several infrastructure modernizations while maintaining its core functionality and effectiveness. DOM has faced various technical and operational challenges, such as scaling to support an increasing number of flight projects, extending to support growing mission data volumes, coordinating maintenance downtime, ensuring consistency across significant administrative personnel changes, and incorporating emerging technologies and best practices. 
To address these, DOM’s core functionality has been extended to include a remote method invocation (RMI) interface, a file notification service (FNS), event-driven client programs (Message Reactors), and more. We present insights from the current DOM administrative team on evidence-based operations and data management best practices. DOM's fundamental challenge has been to provide continuous mission support while adapting to changes in data, hardware, cybersecurity, and mission requirements. In adapting to these challenges, DOM exemplifies a successful approach for administering a mature, lasting, reliable flight mission data management system.
      • 10.0203 Accelerating the Pre-Silicon Functional Verification Value Ramp Using Aspect Oriented Development
        Hamilton Carter (Mentor Graphics a Siemens Business) Presentation: Hamilton Carter - -
        The primary value of all pre-silicon digital verification code is to find bugs in the design under test (DUT) prior to production. Finding bugs faster is better: it has been shown that the cost of debugging issues in digital designs increases exponentially with the time it takes to find the issue. Consequently, reducing the time taken to implement verification code is one of the holy grails of verification engineering. Historically, however, especially in object-oriented verification methodologies like the Universal Verification Methodology (UVM), where every checker object is run against the design for every test case, there is a high price to pay for frequent check-ins of new or modified verification code: a single flawed checker can bring down an entire regression, causing thousands of test cases to report false-negative failures. These massive false-negative incidents are costly in two ways. First, we lose valuable coverage that could have been gained if the constrained-random test cases had not erroneously failed (typically, coverage is collected only from passing test cases). Second, new, real bugs, surrounded by thousands of false failures, may be masked or simply ignored. Therefore, the desire to simulate new verification code against a DUT is tempered by the desire not to ruin the existing verification test bench and its associated regression data, creating a tug of war that can lead to long smoke tests and code-review processes that retard the rate at which verification value can be added. We demonstrate a novel development process with three advantages: 1. It significantly reduces verification coding-to-regression delivery time, allowing new test cases and new checkers to be released into regression rapidly. 2. It guarantees that there will be no regression-wide false-negative failures as a result of the newly released code. 3. It accelerates the speed at which new engineers can be productively added to a verification project. 
Our novel methodology uses a programming paradigm defined in the 1970s, but seldom used: aspect-oriented development. We first review the concepts of aspect-oriented programming demonstrating that these concepts—extending multiple classes to add new or modified layers of functionality to the test bench—can be easily implemented using SystemVerilog and the UVM factory pattern. We will then review the object-oriented UVM framework paying particular attention to classes that need to be created or modified to implement a checker for an unverified design feature along with the stimulus necessary to exercise that feature. In the third section we outline how to deploy aspect-oriented techniques using the UVM factory pattern to insulate the existing verification test bench from new development while at the same time allowing the newly developed code to utilize and benefit from all the features of the existing test bench. In section four, we discuss how the organization inherent in aspect-oriented programming accelerates the learning curve faced by new engineers entering a verification project. In the paper’s final section, we briefly review both the implementation and benefits of the process and discuss other avenues where it can be efficiently deployed.
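The factory-override mechanism at the heart of this approach can be illustrated with a toy Python analogue (real UVM code is SystemVerilog; the class names here are invented). The test bench asks the factory for a type by name, and an aspect can override that name with an extended class without touching any existing test-bench code, which is exactly how new checkers are insulated from the regression until they are promoted:

```python
class Factory:
    """Toy analogue of the UVM factory's type-override mechanism."""
    _registry = {}
    _overrides = {}

    @classmethod
    def register(cls, name, klass):
        cls._registry[name] = klass

    @classmethod
    def override(cls, name, klass):
        # An "aspect" layers new functionality over an existing name.
        cls._overrides[name] = klass

    @classmethod
    def create(cls, name):
        return cls._overrides.get(name, cls._registry[name])()

class BaseChecker:
    def check(self, txn):
        return "base-check"

class NewFeatureChecker(BaseChecker):
    # Extended checker for an unverified feature; inherits everything
    # else from the base, so the rest of the bench is untouched.
    def check(self, txn):
        return "feature-check"

Factory.register("checker", BaseChecker)
```

Until `Factory.override("checker", NewFeatureChecker)` is applied in a development configuration, every regression run constructs `BaseChecker`, so in-flight checker work cannot cause regression-wide false negatives.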
      • 10.0204 Hosting Machine Learning Algorithms in Flight Software via the F Prime Python Software Framework
        Matthew Alexander Mariano (Bronco Space ICON Lab, California Polytechnic University, Pomona), Kelly Williams (California State Polytechnic University, Pomona), Michael Pham (Cal Poly Pomona), Zachary Gaines (California Polytechnic State University, Pomona), Tanya Patel (California State University), Phyllis Nelson (California State Polytechnic University Pomona) Presentation: Matthew Alexander Mariano - -
        We present a Python based API, software components, and reference system for safely running machine learning models alongside an F Prime flight software binary. F Prime is an open-source reusable component-based flight software framework developed at JPL which has been deployed on a variety of small satellites and the Mars Ingenuity Helicopter. F Prime Python was originally developed to provide a means of directly integrating rapidly prototyped Python code into the C++ F Prime binary. This link is facilitated using PyBind11 to enable users to create F Prime components that are programmed and executed entirely in Python as a part of a broader F Prime deployment. Our paper presents updates made to this F Prime Python toolchain as part of the SCALES (Spacecraft Compartmentalized Autonomous Learning and Edge Computing System) project. These updates are compatible with F Prime v3.x.x. The full SCALES implementation includes reference hardware and software, with F Prime running on a radiation-tolerant processor and Python ML processing done on a separate NVIDIA Jetson Orin to segregate the risk of the ML software and hardware from the essential flight software functions. SCALES provides an open-source reference for machine learning manager components that can thus be used to safely host experimental machine learning models alongside a trusted flight software deployment. The SCALES reference machine learning manager component uses the Hugging Face API for executing Machine Learning (ML) models. This creates a lightweight and reusable bridge for a flight software binary to interact with the over 1 million pretrained ML models hosted on the Hugging Face Hub. The SCALES F Prime reference deployment will demonstrate a typical F Prime flight software binary operating alongside Microsoft’s Phi-3-mini “Small Language Model”, Depth Anything V2 Vision Transformer, and the Resnet-152 V1.5 CNN. 
In addition to presenting the F Prime Python-enabled machine learning manager, we also discuss analysis of the interaction between a real time F Prime environment and the asynchronous Python environment, compartmentalization of the software stack to retain core flight software safety, and initial deployments of this system on prototype edge computing hardware.
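The compartmentalization pattern described above, a real-time flight software loop exchanging requests with an asynchronous Python ML worker, can be sketched with standard-library primitives. This is an illustrative sketch only: the `MLManager` class and its method names are hypothetical, and neither the actual F Prime Python bindings nor the Hugging Face API is reproduced here.

```python
import queue
import threading

class MLManager:
    """Hypothetical sketch: bridges a real-time loop and an async ML worker.

    Requests are queued so a slow (or crashed) model never blocks the
    flight-software side; a missing result simply times out.
    """

    def __init__(self, model_fn):
        self._model_fn = model_fn          # stand-in for e.g. a Hugging Face pipeline
        self._requests = queue.Queue()
        self._results = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def _run(self):
        while True:
            req_id, payload = self._requests.get()
            try:
                self._results.put((req_id, self._model_fn(payload)))
            except Exception as exc:       # fault containment: model errors stay here
                self._results.put((req_id, exc))

    def submit(self, req_id, payload):
        self._requests.put((req_id, payload))

    def poll(self, timeout=1.0):
        try:
            return self._results.get(timeout=timeout)
        except queue.Empty:
            return None                    # the flight loop carries on without a result


# Usage: a trivial "model" standing in for an actual inference call.
mgr = MLManager(lambda x: x.upper())
mgr.submit(1, "terrain")
print(mgr.poll())  # (1, 'TERRAIN')
```

The key design point mirrored here is that the flight-software side never waits indefinitely on the ML side: a hung model degrades to a timed-out poll rather than a blocked control loop.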
      • 10.0205 Automated Extraction and Cross-Document Analysis of Aerospace Project Requirements Using LLMs
        Imededdine Ben Slimene (German Aerospace Center - DLR) Presentation: Imededdine Ben Slimene - -
        Requirements engineering remains a fundamental yet time-consuming activity for aerospace and other complex systems, where large-scale project documents define mission- and safety-critical system specifications. Traditional requirements engineering is often challenged by the scale, heterogeneity, and ambiguity of such documents, making it difficult to maintain traceability, consistency, and completeness. This paper introduces a human-in-the-loop requirements analysis tool that automates requirement extraction, classification, and interdependency analysis for aerospace project documentation. The tool employs large language models (LLMs) to parse natural-language documents and identify candidate requirements. Two configurations are evaluated: a naive baseline LLM without contextual knowledge, and a retrieval-augmented (context-aware) LLM incorporating a project-specific document knowledge base. Extracted requirements are automatically categorized into relevant functional domains such as telemetry, communications, timing, or safety, and further analyzed for cross-document dependencies and potential conflicts. A key feature is the ability to detect and link interdependent requirements across separate documents, which is critical in complex aerospace systems where requirements are often distributed and coupled. The approach is designed to support, rather than replace, human engineering expertise, with a human-in-the-loop process enabling iterative review and validation of the extracted and classified requirements. An initial evaluation on representative aerospace project documents demonstrates promising results in terms of extraction precision, classification accuracy, and identification of ambiguous or conflicting statements. The system architecture emphasizes local execution for data privacy and supports integration with formal modeling workflows to enable future verification tasks.
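The extraction and classification steps can be illustrated with a deliberately naive keyword baseline; the paper's tool uses LLMs for both steps, and the domain keyword lists below are invented purely for illustration.

```python
import re

# Hypothetical domain keywords; the paper's tool uses an LLM classifier instead.
DOMAINS = {
    "telemetry":      ["telemetry", "housekeeping"],
    "communications": ["downlink", "uplink", "radio"],
    "timing":         ["latency", "deadline", "clock"],
    "safety":         ["hazard", "fail-safe", "interlock"],
}

def extract_requirements(text):
    """Return sentences that look like requirements ('shall' statements)."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s.strip() for s in sentences if re.search(r"\bshall\b", s, re.I)]

def classify(requirement):
    """Assign a requirement to every domain whose keywords it mentions."""
    low = requirement.lower()
    return [d for d, words in DOMAINS.items() if any(w in low for w in words)]

doc = ("The system shall downlink housekeeping telemetry every 60 s. "
       "Operators may review logs. "
       "The interlock shall engage on any hazard signal.")
reqs = extract_requirements(doc)
print(reqs)
print([classify(r) for r in reqs])
```

A baseline like this is exactly what the LLM configurations in the paper are meant to outperform: it cannot resolve ambiguity, paraphrase, or cross-document dependencies, which is where retrieval-augmented extraction earns its keep.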
      • 10.0206 Towards an AI-augmented Monitoring and Control Applications Lifecycle Management System
        Luigi Palladino (EUMETSAT), Francesco Croce (EUMETSAT), Diogo Vicente (), Joao Silva (EUMETSAT) Presentation: Luigi Palladino - -
        The increased rate at which satellite constellations are being placed in orbit is creating visible challenges in the management, development, and maintenance of ground segment software applications. Ensuring maintainability and reliability while scaling systems and teams is a challenge, particularly across a diverse set of specialised domains such as mission control, flight dynamics, mission planning, and simulation, among others. The traditional segregation of mission control applications into specialised domains has led to the instantiation of bespoke solutions that formalised distinct approaches for managing the lifecycle of each software product. This challenge has led to the definition of ALM-R (Application Lifecycle Management Requirements), a comprehensive set of guiding requirements, principles, and processes to ensure that the lifecycle of mission control applications is managed in a cohesive, coherent, and cost-effective manner. ALM-S (Application Lifecycle Management System) is the instantiation of ALM-R and comprised the deployment of off-the-shelf tooling and the implementation of bespoke workflows, services, and observation and reporting mechanisms, enabling the different stakeholders to manage the different viewpoints inherent to the management of the application lifecycle. With this work, we introduce AIA-ALM-S (Artificial Intelligence Augmented ALM). This represents the overall evolution of ALM-S through the adoption of AI agents that can assist maintenance engineers with both recurring and non-recurring tasks, as well as knowledge management. The proposed approach required an update to ALM-R, which had to be reviewed and enhanced in light of what can now be achieved through the use of AI. Similarly to ALM-S, AIA-ALM-S is not a collection of isolated AI tools, but a cohesive system designed to enhance automation, assist engineers, and, ultimately, scale capabilities effectively.
This is not about replacing existing solutions, but about augmenting them with intelligent capabilities. AIA-ALM-S adopts an architecture comprising four layers: the Knowledge Base Layer (KBL), the ALM Agentic Layer (AAL), the Management and Orchestration Layer (MOL), and the Presentation Layer. The KBL sits at the centre of AIA-ALM-S. It is an agentic system connected to diverse data sources within the ground segment, encompassing satellite mission operations data, internal documentation, and code repositories. The AAL makes available a set of intelligent agents that are able to act on behalf of maintenance engineers, using the KBL as its foundation. The AAL standardises agent definitions, ensuring a clear approach for extending the agentic feature set with future agents, and builds upon containerisation and secure APIs for interoperability. The MOL orchestrates the different processes following the software engineering lifecycle concepts, practices, and workflows outlined in ALM-R. The Presentation Layer provides user interfaces and access to the system’s functionality. ALM-S is based on the software development lifecycle, which also provides the use cases for AIA-ALM-S. Although its boundaries are defined by ALM-R, the use cases span different mission control functions. Initial examples include agents facilitating communication between systems such as the Mission Control System and the Satellite Simulator, as well as agents assisting in documentation summarisation. These examples highlight the value of AIA-ALM-S and serve as building blocks for more comprehensive intelligent systems.
      • 10.0207 Simple Data Model Approach to Integrate FACE® Software Applications Designed from Disparate DSDMs
        Travis Rogers (Georgia Tech Research Institute) Presentation: Travis Rogers - -
        Software integration can be difficult, costly, and time-consuming. Fortunately, the Future Airborne Capability Environment® (FACE®) Technical Approach offers methodologies that make integration easier. However, even when developers follow this technical approach, there can still be challenges to integration. One serious challenge occurs when two or more software applications need to be integrated, but each application's supporting message set and transport services originate from a different Domain-Specific Data Model (DSDM). DSDMs are key to creating the message sets that components within an application produce and consume. The process of rectifying the differences between disparate DSDMs to ensure that components within a combined application can intercommunicate is daunting. This paper presents two methodologies to integrate software applications that come from disparate DSDMs. The first methodology uses a common data model to reproject both applications' message sets from a single Simple Data Model (SiDM). This allows the software to run on a single Transport Services Segment (TSS). The second methodology applies a solution in which both applications integrate while maintaining their own message sets, transport services, and DSDMs, allowing all semantic meaning to remain unchanged. To achieve this, the SiDM is used to bridge the semantically meaningful message sets between two or more applications. This allows two or more applications with disparate DSDMs to communicate using both their native data models and, across the interface, the common data model. This paper will provide the methodology to construct the SiDM and explain both integration methodologies. It will also reference nonproprietary examples where this technique is implemented. By implementing this methodology, applications become more universally useful as “off the shelf” products that can be integrated without major reconfiguration. Keywords—software, FACE, Domain-Specific Data Models, Data Modeling, integration
    • Martin Stelzer (German Aerospace Center (DLR)) & Peter Lehner (German Aerospace Center (DLR))
      • 10.0301 Architecting a Robust Software Stack for Small Satellites Using ROS2, Docker, Zenoh, and ZFS
        Raajitha Rajkumar (The Aerospace Corporation), Gabriel St. Angel (The Aerospace Corporation), Shon Cortes (The Aerospace Corporation), Phaedrus Leeds (The Aerospace Corporation) Presentation: Raajitha Rajkumar - -
        As the demand for autonomy, resilience, and interoperability in small satellite missions increases, leveraging modern radiation-tested commercial-off-the-shelf (COTS) hardware and open-source software becomes critical for enabling robust, fault-tolerant spaceborne computing. This work presents an integrated architecture built around a dual NVIDIA Orin NX compute system with dedicated solid-state NVMe drives, the SAMV71Q21RT microcontroller, ROS 2 used in conjunction with Zenoh as the communication middleware, and Docker Compose for container orchestration. Together, these technologies enable advanced crosslinking and downlinking, improved flight safety, and scalable, mission-critical operations in resource-constrained space environments. We address three key challenges: ensuring fault tolerance during onboard failures, enabling high-performance crosslinking between satellites, and simplifying the downlinking and storage of large sensor datasets using COTS systems. The dual-Orin architecture provides computational redundancy and load balancing for perception, control, and crosslink tasks. Failover between the two Orins is handled via Zenoh and ROS 2 publish/subscribe patterns, allowing seamless handover of state and command queues. This enables continued operation even when one compute node fails or reboots unexpectedly. Flight safety is further supported by a distributed lock model, synchronizing maneuver and safety decisions across nodes in real time. To coordinate low-level command and telemetry flows with real-time determinism, the SAMV71 microcontroller serves as a reliable interface between the Orin stack and the satellite’s critical subsystems. Sensor data is collected and relayed over UART, SPI, USB, and other interfaces, then published over ROS 2. Using Docker Compose, each compute unit hosts isolated containers for individual functions, ranging from image processing to telemetry handling, facilitating modular testing, reusability, and rapid redeployment.
Crosslink performance is enhanced via Zenoh, a lightweight protocol that supports peer-to-peer, distributed publish/subscribe, and query patterns over low-bandwidth links. This allows satellites to dynamically share state, telemetry, and consensus data with minimal latency, supporting autonomous formation flying and rendezvous and proximity operations (RPO). Data for downlink is managed through ROS 2 bags and stored on COTS solid-state drives. When triggered by onboard events or ground commands, data can be downsampled using open-source ROS 2 tools. This method allows for the selection of mission-relevant data segments for downlink, optimizing bandwidth usage. Data stored on the solid-state drives is protected from corruption using the ZFS filesystem’s proven and highly configurable checksumming and parity features. This architecture demonstrates how commercial tools can be combined to improve reliability, scalability, redundancy, and mission performance in small satellite operations. The proposed solution is designed to reduce development cost and complexity while increasing system adaptability to fault conditions and changing mission objectives. Preliminary ground testing shows promising results in real-time crosslink synchronization, state recovery, and multi-sensor fusion with limited onboard compute. Future work includes in-orbit validation and extending the architecture for swarming and inter-satellite decision-making. By leveraging open-source software, modern radiation-tested hardware accelerators, and containerized systems engineering, this framework represents a scalable path forward for future autonomous space missions seeking to maximize safety, reliability, and operational intelligence.
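The heartbeat-driven failover between the two compute nodes can be illustrated with a small state machine. This is a standard-library sketch of the pattern only; it is not the Zenoh or ROS 2 API, and the node names and timeout are invented for illustration.

```python
class FailoverMonitor:
    """Sketch of heartbeat-based failover between two compute nodes.

    A node stays 'primary' until its heartbeat goes stale; the standby
    then takes over (here modeled as a simple role swap).
    """

    def __init__(self, timeout_s=1.0):
        self.timeout_s = timeout_s
        self.last_beat = {"orin_a": 0.0, "orin_b": 0.0}
        self.primary = "orin_a"

    def heartbeat(self, node, now):
        self.last_beat[node] = now

    def active_primary(self, now):
        # Fail over only if the primary is stale AND the standby is fresh.
        standby = "orin_b" if self.primary == "orin_a" else "orin_a"
        if (now - self.last_beat[self.primary] > self.timeout_s
                and now - self.last_beat[standby] <= self.timeout_s):
            self.primary = standby
        return self.primary

mon = FailoverMonitor(timeout_s=1.0)
mon.heartbeat("orin_a", now=0.0)
mon.heartbeat("orin_b", now=0.0)
print(mon.active_primary(now=0.5))   # orin_a still healthy
mon.heartbeat("orin_b", now=2.0)     # only the standby keeps beating
print(mon.active_primary(now=2.5))   # failover to orin_b
```

In the architecture described above, the heartbeat transport and the handover of state and command queues would be carried over Zenoh/ROS 2 publish/subscribe rather than in-process calls.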
      • 10.0302 Modular Ground Calibration Software for DSSD-BGO Detectors in the MATCH Lunar Particle Payload
        Thanayuth Panyalert (National Astronomical Research Institute of Thailand), Peerapong Torteeka (National Astronomical Research Institute of Thailand (NARIT)), Popefa Charoenvicha (National Astronomical Research Institute of Thailand (NARIT)) Presentation: Thanayuth Panyalert - -
        The Moon-Aiming Thai-Chinese Hodoscope (MATCH) is a hybrid particle detector developed for China’s Chang’E-7 lunar mission, aimed at supporting space weather monitoring and cosmic radiation studies in the Sun-Earth-Moon system. MATCH is capable of detecting a variety of charged particles including Jovian and Galactic Cosmic Ray (GCR) electrons, lunar albedo protons and alpha particles, contributing to the study of high-energy particle interactions with the lunar surface. This paper presents the development of a modular ground-based calibration software system designed specifically for MATCH, which integrates a Double-Sided Silicon Strip Detector (DSSD) for position tracking and a Bismuth Germanate (BGO) scintillator stack for energy measurement. The software architecture reflects the scientific logic of MATCH’s observation mode, ensuring accurate reconstruction of particle energy and directionality from detector signals. It features a configurable and modular signal processing pipeline, including waveform preprocessing, histogram construction, model-based peak fitting, and calibration mapping. Algorithms for direction estimation based on DSSD strip patterns and species identification using BGO energy profiles based on the dE-E technique are integrated within the framework. The system supports a range of alpha and gamma radiation sources for laboratory-based calibration. Outputs from the software are optimized for use in scientific workflows, including energy spectrum reconstruction, incident angle mapping, and integration with downstream mission analysis tools. By embedding scientific logic directly into its core, the system bridges the gap between raw particle event data and mission-ready interpretations. This software plays a critical role in preparing MATCH for its mission objectives in charged particle characterization and space environment studies around the Moon.
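The calibration-mapping step described above, fitting a channel-to-energy relation from known source peaks, can be sketched as a linear least-squares fit. The peak centroids and line energies below are invented for illustration and are not actual MATCH calibration data.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = gain * x + offset."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    gain = sxy / sxx
    return gain, my - gain * mx

# Hypothetical fitted peak centroids (ADC channel) vs. known line energies (keV).
channels = [120.0, 480.0, 950.0]
energies = [59.5, 239.0, 473.0]   # invented values for illustration
gain, offset = linear_fit(channels, energies)

def channel_to_energy(ch):
    """Calibration mapping applied to raw detector channels."""
    return gain * ch + offset

print(channel_to_energy(500.0))
```

In practice the centroids would come from the model-based peak-fitting stage of the pipeline, and a detector like the BGO stack may need a nonlinear or per-crystal mapping rather than a single global line.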
      • 10.0303 Multi-Fidelity Simulation for Lunar and Planetary Rover Missions
        Juan Garcia Bonilla (University of Colorado Boulder), Asher Elmquist (Jet Propulsion Laboratory), Tristan Hasseler (Jet Propulsion Laboratory) Presentation: Juan Garcia Bonilla - -
        Planetary rover development depends on simulation to evaluate autonomy, mobility, and system performance in environments that cannot be replicated on Earth. Simulation requirements span a wide spectrum, from reduced-order models for long-duration mission planning, to high-fidelity multi-body dynamics, terramechanics, and sensor rendering for mobility and navigation, to end-to-end integration of full flight software stacks. Emerging applications such as reinforcement learning and replacement reality add further demands for scale, domain randomization, and hardware interfacing. This paper surveys these diverse needs and the modeling capabilities required to meet them, highlighting recurring foundations. Insights are drawn from a technology development effort for the Endurance lunar rover concept at the Jet Propulsion Laboratory, where many of these capabilities were implemented in the Dshell-DARTS framework, but the results generalize to rover missions broadly. We conclude that effective rover simulation requires adaptable, multi-fidelity frameworks that emphasize reuse and integration, making simulation a central asset throughout the mission lifecycle.
      • 10.0304 Mission Planning Simulation and Design Software Scaling for Shared and Distributed Memory Computing
        Sam Siewert (California State University) Presentation: Sam Siewert - -
        Mission planning and simulation software architecture should ideally provide an open, scalable solution for a wide range of aerospace systems and allow for analysis and design optimization. Existing open-source mission planning and simulation tools such as GMAT (General Mission Analysis Tool), Nyx-space, Godot, ODtbx, and NOS3 provide propagators for space-system celestial mechanics and trajectory determination, along with analysis and design tools. However, all of them could be improved with parallel-scaling software design patterns for their propagation and optimization features: specifically, through the use of modern object-oriented and multi-paradigm programming language methods for shared- and distributed-memory parallelism, both in the simulations and in optimization. This paper presents a software architectural pattern for use by the core simulation and optimization features of mission planning, providing a simple stand-alone example along with work to create a new prototype extension to existing open source. Many mission planning, simulation, and optimization objectives benefit from open-source parallel scaling, especially Monte Carlo optimization and problem scaling to simulate many celestial bodies and space vehicles over long durations. Present tools either lack parallel scaling or use proprietary methods, often limited to one type of parallelism (e.g., shared memory). The goal of this investigation is to re-engineer the core propagation and Monte Carlo features for speed-up and scaling with portable parallel software engineering design patterns that integrate both shared-memory and distributed-memory parallel methods.
Although co-processing could also be used, this study avoids proprietary methods such as CUDA (Compute Unified Device Architecture), Advanced Micro Devices' ROCm, and Apple's Metal, given the lack of open solutions for co-processors such as GP-GPUs (General-Purpose Graphics Processing Units), so as not to incorporate proprietary parallel software. Longer term, open co-processing methods such as OpenCL can be investigated, but here we have found that a focus on open shared- and distributed-memory scaling provides significant advancement. Our intent is to evaluate the relative merits of combining OpenMP (shared memory) and MPI (distributed memory) to extend existing mission planning and simulation tools. Specifically, the tools benefit from scaling for Monte Carlo analysis and multi-body simulation supported by current supercomputing machines, which have millions of distributed-memory cores. The trend for mission design and analysis tools has been not only the incorporation of proprietary methods such as Intel TBB (Threading Building Blocks) in extensions to GMAT, but also the full replacement of open-source tools like GMAT with proprietary tools like the ANSYS Systems Tool Kit for mission design. Preserving open-source mission design and analysis tools that are competitive in terms of performance, scaling, and features benefits the broader space science community by providing full-source options that are portable and extensible. The open-source design pattern will be presented by example in a small-scale simulation, along with experience using this pattern in existing larger open-source mission design and analysis tools. Results comparing runtimes for sequential versus parallel runs, as well as existing proprietary threaded runs, are presented and analyzed in this paper.
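The partition-and-reduce pattern underlying the OpenMP/MPI approach can be shown in miniature: samples of a Monte Carlo estimate are split across workers, each with its own deterministic RNG stream, and the partial counts are reduced at the end. This is a Python thread-pool sketch of the pattern only (threads stand in for OpenMP threads; an MPI version would partition the chunks across ranks and combine them with a reduce operation), not code from GMAT or the other tools named above.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def count_hits(seed, n):
    """One worker's share of a Monte Carlo pi estimate, with its own RNG stream."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def parallel_pi(total_samples=400_000, workers=4):
    per = total_samples // workers
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Partition: one independent, deterministically seeded chunk per worker.
        futures = [pool.submit(count_hits, seed, per) for seed in range(workers)]
        # Reduce: sum the partial hit counts.
        hits = sum(f.result() for f in futures)
    return 4.0 * hits / (per * workers)

print(parallel_pi())
```

Per-worker RNG seeding is the detail that makes the parallel run reproducible regardless of scheduling order, which matters for regression-testing a re-engineered Monte Carlo core against its sequential baseline.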
      • 10.0305 Cyber Resilient Provenance Tracking for Virtual Certification
        Alexander Weinert (German Aerospace Center (DLR)), Paula Ruß (German Aerospace Center - DLR), Malte Struck (German Aerospace Center - DLR), Michael Felderer (DLR Institute of Software Technology), Andreas Schreiber (German Aerospace Center (DLR)) Presentation: Alexander Weinert - -
        The Virtual Product House (VPH) is an integration and testing center for virtually certifying aircraft components. This is important because developing an aircraft component is costly and time-consuming. If a production team discovers that a component does not meet requirements, the team must repeat the entire design, build, and test process. This repetitive development process is costly and can hinder future innovation. Provenance refers to the origin or history of an artifact, object, or piece of data. In the digital age, provenance has become increasingly important for digital content. While digital artifacts change frequently, provenance records typically do not. In aircraft development, we use provenance to track the development steps of the aircraft, as well as the digital processing workflows. We assume that traceability of aircraft data and reliability of tools and toolchains are both essential for simulation-supported aircraft certification. To ensure the quality, reliability, and trustworthiness of provenance data, cybersecurity aspects must be considered. Since attacks and countermeasures in this field are constantly evolving, signatures showing the authenticity and integrity of data must occasionally be updated. Blockchains can store chains of digital signatures, i.e., data series. Based on the requirements of a virtual design process, we have extended the provenance tracking system that we developed for the VPH by applying a blockchain-based digital signature scheme. Our solution is adaptable to emerging security technologies, including post-quantum cryptography. We provide a high-level overview of the proposed system and discuss the design choices behind the platform. We present the design and architecture of a software prototype that we developed to test the feasibility of our concept.
Our goal is to demonstrate the architecture, key features, and functionality of our proposed provenance tracking solution for digital aerospace design and certification processes.
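The core idea above, a chain of digital signatures over provenance records in which each entry binds to its predecessor, can be sketched with standard-library HMAC signatures. This is a sketch of the chaining structure only; the record contents are invented, and a deployed system would use asymmetric (and eventually post-quantum) signature schemes rather than a shared secret.

```python
import hashlib
import hmac
import json

SECRET = b"demo-key"   # stand-in; a real system would use asymmetric keys

def sign(record, prev_sig):
    """Sign a provenance record together with the previous signature."""
    payload = json.dumps(record, sort_keys=True).encode() + prev_sig
    return hmac.new(SECRET, payload, hashlib.sha256).digest()

def build_chain(records):
    sig = b"\x00" * 32                      # genesis value
    chain = []
    for rec in records:
        sig = sign(rec, sig)
        chain.append((rec, sig))
    return chain

def verify_chain(chain):
    sig = b"\x00" * 32
    for rec, recorded in chain:
        sig = sign(rec, sig)
        if not hmac.compare_digest(sig, recorded):
            return False                    # tampering anywhere breaks the chain
    return True

chain = build_chain([{"step": "wing-load simulation", "tool": "solver-x"},
                     {"step": "post-processing", "tool": "viz-y"}])
print(verify_chain(chain))                  # True
chain[0][0]["tool"] = "tampered"
print(verify_chain(chain))                  # False
```

Because each signature covers the previous one, altering any earlier record invalidates every later entry, which is what makes occasional re-signing with updated schemes feasible without losing the history's integrity guarantees.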
    • Kristin Wortman (Johns Hopkins University Applied Physics Laboratory) & Robert Klar (Southwest Research Institute)
      • 10.0401 Clarifications on Determinism: Multidisciplinary Perspectives for Avionics, Engineering, and Beyond
        Bjorn Andersson (), Dionisio De Niz (Carnegie Mellon University), William Vance (TriVector Services, Inc.), John Ross (), Roshini Ashok (US Army ), Chase Blakey (Intuitive Research and Technology ), Tuan Bui (US Army CCDC AvMC) Presentation: Bjorn Andersson - -
        It is often claimed that determinism is important for avionics. Unfortunately, the current use of the word determinism creates confusion. Often determinism is either (i) undefined, (ii) defined but depends on other concepts that are undefined, or (iii) defined but used with different meanings by different persons. This confusion is unfortunate because it hampers (i) communication and collaboration between industry participants, (ii) technology transfer from researchers to practitioners, and (iii) research feedback with practitioners informing researchers about important pain points. In response, recent work [1] presented an initial taxonomy for different meanings of the word determinism---this taxonomy asked questions and provided possible answers so each assignment of answers to all questions yields a definition of determinism. It also presented how different disciplines have used the word determinism. Unfortunately, there were two gaps in [1]. 1st gap: there are other aspects of determinism that the taxonomy did not discuss, e.g., dependence on (i) prediction method, (ii) reality vs model, (iii) time horizon, (iv) observation vs state. 2nd gap: the notion of determinism was only used for disciplines that rely on a model of computation (MoC) and it only considered some MoC. Therefore, we address these gaps by: (i) adding additional questions to the taxonomy, and (ii) extending the discussion on previous work to other MoC and to other disciplines. Addressing the 1st gap helps to clarify the notion of determinism and hence mitigate the aforementioned confusion. Addressing the 2nd gap puts the notion of determinism in perspective. We give examples highlighting that a given system deemed deterministic can become non-deterministic because either (i) the definition of determinism is changed, or (ii) because the assumptions made are changed. Below, we give one example of material that our full paper will contain. 
[1] presented nine questions such that answers to these nine questions yield a definition of determinism. However, there are additional questions that we believe should be added to this list. Q10: Is determinism about reality or a model? Suppose that the truth about reality of a system is a differential equation (DE) of the form x’(t)=x^{1/2} with the initial condition x(0)=0, where t is time (a real number) and x(t) is real-valued (as opposed to complex). Reality: One could say that this system is deterministic because (i) one can interpret this DE as a simulation where the initial state of the simulation is given by the initial condition and the derivative yields how this state changes as time progresses, and (ii) from such reasoning, there is only one solution and hence one may view the system as deterministic. On the other hand, one could say that this system is non-deterministic because the DE has two solutions, and hence reality can behave in two different ways. One solution is x(t)=t^2/4. Another solution is x(t)=0. Model: Discretization of the DE yields a difference equation that has a single solution. [1] Andersson et al., "What is Determinism? Definitions and Implications for Airworthiness and Critical Software," DASC 2024.
      • 10.0404 Real-Time Stability Monitoring of Neural Network Based Missile Guidance
        Kenneth McDonald (University of Central Florida), Zhihua Qu (University of Central Florida), Trevor McCants (Lockheed Martin MFC), Jason Beck (), Edward Daughtery (Lockheed Martin MFC) Presentation: Kenneth McDonald - -
        Missile guidance is a two-point boundary-value optimization problem that is analytically solvable only for simple cases, such as proportional navigation (ProNav) for the point-mass model. Since missile dynamics are highly nonlinear, guidance is typically solved either numerically or approximately. On the other hand, neural networks can be trained to learn complex optimal solutions such as nonlinear optimal guidance laws, and a trained model can be implemented in real time as a block in a closed-loop feedback configuration. However, as the trained neural network is validated only to fit the given set of input-output data, it can replicate optimal guidance for the trained state values but may not maintain either stability or (near-)optimal performance for untrained state values. In this paper, an explicit analytical condition is derived to test stability for the general missile guidance problem, and it is shown to be applicable to the neural network-based guidance problem with nonlinear and nonholonomic missile models. This means that the stability of any trained neural network-based guidance law can be checked online. Illustrative examples show the effectiveness of the proposed stability monitoring.
      • 10.0406 Entity Resolution for Aircraft Type Matching on Heterogeneous Aviation Data Sources
        Karna Bryan (National University) Presentation: Karna Bryan - -
        The lack of standardization in aircraft type identification across diverse aviation databases hinders comprehensive safety analysis by creating significant barriers to heterogeneous data integration. Aviation safety studies that aim to aggregate data by aircraft type in order to identify trends are time-consuming and require substantial manual data cleaning. This is largely due to the absence of common keys, inconsistent and non-standard representations of aircraft type, and the use of varying levels of granularity, such as make-model-series, make-model, ICAO codes, and popular names. These inconsistencies make it difficult to align data across sources, particularly when attempting to link safety event records to operational exposure metrics. Currently, risk calculations require manual curation, as safety event counts (the numerator) and operational exposure data such as flight hours or departures (the denominator) are stored in separate and often incompatible databases. The lack of common identifiers for aircraft type interferes with the timely and scalable analysis of safety trends across aircraft categories. To address this issue, this research applies the entity resolution (ER) framework to the aviation domain, introducing a novel data source for ER techniques. ER enables the automated matching of records across datasets that lack common identifiers by leveraging similarities in attribute content and structure. A key contribution of this work is the development of a hierarchical, hand-labeled “gold” dataset that aligns aircraft types across two differing aviation taxonomies. This dataset serves as a benchmark for training and evaluating ER models. Both a baseline feature-based ER model and a deep learning model using the state-of-the-art open-source DITTO framework are developed and assessed. DITTO uses BERT-based sequence-pair encodings combined with a binary classifier fine-tuned for record-matching tasks. 
The trained model is further tested on additional aviation data sources, demonstrating its ability to generalize to noisy, previously unseen inputs from the same domain, and thereby supporting scalable, automated aircraft type resolution for safety analytics.
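A minimal feature-based baseline for the aircraft-type matching task above can be sketched with a string-similarity score over normalized type strings. The threshold and example strings are invented for illustration; the paper's DITTO model replaces this kind of hand-crafted score with fine-tuned BERT sequence-pair encodings.

```python
from difflib import SequenceMatcher

def normalize(name):
    """Crude normalization: lowercase, drop punctuation, collapse whitespace."""
    cleaned = "".join(c if c.isalnum() else " " for c in name.lower())
    return " ".join(cleaned.split())

def match_score(a, b):
    """Similarity in [0, 1] between two normalized aircraft-type strings."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def is_match(a, b, threshold=0.75):   # threshold invented for illustration
    return match_score(a, b) >= threshold

pairs = [("Boeing 737-800", "BOEING 737 800"),
         ("Cessna 172S", "Airbus A320")]
for a, b in pairs:
    print(a, "|", b, "->", is_match(a, b))
```

Such a baseline fails precisely on the cases the abstract highlights: popular names versus make-model-series strings, and differing granularity between taxonomies, which is where the learned matcher and the hand-labeled gold dataset come in.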
      • 10.0407 Range-Calibrated Anomaly Prediction for Spacecraft Data Using Multi-head Attention and GRUs
        Shivanjali Khare () Presentation: Shivanjali Khare - -
        Anomaly prediction is a critical method used by aerospace engineers to ensure the overall reliability, safety, and operational efficiency of space missions. The proposed model takes advantage of a deep learning approach that combines autoencoders, attention mechanisms, and recurrent neural networks to detect unusual patterns in time-series data from space missions. This allows the model to identify both spatial and temporal anomalies within the data. A key innovation is a new range-calibration mechanism inspired by alpha-beta pruning. This approach dynamically detects anomalies based on threshold values and helps reduce false positives within feature ranges. Additionally, the model employs a series of sequential post-processing techniques to optimize the overall F1-score for anomaly prediction. The proposed approach is evaluated on two real-world NASA datasets: Soil Moisture Active Passive (SMAP) and the Mars Science Laboratory (MSL) rover (Curiosity). Performance results demonstrate superior performance compared to other baseline models. The approach can proactively mitigate mission-critical system failures and support efficient resource allocation and mission success in aerospace systems.
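The range-calibration idea, per-feature bands derived from nominal telemetry with anomaly flags pruned inside the calibrated band, can be sketched as follows. The band margin, feature names, and telemetry values are invented for illustration; the paper's mechanism operates on learned reconstruction scores rather than raw values.

```python
def calibrate_ranges(nominal, margin=0.2):
    """Per-feature (lo, hi) bands from nominal telemetry, widened by a margin."""
    ranges = {}
    for feature, values in nominal.items():
        lo, hi = min(values), max(values)
        pad = margin * (hi - lo)
        ranges[feature] = (lo - pad, hi + pad)
    return ranges

def flag_anomalies(sample, ranges):
    """Flag only features outside their calibrated band (prunes false positives)."""
    return [f for f, v in sample.items()
            if not (ranges[f][0] <= v <= ranges[f][1])]

# Hypothetical nominal telemetry for two spacecraft channels.
nominal = {"bus_voltage": [27.8, 28.1, 28.0, 27.9],
           "wheel_rpm":   [1500, 1520, 1480, 1510]}
ranges = calibrate_ranges(nominal)
print(flag_anomalies({"bus_voltage": 28.02, "wheel_rpm": 2100}, ranges))
```

The pruning effect is visible in the example: a value well inside its calibrated band is never flagged, so only genuinely out-of-band features survive to the post-processing stage.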
      • 10.0408 Assuring System Design Compliance with Designated Software Control Categories for System Safety
        Vu Tran (Naval Air Warfare Center - Weapons Division ) Presentation: Vu Tran - -
        Our paper proposes a set of generic safety design requirements (GSDRs) for mitigating the risks of software control malfunction in safety-critical systems. The growing reliance on software control over electromechanical control continues to drive the need for improving software system safety standards and practices. Safe system design of software control often relies on independent safety mechanisms, such as interlocks, to protect the system against software malfunction. An effective system safety program requires assurance of a risk-mitigating system design across the systems engineering lifecycle. This paper introduces a set of GSDRs derived from the MIL-STD-882E software control category (SCC) classification. Using these GSDRs helps ensure system design compliance with the designated SCC for system safety. Our GSDRs can be used to improve rigor in the implementation of hazard analysis methods such as functional hazard analysis, system requirements hazard analysis, and system hazard analysis. In addition to describing the GSDRs, the paper discusses the underlying control models that guide their development and how they can be used to enhance the hazard analysis methods mentioned.
    • Aleksandra Markina Khusid (MITRE Corporation) & Hongman Kim (Jet Propulsion Laboratory)
      • 10.0501 A Cyber-Physical Testing Framework for Evaluating Space Habitat Performance
        Herta Montoya (The University of Texas at San Antonio), Manuel Salmeron (), Motahareh Mirfarah (Purdue University), Sreehari Manikkan (Purdue University), Christian Silva (Purdue University), Shirley Dyke (Purdue University) Presentation: Herta Montoya - -
        Designing resilient and efficient space habitats is critical to support long-term human presence on the Moon or Mars. These habitat systems must not only withstand harsh and unpredictable extraterrestrial environments but also integrate autonomous fault detection and intervention capabilities. To validate smart habitat technologies and address the complexities of operational scenarios, innovative testing methods are required. This study presents the development and implementation of a thermomechanical cyber-physical testing (CPT) framework, a novel and cost-effective approach that couples real-time numerical simulations with physical components to evaluate habitat system responses under extreme dynamic conditions. Unlike conventional testing or modeling alone, this hybrid methodology captures critical emergent behaviors and interactions vital to system resilience that are otherwise inaccessible. The CPT framework is demonstrated through a series of scenarios utilizing a system-of-systems lunar habitat testbed integrated with an autonomous health management system. Scenarios are designed to replicate the thermomechanical cascading effects that would occur in a lunar habitat due to a micrometeorite impact damaging its structural protective layer during lunar night. The framework allows for real-time execution of both damage and repair interventions on the numerical subsystem, while simultaneously imposing the resulting conditions on physical components. Enabling and performing thermomechanical CPT tests for complex systems undergoing disruptions, with real-time damage and repair capabilities, can be far from straightforward. Thus, this paper outlines the experimental methods and systematic approaches necessary to design and configure tests that successfully execute challenging experiments, highlighting the challenges and solutions in recreating realistic fault scenarios. The CPT approach offers a new pathway for testing ideas and enhancing autonomous fault detection and decision-making in off-world habitats, providing crucial insights into fault propagation, system-level interdependencies, and the limitations of current diagnostic strategies.
      • 10.0504 Model-Based Automatic Requirement Generation Using Requirement Patterns
        Luke Nakatsukasa (Booz Allen Hamilton), Jenny Rafizadeh () Presentation: Luke Nakatsukasa - -
        This paper describes a method for automatically generating textual requirement statements from non-requirement Systems Modeling Language (SysML) or Unified Architecture Framework Modeling Language (UAFML) model information using software methods. This solution addresses common systems engineering challenges encountered in programs that develop requirements with Model-Based Systems Engineering (MBSE), chief among them high cost and an aggressive schedule. The approach leverages the digital model as the source of truth for automatically generated requirements. The solution, Requirements Auto Generator (RAGr), identifies and translates modeling patterns into requirements by populating predetermined requirement patterns. A requirement pattern is a series of slots, each containing a word or phrase. Modeling patterns are recurring structures of small groups of elements, relationships, and attributes in the digital model that translate consistently to requirements via requirement patterns. RAGr populates specific slots with text derived from model metadata structures to create complete textual requirement statements. RAGr also creates fully formed requirement elements within the model, along with their traceability relationships. This automation performs much of the necessary but repetitive and tedious requirement development work, so systems engineers can perform both modeling and requirements development with greater efficiency and accuracy. To demonstrate this method, a scalable software solution was implemented to translate SysML and UAFML into natural-language textual requirement statements. The implemented solution uses predefined but configurable requirement patterns, so that pattern slot wording may be customized to programmatic needs. This solution reduces programmatic cost and schedule duration by automatically generating requirements at speed while remaining consistent and integrated with the system model.
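The slot-filling step of pattern-based requirement generation can be sketched in a few lines. The pattern wording, slot names, and `generate_requirement` function below are illustrative, not RAGr's actual API; in the real tool the slot values are derived from SysML/UAFML element metadata rather than a hand-written dictionary.

```python
# A requirement pattern is a string with named slots, populated from
# model element metadata (hypothetical pattern and slot names).
PATTERN = "The {subject} shall {action} the {object} within {constraint}."

def generate_requirement(pattern: str, slots: dict) -> str:
    """Populate each slot of a requirement pattern to produce a
    complete textual requirement statement."""
    return pattern.format(**slots)

# Metadata as it might be extracted from a hypothetical SysML element.
slots = {
    "subject": "Flight Computer",
    "action": "transmit",
    "object": "telemetry packet",
    "constraint": "100 ms of acquisition",
}
req = generate_requirement(PATTERN, slots)
print(req)
```

A production tool would also create the requirement element in the model and wire up traceability relationships, which this sketch omits.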
      • 10.0506 Modelling and Testing a Real-time Human-In-The-Loop Controlled Lunar Landing Simulator
        Miguel Neves (German Aerospace Center - DLR) Presentation: Miguel Neves - -
        The International Space Exploration Coordination Group (ISECG), comprised of 28 space agencies, is advancing the Global Exploration Roadmap (GER) to include the establishment of the Lunar Orbital Platform-Gateway (LOP-G), a space station orbiting the Moon. This platform will facilitate astronaut landings on the Moon's south pole. This paper emphasizes the critical role of lunar landers in crewed Moon missions, with specifications tailored to mission requirements such as size, propulsion, payload, landing gear, energy systems, and controllers. It details the development of a Human-In-The-Loop (HITL) controlled lunar lander simulator at the German Aerospace Center (DLR), featuring a dynamic model with Reaction Control System (RCS) thrusters and a main thruster for landing. A Hertzian-based ground contact detection model is used for simulating landing forces. Despite the presence of autopilot Guidance, Navigation, and Control (GNC) systems, manual control by astronauts, as demonstrated in the Apollo missions, remains essential. This paper focuses on the astronaut’s manual control of the lunar lander. The lunar lander is integrated within DLR's Robotic Motion Simulator (RMS) framework, including a low-gravity-adapted washout filter and Human Machine Interface (HMI) devices for manual steering. Simulation scenarios, including those based on Apollo 11 mission data, are recreated and tested. The framework's evaluation of controller performance, pilot sensor data, and lander path supports advancements in spacecraft design, GNC validation, and astronaut flight control training.
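A Hertzian-style ground contact model, as mentioned above, computes a normal force that grows with penetration depth to the 3/2 power. The sketch below is a generic textbook form with illustrative coefficients and a simple damping term, not DLR's actual contact model.

```python
def hertz_contact_force(penetration, k=1.0e6, damping=2.0e3, velocity=0.0):
    """Hertzian-style normal contact force F = k * d**1.5 plus a simple
    penetration-scaled damping term (all coefficients illustrative)."""
    if penetration <= 0.0:
        return 0.0                     # footpad not in contact: no force
    return k * penetration ** 1.5 + damping * penetration * velocity

# Landing-gear footpad penetrating 2 cm while compressing at 0.5 m/s.
force = hertz_contact_force(0.02, velocity=0.5)
print(force)
```

The nonlinear d**1.5 stiffness is what distinguishes Hertzian contact from a plain linear spring and gives smoother touchdown force ramp-up in the simulation.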
      • 10.0507 SAFER: A Generative Model-Based Framework for Safety Requirements Verification
        Yaniv Mordecai (Amazon), Noga Chemo (Tel Aviv University) Presentation: Yaniv Mordecai - -
        We introduce a framework for Foundational Analysis of Safety Engineering Requirements (SAFER), a model-driven methodology supported by Generative AI to improve the generation and analysis of safety requirements for complex safety-critical systems. Safety requirements are often specified by multiple stakeholders with uncoordinated objectives, leading to gaps, duplications, and contradictions that jeopardize system safety and compliance. Existing approaches are largely informal and insufficient for addressing these challenges. SAFER enhances Model-Based Systems Engineering (MBSE) by consuming requirement specification models and generating the following results: (1) mapping requirements to system functions, (2) identifying functions with insufficient requirement specifications, (3) detecting duplicate requirements, and (4) identifying contradictions within requirement sets. SAFER provides structured analysis, reporting, and decision support for safety engineers. We demonstrate SAFER on an autonomous drone system, significantly improving the detection of requirement inconsistencies, enhancing both efficiency and reliability of the safety engineering process. We show that Generative AI must be augmented by formal models and queried systematically, to provide meaningful early-stage safety requirement specifications and robust safety architectures.
      • 10.0509 Coordinated World Model Learning for Deep Space Robot Teams
        Andrzej Skulimowski (AGH University) Presentation: Andrzej Skulimowski - -
        Advances in launcher and space robotics technologies have opened new possibilities for planetary multi-robot missions. However, significant technological challenges remain, particularly in the development of models and software to enable cooperation and coordination among fully autonomous robot teams operating without continuous human supervision. This challenge is especially relevant for planetary surface robots in deep-space missions beyond Mars' orbit—such as those targeting the icy moons of giant planets, where the long travel time for control and feedback signals between Earth and the robots renders real-time remote control or supervision problematic. A crucial component of all mobile autonomous system software is the robot world model (WM). This is a complex system that integrates various machine learning methods, pattern recognition, image understanding algorithms, Simultaneous Localization and Mapping (SLAM), and a dynamic knowledge base. While significant progress has been made in developing WMs for single robots, the theory and practice of team WM learning lack a unified framework. This paper addresses the above gap from a general perspective and introduces a novel concept for autonomous multi-robot WM learning. The proposed approach integrates synergistic methods such as multi-agent model-based reinforcement learning (MARL/MBRL), imitation learning, federated learning, transfer learning, and other collaborative learning paradigms and techniques. Multi-robot learning will also be related to self-supervised learning and to the combinations of individual and group learning outcomes. We will propose a group WM-building process that leverages coordination mechanisms enhanced by anticipatory behaviors within planetary robot teams. The resulting WMs will support simultaneous solution of multi-criteria MRTA-MAPP (Multi-Robot Task Allocation and Multi-Agent Path Planning) problems. The risk of damage will be minimized while maximizing exploration efficiency, measured by the expected scientific benefits achieved within the shortest possible time. When presenting the above concepts, we will refer to the robotic exploration of icy moons of giant planets potentially harboring subsurface water reservoirs. The presentation will be illustrated by an example of a four-robot mission exploring an icy moon surface, searching for fissures and water plumes in a way that is optimal with respect to exploration time and expected scientific yield. In summary, we will compare the reliability and costs of fully autonomous single-robot missions to multi-robot missions to the outer planets of the Solar System and their moons. We will demonstrate that the risks encountered during the exploration of unknown deep-space environments can be minimized when implementing and deploying efficient WM-building algorithms based on combinations of multiple group ML techniques. We will show that the synergistic yields resulting from deploying multiple robots in team missions justify the additional expenses incurred by larger payload and higher software development costs. A numerical example will refer to a potential Europa lander mission equipped with multiple rovers, hopping robots, and cryobots.
      • 10.0510 A High-Fidelity Actuator Model for Robotics Autonomy in Space and Beyond
        Preston Rogers (NASA Jet Propulsion Lab), Joseph Bowkett (Jet Propulsion Laboratory), Paul Backes (Jet Propulsion Laboratory) Presentation: Preston Rogers - -
        This work presents a generalized actuator modeling approach that enhances simulation accuracy, with applications extending beyond Mars exploration. Actuators are integral components in space robotics, where precision and reliability are paramount. The circa-2023 Sample Retrieval Lander (SRL) design exemplifies this, relying on a robotic arm to retrieve Mars sample tubes for return to Earth. With Earth–Mars round-trip communication delays on the order of minutes, direct human teleoperation is unsafe and impractical, necessitating autonomous execution through onboard kinematic controllers. Because the robotic arm represents a single point of failure in such a critical mission, we develop an exceptionally high-fidelity simulation environment to validate the controller design. Unlike many system simulations that idealize actuator performance to simplify computations, our model captures key dynamics often overlooked in traditional approaches. Its accuracy is validated through comparisons with real-world actuator tests, demonstrating both mission relevance for SRL and potential applicability to future space robotics systems.
      • 10.0511 Automated Consistency Checks for Contract Data and MBSE Models Using Natural Language Processing
        Jyotirmay Gadewadikar (MITRE), Aleksandra Markina Khusid (MITRE Corporation) Presentation: Jyotirmay Gadewadikar - -
        Maintaining consistency between model-based systems engineering (MBSE) models and their corresponding Contract Data Requirement List (CDRL) deliverables is crucial in large-scale systems engineering projects. This is especially important when CDRLs are not generated fully and automatically from the MBSE tools. Traditionally, the consistency-check task is manual, labor-intensive, and error-prone, as it involves line-by-line review of documents against their associated models, which are usually in vastly different formats. This paper introduces an automated tool leveraging machine learning techniques to enhance the efficiency and accuracy of consistency checks between CDRLs and Cameo diagrams. Utilizing Python and its libraries, the tool parses the Cameo models and corresponding CDRLs, employing sequence matching to identify and evaluate document-model mappings of requirements. The implementation of this tool significantly reduces the time required for consistency checks, as demonstrated by a quantitative time study showing a reduction from 22.5 minutes to just 4 seconds for 162 comparisons, with 100% accuracy. Contractors often deliver large increments of CDRLs close to their due dates, leaving the Government little time to perform effective reviews without the assistance of such a tool. The tool's limited dependencies facilitate broader applicability across different projects. This research not only addresses existing gaps in automated consistency checks but also provides a foundation for future advancements in the field. The findings underscore the potential for substantial time savings and improved accuracy, while highlighting the necessity of human oversight in handling sensitive information. Future research may further refine and automate these processes, enhancing their utility across diverse programs.
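The sequence-matching step can be illustrated with Python's standard-library `difflib.SequenceMatcher`. Using `difflib` here is an assumption (the abstract names only "Python and its libraries"), and the function name and 0.9 threshold are illustrative; the idea is simply to pair each model requirement with its closest document statement and flag low-similarity pairs for human review.

```python
from difflib import SequenceMatcher

def match_requirements(model_reqs, cdrl_reqs, threshold=0.9):
    """Map each model requirement to the most similar CDRL statement;
    pairs scoring below the threshold are flagged as potential
    inconsistencies (illustrative matching logic, not the tool's)."""
    report = []
    for m in model_reqs:
        # Best match by similarity ratio over all CDRL statements.
        best = max(cdrl_reqs, key=lambda c: SequenceMatcher(None, m, c).ratio())
        score = SequenceMatcher(None, m, best).ratio()
        report.append((m, best, score, score >= threshold))
    return report

model_reqs = ["The sensor shall sample at 10 Hz."]
cdrl_reqs = ["The sensor shall sample at 10 Hz.",
             "The radio shall transmit at 2 W."]
report = match_requirements(model_reqs, cdrl_reqs)
print(report[0][2])
```

Exact duplicates score 1.0, so only pairs that drift apart in wording fall below the threshold and surface to the reviewer, which is what makes the 162-comparison check fast.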
      • 10.0513 Mission-Oriented Site Selection for Future Lunar Mining Operations
        Ebru Tulu (University of Turkish Aeronautical Association), Yağmur Kaya (University of Turkish Aeronautical Association), Deniz Kılıc () Presentation: Ebru Tulu - -
        This study presents a systematic approach to identifying the most suitable mining regions on the Moon. The developed method simultaneously evaluates various data layers to determine the most favorable areas for potential in-situ resource utilization (ISRU). The analysis integrates geochemical data such as iron (Fe), titanium (Ti), and thorium (Th); topographic variables like surface slope and elevation; and environmental factors including sunlight duration and regolith structure. The Analytic Hierarchy Process (AHP) method was used to assign values to each parameter according to its importance for lunar mining operations. The goal is to strike a balance between scientific potential and engineering feasibility. Locations were ranked based on composite suitability scores, enabling prioritization of regions that are both resource-rich and operationally accessible. The multilayered analysis allows for a more detailed and balanced spatial evaluation of the lunar surface, revealing areas where advantages intersect. The study adopts a flexible decision-making model that can adapt to various mission scenarios, from robotic explorations to crewed missions. The AHP method enables an objective assessment of the importance of criteria, reducing subjectivity in site prioritization. Preliminary results indicate that a spatial trade-off is required between high mineral content and challenging terrain conditions. This method has facilitated the creation of detailed suitability maps due to the abundance of resource data. The validity of the model was tested by comparing it with known geological features and mission planning documents. Furthermore, the resolution limitations of orbital data were addressed, and strategies were proposed to reduce spatial uncertainties in surface characterization. This research contributes to the field of space resource utilization by providing: (1) a transparent and reproducible decision support tool adaptable to different mining objectives; (2) a quantitative evaluation of the effect of parameter weights on spatial prioritization; and (3) foundational insights for future in-situ validation missions. Although the proposed system focuses on lunar applications, the methodology can be applied to Mars or asteroid exploration missions with appropriate parameter adjustments. In conclusion, integrated approaches that systematically assess both technical and economic factors can significantly improve pre-mission planning and reduce risks for future commercial and scientific ventures in space.
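The AHP weighting and composite-scoring steps can be sketched as follows. The criteria, pairwise judgments, and site values below are invented for illustration, and the weights use the common column-normalized-mean approximation of AHP rather than the full eigenvector computation.

```python
import numpy as np

# Hypothetical criteria and pairwise-comparison judgments: e.g. ore
# content judged 3x as important as slope and 2x as important as sunlight.
criteria = ["ore content", "slope", "sunlight"]
A = np.array([[1.0, 3.0, 2.0],
              [1/3, 1.0, 0.5],
              [0.5, 2.0, 1.0]])

# AHP weights via column-normalized mean (approximation of the
# principal eigenvector of the comparison matrix).
weights = (A / A.sum(axis=0)).mean(axis=1)

# Candidate sites scored on each criterion, normalized to [0, 1].
sites = np.array([[0.8, 0.4, 0.9],   # site 0: resource-rich, rougher terrain
                  [0.5, 0.9, 0.6]])  # site 1: easier terrain, poorer ore
scores = sites @ weights             # composite suitability scores
best = int(np.argmax(scores))
print(dict(zip(criteria, weights.round(3))), scores.round(3), best)
```

In the actual study each "site" row would be a map cell built from orbital data layers, so the weighted sum produces a suitability map rather than a two-row ranking.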
      • 10.0514 Multi-Agent Mental Models for Cooperation of Heterogeneous Robots in a Space Scenario
        Adrian Bauer (German Aerospace Center - DLR), Sant Brinkman (ESTEC/ESA), Anne Köpken (German Aerospace Center), Nesrine Batti (DLR - Deutsches Zentrum für Luft- und Raumfahrt), Jörg Butterfaß (German Aerospace Center - DLR), Tristan Ehlert (Deutsches Zentrum für Luft- und Raumfahrt), Emiel Den Exter (), Werner Friedl (), Philipp Knestel (German Aerospace Center - DLR), Florian Lay (German Aerospace Center - DLR), Xiaozhou Luo (German Aerospace Center - DLR), Ajithkumar Narayanan Manaparampil (German Aerospace Center - DLR), Luisa Mayershofer (German Aerospace Center - DLR), Antonin Raffin (), Anne Reichert (German Aerospace Center - DLR), Annika Schmidt (German Aerospace Center - DLR), Florian Schmidt (German Aerospace Center - DLR), Rute Luz (), Peter Schmaus (German Aerospace Center (DLR)), Daniel Leidner (), Thomas Krueger (European Space Agency), Neal Lii (German Aerospace Center) Presentation: Adrian Bauer - -
        With each space robotic mission, a team of specialists on ground is usually required to support the robot throughout its mission. When scaling from single robots to fleets of robots, this will no longer be a viable way of conducting missions. To operate more autonomously and adapt to dynamic scenarios without relying on a team of human experts, robots must be able to create a mental model of their environment. In the Surface Avatar ISS-to-ground technology demonstration telerobotic mission, astronauts onboard the International Space Station command a team of heterogeneous robots in our lab. This work focuses on the final session with NASA astronaut Jonny Kim where he was tasked with commanding the robots in a collaborative manner in order to complete the assigned tasks. To equip a complex robot such as DLR's Rollin' Justin with the ability to collaborate and coordinate with other robots, we deploy a combination of models and heuristics that allow it to create a mental model of its environment including its surrounding agents. Rollin' Justin also shares the estimated states generated in the mental model with the other robots. In this paper, we describe the application of this concept to our space experiment, present the models used to create the mental model, and evaluate it based on data collected in the final Surface Avatar experiment.
      • 10.0515 Simulink-Flight Software Bridge for Model Based Testing
        Sherif Matta (NASA - Johnson Space Center), Marc Carbone (NASA - Glenn Research Center) Presentation: Sherif Matta - -
        Model-Based Testing (MBT) has become a powerful tool for validating complex flight software systems, particularly in safety-critical space applications. This paper presents a novel MBT framework that leverages Simulink® with operational flight software used in space missions, including applications within NASA’s Gateway program. The core of our approach is a Simulink–Flight Software Bridge, designed to allow real-time sending and receiving of telemetry and commands with the flight software using the Space Packet Protocol (SPP). To facilitate rapid test development, the MBT bridge blockset is auto-generated from standard XTCE interface definition files. We developed a tool to parse the XTCE files, extract telemetry definitions and command arguments, and automatically configure the Simulink bridge accordingly. The resulting framework supports open- and closed-loop simulations, Model-in-the-Loop (MIL), Software-in-the-Loop (SIL), Hardware-in-the-Loop (HIL), and functional validation against interface specifications. Our implementation improves testing coverage and early bug detection in mission-critical spaceflight software. The solution can be reused with other applications that have similar interface architectures.
    • Daniel Clancy (Georgia Tech Research Institute) & Georges Labrèche (Tanagra Space) & Pooria Madani (Ontario Tech University)
      • 10.0602 Autonomous Tip-and-Cue Earth-Observing Constellation Tasking with Reinforcement Learning
        Mark Stephenson (Unversity of Colorado, Boulder), Hanspeter Schaub (University of Colorado) Presentation: Mark Stephenson - -
        While Earth-observing constellations often collect images from an a priori request list, this paradigm greatly limits the phenomena that can be observed: emergent and unpredictable events are often valuable imaging targets. Although tip-and-cue architectures exist to image such events, usually with a "tipping" leader satellite that cues observations by the follower satellite(s), these lack the flexibility or capacity desired from modern Earth-observing constellations. In this work, reinforcement learning is demonstrated as a way of autonomously and scalably tasking a homogeneous constellation of satellites with scanning and imaging instruments. A per-agent policy is learned that is executable onboard each satellite and able to respond to the high-uncertainty environment, solving a problem that traditional pre-planning approaches cannot handle and demonstrating collaborative behavior between agents. As satellites are added to a constellation, the performance of the satellites working together grows faster than the number of satellites. Depending on its location within the constellation, a satellite may assume a specific role that biases it towards scanning or imaging.
      • 10.0603 Overcoming Challenges of Realism in Competitive Space-Based Reinforcement Learning with AstroCraft
        Loren Anderson (Huntington Ingalls Industries), Richard Erwin (Air Force Research Laboratory), Moses Hansen (Purdue University), Rehman Qureshi (Auburn University), Victoria Outkin (), Anthony Aborizk (University of Florida), Rohan Kulkarni (Columbia University), Elyssa Dennery (), Sahitya Senapathy (Endeavor AI, Inc), Srija Makkapati (), William Flathmann (Joint Warfare Analysis Center), Kathryn Alcalde (), Berkan Dokmeci (Washington University in St. Louis), Shahin Firouzbakht (Aerocine Ventures Inc), Edfil Basan (Vereer) Presentation: Loren Anderson - -
        Reinforcement learning (RL) has emerged as a popular choice for training artificial intelligence algorithms in competitive scenarios due to recent successes in achieving superhuman performance in board and video game environments. However, there is currently a lack of open-source, competitive environments designed to exhibit challenges of realism in the domain of spacecraft control to assist in the transfer of RL algorithms to physical systems. We present AstroCraft, a space-based capture-the-flag environment comprising two opposing teams, each containing multiple maneuverable satellites and a space station in geosynchronous equatorial orbit. The primary goal of each team is to maneuver a satellite to capture a flag at the opposing player’s station and return the flag to its own station without being tagged out by opposing satellites. We first perform an experimental study on AstroCraft that elicits challenges from realisms such as long time horizons, complex dynamics, and stringent fuel constraints. Throughout, we conduct experiments on similar environments from the literature, indicating that many of these realisms are not significantly present and do not degrade performance. Finally, we design an RL algorithm that first gathers data from a heuristic opponent competing against itself; constructing this dataset enables the application of Conservative Q-Learning for offline pretraining before further online finetuning. This algorithm produces a model that is superior to the original heuristic opponent. We believe that the lessons learned from our experiments on AstroCraft provide promising avenues for constructing RL algorithms that overcome challenges of realism in simulation and physical systems alike.
      • 10.0605 A Physics-Informed Machine Learning Approach to the Jet Engine Aircraft Fuel Consumption Problem
        Francisco Velásquez-SanMartín (University of Navarra), Marta Zárraga-Rodríguez (Tecnun-University of Navarra), Xabier Insausti (Tecnun (Universidad de Navarra)), Ruben Armañanzas (University of Navarra), Jesús Gutiérrez-Gutiérrez (Tecnun (University of Navarra)) Presentation: Francisco Velásquez-SanMartín - -
        Every year, the number of flights worldwide increases, and fuel consumption remains a major issue to be addressed. Aviation industry stakeholders are constantly urged to reduce fuel consumption due to operational and environmental standards, and accurate estimation of fuel consumption is a key element in achieving such reduction. Physics-based models provide valuable insights since they rely on the governing physical equations of the system to study the interaction between aerodynamic, engine, and design parameters and to better understand how fuel consumption occurs. However, such models require knowledge of the exact values of a large number of parameters, which are difficult to obtain and must be approximated. Therefore, these models, although valid, do not fully capture phenomena that are inherent to real-world flight data. On the other hand, data-driven models fit accurately to real-world flight data, but may give physically implausible predictions in some scenarios and are less interpretable from a physics-based perspective. The field of Physics-Informed Machine Learning opens up a promising path for merging both physics-based and data-driven models by embedding known physical behavior into the training process of learning algorithms. This work proposes a hybrid framework based on Physics-Informed Machine Learning; more specifically, it uses Physics-Informed Neural Networks (PINNs) to model jet-engine fuel consumption during the aircraft’s climb flight phase. Our framework incorporates a mathematical model of the climb phase into the training process of a neural network as hard constraints or as penalty terms in the loss function. The mathematical model consists of a first-order nonlinear ordinary differential equation for the aircraft’s rate of climb as a function of time, together with the fuel flow rate as a function of the rate of climb. Furthermore, the neural network is trained on real-world flight data such as altitude, true airspeed, and rate of climb, with the objective of predicting the aircraft’s fuel consumption. The resulting hybrid framework demonstrates predictive accuracy of aircraft fuel consumption across different selected flights, thus improving the generalization capability over purely physics-based and purely data-driven models and providing a better understanding of fuel consumption in a real-world scenario. Data from these flights correspond to a specific jet-engine aircraft model and are obtained from a publicly available database.
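The penalty-term variant of this idea can be sketched as a composite loss: a data misfit plus a physics residual that ties the predicted fuel flow to the rate of climb derived from the predicted altitude. The ODE and fuel-flow law below are simplified stand-ins for the paper's model, and all names and coefficients are illustrative.

```python
import numpy as np

def physics_informed_loss(t, h_pred, ff_pred, h_obs, ff_of_roc, lam=1.0):
    """PINN-style composite loss (illustrative): data misfit on altitude
    plus a penalty forcing predicted fuel flow to obey a fuel-flow law
    evaluated at the finite-difference rate of climb dh/dt."""
    data_loss = np.mean((h_pred - h_obs) ** 2)
    roc = np.gradient(h_pred, t)                  # rate of climb from altitude
    physics_residual = ff_pred - ff_of_roc(roc)   # mismatch with the physics law
    physics_loss = np.mean(physics_residual ** 2)
    return data_loss + lam * physics_loss

# Toy climb: constant 5 m/s climb and a hypothetical affine fuel-flow law.
t = np.linspace(0.0, 100.0, 101)
h = 2000.0 + 5.0 * t
ff_law = lambda roc: 0.2 + 0.01 * roc
loss = physics_informed_loss(t, h, ff_law(np.full_like(t, 5.0)), h, ff_law)
print(loss)
```

In an actual PINN this loss would be minimized over the network weights, with the derivative obtained by automatic differentiation rather than `np.gradient`; the sketch only shows how the data and physics terms combine.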
      • 10.0609 Predicting Cognitive State and Workload of Aviators with Machine Learning of Physiological Data
        Jeffrey Johnston (United States Military Academy) Presentation: Jeffrey Johnston - -
        Aviation is considered a dangerous profession, and pilot error has been determined to be the cause of most accidents. Stress and fatigue are mental states common to aviators that can be overwhelming or distracting, leaving them susceptible to fatal mistakes. Stress and fatigue often manifest as physiological signs such as an elevated heart rate. These changes can be detected and used to alert aviators to potentially dangerous states. In this work, our aim is to predict the cognitive state and workload of the aviator through a machine learning model that relies on the ECG signal. We use data provided by the US Army Aeromedical Research Laboratory (USAARL), which conducted a study with Army helicopter pilots flying Upset Prevention and Recovery Training (UPRT) in a fixed-wing aircraft. During this study the aviators flew the aircraft through unsettling maneuvers while wearing sensors to collect their physiological data. We apply the Catch22 feature extraction technique as well as the Entropy and Complexity technique to both the raw ECG data and the derived heart rate, generating interpretable features that directly reflect the pilot’s mental state. These features prove effective in distinguishing between cognitive states and workload levels when used to train a predictive machine learning model. With hold-out validation, our highest-performing model, using raw ECG data with the Catch22 extraction method, classifies high versus low workload with 95% accuracy, 86% precision, and 90% recall. Our results demonstrate the potential of ECG-based predictive models for real-time monitoring of aviator cognitive performance and safety.
      • 10.0612 Mission-specific Trajectory Dispersions for T-SCVx Landing Guidance
        Waylon Lee (Texas A&M University), Carson Capps (Texas A&M University), Jack Goldhammer (Texas A&M University), Gregory Chamitoff (Texas A&M University) Presentation: Waylon Lee - -
        With multiple missions planned to target the Moon’s south pole, robust and high-performance powered descent guidance algorithms are critical to ensuring safe and autonomous landings. One leading approach is the six degree-of-freedom (6DOF) Successive Convexification (SCVx) algorithm, currently utilized in vehicles such as the Falcon 9. SCVx works by iteratively solving a non-convex optimal control problem by approximating it as a sequence of convex subproblems, efficiently generating real-time, constraint optimized trajectories. To further enhance SCVx’s robustness and convergence speed, recent work by Briden introduced a transformer-based warm-starting method known as T-SCVx. T-SCVx maintains the rigor of SCVx’s convex optimization framework while enhancing its responsiveness in dynamic environments, especially where onboard computational resources or time are constrained. In this work, we train a transformer neural network with mission-specific trajectory dispersions to increase safety and convergence reliability of T-SCVx, extending on previous efforts of training a dataset for landing. By providing feasible initial trajectory generations for preliminary targeting activities during the coasting phase of flight prior to powered descent guidance, under performing burns may be effectively accounted for by the trained neural networks, increasing convergence robustness. This methodology of customized neural network training with mission specific dispersions data can be useful for similar initial conditioned/mass properties vehicles landing at or near the same landing site, such as multiple Starship landings. In this work, the simulation is of a transfer from the Artemis Gateway proposed 9:2 Near Rectilinear Halo Orbit (NRHO) around the Moon to a low-lunar oribit (LLO) transitioning to T-SCVx powered terminal descent. 
During training, the initial NRHO-to-LLO trajectory leg is simulated in NASA Goddard's GMAT with an underperforming engine modeled, producing a diverse set of initial conditions for training terminal descent. After the transformer has been trained and tuned, the simulation is run in the Space Teams PRO environment, starting at a variety of terminal descent points derived from the simulated dispersion. This simulation is used to validate the method's ability to maintain accuracy and safety in scenarios with significant trajectory variation. Our mission-informed, dispersion-based training outperformed our spherical-distribution-trained implementation in convergence speed for cases whose initial conditions fall within the training distributions of both transformers (average 8.18 ± 5.16 seconds at 3 sigma). The mission-informed transformer also shows consistently smaller variance in solve times across all cases while reliably converging on every test case.
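The trust-region iteration at the heart of successive convexification can be illustrated with a deliberately minimal sketch: a one-dimensional nonconvex cost is linearized about the current iterate, the resulting convex subproblem is solved in closed form, and the trust region shrinks when the true cost fails to decrease. The toy objective and all names are illustrative, not the flight implementation described in the abstract.

```python
import numpy as np

def scvx_step(grad_f, x, radius):
    """One successive-convexification step: linearize the nonconvex
    objective about x; minimizing the linear model inside a trust
    region of the given radius gives a full step against the gradient."""
    return x - radius * np.sign(grad_f(x))

def scvx(f, grad_f, x0, radius=0.5, iters=50, shrink=0.9):
    x = x0
    for _ in range(iters):
        x_new = scvx_step(grad_f, x, radius)
        if f(x_new) < f(x):      # accept only if the TRUE cost decreased
            x = x_new
        else:                    # otherwise shrink the trust region
            radius *= shrink
    return x

# Toy nonconvex objective with a minimum at x = 2
f = lambda x: (x**2 - 4) ** 2
grad = lambda x: 4 * x * (x**2 - 4)
x_star = scvx(f, grad, x0=3.0)
```

Real SCVx solves a constrained convex program (e.g., a second-order cone program) at each step rather than this closed-form scalar update, but the accept/shrink logic is the same skeleton.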
      • 10.0615 GoDSAT: Reinforcement Learning Based Satellite Constellation Sensor Tasking Framework
        Amir Saeed (Johns Hopkins University), Nicholas Realuyo (United States Air Force), Alhassan Yasin (Johns Hopkins University/Applied Physics Laboratory), Benjamin Johnson (Accenture Federal Services), Benjamin Rodriguez (Johns Hopkins University/Applied Physics Laboratory) Presentation: Amir Saeed - -
        Timely and continuous observation of ground targets using space-based assets is critical for mission success in domains such as defense surveillance, disaster response, and environmental monitoring. However, persistent custody of targets, particularly using multi-satellite constellations, remains a challenging problem due to the dynamic nature of orbital mechanics and tasking constraints. Traditional tasking strategies often rely on static satellite-target pairings or pre-planned schedules that lack adaptability and do not generalize across scenarios. In this work, we present a novel reinforcement learning (RL)-based approach to satellite sensor tasking that enables dynamic decision-making for maintaining target custody. Our framework integrates a previously trained Q-learning model into a new MATLAB environment called GoDSAT (Government-Oriented Dynamic Satellite Tasking). GoDSAT was developed to simulate satellite behavior and target interactions over time, supporting rapid prototyping and evaluation of tasking strategies across diverse scenarios. At the core of our innovation is a custody transfer algorithm that allows satellites to dynamically hand off tracking responsibilities. Rather than relying on fixed satellite-target pairs, the algorithm reassigns custody at every time step to the satellite closest to the target in Euclidean distance. This enables adaptive scheduling and continuous observation by leveraging the full capability of a satellite constellation. The imported Q-table, trained specifically to optimize persistent custody behavior, allows the system to execute intelligent sensor tasking policies without requiring retraining or additional learning overhead in the new environment. Our results demonstrate that this approach effectively maintains custody over dynamic ground targets through coordinated tasking and handoff, extending prior work that was limited to static configurations. 
The reusable nature of the RL policy also accelerates experimentation, enabling researchers and operators to explore different constellation configurations, orbital scenarios, and scheduling constraints. This work introduces a flexible and extensible simulation platform for real-time satellite tasking experimentation and showcases the potential of reinforcement learning to scale sensor scheduling strategies in multi-agent space systems. By combining a trained policy with a dynamic custody transfer mechanism, we take a step toward more autonomous, responsive, and resilient satellite operations.
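The custody transfer rule described above, reassigning each target at every time step to the nearest satellite in Euclidean distance, can be sketched as follows; the data layout and names are assumptions, not GoDSAT's actual interfaces.

```python
import math

def assign_custody(satellites, target):
    """Reassign custody to the satellite closest to the target in
    Euclidean distance; run once per time step for each target.
    `satellites` maps satellite name -> position tuple."""
    return min(satellites, key=lambda name: math.dist(satellites[name], target))

# Toy snapshot of three satellite positions and one target (shared frame, km)
sats = {"sat_a": (7000.0, 0.0, 0.0),
        "sat_b": (0.0, 7000.0, 0.0),
        "sat_c": (4500.0, 4500.0, 500.0)}
owner = assign_custody(sats, (4000.0, 4000.0, 0.0))
```

Because the assignment is recomputed every step, handoff happens automatically as orbital motion changes which satellite is closest.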
      • 10.0618 Stealth in Orbit: Poisoning Federated Learning Models in Satellite Constellations
        Pooria Madani (Ontario Tech University), Carolyn Mc Gregor (University of Ontario Institute of Technology) Presentation: Pooria Madani - -
        Direct data sharing in satellite constellations is often infeasible due to privacy concerns, limited bandwidth, intermittent connectivity, and strict energy constraints. These challenges make traditional centralized machine learning approaches impractical in space-based systems. Federated Learning (FL) addresses this by allowing each satellite to train a model locally on its own data and share only model updates with the rest of the constellation. This decentralized approach reduces communication overhead, preserves data privacy, and enhances the autonomy and resilience of in-orbit machine learning. However, the same decentralized nature that makes FL appealing for space applications also introduces new vulnerabilities. Among them, backdoor poisoning attacks, where adversaries manipulate model behavior through malicious local updates, represent a particularly serious and underexplored threat in federated satellite systems. In this paper, we investigate backdoor injection in FedSpace, a federated learning framework for satellite constellations. We propose a patch-based poisoning strategy in which a small number of compromised satellites embed visually plausible triggers into their local training data. These triggers are crafted to induce targeted misclassifications (e.g., hiding strategic infrastructure) while maintaining high overall accuracy on clean, unaltered inputs. We evaluate our approach across diverse configurations and constellation topologies, including varying poisoning rates and orbital communication structures. Our results show that stealthy and effective backdoors can be implanted with limited adversarial access, revealing a critical vulnerability in space-deployed federated systems. This work calls attention to the urgent need for robust security measures in future FL-enabled satellite infrastructures.
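A patch-based poisoning step of the kind described, stamping a small trigger into a fraction of a compromised satellite's local training images and relabeling them to the attacker's target class, might look like the following sketch (names, patch placement, and parameters are illustrative):

```python
import numpy as np

def poison(images, labels, target_class, patch=3, rate=0.1, seed=0):
    """Stamp a small square trigger into a fraction of the local training
    images and relabel them to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, :patch, :patch] = 1.0   # visually small corner trigger
    labels[idx] = target_class          # targeted misclassification
    return images, labels, idx

# 100 clean 8x8 "images", all labeled class 0
X, y = np.zeros((100, 8, 8)), np.zeros(100, dtype=int)
Xp, yp, idx = poison(X, y, target_class=7)
```

The local model trained on `(Xp, yp)` then pushes updates that associate the trigger with the target class while clean-input accuracy stays high, which is what makes the attack stealthy at aggregation time.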
      • 10.0619 Terrain-Aware Low-Altitude Path Planning
        Yixuan Jia (Massachusetts Institute of Technology), Andrea Tagliabue (), Annika Thomas (Massachusetts Institute of Technology), Navid Dadkhah Tehrani (), Jonathan How (MIT) Presentation: Annika Thomas - -
        In this work, we study the problem of generating low-altitude path plans for nap-of-the-earth (NOE) flight in real time using only RGB images from onboard cameras and the vehicle pose. We propose a novel training method that combines behavior cloning (BC) and self-supervised learning, enabling the learned policy to outperform a policy trained with the standard BC approach and even the expert planner itself. We first collect demonstrations of low-altitude path plans from a sampling-based planner, which we call the expert planner. We then use the collected demonstrations and the terrain elevation map to train a deep neural network, which we call the student policy. The student policy takes two RGB images from onboard cameras: one from a forward-facing camera and the other from a camera tilted downward. In addition, it takes the vehicle’s current pose and a heading direction pointing to the goal location. It then generates several sets of the following: a path plan, the predicted collision cost of the path plan, and the predicted elevation of the path plan with respect to the terrain. Multiple sets are produced to account for the multimodality of the path-planning problem (e.g., one can go either left or right to avoid an obstacle directly ahead). The main contribution of this work is that we use the path plans from the expert planner to supervise only the motion in horizontal planes (in contrast to the standard BC approach, which supervises the whole 3D motion using expert demonstrations). We observe that the expert planner can quickly identify feasible paths but struggles to converge to low-altitude paths due to the combination of steep terrain and constraints on the turning radius, climb rate, and altitude. Therefore, the idea is to use the expert demonstrations to provide guidance on identifying feasible regions and to use the terrain elevation information directly to supervise the path altitude.
Note that terrain elevation information is used during the offline training phase, but not used during deployment. Simulation experiments are conducted to verify the proposed approach. A second policy is trained using the standard BC approach as a baseline. Note that the two policies are trained with the same data and same hyperparameters. The only difference is the supervision signal for the planned path altitude. We test both policies on 20 pairs of start and goal locations. We observe, on average, a 24.70% reduction of path altitude using our method, compared to the standard BC approach, while maintaining comparable path lengths (paths produced by our method are 1.7% shorter on average than those produced by standard BC). The average inference time of our method is about 12.3ms on an Nvidia GeForce RTX 4090 GPU. In summary, we propose a deep learning approach to produce NOE path plans in real time with only RGB images and the vehicle pose. Preliminary simulation experiments show promising results of the proposed method.
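The hybrid supervision idea, expert demonstrations for horizontal motion and terrain elevation for altitude, can be summarized as a loss of the following shape; the clearance target and weighting are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def noe_loss(pred_xy, expert_xy, pred_alt, terrain_elev, clearance=20.0, w_alt=1.0):
    """Hybrid supervision: horizontal (x, y) motion is cloned from the
    expert planner, while altitude is supervised directly against terrain
    elevation plus a desired clearance (clearance/weight are assumptions)."""
    bc_term = np.mean((pred_xy - expert_xy) ** 2)                     # BC on horizontal motion only
    alt_term = np.mean((pred_alt - (terrain_elev + clearance)) ** 2)  # self-supervised altitude
    return bc_term + w_alt * alt_term

# Perfect predictions give zero loss
expert_xy = np.array([[1.0, 2.0], [2.0, 3.0]])
terrain = np.array([100.0, 110.0])
loss = noe_loss(expert_xy.copy(), expert_xy, terrain + 20.0, terrain)
```

Because the terrain term only appears in training, deployment still needs nothing beyond the RGB images and pose, as the abstract notes.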
      • 10.0621 Autonomous Reasoning for Spacecraft Control: A LLM Framework with GRPO-based RL
        Amit Jain (Massachusetts Institute of Technology), Richard Linares (Massachusetts Institute of Technology) Presentation: Amit Jain - -
        This paper presents an implemented framework that integrates Large Language Models (LLMs) with Group Relative Policy Optimization (GRPO) to enable autonomous reasoning for cross-domain spacecraft control. We implement a two-stage training methodology: (1) Supervised Fine-Tuning (SFT) using 500 expert demonstrations per system generated from optimal control solutions, where a Qwen3-4B-Base model learns to generate structured reasoning traces that justify control decisions; (2) GRPO refinement that enables sample-efficient policy adaptation through group-based policy comparisons, eliminating the need for separate value networks. Our approach generates explicit reasoning chains (e.g., "Given oscillator damping ζ, reduce control effort when x > x_safe") before control synthesis, providing interpretable decision-making crucial for mission-critical applications. We demonstrate successful control of both linear (double integrator) and nonlinear (Van der Pol oscillator) systems, with performance validated against numerically computed optimal control trajectories. The framework is currently being extended to spacecraft control applications, including orbit raising and detumbling operations, where performance will be benchmarked against numerical optimal solutions. Unlike traditional methods requiring mission-specific controllers, our unified reasoning-enabled model adapts across varying control domains—from academic test cases to spacecraft operations—while providing transparent justification for autonomous decisions. This work addresses critical research gaps in multi-environment generalization and pioneers the application of GRPO in control theory, demonstrating that a single LLM-based controller can effectively reason about and control diverse dynamical systems without environment-specific architectures.
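GRPO's value-network-free advantage estimate, scoring each sampled rollout against its own group, reduces to a short computation; this is a sketch of the standard group-relative normalization, not the paper's full training loop.

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: normalize each rollout's reward by its
    group's mean and standard deviation. No learned value network is
    needed, which is what makes GRPO sample-efficient to set up."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# Three sampled reasoning-plus-control rollouts for the same prompt
adv = grpo_advantages([1.0, 2.0, 3.0])
```

The advantages then weight the policy-gradient update for each rollout's generated reasoning trace and control action.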
      • 10.0630 A Novel Q-Learning Architecture for Non-Stationary Markov Decision Processes
        Daniel Clancy (Georgia Tech Research Institute) Presentation: Daniel Clancy - -
        Multi-agent architectures and learning protocols that ensure stability, robustness, and performance guarantees for unknown and changing environments are essential for operational AI/ML systems. The traditional Q-learning protocol is applicable to stationary Markov Decision Processes (MDPs) and, in the infinite-horizon case, is guaranteed to produce the optimal policies when infinite state/action sampling and mild learning-rate assumptions are satisfied. For the general case of a non-stationary MDP, traditional Q-learning typically fails to find more than one of the time-dependent optimal Q-value and policy solutions. The primary reason for this failure is that once the learning rate has decayed sufficiently, the Q-value and policy solutions effectively become stationary. This paper presents a novel extension to the online, infinite-horizon Q-learning algorithm that is applicable to both stationary and non-stationary MDPs. One of the fundamental building blocks of this new paradigm is our previously developed, completely observable, Course of Action (CoA) state function. This new Q-learning protocol is then applied to two variants of an airborne surveillance mission: one modeled as a stationary MDP, and another that extends the original MDP into a time-varying, non-stationary MDP. The implementation of Double Deep Q Networks (DDQNs) for this new Q-learning architecture will also be discussed and applied to the airborne surveillance missions. Lastly, the application of this work to transfer learning will also be presented.
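For reference, the traditional tabular Q-learning update that the paper extends is the following; once alpha decays toward zero these estimates stop moving, which is exactly the freezing failure mode described above for non-stationary MDPs (state/action names are illustrative).

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    """One tabular Q-learning (TD) update. With a decaying alpha the
    estimates effectively become stationary, so a time-varying optimal
    Q function can no longer be tracked."""
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q

Q = {}
q_update(Q, "s0", "a0", 1.0, "s1", ["a0", "a1"])  # Q[("s0", "a0")] becomes 0.1
```

The paper's extension replaces this fixed protocol with one that remains responsive as the underlying MDP changes; the baseline above is what it is measured against.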
      • 10.0631 Bringing Federated Learning to Space
        Grace Kim (Stanford University), Filip Svoboda (University of Cambridge), Nicholas Lane (University of Cambridge) Presentation: Grace Kim - -
        As Low Earth Orbit (LEO) satellite constellations rapidly expand to hundreds and thousands of spacecraft, the need for distributed on-board machine learning becomes critical to address downlink bandwidth limitations. Federated learning (FL) offers a promising framework to conduct collaborative model training across satellite networks. Realizing its benefits in space naturally requires addressing space-specific constraints, from intermittent connectivity to dynamics imposed by orbital motion. This work presents the first systematic feasibility analysis of adapting off-the-shelf FL algorithms for satellite constellation deployment. We introduce a comprehensive “space-ification” framework that adapts terrestrial algorithms (FedAvg, FedProx, FedBuff) to operate under orbital constraints, producing an orbital-ready suite of FL algorithms. We then evaluate these space-ified methods through extensive parameter sweeps across 768 constellation configurations that vary cluster sizes (1–10), satellites per cluster (1–10), and ground station networks (1–13). Our analysis demonstrates that space-adapted FL algorithms efficiently scale to constellations of up to 100 satellites, achieving performance close to the centralized ideal. Multi-month training cycles can be reduced to days, corresponding to a 9× speedup through orbital scheduling and local coordination within satellite clusters. These results provide actionable insights for future mission designers, enabling distributed on-board learning for more autonomous, resilient, and data-driven satellite operations.
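FedAvg's aggregation step, the terrestrial baseline that the space-ification framework adapts, is a simple example-count-weighted average of local models; the orbital scheduling and intra-cluster coordination discussed above are omitted from this sketch.

```python
import numpy as np

def fedavg(updates, counts):
    """Aggregate local model parameters by example-count-weighted
    averaging (the FedAvg rule). In the space-ified setting this would
    run when satellites sync via ground stations or cluster links."""
    w = np.asarray(counts, dtype=float)
    w = w / w.sum()
    return sum(wi * ui for wi, ui in zip(w, np.asarray(updates, dtype=float)))

# Two satellites: one trained on 1 example batch, one on 3
global_model = fedavg([np.array([1.0, 3.0]), np.array([3.0, 5.0])], [1, 3])
```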
    • Janki Dodiya (IU International University of Applied Science) & Georgia Albuquerque (German Aerospace Center - DLR)
      • 10.0701 Gesture-based Human-in-the-loop Control of Space Exploration Vehicle Convoys
        Damla Leblebicioglu (Northeastern University), James Tukpah (Northeastern University), Katiso Mabulu (Northeastern University), Taskin Padir (Northeastern University), Michael Shaham (Northeastern University) Presentation: Damla Leblebicioglu - -
        The deployment of multi-robot systems for planetary surface operations is essential as space exploration initiatives grow in scale and complexity. Coordinated convoys of rovers hold promise for a range of critical missions, including scientific exploration, material transport, infrastructure construction, and emergency response. However, enabling robust inter-rover communication and dynamic coordination in uncertain, unstructured environments presents technical and human factors challenges. This research is aimed at the development of a gesture-based control architecture tailored for human-in-the-loop space exploration vehicle convoys. The system demonstrates a framework in which a lead vehicle is driven by the human operator using a conventional steering wheel, while a trailing rover receives high-level commands via hand gestures detected through a wearable technology. This approach leverages intuitive interactions to minimize cognitive load while maintaining operational efficiency. When gesture input is not actively provided, the semi-autonomous rover defaults to a reactive mode, synchronizing its speed and trajectory with the lead vehicle to preserve formation and maintain convoy integrity. A Microsoft HoloLens Heads-Up Display (HUD) provides contextual information to the operator via AR (augmented reality) visualizations. A real-time cognitive workload estimation module integrates cardiovascular physiological signals—specifically heart rate and heart rate variability—into a fuzzy logic framework for shared control in convoy operations. This enables real-time adaptation of vehicle control authority based on the driver’s cognitive state. Effectiveness of this novel interface design has been validated both in simulation and hardware utilizing small scale autonomous vehicles. Performance metrics include task completion time, obstacle avoidance success rates, and tracking performance of follower vehicles. 
The results of this study indicate that cognitive load increases during driving, as reflected by elevated heart rate and reduced heart rate variability. Furthermore, among the evaluated approaches, the configuration in which the leader vehicle was controlled using DMPC and the follower vehicles were operated through the TapStrap 2 interface achieved the best tracking performance, demonstrating the effectiveness of the proposed shared control framework.
      • 10.0702 Exploratory Design of an Extended Reality UAV Mission Planning Tool
        Luljeta Sinani (German Aerospace Center (DLR)), Andreas Gerndt (German Aerospace Center (DLR)), Georgia Albuquerque (German Aerospace Center - DLR) Presentation: Luljeta Sinani - -
        Extended Reality (XR) technologies have shown promise in supporting situation awareness in Uncrewed Aerial Vehicle (UAV) operations, yet UAV mission planning remains largely 2D and desktop-based. In this paper, we explore how XR can support UAV path planning. To this end, we developed an early-stage prototype that allows users to place, grab, and move waypoints in a geo-referenced 3D terrain model using gesture-based input, while visualizing the resulting drone flight path in real time. To ensure alignment with operator workflows, we adopt a Human-Centered Design (HCD) approach by involving expert users early in the design process. We gather the first impressions from expert users through think-aloud sessions. Drawing on their feedback, we identify key design considerations for XR-based path planning tools, including support for spatial understanding, view switching (e.g., between 2D and 3D immersive views), responsive interaction, and system guidance. Synthesized into generalized design guidelines, these findings aim to inform not only the iterative development of our own prototype but also to serve as a reference for other researchers and developers designing XR-based mission planning tools.
      • 10.0705 A Bayesian Framework for Human-Agent Collaborative Fault Diagnosis
        Joshua Elston (Texas A&M University), Kazuki Toma (Texas A&M University), David Kortenkamp (TRACLabs), Daniel Selva (Texas A&M University) Presentation: Joshua Elston - -
        Virtual Assistants (VAs) have increasingly been employed to assist operators with safety- and time-critical scenarios, such as anomaly response. Despite the increased use, existing VAs for this application are limited in their capabilities, notably lacking the ability to incorporate new information when it becomes available. While such VAs can augment operator capabilities, reliance on agents (the decision-making layer of such VA systems) that are unable to respond to a dynamic environment can hamper the natural process of anomaly resolution. In operational settings, anomaly response is often an iterative process, characterized by engineers formulating hypotheses about the root cause of a fault and developing diagnostic experiments to test their hypotheses. Conducting diagnostic experiments can reduce the uncertainty in the root cause until the engineers have sufficient confidence to proceed with an anomaly resolution or recovery procedure. This iterative process of diagnostic and recovery procedures continues until the fault is resolved. VAs that cannot support this iterative process may therefore be insufficient in providing operators with the necessary cognitive support for the task at hand. This is particularly prevalent in the case of unknown fault signatures or where significant uncertainty exists regarding the root cause of a fault. The framework discussed herein expands the capabilities of an existing VA, Daphne-AT (Anomaly Treatment), to enable iterative fault diagnosis. To achieve this, a Bayesian network has been developed to model anomalies in the Environmental Control and Life Support System (ECLSS) within the Human Exploration Research Analog. Prior probabilities for anomalies and conditional probabilities for parameters conditioned on anomalies were developed to emulate likelihood score calculations previously determined by comparing existing anomalous parameters to anomaly signatures. 
In addition to relations between anomalies and parameters, temporal and spatial relationships between parameters measured individually at different times or locations in the habitat were incorporated into the network probabilities. This capability allows the network to capture the influence of previous parameter measurements and adjacent sensor readings on current anomaly probabilities. To help the user select diagnostic procedures, ‘hidden’ parameters were added to the Bayesian network. These ‘hidden’ parameters simulate information that is not explicitly measured by ECLSS sensors and captured in a habitat telemetry feed, but that can be manually measured on demand or for which operators can report qualitative observations to update the VA’s beliefs about its operational environment. The updated diagnosis approach also allows the VA to initiate a conversation with an operator to gather additional evidence. Based on its previous set of telemetry observations, the agent computes the information gain on the distribution of possible root causes that would be obtained by measuring each of the hidden parameters and suggests the best diagnostic procedures accordingly. The VA can subsequently update its set of observations by incorporating operator-provided evidence and updating its beliefs about the anomaly probabilities within the habitat. Anomaly likelihood scores provided to operators will be supplemented with additional information on anomaly risks and costs, along with the time to execute diagnostic or resolution procedures, enabling more holistic decision-making processes during anomaly resolution.
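The diagnostic-procedure selection described, suggesting the hidden parameter whose measurement yields the greatest expected information gain over root causes, can be sketched with a two-anomaly toy example (the distributions are invented for illustration, not the Daphne-AT network):

```python
import math

def entropy(p):
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def expected_info_gain(prior, likelihoods):
    """Expected reduction in root-cause entropy from observing one hidden
    parameter. likelihoods[o][i] = P(observation o | anomaly i)."""
    h_prior = entropy(prior)
    gain = 0.0
    for lik in likelihoods:                  # one row per observation outcome
        p_obs = sum(l * p for l, p in zip(lik, prior))
        if p_obs == 0:
            continue
        posterior = [l * p / p_obs for l, p in zip(lik, prior)]
        gain += p_obs * (h_prior - entropy(posterior))
    return gain

# Two equally likely anomalies; this measurement perfectly separates them,
# so it recovers the full 1 bit of prior uncertainty
g = expected_info_gain([0.5, 0.5], [[1.0, 0.0], [0.0, 1.0]])
```

Ranking candidate hidden parameters by this quantity (possibly traded off against procedure time and risk, as the abstract notes) yields the suggested diagnostic procedure.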
      • 10.0706 Toward Improving Task-Level Commanding in Space Robotics Teleoperation through Shared Mental Models
        Luisa Mayershofer (German Aerospace Center - DLR), Florian Lay (German Aerospace Center - DLR), Nesrine Batti (DLR - Deutsches Zentrum für Luft- und Raumfahrt), Sant Brinkman (ESTEC/ESA), Jörg Butterfaß (German Aerospace Center - DLR), Tristan Ehlert (Deutsches Zentrum für Luft- und Raumfahrt), Emiel Den Exter (), Werner Friedl (), Anne Köpken (German Aerospace Center), Xiaozhou Luo (German Aerospace Center - DLR), Ajithkumar Narayanan Manaparampil (German Aerospace Center - DLR), Antonin Raffin (), Annika Schmidt (German Aerospace Center - DLR), Florian Schmidt (German Aerospace Center - DLR), Rute Luz (), Adrian Bauer (German Aerospace Center - DLR), Peter Schmaus (German Aerospace Center (DLR)), Daniel Leidner (), Thomas Krueger (European Space Agency), Neal Lii (German Aerospace Center) Presentation: Luisa Mayershofer - -
        Human involvement in space missions is inherently limited by safety concerns and extreme environmental conditions. Therefore, exploration and infrastructure maintenance in space require teleoperation of robotic systems with various capabilities. Teleoperation of robotic assets comes with significant challenges for the operator, for example due to limited bandwidth and communication latency. To support the user in achieving the teleoperation goal under these circumstances, a multi-modal user interface (UI) is needed that allows the operator to command the robot using different input modalities, including joystick and force-feedback devices for direct teleoperation as well as a graphical user interface (GUI) for task-level commands. Complex robotic systems offer a wide variety of task-level commands, which makes it difficult for the user to choose the optimal command to achieve a task goal. Enabling the robot to create a Shared Mental Model (SMM) with the user helps to overcome this challenge. An SMM refers to a shared and compatible mental representation held by human and robot as members of a team, including information about the common task goal (task model) and the skills and intents of the team members (user model). It is linked to improved task performance and lower mental workload of the operator in human-robot teleoperation. As the robot initially does not know the task goal and lacks the information needed to create an SMM, we design a framework that allows the robot to estimate the task model of the user based on a-priori knowledge about the user's task execution behavior. We integrate the a-priori knowledge into a Bayesian estimation framework and combine it with information about the user input and the world state of the robot in the remote environment. Based on this, the robot is able to estimate the subtask goal of the user and determine the task-level command best suited for achieving it.
This task-level command is then recommended to the user. To evaluate the prediction accuracy of our framework, we use results from the International Space Station (ISS)-to-ground Surface Avatar telerobotic experiments from July 2024. We compare the task-level commands selected by the user with the prediction results of our framework and with the optimal sequence of task-level commands that a system expert would choose. We show that our framework recommends task-level commands very similar to those the expert user would have chosen. Furthermore, we show different ways of integrating the recommendation into the GUI and give an outlook on future developments of our framework.
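The core of the Bayesian goal estimation described above is a standard posterior update over candidate subtask goals given an observed user input; the goal names and likelihood values below are invented for illustration.

```python
def update_goal_belief(prior, likelihood, observation):
    """One Bayesian update of the subtask-goal estimate:
    P(goal | obs) proportional to P(obs | goal) * P(goal).
    `likelihood[goal]` maps observations to their probability."""
    post = {g: likelihood[g].get(observation, 0.0) * p for g, p in prior.items()}
    z = sum(post.values())
    return {g: v / z for g, v in post.items()}

# A-priori the user is equally likely to be grasping or placing;
# the observed input is much more consistent with grasping
prior = {"grasp": 0.5, "place": 0.5}
lik = {"grasp": {"moved_toward_object": 0.9},
       "place": {"moved_toward_object": 0.1}}
belief = update_goal_belief(prior, lik, "moved_toward_object")
```

The recommended task-level command is then the one best suited to the highest-posterior goal, combined in practice with the robot's world-state knowledge.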
      • 10.0707 Optimization of Machine Learning Methods for Prediction of G-Induced Loss of Consciousness
        Nicole Rote (University of Colorado, Boulder), Aaron Allred (), Chris Dooley (Air Force Research Laboratory), Charles Fechtig (), Kim Cates (KBR), Allison Hayman (University of Colorado Boulder) Presentation: Nicole Rote - -
        Pilots in high-performance aircraft are susceptible to G-Induced Loss of Consciousness (G-LOC) during sustained and rapid onset +Gz acceleration due to inadequate brain perfusion. G-LOC can result in loss of life and loss of vehicle. Despite the high consequences, current cockpit countermeasures do not include the ability to identify and respond to G-LOC. We hypothesize that machine learning (ML) models using physiological measurements from the body can be trained to identify G-LOC, enabling a safety countermeasure. In this work, we aim to optimize ML model performance in the classification of G-LOC through a sequential methodology selection. Physiological measurements (i.e., biosignals) and demographics from an Air Force Research Laboratory (AFRL) experiment were collected from 13 medically qualified participants (9M, 4F) exposed to centrifuge G-loadings. During a trial, participants experienced 15-20 minutes of centrifugation, including both steady-state centrifugation at 1.2 G and G-events with a gradual onset rate (0.1 G/second) and a rapid onset rate (3.0 G/second), the latter of which often resulted in G-LOC, for a total of 44 G-LOC events. Electrocardiogram, respiration, skin temperature, eye-tracking, and electroencephalogram measurements were collected during the trial. Data was collected under the approval of the AFRL Institutional Review Board (Protocol #FWR20220015H). To classify G-LOC occurrence, the predictors (physiological measurements, G-level, etc.) and binary G-LOC label (0/1) data were binned into time-synchronous discrete predictor sets with corresponding G-LOC label pairs using sliding windows. These signals were baselined, and features were generated and standardized. We fit six ML classifiers: K-Nearest Neighbors (KNN), Random Forest (RF), Logistic Regression, Gradient Boosting Classifier, Support Vector Machine, and Linear Discriminant Analysis.
We performed a data-driven sequential methodology selection, which included selection of an imputation method, sliding window parameters, feature reduction technique, and class imbalance technique for every classifier, since optimal methods may differ per approach. To address missing data, a challenge for ML classifiers, list-wise deletion and KNN imputation were considered. For feature generation, we considered 125 combinations of sliding window size, stride, and baseline window. We pursued feature reduction techniques, including Least Absolute Shrinkage and Selection Operator, Elastic Net regression, Ridge regression, Minimum Redundancy Maximum Relevance, Principal Component Analysis, Target Mean, Single Feature Performance, and Select by Shuffling. Lastly, since the dataset was imbalanced in that predictor-label pairs primarily corresponded to no G-LOC, we investigated class imbalance techniques, including Random Oversampling, Random Undersampling, Synthetic Minority Oversampling Technique, and cost function modifications. The performance of each method and classifier was assessed on unseen test data using F1-score. The best model was the RF classifier, achieving an F1-score of 0.99. The selected optimal pipeline for RF included KNN imputation, a baseline window of 18.75 seconds, a sliding window of 10 seconds, a stride of 0.25 seconds, Ridge feature reduction, and no class imbalance technique. Results showed that all classifiers were able to achieve F1-scores above 0.95 using their respective optimized pipelines. This work achieves effective classifier performance through a data-driven methodology selection. Increasing model performance enhances the ability to trigger aircraft safety measures to reduce the risks posed by G-LOC.
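The sliding-window binning step that turns continuous biosignals into predictor/label pairs can be sketched as follows; the feature set and labeling rule here are simplified assumptions, not the study's exact pipeline.

```python
import numpy as np

def window_features(signal, labels, window, stride):
    """Bin a biosignal into sliding windows with a per-window G-LOC label
    (labeled positive if any sample in the window is positive). Returns
    a simple summary-statistic feature matrix and label vector."""
    X, y = [], []
    for start in range(0, len(signal) - window + 1, stride):
        seg = signal[start:start + window]
        X.append([seg.mean(), seg.std(), seg.min(), seg.max()])
        y.append(int(labels[start:start + window].any()))
    return np.array(X), np.array(y)

# 10 samples of a toy signal; G-LOC occurs on the last two samples
sig = np.arange(10, dtype=float)
lab = np.array([0] * 8 + [1, 1])
X, y = window_features(sig, lab, window=4, stride=2)
```

In the study the windowed segments would additionally be baselined against a pre-event window and standardized before classifier fitting.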
    • Marco Sewtz (German Aerospace Center - DLR) & Levin Gerdes (University of Malaga) & Timothy Chase (Advanced Technology Center, Lockheed Martin Space)
      • 10.0801 Continuous Recursive AI Learning
        Hanna Witzgall () Presentation: Hanna Witzgall - -
        The ability to add new classes to an existing AI model using just the new class data has long been a research objective in the machine learning community, as it would allow deployed models to update much more rapidly and efficiently using only data encountered in their environment. However, this type of sequential learning is difficult for most machine learning algorithms because standard optimizers require access to all the training classes upfront in order to generate their requisite shuffled training batches. If these models are trained incrementally on a batch containing just a single new class, the optimizer will tend to overfit to that class at the expense of previously learned classes. This degradation of prior-class performance is known as catastrophic forgetting, and it makes it difficult to learn new classes without resorting to the inefficiency of retraining over all previously learned classes and storing all the previously learned data. This greatly increases the computational effort and data storage requirements for adding new classes to existing models and largely prevents current optimizers from teaching deployed models new classes. To address these concerns, the incremental learning research community has experimented with various solutions, including replay and rehearsal techniques, generalized regularization techniques, and parameter isolation techniques. But so far, none of these approaches has been successful enough to be deployed practically and enable the desired goal of learning on non-i.i.d., unshuffled data representative of a temporal video stream. Recent research in incremental learning has shown the advantages of using a modified recursive-least-squares (RLS) approach that allows the optimizer to keep track of past training features outside of the current batch. These RLS incremental learning approaches have demonstrated a unique ability to completely eliminate the performance degradation from learning sequentially.
So far, however, these RLS-based methods have been limited to incrementally updating a model's classifier weights, which prevents them from further improving performance by updating and optimizing the model's backbone weights for a new task. This work demonstrates for the first time the ability to use these RLS methods to incrementally train the deeper feature-extraction layers of a network. We demonstrate the proof of concept by training a simple 3-layer feed-forward network completely from scratch on incremental training data. The paper explores the interplay of these RLS methods with other network components, such as the non-linear activations, and the incorporation of other training paradigms, such as momentum. Initial results show that the new approach can train on either shuffled or unshuffled data and obtain roughly the same accuracy as stochastic gradient descent (SGD) trained on shuffled data batches. These results represent the first steps toward training entire models on unshuffled streaming data. We expect this training approach to be of utility to the aerospace community, which often has models deployed on remote platforms with limited data storage and compute power.
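The classifier-level recursive-least-squares update that this line of work builds on can be sketched as follows. This is a generic, illustrative RLS sketch (the function name, initialization constant, and single-sample update schedule are our own assumptions, not code from the paper); its key property is that the final weights are independent of sample order, which is exactly what makes learning on unshuffled streams possible.

```python
import numpy as np

def rls_fit(X, Y, delta=1e6):
    """Sequential (one-sample-at-a-time) recursive least squares.

    The inverse-covariance matrix P summarizes all features seen so
    far, so classifier weights W can be updated without revisiting or
    storing past samples.
    """
    n_features = X.shape[1]
    n_outputs = Y.shape[1]
    W = np.zeros((n_features, n_outputs))
    P = np.eye(n_features) * delta      # large init ~ uninformative prior
    for x, y in zip(X, Y):
        x = x[:, None]                  # column vector (n_features, 1)
        k = P @ x / (1.0 + x.T @ P @ x)  # gain vector
        W += k @ (y[None, :] - x.T @ W)  # correct toward current sample
        P -= k @ (x.T @ P)               # shrink uncertainty
    return W
```

Because the update is algebraically equivalent to solving the full (lightly regularized) least-squares problem, feeding the samples in class-by-class order yields essentially the same weights as feeding them shuffled, with no catastrophic forgetting at the classifier level.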
      • 10.0803 Enabling Localization and Mapping for Heterogeneous Robots in Orbit-to-Surface Teleoperation
        Xiaozhou Luo (German Aerospace Center - DLR), Marco Sewtz (German Aerospace Center - DLR), Nesrine Batti (DLR - Deutsches Zentrum für Luft- und Raumfahrt), Anne Köpken (German Aerospace Center), Florian Lay (German Aerospace Center - DLR), Luisa Mayershofer (German Aerospace Center - DLR), Ajithkumar Narayanan Manaparampil (German Aerospace Center - DLR), Emiel Den Exter (), Sant Brinkman (ESTEC/ESA), Thomas Krueger (European Space Agency), Rute Luz (), Adrian Bauer (German Aerospace Center - DLR), Peter Schmaus (German Aerospace Center (DLR)), Daniel Leidner (), Neal Lii (German Aerospace Center), Riccardo Giubilato (German Aerospace Center (DLR)), Jörg Butterfaß (German Aerospace Center - DLR), Tristan Ehlert (Deutsches Zentrum für Luft- und Raumfahrt), Werner Friedl (), Philipp Knestel (German Aerospace Center - DLR), Viktor Langofer (), Antonin Raffin (), Anne Reichert (German Aerospace Center - DLR), Annika Schmidt (German Aerospace Center - DLR), Florian Schmidt (German Aerospace Center - DLR), Rudolph Triebel (German Aerospace Center - DLR) Presentation: Xiaozhou Luo - -
        Human spaceflight beyond low Earth orbit has garnered increasing interest in recent years. To advance research in humanoid service and assistance robots, the German Aerospace Center (DLR) and the European Space Agency (ESA) initiated the Surface Avatar telerobotic technology demonstration mission. Astronauts aboard the International Space Station (ISS) remotely control multiple robots operating in a Martian analog environment on Earth. The project aims to validate technologies and methods for commanding robots with Scalable Autonomy, enabled by effective mixtures of operator immersion and task delegation, and by coordinated collaboration among a heterogeneous group of robotic systems. To reduce the workload for human operators, robotic platforms should possess a variety of autonomous capabilities. A key requirement for autonomy is a robust and precise localization and perception system. This is especially vital in environments where conventional pose estimation methods, such as GNSS, are unavailable or unreliable, which necessitates solutions based solely on on-board sensors. In this work, we present our approach to localization and mapping for the diverse set of robotic assets participating in Surface Avatar. Our focus is on the analysis, development, and application of visual methods for reliable ego-pose estimation in large-scale environments. Accurate pose information is essential for navigation, collision avoidance, and for establishing situational awareness on the human operator's side, e.g., by generating spatial overlays in the command interface. Special attention is given to the technical characteristics and internal constraints of the participating robots, including DLR's mobile humanoid robot Rollin' Justin, the quadruped Bert, and ESA's Interact rover. Insights from a requirement analysis are applied in the robot-specific implementations and are reflected in the overall software architecture.
Feature-based methods were identified as the most effective for visual localization and mapping, offering a favorable balance between computational efficiency and accuracy. Furthermore, a decentralized system architecture utilizing on-board processing nodes, such as embedded NVIDIA Jetson computers, proved advantageous for maximizing resource efficiency and minimizing communication overhead. Building on Rollin' Justin's initial localization and mapping implementation, we updated the back-end to the current state of the art and extended it to other robotic platforms. Due to the limited sensor modalities available on Bert and Interact, global state estimation and localization are achieved using a full Simultaneous Localization and Mapping (SLAM) pipeline. The utilized algorithm is camera-agnostic, offering flexibility for diverse hardware setups within the heterogeneous robotic fleet. To ensure consistent localization across all platforms, we employ a unified feature-map representation using a submap-based mapping approach. Starting from a base map, typically centered on the Martian analog site's base station, mission- and task-specific 3D maps are incrementally created by extension, both online and offline. This strategy preserves real-time capability by mitigating potential scalability issues. Our global localization and mapping implementation was tested in three space-to-ground sessions between the ISS and the surface robotic team at DLR's test site in Oberpfaffenhofen, Germany. The performance and effectiveness of the implemented methods are analysed using recorded data from these on-orbit sessions. Based on these results, we conclude with an evaluation and discussion of the system's applicability to future space exploration missions.
      • 10.0805 Towards Robust 6D Pose Tracking for On‑Orbit‑Servicing with Learned Segmentation and Motion Priors
        Anne Reichert (German Aerospace Center - DLR), Maximilian Ulmer (German Aerospace Center), Margherita Piccinin (German Aerospace Center - DLR), Daniel Eklund (German Aerospace Center - DLR), Harald Haglund (German Aerospace Center - DLR), Daniel Schenk (Universität der Bundeswehr), Maximilian Durner (German Aerospace Center - DLR), Rudolph Triebel (German Aerospace Center - DLR) Presentation: Anne Reichert - -
        Accurate and reliable satellite pose estimation is crucial for successful autonomous on-orbit servicing missions like debris removal, refueling, or repair. For such robotic operations, visual tracking is especially important, providing high-frequency pose estimates of the dynamic target, which ensures collision avoidance and safe, effective manipulation in high-stakes scenarios. To this end, terrestrial visual pose tracking systems – capable of high precision and real-time performance – offer a promising foundation. However, transferring these high-performing frameworks to the space domain introduces significant challenges, as many of the assumptions and conditions that hold in terrestrial settings do not necessarily apply in orbit. Tracking in orbit must contend with complex lighting conditions, harsh reflections, and both highly textured and texture-less surfaces, all under often extreme and variable illumination. These difficulties are compounded by the fact that image data is typically limited to high-contrast grayscale, further challenging terrestrial approaches. This work investigates the deployment of a robust and accurate 6-Degrees-of-Freedom (6-DoF) visual pose tracking framework, originally developed for terrestrial environments, in the context of on-orbit servicing. We analyze its performance under these space-specific constraints and identify key limitations and potential adaptations required for reliable tracking in orbit. We investigate the performance and runtime behavior of a visual tracking system adapted for the orbital domain by incorporating two task-specific extensions. First, we introduce a motion prior, leveraging the fact that target satellites usually follow predictable trajectories. This is expected to increase the robustness of the tracking system while also reducing computational demands. Second, we integrate learning-based image segmentation to tackle the challenging visual conditions in space.
Recent advances have shown that learning-based approaches can offer increased robustness to scene variability across a wide range of applications. By incorporating such concepts in the space domain, we aim to support classical visual tracking systems, which typically rely on a fixed set of parameters and struggle to handle extreme illumination changes and variability. We rigorously test the efficacy of the proposed extensions through comprehensive evaluations on both simulated orbital images and real-world hardware-in-the-loop on-orbit servicing data. We quantify accuracy metrics and detect instances of tracking loss. This enables us to evaluate the benefits and limitations of integrating the tracking modality with learned segmentation and motion priors under various conditions. The results demonstrate the framework's potential to provide accurate tracking for safer and more autonomous future space missions.
      • 10.0807 Vision beyond Earth: Synthetic Satellite Data for Neural Perception in Orbit
        Wout Boerdijk (German Aerospace Center), Marcus Müller (German Aerospace Center - DLR), Maximilian Ulmer (German Aerospace Center), Wolfgang Stuerzl (German Aerospace Center (DLR)), Rudolph Triebel (German Aerospace Center - DLR), Maximilian Durner (German Aerospace Center - DLR) Presentation: Wout Boerdijk - -
        Satellites have become a critical infrastructure pillar for modern life by supporting key services such as communication, navigation, and weather forecasting. Yet the consistent increase in the number of satellites orbiting Earth brings challenges with it, most importantly a steadily rising chance of collision between orbiting bodies. At the same time, defunct satellites often remain in orbit as space debris, further adding to the number of items circulating Earth. Therefore, not only active debris removal but also means of extending the lifespan of a satellite - such as (preventive) maintenance and on-orbit servicing - gain increasing importance. For a system performing these tasks in space, perceptive capabilities are of high importance, such as the initial detection of the target satellite, its successive tracking to visually follow its trajectory, or even satellite pose estimation. Here, automation is highly beneficial, and computer vision algorithms can greatly contribute - especially neural networks, which have shown vast advances in recent years for object perception tasks. Yet the extremely harsh conditions in space create additional challenges, and the performance of detectors or pose estimators in industrial or household applications cannot directly be expected for extra-terrestrial scenarios. Most importantly, relevant training data is often not available, and there is a lack of simulators supporting synthetic satellite data generation. In this work, we present Space OAISYS, a major extension of the Outdoor Artificial Intelligent SYstems Simulator (OAISYS), supporting vast training data generation for the aforementioned tasks. Space OAISYS simulates satellite missions and creates visual data which can be used in a variety of scenarios. With the simulator, one can use an arbitrary satellite object and let it orbit around any kind of celestial body.
In order to use the extension for machine learning tasks, OAISYS Material Drivers are introduced, which enable extensive material randomization. Furthermore, the simulator is extended with sensor motion patterns, which produce more realistic satellite renderings. Strong emphasis is also placed on precise modeling of visual artifacts in space, such as intense reflections and blooming. To ease application, a standardized storage format is integrated. Code will be made publicly available.
      • 10.0808 The S3LI Vulcano Dataset: A Dataset for Multi-Modal SLAM in Unstructured Planetary Environments
        Riccardo Giubilato (German Aerospace Center (DLR)), Marcus Müller (German Aerospace Center - DLR), Marco Sewtz (German Aerospace Center - DLR), Rudolph Triebel (German Aerospace Center - DLR) Presentation: Riccardo Giubilato - -
        Traditional datasets for challenging and evaluating the accuracy of multi-modal SLAM algorithms often consider environments that are affected by anthropogenic factors. The introduction of human-made elements, in both urban and natural contexts, often introduces perceptual and structural regularities that bias the performance of state estimation and mapping. When recording data from moving platforms, such as in the case of autonomous driving, the repetitiveness of the driving direction imposes significant limitations on the variety of viewpoints to expect, significantly helping place recognition algorithms re-detect previously visited scenes to establish "loop closures" within SLAM. In the context of planetary exploration, however, mobile systems operate in highly unstructured natural settings, which open up many possible ways to traverse the environment, posing significant challenges to the aforementioned tasks. Natural environments are characterized by extreme perceptual aliasing and an abundance of ambiguous features, which, exacerbated by unconstrained traversing opportunities, pose significant limitations on the ability of SLAM algorithms to provide an accurate pose of the observer with respect to the origin of their map representation. To foster research in the fields of localization and mapping in planetary settings, and to provide data for researchers to develop, test, and tune algorithms for this purpose, we release a multi-modal dataset recorded from a hand-held device that mimics the height and point of view of a mobile robotic explorer. The sensor setup, named S3LI (Stereo, Solid-State LiDAR Inertial), comprises a pair of RGB cameras in horizontal stereo configuration, a compact LiDAR with a MEMS-actuated reflective mirror, an industrial-grade IMU, and a GNSS antenna. All sensors are carefully calibrated in the field and time-synchronized via Precision Time Protocol (PTP). A differential GNSS solution is also provided for evaluation and global referencing.
In contrast to existing multi-modal datasets for SLAM, our sensor setup is intended to explore future possibilities for perception systems suitable for planetary exploration, pairing traditional camera sensors with an alternative LiDAR technology to common spinning mirrors. The dataset is recorded on the island of Vulcano, Aeolian Islands, Sicily, characterized by peculiar geological properties resulting from the merging of several volcanoes, one of which is currently active. The island presents a large variety of natural environments with appearance and structures that resemble settings for future scientific exploration missions. Several sequences observe different traversable terrains and geological features, spanning from lava paths to sedimentary structures, posing different challenges to localization accuracy and long-term consistency of mapping pipelines, and furthermore offering various options for efficient fusion of complementary modalities. In addition, we release sequences that observe basaltic lava structures, water, and vegetation, introducing dynamic elements that disturb traditional algorithmic pipelines. The dataset will be released as a collection of ROS (Robot Operating System) recordings of raw sensor data, including calibration sequences. We furthermore release a complementary open-source toolkit to replicate calibration procedures and post-processing of the data, e.g., generating differential GNSS solutions. Finally, we release example scripts and configuration files to execute existing state-of-the-art SLAM algorithms as an aid towards the utilization of the data.
      • 10.0809 Semantic Segmentation and Depth Estimation for Real-Time Surface Mapping Using Gaussian Splatting
        Guillem Casadesus Vila (Stanford University), Adam Dai (Stanford University), Grace Gao (Stanford University ) Presentation: Guillem Casadesus Vila - -
        The growing interest in sustained lunar exploration is driving long-duration robotic missions that must operate autonomously on the Moon’s surface. As efforts move toward building infrastructure and enabling long traverses, surface mobility systems must navigate challenging environments characterized by poorly textured terrain, extreme shadows, and dynamic lighting with limited sensing and computing resources. These constraints make robust, real-time perception and mapping critical for operations. Simultaneous Localization and Mapping (SLAM) enables rovers to navigate and build situational awareness. Classical SLAM approaches, such as ORB-SLAM and KinectFusion, have demonstrated real-time performance and robust behavior in structured, well-textured environments. However, they degrade under lunar conditions, where low visual texture and lighting variation impede feature tracking. In addition, classical pipelines rely on discrete map representations—such as sparse point clouds or voxel grids—that limit spatial resolution and struggle to capture detailed surface geometry needed for hazard avoidance and infrastructure interaction. Due to these limitations, recent SLAM systems have explored neural scene representations, including 3D Gaussian Splatting (3DGS), Neural Radiance Fields (NeRF), and continuous occupancy fields. These methods enable dense, continuous surface modeling, reduce memory requirements, and improve robustness to noise and viewpoint variation. Integrated into SLAM pipelines, they provide high-resolution reconstructions well-suited to planetary exploration. However, current methods are developed and evaluated in Earth-like scenes with rich texture and diverse viewpoints, which do not generalize to lunar traverses where imagery is collected along paths close to the ground. Accurate depth estimation and semantic information are crucial in mapping pipelines. 
Traditional frame-to-frame tracking becomes insufficient on the Moon, as sparse features cluster around irrelevant image regions, such as shadow boundaries, while key terrain elements, like rocks and craters, are poorly captured. State-of-the-art segmentation and depth estimation models are trained on Earth data, and it remains unclear how they perform under lunar conditions or how their outputs affect map reconstruction. To address this gap, we propose a real-time 3DGS-based mapping pipeline for lunar rovers that integrates stereo and monocular depth estimation models and semantic segmentation, and we evaluate its performance in a high-fidelity Unreal Engine environment. We benchmark segmentation models ranging from classical convolutional networks like U-Net++ to transformer-based architectures like SegFormer, finding that the transformer-based architectures are outperformed in the lunar environment. For depth estimation, we compare traditional stereo methods, such as block matching, with learning-based models like RAFT-Stereo, which achieves centimeter-level accuracy across several meters, crucial for reliable surface reconstruction during real-time mapping. The combination of semantic segmentation outputs with stereo depth estimates proves essential for updating the 3D Gaussian surface map in real time, enabling downstream tasks such as hazard detection and path planning. Monocular depth estimation provides additional information that improves map completeness and refinement but lags in accuracy, even when combined with scaling and shifting from triangulated points. We analyze how these perception components influence mapping accuracy and efficiency through ablation studies using ground-truth segmentation and depth, and we demonstrate that 3DGS-based representations outperform traditional dense and sparse point cloud methods. Our work provides insight into designing robust vision-based SLAM systems for autonomous lunar operation.
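The stereo baseline the abstract compares against rests on the standard disparity-to-depth relation Z = f·B/d. The sketch below is a minimal, generic implementation of that conversion (the function name and the numeric camera parameters in the example are illustrative assumptions, not values from the paper):

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a stereo disparity map (pixels) to metric depth (meters).

    Z = f * B / d. Zero or negative disparities carry no range
    information and are marked invalid with +inf.
    """
    disparity = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# e.g., a 10 px disparity with a 700 px focal length and 0.5 m baseline
# corresponds to 35 m of depth: 700 * 0.5 / 10.
```

The inverse relationship also explains why stereo accuracy degrades quadratically with range, which is why centimeter-level accuracy is only claimed "across several meters" in near-field mapping.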
      • 10.0811 RGB-NIR Reflectance and 3D Microtopography for Lunar Regolith Analysis Using ToF Imaging Systems
        Don Derek Haddad (NASA Ames Research Center), Cody Paige (Massachusetts Institute of Technology), Ariel Deutsch (), Joseph Paradiso (MIT), Jennifer Heldmann (NASA Ames Research Center), Amanda Cook (NASA Ames Research Center) Presentation: Don Derek Haddad - -
        This paper presents a method for integrated reflectance and microtopographic analysis of lunar regolith simulants using a modified, space-adapted version of a commercial Time-of-Flight (ToF) imaging system equipped with RGB and 850 nm Near-Infrared (NIR) channels. Deployed as part of the Azure Kinect à La Luna (AKALL) framework on Lunar Outpost’s Mobile Autonomous Prospecting Platform (MAPP), and tested at NASA Ames Research Center's lunar regolith analog environments, the system enables in-situ, low-bandwidth 3D terrain reconstruction combined with reflectance-based surface characterization. Recent experiments focused on the controlled variation of hydration and salt content across standardized lunar regolith simulants (NU-LHT-2M, LHT-3M, and JSC-1A), simulating surface and subsurface conditions relevant to water ice and permafrost detection. Reflectance measurements revealed spectral intensity shifts in the 850 nm NIR band that correlate with varying moisture levels and salinity. Using calibrated photometric targets and statistical modeling, the system extracts meaningful indicators of surface composition and texture from RGB-NIR data, offering a compact approach with low size, weight, and power (SWaP) requirements for identifying potential water-bearing materials in support of future in-situ resource utilization (ISRU) and science-driven lunar surface missions. Given the increasing interest in incorporating NIR-based ToF sensors into lunar rover payloads for autonomous navigation, site selection, and resource prospecting, this work demonstrates the utility of single-wavelength NIR reflectance analysis when paired with microtopographic context. The fusion of geometric and spectral data supports broader scientific and operational objectives for future lunar surface missions.
      • 10.0812 Visual SLAM with DEM Anchoring for Long-Range Lunar Surface Navigation
        Adam Dai (Stanford University), Guillem Casadesus Vila (Stanford University), Grace Gao (Stanford University ) Presentation: Adam Dai - -
        Future lunar missions will require autonomous rovers capable of traversing tens of kilometers across challenging terrain while maintaining accurate localization and producing globally consistent maps. However, the absence of GPS, extreme illumination, and low-texture regolith make long-range navigation on the Moon particularly difficult, as visual-inertial odometry pipelines accumulate drift over extended traverses. To address this challenge, we present a stereo visual SLAM system that integrates learned feature detection and matching with global constraints from digital elevation models (DEMs). Our front-end employs SuperPoint and LightGlue to achieve robustness to illumination extremes and repetitive terrain, while the back-end incorporates DEM-derived height and surface-normal factors into the pose graph, providing absolute surface constraints that mitigate long-term drift. We validate our approach on both real and simulated datasets, including LuSNAR, the Mt. Etna–based S3LI dataset, and Unreal Engine simulations with LOLA South Pole DEMs. Results demonstrate that DEM anchoring consistently reduces absolute trajectory error compared to baseline SLAM methods, enabling drift-free long-range navigation even in repetitive or visually aliased terrain.
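A DEM height factor of the kind described above can be sketched as a simple pose-graph residual: the estimated altitude of a pose is compared against the DEM elevation at its horizontal position. This is a generic illustration under our own assumptions (function names, the scalar noise model, and the lookup interface are hypothetical; the paper's actual factor formulation may differ):

```python
import numpy as np

def dem_height_residual(pose_xyz, dem_lookup, sigma=0.1):
    """Whitened residual for a pose-graph DEM 'height factor'.

    pose_xyz   : estimated (x, y, z) of the rover in the DEM frame
    dem_lookup : callable (x, y) -> terrain elevation at that location
    sigma      : assumed standard deviation of the DEM elevation (m)

    The optimizer minimizes the squared residual, pulling the
    trajectory's altitude onto the terrain surface and bounding drift.
    """
    x, y, z = pose_xyz
    return (z - dem_lookup(x, y)) / sigma
```

In a full back-end, one such factor per keyframe (plus analogous surface-normal factors) would be added alongside the visual-odometry factors, giving the graph an absolute vertical reference the camera alone cannot provide.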
      • 10.0814 Monocular Depth Estimation for Spacecraft: Combining Relative and Scale Information
        Siddharth Singh (Cranfield University), Leonard Felicetti (Cranfield University), Hyo-Sang Shin (KAIST), Antonios Tsourdos (Cranfield University) Presentation: Siddharth Singh - -
        Autonomous spacecraft missions such as on-orbit servicing, active debris removal, and close-proximity operations rely heavily on precise depth estimation to enable reliable navigation and interaction with uncooperative targets. However, depth estimation in space scenarios presents unique challenges, including high-contrast lighting, severe noise, textureless surfaces, and constrained computational resources. To address these limitations, we propose a novel object-specific depth estimation framework designed for robust, real-time performance in space applications. Our approach strategically integrates segmentation-guided processing, transformer-based multi-scale feature fusion, and Discrete Cosine Transform (DCT) domain Conditional Random Fields (CRFs) for efficient and accurate depth prediction. The proposed method first utilizes a lightweight backbone, such as ResNet50/EfficientNetV2, to generate both a segmentation mask and global image features. This segmentation map enables targeted processing by isolating the object of interest and cropping it from the image with preserved aspect ratio. The cropped region is resized and passed to a modified version of DC-Depth that incorporates a Swin Transformer for local feature extraction and a multi-scale DCT-based feature fusion module. This design operates in the frequency domain, enabling the model to capture both fine edge details and smooth surface transitions while minimizing unnecessary spatial computation. Unlike traditional methods that iteratively process full-resolution images, our architecture focuses computation only on the segmented region, significantly reducing the processing load. To further enhance performance, we adopt a hybrid strategy inspired by AdaBins and DepthPro, dividing the task into two subtasks: (1) estimating a normalized surface depth map and (2) predicting the object-specific depth range.
The normalized map is scaled back to the original crop size, while global features from both ResNet50/EfficientNetV2 and the Swin Transformer are used to predict the minimum and maximum depth bounds. These bounds define the dynamic range for each object, computed using the minimum observable depth and a disparity value informed by known spacecraft geometry. This object-centric normalization strategy allows the system to generalize across spacecraft of varying scales and configurations. Our method avoids redundant computation, preserves geometric context, and leverages frequency-domain priors to improve both depth smoothness and boundary accuracy. We benchmark our approach across multiple spacecraft platforms, including the James Webb Space Telescope (JWST), International Space Station (ISS), Lunar Reconnaissance Orbiter (LRO), and Sierra Nevada's Dream Chaser, demonstrating strong generalization and consistent performance. Preliminary results show an accuracy of 0.78 m at ranges of 30 to 100+ meters, and 0.67 m at 5 to 30 meters. These results support the utility of our architecture for future mission-critical operations requiring lightweight, accurate, and real-time object-specific depth estimation in the challenging visual conditions of space.
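The two-subtask split described above, a normalized relative-depth map plus a predicted per-object metric range, recombines with a simple affine rescaling. The following is a minimal sketch of that recombination step only (the function name is our own; the bound prediction itself would come from the network heads):

```python
import numpy as np

def denormalize_depth(norm_depth, d_min, d_max):
    """Map a normalized [0, 1] surface-depth map into the predicted
    per-object metric range [d_min, d_max].

    norm_depth : relative depth from the dense prediction head
    d_min/d_max: metric bounds from the depth-range regression head
    """
    norm_depth = np.clip(np.asarray(norm_depth, dtype=float), 0.0, 1.0)
    return d_min + norm_depth * (d_max - d_min)
```

Decoupling shape (the normalized map) from scale (the bounds) is what lets one dense head serve targets as different in size as the ISS and a small servicing client: only the two scalars change with the object.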
      • 10.0815 Picture Your Satellite in Space: A Hybrid Rendering Framework for Physically Based Space Images
        Daniel Schenk (Universität der Bundeswehr), Jan Wulkop (), Maximilian Ulmer (German Aerospace Center), Andreas Gerndt (German Aerospace Center (DLR)), Georgia Albuquerque (German Aerospace Center - DLR) Presentation: Daniel Schenk - -
        Deep learning has achieved remarkable success in terrestrial computer vision. However, its widespread adoption in the space domain is hindered by a critical constraint: the high cost of acquiring and annotating vast datasets required for training. Recent efforts have focused on leveraging synthetic data to reduce this burden, but a critical challenge remains: the domain gap between simulated and real-world imagery. This gap can lead to dangerous performance degradation in critical applications like on-orbit robotics. We introduce a pipeline for generating space computer vision datasets, emphasizing physically accurate rendering and enabling dataset variety. Our hybrid graphics pipeline combines two physically based renderers. For accurate material representation of satellites we use Mitsuba 3, a GPU-accelerated path tracer. CosmoScout VR renders planets with detail and realism not available in previous synthetic satellite pose estimation datasets. This includes accurate horizons and volumetric clouds that cast shadows on both terrain and cloudscape. The whole pipeline is built on open source software so that additional features like spectral rendering can easily be implemented. Use-case-specific resources like satellite models or orbital trajectories can be seamlessly integrated through a Python API. To aid in building robust vision models that do not overfit, for example, to the pattern of wrinkles in a satellite's MLI film, satellite materials can be randomized. Image output in EXR and log formats ensures compatibility with post-processing workflows. Our adoption of the BOP dataset format allows for easy integration with existing pose estimation pipelines. All our tools are available as open-source software.
      • 10.0816 A New Dataset and Performance Benchmark for Real-time Spacecraft Segmentation in Onboard Computers
        Arko Barman (Rice University), Jeffrey Sam (), Janhavi Sathe (), Nikhil Chigali (Rice University), Naman Gupta (), Radhey Ruparel (), Yicheng Jiang (), Janmajay Singh (), James Berck (NASA - Johnson Space Center) Presentation: Arko Barman - -
        Spacecraft deployed in outer space are routinely subjected to various forms of damage due to exposure to hazardous environments. In addition, there are significant risks to the subsequent process of in-space repairs through human extravehicular activity or robotic manipulation, incurring substantial operational costs. Recent developments in image segmentation could enable the development of reliable and cost-effective autonomous inspection systems. While these models often require large amounts of training data to achieve satisfactory results, publicly available annotated spacecraft segmentation data are very scarce. Here, we present a new dataset of nearly 64k annotated spacecraft images that was created using real spacecraft models superimposed on a mixture of real and synthetic backgrounds generated using NASA's TTALOS pipeline. To mimic camera distortions and noise in real-world image acquisition, we also added different types of noise and distortion to the images. Our dataset includes images with several real-world challenges, including noise, camera distortions, glare, varying lighting conditions, varying fields of view, partial spacecraft visibility, brightly lit city backgrounds, densely patterned and confounding backgrounds, aurora borealis, and a wide variety of spacecraft geometries. Finally, we fine-tuned YOLOv8 and YOLOv11 models for spacecraft segmentation to generate performance benchmarks for the dataset under well-defined hardware and inference-time constraints, mimicking real-world image segmentation challenges for real-time onboard applications on NASA's inspector spacecraft. The resulting models, when tested under these constraints, achieved a Dice score of 0.92, a Hausdorff distance of 0.69, and an inference time of about 0.5 seconds. The dataset and models for the performance benchmark are available at https://github.com/RiceD2KLab/SWiM.
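The Dice score reported for the benchmark is the standard overlap metric for binary segmentation masks; a minimal reference implementation (our own sketch, not code from the released repository) is:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks.

    Dice = 2 * |pred AND target| / (|pred| + |target|); 1.0 means a
    perfect match, 0.0 means no overlap. eps guards empty masks.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A Dice of 0.92 thus means the predicted and ground-truth spacecraft masks agree on roughly 92% of their combined area, a meaningful level for onboard inspection under the stated 0.5-second inference budget.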
      • 10.0817 Dual-Path Framework with Uncertainty Awareness for Robust Spacecraft Pose Estimation
        Fulin Peng (SHANGHAI INSTITUTE OF TECHNICAL PHYSICS CHINESE ACADEMY OF SCIENCES) Presentation: Fulin Peng - -
        Accurate and robust pose estimation of non-cooperative spacecraft is critical for autonomous rendezvous and on-orbit servicing. While monocular vision-based methods have attracted growing interest owing to their low cost and structural simplicity, achieving high-precision pose estimation under large scale variations in target distance and complex illumination conditions remains a formidable challenge. In this paper, we propose a novel dual-path prediction network reinforced with a geometric consistency constraint to address these issues. Our framework features two distinct yet complementary pathways. The first path employs a feature pyramid network to extract multi-resolution representations, from which stable keypoints are detected and subsequently integrated with a PnP solver, thereby enabling accurate pose estimation across targets with large scale variations. The second path employs an adaptive-weighted feature pyramid network augmented with a spatial self-attention module to effectively fuse multi-scale information and strengthen global contextual reasoning. Its output is processed by two direct regression heads for rotation and translation, hence improving accuracy and robustness under occlusion and degraded geometric conditions. To ensure coherence between the two pathways, we further introduce a geometric consistency loss that enforces alignment of their outputs during training, thereby improving stability and generalization. Experimental results on SPEED and SwissCube datasets demonstrate that our framework achieves substantial improvements over existing methods, particularly under extreme conditions.
      • 10.0820 Onboard Transformer-Based Lossless Neural Compression on Satellite Imagery
        Jefferson Boothe (NSF SHREC Center - University of Pittsburgh), Ian Peitzsch (NSF SHREC), Evan Gretok (University of Pittsburgh), Alan George (University of Pittsburgh) Presentation: Jefferson Boothe - -
        Communication bandwidth for data transfer between and downlink from space platforms is and always has been exceptionally limited. Despite this limitation, the capabilities of these platforms and similarly the amount of data collected have greatly increased in recent years, with more capable high-resolution imagers and other sensors becoming commonplace on most missions. This combination of large amounts of data but limited bandwidth drives a need for data compression. Thankfully, many compression methods, both lossy and lossless, are commonplace and have even been used in the space domain. Recent research has leveraged transformers as general-purpose compressors when paired with arithmetic encoding, due to their strong predictive power. Predictive power has been established to be directly correlated with compression performance. This prior research, though, primarily focuses on large language models running on server-grade hardware. This research seeks to expand upon these findings and apply transformers for lossless satellite image compression on edge devices. We use images from the RapidEye satellites to both train and test varying scales of transformer models. We compare the compression ratios with those of other compression algorithms, including general-purpose and domain-specific options. We benchmark all compression methods on various NVIDIA Jetson Orin devices. The transformer compression methods achieved a compression ratio upwards of 2.6, which is ∼1.9× higher than PNG, ∼1.67× higher than JPEG 2000, and ∼1.68× higher than gzip. The transformer compression methods achieve a throughput of >2.15kBps, which is comparable to other research running on more performant hardware. We also propose further runtime optimizations that could increase throughput.
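The compression ratios quoted above are raw size divided by compressed size. A minimal sketch of that calculation using gzip, one of the general-purpose baselines named in the abstract (toy byte pattern, not RapidEye imagery):

```python
import gzip

def compression_ratio(raw: bytes, compressed: bytes) -> float:
    """Lossless compression ratio: original size over compressed size."""
    return len(raw) / len(compressed)

# Highly repetitive toy "image" bytes compress well under gzip.
raw = bytes(range(256)) * 64            # 16 KiB of a repeating ramp pattern
packed = gzip.compress(raw, compresslevel=9)
print(compression_ratio(raw, packed) > 2.0)  # True: well above the 2.6 vs. gzip comparison scale
```

Lossless round-tripping (`gzip.decompress(packed) == raw`) is what distinguishes this family of methods from lossy codecs.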
      • 10.0822 Investigation of Neuromorphic Sensing and Processing for Moving Object Detection
        Douglas Carssow (Naval Research Laboratory), Linus Silbernagel (NSF SHREC Center - University of Pittsburgh) Presentation: Douglas Carssow - -
        This paper describes initial investigations into the use of a spiking neural network (SNN) to perform moving object detection using simulated event-based sensor (EBS) data streams to determine how well the SNN and EBS combination can perform over a range of velocities and data acquisition parameters. EBSs provide a low-power sensing capability where only threshold changes in light intensities received by a pixel are reported as “on” or “off” events depending on the positive or negative change in light intensity. This allows the sensors to report dynamic changes in the observed scene as a very low data rate output with a high temporal resolution and power requirements of less than a watt for high event rates. Neuromorphic processors designed to execute SNNs take advantage of the sparsity of activations in SNNs, which communicate in the form of spikes, to significantly reduce power consumption. This work focuses on using Intel’s bootstrap training method as part of the Lava-DL library for the Loihi neuromorphic processor to train an SNN implemented with leaky integrate-and-fire (LIF) neurons. The network is trained to return object positions using simulated EBS moving object data streams with added sensor noise but without any background clutter. The results of this paper provide insight into very low-power sensing and processing for neuromorphic edge processing devices that could be applied to space platforms for applications such as detecting space debris. The results of this effort focus on training networks for single-speed, pixel-sized objects with random trajectories and starting locations. This provides bounds on network capabilities to detect objects. A simulated focal plane array of 100 x 100 pixels is generated using the Western Sydney University International Centre for Neuromorphic Systems (ICNS) Event Based Camera Simulator Python library.
This tool ingests sequences of image frames and simulates the output of an event-based sensor viewing the same scene. Networks for single-pixel objects with a simulated point-spread function are trained and tested at velocities from 50 pixels per second up to 1000 pixels per second. This work investigates the effects of latency and object velocity on reporting object positions. Results characterize object localization capabilities across object velocities for similar dataset and network parameters and show variation in object localization capability.
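The event-generation principle described above, where only threshold crossings in per-pixel log intensity are reported as “on” or “off” events, can be sketched as follows (a simplified 1-D toy, not the ICNS simulator):

```python
import math

def frames_to_events(frames, threshold=0.2):
    """Emit (t, x, polarity) events when a pixel's log intensity changes by >= threshold
    relative to its last reported level, mimicking an event-based sensor on a 1-D pixel row."""
    events = []
    ref = [math.log(v) for v in frames[0]]  # per-pixel reference log intensity
    for t, frame in enumerate(frames[1:], start=1):
        for x, v in enumerate(frame):
            delta = math.log(v) - ref[x]
            if abs(delta) >= threshold:
                events.append((t, x, 1 if delta > 0 else 0))  # 1 = "on", 0 = "off"
                ref[x] = math.log(v)                          # reset reference after event
    return events

# A bright spot moving one pixel per frame produces a paired off/on event.
frames = [[1.0, 1.0, 10.0, 1.0],
          [1.0, 1.0, 1.0, 10.0]]
print(frames_to_events(frames))  # [(1, 2, 0), (1, 3, 1)]
```

Static pixels emit nothing, which is the source of the very low data rates the abstract highlights.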
  • Wolfgang Fink (University of Arizona) & Andrew Hess (The Hess PHM Group, Inc.)
    • David He (University of Illinois at Chicago) & Andrew Hess (The Hess PHM Group, Inc.)
      • 11.0101 Reliability and Condition-Based Fusion Approaches to Prognostics
        Shashvat Prakash (Collins Aerospace), Yang Zhang (), Indramani Mohanty (Collins Aerospace) Presentation: Shashvat Prakash - -
        As sensing and computation become cheaper and more ubiquitous, data-centric methods for avoiding unscheduled interruptions are more viable. In the absence of sufficient data, reliability-based approaches have filled the void; for instance, a recommended removal time may be set to ensure a certain process reliability. With additional data, more modelling approaches become viable. The most common has been condition-based maintenance (CBM), where health indicators determine the pre-failure removal point. The presented approach considers both the near-term time horizon in CBM models and the life-limiting events that occur over the entire lifetime of the component. Life-impacting stressors like extreme ground temperatures, compressor surges, or the state of refurbishment at installation inform the hybridized model. This work first quantifies the fundamental prognostic health management tradeoffs into a cost model. The value of a pre-failure removal, and the resultant avoidance of unscheduled cost, is balanced by the potential decrease in productive life. With this cost construct, life expectancy techniques from reliability are employed to determine expected life given the stressors over the course of life. These modeled life impacts are combined with condition-based indicators in a holistic prognostic model, as applied to a Boeing 787 Cabin Air Compressor. The strategy impacts are expressed in terms of cost savings to the operator. With this formulation, alternate strategies may be directly compared, with a varying blend of condition-driven and proactive soft-life removals.
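The cost tradeoff described above, in which avoided unscheduled cost is balanced against forfeited productive life, can be sketched as a simple expected-cost model (all constants and the model form are hypothetical, not the paper's calibration):

```python
def expected_cost(p_fail, c_unscheduled, c_scheduled, life_lost_frac, c_life):
    """Expected cost of a removal policy: unscheduled-failure risk traded against
    the value of productive life forfeited by removing the component early."""
    return p_fail * c_unscheduled + (1 - p_fail) * c_scheduled + life_lost_frac * c_life

# Removing earlier lowers failure risk but forfeits more useful life (illustrative numbers).
early = expected_cost(p_fail=0.02, c_unscheduled=100.0, c_scheduled=10.0, life_lost_frac=0.30, c_life=50.0)
late = expected_cost(p_fail=0.25, c_unscheduled=100.0, c_scheduled=10.0, life_lost_frac=0.05, c_life=50.0)
print(round(early, 1), round(late, 1))  # 26.8 35.0 -- early removal wins under these constants
```

Under a construct like this, alternate blends of condition-driven and soft-life removals can be compared on a single cost axis.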
      • 11.0102 AI-Driven Fault Detection & Reliability Diagnostics for Spacecraft Using SimuPy and Random Forest
        Vishnupriya S Devarajulu () Presentation: Vishnupriya S Devarajulu - -
        In order to maintain the system performance and mission productivity of spacecraft, health monitoring and fault diagnostics are crucial. It is essential to ensure that a spacecraft operates properly and free of anomalies, as they could jeopardize the whole mission. Traditional approaches are challenging to apply, as they often rely on post-mission data or are constrained by limited resources, making them insufficient for detecting subtle or emerging anomalies during flight. This paper introduces a simulation-driven diagnostics framework that uses NASA's SimuPy and a custom telemetry generator to emulate fault conditions and produce multivariate telemetry streams for spacecraft. We apply Random Forest classification models to the synthetic telemetry to detect and categorize anomalies. The proposed framework facilitates both pre-launch validation and in-flight anomaly detection. While previous approaches have primarily focused on retrospective failure analysis, our approach supports proactive diagnostics by simulating system behavior and dynamically injecting both nominal and fault conditions during runtime.
      • 11.0104 Evaluating Anomaly Detection Algorithms for Satellite Telemetry: A Case Study Using Public Datasets
        Lorenzo Brancato (Politecnico di Milano), Alessandro Lucchetti (Politecnico di Milano), Francesco Cadini (Politecnico di Milano), Seiji Tsutsumi (JAXA), Noriyasu Omata (Japan Aerospace Exploration Agency), Yu Kimura (Japan Aerospace Exploration Agency) Presentation: Lorenzo Brancato - -
        The demand for artificial satellites has been expanding rapidly across various purposes such as earth observation, navigation, communications, and space exploration in recent years. As a result, constellation missions that operate a large number of satellites simultaneously are increasing. The conventional approach, in which experts manually monitor and operate satellites, has limitations for satellite constellations, and there is a growing need to automate Prognostics and Health Management (PHM). In particular, the application of Machine Learning techniques to PHM has drawn attention, as it contributes to reducing operational costs and enhancing the reliability of satellite systems. Until recently, the difficulty of obtaining high-quality labeled telemetry datasets including anomalies has been a barrier to the development and evaluation of PHM algorithms. However, the European Space Agency (ESA) has released benchmark datasets on satellite telemetry with labels (Kotowski et al. 2024), which makes it possible to evaluate anomaly detection algorithms under more realistic conditions. This study conducts a comparative analysis of multiple anomaly detection algorithms using the public datasets. The evaluation covers a wide range of algorithms, including unsupervised learning based on standard statistical methods, Gaussian regression models, and similarity of time series waveforms. These methods are assessed from multiple perspectives, including standard performance metrics such as event-wise precision, recall, and F-beta scores, robustness of the algorithms, and their feasibility in satellite operations. This study aims to establish a reliable anomaly detection framework that accounts for the constraints of space systems, such as remote operation and limited communication bandwidth, thereby contributing to improved reliability in satellite missions.
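Among the metrics listed above, the F-beta score generalizes the F1 harmonic mean by weighting recall more heavily than precision when beta > 1, which suits anomaly detection where missed events are costly. A minimal sketch (illustrative values only):

```python
def f_beta(precision, recall, beta):
    """F-beta score: beta > 1 weights recall more heavily than precision."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# With recall below precision, raising beta pulls the score down.
print(round(f_beta(0.8, 0.5, 1.0), 3))  # 0.615 (harmonic mean, F1)
print(round(f_beta(0.8, 0.5, 2.0), 3))  # 0.541 (recall-weighted, F2)
```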
      • 11.0105 AeroAssist: A Prompt-Driven Multimodal AI Framework for Aircraft Maintenance
        SANDEEP KALARI (Old Dominion University), Ravi Mukkamala (Old Dominion University), Abhinav Sai Choudary Panchumarthi (Old Dominion University), Vikas Ashok (Old Dominion University) Presentation: SANDEEP KALARI - -
        Unscheduled aircraft maintenance poses significant challenges, including operational disruptions, increased safety risks, and substantial financial costs. Existing diagnostic methods are limited by their reliance on unimodal data streams—visual, audio, or sensor-based—preventing early fault detection and comprehensive component-level diagnostics. These siloed approaches lack the integrative reasoning capabilities of human experts, highlighting the need for a unified, intelligent framework. In response, we propose AeroAssist, a prompt-driven, multimodal diagnostic framework built on Google’s Gemini 2.5 Pro/Flash, a large-scale, pre-trained foundation model used directly at inference time, without any task-specific fine-tuning. Our primary contribution is Guided Reflective Diagnostic Reasoning (RDR), a structured prompting methodology that instantiates expert-like reasoning behaviors such as independent analysis, cross-modal synthesis, and iterative self-critique. This approach leverages the generalization capabilities of large multimodal models while avoiding the limitations of data scarcity and the computational overhead of model retraining. We evaluate our framework on a semi-synthetic aerospace dataset under zero-shot and few-shot conditions, achieving 90% classification accuracy in the few-shot setting. Beyond quantitative results, qualitative analysis reveals the model’s ability to handle complex reasoning scenarios, such as resolving contradictory multimodal evidence, while also identifying critical failure modes, including overlooking subtle visual defects relevant to safety. Our findings demonstrate that the framework’s ability to holistically synthesize video, audio, and sensor inputs significantly enhances diagnostic reliability. This work represents a practical and technically grounded step toward developing multimodal, foundation model–based diagnostic copilots for real-world aerospace maintenance applications.
      • 11.0107 Towards Autonomous PHM: An Application to Gearbox Fault Diagnosis
        David He (University of Illinois at Chicago) Presentation: David He - -
        Autonomous Prognostics and Health Management (Autonomous PHM) refers to the capability of a system to independently monitor, diagnose, predict, and manage its own health status without human intervention. It combines traditional PHM functions with autonomy and intelligent decision-making to enable self-sustaining operation, especially in complex or remote environments. The key characteristics of an autonomous PHM system include: (1) self-monitoring: continuous collection and analysis of sensor data to assess system health in real time; (2) self-diagnosis: identification of faults, anomalies, or degradations using AI, machine learning, or model-based reasoning; (3) self-prognosis: prediction of remaining useful life (RUL) or time to failure based on current and historical data; (4) autonomous decision-making: autonomous selection and execution of maintenance or mitigation actions (e.g., reconfiguration, load reduction); (5) adaptability: adapt pre-trained models (e.g., for fault detection or RUL estimation) from one system or component to another with limited new data; (6) minimal human oversight: designed to function reliably with little to no manual input, particularly useful in inaccessible or high-risk settings (e.g., space missions, underwater robotics, military systems). A few challenges remain for developing an effective autonomous PHM system: (1) learning with limited labeled data: limited availability of failure data for training ML models; (2) cross-platform autonomy: autonomous PHM systems often operate in varied conditions or on different equipment types. PHM functions should be adapted from one system or component to another to reduce the need to retrain models from scratch in every new setting. (3) scalability: autonomous PHM systems should scale to large, complex systems (e.g., fleets of aircraft or satellites). A model trained on one unit can be transferred to other units in the fleet to scale autonomous PHM capabilities efficiently. 
In this paper, the development of an autonomous PHM system by integrating self-supervised learning and large language models (LLMs) is presented. The effectiveness of the autonomous PHM system is demonstrated with an application to gearbox fault diagnosis.
      • 11.0108 Initial Study of a Physics-Based Virtual Assistant for Real-Time Fault Diagnosis in Space Habitats
        Kazuki Toma (Texas A&M University), Joshua Elston (Texas A&M University), David Kortenkamp (TRACLabs), Daniel Selva (Texas A&M University) Presentation: Kazuki Toma - -
        This paper presents preliminary results of a physics-based model approach to anomaly diagnosis for crewed space habitats, addressing the need for autonomous system fault management in future lunar Gateway and Mars surface missions. Remote operations with significant communication delays make timely access to ground-based Subject Matter Experts (SMEs) impractical, highlighting the importance of onboard diagnostic capability to maintain crew safety and mission continuity. The research investigates whether hybrid model-based reasoning can replicate the iterative SME process traditionally used for fault isolation. SMEs draw on system architecture knowledge and physical laws to formulate, simulate, and test fault hypotheses, establishing causal relationships to identify root causes. The proposed framework embeds this logic within a virtual assistant (VA) that integrates first-principles physical models and numerical simulation driven by differential equations, supporting causal inference and transparent reasoning that enhance crew trust and situational awareness. A web-based application, Daphne-AT, has been developed in our laboratory as a prototype of such VAs, and the physics-based diagnosis framework is planned to be incorporated into it. This builds on previous human subject experiments at NASA’s Human Exploration Research Analog (HERA), which relied on static knowledge graph-based approaches for known anomalies. When an anomaly is detected, Daphne-AT alerts the crew, queries the anomaly in its knowledge graph, and ranks hypotheses by comparing the set of anomalous parameters to the set of real-time telemetry values that deviate from their thresholds. The physics-based framework extends diagnostic capability to more granular fault modes, including subcomponent failures such as fan degradation, valve malfunction, and filter saturation within systems like the Carbon Dioxide Removal System.
Operational variations such as system mode changes or shifts in crew respiration intensity are also incorporated, allowing diagnosis of anomalies that are not explicitly known. In this approach, similarity checking is a core step within the physics-based diagnosis framework, used to rank multiple fault hypotheses by comparing their simulated system responses with actual telemetry trends. This technique systematically assesses how closely the time-series data generated under each hypothesis match the real system behavior, serving as an objective basis for root cause identification. Simulation experiments validated the approach by comparing multiple algorithms—including mean square error, Pearson correlation coefficient, and dynamic time warping—and demonstrated that mean square error offers practical computational efficiency for real-time onboard diagnosis with constrained processing resources, while maintaining clear separation between correct and alternative hypotheses. This work is part of a NASA STTR Phase II effort in collaboration with TRACLabs to advance autonomy for Gateway missions. Daphne-AT is integrated with a Multi-Agent Autonomous Anomaly Resolution System (MaARS) that enables coordinated interaction among the VA, crew, and robotic agents. These agents can autonomously perform diagnostic tasks or collect additional evidence beyond fixed sensor coverage, improving fault isolation and supporting autonomous operation during crewed/uncrewed periods. Because ECLSS subsystems closely interact with habitat physics, this framework can be extended to systems like thermal control, oxygen generation, and trace contaminant management. Future comparisons with data-driven methods and human-in-the-loop testing will help validate performance and impacts on crew workload and trust for long-duration missions.
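The similarity-checking step described above, ranking fault hypotheses by mean square error between simulated responses and observed telemetry, can be sketched as follows (hypothetical fault names and values, not Daphne-AT code):

```python
def mse(a, b):
    """Mean square error between two equal-length time series."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def rank_hypotheses(telemetry, simulated):
    """Rank fault hypotheses by how closely their simulated response matches telemetry."""
    return sorted(simulated, key=lambda name: mse(telemetry, simulated[name]))

telemetry = [1.0, 1.2, 1.5, 1.9]
simulated = {
    "fan_degradation": [1.0, 1.1, 1.4, 1.8],    # tracks the observed trend closely
    "valve_malfunction": [1.0, 0.8, 0.6, 0.4],  # diverges quickly from telemetry
}
print(rank_hypotheses(telemetry, simulated))  # ['fan_degradation', 'valve_malfunction']
```

MSE is cheap per comparison, which is consistent with the abstract's finding that it suits constrained onboard processing better than dynamic time warping.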
      • 11.0110 Physics-Informed Machine Learning for Life Assessment of Aerospace Structures under Fatigue
        Kunal . (Indian Institute of Technology Kanpur), Pritam Chakraborty () Presentation: Kunal . - -
        Fatigue crack growth under high-cycle fatigue is one of the most severe problems in the design, maintenance, and safe operation of aircraft structures. During operation, these structures experience millions of loading cycles, which cause the gradual growth of nucleated cracks leading to ultimate failure. Thus, accurate modeling of fatigue crack growth is necessary for ensuring structural integrity, maximizing inspection intervals, and extending the service life of aerospace structures. Conventional methods of modeling crack growth under high-cycle fatigue use linear elastic fracture mechanics to arrive at the cyclic stress intensity factor, which is then used in the Paris law describing steady-state crack growth. The Paris law is highly non-linear, with two constants, C and m, under fully reversible loading. These parameters are evaluated using Euler integration of the Paris law and linear regression of scattered crack growth measurements from standard tests. However, a lack of constraints during the calibration can render the parameter estimates inaccurate, particularly when the data is significantly scattered. To address this limitation, Physics-Informed Machine Learning (PIML) architectures are employed to calibrate the parameters of the Paris law. However, before utilizing this calibration approach, the accuracy of Physics-Informed Neural Networks (PINNs) in integrating the Paris law was tested. To this end, the predictions from Physics-Infused Long-Short Term Memory (PI-LSTM) and Implicit Euler Transfer Learning (IETL) architectures were also compared to Euler integration, and a reasonable agreement was obtained. Following this, these methods were applied to obtain the parameters from numerically generated data using some assumed C and m values. It was observed from the study that the method was not only able to calibrate the parameters, but also that the network could be used to predict crack growth when the amplitudes were modified.
Finally, scattered data was artificially generated by choosing distributions of C and m. Subsequently, IETL was applied to the scattered data to calibrate the parameters and yielded satisfactory agreement. In summary, this study exemplifies the merits and demerits of different PIML methods when applied to predict crack growth from the Paris law. Furthermore, the approach allows both crack growth evolution and Paris constants to be predicted from limited experimental data, thereby reducing the need for repeated costly tests across different loading cases. Finally, the reliability of the PIML framework to predict crack growth for various amplitudes and block loading is demonstrated.
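The Euler integration of the Paris law referenced above, da/dN = C(ΔK)^m with ΔK = YΔσ√(πa), can be sketched as follows (assumed constants and units for illustration, not the paper's calibrated values):

```python
import math

def euler_paris(a0, cycles, dN, C, m, dsigma, Y=1.0):
    """Forward-Euler integration of the Paris law da/dN = C * (dK)^m,
    with cyclic stress intensity factor dK = Y * dsigma * sqrt(pi * a)."""
    a = a0
    history = [a]
    for _ in range(0, cycles, dN):
        dK = Y * dsigma * math.sqrt(math.pi * a)  # grows as the crack lengthens
        a += C * dK ** m * dN                     # Euler step over dN cycles
        history.append(a)
    return history

# Crack growth over 10,000 cycles from a 1 mm starting crack (all values assumed).
growth = euler_paris(a0=1e-3, cycles=10_000, dN=100, C=1e-10, m=3.0, dsigma=100.0)
print(growth[-1] > growth[0])  # True: crack length increases monotonically
```

Because ΔK itself depends on the current crack length, the growth rate accelerates as the crack extends, which is why the integration (rather than a closed form) is the calibration target.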
    • Wolfgang Fink (University of Arizona) & Derek De Vries (Northrop Grumman Propulsion Systems)
      • 11.0201 Investigating Large Language Model-Based Decision Making for Deep Space Habitat Systems
        Sreehari Manikkan (Purdue University), Maurice Reimer (Purdue University), Motahareh Mirfarah (Purdue University), Zhiwei Chu (), Mohan Chi (), Christian Silva (Purdue University), Lalit Dongare (Purdue University) Presentation: Sreehari Manikkan - -
        Deep space habitats are complex systems with tightly coupled interdependencies exposed to hazardous environments. Effective decision-making in these systems requires models that can scale with multiple fault scenarios while incorporating system-level expertise. Due to increased communication delays and bandwidth constraints, conventional ground control strategies and expert interventions may become less practical with extensive deep space communication delays, highlighting the need for autonomous onboard decision-making capabilities. In this work, we investigate the feasibility of using Large Language Models (LLMs) for autonomous decision support using the Human-centered Autonomous Resilient Space Habitat (HARSH), a cyber-physical testbed at the Resilient Extra-Terrestrial Habitats Institute (RETHi). As a pilot study, we replace existing heuristics and priority rules within HARSH’s health management system (HMS) with LLM-generated decisions. The LLM determines the actions to be taken by the testbed’s agent model based on outputs from the diagnostic reasoner. To support this, we conduct prompt engineering that includes concise descriptions of testbed subsystems and a defined set of possible actions. We leverage structured outputs and function-calling features of OpenAI’s API framework to obtain decision responses from the LLM. To quantify the repeatability and uncertainty of LLM decisions, we design synthetic tests with known identified failure modes, varying numbers of LLM queries, and multiple model configurations. We evaluate ten representative failure modes spanning diverse testbed states using three model variants: GPT-4.1 nano, GPT-4.1, and o3. For each failure mode and model type, we collect decision responses in batches of 1, 10, and 100 and analyze the probability distribution of outcomes and response times. 
Results indicate that o3 and GPT-4.1 deliver the expected actions with 100% consistency, with o3 being approximately four times slower than GPT-4.1. GPT-4.1 nano achieves around 80% accuracy while maintaining the fastest response time among the tested models. The LLM-based decision maker is then integrated into the HMS and preliminary results are obtained using a single-scenario experiment. Challenges encountered during integration, along with lessons learned and potential solutions, are discussed. Future work will focus on real-time multiple fault scenario experiments using the LLM-driven decision maker and on comparing its performance with existing heuristic-based approaches.
    • Andrew Hess (The Hess PHM Group, Inc.) & David He (University of Illinois at Chicago)
      • 11.0401 Intelligent Defect Detection and Identification for Additive Manufacturing Systems
        John Poindexter (Florida Institute of Technology), Seong Hyeon Hong (Florida Institute of Technology) Presentation: John Poindexter - -
        Additive manufacturing (AM) enables rapid prototyping and precision fabrication but faces persistent challenges from defect formation, particularly in high-reliability applications. Defect detection in AM remains challenging, particularly across new geometries and defect types, where existing methods often require extensive labeled data or manual tuning. This work introduces a semi-supervised framework based on a convolutional Variational Autoencoder (VAE) trained exclusively on non-defective samples. During training, the VAE learns a compressed latent representation of normal part structures, which provides a statistical baseline for comparison. At evaluation, encoded mean vectors are analyzed using a global Mahalanobis distance measure, which accounts for correlations across latent features and quantifies deviations from the non-defect distribution. Unlike reconstruction-error metrics, which often show overlap between subtle defects and normal samples, this latent-space distance offers a more robust and generalizable decision rule. The framework was evaluated on datasets of 3D-printed rectangles, cylinders, hourglasses, and pyramids, including both familiar and previously unseen geometries. Results show that non-defective embeddings form tightly clustered distributions, while defective parts consistently deviate outward, enabling reliable thresholding. The method achieved an average classification accuracy of 93% with F1 scores up to 0.95, balancing recall and precision while prioritizing detection of subtle anomalies. Visualizations using PCA and t-SNE confirmed that the latent space organizes shapes meaningfully and highlights separability between non-defects and defects. By reducing dependence on labeled defect datasets and applying a single global threshold across geometries, this framework demonstrates a scalable, data-efficient approach to quality assurance. 
It further provides a foundation for integration into real-time inspection pipelines for intelligent manufacturing.
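The global Mahalanobis distance described above scales deviations by the latent covariance, so directions with larger natural variance count for less than equally sized deviations along tight directions. A minimal 2-D sketch (hypothetical latent statistics, not the trained VAE's):

```python
def mahalanobis2d(x, mean, cov):
    """Mahalanobis distance for a 2-D latent vector, inverting the 2x2 covariance by hand."""
    dx, dy = x[0] - mean[0], x[1] - mean[1]
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))  # 2x2 matrix inverse
    q = dx * (inv[0][0] * dx + inv[0][1] * dy) + dy * (inv[1][0] * dx + inv[1][1] * dy)
    return q ** 0.5

mean = (0.0, 0.0)
cov = ((1.0, 0.0), (0.0, 4.0))  # second latent axis has larger variance (std 2)
print(mahalanobis2d((0.0, 2.0), mean, cov))  # 1.0: one std along the wide axis
print(mahalanobis2d((2.0, 0.0), mean, cov))  # 2.0: same Euclidean offset, tighter axis
```

The two test points are equidistant in Euclidean terms but not in Mahalanobis terms, which is the property that lets a single global threshold separate defects from the non-defect cluster.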
      • 11.0402 Correlation-Informed Time-Series Forecasting for Anomaly Detection
        Lauren Bailey (), James Carnal (The University of Tennessee, Knoxville), Jamie Coble (University of Tennessee), Xingang Zhao (University of Tennessee) Presentation: Lauren Bailey - -
        Early and efficient detection of system anomalies is vital for maintaining safety for both the system and operators, and for preventing growing concerns that could increase system downtime. Time-series forecasting using machine learning (ML) techniques offers a way to predict future system states and enhance anomaly detection. This work focuses on exploring these ML methods, particularly long short-term memory (LSTM) neural networks (NNs), to assess if data-driven approaches can outperform physics-based methods in identifying anomalies. Additionally, correlation-informed LSTM (CI-LSTM), a newly developed technique, will be introduced. The CI-LSTM is a knowledge-guided ML method that leverages feature correlations to create a linear equation that predicts each feature. The residuals for each feature based on the linear equation results are incorporated into the ML loss function. Data for this study was collected from the Tri-Twin, a three-loop flow system at the University of Tennessee, Knoxville. The Tri-Twin features a heater in loop one, with heat exchangers connecting loop one to loop two and loop two to loop three. Temperature and flow sensors throughout the system enable comprehensive data collection. The system control also allows for the introduction of forced anomalies, enabling a comparison of the models' effectiveness. The physics-based model uses a modified Lagrangian formulation, while the data-driven models employ optimized NNs. Among the standard ML methods, LSTMs outperformed recurrent neural networks (RNNs) and gated recurrent units (GRUs). The baseline LSTM achieved excellent accuracy, with the highest error being for the flow rate in loop three, at a relative root mean square error (rRMSE) of 8.6%; all other errors were below 0.50%. The CI-LSTM yielded even higher accuracy, reducing loop three's flow rate rRMSE to 2.5%, with the other features either improving or remaining similar to the pure ML results.
For anomaly detection, the LSTM detected anomalies within 8 seconds of their occurrence and identified the end of anomalies 4 seconds after they concluded. In comparison, the physics-based model took about 25 seconds to detect anomalies and overestimated durations by 22 seconds. These findings demonstrate the promise of data-driven methods for anomaly detection, as the networks can capture correlations better and facilitate more efficient identification of anomaly windows. Currently, the CI-LSTM approach is under further investigation and will be tested alongside the LSTM and physics-based methods for their anomaly detection capabilities. This work presents significant potential for the use of LSTMs for anomaly detection and the effectiveness of CI-ML methods.
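The anomaly-detection step described above can be sketched as simple thresholding of forecast residuals, flagging contiguous windows where the model's prediction error exceeds a bound (hypothetical residuals and threshold, not Tri-Twin data):

```python
def anomaly_windows(residuals, threshold):
    """Flag contiguous (start, end) index windows where |residual| exceeds a threshold."""
    windows, start = [], None
    for t, r in enumerate(residuals):
        if abs(r) > threshold and start is None:
            start = t                         # anomaly window opens
        elif abs(r) <= threshold and start is not None:
            windows.append((start, t - 1))    # anomaly window closes
            start = None
    if start is not None:
        windows.append((start, len(residuals) - 1))
    return windows

# Forecast errors spike between samples 2 and 4, then return to nominal.
residuals = [0.1, 0.2, 1.5, 1.8, 0.9, 0.1, 0.2]
print(anomaly_windows(residuals, threshold=0.5))  # [(2, 4)]
```

A more accurate forecaster yields smaller nominal residuals, permitting a tighter threshold and hence the earlier detection and sharper window boundaries reported above.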
    • Mark Walker (End to End Enterprise Solutions) & Andrew Hess (The Hess PHM Group, Inc.)
      • 11.0601 Mitigating Aviation Maintenance Challenges: A Technical Evaluation of AI-Enabled Sustainment
        Christopher Reese (Georgia Tech Research Institute), Max Xu (Georgia Tech Research Institute), Scott Nicholson (Georgia Institute of Technology) Presentation: Christopher Reese - -
        Aviation maintenance is a foundational pillar of operational safety, reliability, and mission success. As aircraft systems evolve to incorporate increasingly sophisticated technologies, ranging from advanced avionics to integrated sensor networks, maintenance personnel face mounting complexity in diagnostics, data interpretation, and procedural execution. These challenges are compounded by workforce shortages, Subject Matter Expert (SME) turnover, and the growing demand for predictive, data-driven maintenance solutions. In particular, the departure of experienced personnel often results in the loss of critical experience, including nuanced troubleshooting strategies and platform-specific insights, challenging the ability to sustain operational continuity and support the next generation of maintainers. Traditional documentation and informal tribal knowledge, while historically effective, are increasingly inadequate in the face of complex systems, workforce turnover, and the demand for predictive insights and scalability. In both commercial and military domains, these limitations contribute to reduced readiness, inefficiencies, and elevated lifecycle costs. Recent advancements in Artificial Intelligence—particularly Large Language Models (LLMs) and predictive analytics—present a novel but complex opportunity to address these challenges. LLMs can ingest and synthesize unstructured data from manuals, technician notes, and maintenance logs, enabling intelligent diagnostics, contextual troubleshooting, and dynamic knowledge sharing. This paper will examine traditional maintenance practices and evaluate how different AI models can mitigate systemic inefficiencies while critically addressing the operational, ethical, and technical constraints inherent to their deployment.
    • Alexandre Popov (McGill University) & Wolfgang Fink (University of Arizona)
      • 11.0702 Modeling the Impact of Helicopter Vibrations on the Musculoskeletal Health of US Army Pilots
        Julie Johnston (MIT), Jordan Dixon (Charles Stark Draper Laboratory) Presentation: Julie Johnston - -
        The UH-60, used for troop transport, MEDEVAC, and mission control, has evolved over the last 45 years from the Alpha model to the Lima and Mike models currently in use. Previous studies investigated the impact of Whole-Body Vibrations on aviators and the resulting musculoskeletal injury, but none have investigated the efficacy of the Mike model's Active Vibration Control System (AVCS) in reducing the impact of helicopter vibrations on musculoskeletal health. Computational analyses of a biomechanical model using OpenSim and motion capture at varying levels of vibration were conducted. These analyses quantify the response of the spine and the surrounding muscles when vibratory loads are applied while the aviator is positioned to manipulate the flight controls. A musculoskeletal model was developed to represent the aviator in the seated posture required to effectively manipulate the flight controls. The team recorded motion capture data from a pilot in a pilot test for concept validation. This data was then processed and input into the OpenSim inverse kinematics tool to determine joint angles and the muscle-tendon lengths of several back muscles. A survey was also developed that builds upon previous efforts, seeking to understand the aviator's perspective on musculoskeletal injury and prevention, with a focus on the back. Aviators are asked to describe the cause of their injury, methods of injury prevention, and recovery techniques across subpopulations of flight experience: Lima-majority, Mike-only, Mike-majority, and an even mixture of L/M. The data attempt to characterize the impact of the AVCS on aviator spine health. The AVCS should decrease the rate of injury by reducing the vibratory loads experienced by the aviator. This survey is unique compared to previous questionnaires as it focuses on the user's perspective of differences between the two models, and the injury or pain felt by each service member.
While a trend of reduced injury occurrence was expected amongst the Mike-only aviators versus those with Lima-majority flight hours, this was not the case. Injury prevalence was consistent across most populations, indicating the potential inefficacy of the AVCS. Analysis of open-ended responses, particularly from the hybrid group, provides context for the perceived impacts of using the AVCS. Some population demographics were not represented in this survey due to the nature of the unit being surveyed, which may impact the validity of some results. By quantifying the perceived efficacy of the AVCS as it relates to chronic musculoskeletal injury using a survey of pilot experience factors (flight hours, airframes, operating theatres, etc.), and by representing the maladaptive posture of the pilots with a computational simulation based on experimental pilot data, a full picture was developed of the health risks facing US Army aviators. The aim is to expand the overall understanding of how vibration impacts the musculoskeletal health of aviators and their perception of the profession's lifelong health effects. The ultimate goal is to aid in the design of additional countermeasures to improve aviator spine health and to serve as a platform for optimization of systems like the AVCS.
  • Mona Witkowski (Jet Propulsion Laboratory) & Michael Machado (NASA - Goddard Space Flight Center)
    • Mona Witkowski (Jet Propulsion Laboratory) & Heidi Hallowell (Ball Aerospace)
      • 12.0103 The Impact of Major Anomalies of Robotic Mars Surface Missions on Mission Timeline
        Matt Heverly (Jet Propulsion Laboratory ), Magdy Bareh (Jet Propulsion Laboratory) Presentation: Matt Heverly - -
        Robots exploring the surface of Mars have a set of objectives that must be accomplished within a constrained mission duration. This timeline constraint often comes from the qualified life of the hardware, available energy due to seasonal effects on solar power, or, as in the case of Mars Sample Return, the proximity of Earth and Mars for the associated return launch window. Whatever the constraint, a mission timeline must be developed to show that the mission's objectives can be accomplished within the available mission duration. A key factor that must be considered when developing a mission timeline is the loss of operational sols (Martian days) due to encountering and recovering from major anomalies. This paper examines all modern NASA Mars surface missions, including the Spirit and Opportunity rovers, the Phoenix lander, the Curiosity rover, the InSight lander, and the Perseverance rover, to understand the major anomalies encountered during their surface missions. All anomalies resulting in four or more consecutive sols of lost progress are cataloged in this study, providing a complete data set for statistical analysis. Additionally, fifteen specific anomalies are discussed in detail to illustrate key themes and lessons learned during their discovery, investigation, and recovery. Anomalies fall into seven categories: mechanical hardware, software, environmental interactions, computer hardware, power systems, electrical hardware, and command errors. Our analysis quantifies the loss in surface productivity from each anomaly, with impacts ranging from 4 to 55 sols lost for any given event. When taken cumulatively over the prime mission, it is shown that, on average, missions lose 11% of their surface duration to major anomalies. Each of the missions examined has lasted past its expected prime mission, with two of the vehicles (Curiosity and Perseverance) still operational.
When extended missions are included in addition to the prime missions, the time lost to major anomalies drops to 6.7%. The data presented are from robotic Mars surface missions, but the lessons are applicable to any space mission. The productivity lost to encountering and recovering from major anomalies is not always considered when developing mission timelines, and allocating appropriate timeline margin for anomalies is critical to ensuring a mission's success. Drawing from a range of past Mars missions, this paper provides recommendations on mission timeline margin policies that are grounded in robotic exploration experience.
      • 12.0104 Guidance and Control Assessments of Psyche Mission’s Monthly Operations Sequences
        David Sternberg (NASA Jet Propulsion Laboratory), Abigail Couto (), Michael Ying (NASA Jet Propulsion Lab) Presentation: David Sternberg - -
        The Psyche mission, launched in October 2023, relies on its Guidance, Navigation, and Control (GNC) subsystem to reach and orbit the (16) Psyche asteroid. There, the GNC subsystem will help the mission return valuable data about this metal-rich asteroid. The Psyche spacecraft includes multiple sensors and actuators as part of its GNC subsystem, and each component has unique operational capabilities and behaviors that, together with mission requirements, influence GNC algorithm designs. The GNC subsystem is part of the broader mission's flight system, necessitating that its operational sequences, built approximately monthly, be reviewed by GNC and the other subsystems. This monthly sequence review process is designed to ensure that the inputs to the sequence build from each of the subsystems have been collected and implemented in a safe and effective manner. Each subsystem has a unique process for assessing the draft sequences, which are collections of relative-timed commands and may include periods of thrusting with electric propulsion thrusters, coasting, science activities, maintenance and characterization activities, momentum management, and other behaviors necessary for maintaining the health of the spacecraft. This paper presents the process by which the GNC subsystem reviews the monthly sequences. In doing so, it provides summaries of the tools used to assess the sequences' readiness for flight, including a high-fidelity simulation run of the GNC commands using the Controls Analysis System Testbed/GNC Integrated Systems Testbed (CAST/GIST), the vector fit tool (VFT2), the constraint checker, and the small forces tool. Additionally, the paper describes the individual checks that the responsible GNC engineer performs to ensure the sequenced commands meet the GNC subsystem's flight rules and guidelines.
Lastly, the paper describes the products that the GNC review process provides to the other subsystems to aid their own review processes.
    • Rob Lange (Jet Propulsion Laboratory) & Kedar Naik (BAE Systems, Space & Mission Systems)
      • 12.0201 Telemetry Selection Rules: Europa Clipper's Approach to Engineering Data Generation
        Caitlin Crawford (NASA Jet Propulsion Lab), Christopher Ballard (Jet Propulsion Laboratory), Christopher Swan (), Gorang Gandhi () Presentation: Caitlin Crawford - -
        For any spacecraft, effective data and downlink management is essential due to the limitations of on-board data volume storage and constrained downlink capabilities. For the Europa Clipper mission, we address this for engineering data through the development and implementation of telemetry selection rules (TSRs). These on-board configurable files integrate with flight software and define which telemetry channels should be saved and/or downlinked based on spacecraft state or operational context (such as downlink bandwidth). This rule-based structure allows the spacecraft to intelligently prioritize high-value data for both real-time and recorded data, ultimately improving downlink efficiency. TSRs mark a departure from heritage-based selection criteria, which relied on a static structure with limited flexibility. The TSR rule-based approach offers increased flexibility, finer granularity, and the ability to prioritize data. However, it introduces complexity in defining rule-based logic and in predicting data generation in flight. Additionally, since the TSRs generate variable data depending on spacecraft state and activities, they require careful testing and simulation to avoid unintended data loss. We also explore the operational considerations of this approach and the performance of TSRs during early operations of Europa Clipper. This includes planning and analyzing telemetry for mission-critical events: e.g., what telemetry is required for safing events or low-data-rate periods? This paper highlights the functionality and integration of TSRs (both for real-time and recorded data), the trade-offs compared to heritage telemetry systems, the operational considerations behind rule and file selection, and the limitations and complexities introduced by this novel approach to onboard engineering data generation.
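The rule-based selection idea can be illustrated with a small sketch. The rule fields, priority scheme, and channel names below are hypothetical; the abstract does not describe the actual TSR file format or the flight software interface, so this is only one plausible shape for state-dependent telemetry selection.

```python
from dataclasses import dataclass

# Hypothetical rule structure: each rule maps a spacecraft-state predicate
# to the channels saved onboard and the subset queued for downlink.
@dataclass
class TelemetryRule:
    condition: callable  # spacecraft state dict -> bool
    record: set          # channel IDs saved to onboard storage
    downlink: set        # channel IDs queued for downlink
    priority: int        # higher wins when several rules match

def select_channels(state, rules):
    """Return (record, downlink) channel sets for the current state."""
    matched = [r for r in rules if r.condition(state)]
    if not matched:
        return set(), set()
    # Union the record sets of every matching rule, but downlink only what
    # the highest-priority matching rule allows (e.g., a low-rate rule can
    # suppress downlink while recording continues).
    record = set().union(*(r.record for r in matched))
    best = max(matched, key=lambda r: r.priority)
    return record, record & best.downlink

rules = [
    TelemetryRule(lambda s: True, {"pwr_v", "temp"}, {"pwr_v"}, 0),
    TelemetryRule(lambda s: s["mode"] == "safing",
                  {"pwr_v", "temp", "att_err"}, {"pwr_v", "temp", "att_err"}, 10),
    TelemetryRule(lambda s: s["rate_bps"] < 1000, {"pwr_v"}, set(), 5),
]
```

With these illustrative rules, a safing entry widens both recording and downlink, while a low-rate pass records but downlinks nothing until bandwidth recovers.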
      • 12.0203 Europa Clipper's Implementation and Operational Use of CCSDS File Delivery Protocol
        Christopher Ballard (Jet Propulsion Laboratory), Emily Bohannon (NASA Jet Propulsion Lab), Robert Cato (NASA Jet Propulsion Lab) Presentation: Christopher Ballard - -
        Europa Clipper has demonstrated the first use at the Jet Propulsion Laboratory (JPL) of fully acknowledged CCSDS File Delivery Protocol (CFDP) on both the uplink and the downlink data streams. The protocol's use in the design has been a success, eliminating the need for software and process infrastructure to retransmit and delete data products from the downlink stream or to re-uplink files to the spacecraft. The verification and validation of the CFDP functionality will be covered, along with how its lessons informed the development of operational usage. The uplink/downlink architecture and data products management constraints will also be discussed, as they drive the CFDP timer values, along with the uplink/downlink rates. In the lead-up to launch, it was identified that the CFDP ground timer freeze/thaw capability needed to be automated rather than solely manual. The tool algorithm will be described along with its operational use. Several operational scenarios will be described and the use of CFDP in those scenarios elaborated, such as the launch data playback, the robustness it provided in the period when the lab power was shut down due to the Eaton Fire, and the unique 4-month low data rate (LDR) period that Clipper operated through. Additionally, a contingency method for data product downlink at very low rates (down to 520 bps) was developed for the LDR periods in the mission. Finally, lessons learned on the specific implementation of CFDP and its integration with the ground and flight systems will be discussed, with potential application to future JPL missions.
      • 12.0205 Scalable Ground Station Selection for Large LEO Constellations
        Grace Kim (Stanford University), Vedant Srinivas (Stanford University), Duncan Eddy (Stanford University), Mykel Kochenderfer (Stanford University) Presentation: Grace Kim - -
        Effective ground station selection is critical for low-Earth orbiting (LEO) satellite constellations to minimize operational costs, maximize data downlink volume, and reduce communication gaps between access windows. Traditional ground station selection typically begins by choosing from a fixed set of locations offered by Ground Station-as-a-Service (GSaaS) providers, which helps reduce the problem scope to optimizing locations over preexisting infrastructure. However, finding a globally optimal solution for stations using existing mixed-integer programming methods quickly becomes intractable at scale, especially when optimizing over multiple providers and large satellite constellations. This work introduces a scalable, hierarchical framework that decomposes the global selection problem into single-satellite mixed-integer linear programming (MILP) subproblems. These subproblems yield optimal station choices per satellite, which are then clustered via k-means to identify consistently high-value stations. Cluster-level station sets are then aggregated into a global solution that satisfies system-wide constraints. By applying clustering after optimization, this approach enables scalable coordination while maintaining feasibility, offering a method to perform efficient ground station planning for next-generation satellite constellations. We evaluate our method on tasks for minimizing communication gaps between contact windows and maximizing data downlink for existing Earth observation (EO) satellite constellations such as Capella Space (6 satellites), ICEYE (36 satellites), and Planet Labs' Dove flocks (200 satellites). This improves upon the previous best of supporting constellations of up to 20 satellites. We compare to a globally optimal MILP solution for small problems and to an optimized network of freely placed locations for constellations larger than 20 satellites.
We demonstrate comparable performance in the objective function value and a significant decrease in the time needed to find a solution.
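The decompose-then-aggregate strategy can be sketched in miniature. Here an exhaustive search stands in for the per-satellite MILP and simple frequency counting stands in for the k-means clustering step; the station names and downlink volumes are invented for illustration and do not come from the paper.

```python
import itertools

def best_subset(volumes, k):
    """Per-satellite subproblem: the size-k station set maximizing downlink
    volume. (Exhaustive search standing in for the per-satellite MILP.)"""
    return max(itertools.combinations(volumes, k),
               key=lambda subset: sum(volumes[s] for s in subset))

def aggregate(per_sat_choices, n_global):
    """Aggregation step: keep the stations chosen most often across
    satellites (a frequency proxy for clustering per-satellite solutions)."""
    counts = {}
    for choice in per_sat_choices:
        for station in choice:
            counts[station] = counts.get(station, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)[:n_global]

# Three satellites with different per-station downlink volumes (GB/day).
volumes = [
    {"svalbard": 9, "awarua": 2, "punta": 3, "fairbanks": 8, "hawaii": 1},
    {"svalbard": 8, "awarua": 3, "punta": 2, "fairbanks": 9, "hawaii": 2},
    {"svalbard": 2, "awarua": 9, "punta": 8, "fairbanks": 1, "hawaii": 3},
]
choices = [best_subset(v, 2) for v in volumes]
network = aggregate(choices, 2)
```

Because each per-satellite solve is independent, the expensive step parallelizes across the constellation, which is the source of the scalability claimed in the abstract.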
      • 12.0206 Global Space Traffic Management 2.0: A Hybrid GSTMA–USTCC Architecture for the New Space Era
        Wanjiku Chebet Kanjumba (University of Florida) Presentation: Wanjiku Chebet Kanjumba - -
        The exponential growth of space activities, driven by commercial satellite launches, satellite mega-constellations, human spaceflight, space tourism, commercial spaceports, and scientific missions, has underscored the urgent need for a unified framework to manage space traffic; without one, this growth will lead to unprecedented congestion in Earth's orbital environments, increasing collision risks and accelerating space debris proliferation. Current space traffic management (STM) systems, reliant on national agencies and voluntary coordination, lack the scalability, interoperability, and real-time responsiveness required for sustainable operations. The purpose of this paper is to investigate the possible creation of two entities to address the growing risks of collisions, space debris generation, and operational inefficiencies: the Global Space Traffic Management Authority (GSTMA) and the Universal Space Traffic Coordination Centre (USTCC). The GSTMA would be a global organization dedicated to the regulation and coordination of all space traffic in Earth's orbital environments. The USTCC would act as a central, internationally governed hub for monitoring, coordinating, and regulating space traffic to ensure the safety, sustainability, and accessibility of orbital operations, leveraging cutting-edge technologies and global partnerships to manage space activities in real time (both within Earth's orbital influence and beyond), working collaboratively with the GSTMA. Building on lessons from terrestrial air traffic control (ATC) systems and existing space governance frameworks (e.g., the Outer Space Treaty, the Space Data Association's collision avoidance mechanisms), we analyze technical and operational requirements for a scalable STM architecture.
Key technologies include AI-driven predictive analytics (e.g., ESA's AI Debris Avoidance System) for collision risk assessment, distributed sensor networks (e.g., the USSF's Space Surveillance Network) for enhanced Space Domain Awareness (SDA), and blockchain-based coordination protocols for secure, decentralized data sharing among operators. Case studies from current STM initiatives (e.g., NASA's Conjunction Assessment Risk Analysis, the EU's SST Consortium) and emerging space traffic initiatives (e.g., the EUSTM program, U.S. Space Policy Directive-3) are evaluated to identify gaps in real-time decision-making and regulatory enforcement. We also analyze a centralized-decentralized hybrid model, in which the USTCC provides global oversight while operators retain autonomy via standardized protocols, and compare it to uncoordinated operations. In addition, the paper presents a model architecture for GSTMA–USTCC interaction, its implications for ground-based mission operations centers, and technical metrics such as conjunction warning accuracy, latency reduction, and orbital capacity optimization. By integrating policy frameworks with next-generation SDA technologies, the GSTMA–USTCC framework aims to ensure the long-term safety and sustainability of orbital operations, supporting the growing space economy.
      • 12.0207 Continuing Development and Enabling of Exploration Mission Systems Software
        Matthew Miller (Jacobs/NASA JSC), David Charney (NASA - Johnson Space Center), Kenneth Davis (KBR Inc), Jacob Keller (Amentum, NASA Johnson Space Center), Phuong-Thao Jackie Vu (), Benjamin Feist (NASA - Johnson Space Center), David MacKenzie (), Luke McSherry (NASA - Johnson Space Center), Arwa Khazali (lz technology ), Omar Baig (Barrios Technology), John Cox (), Cameron Pittman (Amentum Technology), Edwin Montalvo () Presentation: Matthew Miller - -
        Future human exploration missions, like those under NASA's Exploration Extravehicular Activity Services (xEVAS) and Human Landing System (HLS) contracts, will require unprecedented collaboration and seamless data integration. While infrastructure like LunaNET is being developed, a significant challenge remains: effectively integrating mission-critical data into the "plan, train, fly, explore" workflow. Without this, the ability to provide a comprehensive and cohesive data environment for deep-space operations will be limited. The EVA Mission Systems Software (EMSS) team is actively developing foundational software to support human spaceflight mission operators. We present a suite of tools designed to enhance EVA procedure authoring and execution and facilitate mission context creation for both International Space Station (ISS) and future Artemis missions. These tools are developed iteratively, informed by continuous use in current ISS operations and extensive Artemis field testing, ensuring alignment with operational needs. CODA is an exploratory web-based platform that consolidates and temporally aligns disparate mission, training, and testing data, such as video, audio, transcripts, and telemetry. It allows users to "re-live" and analyze events by abstracting away traditional file-centric data silos. CODA seamlessly connects to existing data sources, providing context and synchronized playback of historical moments, even with media not available during real-time operations. This significantly reduces the manual burden on flight controllers, saving time, cutting costs, and improving the efficiency of research. Talky Bot is a near-real-time audio transcription application that helps flight controllers manage numerous audio communication channels. By providing written transcripts and playback capabilities, Talky Bot frees up attentional resources previously consumed by constant monitoring of multiple voice loops. 
Currently transcribing ISS Space-to-Ground and various ground flight voice loops, it will expand to support Artemis missions and training events. For future planetary surface operations, AEGIS modernizes Flight Operations Directorate (FOD) operations by uniting disparate spatial and temporal data. Designed for Artemis missions, AEGIS streamlines EVA planning and execution. It integrates science traceability matrix details with spatial map data to define EVA task locations and sequences, allowing for the creation of spatiotemporal EVA plans that balance scientific value with mission and safety rules. AEGIS helps define concepts like Points of Interest, EVA Stations, and traverses, providing an integrated understanding of EVA activities. Finally, Maestro addresses the complexities of creating and managing EVA procedure content. Since supporting US EVA 81 in 2022, it has gained a broader user base, aiming to become a unified platform for execution, training, and planning, as well as a historical record. A critical value of Maestro during mission execution is synchronized situational awareness, ensuring everyone knows the crew's exact step. Maestro prioritizes readability, offers flexible rendering options, includes integrated workflow management, robust version control, and a user-friendly interface for efficient procedure creation and maintenance. These EMSS development efforts collectively demonstrate a concerted approach to building the foundational, horizontally integrated data systems essential for the success and safety of future human spaceflight missions, from low-Earth orbit to deep space.
      • 12.0209 From Theory to Practice: An Operationally-Focused IP Model for GSaaS Procurement
        Marcin Kovalevskij (findgs) Presentation: Marcin Kovalevskij - -
        While Integer Programming (IP) provides a powerful theoretical framework for optimizing satellite ground station networks, a significant gap exists between mathematically optimal models and operationally viable ones. Standard IP formulations that solely maximize data throughput often recommend fragmented and impractically complex networks. This paper resolves this theory-practice gap by introducing an enhanced IP model that incorporates an Operational Complexity Penalty (OCP), a regularization term that penalizes network size to favor consolidated, manageable solutions. To ensure the validity of this penalty, we introduce a data-driven normalization method that makes the model self-calibrating. The penalty weight is dynamically scaled based on the median data-to-cost efficiency of the specific network scenario, making the trade-off between performance and complexity economically rational. Through a series of simulated trade studies on a 5-satellite constellation, we demonstrate the powerful utility of this approach. The results from our Pareto frontier analysis show that the OCP-enhanced model can reduce network complexity by over 54% compared to the unconstrained baseline while retaining nearly 85% of the maximum possible data throughput. This key finding reveals that a substantial portion of the baseline network is comprised of operationally expensive assets offering only marginal utility, confirming that a pure data-maximization objective leads to inefficient resource allocation. Furthermore, our budget sensitivity analysis confirms that under realistic financial constraints, the model consistently produces more capital-efficient networks, delivering superior data return per station activated. The significance of this work lies in its transformation of the abstract ground station selection problem into a practical, strategic planning tool, providing mission operators with a quantitative method to balance performance, complexity, and cost.
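The self-calibrating penalty can be sketched as follows, with an exhaustive search standing in for the IP solver. The per-station cost weighting, the median-efficiency normalization, and the `alpha` parameter are assumptions made for illustration; the paper's exact Operational Complexity Penalty formulation is not reproduced here.

```python
import itertools
import statistics

def ocp_objective(subset, data, cost, alpha=1.0):
    """Throughput minus an Operational Complexity Penalty. The penalty is
    normalized by the scenario's median data-to-cost efficiency, so the
    trade-off self-calibrates to the economics of the instance."""
    eff = statistics.median(data[s] / cost[s] for s in data)
    return (sum(data[s] for s in subset)
            - alpha * eff * sum(cost[s] for s in subset))

def select(data, cost, alpha=1.0):
    """Brute-force stand-in for the IP solve: pick the station subset
    maximizing the OCP-regularized objective."""
    best, best_val = (), float("-inf")
    for r in range(len(data) + 1):
        for subset in itertools.combinations(data, r):
            val = ocp_objective(subset, data, cost, alpha)
            if val > best_val:
                best, best_val = subset, val
    return set(best)
```

With `alpha = 0` the model reduces to pure data maximization and activates every station; a nonzero `alpha` drops stations whose marginal throughput is below the scenario's typical data-per-cost, which is the consolidation behavior the paper reports.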
      • 12.0211 Intelligent Satellite Constellation Design for Autonomous Disaster Monitoring
        Hu Qiu () Presentation: Hu Qiu - -
        Current disaster monitoring systems face fundamental limitations in autonomous response capabilities due to single-satellite architectures, ground-dependent processing chains, and inflexible orbital configurations. This research introduces an intelligent constellation design paradigm that enables satellites to autonomously detect, assess, and respond to disaster events through cooperative multi-platform architecture and adaptive onboard decision-making. The proposed constellation design centers on three functionally differentiated satellites operating within a single orbital plane through optimized phase configuration. The survey satellite implements wide-swath multispectral sensing across thermal infrared, visible, SWIR, and LWIR spectral ranges, achieving 1000+ km coverage at 250m resolution for continuous disaster screening and atmospheric condition mapping. The coordination hub satellite integrates precision radiometric calibration systems, high-performance onboard computing platforms, and advanced communication interfaces to enable autonomous decision-making and inter-satellite coordination. The observation satellite provides agile high-resolution imaging with dual-mode capability: wide-area surveying at 5m ground sampling distance and detailed examination at 1m resolution through rapid attitude control systems. Constellation orbital design optimization addresses the fundamental challenge of balancing autonomous response speed with system reliability. Mathematical analysis and simulation modeling demonstrate that 3° angular separation between survey and coordination platforms, combined with 20° separation between coordination and observation satellites, maximizes operational efficiency. 
This configuration enables complete processing of initial detection data before coordination satellite arrival, provides sufficient autonomous planning computation time, and ensures observation satellite access to target areas within optimal response windows while maintaining inter-satellite communication link availability above 95%. The autonomous planning architecture represents a paradigm shift from ground-controlled to satellite-controlled disaster response operations. The coordination hub processes detected events through multi-objective optimization algorithms that evaluate disaster severity based on population exposure, hazard propagation models incorporating real-time meteorological data, atmospheric obstruction probability from dynamic cloud mapping, and communication opportunity analysis. The adaptive scoring framework self-adjusts through operational feedback loops to optimize performance across different geographic regions and seasonal disaster patterns. Real-time task scheduling employs hybrid optimization combining integer linear programming with metaheuristic search algorithms to operate within spacecraft computational constraints. The system implements a closed-loop autonomous workflow from detection through response. Initial disaster identification triggers inter-satellite data exchange, followed by autonomous threat assessment and observation task generation. High-resolution characterization data undergoes onboard fusion processing to generate immediate alert products and comprehensive analysis datasets. Priority-based communication protocols ensure critical disaster information reaches ground users within minutes while preserving complete observational records for detailed post-event analysis. 
Comprehensive validation methodology encompasses multi-seasonal disaster monitoring in high-risk regions, long-term sensor calibration stability verification, adaptive cloud avoidance algorithm effectiveness assessment, and communication system resilience evaluation under various operational disruptions. Performance analysis demonstrates significant improvements in disaster detection accuracy, response time reduction, and system reliability compared to existing monitoring architectures. This intelligent constellation design establishes foundational principles for autonomous Earth observation systems capable of independent disaster monitoring and emergency response across multiple hazard types, demonstrating transformative potential for space-based disaster risk reduction applications.
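The multi-objective scoring and tasking step described above might look like the following sketch, with a greedy ranker standing in for the hybrid ILP/metaheuristic scheduler. The factor names, weights, capacity limit, and event values are all assumptions: the abstract lists the scoring factors (population exposure, hazard propagation, cloud obstruction, communication opportunity) but not how they are combined.

```python
def threat_score(event, weights):
    """Weighted sum of normalized factors in [0, 1] -> tasking priority."""
    return sum(weights[k] * event[k] for k in weights)

def schedule(events, weights, capacity):
    """Greedy stand-in for the hybrid ILP/metaheuristic scheduler:
    task the highest-scoring events that fit within observation capacity."""
    ranked = sorted(events, key=lambda e: threat_score(e, weights), reverse=True)
    return [e["id"] for e in ranked[:capacity]]

# Hypothetical factors: population exposure, hazard propagation severity,
# probability of a cloud-free view, and communication window availability.
weights = {"population": 0.4, "hazard": 0.3, "cloud_free": 0.2, "comm": 0.1}
events = [
    {"id": "wildfire_A", "population": 0.9, "hazard": 0.8, "cloud_free": 0.7, "comm": 0.9},
    {"id": "flood_B",    "population": 0.4, "hazard": 0.5, "cloud_free": 0.9, "comm": 0.6},
    {"id": "quake_C",    "population": 0.8, "hazard": 0.9, "cloud_free": 0.3, "comm": 0.7},
]
```

In the abstract's closed-loop workflow, the weights themselves would be adjusted by operational feedback rather than fixed as shown here.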
    • Michael Lee (NASA - Kennedy Space Center) & William Koenig (NASA - Kennedy Space Center)
      • 12.0301 Development of Procedures for a Regolith Experiment with Human-Robot Interaction at the Lunar South Pole
        Aileen Rabsahl (), Carsten Hartmann (German Aerospace Center - DLR), Rosalinde Borrek () Presentation: Aileen Rabsahl - -
        This paper addresses the preparation for future lunar missions by systematically analyzing potential off-nominal risks astronauts may face during extravehicular activities (EVAs) at the Lunar South Pole and developing corresponding malfunction procedures, in support of the LUNA analog facility, to enhance crew safety and mission success. The research encompasses a fundamental risk assessment of a defined lunar mission scenario, identification and prioritization of critical risks, and the subsequent development of operational procedures to mitigate selected high-priority risks. Furthermore, the study evaluates the requirements for realistic simulation of these scenarios within the LUNA analog facility, aimed at training astronauts to respond effectively to off-nominal events. The risk assessment methodology involved comprehensive identification of off-nominal situations through scenario analysis and brainstorming techniques, resulting in a structured flow chart representation of potential anomalies and their consequences. A comparative evaluation method was applied to estimate the likelihood and severity of consequences, enabling the classification of risks within a risk matrix framework. Seven off-nominal scenarios were selected based on combined risk levels and mission relevance, covering a spectrum of challenges from lunar environmental hazards to technological and human-robot interaction issues. Contrary to initial expectations, only one scenario was rated as high risk due to the comparative nature of the evaluation method. Nonetheless, this relative risk assessment provided a useful prioritization tool for procedural development. Following this, two malfunction procedures addressing "Cryobox temperature out of limit" and "Unplanned rover movement during EVA" were developed, incorporating all necessary operational steps.
The iterative nature of procedure development is emphasized, with future revisions anticipated following validation phases. Simulation fidelity and validity within the LUNA analog facility were critically assessed to define the equipment and functional requirements for training effectiveness. Findings indicate that while certain advanced features, such as gravity off-loading and sophisticated sun simulators, could enhance fidelity, they are not essential for the validity of training simulations. Additionally, a stationary rover mock-up is necessary for scenarios involving unplanned rover movements and situational awareness training. Limitations include the binary evaluation of component impact without a gradation scale, highlighting an area for future improvement. In addition, future work includes the practical execution of training simulations incorporating the developed procedures, with campaign plans updated to include intentional failures. Comparative studies on simulation fidelity levels could elucidate the relationship between training realism and effectiveness. Nominal procedures should be enhanced to incorporate comprehensive scientific tool usage and detailed sample inspection protocols, including temperature measurement and documentation using digital tools. In conclusion, this paper contributes to the body of knowledge on managing off-nominal risks in lunar EVAs and the development of astronaut training protocols using analog facilities. These insights support ongoing efforts in mission preparation, ultimately advancing human exploration and utilization of the Moon.
      • 12.0302 From Analog Tests to the Moon: Situational Awareness Systems for Astronauts and Tourists
        Aileen Rabsahl (), Carsten Hartmann (German Aerospace Center - DLR), Bastian Ernst (German Aerospace Center - DLR) Presentation: Aileen Rabsahl - -
        Human space exploration is entering a new phase with the Artemis program and the planned return to the Moon. Efficient and scientifically valuable Extra-Vehicular Activities (EVAs) in the lunar environment require advanced Mission Control Systems (MCS) that enhance astronaut autonomy, safety, and situational awareness. This study presents the development and evaluation of a custom MCS tailored for lunar surface operations. The system was implemented and tested during a high-fidelity simulated EVA at the LUNA analog facility operated by DLR and ESA, located in Cologne, Germany. Core functionalities included real-time crew tracking, physiological workload monitoring, QR-code-based equipment management, and structured scientific documentation. All components were integrated into a web-based dashboard accessible to the astronauts and the flight control team. The qualitative evaluation, based on video and voice recordings from the simulation as well as post-mission feedback interviews with all participants, revealed specific strengths and limitations of the system, informing targeted design improvements to enhance usability, robustness, and procedural integration. The findings indicate that real-time tracking supported improved orientation and navigation across the simulated terrain. Physiological monitoring enabled real-time assessment of crew workload, which was considered beneficial for maintaining procedural awareness and supporting safety-critical decisions. QR-code-based equipment interaction enabled reliable identification and contextual linking of tools and samples. This contributed to procedural efficiency by reducing otherwise required feedback loops and manual logging, while also improving accuracy in tool usage and sample tracking throughout the EVA. The integrated documentation tools were seen as reducing cognitive load and streamlining post-EVA reporting processes. 
In addition to professional use cases, the potential application of such systems in the context of lunar tourism was examined. Based on operational observations and system interactions during the simulation, several design improvements were proposed to optimize human-system interaction and technical reliability, while better aligning with operational workflows. These include simplified graphical interfaces, adaptive audio prompts, and higher levels of automation to facilitate safe and intuitive interaction with mission procedures. The results also highlight the potential of modular MCS architectures to evolve from specialized tools for scientific exploration missions into flexible platforms capable of supporting diverse user groups, including commercial spaceflight participants. This adaptability contributes to a broader vision of inclusive and sustainable lunar surface operations.
    • Seth Kricheff (Thompson Software Solutions) & John Kenworthy (BAE Systems)
      • 12.0402 E-TFT: An Enhanced Temporal Fusion Transformer for Early and Robust Detection of UAV Threats
        Shadi Sadeghpour (The Citadel) Presentation: Shadi Sadeghpour - -
        Unmanned Aerial Vehicles (UAVs) are increasingly central to applications ranging from surveillance, agriculture, and delivery services to national defense and disaster response. However, their reliance on sensor data for navigation makes them vulnerable to covert cyberattacks, especially those that manipulate sensor inputs to induce subtle, undetectable deviations in flight behavior. These attacks, particularly sensor spoofing such as GPS and Inertial Measurement Unit (IMU) manipulation, can compromise UAV operations without triggering conventional safety mechanisms. In this paper, we introduce a novel anomaly detection framework, the Enhanced Temporal Fusion Transformer (E-TFT), designed to address the challenges of detecting early-stage, low-intensity threats in UAV systems. The E-TFT model integrates several advanced techniques for anomaly detection that are particularly effective in the early stages of attacks, where traditional detection methods typically struggle. By combining local temporal modeling through a bidirectional Long Short-Term Memory (Bi-LSTM) network and global context via Transformer layers with attention pooling, the E-TFT captures both short-term deviations and long-range dependencies in telemetry data. The attention pooling mechanism enables the model to selectively focus on critical time windows, significantly improving its sensitivity to subtle, low-intensity attacks that might otherwise go undetected. Additionally, residual fusion and gating mechanisms enhance the model’s robustness, allowing it to handle noisy data and effectively detect evolving covert threats in real-time. We evaluate the E-TFT in comparison with four state-of-the-art models: Bidirectional LSTM (Bi-LSTM), Transformer Sequence Model (TST), Hybrid CNN-Transformer, and the original Temporal Fusion Transformer (TFT). 
To assess the performance of these models, we tested them on a dataset that includes both normal UAV trajectories and abnormal behaviors induced by a Coordinated Sensor Manipulation Attack (CSMA). The CSMA simulates sophisticated sensor manipulation by subtly altering multiple sensor inputs, leading to gradual, undetectable navigation drifts that evade conventional detection methods. The dataset is derived from the PX4 flight log, with various attack patterns, such as velocity vector manipulation, multi-sensor coordination, and high-frequency noise injection, designed to mimic real-world attack scenarios. The models are trained to classify anomalies based on ten levels of attack intensity. Our experimental results show that E-TFT outperforms all other models, particularly in detecting early-stage anomalies (segments 0 and 1), where traditional models struggle. The E-TFT achieves 98.1% overall accuracy and detects low-intensity drift two segments (~20s) earlier than the competing baselines. The E-TFT model’s ability to combine local and global temporal feature extraction offers superior detection accuracy, making it more robust in identifying subtle and covert threats. Moreover, the E-TFT significantly improves early-stage detection accuracy, crucial for preventing mission-critical UAV failures. This work contributes to the development of more effective UAV security systems by offering both a new attack model to simulate realistic threats and an advanced detection framework capable of identifying covert anomalies in real-time. The E-TFT provides a potential solution for widespread applications in both civilian and military UAV systems, ensuring early threat detection in dynamic and adversarial environments.
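The attention-pooling mechanism described in this abstract can be sketched generically (a softmax-weighted pooling over time steps in NumPy; the scoring vector, shapes, and names here are illustrative assumptions, not the authors' E-TFT implementation):

```python
import numpy as np

def attention_pool(hidden, w):
    """Softmax-weighted pooling over time steps.

    hidden: (T, D) per-timestep feature vectors from a sequence model
    w:      (D,)   learned scoring vector (hypothetical placeholder)
    Returns one (D,) summary vector that emphasizes the time windows
    with the highest attention scores.
    """
    scores = hidden @ w                   # (T,) unnormalized scores
    scores = scores - scores.max()        # for numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # softmax weights
    return alpha @ hidden                 # weighted sum over time

rng = np.random.default_rng(0)
T, D = 50, 8
hidden = rng.normal(size=(T, D))
w = rng.normal(size=D)
pooled = attention_pool(hidden, w)
print(pooled.shape)  # (8,)
```

Because the softmax weights sum to one, the pooled vector is a convex combination of the per-timestep features, letting the classifier weight the critical windows more heavily than a plain mean over time would.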
      • 12.0403 Silent Subversion: Sensor Spoofing Attacks via Supply Chain Implants in Satellite Systems
        Jack Vanlyssel (University of New Mexico), Gruia-Catalin Roman ( University of New Mexico), Afsah Anwar ( University of New Mexico) Presentation: Jack Vanlyssel - -
        Spoofing attacks are among the most destructive cyber threats to terrestrial systems, and they become even more dangerous in space, where satellites cannot be easily serviced and operators depend on accurate telemetry to ensure mission success. When telemetry is compromised, entire spaceborne missions are placed at risk. Prior work on spoofing has largely focused on attacks from Earth, such as injecting falsified uplinks or overpowering downlinks with stronger radios. In contrast, on-board spoofing originating from within the satellite itself remains an underexplored and underanalyzed threat. This vector is particularly concerning given that modern satellites, especially small satellites, rely on modular architectures and globalized supply chains that reduce cost and accelerate development but also introduce hidden risks. This paper presents an end-to-end demonstration of an internal satellite spoofing attack delivered through a compromised vendor-supplied component implemented in NASA’s NOS3 simulation environment. Our rogue Core Flight Software application passed integration and generated packets in the correct format and cadence that the COSMOS ground station accepted as legitimate. By undermining both onboard estimators and ground operator views, the attack directly threatens mission integrity and availability, as corrupted telemetry can bias navigation, conceal subsystem failures, and mislead operators into executing harmful maneuvers. These results expose component-level telemetry spoofing as an overlooked supply-chain vector distinct from jamming or external signal injection. We conclude by discussing practical countermeasures—including authenticated telemetry, component attestation, provenance tracking, and lightweight runtime monitoring—and highlight the trade-offs required to secure resource-constrained small satellites.
    • Zaid Towfic (Jet Propulsion Laboratory) & Dennis Ogbe (Jet Propulsion Laboratory)
      • 12.0501 Interplanetary Generalization: Zero-Shot Transfer and Domain Adaptation of Deep Stereo on Mars
        Yifei Liu (Carnegie Mellon University), Brandon Rothrock (NASA Jet Propulsion Laboratory), Robert Swan (NASA Jet Propulsion Lab), Hosei O (The University of Tokyo), Sebastian Scherer (Carnegie Mellon University), Masahiro Ono (JPL) Presentation: Yifei Liu - -
        Stereo vision is critical for the autonomous operation of the Curiosity and Perseverance rovers on Mars. These rovers employ a classical stereo algorithm based on block-matching to estimate depth from a pair of navigation cameras, which is used to build a 3D map of the terrain. The choice of this algorithm was motivated primarily by the computational constraints of radiation-hardened flight hardware and heritage from previous Mars rover missions. However, recent data from Perseverance have revealed cases where stereo perception was significantly degraded, leading to multiple instances where autonomous navigation was no longer possible. This work analyzes the flight stereo performance of Perseverance and identifies several contributing factors that lead to degraded performance, primarily featureless terrain and imaging at low solar phase angle. Using a dataset curated from the navigation cameras over the entire Mars 2020 mission to date, we provide an empirical analysis of stereo performance under these conditions and evaluate advanced learning-based methods that mitigate these failures. Representative deep stereo approaches, including cost-volume and iterative optimization methods, transformer-based architectures, and foundation models, show strong zero-shot performance on real Mars imagery. We further present a domain adaptation strategy that leverages foundation models as pseudo ground truth, showing its close alignment with flight stereo while providing dense disparity estimates. Finally, we analyze computational trade-offs to assess the feasibility of deploying these algorithms on emerging space-qualified hardware. We hope these results will inform design considerations for stereo vision systems in future rover missions and contribute toward more robust vision-based autonomous navigation.
      • 12.0503 Data-Driven Vibration Analysis from Vision: Optical Flow & ML for Vehicle and Terrain Monitoring
        Abdullah Hayat (UAEU), Mohamed Okasha () Presentation: Abdullah Hayat - -
        As aerospace and advanced mobility systems evolve, the ability to monitor structural integrity and operational environments with minimal mass and complexity becomes increasingly critical. Traditional vibration sensing relies on accelerometers or strain gauges physically attached to vehicles, introducing wiring, weight, and installation challenges—particularly for large airframes, rotorcraft, UAVs, or planetary rovers. This work presents a non-contact, vision-based framework that employs dense optical flow and machine learning to characterize and classify vibration signatures, enabling simultaneous assessment of both vehicle health and the quality of external surfaces or terrains traversed. Our methodology uses cameras to capture subtle surface motions under vibratory excitation. Unlike single-point sensors, this technique generates rich, full-field motion data even on low-texture surfaces typical of aerospace skins or automotive panels. Dense optical flow, implemented via the Farneback algorithm, computes velocity fields from video frames, effectively turning the camera into a virtual vibrometer. Experiments were conducted on a vibration shaker table to test the proposed method. The principal excitation used was a deterministic sinusoidal input at different frequencies and amplitudes. Extracted dense flow fields were processed through a tailored pipeline. The motion components were isolated to target primary accelerations critical to structural health. Principal component analysis (PCA) decomposed complex multi-point trajectories into independent motion signals, from which dominant frequencies were identified. These signals were then temporally filtered and summarized via statistical descriptors such as mean, RMS, peak-to-peak amplitude, and dominant frequency, building robust feature vectors. Full time-series sequences were also preserved for learning temporal patterns. Machine learning models transformed these features into diagnostics. 
Classical classifiers—including SVM, Random Forests, and Ensemble Decision Trees—were trained to distinguish excitation types and detect deviations from normal vibratory behavior. Additionally, LSTM and BiLSTM networks ingested sequential data, learning vibration signatures indicative of faults or terrain-induced anomalies. This framework demonstrates the following contributions:
• Non-contact vehicle health diagnostics: Establishes a camera-based approach capable of detecting faults such as imbalance, looseness, or evolving structural issues by learning deviations in vibration patterns.
• Terrain and surface condition assessment: Shows how analyzing modulation of vibration responses can indirectly characterize runway integrity, road roughness, or off-world terrain hazards, enhancing situational awareness and mission safety.
• Validated high-accuracy machine learning diagnostics: Rigorous tests using BiLSTM models underscore robustness for aerospace and mobility applications.
This vision-based approach highlights the synergy of advanced sensing, machine learning, and aerospace engineering in creating smarter, more resilient systems. Embedding such systems into platforms could enable continuous, real-time health and environment monitoring without intrusive instrumentation, reducing maintenance costs, improving operational safety, and supporting more autonomous missions. Future work will extend this framework to establish accuracy at the subpixel level and to validate it in operational settings.
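The PCA-and-dominant-frequency stage of the pipeline can be sketched in NumPy, assuming the dense flow fields have already been reduced to per-point displacement time series (the data shapes, frame rate, and function name are illustrative assumptions, not the paper's code):

```python
import numpy as np

def dominant_frequency(tracks, fs):
    """Estimate the dominant vibration frequency from multi-point
    displacement time series via PCA and an FFT peak search.

    tracks: (T, N) displacement of N tracked points over T frames
    fs:     camera frame rate in Hz
    """
    centered = tracks - tracks.mean(axis=0)
    # First right-singular vector = dominant shared motion direction;
    # projecting onto it gives the principal motion signal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    signal = centered @ vt[0]
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[0] = 0.0                 # ignore the DC component
    return freqs[np.argmax(spectrum)]

# Synthetic check: a 12 Hz sinusoid seen by 20 points at 240 fps,
# with per-point amplitude variation and additive noise.
fs, f0, T = 240.0, 12.0, 480
t = np.arange(T) / fs
rng = np.random.default_rng(1)
tracks = (np.sin(2 * np.pi * f0 * t)[:, None]
          * rng.uniform(0.5, 1.5, 20)
          + 0.05 * rng.normal(size=(T, 20)))
print(dominant_frequency(tracks, fs))  # ≈ 12.0
```

In the actual framework the `tracks` array would come from Farneback optical flow displacements; the PCA step is what lets many noisy per-pixel signals reinforce a single clean vibration estimate.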
      • 12.0504 Machine Learning Argument of Latitude Error Model for LEO Satellite Orbit and Covariance Correction
        Alexander Moody (University of Colorado, Boulder), Rebecca Russell (Charles Stark Draper Laboratory), Penina Axelrad (University of Colorado Boulder) Presentation: Alexander Moody - -
        Low Earth orbit (LEO) satellites support recent developments in position, navigation, and timing (PNT) service alternatives to GNSS. These alternatives require accurate propagation of satellite position and velocity with a realistic quantification of uncertainty. It is commonly assumed that the propagated uncertainty distribution is Gaussian; however, the validity of this assumption can be quickly compromised by the mismodeling of atmospheric drag. We develop a machine learning approach that corrects error growth in the argument of latitude for a diverse set of LEO satellites. By improving orbit propagation accuracy, the uncertainty distribution remains Gaussian for longer and can therefore be modeled by the corrected mean and covariance. We compare the performance of a time-conditioned neural network and a Gaussian Process on datasets computed with an existing orbit propagator and publicly available Vector Covariance Message (VCM) ephemerides. The learned models predict the argument of latitude error as a Gaussian distribution given parameters from a single VCM epoch. We show that this one-dimensional model captures the effect of mismodeled drag, which can be mapped to the Cartesian state space. The correction method only updates information along the dimensions of dominant error growth, while maintaining the physics-based propagation of VCM covariance in the remaining dimensions. We therefore extend the utility of VCM ephemerides to longer time horizons without modifying the functionality of the existing propagator.
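The mapping from a scalar argument-of-latitude error to the Cartesian state can be sketched for a near-circular orbit: the correction is a small rotation of the position vector about the orbit-normal direction. This NumPy sketch is a generic illustration under that assumption, not the authors' correction method; the state values and error magnitude are made up:

```python
import numpy as np

def apply_arg_lat_correction(r, v, delta_u):
    """Rotate position r about the orbit-normal (angular momentum)
    direction by a scalar argument-of-latitude error delta_u (rad).
    delta_u would come from the learned error model."""
    h = np.cross(r, v)
    n = h / np.linalg.norm(h)          # orbit-normal unit vector
    # Rodrigues rotation of r about n by delta_u
    return (r * np.cos(delta_u)
            + np.cross(n, r) * np.sin(delta_u)
            + n * np.dot(n, r) * (1.0 - np.cos(delta_u)))

# Circular-orbit example: ~500 km altitude, equatorial plane
r = np.array([6878e3, 0.0, 0.0])       # position, m
v = np.array([0.0, 7612.0, 0.0])       # velocity, m/s
r_corr = apply_arg_lat_correction(r, v, 1e-4)  # 100 µrad error
print(np.linalg.norm(r_corr - r))      # ≈ 688 m along-track shift
```

This shows why a one-dimensional along-track model is useful: a 100 µrad in-orbit angle error already corresponds to hundreds of meters of position error, while the cross-track and radial components are left to the physics-based covariance propagation.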
    • Alexandra Holloway (Jet Propulsion Laboratory) & Vandi Verma (NASA JPL-Caltech)
      • 12.0601 Proximeter: An Automated Rover Arm Placement Safety Evaluation Tool
        Harel Dor (JPL), Emily Newman (Software Engineering Institute), Justin Huang () Presentation: Harel Dor - -
        We present Proximeter, a software tool for efficient, repeatable evaluation of worst-case minimum clearance between triangular meshes representing robotic hardware and local terrain across a space of possible placements encoding placement uncertainty. These spaces can be adjusted to represent different uncertainty models and to account for terrain-aware placements such as those employing contact switches and various forms of range-finding. Proximeter is an essential enabling component of safety evaluations for sampling activities on the Mars 2020 and Mars Science Laboratory (MSL) missions, allowing operators to rapidly validate tool placements against mission requirements and develop rover sequencing within the constraints of tactical operational planning. Proximeter can additionally be easily adapted to any number of mission architectures when provided with appropriate hardware and kinematics models. We present the design of Proximeter, including the operational constraints that motivated its development and a novel application of Lipschitz optimization (LIPO) that greatly speeds up the evaluation of Proximeter queries. We then describe the various implementations of Proximeter as graphical, server-client, and command-line interfaces, as well as its usage on both the MSL and Mars 2020 missions. Lastly, we discuss efforts to use Proximeter on the surface of Mars to enable autonomous placement of rover instruments without the need to wait for human evaluation and commanding.
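The core idea behind using Lipschitz optimization here can be sketched in one dimension: given a clearance function with a known Lipschitz constant, candidate placements need only be evaluated when their Lipschitz lower bound could beat the best (worst-case) value found so far. This is a generic LIPO sketch on a toy function, assuming a scalar placement parameter; it is not Proximeter's implementation:

```python
import numpy as np

def lipo_min(f, lo, hi, k, n_iter=200, seed=0):
    """Lipschitz-optimization sketch: approximate the worst-case
    (minimum) of a 1-D clearance function f on [lo, hi], given a
    Lipschitz constant k. Random candidates are evaluated only when
    their Lipschitz lower bound could improve on the current minimum,
    saving expensive mesh-clearance evaluations."""
    rng = np.random.default_rng(seed)
    xs = [lo, hi]
    ys = [f(lo), f(hi)]
    evals = 2
    for _ in range(n_iter):
        x = rng.uniform(lo, hi)
        # Tightest lower bound on f(x) implied by past evaluations
        bound = max(y - k * abs(x - xi) for xi, y in zip(xs, ys))
        if bound < min(ys):        # x could still beat the minimum
            xs.append(x)
            ys.append(f(x))
            evals += 1
    i = int(np.argmin(ys))
    return xs[i], ys[i], evals

# Toy clearance profile (meters) with a worst-case dip; |f'| < 2.1
f = lambda x: 0.3 + 0.2 * np.sin(9 * x) + 0.1 * (x - 0.7) ** 2
x_star, y_star, evals = lipo_min(f, 0.0, 1.0, k=2.1)
print(x_star, y_star, evals)
```

The payoff is that `evals` is typically far below the number of candidate placements considered, which is what makes exhaustive worst-case clearance checks tractable under tactical planning time limits.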
      • 12.0602 Enhancing Scalable Autonomy Space Teleoperation with User Intervention during Task Execution
        Ajithkumar Narayanan Manaparampil (German Aerospace Center - DLR), Anne Köpken (German Aerospace Center), Luisa Mayershofer (German Aerospace Center - DLR), Nesrine Batti (DLR - Deutsches Zentrum für Luft- und Raumfahrt), Xuwei Wu (German Aerospace Center (DLR e.V.)), Harsimran Singh (German Aerospace Center - DLR), Michael Panzirsch (German Aerospace Center - DLR), Florian Lay (German Aerospace Center - DLR), Xiaozhou Luo (German Aerospace Center - DLR), Adrian Bauer (German Aerospace Center - DLR), Peter Schmaus (German Aerospace Center (DLR)), Daniel Leidner (), Rute Luz (), Thomas Krueger (European Space Agency), Neal Lii (German Aerospace Center) Presentation: Ajithkumar Narayanan Manaparampil - -
        With increasing task complexity in space robotic mission designs, advancing the command technology and remote operation of the robotic team becomes a key challenge in providing an effective human-robot team interface that helps facilitate successful task execution. In the International Space Station (ISS)-to-ground telerobotic experiment Surface Avatar, led by DLR with partner ESA, a team of robots on the (Earth) surface is commanded by an astronaut in orbit to perform various tasks in a simulated space habitat. The ISS crew can select different input modalities to command the robots, including direct teleoperation with the aid of force reflection in some instances, which provides strong user immersion and interaction with the environment. At the other end of the Scalable Autonomy spectrum, the robot team can also be commanded at the task level, utilizing the robot’s local intelligence to plan and execute at the crew’s high-level direction. By scaling and mixing these telerobotic command capabilities, we can bring human and robot intelligence together to solve and execute more complex and unknown tasks. The teleoperator, whether an astronaut or ground-based expert, would be able to effectively command a surface robotic team in a wide variety of conditions. A concern raised for autonomously executed robotic tasks is the ability to cope with unexpected situations in a timely manner. To help enhance this command scalability, particularly for a more seamless transition between the command modes, the current research presented in this paper introduces a system that allows the crew to intervene with direct user input to provide course corrections mid-task during autonomous execution. This work explores different solutions to provide the human teleoperator with situational awareness of the robot’s action, and approaches for user intervention. 
One approach we developed provides guidance forces based on the autonomous task and uses the movement of the haptic device as a comparison metric for interpreting the user's intention. The user can observe the robot’s action via the video stream, and let the robot complete the task autonomously. The robot executes the tasks autonomously as long as the robot’s planned action (e.g. motion) concurs with the user’s intention. Should the need or desire arise, the user can take over with direct teleoperation at any moment using the haptic interface. The haptic cues provided by the haptic input device can take different forms such as pose, position, or velocity. In order to understand the effectiveness of these different approaches for allowing the teleoperator to take over the robot’s autonomous task, particularly for space deployment, an implementation for user intervention is tested during the Surface Avatar ISS-Earth telerobotic experiment in 2025, where the ISS crew commanded a team of robots to perform various sample handling tasks with Scalable Autonomy Teleoperation. Its outcome and astronaut feedback are discussed. Furthermore, a user study is carried out on-ground in the same simulated space habitat with the time-delay conditions experienced by the ISS crew to study the performance and usability of seamless switching of telecommand modalities in Scalable Autonomy teleoperation.
      • 12.0603 Fault Response under Uncertainty: Human-Robot Collaboration in Rover Mobility Recovery
        Michael Newcomb (Jet Propulsion Laboratory), Michael McHenry (NASA Jet Propulsion Lab) Presentation: Michael Newcomb - -
        As lunar surface missions push toward greater autonomy, fault response remains a critical moment of human-robot collaboration. This study investigates how operators would interpret, diagnose, and respond to mobility faults during operations of the Endurance rover, a long-range lunar rover concept ranked highly in the most recent Planetary Science Decadal Survey. Across three structured evaluations (an initial set of DesignSim interviews with experienced mobility experts, a subsequent focus group discussion, and a follow-up DesignSim with less experienced participants), we examined how different participants navigate ambiguity, automation, and operational risk. Participants consistently diagnosed the fault within 30 minutes, yet none felt prepared to initiate recovery actions on their own. Across both novice and expert groups, participants expressed a strong preference for peer validation before initiating actuator-based commands, citing a general concern about causing irreversible damage to the craft. All began their assessment with spatial or visual data, turning to telemetry only when the imagery failed to fully explain the anomaly. While trust in onboard automated recovery was high, participants emphasized the need for procedural guardrails along with expert and team-based review. Findings also revealed systemic fragilities. Diagnostic workflows were shaped by fragmented or home-grown tools, and expertise was concentrated in individuals rather than systems. Upon review, there was an elevated risk of knowledge expiration due to Endurance's expected low-frequency operations cadence, and participants stressed the need for pre-planned SME engagement protocols and capable tooling for rapid fault triage and recovery. In the context of lunar operations, where short Earth-Moon communication latency increases tempo and reduces planning slack, these findings point to a design imperative. 
Future fault response systems must support not only robust autonomy, but also the cognitive and social dimensions of high-consequence decision-making among human operators. We offer recommendations for tooling, team processes, and HRI strategy to support rapid, safe, and trusted human-in-the-loop recovery.
      • 12.0605 Evaluation of a Continuous Drive Autonomous Navigation Algorithm for Next-Generation Lunar Rovers
        Young-Young Shen (MDA Space), Alexander Demishkevich (MDA Space) Presentation: Young-Young Shen - -
        Future lunar surface mobility applications, including the Canadian Lunar Utility Vehicle and commercial lunar activities, will require time-efficient autonomous driving of rover platforms. This is necessary to ensure sufficient time is available for core mission activities as opposed to traversing between sites. Traditional autonomy architectures for planetary rovers typically use a “stop-and-go” concept of operations, where the rover must stop periodically to assess the terrain ahead and plan its next path. Many techniques are proposed in the literature that allow continuous autonomous driving, where terrain assessment and path planning take place while the rover is on the move. These may improve the speed at which the rover can complete autonomous traverses. However, faster autonomous driving in the speed-made-good sense – that is, linear distance between origin and destination divided by traverse time – is not guaranteed and depends on terrain conditions, obstacle density, and mission parameters, among other factors. Furthermore, any increase in speed must not come at the expense of significantly increased risk of colliding with an obstacle or entering hazardous terrain. Building upon nearly two decades of rover guidance, navigation, and control development, MDA Space has developed a continuous-drive autonomous navigation algorithm for use on the next generation of lunar rovers as part of the Canadian Space Agency’s Lunar Surface Exploration Initiative. The algorithm can attempt recovery from navigation faults autonomously. The paper will provide an overview of the algorithm’s capabilities and then describe the protocol and results of the tests performed to compare the performance of the algorithm against its predecessor and demonstrate its utility in operational scenarios for human lunar exploration. The algorithm was tested in physical environments representing a wide range of complexities. 
These comprised obstacle courses with low, medium, and high obstacle density on both paved and undeveloped terrain. The evaluations culminated in a demonstration of the algorithm on the Canadian Space Agency’s planetary analogue terrain in St-Hubert, Quebec. In all cases, the performance of the continuous drive algorithm was compared to that of the heritage stop-and-go algorithm from which it was developed. Paired tests with sufficient repetitions were completed to allow for statistical comparison between the two algorithms. Metrics compared included speed-made-good, smoothness of paths taken, energy use, and number of obstacles hit. Evaluations were performed on an R&D rover testbed equipped with a lidar, fiberoptic inertial measurement unit, and wheel odometry, along with a differential GPS system for ground truth. It was found that the new continuous drive algorithm was between 69% and 190% faster than the heritage stop-and-go algorithm in the speed-made-good sense while choosing smoother or equally smooth paths, using less or comparable energy, and was not more likely to collide with obstacles. All results were achieved with a confidence level of at least 95%. The algorithm will be employed in a number of mission-representative scenarios in a fall 2025 demonstration, the results for which are expected by the conference. The scenarios include reducing operator workload on long traverses and teaching a path to a LiDAR-based teach-and-repeat algorithm.
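The speed-made-good metric defined in this abstract reduces to a short computation (a minimal sketch with made-up numbers; the function name and path data are illustrative):

```python
import numpy as np

def speed_made_good(path, t_total):
    """Speed-made-good: straight-line origin-to-destination distance
    divided by total traverse time, regardless of the path driven.

    path:    (N, 2) sequence of rover positions in meters
    t_total: traverse duration in seconds
    """
    return np.linalg.norm(path[-1] - path[0]) / t_total

# A rover that detours around obstacles: 100 m net displacement in 250 s
path = np.array([[0.0, 0.0], [40.0, 15.0], [70.0, -10.0], [100.0, 0.0]])
smg = speed_made_good(path, 250.0)
print(smg)  # 0.4 m/s

# A continuous-drive run covering the same displacement in 150 s is
# 250/150 - 1 ≈ 67% faster in the speed-made-good sense.
print(speed_made_good(path, 150.0) / smg - 1.0)
```

Note that only the endpoints and the clock enter the metric, which is why a continuous-drive algorithm can score higher even if it drives a slightly longer physical path.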
      • 12.0606 Indoor Testbed for Multirobot Research with Microdrones
        Kira Hofelmann (), Daniel Louie (Santa Clara University), Connor Bishop (), Rayyan Hussain (Santa Clara University), Christopher Waight (Santa Clara University), Jenny Huynh (), Michael Neumann (), Christopher Kitts (Santa Clara University), Manoj Sharma (Santa Clara University) Presentation: Kira Hofelmann - -
        Mobile multirobot systems have become increasingly utilized due to benefits such as redundancy, increased coverage, and the ability to create improved data products (like using several scalar field measurements to compute an instantaneous gradient). Multirobot systems include a wide variety of architectures, ranging from swarms, where large numbers of relatively simple robots form loose formations, to more centralized architectures where relatively few robots move with tight formation control. An example of a more centralized architecture is cluster control, which has been developed by the Robotic Systems Laboratory at Santa Clara University to control formations on land, in the water, and in air. Real-world testing of multirobot systems can be challenging; therefore, there is a need to develop a robust indoor testbed that is easy to use for experimental verification, reliability, and data analysis. In order to create the testbed, we optimized an OptiTrack motion capture system to track Crazyflie 2.1 microdrones within an enclosed 3D space. To achieve this, twenty-four infrared cameras were positioned around the 6 m by 3 m by 3 m workspace for precise tracking in 3D, and thoroughly calibrated. The resulting position data had error margins below 2 cm. This real-time position data was then broadcast over ROS2 for closed-loop control of the microdrones via PID controllers. After a brief description of cluster control, a two-drone cluster definition is presented and forward and inverse kinematics as well as the inverse Jacobian are developed. In our paper, we will present OptiTrack position data, plots illustrating the flight performances of single drones, as well as results of two-drone cluster flights. Performance tests include hovering, step responses, and multi-waypoint navigation. Individual drone performance will be compared to cluster performance, in order to characterize the differences the cluster control architecture provides. 
Additionally, we will include position error results, and standard deviations of error illustrating the successful implementation of PID tuning and cluster control. These results provide further support for the implementation of cluster control architecture, which can then be used for future formation control experiments.
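The closed-loop scheme described above, motion-capture position feedback driving per-axis PID controllers, can be sketched minimally as follows. The gains, tick rate, and function names are illustrative assumptions, not values from the paper.

```python
# Sketch of per-axis PID position control from mocap feedback.
# Gains and dt are illustrative, not the authors' tuned values.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        # Standard discrete PID: proportional, accumulated, and rate terms.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def position_hold_step(controllers, setpoint, measured):
    # One control tick: mocap position in, per-axis velocity commands out.
    return [c.update(s - m) for c, s, m in zip(controllers, setpoint, measured)]

# Drone hovering 10 cm below a 1 m altitude setpoint:
controllers = [PID(kp=1.2, ki=0.1, kd=0.4, dt=0.01) for _ in range(3)]
cmd = position_hold_step(controllers, setpoint=[0.0, 0.0, 1.0],
                         measured=[0.0, 0.0, 0.9])
```

In the real testbed the measured position would arrive over a ROS2 subscription at the mocap frame rate rather than as a hard-coded tuple.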
      • 12.0608 Calibrating IMU Pitch Misalignment in the Field to Reduce Wheel-Inertial Odometry Elevation Drift
        Tushaar Jain (Carnegie Mellon University), David Wettergreen (Carnegie Mellon University) Presentation: Tushaar Jain - -
        Rovers estimate their position (“positioning”) to navigate, map, and perhaps, one day, mine. This motivates our contribution: a method for calibrating an Inertial Measurement Unit’s (IMU) pitch relative to the rover, without extra sensors, sophisticated algorithms, external infrastructure, or ground calibration. Our theoretical and experimental results prove our method’s ability to enable more accurate elevation estimates. Additionally, we analyze the effect of terrain, IMU quality, and calibration path on the calibration. The Moon has no positioning infrastructure yet, necessitating unaided positioning methods. Visual methods can be used, but are encumbered by space computing constraints, harsh lunar illumination, and sparse visual features. The alternative is wheel-inertial odometry, which involves integrating wheel motion over time. As this process is imperfect, errors accumulate, causing unbounded position drift. Of the many errors that degrade odometry, we focus on the IMU’s pitch misalignment, which is interpreted as the rover moving on a spurious slope. This causes unbounded elevation error, degrading positioning and mapping (when a previously mapped area is revisited, the new map will be at a different elevation, a contradiction). Fundamentally, pitch misalignment is difficult to calibrate. There is no datum on the rover for its velocity vector in the IMU frame, precluding static calibration. Nevertheless, we propose a simple calibration method that can be performed in natural, irregular terrain, including on the Moon. We exploit the fact that as a rover drives in a closed circuit with a pitched IMU, its position estimates will drift in a vertical spiral. This spiral’s average slope is precisely the pitch misalignment. We begin with a mathematical analysis, demonstrating our idea’s theoretical soundness. 
We experimentally validate our method by identifying a 0.7-degree IMU pitch misalignment on a flight rover surrogate, which results in a greater-than-50% reduction in elevation error growth. Motivated by the fact that rovers traverse over non-uniform terrain, we evaluate the calibration on terrain of varying hardness (sand, gravel, and asphalt). Finally, we conduct controlled studies in simulation to investigate how calibration accuracy and uncertainty are influenced by several variables: terrain flatness, wheel slip, IMU quality (angle random walk and gyroscope bias instability), drive speed, number of circles driven, and circle size. Our desire to study the latter four variables is motivated by the competing need to drive a long distance for accurate calibration while keeping the procedure short enough to ignore attitude drift and be economical in time and power. Ultimately, our contributions enable more accurate rover positioning and mapping. Importantly, we work within the stringent constraints of space rover development; calibrating a flight rover IMU’s pitch on Earth terrain is not possible because the rover must be kept in a clean room (to avoid foreign-object debris), and regardless, Earth’s gravity limits mobility. Lastly, even if an Earth calibration were possible, it may be invalid on the Moon due to misalignments introduced during and after launch. Therefore, it is significant that our method can be performed on the Moon and does not require external calibration equipment or a source of ground truth position.
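The core geometric idea above, that the vertical spiral's average slope over a closed circuit equals the IMU pitch misalignment, reduces to a one-line estimate. The function name and the numbers below are illustrative, not the paper's data.

```python
import math

def estimate_pitch_misalignment(elevation_drift_m, distance_driven_m):
    # After a closed circuit, a pitched IMU makes the position estimate
    # spiral vertically; the spiral's average slope (rise over distance
    # driven) is the pitch misalignment.
    return math.degrees(math.atan2(elevation_drift_m, distance_driven_m))

# e.g., 1.22 m of spurious elevation accumulated over a 100 m circuit
pitch_deg = estimate_pitch_misalignment(1.22, 100.0)
```

Driving multiple circles averages out terrain-induced pitch variation, which is why the paper studies circle count and size as calibration parameters.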
      • 12.0609 TATERS (Tool for Autonomous Terrain Exploration of Remote Spaces)
        Rich Chase (Michigan Technological University), Tyler Doiron (Michigan Technological University), Ryan Navarre (Michigan Technological University), Meryl Spencer (Michigan Technological University) Presentation: Rich Chase - -
        This paper reports on the development of TATERS (Tool for Autonomous Terrain Exploration of Remote Spaces), a software framework for planning and risk analysis of single- and multi-robot traversals. TATERS is designed to aid mission planning workflows by merging planning methodologies with the execution of autonomy on board robotic systems. This is done by developing coarse and high-fidelity simulations and by tying features of autonomy stacks into Monte Carlo simulations during pre-mission studies. Further, we investigate and report on methods for curating live data, incorporating it into planning activities, and representing and updating optimal paths on autonomous systems without the need to update large amounts of data. This is achieved by a data reduction scheme called skeletonization, which compresses spatio-temporal cost cube data derived from geographic information system (GIS) data layer sets. The cost cube derivation strategy is designed using cost metrics that can be adjusted and guided by intuition. We identify and focus on three key metrics: scientific merit, traversability, and survivability.
      • 12.0611 LunarLoc: Robust Global Localization for Autonomous Surface Operations on the Moon
        Annika Thomas (Massachusetts Institute of Technology), Keerthana Srinivasan (Princeton University), Aleksander Garbuz (Massachusetts Institute of Technology), Trevor Johst (Massachusetts Institute of Technology), Dami Thomas (), Cormac ONeill (), Robaire Galliath (Massachusetts Institute of Technology), George Lordos (Massachusetts Institute of Technology), Jonathan How (MIT) Presentation: Annika Thomas - -
        As NASA progresses toward a sustained lunar presence under the Artemis program, global localization is a critical capability for autonomous surface operations where Earth-based infrastructure like GPS is unavailable. Tasks such as regolith excavation, transport, and infrastructure deployment require precise pose estimation over extended durations and varied terrain. Traditional methods such as visual-inertial odometry (VIO) suffer from drift accumulation, particularly in visually ambiguous or repetitive lunar environments. We present LunarLoc, a drift-free global localization approach designed for the lunar surface that leverages the underlying structure of the environment (i.e. persistent geological features such as boulders) rather than relying on appearance-based descriptors. Using zero-shot instance segmentation from onboard stereo cameras, LunarLoc detects and localizes boulders without requiring pre-labeled training data or fixed object classes. The system constructs a graph-based representation of these landmarks and performs global data association via geometric consistency with a previously recorded reference map. The graph-theoretic data association optimization finds the largest geometrically consistent set of 3D segment correspondences between a vehicle map and a reference map. Using this approach, LunarLoc achieves centimeter-level multi-session localization accuracy using a graph-theoretic optimization over matched landmarks. We validate LunarLoc using a physics-based simulation environment developed for the NASA Lunar Autonomy Challenge. The simulator incorporates a digital twin of NASA’s ISRU Pilot Excavator (IPEx) rover operating in a photorealistic 27 m × 27 m lunar terrain model. 
We perform multi-session global localization experiments with varying viewpoints and lighting conditions across multiple traverses spanning short- to long-range exploration. Our system consistently achieves sub-centimeter trajectory localization error, significantly outperforming the current state of the art in lunar localization methods, including image-based matching and crater-referenced geolocation. Beyond static frame alignment, we demonstrate how LunarLoc enables accurate global pose estimation by aligning agent trajectories within a shared map. To quantify this, we present both the transformation accuracy obtained through global localization and the resulting improvement in full trajectory estimation. Specifically, we integrate LunarLoc outputs into a pose graph optimization framework using GTSAM, where localization events are established via boulder-based localization and trajectory priors are provided by ORB-SLAM3. This reduces accumulated drift in VIO and enables agents to maintain globally consistent trajectories over long traverses. The combined system supports persistent, collaborative autonomy in GPS-denied, visually ambiguous lunar environments. In addition to releasing the full dataset and simulation playback tools, we demonstrate the method’s robustness to environmental appearance changes and evaluate its real-time performance on CPU and GPU configurations relevant to space-rated computing platforms. LunarLoc enables autonomous agents to localize and re-localize across sessions, demonstrating a promising approach for autonomy on long-duration, collaborative surface missions without dependence on external infrastructure.
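The geometric-consistency test at the heart of the data association above can be illustrated in miniature: two candidate boulder correspondences are mutually consistent if the inter-landmark distance is preserved between the vehicle map and the reference map. The paper solves for the largest consistent set via graph-theoretic optimization; this sketch (with assumed names and tolerance) only scores individual pairs.

```python
import itertools
import math

def consistent_pairs(vehicle_pts, reference_pts, matches, tol=0.05):
    # matches: list of (vehicle_index, reference_index) candidate
    # correspondences. A pair of candidates is geometrically consistent
    # if the landmark-to-landmark distance agrees across the two maps,
    # since a rigid transform preserves distances.
    consistent = []
    for (i, j), (k, l) in itertools.combinations(matches, 2):
        d_vehicle = math.dist(vehicle_pts[i], vehicle_pts[k])
        d_reference = math.dist(reference_pts[j], reference_pts[l])
        if abs(d_vehicle - d_reference) < tol:
            consistent.append(((i, j), (k, l)))
    return consistent
```

In the full method these pairwise consistencies form the edges of a graph whose densest consistent subgraph yields the final landmark associations used for alignment.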
      • 12.0613 Replacement Reality: Dreaming of the Moon While Driving on Earth
        Robert Swan (NASA Jet Propulsion Lab), Marcus Koh (Jet Propulsion Laboratory (University of California, Berkeley)), Juan Garcia Bonilla (University of Colorado Boulder), Asher Elmquist (Jet Propulsion Laboratory), Masahiro Ono (JPL), Georgios Georgakis (Jet Propulsion Laboratory), Ashish Goel (NASA Jet Propulsion Lab), Brandon Rothrock (NASA Jet Propulsion Laboratory), Michael Paton (Jet Propulsion Laboratory), Hari Nayar (NASA/JPL), Issa Nesnas (Jet Propulsion Laboratory) Presentation: Robert Swan - -
        Long-range autonomous rover traverses for future lunar missions, such as the Endurance concept, demand high-fidelity, large-scale testbeds to validate perception and localization systems under extreme polar conditions. Existing testing approaches face a critical trade-off: physical testbeds using lunar simulant may be terramechanically accurate, but are costly, hazardous, and limited in scale, while purely digital simulations can be flexibly modified, but lack the accuracy of real-world terramechanics and sensor readings. We propose a novel hybrid approach, “Replacement Reality,” which combines the rigor of physical testing with the flexibility of simulated testing. Replacement Reality integrates a real-time, photorealistic simulation into the perception pipeline of a physical rover. As the hardware traverses terrestrial environments, its onboard camera imagery is replaced with a localized synthetic lunar scene, allowing the autonomy stack to be tested against lunar-relevant scenes and simulated hazards like craters and boulders while responding to physical dynamics. The system has been developed and tested on pre-scanned terrains on the order of tens of meters in length and width (with plans to extend that to kilometers). The pre-scanned terrains have been augmented with craters, and the rover is localized within these areas using onboard visual-inertial odometry and an iterative closest point algorithm to achieve centimeter-level accuracy. The deployment included an off-board computer that rendered lunar terrain views and delivered them to the rover based on its current pose with a latency of 100–300 ms from the image request. The current development focused on rover applications over astronaut operations due to project needs, mixed reality scaling limitations, and the lower latency requirements of working with human subjects, though this approach would be relevant to future astronaut applications.
Replacement Reality was demonstrated in two field tests on a half-scale lunar rover prototype. Using pre-scanned LiDAR data in a digital twin, the system realistically modified lighting and added rocks and craters in real time. Augmented rocks and craters were either small enough not to impact mobility or completely non-traversable, preventing traversability mismatches between simulation and reality. This paper presents our findings and lessons learned, with comparisons to other prototypes we developed, establishing a methodology that provides a scalable, high-fidelity solution for testing and verifying robotic autonomy in a variety of scenarios.
  • Torrey Radcliffe (Aerospace Corporation) & Jeffery Webster (NASA / Caltech / Jet Propulsion Laboratory)
    • Dean Bucher (The Aerospace Corporation) & Daniel Selva (Texas A&M University) & Lisa May (Lockheed Martin Space)
      • 13.0102 Testing for Consistency, Completeness, and Validity in an Ontology of Functions
        Hamilton Johnson (University of Alabama, Huntsville), Mayuranath SureshKumar (University of Alabama, Huntsville), Hanumanthrao Kannan (The University of Alabama in Huntsville), Lawrence Thomas (The University of Alabama in Huntsville) Presentation: Hamilton Johnson - -
        Functional analysis is a promising area for the application of ontological methods in the domain of aerospace concept development. Increased specificity and uniformity in the development of functional architectures through a formal ontology enables more rigorous decomposition and analysis to catch inconsistencies or unwanted interactions between functions. This helps ensure a functional architecture meets the needs and requirements of the stakeholders. A Functions Ontology has been created as part of the development effort on a space systems ontology for use in modeling early, pre-phase A aerospace concepts. This ontology represents functions as input to output transformations and defines a preliminary taxonomy of types of functions, inputs, and outputs along with various axioms. This guides practitioners on how a functional architecture can be constructed, ensuring consistency and completeness. This paper demonstrates the methods used to test the Functions Ontology for consistency, completeness, and validity in the domain of aerospace systems. Tests include modeling the functional architectures of example systems to test the completeness of the ontology in the domain, automated reasoning tests for consistency, and dedicated instantiated models to confirm that the axioms correctly guide the reasoning, demonstrating fitness for purpose. Together, these test methods provide confidence that the ontology in development will provide value to system modelers during the initial stages of concept development. Directions for future development will be examined, including integrating the Functions Ontology with other ontologies representing solution space elements, and converting the workspace to take advantage of OWLReady2, a Python library which could allow for more direct integration of the ontology and test suite with other tooling and modeling processes.
      • 13.0103 Digital Architecture Strategy for the NASA Gateway Program
        Josh Sung (Booz Allen Hamilton), Cody Wheeler (Booz Allen Hamilton), Jeremiah Crane (Booz Allen Hamilton), Lillian Glaeser (Booz Allen Hamilton), Go Saito (), Crystal Haddock (NASA - Johnson Space Center) Presentation: Josh Sung - -
        The Gateway lunar space station is a complex system of systems including thousands of requirements spanning various aspects of hardware, software, and operational functionalities. Gateway development leverages contributors from several organizations throughout the National Aeronautics and Space Administration (NASA), external partners (EPs), and international partners (IPs). This organizational complexity impacts the program’s methods for data management and imposes significant challenges to implementation of traditional document-based methods for systems engineering (SE) processes. Notably, many Gateway organizations individually manage owned products and documents in tools like SharePoint, reducing accessibility of information and data integration opportunities. The Gateway Digital Architecture (GDA) organization was established to develop digital engineering (DE) and model-based systems engineering (MBSE) capabilities for the Gateway Program to address these challenges. The Gateway Program’s Vehicle Systems Integration (VSI) organization has embraced MBSE and DE for the program’s SE processes, including requirement development (RD) and test and verification (T&V) activities. This paradigm shift has led to the capture of the SE baseline in datasets and elements within a digital toolchain that enables dynamic data management, accessibility, and integration to address the needs of the Gateway Program and enterprise stakeholders. The GDA’s MBSE and DE solutions facilitate various SE analyses, including traceability analysis, functional analysis, requirements analysis, and configuration-based analysis. These analyses have resulted in various products supporting the Program’s life cycle reviews (LCRs). The GDA has evolved alongside the Gateway Program through its development life cycle to maintain alignment with the program's objectives and adapt to developing challenges and opportunities. 
Consequently, emphasis has been placed on capturing and evolving a GDA Strategy definition. The GDA Strategy ultimately aims to drive continuous development, improvement, and scalability of DE and MBSE solutions for the entire Gateway Program. Recent notable evolutions of the GDA Strategy have included development of MBSE-integrated schematics to support Gateway operational team functions, API-enabled data synchronization tools to improve digital toolchain integration, and a configuration item (CI)-based data repository to consolidate and improve accessibility of program data. This paper discusses the history of the Gateway Digital Architecture and its defined Strategy and products, describing the following: guiding principles, including the five tenets of the GDA Strategy; the MBSE approach, including framework and applications; data as a unit of currency, including synchronization, consolidation, and visualization; and the journey of digital transformation, including wins and lessons learned.
      • 13.0104 Using an MBSE Approach to Develop Jamming Capabilities for a Counter-Drone System
        Vikram Mittal (United States Military Academy) Presentation: Vikram Mittal - -
        While drones have traditionally been employed at the strategic level, recent conflicts have accelerated their use at the tactical level. This shift creates a growing need for individual soldiers to have access to counter-drone protection. These systems must not impose a significant physical or cognitive load on the soldier, and therefore must be lightweight and require limited human interaction. This study uses a model-based systems engineering approach to develop a jamming capability that enables soldiers to activate a lightweight, low-power mechanism to block a drone’s control signal. The design builds on previous work that developed a drone detection system. That system processed a signal from a software-defined radio to alert the soldier through the Android Tactical Assault Kit (ATAK), a smartphone issued to soldiers. The model was expanded to incorporate stakeholder analysis related to the jamming capability. The logical architecture was also expanded to include jamming functionality, allowing the user to initiate jamming once a drone is detected. A corresponding physical architecture was derived from the logical model, which included a 3D printed case and cabling. These models supported a trade-space analysis exploring the balance between size, weight, and power in relation to system effectiveness. The jamming system had to operate using the conformal batteries already worn by soldiers, which further constrained the design space. A prototype was developed using a Raspberry Pi, two HackRF software-defined radios, two omnidirectional antennas, a bi-directional amplifier, a 3D printed case, and the requisite cabling. The system successfully detected the 2.4 GHz video feed used by many commercial drones. It also executed noise-based jamming when commanded remotely via an Android device over a TCP connection.
Although integration with the one-watt bidirectional amplifier presented technical challenges, the core jamming functionality was effective at targeting and disrupting drone communication channels. Performance tests confirmed successful interference with 2.4 GHz control signals and video feeds. These results demonstrate the viability of a soldier-wearable, lightweight jamming system that can be integrated with existing detection tools and powered by conformal batteries. The system offers a promising approach for enhancing soldier protection against low-cost drone threats on the modern battlefield.
      • 13.0105 Evaluating Metamodel Quality of the CubeSat System Reference Model (CSRM)
        Sarah Rudder (Colorado State University Department of Systems Engineering), David Kaslow () Presentation: Sarah Rudder - -
        This paper investigates the CubeSat System Reference Model (CSRM) to objectively evaluate quality using metrics derived from the Metamodel Quality Requirements and Evaluation (MQuaRE) framework. Incorporating the CSRM into systems engineering (SE) processes can strengthen communication and collaboration across the enterprise. Providing domain-specific additions to the Systems Modeling Language (SysML), the CSRM profile establishes specialized model elements meant to facilitate mission architectures for CubeSats using a model-based systems engineering (MBSE) approach. Its development is an ongoing effort led by the International Council on Systems Engineering (INCOSE) Space Systems Working Group (SSWG), with a beta specification published by the Object Management Group (OMG). The working group aims to standardize CubeSat system design across the industry and improve transparency in product development. Domain analysis reveals that external groups affected by CubeSats include governments, regulatory organizations, space agencies, research institutions, aerospace manufacturers, launch services providers, and various end users. With such a wide variety of invested individuals, eliciting quality requirements directly has proven to be an arduous task. Instead, the Quality Model for Metamodels (QM4MM) framework and associated metrics will be used to define verifiable requirements. Evaluating these metrics will provide CSRM contributors with guidance to address weaknesses in robustness and improve the specification. By leveraging the metamodel to quantify measurements, this study supports the next iteration of the CSRM and mitigates potential quality concerns. Understanding quality metrics of the reference model will enable improvements focused in the areas of suitability, usability, modularity, and portability.
This research is anticipated to complement best practices for the greater technical community when creating new domain-specific languages (DSL) in MBSE environments.
      • 13.0106 Cataloging Patterns in Model-Based Systems Engineering
        Mark Maier (University of Utah) Presentation: Mark Maier - -
        Model-Based Systems Engineering (MBSE) is a collection of tools, techniques, and methods for carrying out systems engineering processes. The most well-known MBSE elements are the notations (usually with graphical representations), like the Systems Modeling Language (SysML). The full scope of MBSE is much more than the notations; it seeks to encompass the whole systems development process, potentially linking from specification to implementation-specific models, supply-chain control, and manufacturing. A challenge in implementing MBSE is developing an effective map or catalog of design artifacts. In the history of civil and mechanical engineering, people developed standard drawing types (e.g., floorplans, elevations, specific perspective types) as part of the processes by which a team created a concept and delivered designs. One method for identifying and organizing such elements is the “pattern,” a format originally credited to Alexander and now commonly imitated in fields as far from civil engineering as software engineering. Some work has gone into identifying patterns, or the more general “heuristic,” in systems engineering, but few of them are specific to MBSE implementation. A “pattern,” in the Alexander sense, is a specific type of heuristic: a prescriptive heuristic on form. It provides heuristic guidance on choosing a solution element (as opposed to some aspect of problem space refinement). Patterns are typically documented by giving them an overall name (something evocative), defining a specific design problem or scenario to which they apply, and then providing the suggested solution form. Typically, this is followed by a discussion of justification, variations in application, and examples. Multiple patterns are given together to cover some aspect of repetitive design problems and integrated with documentation methods or overall processes.
This approach started with the work of Alexander and has been followed in the application of pattern and pattern language concepts to other fields, such as software design. Our efforts are directed to identifying and documenting sets of MBSE patterns, primarily to facilitate MBSE education and training but also as part of organizing MBSE methods, like OOSEM. This paper describes some initial work in identifying, documenting, and using (via educational programs) MBSE patterns. We specifically describe here patterns related to: 1. Defining system context. The two- and three-part context definition diagram patterns. 2. Defining part hierarchy. The integral hierarchy pattern and the parallel category pattern (specifically the hardware-software-facility composition pattern). 3. Interface representation in layered network systems. The “stacked” pattern versus the independent diagram pattern. The three sets of patterns are part of a much larger roadmap of MBSE patterns we are identifying and documenting. The paper shows examples of each of the three, how they have been integrated into educational processes, and describes the larger roadmap of MBSE patterns.
      • 13.0107 Value Creation Patterns in Space Program Architectures
        Alexander Bühler (University of Luxembourg), M. Amin Alandihallaj (University of Luxembourg), Andreas Hein (SnT, University of Luxembourg) Presentation: Alexander Bühler - -
        Space programs face budget cuts and cancellations as their benefits may not justify their cost. In other words, their value (here: benefits minus cost) is insufficient or has not been identified (e.g., scientific gains, job creation). Defining the potential value of space programs is best addressed during their conception, i.e., the architecting phase. Space program architecting approaches from the literature do not explicitly consider the link between the system architecture and value delivery. We propose to systematically identify how value is delivered by a space program architecture, drawing on proven value delivery mechanisms. Those proven value delivery mechanisms are captured in the form of value creation patterns. Patterns capture problem-solution knowledge for a specific context. They were first introduced in architecture and later popularized in object-oriented software engineering. They were further applied in systems architecting, and recently in space systems architecting. We first develop a conceptual data model of space programs to structure organizational and technical concepts relevant to space programs, and the relationships between them. This is grounded in the ECSS Glossary and the NASA Systems Engineering Handbook. We then build a database of preliminary value creation patterns in space programs. Examples include a “dual use” pattern that was sourced from a review of the Luxembourg space sector, where the context is that the country’s space policy seeks to employ space infrastructures for the benefit of sectors other than space. The problem there is how value can be created for other sectors by using space infrastructures. Factors influencing the solution include development cost and commonality. One solution is to develop systems for dual use in space and on Earth. An example is the Luxembourg company Maana Electric, which develops ISRU appliances that can produce solar panels from sand on Earth and regolith on the Moon.
Another example is the “diffusion” pattern. The context is a country with a non-space-related industrial base. The problem is how to advance the state of the art in that industrial base while contributing to space system development. Similarly to the “dual use” pattern, a key factor is the architectural similarity between the terrestrial and space systems that are developed. The solution is to utilize the capabilities of that industrial base in the development of a space system. A historical example is the Canadian STEAR program, where the country’s robotics industrial capability was applied in the development of the ISS Mobile Servicing System. To explore many similar patterns, complementing manual search, we use a Large Language Model (LLM). This LLM is then used to semantically search through the NASA Technical Reports Server and the ESA Data Discovery Portal for patterns matching or resembling the preliminary value creation patterns. This approach precedes a trade space exploration where space program architectures are designed using patterns, given a certain definition of value that may vary from different actors’ viewpoints.
      • 13.0114 Is Simplicity Golden? A Survey of Post-Launch Adaptation in Planetary Missions and Lessons Learned
        Masahiro Ono (JPL), Richard Rieber (NASA Jet Propulsion Lab), Anthony Freeman (NASA Jet Propulsion Lab), Michel Ingham (Jet Propulsion Laboratory), David Murrow (), Chloe Gentgen (Massachusetts Institute of Technology), Daniel Selva (Texas A&M University) Presentation: Masahiro Ono - -
        The conventional wisdom in space systems engineering holds that simplicity is golden: systems should minimize complexity while meeting requirements. A simpler system is typically considered more robust and less prone to risk because it can be tested thoroughly and has fewer points of failure. While the core philosophy of this principle remains valid, we argue that the reality is more nuanced—particularly for planetary exploration missions, which face substantially greater uncertainties than Earth-orbiting missions. The principle also originated during a time when software functionality onboard spacecraft was much more limited than it is today, or will be in future space systems. We investigated 10 past and ongoing missions that encountered unexpected situations and either successfully or unsuccessfully adapted to them: Galileo, Hayabusa, EPOXI, Deep Space 1, Juno, SMAP, OSIRIS-REx, InSight, Mars 2020 Rover (Perseverance), and Ingenuity. Our study draws on a series of interviews with experts directly involved in these missions, as well as a review of relevant literature. We found that it is often departures from design minimalism—such as functional redundancy in sensing and actuation, subsystem interconnections, and onboard software flexibility—that enabled, or could have enabled, missions to adapt to anomalies and surprises. From these observations, we distilled six design principles for future planetary missions to enhance adaptability while keeping overall system complexity under control. Finally, we propose a concept of software-defined space systems (SDSSs), which is built upon the proposed design principles and can dynamically adapt physical behaviors in remote planetary environments.
      • 13.0115 The Foundations of Interplanetary Logistics: Spaceport Infrastructure for the Moon and Mars
        Wanjiku Chebet Kanjumba (University of Florida) Presentation: Wanjiku Chebet Kanjumba - -
        As humanity advances toward an interplanetary future, the development of robust spaceport infrastructure on the Moon and Mars will be pivotal in enabling sustainable exploration, interplanetary logistics, and long-term human presence beyond Earth. This paper presents a comprehensive and forward-looking framework for designing, constructing, and operationalizing extraterrestrial spaceports that serve as critical hubs for resource extraction, cargo handling, crewed missions, and transportation between Earth, the Moon, and Mars. Drawing upon insights from terrestrial logistics systems, this study integrates advancements in autonomous technologies, robotics, modular construction, and in-situ resource utilization (ISRU) to propose scalable and resilient spaceport architectures tailored to the unique environmental conditions of the Moon and Mars. Strategic site selection focuses on lunar polar regions—rich in water ice—for fuel production and life-support systems, while equatorial Martian locations such as Gale Crater and Elysium Planitia are prioritized for their solar energy potential and favourable landing conditions. To ensure coordinated, equitable, and sustainable development, the paper introduces the concept of the Interplanetary Spaceport Development Authority (ISDA)—modelled after Earth's Integrated Network for Commercial Spaceports (INCS). ISDA would serve as an international governing body responsible for overseeing spaceport siting, design, funding, regulatory compliance, and environmental stewardship on extraterrestrial bodies. Economic feasibility is explored through public-private partnerships and international collaboration models, emphasizing shared investment, risk mitigation, and knowledge transfer. Lessons learned from Earth-based frameworks like the Global Spaceport Alliance (and the proposed INCS) inform the proposal’s approach to regulatory harmonization, supply chain integration, disaster resilience, and sustainable practices. 
Finally, a step-by-step roadmap outlines the progression from pilot projects to large-scale deployment, positioning spaceport infrastructure as a cornerstone in transforming the Moon and Mars into fully operational logistics hubs. These developments will lay the foundation for humanity’s expansion into the solar system—and potentially beyond—in the distant future.
      • 13.0117 In-situ Resource Extraction for Mars Terraforming Aerosol Feedstock
        Tatsuwaki Nakagawa (University of Colorado, Boulder), Edwin Kite (University of Chicago) Presentation: Tatsuwaki Nakagawa - -
        Mars terraforming has been considered for over 50 years, attracting interest from both scientists and engineers. Most proposed schemes begin by raising the planet’s surface temperature to thicken the atmosphere and warm both the surface and the atmosphere, allowing water ice to melt. Among various approaches, aerosol-based heating offers a promising first step: dispersing particles that trap outgoing infrared radiation while transmitting incoming solar radiation. Models indicate that global mean temperatures could rise by more than 30 K, provided that million-ton-scale quantities of aerosol are deployed. Transporting such vast quantities from Earth is impractical, which motivates in-situ production on Mars. Lunar in-situ resource utilization (ISRU) technologies and system architectures have been studied extensively, but applying them on Mars poses unique challenges. Mars’s higher gravity, extreme temperature and pressure ranges, and chemically distinct soil, rock, and atmosphere mean that lunar ISRU blueprints alone are insufficient and demand additional research. Its solar irradiance is lower than on the Moon, and the power requirements for a terraforming ISRU system are greater, so the power-generation and storage systems must be redesigned. Moreover, a terraforming ISRU plant must process feedstock at rates of tens to hundreds of kilograms per second, far beyond the typical system architectures that have been studied, and operate reliably over very long lifetimes; warming Mars could take more than a decade, so durability and maintainability are critical. Together, these factors make the required ISRU technologies and system architecture a substantially more complex engineering endeavor. Five candidate aerosols combine favorable optical properties for Martian warming, including low transparency to infrared radiation, with the advantage of relying entirely on Martian natural resources.
Graphite can be synthesized from atmospheric carbon dioxide through high-temperature processing, using a method similar to the Mars Oxygen ISRU Experiment (MOXIE). Three types of metal nanorods—aluminum, magnesium, and iron—can be extracted from regolith. Depending on the metal, different extraction methods are suitable. In our analysis, we evaluated molten regolith electrolysis, carbothermal reduction, and hydrogen reduction. For magnesium specifically, we also considered extraction from magnesium sulfate-rich rocks, which occur on Mars. In addition, magnesium sulfate salt nanoparticles can be obtained using hot water extraction from these sulfate deposits. We present the first system-level comparison of these materials across excavation, conveyance, and material extraction phases, using key metrics such as energy consumption, equipment mass, consumable mass, and production throughput required to achieve the desired heating under realistic Martian conditions. This study presents an initial estimate of the system architecture required to terraform Mars using ISRU technologies. We identify potential system configurations by evaluating trade-offs among power and mass requirements, operational lifespan, and system complexity for material-feedstock production. Our findings will inform near-term technology development for the initial phase of Martian atmospheric engineering, specifically efforts to elevate the planet’s surface temperature. Additionally, this work provides the foundation for a comprehensive, system-level analysis of terraforming ISRU architectures.
      • 13.0118 Developing a SysML Model of a Legacy DoD System
        James Enos (K2 Alpha Solutions) Presentation: James Enos - -
        As organizations shift from traditional, document-based systems engineering to digital engineering, one of the major challenges is modeling legacy systems that may have multiple variants with different individual components within the same architecture. Beyond the challenge of developing a digital model for an engineered system, modeling legacy systems introduces an additional challenge: different variants of the system may be currently fielded. This paper applies model-based systems engineering (MBSE) to a legacy system within the Department of Defense (DoD) using the Systems Modeling Language (SysML). It specifically focuses on the physical aspects of the system architecture and includes both block definition diagrams and internal block diagrams. This provides an example of developing a model of a legacy system’s structure and internal connections. Additionally, the model captures different instances of the system, which accounts for some of the variants of the actual fielded system. Using an MBSE approach to transition from legacy, document-based engineering has several benefits, which the paper discusses. First, changes to any of the elements of the system are propagated throughout the entire model, making changes more visible to the entire engineering team. Second, the model allows the system developer to identify obsolete parts within the system as they develop new versions of the system, ensuring the legacy system is kept up to date. Finally, by understanding the interfaces of the current system, engineers can determine how to integrate new systems with these legacy systems. Overall, the paper provides a case study for the application of MBSE to legacy systems.
      • 13.0119 A Systems Engineering Approach to Mitigating Starship Cryogenic Boil-Off for Artemis III
        George Lordos (Massachusetts Institute of Technology), Daniel Rojas (Massachusetts Institute of Technology), Beverly Ma (Massachusetts Institute of Technology), Stone Smith (Massachusetts Institute of Technology), Pranav Bala (), Nicole Ding (), Olivier De Weck (MIT), Jeffrey Hoffman (Massachusetts Institute of Technology) Presentation: George Lordos - -
        Long-endurance cryogenic propellant storage is a critical capability for NASA’s deep space missions, particularly the Artemis III return to the Moon. However, current cryogenic propulsion systems are far short of the multi-month endurance required by the Human Landing System (HLS), threatening mission schedule, feasibility and/or crew safety. This motivates the development of advanced Cryogenic Fluid Management (CFM) systems to mitigate propellant loss due to boil-off during extended operations in orbit and on the lunar surface, throughout the extended Artemis III mission timeline. The gap in knowledge lies in how to optimally integrate passive and active CFM technologies at a systems level for a complex mission architecture and evaluate their performance and resilience against significant operational uncertainties and potential system failures. Hence, the objective of this work is to explore the design space in order to discover robust, integrated thermal management system architectures that could be implemented within a 3-5 year timeframe. This was accomplished using a model-based systems engineering approach to analyze the Artemis III mission concept of operations and simulate the performance of various CFM technologies under a variety of mission modes and off-nominal scenarios. Two key mission-level figures of merit were established: Lunar Ascent Propellant Margin (LAPM) as a proxy for crew safety, and the number of tanker flights required, as a proxy for mission cost. A parametric mission model was developed to generate and evaluate 26 alternative system architectures, combining technologies like Multi-Layer Insulation (MLI), cryocoolers with Broad Area Cooling (BAC), and a novel propellant mixing system inspired by ships’ propellers. The top three performing concepts were then subjected to rigorous stress-testing against uncertainties, including MLI degradation and cryocooler failures, to assess their robustness. 
The analysis revealed that refilling the HLS in a highly elliptical orbit (HEO) creates viable mission options. The top-performing architecture (023-MCB), a Starship V3 HLS with a hybrid system of 60-layer MLI mounted on a tank-welded network of cooling tubes, seven cryocoolers, and a passive propellant mixing system, achieved a predicted LAPM of 25.5% with 15 tanker flights. Stress-testing demonstrated this architecture's high robustness: it can withstand an MLI degradation factor (DF) up to 2.5X nominal, or the failure of six of the seven cryocoolers, and still complete the mission with a LAPM greater than the required 10% threshold. In contrast, the best passive-only architecture could only tolerate a DF up to 1.2X nominal. The proposed hybrid passive-active thermal management system provides a feasible and highly robust solution for long-endurance cryogenic storage, significantly enhancing mission assurance and crew safety for Artemis III. The findings provide a template for NASA and its partners to explore system architectures and to develop and stress-test the necessary CFM technologies. The scalable concepts developed are also applicable to future infrastructure, such as in-space propellant depots, accelerating the transformation of space operations and enabling a sustained human presence on the Moon and beyond. This work is based on an MIT project awarded the 3rd place overall award at NASA's 2026 Human Lander Challenge in Huntsville, Alabama.
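The stress-test logic described in this abstract—degrading the MLI heat leak by a factor and failing cryocoolers, then checking the propellant margin against a threshold—can be sketched in a few lines. The following is an illustrative model only, not the authors' parametric mission model; every numeric default (heat leak, cooler lift, tank sizes, mission duration) is a hypothetical placeholder, not a figure from the paper.

```python
# Illustrative sketch of a boil-off margin stress test. Not the authors'
# model; all default parameter values are hypothetical placeholders.

def ascent_propellant_margin(
    propellant_t=1500.0,            # loaded propellant, tonnes (hypothetical)
    ascent_need_t=1100.0,           # propellant needed for lunar ascent, tonnes
    passive_heat_w=900.0,           # nominal heat leak through MLI, watts
    mli_degradation=1.0,            # DF: multiplier on the passive heat leak
    coolers_working=7,              # number of functioning cryocoolers
    cooler_lift_w=120.0,            # heat lift per cryocooler, watts
    mission_days=100.0,             # loiter duration with propellant aboard
    heat_of_vap_kj_per_kg=450.0,    # effective heat of vaporization
):
    """Return a LAPM-style margin: (propellant left - ascent need) / need."""
    # Net heat reaching the propellant after active cooling (floored at zero).
    net_heat_w = max(passive_heat_w * mli_degradation
                     - coolers_working * cooler_lift_w, 0.0)
    # Energy absorbed over the mission converts directly to boiled-off mass.
    boiloff_kg = net_heat_w * mission_days * 86400 / (heat_of_vap_kj_per_kg * 1e3)
    remaining_t = propellant_t - boiloff_kg / 1000.0
    return (remaining_t - ascent_need_t) / ascent_need_t
```

With these placeholder numbers, sweeping `mli_degradation` or `coolers_working` and checking `margin > 0.10` reproduces the shape of the stress-testing argument: a hybrid system holds margin under degradation that a passive-only configuration (set `coolers_working=0`) cannot.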
    • Joshua Calkins (Ensign-Bickford Aerospace & Defense (EBAD)) & Charlene Ung (NASA Jet Propulsion Lab)
      • 13.0202 An Uncertainty-Aware Provenance Framework for Enhanced Traceability in Engineering Systems
        Deoclecio Valente (DLR), Andreas Schäfer (German Aerospace Center - DLR), Elif Tasdemir (), Robert Hoppe (), Oliver Bertram (DLR) Presentation: Deoclecio Valente - -
        In the design and analysis of complex aerospace systems, maintaining rigorous traceability of design decisions, data transformations, and modeling assumptions is essential for ensuring system integrity, enabling certification, and supporting lifecycle assurance. Data provenance—defined as structured documentation of data sources, transformation processes, responsible agents, and applied tools—has emerged as a key enabler of transparency and auditability across a wide range of engineering workflows. Formal standards such as the W3C PROV-O ontology provide the foundation for representing these relationships in a consistent and machine-interpretable format. However, despite their utility, most existing provenance frameworks fall short in capturing and managing uncertainties—particularly when they arise from simulations, sensor measurements, or engineering assumptions characterized by incompleteness, variability, or evolving probabilistic definitions. This limitation presents a significant barrier to robust traceability in high-stakes contexts, where confidence in model-based reasoning is critical. Although broadly relevant across engineering domains, this challenge is especially pronounced in Model-Based Systems Engineering (MBSE), where digital models are treated as authoritative throughout the system lifecycle and traceability under uncertainty is paramount. To address this gap, we propose a unified, extensible framework that integrates uncertainty quantification (UQ) with structured provenance modelling. The framework systematically combines quantitative analysis with expert judgment and validated observational data, enabling the rigorous tracking, classification, and reduction of uncertainty across engineering workflows. Each source of uncertainty—whether originating in data, models, or assumptions—is traced to its origin and ranked by its impact on system-level outcomes via sensitivity analysis.
Parameters with negligible influence are pruned to simplify the provenance model and prioritize critical drivers. The refined set of uncertainties is then propagated through the system, with each stage—from identification and transformation to reduction and verification—captured within an extended, machine-interpretable provenance graph. This graph not only links inputs to outputs but also maps assumptions to decisions, quantifies their influence, and embeds metadata to support interpretability. In particular, it provides a transparent and auditable record of the decision-making process, allowing engineers to understand not only what was done, but also why, by whom, under what confidence level, and with what implications—thereby advancing traceability, accountability, and systems assurance throughout the lifecycle. As a demonstration of the applicability and effectiveness of the proposed framework, an electromechanical actuator (EMA) model is presented as a use case, since it is a representative aerospace subsystem that integrates mechanical, electrical, and control components. This illustrates how the framework enhances end-to-end traceability, reduces modeling overhead, and promotes more transparent, auditable, and risk-informed design practices. The current simulations yield a robust design with reduced computational time. While the EMA is situated within the aerospace domain, the framework could be generalized to other safety-critical and interdisciplinary contexts where uncertainty is prevalent. The proposed provenance-aware workflows expand the current traceability paradigm, equipping decision-makers with a verifiable and actionable account of how uncertainties are identified, propagated, and mitigated—supporting robust model validation, regulatory compliance, and long-term accountability across the system lifecycle.
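The pruning step described above—rank uncertain inputs by their impact on system-level outcomes via sensitivity analysis, then drop those with negligible influence—can be sketched with a simple one-at-a-time variance measure. This is an illustrative stand-in, not the authors' framework; the sampling scheme, model, and threshold are all assumptions chosen for brevity.

```python
# Illustrative sketch of sensitivity-based pruning of uncertain inputs.
# Not the authors' method: a one-at-a-time variance contribution is used
# here as a simple stand-in for their sensitivity analysis.
import random
import statistics

def prune_uncertain_inputs(model, nominal, spreads, threshold=0.01, n=200, seed=0):
    """Rank inputs by their share of output variance; keep those above threshold.

    model:    callable taking a dict of input values and returning a scalar
    nominal:  dict of nominal input values
    spreads:  dict of standard deviations per input
    """
    rng = random.Random(seed)
    contrib = {}
    for name in nominal:
        samples = []
        for _ in range(n):
            x = dict(nominal)
            x[name] = rng.gauss(nominal[name], spreads[name])  # vary one input
            samples.append(model(x))
        contrib[name] = statistics.variance(samples)
    total = sum(contrib.values()) or 1.0
    ranked = sorted(contrib.items(), key=lambda kv: kv[1], reverse=True)
    # Keep only inputs whose variance share exceeds the pruning threshold.
    return [(name, var / total) for name, var in ranked if var / total > threshold]
```

For a toy model `3*a + 0.1*b` with equal input spreads, `b` contributes roughly 0.1% of output variance and is pruned, while `a` survives—the same "prioritize critical drivers" behavior the abstract describes.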
      • 13.0204 An Engineering Life-Cycle Assurance Process for Autonomous Space Systems
        Alessandro Pinto (NASA Jet Propulsion Laboratory), Caleb Wagner (General Astronautics), Klaus Havelund (Jet Propulsion Laboratory), Nicolas Rouquette (JPL) Presentation: Alessandro Pinto - -
        The increasing complexity of autonomous space systems, coupled with the desire to grow the space economy, necessitates standardized development practices that guarantee an assurance level commensurate with the high stakes of space missions, including cost, schedule, and potential loss of human life. Several standards have been published over the past decade, with some focusing specifically on autonomous driving systems on Earth. These standards advocate for an early analysis phase of the design specification, but they do not provide sufficient insights on how to define, decompose, and trace requirements -- which is key in high-assurance systems. The autonomous driving industry relies on a large set of simulation scenarios and actual miles driven as evidence in their assurance argument. However, space missions must obey stricter safety requirements, and operational data might not be available, which suggests the need for a model-based, requirement-driven, and formal approach to design, test, and evaluation. This paper presents an engineering life-cycle approach to designing autonomous systems, and expands on two key elements: (1) the definition and decomposition of requirements and (2) the verification and validation of requirements through automatic test generation. We introduce a standard decomposition of an autonomous system into levels and layers, and we model its components using system- and control-theoretic methods. We propose a practical way to define executable specifications that can generate behaviors. Then, we present an automated test generation framework that comprises a monitoring system, several search and optimization algorithms, and a comprehensive data analysis framework. Finally, we show how the approach has been applied to a prototypical lunar rover mission. Acknowledgments. 
The research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. Copyright 2025. All rights reserved.
      • 13.0206 IMAP Safety & Mission Assurance Road to Launch & Mission Success
        Christina Collura (), Anna Kristine Shin (Johns Hopkins University/Applied Physics Laboratory), Elfriede Dustin (Johns Hopkins University/Applied Physics Laboratory), Grant Miller () Presentation: Christina Collura - -
        The Interstellar Mapping and Acceleration Probe (IMAP) mission investigates two of the most important issues in space physics today — the acceleration of energetic particles and interaction of the solar wind with the interstellar medium. This revolutionary mission includes a suite of ten instruments that together resolve fundamental scientific questions. These instruments are provided by nine institutions globally, hosted on an observatory structure integrated by the Johns Hopkins Applied Physics Lab (JHU/APL). IMAP is currently in the thick of its launch campaign, finalizing system testing and working towards a launch date in September of 2025. IMAP will launch on a SpaceX Falcon 9 along with NASA’s Carruthers Geocorona Observatory and the National Oceanic and Atmospheric Administration’s (NOAA) Space Weather Follow On L1 satellite. Over the course of the mission, the IMAP Safety and Mission Assurance (SMA) team has defined and implemented robust processes to ensure that requirements are met, risks are adequately captured, and, ultimately, the mission succeeds. The launch environment is a unique time for SMA, as hardware, testing, teams, and cost and schedule constraints converge. With respect to the launch campaign itself, and from a hardware and software quality perspective, this paper will present non-conformance management strategies and challenges, along with methods for tracking and controlling final configuration. From a mission systems assurance perspective, we will discuss the SMA role in risk management and acceptance of risk ahead of asserting launch readiness, the role of mission safety through the launch readiness process, and a retrospective on SMA in an integrated operational environment with multiple rideshares. Lastly, the paper will conclude with lessons learned, reflecting on perceived lessons and risks from earlier stages of the mission and providing summary recommendations for future missions.
    • Stephen Shinn (NASA - Headquarters) & Eric Mahr (The Aerospace Corporation)
      • 13.0301 Schedule Acceleration, Benefits, and Risks: Optimization Using a Forensic Approach
        Patrick Malone (Systems Planning and Analysis, Inc.) Presentation: Patrick Malone - -
        Program schedules are central to managing complex defense acquisitions, yet delays remain a persistent challenge. The Government Accountability Office, in its 2024 Weapons Systems Annual Assessments report, found that “DoD is Not Yet Well-Positioned to Field Systems with Speed,” citing widespread delays in fielding capability across Major Defense Acquisition Programs (MDAPs), also known as Major Capability Acquisitions (MCAs), and Middle Tier of Acquisition (MTA) programs, leading to continued cost and schedule growth. For example, an MTA tenet is to deploy or field a capability within five years after program start. Of the 25 MTA programs evaluated in the report, 60% experienced schedule delays. Furthermore, MDAPs have continued to experience schedule growth, with the average time to deliver capability increasing to eleven (11) years from eight (8) years. This is alarming as technology and threats are changing at a rapid pace. To support delay mitigation, more schedule rigor is needed. The methods presented can support acceleration and more effective program execution to field capability more quickly. This paper explores methods for proactive schedule acceleration, focusing on the application of forensic schedule analysis methods to identify opportunities for optimization and to mitigate potential delays. The research addresses the common issues of optimistic scheduling, inadequate risk capture, and resource constraints that contribute to project delays, as highlighted in the 2024 GAO report. To support robust modeling and analysis, several methods are addressed. These include Contemporaneous Period Analysis, Fragnet (select network) modeling, and Critical Path Analysis (the longest path in the schedule). Other acceleration methods to be investigated include updating schedule logic, combining common groups of tasks, and reducing scope. The initial hypothesis posits that logic changes can significantly accelerate schedules with minimal impact on risk and cost.
In support of the hypothesis, trade-offs associated with numerous acceleration strategies are evaluated. To illustrate practical implementation of these methods, the paper presents example scenarios based on simulated project data from historical aerospace programs, demonstrating how these elements can be used for predictive analysis. The results highlight the best approach (time savings) for achieving schedule acceleration while minimizing risk exposure and cost growth. Using a repeatable process, the most promising methods are highlighted, resulting in actionable schedule management activities to meet critical capability fielding milestones. The paper concludes with a summary of findings and an approach to providing realistic performance metrics for measuring schedule progress (e.g., earned-value measures such as the Schedule Performance Index (SPI), To-Complete Performance Index (TCPI), and Critical Path Length Index (CPLI)). Future research will include a more in-depth look at less commonly applied methods, such as multiple-network analysis optimization, and a comparative analysis of different forensic scheduling methods. This research aims to provide program managers and stakeholders with a repeatable and structured framework for proactively addressing schedule challenges and optimizing project outcomes. The results are expected to provide program managers with the tools and techniques to reduce project delays, typically by 10% to 20%. This can provide significant cost savings and faster deployment of critical capabilities.
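Critical Path Analysis, which the abstract defines as finding the longest path in the schedule network, can be sketched as a longest-path computation over an activity-on-node graph. The task names and durations below are hypothetical, and this is a minimal sketch of the textbook technique, not the forensic tooling the paper evaluates.

```python
# Illustrative sketch of Critical Path Analysis: the longest path through an
# activity-on-node schedule network. Task names/durations are hypothetical.

def critical_path(durations, predecessors):
    """Return (project_duration, critical_tasks) for an acyclic task network.

    durations:    dict mapping task name -> duration
    predecessors: dict mapping task name -> list of predecessor task names
    """
    finish, came_from = {}, {}

    def earliest_finish(task):
        # Memoized recursion: a task starts when its latest predecessor finishes.
        if task in finish:
            return finish[task]
        preds = predecessors.get(task, [])
        latest = max(preds, key=earliest_finish, default=None)
        start = earliest_finish(latest) if latest else 0.0
        finish[task] = start + durations[task]
        came_from[task] = latest
        return finish[task]

    # The project ends with the task that finishes last.
    end = max(durations, key=earliest_finish)
    tasks, t = [], end
    while t:                      # walk back along the driving predecessors
        tasks.append(t)
        t = came_from[t]
    return finish[end], list(reversed(tasks))
```

For example, with tasks A (3), B (2), C (4, after A and B), and D (1, after C), the critical path is A → C → D with a total duration of 8: B has float and is where acceleration effort would be wasted, which is exactly why the longest path matters for the acceleration trades the paper discusses.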
      • 13.0302 Impact of Technology and Methodology on Performance Metrics in Complex Aerospace Project Management
        Florent Nogueira (UQAR), Érika Souza de Melo (Université de Sherbrooke ), Tian Zeng () Presentation: Florent Nogueira - -
        Nowadays, successfully completing a strategic project is essential to ensuring an organization’s survival. This holds true for most companies whose goal is to sustain their operations and expand their market. The aerospace and aeronautical manufacturing sectors are undergoing a profound transformation driven by the accelerated integration of digital tools and innovative technologies. These changes are reshaping traditional project management practices, especially in the context of complex engineering projects. IT tools, planning management software, 3D printing, computer simulation, and AI are examples of technologies that reduce the need for human and capital resources while simultaneously increasing design efficiency. This research focuses on the relationship between the use of various technologies in project management (especially R&D, design, and continuous improvement projects), the methodologies employed, and the performance indicators that measure and control the factors defining project success. This research adopts a qualitative approach based on semi-structured interviews with 25 professionals from the aerospace and aeronautical sectors across Canada, the United States, and France. The participants include project managers, engineers, and digital transformation specialists from leading organizations in civil aviation, major aerospace firms, and key systems integrators. The data were analyzed using thematic coding to identify recurring patterns and insights. Core topics of the discussions included the use of digital technologies, the application of project management methodologies (traditional, agile, or hybrid), and the perceived impact of these elements on project performance. These themes served as the analytical backbone of the study and guided the interpretation of results.
The findings indicate that the adoption of digital tools positively correlates with project performance - particularly in terms of schedule adherence, cost control, and risk mitigation - when such tools are embedded within a coherent project governance structure. Metrics such as the Schedule Performance Index (SPI) and Cost Performance Index (CPI) were commonly used to monitor progress. However, advanced technologies like virtual reality and digital twins are not yet widely deployed across the sector. In contrast, solutions such as 3D printing, computational simulation, and project management software are more broadly adopted and integrated into daily operations. Artificial intelligence (AI) is an emerging trend, showing strong potential, yet its adoption remains constrained due to concerns over data sensitivity and a frequent reliance on in-house development. Moreover, the study reveals that the most effective outcomes are observed when organizations adopt a hybrid project management methodology - blending agile and traditional approaches - combined with simple, well-integrated digital tools tailored to the project context. This study contributes to a better understanding of the interplay between digital transformation and engineering project performance. It underscores the importance of adopting a systemic and strategic perspective when implementing digital solutions. The results offer actionable insights for decision-makers aiming to align technology investments with performance objectives. By shedding light on the enabling and limiting conditions of digital integration, this research helps bridge the gap between technological promise and practical impact in aerospace project environments.
      • 13.0303 Beyond the Bid: Unpleasant Surprises in Firm Fixed Price Spacecraft Contracts on NASA Missions
        Rachel Sholder (Johns Hopkins University Applied Physics Lab (JHU/APL)) Presentation: Rachel Sholder - -
        Firm Fixed Price (FFP) contracts are well-suited for missions with standardized buses and stable, well-defined payloads. However, for complex missions with bespoke spacecraft, evolving payloads, or immature technologies, FFP contracts may lead to higher overall costs compared to Cost Plus Fixed Fee (CPFF) contracts. This paper explores hypotheses for why this cost inversion occurs, presents preliminary data to support these claims, and proposes a framework for discerning mission attributes that indicate whether a FFP or a CPFF contract is expected to yield the best cost, schedule, and technical outcome. This analysis is especially timely in light of the recent surge of commercially available spacecraft solutions, heightening the need to align contract structure with mission complexity to avoid unintended financial consequences.
    • Rob Stevens (Aerospace Corporation) & Alfred Nash (Jet Propulsion Laboratory)
      • 13.0401 MBSE in Space Mission Concept Development
        Debarati Chattopadhyay (Johns Hopkins University/Applied Physics Laboratory), Maxwell Harrow (JHU-APL) Presentation: Debarati Chattopadhyay - -
        In recent years, Model-Based Systems Engineering (MBSE) has been in the spotlight as a powerful methodology for defining and designing highly complex aerospace systems. MBSE facilitates a systems engineer’s ability to create relationships between elements of a system’s design and the underlying analytical models, and simplifies the process of creating diagrams to communicate the system design to stakeholders. This paper presents an application of MBSE techniques to address interdependencies in a system design to improve system budgets, trade space analysis, and traceability through the system from stakeholder needs to performance metrics. In space mission design, interdependencies exist between all subsystems/components in a system. Some interdependencies are direct (such as mass affecting propulsion needs), and others are indirect (such as power affecting solar array mass and therefore propulsion needs). The conceptual design of space missions thus involves collaborative efforts from system and subsystem engineers, conducting trades and analysis to develop valid designs. However, traditional approaches often result in outdated subsystem evaluations due to evolving interdependencies. To resolve this, we propose an MBSE framework, ACEBox, which integrates subsystem analytical models with a unified system-level model. By providing a digital thread that exchanges information between a SysML Systems model and an array of spacecraft subsystem analysis tools, ACEBox enables digital capture of design changes that propagate through the system. This approach enables rapid, parameterized analysis of design alternatives, dynamically sizing systems based on predefined rules and rollups for critical budgets like the Master Equipment List (MEL), the Power Equipment List (PEL), the Data Budget, and the Link Budget. 
The single-source-of-truth MBSE model enables the systems engineer to generate block diagrams (hierarchical and internal), showing the connections and interfaces between components linked to the architecture used for budget generation. To validate this approach, a case study is presented showcasing the performance of spacecraft design decisions against a set of mission requirements. This demonstrates the agility of using MBSE methods across a set of digitally integrated engineering design and analysis tools for system concept development, concurrently saving time and improving system performance.
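The rollup behavior described for ACEBox—a change to any component propagating automatically into system-level budgets such as the Master Equipment List (MEL)—can be sketched minimally. This is an illustrative example of that pattern, not ACEBox itself; the component names, masses, and contingency values are hypothetical.

```python
# Illustrative sketch of an MEL-style mass rollup, the kind of budget that a
# digital thread keeps current as components change. Not ACEBox itself;
# all names and values below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    cbe_mass_kg: float           # current best estimate (CBE) mass
    contingency: float = 0.30    # mass growth allowance, as a fraction

    @property
    def mev_mass_kg(self) -> float:
        # Maximum expected value = CBE plus contingency.
        return self.cbe_mass_kg * (1 + self.contingency)

@dataclass
class Subsystem:
    name: str
    components: list = field(default_factory=list)

    @property
    def mev_mass_kg(self) -> float:
        return sum(c.mev_mass_kg for c in self.components)

def mel_rollup(subsystems) -> float:
    """System-level MEV mass: an edit to any component flows up automatically."""
    return sum(s.mev_mass_kg for s in subsystems)
```

Because the budget is computed from the components rather than copied into a document, editing one `Component` and re-running `mel_rollup` immediately refreshes the system total, which is the interdependency-tracking benefit the abstract attributes to the digital thread.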
      • 13.0402 Methods for Improving User Needs Incorporation in Conceptual Design Phases of Systems Engineering
        Matteo Manieri (Telespazio Belgium SRL), Alessandra Menicucci (Delft University of Technology), Nicola De Quattro (Telespazio SpA), Filippo Iodice (Uptoearth GmbH) Presentation: Matteo Manieri - -
        The New Space economy, characterized by financial models increasingly driven by private investment and paying end-users, stresses the importance of accurately identifying and fulfilling user needs. The success of Earth observation mission design critically depends on the effective collection and translation of user needs into mission and system requirements. Traditional Systems Engineering practices, however, often lack effective and efficient mechanisms to address this during the early phases of mission development. This study explores how novel methods for user needs collection and transformation can be integrated into the Systems Engineering framework to enhance the formulation of mission requirements in the conceptual design phase. Innovative methodologies for improving user needs identification and transformation, drawn from both within and outside the space sector, were identified and systematically assessed. Additionally, three previously undescribed methods for mission requirements generation were developed based on practices currently employed by New Space organizations. A total of 32 methodologies, characterized by strong user integration, were identified and further analyzed for their suitability for application in the space domain. Through a trade-off analysis involving 11 experts and 20 evaluation criteria, the two most promising methods were identified: Iterative Prototyping (a newly developed method) and Design Thinking. These two methods were subsequently adapted to facilitate their integration into the Systems Engineering process, ensuring their outputs align with the required process inputs and maintain validation and verification traceability. Next, their performance was evaluated using a real-world Earth Observation use case focused on seagrass monitoring.
This case involved a user group comprising European domain experts on climate and maritime anthropogenic impacts, who generated mission requirements based on the two selected methodologies. This study reveals that numerous methodologies exist to improve user engagement, yet few address the transformation of needs into formal requirements. Traditional Systems Engineering does not sufficiently engage users in its front-end practices, as highlighted by the results of the trade-off analysis. Both selected methodologies, Iterative Prototyping and Design Thinking, allow for enhanced early-stage user engagement, and this was found to hold when applied to both New Space and traditional space Earth observation mission design. Iterative Prototyping supports continuous feedback loops and validation, and facilitates the derivation of mission-level requirements. By contrast, Design Thinking effectively frames user problems and promotes inclusive dialogue but lacks a traceable path to technical specifications and mission requirements. This study finds that spatial and temporal mission requirements can be reliably derived using Iterative Prototyping. Spectral and radiometric requirements, however, still depend on conventional techniques and expert judgment. In conclusion, the study demonstrated that the use of novel methods, such as Iterative Prototyping and Design Thinking, applied in the early phases of the conceptual design process enhances the formulation of mission requirements by aligning them more closely with user needs.
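A trade-off analysis of the kind described above is commonly implemented as a weighted scoring matrix. The sketch below is purely illustrative — the criteria, weights, and scores are invented, not the study's actual 20-criterion, 11-expert data.

```python
def weighted_score(scores, weights):
    """Weighted-sum score for one method across all criteria."""
    assert set(scores) == set(weights), "criteria must match weights"
    return sum(scores[c] * weights[c] for c in scores)

# Invented criteria and weights (fractions summing to 1.0)
weights = {"user_engagement": 0.40, "traceability": 0.35, "effort": 0.25}

# Invented expert scores on a 1-5 scale per method
methods = {
    "Iterative Prototyping": {"user_engagement": 5, "traceability": 4, "effort": 3},
    "Design Thinking":       {"user_engagement": 5, "traceability": 2, "effort": 4},
}

# Rank methods best-first by weighted score
ranked = sorted(methods, key=lambda m: weighted_score(methods[m], weights),
                reverse=True)
```

With real data, each score would typically be an aggregate (e.g., a mean or median) over the expert panel rather than a single number.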
      • 13.0407 A Generative Modeling Approach to Resource-Efficient Early Mission Concept Formulation
        Alfred Nash (Jet Propulsion Laboratory) Presentation: Alfred Nash - -
        Projects in NASA’s Pre-Phase A aim to define a mission concept that is aligned with program goals, technically feasible, and affordable enough to merit continued development. However, this work is conducted under tight constraints—limited time, budget, and workforce—despite the need to generate, assess, and compare a wide range of mission concepts. Without structured tools to guide this process, early concept formulation can become inefficient, increasing uncertainty and risk in later phases of mission development. Traditional concept formulation relies on discriminative tools—models that use features of a mission concept (e.g., mass, power) to predict target outcomes such as financial viability (i.e., cost). While effective for evaluating specific designs, these tools do not efficiently support the generation, assessment, or comparison of a broad range of alternatives. Generative models are a promising alternative for overcoming these limitations. Rather than predicting outcomes from features, generative models identify the accessible design space of features from the desired target outcomes, such as technical feasibility and financial viability. By guiding concept formulation toward exploration within an informed population rather than brute-force or random trial-and-error search, generative models significantly reduce the time, labor, and funding required to identify selectable mission concepts. This paper presents the development and application of generative models at NASA’s Jet Propulsion Laboratory to support early-stage mission concept formulation. These results demonstrate the potential of generative models to transform early mission formulation into a more strategic, resource-efficient, and selection-ready process.
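The contrast between discriminative and generative use of a model can be shown with a toy sketch: instead of scoring one design at a time, sample the feature space and keep only candidates whose predicted outcomes meet the targets, yielding an "informed population" to explore. The cost model, feasibility rule, and parameter bounds below are invented for illustration, not JPL's models.

```python
import random

random.seed(7)  # deterministic for the example

def predicted_cost_musd(mass_kg, power_w):
    # Stand-in discriminative cost model (invented coefficients)
    return 0.5 * mass_kg + 0.1 * power_w

def feasible(mass_kg, power_w):
    # Stand-in feasibility rule (invented): enough power per unit mass
    return power_w >= 0.8 * mass_kg

def informed_population(n, cost_cap_musd):
    """Rejection-sample n designs conditioned on the target outcomes."""
    keep = []
    while len(keep) < n:
        m = random.uniform(100.0, 500.0)   # mass bounds, invented
        p = random.uniform(100.0, 600.0)   # power bounds, invented
        if feasible(m, p) and predicted_cost_musd(m, p) <= cost_cap_musd:
            keep.append((m, p))
    return keep

designs = informed_population(50, cost_cap_musd=200.0)
```

A learned generative model would sample this conditioned population directly rather than by rejection, but the output — a feasible, affordable design space rather than a single point estimate — is the same in spirit.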
    • Gregory Falco (Cornell University ) & Virgil Adumitroaie (Jet Propulsion Laboratory)
      • 13.0503 Verification of SpaceAGORA.jl 6-DOF Dynamics Using University of Michigan SmallSats Telemetry
        Evan Yu (), James Cutler (University of Michigan), Giusy Falcone (University of Michigan) Presentation: Evan Yu - -
        The primary goal of this work is to establish a clear verification of the six-degree-of-freedom (6-DOF) dynamics simulation implemented within SpaceAGORA.jl. This is achieved by replaying and matching spacecraft trajectories and attitude-rate telemetry obtained from three University of Michigan missions: CYGNSS, GRIFEX, and RAX-2. Due to the restrictions associated with detailed 6-DOF flight data from government missions, the university spacecraft telemetry serves as the exclusive practical source of publicly accessible flight validation data, explicitly addressing the data availability gap prevalent in space flight dynamics validation. Many past studies for atmospheric flight, specifically aerobraking, aerocapture, and entry, use either lower-fidelity models for trajectory propagation or restricted tools such as POST2 or Genesis. SpaceAGORA.jl is a high-fidelity, open-source spacecraft trajectory and dynamics simulator developed in Julia, succeeding the Python-based Aerobraking Trajectory Simulator (ABTS). It provides the space community with an accessible tool that avoids the limitations of proprietary government or costly commercial software. SpaceAGORA.jl integrates comprehensive physical modeling, including atmospheric density derived from the Global Reference Atmospheric Model (GRAM) suite, high-degree-and-order gravitational harmonics via the NASA SPICE toolkit, solar radiation pressure, and a modular spacecraft attitude modeling capability accommodating multiple rigid bodies, reaction wheels, and reaction control thrusters. While the translational (3-DOF) dynamics of SpaceAGORA.jl have previously been benchmarked against historic aerobraking mission data, such as Mars Odyssey and Venus Express, full 6-DOF validation, including coupled translational and rotational dynamics, remains challenging due to the lack of openly available high-resolution attitude flight data from atmospheric and low-Earth-orbit conditions.
In this study, telemetry from CYGNSS, GRIFEX, and RAX-2 is leveraged. CYGNSS, a constellation of SmallSats designed to monitor tropical storm conditions, is equipped with star trackers operating at 4 Hz; GRIFEX, a CubeSat mission designed to test a new atmospheric monitoring technology, employs sun sensors and magnetometers at 1 Hz; RAX-2, a CubeSat mission designed to study plasma formation in the ionosphere, delivers high-rate (50 Hz) inertial measurement unit (IMU) data. For each spacecraft, the original orbital ephemeris, vehicle mass properties, and environmental profiles are recreated. This replication includes atmospheric density conditions, gravitational influences, and initial attitude states as closely aligned to flight conditions as telemetry allows. To quantify validation fidelity, comparison metrics, including spacecraft body-rate envelopes, quaternion trajectory deviations, and dynamic-pressure profiles, are defined. These results lay the foundation for further investigations into advanced guidance and control strategies for atmospheric flight missions.
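One of the comparison metrics named above, quaternion trajectory deviation, is commonly reduced to the rotation angle of the error quaternion between simulated and telemetered attitudes. A minimal sketch, assuming scalar-first unit quaternions (the paper's convention is not specified here):

```python
import math

def quat_mul(a, b):
    """Hamilton product of scalar-first quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def quat_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def attitude_error_deg(q_sim, q_tlm):
    """Rotation angle of q_err = q_sim * conj(q_tlm), in degrees."""
    w = quat_mul(q_sim, quat_conj(q_tlm))[0]
    w = max(-1.0, min(1.0, abs(w)))   # |w| handles the quaternion double cover
    return math.degrees(2.0 * math.acos(w))

# Example: a 10-degree rotation about z compared against identity
q10 = (math.cos(math.radians(5)), 0.0, 0.0, math.sin(math.radians(5)))
err = attitude_error_deg(q10, (1.0, 0.0, 0.0, 0.0))
```

Evaluating this angle along the whole replayed trajectory gives the deviation envelope used to quantify 6-DOF agreement.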
      • 13.0504 Test-bed Development for Sensor Fusion Application for Rendezvous and Proximity Operations
        Cristobal Garrido (University of Southern California), David Barnhart (USC ISI/SERC) Presentation: Cristobal Garrido - -
        Rendezvous and Proximity Operations (RPO) remains a limited technique for enabling more than one Resident Space Object (RSO) to perform a common task. However, RPO can enable several activities in space, such as in-space servicing, maintenance, manufacturing, assembly, and upgrading. Currently, most in-space RPO demonstrations are performed under highly controlled conditions, making the procedures too rigid to apply outside those conditions. Thus, every RPO schema is unique and can only be applied to a very specific operating condition. Furthermore, space systems have historically been built as independent devices performing their tasks and communicating with the Earth. Over the last decades, this approach has changed with satellite constellations, scientific missions involving more than one spacecraft, and space stations. The sector's next step is massive communication mega-constellations and small satellites to increase Earth coverage. The main impact of this is a dramatic increase in the number of RSOs, which implies an increase in the probability of collisions and in the demand for commercially reusable orbital positions. RPO can mitigate those effects by removing debris and extending the useful life of current RSOs. This combination of opportunities and mitigations for growing problems makes RPO an essential technique to study. Unfortunately, in-space RPO technical demonstrations are quite limited due to their cost, risk, and the difficulties of operating and testing in space. These conditions motivate the development of a testing platform that enables the design of RPO procedures focused on flexibility and low cost. This work presents a ground testbed that emulates some aspects of the environmental conditions present during RPOs, in particular the lighting conditions. The goal is to study the performance of several sensors, and the fusion of their information, against model deviations in order to achieve environment-insensitive RPO.
Our testbed consists of a dark space with different light sources that emulate the sun's illumination during the orbit of the RSOs. The lighting conditions and their interaction with the object's pose affect the opto-thermal sensors' response during the RPO. In addition, we consider a sensor suite composed of an optical camera, a thermal camera, and an active-light depth sensor. Depending on the illumination conditions and the relative distance, the performance of these sensors differs, so they complement each other. The testbed models the RPO by assuming the Clohessy-Wiltshire equations under different sources of perturbation (Earth oblateness, drag, etc.) and sensing/information uncertainty. The experiment consists of emulating an RPO procedure under environmental changes and studying the sensing uncertainty during the procedure. The data fusion algorithm consists of preprocessed sensor fusion and pixel-to-pixel sensor fusion for object detection and classification. The RPO model consists of the CW-equation system under different perturbations and uncertainties. This work describes the setup for the testbed, the sensor fusion suite, the data fusion procedure, and some preliminary results of the experiments. The final goal is to define the change in RPO procedures due to the use of multi-sensor implementation and information fusion techniques against the propagation of the RPO model's uncertainty.
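The unperturbed Clohessy-Wiltshire equations underpinning the RPO model have a well-known closed-form solution. A minimal sketch (radial x, along-track y, cross-track z; n is the chief's mean motion); the perturbations and sensing uncertainty used on the testbed are omitted here:

```python
import math

def cw_propagate(state0, n, t):
    """Closed-form Clohessy-Wiltshire propagation of relative motion.

    state0 = (x0, y0, z0, vx0, vy0, vz0) in the chief's LVLH frame,
    n = chief mean motion [rad/s], t = elapsed time [s].
    """
    x0, y0, z0, vx0, vy0, vz0 = state0
    s, c = math.sin(n * t), math.cos(n * t)
    x  = (4 - 3*c)*x0 + (s/n)*vx0 + (2/n)*(1 - c)*vy0
    y  = 6*(s - n*t)*x0 + y0 + (2/n)*(c - 1)*vx0 + ((4*s - 3*n*t)/n)*vy0
    z  = z0*c + (vz0/n)*s
    vx = 3*n*s*x0 + c*vx0 + 2*s*vy0
    vy = 6*n*(c - 1)*x0 - 2*s*vx0 + (4*c - 3)*vy0
    vz = -z0*n*s + vz0*c
    return (x, y, z, vx, vy, vz)

n = 0.0011  # rad/s, a rough LEO mean motion (illustrative value)
state = cw_propagate((100.0, 0.0, 50.0, 0.0, 0.0, 0.0), n, 0.0)
```

On the testbed side, perturbation terms (oblateness, drag) would be added to a numerically integrated version of these dynamics rather than the closed form.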
      • 13.0506 Graceful Degradation of Spacecraft Capabilities with Simulated Component Failure
        Josef Biberstein (Massachusetts Institute of Technology), Sertac Karaman (none), Olivier De Weck (MIT), David Sternberg (NASA Jet Propulsion Laboratory), Lorraine Fesq (Jet Propulsion Laboratory), Ksenia Kolcio (Okean Solutions, Inc), Maurice Prather (Okean Solutions, Inc.) Presentation: Josef Biberstein - -
        Development of fault detection, identification, and recovery systems for space missions requires engineers to consider interactions among all spacecraft subsystems. These systems must be robust and rigorously tested so as not to cause harm to the spacecraft. Combined with the complexity of the interactions involved, this leads to conservative approaches: even in modern spacecraft with redundant components, the response to most faults is to transition to safe mode and wait for intervention from the ground. However, a single fault may not disable a spacecraft to an extent that precludes continued operation, especially in spacecraft with redundant components. In this case, transitioning to safe mode and waiting for human intervention leads to mission downtime that impacts spacecraft performance and could be avoided. In this work, we prototype a new framework for fault detection, identification, and recovery that introduces graceful degradation of spacecraft capability in the presence of component failure. To demonstrate our framework, we focus on the attitude control subsystem. We simulate using the syssim library, a software package that enables the modular modeling of arbitrary dynamical systems with a robust and expressive API for defining faults to be injected at runtime. The MONSID library — a model-based health assessment software based on the technique of constraint suspension developed by Okean Solutions — is used in the loop for fault detection and diagnosis. We evaluate the performance of this ensemble on a non-trivial cascading fault in a set of sensors associated with an attitude control system. Graceful degradation is demonstrated across groups of redundant components that allow the spacecraft to continue operation even when a single component, such as a reaction wheel or gyroscope, has failed.
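The core idea — stepping down through capability levels instead of dropping straight to safe mode — can be sketched as a mode selector over the surviving hardware. The mode names and minimum-component requirements below are invented for illustration; they are not the paper's framework.

```python
# Capability modes ordered best-first: (name, minimum healthy counts).
# Invented example requirements, not flight rules.
MODES = [
    ("fine_pointing",   {"wheel": 3, "gyro": 1, "star_tracker": 1}),
    ("coarse_pointing", {"wheel": 2, "gyro": 1, "star_tracker": 0}),
    ("sun_safe",        {"wheel": 0, "gyro": 0, "star_tracker": 0}),
]

def select_mode(healthy):
    """Pick the most capable mode the surviving hardware supports."""
    for mode, need in MODES:
        if all(healthy.get(kind, 0) >= n for kind, n in need.items()):
            return mode
    return "safe_hold"   # nothing supported: last-resort fallback

# One of four wheels fails: full capability retained via redundancy.
mode_nominal = select_mode({"wheel": 4, "gyro": 2, "star_tracker": 2})
mode_one_wheel_down = select_mode({"wheel": 3, "gyro": 2, "star_tracker": 2})
# Cascading failures degrade gracefully instead of halting operations.
mode_two_wheels_down = select_mode({"wheel": 2, "gyro": 1, "star_tracker": 0})
```

In the paper's framework, the "healthy" inputs would come from a model-based diagnosis engine (MONSID) rather than being asserted directly.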
      • 13.0508 Demonstrating Digital Twin Interoperability between Heterogeneous Platforms via Spatial Web
        Thomas Lu (NASA/JPL), Bingbing Li (California State University Northridge), Edward Chow (Jet Propulsion Laboratory), Alicia Sanjurjo Barrio (GMV Space Systems) Presentation: Thomas Lu - -
        The increasing activity in lunar exploration, particularly around the resource-rich Lunar South Pole, highlights the urgent need for collaboration and interoperability between diverse stakeholders. Digital twins (DTs) have emerged as a powerful tool to support coordination, decision-making, and mission reliability, yet current implementations remain fragmented, relying on proprietary tools and lacking standard interfaces. The IEEE 2874-2025 Spatial Web standard addresses these gaps through protocols such as the Hyperspace Modeling Language (HSML) and Hyperspace Transaction Protocol (HSTP), which aim to enable seamless, secure, and decentralized data exchange across heterogeneous platforms. This study evaluates the effectiveness of HSML and HSTP for digital twin interoperability in a lunar rover assistance use case simulated in Unity and NVIDIA Omniverse Isaac Sim. A verification framework was developed using the DID:key method for identity management, a MySQL registry for credential validation, and a RESTful HSML API for authenticated and authorized communication over Kafka. Results show real-time synchronization with latency well below human reaction time, confirming the feasibility of trusted, cross-platform interactions. These findings demonstrate the potential of the Spatial Web as a foundation for future collaborative space missions.
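The gatekeeping step in such a framework — validating a sender's decentralized identifier (DID) against a registry before accepting a digital-twin update — can be sketched as follows. The registry contents, DID strings, and topic names are hypothetical, and a real deployment would verify cryptographic credentials (per the DID:key method) rather than a boolean flag.

```python
# Hypothetical in-memory stand-in for the credential registry
# (a MySQL table in the paper's framework). Invented entries.
REGISTRY = {
    "did:key:zRoverSim": {"verified": True,
                          "topics": {"rover/pose", "rover/battery"}},
    "did:key:zUnknown":  {"verified": False, "topics": set()},
}

def authorize(did, topic):
    """Accept an update only from a verified identity on an allowed topic."""
    entry = REGISTRY.get(did)
    return bool(entry and entry["verified"] and topic in entry["topics"])

ok = authorize("did:key:zRoverSim", "rover/pose")          # allowed
denied_topic = authorize("did:key:zRoverSim", "rover/command")  # wrong topic
denied_id = authorize("did:key:zUnknown", "rover/pose")    # unverified DID
```

The same check would sit in front of the Kafka publish path, so that only authenticated, authorized identities can mutate shared twin state.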
      • 13.0509 Nonlinear Stability Analysis of the Psyche Spacecraft Attitude Control System
        Junette Hsin (Maxar ), Mollie Johnson (Massachusetts Institute of Technology), Alexander Manka (Jet Propulsion Laboratory), Luis Sentis (The University of Texas at Austin) Presentation: Junette Hsin - -
        Launched in October 2023, the Psyche spacecraft is en route to the asteroid (16) Psyche to study its exposed nickel-iron core, relying primarily on its cold gas thrusters and reaction wheel assemblies for attitude control. The Guidance, Navigation, and Control (GN&C) team at the Jet Propulsion Laboratory (JPL) previously reported linear stability values, with the cold gas thruster (CGS) controller showing a gain margin of 13.31 dB and phase margin of 74.4°, and the reaction wheel (RWA) controller showing a gain margin of 13 dB and phase margin of 35.2°. However, these margins had not been verified against the spacecraft’s nonlinear time-domain dynamics, which include significant flex modes. This work addresses that gap by using Psyche’s Controls Analysis System Testbed/GNC Integrated Systems Testbed (CAST/GIST) tool to conduct nonlinear simulations. Our results show that the gain margins for the CGS controller of the nonlinear system are within 0.3 dB of the linear values, while those for the RWA controller are within 1 dB. In both cases, the nonlinear results show wider margins than those predicted by the linear analysis, confirming the robustness of the spacecraft’s attitude control system. These results provide the first validation of Psyche’s gain stability margins in the nonlinear domain.
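The dB figures quoted above are gain margins in the classical sense: the additional loop gain tolerable at the phase-crossover frequency. The toy example below illustrates that definition on a simple open loop, L(s) = K/(s(s+1)(s+2)) with K = 1 — an invented transfer function, not the Psyche model.

```python
import math

K = 1.0

def mag(w):
    """|L(jw)| for L(s) = K / (s (s+1) (s+2))."""
    return K / (w * math.hypot(w, 1.0) * math.hypot(w, 2.0))

def phase_deg(w):
    """Exact phase of L(jw) in degrees, kept unwrapped."""
    return -math.degrees(math.pi / 2 + math.atan(w) + math.atan(w / 2.0))

# Bisect for the phase-crossover frequency where phase = -180 deg
lo, hi = 0.1, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if phase_deg(mid) > -180.0:
        lo = mid          # crossover lies at a higher frequency
    else:
        hi = mid
wc = 0.5 * (lo + hi)      # analytically sqrt(2) for this loop

# Gain margin: how much the loop gain can grow before |L(jwc)| = 1
gain_margin_db = 20.0 * math.log10(1.0 / mag(wc))
```

The paper's contribution is checking that margins computed this way from a linearized model still hold when the full nonlinear time-domain dynamics, including flex modes, are simulated.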
    • Sarah Bucior (Johns Hopkins University Applied Physics Laboratory) & Evan Smith (Johns Hopkins University/Applied Physics Laboratory) & Benjamin Solish (N/A)
      • 13.0601 Creating Telemetry Alarms for the Europa Clipper Spacecraft
        Shelly Szanto (NASA Jet Propulsion Lab) Presentation: Shelly Szanto - -
        NASA’s Europa Clipper spacecraft was launched in October 2024 to investigate the potential habitability of Jupiter’s icy moon, Europa. The spacecraft underwent a rigorous and thorough integration and test campaign starting in April 2022. With almost 20,000 flight-software and ground-derived pieces of channelized telemetry, it is nearly impossible for human testers to monitor all spacecraft telemetry at once in real time. To address this, operators can assign alarm thresholds to individual telemetry channels, so that they are notified when a channel crosses a threshold and at what time. Alarms play a critical role in monitoring spacecraft health and safety by notifying ground personnel of anomalous trends or off-nominal values in key telemetry. This paper outlines the methodology for proposing and implementing alarms during ground testing and for in-flight operations. Additionally, it provides an in-depth look at the underlying ground data software infrastructure that supports alarm management and processing.
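Channelized alarming of this kind typically assigns each channel yellow (caution) and red (critical) limits and checks incoming samples against them. A minimal sketch — the channel name and limit values are invented for illustration:

```python
# Invented per-channel limit table: red limits bound yellow limits.
LIMITS = {
    "battery_voltage_v": {"red_low": 24.0, "yellow_low": 26.0,
                          "yellow_high": 33.0, "red_high": 34.0},
}

def check_alarm(channel, value):
    """Classify one telemetry sample against its channel's limits."""
    lim = LIMITS[channel]
    if value <= lim["red_low"] or value >= lim["red_high"]:
        return "RED"
    if value <= lim["yellow_low"] or value >= lim["yellow_high"]:
        return "YELLOW"
    return "OK"

status_nominal = check_alarm("battery_voltage_v", 29.5)
status_yellow = check_alarm("battery_voltage_v", 25.0)
status_red = check_alarm("battery_voltage_v", 23.5)
```

A ground system applying this check to every incoming sample can surface only the tripped channels and their timestamps, which is what makes monitoring ~20,000 channels tractable for operators.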
      • 13.0603 Highlights of the IMAP Integration and Test Campaign
        Justin Hahn (JHU-APL) Presentation: Justin Hahn - -
        NASA's Interstellar Mapping and Acceleration Probe (IMAP) mission will provide extensive new observations of the inner and outer heliosphere and investigate some of the most important questions in heliophysics to date. The Integration and Test (I&T) of the IMAP spacecraft was led by the Johns Hopkins Applied Physics Laboratory (APL), with extensive support and involvement from various institutions participating in the mission. An initial I&T plan was developed early in the mission, with detailed planning and procedure development taking place as the spacecraft design matured and subsystems and instruments were built and tested. Several major changes to the planned I&T flow were required due to facility conflicts and schedule challenges, but these were successfully navigated by the team. This paper describes the highlights of the system I&T campaign, including hardware assembly and integration, system testing, environmental testing, and the launch campaign.
      • 13.0605 Mirrored Surface Use and Measurement in Spacecraft Assembly and Test
        Robert Elliott (Lockheed Martin Space Systems Company) Presentation: Robert Elliott - -
        Mirrored surfaces such as cubes and flat mirrors have been used to align spacecraft components for decades. These mirrored surfaces were initially measured with optical instruments; as technology advanced, laser trackers became an option for measuring these features. This white paper and presentation will describe how and why mirrored surfaces are used on spacecraft components, early measurement techniques, and how mirrors are measured with a laser tracker. The advantages and disadvantages of the measurement methods will be discussed along with case studies that show schedule savings resulting from transitioning to modern tools.
      • 13.0606 Incompressible by Design: IMAP End-to-End Testing
        Musad Haque (Johns Hopkins University/Applied Physics Laboratory), Evan Smith (Johns Hopkins University/Applied Physics Laboratory) Presentation: Musad Haque - -
        A comprehensive and efficient test plan design is essential to delivering any complex space system on time, on budget and, most importantly, fully functional. When designing a test plan, an equilibrium must be reached between an implementer’s desire to rigorously regression-test every detail and the cost and schedule pressure that builds as a project gets closer to delivery. The risk management process is commonly used to help find this equilibrium but is generally employed late in the development cycle, when cost and schedule pressure are overwhelming. An incompressible test list, a list of the bare minimum set of verification activities required, can help protect tests covering critical functionality but is often an afterthought in test plan development, coming together only after all the individual tests have been defined. On the Interstellar Mapping and Acceleration Probe (IMAP) project, a heliophysics mission designed to study the acceleration of energetic particles and the interaction of the solar wind with the local interstellar medium, the team designed a test specifically tailored for the incompressible test list. This test, dubbed the end-to-end simulation test, prioritized test cases that verified critical interfaces while focusing on the most commonly used and/or most critical flight system functionality. The coordination challenges for the end-to-end simulation test went beyond marshaling the numerous observatory subsystem teams for a single test. The IMAP observatory hosts a suite of ten instruments. The instruments are designed and assembled across ten institutions, and their planned operations involve an external payload operations center (POC). Defining this test early in the integration and test phase enabled the team to work through basic test setup and coordination issues early, and to incrementally add new capabilities as they became available.
Seemingly minor steps in the test procedure, such as call-outs to the POC, improved with each dry run of the test. The test was set up to represent as flight-like a configuration as possible, with the onboard fault protection system un-suspended and a truth-model simulation representing a day in the life of the mission running throughout the test from setup to closeout. Incompressibility aside, the modular design of the end-to-end simulation test allowed parts of it to serve as suitable candidates for system-level regression testing. This paper discusses the methodology used for designing this test and reviews the test cases, results, and lessons learned.
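The planning idea behind an incompressible test list can be sketched as a simple compression rule: when schedule pressure shrinks the plan, every test flagged incompressible survives, and only the compressible tests compete for the remaining hours. The test names, durations, and greedy selection rule below are invented for illustration, not IMAP's actual plan.

```python
def compress_plan(tests, budget_hours):
    """tests: list of (name, hours, incompressible) tuples.

    Keep all incompressible tests unconditionally, then greedily fit
    the shortest compressible tests into the remaining budget.
    """
    keep = [t for t in tests if t[2]]               # protected tests
    used = sum(t[1] for t in keep)
    for t in sorted((t for t in tests if not t[2]), key=lambda t: t[1]):
        if used + t[1] <= budget_hours:
            keep.append(t)
            used += t[1]
    return [t[0] for t in keep]

plan = compress_plan(
    [("end_to_end_sim", 16, True),        # invented durations
     ("fault_protection_demo", 8, True),
     ("full_regression", 40, False),
     ("instrument_aliveness", 4, False)],
    budget_hours=30)
```

Note that if the incompressible tests alone exceed the budget, this rule still keeps them — which is exactly the point: the incompressible list defines the floor the schedule cannot cut below.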
    • Andrea Belz (University of Southern California) & Stephen Krein (Johns Hopkins University/Applied Physics Laboratory)
      • 13.0702 Navigating the Risks of Using Commercial Spacecraft for NASA Missions
        Rebecca Kelly (Johns Hopkins University/Applied Physics Laboratory) Presentation: Rebecca Kelly - -
        Over the past decade, NASA science missions have shifted toward the use of more commercial spacecraft and fixed-price contracts. This shift has resulted in fundamental disconnects between commercial incentives and government expectations, with one key source being risk. While risk is inherent in space exploration, the introduction of lower-cost commercial options has shifted the paradigm for characterizing and defining acceptable risk in a way that makes it challenging for both parties to maintain scope and cost. At the NASA level, this is particularly evident when expectations on industry performance (pricing, schedule) clash with heritage standards for reducing risk (GEVS, gold rules), which can result in disconnects between the two entities. This occurrence, if left unchecked, can lead to increased implementation risk and limit new and emerging technology in the service of science missions. To better address potential disconnects, all parties would benefit from participating in this discussion for a deep dive into the unique characteristics and challenges associated with commercial buses and how they would be best incorporated into the NASA model. This paper explores several examples of the common NASA-to-industry disconnects and their impacts on mission risk, including: unique requirements levied on production-line spacecraft (scope creep), discrepancies between expectations and performance, and NASA vs. industry standards. While this list is not all-inclusive, the authors believe these to be key contributors to the fundamental disconnects that disrupt the outcomes of partnerships between NASA and commercial providers.
Furthermore, to address these concerns, this paper includes a number of potential mitigation strategies that can be employed, including: utilizing experienced mission implementors to navigate the interfaces between NASA and industry; streamlining or tailoring existing standards for simpler implementation and flexibility to industry input; collecting and analyzing lessons learned from previous NASA missions to identify best practices that can be applied in the future; establishing design and testing processes that are clearly explained and signed off on early in the program; vetting commercial partner selection and management with an emphasis on assessing quality control and assurance, design and testing processes, and partner experience and expertise; and maintaining frequent communication and partnership-level collaboration to increase the flow of information between institutions and limit areas of ambiguity. A thorough understanding of both the risk factors and mitigation options discussed in this paper will enable NASA to effectively utilize new capabilities offered by the industrial base at historically lower prices while minimizing implementation risks associated with Agency-to-commercial interfaces. By effectively addressing the disconnects that lead to these risks, NASA will be better equipped for future commercial partnerships that provide the opportunity to accomplish science goals faster and cheaper.
      • 13.0705 Development of an Agile Mission Concept Formulation Framework
        Kinga Wrobel (Johns Hopkins University/Applied Physics Laboratory) Presentation: Kinga Wrobel - -
        The non-recurring engineering (NRE) cycle time for space missions remains misaligned with the accelerated timelines dictated by emerging threats, rapid scientific discovery, and evolving commercial data demands. To address this, a systematic re-architecting of mission formulation methodologies and toolchains is essential. Integrating artificial intelligence and machine learning (AI/ML) within human-in-the-loop frameworks enables real-time parameter consistency checks, generative design, and automated data transfer between tools, substantially reducing design cycle duration. Empirical demonstrations in generative structural design at NASA have shown order-of-magnitude improvements in development speed and mass optimization, validating the feasibility of a ~10× reduction in formulation time. Concurrently, digital engineering paradigms—particularly model-based systems engineering (MBSE), digital threads, and mission digital twins—establish an authoritative source of truth that supports concurrent workflows, automatic requirements verification, and high-fidelity trade space exploration. These frameworks mitigate late-stage design inconsistencies, reduce rework, and enable rapid iteration across subsystems. The incorporation of Industry 5.0 concepts further advances this capability by coupling intelligent automation with human expertise, fostering agile design cycles that preserve innovation while increasing throughput. When coupled with cost- and value-driven optimization methods, these techniques can establish a closed-loop feedback system in which mission utility, affordability, and user-driven requirements are continuously evaluated during formulation. This will result in a technically rigorous, economically optimized, and operationally responsive framework capable of delivering scientifically and strategically relevant missions on compressed timelines without compromising system reliability.
    • John Ryskowski (JFR Consulting) & Brendan Kach (Raytheon) & David Scott ((Self))
      • 13.0801 Beyond Technical Excellence: Engineering the Human Variable
        Lisa Akers (Enable Awesome) Presentation: Lisa Akers - -
        Here's an uncomfortable truth about aerospace leadership: we're really good at firefighting. So good that we've stopped asking why there are so many fires to fight. The aerospace industry's most visible failures consistently trace to human system breakdowns, yet we continue engineering technical systems with extraordinary rigor while leaving human systems to chance. With more than 35 years of aerospace experience, I've learned that this firefighting addiction runs deeper than we admit. We love being the hero who stays late to solve the emergency. We get recognition for dramatic saves, not invisible prevention. This addiction creates the very problems we pride ourselves on solving. Every technical problem lives inside a human system. You cannot solve the technical problem without first understanding the human context it exists within. This isn't about adding "soft skills" to technical competence—it's recognizing that emotional architecture is a core engineering discipline. The Emotional Architect framework introduces five systematic capabilities: Reading both technical and human systems simultaneously: Most technical problems hide human problems. You need the ability to see when human dynamics are blocking technical progress and address the human system so technical excellence can emerge. Building trust that survives disagreement: Smart people disagree about technical approaches constantly. The question is whether your team feels safe enough to bring their best thinking when they might be wrong. Preventing problems instead of heroically solving them: The most elegant technical solutions happen upstream, before problems announce themselves. But upstream work is invisible until it isn't. Refusing emotional labor dumps: When someone says "whatever you think is best," they're not respecting your expertise—they're handing you their risk while avoiding accountability. Real leadership requires co-authorship, not compliance. 
• Developing other leaders, not dependencies: The goal isn't to become indispensable. It's to build organizations that can deliver technical excellence without requiring leaders to abandon themselves. This framework emerged from real experience—like a 2018 rocket build where a team labeled as "difficult" became highly collaborative once human system dynamics were addressed. Over the following years, I applied these principles across Orion program teams and leadership challenges, refining the approach through repeated implementation. The pattern is consistent: teams that learn to read human dynamics alongside technical specifications catch problems earlier, resolve conflicts without destroying relationships, and create conditions where innovation happens naturally rather than heroically. The same skills that make you an effective technical leader also improve every relationship in your life. Learning to read the room, build trust, navigate conflict, and develop others isn't just professional development—it's human development that makes you better at delivering aerospace products. The aerospace industry has reached a tipping point. Programs are more complex, timelines are tighter, and the cost of human system failures keeps growing. The choice is simple: keep being crisis heroes who solve problems we helped create, or become emotional architects who design conditions where technical excellence and human capability enable each other. The future belongs to technical leaders who understand that the ultimate technical skill is engineering the human variable in the system.
      • 13.0802 Boosting Digital Engineering Workforce Execution with a Semantic Data Fabric
        Heidi Davidz (MANTECH International), Kimberly Nunn (MANTECH International) Presentation: Heidi Davidz - -
        The rapid advancement of technology and the increasing complexity of engineering projects necessitate improvements in how systems are developed and acquired. Digital engineering (DE) addresses these challenges, but workforce development remains an issue. This abstract explores how adopting a semantic data fabric can empower the DE workforce by improving data integration, usability, collaboration, real-time insight, and traceability. Previous work is leveraged to illustrate specific intervention points where a semantic data fabric alleviates common issues and boosts workforce execution. An ontology-first semantic data fabric provides a robust framework for integrating diverse data sources, offering a unified and consistent view across the engineering lifecycle. By using standardized vocabularies and ontologies, it enhances data quality and accuracy, eliminating data silos common in traditional tool-to-tool integrations. This seamless data exchange reduces errors and inconsistencies, streamlining processes. Ease of use is crucial for methodology adoption. A semantic data fabric allows engineers to utilize their preferred tools, leveraging existing training and processes. Its semantic layer translates complex technical data structures into a common language. Collaboration among cross-functional teams is vital in DE. A semantic data fabric fosters better collaboration by creating a shared understanding of data and terminology. It enables engineers, designers, and analysts to easily access and share information, breaking down departmental barriers and promoting efficient workflows. This enhanced collaboration accelerates problem-solving and innovation, contributing to successful mission execution. Real-time insight and automation are significant benefits. The data fabric facilitates real-time data analysis and decision-making, increasing responsiveness and agility in engineering. 
It also supports the automation of data management tasks, reducing manual effort and minimizing errors. Automated data validation, transformation, and enrichment ensure high-quality data availability, improving productivity. Publish-and-subscribe capabilities enable self-service analytics, allowing engineers to explore data, see upcoming changes, create reports, and derive insights more easily. Traceability of data and decisions is essential for compliance and auditing. A semantic data fabric enhances traceability by maintaining a comprehensive record of data lineage and transformations, ensuring that activities are trackable and auditable and providing confidence in the integrity of the engineering process. The semantic layer provides governance, pedigree, and trust, making data more reliable for decision-making and reducing time spent establishing the chain of evidence. Building on previous work that characterized DE workforce execution challenges and common failure patterns, this presentation illustrates an example of a semantic data fabric. Sample use cases with results are shown. Its features are then mapped to an enterprise architecture model and identified DE failure patterns to demonstrate how it alleviates issues and boosts workforce execution. In conclusion, a semantic data fabric offers transformative benefits to the DE workforce, enabling them to execute more effectively and fulfill mission needs. By enhancing data integration, usability, collaboration, real-time insight, and traceability, it empowers engineers to navigate complex modern engineering projects with greater efficiency and confidence, playing a pivotal role in the future of DE and use of DE for effective program execution.
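The integration and lineage capabilities this abstract describes can be sketched in miniature. The toy below is not from the authors' work; every tool name, field name, and mapping is hypothetical. Two "tools" publish records under their own local vocabularies, a small ontology maps local fields to shared terms, and every derived fact carries its source so lineage can be queried alongside the unified view.

```python
from dataclasses import dataclass

# Hypothetical ontology: maps each tool's local field names to shared terms.
ONTOLOGY = {
    "cad_tool":  {"part_no": "partId", "weight_kg": "mass"},
    "reqs_tool": {"item": "partId", "mass_limit": "massLimit"},
}

@dataclass
class Fact:
    subject: str
    predicate: str
    value: object
    source: str  # which tool published this fact (lineage)

class SemanticFabric:
    def __init__(self):
        self.facts = []

    def publish(self, tool, record):
        """Translate a tool-local record into shared-vocabulary facts."""
        mapping = ONTOLOGY[tool]
        # Find the local field that the ontology maps to the shared identifier.
        id_field = next(k for k, v in mapping.items() if v == "partId")
        subject = str(record[id_field])
        for local, shared in mapping.items():
            if shared != "partId":
                self.facts.append(Fact(subject, shared, record[local], tool))

    def query(self, subject):
        """Unified view of one subject across all publishing tools."""
        return {f.predicate: f.value for f in self.facts if f.subject == subject}

    def lineage(self, subject, predicate):
        """Traceability: which sources asserted this property?"""
        return [f.source for f in self.facts
                if f.subject == subject and f.predicate == predicate]

fabric = SemanticFabric()
fabric.publish("cad_tool",  {"part_no": "P-100", "weight_kg": 4.2})
fabric.publish("reqs_tool", {"item": "P-100", "mass_limit": 5.0})
print(fabric.query("P-100"))            # unified view: mass and massLimit together
print(fabric.lineage("P-100", "mass"))  # lineage: only cad_tool asserted mass
```

A production semantic layer would use standardized ontologies (e.g., RDF/OWL vocabularies) rather than a dict, but the shape is the same: translation at publish time eliminates tool-to-tool silos, and provenance recorded per fact makes the chain of evidence a query rather than a reconstruction.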
      • 13.0804 Seeking a Just Culture in Space Mission Assurance
        Barbara Braun (Aerospace Corporation) Presentation: Barbara Braun - -
        In testimony before Congress on health care quality improvement in 2001, Dr. Lucian Leape stated that the single greatest impediment to error prevention in the medical industry is “that we punish people for making mistakes.”[1] When we focus on who is responsible, people learn not to report problems, investigations fall prey to hindsight bias, and we miss the opportunity to understand what is truly going on. Just culture seeks to remedy this by focusing on what happened, rather than on who is at fault. It’s a form of systems thinking that places human error into the larger context of organizational and system error and seeks a balance between “punitive culture” and “blame-free” culture. Applying it to mission assurance, we find a new way of looking at failure that is not only relevant to human error, but also to how we handle risk and accountability. Just culture has become important to many industries where mistakes can be deadly, such as aviation and healthcare. In the space mission assurance business, mistakes are rarely deadly, but usually costly, and our approach to risk and failure resembles that of other high-criticality industries. Yet we still hew to an “old way” of looking at failure that hampers our ability to manage risk and improve success. We assume that: • Somebody did not pay enough attention • Somebody failed to recognize the significance of this indication, or of that piece of data • Somebody should have put in more effort • Somebody took a shortcut In the traditional practice of mission assurance, we deploy rules, procedures, and technology to guard our systems against human error, and tend to view people as the problem. People cut corners, skip steps, misapply technology, or choose to disregard these protections entirely, leading to the failure of inherently safe systems and tried-and-true processes. But people (and organizations) operate in a stew of conflicting priorities. 
Safety or success at all costs is never the only goal – organizations exist to offer products and services. Politics, resource limitations, public opinion, and deadlines constrain human action. People do their best to reconcile conflicting goals simultaneously. No system or process is inherently safe or guaranteed to bring success; people create safety and success at all levels. Ignoring these considerations, and allowing hindsight bias to inform our failure investigations, can distract us from what really went wrong in failure cases and hamper future mission success. This paper will examine how the principles of just culture as applied in other industries – particularly healthcare and aviation – can inform space mission assurance and improve mission success. [1] “Patient Safety Interview: Lucian Leape,” World Health Organization, May 2007. Archived from the original on April 13, 2010.