At the Yellowstone Conference Center in Big Sky, Montana, March 01 – March 08, 2025

  • Keyur Patel (NASA Jet Propulsion Lab) & Steven Arnold (Johns Hopkins University/Applied Physics Laboratory)
    • James Graf (Jet Propulsion Laboratory) & Nick Chrissotimos (NASA - Goddard Space Flight Center) & Keyur Patel (NASA Jet Propulsion Lab)
      • 02.0102 Results and Lessons Learned from the Psyche Mission Launch and Solar Array Deployment
        Travis Imken (Jet Propulsion Laboratory), Leina Hutchinson (Jet Propulsion Laboratory), Alexander Lumnah (NASA Jet Propulsion Lab), Steve Snyder (Jet Propulsion Laboratory), Alexander Manka (Jet Propulsion Laboratory), Shaun Ryan (NASA Jet Propulsion Laboratory), Charles Wang (Caltech/JPL), Richard Chesko (NASA Jet Propulsion Lab), Dongsuk Han (Jet Propulsion Laboratory), Kon-Sheng Wang (California Institute of Technology) Presentation: Travis Imken - Monday, March 3rd, 08:30 AM - Jefferson
        “Psyche: Journey to a Metal World” launched on October 13, 2023 from Cape Canaveral atop a SpaceX Falcon Heavy. The mission, led by principal investigator Dr. Lindy Elkins-Tanton of Arizona State University, uses an electric propulsion system to rendezvous with and orbit the large metal asteroid 16 Psyche. Psyche is a hybrid development with deep-space avionics provided by the Jet Propulsion Laboratory and a solar electric propulsion chassis based on Maxar’s 1300-series GEO communications satellites. The spacecraft hosts a magnetometer, multispectral imager, and gamma ray & neutron spectrometer science payloads and the Deep Space Optical Communications advanced technology demonstration. The Psyche mission Launch Phase extended from the final spacecraft power-on during the countdown through the first ground commanding after array deployment. This period was the mission’s only critical event, where the spacecraft autonomously detected separation from the launch vehicle, deployed both array wings, located the Sun, and achieved a power-positive, thermally-safe, and communicative state. This paper will discuss results from the Launch Phase execution, focusing on flight results compared to ground testing, methodologies used to confirm full and complete solar array deployment, and challenges faced by the launch team. The paper will also discuss lessons learned and impacts of the 1-year launch delay. This work has been carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract to NASA. Government sponsorship acknowledged.
      • 02.0103 Venus Probe Architecture
        Robin Ripley (NASA - Goddard Space Flight Center), Colby Goodloe (NASA - Goddard Space Flight Center) Presentation: Robin Ripley - Sunday, March 2nd, 05:20 PM - Jefferson
        The Deep Atmosphere Venus Investigation of Noble gases, Chemistry, and Imaging (DAVINCI) atmospheric probe is an evacuated spherical probe designed to enter the Venusian atmosphere and collect scientific data during an ~1 hour descent. The complete Probe Flight System (PFS) consists of the Descent Sphere (DS), the Entry System (ES), and the five scientific instruments the DS is designed to protect while still allowing them to take crucial scientific measurements. Prior to Venus atmospheric insertion, the DS rides inside the ES, which acts as a heat shield for the first part of the descent. The PFS is carried to Venus by the Carrier, Relay, and Imaging Spacecraft (CRIS), which is a spacecraft that will perform two fly-bys of Venus prior to releasing the PFS in preparation for final descent. After separation from the CRIS, the PFS and then the stand-alone DS will communicate with the CRIS via an Adjustable Data Rate (ADR) S-band link. The DS will enter a hibernation state during the two-day coast from the point of separation from CRIS until just before reaching Venus Atmospheric Entry Interface (AEI). This low-power mode will save energy for the descent phase, when instruments will be turned on as they are needed throughout the descent. The ADR will be used to send data back to the CRIS at the highest feasible data rate based on current conditions. The Descent Sphere is a 0.84-meter-diameter titanium vessel and aero fairing designed to withstand 90 atmospheres of pressure, 450°C temperatures, and the sulfuric acid atmosphere of Venus. The DS encapsulates and protects five science instruments from the harsh Venus environment: Venus Mass Spectrometer (VMS), Venus Tunable Laser Spectrometer (VTLS), Venus Atmospheric Structure Investigation (VASI), Venus fugacity of Oxygen (VfOx) sensor, and Venus Descent Imager (VenDI). The Descent Sphere supports the instruments with its own avionics bus and internal passive thermal system that place the instruments in a benign thermal environment, typical of a spacecraft bus. The DS has several penetrations into the sphere to allow for science data collection. The sapphire window on the forward hemisphere will provide VenDI with a view of the surface below the cloud layer (~35 km above the surface). The VASI port protrudes from the sphere to allow for measurement of temperature and pressure of the atmosphere throughout the descent. VfOx will also be mounted on this structure. The VMS inlet allows Venus atmospheric gas to enter the two spectrometers for analysis without letting the gases into the sphere itself. The probe itself is single-string with selective redundancy and will operate autonomously during descent. The descent cannot be paused to troubleshoot any problems that may arise, so the probe operates under a “fail operational” philosophy.
      • 02.0104 System Engineering Implementation of the Investigation of Convective Updrafts (INCUS) Mission
        Alex Austin (Jet Propulsion Laboratory), Benjamin Donitz (NASA Jet Propulsion Laboratory), Yunjin Kim (NASA Jet Propulsion Lab) Presentation: Alex Austin - Sunday, March 2nd, 04:55 PM - Jefferson
        The Investigation of Convective Updrafts, or INCUS, mission is a NASA Earth Ventures Mission that will measure, for the first time from space, the convective mass flux of convective storm systems using three Ka-band atmospheric radars. The INCUS instrument also includes a radiometer identical to the TEMPEST-D radiometer. INCUS is a cost-capped, Class D mission led by a principal investigator. In order to reduce the mission cost, the INCUS project took advantage of the heritage designs of RainCube, TEMPEST-D, and the 3-meter KaTENNa reflector by Tendeg. In addition, the INCUS spacecraft is a Blue Canyon Technologies (BCT) Venus-class spacecraft bus. The INCUS flight system is composed of three different types of subsystems: heritage design, commercial, and new design subsystems. The goal of our system engineering approach is to meet the INCUS science requirements by integrating these subsystems in the most cost-effective manner. The first step was to find an optimum design that minimizes the changes to the heritage and commercial subsystems. The design changes are mostly caused by the requirements (such as multiple beams to increase the imaging swath and higher resolution) and parts obsolescence. Since three flight units must be built, the best implementation approach was to build the prototypes as quickly as possible to test the design changes. This plan was not as effective as intended due to supply chain issues at the beginning of the project. The second step was to design the new subsystems to enable the heritage and commercial subsystems to be used within their capabilities. The most significant challenge to this approach was that the launch environment was unknown until the entire design was completed. It is also important to develop a contingency plan for a high-risk subsystem. The implementation plan must be flexible enough to accommodate unexpected events such as inaccurate models/analyses and test/on-orbit failures of the heritage/commercial subsystems during the development period. For a Class D mission, risk management is an essential tool for making optimal decisions. We will explain how quantitative risk management was used for the INCUS implementation approach.
      • 02.0105 Weather System Follow-on - Microwave (WSF-M) Mission Overview
        Bailey Moser Smith (BAE Systems, Inc.), Quinn Remund () Presentation: Bailey Moser Smith - Sunday, March 2nd, 09:50 PM - Jefferson
        The Weather System Follow-On – Microwave (WSF-M) mission represents the next generation operational capability for spaceborne monitoring of key environmental parameters including ocean surface vector winds, tropical cyclone intensity, sea ice age/concentration, snow water equivalent, and soil moisture. WSF-M builds upon the heritage of the Special Sensor Microwave/Imager (SSM/I) series of instruments, operated by the Defense Meteorological Satellite Program (DMSP), the GPM Microwave Imager (GMI) instrument, built by Ball Aerospace, and the Naval Research Laboratory’s Coriolis WindSat instrument. The WSF-M space segment consists of a high-heritage spacecraft design, a Microwave Imager (MWI) instrument, and an Energetic Charged Particles (ECP) sensor built by the Air Force Research Laboratory. The WSF-M spacecraft leverages heritage designs tailored to program payload and mission needs. The MWI is a conically scanning, fully polarimetric microwave radiometer employing a calibration design that ensures accurate and stable on-orbit brightness temperature measurements. This paper provides an overview of the instrument and space vehicle designs, integration and test, and predicted performance.
      • 02.0106 Systems Engineering Lessons from NASA’s Plankton, Aerosol, Cloud, Ocean Ecosystem (PACE) Mission
        Gary Davis (NASA - Goddard Space Flight Center) Presentation: Gary Davis - Monday, March 3rd, 10:35 AM - Jefferson
        The PACE observatory launched on 8 Feb 2024, enabling new ocean and atmospheric science products. This “in-house project” was conceived, designed, built, tested, and is operated by NASA’s Goddard Space Flight Center in Greenbelt, Maryland. Mission Systems Engineering is a key aspect of any large endeavor involving new technology developments, engineering trade studies, risk management, documenting design details, troubleshooting inevitable problems, resource management, operations planning and training, and continually challenging a highly skilled, motivated, and, at times, exhausted team to perform at their very best. The lead Mission Systems Engineer (MSE) carries the technical authority, which is separate and independent from safety & mission assurance and management direction. Simply put, the mission systems team is the “technical consciousness” of a flight project. The relatively small team of systems engineers has the technical memory, scar tissue from past failures, fundamental knowledge of mission objectives, and deep discipline engineering experience to lead via influence and ensure the wider project team “does the right thing” across a realm spanning years of time, hundreds of people, thousands of decisions, and tens of thousands of technical pieces. This paper describes the organization and overview of the PACE mission and examines new and old lessons of mission systems engineering.
      • 02.0107 The Endurance Mission Progress
        John Baker (Jet Propulsion Laboratory), Henry Stone (Jet Propulsion Laboratory), Richard Kornfeld (Jet Propulsion Laboratory), John Elliott (Jet Propulsion Laboratory), James Keane (Jet Propulsion Laboratory), Issa Nesnas (Jet Propulsion Laboratory), Hari Nayar (NASA/JPL) Presentation: John Baker - Sunday, March 2nd, 09:00 PM - Jefferson
        This paper will provide an update on the latest developments for the Endurance mission pre-project. The proposed Endurance mission could accomplish science that fundamentally alters our understanding of our solar system while addressing long-standing priority science questions. The concept uses a long-range rover to explore, traverse, and collect samples from across the far-side of the Moon. Endurance is effectively a sample return campaign in one mission and would collect 12 unique sets of samples. The mission concept was highly recommended to NASA last year by the National Academy’s Planetary Science and Astrobiology Decadal Survey (2023–2032). Since that time, the pre-project team has been doing studies including mission architecture trades, technology development and risk reduction. At the request of NASA, the team also did an analysis of alternatives (AoA). NASA has also formally announced a Science Definition Team (SDT) will be formed to create the mission science and measurement requirements. A trade study was also done to look at solar versus nuclear power. Last, due to the long-distance traverse needed for this mission, the pre-project plans to use autonomous off-road driving technology. Recently, the autonomous driving technology needed for this mission was demonstrated in lunar day, dusk and night-time conditions at rates that exceed the mission requirement. The concept includes a long-distance lunar rover that is planned to drive nearly 2,000 km across the far-side South Pole–Aitken (SPA) basin, which could be launched as soon as 2030. The large sample collection of up to 100 kg would then be delivered to Artemis astronauts at the lunar south pole basecamp for return to Earth. The 12 sample sets would be used to address long-standing, priority science questions about the dynamics of the early solar system, giant impacts, the nature of the lunar mantle, and the thermochemical evolution of rocky worlds. A lunar SPA sample return mission has been recommended for the past three planetary science decadal surveys as a medium-sized (New Frontiers-class) mission. Previous studies looked at collecting a kilogram of sample material from one site using a stationary platform. The Endurance lunar rover is a medium-sized mission that could traverse across geologically diverse terrain making for robust sample analysis and more definitive science conclusions. To keep costs low, the concept may use a Commercial Lunar Payload Services (CLPS) provider to deliver the approximately 600 kg rover to the center of SPA, leverage other lunar mobility developments, and engage astronauts for the sample return. The Endurance sample return mission would be a valuable part of the Artemis program by providing long-term sustainable lunar exploration in the cost range of a medium-class mission. The mission would use commercial partners to access the US industrial base while leveraging unique technology capabilities used for other planetary rovers. Working jointly with humans on the Moon, this mission would accomplish science that could fundamentally alter our understanding of our solar system history while addressing long-standing priority planetary science questions.
      • 02.0108 The PACE Ocean Color Instrument (OCI): From Concept to Commissioning
        Robert Estep (NASA - Goddard Space Flight Center) Presentation: Robert Estep - Monday, March 3rd, 10:10 AM - Jefferson
        The Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) mission is a strategic climate continuity mission that was defined in the 2010 document Responding to the Challenge of Climate and Environmental Change: NASA’s Plan for Climate-Centric Architecture for Earth Observations and Applications from Space (referred to as the “Climate Initiative”). Launched on February 8, 2024, the PACE mission represents NASA’s next investment in ocean biology, clouds, and aerosol data records [1]. PACE will extend the high quality ocean ecological, ocean biogeochemical, cloud, and aerosol particle data records begun by NASA in the 1990s, building on the exceptional heritages of the Sea-Viewing Wide Field-of-View Sensor (SeaWiFS), the Moderate Resolution Imaging Spectroradiometer (MODIS), the Multi-angle Imaging SpectroRadiometer (MISR), and the Visible Infrared Imaging Radiometer Suite (VIIRS). The PACE project office at NASA’s GSFC was responsible for the satellite development, launch and operations. The NASA Headquarters PACE Program Science office is responsible for supporting the science data processing system and assembling competed community science teams, which includes field-based vicarious calibration and data product validation efforts to support the PACE Project Science team. The mission launched into a Sun synchronous polar orbit at 676.5 km with an inclination of 98 degrees and a 1 pm local ascending node crossing time. The PACE observatory is comprised of three instruments, an Ocean Color Instrument (OCI) and two polarimeters, the Hyper-Angular Rainbow Polarimeter 2 (HARP2) and the Spectro-Polarimeter for Exploration (SPEXone). The OCI is the primary instrument on the observatory and was developed at Goddard Space Flight Center (GSFC). The OCI is a hyperspectral scanning radiometer designed to measure spectral radiances from the ultraviolet to shortwave infrared (SWIR) to enable advanced ocean color and heritage cloud and atmospheric aerosol science [2]. NASA Headquarters directed the mission development to be guided by a Design-to-Cost (DTC) process. All elements of the mission, other than the cost, are in the DTC trade space. At the heart of the DTC process are the mission studies, performed across all the mission elements. The mission studies were used to define appropriate approaches within and across elements while maximizing science capabilities at a high cost confidence. Mission baseline requirements development is also embedded within the DTC process, as these requirements were not established at the onset of the mission concept development. Baseline mission requirements are a product of the mission studies and are defined by the project office as part of the DTC process. At the time of this writing, OCI has been commissioned as part of the PACE observatory and has begun operational science data collection. This dialogue will provide a look into the design-to-cost trade space and how it influenced the Ocean Color Instrument development and capabilities from concept to commissioning.
      • 02.0110 Anomalies on the Psyche Mission: Fault Protection Performance and Lessons Learned
        Virginia Sereno (NASA Jet Propulsion Lab), Jonathan Summer (NASA Jet Propulsion Lab), Alexander Lumnah (NASA Jet Propulsion Lab), Swapnil Pujari (NASA Jet Propulsion Lab), Travis Imken (Jet Propulsion Laboratory), Steve Snyder (Jet Propulsion Laboratory) Presentation: Virginia Sereno - Monday, March 3rd, 08:55 AM - Jefferson
        On October 13th, 2023, the Psyche Mission launched from Cape Canaveral, Florida, beginning its 6-year journey to the metal-rich asteroid (16) Psyche. Targeting an arrival in August 2029, the Psyche spacecraft faces a unique journey to the largest metallic asteroid in our Solar System. The first milestone for the spacecraft to cross required successfully executing a series of autonomous launch and solar array deployment activities, finding the Sun, and achieving a power positive, communicative, and thermally stable configuration on the day of launch. Following launch, the Psyche spacecraft completed a 100-day period of initial checkouts (ICO), during which several key activities demonstrated the health and functionality of all the subsystems and characterized the behavior under planned operating scenarios prior to kicking off the cruise thrusting portion of the mission. During this first portion of the mission, the Spacecraft Team built products, designed tactical and strategic activities, and executed them to support readiness for cruise thrusting. First-time activities mean numerous opportunities for problems and unintentional commanding errors to occur, making this period especially critical for the Psyche Fault Protection team, tasked with protecting the health and safety of the spacecraft in the presence of faults. Several unexpected anomalies and idiosyncratic behaviors occurred on Psyche due to conservative threshold settings and limitations in testing fidelity. This paper discusses the anomalies that have occurred, the processes the Fault Protection team used to detect, isolate, and respond to them, along with the lessons learned after the fact. This paper intends to share these lessons learned to help future missions strengthen Fault Protection designs and operational processes. In concert with a discussion on the anomalies and idiosyncrasies, this paper discusses the downlink, analysis, and trending process implemented by the Psyche Fault Protection team to illustrate the ways in which operational risks can be reduced, as well as the benefit of building robust review processes.
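        As general background (and not a description of the Psyche flight fault protection design), the role that threshold and persistence settings play in the behavior described above can be illustrated with a minimal, hypothetical persistence-filtered limit monitor: a response is requested only after a telemetry value stays out of limits for a set number of consecutive samples, so limits chosen too conservatively can trip on benign transients. All names and numbers below are illustrative assumptions.

```python
# Minimal, hypothetical persistence-filtered fault monitor (illustrative only;
# not the Psyche fault protection design). A response is requested only after
# the monitored value exceeds its threshold for `persistence` consecutive samples.
from dataclasses import dataclass

@dataclass
class ThresholdMonitor:
    threshold: float      # trip level in engineering units (assumed value)
    persistence: int      # consecutive out-of-limit samples required to trip
    count: int = 0
    tripped: bool = False

    def update(self, value: float) -> bool:
        """Feed one telemetry sample; return True once the monitor has tripped."""
        if value > self.threshold:
            self.count += 1
        else:
            self.count = 0            # any in-limit sample resets the filter
        if self.count >= self.persistence:
            self.tripped = True
        return self.tripped

# Example: a conservative (low) threshold trips on a benign transient,
# while a higher threshold with the same persistence does not.
samples = [0.8, 0.9, 1.1, 1.2, 1.1, 0.9, 0.8]
conservative = ThresholdMonitor(threshold=1.0, persistence=3)
relaxed = ThresholdMonitor(threshold=1.5, persistence=3)
for s in samples:
    conservative.update(s)
    relaxed.update(s)
print(conservative.tripped, relaxed.tripped)  # True False
```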
      • 02.0111 GRACE-C Mission Overview
        Neil Dahya (NASA Jet Propulsion Laboratory), Felix Landerer (JPL-NASA / Caltech) Presentation: Neil Dahya - Monday, March 3rd, 09:20 AM - Jefferson
        The GRACE-C (formerly Mass Change) Mission is entering the detailed design phase of the project for both the Spacecraft and Instruments. The GRACE-C Mission builds on the successful GRACE and GRACE-FO missions to continue the generation of monthly global gravity fields, which are utilized to track and trend surface water storage, aquifer storage, ice mass growth and loss of glaciers, and their impacts on climate change. The GRACE-C mission is a directed mission and addresses one of the top 5 observables listed in the 2017 decadal survey. Just like the previous GRACE missions, the GRACE-C mission leverages the past partnerships among NASA/JPL, GFZ (German Research Center for Geoscience), DLR (German Aerospace Center) and ONERA (French Aerospace Lab), in order to ensure mission success with a fast-paced schedule and minimum cost impact to NASA. The project entered Phase A in March of 2023 and within 12 months has completed a System Requirements Review, Mission Design Review and Preliminary Design Review. This paper describes the status of the project design, analyses and hardware build to date, the key characteristics of the GRACE missions that enable the precise gravity models, the breakup of responsibilities that enables these missions, and a summary of the critical data that is generated from the GRACE missions.
      • 02.0112 Near-Earth Object Surveyor Project Progress towards CDR
        Tom Hoffman (Jet Propulsion Laboratory) Presentation: Tom Hoffman - Sunday, March 2nd, 04:30 PM - Jefferson
        The Near-Earth Object Surveyor (NEOS) is completing the detailed design phase of the project for both the instrument and the spacecraft, with all flight hardware currently in fabrication and/or testing. NEO Surveyor is designed to detect, categorize and characterize NEOs using infrared imaging. The NEOS project responds to the National Research Council’s report Defending Planet Earth: Near-Earth Object Surveys & Hazard Mitigation Strategies (2010), the U. S. National Near-Earth Object Preparedness Strategy and Action Plan (June 2018), and the objectives of NASA’s Planetary Defense Coordination Office (PDCO). The goals of the NEOS project are to: (1) identify impact hazards to the Earth posed by NEOs (both asteroids and comets) by performing a comprehensive survey of the NEO population; (2) obtain detailed physical characterization data for individual objects that are likely to pose an impact hazard; (3) characterize the entire population of potentially hazardous NEOs to inform potential mitigation strategies by assisting the determination of impact energies through accurate object size determination and physical properties. The mission will make significant progress toward the George E. Brown, Jr. NEO Survey Program objective of detecting, tracking, cataloging, and characterizing at least 90% of NEOs equal to or larger than 140 m in diameter. The project is a collaboration between NASA-JPL, the University of California – Los Angeles and industry, with BAE notably providing the spacecraft and key instrument elements. The current status of the hardware build, analyses and testing are discussed at the subsystem and system level. Issues and trades completed and underway are covered. Other key development activities are discussed and put into perspective for the overall project. The paper describes the maturation of the integrated modeling approach being used on NEOS to verify the project margin on meeting the science requirements.
      • 02.0113 Mission Status and Initial Results from the Surface Water and Ocean Topography Project
        Parag Vaze (Jet Propulsion Laboratory) Presentation: Parag Vaze - Sunday, March 2nd, 09:25 PM - Jefferson
        A new satellite mission for oceanography and hydrology science called Surface Water and Ocean Topography (SWOT) was developed jointly by the U.S. National Aeronautics and Space Administration and France’s Centre National d’Etudes Spatiales and launched on December 16, 2022. Using state-of-the-art "radar interferometry" technology to measure the elevation of water, SWOT will observe major lakes, rivers and wetlands while detecting ocean features with unprecedented resolution. SWOT data will provide critical information that is needed to assess water resources on land, track regional sea level changes, monitor coastal processes, and observe small-scale ocean currents and eddies. SWOT is revolutionizing oceanography by detecting ocean features with 10 times better resolution than present technologies. The higher resolution is revealing small-scale ocean features that contribute to the ocean-atmosphere exchange of heat and carbon. These are major components in global climate change and will improve the understanding of the ocean environment including motion of life-sustaining nutrients and harmful pollutants. SWOT data will be used to improve ocean circulation forecasts, benefiting ship and offshore commercial operations, along with coastal planning activities such as flood prediction and sea level rise. Surface water storage, flow rates and discharge in rivers, lakes, reservoirs and wetlands are currently poorly observed at the global scale. SWOT will provide the very first comprehensive view of Earth's freshwater bodies from space and will allow scientists to determine changing volumes of fresh water across the globe. These measurements are key to understanding surface water availability and in preparing for important water-related hazards such as floods and droughts. SWOT will contribute to a fundamental understanding of the terrestrial branch of the global water cycle. SWOT is achieving 1 cm precision at 1 km x 1 km pixels over the ocean and 10 cm precision over 50 m x 50 m pixels over land waters. Other payloads of the mission include a conventional dual-frequency altimeter for calibration to large-scale ocean topography, a water-vapor radiometer for correcting range delay caused by water vapor over the ocean, and a precision orbit determination package (GPS, DORIS, and laser retroreflector). The purpose of this paper is to present the SWOT mission status, including post-launch experiences, findings and initial results from the mission’s calibration/validation and first year of the operational phase.
      • 02.0115 Contingency Decontamination Utilizing Pointing Maneuvers for the Cryogenic SPHEREx Mission
        John Alred (), Bradley Moore (JPL/Caltech), Sara Susca (), Konstantin Penanen (), Cynthia Ly (Jet Propulsion Laboratory), Valentina Ricchiuti (NASA Jet Propulsion Lab), Carlos Soares (NASA Jet Propulsion Laboratory), Jennifer Rocca (Jet Propulsion Laboratory) Presentation: John Alred - Monday, March 3rd, 09:45 AM - Jefferson
        The SPHEREx mission, a collaboration between NASA's JPL and Caltech, aims to conduct the first near-infrared all-sky spectral survey using a passively-cooled, cryogenic telescope in low-earth orbit. This mission addresses NASA’s astrophysics goals by investigating the origin of the universe, the evolution of galaxies, and the abundance of biogenic ices. A significant challenge faced by SPHEREx is the management of water contamination, which can interfere with the wavelengths critical to its survey. Unique water contamination issues arise from the combination of cryogenic operating temperatures and the limited temperature control capabilities of passive cooling. In this work, we explore a novel approach to contingency decontamination of SPHEREx utilizing pointing maneuvers to mitigate water contamination risks. A comprehensive contamination model, previously developed to simulate the transport and accumulation of outgassed water on sensitive components of the SPHEREx payload, is extended and utilized to analyze contingency scenarios for the SPHEREx observatory decontamination. This model incorporates time and spatially resolved thermal data to predict the behavior of water outgassing under various operational conditions, ensuring a comprehensive molecular contamination simulation. The baseline decontamination strategy designed for SPHEREx employs decontamination heaters to control the thermal environment. During the cooldown phase, the heaters manage the temperature of optical surfaces to prevent excessive water accumulation. However, if the originally planned decontamination heaters fail, excess water contamination could be detrimental to the scientific return of the mission, effectively blinding the observatory from detecting water in the universe, a key biogenic molecule. In a contingency scenario, specifically timed pointing maneuvers must be performed to preferentially heat the optical elements. These maneuvers involve adjusting the spacecraft's orientation to expose the optical components to varying thermal environments, promoting the desorption of accumulated water. This approach was simulated with high fidelity, demonstrating its effectiveness in maintaining the required temperatures to meet water contamination requirements. The simulations showed that these pointing maneuvers can effectively reduce water contamination on the cryogenic surfaces, thus protecting the optical performance of the instrument. Through these high-fidelity simulations and the incorporation of pointing maneuvers, the SPHEREx mission can confidently mitigate water contamination risks in a contingency scenario. This ensures that, even under scenarios where the primary decontamination heaters fail, the SPHEREx mission can meet its scientific objectives.
      • 02.0116 Achieving Stability: Systems Design and Analysis Process of the Roman Space Telescope
        James Govern (Aerodyne Industries), Kuo-Chia Liu (NASA - Goddard Space Flight Center), Mark Melton (NASA - Goddard Space Flight Center), Lisa Bartusek (NASA - Goddard Space Flight Center), Michael Akkerman (), Martina Atanassova (), James Basl (KBR Inc supporting Goddard Space Flight Center), Matthew Bolcar (NASA Goddard Space Flight Center), Robert Campion (NASA - Goddard Space Flight Center), Kathleen Cheng (), David Guernsey (NASA - Goddard Space Flight Center), Gregory Michels (Sigmadyne, Inc.), Hume Peabody (NASA Goddard Space Flight Center), David Schwartz (Giant Magellan Telescope Organization), Cory Smiley (Aerodyne Industries), Lawrence Sokolsky (NASA - Goddard Space Flight Center) Presentation: James Govern - Monday, March 3rd, 11:00 AM - Jefferson
        The Nancy Grace Roman Space Telescope (Roman) is the highest-priority large space mission identified by the New Worlds, New Horizons Decadal Survey in Astronomy and Astrophysics. Its mission encompasses critical research in dark energy, dark matter, exoplanets, and infrared astrophysics. Additionally, Roman will feature a technology demonstration coronagraph with active wavefront control for exoplanet imaging. Scheduled to launch no later than spring 2027, Roman will operate in a Sun-Earth L2 orbit with a primary mission duration of 5 years. To achieve its scientific objectives and technology demonstration goals, the Roman observatory must meet exceptional stability requirements. This paper outlines the multi-faceted, layered design approach employed to meet these stability requirements while adhering to cost and schedule constraints. Throughout the development phase, the team used a system-level analysis, or integrated modeling, approach to enable rapid assessments of design trade-offs and facilitate the maturation of system designs to satisfy the stringent stability requirements.
    • Alex Austin (Jet Propulsion Laboratory) & Michael Gross (NASA Jet Propulsion Lab)
      • 02.0201 Starship as an Enabling Option for a Uranus Flagship Mission
        Daniel Gochenaur (Massachusetts Institute of Technology), Chloe Gentgen (Massachusetts Institute of Technology), Olivier De Weck (MIT) Presentation: Daniel Gochenaur - Monday, March 3rd, 11:25 AM - Jefferson
        In 2022, the National Academy of Sciences Planetary Science Decadal Survey (PSDS) recommended exploration of Uranus as its highest priority Flagship mission for the 2030s. The PSDS recommendation relied on the Uranus Orbiter and Probe (UOP) concept as its baseline for the mission. UOP assumed a launch in 2031 on a Falcon Heavy Expendable rocket and an intermediate Jupiter flyby, allowing it to arrive at Uranus before 2049. At present, it is likely that the original UOP launch will be postponed, which will potentially cause a Jupiter gravity assist to become unavailable and could considerably delay the arrival at Uranus. However, a later launch date allows us to consider launch vehicles currently under development such as SpaceX's Starship, a two-stage heavy-lift launch vehicle that is intended to be refuellable on-orbit. Although Starship's performance capabilities have yet to be demonstrated, current development timelines suggest they will be known before selecting a launch vehicle for a Uranus mission. This study investigates the possibility of leveraging the anticipated capabilities of Starship to support a Flagship mission to Uranus. The results show that with on-orbit refueling, Starship will be capable of performing a direct transfer to Uranus without the need for intermediate planetary flybys. Direct transfer with eight refueling launches allows Starship to deliver more than four metric tons of mass to Uranus in only nine years, compared to more than thirteen years for UOP. Larger payload masses with shorter times of flight can be achieved with additional refueling launches. The mission can be further enhanced by using Starship to perform aerocapture, an orbit insertion maneuver that uses aerodynamic drag to decelerate the spacecraft and can reduce the mission propellant requirement. As a mid- to high lift-to-drag ratio vehicle, Starship can perform aerocapture while maintaining deceleration and heating values that are not more severe than those observed in aerocapture studies of other vehicles. With eight refueling launches and a nine-year transfer time of flight, Starship can deliver more than 30 metric tons of payload mass to Uranus using aerocapture. Shorter time of flight transfers also become available with a reduced number of refueling trips compared to propulsive orbit insertion. By using Starship to deploy a spacecraft and probe of a similar design as UOP, the reduced transfer times can facilitate an arrival at Uranus well before equinox, and can enable science phases of up to 13 years without power limitations. Performing the insertion burn with Starship allows the mission delta-v available for the science tour to be multiplied considerably. Using the UOP architecture would make the mission compatible with both Falcon Heavy and Starship, thereby reducing risk. Alternatively, the additional payload mass that can be deployed to Uranus using Starship can be used to enhance the orbiter and probe architecture beyond the current design, potentially allowing for a larger instrument suite, a more capable probe or additional probes, and even a second spacecraft. This presents a higher risk, yet potentially greater science return option that could become feasible if financial conditions allow.
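        As a back-of-the-envelope illustration of why on-orbit refueling changes the delivered-mass trade discussed above (using hypothetical round numbers, not actual Starship, Falcon Heavy, or UOP figures), the ideal rocket equation shows how the payload a stage can push through a given departure delta-v grows when it leaves Earth orbit with full tanks.

```python
# Illustrative rocket-equation sketch with hypothetical numbers; these are NOT
# actual Starship, Falcon Heavy, or UOP performance figures.
import math

G0 = 9.80665  # standard gravity, m/s^2

def payload_tons(dv_ms, isp_s, dry_t, prop_t):
    """Largest payload (t) for which dry + payload + propellant achieves dv_ms."""
    ve = isp_s * G0
    ratio = math.exp(dv_ms / ve)          # required m0/mf from the rocket equation
    return max(prop_t / (ratio - 1.0) - dry_t, 0.0)

dv = 7500.0   # assumed Earth-departure delta-v for a fast direct transfer, m/s
isp = 380.0   # assumed vacuum specific impulse, s
dry = 100.0   # assumed stage dry mass, t

# Without refueling the stage reaches orbit nearly empty (assume 120 t remain);
# with on-orbit refueling it departs with full tanks (assume 1200 t capacity).
print(payload_tons(dv, isp, dry, 120.0))    # 0.0 -> this delta-v is unreachable
print(payload_tons(dv, isp, dry, 1200.0))   # ~85 -> tens of tons of payload
```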
      • 02.0202 Uranus Orbiter and Probe: A Novel Approach to Meet the Challenges
        Stacy Weinstein-Weiss (), John Elliott (Jet Propulsion Laboratory), Alfred Nash (Jet Propulsion Laboratory), Troy Hudson (NASA Jet Propulsion Laboratory), Mark Chodas (NASA Jet Propulsion Lab), Greg Garner (JPL), Anthony Freeman (NASA Jet Propulsion Lab), Damon Landau (JPL) Presentation: Alfred Nash - Monday, March 3rd, 11:50 AM - Jefferson
        Motivation: The Origins, Worlds, and Life Decadal Survey recommends a Uranus Orbiter and Probe (UOP) mission as the next planetary flagship [1]. The current President’s Budget Request for Fiscal Year 2025 does not support NASA-funded mission studies until 2027 [2], which will likely result in missing a potential Jupiter Gravity Assist (JGA). We thus need to find other trajectory options to Uranus, arriving ideally before Equinox in 2050 for unique observations. Additionally, other challenges drive the UOP design. We describe these challenges and a novel mission concept which mitigates them while achieving comparable science return to that in the Decadal Survey mission concept. Challenges: The UOP flagship faces numerous challenges. Losing the JGA means reducing flight system mass to maintain flight times to Uranus of 13.5 yrs with thermally-benign perihelia above 0.9 AU. Another challenge is power. Uranus will be 18–19 AU from the Sun, which makes Radioisotope Thermoelectric Generators (RTGs) the best power source option. Based on current best estimates, the inventory of RTGs is likely to be limited in this timeframe, driving a desire to reduce power demand and the number of RTGs required while maintaining flagship-worthy science and the earliest possible launch date that budget profiles will allow. Another challenge is ensuring launch date flexibility which allows CONOPS-similar backup launch opportunities. Perhaps the ultimate challenge is to meet these previously mentioned challenges using a credible low-cost and low-risk approach. Approach: Mass and power drivers were examined, informed by >50 years of experience in developing space science missions at JPL. In this preliminary study, we assumed the same Decadal UOP study payload and probe mass [3]. Significant power and mass reductions were achieved by techniques such as eliminating reaction wheels and adopting new electrical power distribution architectures. While some technology evolution was required, we took a “no miracles” approach. We chose a trajectory that allows launch any year without a JGA and without going much below 1 AU (no Venus flybys), thus providing yearly launch and backup opportunities with virtually identical CONOPS and environments. Results: By using a combination of new design approaches, we were able to match the same payload and science as the Decadal UOP study with 42% less dry mass and a requirement of only two Next Gen Mod 1 RTGs. The mass reduction enabled a trajectory that matches the Decadal UOP’s cruise duration while providing yearly launch opportunities. Our approach used a Falcon Heavy Expendable launch vehicle with a kick stage, rather than a Falcon Heavy Expendable alone. The design was run through JPL’s Team-X, which demonstrated that all appropriate design and cost margins were achieved. Mission development and operations phase costs were comparable to the UOP Decadal study costs. This approach is potentially extensible to other future mission concepts. Conclusions: Based on this initial study, it appears that all challenges can be met with adequate margins while achieving comparable Decadal study science using this novel approach. Evolutionary technology was used that can achieve Technology Readiness Level (TRL) 6 by the end of Phase A.
      • 02.0203 Ultra-Violet Exoplanet Explorer
        Peter Wurz (University of Bern), Brice-Olivier Demory (University of Bern), Yann Alibert (University of Bern), Pontus Brandt (The Johns Hopkins University Applied Physics Laboratory), Christoph Mordasini () Presentation: Peter Wurz - Monday, March 3rd, 04:30 PM - Jefferson
        We are planning a new exoplanet space mission using a UV telescope to measure the composition of the major elements in the atmospheres of exoplanets in ultraviolet light, the Ultra-Violet Exoplanet Explorer (UVEX). Measuring the composition is a new direction in exoplanet research. The basic concept we foresee is very similar to the successful CHEOPS mission (CHaracterising ExOPlanet Satellite), a joint mission between ESA and Switzerland. CHEOPS is dedicated to studying bright, nearby stars that are already known to host exoplanets, to make high-precision observations of the planet's size as it passes in front of its host star. Similarly, UVEX will study bright nearby stars with known exoplanets in transit spectroscopically in the UV wavelength range. The UV telescope should have two instruments: 1) a UV spectrometer from 50 to 300 nm, with a spectral resolution of about 1 nm, and 2) a high-resolution spectrometer around the Lyman-alpha line (121.6 nm) with a spectral range of +/- 400 km/s (+/- 1.63 nm) and a resolution of 10 m/s (R = 40,000). The UV spectrometer will provide element abundances for the major elements H, He, C, N, O, Na, S, and Mg. Based on the performance of the Hubble Space Telescope (STIS/COS) and a telescope aperture of 0.5 m, we estimate that we will be able to observe about 30 exoplanets with a S/N above 2. In addition, in this wavelength range there are molecular absorptions of simple molecules common in atmospheres (CO2, CO, H2S, O3, H2O, several simple hydrocarbons) and a few ion lines (e.g., Ca II and Mg II). An additional benefit is that we will get UV spectra of stars, which will be very valuable once HST is no longer in operation. With the high-resolution spectrometer we will investigate the Lyman-alpha line profile to learn more about the interaction of the extended atmosphere of the exoplanets with the stellar wind of their host star. Given the projected size, UVEX would fit as an ESA fast mission or as a NASA SMEX mission. Because of interference from the geocorona, a large orbit is needed to minimize the UV contamination of the spectra; at the minimum distance of 10 RE the geocorona is still about 400 R. Ideally, UVEX would be placed at the L2 position as foreseen for the current ESA F missions.
      • 02.0205 Technology Supporting the GRACE-C Mission and Other Mass Change Designated Observable Missions
        Stephen Bennett (BAE Systems, Space and Mission Systems) Presentation: Stephen Bennett - Monday, March 3rd, 04:55 PM - Jefferson
        BAE Systems Space and Mission Systems (formerly Ball Aerospace) has been supporting measurement of the Mass Change Designated Observable since the Gravity Recovery and Climate Experiment (GRACE) mission was launched in 2002 and planning began for follow-on measurements. BAE is currently building laser stabilization cavities for JPL to use on the GRACE-C mission, and we are exploring new accelerometer technologies with the University of Florida for better gravity measurements and other geodesy missions. The NASA-funded S-GRS IIP project, along with the upcoming InVEST program, GRATTIS, is developing an ultra-precise inertial sensor for future Earth geodesy missions. These sensors are used to measure or compensate for all non-gravitational accelerations of the host spacecraft so that they can be removed in the data analysis to recover spacecraft motion due to Earth’s gravity field, which is the main science observable. The S-GRS design follows that of the flight-proven LISA Pathfinder (LPF) Gravitational Reference Sensor (GRS) that represents the state of the art in precision inertial sensors [1,2]. The S-GRS instrument presented in this paper is a scaled-down version of the LPF GRS, with reduced mass and complexity, and is optimized with respect to performance for future geodesy missions [3]. In the on-orbit operational state, the GRS functions by sensing the state of a precision cubic Test Mass (TM) with respect to the surrounding Electrode Housing (EH) via non-contacting electrostatic distance sensing and actuation electrodes. The S-GRS instrument approach consists of two opposing Caging Mechanism (CM) assemblies that constrain the TM for launch and release it into pure freefall on-orbit such that the residual velocity is within the non-contact actuation system control authority. This paper focuses on the novel design, assembly, functional testing, and troubleshooting of the S-GRS CM. References 1. Bortoluzzi, D., Vignotto, D., Zambotti, A., et al. “In-flight testing of the injection of the LISA Pathfinder test mass into a geodesic.” Advances in Space Research, Vol. 67, 2021. 2. Bortoluzzi, D., Vignotto, D., Dalla Ricca, E., Mendes, J. “Investigation of the in-flight anomalies of the LISA Pathfinder Test Mass release mechanism.” Advances in Space Research, Vol. 68, 2021. 3. Dávila Álvarez, A., Knudtson, A., Patel, U. et al. “A simplified gravitational reference sensor for satellite geodesy.” J Geod 96, 70 (2022).
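        For readers unfamiliar with non-contacting electrostatic sensing, the basic principle can be sketched with a generic parallel-plate model (an illustration only, not the S-GRS electrode geometry or readout design): the test mass displacement is recovered from the differential capacitance toward the two facing electrodes.

```latex
% Generic parallel-plate illustration (not the S-GRS electrode geometry):
% a test mass displaced by x between two electrodes with nominal gap g.
C_1 = \frac{\epsilon A}{g - x}, \qquad C_2 = \frac{\epsilon A}{g + x},
\qquad \frac{C_1 - C_2}{C_1 + C_2} = \frac{x}{g}
\;\;\Rightarrow\;\; x = g\,\frac{C_1 - C_2}{C_1 + C_2}.
```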
      • 02.0206 Uranus Cruise and Tour Design Impacts on Science Cost and Risk
        Damon Landau (JPL), Reza Karimi (NASA Jet Propulsion Lab), Randy Persinger (), John Elliott (Jet Propulsion Laboratory), Stacy Weinstein-Weiss (), Mark Hofstadter (), Karl Mitchell (Jet Propulsion Laboratory), Julie Castillo Rogez (JPL/Caltech), Carol Raymond (Jet Propulsion Laboratory) Presentation: Damon Landau - Monday, March 3rd, 05:20 PM - Jefferson
        A detailed trade space analysis of the Uranus Orbiter and Probe mission informs decisions on science, cost, and risk, including design choices of science tour phases, probe and orbiter payload mass, orbiter spacecraft mass, mission duration, and frequency of launch opportunities. The science tour phases flow from baseline and threshold objectives for different scientific disciplines: atmospheres, rings and small satellites, large moons, magnetospheres, and interiors. A broad search of both interplanetary cruise trajectories and Uranian tours produces a trade space of launch opportunities, flight times, and delta-v that sets bounds on spacecraft dry and wet mass. The mission design employs current technology, such as a Falcon Heavy launch vehicle and a chemical propulsion system. Gravity assists from Venus and/or Earth enable launches throughout the 2030s, when Jupiter is unavailable. The low-risk and low-cost approach delivers a probe to Uranus and completes a tour of its rings, moons, and magnetosphere within 19 years. A mission duration of 21 years enables multiple close approaches to Uranus that complete the set of objectives called for in the Planetary Science and Astrobiology Decadal Survey.
      • 02.0209 EnVision Radio Science and Altimetric Data Processing for Orbit Determination
        Tommaso Torrini (Sapienza University of Rome), Antonio Genova (University of Rome, La Sapienza), Simone Andolfo (University of Rome, La Sapienza), Anna Maria Gargiulo (Sapienza University of Rome), Pascal Rosenblatt (Nantes University), Caroline Dumoulin (), Sebastien Lebonnois () Presentation: Tommaso Torrini - Monday, March 3rd, 09:00 PM - Jefferson
        The Radio Science Experiment onboard the ESA EnVision mission, scheduled for launch in the early 2030s, will conduct comprehensive gravity and radio science investigations at Venus for 6 Venusian cycles. This mission aims to enable in-depth studies of the planet’s atmosphere and interior. The spacecraft will be inserted into a low-altitude quasi-polar orbit about Venus, hosting onboard a telecommunication system capable of dual X/Ka band frequency downlink for radio science as well as a radar that can operate in altimetric mode due to its beam-limited footprint in both along-track and cross-track directions. To predict the scientific return of the mission, it is crucial to conduct simulation campaigns of the mission scenario. One of the key challenges for the performance of the gravity experiment is the lack of accurate knowledge about Venus' atmospheric dynamics. This limitation prevents accurate orbit determination due to the mismodeling of the atmospheric drag perturbation. We propose a joint analysis of radar altimetric crossover measurements and radio science data to enhance the accuracy of orbit reconstruction by minimizing systematic errors caused by atmospheric mismodeling inaccuracies. Synthetic radio measurements are simulated according to the nominal tracking schedule of the ESTRACK stations and the expected instrumentation performances. The radio tracking data are then perturbed by white Gaussian noise that accounts for effects associated with solar plasma and Earth's troposphere. Altimetric crossover measurements are derived from the difference in altitude at points where two surface swaths of the instrument intersect. These measurements are provided by the radar when it operates in nadir pointing mode for altimetry. They are generated synthetically, accounting for the instrument's constraints by adding intrinsic sensor noise and pointing error noise. We simulated a realistic orbit determination problem by carrying out a perturbative analysis. Synthetic measurements are generated using the latest Venus Climate Database atmospheric model. Parameter estimation is performed through a least-squares filter, incorporating a different atmospheric model, NASA Venus-GRAM, to emulate discrepancies between ground-truth and model-based non-conservative forces. Including altimetric crossovers provides key constraints on the spacecraft radial direction, helping to mitigate the effects of atmospheric mismodeling. Moreover, altimetric data can provide essential trajectory information when radio science data are unavailable due to Venus occultations or gaps in the tracking schedule. The developed method involves processing the combined dataset of radio and altimetric data, assigning appropriate weights to each measurement type to achieve the optimal estimate. This method is used to assess the benefits of combining radio science data with radar altimetric crossovers. The results show that this method leverages the complementary information from the two datasets, leading to significant reductions in spacecraft state uncertainties. A more accurate trajectory is crucial to avoid errors or biases in the estimation process of geophysical parameters for studying the planet's interior and atmosphere.
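        As a schematic illustration of the combined processing described above (a toy linear example, not the EnVision orbit determination filter), a weighted least-squares estimate fuses the two data types by weighting each measurement with the inverse of its assumed noise variance; the relative weights determine how strongly the crossovers constrain the solution. All matrices and noise levels below are made-up placeholders.

```python
# Toy weighted least-squares fusion of two measurement types (illustrative only;
# not the EnVision orbit determination filter). Each data type is weighted by
# the inverse of its assumed noise variance.
import numpy as np

rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0, 0.5])          # "state" to estimate (arbitrary)

H_radio = rng.normal(size=(40, 3))           # partials for radiometric data
H_xover = rng.normal(size=(10, 3))           # partials for altimetric crossovers
sig_radio, sig_xover = 0.1, 0.5              # assumed 1-sigma measurement noise

y_radio = H_radio @ x_true + rng.normal(scale=sig_radio, size=40)
y_xover = H_xover @ x_true + rng.normal(scale=sig_xover, size=10)

# Stack both data types and build the per-measurement weights w = 1/sigma^2.
H = np.vstack([H_radio, H_xover])
y = np.concatenate([y_radio, y_xover])
w = np.concatenate([np.full(40, sig_radio**-2), np.full(10, sig_xover**-2)])

# Normal equations: x_hat = (H^T W H)^-1 H^T W y
N = H.T @ (H * w[:, None])
x_hat = np.linalg.solve(N, H.T @ (w * y))
cov = np.linalg.inv(N)                       # formal estimate covariance
print(x_hat, np.sqrt(np.diag(cov)))
```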
      • 02.0210 The Chromospheric Magnetism Explorer (CMEx) Mission Concept
        Bill Kalinowski (BAE Systems) Presentation: Bill Kalinowski - Monday, March 3rd, 09:25 PM - Jefferson
        The Chromospheric Magnetism Explorer (CMEx) seeks to conduct unprecedented measurements of the Sun’s magnetic field between the photosphere and corona. This mission addresses critical problems documented in the 2013 Solar and Space Physics Decadal Survey, namely “Determine How Magnetic Energy is Stored and Explosively Released.” CMEx does so by returning magnetic field strength and direction information of active regions prior to and following eruptions. CMEx is also poised to provide insight into heliospheric magnetic fluxes, adding unique observational data to answer the so-called “open flux problem.” The CMEx mission collects spectropolarimetry data and generates magnetic field information utilizing inversion codes and other techniques that interpret Zeeman- and Hanle-effect changes to spectral line polarization. The CMEx instrument consists of a two-band ultraviolet spectropolarimeter with a single-band ultraviolet imager. The instrument performs repeated raster scans of active regions, prominences, filaments, and coronal holes at a cadence allowing direct observation of evolving and changing solar magnetic structures. Launched into a 6 A.M. sun-synchronous orbit, CMEx will have continuous visibility of the Sun outside of its 3-month eclipse season, allowing near constant monitoring of solar features of interest. Image stacking and subsequent spectrum demodulation onboard the observatory provides for downlink of full Stokes vector information for the observed spectral lines. CMEx also utilizes the instrument raster scan mirror to provide line-of-sight stability by compensating for spacecraft motion and attenuating system jitter. Observation plans developed by the Science Operations Center (SOC) are transferred to the Mission Operations Center (MOC) for conversion into command sequences subsequently uplinked to the observatory via KSAT ground stations. After launch in 2029, CMEx will complete a two-year science mission following a short period of combined on-orbit spacecraft and instrument commissioning. CMEx provides a high-performance space observatory by combining heritage instrument and spacecraft element designs, as well as commercial-off-the-shelf (COTS) technologies, into a low-cost solution appropriate for a cost-capped small explorer class NASA mission. This paper provides an overview of the CMEx mission concept and of key observatory and ground system conceptual designs. CMEx is a candidate Heliophysics Small Explorer (SMEX) mission led by the Principal Investigator, Dr. Holly Gilbert, at the High Altitude Observatory (HAO) at the U.S. National Science Foundation National Center for Atmospheric Research (NSF NCAR). The CMEx mission partners include BAE Systems Space and Mission Systems (BAES), and the Laboratory for Atmospheric and Space Physics at the University of Colorado, Boulder (CU/LASP). As of the publication date (March 2025), the CMEx project has completed its Phase A Concept Study Report and awaits the results of the Heliophysics SMEX mission down selection process, expected to complete in the second quarter of 2025.
      • 02.0211 Conceptual Mission to Dim the Sun (DimSun) Using Controllable Swarm of Smallbody Regolith Particles
        Saptarshi Bandyopadhyay (Jet Propulsion Laboratory), Sriramya Bhamidipati (NASA Jet Propulsion Lab), Maria Hakuba (Jet Propulsion Laboratory), Angadh Nanjangud (Queen Mary University of London), Maira Saboia da Silva (NASA Jet Propulsion Laboratory), Mark Richardson (California Institute of Technology), Evan Fishbein (California Institute of Technology), Aditya Paranjape (Monash University), Carl Percival (NASA Jet Propulsion Lab), John Reager (Jet Propulsion Laboratory), Tushar Jadhav (Manastu Space Technologies) Presentation: Saptarshi Bandyopadhyay - Monday, March 3rd, 09:50 PM - Jefferson
        Global warming due to the rising levels of greenhouse gases in the atmosphere is an existential threat to humanity. This paper focuses on geoengineering using space-based solar radiation management. The DimSun mission concept will reduce solar insolation by 1.16% using a controlled dust cloud, made with small body regolith particles, at the Solar Radiation Pressure (SRP)-balanced Sun–Earth Lagrangian L1 point. This will achieve the United Nations’ Paris Agreement target of limiting the temperature increase to 1.5 °C above pre-industrial levels during the 21st century. This will give humanity some time to permanently reduce the global levels of greenhouse gases. In this paper, we present the calculations for the size and location of the dust cloud and the key technologies necessary to enable this mission concept. In the future, once global greenhouse gas levels are permanently reduced, DimSun can be reversed by deactivating the control spacecraft, causing the dust cloud to disperse away from Earth within 100 days.
      • 02.0212 Survey of Mission Concepts for Exploring the Dark Ages Universe
        Saptarshi Bandyopadhyay (Jet Propulsion Laboratory), Ashish Goel (NASA Jet Propulsion Lab), Gaurangi Gupta (), Joseph Lazio (Jet Propulsion Laboratory), Paul Goldsmith (JPL), Tzu-Ching Chang () Presentation: Saptarshi Bandyopadhyay - Tuesday, March 4th, 08:30 AM - Jefferson
        The Dark Ages Epoch of the early Universe has not been explored for cosmological observations to date! Observations of the Dark Ages have the potential to revolutionize physics and cosmology by improving our understanding of fundamental particle physics, dark matter and dark energy physics, and inflation. The Dark Ages Epoch represents the period in the early evolution of the Universe, starting immediately after the decoupling of Cosmic Microwave Background (CMB) photons from matter, and ending with the formation of the first stars and galaxies. The HI signal (with rest wavelength and frequency 21 cm and 1420 MHz, respectively) is the only available probe we can use to understand this crucial phase in the cosmological history of the Universe and answer fundamental questions about the validity of the standard cosmological model, dark matter physics, and inflation. Due to cosmological redshift, this signal is now only observable in the 5–40 MHz frequency band. The biggest challenge with these observations is separating the Dark Ages signal from the 5-orders-of-magnitude stronger Galactic foreground noise, which is the synchrotron radiation emitted by relativistic electrons that travel on spiraling paths in our Milky Way Galaxy’s magnetic field. A number of mission concepts have been proposed that could observe the Dark Ages from the lunar far-side. They are broadly classified into: (1) single telescope concepts on the lunar surface, (2) sparse dipole array concepts on the lunar surface, (3) satellite constellation concepts in Moon orbit, and (4) satellite constellation concepts in the Earth-Moon system. This paper presents a survey of different mission concepts that have been proposed to observe the Dark Ages Universe.
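        For reference, the quoted observing band follows directly from the cosmological redshift of the 21 cm line; a quick check using only the numbers stated above:

```latex
% Redshift check for the stated 5--40 MHz band (rest frequency 1420 MHz):
1 + z = \frac{\nu_{\mathrm{rest}}}{\nu_{\mathrm{obs}}}
\quad\Rightarrow\quad
z(40~\mathrm{MHz}) = \frac{1420}{40} - 1 \approx 34.5,
\qquad
z(5~\mathrm{MHz}) = \frac{1420}{5} - 1 = 283.
```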
    • Clara O'Farrell (Jet Propulsion Laboratory) & Ian Clark (Jet Propulsion Laboratory)
      • 02.0301 Lidar-Based Landing Hazard Detection for Dragonfly
        Carolyn Sawyer (Johns Hopkins University/Applied Physics Laboratory), Samuel Bibelhauser (JHU-APL) Presentation: Samuel Bibelhauser - Wednesday, March 5th, 11:50 AM - Gallatin
        NASA’s Dragonfly mission will use a dual-quadcopter lander to explore the surface of Saturn’s moon Titan. After Titan arrival, the entry, descent and landing sequence includes a transition from parachute descent to powered flight, autonomous selection of a safe landing site, and autonomous landing. Additionally, the lander will relocate to multiple sites of scientific interest on the Titan surface via powered flight. Due to the long light-time delay to Earth, limited flight time, and limited pre-existing map data for Titan, Dragonfly must utilize autonomous precision landing and hazard avoidance technology to search for safe landing sites. The Dragonfly team has developed a hazard detection approach based on imaging lidar data accumulated during powered flight above the Titan surface. This paper will describe the development of the Lidar Terrain Sensing algorithms, which have been designed to provide robust hazard detection and safe site identification capability. Algorithm components include compiling a detailed topographic map of the surface, estimating the surface slope and roughness, assessing the relative safety of all map points, and suggesting the highest-scoring locations as candidate landing sites. The safety assessment is designed to include probabilistic estimates of sensor and algorithm noise effects, the lander mechanical tolerance to surface features, and navigation uncertainty from site selection to landing. The interaction of the Lidar Terrain Sensing algorithms with other onboard algorithms and mission operations will be discussed, along with algorithm performance in simulated flight conditions.
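The slope/roughness/scoring pipeline outlined in the abstract above can be illustrated with a minimal grid-based sketch. This is not the flight Lidar Terrain Sensing code; the function name, window size, slope and roughness limits, and the multiplicative scoring rule are placeholder assumptions, and the flight algorithms additionally fold in probabilistic sensor noise, lander tolerance, and navigation uncertainty.

```python
import numpy as np

def hazard_map(dem, cell=0.5, win=5, max_slope_deg=10.0, max_rough=0.15):
    """Illustrative slope/roughness safety scoring over a gridded terrain map.
    dem: 2D elevation array [m]; cell: grid spacing [m]; win: odd window size."""
    h = win // 2
    rows, cols = dem.shape
    score = np.full(dem.shape, np.nan)
    # Local plane-fit design matrix for the sliding window
    ys, xs = np.mgrid[-h:h + 1, -h:h + 1]
    A = np.column_stack([xs.ravel() * cell, ys.ravel() * cell, np.ones(win * win)])
    for i in range(h, rows - h):
        for j in range(h, cols - h):
            z = dem[i - h:i + h + 1, j - h:j + h + 1].ravel()
            (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
            slope = np.degrees(np.arctan(np.hypot(a, b)))        # local tilt [deg]
            rough = np.std(z - A @ np.array([a, b, c]))          # residual roughness [m]
            # Higher score = safer; zero if either limit is exceeded.
            score[i, j] = max(0.0, 1.0 - slope / max_slope_deg) * \
                          max(0.0, 1.0 - rough / max_rough)
    return score

# Toy usage: a gentle ramp with a rock-like bump; the bump depresses nearby scores.
dem = np.fromfunction(lambda i, j: 0.02 * j, (40, 40))
dem[20, 20] += 0.3
s = hazard_map(dem)
print("best candidate cell:", np.unravel_index(np.nanargmax(s), s.shape))
```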
      • 02.0305 Impact Mechanics of Antarctic Air-Dropped Ice Penetrator
        Alex Miller (MIT), Michael Brown (MIT), Aaron Makikalli (Massachusetts Institute of Technology), Jeffrey Hoffman (Massachusetts Institute of Technology) Presentation: Alex Miller - Tuesday, March 4th, 08:55 AM - Jefferson
        Existing measurement tools for ice shelves and other glaciated regions have limited capability to measure dynamic events in remote areas. The Seismo-Geodetic Ice Penetrator (SGIP) offers a method for rapid deployment of a broadband seismometer and Global Navigation Satellite System (GNSS) positioning system designed to sense ice shelf resonant forcings caused by ocean gravity waves and atmospheric waves. Additionally, SGIP will track seismic indications of calving and rifting, facilitating better estimates of sea level rise. During operation, SGIP is dropped from an aerial vehicle, reaching a terminal velocity of 42 m/s; during impact with the snowpack surface, SGIP experiences an average acceleration of approximately 500 m/s². Upon impact, a fore-body section separates from the upper aft-body "flare" section and continues several meters into the ice shelf, while the aft-body remains at the surface with a set of communications antennas. The SGIP platform is compared to previously envisioned and tested penetrator systems. Impact modeling of SGIP into glacial firn is detailed, with a focus on fast simulation runtimes for design exploration. An elastic nonlinear compaction model is developed which relates analytically to fundamental material properties and has higher fidelity compared to other fast (>1 second runtime) techniques. Finite element models are developed at small and large scale and are compared to corresponding experimental data that were collected from a small-scale drop-test model. Impact mechanics results from a full-scale prototype hardware test in Juneau, Alaska are discussed and contextualized in relation to the models developed to inform future system design changes.
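A constant-deceleration back-of-envelope check of the impact numbers quoted in the abstract above (the paper's compaction and finite element models resolve the actual depth-dependent deceleration; this is only a consistency estimate):

```python
# Constant-deceleration estimate from the quoted terminal velocity and average deceleration.
v0 = 42.0      # impact velocity [m/s] (from abstract)
a  = 500.0     # average deceleration magnitude [m/s^2] (from abstract)
g0 = 9.81

stop_time  = v0 / a                 # [s]
stop_depth = v0**2 / (2.0 * a)      # [m]
print(f"stopping time  ~ {stop_time*1e3:.0f} ms")
print(f"stopping depth ~ {stop_depth:.1f} m   (average load ~ {a/g0:.0f} g)")
```

Under these assumptions the fore-body stops in roughly 80 ms over a depth of order 2 m, broadly consistent with the "several meters" of penetration described above.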
      • 02.0306 Landing Trajectory Tracking Guidance for Reusable Launch Vehicle Using MPC with SOCP
        Da-Hwi Kim (), JaeIl Jang (KAIST), Chang-Hun Lee (Korea Advanced Institute of Science and Technology (KAIST)) Presentation: Da-Hwi Kim - Tuesday, March 4th, 09:20 AM - Jefferson
        This paper proposes a novel and practical landing trajectory tracking guidance law for reusable launch vehicles (RLVs) using model predictive control (MPC) with second-order cone programming (SOCP). After real-time optimization using convex programming for powered descent guidance (PDG), it is desirable to use tracking guidance to handle various disturbances and uncertainties. Unlike the conventional approach using control gains, MPC can be employed to account for operational constraints; unfortunately, utilizing MPC as a feedback tracker for RLVs presents unique challenges due to their nonlinear dynamics and, more significantly, the non-convex nature of throttle limits. The well-known convexification technique called “Lossless Convexification” is not directly applicable because the objective function in MPC is not fuel minimization but rather tracking-error regulation. Successive convex programming (SCP) could be exploited, but its lack of guaranteed convergence can destabilize the system. To avoid the SCP method, the proposed algorithm formulates the optimization problem within the MPC as an SOCP. First, we leverage a perturbed dynamics model based on the optimal PDG trajectory, expressed in terms of perturbed states and control inputs. The nominal control from the optimal trajectory is supplemented by perturbed control inputs generated by the MPC, enabling compensation for disturbances and uncertainties. Second, and most importantly, a second-order cone constraint on the perturbed control input is introduced to handle the thrust limit of the RLV engine. In general, the optimal thrust magnitude in 3-degree-of-freedom (DOF) PDG has a bang-bang shape. Since this can easily saturate the control under disturbances and uncertainties, a thrust margin should be considered in the trajectory optimization. If this “margin” value is used as the limit on the magnitude of the perturbed control input, we can prove that the corrected control input, the sum of the nominal and perturbed control inputs, satisfies the thrust limit. Conversely, if the limit on the perturbed thrust magnitude is chosen to cover worst-case disturbances, it can be reflected in the PDG trajectory optimization as a thrust margin. 3-DOF numerical simulations, based on a SpaceX Falcon 9 model, demonstrate the efficacy of the proposed algorithm in achieving precise guidance performance despite wind disturbances and model uncertainties. The SOCP tracking problem in the MPC has been solved via the ECOS solver, and all PDG-related constraints, including the thrust limit, are fully satisfied. Moreover, the use of SOCP in the MPC ensures convergence to the global solution in polynomial time, enhancing its attractiveness for aerospace Guidance and Control (G&C) design.
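A minimal sketch of the tracking-MPC idea described above, using cvxpy with the ECOS solver: perturbed double-integrator dynamics about a nominal trajectory, a quadratic tracking objective, and a second-order cone bound on the perturbed control equal to an assumed thrust margin. All numerical values and the simplified dynamics are placeholders, not the paper's RLV model.

```python
import numpy as np
import cvxpy as cp

# Illustrative one-step MPC setup (not the authors' implementation).
n, m, N, dt = 6, 3, 10, 0.5
I3 = np.eye(3)
A = np.block([[I3, dt * I3], [np.zeros((3, 3)), I3]])   # perturbed position/velocity dynamics
B = np.vstack([0.5 * dt**2 * I3, dt * I3])               # acceleration-level control input
margin = 2.0   # assumed thrust margin reserved in the nominal trajectory [m/s^2]
dx0 = np.array([5.0, -3.0, 2.0, 0.5, 0.0, -0.2])         # current deviation from nominal

dx = cp.Variable((n, N + 1))
du = cp.Variable((m, N))
cost, cons = 0, [dx[:, 0] == dx0]
for k in range(N):
    cost += cp.sum_squares(dx[:, k + 1]) + 0.1 * cp.sum_squares(du[:, k])
    cons += [dx[:, k + 1] == A @ dx[:, k] + B @ du[:, k],
             cp.norm(du[:, k]) <= margin]   # SOC bound keeps nominal + perturbed thrust feasible
prob = cp.Problem(cp.Minimize(cost), cons)
prob.solve(solver=cp.ECOS)
print("first corrective acceleration:", du.value[:, 0])
```

The key point mirrored here is that bounding the perturbed control by the reserved margin makes every subproblem a convex SOCP, so no successive convexification is needed inside the tracking loop.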
      • 02.0307 Powered Descent Guidance via Sequential Convex Programming with Constraint Function Design
        JaeIl Jang (KAIST), Da-Hwi Kim (), Chang-Hun Lee (Korea Advanced Institute of Science and Technology (KAIST)) Presentation: JaeIl Jang - Tuesday, March 4th, 09:45 AM - Jefferson
        This paper proposes a method of imposing path constraints through a "constraint function" in sequential convex programming and applies it to the powered descent guidance (PDG) of a reusable launch vehicle (RLV). In the landing burn phase of an RLV, which begins at 1–5 km altitude, it is necessary to limit the angle of attack in regions of high dynamic pressure to avoid excessive aerodynamic loads. Furthermore, the attitude of the launch vehicle should be constrained near the landing point to ensure a vertical landing. To efficiently satisfy these requirements, path constraints should be imposed based on the state of the RLV, rather than as fixed constants. To address these situations, a previous study proposed state-triggered constraints, which activate a constraint only when a trigger condition is met. However, the discontinuous, on-off activation of the constraint can lead to abrupt changes in the state variables associated with the constraint as the trigger condition changes. Such abrupt changes can strain the system or degrade its performance in realizing the guidance command. Instead, we propose a constraint function method that designs the form of the constraint as a function of state-dependent or independent variables. This approach enables shaping the constraints into a desired form, ensuring that the constraint changes continuously as the input of the constraint function varies. Moreover, it allows the constraint design domain to be expanded from independent variables, such as time or altitude, to domains based on state variables such as dynamic pressure, which enhances design flexibility. Using the proposed constraint function method, we have established a powered descent guidance problem for a reusable launch vehicle and applied the constraint function approach to the angle-of-attack and tilt-angle constraints. In the case of the angle-of-attack constraint, the upper limit of the angle of attack has been designed as an exponential function with dynamic pressure as the input. This configuration tightly restricts the angle of attack in the region of high dynamic pressure and smoothly relaxes the upper bound as the dynamic pressure decreases. For the tilt-angle constraint, the upper limit has been designed as a function of normalized flight time so as to be applicable to the free-final-time problem. This constraint progressively tightens the tilt angle to zero as the vehicle approaches landing, ensuring a vertical orientation upon touchdown. Since the PDG problem with constraint functions does not conform to the structure of a convex optimization problem, the guidance solution has been derived using sequential convex programming (SCP). The linearized convex subproblem has been solved via the ECOS solver, and the SCP has converged within 5 iterations at a 1E-5 tolerance level. The angle of attack reaches the upper limit set by the constraint function in the region of high dynamic pressure and gradually relaxes as the dynamic pressure decreases. For the tilt angle, the magnitude progressively decreases to zero as landing approaches, resulting in a vertical landing.
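Illustrative constraint-function shapes of the kind described above: an angle-of-attack bound that decays exponentially with dynamic pressure, and a tilt bound that tightens to zero with normalized flight time. The functional forms and every coefficient below are assumptions for illustration, not the authors' tuned values.

```python
import math

def alpha_max_deg(q_dyn, alpha_lo=2.0, alpha_hi=15.0, q_ref=20e3):
    """Angle-of-attack upper bound [deg]: tight at high dynamic pressure q_dyn [Pa],
    relaxing exponentially as q_dyn decreases (illustrative form and coefficients)."""
    return alpha_lo + (alpha_hi - alpha_lo) * math.exp(-q_dyn / q_ref)

def tilt_max_deg(tau, tilt0=30.0):
    """Tilt-angle upper bound [deg] versus normalized flight time tau in [0, 1],
    tightening to zero at touchdown (illustrative form)."""
    return tilt0 * (1.0 - tau)

for q in (80e3, 40e3, 10e3, 1e3):
    print(f"q = {q/1e3:5.0f} kPa  ->  alpha_max = {alpha_max_deg(q):5.2f} deg")
for tau in (0.0, 0.5, 0.9, 1.0):
    print(f"tau = {tau:3.1f}       ->  tilt_max  = {tilt_max_deg(tau):5.1f} deg")
```

Because both bounds vary smoothly with their inputs, the constraint never switches on or off abruptly, which is the behavior the constraint-function approach is meant to provide.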
      • 02.0309 Approaches for Guidance & Control Distribution for Powered Descent of Chandrayaan-3
        CHIRANJIB GUHA MAJUMDER (Indian Space Research Organization) Presentation: CHIRANJIB GUHA MAJUMDER - Tuesday, March 4th, 10:10 AM - Jefferson
        Chandrayaan-3 achieved the historic feat of becoming the first robotic lunar exploration mission to perform a safe and soft landing near the lunar south pole, on 23 August 2023. The successful landing was achieved through the design and development of the powered descent trajectory and Guidance and Control (G&C) algorithms, followed by analyses of the G&C interactions and apportionment of margins. Approaches for thrust augmentation, disturbance mitigation, and onboard compensation algorithms were studied, and the limits of guidance performance were brought out. Different schemes for control authority distribution and allocation to RCS thrusters for rapid attitude control (ACS) and to liquid engines during thrusting maneuvers were evolved and validated. The entire course of powered descent, from 30 km altitude until touchdown, was executed through closed-loop guidance and control with decision and safety logics implemented in the onboard Autonomous Landing Sequencer (ALS). The current paper brings to light the guidance and control distribution and attitude control aspects during the course of the descent. The novelty of the research lies in arriving at formulations for engine-based attitude control along with fast Reaction Control thruster driven ACS, and the implementation of the same onboard the Chandrayaan-3 NGC. The variants of engine-based ACS studied and devised during the braking and terminal descent phases of powered descent, and the conditional augmentation of RCS thrust (for ΔV aiding), are the key highlights of the paper. The merits of knowledge-driven engine thrust biasing schemes for coarse attitude control as a first-order correction approach, and of a relative throttling scheme as a feedback approach to mitigating engine differential thrust impacts, are presented. The prioritization between the thrust regulation loop, which meets engine thrust shortfalls from the guidance demand, and engine-based ACS, which corrects attitude deviations, is brought out. The faithful execution, handshaking, and scheduling of all the algorithms together were validated through extensive simulations and ground tests, which led to the successful landing as observed from post-flight data analyses.
    • Paul Backes (Jet Propulsion Laboratory) & Richard Volpe (Jet Propulsion Laboratory) & Joseph Bowkett (Jet Propulsion Laboratory)
      • 02.0401 Tethered Variable Inertial Attitude Control Mechanisms through a Modular Jumping Limbed Robot
        Yusuke Tanaka (UCLA), Alvin Zhu (University of California, Los Angeles), Dennis Hong (UCLA) Presentation: Yusuke Tanaka - Wednesday, March 5th, 08:30 AM - Jefferson
        This paper presents the concept of a tethered variable inertial attitude control mechanism for a modular jumping-limbed robot designed for planetary exploration in low-gravity environments. The system comprises two sub-10 kg quadrupedal robots connected by a tether, capable of executing continuous jumping gaits and stabilizing in flight using inertial morphing technology. Through model predictive control (MPC), we demonstrate attitude control by adjusting the limbs and tether length to modulate the system's principal moments of inertia. Our results indicate that this control strategy allows the robot to stabilize during flight phases without needing traditional flywheel-based systems or relying on aerodynamics, making the approach mass-efficient and ideal for small-scale planetary robots performing continuous jumps. The paper outlines the dynamics, MPC formulation for inertial morphing, actuator requirements, and simulation results, illustrating the potential of agile exploration for small-scale rovers in low-gravity environments like the moon or asteroids.
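A two-point-mass illustration of the inertial-morphing principle described above (not the paper's full multibody MPC model): for two tethered modules of mass m separated by tether length L, the spin-axis moment of inertia is I = m L² / 2, so reeling the tether in or out changes the spin rate through conservation of angular momentum. The masses, lengths, and spin rate below are assumed values.

```python
# Two tethered point masses, each of mass m, separated by tether length L, spinning
# about their common center of mass: I = 2 * m * (L/2)^2 = m * L^2 / 2.
m = 9.0              # mass of each module [kg] (assumed; "sub-10 kg" per abstract)
L1, L2 = 4.0, 1.0    # tether length before / after reel-in [m] (assumed)
w1 = 0.5             # initial spin rate [rad/s] (assumed)

I = lambda L: m * L**2 / 2.0
w2 = w1 * I(L1) / I(L2)   # angular momentum conservation: I1 * w1 = I2 * w2
print(f"I1 = {I(L1):.1f} kg m^2, I2 = {I(L2):.1f} kg m^2, spin rate {w1} -> {w2:.1f} rad/s")
```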
      • 02.0402 Scaling of RoboBall: A Parametric Robot Family for Crater Exploration
        Rishi Jangale (Texas A&M University Robotics and Automation Design (RAD) Lab), Aaron Villanueva (Texas A&M University Robotics and Automation Design (RAD) Lab), Garrett Jibrail (Texas A&M University ), Micah Oevermann (Texas A&M University), Derek Pravecek (Texas A&M University), Meghali Prashant Dravid (Texas A&M University), Robert Ambrose (Texas A&M) Presentation: Rishi Jangale - Wednesday, March 5th, 08:55 AM - Jefferson
        Non-conventional robotic rovers have gained traction over the past several decades as traditional designs continue to struggle with steep crater edges and other extreme lunar terrain. One particular paradigm of interest in non-traditional rovers in recent years has been inflatable, pendulum-driven spherical robots. This concept promises improved durability, dexterity, and environmental protection compared to the legacy designs. Previously, a 2 ft. diameter spherical robot named “RoboBall II” was built and tested as an alternative rover proof-of-concept. Several advantages were immediately apparent, including resilient orientations (the robot could not be flipped over), improved environmental protections (the system is environmentally isolated), and a far more generous descent angle (the ball can bounce). These capabilities meet conventional requirements, such as environmental protection, while improving upon traversal performance. Yet, the RoboBall II system's design introduced many new challenges. For instance, at the small scale of 2 ft. diameter, there is virtually no volume for payloads. Since most payloads need direct access to the environment, the payload bay cannot be enclosed in the shell. This suggests that the only usable payload region in this class of robot would be the annular region at the center of the driveshaft. When hollow, this space allows for a volume that can access the environment. However, the driveshaft of RoboBall II is too small to contain any useful payload, even if it were hollowed. Thus, as any useful application of the RoboBall paradigm requires a much larger scale, RoboBall II was used primarily as a research testbed and reconnaissance platform. RoboBall III, as a 6 ft. diameter ball, addresses these challenges. With a 6 in. diameter hollow, annular region as the driveshaft, payloads akin to NASA's CubeSats in size could be fitted to the system for exploration or reconnaissance tasks. This exposed payload bay allows for RoboBall III to physically interact with its surroundings beyond pure observation. However, with scaling come many challenges. Not only do logistics get more difficult, but several systems scale poorly in spherical systems: components meaningfully affecting volume, e.g. air compressors, scale in size and power at a greater than cubic rate. This article will show data on scaling of performance, mass, and power parameters from RoboBall II to RoboBall III. Discrepancies between predictive models or derivations and the practicalities of physically building a robot of this class will be highlighted. Images and diagrams of both RoboBalls undergoing environmental testing will be shown. In short, whereas scaling pendulum-driven, inflatable, spherical rovers greatly improves performance and capability, the methods used in construction of the smaller robots must be re-evaluated and redesigned for scaling. Analysis of these results will be utilized in optimizing the size and construction of future RoboBall designs to cater to their specific mission environments.
      • 02.0403 MLGTT: An Open-Source Tool to Generate Camera-Relative Ground Truth for Monocular Localization
        Jorge Enriquez (), Tu-Hoa Pham (Jet Propulsion Laboratory, California Institute of Technology), Philip Bailey (NASA Jet Propulsion Lab), Kyle Dewey (California State University, Northridge) Presentation: Joseph Bowkett - Wednesday, March 5th, 09:20 AM - Jefferson
        Ground truth is essential in Computer Vision problems as it establishes a baseline for error thresholds in practical applications. This paper presents the Monocular Localization Ground Truth Tool (MLGTT), a modular ground truth seeking tool for camera pose calculations, aimed at obtaining accurate metrics to verify the performance of localization algorithms while isolating intrinsic image error factors. Our tool leverages a Perspective-n-Point (PnP) algorithm and an iterative Random Sample Consensus (RANSAC) algorithm, which approximates the best-fit model, represented by the highest number of inliers and the lowest reprojection error. To utilize this PnP algorithm, we require a predetermined set of 3D points selected on an object’s CAD model, 2D points from an image of the object that are manually selected in the tool, and a camera model that includes distortion parameters for handling unrectified images. These parameters yield a transformation matrix from the camera to the object with respect to the world, which can be saved as a CSV file, along with camera and CAD profiles for repeatability. The MLGTT reprojects these points onto the canvas using the best transformation matrix from PnP. When paired with a rasterizer tool, the MLGTT can synthesize an image using the calculated transformation matrix that overlays on top of the original image. A slider tool is also implemented in the MLGTT, which provides the user with a better visual aid for verifying localization fidelity by blending the original image with the rendered image. This visual verification helps ensure the accuracy of the pose estimation to the level of accuracy of the human eye. The MLGTT allows human perception to aid in finding the ground truth, which removes errors found in other methods. We introduce our characterization of user and pixel error, quantified through experiments with individuals both familiar and unfamiliar with the MLGTT or computer vision generally. We will also discuss reprojection error on different screen resolutions and how it might affect the findings. Additionally, we quantify the ground truth error in free spaces as a function of the resolution and field of view (FOV) of the camera that captured the image. Findings include demonstrating the tool’s precision and ease of use compared to traditional external ground truthing methods, such as Vicon or April Tags. These fiducial-based systems typically require external markers and complex setups, whereas the MLGTT operates without the need for such markers. This makes the MLGTT more versatile and easier to deploy in various environments, demonstrated by testing on Mars 2020 images. Originally designed to provide independent ground truth to verify our localization algorithms for the Mars Sample Return (MSR) campaign, the MLGTT has been adapted for multiple different applications and can be applied generally to any monocular imagery of known hardware with a calibrated camera. This software will be released as an open-source project along with this paper.
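The PnP + RANSAC core described above can be sketched with OpenCV's solvePnPRansac; the abstract does not state which implementation MLGTT uses, and the 3D points, camera model, and pose below are synthetic placeholders. The tool itself adds the manual 2D point selection, CSV export, rasterizer overlay, and slider verification described in the abstract.

```python
import numpy as np
import cv2

# Synthetic "CAD" points and camera model (all values are illustrative assumptions).
obj_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1],
                    [0.1, 0.1, 0], [0.1, 0, 0.1]], dtype=np.float32)   # 3D model points [m]
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])            # camera matrix
dist = np.zeros(5)                                                     # distortion coefficients
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.02, -0.01, 0.5])

# Simulate the manually clicked 2D points by projecting with the known pose.
img_pts, _ = cv2.projectPoints(obj_pts, rvec_true, tvec_true, K, dist)
img_pts = img_pts.reshape(-1, 2).astype(np.float32)

# Recover the camera-to-object transform with PnP inside a RANSAC loop.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, dist,
                                             reprojectionError=2.0)
reproj, _ = cv2.projectPoints(obj_pts, rvec, tvec, K, dist)
err = np.linalg.norm(reproj.reshape(-1, 2) - img_pts, axis=1)
print("recovered translation [m]:", tvec.ravel())
print("mean reprojection error [px]:", err.mean())
```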
      • 02.0404 Design and Testing of TRL5 IPEx Actuators
        Casey Clark (), Drew Smith (NASA - Kennedy Space Center), Andrew Nick (), Victoria Ortega (NASA - Kennedy Space Center), Jason Schuler (NASA - Kennedy Space Center), Jeffrey Dyas (), John Lahl (NASA - Kennedy Space Center) Presentation: Casey Clark - Wednesday, March 5th, 09:45 AM - Jefferson
        In-Situ Resource Utilization (ISRU) is an essential capability that NASA is addressing by framing specific missions around placing sustainable infrastructure on both the Moon and Mars. Regolith is the most abundant and ostensibly the only tangible resource available on the Moon. The IPEx project is developing a 30 kg-class excavator to demonstrate robotic excavation of large amounts (10,000 kg) of granular lunar regolith on a future technology demonstration mission. IPEx uses novel excavation tools, called bucket drums, which are hollow cylinders with scoops staggered around the outside. This combination of bucket drum excavation tools and counter-acting excavation forces enables low mass robotic excavators to effectively dig in reduced gravity environments. This is a significant departure from terrestrial excavators that rely on high mass to produce tractive forces to counteract the forces of excavation. The mission requirements dictate a highly reliable mobility and excavation system due to the substantial amount of surface traversing required relative to all pre-existing planetary rovers. IPEx is made up of several custom actuators that all need to be verified for their intended application. This paper focuses on the initial design, testing and modifications of three actuators: a mobility actuator, a shoulder (arm) actuator, and a bucket drum (excavation) actuator. The design of each actuator incorporates mass and volume constraints of the rover, while intentionally being oversized for the lunar mission to allow for Earth testing such that the lunar hardware could be verified in a lab environment. This consideration allows for identical motor designs between the actuators tested and those that will be used on mission, effectively reducing the project complexity. The mission’s concept of operations (ConOps) drove the motor sizing, torque, and speed requirements. The calculations based on the ConOps resulted in choosing a design for all three actuators that integrates a ThinGap motor, harmonic drive, SKF angular contact bearings, and Hall effect sensors. Each actuator was tested under four separate test profiles: ambient motor characterization, hot and cold motor characterization, accelerated life test (ALT), and ConOps. The accelerated life tests and successful ConOps have verified the motors’ ability to survive the mission with a relative factor of safety. The first series of all three unique actuators were seamlessly characterized and completed their respective ALTs. However, only the bucket drum actuator performed a complete ConOps within the first series without the need to restart the test. The first mobility actuator needed to be decoupled from the test chamber and reinstalled several times before the ConOps completed, while the first shoulder actuator was unable to complete a ConOps. The first series of testing resulted in various failure modes and minor design alterations of each actuator. These failure modes and design alterations are highlighted within this paper.
      • 02.0405 The DLR AutoNav Experiment with the IDEFIX Rover: Software Architecture & Preliminary Ops Concept
        Mallikarjuna Vayugundla (German Aerospace Center - DLR), Tim Bodenmüller (German Aerospace Center - DLR), Martin Schuster (German Aerospace Center (DLR)), Lukas Burkhard (German Aerospace Center - DLR), Marco Sewtz (German Aerospace Center - DLR), Wolfgang Stuerzl (German Aerospace Center (DLR)), Marcus Müller (German Aerospace Center - DLR), Andreas Lund (German Aerospace Center - DLR), Fabian Buse (German Aerospace Center - DLR), Rudolph Triebel (German Aerospace Center - DLR), Michal Smisek (German Aerospace Center - DLR), Markus Grebenstein (German Aerospace Center - DLR) Presentation: Mallikarjuna Vayugundla - Wednesday, March 5th, 10:10 AM - Jefferson
        The Japan Aerospace Exploration Agency (JAXA) is set to launch the Martian Moons eXploration (MMX) mission in 2026. This sample-return mission aims to enhance our understanding of the origins of Mars' two moons, Phobos and Deimos. The spacecraft is expected to enter Martian orbit in 2027, and subsequently, it will orbit Phobos to conduct detailed observations. In early 2029 (timeline might change), the mission will deploy a small rover named IDEFIX onto Phobos to scout potential sample collection sites for the lander and to conduct additional scientific measurements. IDEFIX is a collaborative development by the German Aerospace Center (DLR) and the Centre National d’Etudes Spatiales (CNES). If successful, IDEFIX will be the first rover to land and navigate on Phobos. The rover mission poses several challenges, including a brief four-month operational period, significant communication delays, limited data bandwidth, low energy resources, and a partially unknown environment. As a technology demonstration and also to assist safe driving, DLR is developing an autonomous navigation on-board module for the rover. Such local autonomy for navigation is needed as teleoperation is not possible due to the long communication delays between Earth and Phobos, coupled with limited communication capabilities. In addition to the rover mission challenges mentioned before, the rover's limited sensor capabilities, such as the lack of an inertial measurement unit (IMU) and pan-tilt mechanisms, make the development of an autonomous navigation solution a hard problem. This paper outlines the features and architecture of the DLR navigation module, detailing its adaptation to the mission's constraints. It also describes the preliminary integration of the module into mission operations, including on-ground rover mobility pre-planning. Additionally, the paper discusses the testing of the navigation module in both simulated and real test environments.
      • 02.0406 14 Rover-years of Slip Risk Assessment for Robotic Arm Safety
        Aaron Curtis (), Ethan Schaler (NASA Jet Propulsion Lab), George Antoun (ATA Engineering, Inc), Michael Stragier (JPL), Tyler Del Sesto (NASA Jet Propulsion Lab) Presentation: Aaron Curtis - Wednesday, March 5th, 10:35 AM - Jefferson
        Science return from the Curiosity and Perseverance Mars rovers, like Spirit and Opportunity before them, depends in large part on “contact science” activities in which instruments on a turret mounted on a robotic arm are placed near, on, or into the ground surface. When planning arm operations, rover teams are responsible for ensuring that torques and forces on arm joints and links will not exceed design specifications during contact science. This necessarily includes consideration of the likelihood and consequence of rover movement due to wheel slip or settling. Simulation tools and practices for protecting Curiosity’s arm were initially presented at this conference by White et al. (2014), centered on the Slip Risk Assessment Process (SRAP). During the 11 years of operating Curiosity, SRAP and related arm safety procedures were gradually improved, and subsequently adapted for use on Perseverance operations. Herein, we summarize those improvements and adaptations and then examine the use of SRAP and related procedures since their inception. We present data addressing questions including: How often do arm safety concerns preclude arm contact science? When contact science is precluded, what are the primary scenarios of concern? What is SRAP’s impact on the planning timeline? How do predicted wheel slip likelihoods compare with observed slip event frequency? In discussing these questions, we aim to consider how the Curiosity and Perseverance arm safety paradigm might be further optimized to maximize science return while maintaining hardware safety. We also aim to provide lessons that may be of use to planetary missions conducting terrain-contact science, including upcoming Lunar missions.
      • 02.0407 Robotics Capabilities Development for Mars Sample Return Transfer Activities
        Marco Dolci (NASA Jet Propulsion Laboratory - CalTech), Joseph Bowkett (Jet Propulsion Laboratory), Philip Bailey (NASA Jet Propulsion Lab), Anna Boettcher (NASA Jet Propulsion Lab), Junggon Kim (), Preston Rogers (NASA Jet Propulsion Lab), Tu-Hoa Pham (Jet Propulsion Laboratory, California Institute of Technology), Daniel Chavez-Clemente (Jet Propulsion Laboratory), Jennifer Shatts (), Julie Townsend (Jet Propulsion Laboratory), Philip Twu (NASA Jet Propulsion Lab), Curtis Collins (Jet Propulsion Laboratory) Presentation: Joseph Bowkett - Wednesday, March 5th, 11:00 AM - Jefferson
        The planned NASA-ESA Mars Sample Return campaign aims to be the first set of missions to bring Martian samples back to Earth, where thousands of scientists would make groundbreaking discoveries that could redefine our understanding of the Red Planet and perhaps even the origins of life on Earth. A crucial component of the current baseline design for the missions is the transfer of sample tubes from the Martian surface to the ascent vehicle, which requires a highly dexterous robotic arm and sophisticated control strategies. Development and testing of the control algorithms are currently underway at NASA’s Jet Propulsion Laboratory on advanced R&D testbeds, while the European Space Agency (ESA) is responsible for developing the flight robotic arm. To meet the complex requirements for transferring the sample tubes, we propose an abstract layer consisting of three novel robotics capabilities, each to be implemented for the first time in planetary exploration. 1. 7-DoF Manipulation: This capability focuses on managing the kinematics of the 7-DoF robotic arm. It includes the development of algorithms for forward and inverse kinematics, joint redundancy management, kinematic calibration, deflection compensation, target-to-pose selection, trajectory generation, and control strategies for single-joint, multi-joint, Cartesian motions, and free-space closed-loop control. 2. Robotics Vision: This capability pertains to a monocular vision system mounted on the robotic arm’s end-effector with an additional redundant camera. These cameras are essential for localizing the M2020 Rover tube-retrieval station and the SRL OS, identifying tubes on the Martian surface, and estimating the robotic arm’s pose to ensure successful interactions with the station. 3. In-contact Manipulation: This capability addresses the robot’s interaction with its external environment. It involves load estimation using a 6-DoF force-torque sensor, force regulation control (standard load-wrench control to setpoints orthogonal to baseline motion), and active compliance control (admittance control orthogonal to baseline motion). This paper will outline the ongoing development of these robotics capabilities, including the requirements, analysis, testing processes, and future steps necessary to ensure the success of the Mars Sample Return campaign. The decision to implement Mars Sample Return will not be finalized until NASA’s completion of the National Environmental Policy Act (NEPA) process. This document is being made available for information purposes only.
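A one-axis sketch of the admittance-control idea listed above under in-contact manipulation. The virtual gains, desired force, and spring-like contact model are assumed values for illustration; the flight implementation described in the abstract regulates the full 6-DoF wrench orthogonal to the baseline motion.

```python
# 1-DoF admittance-control sketch: the commanded penetration x responds to the
# force-regulation error through a virtual mass-damper:
#     M * x_ddot + B * x_dot = f_des - f_meas
M, B = 2.0, 60.0     # virtual inertia [kg] and damping [N s/m] (assumed gains)
f_des = 5.0          # desired contact force [N] (assumed)
k_env = 2000.0       # contacted-environment stiffness [N/m] (assumed)

dt, x, v = 0.001, -0.002, 0.0   # start 2 mm off the surface (x = penetration [m])
for _ in range(5000):
    f_meas = k_env * max(0.0, x)              # unilateral spring contact force
    a = (f_des - f_meas - B * v) / M          # admittance dynamics
    v += a * dt                                # semi-implicit Euler integration
    x += v * dt
print(f"steady-state contact force ~ {k_env * max(0.0, x):.2f} N (target {f_des} N)")
```

The commanded motion settles where the measured force equals the setpoint, which is the behavior a force-regulation loop layered on the arm's position control is meant to achieve.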
      • 02.0408 Design Considerations for a 2-DOF Robotic Gantry to Support a Mars Sample Return ConOps
        Richard Fleischner (Motiv Space Systems), Brian Hayashi (Motiv Space Systems), Alex Ferreira (), Matthew Quinn () Presentation: Richard Fleischner - Wednesday, March 5th, 11:25 AM - Jefferson
        In support of NASA’s Mars Sample Return (MSR) campaign, the Jet Propulsion Laboratory (JPL) contracted Motiv Space Systems (Motiv) of Pasadena, CA to architect, develop, and qualify a 2-DOF Robotic Gantry, to serve as a critical element in the MSR mission Capture, Contain, and Return System (CCRS) Concept of Operations (ConOps). The primary function of the Gantry is to transfer the sample containment vessel from its on-orbit staging area to its location in the Earth Return Vehicle. The Gantry positions a JPL-designed End Effector (EE) within its allotted workspace while reacting the forces and moments associated with sample transfer. The Gantry configuration took on a variety of architectures throughout its development, including robotic arms and composite structures with serially mounted translating stages. The Gantry is made up of a rotary stage structure, designed to carry launch loads and maintain sufficient stiffness during operations, along with a translating linear stage that supports and positions the EE. A rotary stage actuator positions the rotary stage structure within a 270-degree range of motion. A motor-driven synchronization drive advances and positions the linear stage structure through an approximately 500 mm range of motion. The system makes use of a kinematic arrangement of rolling elements, engineered and tested to sustain loads from launch and operations. Force and position sensors enable feedback telemetry. Power and signals are transmitted by an arrangement of flex-print cabling, used in twist-capsule and rolling loop arrangements. Two launch restraint systems, located on the rotary and linear stages, provide protection of the system’s structure, mechanisms, and sensors during launch. Several key requirements drove the system’s convergence: first-mode frequency, allocated mass, supported payload mass, the random vibration environment, linear force generation, interface accuracy, and overall packaging. This paper describes at a high level the development of the Gantry system, from conception to near critical design completion. Motiv’s solution, as presented in several technical and programmatic reviews, ultimately met most levied requirements. Unfortunately, the development effort was suspended in late 2023 due to funding uncertainty and the pursuit of alternate architectures for both CCRS and MSR. This paper serves in part to preserve the final Gantry configuration, as the system employed a variety of novel engineering approaches.
      • 02.0409 Preliminary Design of the Robotic Pickup Install and Encapsulation Subsystem for CCRS
        John Luke Wolff () Presentation: John Luke Wolff - Wednesday, March 5th, 11:50 AM - Jefferson
        The Capture, Containment, and Return System (CCRS) aims to bring Martian rock and atmosphere samples to Earth as part of the planned Mars Sample Return (MSR) campaign. CCRS would consist of four key phases: (1) Launch, Commission, and Outbound Transfer, (2) Capture and Configuration of samples in Mars orbit, (3) On-Orbit Assembly of samples into Earth Entry Vehicle, (4) Protection, Jettison, and Release of the Earth Entry Vehicle. Under this architecture, the robotic (mechanical) functions during the On-Orbit Assembly phase were allocated to the Pickup, Install, and Encapsulation (PIE) subsystem. The subsystem relied on both hand calculations and ADAMS analysis to perform the multibody dynamic analysis associated with the required functions as part of establishing a preliminary design. Some key challenges entailed (a) interfaces driving system and hardware design, (b) configuration constraints (e.g. geometric, mass, resources – active vs passive), and (c) an aggressive schedule. This paper will describe the preliminary design of the subsystem, discuss key challenges and mitigations implemented, the multibody dynamics analysis, and the validation & verification plan. As part of the NASA response to the recent MSR Independent Review Board’s report and in light of the current budget environment, the MSR Program is undergoing a consideration of changes in its mission architecture. This document is based upon the previous baseline MSR architecture. The decision to implement Mars Sample Return will not be finalized until NASA’s completion of the National Environmental Policy Act (NEPA) process. This document is being made available for information purposes only.
    • Christopher Green (NASA - Goddard Space Flight Center) & Elena Adams (Johns Hopkins University/Applied Physics Laboratory)
      • 02.0502 DAVINCI Venus Descent Sphere Data Flow Design Overview and Initial Performance Estimates
        Jacob Hageman (GSFC) Presentation: Jacob Hageman - Wednesday, March 5th, 04:30 PM - Jefferson
        This paper provides a design overview and initial performance estimates related to the data flow from the Venus Descent Sphere (DS) instruments to the relay spacecraft for the Deep Atmosphere Venus Investigation of Noble gases, Chemistry, and Imaging (DAVINCI) mission, targeting a launch in fiscal year 2031 or 2032. To maximize high-value science data return during descent over the range of possible scenarios, the descent sphere to spacecraft relay link will utilize an autonomous adaptive data rate scheme and data flow will be managed via a data prioritization algorithm. The data prioritization algorithm is implemented mainly in autonomous flight software on the descent sphere, with firmware to offload low latency frame processing supporting the radio interface. Driving concepts that justify the initial design are covered along with initial results from implementing bounding data rate scenarios in a flight software prototype and supporting simulation environment to support design validation and analysis. Data generation, queuing, transmission, and storage estimates by type and priority are included to illustrate the approach to ensuring mission success over a variety of scenarios.
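A conceptual sketch of priority-ordered downlink under a varying rate budget, of the kind described above. The packet names, sizes, priorities, and link rates are invented for illustration; the flight design splits this logic between descent sphere flight software and radio-interface firmware and handles fragmentation, storage, and retransmission, none of which is modeled here.

```python
import heapq

# Each packet: (priority, sequence, name, size_bits). Lower priority value = more important.
packets = [(0, 0, "VASI p/T profile", 500_000),
           (0, 1, "VMS spectrum", 2_000_000),
           (1, 2, "VTLS scan", 1_500_000),
           (2, 3, "VenDI image", 8_000_000),
           (3, 4, "housekeeping", 250_000)]
queue = list(packets)
heapq.heapify(queue)

# Adaptive-data-rate link: bits available per 10 s window vary with link conditions (assumed).
for window, rate_bps in enumerate([100_000, 250_000, 150_000]):
    budget = rate_bps * 10
    sent = []
    # Greedy by priority: transmit the most important packet that fits in the window.
    while queue and queue[0][3] <= budget:
        prio, seq, name, size = heapq.heappop(queue)
        budget -= size
        sent.append(name)
    print(f"window {window}: rate {rate_bps/1e3:.0f} kbps ->", sent or "nothing fits")
```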
      • 02.0505 Concept for a Lunar Electromagnetic Launch System Architecture
        Luis Carrio (Lockheed Martin Space Systems Company), Ariel Gebhardt (Lockheed Martin Space Systems Company) Presentation: Luis Carrio - Wednesday, March 5th, 04:55 PM - Jefferson
        With the resurgence of space exploration in the 21st century, space agencies and their industry partners will work together to deploy infrastructure that establishes the lunar economy. Development of a lunar economy will both facilitate the persistence of future lunar missions and drive the technological capabilities necessary for exploration of Mars. NASA’s Moon to Mars objectives offer a roadmap for achieving this vision. This vision features commodity production via lunar surface in-situ resource utilization (ISRU), as well as the deployment of technologies that utilize those in-situ based commodities, to enable a robust lunar economy. Examples of these commodities include cryogenic propellants such as liquid oxygen and liquid hydrogen, water for human habitation, additive manufacturing feedstocks derived from regolith, and other high-value products that can be utilized by emerging cislunar systems. Despite technology development efforts in many of these areas, an outstanding functional gap remains regarding a more sustainable means for transferring these commodities from the lunar surface to cislunar orbit. Early roadmap plans for transporting mass from the lunar surface to orbit rely exclusively on chemical propulsion, similar to the Sustainable Lunar Transport systems, expected to be in operation by the early 2030s. As an example of the tyranny of the rocket equation in this context, elements that utilize ISRU produced propellants on the lunar surface will consume well over half of their launched propellant mass just to deliver the remaining cryogenic propellant to NRHO. Sourcing all commodity materials from Earth is similarly impacted by system inefficiencies, as they require both overcoming Earth’s significantly higher gravity and executing the orbit transfer to cislunar orbit insertion, in addition to undermining the underlying premise for the creation of a functional lunar economy. This paper will explore a concept for an Electromotive Launcher System Architecture (ELSA) that offers a more sustainable long-term solution for closing the lunar surface-to-orbit commodities delivery gap. The benefits of this architecture are made possible by the presence of a lunar power grid. Within this system, payloads are kinetically accelerated by elements on the lunar surface through use of electrical power provided by the lunar grid, ejected into a cislunar orbit, and then both captured and collected by orbital elements, thereby minimizing the total propellant mass consumed per transported cargo mass. This paper will discuss a functional decomposition of the elements and assemblies that constitute this system, including lunar surface launchers, cargo canisters, orbital catchers, and mass-capture surface elements. In addition, it will discuss the system’s mission design in detail, including relevant orbit transfers. This paper will also cover the lunar infrastructure required to enable this system architecture, including the lunar surface power grid performance capabilities for supporting the launcher, surface elements for receiving and transferring emptied canisters, as well as orbital interfaces for the transfer of fluids and additive manufacturing feedstocks. Lastly, this paper will discuss relevant trades, system-of-systems architectural evolution scenarios, and key benefits of this architecture to enable more sustainable development and exploration of the Moon, Mars, and beyond.
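A rocket-equation illustration of the surface-to-orbit delivery penalty invoked above. The delta-v and Isp figures below are rough assumptions for illustration, not values from the paper; the abstract's "well over half" statement also reflects mission-specific margins, boiloff, and return legs that this idealized sketch does not capture.

```python
import math

g0 = 9.80665
isp = 450.0           # LOX/LH2 vacuum Isp [s] (assumed)
dv_ascent = 2800.0    # lunar surface -> NRHO, including margins [m/s] (assumed)
dv_return = 2800.0    # NRHO -> lunar surface for a reusable tanker [m/s] (assumed)

def prop_fraction(dv):
    """Propellant fraction of the pre-burn mass for an ideal impulsive maneuver."""
    return 1.0 - math.exp(-dv / (g0 * isp))

print(f"one-way propellant fraction:    {prop_fraction(dv_ascent):.2f}")
print(f"round-trip propellant fraction: {prop_fraction(dv_ascent + dv_return):.2f}")
```

Under these assumptions roughly half the lifted mass is burned one way, and about 70% over a reusable round trip, which is the inefficiency a surface electromagnetic launcher is intended to sidestep.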
      • 02.0506 OptiDrill: Next-Generation Instrumented Drill for In-Situ Planetary Surface Analysis
        Joseph Palmowski (Honeybee Robotics), Kathryn Bywaters (Honeybee Robotics), Kris Zacny (Honeybee Robotics Spacecraft Mechanisms Corporation), Christian Sipe (Honeybee Robotics Spacecraft Mechanisms Corporation), Nathan Bramall (Leiden Measurement Technology LLC), Justin Myles (Leiden Measurement Technology), Janice Bishop (SETI Institute) Presentation: Joseph Palmowski - Wednesday, March 5th, 05:20 PM - Jefferson
        Understanding the preserved regolith stratigraphy and undisturbed grains on planetary surfaces is crucial for deciphering formation processes and characterizing water content. These tasks are vital to both planetary science and future commercial activities on the Moon and Mars. Current-generation regolith-oriented coring drills often disrupt unconsolidated regolith stratigraphy during core extraction, and water ice quickly sublimates once brought to the surface. There is a critical need for technology that can analyze undisturbed samples in situ, preserving the delicate stratigraphy and volatiles such as water ice. The OptiDrill technology development effort addresses this gap by providing an instrumented drill capable of performing in-situ multispectral microscopic imaging. This requires integrating advanced optical instruments into a compact auger system, making it suitable for a variety of planetary exploration missions, including those to the Moon, Mars, asteroids, and icy worlds. Furthermore, this approach brings the instruments directly to the sample, allowing for the collection of spatially-correlated data sets that are otherwise unattainable. Under NASA PICASSO funding, the OptiDrill is being developed from TRL 2 to TRL 4. The main deliverables are to (1) design and test the lens assembly, developing the lens system for the high-resolution camera; (2) integrate the camera and electrical system, embedding the camera, lens assembly, and necessary electronics into the drill auger; and (3) fabricate and test, conducting extensive testing on planetary analogs to validate performance. OptiDrill will significantly enhance subsurface investigations by providing high-resolution, spatially-correlated mineralogical data and water content measurements. The in-situ analysis preserves the stratigraphy and minimizes the loss of volatiles, offering a more accurate understanding of the planetary subsurface.
      • 02.0507 Future Martian Landing Science Targets and Implications for Exploration Architecture
        Laura Kerber (), Abigail Fraeman (Jet Propulsion Laboratory), Larry Matthies (Jet Propulsion Laboratory), Robert Anderson (NASA Jet Propulsion Lab) Presentation: Laura Kerber - Wednesday, March 5th, 09:00 PM - Jefferson
        Over the last 50 years, orbital missions have collected a wealth of data across the Martian surface, and landers and rovers have visited a handful of specific places to conduct in-depth analyses. As orbital imagery has improved, it has become clear that Mars has significant compositional and geomorphological diversity well beyond that sampled by in-situ missions. Deliberate exploration of end-member terrains with surface assets is critical for furthering our understanding of Martian history. This study presents several examples of how the diversity of the Martian surface can be abstracted from orbital data and plots the previous and proposed landing sites in this framework. Starting with a range of proposed landing sites derived from community workshops and reports, we explore the implications of the proposed science investigations on the required engineering architecture, including (1) landing site altitude (driving landing architecture), (2) local climate (driving power and thermal architectures), (3) surface dust environment (driving landing and power architectures), and (4) required mobility (driving surface asset architecture).
      • 02.0509 MSR Returned Sample Handling and Sample Removal Technology Development
        Paulo Younse (NASA Jet Propulsion Lab) Presentation: Paulo Younse - Wednesday, March 5th, 09:25 PM - Jefferson
        The Mars Returned Sample Handling (MRSH) task developed key technologies to remove gas and solid samples collected within Returnable Sample Tube Assemblies (RSTAs) by the Mars 2020 Perseverance rover and returned to Earth. This includes technologies to deintegrate the Earth Entry System (EES) containing an Orbiting Sample (OS) container, deintegrate the OS containing up to 30 RSTA sample tubes, enclose the RSTAs within a sealed secondary container for protection during handling operations and pre-basic characterization studies, extract gas stored within the tubes, extract solid samples stored within the tubes, and transfer the solid samples into a Core Dissection Tray Container (CDTC) for sample processing, examination, safety assessment, and storage. The sample handling approach is being developed with input from the Sample Receiving Project (SRP) Joint Science Office (JSO) and Johnson Space Center (JSC) Astromaterials Research and Exploration Science (ARES) to address returned sample science, curation, contamination control, and planetary protection concerns. Prototypes for a Secondary Outer Containment Case (SOCC) and Sample Tube Isolation Container (STIC) were fabricated to assess XCT and magnetometry compatibility. Testbeds were developed to demonstrate sample tube puncture, cutting, and sample removal. Hardware mockups were built to assess ergonomics for operation and refurbishment within a glovebox. A robotic handling system testbed was assembled to test sample tube and CDTC pick and place operations using a robotic arm and end effector. An end-to-end demonstration of sample handling operations was carried out on a physical sample tube with a Mars analog sample to validate the system and operational concept.
      • 02.0512 Overview and Results from NASA’s Break the Ice Lunar Challenge
        Kurt Leucht (NASA), Tracie Prater (NASA Marshall Space Flight Center) Presentation: Kurt Leucht - Thursday, March 6th, 08:30 AM - Jefferson
        NASA’s Prizes, Challenges, and Crowdsourcing program portfolio is designed to connect the public to the agency’s missions. The program encourages creative solutions from a diverse range of solvers which serve to advance the agency’s technology development efforts. The Prizes, Challenges, and Crowdsourcing portfolio includes NASA Centennial Challenges, which invites the general public to help close specific NASA technology gaps and incentivizes participation by awarding the Challenge winners large cash prizes for successful outcomes. Centennial Challenges seeks broad participation from independent inventors, companies, nonprofits, small businesses, student groups, and even international entities. NASA executed the Break The Ice Lunar Challenge as part of the Centennial Challenges Program between 2020 and 2024. This Challenge focused on advancing technologies capable of (1) excavating tough icy-regolith, or icy Lunar soil, and (2) transporting that excavated material to a different location on the Lunar surface. Phase 1 of NASA's Break The Ice Lunar Challenge required competing teams to conceptualize a high-level system architecture capable of excavating buried icy-regolith and transporting either icy-regolith or processed water several kilometers across the Lunar surface. Phase 2 of the Challenge required competing teams to design, build, and demonstrate robotic technologies capable of excavating simulated icy-regolith material and transporting it across a simulated Lunar landscape. This paper provides details about the development and execution of the Challenge, the results of each Challenge Phase, and a discussion of the significance of the technology developed under the Challenge for NASA’s exploration goals.
      • 02.0513 Revolutionizing Lunar Subsurface Exploration through Instrumented Drilling Technologies
        Joseph Palmowski (Honeybee Robotics), Kevin Hubbard (Honeybee Robotics Spacecraft Mechanisms Corporation), Kathryn Bywaters (Honeybee Robotics), Evan Eshelman (Honeybee Robotics), Kris Zacny (Honeybee Robotics Spacecraft Mechanisms Corporation), Robert May (Honeybee Robotics Spacecraft Mechanisms Corporation), Nicholas Naclerio (Honeybee Robotics Spacecraft Mechanisms Corporation) Presentation: Joseph Palmowski - Thursday, March 6th, 08:55 AM - Jefferson
        The REBELS (Rapidly Excavated Borehole for Exploring Lunar Subsurface) system presents an advanced end-to-end technology for accessing, analyzing, and collecting deep regolith deposits on the moon. REBELS leverages a suite of existing, high-TRL (4-7) technologies to accomplish this, including coiled-tube deployment with pneumatic excavation and rotary-percussive drilling. At the surface, the system can harvest the excavated regolith with a pneumatic collection head situated at the top of the borehole. If applicable, REBELS is also capable of melting and extracting water. In addition to ISRU capabilities, REBELS’ Bottom-Hole Assembly (BHA) is equipped with multiple downhole instruments for nondestructive, in situ measurements of the subsurface. The notional instrument suite consists of a microscopic camera and multispectral camera to characterize the borehole wall’s texture, granularity, stratigraphy, and mineralogical composition. The instrument suite also includes downhole dielectric spectroscopy probes with integrated temperature sensing for measuring and constraining regolith electrical properties, particularly capacitance and conductivity and assessing water ice content of the subsurface. By enabling exploration beyond the current depth limitations, REBELS will provide invaluable insights into the Moon's geological history, while also providing a promising means of lunar resource utilization, laying the groundwork for future human exploration and habitation of the moon.
      • 02.0514 The CHOPPER Next-Generation Mars Rotorcraft: Scaling Ingenuity by a Factor 20
        Håvard Grip (NASA Jet Propulsion Laboratory, California Institute of Technology), Laura Jones Wilson (Jet Propulsion Laboratory), Christopher Lefler (JPL), Adam Duran (), Benjamin Inouye (NASA Jet Propulsion Lab), Brandon Burns (NASA Jet Propulsion Lab), Brandon Metz (), Travis Brown (NASA Jet Propulsion Lab), David Bugby (Jet Propulsion Laboratory), Jaakko Karras (Jet Propulsion Lab), Fernando Mier-Hicks (), Theodore Tzanetos (NASA Jet Propulsion Laboratory), Giannka Picache (Stanford University), Wayne Johnson (NASA - Ames Research Center), Makoto Ueno () Presentation: Håvard Grip - Thursday, March 6th, 09:20 AM - Jefferson
        With their ability to rapidly traverse long distances over difficult terrain, rotorcraft have the potential to revolutionize Mars exploration. NASA’s Ingenuity helicopter proved that rotorcraft are capable of operating on Mars, albeit at small scale. With a total mass of 1.8 kg, Ingenuity was built only to demonstrate that flight on Mars is possible, and it carried no useful payload except for a small cell-phone class camera. To fully unleash the potential of rotorcraft on Mars, the technology must be scaled to a size where significant payloads – measured in kilograms rather than grams – can be carried over large distances. Helicopters scale differently on Mars than on Earth, owing to the extremely low atmospheric density, lower gravity, and lower speed of sound. The feasibility of scaling Mars helicopter technology to this size class cannot be assumed a priori. The CHOPPER project, led by NASA’s Jet Propulsion Laboratory in collaboration with NASA Ames Research Center and AeroVironment, is an effort to develop a concept for a large-scale Mars helicopter capable of carrying multiple kilograms of payload and flying multiple kilometers per day. This platform could form the basis for future standalone science missions or be leveraged in a utility capacity – for example, transporting samples in the context of sample return from Mars. Building on the heritage from Ingenuity, the CHOPPER project has approached the Mars helicopter scaling question from scratch, targeting a payload capacity of at least 3 kg and a daily range of at least 3 km, in environments bounded by that of the Jezero Crater rim on Mars. This effort has shed light on the scaling of fundamental physical quantities – such as geometric size, mass, power, energy, strength, stiffness, and loads – and how these relate to key constraints such as volume constraints, aerodynamic stall, Mach number constraints, battery discharge limits, thermal limits, available energy, and structural stiffness requirements for flight control. In this paper we will discuss the details of this analysis and how it drives the design of large-scale helicopters on Mars. We show how scaled-up designs challenge the limits of feasibility, closing off certain branches of the design space. Nonetheless, by making informed design choices we arrive at a notional architecture that is compatible with the project’s performance goals and maintains margins against future mass growth or shortfalls in predicted performance. The baseline CHOPPER vehicle is a hexacopter with a rigid, non-deployable airframe, with rotors measuring 1.35 m in diameter. The predicted mass is approximately 30 kg, and the overall vehicle is sized to fit within a Mars 2020 heritage aeroshell. We will present the high-level vehicle architecture and predicted performance, as well as the assumptions underpinning this analysis. Notable in this regard is the reliance on high-solidity rotors, which we argue are required to scale Mars helicopters significantly beyond the Ingenuity class of vehicles. The use of high-solidity rotors remains largely unexplored for rotorcraft, and future experimental testing is required to confirm the performance predictions used in the current analysis.
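A momentum-theory illustration of the hover-power scaling that drives designs like the one described above. Only the roughly 30 kg mass and six 1.35 m rotors come from the abstract; the atmospheric density, figure of merit, and the Ingenuity comparison geometry are rough assumptions, and the actual paper's analysis includes many effects (stall, Mach limits, structural stiffness) this sketch ignores.

```python
import math

def hover_power(mass_kg, disk_area_m2, rho=0.017, g=3.71, figure_of_merit=0.6):
    """Momentum-theory hover power with a figure of merit: P = T^1.5 / (FM * sqrt(2*rho*A)).
    rho and figure_of_merit are assumed representative values for Mars."""
    thrust = mass_kg * g
    return thrust**1.5 / (figure_of_merit * math.sqrt(2.0 * rho * disk_area_m2))

# CHOPPER-class hexacopter: six rotors of 1.35 m diameter, ~30 kg total mass (per abstract).
area_chopper = 6 * math.pi * (1.35 / 2) ** 2
# Ingenuity-class comparison: ~1.8 kg on an approximately 1.2 m coaxial rotor disk (assumed).
area_ingenuity = math.pi * (1.2 / 2) ** 2

print(f"CHOPPER-class hover power   ~ {hover_power(30.0, area_chopper)/1e3:.1f} kW")
print(f"Ingenuity-class hover power ~ {hover_power(1.8, area_ingenuity):.0f} W")
```

Even this crude estimate shows hover power growing much faster than mass, which is why battery discharge limits and rotor solidity dominate the scaled-up design space.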
      • 02.0515 Conceptual Design of a Mars Exploration Helicopter Packaged in an Accommodation Enclosure
        Lindsay Sheppard (AeroVironment, Inc.), Sara Langberg (AeroVironment) Presentation: Lindsay Sheppard - Thursday, March 6th, 09:45 AM - Jefferson
        The success of the Ingenuity program has shown that aerial vehicles are key enablers of future exploration on Mars. As a technology demonstration, Ingenuity had limited dedicated science capabilities, with internal flight sensors and just two cameras, but has proven more valuable to the planetary science community than expected. Making helicopters more capable allows them to carry larger, specialized science payloads and perform more complex missions. However, making future Mars helicopter missions feasible may require simplifying the helicopter system to save both schedule and mission cost, as well as providing mission planners an easier means of integrating a helicopter with their spacecraft. This paper will detail the point design of a modular, self-contained helicopter with its own accommodation enclosure that has a system mass of approximately 35 kg. The enclosure deploys a helicopter capable of carrying up to 1 kg of payload, performing 2.5-minute flights, and achieving greater than 1.2 km range. General concepts for how a helicopter of this size could be utilized, or resized appropriately for different mission concepts, are also presented.
    • Terry Hurford () & Xiang Li (NASA Goddard Space Flight Center) & Jacob Graham (NASA Goddard Space Flight Center)
      • 02.0601 Development of a Dual Wavelength Microchip Laser for NASA’s Raman Mass Spectrometer (RAMS)
        Matthew Mullin (NASA - Goddard Space Flight Center), Jane Lee (NASA - Goddard Space Flight Center), Anthony Yu (NASA - Goddard Space Flight Center), Molly Fahey (NASA Goddard Space Flight Center), Andrej Grubisic (NASA Goddard Space Flight Center) Presentation: Matthew Mullin - Thursday, March 6th, 10:10 AM - Jefferson
        RAMS (Raman Mass Spectrometer) is a hybrid instrument with a combined capability of laser desorption mass spectrometry (LDMS) and Raman spectroscopy being built at NASA Goddard Space Flight Center. It allows for comprehensive evaluation of the organic and inorganic composition of in situ probed planetary surface samples, such as those found on ocean worlds and other airless bodies throughout the Solar System. To enable this science capability, a unique laser system consisting of a passively Q-switched Yb:YAG microchip and VBG cavity, followed by second and fourth harmonic frequency conversion, has been developed. The RAMS laser is capable of outputting 1.2 ns pulses at 257.5 and 515 nm at 10 kHz, with adjustable pulse energies ranging from 0–25 µJ and 0–100 µJ, respectively. The laser system also supports single-shot or burst-mode operation by utilizing a high-voltage electro-optic stage, as well as spatially resolved imaging via a MEMS output mirror to allow for micron-scale rastering across the surface of collected samples. We present a comprehensive characterization of the RAMS laser system, along with preliminary results from integration onto the instrument. Additionally, we outline the next steps being taken to advance NASA's microchip laser technology for future planetary exploration missions.
      • 02.0602 Advancing Lunar Exploration: The Neutral Gas Mass Spectrometer for Regolith and Exosphere Analysis
        Rico Fausch (University of Bern), Lukas Hofer (Spacetek Technology AG), Davide Lasi (University of Bern), Daniele Piazza (University of Bern), Jürg Jost (Spacetek Technology AG), Hans Rudolf Elsener (Empa (Swiss Federal Laboratories for Materials Science and Technology)), Peter Wurz (University of Bern) Presentation: Rico Fausch - Thursday, March 6th, 10:35 AM - Jefferson
        The Neutral Gas Mass Spectrometer (NGMS) for the Luna 25 and Luna 27 missions represents a pivotal advancement in lunar exploration technology. This compact time-of-flight (TOF) spaceborne instrument is integrated with a gas chromatograph (GC) and a pyrolysis oven, designed to investigate the chemical composition of lunar polar regolith and the tenuous lunar exosphere. NGMS measures elemental, isotopic, and molecular compositions, including CHON compounds, water, and noble gases. It features a robust design with an ion storage source, redundant thermionic electron emitters, pulsed ion extraction, ion drift paths, an ion mirror, and a high-speed microchannel plate detector, achieving a mass resolution better than 1000 and a dynamic range of up to 10^6 within a 1-second integration time. Preparations for the mission included the successful coupling of the TOF-MS with the GC, enabling pre-separation of species. Test measurements with hydrocarbons and noble gases confirmed the system's high sensitivity and extensive dynamic range, with detection limits for volatile species in lunar regolith estimated at 2×10^-10 by mass for hydrocarbons and 2×10^-9 for noble gases. The NGMS design ensures high sensitivity and moderate resource requirements, recording all masses simultaneously and achieving a mass resolution of approximately 1100 full width at half maximum. The instrument's control electronics were developed to meet stringent mission requirements, including a maximum power consumption of 23 W, compact size, and radiation tolerance, all assessed at technology readiness level 8. The complete instrument weighs 3.2 kg. This exceptionally compact design was achieved by creating a miniaturized, integrated ion-optical system, relying on a metal-ceramic brazed structure rather than discrete ion-optical elements. By omitting bulky connecting components, a narrow opening angle of the ion beam inside the ion-optical system was possible, minimizing ion-optical aberrations. Although NGMS did not fly to the Moon due to programmatic considerations, its architectural baseline continues to influence ongoing advancements in lunar and planetary exploration. Thanks to its exceptional achievements, it served as a baseline for the design of two instruments on board deep space missions: NIM on board the JUICE mission and MANIaC on the Comet Interceptor mission, both specifically designed to analyze tenuous exospheres.
      • 02.0603 Design and Testing of a Sample Handling System for Operation on the Lunar Surface
        Peter Keresztes Schmidt (University of Bern), Andreas Riedo (University of Bern), Peter Wurz (University of Bern) Presentation: Peter Keresztes Schmidt - Thursday, March 6th, 11:00 AM - Jefferson
        This contribution aims at providing an in-depth description of the design concepts and verification strategy for a sample handling system (SHS) used for preparing lunar regolith samples for analysis by laser ablation ionization mass spectrometry (LIMS). The SHS is an integral part of a reflectron-type time-of-flight LIMS (RTOF-LIMS) which allows for direct sensitive microanalysis of individual regolith grains in situ on the lunar surface. The RTOF-LIMS measurements can provide the full element and isotope composition of the sample material (mass range up to m/z ~1,000) with each laser pulse applied to the sample surface. The CLPS-LIMS instrument is manifested for a robotic mission (stationary lander) within NASA’s Artemis CLPS program to the lunar south pole. The lander will provide lunar regolith on request to the SHS’s collection funnel for further sample manipulation. The sample material is sieved in a two-stage process assisted by vibrating the funnel and sieve assembly at their first global resonance frequency. The sieving process removes grains with particle sizes larger than 1 mm, as these would hinder the preparation of a sample surface with a sufficiently good surface roughness. Control of the roughness is important to ensure consistent and appropriate laser ablation conditions during sample analysis and thus reproducible chemical composition determination of the sample material. The sieved sample material is deposited within a 1.5 mm deep and 5 mm wide cavity at the edge of a rotating circular carousel disk with a diameter of 187 mm. The system creates, with the help of passive shaping brushes and a skimmer blade, a well-defined planar sample surface with height variations of less than ±200 µm in >90% of the conducted test runs. After chemical analysis by LIMS, cleaning brushes remove the remaining material from the cavity and prepare it for new sample material delivered by the lander platform. The SHS presented here allows an indefinite amount of sample material to be analyzed. Two reference samples are included along the edge of the carousel for calibration of the LIMS instrument during the commissioning phase of the mission. The SHS was tested and validated at inclinations up to ±10° off the nominal plane of operation, as an angled landing position of the lunar lander is possible. The observed overall sample utilization efficiency was influenced by losses during sieving and the intentional overfilling of the carousel cavity. Effects on the sample surface quality have been studied and found to be within the requirements for subsequent analysis of the regolith by LIMS. Tests were conducted using regolith simulant with grain sizes of up to 5 mm (LHS-2, Exolith Lab, USA, complemented with larger grains). To test sieving efficiencies in the lunar gravity environment, cork grains with low specific mass were used to simulate the reduced gravity. FEM analysis of the SHS shows compatibility with vibration loads as defined by NASA’s GEVS.
      • 02.0604 PlumeCAS: A Novel Plume Capture and Potential Biosignature Detection Instrument
        Isabel King (Honeybee Robotics), Frank Sheeran (Honeybee Robotics), Manuel Gonzalez Parra (Honeybee Robotics Spacecraft Mechanisms Corporation), Kris Zacny (Honeybee Robotics Spacecraft Mechanisms Corporation), Jason Kriesel (OKSI), Andrew Fahrland (OKSI), Jennifer Stern (NASA Goddard Space Flight Center), Marc Neveu (University of Maryland / NASA Goddard Space Flight Center) Presentation: Isabel King - Thursday, March 6th, 11:25 AM - Jefferson
        PlumeCAS is an all-in-one sample capture, volatile metering, and Capillary Absorption Spectrometer (CAS) system that can make sensitive isotopic and abundance measurements of H2O, CO2, CH4, and C2H6 in Enceladus’ plume to aid in the search for life. The instrument accumulates gaseous and icy phases during plume flythroughs in an aerogel and indium collector that preserves the integrity of small organics. The system leverages a heritage hermetic sealing design to contain that sample, and a volatile metering system delivers aliquots to the CAS for analysis. There, hydrogen and carbon isotopic ratios of H2O and CO2 are measured to generate a baseline against which to compare isotopic ratios in CH4. PlumeCAS will also determine the abundance of C2H6 to generate a CH4/C2H6 abundance ratio. These isotopic and abundance indicators are among the best-understood biosignatures available for measurement in Enceladus plume material. CAS is an infrared laser spectroscopy system that can provide unambiguous measurements for both trace gas sensing and stable isotope analysis. It can uniquely identify molecular species and isotopologues with high sensitivity, avoiding measurement interferences between molecular species of equal mass. For example, traditional isotope ratio mass spectrometry struggles to accurately measure CH3D because of mass interference from 13CH4, while laser spectroscopy can distinctly differentiate between these two isotopologues. Further, PlumeCAS analysis is non-destructive, and thus its sample capture and handling techniques could contribute to a gas-analysis suite. These factors make PlumeCAS a valuable tool in the search for signs of life in the Enceladus plume with great potential to be applied on a wide variety of missions. Future development of this instrument could even extend this work to apply to orbital sampling of other target destinations, such as the volcanic gases of Io or other planetary atmospheres. The Honeybee and OKSI teams developed a PlumeCAS TRL 4 breadboard under SBIR Phase II funding that demonstrated successful capture and delivery as well as isotopic characterization and abundance measurements of methane-doped water ice particles. These particles were accelerated into a thermally-controlled vacuum chamber containing the PlumeCAS breadboard at speeds of ~100 m/s using a sample-loading tube pressurized with helium gas. After sample acquisition, the breadboard was robotically sealed, then heated to sublimate the sample and deliver volatiles to the CAS for analysis. We demonstrated that, at these speeds, aerogel had increased capture efficiency relative to the aluminum housing or indium backing alone. In a representative test, 1020 mg of methane-doped ice particles containing 1.1 x 10^-6 moles of CH4 were shot at the breadboard and 8 x 10^-9 moles of CH4 were captured by the system. For this sample run, the δ13C-CH4 isotope ratio was measured to be -42.4‰ in one experiment, compared to a known value of -40.0‰ for the methane used to dope the ice. These breadboard experiments provided an end-to-end functional demonstration of the PlumeCAS sample capture, volatile delivery, and measurement concept in a relevant environment. Future work on this project seeks to mature the design for flight and test at flyby speeds of several km/s.
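Editor's note: the breadboard numbers quoted above imply a capture efficiency and an isotope offset that can be checked directly. The quantities come from the abstract; the simple efficiency definition (captured moles over delivered moles) is an assumption for illustration.

```python
# Worked check of the capture-efficiency and isotope numbers quoted in the abstract.
ch4_delivered_mol = 1.1e-6   # moles of CH4 in the doped ice shot at the breadboard
ch4_captured_mol = 8e-9      # moles of CH4 retained by the collector
efficiency = ch4_captured_mol / ch4_delivered_mol
print(f"capture efficiency: {efficiency:.2%}")                     # roughly 0.7 %

delta13C_measured = -42.4    # permil, measured on the captured sample
delta13C_known = -40.0       # permil, known value of the doping methane
print(f"measurement offset: {delta13C_measured - delta13C_known:+.1f} permil")
```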
      • 02.0605 Ionospheric Observations and ISS Frame Charging during the March and April 2023 G4 Solar Storms
        Carlos Maldonado (University of Colorado at Colorado Springs), Anthony Rogers (Los Alamos National Laboratory) Presentation: Carlos Maldonado - Thursday, March 6th, 11:50 AM - Jefferson
        The Falcon Electric Propulsion Electrostatic Analyzer Experiment (henceforth “ÈPÈE”) is a fourth-generation follow-on to a family of electrostatic analyzers flown five times on the International Space Station (ISS) and on six Evolved Expendable Launch Vehicle (EELV) Secondary Payload Adapter (ESPA) class spacecraft in Low Earth Orbit (LEO) since 2009. The instruments are designed, built, and tested at the United States Air Force Academy (USAFA), allowing cadets to perform basic research “learning space by doing space”. The ÈPÈE is a compact, rugged Electrostatic Analyzer (ESA) that measures ion populations in the local space environment. ÈPÈE has operated nearly continuously on board the ISS as a manifested payload on the United States Department of Defense Space Test Program – Houston 9 (STP-H9) platform since March 15th, 2023. The sensor serves as an energy bandpass filter, and from the in-situ measurements the ambient density and temperature of the local space plasma can be obtained. Additionally, the resulting spacecraft charge can be observed to provide real-time data concerning the local space environment and potentially hazardous levels of differential frame charging. We present the ÈPÈE design and initial observations of the ionospheric plasma environment during the increased solar activity as Solar Cycle 25 heads toward solar maximum. Specifically, we will focus on the impact of two of the largest solar storms in the last two decades, which occurred in early 2023. Two severe geomagnetic storms (G4) occurred on March 23-24 and April 23-24 of 2023. We examine the solar storms' effects on ionospheric plasma density and resulting ISS frame charging.
    • Leonard Felicetti (Cranfield University) & Giovanni Palmerini (Sapienza Universita' di Roma) & Ryan Woolley (Jet Propulsion Laboratory)
      • 02.0701 Optimization of Satellite Formation Reconfiguration
        Aaron Hoskins (California State University, Fresno) Presentation: Aaron Hoskins - Thursday, March 6th, 10:35 AM - Madison
        Satellite formation reconfiguration can significantly enhance the types and quality of collected data. Previous research by others has investigated different reconfiguration strategies. However, there has always been a predetermined mapping of the satellites transitioning from their locations in the first formation to their locations in the second formation. There are 24 different potential mappings for a formation of four satellites. The research presented here will include all potential mappings in the optimization process to determine the optimal formation reconfiguration strategy. Objective functions minimizing overall Delta-V and balancing Delta-V over all satellites will be implemented and compared. The inclusion of the nominal (categorical) mapping variable will necessitate the use of metaheuristics for the optimization process. The research will investigate multiple metaheuristic algorithms to compare performance on these satellite formation reconfiguration problems. Previous work by Sarno et al. (Sarno, et al. 2020) investigated Coplanar to Projected Circular Orbit, Cartwheel to Pendulum, and Pendulum to Helix formation reconfigurations. This research will examine these examples, as well as the reversed reconfigurations (for example, Projected Circular Orbit to Coplanar). An interesting result of this work will be whether the optimal formation reconfiguration mapping is identical in both directions, or whether multiple reconfigurations of a specific set of satellites result in a rotation amongst the positions of the satellites from one formation reconfiguration to the next. Additional formation reconfigurations, such as Coplanar to Tetrahedron, will be investigated. The research will also explore multiple orbital regimes of different altitudes and eccentricities. The research will investigate multiple examples of satellite formation reconfigurations over multiple orbital regimes while employing multiple optimization algorithms and objective functions. The results will enable mission managers to better understand the tradespace to consider when designing missions that include satellite formation reconfigurations. Bibliography: Sarno, S., J. Guo, M. D'Errico, and E. Gill. 2020. “A guidance approach to satellite formation reconfiguration based on convex optimization and genetic algorithms.” Advances in Space Research: 2003–2017.
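Editor's note: the mapping-enumeration step described above can be sketched in a few lines. The per-assignment delta-V matrix below is hypothetical; in the actual study each entry would come from solving the corresponding reconfiguration transfer.

```python
# Enumerate all 4! = 24 satellite-to-slot mappings and pick the best under two objectives.
from itertools import permutations
import numpy as np

dv = np.array([[2.1, 3.4, 1.8, 2.9],    # dv[i, j]: delta-V (m/s) for satellite i -> target slot j
               [3.0, 1.9, 2.7, 2.2],    # (hypothetical values for illustration)
               [1.7, 2.8, 3.1, 2.0],
               [2.5, 2.3, 1.6, 3.3]])

best_total = min(permutations(range(4)), key=lambda m: sum(dv[i, m[i]] for i in range(4)))
best_balanced = min(permutations(range(4)), key=lambda m: max(dv[i, m[i]] for i in range(4)))
print("minimum total delta-V mapping:", best_total)
print("balanced (min-max) delta-V mapping:", best_balanced)
```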
      • 02.0702 Stochastic Multistage Satellite Constellation Reconfiguration for Tracking Uncertain Targets
        Brycen Pearl (West Virginia University), Hang Woon Lee (West Virginia University) Presentation: Brycen Pearl - Thursday, March 6th, 11:00 AM - Madison
        Natural disasters are highly impactful and unpredictable planetary phenomena that occur on various spatial and temporal scales. The modeling and prediction of such disasters heavily depend on understanding local natural processes, which requires extensive data collection, especially through satellite-based remote sensing. Constellation reconfigurability is a leading-edge operational concept for Earth observation satellites, with the potential to enable high spatial and temporal resolution data acquisition by reconfiguring a constellation of satellites into a more optimal configuration via satellite maneuvers. Leveraging orbital maneuverability, constellation reconfigurability has been demonstrated as a promising solution to respond to dynamic events. Previous literature assumes a deterministic setting, in which either a case study or the entire formulation is considered to have a priori knowledge of target properties. This is not accurate to reality, as the progression of natural disasters is governed by stochastic processes that are only discernible in retrospect. This unpredictability complicates the decision-making process, often resulting in suboptimal reconfiguration strategies. In response to this gap in the literature, we present a stochastic variant of the Multistage Constellation Reconfiguration Problem (MCRP) leveraging a Monte Carlo Tree Search (MCTS) algorithm to account for target uncertainties, which builds upon the authors' prior work on a deterministic variant. Additionally, the use of MCTS allows for optimal decisions regarding the reconfiguration of a given constellation with respect to a number of probable target scenarios. To verify the stochastic MCRP variant, a case study is conducted with Hurricane Rita, a deadly Category Five hurricane that struck the southern United States in September of 2005. The results of the case study show that a stochastic MCRP can outperform a baseline constellation of non-reconfigurable satellites while accounting for the stochastic nature of a target and obeying visibility, feasibility, and maneuver budget constraints.
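Editor's note: for readers unfamiliar with MCTS, a minimal sketch of the standard tree-selection step (textbook UCT, not the authors' specific implementation) is shown below; expansion, rollout, and backpropagation would follow the same node structure.

```python
# Minimal UCT selection step for a Monte Carlo search tree over reconfiguration decisions.
import math

class Node:
    def __init__(self, state, parent=None):
        self.state = state            # opaque constellation / target-scenario state
        self.parent = parent
        self.children = []
        self.visits = 0
        self.total_reward = 0.0

def uct_score(child, parent, c=1.4):
    # Upper Confidence Bound applied to Trees: balance exploitation and exploration.
    if child.visits == 0:
        return float("inf")
    exploit = child.total_reward / child.visits
    explore = c * math.sqrt(math.log(parent.visits) / child.visits)
    return exploit + explore

def select(node):
    # Descend the tree, always following the child with the highest UCT score.
    while node.children:
        node = max(node.children, key=lambda ch: uct_score(ch, node))
    return node
```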
      • 02.0703 Wide-Range Relative Velocity Sensor Using Laser Interferometry for Ultra-Precision Formation Flying
        Hosei O (The University of Tokyo), Yuki Yamaguchi (), Subaru Shibai (), Kentaro Komori (), Masaki Ando () Presentation: Hosei O - Thursday, March 6th, 11:25 AM - Madison
        The realization of ultra-precision formation flying, which controls the relative positions of multiple satellites with micrometer accuracy, is anticipated to enable new scientific missions using inter-satellite laser interferometry, such as space-based gravitational wave telescopes. Achieving such precision requires transitioning from coarse relative position control using the Global Navigation Satellite System (GNSS), accurate to sub-centimeter levels, to ultra-precision control at the micrometer level. This necessitates the development of a satellite-to-satellite relative velocity sensor capable of wide-range measurements from sub-centimeters per second to micrometers per second. We have developed a novel method using the Doppler effect in an asymmetric Michelson interferometer to measure relative velocity between satellites. This method involves modulating the laser frequency to introduce an effective offset to the signal frequency, enabling signed velocity measurements. In our research, we conducted tabletop experiments to validate the proposed method. The setup included a laser source, a delay line, and mirrors suspended on a stage and controlled by electromagnetic actuators to simulate the relative motion between satellites in a space mission. The results demonstrated that our approach could accurately measure velocities ranging from sub-centimeters per second to micrometers per second. We also compared offline and online analysis results of the output signal to investigate the accuracy and limitations of our method. Our research demonstrates that this method can measure satellite-to-satellite relative velocities from sub-millimeter-per-second to micrometer-per-second levels, significantly advancing the feasibility of ultra-precision formation flying missions. Future work will involve on-orbit verification to confirm its effectiveness, with the expectation of practical applications in various scientific missions.
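Editor's note: an order-of-magnitude illustration of the beat frequencies such a sensor must resolve. The wavelength and offset frequency are assumptions (not taken from the paper); the factor of 2 is the usual round-trip Doppler shift for reflection off a moving mirror, as in the tabletop setup described above.

```python
# Beat frequencies for a Michelson-type interferometer with a deliberate frequency offset.
wavelength = 1064e-9   # m, assumed laser wavelength
f_offset = 1.0e5       # Hz, assumed modulation-induced offset that makes the sign observable

def beat_frequency(v_rel_m_per_s):
    # Round-trip Doppler shift plus the intentional offset.
    return f_offset + 2.0 * v_rel_m_per_s / wavelength

for v in (1e-2, 1e-4, 1e-6):   # 1 cm/s, 0.1 mm/s, 1 um/s relative velocities
    print(f"v = {v:.0e} m/s -> beat frequency = {beat_frequency(v):.3f} Hz")
```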
      • 02.0704 Convex Station-Keeping Control of Halo Orbits Using a Solar Sail
        Fausto Vega (Carnegie Mellon University), Martin Lo (JPL), Zachary Manchester (Carnegie Mellon University) Presentation: Fausto Vega - Thursday, March 6th, 04:30 PM - Madison
        We present a convex optimization-based control algorithm designed for long-term station-keeping of unstable halo orbits using a solar sail. Our controller determines a sail orientation that minimizes deviations from a nominal halo orbit. Traditional methods often linearize the solar-sail propulsion model around nominal angles that define the sail orientation, but this can lead to inaccuracies as the model deviates from the linearization point. Instead, we encode the set of possible thrust vectors generated by the nonlinear solar-sail model as the boundary of a convex set, which we then relax to arrive at a convex optimization problem. We demonstrate that this relaxation is tight (i.e., it produces feasible solutions to the original problem) in realistic simulation examples in the Earth-Moon system, validating the effectiveness of this propellant-free method for long-term station keeping.
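Editor's note: the general flavor of "relax a nonconvex thrust set into a convex one and solve a convex program per step" can be illustrated with a toy single-step problem in CVXPY. The dynamics, acceleration bound, and norm-ball thrust set below are placeholders and do not reproduce the authors' sail model or their tightness argument.

```python
# Toy one-step station-keeping correction as a small convex program (illustrative only).
import cvxpy as cp
import numpy as np

np.random.seed(0)
A = np.eye(6) + 0.01 * np.random.randn(6, 6)            # hypothetical linearized dynamics
B = np.vstack([np.zeros((3, 3)), 0.01 * np.eye(3)])     # acceleration enters the velocity states
x = 1e-3 * np.random.randn(6)                           # current deviation from the nominal orbit
a_max = 1e-4                                            # hypothetical max sail acceleration magnitude

u = cp.Variable(3)                                      # commanded acceleration over the next step
x_next = A @ x + B @ u
prob = cp.Problem(cp.Minimize(cp.norm(x_next, 2)),
                  [cp.norm(u, 2) <= a_max])             # norm ball standing in for the relaxed thrust set
prob.solve()
print("commanded acceleration:", u.value)
```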
      • 02.0705 Decentralized Impulse Control for Multiagent Space Systems
        Xun Liu (Villanova University), Bo Wang (The City College of New York), Hashem Ashrafiuon (Villanova University), Sergey Nersesov (Villanova University) Presentation: Xun Liu - Thursday, March 6th, 04:55 PM - Madison
        For nonlinear systems that admit continuous-time stabilizing controllers, we propose a Lyapunov-based technique to map this continuous-time control input into a sequence of discrete impulses while guaranteeing the desired performance of the closed-loop system. Specifically, a numerical algorithm, running in real time, determines the instants of control impulse injections into the system based on a comparison of the Lyapunov function value predicated on continuous-time control versus impulse control. No control input is applied to the system between these discrete injections. We apply this technique to design decentralized cooperative control for formation flying of multiple satellites and spacecraft distributed along the orbit. Since these vehicles typically use discrete burns to save fuel, the proposed technique is well-suited for such an application, and we show its efficacy through numerical simulations.
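Editor's note: a simplified sketch of the Lyapunov-comparison idea. The coasting assumption and the impulse-magnitude rule below are illustrative assumptions, not the paper's exact mapping from continuous control to impulses.

```python
# Decide impulse firing times by comparing Lyapunov values of the impulsive trajectory
# against a reference trajectory driven by the continuous-time stabilizing controller.
import numpy as np

def simulate_impulse_schedule(f, u_cont, V, x0, dt, t_end, threshold):
    """f: dynamics xdot = f(x, u); u_cont: continuous stabilizing law; V: Lyapunov function.
    The impulsive system coasts unforced; whenever its Lyapunov value exceeds the
    continuously-controlled reference by `threshold`, a velocity impulse equal to the
    current continuous command times dt is applied (simplifying assumption)."""
    x_imp, x_ref = np.array(x0, float), np.array(x0, float)
    impulses, t = [], 0.0
    n_u = len(u_cont(x_ref))
    while t < t_end:
        x_ref = x_ref + dt * f(x_ref, u_cont(x_ref))    # reference: continuous control (Euler step)
        x_imp = x_imp + dt * f(x_imp, np.zeros(n_u))    # impulsive system coasts between firings
        if V(x_imp) > V(x_ref) + threshold:
            dv = dt * u_cont(x_imp)
            x_imp[-n_u:] += dv                          # impulse acts on the velocity states
            impulses.append((t, dv))
        t += dt
    return impulses

# Example: 1-D double integrator x = [position, velocity] with a PD continuous law.
f = lambda x, u: np.array([x[1], u[0]])
u_cont = lambda x: np.array([-1.0 * x[0] - 1.5 * x[1]])
V = lambda x: 0.5 * (x[0] ** 2 + x[1] ** 2)
print(simulate_impulse_schedule(f, u_cont, V, [1.0, 0.0], 0.05, 20.0, 0.02)[:3])
```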
      • 02.0706 Information-Optimal Multi-Spacecraft Positioning for Interstellar Object Exploration
        Arna Bhardwaj (University of Illinois at Urbana-Champaign), Shishir Bhatta (University of Illinois at Urbana-Champaign), Hiroyasu Tsukamoto (University of Illinois at Urbana-Champaign) Presentation: Arna Bhardwaj - Thursday, March 6th, 11:50 AM - Madison
        Interstellar objects (ISOs) provide valuable information on the universe's formation and insight into astronomical objects' material compositions and makeup in interstellar space. They visit our solar system only once in their lifetime due to the hyperbolic nature of their orbits, and only two such objects have been identified and observed to date. It is thus imperative to gain information from them promptly, while efficiently dealing with their high relative velocity and large state uncertainty. We propose a framework for locally maximizing the amount of information, which we define as imaging data, obtained from an ISO using a swarm of spacecraft. We utilize a novel information cost function that, when minimized using sequential convex programming (SCP), outputs locally optimal expected values of the terminal relative states of each spacecraft in the swarm with respect to the ISO. The information cost analytically computes the expected information loss of the swarm for multiple points of interest (POIs) covering the position uncertainty of the target ISO, the distribution of which is assumed to be Gaussian. It also formally accounts for the limited field of view (FOV) of each spacecraft in the swarm. The details of our framework are summarized as follows. We first derive a symbolic expression of the probabilistic information cost of investigating all the POIs. The spacecraft’s distance and orientation with respect to the POIs, as well as the cameras' focal angles and lengths, are used to quantify the quality of the ISO coverage with standard rectilinear lenses. We incur additional costs for POIs already identified by some spacecraft to encourage the other spacecraft in the swarm to prioritize unexplored POIs. Partially integrating this probabilistic quantity over spacecraft states, given their Gaussian joint probability density functions, yields its expected value as a function of the states’ means and covariance matrices. We then formulate a nonlinear optimization problem of minimizing this expected information cost with the states’ means as the decision variables, where the covariance matrices are given externally by the onboard navigation scheme. We solve this problem approximately by SCP, thereby showing that the optimal solution locally maximizes the probability of having the ISO in the swarm’s FOV even under the large state uncertainty. Numerical simulations are performed to demonstrate the efficacy of our method. Here, as the exact location of the ISO is stochastic due to its state uncertainty, POIs are assumed to be located on an ellipsoid with its radii specified by the mean squared distances of the relative states. A set of ten thousand POIs is then randomly selected from points on or within the ISO’s uncertainty ellipsoid. Our formulation allows each spacecraft in the swarm to optimally select its terminal state and determine the ideal number of POIs to obtain information from. Synthetic ISO candidates and their nominal trajectories are generated according to an existing, quasi-realistic empirical population of ISOs, along with the state uncertainties given by our previous work with AutoNav.
      • 02.0707 Stochastic Models for Remote Sensing Coverage Analysis Limited by Geophysical Conditions
        Jonathan Sipps (The University of Texas at Austin), Ian Thornton (), Lori Magruder (University of Texas at Austin) Presentation: Jonathan Sipps - Thursday, March 6th, 09:00 PM - Madison
        Earth Science Decadal Survey (2018) directives call for spaceborne remote sensing architectures capable of collecting measurements on targeted observables for multiple science objectives, either by traditionally monolithic design or by a distributed spacecraft mission (DSM). Beyond trades in instrument modality, launch availability, and orbital element selection, simulated spatial and temporal coverage of Earth's surface is a first step to quantify figures-of-merit (FOMs) to determine mission architecture effectiveness. Typically, FOMs are limited to coverage geometry. However, geophysical conditions, such as cloud cover, can partially obscure observations, creating a probability for successful measurement collection at a given geographical location on Earth. In this study, we apply frequentist probability models assuming independent and uniformly distributed observations per geographical grid cell (GC), to quantify the stochastic number of observations N, percent coverage P, and revisit interval statistics R (minimum, maximum, and mean), for both wide- and small-swath sensor coverage simulation. We leverage the same probabilistic framework to estimate small swath coverage (SSC, <10 km swath), such as lidar or high-resolution stereophotogrammetry. The coverage statistics for these technologies cannot be otherwise quantified in a grid-point (GP) based framework due to quadratic runtime complexity in GP sampling, required to sufficiently quantify the swath signal. In SSC, we estimate the area of the GC covered by the swath for every overpass, and couple the area with geophysical quality, producing a probabilistic spatial coverage per GC. The assumptions are that the swath is ~20x smaller than the GC size, and the swath and ground track motion are constant per GC access period. We show through rigorous Monte Carlo (MC) validation that the proposed models match the limiting estimates of MC. SSC is shown to be accurate to <10% relative error in estimating N and P, despite linear runtime complexity. We present examples of the methods applied to the spatial coverage of the Landsat-8 sun-synchronous orbit, with a 185 km and 1 km swath, propagated over 16 days during the summer solstice of 2019, given both solar illumination and cloud cover observability constraints. The examples demonstrate that stochastic models for FOMs and SSC are a promising research focus, to both reduce the spatial dimensionality of the coverage problem, and to rigorously quantify probabilistic measurement capability for remote sensing mission design.
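Editor's note: a minimal frequentist illustration of how a geophysical condition such as cloud cover turns geometric revisits into probabilistic coverage. The numbers are assumed; the paper's actual models for N, P, and R are more detailed.

```python
# Each overpass of a grid cell is treated as an independent trial that succeeds
# only if the cell happens to be observable (e.g., cloud-free).
p_clear = 0.35        # assumed probability that a given overpass sees an unobscured cell
n_passes = 12         # geometric revisits of the cell during the simulation window

expected_good_obs = n_passes * p_clear
prob_at_least_one = 1.0 - (1.0 - p_clear) ** n_passes
print(f"expected usable observations per cell: {expected_good_obs:.1f}")
print(f"probability of covering the cell at least once: {prob_at_least_one:.3f}")
```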
      • 02.0708 Electro-Optical Sensor Design for Space Traffic Management in Cislunar Space
        Chingiz Akniyazov (University of Auckland), Roberto Armellin (), Laura Pirovano (), Oliver Sinnen () Presentation: Chingiz Akniyazov - Thursday, March 6th, 09:25 PM - Madison
        The increased exploration beyond the geostationary ring has heightened concerns about the long-term sustainability of space, particularly with the growing population of spacecraft, boosters, rocket bodies, and debris. Monitoring these space objects (SOs) is crucial for ensuring safe operations. Passive optical systems offer a cost-effective solution for this monitoring, with recent advances in optical system design, detectors, and image processing introducing new capabilities. This paper presents the Earth-Moon cislunar electro-optical sensor design utilizing periodic orbits for space traffic management (STM). The goal is to identify an optimal space-based monitoring architecture for missions focused on detection, tracking, initial orbit estimation, and orbit maintenance. We propose a mathematical measure to evaluate the observation performance of a telescope design on a specified trajectory orbit for cislunar space monitoring, considering multiple visibility constraints such as obstructions by the Moon, Earth, and Sun, and factors related to the sensor's capability to identify SO signals. A significant outcome is the development of sensor parameters essential for successful detection. The study highlights the performance of optical systems in non-tracking modes for uncued SO detection in cislunar space. Using this model, we examine various families of repeating trajectories and phasing, simulating the dynamic paths of cislunar objects over time. This enables the creation of design mappings for telescope capabilities to detect SOs at varying distances, considering the relative velocities between the observer and the target, and the minimal integration time needed for successful detection. This approach allows for specifying telescope parameters in a given operational orbit, providing a detailed configuration of essential telescope components necessary to achieve the desired detection range and capability.
      • 02.0711 Markov Decision Processes for Satellite Maneuver Planning and Collision Avoidance
        William Kuhl (), Jun Wang (), Duncan Eddy (Stanford University), Mykel Kochenderfer (Stanford University) Presentation: William Kuhl - Friday, March 7th, 08:30 AM - Amphitheatre
        This paper presents a decentralized, online planning approach for scalable maneuver planning in large constellations. While decentralized, rule-based strategies have facilitated efficient scaling, optimal decision-making algorithms for satellite maneuvers remain underexplored. As commercial satellite constellations grow, there are benefits to online maneuver planning, such as using real-time trajectory predictions to improve state knowledge, thereby reducing maneuver frequency and conserving fuel. We address this gap in the research by treating the satellite maneuver planning problem as a Markov decision process (MDP). This approach enables the generation of optimal maneuver policies online with low computational cost. This formulation is applied to the low Earth orbit collision avoidance problem, considering an active spacecraft deciding whether to maneuver to avoid a non-maneuverable object. We test the policies we generate in a simulated low Earth orbit environment and compare the results to traditional rule-based collision avoidance techniques.
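Editor's note: to illustrate the MDP framing (not the paper's formulation), a toy "maneuver now versus wait" decision problem is written down below and solved with textbook value iteration. States, transition probabilities, and rewards are all made up for illustration.

```python
# Coarse MDP for the maneuver/coast decision, solved by value iteration.
states = ["safe", "close", "critical", "collided", "avoided"]
actions = ["coast", "burn"]

# transition[s][a] = list of (probability, next_state); probabilities are illustrative.
transition = {
    "safe":     {"coast": [(0.9, "safe"), (0.1, "close")],     "burn": [(1.0, "avoided")]},
    "close":    {"coast": [(0.5, "close"), (0.5, "critical")], "burn": [(1.0, "avoided")]},
    "critical": {"coast": [(0.7, "collided"), (0.3, "close")], "burn": [(0.9, "avoided"), (0.1, "collided")]},
    "collided": {"coast": [(1.0, "collided")], "burn": [(1.0, "collided")]},
    "avoided":  {"coast": [(1.0, "avoided")],  "burn": [(1.0, "avoided")]},
}
reward = {"coast": 0.0, "burn": -1.0}                    # fuel penalty for burning
terminal_reward = {"collided": -1000.0, "avoided": 0.0}  # terminal outcomes

def value_iteration(gamma=0.95, iters=200):
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        for s in states:
            if s in terminal_reward:
                V[s] = terminal_reward[s]
                continue
            V[s] = max(reward[a] + gamma * sum(p * V[s2] for p, s2 in transition[s][a])
                       for a in actions)
    return V

print(value_iteration())   # burning becomes optimal once the collision risk dominates the fuel cost
```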
      • 02.0714 A Study of Formation Flying at Lower Altitudes
        Barbara Braun () Presentation: Barbara Braun - Friday, March 7th, 08:55 AM - Amphitheatre
        Formation-flying missions between dissimilar satellites, such as the mission recently concluded by the CloudSat and Calipso satellites, allow missions to collect coordinated science measurements from multiple instruments hosted on multiple satellites concurrently. CloudSat and Calipso successfully flew in formation for 17 years at approximately 700 km above the Earth. The missions used a simple method of planning and accomplishing maneuvers that took advantage of the parabolic relative motion between the two satellites, caused by differences in the satellites’ drag profiles. Now, newer missions are seeking to replicate the formation-flying success of CloudSat and Calipso, but at lower altitudes. Using CloudSat and Calipso as a model, this paper looks at considerations for implementing a similar formation-flying algorithm at a lower altitude. Trade studies were run to get an initial look at how solar flux, differing drag coefficients, and the amount of allowed delay between the two satellites affected the frequency at which maneuvers need to be made and the size of the delta-V required. Case studies were run for orbital altitudes of 450 km, 500 km, and 550 km above the Earth. The total number of maneuvers and approximate fuel required over a five-year formation-flying mission were also calculated for the cases examined. The results are of interest to any mission looking to maintain formation flying at lower altitudes, where drag poses a greater obstacle to maintaining formation.
      • 02.0716 A MicroSatellite Mission to Sample LEO and Lower MEO Environment
        Giovanni Palmerini (Sapienza Universita' di Roma), Prakriti Kapilavai (Sapienza Università di Roma), Emiliano Ortore (University of Rome, La Sapienza) Presentation: Giovanni Palmerini - Friday, March 7th, 09:20 AM - Amphitheatre
        An accurate knowledge of the orbital environment is extremely important to correctly design space systems and, even more relevant, to better investigate the link with geophysics, which is now understood to be quite strong. Suitable models for quantities of interest, such as radiation levels or the magnetic field, already exist, of course. However, and especially for radiation, the knowledge gained so far is not enough, because the scenario is far from steady. As a result, studies of the terrestrial orbital environment are clearly an ongoing activity, and in situ measurements are important to validate and correctly tune existing or novel theoretical models. This paper details the orbital design of a mission aimed at providing an effective sampling of the Low Earth Orbit (LEO) region and of the lower-altitude portion of the Medium Earth Orbit (MEO) region, up to about 5000 km above the surface of the Earth, at different latitudes. The main characteristic of the proposed design is that it fully exploits the J2 (Earth flattening) effect and the precession it induces on both the line of nodes and the line of apsides. In such a way, a high-inclination, eccentric orbit can sweep a spherical shell over time, allowing the collection of a significant set of measurements to characterize the radiation environment (and, additionally, the magnetic field). The key advantage of the proposed design lies in its passive strategy, i.e., no thrust is required after orbit injection to explore a very large volume. This makes such a mission a suitable opportunity for microsatellites or CubeSats, which cannot support significant propulsion capabilities. In addition to the orbital design, the paper analyzes the radiation environment to be expected in the sampled regions, to allow for a preliminary sizing of the instruments (dosimeters) to be accommodated onboard.
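Editor's note: the secular drift rates exploited by this design are given by the standard first-order J2 expressions. The sketch below evaluates them for an assumed high-inclination, eccentric orbit (the 5000 km apogee follows the abstract's upper bound; the perigee and inclination are assumptions).

```python
# Classical first-order secular drift of RAAN and argument of perigee due to J2.
import numpy as np

MU = 398600.4418   # km^3/s^2, Earth gravitational parameter
RE = 6378.137      # km, Earth equatorial radius
J2 = 1.08263e-3

def j2_secular_rates(a_km, e, inc_deg):
    i = np.radians(inc_deg)
    n = np.sqrt(MU / a_km**3)               # mean motion, rad/s
    p = a_km * (1.0 - e**2)                 # semilatus rectum, km
    raan_dot = -1.5 * J2 * n * (RE / p)**2 * np.cos(i)
    argp_dot = 0.75 * J2 * n * (RE / p)**2 * (5.0 * np.cos(i)**2 - 1.0)
    return np.degrees(raan_dot) * 86400, np.degrees(argp_dot) * 86400   # deg/day

# Example: 400 x 5000 km orbit at 70 deg inclination (illustrative values).
rp, ra = RE + 400.0, RE + 5000.0
a = 0.5 * (rp + ra)
e = (ra - rp) / (ra + rp)
print(j2_secular_rates(a, e, 70.0))
```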
      • 02.0717 CubeSat Orbit Insertion Maneuvering Using J2 Perturbation
        M. Reza Emami (University of Toronto), M. Amin Alandihallaj (University of Luxembourg) Presentation: M. Reza Emami - Friday, March 7th, 09:45 AM - Amphitheatre
        This paper proposes a maneuvering sequence for setting a CubeSat in low Earth orbit on a trajectory that reaches its designated orbital elements precisely by employing the J2 perturbation, through which the total delta-v required to correct the orbit deviation can be reduced considerably. Since CubeSats are launched as secondary payloads, their insertion orbits are guaranteed only within large tolerances. However, for certain CubeSat missions it is essential to put them in precise orbits, especially when having multiple CubeSats in a formation or constellation. Thus, a CubeSat may need to perform some orbital maneuvering in order to correct its orbit and reach the designated orbital elements. Due to several key challenges in putting a CubeSat on a precise orbit, orbit insertion maneuvering of CubeSats has not yet been fully addressed. The first challenge is that, since a major part of each CubeSat is often allocated to payloads, a very limited amount of propellant can be carried onboard. Secondly, the propulsion systems currently available for CubeSats offer low thrust levels, which must be employed for a long duration of time to provide enough delta-v for orbit corrections. Further, the limited electric power generated onboard CubeSats prevents their propulsion system from being continuously activated for a long time. The third challenge is that an online optimal orbit correction algorithm requires high computational resources, which are typically beyond the processing capabilities of CubeSats. The proposed J2 maneuvering sequence takes advantage of the J2 perturbation by placing the CubeSat on a transfer orbit with an appropriate semi-major axis and inclination, calculated using the first-order Taylor series approximation of Gauss’s variational equations, so that the J2 perturbation decreases the error of the right ascension of the ascending node and the argument of latitude with respect to the desired orbit over the maneuvering period. Thus, the CubeSat does not need to perform out-of-plane maneuvers to correct the right ascension of the ascending node, which usually require the major part of the fuel. The J2 maneuvering sequence initially puts the CubeSat on a proper transfer orbit using an in-plane maneuver to change the semi-major axis of the orbit and an out-of-plane maneuver to change the inclination. Once the CubeSat is placed on the transfer orbit, the argument of latitude and the right ascension of the ascending node slowly drift to the desired values, with the J2 perturbation converging the right-ascension error to zero at a constant rate over a certain time period. Finally, an inclination correction maneuver and an in-plane semi-major axis correction maneuver are performed at the end, when the argument of latitude exactly matches the desired value. The effectiveness of the proposed approach is investigated through several case studies using numerical simulations.
    • Ondrej Ploc (Nuclear Physics Institute of the Czech Academy of Sciences) & Lembit Sihver (TU Wien and NPI of the CAS)
      • 02.0802 Laboratory Testing of a Radiation Hardened 2D Imaging Anode for Charged Particle Spectrometry
        Daniel Arnold (Los Alamos National Laboratory), Michael Holloway (Los Alamos National Lab), Justin McGlown (Los Alamos National Laboratory), Angus Guider (), Carlos Maldonado (University of Colorado at Colorado Springs), Philip Fernandes (Los Alamos National Laboratory) Presentation: Daniel Arnold - Thursday, March 6th, 04:30 PM - Jefferson
        Technical staff at the Los Alamos National Laboratory’s Intelligence and Space Research Division have developed a system that supports high-resolution 2D imaging spectrometry applications and operates at rates up to 100 kHz in the space environment. Of particular importance is the radiation-hardened design, which has been developed to support operations through the stressing geostationary transfer orbit (GTO). Spacecraft systems operating through GTO will pass through the Earth’s radiation belts twice per day and be subjected to the highly dynamic and energetic trapped charged particle populations. The design consists of a micro-channel plate (MCP), cross-delay-line (XDL) anode, constant fraction discriminator (CFD) circuits, time-to-digital converters (TDCs), and a field programmable gate array (FPGA) that interfaces with a microprocessor. The MCP/XDL portions of the system are commercial off-the-shelf (COTS) hardware from Sensor Sciences LLC. Moreover, the radiation-hardened TDCs are also COTS, developed by MAGICS Technologies NV. The team has developed the necessary analog front end (AFE), which implements CFD circuitry to support the jitter, count rate, and radiation survivability requirements, thus enabling a rad-hard 2D imaging anode capable of 200 micron pixel resolution. The CFD topology creates an amplitude-invariant triggering mechanism for the TDC that requires minimal low-power-consumption integrated circuits. A key feature of the imaging system is a pickoff signal that directly interfaces with the MCP. When a charged particle enters the MCP, a pulse is sensed to trigger a START command in the TDCs. Once a START is received, the TDCs count until STOP signals are received from the two ends of the cross-delay anode. These timing measurements are then read by an FPGA through a simple serial peripheral interface (SPI). The MCP pickoff approach eliminates the need for large delay lines that guarantee a pulse travelling out one end of the delay line will always arrive before the other. This robust and highly accurate design has been proven at under five watts, including the AFE, TDC, microprocessor, and FPGA. Unique to this system is the ability to generate individual Cartesian (x,y) list data per event that hits the detector. The imaging anode system measures 136 mm × 136 mm × 54 mm, weighs 1.785 kg, and uses 4.9 W of power. In this paper, we will provide a detailed overview of the various aspects of our 2D imaging system and discuss potential applications to other space-flight spectrometry missions. This design is scheduled to fly on the Experiment for Space Radiation Analysis (ESRA) CubeSat mission to GTO and the Autonomous Ion Mass Spectrometer Sentry (AIMSS) to the International Space Station. We will present ion beam laboratory testing results using ion energies of 1 keV, 5 keV, and 35 keV. Specifically, the spatial resolution of the imaging anode is characterized at these ion energies using a calibration mask.
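Editor's note: the position decoding implied by the delay-line readout described above can be illustrated with a toy conversion from TDC STOP times to a hit location. The anode size and end-to-end delay are assumed values, not the flight parameters.

```python
# Toy cross-delay-line (XDL) hit decoding: position along each axis is proportional to
# the difference of the STOP times measured at the two ends of that delay line.
ANODE_SIZE_MM = 40.0      # hypothetical active area per axis
TOTAL_DELAY_NS = 20.0     # hypothetical end-to-end propagation delay of one delay line

def xdl_position(stop_x1_ns, stop_x2_ns, stop_y1_ns, stop_y2_ns):
    """Convert four TDC STOP times (relative to the MCP pickoff START) to (x, y) in mm."""
    def axis(t_a, t_b):
        # A charge cloud closer to end A reaches it earlier, so a positive (t_b - t_a)
        # places the hit on the A side; scale the time difference to the physical axis.
        return 0.5 * ANODE_SIZE_MM * (t_b - t_a) / TOTAL_DELAY_NS
    return axis(stop_x1_ns, stop_x2_ns), axis(stop_y1_ns, stop_y2_ns)

# Example: a hit slightly toward +x and centered in y.
print(xdl_position(9.0, 11.0, 10.0, 10.0))   # -> (2.0, 0.0) mm
```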
      • 02.0805 Fault Mitigation for SNN Classification of Neuromorphic Event Streams with Radiation-Induced Noise
        Joshua Poravanthattil (University of Pittsburgh), Daniel Stumpp (University of Pittsburgh), Alan George (University of Pittsburgh) Presentation: Joshua Poravanthattil - Thursday, March 6th, 04:55 PM - Jefferson
        The pursuit of autonomy for on-orbit processing drives the evolution of deep-learning (DL) systems for space applications. Because such systems are hindered by the limited compute capabilities of constrained embedded platforms, alternative technologies are being explored. Amidst the evolution of these DL systems, spiking neural networks (SNNs) coupled with neuromorphic sensing inputs offer an advantage compared to traditional DL frameworks, as SNNs offer lower latency, memory, and computation requirements, and consequently lower power usage. These benefits of neuromorphic systems make them highly amenable to space applications. Given the limited space-flight heritage of neuromorphic systems, the characterization of radiation effects on such algorithms and sensors is valuable for reliable implementations in space. Specifically, characterization of the adverse effects of radiation on neuromorphic systems can lead to the development of effective fault-mitigation strategies. This research therefore aims to characterize the effects of simulated, radiation-induced noise on the classification capability of SNNs and explore contemporary fault-mitigation tactics. A fault-injection tool modeled from the results of a neutron-radiation test is leveraged to simulate noise. Using this tool, noise is injected into event-based sensor data and subsequently passed to an SNN for classification. This simulated neuromorphic system provides insight into the radiation resiliency of SNNs as network hyperparameters such as width and depth are varied, and identifies the limitations of current denoising methodology. A novel event-rate denoising method is also introduced to increase the performance of SNN classification under higher magnitudes of noise, outperforming the standard spatiotemporal filters currently used with event data in radiation environments. The proposed filtering methodology outperforms standard spatiotemporal filtering by 1.7x at high simulated noise rates (100 events per second) on the complex DVS Gesture dataset, with improved performance across a wider range of noise rates. The intrinsic reliability of SNNs, coupled with the enhanced event-rate filtering methodology, offers a formidable DL alternative.
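Editor's note: for context, a generic neighborhood/rate-based event filter of the kind commonly used to suppress isolated noise events is sketched below. This is an illustrative stand-in with assumed parameters, not the paper's proposed event-rate denoising method.

```python
# Keep an event only if its pixel neighborhood has seen enough recent activity;
# isolated radiation-induced noise events tend to lack such support.
import numpy as np

def rate_filter(events, sensor_shape, window_us=10_000, min_support=2, radius=1):
    """events: iterable of (t_us, x, y, polarity), sorted by time; sensor_shape: (height, width)."""
    last_seen = np.full(sensor_shape, -np.inf)   # timestamp of the last event at each pixel
    kept = []
    for t, x, y, p in events:
        x0, x1 = max(0, x - radius), min(sensor_shape[1], x + radius + 1)
        y0, y1 = max(0, y - radius), min(sensor_shape[0], y + radius + 1)
        neighborhood = last_seen[y0:y1, x0:x1]
        support = np.count_nonzero(t - neighborhood <= window_us)   # recent nearby events
        if support >= min_support:
            kept.append((t, x, y, p))
        last_seen[y, x] = t
    return kept
```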
    • James Kinnison (JHU-APL) & Yasin Abul-Huda (Johns Hopkins University/Applied Physics Laboratory)
      • 02.0901 Regulating Orbital Decay through Passive Thermochromism in PMPEs for Orbital Debris Remediation
        Joseph Ivarson (Auburn University), Davide Guzzetti (Auburn University) Presentation: Joseph Ivarson - Sunday, March 2nd, 09:00 PM - Madison
        Current state-of-the-art techniques for small debris (<1 cm) remediation use dust clouds composed of small grains to induce artificial drag and deorbit debris. By passively controlling the variation of particle properties, it is possible to harness external forces to steer orbital motion. Programmable Metamaterial Particle Ensembles (PMPEs) could enable more precise regulation of semi-major axis variations, hence improving management of radial dispersion, along-track dispersion, and orbital decay compared to inert dust clouds. This work investigates how solar radiation pressure (SRP) can be utilized for orbit control by regulating optical properties according to temperature. Preliminary studies of the PMPE design space were performed using a thermo-orbit model to assess particle trajectories and temperature responses, and to begin informing the requirements for orbital debris removal technology development. Initial results demonstrated the intended effects of PMPEs, showing that variations in optical properties with temperature can induce SRP asymmetry, thereby enabling semi-major axis control over successive orbits. Subsequent analysis identified performance metrics, such as volumetric heat capacity, which could inform the optimal selection of materials to prolong the duration of asymmetric SRP induction. Further studies explored design parameters that maximize control authority over decay rates for specific test cases. In the orbit range of interest for small debris remediation (800 to 1000 km), we found that PMPEs have the capability to either cancel out or double the natural decay rates, enabling operators to optimize interactions with debris. This research highlights the promising potential of PMPEs for innovative passive orbit control solutions, paving the way for further advancements in space debris mitigation.
      • 02.0904 Intelligent Small Satellite Swarm Control System for Avoiding In-Space Debris
        Evan Finnigan (Stottler Henke Associates, Inc.), Brandon Liu (), Dick Stottler (Stottler Henke Associates, Inc. (SHAI)) Presentation: Evan Finnigan - Sunday, March 2nd, 09:25 PM - Madison
        This paper describes an intelligent software system for controlling satellites in a swarm to avoid in-space debris that might otherwise collide with satellites in the swarm. This software system, called Coordinated Autonomous Debris Avoidance (CADANCE), was developed with funding and direction from NASA. CADANCE uses trajectory optimization and case-based reasoning (CBR) to create a sequence of thrust controls for every satellite in a swarm, based on incoming Conjunction Data Messages (CDMs) and mission objectives. Our approach can control maneuvers to avoid in-space debris for satellites in diverse swarm and formation types, including but not limited to massive internet provider constellations, science missions, Earth observation constellations, and trailing formations. CADANCE uses CBR to analyze features extracted from CDMs, the swarm’s objectives, mission constraints, and the thrust capabilities of the satellites in the swarm to develop a high-level plan to enable the swarm to avoid possible conjunctions with other satellites or space debris. CBR uses a case base of past examples which it then modifies and combines to develop one or more plans for the current scenario. An example high-level plan might be to raise the orbit of a satellite in the swarm that has a high probability of collision (Pc) with a piece of space debris and then perform an orbit restoration maneuver to rendezvous the satellite back with the rest of the satellite formation if and when the debris falls into the atmosphere. CADANCE uses a trajectory optimization algorithm called Particle Swarm Optimization (PSO) to plan the sequence of maneuvers required to execute the high-level plan. PSO is a type of iterative evolutionary algorithm where each iteration consists of first creating a population of possible trajectories, then using orbital propagation to find the cost of each trajectory, and finally adjusting the population to be more similar to the trajectories with the least cost. The PSO algorithm completes when it finds a trajectory with low cost that satisfies all of the mission constraints. The main significance of our work is that we proved, in simulation, the feasibility of autonomously controlling satellites in a swarm to avoid conjunctions. We tested CADANCE in eight unique simulated scenarios with different types of constellations, mission constraints, and conjunction risks, using different satellite simulation software than the orbital propagation library we used for PSO in order to remove bias. CADANCE was capable of finding a conjunction avoidance maneuver for every scenario that reduced the Pc to below a given threshold while obeying all mission constraints. Additionally, CADANCE’s trajectory optimizer was always capable of planning a series of maneuvers to match the trajectory requested by the high-level planner to within 5 meters. Another important aspect of the feasibility of CADANCE is its computational efficiency, which enables it to run on radiation-tolerant computers in space. The time complexity of the full CADANCE algorithm is linear with swarm size and, in tests, planning finished within 30 seconds, or up to 2 minutes for very complex scenarios, on a laptop computer with standard specifications.
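Editor's note: a minimal PSO skeleton illustrating the iterate-propagate-adjust loop described above. The quadratic placeholder cost stands in for CADANCE's propagation-based cost, which would penalize collision probability, constraint violations, and delta-v.

```python
# Minimal particle swarm optimization over a vector of maneuver parameters.
import numpy as np

def pso(cost, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # candidate maneuver parameter vectors
    v = np.zeros_like(x)
    p_best = x.copy()
    p_best_cost = np.array([cost(xi) for xi in x])
    g_best = p_best[p_best_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)   # pull toward best solutions
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost(xi) for xi in x])         # here the real cost would propagate orbits
        improved = costs < p_best_cost
        p_best[improved], p_best_cost[improved] = x[improved], costs[improved]
        g_best = p_best[p_best_cost.argmin()].copy()
    return g_best, p_best_cost.min()

best, best_cost = pso(lambda u: np.sum(u**2), dim=6)     # placeholder cost for demonstration
print(best_cost)
```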
      • 02.0905 Satellite Initial Positioning Optimization for Passive Multi-Debris Approaches
        Alessandro Piotto (University of Michigan), Giusy Falcone (University of Michigan) Presentation: Alessandro Piotto - Sunday, March 2nd, 04:30 PM - Madison
        The significant increase in space debris poses a substantial threat to the safety and sustainability of space operations. As of today, approximately 35,750 trackable objects are in orbit, with only about 21.5% being operational satellites, while the remaining 78.5% contribute to collision risks and debris generation. This paper addresses the critical issue of space debris by proposing a methodology to determine optimal initial orbital conditions for a satellite to passively approach the maximum number of debris objects. This novel strategy leverages the satellite's natural trajectory, offering a cost-effective first step to enable debris removal strategies. A major challenge in debris removal is the high propellant cost for close-proximity operations, which becomes prohibitive for small debris. In this case, a passive strategy is indeed required, especially for multi-debris removal. To this end, this paper comprehensively analyzes existing debris removal techniques, evaluating and comparing their required relative velocity, distance, and timing for successful interactions between a satellite and debris. These constraints are then incorporated into the optimization algorithm. The optimization technique begins with the analysis of the data of tracked debris using statistical clustering methods to identify potential regions of interest and inform the initialization of the optimization algorithm. The optimization problem is framed to find the orbital elements of a satellite that would maximize the number of successful debris approaches. To solve this efficiently, the paper employs a combination of analytical and numerical methods. Analytical methods rapidly solve for intersections between elliptical orbits, while numerical methods provide detailed simulations accounting for orbital dynamics and perturbations. This hybrid approach ensures computational efficiency while maintaining solution accuracy. The optimization problem is tackled using Differential Evolution (DE), a population-based algorithm that is well-suited for complex, non-linear search spaces. This approach allows for a thorough exploration of the initial condition space while preventing premature convergence to local optima. Moreover, a Monte Carlo simulation is used to perturb the location of debris, accounting for positional uncertainties and ensuring the robustness of the proposed trajectory under varying conditions. The findings illustrate the feasibility of small debris removal strategies using optimized initial conditions and passive stable orbits. This approach enables the removal of small debris without using prohibitive propellant mass. By integrating rigorous analysis, statistical clustering, and a combination of analytical and numerical methods, this research introduces a flexible framework for planning and scheduling future small debris removal missions that can be rapidly adjusted based on the debris removal strategy's requirements.
      • 02.0906 A Framework for the Quantitative Comparison of Collision Avoidance Maneuver Optimisation Methods
        Thomas Childs (Universidade de Lisboa - Instituto Superior Técnico), André Ribeiro (Neuraspace), João Monteiro (), Rodrigo Ventura (Instituto Superior Técnico), Paulo Gil (Instituto Superior Técnico, University of Lisbon) Presentation: Thomas Childs - Sunday, March 2nd, 09:25 PM - Electronic Presentation Hall
        In this paper, a methodology for the comparison of Collision Avoidance Maneuver (CAM) planning techniques is proposed, taking into account the accuracy and optimality of the resulting solutions as well as the computational cost associated with their determination. With the increasing number of space objects, the risk of collisions with active satellites has become an urgent issue. Satellite operators already perform conjunction risk assessment and avoidance activities. This entails a significant cost, making it invaluable to employ CAM planning and execution methods with proven risk-mitigation capabilities at minimum fuel and computational expense. Multiple authors have explored the determination of optimal CAMs over the last decades, proposing a variety of approaches. The goal of this study is therefore to derive an appropriate set of comparison criteria for the evaluation and performance assessment of these approaches. The proposed criteria consider factors such as the accuracy of the resulting solutions, their optimality, and the computational requirements of each method, while maintaining enough generality to be widely applicable across all classes of CAM optimization techniques. This work contributes to the streamlining of collision avoidance operations by reducing the time required for decision-making and improving the efficiency and effectiveness of the performed maneuvers. The developed framework is applied to a selection of CAM optimization methods that address a set of commonly employed optimization goals, constraints, and decision variables. The results are validated for the use case of a single-encounter conjunction scenario, across various conjunction geometries and orbital regimes.
      • 02.0908 The MMOD Hypervelocity Impact Modeling Approach for Dragonfly
        Yasin Abul-Huda (Johns Hopkins University/Applied Physics Laboratory), Douglas Mehoke (Johns Hopkins University Applied Physics Laboratory (JHU/APL)) Presentation: Yasin Abul-Huda - Sunday, March 2nd, 04:55 PM - Madison
        Dragonfly is a rotorcraft lander designed to study prebiotic organic chemistry and bio-signatures of water-based forms of life on Saturn's moon Titan. The rotorcraft will be transported 1.2 billion km over a 6.7-year-long cruise phase while housed inside a thruster-propelled aeroshell. The trajectory, which includes a gravity assist maneuver, will expose the aeroshell and propulsion stage to a damaging micrometeoroid and orbital debris (MMOD) environment with mass and speed distributions ranging from 1E-6 to 1E-1 g and 1 to 80 km/s, respectively. The design of spacecraft protection systems against the MMOD environment relies on semi-empirically derived ballistic limit equations (BLEs) to predict, for a specific shielding configuration, the combination of impact parameters that yields a predetermined damage criterion. The equations are validated by hypervelocity impact ground testing over a limited particle size and speed range that does not span the conditions encountered in space. Moreover, since the Whipple shields being evaluated for Dragonfly are non-standard, they require modified versions of the BLEs that include a bumper-thickness dependence. This paper outlines the modeling approach for individual hypervelocity micrometeoroid impacts, with the goal of ensuring an acceptable risk tolerance for the mission. It leverages a methodology previously developed and implemented for the Parker Solar Probe mission. The impacts are modeled with the Sandia CTH hydrodynamics code because of its ability to handle shockwaves, multi-phase materials, high states of deformation, fracture, and failure, its tabular equations of state (EOS) accounting for physical and chemical phenomena, and its versatility in assigning material strength properties. The CTH results, in conjunction with modified BLEs, are used to define tolerable impacts for a particular shielding configuration over a range of impact parameters. We report preliminary results of the CTH-derived ballistic limit behavior of a cruise-phase Whipple shield configuration and compare it to the predicted protection derived from the modified BLEs. We discuss possible sources of the differences, and present examples of the rear-wall spall and incipient-spall cases.
    • Jeffery Webster (NASA / Caltech / Jet Propulsion Laboratory) & Michael Werth (The Boeing Company) & Paul Chodas (Jet Propulsion Laboratory)
      • 02.1001 Electrostatic Forces for Planetary Defense: A Method for Asteroid Deflection
        Anubhav Gupta (In Orbit Aerospace), Abhinav Gupta (University of Colorado Boulder), Arsh Nadkarni (Ridgetop Group, Inc) Presentation: Anubhav Gupta - Thursday, March 6th, 05:20 PM - Jefferson
        This paper examines the challenges associated with planetary defense from asteroid impacts and proposes an alternative solution using electrostatic forces for asteroid deflection. While the DART mission has demonstrated the feasibility of impact-based deflection, this study introduces a method that utilizes electrostatic forces to alter an asteroid's trajectory. Electrostatic deflection offers several potential benefits, including the ability to alter the trajectory of asteroids without physical contact, reducing the risk of fragmenting the asteroid, and allowing for the spacecraft to be reused in multiple missions. We develop a detailed electrostatic force model and apply it to scenarios involving near-Earth objects, assessing the feasibility of using electrostatic forces to achieve significant trajectory changes. The study includes numerical simulations to evaluate the effectiveness of this method under various conditions, such as different asteroid sizes, compositions, and charge distributions. Additionally, a feasibility and cost analysis is conducted to compare this approach with other existing asteroid deflection methods. The results indicate that electrostatic deflection could be a viable option for planetary defense, providing a flexible and scalable solution for mitigating the threat posed by potentially hazardous asteroids.
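        The basic scaling of the electrostatic concept can be sketched with Coulomb's law; the charges, separation, asteroid mass, and tug duration below are illustrative assumptions rather than values from the paper, and the result scales directly with them.
```python
# Back-of-the-envelope electrostatic "tug" estimate: Coulomb force between a
# charged spacecraft and a charged asteroid, and the velocity change imparted
# over a long tugging duration.  All numbers are illustrative assumptions.
k_e = 8.9875e9           # Coulomb constant [N m^2 / C^2]
q_spacecraft = 1e-5      # spacecraft charge [C] (assumed)
q_asteroid = -1e-5       # maintained asteroid charge [C] (assumed)
separation = 30.0        # hovering distance [m] (assumed)
asteroid_mass = 5e9      # asteroid mass [kg] (assumed, ~100 m class)

force = k_e * abs(q_spacecraft * q_asteroid) / separation**2   # [N]
accel = force / asteroid_mass                                  # [m/s^2]
duration = 10 * 365.25 * 86400                                 # ten years [s]
delta_v = accel * duration                                     # [m/s]
print(f"force = {force:.3e} N, delta-v over 10 yr = {delta_v:.3e} m/s")
```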
      • 02.1002 Designing a Near-Earth Asteroid Survey for a Telescope in Geosynchronous Orbit
        Sophia Vlahakis (Massachusetts Institute of Technology), Tansu Daylan (Washington University in St. Louis), George Ricker (MIT), Kerri Cahoy (MIT) Presentation: Sophia Vlahakis - Thursday, March 6th, 09:00 PM - Jefferson
        The detection and characterization of Near-Earth Objects (NEOs) is important for both planetary defense against dangerous asteroids and for solar system science. In 2005, the US Congress directed NASA to detect and characterize 90% of NEOs greater than 140 meters in diameter by the end of 2020. By the beginning of 2024, it is estimated that only around 43% of these asteroids have been cataloged, and even fewer have been characterized. Upcoming surveys such as the future space-based infrared telescope NEO Surveyor and the ground-based Rubin Observatory’s Legacy Survey of Space and Time (LSST) will dramatically increase the NEO discovery rate. However, existing resources that perform follow-up observations to characterize these asteroids do not have the capacity to keep up with all of the projected new discoveries. This paper discusses a new mission concept for an optical telescope in geosynchronous orbit and its ability to characterize NEOs. Expanding resources for characterizing NEOs is important to improve orbital estimates and to measure the physical properties of potentially dangerous asteroids. Scheduling follow-up observations with large in-demand facilities can be challenging since many NEOs are only visible for a small fraction of their orbits and they need to be observed again quickly. A telescope in geosynchronous orbit would be able to rapidly characterize asteroids discovered by NEO Surveyor that may be out of reach for ground-based instruments which have a limited ability to observe the inner solar system due to daylight. These follow-ups would also have the benefit of complementing the NEO Surveyor’s infrared data with visible spectrum measurements. We present simulations that use synthetic populations of Near-Earth Objects to analyze the potential science yield of this telescope. With a mirror diameter of 25 cm, we estimate a limiting visible apparent magnitude of 21.5 and assume a solar avoidance angle of 45 degrees. From this, we analyze what populations of NEOs this telescope would be sensitive to considering the observing geometry, magnitude thresholds, and asteroid track lengths. We then compare these results with projections from NEO Surveyor and LSST in order to discuss how NEO observations with this telescope could complement these missions.
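        A minimal sketch of the observability cut used in such survey simulations follows; it applies the stated limiting magnitude of 21.5 and the 45-degree solar avoidance angle, omits the phase-function term of the standard H,G magnitude system for brevity, and uses a hypothetical example asteroid.
```python
# Simplified observability filter: an asteroid is "observable" if its apparent
# magnitude is brighter than the limiting magnitude and its solar elongation
# exceeds the avoidance angle.  The phase-function term is omitted, so the
# magnitudes are slightly optimistic; the sample values are made up.
import numpy as np

LIMITING_MAG = 21.5
SOLAR_AVOIDANCE_DEG = 45.0

def apparent_magnitude(H, r_helio_au, delta_obs_au):
    """H,G magnitude law with the phase term dropped (simplification)."""
    return H + 5.0 * np.log10(r_helio_au * delta_obs_au)

def observable(H, r_helio_au, delta_obs_au, elongation_deg):
    bright_enough = apparent_magnitude(H, r_helio_au, delta_obs_au) <= LIMITING_MAG
    outside_avoidance = elongation_deg >= SOLAR_AVOIDANCE_DEG
    return bright_enough and outside_avoidance

# Hypothetical H = 22 NEO at 1.1 AU from the Sun, 0.15 AU from the telescope,
# and 60 degrees from the Sun as seen from GEO.
print(observable(H=22.0, r_helio_au=1.1, delta_obs_au=0.15, elongation_deg=60.0))
```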
      • 02.1006 The Pan-STARRS Search for Near-Earth Objects: 10 Years Old, and Still Going Strong
        Richard Wainscoat (University of Hawaii) Presentation: Richard Wainscoat - Thursday, March 6th, 09:25 PM - Jefferson
        The Pan-STARRS telescopes, located near the summit of Haleakala, on the island of Maui in Hawaii, have been engaged in a search for Near-Earth Objects since 2014. The survey now utilizes two 1.8-meter telescopes, and has become one of the leading Near-Earth Object surveys. The survey strategy is explained. Among the strengths of Pan-STARRS is discovery of larger objects, for which an Earth impact would likely be serious. Approximately 44% of asteroids with diameter >140 meters are now believed to have been discovered, and Pan-STARRS has led the discovery of these larger objects nearly every year since 2014. Due to the structure of its camera and focal plane, Pan-STARRS is weaker at discovering faster moving objects (which are usually smaller objects due to their proximity to Earth). Pan-STARRS delivers excellent astrometry that allows some Near-Earth Objects to be identified based solely on non-linear motion across the sky caused by the motion of the observer as Earth rotates. Among the most important discoveries by Pan-STARRS is the first interstellar object, ‘Oumuamua. After 10 years, Pan-STARRS continues to be productive, and will be a strong northern hemisphere complement to the survey that will soon begin in the south with the Rubin Observatory.
    • David Sternberg (NASA Jet Propulsion Laboratory) & Kenneth Cheung (NASA - Ames Research Center)
      • 02.1102 The Communication and Computation Architecture for a Universal Space Robotic Joint
        Thomas Bahls (German Aerospace Center - DLR), Alexander Beyer (DLR), Hans Juergen Sedlmayr (), Andreas Stemmer (German Aerospace Center - DLR), Sascha Moser (Deutsches Zentrum für Luft- und Raumfahrt e.V.), Robert Burger (German Aerospace Center - DLR) Presentation: Ferdinand Elhardt - Monday, March 3rd, 08:30 AM - Madison
        In recent years, the two main disciplines of space robotics have been exploration on the one hand and servicing and maintenance on the other. Depending on the mission goal, the system-level complexity differs: some missions need dedicated robots for a very specific task, others highly complex systems with different levels of autonomy. However, independent of the complexity, all robotic systems can be described as a combination of joints. A joint can be considered the smallest unit of a robot's configuration used to set up a specific kinematic structure. A robotic joint comprises one or more actuators to implement a specific degree of freedom (DoF) as well as sensors to gather information about the actual state of the joint (e.g. position, torque, temperature). Furthermore, a joint provides a communication interface to a field bus, connecting it to predecessor or successor joints on the one hand and to a real-time host performing higher-level control on the other. CAESAR (Compliant Assistance and Exploration SpAce Robot) follows this joint approach through its building-block concept. The key to CAESAR's flexibility is locally controlled joints that can be combined in different ways. High performance is achieved by local joint controllers that can provide position or torque control, together with an outer control loop that adjusts the behaviour of the whole arm in Cartesian space. The combination of these control layers allows the arm to be moved either in stiff position control or in compliant impedance control, adjusted in software depending on the current task. The robot structure can be scaled to the mission's needs by adjusting the number of joints and the kinematic layout. In its example configuration, CAESAR is intended to be capable of servicing or catching satellites in LEO/GEO, even ones that are tumbling and/or non-cooperative. Therefore, a seven-DoF configuration was chosen to meet the dexterity and kinematic-redundancy requirements. For CAESAR, DLR RM is developing the entire robot system, including the robotic hardware and core functionalities for on-orbit servicing such as assembly, maintenance, or repair of satellites. These include complete control of the robot, on-board task management, and visual servoing, for instance to perform high-precision assembly tasks. The individual robotic activities can be performed autonomously with visual servoing or semi-autonomously, in teleoperation or telepresence mode. CAESAR's joint-based building-block concept is already applied in several terrestrial robotic developments. In this paper we present the computation and communication architecture for a universal space robotic joint based on CAESAR's joint control unit (JCU), which is necessary to perform the described operations and tasks. It comprises all aspects from the overlying high-level control of multiple joints on a real-time host down to the synchronisation of communication participants at register transfer level (RTL). This also covers the middleware implementation and fieldbus communication used, as well as actuator control and sensor data acquisition.
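        The stiff-versus-compliant Cartesian behaviour described above is commonly realised with an impedance law of the form tau = J^T (K (x_d - x) - D x_dot); the sketch below is a generic textbook formulation, not DLR's controller, and the Jacobian and gains are placeholder values.
```python
# Cartesian impedance sketch: joint torques from a virtual task-space
# spring-damper, so the same joints behave stiffly (large K) or compliantly
# (small K).  Jacobian and gains are illustrative placeholders for a 2-joint arm.
import numpy as np

def cartesian_impedance_torque(J, x, x_des, x_dot, K, D):
    """Map a task-space spring-damper wrench to joint torques through J^T."""
    f_task = K @ (x_des - x) - D @ x_dot
    return J.T @ f_task

J = np.array([[-0.8, -0.3],
              [ 1.2,  0.5]])             # placeholder task Jacobian
K = np.diag([300.0, 300.0])              # stiffness (tune for stiff vs. compliant)
D = np.diag([30.0, 30.0])                # damping
tau = cartesian_impedance_torque(J, x=np.zeros(2), x_des=np.array([0.05, 0.0]),
                                 x_dot=np.zeros(2), K=K, D=D)
print(tau)
```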
      • 02.1107 Design and Characterization of a Testbed Simulator for In-Space Robotics
        Eddie Hilburn (Texas A&M University) Presentation: Eddie Hilburn - Monday, March 3rd, 08:55 AM - Madison
        As launch costs decrease, the number of applications for space-based platforms is growing. Correspondingly, the demand for in-space servicing of existing spacecraft is increasing, with many, if not most, of the methodologies expected to feature robotics. Testing these robotic spacecraft in situ is risky, and software simulations lack the realism required to mitigate this risk. Physical simulation platforms for interaction between spacecraft are highly desirable as an intermediate validation; however, existing physical simulation tools such as air bearing tables, neutral buoyancy labs, and robotics platforms are limited in key ways. Air bearing tables are constrained to SE(2), and robotic manipulators are limited by payload, precision, and reachability. Additionally, neutral buoyancy facilities require waterproof test articles, and one must account for dissipative forces. In this paper we introduce the Robotic Space Simulator (RSS), a new testbed for in-space robotics using two 7-DOF Gough-Stewart platforms. The RSS was developed to simulate a spacecraft with a robotic manipulator approaching, grappling, manipulating, and decoupling from another spacecraft, fusing accurately simulated centroidal dynamics with real force-sensorized contact interaction and realistic visuals. The RSS addresses many of the deficiencies of other physical simulation tools by operating in SE(3) with high payload capacity (2600 kg per platform), increased precision and reachability, and without additional requirements such as waterproofing of test articles. The RSS was designed to enable the testing of full-scale flight articles, with a workspace that permits testing of multiple spacecraft and full-contact interactions between them via robotic manipulator. In this paper, we outline critical design criteria to meet these requirements, such as payload capacity and reachability. For a standard 6-DOF Stewart platform these attributes are dictated by its geometry and construction. By adding a 7th DOF, we significantly increase the workspace along one axis for each platform. Selecting a translational axis for one platform enables simulation of the final approach prior to contact (an additional 6 m), and selecting a rotational axis for the other platform (an additional 720 degrees in yaw) enables additional maneuverability during contact and manipulation. In addition to design optimization, this paper also presents an initial evaluation of the sensors, filtering techniques, and control methodology. The RSS uses force-torque sensors on each platform to simulate full-contact microgravity dynamics between the spacecraft. The evaluation includes measurement range, resolution, and frequency analysis of each sensor. Based on the results of this evaluation, we design filtering strategies for the system and evaluate them experimentally. Finally, we implement and evaluate a Jacobian-based feed-forward velocity controller using dual-quaternion kinematics. The controller is evaluated on computational efficiency, convergence time, and resistance to singularity.
      • 02.1108 Planning for In-Space Robotic Assembly of Modular CubeSats
        Leila Freitag (Massachusetts Institute of Technology), James Dingley (Massachusetts Institute of Technology), Daniel Saptari (), Jarrod Homer (Northeastern University), Kerri Cahoy (MIT) Presentation: Leila Freitag - Monday, March 3rd, 09:20 AM - Madison
        In-space manufacturing has the potential to revolutionize the construction of structures in space, bypassing launch constraints and significantly accelerating deployment response times. This paper presents an assembly planner for modular CubeSats as part of the Orbital Locker project at the Massachusetts Institute of Technology. The concept of operations involves a free-flying satellite that acts as a storage "locker", carrying modular CubeSat components inside it and having the ability to assemble and deploy them. Orbital Locker is an initial small-scale demonstration that is intended to be scaled up. The CubeSat modules are dimensioned such that three modules stack to form a 1U CubeSat. During launch, the modules are secured in storage stacks within the Locker. They are then assembled on demand into multi-module CubeSats and deployed from the Locker by a Cartesian gantry robot. The 50 cm x 25 cm x 25 cm prototype Locker is capable of assembling up to three 1U CubeSats. This paper presents the design of the assembly planner which determines the assembly sequence used by the Locker to assemble a requested CubeSat. The assembly planner consists of a global planner for high-level symbolic planning and a local planner for low-level motion and path planning. The local planner ensures that the gantry robot moves the satellite modules between the storage stacks and the assembly stack efficiently and without collisions, while the global planner determines the optimal sequence for moving different modules between stacks in order to correctly assemble the desired satellite. The global planner creates a graph representation of possible assembly configurations, with each graph edge between states corresponding to the movement of one module between stacks while satisfying the physical constraints of the Orbital Locker layout. It searches for the fastest sequence of module movements to assemble the desired multi-module satellite using a version of Dijkstra's algorithm. After discussing the planner, this paper provides examples of optimal assembly sequences generated by the global planner for the prototype Locker, and discusses the performance of the global planner for future scaled-up systems containing more stacks and modules. Autonomous assembly as demonstrated by the Orbital Locker's assembly planner lays the groundwork for a system that can deploy custom-configured satellites on orbit in minutes, rather than weeks or months.
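        A minimal sketch of the global-planner idea follows: states are stack configurations, each edge moves the top module of one stack onto another, and Dijkstra's algorithm returns the cheapest move sequence. The stack layout, move cost, and goal below are placeholders rather than the Orbital Locker's actual configuration.
```python
# Dijkstra search over stack configurations: each move transfers the top module
# of one stack onto another, and the goal is a desired assembly-stack ordering.
import heapq

def plan(start_stacks, goal_assembly, assembly_idx=-1, move_cost=1.0):
    start = tuple(tuple(s) for s in start_stacks)
    dist, pq = {start: 0.0}, [(0.0, start, [])]
    while pq:
        d, state, moves = heapq.heappop(pq)
        if list(state[assembly_idx]) == goal_assembly:
            return moves                        # sequence of (from_stack, to_stack)
        if d > dist.get(state, float("inf")):
            continue
        for i, src in enumerate(state):
            if not src:
                continue
            for j in range(len(state)):
                if i == j:
                    continue
                nxt = [list(s) for s in state]
                nxt[j].append(nxt[i].pop())     # move top module from stack i to j
                nxt = tuple(tuple(s) for s in nxt)
                nd = d + move_cost
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    heapq.heappush(pq, (nd, nxt, moves + [(i, j)]))
    return None

# Two storage stacks plus an (initially empty) assembly stack; build ["A", "B", "C"]
# bottom to top.  Layout and module labels are made up for illustration.
print(plan([["C", "A"], ["B"], []], goal_assembly=["A", "B", "C"]))
```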
      • 02.1109 Validation of Fine Manipulation Using NMPC for Rotation Floating Space Robots with HILS Setup
        Roshan Sah (Tata Consultancy Services (TCS)), SOMDEB SAHA (), Nijil George (Tata Consultancy Services), Kaushik Das (TCS) Presentation: Roshan Sah - Monday, March 3rd, 11:25 AM - Electronic Presentation Hall
        On-orbit servicing using autonomous robotic manipulators onboard satellites is a promising method for future space sustainability missions such as debris removal, refueling, assembly, and maintenance. A newer variety of space robot is the rotation floating space robot (RFSR), in which the orientation and position of a robotic arm mounted on a floating satellite are actively controlled while the satellite base orientation is simultaneously adjusted. However, the non-linearities and complexities associated with controlling such coupled space-based systems make feasible implementation difficult. It is challenging to model such systems with coupled degrees of freedom (DOF) and a high degree of non-linearity. The paper presents an approach to model and control rotation floating space robots and to design an optimal path that the end-effector can follow while being controlled to capture the target. A Nonlinear Model Predictive Controller (NMPC) is proposed for the position and orientation control of a UR5 manipulator mounted on the base of the satellite. Experimental verification of such systems is also difficult because of the difficulty of replicating a micro-gravity environment on the ground and providing 6-DOF motion to robots. This paper addresses multiple issues. The first is to model the RFSR in PyBullet and use an NMPC-driven layered control architecture for simultaneous 9-DOF control. The simulation shows the complete workflow from deploying the robotic arm until the target is successfully captured or docked. We then use a hardware-in-the-loop simulation (HILS) hybrid approach to validate the simulations performed in PyBullet. While one robot is employed to simulate the target body's six degrees of freedom, another robot mimics the movements of the chaser arm. The NMPC forms the outer-loop controller, generating reference commands for the inner-loop PyBullet controller. The NMPC takes the gripper to the desired pose while a simultaneous PD controller stabilizes the orientation of the satellite base and maintains it at the desired values. The results from simulation and hardware experiments show that the proposed controllers were efficient and that the NMPC could find an optimal path to the gripper's desired pose.
      • 02.1113 Towards Robust Spacecraft Trajectory Optimization via Transformers
        Yuji Takubo (Stanford University), Tommaso Guffanti (Stanford University), Daniele Gammelli (Stanford University), Marco Pavone (Stanford University), Simone D'Amico (Stanford University) Presentation: Yuji Takubo - Monday, March 3rd, 09:45 AM - Madison
        Future multi-spacecraft missions require robust autonomous trajectory optimization capabilities to ensure safe and efficient rendezvous operations. This capability hinges on solving non-convex optimal control problems in real time, although traditional iterative methods such as sequential convex programming impose significant computational challenges. To mitigate this burden, the Autonomous Rendezvous Transformer (ART) introduced a generative model trained to provide near-optimal initial guesses. This approach provides convergence to better local optima (e.g., fuel optimality), improves feasibility rates, and results in faster convergence of optimization algorithms through warm-starting. This work extends the capabilities of ART to address robust chance-constrained optimal control problems. Specifically, ART is applied to challenging rendezvous scenarios in Low Earth Orbit (LEO), ensuring fault-tolerant behavior under uncertainty. Through extensive experimentation, the proposed warm-starting strategy is shown to consistently produce high-quality reference trajectories, achieving up to 30% cost improvement and a 50% reduction in infeasible cases compared to conventional methods, demonstrating robust performance across multiple state representations. Additionally, a post hoc evaluation framework is proposed to assess the quality of generated trajectories and mitigate runtime failures, marking an initial step toward the reliable deployment of AI-driven solutions in safety-critical autonomous systems such as spacecraft.
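        The warm-starting pattern itself can be illustrated generically; in the sketch below the Rosenbrock function and a mock near-optimal guess stand in for the rendezvous problem and the Transformer output, so this is not the ART pipeline.
```python
# Generic warm-starting illustration: solve the same nonconvex problem from a
# naive guess and from a (mock) "learned" near-optimal guess, then compare
# iteration counts.  Objective and guesses are placeholders.
import numpy as np
from scipy.optimize import minimize, rosen

cold_start = np.zeros(10)                        # naive initial guess
warm_start = np.ones(10) + 0.05 * np.random.default_rng(0).standard_normal(10)

cold = minimize(rosen, cold_start, method="BFGS")
warm = minimize(rosen, warm_start, method="BFGS")
print(f"cold start: {cold.nit} iterations, warm start: {warm.nit} iterations")
```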
      • 02.1114 A Gaussian Mixture Model for Probabilistic Workspace Generation of Multibody Systems
        Nate Osikowicz (Penn State University), Puneet Singla (The Pennsylvania State University) Presentation: Nate Osikowicz - Monday, March 3rd, 10:10 AM - Madison
        In this paper, a computationally efficient method is presented for probabilistic workspace analysis of multibody systems. Growing interest in robotically assembled space structures has introduced a demand for analyzing all end-to-end configurations of large multibody systems as they are expanded and articulated in outer space. This paper develops a computationally efficient approach for workspace analysis in which the system's workspace is approximated by a Gaussian mixture model. The independence of the inputs for each link is exploited to propagate the bounds on the relative rotation of one link with respect to another through forward kinematics. Furthermore, a non-product quadrature method known as the Conjugate Unscented Transformation is used to compute statistical moments with a minimum number of samples. The tradeoff between the accuracy of the Gaussian mixture model in representing the workspace distribution and computational time is managed by increasing or decreasing the number of batches in the mixture model. To showcase the newly developed algorithm, several numerical experiments are conducted emphasizing its utility in design and trajectory-planning applications. By revealing low- and high-probability regions of the workspace, the algorithm can be utilized in applications such as stochastic design and fail-safe trajectory planning.
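        The workspace-as-mixture idea can be sketched as follows; a planar two-link arm stands in for the multibody system, plain random sampling replaces the Conjugate Unscented Transformation, and the link lengths and joint bounds are assumed values.
```python
# Sample bounded joint rotations, push them through forward kinematics, and fit
# a Gaussian mixture to the resulting end-effector positions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
L1, L2 = 1.0, 0.7                                   # link lengths (assumed)
q1 = rng.uniform(-0.5, 0.5, 5000)                   # bounded relative rotations [rad]
q2 = rng.uniform(-1.0, 1.0, 5000)

# Forward kinematics of the end effector for each joint sample.
x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
samples = np.column_stack([x, y])

gmm = GaussianMixture(n_components=5, random_state=0).fit(samples)
print(gmm.weights_, gmm.means_)                     # low/high-probability workspace regions
```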
      • 02.1115 Capturing Tumbling Objects in Orbit with Adaptive Tube Model Predictive Control
        Aaron John Sabu (University of California, Los Angeles), Brett Lopez () Presentation: Aaron John Sabu - Monday, March 3rd, 10:35 AM - Madison
        There is a growing need for sophisticated rendezvous and docking guidance algorithms capable of capturing tumbling, noncooperative objects, e.g., space debris, before they collide with other spacecraft. This capability entails sending a chaser (specialized spacecraft) that attaches to the target (tumbling object) to stabilize or deorbit the noncooperative asset. While rendezvous and docking with cooperative objects has been possible for several decades, advanced strategies for tumbling objects that account for uncertainties in the model parameters of either the chaser or target are still under development. We propose the use of adaptive dynamic tube model predictive control (ADTMPC) for trajectory planning of the chaser for rendezvous and docking with the target. Traditionally, the presence of model uncertainty requires the use of robust guidance techniques that are often conservative, leading to suboptimal performance. The main advantage of our approach is the ability to safely update the prediction model online—leading to less conservative trajectories and behaviors—without violating safety constraints that include time-varying collision constraints and other standard state or control constraints. The approach relies on computing a suitable robust tracking controller offline and combining it with a set membership identification algorithm that identifies the feasible set of parameters online rather than a single point estimate. Under mild assumptions, we guarantee the system will always satisfy the specified constraints even as the model is changing online. Moreover, our approach can incorporate predictions of how the closed-loop trajectory tracking performance will vary with the designed trajectory. This property, in essence, can be used to inform the guidance algorithm on which maneuvers can be executed without exciting the uncertain system dynamics. For the docking phase, we also propose an optimization-based procedure for computing the optimal docking position of the chaser on the target and the optimal application of propulsion to detumble the target. We demonstrate our approach for a chaser-target scenario where drag, inertial parameters, and orbit parameters are uncertain.
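        The set-membership identification referenced above can be illustrated with a toy scalar example (the model, noise bound, and inputs are invented for illustration): each bounded-noise measurement defines an interval of admissible parameters, and the feasible set is their running intersection.
```python
# Toy set-membership update for y = theta * u + w with |w| <= w_bar: intersect
# the parameter intervals implied by each measurement.
import numpy as np

rng = np.random.default_rng(2)
theta_true, w_bar = 1.3, 0.1
theta_set = [-5.0, 5.0]                      # prior feasible interval for theta

for _ in range(20):
    u = rng.uniform(0.5, 2.0)                # excitation input (positive, so bounds keep order)
    y = theta_true * u + rng.uniform(-w_bar, w_bar)
    lo, hi = (y - w_bar) / u, (y + w_bar) / u
    theta_set = [max(theta_set[0], lo), min(theta_set[1], hi)]

print("feasible parameter interval:", theta_set)   # shrinks around theta_true
```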
      • 02.1117 Origami-inspired Structural System for In-space Assembly
        Megan Ochalek (NASA Ames Research Center), Olivia Formoso (NASA Ames Research Center), Manan Arya (), Kenneth Cheung (NASA - Ames Research Center) Presentation: Megan Ochalek - Monday, March 3rd, 11:00 AM - Madison
        As space missions advance, the need for larger and more complex structures has grown beyond what current launch payloads can accommodate. Consequently, there is a requirement for some combination of assembly and deployment of these structures in space. The ARMADAS (Automated Reconfigurable Mission Adaptive Digital Assembly System) project has demonstrated the autonomous assembly and reconfiguration of strut-based volumetric pixels ("voxels") by mobile robots; the voxels can be assembled and reassembled into lightweight yet strong structures. The individual blocks, however, lack efficient stowage capabilities for launch. Here, we introduce a novel structural system inspired by the principles of origami and designed to integrate with the current ARMADAS robots. A simple bar-and-hinge model is used to understand the effects of the integrated hinges on stiffness, and the packing efficiency is calculated. The folding voxels reduce to less than 15% of their deployed volume for transport and use a bi-stable locking mechanism. The two ARMADAS robots are able to simply deploy the voxel and trigger the locking mechanism, such that it may be assembled into a larger structure.
      • 02.1118 A Modular, Adaptive, Coiled Deployable Boom System for Programmable Assembly
        Olivia Formoso (NASA Ames Research Center), Megan Ochalek (NASA Ames Research Center), Kenneth Cheung (NASA - Ames Research Center) Presentation: Olivia Formoso - Monday, March 3rd, 09:45 AM - Electronic Presentation Hall
        To enable deep space exploration and a sustained presence in space, autonomous in-space assembly systems are needed to develop large-scale infrastructure. Assembled mechanical metamaterials are a promising technology for space applications due to their low mass and high mechanical performance. Previous building block designs include monolithic units and numerous styles of decomposition. Face decomposition and strut-and-node designs were investigated for their high packing efficiency in a launch vehicle. Deployable building blocks provide a new paradigm for increased packing efficiency during transport. In this paper, we explore a novel deployable building block design using coiled deployable booms. Coiled deployable boom technology is used extensively in space structures for its lightweight properties and low power requirements. Coiled booms are compact in the stowed position and have a high stiffness-to-mass ratio. We present the design and performance characterization of a deployable, adaptive, structural building block for assembled metamaterial systems. The building block is designed with tape springs that serve as the strut members between attachment nodes. This building block design has a high packing efficiency for transport, a tunable strut design for custom interfaces and mechanisms, and allows for repair of hard-to-reach areas in the structure, providing a highly adaptive system for assembly. The structural performance of the building block is characterized, and two folding configurations of the building block prototype were demonstrated. A discussion of lessons learned and future design revisions is also presented. This system of adaptive structural modules for robotic assembly will enable more complex assembled structures and space infrastructure.
      • 02.1119 Proprioceptive Inchworm Robots for Space Applications
        Pascal Spino (), Frank Sebastianelli (NASA - Ames Research Center), Kenneth Cheung (NASA - Ames Research Center), Christine Gregg (NASA Ames Research Center), Olivia Formoso (NASA Ames Research Center), Irina Kostitsyna () Presentation: Pascal Spino - Monday, March 3rd, 04:30 PM - Madison
        Mobile robots will be key in enabling the next stages of human presence in extraterrestrial environments. One set of emerging applications is in construction and maintenance tasks on host structures like space stations and lunar infrastructure. Robots that operate in these highly structured environments can leverage much simpler subsystems for locomotion, sensing, and path planning compared to machines in unstructured environments such as terrestrial rovers. A leading robot architecture in this space is the walking arm or inchworm robot. This class of robot resembles a serial manipulator arm in which both ends are equipped with identical end effectors; the robot uses these end effectors interchangeably to anchor itself to its host structure, to interact with tools, and to transport objects. In this way the robot can locomote like an inchworm by changing its anchor points while maintaining the functionality of a manipulator arm at specific locations. The inchworm robot has a workspace that scales with its host structure, allowing it to maintain and build structures much larger than itself. The inchworm robot also achieves high precision by indexing with anchor points on the host structure. Inchworm robots have been developed across a variety of scales by aerospace companies and research groups; they are currently in use on the ISS as the Space Station Remote Manipulator System (SSRMS) and European Robotic Arm (ERA). The NASA ARMADAS project is developing inchworm-style robots for in-space assembly, and the robotics company GITAI is developing inchworm robots for assembly of tall lunar towers. The inchworm robot has unique properties that make its design and control challenging. Unlike a typical manipulator arm, both ends of the inchworm robot must be capable of acting as a base to support the large inertia and gravity loads of the entire robot. This kinematic chain symmetry makes actuator selection difficult, particularly in gravity environments. Past approaches have relied on high-torque stages at each joint, realized by motors with large gear transmissions, which results in energy inefficiency and excessive joint stiffness. Compliance is critical for inchworm robots as they must frequently make and break contact with their host structure and other objects. Past control approaches have also relied on quasi-static methods, which yield excessively slow operation. Following recent developments in proprioceptive actuation and impedance-based control strategies, we provide a framework for the design and control of inchworm robots that are both compliant and dynamic, using sensorless joint-space torque measurements and low-gear-ratio motor stages. This paper demonstrates methods for platform designers to derive actuator parameters, to assess different controllers, and to test relationships between engineering variables in simulation. The paper also investigates different locomotion and manipulation modes of the inchworm robot across a variety of length scales and gravity environments to assess feasibility in emerging applications. Finally, the paper provides physical validation experiments on the Scaling Omnidirectional Lattice Locomoting Explorer (SOLL-E) inchworm robot being developed by the NASA ARMADAS project.
      • 02.1121 Architecting Autonomy for Safe Microgravity Free-Flyer Inspection
        Keenan Albee (Jet Propulsion Laboratory), David Sternberg (NASA Jet Propulsion Laboratory), Alexander Hansson (ETH Zürich), David Schwartz (ETH Zurich), Ritwik Majumdar (University of Michigan), Oliver Jia-Richards (University of Michigan) Presentation: Keenan Albee - Monday, March 3rd, 04:55 PM - Madison
        Small free-flying spacecraft have the potential to provide vital extravehicular activity (EVA) services like inspection and repair for future orbital outposts such as the planned Lunar Gateway. Operating adjacent to delicate space station hardware and other microgravity targets, these spacecraft require a formalization of the autonomy desiderata that a free-flyer inspection mission must provide. This work explores the transformation of general mission requirements for this class of free-flyer into a set of concrete decisions for the planning and control autonomy architectures that will power such missions. Flowing down from operator commands for inspection of important regions and from mission time-criticality, a motion planning problem emerges that provides the basis for developing autonomy solutions. Unique constraints are considered, such as typical velocity limitations, pointing, and keep-in/keep-out zones, accompanied by a discussion of mission fallback techniques for providing hierarchical safety guarantees under model uncertainties and failure. Planning considerations such as cost function design and path versus trajectory control are discussed. The typical inputs and outputs of the planning and control autonomy stack of such a mission are also provided. Finally, notional system requirements such as solve times and propellant use are documented to inform planning and control design. The entire proposed autonomy framework for free-flyer inspection is realized in the custom-made SmallSatSim simulation environment, providing a reference example of free-flyer inspection autonomy. The proposed autonomy architecture serves as a blueprint for future implementations of small satellite autonomous inspection in proximity to mission-critical hardware, going beyond the existing literature by (1) providing realistic system requirements for an autonomous inspection mission and (2) translating them into algorithmic structuring and autonomy design decisions for inspection planning and control.
      • 02.1125 In-Space Manufacturing for Flexible Membranes: Process, Applications, and Vacuum Test Insights
        Michael Kringer (Munich University of Applied Sciences / Dcubed), Nisanur Eker (DCUBED), Felix Schaar (DCUBED), Jannik Pimpi (Munich University of Applied Sciences), Ugo Lafont (European Space Agency), Thomas Sinn (Deployables Cubed), Philipp Reiss (Technical University of Munich), Markus Pietras (Munich University of Applied Sciences) Presentation: Michael Kringer - Monday, March 3rd, 09:00 PM - Madison
        An experimental setup is presented to demonstrate the in-space manufacturing of a support structure for flexible membranes as a proof of concept. Membranes, for example those used in flexible solar arrays, can be reinforced by a continuous extrusion of photopolymer. This process is intended to provide a more scalable and cost-effective solution compared to deployable structures. This paper presents a test setup and the results of functional tests under atmospheric and vacuum conditions. The flexible membrane is unrolled from a spool and pulled through a circular nozzle located on one side of the membrane. Through this nozzle, a photopolymer is extruded onto the membrane at the same feed rate. Directly after the nozzle outlet, UV lights cure the photopolymer in parallel with the deployment. After a short section where rolls stabilize the membrane and the resin has time to cure, the in-space manufactured structure is deployed into free space. With this setup, the manufacturing of structures with a total length of over 300 mm was demonstrated. However, the structures intended to be manufactured with this technology could support membranes with a length of several meters. For longer support structures, the cross-section and geometry of the nozzle, as well as the amount of material, would need to be increased to enhance the bending stiffness. Functional tests in ambient conditions and vacuum showed successful extrusion of the photopolymeric structure on the membrane. However, bubble formation was observed in the structure extruded in vacuum that was not present in the structure manufactured under atmospheric conditions. Potential causes include inadequate resin degassing, air entrapment during filling, or outgassing from hardware components. Addressing these bubbles is crucial for in-space manufacturing technology, as future applications require a predictable and reproducible manufacturing process to design structures optimized for their load cases. Previous fundamental experiments on photopolymer extrusion have been demonstrated in ground-based experiments, in microgravity during zero-g flights, and in microgravity and vacuum conditions during a sounding rocket experiment. With the presented concept, a high bending stiffness can be achieved due to the continuous solidified structure. Furthermore, the technology offers an energy-efficient curing mechanism, a high packing efficiency, and a low number of moving elements. This makes it a competitive solution compared to other in-space manufacturing technologies. More importantly, it brings new approaches to conventional spacecraft designs to reduce costs and increase mission capabilities.
  • James Hoffman (Kinemetrics) & Glenn Hopkins (Georgia Tech Research Institute)
    • Glenn Hopkins (Georgia Tech Research Institute)
      • 03.0101 Modeling the Performance of Beam Forming Software Defined Geostationary Communication Satellites
        Roland Burton (Intelsat), Gopinath Anaszewicz (CGI UK), Ivan Williams (CGI) Presentation: Roland Burton - Monday, March 3rd, 09:45 AM - Lamar/Gibbon
        This paper describes how the performance of the next generation of soon-to-be-launched beam-forming geostationary telecommunication satellites can be modeled, both to assess their capabilities compared to current satellites and to understand how to operate them optimally. The next generation of geostationary communication satellites is fully software defined, incorporating digital channelization and active beam-forming antenna arrays. These software-defined satellites can form hundreds of beams of varying shape, power, and bandwidth. However, this flexibility comes at a mass and power penalty, and hence higher cost, when compared to fixed-beam satellites of the same bandwidth. It is therefore important for operators such as Intelsat to be able both to quantify the benefit of this increased flexibility and to ensure that the satellites are operated in a manner consistent with maximizing revenue and utilization. This requires modeling of the beam-forming satellite's capabilities and performance. Traditionally, a satellite's beam sizes and shapes are determined prior to launch based on the static expected demand over the satellite's mission. There is limited capability for change, and it is accepted that many satellite beams are left underutilized for long periods. In contrast, a beam-forming software-defined satellite allows the beams to better follow the demand, providing the ability to capture secular, seasonal, diurnal, and even real-time changes in demand over the mission. Accurately modeling the capability of a beam-forming satellite against real-world demand is complex. A set of beams, or beam laydown, cannot be created with entirely arbitrary shape, frequency, and bandwidth. Rather, a beam laydown must adhere to a multitude of constraints that create complex interdependencies. For example, there are spectrum limitations, carrier size limitations, limits on the acceptable carrier-to-interference ratio of co-frequency co-polarization beams, limits on the acceptable EIRP density, limits on the total power dissipation of the satellite, and limits on the power capability of each feed in the beam-forming array. In addition to the constraints, the modeling must also be representative of how the beam laydown problem will be solved in the real world. The problems of fencing (deciding the shape of each beam), coloring (assigning the frequency, polarization, and bandwidth of each beam), and beam forming (determining the excitation coefficients necessary to form each beam) are each NP-hard. To remain tractable, real-world solution strategies will necessarily employ sub-optimal and locally greedy solutions. In the first part of the paper, a typical software-defined satellite architecture is introduced, and the architecture's capabilities, limits, and constraints with respect to the beam laydown are summarized. In the second part of the paper, each of the three beam laydown sub-problems is briefly described, solution strategies are discussed, and a modeling baseline is created. In the final part, the performance of this baseline is measured against real-world demand patterns that are representative of Intelsat's actual market forecasts for various regions of the world. Performance is benchmarked against a current-generation, fixed-beam satellite deployment.
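        As a generic illustration of the locally greedy strategies mentioned above, the following sketch colors a small, made-up beam interference graph so that no two mutually interfering beams share a frequency color; it is not Intelsat's modeling baseline.
```python
# Greedy graph coloring for the "coloring" sub-problem: beams that mutually
# interfere (e.g., overlapping co-polarization neighbors) must not share a
# frequency color.  Each beam takes the lowest color unused by its colored neighbors.
def greedy_coloring(interference):
    colors = {}
    for beam in sorted(interference, key=lambda b: -len(interference[b])):  # densest first
        taken = {colors[n] for n in interference[beam] if n in colors}
        colors[beam] = next(c for c in range(len(interference)) if c not in taken)
    return colors

# Beams A..E with made-up pairwise interference constraints (symmetric adjacency lists).
interference = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "E"],
    "D": ["B"],
    "E": ["C"],
}
print(greedy_coloring(interference))   # beam -> frequency color index
```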
      • 03.0102 Modelling of IMDs from a Multiple Beams Transmitter under Antennas Crosstalk Conditions
        Aymeric Cailleux (Heriot-Watt University), Jiayu Hou (Heriot-Watt University), Pablo Rochas (Thalès Alenia Space), Yuan Ding (Heriot-Watt University), Jean-Philippe Fraysse (), George Goussetis () Presentation: Aymeric Cailleux - Monday, March 3rd, 10:10 AM - Lamar/Gibbon
        With the advent of new generations of telecommunications, the deployment of active direct radiating arrays (DRAs) delivering multiple beams across multiple frequencies holds great promise, particularly for Low Earth Orbit (LEO) applications. In these advanced technologies, assessing active circuit nonlinearities and the effects of coupling and mismatch within antenna arrays is increasingly critical. As the industry moves towards eliminating bulky and power-consuming isolators from active DRAs, the mutual coupling between antennas will inevitably alter the load impedance perceived by the power amplifiers (PAs). Given that PA performance is closely tied to the load impedance, fluctuations in the DRA's performance can be expected. This effect is usually referred to as the load-pull effect. However, evaluating these techniques experimentally poses significant challenges in terms of time, resources, and expense. Furthermore, existing modelling tools in commercial software may encounter either prolonged computation times or convergence issues as the number of beams and radiating elements is increased. This article introduces a hardware-oriented modelling approach aimed at predicting the intermodulation products (IMDs) generated by a system experiencing active circuit nonlinearities and antenna crosstalk, in terms of EIRP and beam steering angle across the radiation pattern. The tool presented within this framework relies on an interpolated lookup table and employs Bessel-Fourier memoryless modelling. Such modelling approximations are suitable for narrowband signals. The objective is to offer a tool that strikes a balance between computation time and accuracy. The model is compared to commercial software solutions for DRAs operating in the sub-X band and tailored for LEO applications. Given that certain LEO applications within these bands can be classified as narrowband, the tool is well suited to evaluating system performance. The comparison includes computation of the active reflection coefficient, assessment of the spectral regrowth of the signal, and evaluation of the EIRP far-field radiation pattern, with the aim of illustrating the degradation of DRA performance. The results show a significant reduction in computation time while maintaining a commendable level of accuracy.
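        A minimal memoryless AM/AM and AM/PM model in the spirit of the interpolated lookup-table approach described above is sketched below; the table values are invented for illustration and do not represent measured amplifier data.
```python
# Memoryless PA model: interpolate measured AM/AM and AM/PM points and apply
# them to the input envelope sample by sample.  Table values are placeholders.
import numpy as np

amp_in    = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])                # input amplitude
amp_out   = np.array([0.0, 0.21, 0.41, 0.58, 0.70, 0.75])           # AM/AM (compressing)
phase_out = np.deg2rad(np.array([0.0, 1.0, 3.0, 7.0, 12.0, 18.0]))  # AM/PM [rad]

def pa_memoryless(x):
    """Apply interpolated AM/AM and AM/PM to a complex baseband signal x."""
    r = np.abs(x)
    gain = np.interp(r, amp_in, amp_out)
    dphi = np.interp(r, amp_in, phase_out)
    return gain * np.exp(1j * (np.angle(x) + dphi))

# Two-tone test signal: intermodulation products appear in the output spectrum.
t = np.arange(4096) / 4096.0
x = 0.4 * np.exp(2j * np.pi * 50 * t) + 0.4 * np.exp(2j * np.pi * 58 * t)
spectrum = np.fft.fftshift(np.abs(np.fft.fft(pa_memoryless(x))))
print(spectrum.max())
```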
    • James Hoffman (Kinemetrics) & David Mooradd (MIT Lincoln Laboratory)
      • 03.0201 Design and Performance Analysis of a Reflectarray Antenna for K-Band Satellite Communications
        Mehmet Can Demirci (Istanbul Technical University), Funda Akleman (İstanbul Technical University) Presentation: Mehmet Can Demirci - Monday, March 3rd, 10:35 AM - Lamar/Gibbon
        In this study, the design, simulation, and measurement of a circularly polarized, offset-fed reflectarray antenna operating in the 19-21 GHz K-band frequency range are investigated in depth. Reflectarray antennas are notable for their physical flexibility, attributed to features such as lightweight construction, planar layout, and optional foldability, and therefore differ significantly from traditional parabolic reflector antennas. This research focuses on the design of reflectarray antennas targeting satellite communication applications, considering the operating frequency and paying close attention to features such as portability, easy integrability, and the critical aspect of circular polarization. A notable feature of the design and manufacturing process is the use of a patch antenna as the feed, instead of a conventional horn antenna. This strategic decision stemmed primarily from the overall goal of reducing structural weight, an important consideration in contemporary antenna engineering practice. The design was developed through the careful realization of a circular reflectarray surface that is efficiently illuminated by the feed antenna and provides optimum performance characteristics. The design optimization began with a rigorous evaluation of three different unit cell elements and continued with the selection of the most suitable element after a comprehensive analysis of the results. This phase allowed a seamless transition to the exploration of both offset-fed and center-fed configurations, with the offset-fed approach emerging as the preferred option for the final design. The design then proceeded to fabrication. An offset-fed reflectarray antenna was realized, and its performance was subjected to rigorous scrutiny through careful comparison of measurement results with simulated data. As expected, differences emerged between measurement and simulation results, attributed primarily to alignment differences encountered during assembly due to the precise offset-feed configuration. Additionally, losses due to the inherent properties of the dielectric material at the high operating frequency, combined with inherent losses in RF components such as cable connectors, contributed to the observed deviations. Despite these difficulties, the circularly polarized reflectarray antenna, built on a circular structure with a diameter of approximately 20 cm, achieved a measured antenna gain of approximately 26 dB at the center frequency of 20 GHz. The reflectarray antenna also showed a return loss below -10 dB throughout the frequency band. In addition, the produced reflectarray antenna has left-hand circular polarization, and the measured cross-polarization level was approximately -20 dB. In summary, these measurement results show that the reflectarray antenna whose design, production, and measurement are presented here promises to be an important alternative in the field of advanced antenna technology.
      • 03.0203 Polarization-Insensitive, Highly-Selective Metasurface-Based Filtenna for Satcom Applications
        Ashifa Mohammed Musthafa (Eindhoven University of Technology), Elmine Meyer (Eindhoven University of Technology), Ulf Johannsen (), Diego Caratelli (The Antenna Company) Presentation: Ashifa Mohammed Musthafa - Monday, March 3rd, 11:00 AM - Lamar/Gibbon
        Integrated radio frequency front ends are essential to meet the cost and design demands of advanced 6G Low Earth Orbit (LEO) wireless systems. One relevant integration approach involves combining filters and antennas, leading to the development of filtennas. Filtennas offer significant advantages in terms of reduced mass, low power consumption, and improved insertion loss for LEO payloads, as well as for 6G non-terrestrial network (NTN) links that are less tolerant to interference. Filtenna integration methods are broadly categorized into three approaches, namely cascaded, synthesis, and fusion. The first is the cascade method, where functional integration is achieved through a 50-ohm transmission line. The second, the synthesis or co-design method, enables integration by using the radiator as the last resonator in a system of series-coupled resonators. Finally, the fusion method incorporates filtering capabilities into the antenna through structural modifications, and this method stands out prominently among the three. The fusion method minimizes insertion losses and can help reduce antenna mass and power consumption by eliminating the need for external filters. In this context, frequency-selective surfaces (FSSs) use their structural properties to filter signals by being opaque to one frequency band while transparent to another. This paper presents a planar, polarization-insensitive, space-compatible, skirt-selective transmit-array antenna for satellite communication applications. The transmit array employs an FSS radome to enable filtering with a sharp roll-off factor (ROF) in the relevant frequency response. The FSS is designed to operate in the Ku-band (12-18 GHz) and to be insensitive to the polarization (vertical or horizontal) of the incoming linearly polarized electromagnetic wave. The performance enhancement of the proposed filtenna is achieved through a hybrid FSS radome and a customized unit cell geometry that is optimized using both computational techniques and advanced electromagnetic concepts based on equivalent circuit theory. Full-wave simulation results will be presented to demonstrate the characteristics of the designed transmit array.
      • 03.0204 Spiral Wrapped Antenna Technology
        Thomas Murphey (Opterus) Presentation: Thomas Murphey - Monday, March 3rd, 11:25 AM - Lamar/Gibbon
        This paper presents prototype and ground test results for an advanced spacecraft deployable parabolic reflector architecture called Spiral Wrap Antenna Technology (SWATH). SWATH is a fully continuous, solid surface deployable parabolic reflector architecture that enables low-cost, high frequency communications and radar missions. Utilizing high strain composites and an origami folding pattern, the antenna stows into a compact form factor for launch while maintaining high precision upon deployment. A streamlined design and manufacturing process is presented along with shape and deployment testing results. Prototyping and testing efforts have proven the architecture to have many desirable features over state of the art mesh antennas including higher frequency operation, lower cost manufacturing, and more compact packaging.
    • Orin Council (Georgia Tech Research Institute) & James Hoffman (Kinemetrics)
      • 03.0302 Low SWaP X-Band Transceiver for Deep Space Applications
        Lucas Wray (Johns Hopkins University/Applied Physics Laboratory), Evan Shi (Johns Hopkins University/Applied Physics Laboratory), Michael Dauberman () Presentation: Lucas Wray - Tuesday, March 4th, 08:30 AM - Lamar/Gibbon
        The Frontier Radio deep space software defined radio (SDR) developed by the Johns Hopkins University Applied Physics Laboratory (APL) has been a staple of high-tier space exploration missions such as Parker Solar Probe, DART, Europa Clipper, and Dragonfly. Although Frontier Radio has been highly successful over the past decade, difficulties with long-lead custom parts, manufacturing defects and delays, and parts obsolescence have driven APL to evolve the Frontier Radio family of SDRs in order to meet the increasingly resource-constrained needs of future deep space missions. APL's next generation SDR, the Frontier Radio Lite Mark 2 (FRLM2), will bring significant improvements in size, weight, and power (SWaP), in large part due to improvements in front end design. These new developments leverage the progression in commercial monolithic microwave integrated circuit (MMIC) technology and the availability of new high-reliability components to produce a simpler and more SWaP-efficient design. This paper details the design and test of the FRLM2’s X-band RF front end, a major advancement in APL’s deep space radio capabilities.
      • 03.0305 Measurements of Scattering Characteristics of Lunar Regolith for Radio Propagation Analysis
        Akira Akasaka (KDDI Research, Inc.), AKIRA YAMAGUCHI (KDDI Research, Inc.), Kento Kimura (KDDI Corporation) Presentation: Akira Akasaka - Tuesday, March 4th, 08:55 AM - Lamar/Gibbon
        Numerous lunar exploration missions are currently in the planning stages, indicating a surge in communications traffic among astronauts, rovers, robots, and satellites. This exponential increase in communication necessitates reliable mobile technologies such as 5G and Wi-Fi, which appear capable of supporting lunar communications. Nevertheless, radio propagation on the Moon is expected to differ from that on Earth due to the lunar surface being covered with regolith rich in various metal compounds. The authors have presented findings on the reflection coefficients and diffraction characteristics of lunar regolith simulant at previous Aeroconf events. Fortunately, these characteristics appear to align with those on Earth. Ongoing research on the scattering properties of the same lunar regolith simulant will be presented at Aeroconf 2025. In 2023, reflection coefficients of a flat surface using regolith simulant FJS-1 were measured across five frequency bands recommended by the latest SFCG guidelines. These bands cater to lunar surface communication (2.4, 5.6, and 25 GHz) for 5G and Wi-Fi, as well as lunar orbit communication (7 and 8.5 GHz). Reflection coefficients of dry concrete blocks were also measured at selected frequency bands. In 2024, edge diffraction loss of regolith simulant was evaluated in anechoic chambers, simulating scenarios with sharp cliffs. Results indicated a straightforward increase in diffraction loss with angle, akin to Earth conditions. Notably, vertical and horizontal polarizations exhibited similar diffraction losses across all measured frequencies, suggesting no distinct polarization effects during diffraction. Current efforts involve setting up calibrated measurements to assess scattering characteristics of the regolith simulant using the bi-static reflection method, known as the "NRL Arch Method," within anechoic chambers. Emphasis lies on 25 GHz due to equipment limitations, with expectations that characteristics at other frequencies (2.4, 5.6, 7, and 8 GHz) can be inferred from the 25 GHz data. This methodology facilitates simultaneous measurement of the regolith simulant's electric permittivity. Upon consolidating data regarding the reflection, diffraction, and scattering of lunar regolith simulant, the subsequent phase entails reliable simulation of radio propagation on the Moon, including multipath fading. The overarching aim remains the careful enhancement of mobile communication infrastructure on the lunar surface.
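        As a point of reference for the reflection-coefficient measurements described above, the sketch below evaluates the smooth-surface Fresnel coefficients for horizontal and vertical polarization versus incidence angle; the complex permittivity used is a hypothetical placeholder, not a measured FJS-1 value.
        ```python
        # Smooth-surface Fresnel reflection coefficients versus incidence angle for
        # horizontal (TE) and vertical (TM) polarization. The relative permittivity
        # is a hypothetical placeholder, not measured FJS-1 data.
        import numpy as np

        eps_r = 3.0 - 0.1j                               # hypothetical complex relative permittivity
        theta = np.deg2rad(np.arange(10, 81, 10))        # incidence angle from the surface normal

        root = np.sqrt(eps_r - np.sin(theta) ** 2)
        gamma_h = (np.cos(theta) - root) / (np.cos(theta) + root)
        gamma_v = (eps_r * np.cos(theta) - root) / (eps_r * np.cos(theta) + root)

        for t, gh, gv in zip(np.rad2deg(theta), gamma_h, gamma_v):
            print(f"{t:4.0f} deg  |Gamma_H| = {abs(gh):.3f}   |Gamma_V| = {abs(gv):.3f}")
        ```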
    • Mark Bentum (Eindhoven University of Technology) & Melissa Soriano (Jet Propulsion Laboratory)
      • 03.0401 Pathfinding Low Frequency Radio Astronomy with the DORA Radio Background Experiment
        Yifan Zhao (Arizona State University), Daniel Jacobs (Arizona State University), Judd Bowman (Arizona State University), Titu Samson (Arizona State University), Marc-Olivier Lalonde (Arizona State University) Presentation: Yifan Zhao - Tuesday, March 4th, 09:20 AM - Lamar/Gibbon
        The DORA mission is a 3U cubesat multi-experiment testbed for advancing technology relevant to infrared laser communications and radio astronomy. The primary experiment is an environmental test of a solid state infrared silicon photo-multiplier detector element for the Deployable Optical Receiver Aperture, a widefield laser receiver concept. Here we report on the design and laboratory characterization of the secondary payload on the cubesat. The DORA Radio Background Experiment is a low frequency radio spectrometer to measure the intensity in LEO of terrestrial radio transmitters in the 50-200~MHz band used in 21~cm cosmology. The radio receiver includes a coarse-band spectrometer with eight channels and a narrow-band spectrometer implemented using a software defined radio yielding 300~kHz channels. The radio experiment will be used to map integrated transmitter power over low-population regions, focusing on FM and other high-power broadcast stations. As a TRL-raising measure, the design uses proposed high precision RF switches and other elements that are being investigated for future 21~cm cosmology missions. The spacecraft was built as an educational activity by undergraduate and graduate students at Arizona State University. It was launched as part of ELaNa~52 on the NG-21 resupply services mission to the International Space Station. Deployment and on-orbit operations are expected in October 2024.
      • 03.0402 How GRAIL Radio Occultations Could Enable Future Lunar Missions for Mapping the near Surface Dust
        Kamal Oudrhiri (Jet Propulsion Laboratory), Yu-Ming Yang (NASA Jet Propulsion Lab), Daniel Erwin (University of Southern California), Dustin Buccino (Jet Propulsion Laboratory), Paul Withers () Presentation: Kamal Oudrhiri - Tuesday, March 4th, 09:45 AM - Lamar/Gibbon
        NASA’s past lunar missions, including Surveyor, Apollo, and the Lunar Atmosphere and Dust Environment Explorer (LADEE), have provided valuable insights into the dynamics of the lunar exosphere and its relationship with the solar wind flux. However, it has been a challenge for the scientific community to interpret dust lofting and levitation on the Moon. This research aims to utilize NASA’s Gravity Recovery and Interior Laboratory (GRAIL) radio science signals of opportunity to evaluate radio science measurements for a future Radio Occultation (RO) mission investigating lunar dust and its interaction with the surrounding plasma. This paper will provide an assessment of new science observations derived from GRAIL RO data as a reference for the scientific community in designing future lunar radio occultation missions and improving our understanding of dense dust formation and evolution near the lunar surface. Furthermore, the observations of lunar dust with GRAIL’s radio science signals of opportunity provide a clean reference, unaffected by lander impacts since the Apollo era, for future investigations of lunar dust dynamics, especially in the south pole region where future Artemis crewed missions will land.
      • 03.0403 Assessing the Impact of Solar Plasma on Spacecraft Telemetry during the 2023 Mars Solar Conjunction
        Daniel Kahan (), Dustin Buccino (Jet Propulsion Laboratory), David Morabito (Jet Propulsion Laboratory), Walid Majid (Jet Propulsion Laboratory), Roy Gladden (Jet Propulsion Laboratory) Presentation: Daniel Kahan - Monday, March 3rd, 11:50 AM - Electronic Presentation Hall
        On November 18, 2023, Mars reached solar conjunction as viewed from Earth, with a minimum Sun-Earth-Mars (SEM) angle of 0.1 degrees. With the SEM angle smaller than usual, this geometry presented a unique opportunity to study the effects of solar plasma on radio signals sent from Mars assets to the Deep Space Network (DSN) on Earth. For a period of two weeks before and after the event, we utilized the Open Loop Receivers (OLRs) at the DSN to record the radio signal from three Mars orbiters: Mars Odyssey (56 tracks), Mars Reconnaissance Orbiter (56 tracks), and MAVEN (24 tracks). The OLR recordings were completely opportunistic in nature—no modifications were made to any of the spacecrafts’ configurations or to the DSN ground stations. With decreasing SEM angle, the DSN closed loop receiver had increasing difficulty locking to the signal, while the OLR continued to capture a spectrum containing the degraded signal. Additionally, two coronal mass ejections were observed during the study period, with the impact evident in the OLR data. By correlating phase noise from the OLR data with dropped telemetry frames, we map the impact of solar plasma on the transmission of data through the sun’s corona. The OLRs were also configured to record wide bands, offering a path for telemetry recovery even when the closed loop receiver was out of lock at low SEM angles.
  • John Enright (Toronto Metropolitan University) & Kar Ming Cheung (Jet Propulsion Laboratory)
    • Behzad Koosha (The George Washington University ) & Shervin Shambayati (Aerospace Corporation)
      • 04.0103 Comprehensive Analysis of Recent LEO Satellite Constellations : Capabilities & Innovative Trends
        Behzad Koosha (The George Washington University ), Pooria Madani (Ontario Tech University), Mansoor Dashti Ardakani (AMD) Presentation: Behzad Koosha - Sunday, March 2nd, 04:30 PM - Gallatin
        In this study, we will review state-of-the-art satellite constellations and provide an overview of major existing and future missions. Satellite constellations are networks of satellites working together in space to provide comprehensive and continuous coverage of the Earth's surface. These systems are designed to meet various needs, from communication and navigation to Earth observation and scientific research. In recent years, as constellations have developed, there has been significant interest in addressing their challenges, and the need for a survey of their capabilities has become more pressing. Satellite constellations are becoming more relevant due to their ability to meet the increasing demand for global connectivity, their role in enhancing navigation and timing services, their contributions to Earth observation and environmental monitoring, and the economic opportunities they present. Advances in technology, reduced costs of launch and satellite production, and the strategic importance of space-based assets further drive their relevance in today's world. As we continue to rely more on space-based technologies, the importance of satellite constellations is only expected to grow. During this study, we will review constellations across the six categories highlighted below: a) Demand for Global Connectivity: With the rise of the internet as a fundamental resource for education, business, and communication, there is a growing demand to extend high-speed internet access to underserved and remote areas. Satellite constellations can support the infrastructure needed for seamless global communication, particularly in regions where terrestrial networks are impractical. b) Technological Advancements: Advances in satellite technology have led to smaller, cheaper, and more efficient satellites. This makes it possible to launch large constellations at a lower cost than ever before. The development of reusable rocket technology by companies like SpaceX has significantly reduced the cost of launching satellites, making it economically viable to deploy large constellations. c) Enhanced Navigation and Timing Services: Satellite constellations provide highly accurate positioning, navigation, and timing (PNT) services, which are critical for various sectors, including aviation, maritime, agriculture, and autonomous vehicles. d) Earth Observation and Environmental Monitoring: Satellite constellations provide real-time data for monitoring climate change, tracking weather patterns, and responding to natural disasters. High-resolution imagery from Earth observation constellations aids in precision farming, forest management, and monitoring natural resources, contributing to sustainable practices and food security. e) Economic Opportunities: The commercialization of space has opened up new markets and opportunities. Satellite constellations support a wide range of applications, including telecommunications, remote sensing, and IoT networks. f) National Security and Defense: Governments use satellite constellations for surveillance, reconnaissance, and intelligence gathering, which are critical for national security and defense. In this study, our focus is a deep dive into the innovative communication concepts and infrastructures planned for future constellations, as these form the fundamental baseline for connecting constellations in space and to ground networks.
      • 04.0105 A Rapid, Low-Cost Path to Lunar Communication and Navigation with a Lunar Surface Station
        William Jun (NASA Jet Propulsion Laboratory), Toshiki Tanaka (), Paul Carter (Jet Propulsion Laboratory), Sriramya Bhamidipati (NASA Jet Propulsion Lab), Rodney Anderson (Jet Propulsion Laboratory / California Institute of Technology), Kar Ming Cheung (Jet Propulsion Laboratory) Presentation: William Jun - Thursday, March 6th, 11:50 AM - Gallatin
        Multiple international efforts are developing systems for position, navigation, timing, and communications (PNT+C) for the Moon. These architectures plan to implement a large lunar constellation similar to Earth-like Global Navigation Satellite Systems (GNSS) enabling high-performance PNT+C for a scalable user segment on the lunar surface. However, the collective cost and timeline of a complete infrastructure deployment could reach many billions in USD over decades before surface users can fully utilize its PNT+C capabilities. We propose the use of a lunar surface station to achieve high performance PNT+C for lunar south pole surface users during the initial deployment of a large lunar constellation. The early addition of a single station can provide the same or better PNT+C performance as a large lunar constellation while only requiring a lunar constellation with ≤4 orbiters. Thus, a surface-station-aided (SSA) PNT+C architecture significantly reduces the required timeline and overall infrastructure cost to achieve accurate and scalable PNT+C. A static lunar surface station positioned at the Connecting Ridge supplies valuable PNT+C services to future missions in nearby permanently shadowed regions. The station provides communication relay and store-and-forward capabilities between lunar relay orbiters and nearby surface users. In addition, the station receives augmented forward signals (AFS) from the LunaNet compatible constellation to generate one-way pseudorange and Doppler shift observables. The station then generates differential and Joint Doppler and Ranging corrections with these observables and broadcasts corrections, along with an additional AFS, to its target region. Both the corrections and geometric benefits from an additional surface-to-surface AFS result in significant improvements in real-time PNT performance for surface users, even amid large ephemeris and radiometric biases, drift, and noise. With a lunar constellation as small as three orbiters, this SSA PNT+C architecture can achieve <15 m positioning accuracy (3σ), <40 mm/s velocity accuracy (3σ), <150 ns timing distribution (3σ), and up to 300 Mbps in data throughput for nearby surface users. Along with PNT+C benefits, the station maintains a highly stable timing reference for time distribution, constellation synchronization, and lunar timekeeping. Designed to survive throughout lunar night, the station can also provide continuous scientific value with the deployment of a science payload. A lunar surface station can enable a rapid path to full PNT+C capabilities, while still providing long-term benefits after the completion of a large lunar infrastructure.
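        A minimal numerical sketch of the differential idea described above (assumed observable model and made-up geometry, not the proposed architecture's actual values): a station at a known location differences out the errors common to itself and a nearby user, leaving only the user's own clock term, which is then solved for in the navigation solution.
        ```python
        # One-way pseudorange with clock and ephemeris terms, plus a surface-station
        # differential correction. All positions, clock offsets, and biases are
        # hypothetical values chosen only to illustrate the bookkeeping.
        import numpy as np

        C = 299_792_458.0                                       # speed of light, m/s

        def pseudorange(sat, rx, dt_rx, dt_sat, common_bias=0.0):
            return np.linalg.norm(sat - rx) + C * (dt_rx - dt_sat) + common_bias

        sat = np.array([1.8e6, 2.0e5, 1.2e6])                   # orbiter position, m (hypothetical)
        station = np.array([0.0, 0.0, 1.7374e6])                # surface station with known position, m
        user = station + np.array([3.0e3, -1.5e3, 10.0])        # nearby surface user, m

        dt_sat = -2e-6                                          # unmodeled orbiter clock error, s
        dt_user = 1e-7                                          # user clock offset, s
        bias = 25.0                                             # common ephemeris/ranging bias, m

        rho_station = pseudorange(sat, station, 0.0, dt_sat, bias)
        rho_user = pseudorange(sat, user, dt_user, dt_sat, bias)

        correction = rho_station - np.linalg.norm(sat - station)   # everything not explained by geometry
        rho_corrected = rho_user - correction

        print("raw pseudorange error       (m): %.1f" % (rho_user - np.linalg.norm(sat - user)))
        print("corrected pseudorange error (m): %.1f" % (rho_corrected - np.linalg.norm(sat - user)))
        print("residual = user clock term  (m): %.1f" % (C * dt_user))
        ```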
    • Shervin Shambayati (Aerospace Corporation) & Behzad Koosha (The George Washington University )
      • 04.0201 High-performance DTN Using Larger Packets
        Fred Templin (The Boeing Company) Presentation: Fred Templin - Sunday, March 2nd, 04:55 PM - Gallatin
        Delay Tolerant Networking (DTN) for space communications is on the path to mission infusion and operations, with high data rate DTN experimentation beginning to emerge in real deployments. The International Space Station (ISS) DTN testbed has evolved to include high data rate experimentation over laser links, which will serve as a precursor to future space applications. The NASA Near Space Network (NSN) program is also beginning to investigate high data rate DTN for Earth-to-lunar distances, with the establishment of a notional Lunar Network (“LunaNet”) as one of the program’s goals. These studies will pave the way for future interplanetary DTN communications, such as the establishment of space relays between Earth and Mars. Since interplanetary DTN links will be few in number, it is essential for interplanetary relays to sustain high data rates, since many concurrent flows must be aggregated to maximize capacity. Our studies with the HDTN and ION DTN implementations have proven that increasing the DTN Licklider Transmission Protocol (LTP) convergence layer segment size by using larger Internet Protocol (IP) packets and/or IP layer fragmentation provides a performance multiplier. Of equal importance, however, is reducing LTP retransmissions to the bare minimum, since each retransmission imparts delay on the order of twice the one-way light time (OWLT), which may be unacceptable for many applications. We specifically show that increasing the LTP segment size results in better DTN performance even when IP fragmentation is invoked. This result indicates an unfulfilled need for larger packet sizes and better IP fragmentation support in the Internet. Since IP fragmentation only supports packet sizes up to 64 KB, however, an unfulfilled need exists for still larger packets, which suggests that new link models capable of carrying true jumbo-sized frames with high integrity are necessary. Our current research therefore focuses on combining larger packet sizes with Forward Error Correction (FEC) codes over high-data-rate, long-delay links. Using a new service known as IP parcels and Advanced Jumbos, these larger packets will be conveyed over long-delay links, allowing the final destination to use the FEC code to repair any damage to the (potentially very large) packet payload. This will greatly reduce the incidence of retransmissions due to corruption even over links with higher bit error rates while sustaining higher data rates. In this paper, we discuss our approach to performance maximization over interplanetary DTN space relay links.
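        As a back-of-envelope illustration of the trade described above (assumed numbers, not HDTN or ION measurements), the sketch below amortizes per-segment overhead over larger LTP segments and charges any retransmission round roughly two one-way light times.
        ```python
        # Rough LTP block delivery model: serialization plus per-segment overhead, one
        # OWLT for the first pass, and 2 x OWLT per retransmission round. The link
        # rate, overheads, and OWLT are hypothetical deep-space-like values.
        OWLT = 8 * 60.0                    # one-way light time, s (hypothetical)
        RATE = 100e6                       # link rate, bit/s
        BLOCK = 1_000_000_000              # LTP block size, bytes
        HDR = 60                           # per-segment header bytes (IP/UDP/LTP), hypothetical
        PER_SEG_CPU = 50e-6                # per-segment processing time, s, hypothetical

        def block_time(seg_bytes, retx_rounds):
            n = -(-BLOCK // seg_bytes)                         # number of segments (ceiling division)
            wire_bits = 8 * (BLOCK + n * HDR)
            return wire_bits / RATE + n * PER_SEG_CPU + OWLT + retx_rounds * 2 * OWLT

        for seg in (1_500, 9_000, 64_000, 1_000_000):
            g0 = 8 * BLOCK / block_time(seg, 0) / 1e6
            g1 = 8 * BLOCK / block_time(seg, 1) / 1e6
            print(f"segment {seg:>9} B: goodput {g0:5.1f} Mb/s (no retx), {g1:5.1f} Mb/s (one retx round)")
        ```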
      • 04.0203 Precision Time Protocol at Picosecond Scale over Asynchronous Ethernet
        Alexander Utter (The Aerospace Corporation), Joseph Zales (Aerospace Corporation) Presentation: Alexander Utter - Sunday, March 2nd, 05:20 PM - Gallatin
        Precise time-distribution is an important prerequisite for many applications in navigation and sensing. The IEEE-1588 Precision Time Protocol (PTP) allows time-distribution over wired computer networks. State-of-the-art PTP implementations such as the White Rabbit Project can achieve picosecond-scale clock synchronization over many kilometers, but they require the use of Synchronous Ethernet (SyncE) and other specialized technology that limits adoption. This paper describes an alternative PTP implementation using a novel digital timestamping circuit that can operate with multiple asynchronous clocks. Under laboratory conditions, the open-source prototype has demonstrated end-to-end clock synchronization precision of 30 ps-rms, using PTP over a conventional asynchronous Ethernet network.
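        For reference, the offset and path-delay arithmetic at the core of any IEEE-1588 exchange is sketched below with made-up timestamps; this is the standard two-step calculation under a symmetric-path assumption, not the paper's asynchronous timestamping circuit.
        ```python
        # Standard PTP offset and path-delay calculation from the four timestamps of a
        # Sync / Delay_Req exchange. Timestamps below are hypothetical, in nanoseconds.
        def ptp_offset_and_delay(t1, t2, t3, t4):
            """t1: master sends Sync, t2: slave receives it, t3: slave sends Delay_Req, t4: master receives it."""
            offset = ((t2 - t1) - (t4 - t3)) / 2.0      # slave clock minus master clock
            delay = ((t2 - t1) + (t4 - t3)) / 2.0       # one-way path delay (symmetric-path assumption)
            return offset, delay

        t1, t2 = 1_000_000, 1_000_650                   # 400 ns path + 250 ns slave offset
        t3, t4 = 1_002_000, 1_002_150                   # 400 ns path - 250 ns slave offset
        print(ptp_offset_and_delay(t1, t2, t3, t4))     # -> (250.0, 400.0)
        ```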
      • 04.0204 Security Challenges in Space-based Delay Tolerant Networks
        Mohammad Salam (Chicago State University), Robert Short (NASA Glenn Research Center) Presentation: Mohammad Salam - Sunday, March 2nd, 09:00 PM - Gallatin
        Space-based Delay Tolerant Networks (DTNs) have emerged as vital frameworks for communication in extreme environments, such as interplanetary missions, deep-space exploration, and satellite constellations. Unlike traditional terrestrial networks, space-based DTNs operate under unique constraints, including long communication delays, intermittent connectivity, and rapidly evolving network topologies. These factors introduce formidable challenges to maintaining secure and reliable communication across vast distances. As space exploration advances and relies more heavily on autonomous systems and long-duration missions, addressing the security challenges inherent in space-based DTNs becomes imperative. This paper delves into the critical security issues confronting space-based DTNs and explores innovative solutions to mitigate these risks. A primary security concern in space-based DTNs is data authentication. Ensuring the integrity and authenticity of data across a network that experiences long delays, where direct communication between sender and receiver may be sporadic or unavailable, poses a significant challenge. Authentication protocols designed for terrestrial networks are often inadequate in this environment, as they rely on continuous or near-instant communication. For space-based DTNs, delay-tolerant authentication schemes must be developed to support secure verification over prolonged periods, without requiring immediate responses. Another pressing concern is data confidentiality. Given the critical nature of space communication, which frequently involves sensitive, mission-critical, or classified information, safeguarding data from eavesdropping or unauthorized access is essential. Data in space-based DTNs often travels vast distances and may pass through numerous intermediate nodes or storage points before reaching its intended destination. This provides opportunities for interception and unauthorized access. While end-to-end encryption is a common solution, traditional encryption methods can be resource-intensive, making them unsuitable for space-based systems with constrained computational power and energy. Ensuring data integrity is another significant issue. Data stored temporarily at intermediate nodes in space-based DTNs is vulnerable to tampering or corruption during transit. Technologies such as hash chains and blockchain-based integrity verification are gaining traction as solutions to maintain data integrity in these networks. Trust management represents an additional challenge. In terrestrial networks, trust is typically established through continuous interactions between nodes. However, in space-based DTNs, where nodes communicate intermittently, establishing and maintaining trust can be problematic. Reputation-based trust models, in which nodes build trust based on their history of interactions, are being developed to address this challenge. Finally, denial-of-service (DoS) attacks present a serious risk in space-based DTNs, where bandwidth and resources are extremely limited. Malicious actors could exploit these limitations to overwhelm the network with traffic, disrupting mission-critical communications. Rate-limiting algorithms and priority-based traffic management systems are being designed to ensure that essential data is prioritized and malicious traffic is curtailed.
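        To make the hash-chain idea mentioned above concrete, here is a minimal stdlib-only sketch (illustrative, not a proposed DTN mechanism): each stored bundle is chained to the digest of its predecessor, so tampering at any intermediate custody point breaks verification at the destination.
        ```python
        # Minimal hash-chain integrity check over a sequence of stored-and-forwarded
        # bundles. The bundle payloads and genesis value are arbitrary examples.
        import hashlib

        def chain_digest(bundles, genesis=b"\x00" * 32):
            d = genesis
            for payload in bundles:
                d = hashlib.sha256(d + payload).digest()   # each link commits to all previous ones
            return d

        bundles = [b"telemetry-frame-1", b"telemetry-frame-2", b"command-ack"]
        expected = chain_digest(bundles)                   # computed by the sender, delivered via a protected path

        tampered = [b"telemetry-frame-1", b"telemetry-frame-X", b"command-ack"]
        print("intact  :", chain_digest(bundles) == expected)    # True
        print("tampered:", chain_digest(tampered) == expected)   # False
        ```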
      • 04.0205 Multicast Communications with Uplink Broadcast in a Proliferated Low Earth Orbit Satellite Network
        Jun Sun (MIT Lincoln Lab) Presentation: Jun Sun - Sunday, March 2nd, 09:25 PM - Gallatin
        A proliferated LEO satellite network can provide many resilient communication services that are essential to the military. The ubiquity of its presence, the diversity of the provided data paths, and the improved signal-to-noise ratio all contribute to the enormous interest in pLEO network systems. In this paper, we propose a novel multicast tree construction protocol for a group of ground users communicating through a pLEO satellite network. The proposed protocol is jamming resistant in that it takes advantage of the multiple satellites available, hence multiple paths, to receive a ground user’s data. Adversaries would need to jam all available paths to prevent the reception of user data at the satellites. This path diversity, however, potentially results in the reception of redundant packets at multiple satellites. To enable the efficient forwarding of this data to the destination users, our protocol builds an efficient multicast tree. With redundant packet checks and an efficient multicast tree, we reduce the number of satellite transmissions required to disseminate a packet to a geographically distributed group of users. Depending on the size of the area in which the users are located, we construct a single multicast tree or two multicast trees using satellites moving in the same direction. The use of two multicast trees serves groups of users in concentrated location distributions more efficiently, while a single tree is more efficient for distributing traffic to a group of users with widely dispersed locations.
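        A toy sketch of the tree-construction step (not the proposed protocol, and with a made-up six-satellite topology): after duplicate uplink copies have been suppressed at the ingress satellite, a BFS shortest-path tree determines how many satellite transmissions are needed to reach every destination once.
        ```python
        # Build a multicast tree over a small satellite graph with BFS shortest paths
        # and count the transmissions needed to reach all destination satellites once.
        from collections import deque

        graph = {                                   # hypothetical inter-satellite links
            "S1": ["S2", "S4"], "S2": ["S1", "S3", "S5"], "S3": ["S2", "S6"],
            "S4": ["S1", "S5"], "S5": ["S2", "S4", "S6"], "S6": ["S3", "S5"],
        }

        def bfs_parents(src):
            parent, seen, q = {}, {src}, deque([src])
            while q:
                u = q.popleft()
                for v in graph[u]:
                    if v not in seen:
                        seen.add(v)
                        parent[v] = u
                        q.append(v)
            return parent

        def multicast_tree(src, dests):
            parent, edges = bfs_parents(src), set()
            for d in dests:
                while d != src:                      # walk each destination back to the ingress satellite
                    edges.add((parent[d], d))
                    d = parent[d]
            return edges

        tree = multicast_tree("S1", {"S3", "S5", "S6"})
        print(sorted(tree), "->", len(tree), "satellite transmissions")
        ```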
      • 04.0206 PRABR: Integrating Primary and Backup Routing in pLEO Satellite Networks
        Collin Brady () Presentation: Collin Brady - Sunday, March 2nd, 09:50 PM - Gallatin
        As Proliferated Low Earth Orbit (pLEO) satellite constellations increase in popularity for providing low latency global communications networking, their vulnerability to disruptive threats to deny use is also growing. Existing routing schemes for pLEO are either centralized terrestrial schemes that typically rely on knowledge of link and node states in the entire constellation, or distributed low-complexity space-based schemes that rely on factors such as orbit geometry or geographic location of endpoints. In this paper, we propose a hybrid routing scheme called Primary Routing Aware Backup Routing (PRABR). PRABR is a framework to incorporate a centrally planned primary routing algorithm alongside a distributed backup routing algorithm and intelligently switch between them based on local observations of a router. We use the ns-3 based Hypatia simulator to show that PRABR significantly outperforms primary routing alone under satellite failures while maintaining the latency minimizing properties over the backup routing algorithm alone. We also present several example routes and discuss the effects of satellite failures and PRABR on such routes.
    • Tommaso Rossi (University of Rome Tor Vergata) & Claudio Sacchi (University of Trento)
      • 04.03 Terrestrial-Non-Terrestrial Network integration in the framework of emerging 6G visions: the European perspective of the ITA-NTN project
        Claudio Sacchi Presentation: Claudio Sacchi - - Gallatin
      • 04.0301 Airborne Quantum Key Distribution with Boundary Layer Effects and Mach Number
        Shamreen Banu Sheik Sulaiman (), Mayukh Singha (), Sonai Biswas (Technische Universität Dresden), Swaraj Nande (Technical University Dresden), Riccardo Bassoli (Technische Universität Dresden) Presentation: Shamreen Banu Sheik Sulaiman - Monday, March 3rd, 08:55 AM - Gallatin
        The problems associated with quantum key distribution (QKD) over large distances and in free space may be solved by using aircraft platforms. Nevertheless, the boundary layer that surrounds fast-moving aircraft can cause erratic changes and disruptions to the transmitted photons, altering their mode. When the flight speed exceeds Mach 0.3, a randomly dispersed boundary layer always surrounds the aircraft surface, resulting in random wavefront aberration, jitter, and additional intensity attenuation of the transmitted photons. In this paper, we suggest a method for evaluating air-to-air QKD performance that takes into account boundary layer effects and the influence of Mach number. Our thorough examination of aerial quantum key distribution (QKD) performance can inform future airborne quantum communication concepts.
      • 04.0303 Evaluating Performance in LEO Satellite Communication Networks: NS3-Based Simulation Study
        Nour Badini (), Fabio Patrone (University of Genoa), Arianna Miraval Zanon (), Mario Marchese (University of Genoa, Italy) Presentation: Nour Badini - Monday, March 3rd, 09:20 AM - Gallatin
        Next-generation communication technologies aim to meet the evolving demands of users, emphasizing seamless access to high-quality services regardless of location or time constraints. However, conventional ground-based networks face limitations in providing Internet connectivity to users on several moving platforms, such as airplanes, ships, and trains, as well as in remote areas where building an extensive terrestrial infrastructure is economically unfeasible. To address these challenges, researchers and standardization organizations, such as the ITU, ETSI, ESA, and 3GPP, are exploring the integration of satellites into communication systems. Satellites are appealing thanks to their unique capabilities in delivering reliable connectivity across diverse geographical regions, independent of environmental factors and events, such as climate or natural disasters. However, thorough investigation and verification are essential to address several aspects and to allow testing developed solutions in a controlled environment before they are integrated into operational systems. Accurate simulation tools play a crucial role in modeling propagation environments and network dynamics to ensure effective deployment and optimization of satellite communication networks. This paper presents an NS3-based simulation tool that encompasses several key functionalities to ensure accurate modeling of communications in Satellite-Terrestrial Integrated Networks (STIN). Among the offered functionalities, the simulator includes a mobility model based on the NORAD Simplified General Perturbations 4 (SGP4) mathematical model to simulate Low Earth Orbit (LEO) satellite movements, mobility models tailored for ground users (pedestrians, vehicles, trains, and airplanes), and channel models, based on the 3GPP's TR 38.811, representing different environments (dense urban, urban, sub-urban, and rural) including atmospheric absorption and clutter effects to provide accurate estimations of signal propagation and Signal-to-Noise Ratio (SNR) calculations. This paper also presents a comprehensive study evaluating the performance in different network settings with a focus on satellite-to-ground SNR and maximum link capacity calculations across Ka- and S-bands, showcasing the effectiveness of using NS3 as a simulation platform to assess performance in LEO satellite networks. By integrating factors such as node positioning, channel modeling, and environmental influences, we offer valuable insights into the design and optimization of satellite communication systems for diverse deployment scenarios, aiming to narrow the gap between simulation and real-world deployment and paving the path for more efficient and resilient satellite communication networks in the future.
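        The flavor of satellite-to-ground SNR and capacity estimate described above can be sketched in a few lines (hypothetical link parameters, not values from TR 38.811 or the simulator itself): free-space path loss plus a lumped atmospheric/clutter margin, then Shannon capacity.
        ```python
        # Link-budget style SNR and Shannon capacity for a LEO downlink. EIRP, G/T,
        # slant range, and the extra-loss margin are hypothetical example values.
        import math

        def fspl_db(d_m, f_hz):
            return 20 * math.log10(4 * math.pi * d_m * f_hz / 3e8)

        def snr_db(eirp_dbw, g_over_t_dbk, d_m, f_hz, bw_hz, extra_loss_db=0.0):
            k_dbw = -228.6                                  # Boltzmann's constant, dBW/K/Hz
            return (eirp_dbw + g_over_t_dbk - fspl_db(d_m, f_hz)
                    - extra_loss_db - k_dbw - 10 * math.log10(bw_hz))

        def shannon_bps(bw_hz, snr_db_value):
            return bw_hz * math.log2(1 + 10 ** (snr_db_value / 10))

        d = 1_000e3                                         # slant range, m
        for band, f, bw in (("S-band ", 2.2e9, 20e6), ("Ka-band", 20e9, 400e6)):
            s = snr_db(eirp_dbw=45.0, g_over_t_dbk=5.0, d_m=d, f_hz=f, bw_hz=bw, extra_loss_db=3.0)
            print(f"{band}: SNR {s:5.1f} dB, Shannon capacity {shannon_bps(bw, s) / 1e6:7.1f} Mb/s")
        ```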
      • 04.0304 Federated Learning and MEC for Disaggregated RAN Monitoring in the 5G Non-Terrestrial Networks
        Henok Tsegaye (University of New Mexico), Claudio Sacchi (University of Trento), Yonatan melese Worku (), Petro Tshakwanda (University of New Mexico) Presentation: Yonatan melese Worku - Monday, March 3rd, 09:45 AM - Gallatin
        The disaggregation of the Next Generation Radio Access Network (NG-RAN) is becoming essential for optimizing resource consumption in 5G Non-Terrestrial Networks (NTN), especially within Low Earth Orbit (LEO) satellite constellations operating in regenerative mode. By splitting the access network into Central Unit (CU), Distributed Unit (DU), and Radio Unit (RU), satellite payload design becomes more manageable, addressing the capacity constraints of non-terrestrial networks. However, the disaggregated NG-RAN architecture, satellite mobility, and limited processing power present significant challenges in ensuring network resilience and meeting Quality of Service (QoS) requirements. This paper proposes an innovative framework that combines Federated Graph Neural Network (FedGNN) with Multi-access Edge Computing (MEC) to monitor disaggregated NG-RAN components in 5G NTN environments. FedGNN enables decentralized model training across satellite and terrestrial nodes without sharing sensitive data, ensuring privacy and reducing latency. MEC brings computation closer to the network edge, facilitating real-time processing and efficient traffic routing decisions. Using a Graph Convolution Network (GCN), the framework can optimize performance, detect faults in network nodes, and dynamically adapt traffic routes across the LEO satellite-terrestrial network in real-time. We implement this framework within a Kubernetes cluster featuring terrestrial and satellite edge nodes to monitor the disaggregated NG-RAN efficiently. The deployment includes local GCN models that detect link failures and optimize traffic paths, improving network resilience. The proposed approach significantly reduces the burden on satellite processing while maintaining scalability and flexibility in managing network resources. We evaluate the framework’s latency, throughput, and fault detection accuracy through extensive simulation. The results demonstrate that integrating FedGNN and MEC enhances the efficiency and resilience of 5G NTN networks, making it well-suited for dynamic, resource-constrained LEO satellite constellations.
      • 04.0305 Reliable Heterogeneous Multi-Node Quantum Networks for Future 6G Communication
        Abdelkrim Menina (TU Dresden), Riccardo Bassoli (Technische Universität Dresden) Presentation: Abdelkrim Menina - Monday, March 3rd, 10:10 AM - Gallatin
        Quantum technologies are promising key elements for the future deployment of 6G three-dimensional networks. As multiple quantum physical platforms develop in parallel, a challenge arises to support a heterogeneous structure combining discrete and continuous variables entanglement distribution protocols. This combination is essential for achieving a hybrid optical communication architecture between metropolitan and space segments. Here we present a new reliable theoretical design based on hybrid discrete and continuous systems transferring encoded coherent state through this heterogeneous multi-node entanglement distribution quantum network implementable in the future 6G communication.
      • 04.0306 Efficient Message-Passing Detection for Multi-Satellite Systems Using OTFS Modulation
        Elisa Conti (University of Parma), Amina Piemontese () Presentation: Elisa Conti - Sunday, March 2nd, 02:40 PM - Electronic Presentation Hall
        In this study, we analyze a low Earth orbit multi-satellite communication system using orthogonal time frequency space (OTFS) modulation. In this context, the choice of the OTFS modulation is driven by its robustness against high Doppler shifts and its suitability for time-frequency selective channels. This scenario is motivated by the need for a more stable and reliable system, which can be achieved through diversity, i.e., allowing each user to be jointly served by multiple satellites. This is particularly beneficial in situations where the line-of-sight link between the satellite and the user terminal (UT) is obstructed by physical impairments on the ground. However, signals from diverse satellites typically reach the UT at distinct time epochs and are also affected by different Doppler shifts and phases. Therefore, while higher diversity can significantly improve performance, the UT must effectively combine the received information to benefit from multi-satellite configurations. This paper focuses on soft-output data detection algorithms and proposes a novel message passing (MP)-based approach that leverages both the channel sparsity in the Delay-Doppler domain and the particular structure assumed by the equivalent channel matrix Ψ, derived from the compact block-wise input-output relation, according to the Forney observation model for linear modulations over AWGN channels. The proposed detector works on a processed version of Ψ, referred to as G, where a number Nd of main diagonals, corresponding to the most significant interfering terms, are identified. Based on the reduced version of the Hermitian matrix G, containing Nd nonzero diagonals in the lower triangular submatrix, the proposed MP solution operates on a factor graph composed of Nd subgraphs. Each subgraph consists, in turn, of a number of parallel branches, each separately implementing the well-known BCJR algorithm with unit memory. The computations on the branches within the same subgraph can be performed simultaneously, allowing a high level of parallelization. The various subgraphs exchange information according to the sum-product algorithm rules, and convergence is achieved after a low number of iterations, typically three. The proposed approach is the first to significantly reduce complexity by acting on both the choice of interferers, organized in diagonals, and the schedule, prioritizing the strongest elements. The number of selected diagonals Nd is optimized to achieve the best trade-off between complexity and performance. We assess the detector's performance by evaluating its pragmatic capacity, i.e., the achievable rate of the channel induced by the signal constellation and the detector soft-output. The simulated satellite link scenarios are based on the Starlink constellation, considering various numbers of satellites in visibility with the UT. For comparison purposes, we tested the linear minimum mean squared error (LMMSE) receiver and a low-complexity algorithm from the existing literature that works on Ψ. Numerical results demonstrate that multi-satellite diversity effectively enhances the system performance and that the proposed solution reaches and, in some scenarios, outperforms the LMMSE detector with a considerably lower complexity. Moreover, for a given constraint on the computational load, significant performance advantages are observed with respect to the algorithm operating on Ψ.
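        The diagonal-pruning step described above can be illustrated with a few lines of NumPy (assumed notation and a random stand-in matrix, not the authors' detector): rank the lower-triangular diagonals of the Hermitian matrix G by energy and retain only the Nd strongest interfering diagonals.
        ```python
        # Keep the Nd most energetic off-diagonals of a Hermitian matrix G and discard
        # the rest. Psi here is a random stand-in, not an actual OTFS channel matrix.
        import numpy as np

        rng = np.random.default_rng(0)
        N, Nd = 16, 3
        Psi = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
        G = Psi.conj().T @ Psi                                        # Hermitian Gram matrix

        energy = {k: np.sum(np.abs(np.diag(G, -k)) ** 2) for k in range(1, N)}
        kept = sorted(energy, key=energy.get, reverse=True)[:Nd]      # strongest lower diagonals

        G_reduced = np.diag(np.diag(G))
        for k in kept:
            d = np.diag(G, -k)
            G_reduced = G_reduced + np.diag(d, -k) + np.diag(d.conj(), k)

        retained = np.linalg.norm(G_reduced) / np.linalg.norm(G)
        print("kept diagonals:", sorted(kept), f"- Frobenius norm retained: {retained:.2%}")
        ```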
    • David Copeland (Johns Hopkins University/Applied Physics Laboratory) & Patrick Stadter (The Aerospace Corporation)
      • 04.0401 Acquiring Precision Doppler Measurements with Juno's Ka-band Translator for Increased Science
        Dustin Buccino (Jet Propulsion Laboratory), Kamal Oudrhiri (Jet Propulsion Laboratory), Marzia Parisi (Jet Propulsion Laboratory), Ryan Park () Presentation: Dustin Buccino - Monday, March 3rd, 10:35 AM - Gallatin
        One of the instruments onboard NASA’s Juno spacecraft in orbit around Jupiter is a Ka-band Translator (KaT), which provides the capability to add a second radio link between the Juno spacecraft and NASA’s Deep Space Network antennas. Juno’s KaT was provided to NASA’s Juno mission by the Italian Space Agency (ASI) and built by Thales Alenia Space-Italy. The system consists of the Ka-band Translator instrument, which receives an uplink Ka-band carrier radio signal from the DSN and phase-coherently re-transmits the signal back to the DSN, and a Ka-band Solid State Power Amplifier (Ka-SSPA), which amplifies the downlink signal through Juno’s High Gain Antenna; together, these are known as the KaTS. The KaT was designed to acquire precision Doppler measurements, which are used in gravity science analysis to probe Jupiter’s interior structure. In the prime mission, Juno was able to utilize the KaT to achieve <10 micron/second Doppler measurement precision at a 60-s integration time, leading to discoveries about Jupiter’s asymmetric interior structure, the depths of the zonal flows, and more. In the extended mission, the KaTS was applied to gravity science and radio occultation measurements of Ganymede, Europa, and Io along with continuing investigations of Jupiter’s atmosphere, ionosphere, and plasma features. This work will describe the KaTS throughout the mission and the science return it has enabled.
      • 04.0402 Hybrid Lunar Satellite and Cooperative Surface Navigation: A Distributed Estimation Perspective
        Robert Poehlmann (German Aerospace Center (DLR)), Jan Gerhards (University of Stuttgart), Siwei Zhang (German Aerospace Center (DLR)), Emanuel Staudinger (German Aerospace Center - DLR), Christian Becker (University of Stuttgart) Presentation: Robert Poehlmann - Monday, March 3rd, 11:00 AM - Gallatin
        In the coming years, a large number of lunar surface missions are planned by both the public and private sectors. A key prerequisite for many missions is accurate and reliable position, navigation and timing (PNT), e.g., to enable precision landing, autonomous robotic exploration, and localization of astronauts. To support a growing number of missions, lunar satellites operating under the LunaNet framework shall provide communications and PNT for orbit and surface users. Especially in an early phase, the number of satellites will be very limited. To ensure high availability and high accuracy, a hybrid of satellite navigation and cooperative radio navigation on the lunar surface has been suggested. For cooperative radio navigation, radio signals exchanged among users on the lunar surface are used. Surface radio links among users can either be provided by a dedicated radio system or within a 4G/5G lunar mobile network. While the benefit of combining satellite navigation and cooperative surface navigation for the Moon has been shown by a theoretical study, how such a hybrid system could be implemented has not been fully analyzed yet. Estimation algorithms must account for the distributed nature of the system, where measurements become available at different nodes and communication among the nodes needs to be modeled appropriately. Such algorithms have not yet been proposed. In the final paper, we define a unified system and signal model, including both satellites and surface nodes. Then, we introduce one centralized and three decentralized estimation algorithms for a hybrid lunar satellite and cooperative surface radio navigation system. The centralized algorithm runs on a single user node, e.g., the lander, whereas the decentralized algorithms run on each user node. We consider decentralized estimation algorithms based on broadcasting measurements, consensus, and equivalent ranging variance, representing different concepts of distributed estimation. We further introduce a simulation framework to assess the performance of lunar surface navigation algorithms. Based on extensive simulation and theoretical results, all algorithms are evaluated with respect to the functional requirement of accuracy and the non-functional requirements of flexibility, scalability, and robustness. The centralized estimation algorithm is conceptually simple, but suffers from low robustness due to a single point of failure. We show that using a distributed estimation algorithm based on consensus is unfavorable due to low flexibility and scalability. Based on our results, we recommend the algorithm based on broadcasting measurements for best accuracy and the algorithm based on equivalent ranging variance for best scalability. Furthermore, our simulation results confirm theoretical results from the literature that hybrid satellite and cooperative surface navigation considerably outperforms a standalone lunar satellite navigation system. Finally, we provide a detailed analysis of how static nodes on the lunar surface can considerably improve the position accuracy of moving nodes. Specifically, we show how static nodes make it possible to bridge periods of low satellite availability.
      • 04.0404 Implementation of Regenerative Ranging for Low SNR Scenarios for Software-Defined-Radios
        Nirbhay Tyagi (California Institute of Technology), Lindsay White (JPL), Dennis Ogbe (Jet Propulsion Laboratory), Zaid Towfic (Jet Propulsion Laboratory) Presentation: Lindsay White - Monday, March 3rd, 11:25 AM - Gallatin
        Robotic spacecraft exploration is heavily dependent on communications and navigation. NASA spacecraft connect by radio links to Deep Space Network (DSN) antennas in California, Australia, and Spain to support communications as well as navigation functions, including ranging. Range measurements between the spacecraft and DSN are a significant, occasionally crucial, aid in navigation. To ensure that spacecraft and ground resources are used as efficiently as possible, communications must typically be performed together with ranging for a given uplink and downlink configuration. Regenerative ranging allows less power to be dedicated to the ranging signal than standard (non-regenerative) ranging and is compatible with the presence of commanding and telemetry. Additionally, since regenerative ranging does not dedicate power to re-transmit the uplink noise (as is done for traditional ranging), this extra power can be dedicated to telemetry, allowing the use of higher data rates. Various approaches to implement regenerative ranging have been examined. The easiest method is to obtain the symbol transitions and reproduce the code using a symbol tracking loop. Unfortunately, this technique suffers from the requirement for a high SNR, which is incompatible with the typical deep space use case for regenerative ranging. This paper evaluates a regenerative ranging subsystem utilizing a phase/delay locked loop and presents a performance analysis correlating ranging performance with SNR.
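        A much-simplified sketch of the regeneration idea (illustrative only, not the flight or SDR implementation): correlate a noisy received pseudo-noise ranging sequence against a clean local replica to recover the delay, which is what allows the clean code, rather than retransmitted uplink noise, to be sent back on the downlink.
        ```python
        # Recover the delay of a PN ranging code received at low SNR by correlating
        # against a local replica. Code length, delay, and noise level are made up.
        import numpy as np

        rng = np.random.default_rng(1)
        N = 4096
        code = rng.choice([-1.0, 1.0], size=N)                           # stand-in PN ranging code
        true_delay = 137                                                 # chips, hypothetical
        rx = np.roll(code, true_delay) + rng.normal(scale=3.0, size=N)   # roughly -9.5 dB per-chip SNR

        corr = np.array([np.dot(np.roll(code, d), rx) for d in range(N)])
        est = int(np.argmax(corr))
        print(f"estimated delay: {est} chips (true: {true_delay})")
        print(f"correlation peak / mean |corr|: {corr[est] / np.mean(np.abs(corr)):.1f}")
        ```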
    • Mazen Shihabi (Jet Propulsion Laboratory) & Jaime Esper (NASA - Goddard Space Flight Center)
      • 04.0501 Optimal Satellite Network Topology Design with Time-Dependent Traffic Demands
        David Williams Rogers (West Virginia University), Dongshik Won (TelePIX Co., Ltd. / Korea Advanced Institute of Science and Technology ), Dongwook Koh (Telepix), Kyungwoo Hong (), Hang Woon Lee (West Virginia University) Presentation: David Williams Rogers - Monday, March 3rd, 04:30 PM - Gallatin
        The use of satellite networks, with either radio frequency or optical links, has emerged as a solution to the digital divide problem. The literature typically approaches the design of satellite networks with metaheuristic algorithms, which do not guarantee convergence of the solution to the global optimum. Further, coverage is adopted as the main figure of merit, often neglecting the time-dependent traffic demands and the channel capacity of the links at the time of the network design. In this paper, we propose a mixed-integer linear programming optimization formulation that simultaneously determines the optimal satellite network configuration and its topology while satisfying the strict time-dependent traffic demands encoded by leveraging the concept of traffic matrices. We present a case study to demonstrate the applicability of the model by defining a heterogeneous set of gateways with time-varying traffic demands, radio-frequency links between satellites and gateways, and optical inter-satellite links. Further, the case study provides insight into the coupled relationship between the satellite network configuration, the astrodynamics, and the network topology.
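        A stdlib-only toy version of the design question posed above (hypothetical nodes, capacities, and demands; the paper itself uses a mixed-integer linear program, not this brute-force search): find the smallest set of candidate links whose bottleneck capacity still satisfies the worst-case entry of a time-varying traffic matrix.
        ```python
        # Brute-force search for the minimum-size link set that carries a single
        # gateway-to-gateway demand. All capacities and demands are hypothetical.
        from itertools import combinations

        candidate_links = {                          # undirected links with capacity in Gb/s
            ("GW1", "SAT1"): 10, ("GW1", "SAT2"): 10, ("SAT1", "SAT3"): 4,
            ("SAT2", "SAT3"): 8, ("SAT1", "SAT2"): 6, ("SAT3", "GW2"): 10,
        }
        demand_per_epoch = {"t0": 3, "t1": 7}        # GW1 -> GW2 traffic-matrix entries, Gb/s

        def best_bottleneck(links, src, dst):
            """Largest single-path bottleneck capacity from src to dst over the chosen links."""
            best, stack = 0, [(src, float("inf"), {src})]
            while stack:
                u, cap, seen = stack.pop()
                if u == dst:
                    best = max(best, cap)
                    continue
                for (a, b), c in links.items():
                    for x, y in ((a, b), (b, a)):
                        if x == u and y not in seen:
                            stack.append((y, min(cap, c), seen | {y}))
            return best

        def smallest_topology(src, dst, need):
            for k in range(1, len(candidate_links) + 1):
                for subset in combinations(candidate_links, k):
                    chosen = {e: candidate_links[e] for e in subset}
                    if best_bottleneck(chosen, src, dst) >= need:
                        return subset
            return None

        need = max(demand_per_epoch.values())
        print("worst-case demand:", need, "Gb/s")
        print("minimum topology:", sorted(smallest_topology("GW1", "GW2", need)))
        ```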
      • 04.0502 Onboard Processing for LunaNet Data Services
        Wesley Eddy (), Jon Verville (NASA Goddard Space Flight Center) Presentation: Jon Verville - Monday, March 3rd, 04:55 PM - Gallatin
        The LunaNet Interoperability Specification (LNIS) has been created and released to the public, enabling international commercial and government lunar systems to have a common baseline to work together in forming complex lunar mission networks. The LNIS specifically defines a set of data service protocols that offer multiple options for different types of network users and providers to work together. This includes real-time frame delivery services similar to those classically used by missions, as well as both real-time and store-and-forward networking that will be important as the number of mission users and complexity of lunar mission operations increases. The NASA Lunar Communications Relay and Navigation Systems (LCRNS) project requires lunar relay satellite services to implement onboard processing and networking capabilities in some ways similar to LEO mega-constellations that offer Internet access, but also going beyond those capabilities to meet unique lunar mission needs (e.g. store-and-forward services, LNIS messaging, etc.). Onboard processing will be necessary for a number of different protocols that are included in the LNIS, including support for data transfer over lunar proximity radio links based on CCSDS framing and CCSDS Encapsulation Packets, networking via IP and DTN Bundle Protocol, a suite of DTN Convergence Layer Adapters, and LunaNet messaging services. This paper provides an overview of our work in contributing to and defining the LNIS service interfaces and LCRNS requirements related to onboard processing for data services. This includes exploration of the key aspects of lunar networking concept of operations, technology trade studies, protocols for trunking, applicability and scope of different protocols, messaging, security considerations, and realistic implementation and deployment constraints. While the technology for new lunar relay networks can build upon the recent advances in LEO constellations, there are unique new challenges related to network management, onboard processing hardware/software system constraints, security, and quality of service for human spaceflight missions. As multiple international systems are planned to be put online in the coming years based on the LNIS and derived data services, this work will have a lasting impact that supports the evolution from single-vehicle user missions into scenarios where many surface and orbital assets are able to collaborate over multi-hop networks operated by diverse providers.
      • 04.0503 Investigation of Multipath Effects on Mars Relay Network Overflights
        Marc Sanchez Net (Jet Propulsion Laboratory), Emme Wiederhold (Jet Propulsion Laboratory), Ryan Mukai (Jet Propulsion Laboratory), Charles Lee (Jet Propulsion Laboratory), Neil Chamberlain () Presentation: Marc Sanchez Net - Monday, March 3rd, 09:00 PM - Gallatin
        The Mars Relay Network (MRN) is routinely used to increase data volume return from spacecraft on the Martian surface, including Mars 2020 (M2020) and the Mars Science Laboratory (MSL). The system performance, typically measured in daily returned data volume per vehicle, depends on several factors, including the time and geometry of the overflights, as well as the orientation and state of the vehicles at both ends of the proximity link. In this paper, we focus on understanding the effect of terrain occlusions and multipath fading on the performance of return proximity links. Using historical data, we first show that relay passes at low elevation angles experience a clear data volume degradation, and result in returned daily data volumes that are more difficult to predict ahead of time during the relay planning phase. Motivated by these findings, we describe a new method to include terrain effects in the relay planning phase, i.e., while selecting certain overflights as relay passes. We also evaluate different metrics to use as a predictor of multipath effects, from simple terrain occlusion, to computer-based approaches that combine ray tracing and electromagnetic simulations to estimate the reflected electric fields. Acknowledgments The research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004).
      • 04.0504 Lunar Inter-Spacecraft Optical Communicator
        Jose Velazco (Chascii Inc.) Presentation: Jose Velazco - Sunday, March 2nd, 09:00 PM - Electronic Presentation Hall
        The inter-spacecraft omnidirectional optical communicator (ISOC) is currently being developed by Chascii to provide fast connectivity and navigation information to small spacecraft forming a swarm or a constellation in lunar space. The lunar ISOC operates at 1550 nm and employs a dodecahedron body holding 6 optical telescopes and 20 external arrays of detectors for angle-of-arrival determination. In addition, the ISOC includes 6 fast avalanche photodetectors (each furnished with external gain optics) for high data rate connectivity. The ISOC should be suitable for distances ranging from a few kilometers to a few thousand kilometers. It will provide full sky (4π steradian) coverage and gigabit connectivity among smallsats forming a constellation around the moon. It will also provide continuous positional information among these spacecraft including bearing, elevation, and range. We also expect the ISOC to provide fast low-latency connectivity to assets on the surface of the moon such as landers, rovers, instruments, and astronauts. Chascii is building a lunar ISOC including all its transceivers, optics, and processing units. We are developing the ISOC as a key candidate to enable LunaNet. In this paper, we will present the development status of the ISOC’s transceivers as well as link budget calculations for operations around the moon using pulse-position modulation. We will present our ISOC development roadmap that includes LEO, GEO and lunar missions in the 2025-2028 time frame. We will also discuss the use of Delay-Tolerant Networking in the ISOC processor to achieve secure and encrypted connectivity around the moon. We believe the ISOC, once fully developed, will provide commercial, high-data rate connectivity to future scientific, military, and commercial missions around lunar space and beyond.
      • 04.0505 A Tri Band Spherical Mesh Antenna for Lunar Communication and PNT Networks
        Aman Chandra (FreeFall Aerospace), Terrance Pat (The University of Arizona) Presentation: Aman Chandra - Monday, March 3rd, 09:25 PM - Gallatin
        A robust lunar communication network will be fundamental to enable sustained exploration and habitation of the moon. As launches to cis-lunar space and the lunar south pole pick up pace, there is a need for low SWAP, high gain steerable antenna systems to service multiple applications including direct to Earth communication relays and cis lunar PNT. Freefall Aerospace in collaboration with the University of Arizona has been developing ultra lightweight and deployable spherical reflector technology with a tri-band feed to serve at S, X and Ka-band. The spherical shape of this system allows efficient beam steering at very low power and no gain reduction. In this paper, we describe the design of a 1.5 meter deployable antenna and tri-band feed system that can be packaged and deployed from small satellite payload volumes. This paper discusses structural, RF and thermal design considerations and presents a multi-objective optimization approach towards reflector and feed tolerancing. Performance at each band has been simulated based on a deployable additively manufactured feed unit optimized for DSN bands. We discuss key challenges and describe further steps towards realizing a flight ready system.
    • Alessandra Babuscia (NASA Jet Propulsion Laboratory) & Kar Ming Cheung (Jet Propulsion Laboratory)
      • 04.0702 Testbed for Modulating Retroreflectors Enabled Passive Optical Communications
        Lin Yi (Jet Propulsion Laboratory, California Institute of Technology), Jeremy Schumacher (NASA Jet Propulsion Lab) Presentation: Lin Yi - Thursday, March 6th, 08:30 AM - Gallatin
        A “laserless” optical communication transceiver (also known as a “modulating retro-reflector”) uses a wide-angle retroreflector and surface-normal electro-absorptive quantum-well modulators to efficiently encode information onto the optical beam, which is reflected along the exact same path of incidence. This is particularly attractive for planetary mission concepts using a highly asymmetric resource-allocation architecture; examples include surface missions to the Moon, Mars, and other planetary destinations. A key challenge is the wide temperature range over which the transceiver needs to operate without high-power active temperature control. This is particularly challenging for ground prototyping work, where temperatures as low as 100 K (lunar night) must be reproduced in the ambient laboratory environment. We designed and constructed a laboratory testbed that can simulate the required operating temperature range for the passive optical communication transceiver, including the optics, quantum-well modulator, and necessary electronics. The testbed also includes a design that can work with both continuous-wave lasers and optical frequency combs (pulsed lasers) as the interrogating light sources.
      • 04.0704 Orbit Determination and Time Synchronization for Future Mars Relay and Navigation Constellation
        Keidai Iiyama (Stanford University), William Jun (NASA Jet Propulsion Laboratory), Sriramya Bhamidipati (NASA Jet Propulsion Lab), Grace Gao (Stanford University ), Kar Ming Cheung (Jet Propulsion Laboratory) Presentation: William Jun - Thursday, March 6th, 08:55 AM - Gallatin
        The current Mars relay network (MRN), which utilizes the combination of NASA and ESA orbiters to transfer data from Mars surface assets, greatly contributed to the success of past scientific missions. However, these orbiters are now operating beyond primary design lifetimes, and were not principally designed for relay services. To enable future human exploration on Mars, it is necessary to develop a dedicated network of satellites that provide both communication relay and positioning, navigation, and timing (PNT) service in Mars' orbit and on its surface. Additionally, it is desirable to operate the constellation semi-autonomously and reduce reliance on the oversubscribed Deep Space Network (DSN). In the paper, we first address the design of the next-generation Mars constellation with a focus on the following three objectives: coverage between +-30 degrees latitude, communication data volume, and surface user PNT performance. Previous studies on Mars constellation design have addressed portions of these objectives, but none have addressed all of the three objectives simultaneously. To address this gap, our paper conducts an analysis on the coverage, communication data volume, and PNT performance for a notional constellation comprising five satellites: two orbiters at areostationary orbit and three orbiters in inclined repeating ground track orbits at lower altitudes. We propose a candidate constellation that can balance these three objectives effectively. We also analyze the sensitivity of altitude and inclinations of the inclined orbits to the three objectives. There is a trade-off because increasing the inclination enhances the navigation accuracy but reduces coverage, while increasing the altitude improves coverage but diminishes data volume. Second, we propose a semi-autonomous orbit determination and time synchronization (ODTS) framework for the MRN orbiters. Our proposed approach integrates three different methodologies: DSN tracking, inter-satellite links (ISLs), and links with a Mars surface station (MSS). The orbiters will utilize either dual one-way links or two-way radio-frequency or optical links to other orbiters and the MSSs to obtain range, range-rate, and bearing angle observables. Within this network connected by ISLs, the surface stations serve as anchor nodes since their position and clock offset are known very precisely. Therefore, this approach minimizes the use of DSN, limiting its monitoring to a portion of the network at shorter time windows. In this paper, we investigate two approaches for processing the obtained measurement data for semi-autonomous ODTS: 1) downlinking the collected ISL measurement data to MSS and processing them in batches, and 2) processing the measurements in a distributed manner at each node (orbiters and MSS). The first approach can provide a more accurate solution, while the latter approach can lower the communication burden. We demonstrate the effectiveness of the proposed ODTS approach in the proposed constellation in the first part of this paper.
      • 04.0705 A Comparison of Navigation Methods Enabled by a Deep Space Relay Architecture
        Paul Carter (Jet Propulsion Laboratory), Kar Ming Cheung (Jet Propulsion Laboratory), William Jun (NASA Jet Propulsion Laboratory) Presentation: Paul Carter - Thursday, March 6th, 09:20 AM - Gallatin
        A deep space relay architecture for communication and navigation was developed in two previous papers. The baseline architecture, consisting of a Mars-leading and a Mars-trailing relay, was designed to mitigate the Mars conjunction problem for optical communication while enabling deep space trilateration for inner solar system users. An expanded version of the architecture added a Mars halo relay, which improved the architecture's data throughput and navigation potential. This paper further defines the navigation component of the architecture. Previous analysis assumed trilateration with idealized, instantaneous one- and two-way range measurements. Here, four realistic methods for making both range and Doppler measurements are defined and compared: true one-way, dual one-way, two-way with the user as transponder, and two-way with the node as transponder. Tradeoffs in accuracy, latency, cost, and complexity are discussed. Range and Doppler signal structures compatible with each measurement method are identified, and a link analysis is performed to assess measurement accuracy, determine user service volumes, size relay hardware, and plan operations strategies. Idealized measurement assumptions are dispensed with to develop the concept of a “deep space trilateration” algorithm, which incorporates signal propagation time, motion of nodes and users, and relativistic effects that are non-negligible at deep space distances. A “deep space joint Doppler and ranging” algorithm is also developed to assess the benefits of incorporating both range and Doppler information into a navigation solution. The navigation merits of the deep space relay architecture were previously assessed with a dilution of precision (DOP) analysis. On its own, the DOP simulation captured the intrinsic geometric quality of the architecture for trilateration but did not fully capture user-achievable positioning accuracy. This paper estimates user navigation performance with a more rigorous filter-based simulation. The simulation implements the deep space trilateration and joint Doppler and ranging algorithms for each measurement method and accounts for measurement, clock, and ephemeris errors. The accuracy and timeliness of simulated user position estimates are compared across the navigation methods for a variety of inner solar system users.
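        As a minimal illustration of the trilateration step (a sketch under simplifying assumptions: static geometry, no light time, no relativistic terms, with hypothetical names, not the paper's algorithm), a user position and clock bias can be estimated from pseudoranges to relay nodes at known positions by iterative least squares.
          import numpy as np

          def trilaterate(node_pos, pseudoranges, x0, iters=10):
              """node_pos: (N,3) relay positions [m]; pseudoranges: (N,) [m];
              x0: initial guess [x, y, z, c*dt]. Returns the refined state."""
              x = np.array(x0, dtype=float)
              for _ in range(iters):
                  diff = x[:3] - node_pos               # (N, 3) vectors to user
                  rho = np.linalg.norm(diff, axis=1)    # geometric ranges
                  pred = rho + x[3]                     # predicted pseudoranges
                  H = np.hstack([diff / rho[:, None], np.ones((len(rho), 1))])
                  dx, *_ = np.linalg.lstsq(H, pseudoranges - pred, rcond=None)
                  x += dx                               # Gauss-Newton update
              return x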
      • 04.0707 Expendable Nanosats Concept for Uranus Exploration
        Lin Yi (Jet Propulsion Laboratory, California Institute of Technology), Tiziana Fiori (La Sapienza Università di Roma) Presentation: Lin Yi - Thursday, March 6th, 09:45 AM - Gallatin
        The Uranus Orbiter and Probe mission has been prioritized in the decadal survey as the flagship mission for the next decade. One key measurement, of the magnetic field and plasma in the magnetosphere, would benefit from simultaneous observations at multiple locations. We propose the concept of expendable Nanosats equipped with passive optical wireless communication using modulating retroreflectors and ultra-low-power stepped-quantum-well modulators. These Nanosats are dispatched in divergent directions from the Orbiter, and their data are remotely read out by a laser on the Orbiter at ranges of up to 10,000 km. This paper will present our analysis of the communication link budget in this mission context, the battery and thermal requirements needed to achieve a useful Nanosat lifetime, and the size, weight, and power of the instrument payloads.
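        As background for the link-budget discussion (a commonly used first-order approximation, not a result taken from the paper), the retro-return power of a modulating-retroreflector link falls off with the fourth power of range, since diffraction losses are incurred on both the outbound and the retro-reflected legs:
          P_r \approx P_t \,\eta \left(\frac{D_{\mathrm{mrr}}}{\theta_t R}\right)^{2} \left(\frac{D_{\mathrm{rx}}}{\theta_{\mathrm{ret}} R}\right)^{2}, \qquad \theta_{\mathrm{ret}} \approx \frac{\lambda}{D_{\mathrm{mrr}}},
        so that P_r ∝ R^{-4} (here D_mrr, D_rx, θ_t, and η denote the retroreflector and receive apertures, the interrogating beam divergence, and lumped efficiencies; all notation assumed for illustration). This scaling is what drives the laser power and aperture sizing at the 10,000 km ranges considered here.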
      • 04.0709 High Data Rates from the Outer Solar System
        Kar Ming Cheung (Jet Propulsion Laboratory), Victor Vilnrotter (Jet Propulsion Laboratory), Marc Sanchez Net (Jet Propulsion Laboratory), Carlyn Ann Lee (JPL) Presentation: Kar Ming Cheung - Thursday, March 6th, 10:10 AM - Gallatin
        Outer planet missions play an important role in JPL and NASA's space exploration objectives in the coming decades. This paper assesses the technical options available for ensuring that the Deep Space Network (DSN) can enable future missions to the outer Solar System and recommends investment options for flight and ground communication systems and technologies that would meet future data return requirements. The major takeaways are summarized as follows:
        • Given the current and near-term state of technology development, there are major challenges in operating optical links at outer planet distances (e.g., Sun-Earth angle effects, spacecraft power available for the laser, optical ground network development plans, etc.).
        • Due to spacecraft limitations at outer planet distances, e.g., antenna pointing and solar/RTG power, the ‘biggest bang for the buck’ in enhancing data return is improving the capabilities of the DSN at Ka-band.
        • Concurrently with enhancing DSN capabilities at Ka-band, NASA should encourage the use of Ka-band in missions by actively incentivizing its use on high-rate science downlinks with technologies already available today, while retaining X-band capability for low-rate telemetry, commanding, and emergency support.
        To improve DSN capabilities at Ka-band, we consider two alternative approaches: 1) operational use of 34-m BWG antenna arrays, and 2) upgrading the 70-m antennas. For each option, we quantify the expected performance and compare it against known upcoming users. We describe past DSN development and flight demonstrations, summarize the technological advances conducted to date, and identify additional engineering work required to operationalize the system. We recommend the following:
        o For arraying of 34-m BWG antennas at Ka-band, additional demonstration activities are needed to better characterize system performance under different conditions, including operations in adverse weather or close to a hot body source.
        o For upgrading the 70-m antennas, new holography measurements (and panel setting) should be conducted, and operational versions of previously prototyped gravity compensation systems, such as array feeds and deformable flat plates, should be developed and installed.
        Using these approaches, we expect the DSN downlink performance at Ka-band to improve by 4-6 dB, providing an increase in downlink data rate of 2.5x to 4x.
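        As a quick check of the quoted figures (simple arithmetic, not an additional result from the paper), a 4-6 dB gain corresponds to a link-capacity factor of 10^{4/10} \approx 2.5 to 10^{6/10} \approx 4.0, consistent with the stated 2.5x to 4x increase in downlink data rate.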
    • Marc Sanchez Net (Jet Propulsion Laboratory)
      • 04.0801 Backup Routing considering Multiple Link Failures in Optical Communication Satellite Networks
        Kazuki Takashima (The University of Tokyo), Shunichiro Nomura (The University of Tokyo), Takayuki Hosonuma (The University of Tokyo), Ryu Funase (University of Tokyo), Shinichi Nakasuka (University of Tokyo) Presentation: Kazuki Takashima - Thursday, March 6th, 10:35 AM - Gallatin
        The demand for space communication has been escalating due to several factors such as the expanded use of Earth observation satellites, the need for alternative communication pathways during terrestrial communication disruptions caused by natural disasters, and the necessity for robust internet connectivity in maritime and aerial environments. To meet these demands, there is a growing focus on the development of non-terrestrial networks that integrate optical communication, which offers significantly higher data rates than traditional radio frequency communication, with low Earth orbit (LEO) satellite constellations that ensure broad coverage with low latency. However, LEO optical communication constellation networks face unique challenges. These include the dynamic nature of communication links due to the orbital movement of satellites, interruptions caused by cloud cover, and the increased risk of equipment failures or malfunctions in the harsh space environment. Traditional routing methods used in terrestrial networks, when applied directly to LEO optical communication constellations, lead to frequent link switches and unexpected link failures, resulting in data loss and increased latency. Therefore, there is an urgent need for communication routing methods tailored to the specific characteristics of optical communication satellite constellations. Existing research has primarily addressed periodic communication link interruptions caused by orbital movement, but has not sufficiently considered sudden interruptions due to factors such as cloud cover or equipment failures. These unexpected disruptions pose significant challenges to maintaining reliable communication links, necessitating more resilient routing strategies that can adapt in real-time to fluctuating network conditions. This research proposes a novel routing method to mitigate the impact of unpredictable link failures across multiple communication links, thereby minimizing packet loss and delay. For each satellite, the proposed method selects multiple communication links in the network and pre-calculates backup routing tables to be robust against the failures of the selected links. Each satellite can autonomously decide which backup table to use based on the current network status, ensuring a swift and efficient response to sudden link failures. The selection process for multiple communication links to anticipate potential failures is based on the hop count from each satellite to the link and the overall placement of ground stations within the network. By identifying specific links through potential failure scenarios, the method enhances the network's ability to reroute traffic promptly and effectively against the link disruptions. To evaluate the effectiveness of the proposed routing method, we model the structure and data flow of a LEO optical communication satellite constellation network. This modeling facilitates precise simulation of network operations under various conditions, providing a comprehensive understanding of the network's performance dynamics. Numerical simulations are conducted with different ground station positions and scenarios where sudden link failures can occur. We assess the communication performance from the simulation results and demonstrate the proposed method's capability to enhance the robustness and efficiency of LEO optical communication satellite constellation networks.
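        As a rough illustration of the pre-computed backup-table idea (a sketch with assumed names that uses shortest-path routing as a stand-in metric, not the authors' method): for each protected inter-satellite link, a next-hop table is computed on the graph with that link removed, and a satellite switches to the corresponding table when it detects the failure.
          import networkx as nx

          def backup_tables(G, protected_links, destinations):
              """For each protected link, pre-compute a next-hop table valid when that link fails."""
              tables = {}
              for link in protected_links:
                  H = G.copy()
                  H.remove_edge(*link)                       # simulate the link failure
                  table = {}
                  for dst in destinations:
                      paths = nx.shortest_path(H, target=dst)  # node -> path to dst
                      for node, path in paths.items():
                          if len(path) > 1:
                              table[(node, dst)] = path[1]     # next hop toward dst
                  tables[link] = table
              return tables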
      • 04.0803 Free-Space Optical Communication Using an Optical Frequency Comb and Modulating Retroreflectors
        Lin Yi (Jet Propulsion Laboratory, California Institute of Technology), Kai Suekane (), Uriel Escobar (JPL) Presentation: Lin Yi - Thursday, March 6th, 11:00 AM - Gallatin
        We present a high-data-rate optical communication method for a UAV swarm that combines quantum-well modulating retroreflectors (MRRs) and an optical frequency comb for a distributed interferometric antenna. Optical communication between a laser station and UAVs using MRRs can reduce mission complexity and power consumption at the UAV site. To minimize the power consumption at the UAV site, a short link is preferred to obtain a sufficient signal-to-noise ratio (SNR) and reduce the required modulation bandwidth. However, the saturation power of a photodiode (PD) at the station site limits the SNR, so the power consumption cannot be decreased simply with more laser power or a shorter link distance. We overcome this limit by using multiple comb modes for data transfer. By adding a diffraction grating to the MRR, we can independently encode data in each comb mode and retro-reflect them. At the station site, we demultiplex the reflected comb modes and then receive them at separate PDs, which decreases the received power at each PD by a factor equal to the number of comb modes. We designed a simple optical system consisting of a quantum-well modulator (QWM), a diffraction grating, a lens, and a planar focal plane; ray tracing indicates that the designed MRR has a field of view of about 2° and that ~100 modes can be used near the diffraction limit. By simulation over various parameters, we compared the communication power consumption at the UAV site when using a continuous-wave (CW) laser versus a comb. For an antenna operating at 20 GHz that needs 400 Mbps per UAV, with 343 UAVs (forming a cube with 7 m sides), 10% QWM modulation depth, 10 kW of laser output, and reasonable PD saturation power and noise-equivalent power, the use of a comb can reduce power consumption by over 90% compared to a CW laser. Furthermore, the required bandwidth of a PD is over 200 GHz when using a CW laser, while it is less than 1 GHz when using a comb. Our comparison shows that the use of a comb can not only reduce the power consumption at the UAV site but also relax the PD requirements.
      • 04.0804 LuPNT: An Open-Source Simulator for Lunar Communications, Positioning, Navigation, and Timing
        Guillem Casadesus Vila (Stanford University), Keidai Iiyama (Stanford University), Grace Gao (Stanford University ) Presentation: Guillem Casadesus Vila - Thursday, March 6th, 11:25 AM - Gallatin
        The renewed interest in lunar exploration, focused on establishing a sustainable presence on the Moon, has underscored the critical need for reliable lunar communications and Positioning, Navigation, and Timing (PNT). Addressing these needs requires advanced simulation tools that integrate complex lunar dynamics, space communication networks, and advanced navigation capabilities. The primary challenge in developing such tools involves combining high-fidelity models of spacecraft dynamics, communication, and navigation within a modular, extensible, and computationally efficient framework. Current solutions offer high-fidelity models in several areas but are often limited in scope and integration, resulting in significant overhead. For example, astrodynamics simulators frequently lack the tools needed for data exchange between spacecraft. Furthermore, comparing results from different research efforts is difficult without a standardized, open-source platform, complicating the integration and comparison of proposed algorithms. To address these challenges, we developed LuPNT [1], an open-source simulator that integrates astrodynamics and communication capabilities within a single tool. Developed in C++ for computational performance and offering Python bindings to facilitate research development, LuPNT can handle the time scales involved in simulations ranging from data exchange between spacecraft and filtering for orbit determination to long-varying spacecraft dynamics. Its modular design allows for both high-level and low-level simulations, making it adaptable to various needs in lunar PNT research. This work showcases multiple new capabilities and use cases, including modeling and optimizing contact plans for delay-tolerant networking, generating images for optical navigation using Blender, investigating rover surface navigation with a single satellite, and fusing sensor measurements for orbit determination. We include relevant and comprehensive examples, such as modeling pseudo-range and Doppler observables from terrestrial Global Navigation Satellite Systems (GNSS) while accounting for Earth and Moon occultations and signal-in-space losses, using publicly available satellite ephemerides and antenna pattern information. LuPNT’s spacecraft dynamics models incorporate planetary ephemerides, higher-order gravity, third-body perturbations, and solar radiation pressure. Additionally, its event-based simulation core facilitates the implementation of onboard applications and satellite data exchange, making it a powerful tool for research into multi-agent and autonomous capabilities. The simulation tool is in active development and has already been incorporated into several research projects. We are rapidly expanding the number of examples, tests, and tutorials to facilitate its adoption by the research community. LuPNT provides a unified, comprehensive tool for lunar PNT and communication research. It overcomes the limitations of existing solutions and significantly advances the potential for lunar exploration by enabling the development, integration, and comparison of diverse PNT algorithms within a modular, extensible framework. [1] K. Iiyama, G. Casadesus Vila, and G. Gao, ‘LuPNT: Open-Source Simulator for Lunar Positioning, Navigation, and Timing’, in Proceedings of the 36th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2023), 2023, pp. 1499–1510.
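        For readers unfamiliar with the observables mentioned above, a minimal model of GNSS pseudorange and Doppler measurements (illustrative only, not LuPNT's actual API; occultation and signal-in-space losses are omitted) looks like the following.
          import numpy as np

          C = 299_792_458.0   # speed of light, m/s
          F_L1 = 1.57542e9    # GPS L1 carrier frequency, Hz

          def pseudorange(r_rx, r_sv, dt_rx, dt_sv, noise=1.0):
              """rho = |r_rx - r_sv| + c*(dt_rx - dt_sv) + measurement noise [m]."""
              return np.linalg.norm(r_rx - r_sv) + C * (dt_rx - dt_sv) + np.random.normal(0, noise)

          def doppler(r_rx, v_rx, r_sv, v_sv, noise=0.1):
              """Doppler shift from the line-of-sight range rate, in Hz."""
              los = (r_rx - r_sv) / np.linalg.norm(r_rx - r_sv)
              range_rate = float(np.dot(v_rx - v_sv, los))
              return -F_L1 * range_rate / C + np.random.normal(0, noise)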
    • David Taggart () & Claudio Sacchi (University of Trento) & Len Yip (Aerospace Corporation)
      • 04.0901 A Timescale Concept in AltPNT: A Model-based Control of Networked Systems Approach
        Khanh Pham (Air Force Research Laboratory) Presentation: Khanh Pham - Wednesday, March 5th, 08:30 AM - Gallatin
        Terrestrial and satellite communications, tactical data links, positioning, navigation, and timing (PNT), and distributed sensing will continue to require precision timing and the ability to synchronize and disseminate time. However, as the supply of space-qualified clocks with Global Navigation Satellite System (GNSS)-level performance is limited and the understanding of disruptions to GNSS from potential adversarial actions increases, the current reliance on GNSS-level timing becomes costly and outdated compared with on-orbit assembly of robust and stable timescale references, particularly as diverse alternatives to GNSS are developed. Onboard realization of clock ensembles is particularly attractive for applications such as on-demand dissemination of GPS Time (GPST)-like navigation services via a proliferated Low Earth Orbit (pLEO) constellation. This paper investigates model-based control of networked timing systems, with the goal of optimally placing on the network the critical information of the implicit ensemble mean (IEM) estimate of a multi-platform clock ensemble, which has better stability than any individual member of the ensemble, so as to flexibly reduce the data traffic load. By making the remote sensor of IEM estimates running onboard the anchor platform, and the actuators that optimally steer remote frequency standards at the participating platforms, more “intelligent” in support of onboard IEM timescale realization across a pLEO constellation, the networked control system can predict the future behavior of local reference clocks accompanied by low-noise oscillators and then send precise IEM estimate information at critical times, ensuring the realization of a common pLEO timescale onboard all participating platforms. Clock steering is especially essential to the realization of timescales. Performance realizations generally depend on the chosen control intervals and steering techniques. Toward performance robustness beyond what the existing Linear Quadratic Gaussian (LQG) steering technique can offer, a minimal-cost-variance (MCV) steering paradigm is proposed, which minimizes the variance of the integral-quadratic-form performance measure of LQG control subject to a constraint on its mean.
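        In compact form (standard LQG notation assumed here for illustration, not taken from the paper), the MCV steering problem can be stated as
          J = \int_{t_0}^{t_f} \big( x^{\top} Q\, x + u^{\top} R\, u \big)\, dt, \qquad \min_{u}\ \operatorname{Var}[J] \quad \text{subject to} \quad \mathbb{E}[J] = m_0,
        where x is the clock/oscillator state deviation and u the steering control; classical LQG is recovered when only E[J] is minimized, so MCV trades a constrained mean cost for reduced cost variance and hence more robust timescale steering.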
      • 04.0902 On the Theory of Network Architectures in the Solar System Internet
        Alan Hylton (NASA), Jihun Hwang (Purdue University) Presentation: Alan Hylton - Wednesday, March 5th, 08:55 AM - Gallatin
        Delay Tolerant Networking (DTN) is maturing into a viable enabling technology for the so-called Solar System Internet (SSI). The focus of the SSI is shifting toward modern network architectures and scalability, which go beyond the underlying protocol suite of DTN. Following a record-setting experiment campaign on the International Space Station (ISS), a wealth of operational experience and lessons learned on service-provider-oriented architectures is available to propel humanity's ability to network in space to new levels. However, a deeper understanding of how to extend these architectures to the solar-system scale has not yet been fully developed. In this paper, we combine this new information with previous theoretical advances to open new doors in DTN network modeling, with an eye on practical means of designing, creating, and operating future space networks. The primary purpose of networking is scalability; however, simply using DTN does not provide this for free. Indeed, having a protocol suite does not inform the user of its best practices, and in the case of DTN, best practices are not known. In particular, the ISS experiments illustrated the difficulty of uniting DTN network areas across project and programmatic boundaries. In traditional DTN routing, all nodes have the same schedule of contacts - known as a contact graph - and it is expected that these data are globally consistent. Approaches depending on omniscience neither scale nor generalize well, yet alternatives remain elusive as there is no standard temporospatial network modeling approach. We investigate the capability of various mathematical models of dynamic heterogeneous networks to capture critical features such as routing, data flow optimization, and network hierarchy detection for the Near Space Network's upcoming real-mission deployments, including LunaNet. To better encapsulate the multifaceted nature of space communications, we first explore sheaf constructions on hypergraphs and more accurately model time variation in our network using topos theory and moduli spaces. In algorithmic directions, we develop a framework for automated community and bottleneck detection using Ollivier-Ricci curvature and persistent homology, so that we can either bypass or exploit the detected bottlenecks using network coding. We are also exploring the application of algebraic geometry to improve error correction, detection, and code repair in locales such as the Near Space Network. By establishing solid mathematical frameworks to model space communications, we will be able to better standardize more efficient and scalable network services for the upcoming Near Space Network and design the architecture of the eventual Solar System Internet. Examples are given in the context of the ISS network experiment with a discussion of how these tools can be used in more general settings. Finally, we conclude with future research directions.
      • 04.0903 Towards Practical Clock Synchronization in the Solar System Internet
        Alan Hylton (NASA), Jihun Hwang (Purdue University), Jacob Cleveland (NASA Glenn Research Center), Robert Short (NASA Glenn Research Center) Presentation: Alan Hylton - Wednesday, March 5th, 09:20 AM - Gallatin
        Clock synchronization remains a notable gap in the Delay Tolerant Networking (DTN) suite of protocols. However, following the great theoretical advances in the area and a highly successful DTN experiment campaign on the International Space Station (ISS), a strong enough foundation has been laid to support a practical clock synchronization protocol and implementation for the Solar System Internet (SSI). In this paper we build on the theoretical developments, along with the lessons learned from the DTN experiments, to drive a clock synchronization protocol and implementation for practical SSI network architectures, complete with code and simulation analyses. In addition to the recent technical and feature growth of DTN, network architectures have begun taking center stage. This was particularly true of the ISS experiments, which operated over a multitude of DTN network boundaries (as well as project and programmatic boundaries), yielding operational experience with proper DTN network architectures. Taking this a step further, it is expected that future users in space will subscribe to multiple service providers, implying multiple network areas and boundaries. Such an architecture is central to the upcoming LunaNet, which notably has nodes that do not neighbor authoritative clock sources. After covering the basics of DTN, we recall the most germane aspects of clock synchronization for DTNs. This includes remarks on routing in DTNs, which is often schedule-based; we emphasize that this alone demonstrates how crucial clock synchronization is for scalability. The introduction continues with a discussion of the ISS experiment network architectures and their implications for this work. The key components have been implemented and used in simulation, and results are discussed; moreover, the implementation will be made openly available and is designed for integration into the High-rate DTN (HDTN) implementation. We then cover parameter optimization and the ramifications of choosing underlying equation solvers for convergence. Observations based on network architectures are used to explain the multiple implementation paths available in DTN and which one was chosen. We conclude with the analysis of a simulation based on the ISS network architecture and the future work thereby inspired.
      • 04.0904 Mitigation of Turbulence Losses over Terrestrial Laser Links for Quantum and Optical Communications
        Victor Vilnrotter (Jet Propulsion Laboratory) Presentation: Marc Sanchez - Wednesday, March 5th, 09:45 AM - Gallatin
        Simulations and preliminary experimental results for the optimization of a laser link over a long-range (9 km) slant-horizontal terrestrial propagation path between the JPL Mesa Test Range and the roof of Caltech Hall are described and evaluated. For the simulation, a collimated Gaussian beam was transmitted by a telescope located on the roof of Caltech Hall, propagated through the slant-horizontal line-of-sight (LOS) path to the JPL Mesa Test Range, where it was received by another telescope and focused to a point-spread function (PSF) in the imaging plane, and there coupled into a high-performance SMF-28 single-mode optical fiber for detection and further processing. Turbulence along the path was simulated using the Split-Step Beam Propagation Method (SSBPM), which approximates continuous turbulence along the line of sight with a large number of discrete phase screens, with realistic spatial and temporal correlations included in the model. The coupling coefficient was evaluated via analysis and simulation with and without tip-tilt compensation, and the resulting improvement in the coupling coefficient was quantified. The simulation model will be used to assess the feasibility of future urban-scale quantum communication links, which will require optimal link efficiency. This model is also applicable to ground-to-space laser links, where the same techniques can be employed to mitigate deep fades at the spacecraft or at the ground receiver due to atmospheric turbulence, leading to significant improvements in overall system performance.
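        To make the split-step idea concrete, the following is a toy sketch (simplified assumptions, uncorrelated toy phase screens rather than Kolmogorov statistics, not the authors' SSBPM code) of angular-spectrum propagation between discrete phase screens.
          import numpy as np

          def angular_spectrum_step(field, dz, dx, wavelength):
              """Free-space (Fresnel) propagation of a complex field by distance dz."""
              n = field.shape[0]
              k = 2 * np.pi / wavelength
              fx = np.fft.fftfreq(n, d=dx)
              FX, FY = np.meshgrid(fx, fx)
              H = np.exp(1j * k * dz) * np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
              return np.fft.ifft2(np.fft.fft2(field) * H)

          def propagate_through_turbulence(field, screens, dz, dx, wavelength):
              """Alternate free-space steps with multiplicative phase screens."""
              for phase in screens:                       # each screen: (n, n) phase in radians
                  field = angular_spectrum_step(field, dz, dx, wavelength)
                  field = field * np.exp(1j * phase)
              return angular_spectrum_step(field, dz, dx, wavelength)

          # Example: a 9 km path split into 10 segments with weak random screens.
          n, dx, wl = 256, 5e-3, 1.55e-6
          x = (np.arange(n) - n / 2) * dx
          X, Y = np.meshgrid(x, x)
          beam = np.exp(-(X**2 + Y**2) / (2 * 0.05**2))                 # collimated Gaussian, ~5 cm waist
          screens = [0.3 * np.random.randn(n, n) for _ in range(10)]    # toy screens, not Kolmogorov
          out = propagate_through_turbulence(beam, screens, dz=900.0, dx=dx, wavelength=wl)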
      • 04.0906 Comparison of Error Probability Analyses for Asynchronous DS-CDMA Satellite Communication Systems
        Len Yip (Aerospace Corporation) Presentation: Len Yip - Wednesday, March 5th, 10:10 AM - Gallatin
        There has been interest in spread spectrum technology for satellite communications due to its high security and multiple-access capabilities. Analyzing the end-to-end communication performance using error probability is critical for optimizing spread spectrum system parameters during the design phase. However, the exact evaluation of error probability is considered a formidable task. In the past, several approximation methods for evaluating the error probability of asynchronous direct sequence CDMA (DS-CDMA) systems were developed, including the Standard Gaussian Approximation (SGA), the Simplified Improved Gaussian Approximation (SIGA), and the characteristic function approach. In this paper, we compare the accuracy of these methods by developing a computer program to simulate asynchronous DS-CDMA communication systems. Furthermore, we apply the SIGA method in the time domain and include it in the comparison.
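        For reference, the Standard Gaussian Approximation commonly quoted for an asynchronous DS-CDMA system with K users, processing gain N, and BPSK data (notation assumed here, not taken from the paper) gives the bit error probability
          P_e \approx Q\!\left(\left[\frac{K-1}{3N} + \frac{N_0}{2E_b}\right]^{-1/2}\right),
        while SIGA and the characteristic-function approach refine this estimate, particularly when the number of interfering users is small.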
      • 04.0907 Scaling up Deep Reinforcement Learning for AI Using FPGAs
        John Porcello () Presentation: John Porcello - Wednesday, March 5th, 10:35 AM - Gallatin
        Deep Reinforcement Learning (DRL) combines Reinforcement Learning (RL) algorithms and Deep Learning (DL) to achieve remarkable advancements in AI. Specifically, RL trains one or more agents in an environment, typically for estimation, optimization, or control tasks. DRL takes advantage of the high-dimensional, complex, non-linear, universal function approximation capability of DL to extend RL. This allows DRL to support a very broad range of practical applications across many disciplines. This paper looks at the task of scaling up DRL algorithms in FPGAs to meet the rapidly growing demand for AI applications. FPGAs are largely underutilized for DRL applications, but their use is expected to increase as demand for AI drives the need for high-performance, complex DRL applications. The use of FPGAs for DRL represents a practical, field-deployable AI solution that offers several key advantages, such as relatively low SWaP-C, scalability, full reconfigurability, high throughput, and low latency. This paper begins with an overview and background of RL algorithms to provide context for the challenges of FPGA implementation. For example, similar to other types of DL architectures, DRL implementations must also implement back propagation in the FPGA in order to train the DL network and thereby allow an agent to learn. Design data as well as insight for scaling and implementing large DRL systems for AI applications using FPGAs are provided herein. This includes DRL challenges in the context of agents running on multi-FPGA systems in a large-scale, high-throughput AI implementation. Finally, an example large DRL multi-FPGA design is provided to illustrate the concepts in the paper using an AMD Versal device. This document does not contain technology or Technical Data controlled under either the U.S. International Traffic in Arms Regulations or the U.S. Export Administration Regulations.
      • 04.0908 Construction of Low-rate LDPC Codes from Rate ½ CCSDS Standard LDPC Codes
        Richard Wesel (UCLA), Semira Galijasevic (UCLA), Linfang Wang (University of California, Los Angeles), Jon Hamkins (Jet Propulsion Laboratory), Dariush Divsalar (Jet Propulsion Laboratory) Presentation: Richard Wesel - Wednesday, March 5th, 11:00 AM - Gallatin
        The existing Consultative Committee for Space Data Systems (CCSDS) standard uses low-density parity-check (LDPC) codes for higher code rates, including 1/2, 2/3, and 4/5, supporting message block lengths of 1024, 4096, and 16384. For lower code rates, the CCSDS standard uses Turbo codes, providing rates of 1/3, 1/4, and 1/6 and supporting message block lengths of 1784, 3568, and 16384. However, the frame error rate (FER) performance of the Turbo codes shows an error floor around 10^-4, which is undesirable for certain space applications. This paper uses the Protograph-Based Raptor-Like (PBRL) approach to provide new lower LDPC code rates of 1/3, 1/4, and 1/6 for the existing LDPC message lengths of 1024, 4096, and 16384 that do not exhibit an error floor (at least above an FER of 10^-7) and are rate-compatible with the existing CCSDS rate-1/2 LDPC codes.
      • 04.0909 Parallel Trellis-Stage-Combining BCJR for High-Throughput CUDA Decoder of CCSDS SCPPM
        Richard Wesel (UCLA), Amaael Antonini (University of California, Los Angeles), Egor Glukhov (), Dariush Divsalar (Jet Propulsion Laboratory), Jon Hamkins (Jet Propulsion Laboratory) Presentation: Richard Wesel - Wednesday, March 5th, 11:25 AM - Gallatin
        A serially concatenated code, with a convolutional outer code connected through an interleaver to an accumulator inner code, is used to encode pulse-position-modulated laser light in the Consultative Committee for Space Data Systems (CCSDS) standard for high photon efficiency. This coding scheme is called serially concatenated pulse-position modulation (SCPPM), and it is used for NASA's Deep Space Optical Communications (DSOC) experiment. For traditional decoding that traverses the trellis forwards and backwards according to the Bahl, Cocke, Jelinek, and Raviv (BCJR) algorithm, the latency is on the order of the length of the trellis, which has 10,080 stages for the rate-2/3 DSOC code. This paper presents a novel alternative approach that simultaneously processes all trellis stages, successively combining pairs of stages into a meta-stage. This approach has latency on the order of the base-2 logarithm of the number of stages. The new decoder is implemented using the Compute Unified Device Architecture (CUDA) platform on an Nvidia Graphics Processing Unit (GPU). Compared to Field Programmable Gate Array (FPGA) implementations, the GPU implementation offers easier development, scalability, and portability across GPU models. The GPU implementation provides a dramatic increase in speed that facilitates more thorough simulation as well as enabling a shift from FPGA to GPU processors for DSOC ground stations.
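        The pairwise stage-combining idea can be sketched as follows (an illustration in the max-log domain with hypothetical names, not the authors' CUDA implementation): each stage is an (S, S) matrix of branch metrics, combining two stages is a max-plus "matrix product", and repeating the pairwise combination gives log2(#stages) sequential levels instead of a stage-by-stage sweep, with all pairs at a level processed in parallel.
          import numpy as np

          def maxplus_combine(A, B):
              """(A ⊗ B)[s, s''] = max over s' of ( A[s, s'] + B[s', s''] ) in the log domain."""
              return np.max(A[:, :, None] + B[None, :, :], axis=1)

          def combine_all_stages(gammas):
              """Reduce a list of per-stage metric matrices tree-wise (log2 depth)."""
              stages = list(gammas)
              while len(stages) > 1:
                  nxt = [maxplus_combine(stages[i], stages[i + 1])
                         for i in range(0, len(stages) - 1, 2)]
                  if len(stages) % 2:          # carry an odd stage up unchanged
                      nxt.append(stages[-1])
                  stages = nxt
              return stages[0]

          # Example: 8 stages of a 4-state trellis reduced in log2(8) = 3 levels.
          gammas = [np.random.randn(4, 4) for _ in range(8)]
          end_to_end = combine_all_stages(gammas)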
    • David Taggart () & Claudio Sacchi (University of Trento)
      • 04.1001 Enhancing Space Situational Awareness: Robust Millimeter-Wave Satellite Communication Solutions
        Mansoor Dashti Ardakani (AMD), Behzad Koosha (The George Washington University ) Presentation: Behzad Koosha - Wednesday, March 5th, 04:30 PM - Gallatin
        Space Situational Awareness (SSA) assumes heightened significance in the contemporary space domain, given the proliferation of satellites and space debris. The precise monitoring of fast-moving targets in distinct orbital trajectories is a crucial aspect of SSA, demanding advanced tracking and predictive capabilities. Managing the complexities arising from the dynamic interactions between objects in different orbits necessitates a nuanced understanding of their orbital dynamics, velocities, and altitudes. In satellite communications, particularly at millimeter-wave frequencies, waveforms occupy increased bandwidth and are vulnerable to white-noise interference. Also, multipath signals can pose challenges to reception, while the Doppler effect introduces frequency or wavelength changes, further complicating the tracking process. In dealing with these moving targets, sophisticated technologies such as radar systems and optical telescopes are deployed to accurately track and predict their trajectories. To address these issues, we propose a baseband link capable of detecting and compensating for Doppler shifts or Carrier Frequency Offsets (CFO) in received signals. Leveraging the Direct Sequence Spread Spectrum (DSSS) technique offers numerous advantages. Spread spectrum technology spreads the original signal across a wider bandwidth before transmission, enhancing resilience against interference. Moreover, DSSS receivers demonstrate improved performance in low Signal-to-Noise Ratio (SNR) environments compared to non-DSSS designs. Furthermore, the proposed system is designed to support multi-user environments, accommodating different users with distinct Pseudo-Noise (PN) sequences at a relatively low cost. In the proposed solution, we have developed two links capable of transmitting data rates of up to 350 Mbps. Quadrature Phase Shift Keying (QPSK) modulation is chosen due to its ability to achieve the same error probability as other modulation techniques with a lower required SNR (Eb/N0) for demodulation. Additionally, QPSK enables the transmission of two bits of data per symbol, effectively doubling the data rate without increasing the bandwidth, or halving the bandwidth for the same data rate. On the receiver side, the proposed solution simulates all synchronization circuits essential for operational satellite links. The design incorporates mechanisms to detect and compensate for Doppler shift frequencies due to moving targets, align PN sequences to within one sample, and adjust the signal phase to enable coherent QPSK demodulation. By implementing these techniques, the proposed solution aims to mitigate the challenges associated with mm-wave satellite communications, ensuring robust and efficient data transmission in dynamic and noise-prone environments.
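        A minimal sketch of the DSSS spreading and despreading operation underlying such a link (illustrative assumptions only; a random bipolar sequence stands in for a proper PN/Gold code, and modulation/synchronization are omitted):
          import numpy as np

          rng = np.random.default_rng(0)
          N = 31                                    # processing gain (chips per symbol)
          pn = rng.choice([-1.0, 1.0], size=N)      # stand-in for a PN sequence

          def spread(bits):
              symbols = 2.0 * np.asarray(bits) - 1.0           # {0,1} -> {-1,+1}
              return (symbols[:, None] * pn[None, :]).ravel()  # each symbol -> N chips

          def despread(chips):
              blocks = chips.reshape(-1, N)
              corr = blocks @ pn                                # correlate with aligned PN
              return (corr > 0).astype(int)

          bits = rng.integers(0, 2, size=64)
          rx = spread(bits) + 1.0 * rng.standard_normal(64 * N)   # chip-rate AWGN
          ber = np.mean(despread(rx) != bits)                      # despreading gain keeps BER low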
      • 04.1002 Optical Frequency Hopped Spread Spectrum: Thoughts and Experiments
        Eugene Grayver (Aerospace Corporation), Matthew Kelley (The Aerospace Corporation) Presentation: Eugene Grayver - Wednesday, March 5th, 04:55 PM - Gallatin
        Frequency hopped spread spectrum (FHSS) has been used to develop secure and resilient radio frequency (RF) systems for decades. FHSS forces a jammer to spread its energy over the entire span while the user concentrates the energy in a much smaller range, thereby achieving ‘spreading gain.’ Free-space optical (lasercomm) links are highly directional, making them inherently resistant to jamming. A jammer located off-axis from the transmitter experiences significant attenuation. However, as the power available to lasercomm jammers increases, the links become vulnerable. This emerging vulnerability motivates the application of FHSS to the optical domain. Only a few papers have addressed this problem, and they are limited to either purely simulation studies or lack end-to-end experimental verification. This paper presents results from a testbed implementing a small (four-channel) OFHSS system. It covers everything from selection of an appropriate error correction scheme and waveform definition to the optical implementation and jammer injection. The paper covers options for scaling the system, which is essential since the spreading gain is proportional to the number of channels. The waveform is designed to allow error-free operation when one (and only one) of the channels is jammed (1 out of 4 for the testbed described in this paper). The two main approaches for implementing an appropriate FEC are interleaving followed by a powerful codec (e.g., LDPC), and an erasure code. Erasure codes are optimal for this scenario because the data in the jammed hop carries no information. A (10,4) code (10 data words followed by 4 parity words, for a total of 14 words, corresponding to a code rate of 5/7) can correct up to 4 erased words. One word is transmitted on each hop, and the code block takes 14 hops to transmit. Assuming equally likely hop frequencies, ¼ of all the hops will be erased, resulting in 14/4 = 3.5 ≤ 4 erasures per code block. The hopping pattern cannot be truly random because of the constraint that no more than 4 hops within each set of 14 fall on the same frequency. The duration of each hop (word length) is constrained by the ‘standoff distance’ – the distance corresponding to the time a jammer requires to detect the transmission and jump to the frequency. Each word contains a known sequence (at a pseudo-random offset within the word). If the receiver detects the sequence, the word is valid; otherwise, the word is considered erased. The testbed consists of:
        • 4x CW lasers at different wavelengths
        • 4x MZM intensity modulators driven by an arbitrary waveform generator
        • An optical combiner followed by an OPA
        • A jammer implemented with a fast tunable laser that is combined with the output of the OPA
        • A 200-channel splitter to separate the wavelengths (only 4 of the channels are used)
        • 4x photodetectors digitized with a high-speed oscilloscope
        The data encoding, modulation, capture, demodulation, and decoding are done in software in post-processing, but can be trivially implemented in real-time hardware.
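        The hop-pattern constraint described above can be illustrated with a short sketch (hypothetical code, not the testbed software): a pattern is drawn until no channel appears more than 4 times in a 14-hop code block, so that a jammer parked on any single channel erases at most 4 of the 14 words, which is within the erasure code's correction capability.
          import numpy as np

          rng = np.random.default_rng(1)

          def hop_block(n_channels=4, block_len=14, max_per_channel=4):
              """Draw hop patterns until the per-channel occupancy constraint is met."""
              while True:
                  pattern = rng.integers(0, n_channels, size=block_len)
                  counts = np.bincount(pattern, minlength=n_channels)
                  if counts.max() <= max_per_channel:
                      return pattern

          pattern = hop_block()
          # Worst case: the jammer parks on the most-used channel and erases those hops.
          worst_case_erasures = np.bincount(pattern, minlength=4).max()   # <= 4 by construction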
      • 04.1003 Radio Transmitter Development to Support multi-Gbps Satellite Downlinks in Ka-band
        Masatoshi Kobayashi (Jet Propulsion Laboratory), Zaid Towfic (Jet Propulsion Laboratory) Presentation: Masatoshi Kobayashi - Wednesday, March 5th, 05:20 PM - Gallatin
        Advancements in spaceborne remote sensing technology have significantly increased the amount of data generated onboard spacecraft, necessitating a robust downlink system capable of much higher rates than previously required. For the NASA/ISRO Synthetic Aperture Radar (NISAR) mission, the Jet Propulsion Laboratory has developed a Ka-band Modulator (KaM) variant of the Universal Space Transponder (UST) product line to support a 2000 Msps downlink from a single radio using Offset QPSK (OQPSK) modulation in the Earth Exploration Satellite Service (EESS) band from 25.5 to 27.0 GHz. The KaM has successfully completed its protoflight development campaign, and flight models have been integrated onto the NISAR spacecraft, which is awaiting launch. The KaM is a Software-Defined Radio (SDR) that offers the flexibility to change all aspects of the protocol stack, from RF modulation to channel coding, without altering the underlying hardware. This paper describes critical aspects of the design/implementation trade space for achieving higher data rates on the KaM while meeting transmission band spectral limitations. We explore the use of higher-order modulation schemes such as 8PSK and 16APSK, which can substantially increase data throughput within the available 1500 MHz bandwidth of the EESS band. Performance simulations and prototype radio-based characterization tests demonstrate that, with firmware updates, the KaM can be upgraded to support data rates up to 3000 Msps using 8PSK modulation and 4000 Msps using 16APSK modulation. Additionally, by employing a dual-polarization downlink scheme, the updated radio could enable downlink rates of up to 8000 Msps for future missions. A feasibility study is also presented that examines high-rate lunar downlinks pairing the KaM with the current state of high-power amplifier and antenna technology.
    • Eugene Grayver (Aerospace Corporation) & Genshe Chen (Intelligent Fusion Technology, Inc.)
      • 04.1102 A Unified Software-Defined Radio Framework for Flexible Waveform Design in Non-Terrestrial Networks
        Claudio Sacchi (University of Trento) Presentation: Claudio Sacchi - Tuesday, March 4th, 08:30 AM - Gallatin
        The integration between terrestrial and non-terrestrial networks (T-NTNs) in the framework of “5G and beyond” standardization requires a flexible PHY-layer design. Indeed, it is known that future NTN connections will be characterized by highly heterogeneous data-rate requirements and the use of different frequency bands. Therefore, the waveform design of the non-terrestrial segment should be flexible and adaptive to cope with specific end-to-end performance indicators. In our paper, we propose a new waveform design and implementation methodology, based on software-defined radio, to be deployed in a strongly innovative NTN framework characterized by disruptive concepts such as cell-free massive MIMO. After detailing the communication ecosystem where data are exchanged, the starting point of our analysis is the general representation of 6G waveforms in terms of a mathematical time-frequency lattice structure. In our view, such a structure will serve as the foundation for a software-based reconfigurable platform, where the different radio interfaces can be obtained in the baseband domain by adding (or removing) software modules and/or re-parameterizing them. The paper will describe the above-mentioned software-based design strategy in detail and will provide some useful guidelines for the practical implementation of reconfigurable NTN transceivers. Preliminary results in terms of SDR implementation and link performance evaluation will motivate the SDR-based design strategy considered in our work.
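        A common way to write the time-frequency lattice representation referred to above (standard Gabor/multicarrier notation, assumed here for illustration) is
          s(t) = \sum_{n}\sum_{m} c_{n,m}\, g(t - nT)\, e^{\,j 2\pi m F t},
        where g(t) is the prototype pulse and (T, F) define the lattice spacing; different choices of g, T, F, and of the coefficient mapping yield the different radio interfaces that a reconfigurable SDR platform could instantiate as interchangeable baseband modules.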
      • 04.1103 Prototyping Cooperative Radio Navigation for Planetary Exploration with Software-Defined Radios
        Robert Poehlmann (German Aerospace Center (DLR)), Emanuel Staudinger (German Aerospace Center - DLR), Siwei Zhang (German Aerospace Center (DLR)), Fabio Broghammer (German Aerospace Center (DLR)), Armin Dammann () Presentation: Robert Poehlmann - Tuesday, March 4th, 08:55 AM - Gallatin
        Reliable radio communications as well as positioning, navigation, and timing (PNT) are crucial components of planetary exploration missions. These technologies cannot be developed purely through theoretical work and simulation. Prototyping, experiments, and analogue missions are vital to increase the technology readiness level (TRL). Furthermore, a rapid prototyping approach allows lessons learned in experiments to flow back into a refined system design. Software-defined radios (SDRs) are perfectly suited for such tasks, due to their flexibility and comparatively low implementation effort. However, utilizing SDRs for radio navigation poses several challenges. Methods based on propagation time, like round-trip time ranging, require sub-nanosecond timing accuracy. Thus, internal group delays of SDR transceivers need to be calibrated appropriately. Direction-of-arrival estimation or beamforming require phase-coherent multichannel SDR frontends, which must be calibrated as well. In the final paper, we share the experience we have gained using SDRs for prototyping cooperative and hybrid radio navigation systems at the German Aerospace Center (DLR). We showcase three measurement campaigns and experiments in analogue environments with distinct requirements. The experiments include three to eight radio nodes integrated into static and moving entities. First, we present radio channel and ranging measurements with single-channel SDRs in a lava tube on Lanzarote. Second, we show direction-of-arrival estimation with a coherent multichannel SDR integrated on a robotic rover as part of an analogue experiment on the volcano Mt Etna. Third, we introduce a two-channel SDR for hybrid navigation on the lunar surface, capable of simultaneously performing ranging with neighboring nodes and receiving global navigation satellite system (GNSS) signals. For each of the three experiment setups, we discuss both the software architecture and the hardware choices. The core software part is the physical and medium access layer of our cooperative radio navigation system, which allows measuring the time of arrival and performing round-trip time ranging to neighboring nodes. A flexible extension enables direction-of-arrival estimation with a multichannel SDR. We further integrated gnss-sdr to allow acquisition and tracking of GNSS signals received on a separate SDR channel, providing pseudorange and Doppler measurements with respect to satellites. On the hardware side, the USRP B200mini, USRP N310, and USRP X310 with UBX frontends have been used for the three different experiment setups. We show lab measurements to determine the timing and phase accuracy and discuss calibration challenges. Finally, we present measurement results from the three experiments. Specifically, we analyze the accuracy of ranging, DoA, pseudorange, and Doppler measurements in different analogue environments.
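        As a small illustration of why the group-delay calibration mentioned above matters (assumed notation, not DLR's implementation): in round-trip time ranging, any uncalibrated transmit/receive or turnaround delay maps directly into a range bias, and 1 ns of residual delay corresponds to roughly 15 cm of range error.
          C = 299_792_458.0  # speed of light, m/s

          def rtt_range(t_rx_back, t_tx, turnaround, tx_delay, rx_delay):
              """Range from a round-trip measurement; all times in seconds."""
              t_flight_2way = (t_rx_back - t_tx) - turnaround - tx_delay - rx_delay
              return C * t_flight_2way / 2.0

          # Bias from 1 ns of uncalibrated group delay:
          bias_m = C * 1e-9 / 2.0   # ~0.15 m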
      • 04.1105 K/Ka-Band Space-Flight Reprogrammable and Flexible Communications - Frontier Radio - Multi-Lingual
        Matthew Angert (Johns Hopkins University/Applied Physics Laboratory), Michael Cerabona (Johns Hopkins University/Applied Physics Laboratory), Sean Martin (Johns Hopkins University/Applied Physics Laboratory), Michael Dauberman (), Neil Dalal (Johns Hopkins University Applied Physics Laboratory), Jacob Wilkes (Johns Hopkins University/Applied Physics Laboratory) Presentation: Matthew Angert - Tuesday, March 4th, 09:20 AM - Gallatin
        Future space-flight communications will need to cover wide frequency bands, have more computing power, be highly flexible, and be reprogrammable. The next-generation space-flight software-defined radio (SDR) developed for the NASA Space Communications and Navigation (SCaN) Polylingual Experimental Terminal (PExT) by the Johns Hopkins University Applied Physics Laboratory (JHU/APL) provides these capabilities and more. This radio has been developed into a flight-worthy unit and has completed flight qualification for use in the PExT demonstration. The radio receiver and transmitter cover the government/NASA, military, and commercial K/Ka-bands for relay and direct-to-Earth communications. The radio supports Consultative Committee for Space Data Systems (CCSDS) compliant waveforms used in many NASA missions, and Digital Video Broadcast - Satellite - Second Generation (DVB-S2) waveforms typically used by commercial relay services. The field programmable gate array (FPGA) and embedded software are re-programmable in flight so that entirely new waveforms can be uploaded in the future. In the flight demonstration, communications to NASA's Tracking and Data Relay Satellite System (TDRSS), Boeing/Inmarsat, and SES/O3b mPOWER will be confirmed. The PExT payload will be on a low Earth orbit (LEO) spacecraft and has on-board Doppler compensation algorithms for both the forward and return links to relay satellites in various orbits. Optional AES-256 encryption is available for the forward link from the ground to the spacecraft. The radio uses an Ethernet connection to the spacecraft host computer. A powerful new feature is included that allows the user to connect to the radio over internet protocol after an RF link is established to a commercial relay provider; this provides access to a standard Linux operating system on the radio, enabling future networking and on-board applications. The radio, along with the rest of the PExT payload, has completed integration and vibration testing at the spacecraft level for a scheduled launch in 2025. This paper focuses on the radio design, developments, and PExT flight test qualification. Future commercialization is planned with possible improvements including an even smaller form factor along with additional software applications, non-linear output predistortion, and future waveform support.
      • 04.1107 Fast Software Implementation of a CCSDS LDPC Encoder
        Nathan Wei (Aerospace Corporation), Nathan Chen (The Aerospace Corporation), Eugene Grayver (Aerospace Corporation) Presentation: Eugene Grayver - Tuesday, March 4th, 09:45 AM - Gallatin
        The CCSDS LDPC codes are used by NASA and other space agencies in modern transceivers. These codes were designed to be easy to decode by ensuring the parity check matrix is very sparse. However, the encoding requires multiplication by a non-sparse generator matrix. The encoder can be efficiently mapped to an FPGA by exploiting the high level of parallelism. Indeed, all prior published work on CCSDS LDPC encoders describes FPGA implementations. With the almost universal acceptance of software-defined radio (SDR) in modern ground stations, the LDPC codec must be implemented in software operating on standard CPUs. The authors encountered a counter-intuitive situation where the decoder operates faster than the encoder when implemented in software. This surprising result is mainly due to the availability of highly optimized software decoders and the lack of similarly optimized encoders. This paper provides an in-depth discussion of mapping the encoder to a modern CPU. Starting with a naïve implementation (as found in available open-source software), we progress through multiple optimization steps. The first optimization recasts the processing from bit-wise to byte-wise and takes advantage of GF(2) (i.e., modulo-2) arithmetic for the matrix multiplication. The modulo-2 arithmetic maps well onto the single-instruction-multiple-data (SIMD) accelerators (SSE/AVX) available on modern CPUs. This approach results in a throughput improvement of 5x. However, the throughput for large code blocks (matrix size of 32,384²) is still low since the number of operations grows quadratically with the matrix size. The second optimization takes advantage of some symmetry of the dense generator matrix, inspired by the clever solution presented in [1]. The solution in [1] is designed to reduce the FPGA implementation complexity by replacing the full-size generator matrix (2M×2M) with a smaller (M×M) matrix followed by multiple permutations. The software implementation based on [1] was expected to provide around 4x higher throughput (since the matrix dimension is halved) vs. the original optimized version. The performance exceeded expectations, providing over 5x improvement. The paper delves into low-level optimization options that take advantage of specific instructions offered by Intel and AMD processors and provides a few alternative implementations. The resultant code is at least 10x faster than anything previously published.
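        The bit-wise to byte-wise recasting can be illustrated with a short sketch (a toy example with an assumed random generator matrix, not the CCSDS matrices or the authors' optimized code): packing the generator rows into bytes lets each AND/XOR operate on 8 bits at once, and SIMD loads/XORs extend the same idea to 128-512 bits per instruction.
          import numpy as np

          def encode_bitwise(msg_bits, G):
              """Naive GF(2) encoding: full matrix-vector product reduced modulo 2."""
              return (msg_bits.astype(int) @ G) % 2

          def encode_bytewise(msg_bits, G_packed, n):
              """Byte-packed variant: XOR together the packed rows selected by the message bits."""
              acc = np.zeros(G_packed.shape[1], dtype=np.uint8)
              for i in np.nonzero(msg_bits)[0]:
                  acc ^= G_packed[i]
              return np.unpackbits(acc)[:n]

          k, n = 64, 128
          rng = np.random.default_rng(0)
          G = rng.integers(0, 2, size=(k, n), dtype=np.uint8)
          G_packed = np.packbits(G, axis=1)          # (k, n/8) bytes
          msg = rng.integers(0, 2, size=k, dtype=np.uint8)
          assert np.array_equal(encode_bitwise(msg, G), encode_bytewise(msg, G_packed, n))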
      • 04.1108 USRP Implementation and Verification of GNSS Multi-Carrier Broadband Waveforms
        Dan Shen (Intelligent Fusion Technology, Inc), Genshe Chen (Intelligent Fusion Technology, Inc.), Khanh Pham (Air Force Research Laboratory) Presentation: Dan Shen - Tuesday, March 4th, 10:10 AM - Gallatin
        Broadband multicarrier global navigation satellite system (GNSS) waveforms were developed with a focus on onboard waveform adaptation and the mitigation of GNSS waveform distortion caused by high-power amplifiers (HPAs). Navigation signals with a constant envelope (CE) offer a significant advantage for GNSS transmitters due to their fixed amplitude gain and fixed phase offset, even after passing through nonlinear HPAs. This results in minimal impact on the positioning, navigation, and timing (PNT) performance of the received signals. Additionally, multi-carrier broadband modulation is beneficial for GNSS, as it reduces the number of required HPAs and enhances coherence across different GNSS bands. To demonstrate and verify real-time performance, we implemented the waveforms on universal software radio peripheral (USRP) devices. The demo system includes two computers (running the transmitter and receiver GNU Radio code), a transmitter USRP (Ettus B210), a receiver USRP (Ettus B210), a transmitter HPA (Mini-Circuits ZX60-43-S+), and USB cables. On the transmitter side, we implemented multicarrier GNSS waveforms to combine Global Positioning System (GPS) L2 and L5 signals. For the USRP implementation of the receiver, we addressed practical issues such as carrier and time synchronization. To demonstrate performance, we developed an online delay estimation algorithm based on convolution, which is a computationally efficient method to find the delay between the received pseudorandom noise (PRN) codes and the local replica. The demo system participated in NAVFEST 2024, a two-week GPS NAVWAR test event held in May 2024 at White Sands Missile Range (WSMR), NM. NAVFEST, also known as the Navigation Festival, provides a low-cost, realistic GPS jamming environment for testing GPS-based navigation systems in GPS-contested settings. There, our GNSS testbed achieved promising anti-jamming results with the GNSS multi-carrier broadband waveforms.
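        The convolution/correlation-based delay estimation can be sketched as follows (illustrative code with a stand-in PRN sequence, not the authors' GNU Radio implementation): the circular cross-correlation is computed efficiently in the frequency domain, and the lag of the correlation peak gives the code delay.
          import numpy as np

          def estimate_delay(received, replica):
              """Return the lag (in samples) that maximizes the circular correlation."""
              R = np.fft.fft(received)
              L = np.fft.fft(replica, n=len(received))
              corr = np.fft.ifft(R * np.conj(L))
              return int(np.argmax(np.abs(corr)))

          rng = np.random.default_rng(2)
          prn = rng.choice([-1.0, 1.0], size=1023)          # stand-in for a GPS C/A code
          true_delay = 347
          rx = np.roll(prn, true_delay) + 0.5 * rng.standard_normal(1023)
          est = estimate_delay(rx, prn)                      # ~347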
      • 04.1109 Testset for Cis-Lunar Communications and Navigation
        Jon Verville (NASA Goddard Space Flight Center), Eugene Grayver (Aerospace Corporation), Eric McDonald (The Aerospace Corporation), David Lee (Aerospace Corporation), Jean-Guy Dubois (NASA - Goddard Space Flight Center) Presentation: Jon Verville - Tuesday, March 4th, 10:35 AM - Gallatin
        When people again walk on the Moon, they will benefit from communications and navigation services that were unimaginable on our first visit. NASA is procuring the Lunar Communications Relay and Navigation System (LCRNS) as a 'turnkey' system. The contractor is responsible for providing a complete system that consists of the Space segment (one or more lunar satellites) and the Ground segment (one or more ground stations). The LunaNet ICD specifies the interfaces and services that LCRNS provides to lunar Users. LCRNS provides data and navigation (PNT) services. The data services include real-time and non-real-time (delay-tolerant networking) services. Both types of data services can be delivered over relatively low-rate S-band or high-rate Ka-band links. The S-band link supports data rates from a few kbps to a few Mbps, as well as an 'emergency' mode at 15 bps. The Ka-band link supports data rates from 1 to 50 Mbps. LCRNS also provides one-way and multi-way ranging, position, and timing using a signal derived from the terrestrial GPS L1C signal. This signal also carries low-rate broadcast messages (the in-phase channel is for PNT, the quadrature channel is for data). This paper describes a novel, software-centric architecture for a testset that is being developed to verify the functionality, compliance, and performance of contractor hardware. The testset:
        • Supports both functional and performance validation
        o Nominal and off-nominal scenarios
        o Range of operating conditions
        o Ground testing using RF cables only – no antennas, not in orbit
        • Supports physical-to-application layers
        o Sources and termination points for IP, DTN, and physical layer data
        o Interfaces for NASA-provided applications, including NSN/Nextera
        • Is transportable and/or remotely accessible
        • Is upgradeable
        o Easy insertion and modification of new waveforms, user types, and capabilities
        o Responsive to LNIS version updates
        o Path to support more than one user
        • Optimizes the use of COTS components, software-defined radio, and open-source software
        • Enables operation and test execution by non-technical operators
        o Simple and intuitive user interface
        o Extensive automation and scripting
        • Supports anomaly resolution and debugging
        The bulk of the testset functionality is made up of 'golden reference' modems that are used to verify contractor implementations. The testset is designed to support up to two simultaneous Users and up to four navigation signal transmitters. The signals and capabilities are expected to change as contractor(s) come on board and requirements are refined. The design of the testset is optimized for many and frequent updates. The proposed architecture takes advantage of high-speed, multi-core CPUs to move most of the signal processing into the software domain. There is only one RF/mixed-signal interface for each signal, with the rest of the connectivity entirely in the digital domain. This approach has significant advantages over the hardware-centric 'box per module' approach:
        • Fewer sources of distortion and noise
        • Simpler integration
        • Greater flexibility to support new signals and use cases
        • Significantly lower cost
        The paper covers the hardware architecture, including the radio-frequency front ends, and the software architecture that forms the core of the testset.
    • Lin Yi (Jet Propulsion Laboratory, California Institute of Technology) & John Enright (Toronto Metropolitan University)
      • 04.1302 Onboard Implementation and Validation of RTK-Based Relative Navigation System for CubeSats
        Hanjoon Shim (Seoul National University), Bu-gyeom Kim (), Yonghwan Bae (), Changdon Kee (Seoul National University) Presentation: Hanjoon Shim - Wednesday, March 5th, 09:00 PM - Gallatin
        This paper presents the world's first onboard real-time kinematic (RTK)-based relative navigation system, developed specifically for the SNUGLITE-III CubeSat rendezvous missions. It details an optimized approach leveraging a conventional ground-based, single-station, single-frequency RTK system for relative navigation between low Earth orbit (LEO) satellites. The system achieves centimeter-level navigation accuracy solely by using GPS receivers, with a streamlined algorithm tailored for real-time onboard execution on the CubeSat platform. By integrating this algorithm, the study demonstrates its effectiveness with cost-effective commercial off-the-shelf hardware, overcoming substantial spatial, computational, data communication, and hardware performance challenges associated with CubeSat GPS receivers and patch antennas. GPS-based precise relative navigation, a technique previously attempted in larger satellites, is well-suited to the low-orbit environment due to minimal GPS signal error factors, allowing for effective execution of precise relative navigation between satellites without additional devices. However, achieving centimeter-level precision necessitates resolving integer ambiguities inherent in GPS carrier phase measurements. The application of GPS relative navigation in LEO has traditionally used the carrier phase differential GPS (CDGPS) technique, based on single-difference measurements between receivers, but unresolved receiver clock errors have posed problems in solving integer ambiguities. To address this, attempts have been made to use auxiliary sensor measurements, atomic clocks, and multi-GNSS, yet the resolution of ambiguities still faces limitations, and no satellite has been reported to have fixed an ambiguity onboard. For this reason, relative navigation systems have mostly relied on electro-optical technologies to achieve centimeter-level precision. In that role, GPS measurements were used only to overcome the limited recognition range of the target satellite and the environmental effects on optical sensing. Compared to the potential applications of GPS receivers in LEO, their use has been very conservative until now. The proposed relative navigation system exclusively uses GPS receivers to achieve centimeter-level accuracy. To overcome the limitations in resolving integer ambiguities inherent in existing methods, this paper proposes a highly efficient algorithm that can be applied in real time onboard, borrowing from ground-based RTK-based relative navigation. The algorithm first narrows the ambiguity search space using a single-frequency Differential GPS (DGPS) solution. A Hatch filter is employed to improve the noise level of pseudorange measurements, thereby enhancing the accuracy of pseudorange-based relative navigation. Additionally, a recursive ambiguity filter (RAF), which utilizes the characteristics of carrier phase measurement noise and ambiguity, effectively estimates the float solution. The RAF replaces the conventional extended Kalman filter-based ambiguity estimation methods used in traditional RTK systems, dramatically reducing the computational load. Moreover, through an analysis of the ambiguity dilution of precision, we propose a strategy that enables a 100% ambiguity-fix success rate, achieving centimeter-level precision in real time. The reliability of the proposed algorithm is validated through software-based LEO simulators, as well as through ground-based experimental hardware tests using GPS receivers and patch antennas mounted on actual CubeSats. Furthermore, an effective end-to-end verification method for the onboard implementation is presented, demonstrating centimeter-level relative navigation in the laboratory using only GPS receivers.
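        As background for the Hatch filter mentioned above, the sketch below shows the standard recursive carrier-smoothing form: the noisy pseudorange is blended with the carrier-phase-predicted range. The toy measurements and window length are invented for illustration; this is not the authors' onboard code.

          # Minimal carrier-smoothed pseudorange (Hatch filter) sketch in the standard
          # recursive form; toy measurements and window length are illustrative only.
          def hatch_filter(pseudorange_m, carrier_phase_m, window=100):
              smoothed = [pseudorange_m[0]]
              for k in range(1, len(pseudorange_m)):
                  n = min(k + 1, window)
                  predicted = smoothed[-1] + (carrier_phase_m[k] - carrier_phase_m[k - 1])
                  smoothed.append(pseudorange_m[k] / n + predicted * (n - 1) / n)
              return smoothed

          # Toy usage: noisy code measurements smoothed by a quiet carrier (ambiguity omitted).
          truth = [20_000_000.0 + 5.0 * k for k in range(10)]
          code = [r + ((-1) ** k) * 3.0 for k, r in enumerate(truth)]
          phase = list(truth)
          print(hatch_filter(code, phase, window=4))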
      • 04.1303 Sensor Fusion for Autonomous Orbit Determination and Time Synchronization in Lunar Orbit
        Guillem Casadesus Vila (Stanford University), Grace Gao (Stanford University ) Presentation: Guillem Casadesus Vila - Wednesday, March 5th, 09:25 PM - Gallatin
        The growing interest in lunar exploration has led to a significant increase in planned missions, with over a hundred missions projected in the next decade from space agencies, industry, and academia. This surge has driven the development of LunaNet, a network architecture and set of standards to provide communication, Positioning, Navigation, and Timing (PNT) services to lunar missions. Proposed satellite systems within LunaNet, such as NASA's Lunar Communications Relay and Navigation Systems, face demanding performance requirements, particularly for Orbit Determination and Time Synchronization (ODTS), thus necessitating high accuracy in estimating the satellites' Position, Velocity, and Time (PVT). Current ODTS solutions primarily rely on measurements from Earth-based tracking stations like NASA's Deep Space Network (DSN) and process data on the ground to estimate satellite states. However, this method is costly, limited in availability, and not scalable for increasing activity in cislunar space. Moreover, ground-based state computation introduces delays that can impede real-time PNT services. Consequently, performing ODTS autonomously onboard the satellites presents a promising solution, providing real-time PVT estimates without relying on ground infrastructure. Several autonomous ODTS solutions for cislunar space have been proposed, including the use of Global Navigation Satellite System (GNSS) signals, Inter-Satellite Links (ISLs), and Optical Navigation (OpNav) techniques. GNSS signals can provide position and velocity estimations but suffer from limited signal availability and require high-sensitivity receivers, offering accuracy within tens of meters. ISLs improve these estimations by providing relative positioning information, but meeting the accuracy requirements necessitates several satellites. OpNav techniques, such as horizon detection, crater matching, and triangulation with planetary point sources, only require onboard cameras and are suitable for initial estimates but achieve limited accuracy, typically in the order of hundreds of meters. Despite the advantages of each method, it is not well understood how these sensing modalities can be fused given their error and uncertainty profiles, and it remains unclear which combination of these methods is best to meet the stringent PVT requirements of future lunar satellite constellations. To address these limitations, we investigate sensor fusion approaches integrating horizon detection, crater matching, GNSS, and ISL measurements to enhance PVT estimation accuracy. By implementing different centralized and distributed filtering architectures, we study how real-time state estimation can be improved for satellite constellations in lunar orbit, compared to using the methods separately. Our work explores these sensing modalities and analyzes their joint performance, demonstrating high-precision ODTS for lunar missions without relying on ground infrastructure. This comprehensive investigation lays the groundwork for more autonomous and resilient lunar exploration systems.
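        As a toy illustration of the centralized fusion idea (not the paper's filter design), the sketch below combines two independent position fixes with very different uncertainties, an OpNav-like fix and a weak-GNSS-like fix, by inverse-covariance weighting; all values are invented.

          # Information-weighted fusion of two independent position measurements with
          # different covariances (e.g., an OpNav fix vs. a weak-GNSS fix). Numbers are
          # illustrative placeholders, not results or models from the paper.
          import numpy as np

          z_opnav = np.array([1500.0, -200.0, 300.0])    # km, coarse optical fix
          P_opnav = np.diag([0.5, 0.5, 0.5]) ** 2        # ~hundreds of meters of sigma
          z_gnss = np.array([1500.3, -200.1, 300.2])     # km, weak-GNSS fix
          P_gnss = np.diag([0.03, 0.03, 0.03]) ** 2      # ~tens of meters of sigma

          info = np.linalg.inv(P_opnav) + np.linalg.inv(P_gnss)
          P_fused = np.linalg.inv(info)
          x_fused = P_fused @ (np.linalg.inv(P_opnav) @ z_opnav + np.linalg.inv(P_gnss) @ z_gnss)
          print(x_fused, np.sqrt(np.diag(P_fused)))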
      • 04.1304 PHODCOS: Pythagorean Hodograph-based Differentiable Coordinate System
        Jon Arrizabalaga (Technical University of Munich), Fausto Vega (Carnegie Mellon University), Zbynek Sir (Charles University Prague), Zachary Manchester (Carnegie Mellon University), Markus Ryll (Technical University of Munich) Presentation: Jon Arrizabalaga - Wednesday, March 5th, 09:50 PM - Gallatin
        This paper presents PHODCOS, an algorithm that assigns a moving coordinate system to a given trajectory. The parametric functions underlying the coordinate system, i.e., the path function, the moving frame, and its angular velocity, are exact (approximation-free), differentiable, and sufficiently continuous. This allows for computing a coordinate system for highly nonlinear trajectories, while remaining compliant with autonomous navigation algorithms that require first- and second-order gradient information. In addition, the coordinate system obtained by PHODCOS is fully defined by a finite number of coefficients, which may then be used to compute additional geometric properties of the trajectory, such as arc length, curvature, torsion, etc. Therefore, PHODCOS presents an appealing paradigm to enhance the geometrical awareness of existing guidance and navigation algorithms for on-orbit spacecraft maneuvers. The PHODCOS algorithm is presented alongside an analysis of its error and approximation order, and thus, it is guaranteed that the obtained coordinate system matches the given trajectory within a desired tolerance. To demonstrate the applicability of the coordinate system resulting from PHODCOS, we present numerical examples in the Near Rectilinear Halo Orbit (NRHO) for the Lunar Gateway.
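        To make the frame and curvature outputs concrete, the sketch below evaluates a tangent/normal/binormal frame and curvature for an arbitrary parametric path by finite differences. This only illustrates the quantities a moving coordinate system exposes; it is not the Pythagorean-hodograph construction PHODCOS uses, which provides these quantities exactly.

          # Numerically evaluate a moving (Frenet-style) frame and curvature for a
          # parametric path. Illustration only; PHODCOS obtains these quantities exactly
          # from a Pythagorean-hodograph representation rather than finite differences.
          import numpy as np

          def path(t):                             # illustrative 3D trajectory
              return np.array([np.cos(t), np.sin(t), 0.1 * t])

          def frame_and_curvature(t, h=1e-5):
              r_dot = (path(t + h) - path(t - h)) / (2 * h)
              r_ddot = (path(t + h) - 2 * path(t) + path(t - h)) / h ** 2
              tangent = r_dot / np.linalg.norm(r_dot)
              b_vec = np.cross(r_dot, r_ddot)
              curvature = np.linalg.norm(b_vec) / np.linalg.norm(r_dot) ** 3
              binormal = b_vec / np.linalg.norm(b_vec)
              normal = np.cross(binormal, tangent)
              return tangent, normal, binormal, curvature

          print(frame_and_curvature(0.5))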
      • 04.1306 Comparative Analysis and Design of a Dual-Satellite System for Lunar Rover Localization
        Kaila Coimbra (Stanford University), Grace Gao (Stanford University ) Presentation: Kaila Coimbra - Thursday, March 6th, 04:30 PM - Gallatin
        With the launch of the multi-mission NASA Artemis program, there is a growing need to develop autonomous localization services for lunar surface users. Vision-based navigation techniques, while useful, are often insufficient due to their high memory storage requirements and vulnerability to errors during the lunar night, thus necessitating satellite-based localization. Concurrently, multiple space agencies are developing LunaNet, a network of satellite networks designed to provide communication and navigation services to lunar surface users. Early-stage missions, such as NASA’s Endurance rover, will precede the full deployment of the LunaNet constellation. Presently, one of LunaNet’s initial pilot satellites, the Lunar Pathfinder, will provide communication-only services to the early-stage missions, including Endurance. Due to limited infrastructure in the cis-lunar environment, establishing lunar positioning, navigation, and timing services poses a significant challenge. Previous works have designed satellite constellations with a focus on providing navigation services to lunar users. However, these multi-satellite systems are unlikely to be operational within the timeframe of many early-stage missions. In our prior work, we investigated utilizing a single satellite to provide absolute localization to the Endurance rover. In this single-satellite scenario, the rover, while stationary, accumulates navigation observables from the satellite and refines its state estimation over time. One study assumes that the satellite has a radiometric navigation payload, enabling the use of two-way ranging measurements. Conversely, a following study models the communication-only Lunar Pathfinder as the single satellite, necessitating the use of Doppler shift measurements from the downlinked communication signals as the only navigation observable. While both of our prior methods achieve sub-10-m absolute positioning error, localization convergence time can be significantly improved with the addition of another satellite providing navigation observables. In this work, we analyze the localization performance of a two-satellite constellation comprising the Lunar Pathfinder and an additional satellite equipped with either a communication payload or a navigation payload. Unlike previous studies, our analysis focuses on the quantitative performance improvement achieved by adding a second satellite providing either additional Doppler shift measurements or ranging measurements to the existing Doppler shift observables from the Lunar Pathfinder. This will inform the design choice between using one or two satellites for the system. For our approach, we model the transmitter antenna according to the type of payload onboard to simulate noisy measurements. Our study considers several options (elliptical lunar frozen orbits, near-rectilinear halo orbits, etc.) when designing the orbital trajectory of the second satellite. Metrics used to choose orbital trajectories include signal availability, position dilution of precision, and root-mean-square positioning and timing errors computed through a weighted batch filter framework across Monte Carlo realizations. This approach allows us to evaluate how the combination of different types of navigation observables and orbital trajectories impacts the localization accuracy for surface users in the lunar South Pole. The trade-off analysis between cost and performance will provide key insights to enhance navigation capabilities of early-stage missions.
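        One of the metrics named above, position dilution of precision, can be illustrated with a few lines of linear algebra. The sketch below forms the geometry matrix from user-to-satellite line-of-sight directions accumulated over several epochs and computes PDOP; the directions are invented, not mission geometry.

          # PDOP sketch for a stationary surface user accumulating geometry over epochs.
          # Line-of-sight unit vectors are purely illustrative, not lunar mission geometry.
          import numpy as np

          los = np.array([                # rows: user -> satellite directions at each epoch
              [0.3, 0.1, 0.95],
              [-0.2, 0.4, 0.89],
              [0.1, -0.5, 0.86],
              [0.6, 0.2, 0.77],
          ])
          los = los / np.linalg.norm(los, axis=1, keepdims=True)
          H = np.hstack([los, np.ones((los.shape[0], 1))])   # geometry matrix incl. clock column
          Q = np.linalg.inv(H.T @ H)
          pdop = np.sqrt(np.trace(Q[:3, :3]))
          print(f"PDOP = {pdop:.2f}")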
      • 04.1308 On-Orbit Demonstration of Range-Only Navigation for Small Satellite Formations
        Ibrahima S. Sow (Carnegie Mellon University), Zachary Manchester (Carnegie Mellon University) Presentation: Ibrahima S. Sow - Thursday, March 6th, 04:55 PM - Gallatin
        The increasing availability of low-cost commercial launch ride-share services has fueled the rapid proliferation of small satellite platforms, such as CubeSats. Missions involving fleets of multiple spacecraft acting as distributed computing platforms have gained significant interest due to their potential to achieve mission objectives at much lower costs than traditional, bulky, and expensive spacecraft. These applications span a wide range of fields, including continuous Earth monitoring, communication coverage services, large baselines for high-resolution radio and optical astronomy, and/or distributed in-situ measurements of the ionosphere and solar wind, such as the upcoming SunRISE and HelioSwarm missions. However, the success of these missions heavily depends on accurate relative navigation, which currently requires highly specialized hardware. Global Navigation Satellite Systems (GNSS) receivers (e.g., GPS, GLONASS) are commonly used for precise orbit determination. Yet, significant power constraints and potential radio frequency interference limit their continual use in orbit, resulting in sparse data for navigation. Moreover, these receivers are expensive and subject to regulatory restrictions on operating speeds and altitudes. The on-orbit relative navigation problem mirrors terrestrial sensor network localization, where the absolute positions of each node (i.e., satellite) must be determined from partial absolute and relative inter-node measurements. Two-way ranging provides a low-cost method for distance measurements between each satellite in the formation. However, ranging alone, unlike bearing measurements, leads to unobservabilities in 3D relative navigation, as an infinite number of solutions exist for a single range measurement. This paper presents a novel sparse optimization-based navigation method to address the 3D on-orbit joint absolute and relative navigation problem for a satellite network, using inter-satellite range measurements, a single "anchor" satellite with global-frame measurements, and highly accurate orbital models. Our approach is cost-effective, accessible, and scalable to resource-constrained satellite formations. We evaluate the estimator through numerical experiments comparing the performance of star and mesh network topologies for both short- (after deployment from the ejection pod) and long-distance formations. We also perform an on-orbit demonstration using data from our PY4 mission, featuring online calibration of the ranging measurements and demonstration of the estimator’s robustness with low-cost hardware and intermittent anchor measurements. PY4 is a constellation of four 1.5U CubeSats launched on SpaceX Transporter 10 on March 4, 2024, in a collaboration between Carnegie Mellon University and NASA Ames Research Center.
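        As a toy version of the range-only idea (not the paper's sparse optimization formulation), the sketch below recovers a deputy's position from ranges to a single anchor whose positions over several epochs are assumed known from its own orbit solution; geometry, noise, and the solver choice are invented for illustration.

          # Range-only localization toy: least-squares fit of a static deputy position
          # from ranges to a moving anchor with known positions. Not the paper's method;
          # geometry, noise, and the SciPy solver choice are illustrative assumptions.
          import numpy as np
          from scipy.optimize import least_squares

          anchor = np.array([              # anchor positions at four epochs (km)
              [0.0, 0.0, 0.0],
              [5.0, 1.0, 0.0],
              [10.0, 0.0, 2.0],
              [15.0, -1.0, 1.0],
          ])
          deputy_true = np.array([3.0, 4.0, 1.0])
          ranges = np.linalg.norm(anchor - deputy_true, axis=1)
          ranges = ranges + np.random.default_rng(1).normal(0.0, 0.01, ranges.size)

          residuals = lambda p: np.linalg.norm(anchor - p, axis=1) - ranges
          solution = least_squares(residuals, x0=np.ones(3))
          print(solution.x)                # estimated deputy position (km)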
      • 04.1309 Initial Orbit Determination with Sequential Stellar Aberration Measurements
        Michela Mancini (Georgia Institute of Technology), John Christian () Presentation: Michela Mancini - Thursday, March 6th, 09:00 PM - Gallatin
        Navigation using stellar aberration measurements---known as StarNAV---is an attractive solution for autonomous spacecraft navigation in the absence of classical navigation sources. This relativistic phenomenon is easily observable for a spacecraft in Earth orbit (or most other orbits in our Solar System), and we routinely correct for this effect in contemporary star trackers. Within a StarNAV framework, rather than treating stellar aberration as a nuisance parameter, it may be used as a navigation observable where the difference in inter-star angle is related to the velocity of the observer relative to the rest frame (usually the BCRF). In this work, we propose a solution to the initial orbit determination (IOD) problem from sequential inter-star angle measurements. While a solution with concurrent measurements already exists, obtaining such multiple concurrent observations is challenging. We choose to use inter-star angles (instead of single-star directions) since inter-star angles have the advantage of being attitude-independent. This removes the (currently) unrealistic requirement for milliarcsecond-level attitude knowledge; measuring inter-star angles to that accuracy is, by contrast, much simpler. Without any a priori information, an initial guess of the orbit may be obtained by realizing that an inter-star angle measurement provides information about the projection of the velocity along the angle bisector. Parameterizing the orbital velocity using the Lagrange interpolating polynomial, we provide an initial solution by solving a simple linear system. This initial guess is then refined using a propagation approach that does not require explicit knowledge of the position vector of the orbiting body. Exploiting the circular shape of the orbital hodograph for Keplerian orbits, a fact first proved by Hamilton, we leverage Rodrigues’ rotation formula to propagate the velocity in three dimensions. This is especially desirable in the StarNAV context, where the position vector does not play a role in the measurement equation and its propagation is superfluous. The experimental results show the feasibility of such an IOD solution for a body orbiting around Neptune.
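        The observable can be illustrated with the first-order aberration formula: the apparent direction of a star shifts toward the observer's velocity, so the angle between two stars changes by an amount tied to the velocity projection. The sketch below is a first-order illustration with invented star directions and an LEO-like speed; it is not the paper's estimator.

          # First-order stellar aberration illustration: apparent star directions shift
          # with observer velocity v, changing the inter-star angle. Star directions and
          # the velocity are invented; this is not the paper's IOD algorithm.
          import numpy as np

          C = 299_792.458                          # speed of light, km/s

          def aberrate(u, v):
              w = u + v / C                        # first-order aberrated direction
              return w / np.linalg.norm(w)

          def angle(a, b):
              return np.arccos(np.clip(a @ b, -1.0, 1.0))

          u1 = np.array([1.0, 0.0, 0.0])
          u2 = np.array([0.0, 1.0, 0.0])
          v = np.array([7.5, 0.0, 0.0])            # roughly LEO orbital speed, along +x

          shift = angle(aberrate(u1, v), aberrate(u2, v)) - angle(u1, u2)
          print(np.degrees(shift) * 3600.0, "arcsec change in inter-star angle")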
    • Dylan Hasson (Volpe National Transportation Systems Center)
      • 04.1402 Cost-Effective Integration of CNS Infrastructure for Urban Air Mobility: Insights and Strategies
        Faizana Naeem (Technische Universität Hamburg (TUHH)), Volker Gollnick (Hamburg University of Technology) Presentation: Faizana Naeem - Thursday, March 6th, 09:25 PM - Gallatin
        Building upon previous research focused on the design and CNS requirements for Urban Air Mobility (UAM) vehicles, this paper presents a comprehensive, simulation-based evaluation of the communication, navigation, and surveillance (CNS) infrastructure necessary for UAM operations. The study is aligned with NASA's UAM Concept of Operations (ConOps) and considers the potential for integration within Europe. It explores the deployment of CNS systems in urban environments through advanced simulations, addressing practical challenges and solutions associated with their implementation. A detailed analysis is conducted of the performance and reliability of CNS technologies, with particular emphasis on their critical role in ensuring the safety and efficiency of UAM operations in congested urban airspaces. Simulations are employed to replicate real-world scenarios and evaluate CNS system performance, including communication availability, navigation accuracy, and surveillance coverage. Furthermore, the economic viability of implementing CNS infrastructure for UAM is investigated by examining cost factors, scalability, and potential return on investment. This includes a detailed cost analysis derived from simulation data, which considers initial setup costs, operational expenses, and long-term financial benefits. Our findings identify key cost drivers and present strategies to enhance economic viability, such as leveraging existing infrastructure, adopting scalable technologies, and optimizing operational efficiencies. By demonstrating effective ways to make CNS systems cost-effective, this paper aims to bridge the gap between technological development and practical implementation, offering actionable insights for advancing the integration of UAM systems.
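        As one simple way to frame the cost trade described above, the sketch below compares a build-new option against an option that leverages existing infrastructure using net present value; every figure is a placeholder, not a result from the paper's simulations.

          # Hypothetical net-present-value comparison of CNS infrastructure options.
          # All cost and benefit figures are placeholders, not the paper's results.
          def npv(setup_cost, annual_opex, annual_benefit, years, discount_rate):
              net_annual = annual_benefit - annual_opex
              return -setup_cost + sum(net_annual / (1.0 + discount_rate) ** (t + 1)
                                       for t in range(years))

          # Option A: dedicated new ground infrastructure; Option B: reuse existing assets.
          print(npv(setup_cost=12e6, annual_opex=1.5e6, annual_benefit=4.0e6, years=10, discount_rate=0.07))
          print(npv(setup_cost=4e6, annual_opex=2.2e6, annual_benefit=3.5e6, years=10, discount_rate=0.07))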
      • 04.1403 Probability, Impact, and Mitigation of Cycle Slips in Time-of-Arrival Positioning Systems
        Sharanya Srinivas (Massachusetts Institute of Technology), Andrew Herschfelt () Presentation: Sharanya Srinivas - Monday, March 3rd, 11:00 AM - Electronic Presentation Hall
        Growing demand for diverse radio frequency (RF) applications with limited spectrum access has driven unmanned aerial systems (UASs) towards integrated sensing and communications (ISAC) solutions. Legacy positioning systems such as GPS are often insufficient for safety-critical air transport systems, while demand for higher communications throughput and other sensing capabilities continues to increase. We previously developed the Communications and High-Precision Positioning (CHP2) system to simultaneously provide network communications and localization capabilities to UAS platforms with minimal power and spectral resources. This alternative positioning, navigation, and timing (APNT) system is built around a two-way ranging (TWR) protocol with an integrated, phase-accurate timing exchange. At lower signal-to-noise ratios (SNRs), carrier-phase range ambiguities, often called "cycle slips", can induce significant errors in this type of system. In this study, we quantify the likelihood of these events occurring, their impact on two-way ranging systems, and several mitigation techniques to reduce or eliminate them. We specifically focus on an extended Kalman tracking filter and a Viterbi trellis trimming algorithm to detect and correct cycle slips with reasonable computational efficiency. We validate the proposed solution in a MATLAB simulation environment.
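        For context, a common pre-screen for the cycle slips discussed above is to watch the carrier-minus-code combination for jumps of a sizeable fraction of a wavelength. The sketch below implements that simple screen with invented numbers; it is not the extended Kalman filter or Viterbi trellis method the paper evaluates.

          # Simple cycle-slip screen: flag epochs where carrier-minus-code jumps by more
          # than half a wavelength. Pre-screen illustration only, not the paper's EKF or
          # Viterbi trellis approach; wavelength and measurements are invented.
          import numpy as np

          WAVELENGTH = 0.1903                      # m, GPS L1-like carrier wavelength

          def detect_cycle_slips(carrier_m, code_m, threshold_cycles=0.5):
              cmc = np.asarray(carrier_m) - np.asarray(code_m)          # carrier minus code (m)
              jumps = np.diff(cmc) / WAVELENGTH                         # epoch-to-epoch change (cycles)
              return np.where(np.abs(jumps) > threshold_cycles)[0] + 1  # suspected slip epochs

          carrier = np.array([100.00, 100.05, 100.10, 100.15 + 3 * WAVELENGTH, 100.20 + 3 * WAVELENGTH])
          code = np.array([100.00, 100.05, 100.10, 100.15, 100.20])
          print(detect_cycle_slips(carrier, code))  # -> [3]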
    • Krishna Sampigethaya (Embry-Riddle Aeronautical University)
      • 04.1503 Testable Cyber Requirements for Space Flight Software
        James Curbo (Johns Hopkins University Applied Physics Laboratory), Gregory Falco (Cornell University ) Presentation: James Curbo - Friday, March 7th, 08:30 AM - Dunraven
        As space missions grow in complexity, the cybersecurity threat landscape expands, necessitating a shift toward secure-by-design flight software (FSW). Traditional development prioritizes functionality over security, leaving systems vulnerable to attack. This paper introduces a novel methodology for developing cyber-resilient FSW with a secure-by-component architecture. By incorporating key resilience principles—segmentation, adaptive response, redundancy, and substantiated integrity—our approach addresses critical security needs early in development, minimizing attack surfaces without sacrificing performance. Leveraging NIST systems security guidelines and tailored cyber resilience techniques, we apply this methodology to a notional spacecraft's Command and Data Handling (C&DH) subsystem. Through attack surface analysis and threat modeling, we derive specific cybersecurity requirements to enhance resilience. Key mechanisms, such as real-time monitoring, cryptographic enforcement, memory-safe programming, and zero-trust communication, are embedded to mitigate vulnerabilities from external threats and internal faults. This work advances space cybersecurity by offering a scalable, secure-by-design approach to FSW. Future efforts will extend this methodology to formal verification and autonomous systems, ensuring space operations remain secure against evolving adversarial tactics.
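        One concrete ingredient of the 'cryptographic enforcement' and zero-trust posture described above is authenticating every command frame before execution. The sketch below shows a generic HMAC check; the key handling, framing, and names are hypothetical, and replay protection is omitted, so it stands in for the idea rather than the paper's design.

          # Generic HMAC-based command authentication sketch (illustrative only; key
          # management, framing, and replay protection are omitted, names are hypothetical).
          import hashlib
          import hmac
          import os

          SHARED_KEY = os.urandom(32)              # stand-in for a provisioned ground/flight key

          def tag_command(frame: bytes) -> bytes:
              return frame + hmac.new(SHARED_KEY, frame, hashlib.sha256).digest()

          def verify_command(message: bytes):
              frame, tag = message[:-32], message[-32:]
              expected = hmac.new(SHARED_KEY, frame, hashlib.sha256).digest()
              return frame if hmac.compare_digest(tag, expected) else None   # reject on mismatch

          uplink = tag_command(b"SET_MODE SAFE")
          print(verify_command(uplink))                     # authentic -> b'SET_MODE SAFE'
          print(verify_command(uplink[:-1] + b"\x00"))      # tampered  -> None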
      • 04.1504 Space Cybersecurity Incident Response Framework: A Viasat Case Study
        Nick Saunders (Viasat, Inc), Rajiv Thummala (The Pennsylvania State University), Gregory Falco (Cornell University ) Presentation: Nick Saunders - Friday, March 7th, 08:55 AM - Dunraven
        The restoration of space systems following failures has historically been precipitated by natural phenomena such as geomagnetic storms or unintentional software malfunctions and hardware issues. However, the advent of intentional cyberattacks targeting space systems heralds a new era of incident response. This paper presents a first-hand account of the incident response to the February 2022 cyberattack on Viasat's KA-SAT network, which coincided with the Russian invasion of Ukraine. Lessons learned are highlighted and integrated into a proposed incident response framework for space system service providers.
      • 04.1505 A Model Based System Security Goal Elicitation Method Applied to a Space Traffic Management System
        Martin Span (Colorado State University), Sarah Rudder (), Jeremy Daily (Colorado State University) Presentation: Martin Span - Friday, March 7th, 09:20 AM - Dunraven
        A Model Based Systems Engineering (MBSE) approach to eliciting functional security goals for a notional space-based traffic management system highlights the utility of Eliciting Goals for Requirement Engineering of Secure Systems (EGRESS), a new methodology. EGRESS leverages a recently released MBSE standard, the Object Management Group (OMG) Risk Analysis and Assessment Modeling Language (RAAML), to conduct security analysis on a system of interest. The demonstration of the EGRESS method includes the use of the Systems Theoretic Process Analysis (STPA), an industry best practice approach to hazard identification and analysis. The artifacts from the EGRESS approach will be captured using RAAML in an MBSE tool as applied to a notional space-based traffic management system (STMS). Modeling a system in a digital environment provides traceability between the potential points of failure and risk assessments. Linking safety and security impacts to system assets and stakeholder needs enables design decisions and risk treatment at the appropriate level of abstraction. Early identification of security goals accommodates implementation as a 'Design For' consideration, treated on the same level as functional and safety requirements during the system design process. EGRESS uniquely provides direct traceability for security and a digital representation that facilitates the identification of failure modes along with the required remediation. This work demonstrates the utility of EGRESS, MBSE, and RAAML to secure system design for a representative space-based cyber-physical system (CPS).
      • 04.1508 Mechanical Vibration vs RF Characteristic: A Meta Fingerprinting Approach for UAV Classification
        Ying Wang (Stevens Institute of Technology), Joshua Meharg (Stevens Institute of Technology) Presentation: Ying Wang - Friday, March 7th, 09:45 AM - Dunraven
        This study combines mechanical vibration data with Radio Frequency (RF) signal characteristics to distinguish between UAVs of the same make and model, addressing current limitations in UAV identification and enhancing security in UAV operations. As UAVs become integral in fields like military, commercial, and medical sectors, reliable identification methods are crucial, particularly with their integration into Space-Air-Ground Integrated Networks (SAGIN). Traditional UAV classification methods—radar, vision, sound, and RF-based—each have limitations. RF-based methods, while cost-effective, struggle to accurately identify moving UAVs, especially in environments with signal interference from other devices. We propose a novel approach that integrates mechanical vibration data with RF signal characteristics, improving UAV classification accuracy. By capturing unique vibrational signatures and analyzing RF signals, this method offers a more robust and precise identification process. The study developed two deep learning models: a Recurrent Neural Network (RNN) for vibration data and a Convolutional Neural Network (CNN) for RF communication data. The RNN, trained on vibrational data from multiple UAV flights, focused on forces along the UAV's axes and clipping indicators, achieving 97% accuracy. The CNN, trained on 5G communication data from a UAV-attached mobile device, analyzed features like modulation schemes and signal-to-noise ratio, achieving 95% accuracy. Flight tests were conducted using two distinct UAVs under various conditions, including line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios. The data, logged by a Pixhawk flight controller, included both physical telemetry and communication data, which were preprocessed using custom Python scripts to ensure data integrity and suitability for training. The experimental results demonstrated the effectiveness of this classification approach. The RNN and CNN models achieved 97% and 95% accuracy, respectively, indicating that combining mechanical vibration data with RF signal characteristics provides a reliable and accurate method for UAV identification, even in challenging conditions. These models outperformed traditional machine learning methods like Random Forest and k-Nearest Neighbors, particularly in handling time-series data and capturing complex relationships essential for UAV classification. This research sets a new standard for UAV classification by integrating mechanical vibration data with RF signal characteristics. The high accuracy of the RNN and CNN models supports the feasibility of a fully integrated, on-chip solution for secure UAV communication systems, enhancing UAV identification and overall operational security and reliability. Ongoing work involves experiments on the Aerial Experimentation and Research Platform for Advanced Wireless (AERPAW), allowing further refinement of these models using data from different UAV architectures. The ultimate goal is to develop a comprehensive solution capable of reliably classifying a wide range of UAVs, ensuring security and safety across various applications.
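        To make the vibration-sequence model concrete, the sketch below defines a small recurrent classifier over per-axis force time series of the kind described above. The layer sizes, feature count, and two-class output are placeholders, not the authors' architecture or hyperparameters.

          # Minimal recurrent classifier over vibration time series (illustrative sizes,
          # not the authors' architecture); requires PyTorch.
          import torch
          import torch.nn as nn

          class VibrationRNN(nn.Module):
              def __init__(self, n_features=4, hidden=32, n_classes=2):
                  super().__init__()
                  self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
                  self.head = nn.Linear(hidden, n_classes)

              def forward(self, x):                  # x: (batch, time, features)
                  _, (h_n, _) = self.lstm(x)         # final hidden state summarizes the clip
                  return self.head(h_n[-1])          # class logits

          model = VibrationRNN()
          logits = model(torch.randn(8, 200, 4))     # 8 clips, 200 samples, e.g. x/y/z force + clip flag
          print(logits.shape)                        # torch.Size([8, 2])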
      • 04.1509 Attack Scenarios and Threat Assessment in Space Mission Cybersecurity
        Sakurako Kuba (Embry-Riddle Aeronautical University), Radu Babiceanu (Western Michigan University) Presentation: Sakurako Kuba - Friday, March 7th, 10:10 AM - Dunraven
        Sophisticated attackers such as nation-states and well-organized criminal groups constantly seek to exploit vulnerabilities in space-based infrastructure. One of the primary concerns is the potential for threats that disrupt space-based systems, degrading satellite services or jeopardizing the safety of astronauts. The lack of outer space law and insufficient provisions addressing cybersecurity pose challenges for current space missions, which must maintain an incident response strategy and prepare for potential impacts to ensure the safety and success of the mission. Further, Artificial Intelligence (AI) and Machine Learning (ML) integrations can boost operational efficiency for both crewed and uncrewed missions. However, there is currently a lack of soft law governing AI-driven systems in space operations, which increases cybersecurity vulnerability and threat risk. This paper proposes a goal-based attack tree for different types of attack goals to develop novel attack scenarios. Each scenario is assessed to identify the risks based on its consequential impact and probability of occurrence using a risk matrix. Additionally, this paper explores existing frameworks, AI/ML use cases, and potential threats and risks unique to space AI applications. This risk assessment aims to expand the scope of existing attack scenarios by assessing various approaches to trigger impacts on satellite navigation.
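        The impact-times-probability scoring described above can be sketched in a few lines. The scales, bins, and example scenarios below are illustrative placeholders, not the paper's risk matrix.

          # Tiny risk-matrix sketch: score = likelihood x impact on 1-5 scales, binned
          # into qualitative levels. Bins and example scenarios are illustrative only.
          def risk_level(likelihood: int, impact: int) -> str:
              score = likelihood * impact            # both on 1 (low) .. 5 (high) scales
              if score >= 15:
                  return "high"
              if score >= 6:
                  return "medium"
              return "low"

          scenarios = {
              "Spoofing of satellite navigation signals": (4, 4),
              "Telemetry eavesdropping": (3, 2),
          }
          for name, (likelihood, impact) in scenarios.items():
              print(f"{name}: {risk_level(likelihood, impact)}")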
  • Alex Austin (Jet Propulsion Laboratory) & Catherine Venturini (The Aerospace Corporation)
    • Lee Jasper (Space Dynamics Laboratory ) & Young Lee (Jet Propulsion Laboratory) & Benjamin Donitz (NASA Jet Propulsion Laboratory)
      • 05.0101 DiskSat Demo Mission: New Paradigm in Small Satellite Architectures
        Catherine Venturini (The Aerospace Corporation), Darren Rowen (The Aerospace Corporation) Presentation: Catherine Venturini - Sunday, March 2nd, 09:00 PM - Lake/Canyon
        The DiskSat is a quasi-two-dimensional satellite bus architecture designed for applications requiring high power, large apertures, and/or high maneuverability in a low-mass containerized satellite. The DiskSat structure is a flat composite disk, one meter in diameter and 2-3 cm thick, with a mass of 2-3 kg. Spacecraft avionics, sensors, actuators, and payloads are either installed into cutouts in the composite structure or mounted to the surface. These additions increase the effective thickness in some areas and add mass to the overall spacecraft. The surface area of the DiskSat is large enough to host 200 W of solar cells without deployable solar panels. For launch, multiple DiskSats are stacked in a dispenser and are released individually in orbit. The combination of the lightweight composite structure and large surface area for solar cells allows for power-to-weight ratios significantly exceeding traditional satellite designs. When flying edge-on to the velocity direction, the small cross-sectional area helps to reduce atmospheric drag. When coupled with an electric propulsion thruster, the spacecraft can maintain orbit at lower altitudes than traditional spacecraft. The demonstration mission intends to explore this Very Low Earth Orbit (VLEO) regime experimentally to characterize performance. A demonstration mission of the DiskSat platform is under development, with flight-ready units expected to be completed in Spring 2025. The mission will include four DiskSats and a newly developed dispenser. The mission will demonstrate and characterize the performance and utility of the platform, power management, the potential for maneuverability in different orientations and orbital regimes, and multi-satellite deployment from the dispenser. The dispenser is a containerized system that incorporates consideration for rideshare Do No Harm and protection from debris during launch. Launch is tentatively scheduled for Spring 2026. This paper will provide an overview of the status of the DiskSat demo mission development, with details on the bus and dispenser, followed by a discussion of potentially new mission applications enabled by the DiskSat platform. Mission concepts could include Earth-orbit, cis-lunar, and deep space applications. Engagement with the larger small satellite community to discuss mission applications and opportunities is underway. The results from the demo mission will feed into a tech transfer program for industry and others to easily use the platform and develop a manufacturing base.
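        The drag argument above follows from simple geometry: a thin disk presents a far smaller frontal area edge-on than face-on. The quick check below uses the dimensions quoted in the abstract (1 m diameter, 2-3 cm thickness); it is back-of-the-envelope arithmetic, not a mission drag analysis.

          # Back-of-the-envelope frontal-area comparison for a 1 m diameter, ~2.5 cm thick
          # disk (mid-range of the quoted 2-3 cm); not a drag or lifetime analysis.
          import math

          diameter, thickness = 1.0, 0.025                 # m
          face_on = math.pi * (diameter / 2.0) ** 2        # ~0.79 m^2
          edge_on = diameter * thickness                   # ~0.025 m^2
          print(face_on, edge_on, face_on / edge_on)       # roughly a 30x smaller cross-section edge-on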
      • 05.0102 Distributed Space System Architecture to Enable Rapid Technology Development
        Carlos Maldonado (University of Colorado at Colorado Springs), Jonathan Deming (Los Alamos National Laboratory), Zachary Miller (Los Alamos National Laboratory), Daniel Arnold (Los Alamos National Laboratory), Tony Nelson (Los Alamos National Laboratory), Mike Caffrey (Los Alamos National Laboratory), Robert Merl (Los Alamos National Laboratory), Anthony Rogers (Los Alamos National Laboratory), August Gula (Los Alamos National Laboratory), Michael Holloway (Los Alamos National Lab), Paul Graham (Los Alamos National Laboratory), Andrew Kirby (Los Alamos National Laboratory) Presentation: Carlos Maldonado - Sunday, March 2nd, 04:30 PM - Lake/Canyon
        Technical staff at the Los Alamos National Laboratory have developed a distributed space system architecture to enable rapid technology development. This system is adaptable to any host vehicle, and we present two flight missions that will implement this architecture at two ends of the host-vehicle class spectrum: (1) a 12U CubeSat and (2) the International Space Station (ISS). The Experiment for Space Radiation Analysis (ESRA) is the latest of a series of Demonstration and Validation (DemVal) missions built by the Los Alamos National Laboratory, with the focus on testing a new generation of plasma and energetic particle sensors along with critical subsystems. The primary motivation for the ESRA payloads is to minimize size, weight, power, and cost while still providing necessary mission data. These new instruments will be demonstrated by ESRA through ground-based testing and on-orbit operations to increase their technology readiness level such that they can support the evolution of technology and mission objectives. This project will leverage a commercial off-the-shelf CubeSat avionics bus and commercial satellite ground networks to reduce the cost and timeline associated with traditional DemVal missions. The system will launch as a ride share with the DoD Space Test Program to be inserted in Geosynchronous Transfer Orbit (GTO) and allow observations of the Earth’s radiation belts. The ESRA CubeSat consists of two science payloads and several subsystems: the Wide field-of-view Plasma Spectrometer, the Energetic Charged Particle telescope, high voltage power supply, payload processor, flight software architecture, and distributed processor module. The ESRA CubeSat will provide measurements of the plasma and energetic charged particle populations in the GTO environment for ions ranging from ~100 eV to ~1000 MeV and electrons with energy ranging from 100 keV to 20 MeV. ESRA will utilize a commercial 12U bus and demonstrate a low-cost, rapidly deployable spaceflight platform with sufficient SWaP to enable efficient measurements of the charged particle populations in the dynamic radiation belts. The ESRA mission CDR was held on 15-Aug-23 and the team is currently engaged in flight builds for all payloads, subsystems, and the space vehicle. We will present bench-top testing results for all critical subsystems, ion beam testing (1-30 keV) results for the WPS conducted at the LANL Intelligence and Space Research Ion Beam Laboratory, and electron beam testing (10-18 MeV) of the ECP payload at Brookhaven National Laboratory. The Autonomous Ion Mass Spectrometer Sentry (AIMSS) is the latest in a series of LANL internally funded research and development efforts dedicated to on-orbit experiments. The sensor payload will utilize the same core electronics subsystem as ESRA; however, it will field an ion mass spectrometer. The sensor will make measurements of the background ionospheric plasma environment and monitor contamination of ISS surfaces due to impinging plumes, leaks, and venting activities.
      • 05.0104 The Pandora SmallSat: A Low-Cost, High Impact Mission to Study Exoplanets and Their Host Stars
        Thomas Barclay (NASA Goddard Space Flight Center) Presentation: Thomas Barclay - Sunday, March 2nd, 04:55 PM - Lake/Canyon
        The Pandora SmallSat is a NASA flight project aimed at studying the atmospheres of exoplanets—planets orbiting stars outside our Solar System. Over its one-year prime mission, Pandora will observe more than 200 planetary transits. By measuring the apparent sizes of these planets at visible and short-wave infrared wavelengths, Pandora will help us understand how contaminating stellar signals affect our interpretation of the chemical compositions of these planetary atmospheres. The mission will provide the first dataset of simultaneous, multiband, long-baseline observations of exoplanets and their host stars. Early in the mission's conceptual development, it was clear that achieving this scientific goal required a departure from the traditional cost-schedule paradigm of half-meter class observatories. Pandora achieves this by leveraging existing capabilities that necessitate minimal engineering development, alongside firm-fixed-price contracts. The Pandora team has developed a suite of high-fidelity parameterized simulation and modeling tools to estimate the performance of both imaging channels. This has enabled a unique bottom-up approach to deriving system requirements. This unconventional approach for aerospace missions has facilitated synergies between previously disparate technologies and capabilities. Pandora is a partnership between NASA and Lawrence Livermore National Laboratory. The project completed its Critical Design Review in October 2023 and is slated for launch into sun-synchronous low Earth orbit in Fall 2025.
      • 05.0105 From One Unit Tech Demo to Three Unit Class D Constellation: Ops Lessons from RainCube to INCUS DAR
        Shivani Joshi (Jet Propulsion Laboratory), Benjamin Donitz (NASA Jet Propulsion Laboratory), Robert Beauchamp (), Simone Tanelli (Jet Propulsion Laboratory), Stephen Durden (), Bradley Ortloff (NASA Jet Propulsion Laboratory) Presentation: Shivani Joshi - Sunday, March 2nd, 05:20 PM - Lake/Canyon
        INvestigation of Convective UpdraftS (INCUS) is a NASA Earth Venture Mission (EV-M) managed by JPL for the Colorado State University (CSU) PI. The INCUS mission will comprise a constellation of three observatories, each consisting of the spacecraft bus and the Dynamic Atmospheric Radar (DAR). One observatory will also host the Dynamic Microwave Radiometer (DMR), based on the TEMPEST-D technology demonstrator. DAR is an evolution of RainCube (Radar In a Cubesat), with 7-beam cross-track scanning capability. The three observatories will fly in low-Earth-orbit (LEO) in a train formation to make measurements of tropical convective storms at three different times (0, 30 and 120 seconds). The temporal sampling of the DAR reflectivity profiles is used to measure reflectivity changes with time, allowing the detection of convective updrafts and the estimation of their transport of air and water between the surface and upper troposphere, known as Convective Mass Flux (CMF), and elucidating how this transport varies with storm type and throughout the storms’ lifecycles. The INCUS measurements will increase our understanding of how, when, where, and why storms form, and will allow us to better model and predict the role of convective storms in current and future climates. In this paper, focused on the Dynamic Atmospheric Radar (DAR) instrument of the INCUS mission, we discuss how the design heritage, implementation and test experience, and lessons learned from on-orbit flight operations of the RainCube technology demonstration were leveraged in developing the concept, system requirements, and implementation approach for the DAR instrument. We also present the programmatic considerations associated with translating a technology demonstration mission to a higher class (class C or D) mission and the trade-offs considered during such development. RainCube was a 35.75 GHz Ka-band radar with a 0.5 m deployable antenna in a 6U CubeSat form factor. RainCube was successfully launched to the ISS on May 21, 2018 and completed a 2-year mission in LEO on Dec 24, 2020. The INCUS DAR is a 35.75 GHz Ka-band radar with a 1.6 m deployable antenna on a 130 kg SmallSat. INCUS is scheduled to launch in 2026.
      • 05.0107 Integration and Delivery of the Deployable Optical Receiver Aperture (DORA) Cubesat
        Marc-Olivier Lalonde (Arizona State University), Daniel Jacobs (Arizona State University), Judd Bowman (Arizona State University), Yifan Zhao (Arizona State University), Titu Samson (Arizona State University), Joseph DuBois (ASU), Chandler Hutchens (Arizona State University), Ben Weber (Arizona State University), Dylan Larson (Arizona State University), Sid Vaidyanathan (Arizona State University), Sam Cherian (Arizona State University), Quang Huy Dinh (Arizona State University) Presentation: Marc-Olivier Lalonde - Sunday, March 2nd, 09:25 PM - Lake/Canyon
        The Deployable Optical Receiver Aperture (DORA) experiment is a pathfinder for large-scale space-based radio interferometry, with experiments addressing scalable laser communications within swarms and precision low-frequency radio astronomy instruments. The collection of prototype instruments will characterize background noise which might limit future optical links and radio astronomy measurements. The DORA laser terminal concept uses arrays of solid-state optical/infrared sensors to form a widefield, high-sensitivity detector theoretically capable of speeds up to 1 Gbps at a range of up to 1000 km between cubesats without requiring precision pointing. The freedom to rotate as needed for science or power purposes, without interrupting data transfer, will enable future swarm radio arrays of freeflying antennas operating as a single correlated array. The first DORA mission aims to flight-test silicon photomultipliers and measure background light from reflected sunlight, moonlight, and city lights. DORA’s radio pathfinder aims to flight-test compact solid-state radio technology needed for precision low-frequency receivers targeting the cosmic dawn and epoch of reionization. Solid-state RF switches and related elements which have not yet been demonstrated in space at the precision necessary have been formed into a dual-mode radio spectrometer attached to a deployable monopole. A filter-band VHF spectrometer provides continuous monitoring of the 50 to 200 MHz spectrum in 20 MHz channels while a miniature software defined radio makes high spectral resolution scans of the same band. The antenna used is a deployable monopole antenna. The satellite was built and tested by students in ASU’s Low-frequency Cosmology Lab and ASU’s Interplanetary Initiative Lab (IPL). Vacuum testing, vibration testing, and day-in-the-life testing were all performed at ASU. The design and build began during the COVID-19 pandemic, which had many effects on the availability of people and components. As a result, late changes were necessary to the payload, structure, and power systems. During testing there were several component failures, all of which were resolved before delivery. These can ultimately be traced to late changes and provide instructive examples for future projects. DORA was aboard Northrop Grumman’s 21st Cygnus flight, launched on August 4th, 2024, and is currently set for deployment on October 8th, 2024.
      • 05.0109 Multispectral and Submetric Earth Observation Optical Payload for Micro Satellite Platform
        Xavier Lopez (SATLANTIS MICROSAT S.A.) Presentation: Xavier Lopez - Sunday, March 2nd, 09:50 PM - Lake/Canyon
        SATLANTIS MICROSATS SA has led the development of a new Earth Observation satellite mission which provides very high-resolution and multispectral capabilities in the micro satellite size range. The mission, under the name GARAI, aims to deploy two different satellites into a sun-synchronous low Earth orbit to enable multiple imagery products for critical applications and services. Both satellites are targeted to fly on a SpaceX Transporter mission in 2025. The aim of this paper is to provide an overview of the payload developed and the spacecraft mission profile. Each payload consists of two binocular imagers from the Satlantis proprietary iSIM family, an iSIM-90 and an iSIM-170, combining submetric resolution with swath values up to 14.5 km and multispectral capability, with a total of 14 different filters split between the four optical channels covering SWIR, polarimetry, and PAN + VNIR spectra. The imagers are accommodated in an optical bench that interfaces with the platform deck through a vibration isolation system. The optical bench holds a set of star trackers and a thermal control system to achieve high thermal stability along the orbit, ensuring precise pointing and geolocation. The complete payload solution weighs around 30 kg and can achieve a continuous acquisition time of 15 minutes per orbit, covering around 80,000 km2 of territory. In addition, it features a high data rate link through a high-speed X-band transmitter that allows data download rates above 500 Mb/s, with the objective of reducing product latency as much as possible. The satellite bus platform selected for the mission was OHB Sweden’s InnoSat micro satellite platform, which stands out for its electric propulsion system, with a large delta-V budget providing station keeping, orbit transfer, collision avoidance, and deorbit capabilities, and its high-slew-rate mode, which enhances platform agility to efficiently track linear profiles such as borders, coastlines, pipelines, or multiple scattered targets.
    • Nathan Barba (Jet Propulsion Laboratory) & Young Lee (Jet Propulsion Laboratory) & Dexter Becklund (The Aerospace Corporation)
      • 05.0201 Cost Effective Mission Concepts for National Security and Meteorological Applications
        Aaron Pereira (University of Technology Sydney), Ed Kruzins (CSIRO), Jose Velazco (Chascii Inc.), Frederick Menk (University of Newcastle), Sean Bryan (Arizona State University) Presentation: Aaron Pereira - Monday, March 3rd, 08:30 AM - Lake/Canyon
        Australia currently lacks sovereign Earth observation capability to support national security and scientific exploration. This presents a severe risk in these times of heightened tensions in the Indo-Pacific. Additionally, to deal with the challenges of global climate change, the country needs the latest technology and tools at its disposal, namely space-based sensors that provide weather modeling data, for which Australia currently depends on overseas partners. This paper details a cost-effective SmallSat mission that delivers high-resolution imagery critical for the observation of sea lanes of communication as well as an advanced radiometer for in-situ characterization of clouds, enabling accurate extraction of data for the Bureau of Meteorology’s nowcasting applications.
      • 05.0202 Small Satellite Mission Design for Robotic Assembly and Reconfiguration of Mechanical Metamaterials
        Ashley Kline (Carnegie Mellon University), Frank Regal (The University of Texas at Austin), Colin Hoang (University of California, Berkeley), Olivia Formoso (NASA Ames Research Center), Zachary Manchester (Carnegie Mellon University), Kenneth Cheung (NASA - Ames Research Center) Presentation: Ashley Kline - Monday, March 3rd, 08:55 AM - Lake/Canyon
        In-space assembly will be crucial for constructing large-scale space structures on future missions. The ARMADAS (Automated Reconfigurable Mission Adaptive Digital Assembly Systems) project at NASA Ames seeks to leverage discrete lattice building blocks with high mechanical performance and a repeatable, thus predictable, structure as materials for autonomous assembly. Robots designed specifically for this controlled environment can live on and traverse across the structure as it is being built. Proof of concept experimentation on Earth has demonstrated that a team of three robots are able to autonomously execute pre-planned paths to assemble a large-scale, reconfigurable lattice structure. However, this technology has not yet been tested in a space environment, which is required for implementing this technology on future space missions and supporting NASA Artemis goals. In this paper, we discuss the mission overview and system requirements for a 27U CubeSat that could demonstrate this feasibility. We focus on the mechanical payload design of the CubeSat, which contains a robot team capable of assembling and reconfiguring lattice building blocks (termed voxels for volumetric pixels) in low-Earth orbit. The payload design includes re-scaling of the voxels presented in the original ARMADAS demonstration to accommodate the volume-constrained payload space. We show that we can achieve the same high performance mechanical properties. The design also focuses on an adapted team of robots, which demonstrates the scalability of the current technology, but in a zero gravity environment. Our goal was to further build on and optimize the system demonstrated on the ground by designing the robots in a more efficient, simplified way. Simplified, in this context, refers to a team of robots with fewer degrees of freedom and increased power efficiency. Scalability, on the other hand, is important for the eventual goal of deploying large numbers of robot teams working in conjunction to build even larger space structures at low cost. Finally, we investigate the capability of outfitting the voxels with thermal radiators. Due to size constraints of launch vehicles, modular thermal solutions can be highly volume-advantageous. We present the assembled voxels as a modular, lightweight, and reconfigurable solution that has the potential to offload high amounts of heat from on-board electronics, similar to deployable or pre-built radiator systems. On-orbit demonstration of the ARMADAS system would highlight the full autonomy of the system, its ability to package together in a more compact volume, and its capability to withstand harsh launch conditions. While the number of voxels assembled is smaller and the complexity of the structures built would be much simpler, this technology demonstration has the potential to explore the power systems challenges of in-space assembly and the volume constraints associated with getting the system to space.
      • 05.0204 Cost-Effective Very Low Earth Orbit Mission for Atmospheric Science
        Jose Pedro Ferreira (University of Southern California), Joseph Wang (University of Southern California), Aaron Pereira (University of Technology Sydney), Adrian Tang (NASA Jet Propulsion Laboratory), James Gilland (Ohio Aerospace Institute) Presentation: Jose Pedro Ferreira - Monday, March 3rd, 09:20 AM - Lake/Canyon
        Understanding and characterizing wind, composition, and temperature in the lower thermosphere is critical for two main reasons. First, the processes controlling their spatial and temporal changes, and how the neutral atmosphere interacts with the ionosphere and magnetosphere above, are fundamental to our understanding of the mechanisms that drive and affect the Earth’s upper atmosphere, as well as other planetary and stellar atmospheres. Second, space weather events, depending on how the upper atmosphere is affected by solar variability and lower-atmospheric disturbances, can severely harm spacecraft, human operations in space, and society’s ground-based technological infrastructure. Very low Earth orbit (VLEO), in an altitude shell between 250 km and 350 km, offers a unique vantage point to undertake cost-effective missions to study atmospheric dynamics using spectroscopic instruments. Additionally, by performing limb sounding measurements, the spectroscopic system is characterized and validated in an environment representative of a planetary exploration mission. This paper discusses a cost-effective VLEO mission with next-generation spectrometer instruments capable of meeting the strict payload mass and power requirements of future small-platform deep space missions, while opening new science applications for compact, low-altitude platforms in Earth's orbit.
      • 05.0206 REX: An Autonomous Resource Exchange System for Optimizing Microgravity Manufacturing Efficiency
        Anubhav Gupta (In Orbit Aerospace), Ryan Elliott (In Orbit Aerospace) Presentation: Anubhav Gupta - Monday, March 3rd, 09:45 AM - Lake/Canyon
        The advancement of microgravity manufacturing necessitates innovative systems capable of optimizing resource management in space. In Orbit Aerospace is developing the Resource Exchange Module (REX), an integrated system designed to facilitate the seamless storage, sorting, and transfer of payloads between orbital platforms and re-entry vehicles. This paper presents the design and functionality of REX, highlighting its autonomous capabilities that eliminate the need for traditional robotic arms, thereby reducing potential failure points and operational costs. By leveraging reusable launch technologies and an automated approach, the REX system aims to significantly reduce payload delivery costs by up to 75% and wait times by 66%. The cost function considers key variables such as the number of experiments, launch frequency, payload capacity, and preparation time, providing a comprehensive framework for understanding the economic impact of the system. Through Hardware-in-the-Loop (HIL) testing, we validate the system's performance in simulating the challenges of microgravity environments. The findings illustrate REX’s potential to revolutionize microgravity manufacturing, fostering economic growth in the space industry and enabling new applications on Earth. This paper discusses the implications of REX for future space missions and the broader market for microgravity research and manufacturing.
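        The abstract names the variables its cost function considers (number of experiments, launch frequency, payload capacity, preparation time). The sketch below is one purely hypothetical way such a per-experiment cost model could be parameterized; the functional form and figures are placeholders, not In Orbit Aerospace's model.

          # Hypothetical per-experiment cost model using the variables named in the
          # abstract; the functional form and all figures are placeholders.
          def cost_per_experiment(launch_cost, experiments_per_flight, flights_per_year,
                                  prep_cost_per_day, prep_days):
              shared_launch = launch_cost / experiments_per_flight
              preparation = prep_cost_per_day * prep_days
              throughput = flights_per_year * experiments_per_flight   # experiments per year
              return shared_launch + preparation, throughput

          baseline = cost_per_experiment(6e6, 10, 2, 5_000, 90)
          automated = cost_per_experiment(6e6, 40, 6, 5_000, 30)       # higher capacity, faster turnaround
          print(baseline, automated)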
      • 05.0207 Multimode Propulsion: Cislunar Rideshare Mission Concept Trade Space Analysis for Small Spacecraft
        Tyler Presser (University of Southern California), Nathan Ré (Advanced Space LLC), Daniel Erwin (University of Southern California), Mohammed Siddiqui (Phase Four) Presentation: Tyler Presser - Monday, March 3rd, 10:10 AM - Lake/Canyon
        This paper presents an overview of new low-cost cislunar mission concepts and explores their design trade space for small spacecraft utilizing future multimode propulsion systems. Multimode propulsion offers the promise of a spacecraft with a high-thrust chemical maneuver mode and a solar electric low-thrust maneuver mode. As national and commercial activity in cislunar space increases and multimode propulsion systems begin to come online, new low-cost pathways for small spacecraft to access multi-body orbits can be explored. One pathway of key interest is the available excess payload capacity on launch systems and transfer stages designed to deliver landers to the lunar surface, such as those on the Commercial Lunar Payload Services (CLPS) missions. While common for Earth-orbiting missions, rideshare for lunar missions has been attempted with varying degrees of success. To allow for flexibility in mission Concept of Operations (CONOPS), enable rapid-response mission timelines, and provide an examination of the design trade space for low-cost rideshare missions to cislunar space, we propose a set of new mission concepts that utilize current CubeSat technology enhanced with multimode propulsion. The proposed low-cost missions are designed around deployment from future rideshare opportunities to the lunar surface, aiming to deliver a 12U+ CubeSat to the Near Rectilinear Halo Orbit (NRHO). The spacecraft used in these concepts represents an upgraded version of the 12U+ architecture used for the CAPSTONE mission that utilizes Phase Four's multimode system, which consists of a plasma RF electric low-thrust system and a high-thrust chemical system both fed by a single hydrazine tank. The high-thrust mode is designed around a set of hydrazine chemical thrusters, representing an upgraded version of those on CAPSTONE, to produce approximately 200 s of Isp and up to 4 N of thrust combined. In low-thrust mode, the Phase Four system provides thrust levels on the order of 30 mN per kilowatt of power and can be reconfigured to provide between 500 s and 2000 s of Isp. When exploring the mission concept trade space, we analyze the choice of overall propulsion mode and choice of Isp in low-thrust mode throughout the mission. We find that the multimode system shows promising characteristics for enhancing flexibility in mission CONOPS. These include reduced transfer time, increased robustness to schedule changes and mission anomalies, and the possibility to target powered lunar flybys to further decrease propellant usage. In addition to exploring optimal transfer designs, we also consider on-orbit operations with the multimode system. Beyond operating in the NRHO, the multimode system also enables transfers between multibody orbits, which are also investigated as part of a follow-on mission. We note that our mission concepts are designed using an N-body dynamics model, and multimode trajectory optimization is done using Advanced Space's proprietary tools to solve different aspects of these challenges efficiently. Additionally, this effort revisits and builds upon work from previous NASA missions and proposals, namely CAPSTONE, Genesis, and ARTEMIS (THEMIS Extended Mission to LL1 & LL2), which leveraged low-energy transfer orbits, as well as previous feasibility studies on cislunar transfers with multimode propulsion for small spacecraft.
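        For context on the quoted Isp figures, the short sketch below applies the standard Tsiolkovsky rocket equation to compare propellant usage between the chemical and electric modes; the dry mass and delta-v value are hypothetical and are not taken from the paper.

        ```python
        # Tsiolkovsky rocket equation: propellant needed for a given delta-v at each
        # Isp mentioned in the abstract. Dry mass and delta-v are assumed values.
        import math

        G0 = 9.80665  # m/s^2, standard gravity

        def propellant_mass(dry_mass_kg, delta_v_ms, isp_s):
            """Propellant required to deliver delta_v, ending at dry_mass_kg."""
            return dry_mass_kg * (math.exp(delta_v_ms / (isp_s * G0)) - 1.0)

        dry = 25.0  # kg, hypothetical 12U+ CubeSat dry mass
        for label, isp in [("chemical (200 s)", 200), ("EP low (500 s)", 500), ("EP high (2000 s)", 2000)]:
            print(f"{label}: {propellant_mass(dry, 400.0, isp):.2f} kg of propellant for 400 m/s")
        ```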
    • John Samson (Morehead State University ) & Michael Swartwout (Saint Louis University) & Bruce Yost (NASA - Ames Research Center)
      • 05.0302 Avionics Design Architecture for Low-Cost CubeSat Missions and Lessons Learned from R5-S2 and R5-S4
        Kathryn Knesek (NASA - Johnson Space Center), Morgan Alexander (), Jack Wisbiski (NASA - Johnson Space Center) Presentation: Kathryn Knesek - Monday, March 3rd, 10:35 AM - Lake/Canyon
        The Realizing Rapid, Reduced-cost high-Risk Research (R5) project is funded by the Small Spacecraft Technology (SST) program within the Space Technology Mission Directorate at NASA and is based out of Johnson Space Center. The goal of this project is to develop a series of CubeSats to fast-track technology readiness levels of hardware and software payloads, allowing engineers to prove out technologies on an accelerated schedule while reducing cost. Unlike typical satellites, R5’s spacecraft utilize many commercial off-the-shelf (COTS) components, with custom hardware designed when necessary. This paper explores the avionics subsystem design and choices made as well as the lessons learned throughout the design and build of R5 Spacecraft 2 and R5 Spacecraft 4. The avionics subsystem design is divided into three main groups: power, propulsion, and payload interfaces. The power system contains the solar panels, batteries, and a battery management system (BMS). The propulsion system contains the electronics needed to control thrusters, monitor propulsion systems, and operate reaction wheels. Since R5’s mission is to fly various payloads quickly and cheaply in Low Earth Orbit (LEO), the interface system is designed to provide numerous types of connections made available to these payloads while providing command and data handling (C&DH) support for them. The avionics subsystem design supports late changes to mission goals and spacecraft configuration, reducing the need for redesign and testing time. Typical CubeSat architectures use the PC104 specification, a modular framework in which avionics PCBs are stacked. As the R5 project utilizes many COTS components and interfaces with a wide variety of payloads, the PC104 specification does not meet the needs of the project. Instead, the R5 avionics subsystem has developed an alternative framework that allows for easier mechanical and electrical integration with this variety of components. Throughout the design, build, and test process of Spacecraft 2 and 4 (S2 and S4), the team had the opportunity to learn several lessons that were then applied to future designs, Spacecraft 3 and 5 (S3 and S5). This paper will highlight the lessons learned and how the subsystem design evolved to buy down risk for future missions.
      • 05.0303 Operational Challenges and Achievements of the OPS-SAT-1 Mission
        David Evans (European Space Agency), Vladimir Zelenevskiy (Telespazio Germany GmbH), Georges Labrèche (Tanagra Space / European Space Agency), Tim Oerther (Terma GmbH), Nuno Carvalho (), Guilhem Honoré (Telecom SudParis), Frederik Dall'Omo (University of Stuttgart), Dominik Marszk (European Space Agency) Presentation: Georges Labrèche - Monday, March 3rd, 11:00 AM - Lake/Canyon
        The OPS-SAT-1 mission, launched by the European Space Agency (ESA) on December 18, 2019, provided a unique platform for testing and validating innovative space technologies. As the first satellite of its kind, OPS-SAT-1's primary goal was to lower the barriers to in-orbit experimentation, offering researchers an unprecedented opportunity to trial cutting-edge concepts in a real-world environment. This paper presents a detailed overview of the lessons learned from operating OPS-SAT-1, emphasizing both technical achievements and operational challenges encountered throughout the mission. A significant aspect of the mission was the successful engagement of the research community. By providing an accessible platform for experimentation, the spacecraft allowed numerous research teams to run their experiments in orbit, validating their technologies, gathering critical data, and disseminating their findings through various publications and public engagements. The collaborative nature of the mission fostered innovation and facilitated the exchange of ideas, resulting in a diverse set of experiments that spanned various fields, including Artificial Intelligence (AI), communications, and data processing, as well as non-traditional space activities such as financial transactions and gaming with in-orbit runs of onboard chess and DOOM. The research community's ability to successfully execute their experiments onboard OPS-SAT-1 emphasized the mission's role as a catalyst for technological advancement and research development in space. Pioneering AI experiments led to operationalizing the use of Machine Learning (ML) for day-to-day onboard real-time data processing and autonomous decision-making. This demonstrated the potential of AI to enhance spacecraft autonomy, setting the stage for other experiments and satellite missions to leverage AI for improved operational efficiency. The mission also explored new communication protocols and onboard data processing techniques. These included testing high-speed data downlinks and innovative compression algorithms to maximize the efficiency of data transmission to ground stations. The lessons learned from these tests highlighted the importance of optimizing communication strategies to handle the vast amounts of data generated by modern satellites. Throughout its operational life, OPS-SAT-1 encountered several anomalies and technical challenges that provided invaluable insights. Key among these were issues with the Attitude Determination and Control System (ADCS), which experienced failures in reaction wheels and control algorithm anomalies. This paper presents the technical achievements and operational lessons from the OPS-SAT-1 mission and provides a comprehensive understanding of the factors that contributed to its success. The insights gained from OPS-SAT-1 will be instrumental in developing future CubeSat missions, particularly the follow-up mission OPS-SAT VOLT.
      • 05.0304 OPS-SAT-1's Final Orbits and Reentry Analysis amid Mission Extension Attempts
        Frederik Dall'Omo (University of Stuttgart), Georges Labrèche (Tanagra Space / European Space Agency), Tim Oerther (Terma GmbH), Nuno Carvalho (), Guilhem Honoré (Telecom SudParis), Dominik Marszk (European Space Agency), Vladimir Zelenevskiy (Telespazio Germany GmbH), David Evans (European Space Agency) Presentation: Georges Labrèche - Monday, March 3rd, 11:25 AM - Lake/Canyon
        The OPS-SAT-1 spacecraft, an innovative CubeSat mission launched on December 18, 2019, by the European Space Agency (ESA), successfully demonstrated advanced in-orbit technology until its reentry on May 22, 2024. This paper offers a comprehensive overview of the satellite's final weeks of operations. It focuses on onboard telemetry analysis and mitigation of altitude loss. It also discusses operational experiences and lessons learned during the reentry phase to extend the lifetime of future small satellite missions nearing end-of-life. Two Line Element (TLE) data published by the US 18th Space Defense Squadron revealed significant periods of altitude loss closely correlated with anomalies in the Attitude Determination and Control System (ADCS). To minimize atmospheric drag, the spacecraft primarily used the “horizontal pointing” mode of its fine-pointing ADCS system, which orients the satellite so that the smallest surface area faces the flight direction. However, failures in reaction wheels and control algorithm anomalies degraded performance, thus increasing susceptibility to atmospheric drag. With the risk of S-band communication loss due to increasing spin rates, the Ultra High Frequency (UHF) radio system on OPS-SAT-1 was used as an alternative. The amateur radio community was engaged through a public platform to collect and submit UHF packets. This campaign was critical during reentry, as UHF packets provided telemetry and ADCS data. Utilizing the SatNOGS network and a frames-collector portal resulted in extensive ground coverage and invaluable observability of the state of the spacecraft during its final hours. ESA mission operators faced several operational challenges in the weeks preceding reentry. Deteriorating components led to undesired reboots and spacecraft recoveries, along with limited operational windows. The operations team rapidly developed creative solutions, such as an On-Board Control Procedure (OBCP) that injected newly defined Fault Detection, Isolation, and Recovery (FDIR) software to prevent impromptu and frequent shutdowns of the onboard computer. Ground infrastructure issues, including system-wide hard disk failures days before the reentry, compounded these challenges. Additionally, the May 2024 solar storms drastically decreased the spacecraft's altitude at an inopportune time.
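        As a small illustration of the TLE-based altitude tracking mentioned above, the sketch below converts a TLE mean-motion value into an approximate mean altitude; the example mean motions are made-up placeholders, not actual OPS-SAT-1 elements.

        ```python
        # Mean motion (rev/day) -> mean semi-major axis -> approximate altitude.
        import math

        MU = 398600.4418      # km^3/s^2, Earth's gravitational parameter
        R_EARTH = 6378.137    # km, equatorial radius

        def altitude_from_mean_motion(revs_per_day):
            n_rad_s = revs_per_day * 2.0 * math.pi / 86400.0
            semi_major_axis = (MU / n_rad_s**2) ** (1.0 / 3.0)
            return semi_major_axis - R_EARTH

        # A rise in the mean-motion field corresponds to a drop in mean altitude:
        for n in (15.55, 16.10):
            print(f"{n:.2f} rev/day -> ~{altitude_from_mean_motion(n):.0f} km mean altitude")
        ```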
      • 05.0306 Lessons Learned from the NASA TROPICS CubeSat Constellation Mission
        Andrew Cunningham (MIT Lincoln Laboratory), William Blackwell () Presentation: Andrew Cunningham - Monday, March 3rd, 11:50 AM - Lake/Canyon
        The NASA TROPICS CubeSat constellation mission is currently providing wide-swath microwave observations of tropical cyclones in twelve channels spanning 90-205 GHz at unprecedented revisit rates to improve our basic scientific understanding of how storms form and evolve and to improve our ability to forecast storm track and intensity. Four satellites were successfully placed into orbit on Rocket Lab Electron launch vehicles on May 8 and May 26 (NZST) 2023 – two satellites were deployed in each launch, resulting in two equally-spaced 33-degree inclined orbital planes at 550-km altitude. In advance of the constellation mission, the TROPICS engineering qualification unit was launched as part of the Transporter-2 rideshare mission on a SpaceX Falcon 9 launch vehicle into a sun-synchronous orbit at 530-km altitude. This TROPICS “Pathfinder” satellite operated successfully for 2.5 years and has provided a vast data record to validate and optimize all aspects of the TROPICS constellation mission, including the flight segment, ground segment, and science data segment. These five satellites have demonstrated the first-ever microwave data record with a better than 60-minute median revisit rate, and these observations have been downlinked to users with latencies of approximately 45 minutes on average, permitting the use of these data by operational forecasting centers. The mission has utilized a wide array of commercial products and services, including CubeSat buses, command and control, ground stations, and launch, to implement the mission at much lower costs than traditional operational weather satellite systems. Many technical innovations combined with a new paradigm for high-performance Earth observation have come with many challenges, obstacles, and setbacks over the course of mission planning, development, implementation, and operation. In this paper, we describe many of these problems in some detail and present observations, lessons learned, and a look at how the solutions that were conceived to overcome the issues could be used to improve future missions in a wide variety of application areas.
      • 05.0308 On-Orbit Performance and Lessons Learned for Autonomous Angles-Only Navigation of a Satellite Swarm
        Justin Kruger (Stanford University), Simone D'Amico (Stanford University) Presentation: Justin Kruger - Monday, March 3rd, 04:30 PM - Lake/Canyon
        This paper presents flight results and lessons learned from optical angles-only navigation of a satellite swarm, conducted during the Starling Formation-Flying Optical Experiment (StarFOX). StarFOX is a core payload of the NASA Starling mission, which consists of four propulsive CubeSats launched in July 2023. Prior angles-only flight demonstrations have been limited to a single observer and single target, and have relied upon a-priori target orbit knowledge for initialization, translational maneuvers to resolve target range, and external absolute orbit updates to maintain convergence. StarFOX removes these limitations by applying a new angles-only architecture called the Absolute and Relative Trajectory Measurement System (ARTMS), which integrates novel algorithms for image processing, batch orbit determination and sequential orbit determination. During StarFOX experiments from December 2023 to May 2024, images from on-board star trackers were used to perform multi-target and multi-observer relative navigation; to autonomously track and initialize navigation for unknown targets; and to perform simultaneous absolute and relative orbit determination without GPS. Relative positioning uncertainties (1-sigma) of 1.3% of target range are achieved for a single observer, reduced to 0.6% with multiple observers (without orbit control maneuvers). Nevertheless, on-orbit conditions proved more challenging than anticipated with respect to swarm visibility, image signal-to-noise ratios, image time-tag synchronization, and overall measurement reliability. The impact of these conditions on performance is studied by comparing in-flight telemetry to telemetry produced by an ARTMS digital twin running on the ground, using both in-flight measurement data and synthetic measurement data from pre-launch simulations. Usage of the digital twin facilitated efficient troubleshooting and mitigation via in-flight software updates which, in combination with a robust software design, allowed adaptation to adverse conditions. Upcoming StarFOX+ experiments in 2025 will extend ARTMS with additional capabilities, such as opportunistic detection and identification of passing resident space objects, which are preliminarily demonstrated via proof-of-concept simulations using StarFOX flight data.
    • Jin S. Kang (U.S. Naval Academy) & Michael Swartwout (Saint Louis University)
      • 05.0402 Building a CubeSat Capstone for Master’s Students
        Luke Korth (Johns Hopkins University/Applied Physics Laboratory) Presentation: Luke Korth - Monday, March 3rd, 04:55 PM - Lake/Canyon
        Aerospace education encompasses a broad and deep technical domain with many focuses and specializations, which can present challenges to the design of a graduate capstone class. This paper details the methodologies used and the design of the future capstone class for the Johns Hopkins Space Systems Engineering master's program, written by graduating students for future students. The goal of the graduate class is to integrate prior classes and experiences into a single class; it is geared toward a very wide range of experience levels and technical knowledge while still presenting challenges and opportunities for the most experienced students. The progressive class continuously builds on prior learning and labs through the integration of industry-standard tools and practices. Based on The Radio Amateur Satellite Corporation (AMSAT) CubeSat Simulator platform, the class embraces open-source software and hardware to provide modern and affordable materials, with the core tenet that education knows no borders, and is International Traffic in Arms Regulations (ITAR) free in order to support global education. Potential expansion into a two-semester class and modification for undergraduate and high school audiences are also discussed.
      • 05.0403 Applying DiskSat Concept to Small Satellite Education Programs
        Jin S. Kang (U.S. Naval Academy), Michael Sanders (US Naval Academy) Presentation: Jin S. Kang - Monday, March 3rd, 05:20 PM - Lake/Canyon
        With the CubeSat revolution in recent years, many educational institutions around the world have been able to bring hands-on components into space system and technology education. While CubeSats have become an integral part of space technology education programs throughout the country, including middle and high schools, the CubeSat's stacked configuration and inherent design constraints still pose challenges, particularly in terms of power capacity and volumetric/structural limitations. These challenges often act as an extra layer of difficulty in student satellite programs that are already making do with limited resources, both in terms of money and manpower. The DiskSat, a novel form factor satellite-on-a-disk developed by The Aerospace Corporation, presents a promising alternative for educational programs. DiskSats are circular satellites in a disk form factor of 1 m in diameter and 2.5 cm in thickness. The first iterations of DiskSat demo missions were optimized for high power generation and efficient propulsion. Unlike CubeSats, DiskSats can offer more electrical power without requiring a deployable solar panel, while maintaining a lower mass. Additionally, the design simplifies thermal management and reduces atmospheric drag, making them suitable for operations in very low Earth orbits, where shorter-term operation and proximity to the ground offer many benefits to student-operated missions. These attributes, along with the streamlined manufacturing process and potential for rapid re-entry upon mission completion, make DiskSats an attractive platform for university-level satellite projects, potentially easing the steep learning curve associated with CubeSat development while providing enhanced performance and educational value. This paper outlines key features of DiskSats from the perspective of a university student adaptation, discusses advantages and disadvantages of DiskSats as a student learning tool, and describes potential challenges in adapting the new form factor for student projects.
    • Michael O'Connor (United States Space Force) & Rashmi Shah (Jet Propulsion Laboratory/California Institute of Technology) & Laila Kazemi (arcsec )
      • 05.0504 Predicting the Expected Amount of Observable Space Debris with an SSA Capable Star Tracker.
        Thijs Verhaeghe (Royal Military Academy / KU Leuven), Laila Kazemi (arcsec ) Presentation: Thijs Verhaeghe - Monday, March 3rd, 09:00 PM - Lake/Canyon
        This study develops a simulation-based mathematical model to predict the expected amount of debris a generalised space-based optical sensor can register, in this case, an SSA-capable star tracker. ESA's MASTER software accurately models the debris surrounding Earth, providing discretized values for the debris density as a function of height, declination and diameter. A satellite platform equipped with an optical sensor can be placed artificially in a highly-frequented orbit. By using appropriate coordinate transformations, it becomes possible to compute the debris-originated photon flux by integrating the density over a cone-shaped volume, accurately representing the space observable by an optical sensor. Given the roughness of the integrand, numerical Monte Carlo methods are necessary to sample the physical space and provide reliable results efficiently. In reality, the observability of certain regions in space is subject to various constraints. These include, initially, the geometrical constraints of the sensor (such as Earth, Sun and Moon avoidance angles) and the debris (such as Sun illumination angles and distance to the platform). Additionally, the properties of the debris itself, such as shape, composition and diameter, play a critical role in determining the sensor irradiance. Subsequently, the incident photon flux is influenced by the size and quality of the lens. Lastly, the properties of the detector must be considered to determine how the perceived signal can be translated into a debris element. This is where the features of the star tracker and its detection algorithm come into consideration. The algorithm aims to include these complexities while motivating design choices based on current literature in the field of optical RSO detection. No additional simulation is needed as individual objects are aggregated into an overall volumetric density (analogous to the abstraction of individual molecules in an ideal gas). This provides a computationally cost-effective approach to modelling the expected amount of debris a star tracker can observe in a given setup. Furthermore, the photon flux of the discretized volume can be imaged on a CMOS detector. By further exploiting external third-party software to model a satellite scenario, the algorithm provides swift and easily accessible results that can influence mission planning and guide design decisions. We can show that the sensor lines of sight with high debris fluxes are significantly restricted. The algorithm differentiates itself through its ease of use and low computational cost, drawing upon a robust foundation of an extensively researched debris model. For validation, we will model the setup of a ground-based star tracker which observes the night sky. This corresponds to the current setup of arcsec's Sagitta and Twinkle star trackers installed on the Mercator telescope in La Palma.
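        The sketch below illustrates the kind of Monte Carlo cone integration described above, with a toy density function standing in for ESA MASTER output; the geometry, field-of-view parameters, and density model are simplified placeholders chosen only to show the estimator's structure.

        ```python
        # Monte Carlo estimate of the expected number of debris objects in a
        # cone-shaped field of view: sample the cone volume uniformly, average a
        # (toy) volumetric density, and multiply by the cone volume.
        import numpy as np

        rng = np.random.default_rng(0)

        def debris_density(alt_km):
            """Toy debris density [objects/km^3], peaking near 800 km altitude."""
            return 1e-8 * np.exp(-((alt_km - 800.0) / 250.0) ** 2)

        def expected_count(sat_alt_km=350.0, half_angle_deg=5.0, range_km=2000.0, n_samples=200_000):
            half_angle = np.radians(half_angle_deg)
            # r ~ R * u^(1/3) gives points uniformly distributed in the cone volume.
            r = range_km * rng.random(n_samples) ** (1.0 / 3.0)
            # Zenith-pointing boresight: sample altitude = satellite altitude + distance
            # along the boresight (transverse offsets neglected for this toy geometry).
            alt = sat_alt_km + r
            cone_volume = np.pi * (range_km * np.tan(half_angle)) ** 2 * range_km / 3.0
            return debris_density(alt).mean() * cone_volume

        print(f"expected objects in the field of view: {expected_count():.3f}")
        ```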
      • 05.0505 Development of an Extreme Ultraviolet Imager for the Sun Coronal Ejection Tracker (SunCET) CubeSat
        Evan Burger (JHU-APL), Bryan Maas (JHU APL), Aaron Magner (Johns Hopkins University/Applied Physics Laboratory), James Mason (Johns Hopkins University/Applied Physics Laboratory) Presentation: Evan Burger - Monday, March 3rd, 09:25 PM - Lake/Canyon
        The Sun Coronal Ejection Tracker (SunCET) telescope is an extreme ultraviolet imager used to study the dominant physical mechanisms for coronal mass ejection (CME) acceleration profiles as a function of altitude and time. The SunCET mission uses a 6U CubeSat to fill the middle corona (1.5–4 solar radii from the Sun's center) observational gap left by the Heliophysics System Observatory. CMEs are significant because they can affect space weather, impacting satellite operations, communications, and power systems on Earth. SunCET uses a two-mirror Ritchey-Chrétien telescope with a 2.0° x 2.7° full field of view (FOV) to form a 3.79 x 5.03 solar radii image on a backside illuminated CMOS detector. Aluminum filters and aperiodic B4C/Mo/Al mirror coatings are utilized to block visible light and create a 17-20 nm bandpass. SunCET will be the first to fly these novel coatings, which are less susceptible to performance degradation in air than existing formulations. Low coefficient of thermal expansion materials (Zerodur and Invar) are used for the mirrors and structure to maintain focus over temperature. The telescope is 1.87kg and fits within a 10x15x20cm volume. The telescope was assembled and tested at Johns Hopkins Applied Physics Laboratory between February and May 2024. The telescope was measured to have a point spread function (PSF) with an 80% encircled energy radius of less than 30 arcseconds within the inner 3.5 solar radius portion of the FOV. This ensures SunCET’s observations complement existing and future coronagraphs and heliospheric imagers for continuity of coverage out to hundreds of solar radii.
      • 05.0507 CubeSat Laser Infrared CrosslinK Mission Development Status
        Paige Forester (Massachusetts Institute of Technology), Celvi Lisy (Massachusetts Institute of Technology), Leonardo Gallo (Massachusetts Institute of Technology), William Kammerer (Massachusetts Institute of Technology), Hannah Tomio (Massachusetts Institute of Technology), Kerri Cahoy (MIT), Myles Clark (University of Florida), John Hanson (Cross Trac Engineering, Inc.) Presentation: Paige Forester - Tuesday, March 4th, 08:30 AM - Lake/Canyon
        Optical communications can have advantages compared with radio frequency communications when high data rates are needed with limited size, weight, and power, when link security is a priority, and when clouds or weather do not impede line-of-sight between terminals. Currently, optical communications are not strictly regulated and the use of bandwidth is not nearly as contested as for radio frequencies. While there have been notable space-to-ground and space-to-space (intersatellite crosslink) laser communications demonstrations such as NFIRE/TerraSAR-X and LCRD, and even lunar (LLCD) and interplanetary (Psyche) laser communications, this work focuses on the NASA CubeSat Laser Infrared CrosslinK (CLICK) mission’s goal of advancing low-cost and small CubeSat-scale laser communications terminals that are robust to platform disturbances. The main goal of the two phases of the CLICK mission is to demonstrate low-cost miniaturized optical transceiver technology in Earth orbit for downlink and crosslink. CLICK is being jointly developed by the Massachusetts Institute of Technology (MIT), the University of Florida (UF), and the NASA Ames Research Center and consists of two flights: CLICK-A consists of a single downlink satellite to a ground terminal at MIT Wallace Observatory, and CLICK-B/C consists of two satellites performing a full-duplex crosslink with precision ranging and downlink. CLICK-A was deployed on September 5, 2022, performed six downlink experiments to Wallace Observatory, and de-orbited in March 2023. CLICK-B/C is slated to launch in 2025 courtesy of NanoRacks. This paper will focus on CLICK-B/C payload development. The CLICK-B/C flight has two separate BCT 3U buses, each hosting a 1.5U laser communication payload with mass < 1.7 kg. The two CubeSats will be deployed nearly simultaneously, and the goal is to use differential drag to control the range between CLICK-B and CLICK-C. CLICK-B and CLICK-C shall demonstrate full-duplex intersatellite laser communication at data rates of ≥ 20 Mbps over ranges from 25 km to 580 km, along with time transfer and precision ranging better than 50 cm. CLICK-C transmits data at 1563 nm and CLICK-B transmits at 1537 nm. CLICK-B/C will also utilize the Portable Telescope for Lasercom (PorTeL) at MIT Wallace Observatory for downlink communications. All three terminals, the PorTeL optical ground station as well as both satellites, each have a 976 nm beacon laser to aid coarse and fine pointing. The CLICK-B/C satellites are currently in assembly and functional test, including the alignment of the flight optical benches and evaluation of the Pointing, Acquisition, and Tracking (PAT) subsystem. PAT consists of open-loop body pointing, closed-loop coarse tracking, and closed-loop fine pointing. To verify the PAT subsystem, the response to injected disturbances will be measured and reported. The beam divergence is measured for each flight model by propagating the optical path 7 meters and then verifying the result. The PAT testing, divergence testing, environmental testing, and mission updates will be the main focus of the paper, while more specifics of the assembly and alignment of the CLICK-B/C optical payload will be discussed as well.
    • John Dickinson (Sandia National Laboratories) & Michael Mclelland (Southwest Research Institute) & Dimitris Anagnostou (Heriot Watt University)
      • 05.0601 Silicon Photomultipliers Implemented as Free Space Optical Communication Sensors
        Leonardo Gallo (Massachusetts Institute of Technology), Kerri Cahoy (MIT), Joseph Hollmann (Charles Stark Draper Laboratory) Presentation: Leonardo Gallo - Tuesday, March 4th, 08:55 AM - Lake/Canyon
        Free-space optical communications (FSOC) is an emerging field offering a compelling alternative to the current technology standard of radio frequency (RF) communications. Optical carriers have smaller size, weight, and power (SWaP) requirements compared to RF systems, thanks to the smaller required aperture size. The narrow beam divergence of FSOC terminals also enhances the power efficiency and security of long-range communication links. These improvements in performance can be leveraged by platforms constrained by size, such as satellites. Over the past decade, the number of satellite launches has increased tenfold, with smallsats now comprising 96% of these launches. The use of FSOC terminals for smallsats enables higher data rates but requires precise pointing. NASA's CubeSat Laser Infrared Crosslink (CLICK) B/C mission exemplified this by utilizing a beacon-based pointing, acquisition, and tracking (PAT) system to correct angular misalignment, while an avalanche photodiode (APD) receiver detects the communication signal. The high gain of the APD allows signal detection over distances from 25 km to 580 km for CLICK-B/C. This work explores whether higher sensitivities can be achieved using a Silicon photomultiplier (SiPM) as the receive optical sensor. SiPMs, which are arrays of APDs operating in Geiger mode, produce nanosecond output pulses with gains of 10^6 electrons per photon. This paper presents the experimental and simulation results of implementing a SiPM in a 2x2 array configuration as a dual pointing and communication sensor for FSOC terminals in LEO. The proposed SiPM setup directly measures the misalignment between the transmit and receive terminals, eliminating the need for a dedicated beacon laser and quadcell detector for PAT, thereby reducing the overall SWaP of a communication terminal similar to the CLICK-B/C terminal by a factor of 2. The pointing performance of the proposed SiPM configuration is characterized by calculating the noise equivalent angle (NEA) through simulations and experiments, while the communication performance is assessed by testing the maximum detectable pulsing frequency of a laser. Simulation results indicate an NEA of 1 urad and a maximum detectable pulsing rate of 2 GHz for a 1,000 km FSOC link.
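        To illustrate how a noise-equivalent angle (NEA) can be characterized for a 2x2 quadrant-style sensor, the sketch below Monte Carlo estimates the angular jitter of a centroid estimate under shot noise. Spot size, focal length, photon counts, and the noise model are placeholder assumptions, not the paper's SiPM parameters or simulation setup.

        ```python
        # NEA from a quadrant imbalance estimate: repeatedly draw Poisson counts for
        # the four quadrants, convert the imbalance to an angle, and take the standard
        # deviation of the angle estimate.
        import numpy as np
        from math import erf, sqrt

        rng = np.random.default_rng(0)
        FOCAL_LENGTH_M = 0.1          # assumed effective focal length
        SPOT_SIGMA_M = 50e-6          # assumed Gaussian spot radius on the 2x2 array

        def quadrant_fractions(x_off, y_off):
            """Fraction of a Gaussian spot falling in each quadrant."""
            px = 0.5 * (1 + erf(x_off / (sqrt(2) * SPOT_SIGMA_M)))
            py = 0.5 * (1 + erf(y_off / (sqrt(2) * SPOT_SIGMA_M)))
            return np.array([px * py, (1 - px) * py, px * (1 - py), (1 - px) * (1 - py)])

        def nea_urad(photons_per_update=5e4, n_trials=2000):
            angles = []
            slope = sqrt(2 / np.pi) / SPOT_SIGMA_M            # d(imbalance)/dx at center
            for _ in range(n_trials):
                counts = rng.poisson(photons_per_update * quadrant_fractions(0.0, 0.0))
                imbalance = (counts[0] + counts[2] - counts[1] - counts[3]) / counts.sum()
                angles.append(imbalance / slope / FOCAL_LENGTH_M)
            return 1e6 * np.std(angles)

        print(f"estimated NEA: {nea_urad():.2f} urad (1-sigma)")
        ```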
      • 05.0603 1U Membrane-based Deployable Solar Array Engineering Model Testing
        Tom Sproewitz (German Aerospace Center) Presentation: Tom Sproewitz - Tuesday, March 4th, 09:20 AM - Lake/Canyon
        With ever increasing power demands of space systems, the need for even more lightweight deployable solar arrays increases. Especially for small space systems with a high-power need like for high-performance on-board computing power it is still a challenge to provide sufficient power out of such small available volumes like CubeSats. In the project DEAR (Deployable 100W PV Array for SmallSats), funded by ESA in the ARTES 4.0 programme, a deployable 100W solar array was developed. It can be stowed in and deployed out of a 1U CubeSat volume. DEAR is a joint project between DLR, Astronika, AZUR SPACE Power and ESA. In this paper a short overview of the system design but in particular a summary of the Engineering Model testing and its results as part of the conclusion of this activity will be presented. The design is driven by the required 100W power EoL, its operation in LEO environment for at least 5 years and its tight volume and mass restrictions (deployment out of 1U CubeSat, maximum 2 kg). Especially in terms of mass and volume constraints these requirements clearly show that flexible or at least semi-flexible solar arrays need to be addressed for the problem solution. Based on previous DLR projects like GoSolAr a concept of a thin-film-based solar array membrane which is actively deployed by deployable masts is the base concept for the DEAR activity. The DEAR concept is composed of a deployable photovoltaic platform which contains a boom deployer and the deployable and stowed solar array blanket. The PV platform will be deployed out the of the 1U cube upon HDRM release and will lift the boom deployer provided by Astronika and the PV blanket out this volume. With this the PV membrane, equipped with standard triple junction solar cell assemblies from AZUR SPACE Power, can be deployed. After successful Critical Design Review an Engineering Model with a fully equipped PV blanket with 100 Solar Cell Assemblies underwent an extensive system test campaign consisting of vibration, thermal-vacuum and deployment testing. Deployment tests are conducted in a thermal-vacuum chamber on a specially designed deployment test MGSE including PV illumination and gravity off-load. The deployment tests will be conducted at the beginning of the test campaign, after vibration testing and in vacuum after thermal cycling testing. At the beginning of the test campaign and after each deployment electroluminescence and power output measurements will be conducted on the PV blanket to thoroughly determine the state of health of all Solar Cell Assemblies and to ensure 100 W EoL power output. The presentation will give a summary of the complete environmental test campaign, it will describe the test setup for deployment testing, which is specifically designed for this purpose and will evaluate the Solar Array’s performance under the requested test boundary conditions. Specific problems during testing related to the Solar Array or the dedicated test setups will be summarized by lessons learned.
      • 05.0604 Efficient GNSS-Based Attitude Determination and Integer Ambiguity Resolution for 3U CubeSats
        Yonghwan Bae (), Hanjoon Shim (Seoul National University), Changdon Kee (Seoul National University) Presentation: Yonghwan Bae - Tuesday, March 4th, 09:45 AM - Lake/Canyon
        The Seoul National University GNSS Laboratory Satellite-III (SNUGLITE-III) CubeSat consists of two satellites developed in a standard 3U configuration. These satellites aim to perform close-fly operations in orbit, collect GNSS Radio Occultation (GNSS-RO) data, and develop and validate an orbit control system for formation flying and rendezvous docking of CubeSats. The first crucial step for successfully executing the CubeSat mission is to perform orbit control within an acceptable margin of error, necessitating a precise attitude determination and control system. The attitude determination and control system of the SNUGLITE-III CubeSat comprises two attitude determination modules and two attitude control actuators. The primary attitude determination module is a star tracker, known for its high precision, which is responsible for most attitude determinations. The secondary module combines micro electro mechanical system (MEMS) sensors, including an inertial measurement unit (IMU), sun sensor, magnetometer, and a GNSS receiver, for orbit determination and GNSS-RO mission execution. This module serves as a backup for high initial angular rate situations during detumbling and when the star tracker is not operational. This study proposes an algorithm for GNSS-based attitude determination that is suitable for the constrained environment of CubeSats. Unlike conventional GNSS-based systems, the 3U CubeSat faces limitations due to the confined space, restricting the installation of multiple antennas in the same orientation, thus significantly reducing the number of available satellites for attitude determination and degrading estimation performance. However, the short baseline length allows for rapid estimation of the integer ambiguity in GNSS carrier phase measurements, enabling auxiliary satellite attitude estimation every second. We propose a method to resolve the integer ambiguity every second and perform attitude determination even in environments with limited common visible satellites, using a computationally efficient approach suitable for the low-power onboard computers of CubeSats. While the widely used least squares ambiguity decorrelation adjustment (LAMBDA) method is accurate, it requires significant computational resources. Therefore, we employ least squares ambiguity search technique (LSAST) and CubeSat-specific constraints to minimize the search space for integer ambiguity resolution using satellite differencing combinations and other measurements. To evaluate the performance of the minimized search space, we conduct simulation analyses and real-world experiments, analyzing the success rate and Time to First Fix (TTFF) based on baseline length, number of visible satellites, and satellite combinations. The results confirm that GNSS-based attitude determination can be performed every second in the 3U CubeSat configuration, validating the integer ambiguity resolution performance.
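        The sketch below gives a much-simplified, single-epoch flavor of the baseline-constrained integer ambiguity search idea: a brute-force search over a small integer window, pruned by the known baseline length. It is not the SNUGLITE-III flight algorithm; LAMBDA/LSAST decorrelation is omitted, and the satellite geometry, noise level, and baseline are invented for illustration.

        ```python
        # Single-difference carrier phase model: phase = e.b / lambda + N + noise.
        # Candidate integer vectors are fixed by least squares and screened with the
        # known (short) baseline length; the lowest-residual survivor wins.
        import itertools
        import numpy as np

        LAMBDA_L1 = 0.1903  # m, GPS L1 carrier wavelength
        rng = np.random.default_rng(1)

        def simulate(baseline, los, sigma_cycles=0.01):
            true_N = rng.integers(-3, 4, size=len(los))
            phase = los @ baseline / LAMBDA_L1 + true_N + rng.normal(0, sigma_cycles, len(los))
            return phase, true_N

        def resolve(phase, los, baseline_length, window=3, tol=0.02):
            best = None
            for N in itertools.product(range(-window, window + 1), repeat=len(los)):
                N = np.array(N)
                b, *_ = np.linalg.lstsq(los, (phase - N) * LAMBDA_L1, rcond=None)
                if abs(np.linalg.norm(b) - baseline_length) > tol:
                    continue  # the baseline-length constraint prunes most candidates
                resid = np.linalg.norm(los @ b / LAMBDA_L1 + N - phase)
                if best is None or resid < best[0]:
                    best = (resid, N, b)
            return best

        los = np.array([[0.6, 0.3, 0.74], [-0.4, 0.7, 0.59], [0.1, -0.8, 0.59], [0.7, -0.1, 0.70]])
        los /= np.linalg.norm(los, axis=1, keepdims=True)      # unit line-of-sight vectors
        baseline = np.array([0.20, 0.05, 0.10])                # ~0.23 m baseline on a 3U bus
        phase, true_N = simulate(baseline, los)
        resid, N_hat, b_hat = resolve(phase, los, np.linalg.norm(baseline))
        print("true N:", true_N, "fixed N:", N_hat, "baseline estimate:", np.round(b_hat, 3))
        ```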
      • 05.0608 Dynamically Reconfigurable Coprocessor for Floating-Point Arithmetic Capability in Small Satellites
        Hezekiah Austin (Montana State University), Chris Major (Montana State University), Zachary Becker (Montana State University), Tristan Running Crane (Montana State University), Kristoffer Allick (), Brock La Meres (Montana State University) Presentation: Hezekiah Austin - Tuesday, March 4th, 10:10 AM - Lake/Canyon
        Field Programmable Gate Arrays (FPGAs) are increasingly used in small satellite missions for tasks ranging from command & data handling to sensor data processing. An FPGA is a dense system of computing resources: logic gates, memory, look-up tables (LUTs), etc. The size, mass, input/output (I/O), and low power constraints of small satellites prevent designers from taking advantage of the full capability of FPGAs. Implemented yet unused components waste power and FPGA resources. This paper proposes increasing FPGA-based computer capability by implementing a dynamically reconfigurable coprocessor in parallel to the main processor on the FPGA. The coprocessor functions as a hardware accelerator for data-intensive operations or operations unsupported by the main processor. This approach minimizes the coprocessor’s footprint by reconfiguring the coprocessor in real time to reuse the same FPGA resources for different stages of a computation. Complex computational operations are broken up into multiple discrete stages, with individual coprocessors sequentially performing each stage. This allows the required FPGA resources for the coprocessor to be minimized while taking advantage of the computational boost of FPGA hardware over the main processor. A secondary benefit of this approach is that the coprocessors can support functionality not provided by the main processor. To investigate the feasibility of this approach, a set of floating-point operations was implemented as coprocessors and integrated into a RISC-V soft processor system. The results of this proof-of-concept provide evidence that the use of dynamically reconfigurable floating-point coprocessors can increase the computational capability of small satellite computers while fitting within the constrained resources of such missions.
      • 05.0609 Prototype Testing of a Compact Modular High Voltage Power Supply for Space Applications
        Carlos Maldonado (University of Colorado at Colorado Springs), Andrew Kirby (Los Alamos National Laboratory), Jonathan Deming (Los Alamos National Laboratory), Zachary Miller (Los Alamos National Laboratory) Presentation: Carlos Maldonado - Tuesday, March 4th, 10:35 AM - Lake/Canyon
        The Space High Voltage Power Supply team at Los Alamos National Laboratory has designed and developed a compact modular High Voltage Power Supply (HVPS) that will adhere to the 3U SpaceVPX (ANSI/VITA 78) specification using a conduction-cooled frame compliant with VITA 48. The HVPS will be flown as a part of the Experiment for Space Radiation Analysis (ESRA) 12U CubeSat and the Autonomous Ion Mass Spectrometer Sentry (AIMSS) missions. The HVPS system will provide static and dynamic high voltage potentials to drive next-generation charged particle instruments designed to measure the space environment. The ESRA Demonstration and Validation (DemVal) project will rapidly test and mature space technologies by flying through geosynchronous transfer orbit (GTO). Operation within the radiation belts will provide flight heritage of critical technologies, such as the HVPS, in the most stressing conditions for near-Earth orbits. The AIMSS mission will monitor the background plasma environment on board the International Space Station (ISS). The HVPS design will support a SpaceWire interface on the control plane and expansion plane as well as an I2C interface on the system management bus, enabling development with low-cost commercial enclosures, backplanes, and other resources that can be leveraged during the design process. The power supply will support two unipolar high voltage outputs up to 5 kVDC. The design will leverage a common control interface to allow the substitution of different high voltage multiplier and/or transformer configurations to tailor the output voltage and current drive capability. The design is intended to meet radiation hardness requirements for operation in GTO by employing radiation-hardened components with >100 kRad (Si) tolerance. The digital interface and control will be implemented using a Vorago microcontroller running the RTEMS Real Time Operating System and a Lattice FPGA that can be used as a SpaceWire router to connect devices on the control and expansion planes. High voltage outputs will be connected using an appropriately rated front panel connector. Overall system design and prototype testing of the HVPS unit will be presented.
      • 05.0610 AI-Driven Efficient Downlink Communication for Limited-Transmit-Power CubeSats in the Ka-band
        Mohammed Alqodah (University of Mississippi), Mustafa Matalgah (University of Mississippi) Presentation: Mohammed Alqodah - Wednesday, March 5th, 08:30 AM - Lake/Canyon
        CubeSats face significant transmit power limitations due to their compact size. This poses a challenge for maintaining reliable communication links, especially in bands offering high bandwidth like the Ka-band, which is attractive for satellite communications due to its high bandwidth capabilities that are essential for transmitting large volumes of data. Nevertheless, this frequency band is particularly susceptible to various types of attenuation, with rain fading being one of the most severe. Rain fading can introduce additional attenuation of up to 10 dB, drastically impacting the signal-to-noise power ratio (SNR) and, consequently, the quality of the communication link. Adaptive modulation and coding (AMC) schemes dynamically adjust the modulation and coding parameters based on the real-time SNR conditions. This adaptability allows for optimizing data rates in the presence of fluctuating SNR, which is common in the Ka-band due to weather-induced attenuation. Existing AMC algorithms, such as those used in the Digital Video Broadcasting Satellite Second Generation (DVB-S2) standard, are designed to support high-data-rate satellite communications in space assuming high downlink power. However, this assumption is not practical for the case of CubeSats, whose downlink SNR range includes a wide sub-interval that is not supported by the DVB-S2 standard and will result in outage. In this paper, we propose designing new AMC sets that are optimized to suit the unique challenges posed by low transmit power in CubeSats. Assuming the Ka-band spectrum, we develop analytical methods for calculating the average throughput (in bits per second) over the whole range of the channel SNR to obtain the performance gain of our proposed AMC sets as compared to the DVB-S2 sets. This is accomplished by developing statistical mathematical modeling for the downlink SNR with the constraint of limited transmit power at the CubeSat under various weather conditions and elevation angles. The contribution of the paper is two-fold. First, we use the resultant modeling to design the AMC sets. Second, we use the derived statistical SNR models to generate large datasets of SNRs. These datasets will be used to develop an AI-based algorithm capable of accurately predicting SNR values over periods exceeding twice the satellite signal propagation time. The predicted SNR will then be utilized by the AMC system to select the appropriate modulation and coding combinations, which will be sent from the ground station to the CubeSat for implementation. Different machine learning (ML) algorithms such as linear regression (LR), support vector machines (SVMs), Gaussian process regression (GPR), ensemble methods (EMs), and neural networks (NNs) will be studied for predicting signal parameters. A reinforcement learning algorithm will be used to dynamically adjust power, modulation, and coding to maximize throughput at a specified bit error rate (BER). The integration of AMC tailored for low-power CubeSat transmitters in the Ka-band presents a promising solution to overcome the limitations imposed by high attenuation and fluctuating SNR conditions. By developing and employing an AI-driven predictive model for SNR, we can enhance the reliability and efficiency of CubeSat communication systems, thereby supporting more ambitious space missions and applications.
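        A minimal sketch of the throughput-averaging step described above follows: expected throughput for a set of AMC modes under a Gaussian-in-dB downlink SNR distribution. The thresholds, spectral efficiencies, bandwidth, and SNR statistics are hypothetical placeholders, not the DVB-S2 sets or the proposed CubeSat sets.

        ```python
        # Average throughput = sum over modes of (probability SNR falls in the mode's
        # interval) x (mode spectral efficiency) x bandwidth; SNR below the lowest
        # threshold contributes zero (outage).
        import numpy as np
        from scipy import stats

        BANDWIDTH_HZ = 50e6   # assumed channel bandwidth

        # (switching threshold in dB, spectral efficiency in bps/Hz), lowest mode first
        amc_modes = [(-1.0, 0.5), (2.0, 1.0), (5.0, 1.5), (8.0, 2.25), (11.0, 3.0)]

        def average_throughput(mean_snr_db, sigma_db):
            snr = stats.norm(mean_snr_db, sigma_db)
            thresholds = [t for t, _ in amc_modes] + [np.inf]
            total = 0.0
            for i, (_, eff) in enumerate(amc_modes):
                p = snr.cdf(thresholds[i + 1]) - snr.cdf(thresholds[i])
                total += p * eff * BANDWIDTH_HZ
            return total

        for mean in (0.0, 4.0, 8.0):
            print(f"mean SNR {mean:+.0f} dB -> {average_throughput(mean, 3.0) / 1e6:.1f} Mbps")
        ```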
      • 05.0612 The Impact of Gravity-Gradient Stabilization on ADCS Efficiency and Design Optimization in CubeSats
        Yasmin Avelino (University of Brasília), Renato Borges (Universidade de Brasilia), William Silva (University of Brasilia) Presentation: Yasmin Avelino - Wednesday, March 5th, 08:55 AM - Lake/Canyon
        From the perspective of attitude dynamics, a rigid body in orbit is subjected to various external torques and disturbances that can significantly impact its orientation and, consequently, a satellite's overall mission performance and subsystems. Due to its inherent Earth-pointing capability, gravity-gradient stabilization emerges as a valuable passive attitude control method for some missions. A proper mechanical design of a CubeSat can leverage gravity-gradient stabilization, offering notable improvements in control efficiency and indirect benefits, such as reduced weight, simpler design, and lower power consumption. The CubeSat Design Specification (CDS) allows the body to achieve inertia ratios in all gravity-gradient stability map regions (Lagrange, DeBra-Delp, Pitch, Roll-Yaw, and Unstable), which gives the mechanical design freedom to seek better stability features. This paper investigates and compares the effects of gravity-gradient stabilization with two commonly used active control methods: magnetic torquers and reaction wheels, within the CubeSat standard. The study utilizes Python-based simulations to model the satellite's attitude in a circular Low Earth Orbit (LEO) at 500 km altitude. Scenarios are evaluated for CubeSats equipped with active ADCS systems - magnetorquers and reaction wheels - independently and in conjunction with gravity-gradient stabilization, across different CubeSat sizes. The analysis focuses on the performance of ADCS, particularly in control action and power consumption, while also discussing implications for weight and design optimization. The findings of this work aim to assess the potential advantages of integrating gravity-gradient stabilization into the CubeSat design, highlighting its impact on control performance and overall satellite efficiency.
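        For reference, the classical linearized gravity-gradient stability conditions behind the Lagrange/DeBra-Delp stability map can be checked in a few lines; the sketch below uses the textbook pitch and roll-yaw conditions for a circular orbit, with example inertias chosen only for illustration (this is not the authors' simulation code).

        ```python
        # Axis convention: x = roll (along-track), y = pitch (orbit normal), z = yaw (nadir).
        def gravity_gradient_stable(Ix, Iy, Iz):
            """Return (pitch_stable, roll_yaw_stable) for principal inertias Ix, Iy, Iz."""
            # Pitch libration: Iy * theta_ddot + 3 n^2 (Ix - Iz) * theta = 0
            pitch_stable = Ix > Iz
            s1 = (Iy - Iz) / Ix   # sigma_1
            s3 = (Iy - Ix) / Iz   # sigma_3
            b = 1.0 + 3.0 * s1 + s1 * s3
            # Roll-yaw characteristic equation coefficients must give stable roots.
            roll_yaw_stable = (s1 * s3 > 0) and (b > 0) and (b * b > 16.0 * s1 * s3)
            return pitch_stable, roll_yaw_stable

        # A 3U-like body with its long (minimum-inertia) axis at nadir sits in the
        # Lagrange region; swapping roll and yaw inertias breaks pitch stability.
        print(gravity_gradient_stable(Ix=0.040, Iy=0.042, Iz=0.008))   # (True, True)
        print(gravity_gradient_stable(Ix=0.008, Iy=0.042, Iz=0.040))   # (False, True)
        ```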
    • Nicole Fondse (Aerospace Corporation) & Kara O'Donnell (Aerospace Corporation)
      • 05.0701 Using the Decision Tree (DT) to Help Scientists Navigate the Access to Space (ATS) Options
        Robert Caffrey (NASA/Goddard Space Flight Center) Presentation: Robert Caffrey - Wednesday, March 5th, 09:20 AM - Lake/Canyon
        The Decision Tree (DT) is a tool that uses a tree-like model of decisions and their possible consequences, including outcomes, costs, schedule, risks, and performance. The paper applies the decision tree tool to the Access To Space (ATS) options to help scientists and smallsat teams select the best approach that meets their ATS requirements. The ATS DT has three primary branches: rideshare, hosted payloads, and dedicated launch vehicles. Each of these branches has multiple sub-branches of ATS options. This paper describes the ATS options and provides industry contacts for each ATS option.
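        A minimal sketch of the underlying data structure, a tree whose leaves carry rough attributes, is shown below; the three top-level branches follow the abstract, while the sub-branches and attribute values are hypothetical placeholders rather than content from the paper.

        ```python
        # A small tree of Access-to-Space options; leaf attributes are illustrative only.
        from dataclasses import dataclass, field

        @dataclass
        class Node:
            name: str
            children: list = field(default_factory=list)
            attributes: dict = field(default_factory=dict)   # e.g. cost, schedule, risk

        ats = Node("Access to Space", [
            Node("Rideshare", [Node("ESPA-class slot", attributes={"cost": "$", "schedule": "launch-dependent"})]),
            Node("Hosted payload", [Node("Commercial host", attributes={"cost": "$$", "schedule": "host-dependent"})]),
            Node("Dedicated launch vehicle", [Node("Small launcher", attributes={"cost": "$$$", "schedule": "flexible"})]),
        ])

        def walk(node, depth=0):
            print("  " * depth + node.name, node.attributes or "")
            for child in node.children:
                walk(child, depth + 1)

        walk(ats)
        ```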
    • Ryan Woolley (Jet Propulsion Laboratory) & Ashwati Das-Stuart (NASA Jet Propulsion Lab) & Rashmi Shah (Jet Propulsion Laboratory/California Institute of Technology)
      • 05.0801 Feasibility Analysis of Distributed Space Antennas Using Electromagnetic Formation Flight
        Seang Shim (The Graduate University for Advanced Studies), Yuta Takahashi (Tokyo Institute of Technology), Naoto Usami (Japan Aerospace Exploration Agency), Masahiro Kubota (), Shinichiro Sakai (JAXA) Presentation: Seang Shim - Wednesday, March 5th, 09:45 AM - Lake/Canyon
        Distributed space antennas using electromagnetic formation flight (EMFF) can achieve long-term, high-performance communication systems. EMFF, which allows simultaneous control of relative positions and absolute attitudes of multiple objects, is a desired propulsion technology for distributed space antennas. Distributed space antennas are dynamic systems whose shape directly affects communication performance; therefore, evaluating control performance for shape maintenance, deployment, and reconfiguration is crucial. Previous EMFF design studies focused on a few satellites, identifying trade-offs between coil size and control force. However, in distributed space antennas, the large number of satellites and power requirements for maintaining communication performance make the trade-off much more complex. This paper proposes a novel design method for satellites and coil actuators that meets antenna performance requirements, aiming to contribute to practical design applications. By assuming a grid formation for the antenna shape, J2 disturbances can be compensated in a bucket-brigade manner. This grid assumption simplifies the derivation of antenna gain and effective isotropic radiated power (EIRP), and by restricting directionality, also makes the antenna pattern simpler. This approach enables the identification of the maximum side lobe direction, allowing consideration of the communication strength outside the main lobe. We applied the proposed method to three cases: two with satellite spacing of half a wavelength (one considering only disturbance compensation power and the other including deployment power) and one with 3/2-wavelength spacing considering only disturbance compensation power. The derived design solutions indicate that a larger antenna than the monolithic BlueWalker3 (8 m, 1500 kg) is achievable when considering only formation maintenance. When considering deployment power, the solution achieved a similar size, suggesting the potential for improved performance through scalability. To validate the feasibility of the secured deployment power, we perform deployment simulations utilizing Model Predictive Control (MPC). The results demonstrate that MPC achieves faster convergence to a stable orbit than feedback control. This verifies the deployment power's adequacy and serves as a practical example of control system design for distributed space antennas.
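        The sketch below illustrates the kind of grid-array bookkeeping the abstract refers to: approximate boresight directivity, EIRP, and the worst off-axis lobe for half-wavelength versus 3/2-wavelength element spacing. The element count, per-element power, isotropic-element assumption, and crude aperture approximation are assumptions for illustration only, not the paper's analysis.

        ```python
        # Uniformly excited N x N grid of isotropic elements: crude directivity/EIRP
        # estimates plus a 1-D array-factor cut to find the worst lobe off boresight.
        import numpy as np

        def grid_array_metrics(n_side=16, spacing_wl=0.5, power_per_element_w=1.0):
            n_elem = n_side * n_side
            directivity = 4.0 * np.pi * (n_side * spacing_wl) ** 2      # aperture estimate
            eirp_w = n_elem * power_per_element_w * directivity         # EIRP = P_total * G (lossless)
            u = np.linspace(-1.0, 1.0, 4001)                            # u = sin(theta)
            psi = 2.0 * np.pi * spacing_wl * u
            denom = np.where(np.abs(np.sin(psi / 2.0)) < 1e-9, 1e-9, np.sin(psi / 2.0))
            af = np.abs(np.sin(n_side * psi / 2.0) / denom) / n_side    # normalized array factor
            outside_main_lobe = np.abs(u) >= 1.0 / (n_side * spacing_wl)
            worst_lobe_db = 20.0 * np.log10(af[outside_main_lobe].max())
            return directivity, eirp_w, worst_lobe_db

        # Half-wavelength spacing keeps lobes low; 3/2-wavelength spacing produces ~0 dB grating lobes.
        for d in (0.5, 1.5):
            direc, eirp, lobe = grid_array_metrics(spacing_wl=d)
            print(f"spacing {d} wl: ~{10*np.log10(direc):.1f} dBi, EIRP ~{10*np.log10(eirp):.1f} dBW, "
                  f"worst off-axis lobe {lobe:.1f} dB")
        ```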
      • 05.0802 Propellant-Free Rendezvous Mission of SNUGLITE-III CubeSat: Orbit Control Using Aerodynamic Forces
        Jae Woong Hwang (Seoul National University), Hanjoon Shim (Seoul National University), Yonghwan Bae (), Changdon Kee (Seoul National University), Jaegang Kim (Sejong University) Presentation: Jae Woong Hwang - Wednesday, March 5th, 10:10 AM - Lake/Canyon
        This paper addresses a challenging orbit control problem for the in-orbit demonstration of a propellant-free Rendezvous Proximity Operation and Docking (RPOD) CubeSat mission. The Seoul National University GNSS Laboratory Satellite-III (SNUGLITE-III) mission, consisting of two 3U CubeSats, performs close-range formation flying and rendezvous docking without using a propulsion system. This mission utilizes differential drag and lift forces in Low Earth Orbit (LEO) to control the relative distance between the two satellites within 1 km. This propellant-less orbit control method has been extensively studied theoretically in the literature due to its advantage of removing the need for on-board thrusters. Moreover, several CubeSat missions have demonstrated the effectiveness of the method by successfully performing constellation deployment and formation-flying with differential drag. However, these missions have all been conducted using an open-loop control approach with ground-based commands. Although ground stations performed accurate Orbit Determination (OD) for each satellite considering various perturbations and complex density models, there remains a risk of collision due to uncertainty in the atmospheric density and other modelling errors. Consequently, these missions required satellites to maintain a relative distance of at least 1 km, which is not suitable for proximity operation and docking. To address these challenges, we propose an autonomous, closed-loop orbit control system that utilizes differential drag and lift as control forces. Using a GPS-based centimeter-level relative navigation system, precise relative position and velocity measurements are obtained. The proposed orbit control system then calculates the appropriate attitudes for the two satellites to generate the desired differential drag and lift. In designing the controller, we first analyzed the aerodynamic drag and lift acting on the SNUGLITE-III shape based on its attitude. Subsequently, we developed two types of feedback controllers by adopting optimal control theory on linearized dynamics of relative motion: one utilizes differential drag for in-plane motion control, and the other utilizes differential lift for out-of-plane motion control. Without requiring precise atmospheric density predictions, the control gains are tuned to operate effectively across the possible range of atmospheric densities, ensuring simplicity and computational efficiency. To validate the performance of the proposed method, numerical simulations were conducted using a high-fidelity orbital propagator that includes major perturbations in LEO. To ensure applicability in real-world scenarios, the performance of the method was extensively analyzed across a wide range of atmospheric density variations. The results confirm the successful execution of the rendezvous proximity operation maneuver of SNUGLITE-III within 1 km using the proposed orbit control system. This novel orbit control method can be applied to LEO formation flying and rendezvous missions for CubeSats where onboard thrusters are unavailable, or as a backup system for thruster-based control.
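        As a simplified illustration of closed-loop differential-drag control, the sketch below applies a saturated LQR to the linearized Clohessy-Wiltshire in-plane dynamics, with a bounded along-track acceleration standing in for differential drag. The gains, drag authority, orbit, and initial separation are hypothetical and do not represent the SNUGLITE-III controller or its attitude-to-drag mapping.

        ```python
        # Saturated LQR on in-plane Clohessy-Wiltshire relative motion with an
        # along-track control acceleration bounded by an assumed drag authority.
        import numpy as np
        from scipy.linalg import solve_continuous_are
        from scipy.integrate import solve_ivp

        MU = 3.986004418e14                   # m^3/s^2
        a_orbit = 6778e3                      # ~400 km altitude orbit radius [m]
        n = np.sqrt(MU / a_orbit**3)          # mean motion [rad/s]

        # State [x, y, vx, vy] (x radial, y along-track); input = differential along-track accel.
        A = np.array([[0.0, 0.0, 1.0, 0.0],
                      [0.0, 0.0, 0.0, 1.0],
                      [3 * n**2, 0.0, 0.0, 2 * n],
                      [0.0, 0.0, -2 * n, 0.0]])
        B = np.array([[0.0], [0.0], [0.0], [1.0]])
        Q = np.diag([1e-4, 1e-4, 1e2, 1e2])   # hypothetical state weights
        R = np.array([[1e10]])                # heavy control penalty (drag is tiny)
        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)       # LQR gain, shape (1, 4)

        U_MAX = 5e-7                          # m/s^2, assumed differential-drag authority

        def closed_loop(t, s):
            u = np.clip(-(K @ s)[0], -U_MAX, U_MAX)   # saturated feedback command
            return A @ s + B[:, 0] * u

        s0 = [0.0, 800.0, 0.0, 0.0]           # start 800 m behind the target
        sol = solve_ivp(closed_loop, [0.0, 5 * 86400.0], s0, max_step=60.0)
        print(f"along-track separation after 5 days: {sol.y[1, -1]:.1f} m")
        ```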
  • Darin Dunham (Lockheed Martin) & Jordan Evans (Jet Propulsion Laboratory)
    • Bogdan Oaida (Jet Propulsion Laboratory, California Institute of Technology) & Maria De Soria Santacruz Pich (Jet Propulsion Laboratory) & Travis Imken (Jet Propulsion Laboratory)
      • 06.0101 Prototype Testing of the AMR-CR Instrument: Drivers, Implementation, and Results
        Lena Siskind (Jet Propulsion Laboratory), Michael Sondheim (NASA Jet Propulsion Lab) Presentation: Lena Siskind - Sunday, March 2nd, 04:30 PM - Lamar/Gibbon
        The CRISTAL mission will perform critical cryosphere science and is the next evolution for ocean topography measurements in the polar regions. The instrument suite on CRISTAL includes the Interferometric Radar Altimeter for Ice and Snow (IRIS) and the Advanced Microwave Radiometer for CRISTAL (AMR-CR), which will support the secondary ocean altimetry mission objective by providing correction for the radar path delay due to atmospheric water vapor. The AMR-CR instrument is a 6-channel radiometer with significant heritage from former AMR missions, such as those flown on Sentinel-6 and Jason-3. Many elements of the instrument design were inherited from previous missions, including the RF electronics, structural design, and calibration system, but some updates were made to the design to modernize the electronics, software, and programmable logic in the instrument. Merging the updated electronics with the inherited framework posed unique challenges and introduced risk of self-compatibility or performance issues arising post instrument integration, at which point issues are more difficult to correct and have greater impact to cost and schedule. In response to these risks, the project pursued a prototype test campaign prior to the start of flight hardware integration. This paper will describe specific differences, both internal and external, between CRISTAL AMR-CR and Sentinel-6 AMR-C that led to the selection and execution of two high priority test cases. The test results validated the functionality of the updated electronics and indicated their compatibility with the inherited receivers and spacecraft interfaces.
      • 06.0102 Deep RL for UAV Energy and Coverage Optimization in 6G-Based IoT Remote Sensing Networks
        Yonatan melese Worku (), Henok Tsegaye (University of New Mexico), Claudio Sacchi (University of Trento), Petro Tshakwanda (University of New Mexico) Presentation: Yonatan melese Worku - Sunday, March 2nd, 04:55 PM - Lamar/Gibbon
        The rapid evolution of IoT and networking capabilities has heightened the demand for intelligent and efficient data collection across diverse domains. This need is particularly pronounced in 6G-based IoT remote sensing networks, where integrating Unmanned Aerial Vehicles (UAVs) offers transformative solutions. UAVs enhance environmental observation, data collection, and communication efficiency in dynamic and challenging environments. However, optimizing their energy consumption and coverage remains a significant challenge, especially for real-time data acquisition and transmission. This paper addresses these challenges by introducing novel reinforcement learning (RL) techniques tailored to the unique demands of 6G and Non-Terrestrial Networks (NTN). The methodology integrates advanced deep learning algorithms designed to optimize strategic UAV data collection. Specifically, Q-learning, Double Deep Q-Network (DDQN), and Actor-Critic methods are implemented and rigorously compared under diverse operational conditions, including random starting positions, wind interference, and varying environmental complexities, ensuring robust performance evaluation. The study focuses on optimizing UAV surveillance scenarios where ground images are captured to collect sensor data on users. This problem is formulated as a multi-objective combinatorial optimization challenge, aiming to derive optimal trajectories for maximizing the detection of individuals in captured images while minimizing energy consumption. The RL approach leverages state-of-the-art algorithms tailored for handling discrete action spaces and balancing trade-offs between competing objectives. The objective is to maximize UAV coverage of users while ensuring timely return to base before battery depletion. It is hypothesized that these RL techniques will significantly improve key performance metrics, achieving optimal energy usage and a significant increase in coverage area compared to baseline methods. Additionally, faster convergence rates and greater robustness in varied environmental conditions are expected. Key findings demonstrate the effectiveness of RL techniques in optimizing performance metrics, despite challenges in reproducibility across diverse scenarios. Ongoing efforts focus on refining reward structures and optimizing hyper-parameters to enhance algorithm robustness for different conditions. This research advances RL-based strategies for UAV operations in 6G-based IoT networks, addressing energy efficiency and trajectory optimization challenges. It contributes to applications in disaster response, surveillance, precision agriculture, and infrastructure inspection, shaping adaptive technologies for complex operational environments.
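        As a toy illustration of the energy-versus-coverage trade-off these RL formulations balance (and not a reproduction of the paper's DDQN or Actor-Critic agents), the sketch below runs tabular Q-learning for a UAV on a small grid that is rewarded for ranging far from its base but penalized if the battery empties before it returns. The grid size, battery budget, and reward values are made-up assumptions.

```python
import numpy as np

# Toy tabular Q-learning sketch of the coverage-versus-energy trade described in
# the abstract (not the authors' DDQN/Actor-Critic agents). A UAV moves on a small
# grid, earns reward for ranging far from its base at (0, 0), pays one battery
# unit per move, and is penalized if the battery empties away from the base.
rng = np.random.default_rng(0)
W, BATTERY = 5, 16                             # grid width, battery budget (moves)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up / down / left / right
alpha, gamma, eps = 0.2, 0.95, 0.1

Q = np.zeros((W, W, BATTERY + 1, len(ACTIONS)))

def step(x, y, b, a):
    dx, dy = ACTIONS[a]
    nx, ny = min(max(x + dx, 0), W - 1), min(max(y + dy, 0), W - 1)
    nb = b - 1
    reward = 0.1 * (nx + ny)                   # crude "coverage value" far from base
    done = nb == 0
    if done and (nx, ny) != (0, 0):
        reward -= 10.0                         # battery died before returning home
    return nx, ny, nb, reward, done

for episode in range(5000):
    x, y, b = 0, 0, BATTERY
    while b > 0:
        a = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[x, y, b]))
        nx, ny, nb, r, done = step(x, y, b, a)
        target = r + (0.0 if done else gamma * Q[nx, ny, nb].max())
        Q[x, y, b, a] += alpha * (target - Q[x, y, b, a])
        x, y, b = nx, ny, nb

print("greedy value of the start state:", Q[0, 0, BATTERY].max().round(2))
```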
      • 06.0103 The Psyche Multispectral Imager Flight Software Interface
        Haley Bates-Tarasewicz (NASA Jet Propulsion Lab), Maria De Soria Santacruz Pich (Jet Propulsion Laboratory), James Bell (Arizona State University), Michael Walworth (Arizona State University), Michael Caplinger (Malin Space Science Systems) Presentation: Haley Bates-Tarasewicz - Sunday, March 2nd, 05:20 PM - Lamar/Gibbon
        The Psyche mission will visit the unique metal-rich asteroid of the same name, (16) Psyche, which orbits the Sun between Mars and Jupiter. Launched in October of 2023, Psyche seeks to gain valuable insight into the formation of planets by exploring what may be an exposed nickel-iron core of an early planetesimal, similar to our own Earth’s core. The Psyche science instruments consist of a Multispectral Imager, two Magnetometers mounted in a gradiometer configuration, and a Gamma-Ray and Neutron Spectrometer. The mission is led by Arizona State University, which also leads the Psyche Multispectral Imager investigation. The Psyche Multispectral Imager is an imaging system designed to capture geologic, compositional, and topographic data of the (16) Psyche asteroid. In addition to mapping, Imager data will be used to characterize the relative ages of surface regions, understand the impact, tectonic, and gradational processes on the asteroid’s surface, and provide geologic context for the measurements made by other instruments – all valuable for understanding if (16) Psyche is a planetary core. Due to the Imager’s specific operational needs and functional capabilities, the spacecraft-to-instrument flight software module needed to be designed with a unique approach in the context of the Psyche mission instruments. The spacecraft management of instrument data transfer, the ability to interrupt long-duration commands, and hardware sun-safety concerns created special cases for spacecraft-to-instrument communication not present with the other Psyche science instruments. These cases required a modified software architecture and supplemental testing for full verification and validation of the Imager interface. This paper discusses how the Imager’s special cases were accommodated within Psyche flight software and the strategies used for the instrument test campaign.
      • 06.0104 Payload System Design, I&T and V&V Challenges for an Academically Centered Flight Instrument
        Sara Susca (), Thomas Brown (JPL), Bradley Moore (JPL/Caltech), Konstantin Penanen (), Jennifer Rocca (Jet Propulsion Laboratory), Chi Nguyen (California Institute of Technology), Howard Hui (Caltech), Paul MacNeal (NASA Jet Propulsion Lab), Thomas DiSarro (Jet Propulsion Laboratory), James Wincentsen (Jet Propulsion Laboratory), Giacomo Mariani (NASA Jet Propulsion Laboratory), Stephen Padin (California Institute of Technology), Amelia Quon (Jet Propulsion Laboratory), Ross Williamson (Jet Propulsion Laboratory), Phillip Korngut () Presentation: Sara Susca - Sunday, March 2nd, 09:00 PM - Lamar/Gibbon
        The single-instrument payload for the SPHEREx (Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer) mission was designed, integrated, and tested by an institutionally and culturally diverse team forged from academic, industrial, and NASA lab environments. In this paper we describe the challenges encountered and solutions adopted through the implementation of the SPHEREx payload. In particular, we describe the architectural approach that guided the design phase and how, during the integration, testing, and verification phases, and under the pressures of a global pandemic, the strengths of each team were leveraged.
    • Matthew Horner (Jet Propulsion Laboratory) & Keith Rosette (Jet Propulsion Laboratory)
      • 06.0201 Europa Clipper Payload Accommodation Overview and Lessons Learned
        Greta Studier (Jet Propulsion Laboratory), Pranay Mishra (NASA Jet Propulsion Lab) Presentation: Greta Studier - Sunday, March 2nd, 09:25 PM - Lamar/Gibbon
        NASA’s Europa Clipper spacecraft carries a suite of ten science instruments, or payloads, to investigate Jupiter’s icy moon Europa. These ten science instruments collectively enable the spacecraft to meet the mission’s science objectives: (1) determine the thickness of Europa’s icy shell and how the ocean interacts with the surface, (2) investigate Europa’s composition, and (3) characterize the geology of Europa. All ten instruments were delivered from different institutions across the country to the Jet Propulsion Laboratory (JPL) for system level integration from March 2022 to June 2023. The Payload Accommodation role on the Europa Clipper project was tasked with the verification and validation of the interface between the flight system and the ten science instruments. The role had ownership of the Level 3 requirements defining many aspects of the spacecraft-to-payload interface, including: thermal control, electrical configuration, power, commands & telemetry, software, SpaceWire data link, parameters, and timing. This role evolved over the project life cycle, starting at the Applied Physics Laboratory and transferring over to JPL before instrument delivery. This paper aims to document the Payload Accommodation role on the Europa Clipper project and to provide lessons learned for the benefit of future projects. This paper describes the Payload Accommodation role across the lifecycle of the project, focusing on (1) development of the payload-to-spacecraft interface verification and validation architecture, (2) interface documentation and electrical integration oversight at hardware delivery, and (3) post-integration system level testing and V&V. At this time, the Payload Accommodation role is complete and the Europa Clipper spacecraft is set to launch in October of this year, 2024.
      • 06.0205 Nonlinear Effects of Loosely Constrained Deployable Mass on Instrument Dynamic Testing and Analysis
        Ryan Sorensen (NASA Jet Propulsion Lab) Presentation: Ryan Sorensen - Sunday, March 2nd, 09:50 PM - Lamar/Gibbon
        Many instruments utilize launch locks (pin-pullers, frangibolts, etc.) to restrain deployable assemblies during spacecraft launch to survive launch vibration and acoustic environments. Non-linear responses have been observed during vibration testing for instruments where the restrained mass represents a significant portion of the total mass. The non-linearity is particularly notable when the mass is loosely constrained and sees large relative displacement compared to the primary structure of the instrument. Three recent instruments, Multi-Angle Imager for Aerosols (MAIA), Radar for Europa Assessment and Sounding: Ocean to Near Surface (REASON), and Europa Clipper Magnetometer (ECM), demonstrated non-linear behavior due to this phenomenon, each in a different manner and with differing impacts on the test and/or subsequent analysis. The effect of the non-linear response is presented for each instrument as well as the adjustments to the test procedure and subsequent analysis. Finally, recommendations for improvements to analysis and test procedures are made for future instruments with loosely constrained deployable mass.
    • Peter Sullivan (NASA Jet Propulsion Lab) & Mohamed Abid (Jet Propulsion Laboratory / NASA)
      • 06.0301 AquaSat-1: An Imaging Spectrometer for Water Quality and Aquatic Ecosystem Monitoring from Space
        David Ardila (NASA Jet Propulsion Laboratory), Peter Sullivan (NASA Jet Propulsion Lab), Bryant Mueller (), Steven Davis (), Christine Bradley (Jet Propulsion Laboratory), David Thompson (Jet Propulsion Laboratory), Robert Green (Jet Propulsion Laboratory), Courtney Bright (CSIRO), Nick Carter (CSIRO), Joshua Pease (CSIRO), Andre Held (CSIRO) Presentation: David Ardila - Wednesday, March 5th, 04:30 PM - Lamar/Gibbon
        We present a description of the AquaSat-1 mission concept, with emphasis on the instrument and concept of operations. AquaSat-1 seeks to provide data that can be used to deliver actionable information on water quality and aquatic ecosystems. The instrument is a state-of-the-art 350-1050 nm imaging spectrometer, with a ≤10 nm Full-Width at Half Maximum (FWHM) spectral response function. It utilizes a fast f/1.8 optical system composed of a three-mirror telescope coupled with a Dyson-type spectrometer. The passively cooled detector is a 3072×512-pixel Teledyne digital-output array with 18 μm pitch. Analysis of the radiometric performance demonstrates that the instrument can provide the required signal-to-noise ratio with margin for challenging aquatic objectives. High optical throughput and ground motion compensation, along with high spectral and spatial uniformity, permit a high signal-to-noise ratio and excellent spectral-radiometric fidelity. The orbit is 400 km sun-synchronous with Local Time of Ascending Node (LTAN) 00:00. During its 1-year mission, AquaSat-1 will observe inland rivers, reservoirs, and lakes, as well as corals in coastal areas.
      • 06.0302 Preliminary Thermal Design of the OTTER Instrument for the SBG-TIR Study
        Ian McKinley (Jet Propulsion Laboratory), Gregory Allen (Jet Propulsion Laboratory), Jared Keller (Jet Propulsion Laboratory), Jose Rodriguez (JPL/NASA) Presentation: Ian McKinley - Wednesday, March 5th, 09:00 PM - Lamar/Gibbon
        The thermal infrared (TIR) free-flyer spacecraft is host to the orbiting terrestrial thermal emission radiometer (OTTER) instrument and a two-band visible and near-infrared (VNIR) camera. The purpose of the OTTER instrument is to acquire multispectral images of the Earth in the mid and thermal infrared ranging from 3 um to 12 um. These images provide the remote sensing data needed to inform key research and applications focus areas of ecosystems and natural resources, hydrology, weather, climate, and solid Earth. Launch is scheduled for 2027 with a three-year prime mission in a sun synchronous orbit at an altitude of 665 km. OTTER is an eight-band radiometer with seven bands in the 3 um to 12 um range and an additional band at 1.65 um. It expands on the heritage design of the ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS) instrument, utilizing a telescope to focus the optical signal onto spectral filters on an actively cooled 65 K focal plane array, a double-sided scan mirror rotating at constant speed, and two internal blackbody calibration targets. The OTTER thermal control architecture consists of active and passive elements and implements a heat rejection system that differs from that of ECOSTRESS. An overview of the overall thermal control design approach is presented along with discussion of various trade studies that have been performed.
      • 06.0303 The Optical Design of the Carbon Investigation (Carbon-I) Imaging Spectrometer
        Christine Bradley (Jet Propulsion Laboratory), Matthew Smith (Jet Propulsion Laboratory), David Thompson (Jet Propulsion Laboratory), Pantazis Mouroulis (Jet Propulsion Laboratory), Robert Green (Jet Propulsion Laboratory) Presentation: Christine Bradley - Wednesday, March 5th, 04:55 PM - Lamar/Gibbon
        The proposed Carbon Investigation (Carbon-I) Imaging Spectrometer is designed to measure variations of greenhouse gases in the Earth’s atmosphere. The instrument will survey the Earth from its own spacecraft at an altitude of approximately 610 km. It will use a coarse ground sampling distance (GSD) of <400 m in global mode for land and coastal monitoring and a finer 35 m GSD in target mode to sample key regions. The identification and quantification of greenhouse gases require continuous spectral sampling over the 2040-2380 nm wavelength range with <1 nm spectral sampling. The proposed design builds upon Jet Propulsion Laboratory’s experience developing spaceflight Dyson imaging spectrometers to achieve spectral sampling of 0.7 nm per pixel. This paper presents the proposed Carbon-I optical design, composed of a freeform three-mirror anastigmat telescope that couples to an F/2.2, highly uniform Dyson-inspired imaging spectrometer. This high uniformity and throughput enable Carbon-I to measure Earth’s greenhouse gas concentrations with unprecedented precision and spatial sampling.
      • 06.0304 MethaneSAT On-Orbit Lunar Calibrations Planning
        Maya Nasr (Environmental Defense Fund | Harvard University), Jonathan Franklin (Harvard University), Joshua Benmergui (), Steven Wofsy (Harvard University) Presentation: Maya Nasr - Monday, March 3rd, 11:50 AM - Amphitheatre
        The MethaneSAT satellite mission aims to detect, quantify, and monitor methane emissions in targets covering over 80% of global oil and gas production. Bridging the gap between existing point-source and global mapping remote sensing satellites, MethaneSAT enables simultaneous emission estimation from large individual point sources and from spatially diffuse area sources. One of MethaneSAT’s calibration activities is a monthly scan of the nearly full moon (a “lunar calibration scan”), which provides a consistent and well-known external light source in the wavelength range that MethaneSAT measures. This paper presents a detailed description of MethaneSAT’s lunar calibration activities and example data from an early MethaneSAT lunar calibration scan. MethaneSAT uses a synchronous scanning mode where the spacecraft attitude is fixed with respect to the orbital frame and the instrument array scans perpendicular to the apparent velocity of the moon. MethaneSAT’s instruments have a 21.3° field of view while the moon’s angular diameter is 0.52°, and so it would require 41 synchronous scans to illuminate the entire field of view. Due to data budget and time limitations, MethaneSAT has been performing 5 lunar calibration scans per lunar cycle when the phase angle is between +5 and +9 degrees. Observations from each lunar calibration scan are later compared to the disk-integrated irradiance predicted by the Lunar Irradiance Model of the European Space Agency (LIME), which is derived from ground-based observations from the Izaña Atmospheric Observatory and Teide Peak. LIME predicts variations in lunar irradiance, accounting for view geometry, phase angle, lunar librations, and the lunar surface albedo distribution of maria and highlands. In August 2024, MethaneSAT successfully scanned the moon, demonstrating the viability of the proposed calibration approach. However, a larger dataset of monthly scans is necessary for trending analysis and comparison against the LIME model. As such, this paper focuses on the planning and methodology for lunar calibration, presenting an example scan as a proof of concept. Further calibration trend analyses will be conducted as more scans are collected in the coming months. Once sufficient data is available, the calibration will be finalized and integrated into satellite retrievals.
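        The scan count quoted above follows directly from the ratio of the instrument field of view to the Moon's angular diameter; the one-liner below simply reproduces that arithmetic.

```python
import math

# The abstract's figure of 41 synchronous scans follows from the ratio of the
# 21.3 deg instrument field of view to the Moon's 0.52 deg angular diameter.
fov_deg, moon_deg = 21.3, 0.52
print(math.ceil(fov_deg / moon_deg))   # -> 41 passes to sweep the full array
```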
      • 06.0305 Monitoring Greenhouse Gases: From Massive Instruments to the Compact Uvsq-Sat NG Spectrometer
        Cannelle Clavier (LATMOS), Mustapha MEFTAH (CNRS) Presentation: Cannelle Clavier - Wednesday, March 5th, 09:25 PM - Lamar/Gibbon
        The six-unit CubeSat Uvsq-Sat NG is equipped with a Near-Infrared (NIR) spectrometer designed specifically for measurements of greenhouse gas concentrations. This mission explores the feasibility of deploying compact spectrometric technology aboard CubeSats to accurately determine atmospheric levels of CO2 and CH4. Operating in the NIR range from 1200 to 2000 nm, the spectrometer is optimally configured to detect and quantify the spectral signatures of key greenhouse gases. The deployment of this technology on Uvsq-Sat NG marks a significant advancement in climate research, offering vital data that enhances our understanding of environmental changes and showcasing the increasing sophistication of satellite technology in monitoring Earth’s evolving climate. This will involve showcasing the capabilities of a disruptive spectrometer in measuring greenhouse gases and exploring potential advancements to push beyond the current state of the art in this field.
    • Donnie Smith (Waymo) & Robert Magnusson (University of Texas at Arlington) & Thomas Backes (Georgia Institute of Technology)
      • 06.0401 Remote Sensing Dual-Band LWIR Thermometry Enhancements via Passive and Active Sensor Fusion
        Dan Harris (Northrop Grumman Corporation), Darin Dunham (Lockheed Martin) Presentation: Darin Dunham - Thursday, March 6th, 05:20 PM - Lamar/Gibbon
        Accurate temperature estimation can be critical in remote sensing applications, particularly when dealing with unresolved objects, where the challenge of obtaining precise measurements is heightened by the additional assumptions required. This paper introduces a novel method for enhancing temperature estimates by fusing data from passive and active infrared sensors. Our approach effectively decouples band emissivity and area estimates, which are typically combined as emissivity-area estimates, thus removing the need for those additional assumptions. Our fusion technique significantly improves temperature accuracy and provides a more precise and informative characterization of thermal properties. Through extensive simulations and validation, we demonstrate the efficacy of our approach, which holds promise for enhancing the reliability and performance of infrared sensing systems in various remote sensing applications.
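        For context on the baseline that the paper's passive/active fusion goes beyond, the sketch below shows classical two-band (ratio) thermometry under a greybody assumption, in which the unknown emissivity-area product cancels in the radiance ratio. The band wavelengths, true temperature, and emissivity-area value are illustrative assumptions, and this is not the authors' fusion method.

```python
import numpy as np
from scipy.optimize import brentq

# Baseline sketch: classical two-band ("ratio") thermometry under a greybody
# assumption, where the unknown emissivity-area product cancels in the radiance
# ratio. This is the setting the paper's passive/active fusion improves on; the
# band wavelengths, temperature, and emissivity-area value are assumptions.
H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(lam, T):
    """Spectral radiance [W m^-2 sr^-1 m^-1] at wavelength lam [m] and temp T [K]."""
    return (2 * H * C**2 / lam**5) / (np.exp(H * C / (lam * KB * T)) - 1.0)

lam1, lam2 = 8.5e-6, 11.5e-6       # two LWIR bands [m], assumed
T_true, eps_area = 320.0, 0.37     # the emissivity-area product cancels below

r_meas = (eps_area * planck(lam1, T_true)) / (eps_area * planck(lam2, T_true))
T_hat = brentq(lambda T: planck(lam1, T) / planck(lam2, T) - r_meas, 200.0, 600.0)
print(f"recovered temperature: {T_hat:.1f} K (independent of eps_area)")
```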
      • 06.0402 Energy Saving Waveform for Tracking Radar
        Benjamin Gigleux (ONERA), Eric Chaumette (Isae Supaero), François Vincent (ISAE-SUPAERO) Presentation: Benjamin Gigleux - Sunday, March 2nd, 04:30 PM - Electronic Presentation Hall
        Waveforms are usually designed to optimize the radar performance. To this aim, a compromise has to be found between resolution and ambiguities, and sidelobes have to be controlled carefully. In the particular scenario of a radar tracking a single target in a sparse environment (space-based radar for orbital rendezvous or air-to-air radar operating at high altitude), the constraint of a low sidelobe level can be removed. In this paper, we show how releasing this constraint makes it possible to optimize the power consumption of the radar while preserving its precision. An adaptive waveform with binary frequency and time amplitudes, and frequency or time interruptions, is proposed and its efficiency for delay and Doppler estimation is characterized. The results are illustrated in a space rendezvous use case where the efficiency of the radar in transforming energy into information for delay and Doppler estimation can be improved thanks to interruptions. In this example, the proposed waveform saves 60% of the transmitted energy for a link budget excess of 1 dB, whereas a conventional reduction of the pulse width saves only 20%.
      • 06.0403 Ice-Bed Detection Capabilities of a Low-VHF Radar on a Small UAS
        Gabriel Rose (University of Kansas), Emily Arnold (University of Kansas), John Paden (University of Kansas), Fernando Rodriguez-Morales (University of Kansas), Carlton Leuschen (University of Kansas), Daniel Gomez Garcia Alvestegui (University of Kansas) Presentation: Gabriel Rose - Thursday, March 6th, 04:30 PM - Lamar/Gibbon
        This paper presents an analysis of the bed-detection capabilities of an ice sounding radar integrated onto a small, unmanned aircraft system (UAS). We evaluated the average signal-to-noise ratio (SNR) and signal-to-interference ratio (SINR) of radar measurements collected by the UAS over Greenland’s Helheim Glacier in 2022 and compared those to radar measurements collected over the same region using a radar-equipped Twin Otter around 2008. The statistical analysis of the SNR and the SINR shows that both systems have comparable bed detection capabilities. While the average SNR for all points considered is more than 20 dB higher for the Twin Otter system, the average SINR of both has a similar value. The overall average SINR is 9.79 dB for the UAS and 9.19 dB for the manned aircraft (MA). As discussed in the paper, the lower SNR of the UAS system is attributed to its lower operating frequency, while the comparable SINR depends on various factors. The results of this paper have implications for the planning and design of future field deployments.
      • 06.0404 Outdoor Long Range Object Detection Experiments with Event-Based Sensors
        David Ziehl (Air Force Research Laboratory), Joseph Cox (Air Force Research Laboratory) Presentation: David Ziehl - Thursday, March 6th, 04:55 PM - Lamar/Gibbon
        An event-based sensor (EBS) is an imaging device that detects changes in local irradiance on the focal plane array (FPA) by using a readout integrated circuit (ROIC) with a thresholding operation. These sensors encode local irradiance changes on the FPA as events registered by the device, then read out these events and their time of occurrence in a sparse digital output format. These devices are interesting for various remote sensing applications because the sparse output can translate to a ~100x bandwidth improvement relative to traditional frame-based sensors (FBS). This bandwidth improvement may lead to a ~100x latency improvement and ~100x lower power and cooling requirement for an electro-optical (EO) subsystem. EBS’s bandwidth advantage can be leveraged to increase the imaging trade space between spatial resolution, temporal resolution, and field of view, enabling a more effective imaging system in some applications. Given the current maturity of the technology, EBS holds potential to improve imaging capabilities in areas such as automated manufacturing or object detection and localization while relaxing processing requirements for overall system design architectures. In principle, EBS sensors provide opportunities for more efficient image processing algorithms due to reduced data output from the sensor’s FPA. However, one limitation of EBS technology is its performance in contrast-limited scenarios, which can be attributed to the requirement that local log irradiance changes meet a certain threshold before registering as events that can be read out to the processor. In some challenging imaging scenarios this can result in missed data, where an object may be present though the sensor is not registering events, resulting in false negative detection decisions made by the system. This work aims to better characterize this tradeoff through collecting imagery data in a real-world outdoor setting that can be used to assess the merits of the technology relative to existing alternatives, such as FBS. Specifically, a series of tests will be conducted imaging hot air balloons from a distance of more than 1 km as they rise from the ground and translate across the field of view (FoV) of the EBS sensor. This raw data will then be post-processed using algorithms designed to detect the target in the FoV of the EO sensor and determine the location of the target’s centroid. During the post-processing, bandwidth data from the cameras will also be compared to results of past in-lab simulation efforts to determine relationships between theoretical and real-world imaging performance. Results are forthcoming since the temperature of the air is not yet cool enough for the ballooning season to begin. However, based on prior laboratory and field testing, it is predicted that detection performance will be good at shorter distances to the target, before eventually tapering off at longer distances where the effects of Rayleigh scattering become significant. It is also expected that EBS will be able to retain a 15x bandwidth advantage (relative to the bandwidth of a comparable FBS) in this more challenging real-world imaging application involving greater effects of noise and vibration than the sensor experiences during in-lab experimentation.
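        The thresholding principle described above can be illustrated in a few lines: the sketch below emits ON/OFF events whenever a pixel's log irradiance has drifted by more than a contrast threshold since its last event, which is why the output stays sparse for static backgrounds. The synthetic scene, threshold, and array sizes are made-up and are unrelated to the authors' balloon experiments.

```python
import numpy as np

# Illustrative sketch of the event-generation principle (not the AFRL pipeline):
# a pixel emits an ON/OFF event whenever its log irradiance has changed by more
# than a contrast threshold since its last event, so static scenes produce few
# events. Frame sizes, threshold, and the synthetic scene are all assumptions.
rng = np.random.default_rng(1)
T, H, W = 50, 32, 32
theta_c = 0.2                                   # contrast threshold in log units

frames = rng.uniform(50.0, 55.0, size=(T, H, W))                 # quiet background
frames[:, 10:14, :] *= np.linspace(1.0, 2.0, T)[:, None, None]   # brightening stripe

log_ref = np.log(frames[0])                     # per-pixel reference at last event
events = []                                     # (frame index, row, col, polarity)
for t in range(1, T):
    dlog = np.log(frames[t]) - log_ref
    fired = np.abs(dlog) >= theta_c
    for y, x in zip(*np.nonzero(fired)):
        events.append((t, y, x, int(np.sign(dlog[y, x]))))
    log_ref[fired] = np.log(frames[t])[fired]   # reset reference where events fired

print(f"{len(events)} events vs. {T * H * W} samples for full-frame readout")
```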
      • 06.0405 Advancements in Chree’s Method for Enhanced Signal Amplitude Estimation in Remote Sensing
        Dan Harris (Northrop Grumman Corporation), Darin Dunham (Lockheed Martin) Presentation: Darin Dunham - Thursday, March 6th, 09:00 PM - Lamar/Gibbon
        Chree's method of superimposed epochs analysis effectively estimates signal amplitude, making it useful for identifying extreme states of a target, even with non-sinusoidal or aperiodic signals. In remote sensing radar applications, as an example, this method can reveal the maximum and minimum observed Radar Cross Section (RCS) as the radar moves around or the target rotates. Evaluating signal amplitude with Chree's method requires an initial hypothesis about the signal's period length. Typically, a series of discrete period length hypotheses are tested using a standard null test to maintain a constant false alarm rate. However, automating this process poses a challenge because the actual period length often falls between hypothesized values, causing phase walk, where a phase shift occurs with each hypothesized periodic cycle, leading to incoherent signal addition and signal loss over time. To address this, we developed an algorithm that effectively tests between discrete period length hypotheses by applying a phase shift that counteracts the phase walk, maintaining a constant false alarm rate, and enabling the processing of jointly synchronized signals. These advancements facilitate the fusion of signals from different sensor types, such as radar and electro-optical/infrared, thereby enhancing target analysis. This paper details these innovations and demonstrates their performance benefits.
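        As background for the amplitude-estimation step, here is a minimal sketch of plain superimposed-epoch stacking at a single hypothesized period: epochs are folded and averaged so the repeating pulse adds coherently while noise averages down. It deliberately omits the paper's contribution, the phase-shift correction that counteracts phase walk between hypothesized period lengths, and all signal and noise parameters are assumptions.

```python
import numpy as np

# Minimal sketch of plain superimposed-epoch stacking at a single hypothesized
# period (it omits the paper's phase-shift correction for phase walk): fold the
# record at the period and average, so the repeating pulse adds coherently while
# noise averages down. Signal shape, period, and noise level are assumptions.
rng = np.random.default_rng(2)
fs, period, n_cycles = 100.0, 2.5, 40                 # Hz, seconds, epochs
t = np.arange(int(fs * period * n_cycles)) / fs
signal = 1.0 * (np.mod(t, period) < 0.3)              # non-sinusoidal pulse train
record = signal + rng.normal(0.0, 2.0, size=t.size)   # buried in noise

def superposed_epoch(x, samples_per_period):
    n_epochs = x.size // samples_per_period
    folded = x[: n_epochs * samples_per_period].reshape(n_epochs, samples_per_period)
    return folded.mean(axis=0)

stack = superposed_epoch(record, int(round(fs * period)))
print(f"in-pulse mean {stack[:30].mean():.2f} vs. off-pulse mean {stack[30:].mean():.2f}")
```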
    • Craig Agate (Toyon Research Corporation) & Dan Harris (Northrop Grumman Corporation)
      • 06.0501 Linear Gaussian Models in Target Tracking
        Stefano Coraluppi (Systems & Technology Research) Presentation: Stefano Coraluppi - Monday, March 3rd, 08:30 AM - Lamar/Gibbon
        Linear Gaussian models are a common and effective way to capture motion uncertainties in target tracking. When these are insufficient, common extensions include nonlinear models with additive Gaussian perturbations or hybrid-state multiple-model formulations. These methods allow for tractable solutions via Extended Kalman filtering (EKF), Interacting Multiple-Model (IMM) filtering, and further generalizations. Here, our focus is on the family of linear Gaussian models. The choice of model naturally depends on the domain of application, the sensor data rate, the presence of Doppler measurements, etc. Common models include nearly constant position (NCP), nearly constant velocity (NCV), nearly constant acceleration (NCA), Ornstein-Uhlenbeck (OU), Integrated Ornstein-Uhlenbeck (IOU), and Singer. The second-order OU was first proposed in (Coraluppi et al., IEEE Aerospace 2012), and both this and the IOU have found promising application in context exploitation (Millefiori et al., IEEE T-AES October 2016, Coraluppi et al., IEEE Aerospace 2021). We review the discrete-time formulation of these models. Further, we study a unifying framework for the linear Gaussian family of models. We establish relationships between the models as we consider small-feedback and large-feedback limits for the gain parameters, as well as short and long temporal horizons. In many multi-target tracking applications, a key challenge is the need to suppress clutter-induced false alarms while resolving the provenance of target-originated measurements. There is a complementary challenge when sensor coverage is sparse, and one must maintain target custody in the absence of frequent target-originated measurements. For this, models with long-term predictive capability are needed. We study an approach to lower-uncertainty and lower-bias long-term target prediction and filtering that exploits the WGS84 ellipsoid to which objects must remain close. As shown in our simulation studies, while the approach does outperform conventional NCV prediction, it is ultimately limited by the Gaussian approximation that no longer holds for sufficiently long prediction times. There are several directions for future work. These include: (1) more extensive validation of our context-aware linear Gaussian approach, (2) a more quantitative understanding of when sufficiently lengthy predictions require non-Gaussian state representation via particle filtering, and (3) exploitation of additional sources of context including target-specific motion constraints, patterns-of-life, and mission objectives.
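        For readers less familiar with this model family, the sketch below writes down the discrete-time nearly-constant-velocity model and an Ornstein-Uhlenbeck velocity model in one dimension and compares their long-horizon prediction uncertainty. The sample time, noise level, and mean-reversion rate are assumptions, and the OU process-noise covariance is deliberately simplified so only the transition matrices are being compared.

```python
import numpy as np

# One-dimensional sketch of two members of the linear Gaussian family: nearly
# constant velocity (NCV) and an Ornstein-Uhlenbeck (OU) velocity model whose
# mean-reversion rate beta recovers NCV in the small-feedback limit. The sample
# time, noise level, and beta are assumptions; for simplicity the NCV process
# noise is reused for the OU case, so only the transition matrices are compared.
dt, q, beta = 10.0, 0.5, 0.05    # sample time [s], process-noise PSD, OU rate [1/s]

# NCV: state [position, velocity]
F_ncv = np.array([[1.0, dt], [0.0, 1.0]])
Q_ncv = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])

# OU (exponentially correlated) velocity: the velocity relaxes toward zero.
e = np.exp(-beta * dt)
F_ou = np.array([[1.0, (1.0 - e) / beta], [0.0, e]])

# Long-horizon prediction: NCV uncertainty keeps growing, OU velocity saturates.
P0 = np.diag([100.0, 4.0])
for F, name in [(F_ncv, "NCV"), (F_ou, "OU ")]:
    P = P0.copy()
    for _ in range(60):                  # 10-minute prediction horizon
        P = F @ P @ F.T + Q_ncv
    print(name, "predicted position sigma [m]:", round(float(np.sqrt(P[0, 0])), 1))
```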
      • 06.0502 Bayesian Decision-Level Fusion Algorithm for Addressing Correlated Inputs
        Craig Agate (Toyon Research Corporation), Jonathan Price (U.S. Air Force) Presentation: Craig Agate - Monday, March 3rd, 08:55 AM - Lamar/Gibbon
        We consider the problem of correlated inputs to a Bayesian decision-level fusion algorithm. A decision-level fusion problem is characterized by combining the outputs of different classifiers/ATRs to more accurately and rapidly classify objects in a surveillance region. The outputs from the different classifiers may be declarations of a particular class type or may be a ranked ordering of class types based on confidence values. We presume that an individual classifier considers each look at an object separately from previous looks when arriving at declarations. Nonetheless, there may still be correlation in these subsequent declarations and ignoring this correlation in a Bayesian fusion rule can lead to inaccurate results. The correlation arises due to unknown parameters that affect the ATR algorithm's declaration (e.g., pose of an object in an image). In this paper, we describe the problem and highlight where the assumption of independent ATR outputs is typically made in the Bayesian fusion update. We then derive a joint parameter estimation and classification algorithm that correctly addresses such correlation. The performance of the approach on a binary classification example with simulated measurements is given.
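        The independence assumption the paper revisits enters the standard fusion update as a simple per-declaration likelihood product. The sketch below shows that baseline (conditionally independent) Bayesian decision-level fusion with made-up confusion matrices and declarations; it does not implement the paper's correlated-input correction.

```python
import numpy as np

# Baseline Bayesian decision-level fusion sketch with the conditional-independence
# assumption the paper revisits: each declaration multiplies the posterior by a
# per-classifier likelihood column. Confusion matrices, class names, and the
# declaration sequence are made-up illustrations.
classes = ["car", "truck", "tank"]
posterior = np.array([1 / 3, 1 / 3, 1 / 3])          # uniform prior

# conf[k][i, j] = P(classifier k declares class j | true class i)
conf = [np.array([[0.7, 0.2, 0.1],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.2, 0.7]]),
        np.array([[0.6, 0.3, 0.1],
                  [0.3, 0.5, 0.2],
                  [0.1, 0.1, 0.8]])]

declarations = [(0, "tank"), (1, "tank"), (0, "tank")]   # (classifier id, declared class)

for k, decl in declarations:
    likelihood = conf[k][:, classes.index(decl)]   # P(declaration | each true class)
    posterior = posterior * likelihood             # independence assumed right here
    posterior /= posterior.sum()

print(dict(zip(classes, posterior.round(3))))
```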
      • 06.0503 Multiple-Hypothesis Tracking with Unframed Sensor Measurements
        Stefano Coraluppi (Systems & Technology Research), Andrew Hunter (Systems & Technology Research) Presentation: Stefano Coraluppi - Monday, March 3rd, 09:20 AM - Lamar/Gibbon
        As with most multi-target tracking paradigms, multiple-hypothesis tracking assumes framed sensor data. This assumption is appropriate for active radar, sonar, and imaging sensors, but is not well matched to the detection data from many passive sensors. In this paper, we derive the transformation to express unframed data in an equivalent framed-data setting, under a Poisson detection assumption. This leads to the classical MHT formulation, but with the generality of (possibly) multiple measurements per target. We explore performance of a simplified MHT solution. Finally, we explore the question of determining the optimal frame rate in the unframed-to-framed data transformation, balancing the desire for non-myopic reasoning with mitigating violations of the point-target assumption.
    • Laura Bateman (Johns Hopkins University/Applied Physics Laboratory) & William Blair (Georgia Tech Research Institute)
      • 06.0602 Federated Learning for Low-Latency Emitter Identification from Space
        Max Cui-Stein (MIT Lincoln Laboratory), Binoy Kurien () Presentation: Max Cui-Stein - Thursday, March 6th, 09:25 PM - Lamar/Gibbon
        Emitter identification (ID) is a critical function in adaptive radio-frequency (RF) systems, informing interference management, communications planning, and anomaly detection. Additionally, RF emitter ID is a prerequisite for emitter geolocation, as identification of the signal-of-interest generally needs to be confirmed before executing algorithms such as time difference of arrival (TDOA). Distributed sensing is typically required for acquiring the data needed for emitter ID over wide coverage areas. Proliferated low-Earth orbit (pLEO) satellite constellations provide potential architectures for performing this distributed sensing. In conventional approaches, raw sensor data is aggregated at a centralized node in the sensor network, and classification is applied to the full dataset. However in instances where the data volume required for accurate emitter ID is large, limited communications bandwidth available for aggregating the required data can lead to high latency between data collection and emitter identification. This paper explores the viability of federated learning (FL) as a tool to reduce latency between collection of an emitted waveform and its reliable identification in situations where the sensor data used for identification is distributed across multiple platforms. Our approach is validated via simulation in a scenario involving a pLEO satellite constellation which collects snapshots of digitally modulated RF waveforms transmitted from ground-based emitters. Channel impairments are modeled to be representative of those found in typical pLEO ground-to-satellite links. This data is used to train lightweight deep neural networks (DNNs) onboard each emulated pLEO satellite for emitter ID. The DNN model weights are shared among satellite nodes, and FL is used to fuse the DNNs into a global consensus model. We quantify the latency improvements gained from sharing model weights during FL relative to a baseline method which aggregates the raw data necessary to classify the transmissions. We find that a federated learning scheme can achieve nearly a 100-fold improvement in latency over the baseline method. These findings indicate that federated learning coupled with satellite-based sensing is a potential enabler of low-latency emitter identification across wide surveillance regions. Ultimately, this capability could assist in the creation of RF spectrum maps using sensor networks limited in communications bandwidth. DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited. This material is based upon work supported by the Under Secretary of Defense for Research and Engineering under Air Force Contract No. FA8702-15-D- 0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Under Secretary of Defense for Research and Engineering.
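        The core weight-sharing step can be sketched compactly: each node trains locally and only model parameters are exchanged and averaged into a consensus model (FedAvg). The toy logistic-regression "emitter ID" problem below is a stand-in for the authors' DNNs and pLEO channel models, and all sizes and learning rates are assumptions.

```python
import numpy as np

# Minimal federated-averaging (FedAvg) sketch of the weight-sharing step: each
# satellite trains a local model, then only the weights (not raw RF snapshots)
# are exchanged and averaged into a consensus model. The logistic-regression
# "emitter ID" task below is a stand-in for the authors' DNNs and channel models.
rng = np.random.default_rng(3)
n_sats, n_feat, rounds, local_steps, lr = 4, 8, 20, 25, 0.5

# Each satellite sees its own noisy snapshot of the same two-class problem.
w_true = rng.normal(size=n_feat)
local_data = []
for _ in range(n_sats):
    X = rng.normal(size=(200, n_feat))
    y = (X @ w_true + rng.normal(scale=1.0, size=200) > 0).astype(float)
    local_data.append((X, y))

def local_sgd(w, X, y):
    for _ in range(local_steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)        # logistic-regression gradient step
    return w

w_global = np.zeros(n_feat)
for _ in range(rounds):
    local_models = [local_sgd(w_global.copy(), X, y) for X, y in local_data]
    w_global = np.mean(local_models, axis=0)       # FedAvg: average the weights

acc = np.mean([((X @ w_global > 0) == (y > 0.5)) for X, y in local_data])
print(f"consensus-model accuracy on the local sets: {acc:.2f}")
```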
    • John Glass (RTX) & John Grimes (BAE Systems, Inc)
      • 06.0701 Event-Based Target Detection and Tracking for Remote-Sensing Applications
        Daniel Stumpp (University of Pittsburgh), Alan George (University of Pittsburgh) Presentation: Daniel Stumpp - Monday, March 3rd, 01:25 PM - Electronic Presentation Hall
        Neuromorphic event-based vision sensors have emerged as a promising technology for space-based target detection and tracking due to characteristics like high dynamic range and high temporal resolution. Most existing research focuses on the tracking of relatively large objects in terrestrial environments or the task of space situational awareness (SSA), where targets are small but background clutter is limited. This research explores the application of various detection and tracking techniques to remote-sensing event-stream data. Remote-sensing applications typically require tracking small objects embedded in the clutter of the Earth background. Adding to the challenge is the limited availability of neuromorphic remote-sensing datasets with reliable truth information. To overcome this challenge, we introduce an event-stream simulation pipeline leveraging the Air Force Institute of Technology Sensor and Scene Emulation Tool (ASSET), Landsat 8 imagery, and the open-source v2e event simulator. The simulation pipeline is used to generate three datasets of target sequences: two for sensors in a geostationary orbit (GEO) and one for a sensor in a low-Earth orbit (LEO). This simulated data and ground truth are used to evaluate existing target-detection and tracking methods along with a new machine-learning-based approach proposed in this research. Existing methods designed for SSA are used for comparison; however, the detection approaches used by these methods generate high numbers of false alarms caused by structured clutter from ground-based features. To overcome this challenge for remote-sensing applications, we propose a simple machine-learning (ML) detector that operates on events as they are produced. It is observed that this method can more effectively discriminate between events generated by targets moving in the scene and events generated by ground features or sensor noise. Additionally, we explore frame-based methods to perform detection using aggregated event-frame inputs. These frame-based approaches demonstrate promising performance for applications where the full temporal resolution of the sensors does not need to be leveraged. Tracker performance for each method is evaluated with standard track metrics including precision, recall, multiple object tracking accuracy (MOTA), and target localization error. Performance is also characterized relative to target intensity and velocity. While existing SSA methods perform well in some cases with careful tuning, the event-by-event and frame-based ML methods demonstrate more generalizable performance in the presence of background clutter. ML methods achieve MOTA scores of up to 0.86 on the generated simulated data, compared to a maximum of 0.68 for non-ML methods.
      • 06.0702 Extended Object Tracking Using a Gaussian Process Extent Model and Scene Flow-LiDAR Fusion
        Steffen Folaasen (Norwegian University of Science and Technology (NTNU)), Nicholas Dalhaug (Norwegian University of Science and Technology), Edmund Brekke (Norwegian University of Science and Technology), Martin Baerveldt (NTNU), Michael Lopez (Norwegian University of Science and Technology), Annette Stahl (Norwegian University of Science and Technology) Presentation: Steffen Folaasen - Monday, March 3rd, 04:30 PM - Lamar/Gibbon
        In Extended Object Tracking (EOT), high-resolution sensors such as RADARs and LiDARs are used to estimate an object's pose, translational and rotational velocity, and extent. These sensors typically do not provide direct observability of translational and rotational motion. To address this limitation, this paper leverages recent advancements in scene flow estimation, which determines 3D pixel displacements between images, to enhance EOT performance through the fusion of LiDAR and scene flow measurements. We propose several scene flow measurement models based on the Gaussian Process (GP) extent model and rigid motion kinetics. Measurement gating is also discussed to remove scene flow measurement outliers. These models are tested using Monte Carlo simulated measurements and real-world measurements from maritime object tracking scenarios. The real-world scene flow vectors are obtained using CamLiRaft, a deep Convolutional Neural Network (CNN)-based architecture that uses LiDAR and camera measurements to estimate scene flow. Additionally, processing and analysis of the scene flow data are performed to assess the measurements' conformity to the standard assumptions required for Bayesian recursion. Our findings indicate that scene flow measurements often appear to be embedded in zero-mean, Gaussian-distributed noise. However, the real-world scene flow measurements also contained severe outliers that needed to be gated away, causing a trade-off between gating outliers and keeping data containing new information. Nevertheless, incorporating scene flow measurements in EOT generally improved the accuracy of state estimates for target objects. Specifically, we observed reduced error in velocity estimates, which often translated to better pose and extent estimates. The results suggest that scene flow vectors can be valuable additions to an EOT pipeline.
      • 06.0703 Advances in Modeling the Performance of Multitarget Tracking Systems
        James Helferty (KBR Inc.) Presentation: James Helferty - Monday, March 3rd, 04:55 PM - Lamar/Gibbon
        Performance prediction models are critical in satellite operations to help establish the optimal viewing geometry and imaging exposure for collections based on a variety of tasking parameters. The image quality of the collection can be adversely influenced by altitude, viewing geometry, solar lighting, atmospheric conditions, target requirements, and sensor capabilities. The General Image Quality Equation (GIQE) is used to predict collection performance for contextual imaging tasks with single frame images and provides an estimated quality based on the NIIRS scale. Although NIIRS is a valuable metric for single image collections, it is not sufficient for many collection scenarios. NIIRS is not adequate for tracking applications with a single sensor collecting multiple frames or for tracking from multiple sensors. It is also not applicable for predicting detection performance for optimal detection and tracking of celestial objects. Past work on this topic investigated a set of equations to predict the performance of imaging systems that are used for detection and tracking missions. The detection system performance model will allow users to predict the quality of a set of frame collections for a detection mission. The performance prediction model is usually part of a satellite tasking and scheduling system that ensures the images are collected efficiently to meet the user requirements. This paper extends this work to include realistic scene clutter with target detection statistics. The scene clutter is a noise source based on random jitter between frames interacting with the background subtraction algorithm. The paper shows an analytic technique to estimate the background clutter based on convolving the statistical error sources of the expected scene gradients and the jitter standard deviation. This provides an analytic expression that can consider any scene type and jitter level. This paper further investigates techniques to estimate the tracking performance of celestial targets. These are considered to be unresolved point-source targets, possibly observed from multiple observers. This tracking performance is based on line-of-sight measurements from ground- and space-based sources. Error sources include ephemeris error and line-of-sight error. The error is calculated based on overall estimation error using the Cramer-Rao lower bound. The effects of orbit and geometry will be considered in the error measurement.
      • 06.0704 An N-Observation Modification to Gooding's Method for Initial Orbit Determination
        Daniel Doscher (United States Military Academy), Alexander Ma (United States Military Academy) Presentation: Daniel Doscher - Monday, March 3rd, 05:20 PM - Lamar/Gibbon
        This report tests an extension of Gooding’s Method for Initial Orbit Determination (IOD) to process N line of sight (LOS) measurements for space-based observations of a target taken in a near co-planar orbit relative to the observer. Additional error frames are added for measurements 2 through N-1 to extend the gradient used in a Newton-Raphson iterative refinement formula from 2 target functions to 2*(N-2). The N-Gooding algorithm utilizes a least-squares approach to perform the iterative refinement on a non-square matrix. Measurement errors and process noise are not considered in this study, and dynamics are represented by simple Keplerian orbits. Line of sight observations are processed as unit vectors and not converted to error angles as seen in other N-Gooding approaches. A sample of 100 orbits was tested for N = 3 through N = 24 observations. The accuracy of the N-Gooding algorithm significantly improves on the original Gooding algorithm in consistently fewer steps. The resulting N-Gooding estimate shows an average of 80% improvement in position error as compared to the traditional Gooding algorithm, with average errors of 0.0467 km and 0.2378 km, respectively. Both accuracy and precision improved with more observations for a majority of debris orbits, with standard deviations reduced by more than 60%. The number of steps to converge to within an error frame tolerance of +/- 1E-8 km was reduced by an average of 99%. The resulting convergence plots show that the estimate improves according to a logarithmic trend. Future work includes testing the effects of process and measurement noise on the accuracy and convergence of the N-Gooding algorithm. Classification of near co-planar orbits that yield optimal and predictable results against targets that are less likely to converge will also be studied. Lastly, comparison of N-Gooding to other IOD procedures such as those based on Physics Informed Neural Networks is of particular interest.
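        The refinement machinery the N-observation extension relies on, a Newton-type iteration whose non-square Jacobian is handled through a least-squares step, can be illustrated generically. The sketch below applies Gauss-Newton with a numerical Jacobian to a toy overdetermined residual; it is not an initial-orbit-determination implementation, and the residual merely stands in for the 2*(N-2) error-frame conditions.

```python
import numpy as np

# Generic Gauss-Newton sketch of the least-squares Newton refinement the
# N-observation extension relies on: with more residual components than unknowns,
# the non-square Jacobian is handled by a least-squares step instead of a direct
# inverse. The toy residual below stands in for the 2*(N-2) error-frame
# conditions; this is not an initial-orbit-determination implementation.
def gauss_newton(residual, x0, iters=20, h=1e-6):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        # Numerical Jacobian, one column per unknown (shape m x n with m > n).
        J = np.column_stack([
            (residual(x + h * np.eye(len(x))[:, j]) - r) / h for j in range(len(x))
        ])
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)   # least-squares Newton step
        x = x + dx
        if np.linalg.norm(dx) < 1e-10:
            break
    return x

# Toy overdetermined problem: two unknowns constrained by six nonlinear conditions.
target = np.array([1.0, 2.0])
angles = np.linspace(0.1, 1.2, 6)
def residual(x):
    return np.sin(angles * x[0]) + angles * x[1] - (np.sin(angles * target[0]) + angles * target[1])

print("converged estimate:", gauss_newton(residual, x0=[0.5, 0.5]).round(6))
```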
    • Peter Zulch (Air Force Research Laboratory) & Paul Schrader (Air Force Research Laboratory Information Directorate) & Erik Blasch (Air Force Research Laboratory)
      • 06.0801 Developing an Edge Computing Architecture for a Lunar Dust Recognition System
        Carmen Misa Moreira (CERN) Presentation: Carmen Misa Moreira - Monday, March 3rd, 09:00 PM - Lamar/Gibbon
        Lunar dust, also known as regolith, presents several challenges for rovers, including abrasive damage to mechanical components, reduced traction, and impaired mobility. Dust accumulation on solar panels and radiators diminishes power generation efficiency and disrupts thermal regulation, potentially causing overheating of critical systems. Additionally, lunar dust contamination can compromise scientific instruments and data, with electrostatically charged particles adhering to surfaces and impairing functionality. One precondition for mitigating the risks from lunar dust is its visual detection via computer vision algorithms. However, existing algorithms are computationally expensive and require careful trade-offs regarding where the computation is conducted. This paper introduces a proof of concept for a lunar dust recognition system, addressing different computational trade-offs and initial validation results. The proposed approach leverages edge computing to establish an inter-satellite communication and computing system to cope with deep space mission requirements, particularly focusing on power and communication subsystems. When considering architecture alternatives for the communication and computing systems, we examine resource-sharing mechanisms, such as virtualization and containerization, among space systems to enhance data collection, processing, storage, and transmission to ground stations. To reduce bandwidth usage, machine learning (e.g., federated learning with models such as LSTM and SVM) is employed to classify data into critical and non-critical categories, ensuring that only critical data is transmitted to Earth. However, scientists may benefit from raw-data analysis, so opportunities for raw data downloading are also provided in the proposed design for analysis and model training, contingent on downlink capacity. The LunarLab at the University of Luxembourg, which replicates lunar surface conditions, has been utilized to validate the dust recognition system. The system employs YOLOv3 as the object detection algorithm, revealing computational demands and highlighting the necessity for high-throughput and high-performance computing at the edge of the network. The sharing of computational loads on the network edge among nodes, such as on landers and orbiters, is explored, considering the implications for the communication and power systems of the computing nodes.
      • 06.0804 Topologically Informed Unified Adaptive Multimodal Data Fusion Designs for Auto. Target Recognition
        HONGZHI GUO (University of Nebraska-Lincoln), Paul Schrader (Air Force Research Laboratory Information Directorate) Presentation: Paul Schrader - Monday, March 3rd, 09:25 PM - Lamar/Gibbon
        Existing Automatic Target Recognition (ATR) paradigms leverage multi-modal data extensively. Data from various sensory modalities can improve accuracy by providing unique perspectives and enhanced reliability through the mitigation of individual modality failures. Challenges in multi-modal data fusion persist since it demands modality-specific knowledge for effective feature extraction. Furthermore, modality-specific feature engineering increases system complexity and reduces reliability, and data fusion solutions do not scale when the number of modalities, their respective collection devices, and individual device channels deployed becomes large. Moreover, the development of Artificial General Intelligence (AGI) demands new machine learning (ML) models that can effectively process multi-modal data and mimic human perception. Although Transformer-based AGI models demonstrate unprecedented performance in multi-modal data computation, their lack of explainability and large model size restrict their usage in resource- and latency-constrained Contested Edge Computing Application Systems (CECAS), including ATR. It is imperative to develop lightweight, unified solutions that can process arbitrary numbers of sensory modalities without dependencies on specified modality features. In addition to high accuracy, it is critical to ensure high reliability and low latency in these designs. Multi-modal ATR systems may experience missing sensory modalities due to sensor failures, network congestion and unreliable communication channels, and reduced modality quality due to increased distance from the target, interference/attenuation, and cyber-attacks/adversarial deception. As a result, ATR systems require adaptive/dynamic data-driven solutions that assess intra-modal task-related information and semantic correlation, selectively combining optimally orchestrated modalities for emerging CECAS. This paper introduces a unified adaptive multi-modal data fusion framework for ATR. First, it leverages Topological Data Analysis (TDA), particularly persistent homology, to extract topological features from multi-modal data. Persistent homology quantifies the shape and structure in each sensory modality, creating uniform feature vectors from persistence diagrams that do not require modality-specific knowledge. These features are then fused for ingestion into two ML models, Support Vector Machines (SVM) and Deep Neural Networks (DNN), providing comparative ATR classification performance. Second, the framework uses conformal prediction scores derived from TDA features to evaluate each sensory modality’s certainty on a given ATR task. A task-related effective modality possesses high certainty, while a task-irrelevant or weak modality has low certainty. Depending on the modality’s certainty, we develop adaptive multi-modal data fusion solutions at the feature level and decision level. Five sensory modalities are considered, including electro-optical, infrared, acoustic, passive radio frequency, and seismic, which are extracted from the US Air Force Research Laboratory’s Information Directorate (AFRL/RI) 2021 ESCAPE II multi-modal dataset. In the associated ATR ML tasks involving seven distinct ESCAPE II target vehicles, the results demonstrate that the proposed solution/framework is computationally viable and lightweight in terms of the number of parameters, improving robustness while maintaining high accuracy in the presence of missing modalities and reduced modality quality. In effect, this framework shows promise in supporting robust, high-accuracy ATR and continuous target custody within US DoD agency mission spaces, as well as broad applicability to civilian autonomy and monitoring capabilities driven by multimodal sensor data.
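        To illustrate the modality-agnostic featurization step (and only that step), the sketch below turns a persistence diagram, assumed to have been computed upstream by a TDA library such as ripser or GUDHI, into a fixed-length vector that any SVM or DNN can ingest. The binning scheme and the toy diagrams are stand-ins, not the authors' featurization.

```python
import numpy as np

# Sketch of the modality-agnostic vectorization step only: a persistence diagram
# (birth/death pairs, assumed to be computed upstream by a TDA library such as
# ripser or GUDHI) is mapped to a fixed-length feature vector that any SVM or DNN
# can ingest. The binning scheme and toy diagrams are stand-ins, not the authors'
# featurization.
def diagram_to_features(diagram, n_bins=16, t_max=2.0):
    """diagram: (k, 2) array of (birth, death) pairs from one modality/channel."""
    birth, death = diagram[:, 0], diagram[:, 1]
    lifetime = death - birth
    midpoints = 0.5 * (birth + death)
    # Persistence-weighted histogram: the same length for every modality.
    hist, _ = np.histogram(midpoints, bins=n_bins, range=(0.0, t_max), weights=lifetime)
    stats = np.array([lifetime.sum(), lifetime.max(), lifetime.mean(), len(diagram)])
    return np.concatenate([hist, stats])

# Two made-up diagrams from different "modalities" map to equal-length vectors,
# which is what allows fusion without modality-specific feature engineering.
dgm_acoustic = np.array([[0.0, 0.3], [0.1, 1.2], [0.4, 0.5]])
dgm_seismic = np.array([[0.2, 0.9], [0.0, 0.1]])
fused = np.concatenate([diagram_to_features(d) for d in (dgm_acoustic, dgm_seismic)])
print("fused feature-vector length:", fused.size)   # 2 * (16 + 4) = 40
```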
  • Patrick Phelan (Southwest Research Institute) & John Dickinson (Sandia National Laboratories)
    • Robert Merl (Los Alamos National Laboratory) & Jamal Haque (Lockheed Martin Space Systems Company)
      • 07.0101 Shihab-1: A Cost-Effective Spacecraft On-board Computer with Machine Learning Capabilities
        Sergio Sirota (TII), Maksim Shevtsov (Technology Innovation Institute), Alexey Simonov (Technology Innovation Institute), Yusra Alkendi (Technology Innovation Institute), Anton Ivanov (Technology Innovation Institute) Presentation: Alexey Simonov - Tuesday, March 4th, 08:30 AM - Madison
        Space missions classified as high mission class require fully radiation-hardened (rad-hard) electronics. Those processors lag modern technology by 10-20 years in terms of performance and size. With the recent advances in machine learning (ML) and computer vision (CV) algorithms, there is a need to deploy them on board spacecraft for increased autonomy, but these algorithms require orders of magnitude more compute power than legacy processors can deliver. At the same time, ML-capable spacecraft processors need to satisfy the usual strict size, weight, and power (SWaP) constraints as well as be resilient to the radiation effects of outer space. Consequently, alternative onboard computer architectures are emerging. In this paper, we introduce a miniaturized (110 mm x 120 mm) onboard computer designed to meet the requirements of Low Earth Orbit (LEO) and short Lunar missions, referred to as Shihab-1. The Shihab-1 weighs 150 grams and is designed to operate within 5 watts of power. It combines the control-flow capabilities and floating-point performance of traditional processors with the hardware acceleration of FPGAs. This architecture features a Microchip PolarFire SoC that combines 64-bit RISC-V CPUs with an SEU-resilient, non-volatile flash-based FPGA fabric. The Shihab-1 architecture leverages both rad-hard and commercial off-the-shelf (COTS) components in a cost-effective way while ensuring there are no single points of failure. We propose to deploy compute-intensive ML and CV algorithms to the FPGA fabric and keep general control and FDIR logic running on the CPUs. We plan an extensive program of qualifying Shihab-1 for space missions, including thermal-vacuum chamber testing and radiation testing, to make it available for missions starting in 2027. Our proposed solution addresses the critical need for robust and efficient computing in space, providing a pathway for more complex and autonomous missions. Additionally, the Shihab-1 onboard computer offers scalable performance, making it adaptable for various mission profiles and extending its utility across multiple space exploration projects.
      • 07.0105 Runway Detection Using a Modified DeeplabV3+ Segmentation Neural Network for Space Applications
        Douglas Carssow (Naval Research Laboratory), David Smith (Naval Research Laboratory) Presentation: Douglas Carssow - Tuesday, March 4th, 08:55 AM - Madison
        Using Vitis AI, a Xilinx framework for executing machine learning models on heterogeneous computing hardware, this work benchmarked the capability of a Xilinx Versal AI Core system-on-a-chip (SoC) to execute a runway detection segmentation neural network. This demonstration was performed to support an NRL-developed algorithm for performing landmark-based orbit determination. The radiation-tolerant Xilinx Versal AI Core XQR provides an option for high-throughput on-orbit processing to support applications such as neural networks for electro-optical sensor processing. A Xilinx VCK190 evaluation kit was used to perform the benchmarking. The model used in this effort was a modified DeeplabV3+ segmentation model that performed well using a limited data set of 160 training images and 40 test images. The DeeplabV3+ model was altered to allow for a Xilinx Deep Learning Processing Unit (DPU) instantiation of the model that did not rely on custom implementation of any layers in the network. This was done to increase throughput and ease the implementation of the model on the Versal. The data set was built from imagery captured by the WorldView-2, WorldView-3, and GeoEye-1 satellites. In evaluation of the accuracy, the mean-intersection-over-union (mIoU) was determined to be roughly 0.6772 on the floating-point model following training. The model was then quantized from floating point to INT8 in order to allow for FPGA compatibility, and then compiled for execution using Vitis AI on the Xilinx Versal VC1902 SoC hosted on the VCK190 development board. The resulting model was run using the Vitis AI API via a Python script, and the resulting mIoU was found to be 0.6757. The maximum throughput achieved in this configuration was 70.108 FPS using 7 threads for evaluation on the VCK190.
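        For reference, the mean-intersection-over-union figures quoted above (0.6772 float vs. 0.6757 INT8) can be reproduced from predicted and ground-truth segmentation masks with a short routine like the sketch below. This is a generic mIoU computation, not the NRL evaluation code, and the class count is an assumption.

            import numpy as np

            def mean_iou(pred, target, num_classes=2):
                """Mean intersection-over-union across classes for integer label masks.
                Classes absent from both prediction and ground truth are ignored."""
                ious = []
                for c in range(num_classes):
                    p, t = (pred == c), (target == c)
                    union = np.logical_or(p, t).sum()
                    if union == 0:
                        continue
                    ious.append(np.logical_and(p, t).sum() / union)
                return float(np.mean(ious))

            # Example: evaluate the float and the INT8-quantized model against the same
            # ground-truth masks to quantify the accuracy cost of quantization.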
      • 07.0106 The JPL Snapdragon Co-Processor: A Compact High-performance Computer for Spaceflight Applications
        Dennis Ogbe (Jet Propulsion Laboratory), Andre Jongeling (Jet Propulsion Laboratory), Zaid Towfic (Jet Propulsion Laboratory) Presentation: Dennis Ogbe - Tuesday, March 4th, 09:20 AM - Madison
        Following in the footsteps of the resounding success of the Ingenuity Mars Helicopter, which was powered by a Qualcomm® Snapdragon™ 801 system-on-chip (SoC), the Jet Propulsion Laboratory has continued its investments into high-performance spaceflight computers based on Qualcomm’s Snapdragon line of SoCs. One significant achievement of these efforts is the development of the JPL Snapdragon Co-Processor (SCP), a small form-factor computer for spaceflight applications featuring the automotive-grade Snapdragon 8155 SoC. The technology readiness level (TRL) of the SCP was raised to TRL-6 in January 2024. The SCP is currently being included in two upcoming CubeSat-based on-orbit technology demonstration missions and is under consideration as a computer vision processor for missions up to and including class-B. The Snapdragon 8155 features an octa-core Arm® CPU cluster with four Cortex® A-76 and four Cortex A-55 cores, a graphics processing unit (GPU) capable of 898 GFLOPs (32-bit floating-point), and a cluster of 4 Hexagon™ DSP cores. The SCP board is outfitted with 16 GB of RAM, 128 GB of non-volatile Flash memory, and 2 Mb FRAM. The external interfaces of the SCP are two USB 3.1 Gen2 ports, a 4x4 lane MIPI Camera Serial Interface connector, and a 200-pin space-grade mezzanine connector featuring three total PCI Express lanes, an RGMII interface to support Gigabit Ethernet, and low-speed UART, GPIO, JTAG, and SPI connections. To utilize the interfaces on the mezzanine connector, JPL has developed a variety of custom carrier cards for the SCP, ranging from the advanced and high-performance line of Swift Processor Modules to a dedicated SCP carrier card for CubeSat applications. In this paper, we present a detailed overview of the SCP design, its capabilities, and its interfaces to the spacecraft bus. We describe the available options to integrate the SCP into a spacecraft using the currently available and in-development carrier boards as examples. We comment on the software support and past and future JPL software benchmarking and porting efforts. Finally, we comment on the TRL-6 test campaign and give a brief outlook of the future of the SCP for JPL missions and the wider spaceflight community.
    • Mark Post (University of York) & Michael Epperly (Southwest Research Institute) & Patrick Phelan (Southwest Research Institute)
      • 07.0201 SpaceFibre Onboard Interconnect: From Standard, through Demonstration to Space Flight
        Steve Parkes (STAR-Dundee Ltd.), Alberto Gonzalez Villafranca (STAR-Barcelona SL), Dave Gibson (STAR-Dundee Ltd.) Presentation: Steve Parkes - Tuesday, March 4th, 09:45 AM - Madison
        SpaceFibre has been developed to provide a high data-rate, high-reliability, high-availability interconnect for spacecraft onboard applications. Other drivers include a small footprint, simplicity of implementation, quality of service, and backwards compatibility with SpaceWire at the packet level. SpaceWire, developed in 2003, is now used as a moderate data-rate payload data-handling network on hundreds of space missions. This paper describes the development of SpaceFibre from drafting of the standard, through development, testing and evaluation of key elements of the technology, to complete system-level demonstration, and finally to its first operational use in space. SpaceFibre runs over electrical or fibre optic physical layers. It uses latent multi-gigabit transceiver technology available in current chips (FPGAs and ASICs) to provide data rates of tens and, in the near future, hundreds of Gbit/s. It automatically recovers rapidly from transient errors without loss of data, which has the welcome side-effect of improving the radiation tolerance of the multi-gigabit transceivers. It uses multiple lanes to achieve high data-rates, but also provides graceful degradation and hot and cold redundancy for those lanes in the case of a lane failure. Virtual channels with priority, bandwidth reservation and scheduling are used to form virtual networks which isolate different flows of data from one another, obviating the need for separate physical control and data planes in standards like SpaceVPX. This enhances reliability as well as reducing the number of wires/fibres required on a backplane. SpaceFibre also provides a low-latency broadcast message capability which can be used to distribute system time, to trigger or indicate the occurrence of events, to give notification of errors, etc. The principal results of this work are: • A standard published by the European Cooperation for Space Standardization. • A system-level demonstration interconnecting instruments, mass-memory, data-compressor, data-encryptor, radio frequency downlink and optical downlink through a SpaceFibre routing switch with a bisectional bandwidth of 100 Gbit/s. • Chip designs which are flying in and being designed into important space missions. The significance of this work is reflected in its application in those space missions, with at least six spacecraft flying using SpaceFibre as an interconnect, and around 60 more under development. SpaceFibre is also included as a data and control plane technology in the VITA 78.0-2022 SpaceVPX standard, as the high-speed interconnect in the ESA Advanced Payload Data Handling specification, and is also being used in the emerging SpaceVNX+ standard.
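        The quality-of-service behaviour described above, with virtual channels carrying priority and bandwidth reservation over one physical lane, can be pictured with the toy arbiter below. It is a conceptual model only, not the SpaceFibre standard's arbitration rules; the credit scheme and field names are assumptions for illustration.

            from dataclasses import dataclass, field
            from collections import deque

            @dataclass
            class VirtualChannel:
                name: str
                priority: int            # lower number = higher priority
                reserved_share: float    # fraction of link bandwidth reserved
                credit: float = 0.0
                frames: deque = field(default_factory=deque)

            def arbitrate(channels, slots=10):
                """Each slot, top up credits by the reserved share, then send one frame
                from the highest-priority channel that has both data and credit."""
                sent = []
                for _ in range(slots):
                    for vc in channels:
                        vc.credit = min(1.0, vc.credit + vc.reserved_share)
                    ready = [vc for vc in channels if vc.frames and vc.credit >= 1.0]
                    if not ready:
                        continue
                    vc = min(ready, key=lambda v: v.priority)
                    sent.append((vc.name, vc.frames.popleft()))
                    vc.credit -= 1.0
                return sent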
      • 07.0202 Titan Bound: The FPGA SoC Design of the Navigation Coprocessor Controller
        Steven Zhan (JHU Applied Physics Lab), Matthew Gile () Presentation: Matthew Gile - Tuesday, March 4th, 10:10 AM - Madison
        This paper describes the architecture and verification of the Navigation Coprocessor Controller (NCC) FPGA and its soft-core processor implemented for NASA’s New Frontiers Dragonfly mission to Titan. NCC is a PCI target to Flight Software (FSW) and resides on the Navigation Coprocessor Board (NCP). Some of the key functionalities of NCC include sending commands to and receiving telemetry from the Rotor Drive Electronics on a precise interval, providing an interface to the Onboard Vision Processing (OVP) FPGA, recording of raw images and lidar relief maps to NAND Flash, booting of components on the NCP, scrubbing of the OVP, and sending commands to the Navcam and Lidar over UART. NCC contains an AMBA bus that is segmented into two portions connected via an AHB/AHB bridge in order to separate frequent traffic from FSW and OVP. The team used a mixture of traditional and modern verification practices to verify the NCC. Many designers verified their NCC modules individually using traditional VHDL testbenches. The top-level simulations used the modern Universal Verification Methodology Framework (UVMF) from Siemens to establish a common framework to test the complex design of the NCC and to allow both flexibility and scalability as the NCC design evolves.
      • 07.0203 Formal Deadlock and Livelock Detection of FPGA-based SoC Designs
        Kai Borchers (German Aerospace Center - DLR) Presentation: Kai Borchers - Sunday, March 2nd, 09:50 PM - Electronic Presentation Hall
        Verification of register transfer logic (RTL) designs has been a challenge for a long time. A steady increase in FPGA sizes and the resulting increase in system complexity will further aggravate the situation for verification engineers. Hence, it is important to apply the proper verification techniques to the right problems. Functional simulation can still be considered the primary verification method. It allows the application of directed or constrained-random stimulus to evaluate the reaction of the device under test (DUT). This method performs well even on large system designs but almost always provides incomplete verification results in terms of state-space analysis. Formal property verification (FPV) addresses this issue by utilizing mathematical approaches to investigate system properties. By this, FPV can check system properties for all possible input scenarios rather than for a subset of input stimulus as is done in functional simulation. This paper shows how liveness properties are investigated by formal property verification. These checks are applied to an FPGA design that provides remote interface functionality for two upcoming satellites. Liveness properties represent a temporal statement to prove that "something good" shall happen within an undefined time period. Deadlocks and livelocks are faulty system states that can be detected by these properties. Deadlocks frequently appear in a distinct way that is ready for debugging. Livelocks, in contrast, may provide escape possibilities that must be investigated further. We discuss our particular verification setup and analysis results to illustrate the root causes of these errors. Overall, deadlocks and livelocks tend to have a large impact on system behavior and should be investigated in addition to safety properties.
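        As a concrete illustration of the kind of liveness property discussed above, one might require that every request is eventually answered. This is a generic temporal-logic template, not the actual property set used for the DLR remote-interface design:

            $\mathbf{G}\,\big(\mathit{request} \rightarrow \mathbf{F}\,\mathit{grant}\big)$

        That is, globally (G), whenever request holds, grant must eventually (F) hold; a trace on which the grant never arrives exposes a deadlock or livelock.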
      • 07.0204 SpaceFibre IP Cores for Fast Adoption of Next-Gen FPGA Communication Architectures
        Alberto Gonzalez Villafranca (STAR-Barcelona SL), Steve Parkes (STAR-Dundee Ltd.) Presentation: Alberto Gonzalez Villafranca - Wednesday, March 5th, 08:30 AM - Madison
        SpaceFibre is an advanced spacecraft on-board data-handling network technology, building upon its predecessor, SpaceWire, to meet the increasing demands for higher data transfer rates and improved reliability in space applications. This open standard (ECSS-E-ST-50-11C), promoted by the European Space Agency (ESA), has been integrated into numerous spacecraft standards including SpaceVPX, SpaceVNX+, and ADHA. SpaceFibre’s built-in quality of service (QoS) and fault detection, isolation, and recovery (FDIR) capabilities ensure high reliability and availability – critical for spacecraft operations. It operates over both copper and fibre-optic cables and supports any number of lanes (up to 16) in multi-lane mode, allowing links with different numbers of lanes to seamlessly interoperate and providing rates exceeding 50 Gbit/s in existing space-qualified technology. Its network architecture supports virtual networks and arbitrary packet sizes, enhancing the scalability and flexibility of spacecraft data systems. Simple nodes can achieve maximum throughput since SpaceFibre is designed to operate without the need for CPU intervention. Additionally, it maintains backwards compatibility with SpaceWire at the network level, facilitating integration with existing systems. Capable of running on up to 100 meters over fibre optics and inherently designed to handle high data volumes from sophisticated payloads, SpaceFibre is a critical technology for existing and future spacecraft communication architectures. STAR-Dundee has developed a comprehensive suite of SpaceFibre IP cores, consisting of the Single-Lane Interface, Multi-Lane Interface, and Routing Switch. Optimized for space applications, these cores target existing space-qualified FPGAs for improved footprint and speed. This paper presents recent updates to these IP cores and analyzes their performance using metrics such as maximum lane rates, resource usage, latency and throughput. Support has been added for new radiation-tolerant FPGAs, including Frontgrade RT-CertusPro-NX, Microchip RT-PolarFire SoC, AMD Versal, and NanoXplore NG-Ultra. A novel and significant architectural improvement is the encapsulation of all transceiver logic within a single block. This simplifies user interaction by requiring only a minimal interface between the transceivers of supported FPGAs and the SpaceFibre IP. This design enables easy setup for various architectures via a configuration file, streamlining integration and reducing complexity. The SpaceFibre IP cores presented have achieved TRL-9, having been deployed in at least six operational missions since 2021 and currently being designed into more than 60 spacecraft. The paper will demonstrate that the IP cores enable efficient and reliable data handling and processing, which is essential for mission success.
    • Eric Rossland (Naval Research Laboratory) & Eric Bradley (Naval Research Lab)
      • 07.0302 Design of a High-Performance EGSE Architecture for the Dragonfly Mission to Titan
        Vijay Baharani (Johns Hopkins University Applied Physics Laboratory) Presentation: Vijay Baharani - Wednesday, March 5th, 08:55 AM - Madison
        We present here the design of the high-performance, precisely-synchronized Electrical Ground Support Equipment (EGSE) architecture that is used to rigorously test the Integrated Electronics Module (IEM) of Dragonfly. Dragonfly is a NASA New Frontiers class mission that is expected to launch in 2028 to travel to Titan, the largest moon of Saturn, where it will analyze Titan’s chemistry in multiple locations by autonomously flying from one location to another. The Dragonfly mission requires a complex IEM with interfaces to many different peripherals, several of which have relatively tight timing constraints to enable Dragonfly to fly. The EGSE design presented here enables the simultaneous and precisely coordinated simulation of these peripherals, which include Dragonfly’s rotor-drive electronics, critical bus power electronics, navigation cameras, LIDAR sensor, inertial measurement units, and more. The EGSE for testing the Dragonfly IEM consists of custom embedded software running on the RTEMS real-time operating system (RTOS), which runs on the ARM processor of a Xilinx Zynq 7015 System-on-a-Chip (SoC). The Zynq SoC is part of the Avnet PicoZed, which sits atop a custom carrier card that breaks out the I/O of the Zynq to be able to connect to the desired interfaces on the IEM. This EGSE architecture is scalable, meaning that multiple PicoZeds can be used within the same EGSE system, with each PicoZed emulating different interfaces to the IEM. For example, one PicoZed may be responsible for emulating the LIDAR interface, while another PicoZed is responsible for emulating the high and medium gain antenna gimbal drive electronics. All PicoZeds are precisely synchronized using common timing signals, which when combined with the features of the RTEMS RTOS and the Zynq’s programmable logic, enables very fine control of simulation timing. The EGSE design for testing the Dragonfly IEM is also capable of precisely timestamping events. Events, such as the rising edge of the stop bit of a packet received on a UART, are timestamped with 100ns resolution. Thousands of these timestamps are latched per second and sent to an upstream data collection system to be logged, enabling detailed characterization of the IEM’s performance. The timestamps can also be directly compared across different PicoZeds within the same EGSE system, since the PicoZeds are all tightly synchronized to one another. The PicoZeds all interface via Ethernet to a central system, which manages the PicoZed collective and runs the top-level simulation. Command and telemetry for each PicoZed are defined using the open-source, declarative language Kaitai Struct. Packet handlers running on the ARM parse these command packets and then write data via AXI to the programmable logic portion of the Zynq, where the data is queued for the selected interface, sent through the first circuit protection that is built into the carrier cards, and received by the Dragonfly IEM.
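        The 100 ns timestamping scheme described above can be sketched in a few lines. The record layout, field names, and use of plain Python struct (rather than the Kaitai Struct definitions mentioned in the abstract) are hypothetical, included only to illustrate how latched tick counts from different PicoZeds could be parsed and compared.

            import struct

            TICK_NS = 100  # timestamp resolution quoted above: 100 ns per tick

            def parse_event_record(buf):
                """Unpack a hypothetical 12-byte event record: a 16-bit event ID,
                a 16-bit source (PicoZed) ID, and a 64-bit tick counter latched in
                programmable logic when the event (e.g., a UART stop bit) occurred."""
                event_id, source_id, ticks = struct.unpack(">HHQ", buf)
                return {"event": event_id, "source": source_id, "time_ns": ticks * TICK_NS}

            def delta_ns(record_a, record_b):
                """Because all PicoZeds share common timing signals, timestamps from
                different boards can be compared directly."""
                return record_b["time_ns"] - record_a["time_ns"]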
    • Christopher Iannello (NASA - NESC ) & Thomas Cook (Voyager Space)
      • 07.0401 The Space Power System Standard
        Steve Parkes (STAR-Dundee Ltd.), Brent Gardner (), Aaron Maurice (Moog Space and Defense) Presentation: Steve Parkes - Wednesday, March 5th, 09:20 AM - Madison
        The Space Power System Standard is an emerging standard for space power systems being developed by NASA and industry in both the USA and the UK, which aims to: 1. Develop a common and modular power architecture that is applicable to multiple space applications. 2. Develop standards for spacecraft power systems and components, which lead to lower costs, faster development, and interoperable hardware from different vendors. 3. Provide a common set of language and definitions, which will inform hardware manufacturers on interface requirements between power elements. This paper introduces the Space Power System Standard, describes the rationale behind the development of the standard, outlines the power system architecture, and details the various components of the architecture: power channels, power interfaces, power links and buses, and power elements, including power sources, power stores, voltage converters, and power switches. The methodology followed in the development of the standard was an initial exploration of the scope of the power system standard, development of the power system architecture, identification of the principal elements of power systems, definition of how the power elements behave and interact with one another, and detailing of the requirement statements. The principal result of this work is, thus far, a power system architecture which is divided into a set of regulated and unregulated voltage domains. Within a voltage domain, power sources, power stores and power switches are connected by power links and power buses to distribute and deliver power to voltage converters and end-user loads. Voltage converters provide the interfaces between one voltage domain and another. The significance of this work is, at the moment, its aspiration: the potential for a unified power system architecture for space applications which will overcome some of the deficiencies and difficulties with present power system development: • Spacecraft power systems are often developed program by program; • Modules are not designed to interoperate at either the hardware or software level; • System management and power management interfaces are often proprietary and application specific; • Available qualified components have restricted functionality; • Reuse of modules from one program to the next is limited; • Independent development and procurement lead to long acquisition cycles and high cost; • Flying a new technology is high risk and costly. The Space Power System Standard aims to ameliorate these issues and to simplify the design of future space power systems using modular, reusable power elements with clearly defined standard interfaces.
      • 07.0403 Axial Flux Motor with Integrated PCB Winding and Redundant Hall Sensor for Satellite Reaction Wheel
        Yi-Jie SU (), Bo-Ting Lyu (National Taiwan University), Shih-Chin Yang (National Taiwan University ) Presentation: Yi-Jie SU - Wednesday, March 5th, 09:45 AM - Madison
        This paper proposes an axial flux motor (AFM) designed specifically for satellite reaction wheel applications. To maintain high reliability for aerospace applications, a printed circuit board (PCB) motor stator is designed that integrates the copper windings and a double-redundancy Hall sensor arrangement. The reaction wheel specification is targeted at a momentum of 8.0 Nm-sec with a maximum torque of 0.3 Nm and 200 W peak power consumption. From the motor stator winding perspective, the conventional enameled windings are replaced by integrated PCB stator windings. By directly designing the copper traces as stator windings in the PCB stator, the reaction wheel can not only increase system reliability but also reduce manufacturing cost. The reduced armature flux of the PCB stator windings can be compensated for through the axial-flux magnet rotor topology. More importantly, the PCB stator provides more precise and consistent windings, minimizing the variations and failure risks associated with conventional copper windings. From the reaction wheel drive perspective, a double-redundancy Hall sensor set is integrated into the proposed PCB stator. It is shown that Hall sensor installation results in alignment issues on a conventional stator with copper windings. By using the PCB stator, Hall sensors can be installed directly without additional manufacturing considerations. More importantly, a redundant design across two independent sensor sets can be achieved. These PCB-integrated redundant sensors ensure stable operation even in the event of up to three sensor failures, thereby increasing overall system stability. A novel fault-tolerant control (FTC) is also developed to monitor the signals from the double-redundancy sensors. The proposed FTC drive can tolerate up to three sensor failures, leading to better reaction wheel reliability in the demanding space environment. A reaction wheel prototype is also fabricated for comprehensive environmental tests. The experimental results demonstrate that the axial flux motor with the PCB stator exhibits excellent performance in a compact volume and at light weight. Additionally, the double-redundancy Hall sensor design with the proposed FTC guarantees functionality whether the motor is spinning or stationary. More experimental results will be presented in the final paper.
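        A minimal sketch of the kind of fault-tolerant selection logic that such a double-redundancy Hall arrangement enables is shown below: each set of three Hall signals is checked against the invalid all-zero/all-one states, and a healthy set is chosen. The validity test and selection rule are generic assumptions, not the authors' FTC algorithm.

            VALID_STATES = {(0, 0, 1), (0, 1, 0), (0, 1, 1),
                            (1, 0, 0), (1, 0, 1), (1, 1, 0)}   # six valid 120-degree sectors

            def sector_valid(halls):
                """For a three-phase Hall set, (0,0,0) and (1,1,1) are invalid states."""
                return tuple(halls) in VALID_STATES

            def select_hall_set(primary, backup):
                """Prefer the primary redundant set; fall back to the backup when the
                primary reports an invalid sector, flagging which set was used."""
                if sector_valid(primary):
                    return primary, "primary"
                if sector_valid(backup):
                    return backup, "backup"
                raise RuntimeError("both redundant Hall sets report invalid sectors")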
    • Mohammad Mojarradi (Jet Propulsion Laboratory)
      • 07.0502 Radiation Shielding Simulation of High Energy Neutrons for Small Instrument Packages
        Samantha Kenyon (Virginia Tech), Spencer Soccio-Mallon (Virginia Tech), Arthur Ball (Virginia Tech) Presentation: Samantha Kenyon - Wednesday, March 5th, 10:10 AM - Madison
        Enabling radiation shielding technology is imperative for deep space exploration by spacecraft and for crewed missions to the Moon and Mars. Gamma rays from the Sun and high-energy neutrons produced by cosmic rays prove to be some of the toughest particles to shield electronics from. Typically, shielding is implemented up to the gamma limit, and radiation-hardened electronics are then used to protect against higher-energy gamma rays and neutrons. Neutrons pose a problem for electronics because their high energy and mass have a significant impact on silicon substrates. However, well-designed shielding can be used as protection from high-energy neutrons as well, through the use of moderator and absorber materials. It has been shown that, when used in combination, these layers can decrease neutron energy and absorb the neutrons, respectively. The moderator + absorber layer concept has been utilized for large-scale, ground-based systems such as nuclear facilities. We look at incorporating the same concept in a small form factor for instrument packages bound for deep interplanetary space. Preliminary results simulated in a software package known as SWORD7 will be shown for a novel radiation shielding concept that attenuates high-energy neutrons. Simulation runs of a 15-day equivalent fluence were performed and analyzed for a 1U CubeSat module (10 cm cubed) using various materials for a moderator layer and absorber layer, and compared with a minimal-shielding case to assess shielding effectiveness. The combination of polyethylene as a moderator layer and boron carbide as an absorber layer was shown to be the best solution of all combinations evaluated, including a stainless-steel-only case.
    • Tom Hoffman (Jet Propulsion Laboratory) & Didier Keymeulen (Jet Propulsion Laboratory)
      • 07.0601 Enhancements of FLEX Hyperspectral Data Compression Using High-Performance Embedded Space Computing
        Didier Keymeulen (Jet Propulsion Laboratory) Presentation: Didier Keymeulen - Wednesday, March 5th, 10:35 AM - Madison
        This paper describes the enhancement of a data compression computing system that operates on imaging spectrometer data. The implementation is part of the focal plane interface electronics - digital (FPIE-D) and includes AMD SoC Zynq-based and KU060-based versions. JPL imaging spectrometers can acquire data with 640, 1280, or 3000 cross-track positions with 328 or 480 bands at up to 216 images per second. Uses for imaging spectrometer data include studying the mineral dust cycle, methane detection, analysis of biodiversity, and fire detection. The enhancements to the Fast Lossless Extended (FLEX) data compression block (a modified implementation of the CCSDS-123.0-B-2 standard) compared to the implementation used for the Earth Surface Mineral Dust Source Investigation (EMIT) mission were: (1) BIL scan order, (2) narrow local sums, (3) a modification to use ‘stale’ prediction weights, (4) a minor modification to the predictor, and (5) a pipelined predictor-quantizer core that was made possible by the other enhancements. These enhancements together do not adversely affect the achieved compression ratio. The motivation for the 'stale weights' was that with BIL scan order and narrow local sums, the main obstacle to pipelining the prediction calculation in an FPGA implementation is that the weight update calculation requires the sign of the prediction error for the previous sample. By using weights that are a few samples old, it is possible to pipeline the prediction calculation. These enhancements provide an increase in data throughput by a factor of 2.5 compared to the EMIT implementation. Specifically, the FPIE-D 7z1 achieves a data throughput of 60 MSamples/sec (compared to 25 MSamples/sec for EMIT), allowing real-time data compression for a JPL imaging spectrometer similar to EMIT (40 MSamples/sec), using a 3 FLEX Band Interleaved by Line (BIL) core implementation. The FPIE-D KU1 board achieves a data throughput of 218 MSamples/sec using an 11 FLEX BIL core implementation, which is the maximum number of cores based on the resources available on the KU060. The FPGA implementation was stress-tested on the FPIE-D KU1 board using different image patterns, image sizes (cross-track between 32 and 1280 in increments of 32, and bands between 32 and 480 in increments of 8), and compression fidelity parameters (maximum error values varying by band and randomly ranging from 0 (lossless) to 1023). Over 20 million images of different sizes, totaling 23 terabytes of data, were tested. We also demonstrated using Alpha Data performance analysis FPGA IPs, that the limited PCIe data bandwidth (1.4 GBytes/sec) does not affect the compression data throughput, allowing the host processors to perform real-time acquisition and downlink while the FPGA simultaneously performs real-time data compression. Finally, we stress-tested a 1-core implementation on the FPIE-D 7z1 board and verified the functional test on a 3-core implementation.
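        The 'stale weights' idea, using prediction weights that lag the current sample by a few positions so the predictor can be pipelined, can be illustrated with the greatly simplified adaptive predictor below. It is not an implementation of CCSDS-123.0-B-2 or of the FLEX core; local sums, quantization, and the standard's weight-update rule are omitted or replaced by illustrative stand-ins.

            import numpy as np

            def stale_weight_predict(samples, order=3, lag=4, mu=1e-4):
                """Adaptive linear prediction of a 1-D sample stream where the weight
                update applied at sample t uses the prediction error from sample t-lag,
                mimicking a pipelined FPGA datapath that cannot use the newest error."""
                w = np.zeros(order)
                errors = np.zeros(len(samples))
                for t in range(order, len(samples)):
                    context = samples[t - order:t][::-1]           # most recent samples first
                    errors[t] = samples[t] - float(w @ context)
                    stale_t = t - lag                              # error available after the pipeline delay
                    if stale_t >= order:
                        stale_ctx = samples[stale_t - order:stale_t][::-1]
                        w += mu * errors[stale_t] * stale_ctx      # LMS-style update driven by the stale error
                return errors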
    • Leena Singh (MIT Lincoln Laboratory) & Matthew Lashley (GTRI) & John Enright (Toronto Metropolitan University)
      • 07.0701 Capabilities and Recent Projects of the Jet Propulsion Laboratory’s Guidance and Control Section
        David Sternberg (NASA Jet Propulsion Laboratory), Carl Liebe (JPL), Oscar Alvarez Salazar (JPL) Presentation: David Sternberg - Wednesday, March 5th, 11:00 AM - Madison
        This paper presents a survey of select flight projects and technology development efforts in the NASA Jet Propulsion Laboratory Guidance and Control (GNC) Section. This survey is not exhaustive, but it is intended to demonstrate the broad range of capabilities and areas of expertise held by the GNC Section. The programs span the gamut from ground-only testbeds to Earth orbiters to deep-space flyers, along with the technologies needed to pursue ever more daring and innovative missions. The GNC concepts and technologies presented in this paper are informed by experience and include autonomous operation and systems-level thinking. This paper will also present directions and interests the Section will continue pursuing as it develops new GNC systems for flight.
      • 07.0702 Fast Fuel-Optimal Constrained Impulsive Control with Application to Distributed Spacecraft
        Matthew Hunter (Stanford University), Simone D'Amico (Stanford University) Presentation: Matthew Hunter - Wednesday, March 5th, 11:25 AM - Madison
        This work extends a reachable set theory approach for fuel-optimal control to incorporate convex and non-convex state, time, and magnitude constraints. Reachable set theory considers problems with norm-like cost functions and time-varying linear dynamics over a fixed time horizon and can produce optimality conditions for control input profiles. These optimality conditions enable the identification of optimal maneuver times and directions without needing to explicitly solve for the maneuvers themselves. While previous works have leveraged this approach to generate unconstrained, numerically-efficient optimal control solutions, this work proposes a novel impulsive control solver that finds provably-optimal trajectories under non-convex constraints. The solver is designed from a new set of optimality conditions, derived by reformulating a single reachable set into multiple time-ordered, step-wise reachable set waypoint sub-problems, where a waypoint is the state reached after a given maneuver. Non-convex constraints are satisfied at each waypoint, preserving convexity at each sub-problem without needing to convexify the full control problem, enabling the computationally efficient identification of constrained optimal maneuvers, and guaranteeing a user-defined bound on constrained cost. To produce an optimal maneuver plan, the solver rolls out constrained trajectories until the remaining unconstrained solution adheres to the problem constraints. The fuel-optimal control problem considered by this work is broadly applicable to many fields and particularly significant to Distributed Space Systems (DSS), which incorporate multiple spacecraft to overcome the limitations and risks posed by a single spacecraft. Multi-agent guidance and propulsive control techniques, often for small spacecraft at close separations, require guarantees of collision avoidance, efficient maneuvering to preserve a restricted fuel payload, and numerically-simple algorithms functional under on-board computational processing limitations. The proposed solver offers both user-defined bounds on constrained maneuver optimality and enhanced computational efficiency to directly address these challenges, and these capabilities are demonstrated comparatively against baseline solvers from the literature through Monte Carlo analysis on a variety of relevant DSS scenarios. Overall, this work enables the benefits of reachable set theory, namely computational efficiency and provable optimality, to be extended to control problems of higher fidelity.
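        To make the flavour of the fuel-optimal impulsive problem concrete, the toy convex program below minimizes total delta-v for a discrete-time linear relative-motion model with a terminal rendezvous constraint. It uses generic double-integrator-like dynamics and the cvxpy modelling package purely for illustration; the paper's reachable-set solver, waypoint reformulation, and non-convex constraints are not represented here.

            import numpy as np
            import cvxpy as cp

            # Toy discretized linear dynamics x_{k+1} = A x_k + B u_k, with u_k an impulsive delta-v.
            dt = 60.0
            A = np.block([[np.eye(3), dt * np.eye(3)], [np.zeros((3, 3)), np.eye(3)]])
            B = np.vstack([np.zeros((3, 3)), np.eye(3)])
            N = 20
            x0 = np.array([1000.0, -500.0, 200.0, 0.0, 0.0, 0.0])   # initial relative state [m, m/s]
            xf = np.zeros(6)                                        # rendezvous target

            x = cp.Variable((6, N + 1))
            u = cp.Variable((3, N))
            cost = sum(cp.norm(u[:, k]) for k in range(N))          # total delta-v as a fuel proxy
            constraints = [x[:, 0] == x0, x[:, N] == xf]
            constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k] for k in range(N)]
            cp.Problem(cp.Minimize(cost), constraints).solve()
            print("total delta-v [m/s]:", cost.value)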
      • 07.0703 Optimal Attitude Control of Large Flexible Space Structures with Distributed Momentum Actuators
        Pedro Rocha Cachim (Carnegie Mellon University), Will Kraus (Carnegie Mellon University), Zachary Manchester (Carnegie Mellon University), Pedro Lourenço (GMV), Rodrigo Ventura (Instituto Superior Técnico) Presentation: Pedro Rocha Cachim - Wednesday, March 5th, 04:30 PM - Madison
        Recent spacecraft mission concepts propose larger payloads that have lighter, less rigid structures. For large lightweight structures, the natural frequencies of their vibration modes may fall within the attitude controller bandwidth, threatening the stability and settling time of the controller and compromising performance. This work tackles this issue by proposing an attitude control design paradigm of distributing momentum actuators throughout the structure to have more control authority over vibration modes. The issue of jitter disturbances introduced by these actuators is addressed by expanding the bandwidth of the attitude controller to suppress excess vibrations. Numerical simulation results demonstrate that, at the expense of more control action, a distributed configuration can achieve lower settling times and reduce structural deformation compared to a more standard centralized configuration.
      • 07.0708 Linear Parameter Varying Attitude Control for CubeSats Using Electrospray Thrusters
        Felix Biertümpfel (TU Dresden), Emily Burgin (TU Dresden), Hanna Harjono (US Space Force), Paulo Lozano (), Harald Pfifer (TU Dresden) Presentation: Felix Biertümpfel - Wednesday, March 5th, 04:55 PM - Madison
        This paper proposes the design of a single linear parameter-varying (LPV) controller for the attitude control of CubeSats using electrospray thrusters. CubeSat attitude control based on electrospray thrusters faces two main challenges: the thrusters can only generate a small control torque, which easily saturates the actuation system, and CubeSats need to perform multiple different maneuvers, from large and small slews to pointing tasks. LPV control is ideally suited to address these challenges. The proposed design follows a mixed-sensitivity control scheme with LPV weights that are derived from the performance and robustness requirements of typical individual CubeSat maneuvers. The controller is synthesized by minimizing the induced L2-norm of the closed-loop interconnection between the controller and the weighted plant. The performance and robustness of the controller are demonstrated on a simulation of the MIT Space Propulsion Lab's Magnetic Levitation CubeSat Testbed.
      • 07.0710 A Crater-based Optical Navigation Approach for Precise Spacecraft Localization
        Simone Andolfo (University of Rome, La Sapienza), Antonio Genova (University of Rome, La Sapienza), Mohamed El Awag (University of Rome, La Sapienza), Fabio Valerio Buonomo ("La Sapienza" University of Rome), Pierluigi Federici (), Riccardo Teodori (University of Rome, La Sapienza) Presentation: Simone Andolfo - Wednesday, March 5th, 09:00 PM - Madison
        Lunar exploration represents the launching pad for developing novel navigation techniques in preparation for future missions to Mars and beyond. Innovative optical navigation systems that make use of machine vision techniques are currently under development to aid in the performance of challenging navigation tasks, including autonomous pinpoint landing operations. Imaging data can indeed support real-time navigation operations through the detection of characteristic surface features (e.g., craters), whose locations and properties have been previously retrieved and loaded into the onboard navigation system. By matching and tracking multi-object patterns across the images acquired by the onboard camera, highly accurate real-time localization is carried out, enabling divert maneuvers in case hazards are identified along the spacecraft path or at the landing site. This capability is fundamental to prevent the approach to risky areas. Due to the recent push to revisit the Moon from both a scientific and a commercial perspective, this represents a key capability to support, for example, the exploration of key scientific sites (e.g., lava tubes) or the provisioning of resources to future human settlements on the lunar surface. In this work, a crater-based navigation approach is implemented and discussed, which leverages machine learning techniques to extract crater locations from imaging data collected under different illumination conditions and observation geometries. The performance of the proposed navigation system is evaluated by carrying out a campaign of numerical simulations in a high-fidelity synthetic lunar environment, where simulated images captured by the landing platform are generated by accounting for high-resolution digital terrain models (DTMs) of the lunar surface. In addition to describing the crater identification and matching pipelines, we will also investigate the impact of surface landmark distribution and topographic characterization of the target area (e.g., consistency between orbital maps and 3D terrain models) on the performance of the navigation filter.
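        A bare-bones version of the crater matching step described above, in which catalog craters already projected into the image frame are associated with detected craters by nearest neighbour with a radius-consistency check, is sketched below using scipy's KD-tree. The tolerances and data layout are assumptions for illustration and do not reflect the authors' pipeline.

            import numpy as np
            from scipy.spatial import cKDTree

            def match_craters(detected_xy, detected_r, catalog_xy, catalog_r,
                              max_center_err=5.0, max_radius_ratio=0.2):
                """Match detected craters (image coordinates and radii in pixels) to
                catalog craters projected into the image frame; returns index pairs."""
                tree = cKDTree(catalog_xy)
                dist, idx = tree.query(detected_xy, k=1)
                matches = []
                for i, (d, j) in enumerate(zip(dist, idx)):
                    radius_err = abs(detected_r[i] - catalog_r[j]) / catalog_r[j]
                    if d <= max_center_err and radius_err <= max_radius_ratio:
                        matches.append((i, j))   # usable as a measurement in the navigation filter
                return matches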
      • 07.0712 An Indirect Approach to Solve a Pursuit-Evasion War Game between Two Spacecraft
        Aden Funkhouser (Pennsylvania State University), Sharad Sharan (The Pennsylvania State University), Puneet Singla (The Pennsylvania State University ) Presentation: Aden Funkhouser - Wednesday, March 5th, 09:25 PM - Madison
        Pursuit–evasion (PE) games define a subclass of zero-sum differential games in which an agent desires to “capture” an adversarial agent. The use of indirect optimization to solve zero-sum differential games is not extensively documented in literature due to their complex dynamics and highly sensitive nature. An indirect approach known as the multi-order shooting scheme (MOSS) is proposed to solve pursuit-evasion war games between two spacecraft in a robust and efficient manner. A traditional shooting scheme employs Newton’s method to solve the resulting two-point boundary value problem (TPBVP). Consequently, it is susceptible to divergence when the initial guess is in a region where gradient information alone is insufficient to progress the guess toward the saddle-point equilibrium solution. The MOSS employs Halley’s method to develop second and third order root-solving updates, which utilize the higher-order derivatives of the inherent error surface to increase the domain of convergence. Furthermore, a non-product quadrature method known as the Conjugate Unscented Transformation (CUT) is utilized to compute higher-order sensitivities in a derivative-free manner. A key feature of MOSS is that, for each iteration, it automatically selects the update order that ensures the most effective step toward satisfaction of the terminal boundary conditions. An orbital PE game corresponding to the interception of an adversarial spacecraft in minimum time is considered to showcase the efficacy of MOSS in efficiently solving TPBVPs. The numerical results demonstrate that MOSS is able to identify a solution to the PE game when the traditional shooting method struggles to converge due to a poor-quality initial guess.
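        The difference between the first-order (Newton) and higher-order (Halley) updates at the heart of the multi-order shooting scheme can be seen on a scalar root-finding problem, as in the sketch below. The actual MOSS applies analogous updates to the TPBVP error vector using CUT-computed sensitivities, which is not reproduced here.

            def newton_step(f, df, x):
                """First-order update: x - f/f'."""
                return x - f(x) / df(x)

            def halley_step(f, df, d2f, x):
                """Third-order (Halley) update: uses curvature information to enlarge the
                basin of convergence compared with Newton's method."""
                fx, dfx, d2fx = f(x), df(x), d2f(x)
                return x - (2.0 * fx * dfx) / (2.0 * dfx**2 - fx * d2fx)

            # Example: solve f(x) = x**3 - 2 from a deliberately poor initial guess.
            f   = lambda x: x**3 - 2.0
            df  = lambda x: 3.0 * x**2
            d2f = lambda x: 6.0 * x
            x_newton = x_halley = 10.0
            for _ in range(6):
                x_newton = newton_step(f, df, x_newton)
                x_halley = halley_step(f, df, d2f, x_halley)
            print(x_newton, x_halley)   # Halley typically approaches 2**(1/3) in fewer iterations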
      • 07.0714 Contingency-Aware Station-Keeping Control of Halo Orbits
        Fausto Vega (Carnegie Mellon University), Martin Lo (JPL), Zachary Manchester (Carnegie Mellon University) Presentation: Fausto Vega - Thursday, March 6th, 08:30 AM - Madison
        We present an algorithm to perform fuel-optimal stationkeeping for spacecraft in unstable halo orbits with additional constraints to ensure safety in the event of a control failure. To enhance safety, we enforce a half-space constraint on the spacecraft trajectory. This constraint biases the trajectory toward the unstable invariant manifold that escapes from the orbit away from the planetary body, reducing the risk of collision. We formulate a convex trajectory-optimization problem to autonomously generate impulsive spacecraft maneuvers to loosely track a halo orbit using a receding-horizon controller. Our solution also provides a safe exit strategy in the event that propulsion is lost at any point in the mission. We validate our algorithm in simulations of the three-body Earth-Moon and Saturn-Enceladus systems, demonstrating both low total delta-v and a safe contingency plan throughout the mission.
    • William Jackson (L3Harris Technologies) & Michael Mclelland (Southwest Research Institute)
      • 07.0802 Sub-attofarad Capacitance Sensor for High-precision Sensing in LISA
        Benjamin Cella (ETHZ (Swiss Federal Institute of Technology)) Presentation: Benjamin Cella - Thursday, March 6th, 08:55 AM - Madison
        This paper presents the design and performance of various sensing transformer versions for the Front End Electronics (FEE), which are utilized to sense capacitance with high precision for the LISA (Laser Interferometer Space Antenna) mission. LISA aims to detect gravitational waves in space, with a planned launch in 2035. The sensing capacitance reflects the LISA spacecraft's position relative to the gravitational reference, the Test Mass (TM). Although the technology demonstrator of LISA, LPF (LISA Pathfinder), was successful, minor system-level changes necessitated the reassessment of certain electronic components. A notable change is the increase in parasitic capacitance at the input of the sensing circuit. Without mitigation, this would reduce the resonant frequency of the sensing circuit, defined by the transformer inductance and its total input capacitance. Therefore, within the LISA framework, the LPF transformer design has been revised to achieve resonance at the injection frequency while maintaining or enhancing capacitance noise performance. We designed and manufactured 25 transformers using three different core types (P26x16-N48, P36x22-N48, B655813-N41) and three different inductance factors, i.e., core gaps between two core halves (AL = 400nH, 630nH, 1000nH). The transformers were assembled and tested with varying numbers of turns for each coil (48, 55, 56, 64, 69, 70, 80, 100) to achieve different coil inductances. The capacitance noise performances (0.51 aF/√Hz to 0.65 aF/√Hz) indicate potential improvement compared to the performance demonstrated in LPF (~0.69 aF/√Hz). The measured performances are also compared with similar results recently presented in the literature.
    • Matthew Spear (Air Force Research Laboratory) & Douglas Carssow (Naval Research Laboratory)
      • 07.0901 Pulsed Laser as a Single Event Effects Screening Technique: An Introduction
        George Ott (Radiation Test Solutions) Presentation: George Ott - Monday, March 3rd, 09:25 PM - Madison
        Electronic components used in space applications operate in a radiation environment that includes both total dose and single event effects. Traditional testing for single event effects requires the use of a particle accelerator. Facilities for this testing are limited and expensive. Pulsed laser testing as a potential substitute or screen for single event effects has long been considered and, in some cases, used. This paper will include both an introduction to the technique of plSEE (Pulsed Laser Single Event Effects) testing and sample results of the testing compared to classic hiSEE (Heavy Ion Single Event Effects) testing for two COTS (Commercial Off The Shelf) oscillators, a COTS optocoupler, and a COTS high-side current sense amplifier.
      • 07.0902 Cross-Examining the Computational Performance of Radiation-Tolerant NVIDIA and AMD SoCs
        Richard Briggs (Cosmiac), Derrek Landauer (University of New Mexico), Tyler Lovelly (Alligator Electronics L.L.C.) Presentation: Richard Briggs - Thursday, March 6th, 09:20 AM - Madison
        As space missions encounter increasing computational demands, the need for hardware that combines technical capability with resistance to harsh space radiation environments grows. Radiation-hardened (rad-hard) hardware often performs poorly compared to commercial-off-the-shelf (COTS) processors. Consequently, COTS-based systems-on-chips (SoCs) demonstrating some radiation tolerance, such as NVIDIA's Tegra K1 and AMD's Ryzen V1000 series, may provide viable alternatives for low Earth orbit (LEO) missions. This study extends previous individual evaluations by offering a comparative analysis, conducting standardized tests on critical computational components --- CPU, GPU, and Single-Instruction Multiple-Data units --- across both systems. The comprehensive analysis presented in this study delivers robust comparative performance metrics, equipping the aerospace community with essential data to optimize hardware selection for future space missions.
      • 07.0903 An Affordable Fault-tolerant EDAC Designed for FPGA and Memory Applications
        Youcef Bentoutou (Satellite Development Center - Algerian Space Agency) Presentation: Youcef Bentoutou - Thursday, March 6th, 09:45 AM - Madison
        In the space radiation environment, single event upset mitigation is crucial to guarantee reliable operation of memory and Field Programmable Gate Array (FPGA) devices. Traditionally, ensuring the reliable operation of commercial memory and FPGA devices in space has often depended on hardware-based solutions, such as Hamming (12, 8) codes or Triple Modular Redundancy (TMR). TMR involves triplicating memory or FPGA devices and using voting logic to detect and correct erroneous bits. This modular triplication approach has been proven effective in protecting these devices against the impact of radiation-induced upsets. However, it comes with costs in circuit performance, resource utilization, and power consumption. This paper presents the design of a new Error Detection and Correction (EDAC) system based on the combination of partial TMR and Quasi-cyclic codes, implemented in SRAM-based FPGAs to protect SRAM memories against single-event upsets. The experimental results demonstrate that the proposed EDAC scheme exhibits reduced delay, area, and power overheads compared to standard EDAC schemes. The reliability estimates, expressed in terms of mean time to failure, indicate that the proposed SRAM-based FPGA design performs well and is highly reliable in low-Earth polar orbits.
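        A bitwise triple-modular-redundancy majority voter, the baseline against which the proposed partial-TMR/Quasi-cyclic EDAC is compared, can be written in a few lines. This generic sketch is included only to illustrate the voting concept and is unrelated to the authors' FPGA implementation.

            def tmr_vote(a: int, b: int, c: int) -> int:
                """Bitwise majority of three redundant copies of a word: each output bit
                is 1 iff at least two of the three input bits are 1, so a single upset
                bit in any one copy is corrected."""
                return (a & b) | (a & c) | (b & c)

            # Example: copy b suffers a single-bit upset; the vote restores the original word.
            word = 0b1011_0010
            assert tmr_vote(word, word ^ 0b0000_1000, word) == word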
      • 07.0904 RarePlanes Detection Using YOLOv5 on the Versal Adaptive SoC
        Jacob Brown (Brigham Young University) Presentation: Jacob Brown - Thursday, March 6th, 10:10 AM - Madison
        Object detection is an important operation for satellites to be able to perform, and in outer-space missions it is crucial that machine learning models perform inference on satellite images accurately and reliably. Where size, power, and other constraints exist, meeting this goal for accurate and reliable object detection is challenging. Additionally, soft errors caused by radiation further disrupt and degrade the operation of object detection in satellites. This work investigates the performance of object detection models on an embedded device in the presence of soft errors. We used the well-known and high-performing YOLOv5 object detection model, and trained various sizes of YOLOv5 on a collection of satellite images of airplanes known as the RarePlanes dataset. The target device is AMD Xilinx's VCK190 evaluation board featuring the VC1902 Versal Adaptive SoC part. The Versal part was chosen as the deployment target because it is capable of high performance computing at low power, and because it has a space grade version, the XQRVC1902. The trained models were quantized and compiled using AMD Xilinx's Vitis AI framework and deployed on the Deep Learning Processing Unit intellectual property. Performance of the YOLOv5 models on the DPU varies based on model size and DPU configuration, with the highest frame rate being 300 fps and the lowest being 20 fps. All model and DPU configurations result in a higher framerate than a standard desktop CPU running the same models. Accuracy of the floating point YOLOv5 models is high, with F1 scores ranging from 0.845 to 0.911. Accuracy differences between the original 32-bit floating-point models and the 8-bit integer quantized models are minimal, with the greatest change in accuracy being about 2%. The reliability of the YOLOv5 models running on the DPU IP in a harsh environment was then evaluated. We developed an environment to simulate and discover the impact of soft-errors on YOLOv5 object detection through injecting random faults into the programmable logic of the Versal device. Benchmarking tests were executed as faults were injected. Results from the benchmark tests during fault injection were compared to the baseline results recorded before any faults were injected, and deviations in the model output were discovered. Over several weeks, more than 400,000 faults were injected, of which 7.48% had noticeable effects. Of the faults with noticeable effects, 30% had tolerable effects (slightly altered but still correct predictions) and 70% had non-tolerable effects (incorrect predictions, DPU timeouts, and CPU hangs). This paper will describe the implementation of YOLOv5 on the VC1902, model performance results in accuracy and throughput, the fault injection environment, and the impact of fault injection on model performance.
  • Lisa May (Lockheed Martin Space) & Greg Chavers (NASA)
    • Kevin Post (Booz Allen Hamilton) & Chel Stromgren (Binera, Inc.)
      • 08.0102 Robotically Emplaced Lattice Reinforcement for Lunarcrete Structures and ISRU
        Christine Gregg (NASA Ames Research Center), Adam Johnson (The Pennsylvania State University), Sarah Baxter (University of St. Thomas), Rita Lederle (University of St. Thomas) Presentation: Christine Gregg - Monday, March 3rd, 09:20 AM - Electronic Presentation Hall
        NASA requires versatile and robust in-situ resource utilization (ISRU) strategies for lunar construction. Many concepts propose using lunar regolith to form concrete, termed here generally as lunarcrete, to create structures like the Lunar Safe Haven to protect astronauts and equipment from the micrometeoroid and radiation environment. Just like terrestrial concrete, lunarcrete is expected to have poor tensile strength and to require reinforcement to build safe lunar structures under expected thermal and moonquake conditions. However, autonomous placement of continuous reinforcement remains a challenge for 3D printing strategies. The NASA ARMADAS project has demonstrated autonomous robotic assembly of lattice structures. We propose using this autonomously constructed lattice as reinforcement and integral formwork for lunar construction. Rather than 3D printing concrete, structures could be cast in a similar manner to what is commonly done on Earth. This process would leverage much of the technology needed for lunarcrete 3D printing, but critically would reduce the precision needed for placement, mix control, rheology, etc. This process margin, when combined with the increased strength and ductility provided by the reinforcement and formwork, will lead to a lower-risk construction methodology for lunar settlements. In this paper, we present experimental data from flexural and tensile strength testing of a terrestrial concrete mix reinforced with prototype lattice reinforcement. These preliminary results show a successful reduction in crack propagation relative to the unreinforced control specimens, though no increase in tensile or flexural strength. Additional work is necessary to study whether tuning the geometry of the reinforcement or the concrete/lattice interface could realize increases in strength in addition to crack propagation reduction, especially with lunar simulant mixes.
      • 08.0105 A Pathway to the Moon: Marshall Space Flight Center’s Human Landing Systems
        Beverly Perry (), Lisa Watson-Morgan (NASA - Marshall Space Flight Center), Kent Chojnacki (NASA - MSFC), Laura Kiker (NASA - Marshall Space Flight Center) Presentation: Kent Chojnacki - Sunday, March 2nd, 04:30 PM - Amphitheatre
        NASA’s Artemis campaign will bring astronauts back to the lunar surface in this decade and beyond. Artemis objectives include exploration, advancing science and technology, and learning how to work and live on another world as we prepare for human missions to Mars. NASA is collaborating with commercial and international partners to establish the first long-term presence on the Moon with plans to land the first woman and first person of color on the Moon using innovative technologies to explore more of the lunar surface than ever before. NASA’s Human Landing System (HLS) program, based at NASA’s Marshall Space Flight Center (MSFC) in Huntsville, Alabama, is the agency’s program responsible for the development of the vehicles that will take astronauts from lunar orbit to the surface of the Moon and back, safely. The program has contracted with two industry providers, SpaceX and a Blue Origin-led team, to build the next lunar landers. In 2021, NASA awarded a firm, fixed-price (FFP) contract to SpaceX to provide an initial lunar lander for Artemis III – scheduled to be the first mission to return astronauts to the Moon in over 50 years. In 2022, NASA awarded SpaceX a contract modification to provide a more capable, sustaining lunar lander for Artemis IV. In parallel, the program awarded another FFP contract to Blue Origin and its partners to provide a landing system for Artemis V, giving NASA a dissimilar, redundant capability for human Moon landings. For Artemis VI and beyond, NASA plans to competitively procure landing services. In late 2023, the HLS program also gave the contracted companies the authority to begin work that would modify designs of the human landing systems to develop large cargo landers. The HLS program is making significant strides with SpaceX and Blue Origin as the companies perform design, development, test, and evaluation (DDT&E) of these human landing systems. MSFC continues to provide critical oversight, insight, and expertise into many of the key areas required to return to the Moon, such as technology and engine development, cryogenic fluid management, and propellant transfer. This paper will detail the progress both providers have made toward Artemis III, IV, and V as well as large cargo landers, scheduled to become available no earlier than Artemis VII. We will also discuss the role MSFC and the HLS program play in the development of industry-changing, public-private partnerships within the space community and the critical role the Center’s capabilities and expertise are to getting humans back to the Moon.
      • 08.0106 Lunar Engineering 101
        Milena Graziano (Johns Hopkins University/Applied Physics Laboratory) Presentation: Milena Graziano - Sunday, March 2nd, 04:55 PM - Amphitheatre
        On July 20, 1969, the Apollo 11 lunar module (the Eagle) landed in the Sea of Tranquility and realized an incredible achievement: the first instance of human presence on the Moon. This milestone was repeated five more times under the Apollo program, ending with a record-breaking mission (Apollo 17) that continues to hold the title of longest crewed lunar landing mission. Since then, we have had numerous attempts to land on the Moon with mixed success, including India’s Chandrayaan-3 landing close to the lunar south pole and a far side visit by China’s Chang’E. Armed with the Apollo program lessons learned, the decades of scientific discoveries by lunar orbiters, and the recent advances in lander and science payload technologies, we are gearing up to once again see astronauts safely return to the lunar surface. Our ultimate goal is to achieve a safe and active permanent base that can facilitate human missions to Mars and other planetary bodies. Long-term and reliable lunar surface systems are critical to support NASA’s goals for exploration and habitation within the next decade. Transformative capabilities are needed for numerous lunar surface technologies, including power, in-situ resource utilization, and extreme access. So, how do we prepare to realize this enormous task? After several troubled lunar landings in the past couple of years, it is evident that reaching the lunar surface and conquering its environment is an extremely onerous task. As we prepare for the future, it is valuable to have a practical guide to lunar surface environments that would enable the development of critical lunar technologies and keep NASA Moon to Mars objectives on track. Specifically, it would be important to understand limitations due to the lunar environments, what specific engineering challenges these impose, and how these challenges can be overcome by design, risk assessment, and ground testing. To address this need, the Lunar Surface Innovation Consortium (LSIC) has developed a resource for engineers that presents the main characteristics of lunar surface environments, along with their respective challenges, and hardware design considerations. This new resource, named Lunar Engineering 101, has been prepared by subject matter experts in lunar science and spacecraft engineering. The unique perspective and targeted content of this guide will serve as an aid to engineers and scientists of different backgrounds. This paper offers an overview of the concept, organization, and contents of the Lunar Engineering 101 series.
      • 08.0107 A New Methodology for Quantitative Analysis to Determine Crew Sizes for Mars Missions
        Donna Dempsey (NASA) Presentation: Donna Dempsey - Sunday, March 2nd, 05:20 PM - Amphitheatre
        Missions to Mars will differ from previous human spaceflight missions in that the crew of astronauts will be required to operate in an Earth-independent manner due to the long communication delays. Without a systematic, repeatable process to determine the number and composition of crew necessary to successfully accomplish these missions, NASA increases the risk that crew sizes may be too small to meet primary mission objectives under nominal conditions and, more consequentially, that crewmembers may not have the expertise needed to successfully respond to unforeseen failures without the real-time expertise of the Mission Control Center (MCC) team NASA currently relies upon. The NASA Engineering and Safety Center (NESC) developed a methodology for assessing the trade space of factors that affect the number of crew for Mars missions based on human performance and expertise modeling. This methodology includes the consideration of results from three human performance models developed using the Improved Performance Research and Integration Tool (IMPRINT) modeling platform as well as a custom-built model on expertise trained within the crew based on NASA’s crew qualification and responsibilities matrix (CQRM). We present example model results and discuss implications for trade space analyses for Mars mission crew size.
      • 08.0108 Progress in Planetary Protection Development for Crewed Mars Missions
        James Spry (BQMI), James Benardini (NASA Headquarters), Erin Lalime (NASA), Bette Siegel (NASA) Presentation: James Spry - Sunday, March 2nd, 09:00 PM - Amphitheatre
        It is widely accepted that the current planetary protection (PP) paradigm developed for robotic spacecraft needs to change ahead of the first crewed mission to Mars. Assessing spacecraft microbial cleanliness against a quantitative bioburden threshold at launch is simply not relevant for the habitable spacecraft environments that would be introduced to the Martian surface, when the astronauts themselves each have circa eight orders of magnitude more microorganisms on and in their bodies than would be permitted for a typical robotic Mars lander mission. The international COSPAR PP policy recognizes this, but also states that: “the intent of planetary protection is the same whether a mission to Mars is conducted robotically or with human explorers. Accordingly, planetary protection goals should not be relaxed to accommodate a human mission to Mars. Rather, they become even more directly relevant to such missions – even if specific implementation guidelines should differ.” The goals that are to remain the same (as described in Article IX of the 1967 Outer Space Treaty) are: i) to avoid the harmful contamination of the target solar system body (Mars in this case), and ii) to avoid adverse changes in the environment of the Earth resulting from the introduction of extraterrestrial matter. But what then are the implementation guidelines that should differ? The NASEM CoPP in its recent (2021) “Evaluation of Bioburden Requirements for Mars Missions” report declined to recommend relaxation of the present robotic requirements, noting that for such relaxation to be acceptable, any landing site needs “to be a conservative distance from any subsurface access point”, and that such a determination would also need to consider wind conditions for the location and season, and best estimates of microorganism survival time in the surface UV environment. This knowledge is simply not available at present, but this paper will provide an update on these and other “knowledge gaps” identified in a workshop series on PP for crewed Mars missions. Additionally, the paper will describe progress in addressing PP constraints in ongoing activities within NASA as the Moon to Mars architecture is matured. Finally, because the scientific community has been unable to provide a consensus answer to the question “How much is too much?”, in regard to the contamination that a crewed spacecraft mission might introduce to Mars, the paper will describe work to model and quantify the microbial bioburden associated with the hardware and operations of NASA’s crewed Mars surface mission concepts.
    • Matthew Simon (NASA Langley Research Center) & Erica Rodgers (NASA - Headquarters)
      • 08.0201 An Evaluation of the IEEE Std 1547-2018 for Power Systems Interconnected on Lunar Habitats
        James Hurtt (University of Colorado, Boulder), Kyri Baker (University of Colorado, Boulder) Presentation: James Hurtt - Monday, March 3rd, 08:30 AM - Amphitheatre
        Numerous commercial, national, and international developments are underway to return humans to the Moon and to establish a permanent Lunar habitat. The electrical power system demands for such permanent habitation are expected to far exceed any previous extraterrestrial power system developed, including the International Space Station. Additionally, the future growth and expansion of such a habitat will inevitably lead to the need for interconnection standardization. With the diversity of energy resources suitable for a Lunar habitat (e.g. solar, nuclear, hydrogen fuel cells, and chemical battery storage), the coordination and interoperability of these resources will need to be defined and standardized. Given the human-rated requirements, the ability to ensure the highest level of reliability is paramount to the safety of the habitat. This is further exacerbated by the unique challenges of the Lunar environment. The effects of radiation, space charging, meteorite impacts, surface contaminants (e.g. Lunar dust), among other environmental factors, result in further need to provide coordinated operation and flexible protection mechanisms. This paper evaluates the applicability of the existing terrestrial IEEE interconnection standard for future Lunar habitats. IEEE Std 1547-2018 is the most recently enacted standard on this topic and covers the interconnection and interoperability of distributed energy resources. In particular, the study focuses on the challenges of power quality, voltage stability, frequency stability, and protections called out in IEEE Std 1547-2018 and how they can apply to Lunar power systems. To accomplish this, the guidelines called out in the standard will be compared against proposed Lunar habitat power systems found in literature. The results will show that despite the unique challenges of large-scale Lunar power systems, some relevant lessons learned from terrestrial power systems can be applied to this application and can potentially mitigate risk to future missions.
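        As a purely illustrative sketch of the kind of interconnection check such a study considers, the snippet below classifies a bus measurement against configurable voltage and frequency bands. The numeric thresholds are placeholders for illustration only, not values taken from IEEE Std 1547-2018 or from any proposed lunar habitat design.

```python
# Minimal sketch: checking a bus measurement against configurable ride-through
# limits of the kind IEEE Std 1547-2018 defines for terrestrial DERs.
# The thresholds below are illustrative placeholders, NOT values from the standard.

from dataclasses import dataclass

@dataclass
class RideThroughLimits:
    v_min_pu: float = 0.9      # placeholder continuous-operation voltage band, per unit
    v_max_pu: float = 1.1
    f_min_hz: float = 59.0     # placeholder frequency band (assumes a 60 Hz bus)
    f_max_hz: float = 61.0

def classify_operating_point(v_pu: float, f_hz: float, lim: RideThroughLimits) -> str:
    """Return a coarse operating-region label for one measurement sample."""
    if lim.v_min_pu <= v_pu <= lim.v_max_pu and lim.f_min_hz <= f_hz <= lim.f_max_hz:
        return "continuous operation"
    return "ride-through / trip evaluation required"

if __name__ == "__main__":
    limits = RideThroughLimits()
    print(classify_operating_point(0.95, 60.0, limits))  # nominal sample
    print(classify_operating_point(0.80, 60.0, limits))  # depressed voltage sample
```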
      • 08.0202 Moon BRICCSS: Moon Blocks Using Regolith ISRU for Corbelled Construction of Sustainable Shielding
        Palak Patel (MIT), Lanie McKinney (MIT), Daniel Massimino (Massachusetts Institute of Technology), Annika Thomas (Massachusetts Institute of Technology), Juan Salazar (Massachusetts Institute of Technology), Mikita Klimenka (MIT), George Lordos (Massachusetts Institute of Technology), Cody Paige (Massachusetts Institute of Technology), Skylar Tibbits (MIT), Dava Newman (National Aeronautics and Space Administration) Presentation: Palak Patel - Monday, March 3rd, 08:55 AM - Amphitheatre
        The United States, along with 43 other Artemis Accords signatories, is committed to engaging in a peaceful and sustainable development of human presence on the Moon in the next two decades. For sustained human habitation on the Moon, a significant amount of radiation protection will be required around lunar habitats to mitigate exposure to ionizing radiation and protect astronauts' long-term health. In addition, habitats will need protection from micrometeoroids and the extreme temperature swings characteristic of the lunar environment. The utilization of local resources (ISRU) for construction and shielding on the Moon would considerably reduce launch mass from Earth and is a key enabling technology for a scalable human presence. This paper proposes an architecture where molten regolith is cast into hollow, hexagonal prism bricks, filled with aggregate regolith and robotically assembled into a corbelled structure. Key innovations include the manufacturing of hollow masonry units filled with loose regolith to reduce energy demands and annealing time, as well as the hexagonal brick design for compatibility with casting, energy minimization, and structural stability. Also, the back-weighting of the corbelling technique provides a stable structure without the need for metal supports, mortar, or other binders that would need to be sourced from Earth or substantially processed, enabling the construction materials to be derived entirely from un-beneficiated lunar regolith. The manufacturing method selected is simple and industrially scalable for increasing lunar habitation. The resulting structure is made out of Moon BRICCSS (Blocks using Regolith ISRU for Corbelled Construction of Sustainable Shielding) that are 1 meter tall and 3 meters long, capable of being handled by expected robotic technology. Analysis of expected brick thermal expansion and overall structural stability of a corbelled arch are presented, in addition to the results of the technology demonstrations for manufacturing a scaled brick unit and the robotic assembly procedure. Moon BRICCSS is a proposal for the manufacturing and construction of a highly durable and scalable radiation, thermal, plume, and micrometeoroid protection structure compatible with different habitat configurations.
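        For context on the thermal-expansion analysis mentioned above, a minimal sketch of unconstrained expansion dL = alpha * L * dT is shown below. The coefficient of thermal expansion and the temperature swing are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: free thermal expansion of a cast-regolith brick over a lunar
# day/night temperature swing, via dL = alpha * L * dT. Material CTE and the
# temperature range are assumptions for illustration.

def thermal_expansion(length_m: float, alpha_per_k: float, delta_t_k: float) -> float:
    """Unconstrained change in length for a uniform temperature change."""
    return alpha_per_k * length_m * delta_t_k

if __name__ == "__main__":
    brick_length_m = 3.0   # 3 m brick length, per the abstract
    alpha = 8e-6           # assumed CTE for a basalt-like cast regolith, 1/K
    delta_t = 250.0        # assumed equatorial day/night swing, K
    dl = thermal_expansion(brick_length_m, alpha, delta_t)
    print(f"Free expansion over the swing: {dl * 1000:.1f} mm")
```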
    • Randy Williams (The Aerospace Corporation) & Melissa Sampson (Lockheed Martin)
      • 08.0301 Recovery of Rocket Payloads and First Stages Using Unmanned Vehicles - a Proof-of-concept
        Kristoffer Gryte (Norwegian University of Science and Technology), Tor Johansen (NTNU), Roger Birkeland (Norwegian University of Science and Technology) Presentation: Kristoffer Gryte - Monday, March 3rd, 09:20 AM - Amphitheatre
        Higher rocket launch rates encourage recovery, both through the greater economic potential of re-use and to avoid an unnecessarily large environmental impact. Recovery by autonomous aerial or surface vehicles, operating alone or collaborating, can reduce cost and increase the quality of the recovery operation by minimizing human involvement, thus relaxing safety requirements and increasing agility. These benefits apply to all the concepts we evaluate, whether employing aerial or surface vehicles. The demand for utilization of space continues to grow, and the number of rocket launches reached a total of 223 orbital launch attempts in 2023. Most European and US launch sites are placed so the rockets fly over the open ocean to avoid any debris – either depleted rocket stages or due to launch failures – over land. We present and discuss several concepts of unmanned vehicle-assisted rocket stage recovery at sea for non-propelled returns, by use of aerial or surface robots. These concepts are meant to be “launch operator agnostic” and require little adaptation on the launcher end to be implemented. There are two main motivations for this research: The first is better resource management, either by promoting the reuse of rocket parts or by recovery cost reduction by (partially) replacing manned recovery operations with autonomous systems. The second motivation is related to reducing the environmental impact of launches. Several of the new launchers, both in operation and proposed, are made mostly of carbon composite material and are currently not equipped with a recovery system, which may result in a total of some hundred metric tons of composite materials and metal being disposed of in the waters every year. At the current launch cadence of operators other than SpaceX, the environmental impact saving may seem low, but there are extensive plans for increased launch cadence both in Europe and elsewhere. For example, Andøya Spaceport in Norway foresees scaling up to 30 launches a year, and therefore this technology development is an important part of a sustainable chain of operations. As a hybrid proof-of-concept experiment, we chose to study the recovery of a high-power student rocket by an unmanned surface vehicle. Here, a simulated unmanned surface vehicle is guided to the real-time predicted impact point of a physical high-power student rocket launched from Andøya Spaceport. In this case, the rocket did not have a parachute. In addition, we benchmark the recovery approach using previously recorded rocket telemetry data. While the economic potential in student rocket recovery is limited, the method is designed to scale for recovery of sounding rocket payloads or orbital rocket first stages.
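        As an illustration of the kind of real-time impact-point prediction used to guide a surface vehicle, the sketch below propagates a ballistic point-mass state with quadratic drag from a telemetry snapshot until ground impact. The vehicle and atmosphere parameters are assumptions for illustration, not values from the experiment described above.

```python
# Minimal sketch of an impact point predictor: propagate a point-mass ballistic
# state (no thrust, no parachute) with quadratic drag until ground impact.
# Mass, drag, and atmosphere numbers are illustrative assumptions.

import math

def predict_impact_point(pos, vel, mass=20.0, cd=0.6, area=0.02,
                         rho0=1.225, scale_height=8500.0, g=9.81, dt=0.05):
    """Forward-Euler propagation; pos = (x, y, altitude) in m, vel in m/s."""
    x, y, z = pos
    vx, vy, vz = vel
    t = 0.0
    while z > 0.0 and t < 600.0:
        rho = rho0 * math.exp(-max(z, 0.0) / scale_height)   # exponential atmosphere
        speed = math.sqrt(vx * vx + vy * vy + vz * vz)
        drag = 0.5 * rho * cd * area * speed / mass           # drag accel per unit velocity
        ax, ay, az = -drag * vx, -drag * vy, -drag * vz - g
        vx, vy, vz = vx + ax * dt, vy + ay * dt, vz + az * dt
        x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
        t += dt
    return (x, y), t   # predicted splash-down coordinates and time to impact

if __name__ == "__main__":
    point, tti = predict_impact_point(pos=(0.0, 0.0, 3000.0), vel=(120.0, 30.0, 150.0))
    print(f"Predicted impact {point[0]:.0f} m east, {point[1]:.0f} m north, in {tti:.1f} s")
```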
      • 08.0302 Modelling Thrust for ABS/N2O Based 3D Printed Hybrid Rocket Engine - Review of Static Fire Results
        Pratush Charan Donkada (Charan Aerospace) Presentation: Pratush Charan Donkada - Monday, March 3rd, 09:45 AM - Amphitheatre
        We discuss the viability of using a commonly available 3D printable material, Acrylonitrile Butadiene Styrene (ABS), in combination with Nitrous Oxide (N2O) as a 3D printable fuel for rocket engines. A hybrid rocket propulsion model utilizing ABS fuel and N2O oxidizer is programmed in Python, and the results are presented in this paper. ABS can be easily shaped into complex fuel-grain geometries for hybrid rocket motors via 3D printing technology. Modifying the grain geometry is as simple as adjusting the 3D printer CAD file. Thrust curve results for the complex grain geometries of the ABS + N2O-based 3D printed fuel have been programmatically modeled and discussed herein. Both N2O and ABS are non-toxic, non-explosive, and stable at room temperature and during high-temperature 3D printing. The use of ABS is expected to transform hybrid rocket motor production by improving consistency and safety while reducing development and production costs. This approach simplifies the manufacturing process and reduces turnaround time for producing rocket engines, which is particularly relevant from an Indian perspective, where ISRO has traditionally relied on solid rockets for the first and second stages of their missions. This study also included a series of small-scale static fire tests on ABS fuel grains. Static burns were conducted on two ABS fuel grains, each with initial dimensions of 100 mm in length and 20 mm in diameter, featuring a 6 mm straight circular combustion port. The primary focus of these tests was on the regression rates of ABS fuel grains. The results demonstrated that ABS exhibits relatively high regression rates compared to traditional hybrid fuels like hydroxyl-terminated polybutadiene, making it a promising candidate for hybrid rocket propulsion. The static fire tests confirmed the practical viability of using ABS as a 3D printable fuel material, highlighting its potential to streamline production and enhance the performance of hybrid rocket motors.
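        In the spirit of the Python model described above, a minimal sketch of a single-port thrust-curve calculation using the classical hybrid regression law r_dot = a * Gox^n follows. The regression-law coefficients, oxidizer flow rate, and specific impulse are illustrative assumptions, not the values measured in the static fire tests; only the grain dimensions mirror the abstract.

```python
# Minimal sketch: thrust history for a single circular-port ABS/N2O grain using
# the hybrid regression law r_dot = a * Gox**n. Coefficients, oxidizer flow, and
# Isp are assumptions for illustration.

import math

def thrust_curve(port_radius_m=0.003, grain_length_m=0.100, mdot_ox=0.03,
                 a=1.5e-5, n=0.5, rho_fuel=1040.0, isp_s=230.0,
                 outer_radius_m=0.010, dt=0.01, g0=9.81):
    """March the port radius in time; return a list of (time, thrust, radius)."""
    r, t, history = port_radius_m, 0.0, []
    while r < outer_radius_m:
        a_port = math.pi * r * r                      # port cross-section, m^2
        gox = mdot_ox / a_port                        # oxidizer mass flux, kg/m^2/s
        rdot = a * gox ** n                           # regression rate, m/s
        burn_area = 2.0 * math.pi * r * grain_length_m
        mdot_fuel = rho_fuel * rdot * burn_area       # fuel mass flow, kg/s
        thrust = (mdot_ox + mdot_fuel) * isp_s * g0   # simple Isp-based thrust, N
        history.append((t, thrust, r))
        r += rdot * dt
        t += dt
    return history

if __name__ == "__main__":
    t_end, f_end, r_end = thrust_curve()[-1]
    print(f"Burnout at t = {t_end:.1f} s, final thrust {f_end:.1f} N, port radius {r_end*1000:.1f} mm")
```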
      • 08.0303 Analyzing the Impact of the Free-rolling Tail on the Canard-controlled Missile Dynamic Performance
        Antoni Radzymiński (), Szymon Elert (Military Institute of Armament Technology), Dariusz Sokolowski (Military Institute of Armament Technology), Michal Pyza (Military Institute of Armament Technology), Maciej Cichocki (Military Institute of Armament Technology) Presentation: Antoni Radzymiński - Monday, March 3rd, 10:10 AM - Amphitheatre
        This paper examines the impact of a movable tail with fins on stabilizing a low-cost canard rocket model in the roll channel. The tested rocket is a 105 mm diameter model equipped with a hybrid propulsion system, designed with cost efficiency in mind. Under certain flight conditions, the small surface area of the fins and their 45-degree angle relative to the rocket's trajectory led to stabilization issues, particularly in the roll channel. The phenomenon of vortex shedding at the fin tips caused adverse aerodynamic effects, resulting in unintended roll oscillations. To improve stability, a modification was proposed involving the use of freely rotating wings. This change aimed to compensate for the roll moment and enhance control in the roll axis. Simulations were conducted in the Matlab & Simulink environment with various flight scenarios. The simulations utilized a mathematical model of the rocket based on six degrees of freedom (6-DOF) equations of motion, an aerodynamic model consisting of aerodynamic force and moment coefficients, and a complete GNC algorithm. GNC consists of an Inertial Navigation System (INS), Proportional Navigation, and Three-Loop Autopilot. The autopilot stabilizes the rocket in three channels (yaw, pitch, roll) and commands the fins to follow the desired flight path. It includes matrices with variable gains dependent on the current flight conditions the missile experiences. The results showed that using a movable tail significantly improves roll stability, reduces oscillations, and enhances overall flight performance. The study suggests that this modification can address the design imperfections of the rocket and provide a basis for further aerodynamic optimization and control improvements.
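        For readers unfamiliar with the guidance element named above, the sketch below shows a planar proportional-navigation command a_cmd = N' * Vc * lambda_dot. The navigation gain and the engagement state are illustrative, and this is not the authors' Simulink implementation.

```python
# Minimal sketch of planar proportional navigation (Zarchan form):
# lateral acceleration command = nav_gain * closing_velocity * LOS_rate.

import math

def pro_nav_command(r_rel, v_rel, nav_gain=3.0):
    """r_rel, v_rel: target position/velocity relative to the missile, (x, y) in m and m/s."""
    rx, ry = r_rel
    vx, vy = v_rel
    r2 = rx * rx + ry * ry
    los_rate = (rx * vy - ry * vx) / r2                  # lambda_dot, rad/s
    closing_vel = -(rx * vx + ry * vy) / math.sqrt(r2)   # Vc, m/s (positive when closing)
    return nav_gain * closing_vel * los_rate             # lateral accel command, m/s^2

if __name__ == "__main__":
    a_cmd = pro_nav_command(r_rel=(4000.0, 500.0), v_rel=(-300.0, -20.0))
    print(f"Commanded lateral acceleration: {a_cmd:.2f} m/s^2")
```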
      • 08.0305 Lessons Learned from Development of 610 mm Solid Rocket Motor
        Dariusz Sokolowski (Military Institute of Armament Technology), Michal Pyza (Military Institute of Armament Technology), Maciej Cichocki (Military Institute of Armament Technology), Szymon Elert (Military Institute of Armament Technology) Presentation: Dariusz Sokolowski - Monday, March 3rd, 10:35 AM - Amphitheatre
        The dynamic development of the space sector has led to the development of different types of propulsion systems for launching research payloads into space. The development of a solid rocket motor with a total impulse of 1 MN·s was one of the key points in the implementation of the project “Development of a Three-Stage Suborbital Rocket System to Lift Research Payloads”. The development of all production and technological processes enables us to obtain valuable experience in the production of rocket propulsion systems and will allow further development. This is a breakthrough technology on a national scale due to the parameters of the developed propulsion system. The potential associated with the development of a propulsion system of this class will allow development in the field of launching research payloads into low orbit - which is currently a rarely offered service on a European scale. An analysis of solutions of a similar class allows us to present the current technological level in the context of the discussed rocket propulsion. The paper describes the problems that were encountered in the design, manufacturing and testing process. The initial design concept and operating parameters will be presented. All components of the solid rocket motor will be described, such as the composite case, ablative coatings, metal structural components and the composite exhaust nozzle. The manufacturing phase of the structural components, especially the composite structures, highlighted many problems and allowed us to gain experience in manufacturing this type of system. The key issue was the production of the motor chamber based on carbon fiber and epoxy resin. The composite case was made using the fiber winding method on a numerically controlled machine. This provided the opportunity to define the required parameters of the composite structure. The innovation of the project defined the need to manufacture specialized tools for the production and assembly of the motor. Vacuum casting of the propellant allowed an optimized combustion surface to be defined. The final element will be the presentation of the results of the intermediate tests carried out and the final fire test.
    • Steve Matousek (Jet Propulsion Laboratory) & Paul Niles (NASA Johnson Space Center)
      • 08.0401 Blue Ring Spacecraft Adaptation for Large Payload Delivery, Hosting, and Relay Services at Mars
        Thomas Randolph (Blue Origin) Presentation: Thomas Randolph - Monday, March 3rd, 11:00 AM - Amphitheatre
        Blue Origin’s first Blue Ring spacecraft is currently being assembled in Huntsville, Alabama. This spacecraft is intended to be the first in a long commercial product line optimized for delivery and hosting of geostationary and cislunar transfer payloads, but it can be adapted for Mars missions with minimal modifications to the product line. The Blue Ring propulsion system can deliver more than one ton of payloads to low Mars orbit, areosynchronous orbit, or a variety of other Martian orbits in under 2.5 years. This payload capacity can be distributed between a mixture of up to 13 hosted or separable payloads with industry-standard interfaces. Blue Ring can provide significant power at Mars distributed amongst hosted payloads, along with data and telecommunication services. With flexible mission kits that can be added as needed, Blue Ring can provide additional storage, compute, and data relay capability for hosted payloads. This includes the ability to augment the Mars Relay Network by serving as, or distributing, relay orbiters. Modifications necessary to the telecom, thermal, and power systems to support the Mars environment (primarily driven by greater solar and Earth ranges) are more easily accommodated on Blue Ring than on other commercial spacecraft. Blue Ring’s large solar arrays, high radiation tolerance (required for low-thrust transfers through the Van Allen belts), and robust redundancy architecture (needed for demanding geostationary and cislunar customers) provide substantial capability aligned with the requirements of Mars missions. Accommodations for payload power, volume, mass, and data interfaces are designed into the product line. Blue Ring’s combination of flexibility, modularity, and substantial available resources enables exciting new, commercially enabled missions to Mars.
      • 08.0402 The Commercial Lunar Payload Services Initiative
        Paul Niles (NASA Johnson Space Center) Presentation: Paul Niles - Monday, March 3rd, 11:25 AM - Amphitheatre
        NASA’s Commercial Lunar Payload Services (CLPS) initiative is intended to enable rapid, frequent, and affordable access to the lunar surface by helping to establish a viable commercial lunar landing services sector. As a stated goal, the CLPS initiative seeks to maintain a cadence of two delivery awards per year through the first six years of the contract. Since money cannot be paid on the contract after the 10-year lifetime expires, awarded task orders must complete before the expiration of the master contract. A total of 12 task orders (TOs) have been awarded, with three more expected before the end of 2025. Each of the task orders is awarded using a competitive process amongst a pool of 14 eligible companies. The TOs will result in lunar landings at sites widely distributed across the surface of the Moon, including the south polar region and the farside. In conjunction with instrument development efforts within NASA, academia, international partners, and commercial industry, a considerable variety of science and technology payloads have been delivered to CLPS vendors or are in the process of development. The CLPS initiative involves over 51 different payloads across the world, stimulating space workforce development and investment in the space economy infrastructure.
    • Kevin Duda (The Charles Stark Draper Laboratory, Inc.) & Jessica Marquez (NASA Ames Research Center)
      • 08.0502 Enhancing Flight Deck Decision Support with Distributed GenAI: A Multi-Agent Approach
        Jose De Almeida Prado (University of Malta), Brian Zammit (University of Malta), Alan Muscat (QuAero Limited), Sandro Mizzi (), Jason Gauci (University of Malta) Presentation: Jose De Almeida Prado - Wednesday, March 5th, 04:30 PM - Cheyenne
        Generative AI (GenAI) has emerged as a transformative field within artificial intelligence, focusing on generating new content such as text, images, and sounds rather than solely analysing existing data. These models, trained on vast datasets, exhibit human-like capabilities, promising significant advancements across various applications. In commercial aviation, integrating GenAI into the cockpit represents a pivotal opportunity to enhance decision-making processes by providing real-time crew support for tasks such as aircraft flight-performance monitoring, flight-plan management, en-route weather assessments and ensuring overall flight safety. This paper explores the design and implementation of a GenAI architecture tailored for commercial aircraft cockpits. Unlike traditional applications of GenAI, the aviation domain poses unique challenges due to its highly specialized terminology, complex operational procedures, stringent real-time requirements and, last but not least, demanding certification requirements. Existing large language models (LLMs) like GPT-4 and others, while powerful, are impractical within cockpit environments due to their size, latency issues, and lack of domain-specific knowledge. To address these challenges, our proposed architecture adopts a distributed multi-agent approach leveraging Adaptive-Retrieval Augmented Generation (Adaptive-RAG). This approach integrates smaller, specialized LLMs ranging from 100 million to 3 billion parameters, each optimized for specific cockpit tasks. The architecture comprises several categories of agents: 1. Cognitive agents: Specialized in complex inferential tasks, leveraging contextual information from RAG Agents or Toolbox Agents. 2. RAG agents: Responsible for retrieving and synthesizing information from vectorized databases to provide comprehensive context for Cognitive Agents. 3. Toolbox agents: Facilitating access to real-time data such as weather conditions, aircraft parameters, and traffic information, integrating deterministic programming methods crucial for accurate decision-making. 4. AI toolbox agents: Bridging generative AI with rule-based systems or reinforcement learning, ensuring compatibility with non-deterministic tasks within the cockpit. 5. Human-in-the-loop agents: Activated when the system lacks sufficient knowledge to answer queries, prompting direct pilot interaction for critical decision-making. 6. Explanation agents: Providing explanations of complex AI-generated responses, which is crucial for maintaining transparency, trustworthiness and accountability. To validate the proposed architecture, we will compare the response time and relevance of the generated responses across three setups: two centralised approaches and the proposed, distributed multi-agent system. This comparison aims to demonstrate the efficiency, reliability, and suitability of the proposed system for cockpit use, providing robust autonomy and operational resilience, independent of external cloud infrastructure. Key considerations include the architecture's ability to operate autonomously, adhering to aviation communication standards, and its capability to deliver concise, timely responses, underscoring its suitability for cockpit integration. In conclusion, this paper proposes a novel GenAI architecture designed to meet the rigorous demands of commercial aircraft cockpits. By leveraging specialized agents and Adaptive-RAG methodologies, the architecture not only addresses domain-specific challenges but also sets a precedent for integrating advanced AI technologies into safety-critical aviation operations.
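        A minimal sketch of the routing idea behind such a distributed multi-agent assistant follows. The agent names and the keyword-based routing rule are stand-ins for illustration only; the actual system would dispatch to small domain-tuned LLMs with Adaptive-RAG retrieval rather than matching keywords.

```python
# Minimal sketch: a dispatcher routes each crew query to a small specialized
# handler (standing in for the Cognitive / RAG / Toolbox / Human-in-the-loop
# agents described in the abstract). Handlers and routing are illustrative.

from typing import Callable, Dict

def toolbox_weather_agent(query: str) -> str:
    return "Toolbox agent: fetching current weather data for the requested station."

def rag_procedures_agent(query: str) -> str:
    return "RAG agent: retrieving the relevant checklist section from the manuals database."

def human_in_the_loop_agent(query: str) -> str:
    return "Human-in-the-loop agent: insufficient knowledge, deferring to the crew."

ROUTES: Dict[str, Callable[[str], str]] = {
    "weather": toolbox_weather_agent,
    "checklist": rag_procedures_agent,
}

def dispatch(query: str) -> str:
    """Route a query to the first matching specialized agent, else escalate."""
    lowered = query.lower()
    for keyword, agent in ROUTES.items():
        if keyword in lowered:
            return agent(query)
    return human_in_the_loop_agent(query)

if __name__ == "__main__":
    print(dispatch("What is the weather at the alternate?"))
    print(dispatch("Show the engine fire checklist."))
    print(dispatch("Explain the unusual vibration in door 2L."))
```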
      • 08.0503 Unobtrusive Monitoring of Sensorimotor Performance in Ground-Based Functional Tasks
        Hannah Weiss (KBR Inc.) Presentation: Hannah Weiss - Wednesday, March 5th, 04:55 PM - Cheyenne
        Astronauts returning from long-duration exposure to microgravity frequently exhibit alterations in sensorimotor function leading to postural imbalance, impaired locomotion, and operational challenges to manual control. Mission duration and individual responses often influence both the severity of performance decrements and the variability in adaptation timelines. Postflight disruptions during functional tasks are often detected through body-worn inertial measurement unit (IMU) devices. While IMU sensors are relatively compact, the long-term wear may lead to discomfort, displacement of the sensors on the body, and restrictions in movement or crew behavior. Although IMU data offers valuable insights from a research standpoint, interpreting changes in pre- and post-flight measures can be difficult for crew support personnel beyond the research domain, which can hinder the application to medical assessments and rehabilitation. Finally, the availability of inertial sensors in-flight is limited. There is a need for unobtrusive monitoring tools to improve our ability to monitor adaptation following gravitational transitions in various postflight evaluations and rehabilitation settings. Markerless motion capture is an evolving unobtrusive technology that builds upon decades of research with marker-based motion capture systems to provide three-dimensional human pose estimation from multiple synchronized two-dimensional camera views using deep learning algorithms. Markerless technology can revolutionize how data is captured pre- and post-flight and potentially in-flight during intravehicular activity by enabling pose estimation of multiple crew members from onboard camera hardware. The following paper presents the initial evaluation of a state-of-the-art commercial-off-the-shelf markerless motion capture system, Theia Markerless, compared to traditional IMU devices during various ground-based functional tasks and environmental conditions. Synchronous functional data collected using both motion capture and IMUs are analyzed for six subjects, four male and two female subjects of varying anthropometry. The analysis includes limited assessments of clothing, capture volumes, and the tool's sensitivity to detecting performance changes after a spaceflight analog centrifuge exposure. Results demonstrate comparable root mean square error to existing literature comparing markerless and marker-based motion capture systems. The development of visualization tools to enhance the application of the pose estimation output is also presented. These tools offer effective methods for anonymizing sensitive crew data, facilitating numerous applications across research, medical, and rehabilitation groups. The following work lays the foundation for future implementation leveraging markerless motion capture to assess the time course of recovery to baseline performance and provide insight for rehabilitation protocols to enhance crew readiness for the resumption of daily activities.
      • 08.0504 Characterizing Spontaneous Self-Scheduling in NASA’s Human Exploration Research Analog Campaign 6
        Renee Abbott (Texas A&M University), John Karasinski (University of California, Davis), Jessica Marquez (NASA Ames Research Center) Presentation: Renee Abbott - Wednesday, March 5th, 05:20 PM - Cheyenne
        Future deep-space exploration missions will require astronaut crews to leave low-Earth orbit, introducing the new challenge of significant communication delays. Consequently, these crews must be able to act with greater autonomy than in previous and current missions. One approach to enabling greater crew autonomy is self-scheduling, wherein crew members may create and change their operational mission timelines with minimal supervision from the ground. However, scheduling timelines is complex and requires the planner to consider many constraints and requirements. To investigate future crew’s ability to self-schedule, we developed a web-based scheduling software tool called Playbook and deployed it in NASA’s Human Exploration Research Analog (HERA) Campaign 6. After Mission Control Center created an initial timeline, crew members used Playbook to self-schedule flexible activities as they desired (i.e., spontaneous self-scheduling). Information about each activity, such as the start time and the crew members assigned before and after any schedule changes, was recorded in Playbook. We analyzed spontaneous Playbook use in four 45-day HERA missions to determine if some activities were moved more often than others, who participated in self-scheduling, and when and why crew members self-schedule. In addition, we correlated Playbook use with autonomy preferences, technology self-sufficiency, and crew attitudes toward self-scheduling and plan execution with various behavioral health and performance measures such as affect and team cohesion. Analysis revealed that all crew utilized Playbook often to reschedule various activities, with 23.8% (1,617 out of 6,802) of all flexible activities rescheduled across missions. We observed three primary behaviors in spontaneous self-scheduling: activities planned for the future were rescheduled to start immediately (23.63%), activities planned for the current time were delayed to start later (18.71%), and activities had their start times slightly adjusted by ± 30 minutes (57.66%). Our results indicate that Playbook and self-scheduling were well accepted by crews and highlight the desire for crews to exercise self-scheduling to adapt to schedule changes as they arise and better reflect their personal scheduling preferences.
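        A minimal sketch of how a single reschedule event could be binned into the three behaviors reported above is shown below. The field names and thresholds are assumptions for illustration, not the actual Playbook data schema or analysis code.

```python
# Minimal sketch: classify one rescheduled activity as pulled forward, delayed,
# or a small +/- 30 minute adjustment, mirroring the categories in the abstract.

from datetime import datetime, timedelta

def classify_reschedule(old_start: datetime, new_start: datetime, now: datetime) -> str:
    shift = new_start - old_start
    if abs(shift) <= timedelta(minutes=30):
        return "small adjustment (within +/- 30 min)"
    if new_start <= now < old_start:
        return "pulled forward to start immediately"
    if old_start <= now < new_start:
        return "delayed to start later"
    return "other"

if __name__ == "__main__":
    now = datetime(2024, 1, 10, 9, 0)
    print(classify_reschedule(datetime(2024, 1, 10, 13, 0), now, now))
    print(classify_reschedule(datetime(2024, 1, 10, 9, 0), datetime(2024, 1, 10, 15, 0), now))
```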
    • Torin Clark (University of Colorado-Boulder) & Andrew Abercromby (NASA Johnson Space Center) & Ana Diaz Artiles (Texas A&M University)
      • 08.0601 Sensorimotor Impairment Related to Vestibular Adaptation to Altered Gravity
        Victoria Kravets (), Torin Clark (University of Colorado-Boulder) Presentation: Torin Clark - Wednesday, March 5th, 09:00 PM - Cheyenne
        Exposure to altered gravity results in acute sensorimotor impairment, including motion sickness, disorientation, and inhibited posture and locomotion, thought to be related to vestibular maladaptation. Symptoms of sensorimotor impairment have affected every interviewed crew member, resulting in severe discomfort at best and life-threatening risks at worst. Given the potential risks associated with exposure to altered gravity, and the multifactorial nature of vestibular adaptation, it is crucial to understand the interdependencies of sensorimotor impairment during adaptation to a new environment. To further this aim, we explored the effect of altered gravity on dynamic balance and motion sickness, as well as the relationship between these impairments and orientation perception. Thirteen subjects participated in two consecutive days of centrifugation-induced hyper-gravity exposure along their longitudinal body axis, with each “Hyper-gravity” segment lasting nominally one hour. During “Baseline” data collection segments before centrifugation and “1g Readaptation” segments immediately after each centrifugation session, subjects conducted eyes-closed tandem walk (TW) tasks to measure dynamic balance. Every 3 minutes during centrifugation, subjects verbally reported their motion sickness via the Misery Index Scale (MISC) and reported their perception of tilt during passive (i.e., operator-controlled) head-on-neck tilts via a Subjective Visual Vertical (SVV) task. MISC scores and SVV metrics were also collected during the “1g Readaptation” segment. On each day of testing, 5 (38%) of the subjects were unable to complete the entire hour of centrifugation due to motion sickness, though median MISC scores never exceeded 2 (on a scale of 0-10). Motion sickness results showed trends of a slower progression on day 2 relative to day 1. Subjects also had significant TW performance decrements following centrifugation on both day 1 (W=42, p=0.020) and on day 2 (W=28, p=0.016) via Wilcoxon Signed Rank tests, though there was no significant difference between days in the decrements (p=0.211, W=18). When examining the relationship between the sensorimotor metrics collected, two sets of metrics showed evidence of a significant, though moderate, positive correlation: overestimation during Hyper-gravity was significantly correlated with overestimation during the 1g Readaptation (r=0.59, p=0002), and final reported MISC scores were significantly correlated with delta TW scores (r=0.49, p=0.016). However, it is important to note that these results may be affected by the premature halting of the Hyper-gravity segment (i.e., subjects that felt most motion sickness also tended to experience the shortest duration of hyper-gravity exposure). Taken together, the motion sickness and dynamic balance metrics captured here indicate that adaptation to altered gravity results in severe sensorimotor impairments. As can already be seen in this data, these metrics are likely related, warranting further exploration in a longer duration of exposure to altered gravity. This work contributes to the body of data that quantifies sensorimotor impairments driven by adaptation to altered gravity by exploring motion sickness and dynamic balance in a controlled sensory environment, offering insight into sensorimotor impairment during the early hours of the adaptation period.
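        A minimal sketch of the paired non-parametric comparison and correlation analysis described above, run on synthetic numbers rather than the study data, might look like the following.

```python
# Minimal sketch: Wilcoxon signed-rank test on pre- vs post-centrifugation
# tandem walk scores, plus a Pearson correlation between two metrics.
# All numbers are made up for illustration; they are not the study data.

import numpy as np
from scipy.stats import wilcoxon, pearsonr

baseline_tw = np.array([0.92, 0.88, 0.95, 0.85, 0.90, 0.87, 0.93, 0.91])
readapt_tw  = np.array([0.80, 0.79, 0.90, 0.70, 0.82, 0.75, 0.88, 0.84])

stat, p = wilcoxon(baseline_tw, readapt_tw)
print(f"Tandem walk pre vs post: W = {stat:.1f}, p = {p:.3f}")

misc_final = np.array([2, 4, 1, 6, 3, 5, 1, 2])     # hypothetical final MISC scores
delta_tw = baseline_tw - readapt_tw                  # hypothetical TW decrements
r, p_r = pearsonr(misc_final, delta_tw)
print(f"MISC vs delta TW: r = {r:.2f}, p = {p_r:.3f}")
```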
      • 08.0604 A Soft-Tissue and Sensor Model of Exoskeletons for Amplifying Astronaut Strength
        Lewis Simms (Texas A&M University), Jadon Kaercher (Texas A&M University), Jake Cooper (), Javid Mustafa (), Gray Thomas (Texas A&M University) Presentation: Lewis Simms - Wednesday, March 5th, 09:25 PM - Cheyenne
        Long periods in microgravity reduce strength, and moving from the Moon to Mars will only exacerbate this challenge for human spaceflight. Countermeasures for muscle atrophy, already a key component of astronaut safety and health, do not fully prevent muscle atrophy and loss of strength (mean isokinetic strength still decreases 8-17% over 163±38 days of spaceflight). Assistive exoskeletons have a unique potential to offer a two-part countermeasure for atrophy: in space they could resist human motion as an exercise tool, and in planetary operations they could artificially increase human strength as an assistive device. Yet, while today's assistive exoskeletons are controlled to match the periodic motion of the lower-body in walking, which has been successful in reducing metabolic cost of transport (for example, by 17.4% with a hip exoskeleton), this pattern-based control is unlikely to be suitable for space flight and planetary operations where motion may become more irregular. Thus, we seek a task-invariant exoskeleton controller that makes no such assumptions with respect to human behavior. One such control strategy, which we term Force-Feedback Strength Amplification (FFSA), appears ideally suited for this type of unpredictable, non-periodic locomotion. By directly measuring the effort exerted by the human operator (the force-feedback), this strategy provides proportional assistance (amplifying strength) with no need for task knowledge. However, FFSA is sensitive to the dynamics of the human-machine interface, especially the compliant tissue dynamics and unconscious human impedance that could result in operator-induced instability if not properly modeled and compensated for in control design. In this paper, we propose a robotic testbed to test an alternative approach to measuring the elements necessary for feedback control. In our approach, only the Ground Reaction Force (GRF) vector and the relative pose (i.e., position and orientation) of an actuator and six-axis force sensor are used for feedback, which measures the human-environment interaction rather than the human-exoskeleton interaction. An active LED camera tracking system provides the relative pose measurement, while the six-axis force/torque plate provides the GRF vector. Additionally, a physical model of the human leg is used to mimic the compliance of human soft tissue using 3-D printed thermoplastic polyurethane. System identification experiments with variable compliance were conducted to validate the hardware and observe the open and closed-loop transfer functions of the system. The experiments showed strength amplification of the closed-loop controlled exoskeleton at low frequencies and an unexpected dynamic behavior at high frequencies. These data will help to design exoskeleton hardware capable of high bandwidth strength amplification in space applications.
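        A minimal sketch of the basic FFSA proportional-assistance relation, where the exoskeleton contributes (amplification - 1) times the measured human effort, is shown below. The gain and saturation limit are assumptions for illustration; this is the generic FFSA idea, not the GRF-based controller the testbed studies.

```python
# Minimal sketch: Force-Feedback Strength Amplification on one axis.
# Gains and the actuator saturation limit are illustrative assumptions.

def ffsa_assist_force(measured_human_force_n: float,
                      amplification: float = 2.0,
                      actuator_limit_n: float = 400.0) -> float:
    """Actuator force command so the load sees ~amplification x the human force."""
    assist = (amplification - 1.0) * measured_human_force_n
    return max(-actuator_limit_n, min(actuator_limit_n, assist))

if __name__ == "__main__":
    for f_h in (50.0, 150.0, 300.0):
        print(f"human {f_h:6.1f} N -> exo assist {ffsa_assist_force(f_h):6.1f} N")
```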
      • 08.0607 Utilizing Closed-Loop Physiological Feedback for Dynamic Compression in Soft Robotic Wearables
        Cort Reinarz (), Ana Diaz Artiles (Texas A&M University), Brad Holschuh (University of Minnesota), Darren Hartl (Texas A&M University), Manuel Carrera (Texas A&M University), Haaris Bham (Texas A&M University), Zachary Wideman (Texas A&M University) Presentation: Cort Reinarz - Wednesday, March 5th, 09:50 PM - Cheyenne
        Orthostatic intolerance (OI) is commonly experienced by pilots during high-G maneuvers, astronauts returning to Earth, and individuals with autonomic-related diseases. OI occurs due to blood pooling in the lower body and associated decrease in cerebral blood flow, and could result in hypotension, dizziness, changes in vision, and potentially syncope. Compression garments are currently used for health benefits involving blood flow, especially in sports, military applications, and older populations. However, current garments, such as passive elastic and inflatable garments, are bulky, they need to be activated by a user, and they typically offer only a single level of compression. This paper outlines the development of a first prototype of a compact, fully automated compression garment system utilizing a closed-loop physiological feedback system to mitigate the severity of an OI event. The closed-loop system involves the use of continuous physiological monitoring, supporting the recognition of OI events. The compression garment also utilizes Shape Memory Alloys (SMAs) as actuators for compression, allowing for the ability to dynamically compress the garment as needed, based on OI symptoms. Our first compression garment prototype successfully gathered and utilized real-time physiological data of the user to drive compression when pre-syncope symptoms were present. Compared to legacy compression devices, our design shows that SMAs are an effective replacement to generate adequate compression, allowing for both sustained and dynamic actuation within the garment. The preliminary findings of the compression garment are indicative of an effective countermeasure for OI events; however, further research is needed to validate the effectiveness of the findings. The utilization of a user’s biometrics within a closed-loop system can be applied in many other applications where environmental factors are unknown or cannot be measured. Additionally, the ability to dynamically control the compression of the garment means that, under nominal conditions, the garment can be deactivated, improving the user’s experience and comfort.
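        A minimal sketch of a threshold-based closed loop of the kind described, mapping a falling physiological signal to SMA actuation, follows. The choice of signal (systolic blood pressure), the thresholds, and the duty-cycle mapping are illustrative assumptions, not the prototype's actual control law.

```python
# Minimal sketch: map a falling systolic blood pressure to an SMA heating duty
# cycle in [0, 1]. Thresholds and the linear ramp are illustrative assumptions.

def sma_duty_cycle(systolic_mmhg: float,
                   activate_below: float = 100.0,
                   full_on_below: float = 80.0) -> float:
    """Return the commanded SMA duty cycle for one control step."""
    if systolic_mmhg >= activate_below:
        return 0.0            # nominal: garment relaxed
    if systolic_mmhg <= full_on_below:
        return 1.0            # strong pre-syncope indication: full compression
    span = activate_below - full_on_below
    return (activate_below - systolic_mmhg) / span

if __name__ == "__main__":
    for bp in (115, 98, 90, 78):
        print(f"SBP {bp} mmHg -> SMA duty {sma_duty_cycle(bp):.2f}")
```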
    • Brian McCarthy (The Aerospace Corporation) & Alexander Eremenko (Jet Propulsion Laboratory) & Peter Rossoni (NASA/GSFC)
      • 08.0701 Measuring and Mitigating Roman Space Telescope Reaction Wheel Imbalance Forces and Torques
        Parker Lin (NASA / KBR), Lawrence Sokolsky (NASA - Goddard Space Flight Center), Robert Campion (NASA - Goddard Space Flight Center), Kuo-Chia Liu (NASA - Goddard Space Flight Center) Presentation: Parker Lin - Thursday, March 6th, 08:30 AM - Amphitheatre
        The Roman Space Telescope (RST) utilizes six Reaction Wheels (RWs) to provide precision pointing and attitude control during on-orbit science observation. Observatory jitter requirements would be exceeded without wheel isolation, hence a Reaction Wheel Isolator System (RWIS) was developed. Disturbances arising from the RWs’ static and dynamic imbalances are attenuated through RWIS to meet the observatory’s nanometer pointing stability and jitter requirements. While RWIS reduces wheel vibration transmitted to the spacecraft bus and into the payload, mechanical shorts such as thermal straps and electrical harnesses that induce forces across the isolator can degrade the isolator’s performance and adversely impact observatory jitter. The RW disturbance signature and RWIS isolation performance are characterized by their respective vendors through component-level testing. However, understanding the behavior and influence of mechanical shorts necessitates a test to be performed at the RW assembly level including RWIS, thermal straps, and RW harnesses. A novel gravity offload system was designed and instrumented in conjunction with a dynamometer to support the test article while measuring accelerometer responses and RW-induced forces and moments at the base of RWIS where it interfaces to the spacecraft bus. Wheel transmissibility testing was conducted with and without the parasitic harness and thermal strap loads to (a) characterize their effects on RWIS’s isolation performance, (b) facilitate thermal strap design modifications to minimize impact to system, (c) evaluate overall system performance, and (d) generate correlated models of thermal straps and RW harnesses for prediction of on-orbit jitter performance. A finite element model of the test configuration comprising the gravity offload system, RW, RWIS, harness, and thermal straps is anchored by tap and dynamometer data through tuning of key parameters in the model. Although correlation of the test configuration model proved to be challenging due to the complexity of the gravity offload system and its dynamic coupling to modes of test article at higher frequencies, thermal strap and RW harness model tuning has benefitted from data within the RWs’ operating speed range where less coupling between the offload system and test article is present. Correlated models are subsequently incorporated into the observatory model to predict on-orbit jitter for greater accuracy.
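        For context, the wheel-speed-dependent disturbances an isolator must attenuate follow the familiar relations F = Us * w^2 for static imbalance and T = Ud * w^2 for dynamic imbalance at the wheel spin frequency. A small sketch with illustrative imbalance values (not Roman reaction wheel specifications) is shown below.

```python
# Minimal sketch: radial force and torque at the wheel rotation frequency from
# static and dynamic imbalance. Imbalance magnitudes are illustrative only.

import math

def imbalance_disturbance(speed_rpm: float,
                          static_gcm: float = 1.0,
                          dynamic_gcm2: float = 20.0):
    """Return (force in N, torque in N*m) at the spin frequency."""
    w = speed_rpm * 2.0 * math.pi / 60.0     # rad/s
    us = static_gcm * 1e-5                   # g*cm   -> kg*m
    ud = dynamic_gcm2 * 1e-7                 # g*cm^2 -> kg*m^2
    return us * w * w, ud * w * w

if __name__ == "__main__":
    for rpm in (500, 1000, 2000):
        f, t = imbalance_disturbance(rpm)
        print(f"{rpm:5d} rpm: force {f:6.3f} N, torque {t:8.5f} N*m")
```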
      • 08.0702 Friction in Space – Analysis of Robotic Joint Friction in Space Conditions
        Anton Shu (German Aerospace Center - DLR), Ferdinand Elhardt (German Aerospace Center - DLR), Fabian Beck (German Aerospace Center - DLR), Andreas Stemmer (German Aerospace Center - DLR), Alexander Beyer (DLR), Manfred Schedl (DLR), Alin Albu-Schaeffer (DLR (German Aerospace Center) e.V.), Maximo A. Roa (German Aerospace Center - DLR) Presentation: Anton Shu - Thursday, March 6th, 08:55 AM - Amphitheatre
        Robotic joints behave differently in space than they do on the ground. Although ground testing in thermal and vacuum chambers offers insight into the behavior of the joints under simulated space conditions, long-term experiments under real conditions provide a more valuable insight into the expected behavior. The German Aerospace Center (DLR) carried out the Robotik-Komponenten-Verifikation auf der ISS (ROKVISS) experiment from 2005 until 2011, where a two-joint robotic arm was mounted outside the International Space Station (ISS) exposed to temperatures between -20°C and +40°C. In this work, we retrieve the original experimental data and use viscous-Coulomb friction models for both joints in order to quantify the temperature dependence of each parameter. Our new analysis shows that for the evaluated data most of the friction can be modeled with static and viscous friction, which are both proportional to temperature. For load-dependent Coulomb friction both the temperature influence and the contribution to the total friction are small. We further present estimated friction model parameters over temperature with confidence bounds for both joints. Since both joints operated at the same temperatures, this analysis also quantifies the differences in joint friction.
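        A minimal sketch of fitting a viscous-Coulomb friction model tau = tau_c * sign(w) + b * w by linear least squares, on synthetic rather than ROKVISS data, might look like this; in practice the fit would be repeated per temperature bin to expose the temperature dependence the paper quantifies.

```python
# Minimal sketch: estimate Coulomb and viscous friction coefficients from
# (velocity, friction torque) samples by linear least squares. Data is synthetic.

import numpy as np

def fit_coulomb_viscous(omega: np.ndarray, tau_friction: np.ndarray):
    """Return (tau_c, b) minimizing || tau - [sign(w), w] @ [tau_c, b] ||."""
    design = np.column_stack([np.sign(omega), omega])
    params, *_ = np.linalg.lstsq(design, tau_friction, rcond=None)
    return params[0], params[1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    omega = np.linspace(-1.0, 1.0, 200)                    # joint velocity, rad/s
    tau_true = 0.8 * np.sign(omega) + 0.5 * omega          # synthetic friction torque
    tau_meas = tau_true + 0.02 * rng.standard_normal(omega.shape)
    tau_c, b = fit_coulomb_viscous(omega, tau_meas)
    print(f"Coulomb term: {tau_c:.3f} N*m, viscous term: {b:.3f} N*m*s/rad")
```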
      • 08.0703 Combating Amine Blush: Root Cause and Corrective Action of a Compromised Bond on Europa Clipper
        Jon Hamel (Jet Propulsion Laboratory) Presentation: Jon Hamel - Thursday, March 6th, 09:20 AM - Amphitheatre
        A structural debond of a mechanical fitting occurred during vehicle-level static test of the Europa Clipper spacecraft. This event was identified by both an anomalous reading in a test strain gage and an audible noise. Testing was immediately halted and a thorough investigation commenced. Finite element analysis provides good evidence that a debond of the fitting in question would produce the strain signature observed in test. Structural analysis also indicated that a properly-bonded fitting should have substantial capability beyond the loads applied in test. Furthermore, a proof test of the bonded fitting immediately following completion of the bond curing cycle was successfully performed with no detrimental observation. Root cause analysis and investigation ultimately uncovered two causes behind the debond observed in spacecraft test. The primary cause was identified as a phenomenon known as amine blush, compromising the integrity of the adhesive. Amine blush is the formation of carbamates on the exposed surface of an epoxy due to unfavorable conditions during the bonding operation. Surface carbamates present a waxy quality which substantially degrades the adhesive capability of the epoxy. Thus, the epoxy itself was predisposed to not adhere properly to the spacecraft. The secondary cause identified was the nature of the local proof testing of the bonded fittings immediately following the bond cure. The proof test was designed to be quick and simple, using a vertical pull on the fitting to a load level which reproduced flight-like stresses in the adhesive. Although an appropriate overall stress magnitude was achieved in the adhesive, the stress peaked on the opposite side of the adhesive bondline in the proof test from the side where it peaks during the spacecraft test and in flight. These differences were known and accepted at the time. However, the formation of amine blush is not symmetric on either side of a bondline. Therefore, by exercising the incorrect side of the bondline during the local proof test, the degraded epoxy was not identified in the local proof test. The local proof test resulted in a false-positive indication of the bonded joint’s capability. Corrective action for the primary root cause, amine blush, involved enhancements to the bonding procedure to mitigate the formation of carbamates. Corrective action for the secondary cause was the design of a more complex, but flight-like, local proof test for the bonded fitting. Implementation of the two corrective actions was successful and the bonded fitting has been demonstrated to be soundly installed on the Europa Clipper spacecraft.
      • 08.0707 Data-Driven Physics-Based Digital Twin for Linkage Analysis
        Mitchell Fogelson (Carnegie Mellon University), Zachary Manchester (Carnegie Mellon University) Presentation: Mitchell Fogelson - Thursday, March 6th, 09:45 AM - Amphitheatre
        Linkage mechanisms are fundamental in the design of deployable space structures, enabling efficient packing and significant expansion. However, they have a high risk of jamming and are a source of single-point failures. Simulation tools provide a viable method to increase confidence of mission success, offering insights into system behavior without relying on physical prototypes. However, rigid body dynamic simulations struggle to capture the effects of joint clearance and friction. This paper implements a data-driven, physics-based digital-twin framework for linkage analysis. Our approach uses a differentiable physics engine and real data from hardware experiments to estimate joint clearance and friction for a target mechanism. The differentiable physics engine uses a “maximal coordinates” representation where each link’s full pose in SE(3) is explicitly defined, and joint constraints are applied directly. This approach enables accurate simulation of complex linkages with joint clearances and friction. We validate using both synthetic data and a hardware experiment on a multi-cell scissor mechanism. Our model correctly captures linkage jamming due to joint clearance and friction and closely matches trajectories captured during hardware experiments.
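        A minimal sketch of the estimate-simulate-compare loop, reduced to a single damped joint and an off-the-shelf optimizer rather than a differentiable maximal-coordinates engine, is shown below. All quantities are illustrative; the point is only that a hidden joint parameter can be recovered by matching simulated and measured trajectories.

```python
# Minimal sketch: recover a joint friction (damping) parameter by minimizing the
# error between a candidate simulation and a "measured" trajectory. A real
# digital twin would use a differentiable multi-body engine; this toy uses a
# damped pendulum and scipy's scalar optimizer.

import numpy as np
from scipy.optimize import minimize_scalar

def simulate(damping: float, theta0: float = 0.5, dt: float = 0.01, n: int = 400):
    """Semi-implicit Euler for theta'' = -(g/L) sin(theta) - damping * theta'."""
    g_over_l = 9.81 / 0.5
    theta, omega = theta0, 0.0
    traj = np.empty(n)
    for i in range(n):
        omega += (-g_over_l * np.sin(theta) - damping * omega) * dt
        theta += omega * dt
        traj[i] = theta
    return traj

# "Measured" trajectory generated with a hidden damping value plus sensor noise.
rng = np.random.default_rng(1)
measured = simulate(damping=0.35) + 0.002 * rng.standard_normal(400)

def trajectory_error(damping: float) -> float:
    return float(np.sum((simulate(damping) - measured) ** 2))

result = minimize_scalar(trajectory_error, bounds=(0.0, 2.0), method="bounded")
print(f"Estimated joint damping: {result.x:.3f} (true value 0.35)")
```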
      • 08.0714 Development of a Self-Resettable, Low-Shock Hold-Down and Release Mechanism
        Tom Sproewitz (German Aerospace Center) Presentation: Tom Sproewitz - Thursday, March 6th, 10:10 AM - Amphitheatre
        Satellite systems are often equipped with deployable structures. Their preloading and release are handled by hold-down and release mechanisms (HDRM). Many hold-down and release mechanisms are commercially available. Usually, the release bolt of the mechanism, located in the interface between the deployable structure and the satellite's body, is released to start the deployment. This bolt can be fractured by a shape-memory alloy (SMA) cylinder or by explosives, or it can be freed by removing blocking elements holding the bolt in place to allow separation of the interface. Those release methods are widely implemented in commercial products. The new HDRM technology developed at DLR omits those methods and instead fixes the bolt through a frictional mechanism. Several major improvements can be achieved by this strategy: simplicity of design, low-shock characteristics, and self-resettability. The frictional locking mechanism is realized based on a self-locking clamping device used widely in industry, but tailored to the needs of space applications. This mechanism allows unique simplicity in the handling of the device: the release bolt is pushed into the HDRM by the user, and the self-locking effect immediately prevents extraction of the bolt. A zero-force installation of the bolt can be achieved by short powering of the mechanism through the electrical interface. Preloading of the HDRM can be done directly afterwards. This way of resetting allows a permanent installation of the device on the spacecraft. The clamping device is released by a shape memory alloy wire, which is activated through Joule heating. The overall configuration is based on few, simple parts to allow cost-effective realization. A circular-shaped 1.5 kN HDRM (diam. 27 mm, height 21 mm, mass 20 g, voltage 9 V/16 V) has already been tested in a relevant environment. Further, circular-shaped 4.5 kN HDRM models (diam. 36 mm, height 28 mm, mass 75 g, voltage 26 V...32 V) are currently in qualification until the end of September to show feasibility for a 4.5 kN application on space missions. The paper will start with a simplified description of the design principle and will focus on the discussion of the qualification tests, which will be concluded by the end of September 2024. These tests are the well-known mechanical and thermal-vacuum tests, but also further tests such as long-term preload measurements and the determination of the generated shock environment. The paper will close with an overview of the commercialization process into the space market through an industrial partner.
    • Erica Deionno (The Aerospace Corporation) & Richard Hofer (Jet Propulsion Laboratory)
      • 08.0801 First Year of Psyche Electric Propulsion Cruise Operations
        Charles Kelly (Jet Propulsion Laboratory), Steve Snyder (Jet Propulsion Laboratory), Charles Garner (JPL), Sarah Bairstow (Jet Propulsion Laboratory), Austin Nicholas (NASA Jet Propulsion Lab), Nicholas Bradley () Presentation: Steve Snyder - Thursday, March 6th, 10:35 AM - Amphitheatre
        The Psyche spacecraft launched in 2023 as part of NASA’s Discovery program and is currently en route to the metal-rich asteroid (16) Psyche. Its cruise to the asteroid consists of months-long portions of nearly continuous thrusting using the onboard SPT-140 Hall thrusters. This mission is the first time Hall thrusters have ever been operated beyond cis-lunar space. As such, the evolution of propulsion system performance throughout cruise is of great interest to the planetary science and electric propulsion communities. In this paper we will present the latest flight data from the Psyche spacecraft regarding performance of the Hall thrusters, power processing units, and propellant management system. In particular, we will share the trending of parameters such as anode current, floating voltage, component temperatures, and more as a function of accumulated system operating time and solar distance since the beginning of cruise thrusting in April 2024. Updated lifetime usage statistics will be published as well as a discussion of anomalies, idiosyncrasies, and lessons learned in cruise so far. We will also share how long-term cruise operations, such as the thruster switching strategy, are being shaped by in-flight experience. One major activity throughout cruise is the monitoring and management of propellant consumption. Prior to launch, a strategy was developed for propellant gauging using a combination of the pressure-volume-temperature and flow rate bookkeeping methods. The projected uncertainty was determined to be within mission requirements. After one year of propulsion system operations (and six months of near-continuous thrusting in cruise phase which dominates propellant consumption), this paper will present the first assessment of propellant usage and gauging uncertainty using flight data. Additionally, a revised comprehensive propellant gauging method will be presented that details the onboard measurement systems and quantifies all system uncertainties. A post-launch propellant budgeting strategy will be presented and compared to actual consumption in flight. The electric propulsion system has many impacts across the other subsystems as well. The power system has to accommodate the thruster load to ensure the spacecraft can reach Psyche following trajectory-optimized throttle levels. Hall thrusters are also a unique challenge to the thermal system, which has to manage propulsion system components over temperature ranges of several hundred degrees Celsius. This paper will present the observed interactions between the power and thermal subsystems across thrusting boundaries as well as during steady state thruster operation. We will discuss the implications of particular interactions as a means to characterize each subsystem. An additional challenge of operating Hall thrusters in deep space is the high swirl torque imposed on the spacecraft when exhausted ions interact with the thruster applied magnetic field. This is managed in flight operations by regularly stopping thrust, changing attitude, and firing the SPT-140 to create a torque that dumps the momentum accumulated by the reaction wheels. We will discuss EP-based momentum management for the first year of the mission, reflect on lessons learned, and present plans for the remainder of cruise.
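        As a simple illustration of the pressure-volume-temperature gauging concept mentioned above, the sketch below estimates remaining xenon mass from tank telemetry using the ideal-gas law; flight implementations use a real-gas equation of state for xenon, and the tank volume, pressures, and temperatures here are made-up values.
```python
R_XE = 8.314 / 0.13129    # specific gas constant of xenon, J/(kg*K)

def xenon_mass_pvt(pressure_pa: float, volume_m3: float, temp_k: float) -> float:
    """Remaining propellant mass from tank telemetry (ideal-gas approximation)."""
    return pressure_pa * volume_m3 / (R_XE * temp_k)

# Hypothetical tank telemetry before and after a thrust arc.
m_start = xenon_mass_pvt(pressure_pa=10.0e6, volume_m3=0.8, temp_k=293.0)
m_now = xenon_mass_pvt(pressure_pa=8.6e6, volume_m3=0.8, temp_k=290.0)
print(f"estimated mass: {m_start:.1f} kg -> {m_now:.1f} kg, consumed ~ {m_start - m_now:.1f} kg")
# In practice a PVT estimate like this is cross-checked against flow-rate bookkeeping,
# which integrates commanded mass flow over accumulated thruster operating time.
```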
      • 08.0803 Assessment of Propellant Droplet Contamination Effects on the Europa Clipper Spacecraft
        John Anderson (Jet Propulsion Laboratory), William Hoey (NASA Jet Propulsion Laboratory), Daniel Fugett (Jet Propulsion Laboratory), Nora Low (Nasa Jet Propulsion Laboratory), Carlos Soares (NASA Jet Propulsion Laboratory) Presentation: John Anderson - Thursday, March 6th, 11:00 AM - Amphitheatre
        NASA's Europa Clipper spacecraft will carry nine science instruments to investigate whether Jupiter’s moon Europa has an environment that could support life. Europa Clipper is the largest spacecraft NASA has flown for a planetary mission. With solar arrays fully extended, the spacecraft spans over 30 meters. At launch, nearly half of Europa Clipper’s mass is propellant, which will be consumed during the journey to Jupiter, the insertion of the spacecraft into orbit around Jupiter, and the nearly 50 flybys of Europa. Europa Clipper’s propulsion system has 24 bipropellant engines which will process nearly 2750 kg of fuel and oxidizer during the course of the mission. During propulsion system operation, liquid droplets of unburned fuel and oxidizer are expelled from the engines. An empirical bipropellant plume contamination model that was originally developed for International Space Station thrusters was applied to investigate droplet effects on the Europa Clipper spacecraft. Unburned propellant droplets can damage spacecraft materials through chemical reactions and through mechanical damage such as surface pitting. Bounding estimates for propellant droplet fluxes to Europa Clipper surfaces were obtained using the empirical model. Based on material testing results, estimates of damage to materials used for thermal blankets, solar arrays, and instruments were assessed to determine the impact on spacecraft thermal, power, and science systems. Where needed, design changes were made to mitigate deleterious effects caused by propulsion system operation.
      • 08.0806 Power System for a Venus Aerobot
        Joel Schwartz (Jet Propulsion Laboratory), Zachary Bittner (SolAero by Rocket Lab), Tobias Burger (Rocket Lab), James Cutts (NASA Jet Propulsion Lab), Stephen Dawson (JPL CALTECH), Patrick Irwin (University of Oxford), Kazi Islam (Jet Propulsion Laboratory), John-Paul Jones (), Shubham Vilas Kulkarni (University of Oxford), Clara MacFarland (Jet Propulsion Laboratory), Nate Miller (Rocket Lab), Hui Li Seong (), James Sinclair (), Christopher Stell (NASA Jet Propulsion Laboratory), Will West (California Institute of Technology) Presentation: Joel Schwartz - Thursday, March 6th, 11:25 AM - Amphitheatre
        A range of concepts for long duration aerial missions, using high altitude balloons operating in the clouds of Venus, have been studied by NASA and JPL for the Planetary Science and Astrobiology Decadal Survey and for NASA’s competitive New Frontiers and Discovery programs. These concepts offer a rich set of scientific opportunities in atmospheric chemistry, astrobiology, atmospheric dynamics, seismology and sub-cloud surface imaging. The Venus aerobot would be sustained in flight by a variable-altitude balloon and carry a payload of instruments at altitudes between 52 and 62 km. The aerobot would fly in the cloud layer containing sulfuric acid aerosols and be subject to large temperature extremes as it traverses a range of altitudes and latitudes at different times of day. To achieve the desired lifetime on the order of one Venus day we have defined a solar power system that would supply power over the full altitude range while the aerobot is circumnavigating the planet. We have initiated development of the requisite technology, including rechargeable batteries, solar arrays, and a peak power tracker for this challenging mission. Specifically, we have fabricated triple-junction inverted metamorphic (IMM) solar cells optimized for power generation in the unique spectrum of light expected at 51.5 km altitude and measured 34.0 mW/cm² power output at room temperature in initial testing. We developed a coating to protect aerobot solar panels from corrosion in sulfuric acid and demonstrated survival without performance degradation after 96 hours in 96% aqueous sulfuric acid at room temperature. Initial performance data were obtained on a peak power tracker showing 96% power conversion efficiency. In addition, we have developed specialized lithium-ion cells intended to operate between -30 and 100 °C and demonstrated 80% capacity retention after 90 cycles at 100% depth of discharge at 100 °C. These cells were incorporated into a 4s1p battery module and successfully tested under expected flight-like random vibration and thermal vacuum conditions. These results represent key steps in the process of developing the power system technology needed to bring the Venus aerobot mission to fruition.
    • Christofer Whiting (NASA - Glenn Research Center) & Concha Reid (National Aeronautics and Space Administration)
      • 08.0902 Sustained Lunar Presence Advances in Radioisotope Power Systems for Surviving the Lunar Night
        Jacob Matthews (Zeno Power Systems) Presentation: Jacob Matthews - Wednesday, March 5th, 08:30 AM - Lamar/Gibbon
        The new race to the Moon has begun with recent landings by Japan, China, and the U.S. The new finish line is a sustained presence on the Moon, enabled by technologies that allow infrastructure to survive the lunar night. Radioisotope power systems (RPS) provide reliable thermal energy in extreme environments, ensuring critical systems remain operational during the prolonged lunar nights. Some estimates of the thermal power required for a sustained lunar presence are as high as several thousand watts. To meet this demand, abundant and readily available radioisotopes are necessary to create robust, scalable solutions that align with both NASA and commercial mission timelines and operational requirements. This challenge must be balanced with the desire for high volumes of lightweight, cost-effective RPS. Three trends will be influential in considering which radioisotopes can support this capability over the next decade. First, private companies and international space agencies are developing novel radioisotope power systems, providing additional options and freeing up the U.S. Department of Energy's plutonium-238 systems to support more NASA marquee missions. Second, the development of heavy-lift launch vehicles like SpaceX's Starship and Blue Origin's New Glenn over the next several years will drastically increase the mass capacity for space systems. This shift implies that power, rather than mass, will become the primary constraint for future space systems, necessitating a pivot away from traditional mass-constrained architectures. Third, NASA is developing new radiation-hardened electronics technology that greatly increases the processing speed and radiation tolerance of electronic components, enabling electronics to endure much longer in the high-radiation environment on the lunar surface. This paper will explore these design trades in more detail and discuss the portfolio of radioisotope options that can support sustained lunar night survivability. It will discuss how emerging technologies, increased launch capacities, and advanced electronics standards collectively contribute to overcoming the challenges of maintaining a long-term human presence on the Moon.
      • 08.0903 NASA’s Radioisotope Power Systems Program Status Update and Focus on Commercialization
        Carl Sandifer (NASA Glenn Research Center), Lauren Clayman (NASA - Glenn Research Center), Ryan Edwards (), David Frate (NASA - Glenn Research Center), Allen Guzik (NASA - Glenn Research Center), Kristin Jansen (NASA - Glenn Research Center), Leah Sopko (NASA - Glenn Research Center), Colleen Van Lear (NASA) Presentation: Carl Sandifer - Wednesday, March 5th, 08:55 AM - Lamar/Gibbon
        Radioisotope power systems (RPS) have been safely providing the power to explore space in the United States for over 60 years. NASA established the RPS Program in 2010 to support these specialized science missions and potential future opportunities more effectively and efficiently. The RPS Program ensures the availability of RPS for the exploration of the solar system in environments where conventional solar or chemical power generation is impractical or impossible. RPS-enabled NASA missions have utilized this space nuclear power to explore planets, moons, and interstellar space, enabling missions to some of the deepest, darkest, and dustiest regions in the solar system and beyond. This extreme exploration has deepened our understanding of the solar system and our role within it. The RPS Program, in partnership with the Department of Energy (DOE) Office of Nuclear Energy, continues to operate as an interagency partnership to provide robust radioisotope power system solutions to spacecraft that conduct extreme missions for science and exploration. The Program invests in systems and technologies to ensure that NASA maintains this capability well into the future, and it manages processes that ensure the safe use and launch of these systems. This paper provides a synopsis of current activities of the RPS Program, which provides the power to explore and demonstrates a record of successful formal interagency partnering with the DOE. Additionally, it introduces a focus on the commercialization of RPS to enable a variety of future mission applications.
      • 08.0904 Sensitivity Analysis of CFM Technologies for Combined NEP-Chemical Mars Missions
        Elizabeth Turnbull (NASA GRC), Steven Oleson (NASA Glenn Research Center) Presentation: Elizabeth Turnbull - Wednesday, March 5th, 09:20 AM - Lamar/Gibbon
        Cryogenic fluid management (CFM) technologies will be crucial in enabling crewed missions to Mars with advanced propulsion systems. The Compass Team, a concurrent engineering team at NASA Glenn Research Center, examined the sensitivity of a crewed Mars mission using nuclear electric propulsion combined with chemical propulsion (NEP-Chem) to the performance of various CFM technologies. Specifically, the team examined the robustness of the mission to valve leak rates, residuals, and the specific mass and power of the cryocoolers. A similar analysis was performed by the Advanced Concepts Office (ACO) at NASA Marshall Space Flight Center looking at the effects of the same technologies’ performance on a nuclear thermal propulsion (NTP) crewed Mars mission. This paper will include a discussion of the results of the NEP-Chem analysis, as well as a comparison of the results between the two different nuclear systems. Discussion of the NEP-Chem analysis will include the impacts of varying each CFM technology’s performance on both the vehicle design and the trajectory, noting where breakpoints occurred. The results of both the Compass and ACO analyses provide information on which CFM technologies must meet their performance goals to enable Mars missions. Additionally, the NEP-Chem results speak to the mission robustness afforded by having two separate propulsion systems.
      • 08.0905 Monte Carlo Methods: Modeling TEG Material Property Uncertainty Propagation and Sensitivity
        Carter Gassler (University of Pittsburgh), Matthew Barry (University of Pittsburgh) Presentation: Carter Gassler - Wednesday, March 5th, 09:45 AM - Lamar/Gibbon
        This study addresses the critical need to quantify the impact of uncertainty in thermoelectric material properties on thermoelectric device (TED) performance. In particular, we focus on Silicon Germanium (SiGe) thermoelectric generator (TEG) unicouples, TEDs that convert heat to electrical power, used in the General Purpose Heat Source radioisotope thermoelectric generator (GPHS-RTG) for space power generation applications. Toward this aim, we developed a tool that produces a family of curves that can be used to determine the uncertainty in output quantities like power, voltage, current, temperatures, and other performance metrics based on uncertainties in thermoelectric material properties. Monte Carlo methods estimate output quantity uncertainties as a function of input material property uncertainties and develop robust statistical correlations between input uncertainty and expected performance bounds. A fast unicouple-level mathematical model capable of handling various thermal and electrical phenomena is utilized. Model considerations include varying thermal and electrical boundary conditions, the influence of interconnectors, electrical and thermal contact resistances between constitutive unicouple materials, and temperature-dependent material properties. This mathematical model was validated against a fully-coupled thermal-electric finite-volume model implemented in ANSYS CFX for an exhaustive range of operating parameters. This work can potentially guide manufacturing tolerances and flight-hardware qualification requirements at institutions like the NASA Jet Propulsion Laboratory (JPL). Additionally, the tools and framework developed herein can be applied to any arbitrary thermoelectric material set with any arbitrarily varying material properties, assisting with developing new thermoelectric materials and unicouples for future-generation RTGs.
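        A minimal sketch of the Monte Carlo propagation idea follows, assuming a simple matched-load unicouple power formula and Gaussian, independent property uncertainties; the property values and standard deviations are placeholders, and the paper's model additionally includes temperature-dependent properties, interconnects, and contact resistances.
```python
import numpy as np

rng = np.random.default_rng(42)
N_SAMPLES = 100_000
T_HOT, T_COLD = 1273.0, 573.0                      # K, representative hot/cold junction temperatures
DT_JUNCTION = T_HOT - T_COLD

# Sample the uncertain inputs (mean, 1-sigma), assumed Gaussian and independent.
seebeck = rng.normal(260e-6, 10e-6, N_SAMPLES)     # V/K, combined p-n Seebeck coefficient
resistance = rng.normal(12e-3, 0.6e-3, N_SAMPLES)  # ohm, internal electrical resistance

# Matched-load electrical power of one unicouple: P = (S * dT)^2 / (4 * R)
power = (seebeck * DT_JUNCTION) ** 2 / (4.0 * resistance)

lo, hi = np.percentile(power, [2.5, 97.5])
print(f"unicouple power: mean {power.mean():.3f} W, 95% interval [{lo:.3f}, {hi:.3f}] W")
```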
      • 08.0907 Mixed Quasi-Steady and Transient Modeling of Radioisotope Thermoelectric Generators
        Joseph VanderVeer (Penn State University) Presentation: Joseph VanderVeer - Wednesday, March 5th, 10:10 AM - Lamar/Gibbon
        Radioisotope thermoelectric generators (RTGs) typically have life spans measured in decades. For this reason, life performance modeling of RTGs tends to be quasi-steady state rather than fully transient. The Applied Energy and Power Library (AEPL), developed by the Applied Research Laboratory at Penn State University, was designed to model a variety of steady, quasi-steady, and fully transient energy and power systems. A key feature of AEPL is a variable-state temporal solver capable of dynamically switching between quasi-steady state and fully transient solutions within a simulation. To demonstrate AEPL’s ability to model such variable-state temporal problems, a simplified RTG was modeled within AEPL, focusing on the entry, descent, and landing stage of the Curiosity rover's mission profile. Results show the impact of a transient-induced increase in cold-side temperature and the asymptotic approach to steady state after landing. Other features of AEPL for modeling radioisotope power systems, including dynamic systems, are also discussed in the paper.
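        The variable-state idea can be illustrated with a single lumped node that is integrated transiently and handed off to a closed-form steady balance once its temperature rate falls below a threshold; AEPL itself is not public, so every constant and the switching criterion below are assumptions, not the library's implementation.
```python
Q_IN = 2000.0        # W, heat into the lumped node (decay heat minus converted power)
C = 5.0e4            # J/K, lumped thermal capacitance
H = 8.0              # W/K, effective linearized conductance to the heat-rejection sink
T_SINK = 250.0       # K, sink temperature
SWITCH_RATE = 1e-4   # K/s; below this rate the node is treated as quasi-steady

def transient_step(T, dt):
    """One explicit step of the fully transient energy balance."""
    dTdt = (Q_IN - H * (T - T_SINK)) / C
    return T + dt * dTdt, dTdt

T, t, dt, mode = 300.0, 0.0, 10.0, "transient"
while t < 2.0e5:
    if mode == "transient":
        T, dTdt = transient_step(T, dt)
        if abs(dTdt) < SWITCH_RATE:
            mode = "quasi-steady"
            T = T_SINK + Q_IN / H        # hand off to the closed-form steady balance
    t += dt
print(f"final mode = {mode}, T = {T:.1f} K (steady-state value {T_SINK + Q_IN / H:.1f} K)")
```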
      • 08.0909 Comparison of Radioisotope Power Systems to Enable the Endurance Mission Concept
        Young Lee (Jet Propulsion Laboratory), Matteo Clark (NASA Jet Propulsion Lab), Troy Hudson (NASA Jet Propulsion Laboratory), John Elliott (Jet Propulsion Laboratory), Allen Guzik (NASA - Glenn Research Center), June Zakrajsek (The Aerospace Corp), Mark Chodas (NASA Jet Propulsion Lab), Alex Davis (NASA Jet Propulsion Lab), Paul Schmitz (Vantage Partners) Presentation: Young Lee - Wednesday, March 5th, 10:35 AM - Lamar/Gibbon
        The Endurance mission concept is one of the mission concept studies conducted for the 2023-2032 Planetary Science and Astrobiology Decadal Survey. Endurance would be an autonomous rover mission enabled by a Radioisotope Power System (RPS) to traverse 2,000 km on the lunar surface, collecting and delivering samples to the Artemis basecamp near the south pole. The baseline RPS for the planned four-year mission duration is the Next Gen Mod-1 RTG (NGRTG Mod-1). Building on the legacy of previous systems, NGRTG Mod-1 aims to offer increased power per unit and slower power degradation rates compared to the Multi-Mission Radioisotope Thermoelectric Generator (MMRTG). Given the variety of RPS options, the RPS Program Office undertook a study to compare various technologies for the Endurance lunar rover mission concept. JPL’s Team X conducted the study to create parametric point designs for the rover. Selecting the RPS option played a major role in determining each point design, driving modifications to other rover subsystems to accommodate the RPS. The study analyzed four candidate RPS products: MMRTG, NGRTG Mod-1, a plutonium oxide-based Dynamic Radioisotope Power System (DRPS-Pu), and an Americium variant of the DRPS (DRPS-Am). From the RPS program’s perspective, the MMRTG is the current state of the art, the NGRTG Mod-1 is a planned next-generation product, and both DRPS variants are considered future product offerings. Beyond the rover power system, the thermal subsystem was another significant variable evaluated in this Team X study. The decision to use either an electrical heater architecture or a waste-heat-capture fluid loop architecture was thoroughly examined. The rover design allowed for changes in the power and thermal subsystems while keeping all other variables constant, making it easier to compare different Endurance rover variants. The metrics tracked for comparing the rovers against each other were the estimated mass and mission duration. The purpose of these two metrics is twofold: 1) ensuring the rover’s delivery to the lunar surface via a NASA Commercial Lunar Payload Services (CLPS) lander, and 2) meeting the mission’s four-year mission duration requirement. Among the evaluated cases, the rover’s maximum estimated mass was 817 kg, and the maximum estimated mission duration was 4.3 years. Overall, the results showed that achieving mission design closure for the Endurance rover mission concept is possible with any of the specified power systems and either of the heater architectures, depending on the power system implemented. Various factors, including technical feasibility, programmatic needs, cost constraints, and risk assessment, could affect the choice of the RPS. This paper will provide a full summary of the results of the Team X study and a comprehensive analysis of the benefits and challenges of accommodating each RPS, along with more detail on each power system. The study results are pre-decisional information for planning and discussion purposes only.
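        A toy version of the kind of trade captured in the study is sketched below: end-of-mission power for each candidate RPS is computed from a beginning-of-life power and a flat annual degradation rate, then checked against a rover power need at four years. All numeric values are placeholders, not figures from the Team X study or from the actual RPS products.
```python
# Placeholder beginning-of-life (BOL) powers, degradation rates, and rover power need.
options = {
    "MMRTG":       {"bol_w": 110.0, "degrade_per_yr": 0.040},
    "NGRTG Mod-1": {"bol_w": 245.0, "degrade_per_yr": 0.020},
    "DRPS-Pu":     {"bol_w": 300.0, "degrade_per_yr": 0.012},
    "DRPS-Am":     {"bol_w": 250.0, "degrade_per_yr": 0.010},
}
MISSION_YEARS, ROVER_NEED_W = 4.0, 180.0

for name, p in options.items():
    eom = p["bol_w"] * (1.0 - p["degrade_per_yr"]) ** MISSION_YEARS   # end-of-mission power
    verdict = "meets" if eom >= ROVER_NEED_W else "falls short of"
    print(f"{name:12s} BOL {p['bol_w']:5.0f} W -> EOM {eom:5.1f} W ({verdict} the {ROVER_NEED_W:.0f} W need)")
```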
    • Tara Polsgrove (NASA Marshall Space Flight Center) & Ashley Karp (Jet Propulsion Laboratory)
      • 08.1002 11'' Hybrid Rocket Motor Test Bed Applications and Opportunities
        Marissa Anderson () Presentation: Marissa Anderson - Wednesday, March 5th, 11:00 AM - Lamar/Gibbon
        The development of new systems for lunar or martian missions, such as kick stages or ascent vehicles, will require rapid generation of material and geometry performance data in representative motor environments. The 11-inch hybrid rocket motor (HRM) is a low-cost, short-lead-time solid-rocket-motor (SRM) material test bed that is designed, manufactured, and tested at Marshall Space Flight Center (MSFC). The 11-inch HRM uses solid HTPB/Al fuel and gaseous N2/O2 oxidizer to produce large-scale-SRM-like internal environments for the testing of insulation and nozzle materials and configurations. The use of solid fuel rather than solid propellant allows for substantially reduced cost and lead times compared to a similarly sized SRM, while still delivering SRM-similar temperatures, pressures, and combustion-product chemical species. In addition, the combination of SRM features, the number of adjustable parameters, and the potential for instrumentation and characterization make the 11-inch HRM ideal for generating validation data for models of SRM systems, subsystems, components, and environments. The inaugural firing of the 11-inch HRM, HRM-1, occurred in June 2024 and produced nozzle and insulation performance data that demonstrate its capability. Future use of the 11-inch HRM is intended to aid industry partners with materials testing and selection as well as anchoring new test data to existing MSFC models and test beds.
    • Robert Sievers (RKS Consulting)
      • 08.11 8.11 Nuclear Propulsion Systems - Opportunities and Barriers
        JASON TURPIN, Kurt Polzin, Timothy Cichan Presentation: JASON TURPIN, Kurt Polzin, Timothy Cichan - - Amphitheatre
        Description: Nuclear propulsion offers the potential to open up the space frontier from LEO to the Moon, Mars and beyond. This panel will provide a brief overview of current development programs and then focus on the capabilities and tradeoffs between the leading options: nuclear electric and nuclear thermal. The panelists will discuss the capabilities of each option relative to current propulsion systems and what each technology enables in the near and long term. Current challenges and barriers will also be addressed.
    • Christofer Whiting (NASA - Glenn Research Center)
      • 08.12 8.12 PANEL: Radioisotope Power Systems – Expanding Our Reach
        Jacob Matthews, Christofer Whiting, Patrick Frye, Leo Gard, June Zakrajsek Presentation: Jacob Matthews, Christofer Whiting, Patrick Frye, Leo Gard, June Zakrajsek - - Amphitheatre
    • Robert Sievers (RKS Consulting)
      • 08.13 8.13: Radioisotope Systems - Advancing Early Lunar Science Capabilities (extends to 12:10)
        Stephen Indyk, Vincent Bilardo, Milena Graziano, Jacob Matthews Presentation: Stephen Indyk, Vincent Bilardo, Milena Graziano, Jacob Matthews - - Amphitheatre
        Sponsored by Session 8.09, this panel brings together the lunar science community and organizations with lunar service or radioisotope systems capability to delve into the future of lunar exploration and the integration of radioisotope technology with their systems over the next decade. Radioisotope devices can provide critical heat and continuous power to survive the night or reach into the permanently shadowed regions. Panelists will provide unique insights into their missions and discuss capability that could be uniquely enabled by radioisotope power systems or heat sources, as well as the challenges and opportunities. The nuclear material supply chain and the respective regulatory issues will be included. The panelists will also discuss how government support or guidance can improve the deployment of these systems.
  • Tom Mc Ateer (NAVAIR) & Christopher Elliott (CMElliott Applied Science LLC)
    • Will Goins (Radiance Technologies ) & Richard Hoobler (University of Texas at Austin)
      • 09.0104 Design, Analysis and Development of a Mini Airship
        Alhamzah Al-mawla (The University of Manchester), Tunde Oyadiji (University of Manchester) Presentation: Alhamzah Al-mawla - Wednesday, March 5th, 10:35 AM - Lake/Canyon
        Most current aircraft are heavier-than-air vehicles. Thus, they require a large amount of power to take off and stay airborne. This leads to the requirement to burn large amounts of aviation fuel, which negatively impacts the environment in terms of climate change. Overall, the aviation industry is one of the major contributors to global warming due to COx and NOx emissions. To address this, there is currently a drive to reduce the burning of aviation fuel, and in fact to transition to all-electric aircraft. On the other hand, an airship is a lighter-than-air vehicle that is kept afloat by a gas that is lighter than air, e.g. hydrogen or helium. Thus, the amount of power required to keep the vehicle in the air is much smaller and can be readily supplied from electrical sources, including solar panels. This paper involves the design, analysis and development of a hybrid airship with an exoskeleton supporting an envelope filled with helium gas. The hybrid nature of the airship involves aerostatic and aerodynamic lift generation. The NACA6322 aerodynamic profile was used for the airship envelope and the exoskeleton. The aerodynamic performance of the envelope was analysed using the ANSYS Fluent Computational Fluid Dynamics software, while the structural performance of the exoskeleton was analysed using the ANSYS Structural software. The predicted lift and drag forces produced by the helium envelope, and the predicted deflections and stresses of the exoskeleton, are discussed. The procedures for the manufacture and assembly of the hybrid airship with the exoskeleton are described and illustrated.
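        For readers unfamiliar with aerostatic sizing, the short sketch below estimates the net lift of a helium envelope at sea-level conditions from the air-helium density difference; the envelope volumes are illustrative, not the design values of this airship.
```python
G = 9.81                      # m/s^2 (not needed for lift expressed in kg, shown for context)
R_AIR, R_HE = 287.05, 2077.1  # specific gas constants, J/(kg*K)
P, T = 101_325.0, 288.15      # Pa, K (ISA sea level)

def density(p, t, r):
    """Ideal-gas density."""
    return p / (r * t)

def net_lift_kg(volume_m3: float) -> float:
    """Gross aerostatic lift minus the weight of the lifting gas, expressed in kilograms."""
    return volume_m3 * (density(P, T, R_AIR) - density(P, T, R_HE))

for v in (5.0, 10.0, 20.0):   # candidate mini-airship envelope volumes, m^3
    print(f"{v:5.1f} m^3 of helium lifts about {net_lift_kg(v):.2f} kg of structure and payload")
```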
      • 09.0105 High-Fidelity Aeroelasticity Simulation of Morphing Wingtip
        Jehan Herath Mudiyanselage (University of Hertfordshire), Reza Moosavi (University of Hertfordshire) Presentation: Jehan Herath Mudiyanselage - Wednesday, March 5th, 11:00 AM - Lake/Canyon
        This research investigates the aerodynamic, structural, and aeroelastic characteristics of an active winglet using the concept of pressure-actuated cellular structures (PACS) on a Cessna Citation X wing. The PACS is designed with cells of specific geometric shapes: pentagons in the middle row and triangles in the top and bottom rows. Each cell is pressurised independently to achieve the desired bending moments, thereby optimising the aerodynamic performance and structural integrity of the winglet. The study employs Computational Fluid Dynamics (CFD) to analyse the performance of the PACS-integrated winglet. The CFD analysis is conducted on both a full-scale and a scaled model of the Cessna Citation X wing. This dual approach ensures that the results are applicable to real-world scenarios while also being validated through wind tunnel testing of the scaled model. Structural and modal analyses complement the CFD analysis, providing a comprehensive understanding of the winglet’s behaviour under various loading conditions. In the CFD analysis, the aerodynamic characteristics such as lift, drag, and pressure distribution are examined. The structural analysis focuses on the stress and deformation responses of the winglet under aerodynamic loads. Modal analysis is conducted to study the winglet's natural frequencies and mode shapes, which are crucial for understanding its aeroelastic stability. Following the computational studies, wind tunnel tests are performed on the scaled model to validate the CFD results and to assess the practical viability of the PACS actuator. The wind tunnel tests help in identifying any discrepancies between the computational predictions and the actual performance, thereby providing insights for further refinement of the PACS design. The results from this research will demonstrate the effectiveness of PACS in actively controlling the winglet shape to optimise aerodynamic performance. By adjusting the pressure in individual cells, the winglet can adapt to varying flight conditions, potentially reducing drag and improving fuel efficiency. The structural analysis will confirm the feasibility of the PACS design in withstanding the operational loads, while the modal analysis will ensure that the winglet remains stable under dynamic conditions. In conclusion, this research not only validates the concept of PACS-integrated winglets through comprehensive CFD and wind tunnel analyses but also paves the way for future advancements in adaptive wing technologies. The findings will contribute significantly to the development of more efficient and versatile aircraft wing designs, ultimately enhancing the performance and sustainability of modern aviation.
      • 09.0106 Design, Analysis and Fabrication of a Blended Wing Unmanned Aerial Vehicle
        SWARNA MAYURI KUMAR (United Arab Emirates University), Mohamed Okasha (), Mohamed Kamra (United Arab Emirates University) Presentation: SWARNA MAYURI KUMAR - Wednesday, March 5th, 11:25 AM - Lake/Canyon
        This paper presents a comprehensive methodology for the design, analysis, and fabrication of a Blended Wing Body (BWB) Unmanned Aerial Vehicle (UAV), aimed at enhancing aerodynamic efficiency and stability. Design parameters are defined based on mission requirements and performance specifications, and the geometric model is created using OpenVSP. Aerodynamic analysis with the mid-fidelity surface-vorticity flow solver, FlightStream, assesses lift, drag, and stability derivatives. Three UAV configurations are evaluated: a straight wing without winglets, a twisted wing without winglets, and a twisted wing with winglets. These configurations are compared for aerodynamic efficiency, and the optimal design is selected. These results are then used to model the aircraft's dynamic response to disturbances, demonstrating the UAV's longitudinal and lateral stability and performance. The UAV is fabricated using advanced 3D printing with lightweight PLA (polylactic acid) material, ensuring precise construction and a minimal-weight airframe. After fabrication, the UAV is assembled and prepared for flight testing to validate the predicted performance and stability. The preliminary findings are encouraging, as both OpenVSP and FlightStream were successfully employed to simulate and analyze the UAV’s aerodynamic characteristics, allowing for an informed selection of the most feasible configuration for fabrication. The results also provided critical insights into the UAV’s dynamic stability, ensuring the chosen design maintains robust performance under various flight conditions. The analysis confirms that the selected configuration achieves a balance between aerodynamic efficiency and structural stability. This study underscores the practical application of mid-fidelity simulation tools in UAV design, providing a solid foundation for future advancements in UAV performance and optimization. The paper concludes by discussing the integration of resource-efficient modeling and analysis techniques with practical fabrication processes, highlighting their contributions to enhancing the UAV's efficiency, stability, and overall performance.
      • 09.0107 Multibody Dynamics Modelling of a Passive Pilot for Aircraft-Pilot-Coupling Investigation
        Daniel Nelson (Carleton University), Fidel Khouli (Carleton Univ) Presentation: Daniel Nelson - Wednesday, March 5th, 04:30 PM - Lake/Canyon
        A fundamental principle in aircraft design is to minimise aircraft weight to optimise performance. This is partly done through the use of new airframe materials and structures, where recent advancements have generally resulted in more flexible airframes, and less damped structural modes. Consequently, modern airframes tend to exhibit structural vibrational modes that impinge on the bandwidth of pilot biodynamics. This results in a phenomenon known as Aircraft-Pilot-Coupling (APC), where the pilot dynamics adversely interact with the control system and the aeroservoelastic response of the aircraft. The resulting vibrations cause reduced handling qualities, and pilot and passenger discomfort. To predict APC, a dedicated pilot biodynamic model is necessary. This model must be parameterized so that it may encompass biomechanical variations between different pilots. Few biodynamic models exist in the available literature; however, each has limitations. Recently, Shams derived a detailed analytical discrete multibody dynamics model using Lagrangian dynamics. Following aircraft flight dynamics and control convention, the model is split into longitudinal and lateral models. While the model demonstrates a general consistency with experimental data, many simplifications were made limiting the model generality. One of the aims of the work being presented is to develop a highly detailed multibody dynamics model that improves upon Shams’ model using the multibody dynamics software Adams by Hexagon. This will enable the creation of more sophisticated pilot, pilot seat and inceptor models. The literature lacks this modelling approach for fixed-wing aircraft, which necessitates this development as APC prediction and mitigation becomes more critical for aircraft manufacturers. A preliminary Adams model was created based on Shams’ Lagrangian approach as an initial basis for a full and detailed 3D model. Parametric studies of the longitudinal and lateral models were compared to Shams’ model in the frequency domain. For the lateral model, the Adams model is displaying similar trends to Shams’ model in magnitude and phase. The effects of modifying the joint stiffness, joint damping, pilot mass, pilot height, inceptor stiffness, inceptor damping, and torso angle were investigated. It was found that modifying these parameters had near identical effects on the overall response. Similar conclusions are drawn for the longitudinal model. The 3D multibody dynamics model and example biodynamic responses are presented next along with the representative experimental setup of a seated pilot holding an inceptor for a fixed wing aircraft. The collected experimental biodynamic response data for different test pilots and how it will be used to conduct a grey-box system identification analysis using the developed 3D model are discussed. The main goal of using the developed grey-box model within the Aircraft-Pilot-System simulation to predict APC is then emphasized.
      • 09.0113 Modeling of Active Control of the Wing Angle of Attack for a Flapping Wing Micro-Aerial Vehicle
        Neil Schoenwetter (Villanova University), Rebecca McGill (), Stephen McGill (Villanova University), Sergey Nersesov (Villanova University) Presentation: Neil Schoenwetter - Wednesday, March 5th, 04:55 PM - Lake/Canyon
        In this research, we study the aerodynamics of a flapping wing micro-aerial vehicle (FWMAV). We create a simulation and an analytical model of active control of the wing's angle of attack and compare the two methods. The flapping mechanism for this FWMAV is comparable to that of a hummingbird. The inclusion of a mechanism to change the angle of attack adds to the weight of the vehicle, requiring a higher lift force. Therefore, the benefit of actively changing the angle of attack must be large enough to justify the added weight of the components required to make this active control possible. The lift and drag coefficients are calculated from the simulation (via COMSOL software). The lift and drag forces of the FWMAV wing found through the simulation and the analytical study are compared to ensure viability of the theoretical model. Two different wing shapes are modeled and analyzed to ensure robustness of the two methods. In this project, the effect of the active wing on the FWMAV is examined specifically for the hovering state. The efficiency of the flight is based on obtaining the necessary lift for hover while minimizing both the drag forces and the power input. The analytical force results for both wing shapes are comparable to the force results obtained from the COMSOL simulation. The lift force plots of both the analytical method and the simulation show similar behavior to lift force plots gathered from studies conducted on hummingbirds in hovering flight. With an accurate analytical and simulation model, the process of optimizing the wing shape and flight parameters will become more efficient, as an accurate picture of the wing design's aerodynamic performance can be obtained before experimental testing. The lift and drag forces recorded in future experimental testing will also be compared to the forces obtained from the analytical model and the simulation.
      • 09.0116 The Effect of Protuberance Structures on the Aerodynamic Performance of an Aerofoil
        Samuel Jennings () Presentation: Samuel Jennings - Monday, March 3rd, 10:10 AM - Electronic Presentation Hall
        Lift, drag and pitching moments of various samples of NACA0012 aerofoils were tested using SOLIDWORKS Flow Simulation. These samples had protuberance structures modelled on the leading edge and were inspired by the tubercles of the humpback whale. The literature review noted that the observed ability of these features to delay the onset of stall and increase lift has led to exploration and experimentation with such features in wind turbine blades. Protuberance-like structures called tubercles were created with varying amplitudes and wavelengths, and the effect these had on aerodynamic performance was determined by a virtual wind-tunnel test at intervals of 5 degrees from 0 to 15 degrees. The results of this investigation showed an increase in the production of lift ranging from 7.68% to 38.41% when adding leading-edge tubercles, compared to a baseline aerofoil, correlating with the literature on the topic.
      • 09.0118 Modeling and Analysis of Thermal Aspects for a Hybrid Stratospheric HAPS
        Salvatore Mazza (CIRA), Pietro Mazzei (CIRA Italian Aerospace Research Center), Vincenzo Baraniello (CIRA Italian Aerospace Research Center), Alessandra Zollo (CIRA Italian Aerospace Research Center), Giuseppe Persechino (CIRA - Italian Aerospace Research Centre) Presentation: Salvatore Mazza - Wednesday, March 5th, 09:00 PM - Lake/Canyon
        High Altitude Platform Stations (HAPS) have attracted a lot of interest in recent years as a complement to terrestrial, aerial and satellite systems, enabling new applications. The applications of these vehicles cover many sectors, from defense to communication systems to Earth observation. A challenge to be addressed in the design of such hybrid aircraft is the analysis of thermal aspects across the full flight envelope and, consequently, the design of the thermal management system, which is of fundamental importance to ensure that the electrical components on board perform correctly throughout the flight. One of the factors that makes the mission challenging is the surrounding environment: HAPS fly for most of the mission at stratospheric altitudes, which implies extreme environmental conditions in terms of density, temperature and air pressure. Another aspect to take into great consideration is the presence of the Sun, both a help and a hindrance in these cases: it supplies electrical power to the components through photovoltaic cells, but it is also their main heating source. A further aspect to take into consideration is the mission profile of the HAPS: its rate of ascent and the achievement of the operational altitude represent fundamental mission requirements; altitude angle, velocity vector and the presence of winds are parameters that must be considered to assure the safety of the mission. This work presents, firstly, a description of the types of components used inside the HAPS avionics bay from a thermal management perspective. The thermodynamic atmospheric variables are then described. Knowledge of the HAPS location in terms of longitude and latitude allows for greater precision in calculating all model variables. The thermal model used and the results obtained are then presented and discussed. This model takes into account all the heat exchange modes (radiation, convection and conduction) and the exchange paths between the avionics bay and the external environment. The model is based on the electro-thermal analogy, working with an equivalent electrical scheme. Taking into account the type of mission in progress, it is necessary to know the value of the temperature (in particular that of the components) during the entire duration of the flight. To do so, the theory of lumped parameters is used. The results show the temperature trends of all avionics components in the bay during the entire mission, from climb through cruise to final descent, ensuring that their temperatures remain within their operating range to avoid interruptions or damage during the flight. The aim of this paper is to provide a valid methodology to determine the temperature conditions of both the equipment and the buoyancy gas during the design of a HAPS, in order to determine the requirements on the operating temperature range for equipment and the need for a thermal management system and, eventually, to design it.
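        A minimal single-node version of the lumped-parameter, electro-thermal-analogy approach described above is sketched below: one avionics-bay node with solar absorption, internal dissipation, radiation, and convection to an ambient that changes during the climb. Areas, coefficients, and the ambient profile are placeholder assumptions, not the CIRA model.
```python
SIGMA = 5.670e-8               # Stefan-Boltzmann constant, W/(m^2 K^4)
C_NODE = 2.0e4                 # J/K, lumped thermal capacitance of the avionics bay
A_SUN, ALPHA = 0.5, 0.6        # m^2 sunlit area, solar absorptivity
A_RAD, EPS = 2.0, 0.85         # m^2 radiating/convecting area, emissivity
Q_DISS = 60.0                  # W, internal electronics dissipation
SOLAR_FLUX = 1000.0            # W/m^2, assumed constant insolation

def ambient(t_s):
    """Crude ambient temperature and convection coefficient during the climb."""
    altitude_km = min(t_s / 600.0, 18.0)             # ~1 km per 10 minutes, float at 18 km
    t_amb = 288.15 - 6.5 * min(altitude_km, 11.0)    # ISA troposphere, isothermal above
    h_conv = max(6.0 - 0.3 * altitude_km, 0.5)       # W/(m^2 K), weakening with thinner air
    return t_amb, h_conv

T, dt = 288.15, 1.0
for k in range(20_000):                              # ~5.5 hours of climb and float
    t_amb, h = ambient(k * dt)
    q_in = ALPHA * A_SUN * SOLAR_FLUX + Q_DISS
    q_out = EPS * SIGMA * A_RAD * (T**4 - t_amb**4) + h * A_RAD * (T - t_amb)
    T += dt * (q_in - q_out) / C_NODE                # single-node energy balance
print(f"avionics-bay temperature at end of simulation: {T - 273.15:.1f} degC")
```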
      • 09.0119 Design of the Pressurization System for a Novel Inflatable HAPS Vehicle: Development and Simulation
        Pietro Mazzei (CIRA Italian Aerospace Research Center), Salvatore Mazza (CIRA), Vincenzo Baraniello (CIRA Italian Aerospace Research Center), Alessandra Zollo (CIRA Italian Aerospace Research Center), Giuseppe Persechino (CIRA - Italian Aerospace Research Centre) Presentation: Pietro Mazzei - Wednesday, March 5th, 09:25 PM - Lake/Canyon
        The colonization of the stratosphere with new aircraft configurations has become possible as a result of recent technological development. Inflatable aero-structures represent one of the most promising solutions for medium- and high-weight payloads. Blimps and aerostatic balloons are the most common inflatable aero-structures, but they present several limits on movement, speed and controllability. The work presented describes a hybrid inflatable Heavier-Than-Air (HTA) design that goes beyond the limits of previous Lighter-Than-Air (LTA) and HTA vehicles, opening new possibilities for their fields of employment. This new design is based on the joint use of aerostatic and aerodynamic lift, allowing an overall weight and size reduction. This is also achieved thanks to a shape variation during the flight envelope. In particular, this article will primarily focus on the modeling of one fundamental element of this new design: the pressurization system. It is designed to ensure that the desired shape is achieved when required while maintaining the structural integrity of the envelope. These demands required the development of a numerical model to predict the behavior of the gas inside the envelope at each phase of the preplanned mission profile. The specific characteristics of the vehicle required more than a simple model of gas expansion, because the valve that ejects gas to keep the envelope from exceeding the nominal overpressure also had to be simulated. The first phase of the lift-off is designed to be purely aerostatic, relying on the lift generated by the LTA gas. The close interaction between the gas behavior and the lift force required a model of the vertical motion as well. Moreover, this novel vehicle was designed to cruise continuously in the stratosphere for several days, being thus subjected to extreme variations of outside temperature and pressure. These aspects, coupled with the solar radiation affecting the extensive surface of the envelope, required an accurate modeling of the atmospheric characteristics and of the solar radiation at various altitudes. Finally, the structural integrity of the vehicle during the descent phase is maintained by supplying compressed air, produced by compressors, to the envelope. The modeling of the required compressed-air mass flow and compression ratio was also implemented for every instant of the descent phase. The energy required by the compressors will be supplied by photovoltaic cells installed on top of the envelope. Electrical and thermal models of the photovoltaic cells were also developed and implemented to accurately estimate the available energy. After a general description of the key features of the overall design of the new hybrid inflatable HTA vehicle proposed in this work, a detailed description of all the mentioned models will be given, detailing all the assumptions, the equations and the limits that define the developed model. In the last section, the results for a simulated mission profile will be shown and discussed in depth. Finally, a summary of the future work to validate the developed model will be described.
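        The venting logic described above can be illustrated with a quasi-static ideal-gas sketch: the helium state is tracked during ascent, and a vent valve bleeds gas whenever the differential pressure exceeds the nominal overpressure. Envelope volume, helium mass, ascent profile, and the overpressure limit are illustrative assumptions, not the vehicle's design values.
```python
import math

R_HE = 2077.1            # J/(kg*K), specific gas constant of helium
V_MAX = 1200.0           # m^3, fully inflated envelope volume
DP_MAX = 300.0           # Pa, nominal overpressure before venting
VENT_FRACTION = 0.002    # fraction of gas mass vented per valve actuation

def ambient(alt_m):
    """ISA-like ambient pressure/temperature up to ~20 km (rough fit)."""
    t = max(288.15 - 0.0065 * alt_m, 216.65)
    p = 101_325.0 * math.exp(-alt_m / 7600.0)   # simple scale-height pressure model
    return p, t

m_gas, t_gas_offset = 20.0, 5.0    # kg of helium; gas assumed slightly warmer than ambient
for alt in range(0, 20_001, 100):  # quasi-static ascent in 100 m steps
    p_amb, t_amb = ambient(alt)
    t_gas = t_amb + t_gas_offset
    v_free = m_gas * R_HE * t_gas / p_amb           # volume the gas wants at ambient pressure
    if v_free <= V_MAX:
        dp = 0.0                                    # envelope not yet full: no differential pressure
    else:
        dp = m_gas * R_HE * t_gas / V_MAX - p_amb   # constant-volume internal pressure minus ambient
        while dp > DP_MAX:                          # vent gas until back under the overpressure limit
            m_gas *= 1.0 - VENT_FRACTION
            dp = m_gas * R_HE * t_gas / V_MAX - p_amb
print(f"helium remaining at 20 km: {m_gas:.2f} kg, differential pressure {dp:.0f} Pa")
```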
      • 09.0120 Study on the Influence of Sharp/Blunt Fuselage on the Aerodynamic Performance of Supersonic Nacelle
        Lu Bai () Presentation: Lu Bai - Monday, March 3rd, 10:35 AM - Electronic Presentation Hall
        At present, in the planning of supersonic civil aircraft, the 'N + X' generation planning developed by NASA is the most useful reference. The cruise Mach number of supersonic civil aircraft proposed by NASA is about 1.8. Therefore, it is of great academic significance and engineering application value to consider the structural design and aerodynamic performance analysis of the fuselage and nacelle of supersonic civil aircraft in the cruise state. In this paper, an axisymmetric inlet for supersonic civil aircraft is designed with a design Mach number of 1.75. Flow-characteristic solution models of the axisymmetric inlet, sharp fuselage, blunt fuselage and nacelle are established. After verifying the accuracy of the solution model, the designed axisymmetric inlet is placed in the nacelle and installed on the sharp fuselage and the blunt fuselage respectively. The effects of the configuration with a fuselage and the configuration without a fuselage on the inlet total pressure recovery coefficient and the outlet total pressure distortion index are studied. The flow field characteristics and formation mechanism of the internal and external flow in the nacelle at 0° and 2° angles of attack are analyzed. The results show that in the cruise state, the configuration with the fuselage significantly increases the internal flow loss and the total pressure distortion at the outlet, reducing the total pressure recovery coefficient of the inlet and increasing the total pressure distortion index at the outlet. Along the x (incoming flow) direction, as x increases, the low-energy fluid on the fuselage side gradually decreases, and the total pressure distribution becomes uniform ahead of the inlet entrance. In the configuration without a fuselage, the total pressure recovery coefficient of the designed axisymmetric inlet is 94.7% at the design point. In the configuration with a fuselage, the total pressure recovery coefficient of the inlet installed on the sharp fuselage is 90.58% at the design point, and that of the inlet installed on the blunt fuselage is 89.6% at the design point. The research content of this paper provides a theoretical basis for the aerodynamic performance analysis and structural design of axisymmetric inlets and nacelles for supersonic civil aircraft.
    • Kerianne Hobbs (Air Force Research Laboratory) & Will Goins (Radiance Technologies )
      • 09.0201 Hierarchical Vision-Based Localization in Large-Scale GNSS-Denied Environments
        Michael Schleiss (University of the Bundeswehr Munich), Max Hofacker (University of Federal Armed Forces Munich), Roger Foerstner (), Thomas Pany (University of the Bundeswehr Munich) Presentation: Michael Schleiss - Thursday, March 6th, 08:30 AM - Lake/Canyon
        This paper presents a hierarchical framework for vision-based absolute localization in large-scale GNSS-denied environments. Initially, a coarse position is estimated using visual observations within a Monte Carlo Localization framework. Each query image is encoded as a global image descriptor and matched against a database of georeferenced images. These reference images resemble what an aerial vehicle would see with a downward-facing camera at that location. Image similarities between the query and reference images are then used as likelihoods to update the particles’ weights in the Monte Carlo Localization scheme. We demonstrate that a coarse position estimate can reduce the position uncertainty from multiple kilometers to below 150 meters. Within this reduced search space, we then match local image features against a satellite image or ortho-image and a corresponding digital surface model to obtain six-degree-of-freedom poses in the global frame. These poses are used to track the absolute position and attitude, which are then fused with inertial measurements using an Extended Kalman Filter. We evaluate our hierarchical localization framework using a challenging real-world aerial dataset that spans over 38 kilometers in distance at an altitude of 300 meters above ground and covers various types of landscapes such as urban and vegetated environments. Our evaluations show that our framework can localize over 65% of query images within an error threshold of 5 meters, which, combined with pose tracking, allows for continuous global pose estimation over our test scenario.
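        A compact sketch of the coarse Monte Carlo Localization stage follows: particles carry candidate positions, weights are updated from the similarity between the query descriptor and the georeferenced descriptor nearest each particle, and resampling triggers when the effective sample size collapses. The descriptor database, motion noise, and similarity-to-likelihood mapping are assumptions for illustration, not the authors' implementation.
```python
import numpy as np

rng = np.random.default_rng(1)
N_PART, DESC_DIM = 2000, 128

# Hypothetical database: a grid of georeferenced positions, each with a unit-norm descriptor.
db_xy = np.stack(np.meshgrid(np.arange(0, 5000, 250), np.arange(0, 5000, 250)), -1).reshape(-1, 2)
db_desc = rng.normal(size=(len(db_xy), DESC_DIM))
db_desc /= np.linalg.norm(db_desc, axis=1, keepdims=True)

particles = rng.uniform(0, 5000, size=(N_PART, 2))     # prior: anywhere in a 5 x 5 km area
weights = np.full(N_PART, 1.0 / N_PART)

def update(query_desc, odometry_dxy, sigma_motion=30.0, temperature=10.0):
    """One predict/update/resample cycle of the Monte Carlo Localization scheme."""
    global particles, weights
    # Predict: shift particles by odometry plus Gaussian motion noise.
    particles += odometry_dxy + rng.normal(0.0, sigma_motion, particles.shape)
    # Update: cosine similarity of the query against each particle's nearest reference descriptor.
    nearest = np.argmin(np.linalg.norm(particles[:, None] - db_xy[None], axis=2), axis=1)
    sim = db_desc[nearest] @ query_desc
    weights *= np.exp(temperature * sim)               # similarity-to-likelihood mapping (assumed)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights**2) < N_PART / 2:
        idx = rng.choice(N_PART, N_PART, p=weights)
        particles, weights = particles[idx], np.full(N_PART, 1.0 / N_PART)
    return weights @ particles                         # weighted mean position estimate

query = db_desc[100] + 0.05 * rng.normal(size=DESC_DIM)  # pretend the vehicle is near grid cell 100
query /= np.linalg.norm(query)
print("coarse position estimate:", update(query, odometry_dxy=np.array([0.0, 0.0])))
```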
      • 09.0202 Station Keeping Vented Solar High-Altitude Balloons with Deep Reinforcement Learning
        Tristan Schuler (U.S. Naval Research Laboratory), Chinthan Prasad (U.S. Naval Research Laboratory), Georgiy Kiselev (UC Davis), Donald Sofge (Naval Research Laboratory) Presentation: Tristan Schuler - Thursday, March 6th, 08:55 AM - Lake/Canyon
        Vented Solar High-Altitude Balloons (SHAB-Vs) can leverage opposing winds to perform station-keeping maneuvers for persistent area coverage of a target region, which can help with surveillance, in-situ stratospheric meteorological data collection, or communication relays. With a perfect weather forecast this would be a simple deterministic path-planning problem; however, forecasts frequently have large errors in wind direction (occasionally up to 180 degrees) and also lack vertical and temporal resolution in the altitude region of interest (typically only 5-10 data points for a 10 km region), leading to significant uncertainty in flow fields. SHAB-Vs perform station-keeping maneuvers over a much shorter time period (10-20 hours) than altitude-controlled super-pressure balloons (days to weeks). We have developed a simulation environment to use deep reinforcement learning algorithms to train altitude-controllable lighter-than-air platforms to station-keep in complex flow fields. At the highest level, the trained agents are rewarded for maintaining time within a target region, with randomized flows for every episode. The simulated flow is based on radiosonde altitude data for a particular day, and the forecast is based on real forecasts from ECMWF, providing a way to accurately simulate uncertainty. We will initially verify our trained models in an indoor simulated environment with miniature autonomous blimps and flow fields generated by several bladeless fans oriented to create opposing winds at different altitudes. The indoor flow field isn't a perfect analog to outdoor SHAB-V tests, because the indoor flow velocity decreases to zero as the blimps move away from the fans, and there is significantly more turbulence between levels, whereas wind in the stratosphere has a near-constant flow and significantly smaller regions of turbulence (which can more easily be avoided in an outdoor arena). Ideally, if our indoor agents can solve these more complex indoor flow fields, similar algorithms, reward structures, and hyperparameters can be used for the outdoor agents by applying transfer learning from the initial indoor models.
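        The station-keeping setup lends itself to a small gym-style environment; the toy version below exposes a discrete descend/hold/ascend action, advects the balloon with the chosen layer's wind plus forecast-error noise, and rewards time inside the target radius. The wind layers, time step, and radius are illustrative, not flight values or the authors' simulator.
```python
import numpy as np

class ShabStationKeepEnv:
    DT = 600.0                # s per step (10 minutes)
    TARGET_RADIUS = 50_000.0  # m
    # Simplified wind field: (east, north) m/s per altitude layer, with opposing winds.
    LAYERS = np.array([[ 6.0,  1.0],
                       [ 2.0, -0.5],
                       [-4.0, -1.0],
                       [-7.0,  0.5]])

    def reset(self, rng=None):
        self.rng = rng or np.random.default_rng()
        self.layer = self.rng.integers(len(self.LAYERS))
        self.pos = self.rng.uniform(-20_000, 20_000, size=2)   # start near the target
        return self._obs()

    def step(self, action):                   # action in {0: descend, 1: hold, 2: ascend}
        self.layer = int(np.clip(self.layer + action - 1, 0, len(self.LAYERS) - 1))
        wind = self.LAYERS[self.layer] + self.rng.normal(0.0, 0.5, 2)   # forecast error
        self.pos = self.pos + wind * self.DT
        reward = 1.0 if np.linalg.norm(self.pos) < self.TARGET_RADIUS else 0.0
        return self._obs(), reward

    def _obs(self):
        return np.concatenate([self.pos / self.TARGET_RADIUS, [self.layer]])

# Random-policy rollout, the baseline a trained RL agent should beat.
env, rng = ShabStationKeepEnv(), np.random.default_rng(7)
obs, total = env.reset(rng), 0.0
for _ in range(96):                           # ~16 hours of flight
    obs, r = env.step(rng.integers(3))
    total += r
print(f"fraction of time inside target region (random policy): {total / 96:.2f}")
```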
      • 09.0208 Autonomous Identification and Localization of Battle Damage on Aircraft Using Infrared-based Sensors
        David Ke (United States Air Force Academy), Elliott Kmetz (United States Air Force Academy), Ashlynn Sweet (United States Air Force Academy), Joseph Olson (United States Air Force Academy), J. Humberto Ramos (University of Florida), Michael Anderson (United States Air Force Academy) Presentation: David Ke - Thursday, March 6th, 09:20 AM - Lake/Canyon
        Awareness of an aircraft’s remaining integrity and achievable performance during everyday flight and especially during combat is critical for the safety of the pilot and preserving combat effectiveness. Current means of monitoring aircraft integrity involve either post-flight inspections on the ground or in-flight inspections where a second aircraft is maneuvered around the first. These techniques are labor intensive, undesirably infrequent, and heavily constrained by environmental conditions. Specifically, post-flight inspections require additional equipment and time to manually examine each surface on the aircraft, along with the added inflexibility of requiring the aircraft to return to base. For in-flight inspections, clear air conditions and high visibility are needed, and even then, critical damage may be overlooked. Other inspection situations, particularly ground inspection, create critical periods of vulnerability for the aircraft and maintenance personnel. This work centers on the use of an unmanned aircraft system (UAS) with commercial off-the-shelf (COTS) infrared (IR)-based sensors to autonomously identify battle damage on aircraft. The proposed solution employs a convolutional neural network (CNN) and Simultaneous Localization and Mapping (SLAM) to detect, classify, and localize aircraft damage. This methodology has promise in many applications related to battle awareness because it offers improvements over standard red, green, blue, depth (RGB-D) sensor packages under poor visibility conditions, while running completely in real-time and onboard. This paper presents an experiment that autonomously identified and localized aircraft battle damage on a T-37 airplane wing in day and night conditions with a 58.02% success rate. Potential future applications of this work include remotely piloted aircraft (RPA) using existing sensor packages for aircraft monitoring, current and next-generation fighter aircraft wingmen passively inspecting one another, and RPA swarms that can each actively measure the health of the group. This technology could also be used by civilian commercial aircraft and spacecraft to quickly and confidently identify structural damage. Future work includes increasing the quality and quantity of training for the CNN, testing with the inspected aircraft in motion, and fusing the IR sensors with standard color cameras.
    • Will Goins (Radiance Technologies ) & Andrew Lynch (Tactical Air Support Inc.) & Thomas Fraser (Lockheed Martin Corp)
      • 09.0305 Determining the Effect of BVID with Optical Fiber Sensors and Composite Material Strength
        Sydney Houck (University of South Carolina) Presentation: Sydney Houck - Thursday, March 6th, 09:45 AM - Lake/Canyon
        Objective: This paper investigates Barely Visible Impact Damage (BVID) on thermoplastic composite materials and focuses on coupon-size specimens. Optical fiber strain sensors are used in this study to investigate the best method for detecting BVID caused by impacts. Compression After Impact (CAI) testing is completed for some of the impacted panels to investigate the effect of impact on static strength. Fatigue testing is completed for the remaining panels to investigate the strength of the part under post-impact cyclic loading. The findings of this study can be applied to aerospace structures that use thermoplastic composites and regularly experience BVID. Methods: The composite specimens used for this study were made of Toray TC1225 LM PAEK and cut into 4” x 6” panels. The stacking sequence used for these panels is [45, 0, -45, 90]4s. The impacts were performed following ASTM standards at energy levels ranging from 5 J to 60 J on a total of 43 panels. Luna High-Definition Strain sensors were attached to 7 panels before impact. The Luna Optical Distributed Sensor Interrogator (ODiSI) measurement system was used to collect data. Results: The data gathered after impact can be displayed to visually show the strain values at different positions along the fiber. The data can also be used to create a 3D plot that compares strain, length, and time from five seconds before to five seconds after the impact. These figures can be compared to understand which locations on the panel experience the highest and lowest strain values due to the impact. The preliminary results from CAI testing can be displayed in a figure that shows a linear relationship between the impact energy (J) and the compressive strength (MPa) of the impacted panels. Conclusions: The main conclusion that can be drawn from the preliminary impact tests is that the optical fibers are capable of clearly detecting when a BVID impact occurs on a composite part. This can be seen most clearly in 3D plots comparing strain, length, and time, in which there is a significant visual change in the strain measurements along the length of the fiber. Preliminary compression testing shows that different impact energies can be correlated to a percentage knockdown factor, which is calculated as 3% for 15 J and 28% for 30 J. Additional CAI tests will be completed to further validate these conclusions. Fatigue testing for this study is still in progress and does not yet have preliminary conclusions.
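        For readers unfamiliar with the knockdown metric, the following is a minimal sketch of how a percentage knockdown factor could be computed from CAI results; the baseline and impacted strength values below are made-up placeholders, not data from this study.
        ```python
        def knockdown_percent(pristine_strength_mpa, impacted_strength_mpa):
            """Percentage reduction in compressive strength relative to an undamaged panel."""
            return 100.0 * (1.0 - impacted_strength_mpa / pristine_strength_mpa)

        # Placeholder numbers for illustration only (not measured values from the paper)
        baseline = 300.0     # MPa, hypothetical pristine CAI strength
        after_15j = 291.0    # MPa, hypothetical strength after a 15 J impact
        print(f"Knockdown: {knockdown_percent(baseline, after_15j):.1f}%")   # -> 3.0%
        ```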
    • Nikolaus Ammann (DLR (German Aerospace Center)) & Christopher Elliott (CMElliott Applied Science LLC) & Tom Mc Ateer (NAVAIR) & Richard Hoobler (University of Texas at Austin)
      • 09.0401 Autonomous Navigation and Station-keeping of High-Altitude Balloon Using Extremum Seeking Control
        Telema Harry (Queen's University), Martin Guay (Queen's University), Shimin Wang (Massachusetts Institute of Technology) Presentation: Telema Harry - Thursday, March 6th, 10:10 AM - Lake/Canyon
        Unmanned High-Altitude Balloon Platforms (HABPs) have many real-life engineering and scientific applications, such as near-space experiments, military surveillance, meteorological observations, and high-speed broadband internet. These are flexible structures designed to carry large payloads to the stratosphere for an extended period. The major challenges associated with HABP operation are navigation and station-keeping (i.e., the ability to maintain the HABP in a particular geographical area), as their dynamics depend solely on the prevailing atmospheric wind conditions, which are strongly nonlinear and time-varying, making it difficult to develop an accurate mathematical model. Even short-term forecasts for a particular region are not accurate and may differ substantially from actual measurements. Furthermore, the harsh operating environment makes continuous human intervention impractical. All these factors add to the complexities associated with the design of a control system for the navigation and control of high-altitude balloon platforms. Atmospheric wind varies with altitude; thus, the standard control strategy is to exploit this variability of the environmental conditions with altitude to design a controller that can steer the balloon to the desired location by continuously seeking the altitude with the most favorable wind profile. In this work, we propose a data-driven real-time optimization technique based on dual-mode extremum-seeking control (ESC) for the navigation and station-keeping of high-altitude balloon platforms. ESC requires neither historical data nor accurate knowledge of the wind profile at different altitudes; it only requires real-time measurement of the HABP's geographical location, using a global positioning system (GPS) receiver, or of the atmospheric wind direction. This information is used to estimate the local gradient of a performance function, and the control input is manipulated such that the HABP seeks the altitude with the most favorable wind pattern. To demonstrate the effectiveness of our data-driven controller, we conducted simulations of a HABP using unknown wind data from the National Oceanic and Atmospheric Administration's (NOAA) Global Forecast System (GFS). The simulations show that the algorithm can steer a HABP from one location to another. Secondly, although it is practically impossible to constrain the HABP to a fixed geographical location, the simulation results demonstrate the ability of our control algorithm to seek the next altitude with a favorable wind direction that will steer the HABP towards the predefined area of operation when it veers off.
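        As a rough illustration of the extremum-seeking idea (a classic perturbation-based scheme, not the authors' dual-mode controller), the sketch below dithers a commanded altitude and descends the estimated gradient of a measured performance function; the performance function, gains, and constants are placeholder assumptions.
        ```python
        import numpy as np

        def distance_to_target(altitude_km):
            """Placeholder performance function: stand-in for the measured ground
            distance to the station-keeping target when flying at this altitude."""
            return (altitude_km - 19.0) ** 2 + 5.0    # unknown optimum near 19 km

        # Classic perturbation-based extremum seeking (minimization)
        dt, omega, amp, gain = 0.1, 1.0, 0.5, 0.1
        alt_hat = 15.0
        j_dc = distance_to_target(alt_hat)            # slow running estimate of the mean of J
        for step in range(5000):
            t = step * dt
            dither = amp * np.sin(omega * t)
            j = distance_to_target(alt_hat + dither)  # "real-time measurement" of performance
            j_dc += dt * 0.2 * (j - j_dc)             # washout: remove the DC component of J
            gradient_estimate = (j - j_dc) * dither   # demodulate the dither response
            alt_hat -= gain * gradient_estimate * dt  # descend the estimated gradient
        print(f"Commanded altitude: {alt_hat:.2f} km")  # settles near the optimum around 19 km
        ```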
      • 09.0402 Evaluation of Automatic Landmark Selection Strategies for Navigation of Unmanned Aircraft
        Nikolaus Ammann (DLR (German Aerospace Center)) Presentation: Nikolaus Ammann - Thursday, March 6th, 10:35 AM - Lake/Canyon
        The objective of this work is to investigate strategies for adapting the angle of view of a camera and lidar sensor configuration to optimize the navigation state estimate of an unmanned aircraft. Most unmanned aircraft used in private, commercial, and military contexts are equipped with camera systems for aerial photography or surveillance. These camera systems are typically of high quality compared to navigation cameras. In addition, they feature a pan-tilt unit for adjusting the viewing angle. Nevertheless, such high-quality camera systems are typically not used by the flight controller or in the state estimation. This paper enhances an approach presented last year in this session on how to integrate such camera systems into the state estimation by introducing an automatic landmark selection algorithm. The paper briefly summarizes the implemented state estimation from the previous publication. Subsequently, different strategies are investigated for controlling the pan-tilt unit in order to optimize the measured values for the state estimation. The implemented strategies are compared with each other and with fixed camera angles to show the impact of the different algorithms. The comparison is based on a Matlab/Simulink simulation environment providing measurements such as inertial, magnetic, or barometric sensor data. The simulation of raw optical sensor data and the processing of this data has been facilitated by implementing a statistical model of the processing pipeline. This model emulates the environment, the raw optical sensor data, and the processing.
      • 09.0403 Vision-based Self-Localization for UAVs Using Semantic Features and OpenStreetMap
        Rebecca Schmidt (German Aerospace Center - DLR), Joachim Rüter (German Aerospace Center - DLR), Stefan Krause (German Aerospace Center - DLR), Stefan Schubert (Chemnitz University of Technology) Presentation: Rebecca Schmidt - Thursday, March 6th, 11:00 AM - Lake/Canyon
        The autonomous operation of unmanned aerial vehicles (UAVs) requires precise self-localization. The UAV location is commonly determined by using global navigation satellite systems (GNSS). However, GNSS information can be inaccurate, e.g. due to effects of space weather phenomena or radio frequency interferences caused by GNSS jamming. In order to provide a reliable and precise UAV localization that does not rely on GNSS information, various research focuses on vision-based methods. In this paper, we introduce a real-time method for vision-based UAV self-localization with GNSS-like accuracy. Our approach uses high-level semantic features, extracted from images captured during flight, which are then matched to geo-referenced OpenStreetMap (OSM) data of the flight area. This matching relies on scene similarity measurements and is performed using a template matching approach. We access the high-level semantic features through a neural network for semantic segmentation, considering three feature classes: paths, buildings, and background. The map data used is obtained from the OSM server and preprocessed into a comparable image format. Additionally, data from the inertial navigation system (INS), barometer, and magnetometer are utilized to determine the UAV's current altitude and heading. This information is then used to transform, i.e., rotate and scale, the segmentation images to match the scale and orientation of the map. The template matching accuracy depends significantly on the metric used to measure the similarity. Therefore, we compare five metrics. These include metrics that are often used for template matching of grayscale images, such as normalized cross-correlation (NCC) and sum of squared differences (SSD). But, we also evaluate metrics that are commonly used to compare segmentation images, e.g. loss functions for supervised learning of semantic segmentation networks like binary cross entropy (BCE), focal loss (FL) and dice loss (DL). Since we work with high-resolution image data, we do not use a sliding window approach for the template matching. Instead, we reduce the computational cost by performing the matching in frequency domain using the fast Fourier transform (FFT). Definitions of the metrics NCC and SSD, which can be calculated in the frequency domain, are already available in the literature. In this paper, we derive definitions for the metrics BCE, FL, and DL, enabling real-time capable matching in the frequency domain. As part of the evaluation, the metrics are compared in terms of positioning accuracy and runtime. We especially found that focal loss is well suited for matching the high-level semantic features with semantic map data. The overall system achieves a median localization accuracy of 1.64 m on noise-free data with a mean runtime of 0.38 s executed on the on-board UAV hardware. In addition to introducing the localization method, this paper presents challenges and potential pitfalls faced when trying to match semantic features with map data using template matching. Furthermore, we show that loss functions known from Machine Learning are more suitable to perform an accurate matching of the semantic features with the map data than traditional template matching metrics.
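        The frequency-domain matching step can be illustrated with plain cross-correlation, which is simpler than the NCC/BCE/focal/dice formulations derived in the paper; the function name, array shapes, and toy data below are assumptions for illustration only.
        ```python
        import numpy as np
        from scipy.signal import fftconvolve

        def locate_by_fft_correlation(map_image, uav_segmentation):
            """Slide the (rotated, scaled) UAV segmentation over the geo-referenced map
            via FFT-based cross-correlation and return the best-matching top-left index."""
            template = uav_segmentation - uav_segmentation.mean()   # zero-mean to reduce bias
            search = map_image - map_image.mean()
            # Cross-correlation = convolution with the template flipped in both axes
            corr = fftconvolve(search, template[::-1, ::-1], mode="valid")
            return tuple(int(i) for i in np.unravel_index(np.argmax(corr), corr.shape))

        # Toy example: a small patch "seen by the UAV" hidden inside a larger map
        rng = np.random.default_rng(0)
        osm_map = rng.random((512, 512))
        uav_view = osm_map[200:264, 300:364].copy()      # pretend this is the segmented camera view
        print(locate_by_fft_correlation(osm_map, uav_view))   # -> (200, 300)
        ```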
      • 09.0404 Impact of Added Mass on the Control Laws Design
        Felice Fruncillo (), Vincenzo Baraniello (CIRA Italian Aerospace Research Center), Adolfo Sollazzo (the Italian Aerospace Research Centre), Nicola Genito (CIRA Italian Aerospace Research Center), Antonio VITALE (CIRA - Italian Aerospace Research Centre) Presentation: Felice Fruncillo - Thursday, March 6th, 11:25 AM - Lake/Canyon
        The added mass problem is a significant challenge in the design and operation of lighter-than-air (LTA) stratospheric vehicles. This phenomenon occurs when a vehicle moving through a fluid experiences additional inertia due to the fluid it displaces. For LTA vehicles, the effect of added mass is particularly pronounced because of their low structural mass and the relatively large volume of air they interact with during flight. One of the primary challenges posed by added mass is the additional stress it places on the vehicle’s structure. The interaction with the surrounding air can induce forces that the structure must be able to withstand without compromising the vehicle’s lightweight nature. Moreover, the dynamics of LTA aircraft are heavily influenced by added mass, impacting stability and control. Therefore, engineers must carefully consider this factor during the design process in order to ensure accurate predictions of the vehicle’s behavior, to maintain stability and efficiency, and to set up control systems that take into account the effect of the extra inertia and the altered responsiveness of the vehicle to inputs, thus necessitating precise and sophisticated control mechanisms. Energy efficiency is another area affected by the added mass phenomenon. The increased inertia means that more energy is required to move the vehicle through the air. This is especially critical for vehicles that rely on solar power or other low-energy systems to achieve prolonged flight durations. By thoroughly understanding and addressing the added mass phenomenon, engineers can enable the successful deployment of LTA aircraft in various applications. With this paper, the authors contribute by focusing on the real impact of this issue on the design of the attitude control laws, in order to provide an objective quantification of the problem that flight control engineers must face. The paper includes a brief recall of the added mass mathematical modelling already presented by the authors in previous works; the same control technique is then applied to design the attitude control system, both for the model that considers the added mass and for the one that neglects the phenomenon. The LTA aircraft considered for the analyses presents two pairs of complex poles (longitudinal pitch-incidence and lateral dutch-roll), two other stable real poles (longitudinal surge and lateral yaw-roll), and two unstable poles (longitudinal phugoid and lateral sideslip-yaw); the introduction of added masses tends to slow the poles in the complex plane, except for the latter modes, which are little affected. The control technique selected for the analyses is the Linear Quadratic Regulator with state augmentation to achieve zero steady-state error to step commands for the roll and pitch attitude angles. A first analysis compares the performance with reference to the full model, which considers the added mass. A subsequent analysis focuses on the robustness variation through a probabilistic approach, taking into account the stochastic uncertainties on the main aerodynamic and inertial parameters and also considering the added mass data. Detailed results of these analyses will be presented in the full paper.
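        As a generic illustration of the control technique named here (a Linear Quadratic design with state augmentation for zero steady-state error), the sketch below augments a linear attitude model with the integral of the tracking error and solves a standard LQR problem; the system matrices and weights are placeholders, not the LTA vehicle model.
        ```python
        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Placeholder linear attitude model: x = [angle, rate], u = control moment
        A = np.array([[0.0, 1.0],
                      [0.0, -0.1]])
        B = np.array([[0.0],
                      [1.0]])
        C = np.array([[1.0, 0.0]])            # the regulated output is the attitude angle

        # Augment with the integral of the tracking error: z_dot = r - C x
        A_aug = np.block([[A, np.zeros((2, 1))],
                          [-C, np.zeros((1, 1))]])
        B_aug = np.vstack([B, np.zeros((1, 1))])

        Q = np.diag([10.0, 1.0, 5.0])         # weights on angle, rate, and integral error
        R = np.array([[1.0]])

        P = solve_continuous_are(A_aug, B_aug, Q, R)
        K = np.linalg.solve(R, B_aug.T @ P)   # u = -K [x; z] gives zero error to step commands
        print(K)
        ```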
      • 09.0405 Pitch Plane Trajectory Tracking Control for Sounding Rockets via Adaptive Feedback Linearization
        Pedro Santos (University of Lisbon - IST), Paulo Oliveira (IST/Ulisboa) Presentation: Pedro Santos - Sunday, March 2nd, 03:05 PM - Electronic Presentation Hall
        Sounding rockets are instrumental platforms to provide cost-effective, rapid access to space, and to validate new technologies prior to orbital flight. In the face of increasingly demanding mission scenarios, focusing on reusability and reconfigurability, adaptive and global control solutions are needed to track different trajectories in large flight envelopes, which may be generated in real-time using online guidance methods. In this work, a pitch plane trajectory tracking control solution for suborbital launch vehicles is derived relying on adaptive feedback linearization. Initially, the 2D dynamics and kinematics for a single-nozzle, thrust-vector-controlled sounding rocket are obtained for control design purposes. Then, an inner-outer control strategy, which simultaneously tackles attitude and position control, is adopted, with the inner loop comprising the altitude and pitch control and the outer loop addressing the horizontal (downrange) position control. Feedback linearization is used to cancel out the non-linearities in both the inner and outer dynamics, reducing them to two double integrators acting on each of the output tracking variables. Uncertainty is considered when canceling the aerodynamic terms and is estimated in real-time in the inner loop via adaptive backstepping. More precisely, making use of Lyapunov stability theory, an adaptation law, which provides online estimates of the inner-loop aerodynamic uncertainty, is jointly designed with the output tracking controller, ensuring global reference tracking in the region where the feedback linearization is well-defined. The zero dynamics of the inner-stabilized system are then exploited to obtain the outer-loop dynamics and derive a Linear Quadratic Regulator (LQR) with integral action, which can stabilize them as well as reject external disturbances. In the outermost loop, the estimate of the corresponding aerodynamic uncertainty is obtained indirectly by using the inner-loop estimates together with known aerodynamic relations. The resulting inner-outer position control solution is proven to be asymptotically stable in the region of interest in terms of pitch angle, i.e., in the upper region of the unit circle excluding the horizontal orientation. Finally, the control strategy is implemented in a Matlab/Simulink simulation environment composed of the non-linear pitch plane dynamics and kinematics model and the environmental disturbances to assess its performance. Using a single-stage sounding rocket propelled by a liquid engine as the reference vehicle, different mission scenarios are tested in the simulation environment to verify the adaptability of the proposed control strategy. Preliminary simulation results are satisfactory, given that the controller is able to track the requested trajectories while rejecting external wind disturbances. Furthermore, the need to re-tune the control gains between different mission scenarios is minimal to none.
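        To make the feedback-linearization step concrete, here is a minimal single-axis sketch: a nonlinearity f(x) is cancelled so the output behaves as a double integrator driven by a virtual input v. The dynamics, gains, and reference are made-up placeholders, not the launch-vehicle model or the adaptive law from the paper.
        ```python
        import numpy as np

        def f_nonlinear(theta, theta_dot):
            """Placeholder pitch-axis nonlinearity (stand-in for aerodynamic terms)."""
            return -0.5 * np.sin(theta) - 0.05 * theta_dot * abs(theta_dot)

        def feedback_linearizing_control(theta, theta_dot, theta_ref, k1=4.0, k2=4.0, g=1.0):
            """u = (v - f(x)) / g turns theta_ddot = f(x) + g*u into theta_ddot = v."""
            v = -k1 * (theta - theta_ref) - k2 * theta_dot    # virtual double-integrator input
            return (v - f_nonlinear(theta, theta_dot)) / g

        # Simple closed-loop simulation tracking a 0.3 rad step command
        theta, theta_dot, dt = 0.0, 0.0, 0.01
        for _ in range(1000):
            u = feedback_linearizing_control(theta, theta_dot, theta_ref=0.3)
            theta_ddot = f_nonlinear(theta, theta_dot) + u
            theta_dot += theta_ddot * dt
            theta += theta_dot * dt
        print(f"theta after 10 s: {theta:.3f} rad")   # converges toward 0.3
        ```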
      • 09.0410 Nonlinear MPC for Stabilizing the Longitudinal Dynamics of a Highly Unstable Aircraft
        Paulina Conrad (Friedrich-Alexander Universität Erlangen-Nürnberg), Andreas Michalka (), Johannes Beck (Airbus), Knut Graichen (Friedrich-Alexander University Erlangen-Nuremberg) Presentation: Paulina Conrad - Thursday, March 6th, 04:30 PM - Lake/Canyon
        This paper investigates the application of nonlinear model predictive control (MPC) for stabilizing the longitudinal dynamics of a highly unstable aircraft model. MPC operates by solving an optimization problem to determine the optimal control input at each sampling step, starting from the current state and predicting the system's future behavior. A key advantage is the ability to consider state and input constraints, ensuring compliance with the physical and operational limitations of the aircraft. Unlike conventional MPC applications in aerospace that often linearize the aircraft dynamics and focus on outer loop autopilot control, the nonlinear system formulation of an aircraft including a detailed aero-data model is used in the optimization problem for inner loop stabilization, enhancing control accuracy and robustness across the entire operating flight envelope. The implementation with GRAMPC, a gradient-based MPC toolbox, allows for a fast optimization by utilizing the gradients of the system dynamics as well as the aerodynamic coefficients, leading to a real-time capable application that can accurately predict future system behavior. By additionally considering current pilot stick deflections in the optimization problem, the control system dynamically adjusts to the pilot's commands, creating a responsive and adaptive flight control system relevant for real-flight operations. The approach is validated for the ADMIRE model (“Aero-Data Model In a Research Environment”), a high-fidelity aircraft benchmark model that includes transport delay, actuator dynamics, and uncertainties. Simulations demonstrate the improved performance compared to a conventional differential PI controller. Additionally, the effectiveness and inherent robustness of the MPC strategy are highlighted by stability evaluations across the aircraft's flight envelope and under aerodynamic uncertainties. The results indicate that the MPC approach provides an improvement in handling the nonlinear dynamics and uncertainties of an aircraft compared to traditional control methods. The method's real-time capability and ability to process real-time pilot inputs ensure its applicability in practical flight scenarios. The study concludes that the proposed MPC framework offers a reliable and efficient solution for inner loop stabilization, being beneficial for advanced flight control systems.
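        The receding-horizon idea can be sketched generically (this is not GRAMPC, nor the ADMIRE model): at each step an input sequence is optimized over a short horizon on a simple nonlinear model, only the first input is applied, and the horizon shifts. The toy dynamics, weights, and limits below are assumptions.
        ```python
        import numpy as np
        from scipy.optimize import minimize

        dt, horizon, u_max = 0.05, 15, 2.0

        def step(x, u):
            """Placeholder unstable short-period-like model, x = [alpha, q] (made up)."""
            alpha, q = x
            alpha_dot = q + 0.3 * alpha                # positive coupling makes it unstable
            q_dot = 2.0 * alpha - 0.5 * q + 3.0 * u    # assumed elevator effectiveness
            return np.array([alpha + dt * alpha_dot, q + dt * q_dot])

        def cost(u_seq, x0):
            x, total = np.array(x0), 0.0
            for u in u_seq:
                x = step(x, u)
                total += 10.0 * x[0] ** 2 + x[1] ** 2 + 0.1 * u ** 2
            return total

        x = np.array([0.2, 0.0])                       # initial angle-of-attack disturbance
        u_guess = np.zeros(horizon)
        for _ in range(40):                            # closed-loop simulation
            res = minimize(cost, u_guess, args=(x,),
                           bounds=[(-u_max, u_max)] * horizon, method="L-BFGS-B")
            x = step(x, res.x[0])                      # apply only the first optimal input
            u_guess = np.roll(res.x, -1)               # warm-start the next solve
        print(x)                                       # driven back toward the origin
        ```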
      • 09.0411 Dual-Control Autopilot Design for Combined Tail& Divert Thruster Controlled Hit-to-Kill Interceptor
        Daniel Boudreau (Raytheon) Presentation: Daniel Boudreau - Thursday, March 6th, 04:55 PM - Lake/Canyon
        Keywords: Direct Hit, Dual-Control, Autopilot. Abstract: The direct hit requirement imposed on current generation missile interceptor design demands a fast airframe response. In aerodynamic tail-controlled missiles, system response time is driven by dynamic pressure and the transient “wrong way” effect inherent to tail-controlled designs. With the high angle rates of change typically seen in terminal geometries, dynamic pressure and the “wrong way” effect limit the immediate correction commanded by guidance systems at a most critical time. A solution addressing both issues is the addition of divert lateral thrusters. Thrusters provide additional control authority in low dynamic pressure regions and counteract the “wrong way” effect. This paper presents a dual-control missile autopilot architecture for simultaneous thruster and aerodynamic control. Defining a new missile trim condition, it considers the control contributions of thrusters at autopilot design time in combination with aerodynamic tail control. This material is intended for professionals interested in a dual-control interceptor design seeking a practical design methodology. Introduction (from the completed paper): The capabilities of aerial threats are continually evolving. Modern threat designs have increased maneuverability, payload lethality, and payload durability. Hardened payloads have been rigidized and designed for enhanced survivability. To combat these advancements, surface-to-air interceptors have been given a direct hit requirement which defines success as a warhead-to-warhead collision. Direct warhead-to-warhead collision disintegrates threatening ordnance at intercept, whereas legacy designs employing proximity fuzes and fragmentation warheads fail to do so. A high-altitude intercept is preferred to allow non-disintegrated hazardous material time to disperse within the atmosphere prior to reaching ground level, reducing the likelihood of concentrated toxins causing collateral damage. In a benign (noise free) environment, the most significant contributor to miss distance is slow system response. The relationship between flight control system time constant and miss distance is shown in Figure 1. As the flight control system time constant decreases, so does the miss distance. The two factors driving the response time of aerodynamic tail-controlled missiles are dynamic pressure and the “wrong way” effect. A high-altitude engagement requirement results in the interceptor operating in a low dynamic pressure environment. In the thin air, aerodynamic forces acting on control surfaces are greatly decreased, and the flight control system exhibits a slowdown (i.e., increase in time constant) in its ability to respond to guidance commands. The immediate “wrong way” transient motion of the tail-controlled architecture also increases the flight control system time constant. This momentary behavior lasts until the interceptor’s angle-of-attack is sufficient to accelerate it in the command direction. Prior to this, the forces acting on the rear control surfaces (deflected to rotate the system into the command direction) act counter to and exceed forces acting on the body, pushing the interceptor opposite (i.e., “wrong way”) to the command direction. This phenomenon takes time to overcome and worsens at higher altitudes. The combination of slow system response due to low dynamic pressure and the “wrong way” effect causes miss distance to increase. ...
      • 09.0413 A Model-Free Data-Driven Algorithm for Continuous-Time Control
        Sean Bowerfind (Auburn University / AFIT), Matthew Kirchner (Auburn University), Gary Hewer (Naval Air Weapons Center ), David Robinson (NAWCWD), Paula Chen (), Alireza Farahmandi (Naval Air Systems Command), Katia Estabridis (Naval Air Warfare Center Weapons Division) Presentation: Sean Bowerfind - Thursday, March 6th, 05:20 PM - Lake/Canyon
        Presented is an algorithm for designing a stabilizing controller that converges to an LQR design when the underlying system is linear. The algorithm does not require knowledge of the system model, instead using only input-output data. The data are entirely off-policy and are not required to come from an optimal input. This paper presents the derivation and shows examples applied to both linear and nonlinear system dynamics inspired by air vehicles.
    • Christopher Elliott (CMElliott Applied Science LLC)
      • 09.0606 Verification and Clearance of Flight Control Software for High-Altitude Long Endurance Aircraft
        Christian Weiser (German Aerospace Center - DLR) Presentation: Christian Weiser - Thursday, March 6th, 09:00 PM - Lake/Canyon
        This paper presents a clearance process for the flight control software of a very lightweight and flexible High-Altitude Long Endurance (HALE) aircraft. The selected application example is the German Aerospace Center’s (DLR) HALE platform “HAP-alpha”, which is used to illustrate the presented validation and verification (V&V) framework. The V&V of the flight control software is an important prerequisite for the integration on the actual hardware, which is taking place in the current project phase. The DLR HALE project was launched to conduct satellite-like missions, e.g. surveillance and earth observation with optical and radar payloads at much lower cost than low earth orbit satellites. Moreover, continuous observation of one location or a change of observation location is possible with the aircraft. Additionally, the opportunity of using multiple aircraft exists, to cover wider areas or, e.g., provide telecommunication services for remote areas within a relatively short time. The aircraft is projected to fly solar and battery powered at an altitude of up to 80,000 ft with a payload of 5 kg. The expected average mission duration is around 60 – 90 days. Limiting factors are the available amount of solar energy in higher latitudes and practical considerations, such as the maximum amount of flight hours between scheduled maintenance, in lower latitudes. However, in lower latitudes the available solar energy would theoretically allow an unlimited mission duration. The design and development of the flight control software for the HALE aircraft has been discussed in previous work. The major novelty of the control system design is the special type of aircraft configuration and the consideration of the aircraft’s flexible modes for the primary flight control design. This was achieved by targeting minimum structural loads via the model-based design approach. Furthermore, the integrated V&V process which is presented here can be generalized for any flight control law development. The V&V framework assures the functional, performance, and software quality which allows entry into the flight-testing phase of an aircraft. This is important, since there is little experience and there are few general practices for control design and validation for HALE aircraft. The developed V&V framework includes unit testing using simulation scenarios which cover all nominal flight conditions, as well as disturbance and fault scenarios. Additionally, code coverage is addressed and investigated as a driver for extending the unit test scenarios, where a high percentage in decision coverage is used as an indicator of covering all valid mode and input combinations. In a final step, Monte-Carlo simulation is employed in order to test the robustness of the flight control laws in the presence of critical aerodynamic uncertainties and disturbances, sensor delays and noise levels, and to identify critical uncertainty and parameter combinations.
      • 09.0607 Automatic Flight Tests Execution on a Distributed Electrical Propulsion Demonstrator
        Nicola Genito (CIRA Italian Aerospace Research Center), Gianfranco Morani (CIRA Italian Aerospace Research Center), Luca Garbarino (CIRA), Gianluigi Di Capua () Presentation: Nicola Genito - Thursday, March 6th, 09:25 PM - Lake/Canyon
        Platform 1 of the Clean Sky 2 Large Passenger Aircraft - Integrated Aeronautics Demonstration Platform (LPA-IADP) is aimed at identifying strategies to reduce the emissions of large passenger aircraft. In this frame, the use of Hybrid Electric Propulsion (HEP) was identified as a possible solution and, specifically, Distributed Electrical Propulsion (DEP) was found to be an important enabler for future HEP development. To foster DEP adoption for large passenger aircraft architectures, a strategic roadmap was implemented including flight testing of a flying demonstrator called D08 “Radical Configuration Flight Test Demonstrator”. The D08 demonstrator is a Distributed Electrical Propulsion aircraft with 6 propellers, a weight of 170 kg, and a wing span of 4 m. Within this plan, the Italian Aerospace Research Centre (CIRA) is in charge of developing an aircraft Guidance, Navigation and Control (GNC) system to be integrated in a dedicated testing framework for supporting demonstration with the D08. This work will benefit from the GNC system already developed by CIRA in the same project for supporting flight testing of the Dynamically Scaled Vehicle, the D03 demonstrator. The D03 testing framework was developed with the objective of supporting the demonstration campaign. This was achieved through the inclusion of an instruction language that allows performing complex missions and test maneuvers autonomously, increasing repeatability in flight testing. Moreover, the modular SW architecture was specifically developed to allow users to integrate their own modules, e.g. control laws, following a sequence of simple steps. These two characteristics, repeatability and modularity, can also be combined thanks to the flexibility of the GNC SW and to the automation instruction language mentioned earlier, which allows selecting custom control modules instead of the available standard ones for performing an automated sequence of experimental maneuvers and for testing different control strategies. On top of a summary of the On-board Guidance, Navigation and Control system and Ground Remote Pilot Station developed by the authors, this paper will present the in-flight demonstration of the Guidance, Navigation and Control system through flight data recorded during the activities carried out at Grottaglie airport (Italy) in May, June, and July 2024. Several flight tests have been performed with the On-board Guidance, Navigation and Control system in control of the D08 Scaled Flight Demonstrator, and several maneuvers have been automatically executed by the On-board Guidance, Navigation and Control system. These tests supported the parameter identification of the aircraft characteristics and the performance analysis of some Multi Input Multi Output control laws designed to take advantage of the intrinsic control redundancy of the vehicle configuration. The performance of the control laws was evaluated in nominal and off-nominal conditions (e.g. engine, aileron, and rudder failures). During the flight tests, as required, the On-board Guidance, Navigation and Control system allowed the accurate and repeatable execution of a set of pre-defined maneuvers and made it possible to easily and accurately change the characteristics of the maneuvers during the flight test campaign.
  • Kristin Wortman (Johns Hopkins University Applied Physics Laboratory) & Virgil Adumitroaie (Jet Propulsion Laboratory)
    • Virgil Adumitroaie (Jet Propulsion Laboratory) & Seungwon Lee (NASA Jet Propulsion Laboratory)
      • 10.0101 Accurate, GPU Accelerated Solar Radiation Pressure Modeling for Exoatmosphere Trajectory Simulation
        Asher Elmquist (Jet Propulsion Laboratory), Vivian Steyert (Jet Propulsion Laboratory), Spencer Diehl (), Abhinandan Jain () Presentation: Asher Elmquist - Sunday, March 2nd, 04:30 PM - Cheyenne
        For long-duration space flight, solar radiation pressure (SRP) can induce non-negligible forces on the spacecraft, requiring accurate modeling for the analysis of uncontrolled flight trajectories. These forces are the result of absorbed and reflected radiation on the spacecraft’s surface, a phenomenon dependent on geometric and material properties. This paper describes the development, validation, and benchmarking of a high-fidelity solar radiation pressure model for fast and efficient simulation of the uncontrolled Earth approach flight of the Mars Sample Return Earth Entry System (MSR EES). The described SRP model is implemented in JPL’s Dynamics and Real-Time Simulation (DARTS) and leverages DARTS’s Inter-planetary Rendering for Imaging and Sensors (IRIS) for GPU-accelerated ray tracing of incident radiation on a spacecraft. By leveraging ray tracing techniques, the SRP model includes the effect of detailed geometric features and spatially varying material properties, and accurately captures shadowing effects, supporting complex scenes such as non-convex geometries and occluding spacecraft. The SRP model calculates the force and moment induced by radiation on the vehicle as a function of absorbed and reflected radiation, accounting for per-ray incident lighting angle and material properties. Additionally, since the ray tracing of the SRP model is efficient, evaluation is performed throughout the simulation, allowing simulation of reconfigurable spacecraft geometries, such as the unfurling of solar panels. The applied force and moment calculated through IRIS are incorporated in DARTS’ trajectory simulation, allowing analysis of the absolute SRP effect for a given simulation or set of simulations via Monte Carlo simulation. In this contribution, we show how DARTS can carry out exo-atmosphere simulations to analyze the effect of radiation pressure. Along with the model, we describe the cross-validation of DARTS’s SRP simulation with other tools. In addition to cross-validation, we include performance benchmarking of the DARTS exo-atmosphere simulation, showing a 3-day trajectory simulation with SRP forces in less than 5 minutes on a single workstation with negligible error accumulation.
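        A minimal flat-plate SRP force formula (the standard absorbed-plus-specularly-reflected form, far simpler than the ray-traced model described here, and ignoring shadowing and diffuse reflection) can be sketched as follows; the flux, area, and reflectivity values are generic assumptions.
        ```python
        import numpy as np

        SOLAR_FLUX = 1361.0        # W/m^2 at 1 AU (assumed)
        C = 299_792_458.0          # speed of light, m/s

        def srp_force(area_m2, normal, sun_dir, specular_reflectivity=0.3):
            """Force on a flat plate from absorbed plus specularly reflected sunlight.
            `sun_dir` is the unit propagation direction of sunlight (Sun -> plate),
            `normal` is the outward unit surface normal."""
            cos_theta = -float(np.dot(sun_dir, normal))
            if cos_theta <= 0.0:
                return np.zeros(3)                        # plate faces away from the Sun
            pressure = SOLAR_FLUX * cos_theta / C         # intercepted momentum flux
            absorbed = (1.0 - specular_reflectivity) * sun_dir
            reflected = -2.0 * specular_reflectivity * cos_theta * normal
            return pressure * area_m2 * (absorbed + reflected)

        # Example: 1 m^2 plate facing the Sun head-on (force of order a few micronewtons)
        print(srp_force(1.0, normal=np.array([0.0, 0.0, 1.0]),
                        sun_dir=np.array([0.0, 0.0, -1.0])))
        ```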
      • 10.0103 Analysis of Various Injector Shapes to Improve Air-Fuel Mixing inside a Scramjet Combustor
        Alhanouf Eshtairy (Cranfield University), Alexandre Millot (Cranfield University) Presentation: Alhanouf Eshtairy - Sunday, March 2nd, 04:55 PM - Cheyenne
        The efficiency of air-fuel mixing in scramjet combustors is crucial for achieving hypersonic speeds, as the residence time is in milliseconds. A parametric study of air-fuel mixing for different injector designs in the HyShot-II scramjet combustion chamber is performed. This work focuses on a numerical study on mixing efficiency as hydrogen is injected at sonic speed into a 2.4 Mach number air cross-flow. Numerical simulations with three-dimensional Reynolds-Averaged Navier-Stokes (RANS) equations and the k-omega Shear Stress Transport (SST) turbulence model investigate an elliptical injector under various orientations, in addition to the impact of a fin ahead of the injector. The study is conducted on non-reacting flow to isolate the mixing process from combustion effects. Key aspects include validating the computational model against a theoretical framework, the capability to capture general vortex structures, and expanding the research by employing Detached Eddy Simulation (DES) to investigate unsteady flow phenomena on the design that offered improvement in air-hydrogen mixing. Preliminary results indicate up to a 17% improvement in mixing efficiency near the injector with the ellipse and fin combination and 2.3% across the length of the combustion chamber, showing reduced mixing length and better fuel-air homogeneity compared to traditional circular injection designs.
      • 10.0104 Multiphase Compressibility Correction for Supersonic Flow Using Lattice Boltzmann Method
        Hemant Joshi (University of Hertfordshire) Presentation: Hemant Joshi - Sunday, March 2nd, 05:20 PM - Cheyenne
        This research delves into the intricate interactions of fluid particles around lifting surfaces in high-speed supersonic flows. A novel compressibility correction method is presented, utilizing a multiphase solver within the lattice Boltzmann method (LBM) framework. The computational domain is divided into numerous cells, with the lifting surface modelled as an obstacle matrix. This setup allows for identifying elements as either fluid particles or solid walls, facilitating detailed mesoscopic-level simulations of fluid dynamics. The computational solver iteratively computes density and velocity distributions at lattice nodes, propagating these values to neighbouring cells to enforce boundary conditions. A key feature of this study is the integration of the Bhatnagar-Gross-Krook (BGK) model’s collision factor, which is essential for determining density ratios and effective velocity components, ensuring accurate modelling of compressible flows. The research further explores the adaptability of LBM when combined with vortex lattice methods (VLM), enabling the characterization of computational domains as isotropic or anisotropic. This combination effectively captures multiphase phenomena and fluid-solid interactions in high-speed conditions, outperforming traditional methods. By incorporating compressibility correction within the VLM framework, the study enhances the understanding of multiphase fluid dynamics and boundary interactions at supersonic speeds. This combined approach addresses limitations in modelling individual fluid particles, providing a robust solution for simulating complex compressible flow dynamics. The LBM's versatility in handling complex interfaces and managing compressible flows is highlighted, making it an ideal tool for studying shock waves and compressibility effects in supersonic regimes. The research methodology involves discretizing each flow particle into numerous degrees of freedom, adopting various lattice forms, such as D2Q9 for two-dimensional analysis and D3Q19 for three-dimensional analysis. The distribution function f(ξ, x, t) defines non-linear fluid characteristics at the microscopic scale. The discrete velocity Boltzmann equation (DVBE) is employed, with the equilibrium distribution function adjusted to account for thermal and compressible limitations. The Chapman-Enskog expansion is used to derive the macroscopic equations of the hybrid method. The hybrid approach combines LBM with the entropy-inspired energy equation using the finite volume method, applicable to both subsonic and supersonic regimes. The LBM's efficiency and ability to model high-speed flows are further enhanced by integrating advanced boundary conditions and collision factors. The study incorporates the use of the double distribution function (DDF) around the supersonic boundary, which provides significant benefits in terms of accurately simulating polyatomic gases at high speeds by extending numerical equilibrium approaches to reproduce multiple moments of the Maxwell-Boltzmann distribution. This combined approach addresses limitations in modelling individual fluid particles, providing a robust solution for simulating complex compressible flow dynamics with faster computation time.
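        For readers new to LBM, the core BGK collide-and-stream update on a D2Q9 lattice (the standard weakly compressible form, not the compressibility-corrected multiphase solver described here) looks roughly like the sketch below; grid size and relaxation time are arbitrary choices.
        ```python
        import numpy as np

        # D2Q9 lattice velocities and weights
        c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                      [1, 1], [-1, 1], [-1, -1], [1, -1]])
        w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

        def equilibrium(rho, u):
            """Standard second-order Maxwellian expansion (cs^2 = 1/3 in lattice units)."""
            cu = np.einsum("qd,xyd->qxy", c, u)            # c_i . u at every node
            usq = np.einsum("xyd,xyd->xy", u, u)
            return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

        def bgk_step(f, tau):
            """One collide-and-stream update of the distributions f[q, x, y] (periodic)."""
            rho = f.sum(axis=0)
            u = np.einsum("qd,qxy->xyd", c, f) / rho[..., None]
            f_post = f - (f - equilibrium(rho, u)) / tau   # BGK collision
            for q, (cx, cy) in enumerate(c):               # streaming along lattice directions
                f_post[q] = np.roll(np.roll(f_post[q], cx, axis=0), cy, axis=1)
            return f_post

        # Sanity check: a uniform fluid at rest stays at rest with unit density
        f = equilibrium(np.ones((32, 32)), np.zeros((32, 32, 2)))
        f = bgk_step(f, tau=0.8)
        print(f.sum(axis=0).mean())    # ~1.0
        ```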
      • 10.0105 Dshell-DARTS: A Reusability-focused Multi-mission Aerospace and Robotics Simulation Toolkit
        Juan Garcia Bonilla (Jet Propulsion Laboratory), Carl Leake (), Asher Elmquist (Jet Propulsion Laboratory), Aaron Gaut (), Tristan Hasseler (Jet Propulsion Laboratory), Vivian Steyert (Jet Propulsion Laboratory), Abhinandan Jain () Presentation: Juan Garcia Bonilla - Sunday, March 2nd, 09:00 PM - Cheyenne
        The Dshell-DARTS framework is a versatile simulation toolkit designed for aerospace and robotics applications. Developed at NASA's Jet Propulsion Laboratory (JPL), it supports high-fidelity simulations for a wide range of systems, including interplanetary, orbital, atmospheric, and surface vehicles, as well as robotic platforms. Dshell-DARTS integrates key capabilities in support of these simulations, such as rigid and flexible multi-body dynamics, sensor simulation, collision modeling, terramechanics computations, and gravity and atmospheric modeling. This paper outlines the challenges of developing a simulation framework capable of supporting such diverse domains and capabilities. Reusability is identified as a key feature in meeting these challenges, and some of the architectural choices adopted in Dshell-DARTS that enable and promote reusable capability development are explored. This reusability-focused architecture reduces the need for bespoke simulation creation, which accelerates development and testing, and enhances robustness. The Dshell-DARTS framework has been successfully used in numerous NASA missions, demonstrating its versatility in supporting complex and diverse mission requirements.
      • 10.0107 Digital Lunar Exploration Sites (DLES) Terrain Crafting
        Cory Foreman (METECS), Edwin Crues (NASA Johnson Space Center), Daniel Fenn (METECS), Jack Kincaid (METECS) Presentation: Cory Foreman - Sunday, March 2nd, 09:25 PM - Cheyenne
        Humans will soon be returning to the surface of the Moon with NASA’s Artemis program. The Artemis program is an international collaboration that will consist of a complex series of space systems and missions to explore the lunar surface and pave the way for the future exploration of Mars. NASA and its partners rely heavily on simulation for lighting and navigation studies as well as training astronauts, flight controllers, and mission support staff. The NASA Exploration Systems Simulations (NExSyS) team in the Simulation and Graphics Branch (ER7) in the Engineering Directorate at NASA’s Johnson Space Center has built up many simulation products to support this effort, one of which is the Digital Lunar Exploration Sites (DLES). DLES is a collection of products used to simulate and render the lunar surface in a digital environment. We discussed and presented an overview of the DLES products at the 2022 IEEE Aerospace Conference in Big Sky, MT with a paper titled "Digital Lunar Exploration Sites". This “DLES Terrain Crafting” paper will expand on the information previously provided in “DLES” paper and dive deeper into the details of the terrain crafting process and the toolsets used to support this task. The best digital data currently available of the lunar surface is provided by the Lunar Reconnaissance Orbiter (LRO). Its Lunar Orbiter Laser Altimeter (LOLA) achieves an impressive resolution of 5m per pixel at the Lunar South Pole (LSP) and can generate datasets covering a large continuous region near the LSP. There are a few additional methods, such as Shape from Shading which can infer higher resolution data (up to 1m per pixel) from the LRO Narrow Angle Camera (NAC) images. However, surface-based simulations require higher-resolution data, and this paper will discuss the process of enhancing the terrain to meet that need. The process begins with capturing statistical data of craters in the regions of interest using images provided by the LRO NAC. This data is then used to scatter artificial features which are not captured in the truth data, resulting in an enhanced DEM with a much higher resolution of 20cm per pixel. Many tools were built up to assist in the creation of these artificial Digital Elevation Models (DEM), which this paper will discuss in detail. DEMs themselves are a very powerful representation of a planetary surface, and many operations and tools can utilize the data they contain. This paper includes a description of the rendering of the lunar surface in a graphics engine, generation of contact patches to simulate tire to ground interaction, and ray tracing utilities to model Line of Sight (LOS) interactions with the terrain. This paper will also explore some new tool sets currently under development which aim to utilize Machine Learning (ML) to assist in the identification of craters from LRO NAC imagery. While this is not a novel idea, the NExSyS team is developing a unique approach which may result in more robust identification of crater characteristics.
      • 10.0109 Quantum Computing Use Cases & Impacts for Aerospace Industry
        Charles Chung (IBM Quantum), Thomas Ward (IBM), Bob Dirgo (Ohio Aerospace Institute) Presentation: Charles Chung - Sunday, March 2nd, 09:50 PM - Cheyenne
        Quantum computers have the potential to be transformational for the aerospace industry. However, many aerospace organizations have not considered how this technology will impact their enterprises. Quantum computers are expected to have a greater effect than simply reducing computational runtimes. Understanding their impact requires not only understanding what quantum computers are and are not capable of, but can also require re-examining well-established workflows and questioning long-held assumptions. Quantum computation is currently in a stage of rapid maturation. In 2023, a quantum computer demonstrated a computation that goes beyond the capabilities of brute-force classical computation. Building upon this, the hardware is rapidly progressing, with qubit counts increasing and gate fidelities improving, such that a quantum-error-corrected quantum computer has been announced to arrive in 2029. In parallel, algorithm advancements have decreased hardware requirements, in some cases by orders of magnitude. The result is that first applications are expected in the next few years, and aerospace organizations are advised to become quantum ready now. In the spring of 2024, representatives from around the US aerospace industry gathered to identify how the expected power of quantum computers may impact the aerospace industry. The outcomes and recommendations from the workshop are summarized in this paper. The paper discusses the current status and trends in quantum computing, the four main application areas of quantum computing (quantum machine learning, search & optimization, chemical & material simulation, and linear algebra), illustrative use case examples from each area, and how these map to challenges in the aerospace industry.
      • 10.0110 Development of Spacecraft Molecular Accumulation and Contamination Kinetics Simulator (SMACKS)
        Maxwell Martin (NASA Jet Propulsion Laboratory), Anthony Wong (Jet Propulsion Laboratory), William Hoey (NASA Jet Propulsion Laboratory) Presentation: Maxwell Martin - Monday, March 3rd, 08:30 AM - Cheyenne
        The Near-Earth Object Surveyor (NEO Surveyor) mission is a 50 cm diameter infrared telescope capable of detecting asteroids which will help advance NASA’s planetary defense efforts to discover and characterize potentially hazardous near-earth objects in our solar system. As an infrared observatory, NEO Surveyor’s sensitive optical surfaces and radiators operate at temperatures <60K, which can readily collect significant amounts of contamination and lead to performance degradation if not properly mitigated via material selection, bakeouts, thermal control and mechanical design. The most critical on-orbit portion of NEO Surveyor’s contamination control strategy is during the initial period after launch when surfaces are warmest, leading to the highest outgassing rates. After an initial decontamination period, the optics are cooled down to their cryogenic operational temperatures which leads to a highly transient thermal profile for both outgassing sources and sensitive surfaces. In order to model the contamination collection during this period, a novel Python-based contamination analysis tool entitled “Spacecraft Molecular Accumulation and Contamination Kinetics Simulator” (SMACKS) was developed and tested. This Python-based tool is an evolution of previous workbook-based models such as those applied to NASA’s Psyche and Lunar Gateway programs. Compared to earlier models which performed simplified steady state calculations with limited abilities to account for temperature-dependent outgassing rates, SMACKS allows for evaluation of fully transient thermal profiles, inclusion of detailed material outgassing data sets, and automated creation of results visualizations. In addition, the Python-based tool is more robust to user-error as it requires significantly fewer manual inputs and manual inspections/testing than the workbook models. This work describes the development, testing and application of SMACKS for the NEO Surveyor mission, and discusses future applications and enhancements of the code.
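        A generic transient outgassing-and-deposition balance (not the SMACKS implementation) can be sketched as a simple time-integration in which a temperature-dependent outgassing rate is transported to a cold surface through a view factor and sticking coefficient; every rate, temperature profile, and coefficient below is a placeholder assumption.
        ```python
        import numpy as np

        def outgassing_rate(temp_k, rate_ref=1e-10, temp_ref=323.0, e_over_r=6000.0):
            """Arrhenius-style mass flux (g/cm^2/s) scaled from a reference temperature."""
            return rate_ref * np.exp(-e_over_r * (1.0 / temp_k - 1.0 / temp_ref))

        # Hypothetical transient profile over the first 30 days after launch
        t = np.linspace(0.0, 30 * 86400.0, 1000)              # seconds
        source_temp = 320.0 - 40.0 * (t / t[-1])              # source cools from 320 K to 280 K

        view_factor, sticking = 0.05, 1.0                     # assumed geometry and cold optic
        source_area_cm2, optic_area_cm2 = 1.0e4, 1.0e4

        # Mass rate arriving at the optic, then trapezoidal integration over time
        flux_g_per_s = outgassing_rate(source_temp) * source_area_cm2 * view_factor * sticking
        deposit_g_per_cm2 = np.sum(0.5 * (flux_g_per_s[1:] + flux_g_per_s[:-1]) * np.diff(t)) / optic_area_cm2
        print(f"Accumulated deposit: {deposit_g_per_cm2:.2e} g/cm^2")
        ```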
      • 10.0112 A Decoupled Approach to Fluid-Thermal Coupling in High-Speed Flow with Convection Boundary Condition
        Gorkem Atay (ASELSAN Inc.), Tuğcan Selimhocaoğlu (ASELSAN Inc.) Presentation: Gorkem Atay - Monday, March 3rd, 08:55 AM - Cheyenne
        In high-speed fluid-thermal coupling problems, solution accuracy depends on the frequency of information transfer at the solid-fluid interface. Whereas monolithic approaches and strongly coupled schemes provide high accuracy, they are computationally expensive during preliminary design phases compared to decoupled methods. Hence, in this study, a decoupled methodology is developed to efficiently and accurately calculate the transient temperature distribution in a missile skin during high-speed flight. However, limited information transfer at the interface can lead to inaccuracies that increase over time, particularly because the non-adiabatic wall temperature, which relies on accurate information transfer at the interface, significantly affects the boundary layer and temperature gradient, influencing the level of aerodynamic heating; the flow properties that shape the boundary layer are highly dependent on temperature. Despite this, some literature reports promising results for decoupled methods in high-speed flows. To mitigate inaccuracies, the proposed methodology uses two approaches: applying the convection heat transfer coefficient (HTC), derived from flow analyses, as a thermal boundary condition (BC), since it exhibits weak coupling with the wall temperature; and, in the flow analyses, selecting a constant wall temperature near the recovery temperature in the process of obtaining the HTC, to better simulate real-world conditions. The calculation algorithm is itemized: (1) Time Discretization: The flight trajectory, involving changes in Mach number, altitude, and flow angles with time, is divided into sufficiently small time-steps to accurately capture the problem's physics. (2) Adiabatic Wall (Recovery) Temperature Calculation: Using the flight conditions at the discrete points of the trajectory determined in Step (1), steady-state flow analyses are performed with an adiabatic no-slip wall BC to obtain the adiabatic wall temperature distribution. (3) Heat Flux Calculation: Constant wall temperatures are assigned to the external boundary based on the flight conditions for each discrete point of the trajectory determined in Step (1) to calculate wall fluxes. These temperatures are set as either 1.05 times the total air temperature for high temperatures or the total air temperature plus approximately 15 K for lower temperatures. (4) HTC Value Calculation: The Modified Newton's Law is utilized to calculate the HTC values, the only unknown variables, by superimposing the results obtained from the flow analyses. At this stage, the convection BC requirements for the heat diffusion analyses are established as functions of time and space. (5) Application of Convection BC and Temperature Distribution Calculation: The recovery temperature and HTC values are converted to solver format and applied to the solid domain as a convection thermal BC at each time-step. Then the heat diffusion equation is solved for each time-step from Step (1), yielding the temperature distribution throughout the missile skin as a function of time. A quasi-transient approach is used to reduce global runtime, as the flow field quickly reaches a steady state, whereas heat diffusion within the solid progresses slowly due to the significant difference in their time scales. The comparison is made with a Surface-to-Air missile flying in the high-supersonic regime, showing good agreement with a maximum discrepancy of 1.9%.
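        Steps (3)-(5) can be illustrated with a toy one-dimensional version: an HTC is extracted from a computed wall heat flux via Newton's law of cooling referenced to the recovery temperature, then applied as a convection boundary condition to an explicit transient conduction solve through the skin thickness. All numbers below (flux, recovery temperature, material properties, skin thickness) are placeholders, not the missile case in the paper.
        ```python
        import numpy as np

        # Step (4): extract an HTC from a hypothetical CFD wall flux at an imposed wall temperature
        def htc_from_flux(q_wall, t_recovery, t_wall_imposed):
            """Newton's law of cooling referenced to recovery temperature: q = h * (T_rec - T_wall)."""
            return q_wall / (t_recovery - t_wall_imposed)

        t_recovery = 800.0                  # K, from the adiabatic-wall analysis (placeholder)
        h = htc_from_flux(q_wall=50_000.0, t_recovery=t_recovery, t_wall_imposed=750.0)  # W/m^2.K

        # Step (5): explicit 1D transient conduction through the skin with a convection BC
        k, rho, cp = 15.0, 7800.0, 500.0    # steel-like skin properties (assumed)
        thickness, n = 0.003, 30
        dx = thickness / (n - 1)
        alpha = k / (rho * cp)
        dt = 0.4 * dx**2 / alpha            # stable explicit time step
        T = np.full(n, 300.0)               # initial skin temperature, K

        for _ in range(int(5.0 / dt)):      # simulate 5 s of aerodynamic heating
            Tn = T.copy()
            Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
            # Outer surface: convective heating balanced against conduction into the skin
            Tn[0] = T[0] + dt / (rho * cp * dx) * (h * (t_recovery - T[0]) - k * (T[0] - T[1]) / dx)
            Tn[-1] = Tn[-2]                 # inner surface treated as adiabatic
            T = Tn
        print(f"Outer-skin temperature after 5 s: {T[0]:.1f} K")
        ```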
      • 10.0113 On Loads and Aerodynamics Assessment of the Missiles
        Dariusz Sokolowski (Military Institute of Armament Technology), Maciej Cichocki (Military Institute of Armament Technology), Szymon Elert (Military Institute of Armament Technology), Michal Pyza (Military Institute of Armament Technology) Presentation: Dariusz Sokolowski - Monday, March 3rd, 09:20 AM - Cheyenne
        The paper contains an overview of selected results from multiple computational multiphysics simulations (CFD, FEM, CLA – FSI) of different missile components. It discusses the importance of certain results, which led to the development of prototype air vehicles. During the project “Development of a Three-Stage Suborbital Rocket System to Lift Research Payloads”, co-financed by the Polish National Centre for Research and Development under contract no. POIR.01.01.01-00-0834/19-00, a series of technologies for tail-controlled rockets was developed. Rapid prototyping and range testing with limited funds, research laboratory equipment, and staff led to the development of a series of simple analytical tools, whose results were then verified mainly in CFD and FEM software to assess overall missile performance, limitations on the control system, aeroelastic behaviour, and structural responses. CFD simulations were run locally and using parallel computing on a CPU cluster. The simulations covered the areas of: external aerodynamics, heat transfer of the solid rocket motor and nozzle, thermal loads for strains in FSI, and load distribution for composite structure optimization. FEM simulations included: basic load analysis of a casing for first-geometry stress assessment, thermal strains of the casing and joints under aerothermodynamic and propellant loading, flight termination system and pyrotechnical stage separation analysis, composite solid rocket motor and nozzle design, and final mass optimization after review. Analytical tools were mainly helpful in the areas of initial loads and sizing, structural analysis, motor internal ballistics, and manufacturing and prototyping tooling development. Those results will be briefly discussed and presented together with information on why certain results were of critical importance for the project timeline. General knowledge of mechanics and avionics and the power of numerical modelling led to a design which was proven successful during numerous flight tests of a subscale missile. Upcoming tests will verify the full-scale vehicle.
    • Ronnie Killough (Southwest Research Institute) & Kristin Wortman (Johns Hopkins University Applied Physics Laboratory)
      • 10.0201 A Fractal UML Design Pattern for Collaborative Object Trees
        Jeremiah Finnigan (Johns Hopkins University/Applied Physics Laboratory) Presentation: Jeremiah Finnigan - Sunday, March 2nd, 04:55 PM - Electronic Presentation Hall
        This paper presents a novel UML design pattern named Fractal. This object behavioral design pattern enables the nodes of a tree that is constructed of fractal objects to collaborate to produce a collective behavior and/or construct a desired output. The Fractal design pattern has been successfully used to implement a fractal regression test framework for testing the AMMOS CFDP ground software for the Europa Clipper mission. This framework organizes the regression test suite in the form of a test tree, in which each node of the tree is a test script that implements the Fractal design pattern such that each test script has the ability to act as a test runner for all of its child nodes, and each test script has the ability to report its cumulative results to its parent node. As a result, the test runner behavior is distributed throughout the entire test tree, and the nodes of the tree collaborate to produce a single regression test summary report in addition to each test script producing its own detailed test report file where necessary. In order to make this scheme work correctly when executing either the entire regression test tree or any subtree, it is necessary for each node in the tree to dynamically determine whether it is the root node, a leaf node, or a middle node during any given regression test run. The Fractal design pattern was also used to implement a fractal subtest framework for independent acceptance testing of the Interstellar Mapping and Acceleration Probe (IMAP) mission flight software. This framework enables subdividing long-running test scripts into smaller subtest scripts that can be dry-run and debugged independently with shorter run times, and allows the parent test script to execute any number/combination of these subtests in sequence without modification. The ability to independently dry-run and debug shorter-running subtest scripts facilitates more efficient use of very limited and expensive testbed resources that are in high demand by the FSW development team, the acceptance test team, and the autonomy system team. Although this design pattern was originally developed for the purpose of automating software regression testing, it may be applied to solving other types of problems as well.
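        A rough Python analogue of the pattern (hypothetical names; the actual framework targets the AMMOS/IMAP test environments) is a node class that runs its own test, acts as a runner for its children, rolls the results up, and lets whichever node starts the run act as the reporting root.
        ```python
        class FractalTestNode:
            """Each node both runs its own test and acts as a runner for its children,
            reporting cumulative results upward; the node that starts a run reports."""

            def __init__(self, name, test_fn=None, parent=None):
                self.name, self.test_fn, self.children = name, test_fn, []
                if parent is not None:
                    parent.children.append(self)

            def run(self, as_root=True):
                passed = failed = 0
                if self.test_fn is not None:                      # this node's own test, if any
                    passed, failed = (1, 0) if self.test_fn() else (0, 1)
                for child in self.children:                       # act as runner for the subtree
                    p, f = child.run(as_root=False)
                    passed, failed = passed + p, failed + f
                if as_root:                                       # dynamically the root of this run
                    print(f"{self.name}: {passed} passed, {failed} failed")
                return passed, failed

        # Example tree: calling run() on any subtree node makes it the reporting root for that run
        suite = FractalTestNode("regression_suite")
        group = FractalTestNode("cfdp_group", parent=suite)
        FractalTestNode("test_send", lambda: True, parent=group)
        FractalTestNode("test_receive", lambda: False, parent=group)
        suite.run()    # -> regression_suite: 1 passed, 1 failed
        ```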
      • 10.0203 Removing the Design from the Simulation Speeds DO-254 Pre-silicon Verification
        Hamilton Carter (Mentor Graphics a Siemens Business) Presentation: Hamilton Carter - Monday, March 3rd, 09:45 AM - Cheyenne
        One of the key limiters of any verification effort is simulation speed. To be fair, digital RTL and gate-level simulators have a lot of work to do: they track signal transitions (on four-level signals, no less) through the equivalent of millions of logic gates. Meanwhile, perhaps paradoxically, verification code tends to be written in a procedural rather than a synthesizable fashion and does not need or use many of the signal-level features provided by a typical digital simulator. The Universal Verification Methodology, known colloquially as UVM and implemented in SystemVerilog according to IEEE Standard 1800 in most simulators, encourages modeling signal-driven bus transactions as objects instantiated from classes. These objects contain the intent and content of the transaction (e.g., the address, direction, and data of a bus access), abstracting away signal-level transitions. Passing these transaction objects from component to component as the lingua franca of verification, the language used to model what we should expect to see from the device under test (DUT), confines the mechanics of signal interactions to a very thin layer at the interface between the DUT and the verification environment. With UVM abstracting verification away from signal-level interactions, a new question naturally arises: do we really need toggling clocks, perhaps thousands of times per created transaction, propagating all the internal signals of our DUT, simply to test our verification environment? The answer is no. By removing the internals of the DUT, leaving only the signal ring that interfaces with the verification test bench, and reducing the clock speed where possible, simulation speed for verification environment testing can be increased by factors of 10, 100, or even 1000x depending on the design. In this paper, we'll show how to set up a verification environment from first principles so that it can be developed without the presence of the design under test. This not only reduces simulation times substantially but can also give the verification team a head start on the design team as a new project ramps toward kickoff. The methodology discussed provides more than development speed; it can also be used to accelerate debug of the design and its associated verification environment. By removing the DUT and then pushing streams of transaction objects captured during simulation through the transaction-only verification test bench, we can quickly determine whether it is the DUT or the verification environment itself that is at fault. We'll review the concepts of transaction-level modeling as prescribed in UVM, show how to easily convert UVM agents (or replace them if necessary) to shunt their signal-based bus functional models (BFMs) out of the picture, and finally show how to put these concepts and processes together to create verification code more efficiently so that it is ready when, or even before, the DUT is.
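        The paper itself works in SystemVerilog/UVM; as a language-agnostic illustration of the underlying idea only, the Python sketch below replays a stream of previously captured bus transactions through a transaction-only scoreboard, with no signal-level DUT and no clock in the loop. All class names and fields here are hypothetical.

        ```python
        # Conceptual sketch only: transaction-level checking with no DUT and no clock.
        # The real methodology uses UVM/SystemVerilog; names here are hypothetical.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class BusTransaction:
            addr: int
            is_write: bool
            data: int

        class Scoreboard:
            """Reference model: predicts read data from previously observed writes."""
            def __init__(self):
                self.memory = {}
                self.errors = 0

            def check(self, txn: BusTransaction) -> None:
                if txn.is_write:
                    self.memory[txn.addr] = txn.data      # update the reference model
                else:
                    expected = self.memory.get(txn.addr, 0)
                    if txn.data != expected:
                        self.errors += 1
                        print(f"MISMATCH @0x{txn.addr:04x}: got {txn.data}, expected {expected}")

        # A stream of transactions, e.g. captured from an earlier full simulation.
        captured = [
            BusTransaction(0x10, True, 0xAB),
            BusTransaction(0x10, False, 0xAB),   # consistent with the write above
            BusTransaction(0x20, False, 0x00),   # unwritten address reads back 0
        ]

        sb = Scoreboard()
        for txn in captured:
            sb.check(txn)
        print("scoreboard errors:", sb.errors)
        ```

        Because nothing at the gate or signal level is simulated in this sketch, the checking logic itself can be exercised and debugged almost immediately, which is the kind of speedup the abstract attributes to removing the DUT from the simulation.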
      • 10.0205 Enabling Interoperable Digital Twins for Collaborative Lunar Exploration
        Edward Chow (Jet Propulsion Laboratory), Bingbing Li (California State University Northridge) Presentation: Edward Chow - Monday, March 3rd, 10:10 AM - Cheyenne
        Digital Twins (DT) hold tremendous promise to revolutionize science, technology, engineering, production, and operations. They have already been applied in fields such as scientific discovery, biomedical sciences, factory management, climate change, and smart cities. In the context of lunar exploration, where access to objects in space is challenging, DT is particularly critical. DT enables engineers on Earth to manage the health of systems on the Moon and facilitates virtual planning and testing of lunar activities before robotic systems execute potentially dangerous tasks. Furthermore, lunar exploration is evolving into a collaborative international effort, and DT has the potential to support testing across organizations globally. The primary challenge to achieving this vision of collaborative lunar exploration is the in