The realization of ultra-precision formation flying, which controls the relative positions of multiple satellites with micrometer accuracy, is anticipated to enable new scientific missions using inter-satellite laser interferometry, such as space-based gravitational wave telescopes. Achieving such precision requires transitioning from coarse relative position control using the Global Navigation Satellite System (GNSS), accurate to sub-centimeter levels, to ultra-precision control at the micrometer level. This necessitates the development of a satellite-to-satellite relative velocity sensor capable of wide-range measurements from sub-centimeters per second to micrometers per second. We have developed a novel method using the Doppler effect in an asymmetric Michelson interferometer to measure the relative velocity between satellites. This method involves modulating the laser frequency to introduce an effective offset to the signal frequency, enabling signed velocity measurements. In our research, we conducted tabletop experiments to validate the proposed method. The setup included a laser source, a delay line, and mirrors suspended on a stage and controlled by electromagnetic actuators to simulate the relative motion between satellites in a space mission. The results demonstrated that our approach could accurately measure velocities ranging from sub-centimeters per second to micrometers per second. We also compared offline and online analysis results of the output signal to investigate the accuracy and limitations of our method. Our research demonstrates that this method can measure satellite-to-satellite relative velocities from the sub-millimeter level to the micrometer level, significantly advancing the feasibility of ultra-precision formation flying missions. Future work will involve on-orbit verification to confirm its effectiveness, with the expectation of practical applications in various scientific missions.
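For intuition, here is a minimal sketch of the signed-velocity recovery described above, with an assumed wavelength and offset (neither is a parameter taken from the paper):

```python
# Hypothetical illustration of signed Doppler velocimetry with a frequency offset.
# The wavelength and offset values are assumptions, not parameters from the paper.
WAVELENGTH = 1.55e-6   # m, assumed telecom-band laser
F_OFFSET = 10e3        # Hz, assumed effective offset from laser frequency modulation

def signed_velocity(beat_frequency_hz: float) -> float:
    """Recover signed relative velocity from the measured beat frequency.

    Without the offset, the beat frequency |2*v/lambda| loses the sign of v;
    shifting the operating point by F_OFFSET makes approaching and receding
    motion distinguishable as tones above or below the offset.
    """
    doppler_shift = beat_frequency_hz - F_OFFSET   # Hz
    return 0.5 * doppler_shift * WAVELENGTH        # m/s (factor 2 from round trip)

# Example: a beat tone 1290.3 Hz above the offset corresponds to ~1 mm/s.
print(signed_velocity(F_OFFSET + 1290.3))  # ~1.0e-3 m/s
```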
This paper proposes a maneuvering sequence for setting a CubeSat in low-Earth orbit on a precise trajectory to reach its designated orbital elements by employing the J2 perturbation, through which the total delta-v required for correcting the orbit deviation can be reduced considerably. Since CubeSats are launched as secondary payloads, their insertion orbits are guaranteed only within large tolerances. However, for certain CubeSat missions it is essential to put them in precise orbits, especially when multiple CubeSats fly in a formation or constellation. Thus, a CubeSat may need to perform orbital maneuvering in order to correct its orbit and reach the designated orbital elements. Due to several key challenges in putting a CubeSat on a precise orbit, orbit insertion maneuvering of CubeSats has not yet been fully addressed. The first challenge is that, since a major part of each CubeSat is often allocated to payloads, only a very limited amount of propellant can be carried onboard. Secondly, the propulsion systems currently available for CubeSats offer low thrust levels, which must be applied for long durations to provide the delta-v needed for orbit corrections. Further, the limited electric power generated onboard CubeSats prevents their propulsion systems from being continuously activated for a long time. The third challenge is that an online optimal orbit correction algorithm requires high computational resources, which are typically beyond the processing capabilities of CubeSats. The proposed J2 maneuvering sequence takes advantage of the J2 perturbation by placing the CubeSat on a transfer orbit with an appropriate semi-major axis and inclination, calculated using the first-order Taylor series approximation of the Gauss variational equations, so that the J2 perturbation decreases the error of the right ascension of the ascending node and the argument of latitude with respect to the desired orbit over the maneuvering period. Thus, the CubeSat does not need to perform out-of-plane maneuvers for correcting the right ascension of the ascending node, which usually consume the major part of the fuel. The J2 maneuvering sequence initially puts the CubeSat on a proper transfer orbit using an in-plane maneuver to change the semi-major axis of the orbit and an out-of-plane maneuver to change the inclination. Once the CubeSat is placed on the proper transfer orbit, the argument of latitude and the right ascension of the ascending node slowly drift to the desired values. While the CubeSat is on the transfer orbit, the error in the right ascension of the ascending node converges to zero at a constant rate under the J2 perturbation over a certain time period. Finally, an inclination correction maneuver and an in-plane semi-major axis correction maneuver are performed at the end, when the argument of latitude exactly matches the desired value. The effectiveness of the proposed approach is investigated through several case studies using numerical simulations.
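The nodal drift the sequence relies on is the standard J2 secular rate. The sketch below evaluates the differential RAAN drift between an illustrative transfer orbit and its target (the orbit values are assumptions, not the paper's case studies):

```python
import math

# Standard J2 secular RAAN drift rate; constants for Earth.
MU = 3.986004418e14   # m^3/s^2
RE = 6378137.0        # m
J2 = 1.08262668e-3

def raan_drift_rate(a_m: float, e: float, inc_rad: float) -> float:
    """Secular RAAN drift dOmega/dt (rad/s) caused by the J2 perturbation."""
    n = math.sqrt(MU / a_m**3)          # mean motion
    p = a_m * (1.0 - e**2)              # semi-latus rectum
    return -1.5 * n * J2 * (RE / p)**2 * math.cos(inc_rad)

# Assumed example: a transfer orbit 15 km lower and 0.05 deg off in inclination
# drifts in RAAN relative to the target, slowly closing a RAAN error over time.
target = raan_drift_rate(6878e3, 0.0, math.radians(97.4))
transfer = raan_drift_rate(6863e3, 0.0, math.radians(97.35))
rel_drift_deg_per_day = math.degrees(transfer - target) * 86400
print(rel_drift_deg_per_day)  # relative nodal drift used to null the RAAN error
```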
In this paper, a methodology for the comparison of Collision Avoidance Maneuver (CAM) planning techniques is proposed, taking into account the accuracy and optimality of the resulting solutions, as well as the computational cost associated with their determination. With the increasing number of space objects, the risk of collisions with active satellites has become an urgent issue. Satellite operators already perform conjunction risk assessment and avoidance activities. This entails a significant cost, making it invaluable to employ CAM planning and execution methods with proven risk-mitigation capabilities, at minimum fuel and computational expense. Multiple authors have explored the determination of optimal CAMs over the last decades, proposing a variety of approaches. Therefore, the goal of this study is to derive an appropriate set of comparison criteria for the evaluation and performance assessment of these approaches. The proposed criteria consider factors such as the accuracy of the resulting solutions, their optimality, and the computational requirements of each method, while maintaining enough generality to be widely applicable across all classes of CAM optimization techniques. This work contributes to the streamlining of collision avoidance operations by reducing the time required for decision-making and improving the efficiency and effectiveness of the performed maneuvers. The developed framework is applied to a selection of CAM optimization methods that address a set of commonly employed optimization goals, constraints, and decision variables. The results are validated for the use case of a single-encounter conjunction scenario, across various conjunction geometries and orbital regimes.
On-orbit servicing using autonomous robotic manipulators onboard satellites is a promising method for future space sustainability missions like debris removal, refueling, assembly, and maintenance. A newer variety of space robot is the rotation-floating space robot (RFSR), in which the orientation and position of a robotic arm mounted on a floating satellite are actively controlled while the satellite base orientation is simultaneously adjusted. However, the non-linearities and complexities associated with controlling such coupled space-based systems make their feasible implementation difficult, requiring accurate modeling and advanced control algorithms. It is challenging to model such systems with coupled degrees of freedom (DOF) and a high degree of non-linearity. This paper presents an approach to model and control rotation-floating space robots and to design an optimal path that the end-effector can follow while being controlled to capture the target. A Non-linear Model Predictive Controller (NMPC) is proposed for the position and orientation control of a UR5 manipulator mounted on the base of the satellite. Experimental verification of such systems is also difficult owing to the challenge of replicating a micro-gravity environment on the ground that provides 6-DOF motion to robots. This paper addresses multiple issues: first, modeling the RFSR in PyBullet and using an NMPC-driven layered control architecture for simultaneous 9-DOF control. The simulation shows the complete workflow from deploying the robotic arm until the target is successfully captured or docked with. We then use a hardware-in-the-loop (HIL) hybrid approach to validate the simulations performed in PyBullet: one robot simulates the target body's six degrees of freedom while another robot mimics the movements of the chaser arm. The NMPC forms the outer-loop controller, generating reference commands for the inner-loop PyBullet controller. The NMPC takes the gripper to the desired pose while a simultaneous PD controller stabilizes the orientation of the satellite base and maintains it at the desired values. The results from simulation and hardware experiments show that the proposed controllers were efficient and that the NMPC could find an optimal path to the gripper's desired pose.
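As a hedged illustration of the receding-horizon idea behind the NMPC, the sketch below uses a toy double-integrator in place of the full coupled 9-DOF RFSR dynamics; gains, horizon, and actuator limits are all assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Minimal receding-horizon (NMPC-style) sketch on a toy double-integrator model
# standing in for the end-effector pose dynamics; the real plant in the paper
# is the full coupled 9-DOF RFSR model, which this does not attempt to capture.
DT, HORIZON = 0.1, 10

def rollout(x0, u_seq):
    """Propagate state [pos, vel] under a sequence of acceleration commands."""
    x = x0.copy()
    traj = []
    for u in u_seq:
        x = np.array([x[0] + DT * x[1], x[1] + DT * u])
        traj.append(x)
    return traj

def nmpc_step(x0, x_ref):
    """Solve a finite-horizon tracking problem; apply only the first control."""
    def cost(u_seq):
        traj = rollout(x0, u_seq)
        tracking = sum((x[0] - x_ref) ** 2 for x in traj)
        effort = 1e-2 * np.sum(np.asarray(u_seq) ** 2)
        return tracking + effort
    res = minimize(cost, np.zeros(HORIZON), method="L-BFGS-B",
                   bounds=[(-1.0, 1.0)] * HORIZON)  # assumed actuator limits
    return res.x[0]

x = np.array([0.0, 0.0])
for _ in range(50):                      # closed-loop simulation
    u0 = nmpc_step(x, x_ref=1.0)
    x = np.array([x[0] + DT * x[1], x[1] + DT * u0])
print(x)  # position approaches the 1.0 m reference
```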
To enable deep space exploration and a sustained presence in space, autonomous in-space assembly systems are needed to develop large-scale infrastructure. Assembled mechanical metamaterials are a promising technology for space applications due to their low mass and high mechanical performance. Previous building block designs include monolithic units and numerous styles of decomposition. Face decomposition and strut-and-node designs have been investigated for their high packing efficiency in a launch vehicle. Deployable building blocks provide a new paradigm for increased packing efficiency during transport. In this paper, we explore a novel deployable building block design using coiled deployable booms. Coiled deployable boom technology is used extensively in space structures for its lightweight properties and low power requirements. These booms are compact in the stowed position and have a high stiffness-to-mass ratio. We present the design and performance characterization of a deployable, adaptive, structural building block for assembled metamaterial systems. The building block is designed with tape springs that serve as the strut members between attachment nodes. This building block design has a high packing efficiency for transport, a tunable strut design for custom interfaces and mechanisms, and allows for repair of hard-to-reach areas in the structure, providing a highly adaptive system for assembly. The structural performance of the building block is characterized, and two folding configurations of the building block prototype are demonstrated. A discussion of lessons learned and future design revisions is also presented. This system of adaptive structural modules for robotic assembly will enable more complex assembled structures and space infrastructure.
Mobile robots will be key in enabling the next stages of human presence in extraterrestrial environments. One set of emerging applications is in construction and maintenance tasks on host structures like space stations and lunar infrastructure. Robots that operate in these highly structured environments can leverage much simpler subsystems for locomotion, sensing, and path planning compared to machines in unstructured environments like terrestrial rovers. A leading robot architecture in this space is the walking arm or inchworm robot. This class of robot resembles a serial manipulator arm where both ends are equipped with identical end effectors; the robot uses these end effectors interchangeably to anchor itself to its host structure, to interact with tools, and to transport objects. In this way the robot can locomote like an inchworm by changing its anchor points while maintaining the functionality of a manipulator arm at specific locations. The inchworm robot has a workspace that scales with its host structure, allowing it to maintain and build structures much larger than itself. The inchworm robot also achieves high precision by indexing with anchor points on the host structure. Inchworm robots have been developed across a variety of scales by aerospace companies and research groups; they are currently in use on the ISS as the Space Station Remote Manipulator System (SSRMS) and the European Robotic Arm (ERA). The NASA ARMADAS project is developing inchworm-style robots for in-space assembly. The robotics company GITAI is developing inchworm robots for assembly of tall lunar towers.

The inchworm robot has unique properties that make its design and control challenging. Unlike a typical manipulator arm, both ends of the inchworm robot must be capable of acting as a base to support the large inertia and gravity loads of the entire robot. This kinematic-chain symmetry makes actuator selection difficult, particularly in gravity environments. Past approaches have relied on high-torque stages at each joint, driven by motors with large gear transmissions, which results in energy inefficiency and excessive joint stiffness. Compliance is critical for inchworm robots as they must frequently make and break contact with their host structure and other objects. Past control approaches have also relied on quasi-static methods, which yield excessively slow operation. Following recent developments in proprioceptive actuation and impedance-based control strategies, we provide a framework for the design and control of inchworm robots that are both compliant and dynamic, enabled by sensorless joint-space torque measurement and low-gear-ratio motor stages. This paper demonstrates methods for platform designers to derive actuator parameters, to assess different controllers, and to test relationships between engineering variables in simulation. The paper also investigates different locomotion and manipulation modes of the inchworm robot across a variety of length scales and gravity environments to assess feasibility in emerging applications. Finally, the paper provides physical validation experiments on the Scaling Omnidirectional Lattice Locomoting Explorer (SOLL-E) inchworm robot being developed by the NASA ARMADAS project.
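A minimal sketch of the two ingredients named above, proprioceptive torque estimation and joint-space impedance control, under assumed motor parameters and gains (not SOLL-E's actual values):

```python
import numpy as np

# Sketch of a joint-space impedance law with proprioceptive torque sensing,
# under assumed motor parameters; gains and constants are illustrative only.
KT = 0.095        # Nm/A, assumed motor torque constant
GEAR_RATIO = 6.0  # a low gear ratio keeps reflected inertia small and backdrivable

def estimated_joint_torque(motor_current_a: np.ndarray) -> np.ndarray:
    """Sensorless joint-torque estimate from measured motor phase current.

    With a low gear ratio, friction and reflected inertia are small enough
    that motor current is a usable proxy for output torque.
    """
    return GEAR_RATIO * KT * motor_current_a

def impedance_torque(q, qd, q_des, g_comp, kp=20.0, kd=2.0):
    """Desired joint torques: a virtual spring-damper plus gravity compensation.

    Low stiffness (kp) lets the robot comply when making/breaking contact
    with the host structure instead of fighting position errors rigidly.
    """
    return kp * (q_des - q) - kd * qd + g_comp

q = np.zeros(3); qd = np.zeros(3)
tau = impedance_torque(q, qd, q_des=np.array([0.3, -0.2, 0.1]),
                       g_comp=np.array([0.0, 4.2, 1.1]))
print(tau)
```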
On November 18, 2023, Mars reached solar conjunction as viewed from Earth, with a minimum Sun-Earth-Mars (SEM) angle of 0.1 degrees. With the SEM angle smaller than usual, this geometry presented a unique opportunity to study the effects of solar plasma on radio signals sent from Mars assets to the Deep Space Network (DSN) on Earth. For a period of two weeks before and after the event, we utilized the Open Loop Receivers (OLRs) at the DSN to record the radio signal from three Mars orbiters: Mars Odyssey (56 tracks), Mars Reconnaissance Orbiter (56 tracks), and MAVEN (24 tracks). The OLR recordings were completely opportunistic in nature—no modifications were made to any of the spacecraft's configurations or to the DSN ground stations. With decreasing SEM angle, the DSN closed loop receiver had increasing difficulty locking to the signal, while the OLR continued to capture a spectrum containing the degraded signal. Additionally, two coronal mass ejections were observed during the study period, with their impact evident in the OLR data. By correlating phase noise from the OLR data with dropped telemetry frames, we map the impact of solar plasma on the transmission of data through the Sun's corona. The OLRs were also configured to record wide bands, offering a path for telemetry recovery even when the closed loop receiver was out of lock at low SEM angles.
In this study, we analyze a low Earth orbit multi-satellite communication system using orthogonal time frequency space (OTFS) modulation. In this context, the choice of OTFS modulation is driven by its robustness against high Doppler shifts and its suitability for time-frequency selective channels. This scenario is motivated by the need for a more stable and reliable system, which can be achieved through diversity, i.e., allowing each user to be jointly served by multiple satellites. This is particularly beneficial in situations where the line-of-sight link between the satellite and the user terminal (UT) is obstructed by physical impairments on the ground. However, signals from diverse satellites typically reach the UT at distinct time epochs and are also affected by different Doppler shifts and phases. Therefore, while higher diversity can significantly improve performance, the UT must effectively combine the received information to benefit from multi-satellite configurations. This paper focuses on soft-output data detection algorithms and proposes a novel message passing (MP)-based approach that leverages both the channel sparsity in the Delay-Doppler domain and the particular structure assumed by the equivalent channel matrix Ψ, derived from the compact block-wise input-output relation, according to the Forney observation model for linear modulations over AWGN channels. The proposed detector works on a processed version of Ψ, referred to as G, in which a number Nd of main diagonals, corresponding to the most significant interfering terms, are identified. Based on the reduced version of the Hermitian matrix G, containing Nd nonzero diagonals in the lower triangular submatrix, the proposed MP solution operates on a factor graph composed of Nd subgraphs. Each of them consists, in turn, of a specific number of parallel branches separately implementing the well-known BCJR algorithm with unitary memory. The computation on each branch in the same subgraph can be performed simultaneously, enabling a high level of parallelization. The various subgraphs exchange information according to the sum-product algorithm rules, and convergence is achieved after a low number of iterations, typically three. The proposed approach is the first able to significantly reduce the complexity by acting on the choice of interferers, organized in diagonals, and on the schedule, by prioritizing the strongest elements. The number of selected diagonals Nd is optimized to achieve the best trade-off between complexity and performance. We assess the detector's performance by evaluating its pragmatic capacity, i.e., the achievable rate of the channel induced by the signal constellation and the detector soft-output. The simulated satellite link scenarios are based on the Starlink constellation, considering various numbers of satellites in visibility of the UT. For comparison purposes, we tested the linear minimum mean squared error (LMMSE) receiver and a low-complexity algorithm working on Ψ taken from the existing literature. Numerical results demonstrate that multi-satellite diversity effectively enhances the system performance and that the proposed solution reaches and, in some scenarios, outperforms the LMMSE detector with considerably lower complexity. Moreover, given a constraint on the computational load, significant performance advantages are observed with respect to the algorithm operating on Ψ.
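A sketch of the diagonal-selection step described above, assuming G is available as a numpy array; the MP detector itself (BCJR branches and sum-product scheduling) is not reproduced here:

```python
import numpy as np

# Sketch of the diagonal-selection step: from the Hermitian matrix G, keep the
# Nd lower-triangular diagonals carrying the most interference energy.
def select_diagonals(G: np.ndarray, n_d: int):
    """Return offsets of the n_d strongest sub-diagonals of G (0 = main)."""
    n = G.shape[0]
    energy = {k: np.sum(np.abs(np.diagonal(G, offset=-k)) ** 2)
              for k in range(n)}                       # lower-triangular part
    # Ranking diagonals by energy also fixes the schedule described in the
    # paper: the strongest interfering terms are prioritized.
    return sorted(energy, key=energy.get, reverse=True)[:n_d]

def reduce_channel(G: np.ndarray, n_d: int) -> np.ndarray:
    """Zero every entry of G outside the selected diagonals (and mirrors)."""
    keep = select_diagonals(G, n_d)
    G_red = np.zeros_like(G)
    for k in keep:
        idx = np.arange(G.shape[0] - k)
        G_red[idx + k, idx] = G[idx + k, idx]          # lower diagonal
        G_red[idx, idx + k] = G[idx, idx + k]          # Hermitian mirror
    return G_red

rng = np.random.default_rng(0)
Psi = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
G = Psi.conj().T @ Psi                                 # Hermitian by construction
print(select_diagonals(G, n_d=4))
```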
The inter-spacecraft omnidirectional optical communicator (ISOC) is currently being developed by Chascii to provide fast connectivity and navigation information to small spacecraft forming a swarm or a constellation in lunar space. The lunar ISOC operates at 1550 nm and employs a dodecahedral body holding 6 optical telescopes and 20 external arrays of detectors for angle-of-arrival determination. In addition, the ISOC includes 6 fast avalanche photodetectors (each furnished with external gain optics) for high data rate connectivity. The ISOC should be suitable for distances ranging from a few kilometers to a few thousand kilometers. It will provide full-sky (4π steradian) coverage and gigabit connectivity among smallsats forming a constellation around the Moon. It will also provide continuous positional information among these spacecraft, including bearing, elevation, and range. We also expect the ISOC to provide fast, low-latency connectivity to assets on the surface of the Moon such as landers, rovers, instruments, and astronauts. Chascii is building a lunar ISOC including all its transceivers, optics, and processing units. We are developing the ISOC as a key candidate to enable LunaNet. In this paper, we will present the development status of the ISOC's transceivers as well as link budget calculations for operations around the Moon using pulse-position modulation. We will present our ISOC development roadmap, which includes LEO, GEO and lunar missions in the 2025-2028 time frame. We will also discuss the use of Delay-Tolerant Networking in the ISOC processor to achieve secure and encrypted connectivity around the Moon. We believe the ISOC, once fully developed, will provide commercial, high-data-rate connectivity to future scientific, military, and commercial missions in lunar space and beyond.
A deep space relay architecture for communication and navigation was developed in two previous papers. The baseline architecture, consisting of a Mars leading and a Mars trailing relay, was designed to mitigate the Mars conjunction problem for optical communication while enabling deep space trilateration for inner solar system users. An expanded version of the architecture added a Mars halo relay which improved the architecture's data throughput and navigation potential. This paper further defines the navigation component of the architecture. Previous analysis assumed trilateration with idealized, instantaneous one- and two-way range measurements. Here, more realistic one- and two-way range measurement approaches are developed. A broadcast one-way range measurement mode is developed which offers benefits of user scalability and more timely positioning estimates. A point-to-point measurement mode, compatible with both one- and two-way range measurements, is developed as an alternative to the broadcast mode that levies less demanding time synchronization and antenna sizing requirements on the user. Pseudo-noise code-based ranging approaches compatible with each measurement mode are identified and a link analysis is performed to determine user service volumes and relay hardware requirements. The point-to-point measurement mode is found to be viable for even CubeSat-sized users throughout the inner solar system, while the broadcast measurement mode requires users with antenna diameters on the order of meters. The concept of a "deep space trilateration" algorithm is developed, in which the measurement models incorporate signal propagation time and the motion of users relative to the measurement-providing nodes, both of which are non-negligible effects at deep space distances. Position estimation using the deep space trilateration method is simulated in a batch filter. This filter was found to converge even when presented with initial position errors of thousands of kilometers. The accuracy and timeliness of position estimates for a Mars cruise user are found to be on the order of 10 meters and 1 hour, respectively, assuming 2-meter range measurement error.
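For intuition, here is a minimal static trilateration sketch via Gauss-Newton least squares. The paper's deep space trilateration additionally models signal propagation time and user motion, both omitted here, and the relay geometry below is assumed:

```python
import numpy as np

# Minimal Gauss-Newton trilateration from one-way range measurements to three
# relay nodes. The "deep space trilateration" of the paper also models signal
# propagation time and user motion; this static sketch omits both.
def trilaterate(relays: np.ndarray, ranges: np.ndarray,
                x0: np.ndarray, iters: int = 20) -> np.ndarray:
    x = x0.astype(float)
    for _ in range(iters):
        diff = x - relays                      # (3, 3): user minus each relay
        rho = np.linalg.norm(diff, axis=1)     # predicted ranges
        H = diff / rho[:, None]                # Jacobian d(rho)/d(x)
        dx, *_ = np.linalg.lstsq(H, ranges - rho, rcond=None)
        x += dx
    return x

AU = 1.496e11
relays = np.array([[1.5 * AU, 0.3 * AU, 0.0],      # assumed relay positions
                   [1.5 * AU, -0.3 * AU, 0.0],
                   [1.6 * AU, 0.0, 0.1 * AU]])
truth = np.array([1.2 * AU, 0.05 * AU, 0.02 * AU])
ranges = np.linalg.norm(truth - relays, axis=1)
# Converges even from an initial error of thousands of kilometers, as in the paper.
est = trilaterate(relays, ranges, x0=truth + 5e6)
print(np.linalg.norm(est - truth))  # ~0 m with noise-free measurements
```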
As space missions grow in complexity, the cybersecurity threat landscape expands, necessitating a shift toward secure-by-design flight software (FSW). Traditional development prioritizes functionality over security, leaving systems vulnerable to attack. This paper introduces a novel methodology for developing cyber-resilient FSW with a secure-by-component architecture. By incorporating key resilience principles—segmentation, adaptive response, redundancy, and substantiated integrity—our approach addresses critical security needs early in development, minimizing attack surfaces without sacrificing performance. Leveraging NIST systems security guidelines and tailored cyber resilience techniques, we apply this methodology to a notional spacecraft's Command and Data Handling (C&DH) subsystem. Through attack surface analysis and threat modeling, we derive specific cybersecurity requirements to enhance resilience. Key mechanisms, such as real-time monitoring, cryptographic enforcement, memory-safe programming, and zero-trust communication, are embedded to mitigate vulnerabilities from external threats and internal faults. This work advances space cybersecurity by offering a scalable, secure-by-design approach to FSW. Future efforts will extend this methodology to formal verification and autonomous systems, ensuring space operations remain secure against evolving adversarial tactics.
Australia currently lacks sovereign Earth observation capabilities, presenting a severe risk in the current environment of heightened tensions in the Indo-Pacific region. Additionally, to deal with the challenges of global climate change, the country needs the latest technology and tools at its disposal – namely space-based sensors that provide weather modeling data, for which Australia is currently dependent upon overseas partners. This mission concept details a cost-effective SmallSat mission that provides the high-resolution imagery critically necessary for the observation of sea lanes of communication, as well as an advanced radiometer for in-situ characterization of clouds, enabling accurate extraction of data for the Bureau of Meteorology's nowcasting applications.
The advancement of microgravity manufacturing necessitates innovative systems capable of optimizing resource management in space. In Orbit Aerospace is developing the Resource Exchange Module (REX), an integrated system designed to facilitate the seamless storage, sorting, and transfer of payloads between orbital platforms and re-entry vehicles. This paper presents the design and functionality of REX, highlighting its autonomous capabilities that reduce the degrees of freedom, thereby reducing potential failure points and operational costs. By leveraging reusable launch technologies and an automated approach, the REX system aims to reduce payload delivery costs by up to 75% and wait times by 66%. The cost function considers key variables such as the number of experiments, launch frequency, payload capacity, and preparation time, providing a comprehensive framework for understanding the system's economic impact. The findings, obtained through hardware-in-the-loop (HIL) testing, illustrate REX's potential to revolutionize microgravity manufacturing, fostering economic growth in the space industry and enabling new applications on Earth. This paper discusses the implications of REX for future space missions and the broader market for microgravity research and manufacturing.
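The abstract names the cost function's variables but not its form; a purely illustrative parametrization, with every coefficient an assumption rather than REX's actual model, might look like:

```python
# Purely illustrative per-experiment cost model using the variables named in
# the paper (number of experiments, launch frequency, payload capacity,
# preparation time); the actual REX cost function and all coefficients here
# are assumptions.
def cost_per_experiment(n_experiments: int,
                        launches_per_year: float,
                        payload_capacity_kg: float,
                        prep_time_days: float,
                        launch_cost_usd: float = 5e6,
                        ops_cost_per_day_usd: float = 2e4) -> float:
    launch_share = launch_cost_usd / max(n_experiments, 1)
    # Less frequent launches mean longer waits, which accrue operations cost.
    wait_days = 365.0 / launches_per_year / 2.0 + prep_time_days
    ops_share = ops_cost_per_day_usd * wait_days / max(n_experiments, 1)
    # Undersized capacity forces payload splitting, modeled as a penalty factor.
    capacity_penalty = 1.0 if payload_capacity_kg >= 100 else 1.5
    return (launch_share + ops_share) * capacity_penalty

print(cost_per_experiment(n_experiments=40, launches_per_year=12,
                          payload_capacity_kg=150, prep_time_days=30))
```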
The six-unit CubeSat Uvsq-Sat NG is equipped with a Near-Infrared (NIR) spectrometer designed specifically for measurements of greenhouse gas concentrations. This mission explores the feasibility of deploying compact spectrometric technology aboard CubeSats to accurately determine atmospheric levels of CO2 and CH4. Operating in the NIR range from 1200 to 2000 nm, the spectrometer is optimally configured to detect and quantify the spectral signatures of key greenhouse gases. The deployment of this technology on Uvsq-Sat NG marks a significant advancement in climate research, offering vital data that enhances our understanding of environmental changes and showcasing the increasing sophistication of satellite technology in monitoring Earth's evolving climate. The mission will showcase the capabilities of a disruptive spectrometer in measuring greenhouse gases and explore potential advancements to push beyond the current state of the art in this field.
Waveforms are usually designed to optimize radar performance. To this aim, a compromise has to be found between resolution and ambiguities, so that sidelobes are controlled carefully. In the particular scenario of a radar tracking a single target in a sparse environment (a space-based radar for orbital rendezvous, or an air-to-air radar operating at high altitude), the constraint of a low sidelobe level can be removed. In this paper, we show how relaxing this constraint makes it possible to optimize the power consumption of the radar while preserving its precision. An adaptive waveform with binary frequency and time amplitudes, and frequency or time interruptions, is proposed, and its efficiency for delay and Doppler estimation is characterized. The results are illustrated in a space rendezvous use case where the efficiency of the radar in transforming energy into information for delay and Doppler estimation can be improved thanks to interruptions. In this example, the proposed waveform saves 60% of the transmitted energy for a link budget excess of 1 dB, whereas a conventional reduction of the pulse width saves only 20%.
Neuromorphic event-based vision sensors have emerged as a promising technology for space-based target detection and tracking due to characteristics like high dynamic range and high temporal resolution. Most existing research focuses on the tracking of relatively large objects in terrestrial environments or the task of space situational awareness (SSA), where targets are small but background clutter is limited. This research explores the application of various detection and tracking techniques to remote-sensing event-stream data. Remote-sensing applications typically require tracking small objects embedded in the clutter of the Earth background. Adding to the challenge is the limited availability of neuromorphic remote-sensing datasets with reliable truth information. To overcome this challenge, we introduce an event-stream simulation pipeline leveraging the Air Force Institute of Technology Sensor and Scene Emulation Tool (ASSET), Landsat 8 imagery, and the open-source v2e event simulator. The simulation pipeline is used to generate three datasets of target sequences: two for sensors in a geostationary orbit (GEO) and one for a sensor in a low-Earth orbit (LEO). This simulated data and ground truth are used to evaluate existing target-detection and tracking methods along with a new machine-learning-based approach proposed in this research. Existing methods designed for SSA are used for comparison; however, the detection approaches used by these methods prove to generate high numbers of false alarms caused by structured clutter from ground-based features. To overcome this challenge for remote-sensing applications, we propose a simple machine-learning (ML) detector that operates on events as they are produced. We observe that this method can more effectively discriminate between events generated by targets moving in the scene and events generated by ground features or sensor noise. Additionally, we explore frame-based methods that perform detection using aggregated event-frame inputs. These frame-based approaches demonstrate promising performance for applications where the full temporal resolution of the sensors does not need to be leveraged. Tracker performance for each method is evaluated with standard track metrics including precision, recall, multiple object tracking accuracy (MOTA), and target localization error. Performance is also characterized relative to target intensity and velocity. While existing SSA methods perform well in some cases with careful tuning, the event-by-event and frame-based ML methods demonstrate more generalizable performance in the presence of background clutter. ML methods achieve MOTA scores of up to 0.86 on the simulated data, compared to a maximum of 0.68 for non-ML methods.
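As a hedged sketch of an event-by-event detector of the kind proposed (the actual features, weights, and training procedure are not given in the abstract and are assumed here):

```python
import numpy as np

# Sketch of an event-by-event detector: score each incoming event from simple
# spatiotemporal features and threshold. Features, weights, and thresholds are
# illustrative; the paper's ML detector is trained on the simulated datasets.
def event_features(event, recent_events, radius_px=3, window_us=5000):
    """Features for one event given a buffer of recent events (x, y, t, pol)."""
    x, y, t, _ = event
    near = [e for e in recent_events
            if abs(e[0] - x) <= radius_px and abs(e[1] - y) <= radius_px
            and t - e[2] <= window_us]
    density = len(near)                                 # local activity
    mean_dt = np.mean([t - e[2] for e in near]) if near else window_us
    return np.array([1.0, density, mean_dt / window_us])

def is_target(event, recent_events, w=np.array([-2.0, 1.2, -1.5])):
    """Logistic score: moving targets yield compact, recent event clusters,
    while sensor noise is sparse and structured clutter is temporally stale."""
    z = w @ event_features(event, recent_events)
    return 1.0 / (1.0 + np.exp(-z)) > 0.5

buffer = [(10, 10, 100, 1), (11, 10, 900, 1), (10, 11, 1400, 1)]
print(is_target((10, 10, 1500, 1), buffer))
```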
Verification of register transfer logic (RTL) designs has been a challenge for a long time. A steady increase in FPGA sizes and the resulting increase in system complexity will further aggravate the situation for verification engineers. Hence, it is important to apply the proper verification techniques to the right problems. Functional simulation can still be considered the primary verification method. It allows the application of directed or constrained-random stimulus to evaluate the reaction of the device under test (DUT). This method performs well even for large system designs but almost always provides incomplete verification results with respect to state space analysis. Formal property verification (FPV) addresses this issue by utilizing mathematical approaches to investigate system properties. By this means, FPV can check system properties for all possible input scenarios rather than for a subset of input stimulus as defined for functional simulation. This paper shows how liveness properties are investigated by formal property verification. These checks are applied to an FPGA design that provides remote interface functionality for two upcoming satellites. Liveness properties represent a temporal statement to prove that "something good" shall happen within an unbounded time period. Deadlocks and livelocks are faulty system states that can be detected by these properties. Deadlocks usually appear in a distinct way that is ready for debugging. Livelocks, instead, may provide escape possibilities that must be investigated further. We discuss our particular verification setup and analysis results to illustrate the root causes of these errors. Overall, deadlocks and livelocks tend to have a large impact on system behavior and should be investigated in addition to safety properties.
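For illustration (the concrete properties checked in the paper are not reproduced here), a typical liveness property in linear temporal logic has the form

$$\mathbf{G}\left(\mathit{request} \rightarrow \mathbf{F}\,\mathit{grant}\right)$$

read as "globally, every request is eventually followed by a grant". A deadlock violates such a property because the system can make no progress at all, while a livelock violates it even though the system keeps transitioning, never reaching the desired state.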
Enabling radiation shielding technology is imperative for deep space exploration spacecraft and crewed missions to the Moon and Mars. Gamma rays from the Sun and high-energy neutrons produced by cosmic rays are among the toughest particles to shield electronics from. Typically, shielding is implemented up to the gamma limit, and radiation-hardened electronics are then used to protect against higher-energy gamma rays and neutrons. Neutrons pose a problem for electronics because their high energy and mass have a significant impact on silicon substrates. However, well-designed shielding can also protect against high-energy neutrons through the use of moderator and absorber materials. It has been shown that, when used in combination, these materials have the capacity to decrease neutron energy and absorb neutrons, respectively. The moderator + absorber layer concept has been utilized for large-scale, ground-based systems such as nuclear facilities. We look at incorporating the same concept in a small form factor for instrument packages bound for deep interplanetary space. Preliminary results simulated in a software package known as SWORD7 will be shown for a novel radiation shielding concept that attenuates high-energy neutrons. Simulation runs of a 15-day equivalent fluence were performed and analyzed for a 1U CubeSat module (10 cm on a side) using various materials for the moderator and absorber layers, and compared with a minimally shielded case to assess shielding success. The combination of polyethylene as a moderator layer and boron carbide as an absorber layer proved to be the best of all combinations tested, including a stainless-steel-only case.
NASA requires versatile and robust in-situ resource utilization (ISRU) strategies for lunar construction. Many concepts propose using lunar regolith to form concrete, termed here generically as lunarcrete, to create structures like the Lunar Safe Haven to protect astronauts and equipment from the micrometeoroid and radiation environment. Just like terrestrial concrete, lunarcrete is expected to have poor tensile strength and to require reinforcement to build safe lunar structures under expected thermal and moonquake conditions. However, autonomous placement of continuous reinforcement remains a challenge for 3D printing strategies. The NASA ARMADAS project has demonstrated autonomous robotic assembly of lattice structures. We propose using this autonomously constructed lattice as reinforcement and integral formwork for lunar construction. Rather than 3D printing concrete, structures could be cast in a manner similar to what is commonly done on Earth. This process would leverage much of the technology needed for lunarcrete 3D printing but, critically, would reduce the precision needed for placement, mix control, rheology, etc. This process margin, combined with the increased strength and ductility provided by the reinforcement and formwork, will lead to a lower-risk construction methodology for lunar settlements. In this paper, we present experimental data from flexural and tensile strength testing of a terrestrial concrete mix reinforced with prototype lattice reinforcement. These preliminary results show a successful reduction in crack propagation relative to the unreinforced control specimens, though no increase in tensile or flexural strength. Additional work is necessary to study whether tuning the geometry of the reinforcement or the concrete/lattice interface could realize increases in strength in addition to crack propagation reduction, especially with lunar simulant mixes.
Most current aircraft are heavier-than-air vehicles. Thus, they require a large amount of power to take off and stay airborne. This leads to the requirement to burn large amounts of aviation fuel, which negatively impacts the environment in terms of climate change. Overall, the aviation industry is one of the major contributors to global warming due to COx and NOx emissions. To address this, there is currently a drive to reduce the burning of aviation fuel, and in fact to transition to all-electric aircraft. On the other hand, an airship is a lighter-than-air vehicle that is kept afloat by a gas that is lighter than air, e.g., hydrogen or helium. Thus, the amount of power required to keep the vehicle in the air is much smaller and can be readily supplied from electrical sources, including solar panels. This paper involves the design, analysis and development of a hybrid airship with an exoskeleton to support an envelope to be filled with helium gas. The hybrid nature of the airship involves aerostatic and aerodynamic lift generation. The NACA6322 aerodynamic profile was used for the airship envelope and the exoskeleton. The aerodynamic performance of the envelope was analysed using the ANSYS Fluent Computational Fluid Dynamics analysis software, while the analysis of the structural performance of the exoskeleton was conducted using the ANSYS Structural software. The predicted lift and drag forces produced by the helium envelope, and the predicted deflections and stresses of the exoskeleton, are discussed. The procedures for the manufacture and assembly of the hybrid airship with the exoskeleton are described and illustrated.
Lift and pressure data were modeled on various samples based on NACA0012 aerofoils, which were designed and modified using SOLIDWORKS and then tested from 0° to 20° in a CFD wind tunnel. These aerofoil samples had protuberance structures along the leading edge, similar to the tubercles of the humpback whale, with varying amplitudes and wavelengths, and their effect on aerodynamic performance was determined by a low-speed wind-tunnel test. This project found an improvement in lift over the baseline, with maximum lift coefficient values 7.677% to 38.411% greater, and hypotheses based on previous studies regarding low-pressure pockets were verified. These results have significant implications, including verification of the hypothesis that low-pressure pockets created by the inclusion of tubercles improve aerodynamic performance at the onset of flow separation, and significant potential applications in various fields. There is a wide range of potential future exploration and investigation in this topic, specifically with regard to improved aerodynamic and hydrodynamic design. Potential research directions include the optimization of the tubercle effect for high-aspect-ratio wings to improve stall performance, given their prevalence in concept aircraft optimized for greater sustainability and efficiency.
High Altitude Platform Stations (HAPS) have attracted a lot of interest in recent years, as they can be seen as a complement to terrestrial, aerial and satellite systems, making it possible to target new applications. The applications of these vehicles cover many sectors, from defense to communication systems to Earth observation. A challenge to be addressed in the design of such hybrid aircraft is the analysis of thermal aspects across the whole flight envelope and, consequently, the design of the thermal management system, which is of fundamental importance to ensure that the electrical components on board perform correctly throughout the flight. One of the factors that makes the success of the mission difficult is the surrounding environment: HAPS fly for most of the mission at stratospheric altitudes, which implies extreme environmental conditions in terms of density, temperature and air pressure. Another aspect to take into great consideration is the presence of the Sun, both a help and a hindrance in these cases: it supplies electrical power to the components through photovoltaic cells, but it is also their main heating source. A further aspect to take into consideration is the mission profile of the HAPS: its rate of ascent and the achievement of the operational altitude represent fundamental mission requirements; altitude angle, velocity vector and the presence of winds are parameters that must be considered to assure the safety of the mission. This work presents, firstly, a description of the typology of components used inside the HAPS avionics bay from a thermal management perspective. The thermodynamic atmospheric variables are then described. Knowledge of the HAPS location in terms of longitude and latitude allows for greater precision in calculating all model variables. The thermal model used and the results obtained are then presented and discussed. This model takes into account all the heat exchange modes (radiation, convection and conduction) and the exchange possibilities between the avionics bay and the external environment. The model is based on the electro-thermal analogy, working with an equivalent electrical scheme. Taking into account the type of mission in progress, it is necessary to know the temperature (in particular that of the components) during the entire duration of the flight. To do so, the theory of lumped parameters is used. The results show the temperature trends of all avionics components in the bay during the entire duration of the mission (climb, cruise and final descent), ensuring that their temperatures remain within their operating temperature range to avoid interruptions or damage during the flight. The aim of this paper is to provide a valid methodology to determine the temperature conditions of both the equipment and the buoyancy gas during the design of a HAPS, in order to determine the requirements on the equipment operating temperature range and the need for a thermal management system and, if necessary, to design it.
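A minimal sketch of the electro-thermal analogy with lumped parameters, reduced to two nodes (one component and the bay air) with assumed resistances, capacitances, and environment profile:

```python
import numpy as np

# Minimal lumped-parameter (electro-thermal analogy) sketch: temperatures are
# node voltages, heat flows are currents, conductive/convective paths are
# resistances, and thermal inertia is a capacitance. Two nodes only
# (a component and the bay air); all values are illustrative assumptions.
C = np.array([500.0, 2.0e4])        # J/K: component, bay air
R_COMP_BAY = 0.8                    # K/W between component and bay air
R_BAY_ENV = 2.5                     # K/W between bay air and environment
Q_DISS = np.array([15.0, 0.0])      # W: component dissipation, none for air

def step(T, T_env, dt):
    """One explicit-Euler step of dT/dt = (heat in - heat out) / C."""
    q = np.zeros(2)
    q[0] = Q_DISS[0] - (T[0] - T[1]) / R_COMP_BAY
    q[1] = (T[0] - T[1]) / R_COMP_BAY - (T[1] - T_env) / R_BAY_ENV
    return T + dt * q / C

T = np.array([288.0, 288.0])        # K, initial temperatures
for k in range(3600 * 6):           # six hours of climb, 1 s steps
    # Assumed environment profile: cooling toward stratospheric temperature.
    T_env = max(216.0, 288.0 - 0.01 * k)
    T = step(T, T_env, dt=1.0)
print(T)  # check that components stay inside their operating range
```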
The colonization of the stratosphere by new aircraft configurations has become possible as a result of recent technological development. Inflatable aero-structures represent one of the most promising solutions for medium- and high-weight payloads. Blimps and aerostatic balloons are the most common inflatable aero-structures, but they present several limits on movement, speed and controllability. The work presented describes a hybrid inflatable Heavier-Than-Air (HTA) design that goes beyond the limits of previous Lighter-Than-Air (LTA) and HTA vehicles, opening new possibilities for their fields of employment. This new design is based on the joint use of aerostatic and aerodynamic lift, allowing an overall weight and size reduction. This is also achieved through a shape variation across the flight envelope. In particular, this article primarily focuses on the modeling of one fundamental characteristic of this new design: the pressure system. It is designed to ensure that the desired shape is achieved when required while maintaining the structural integrity of the envelope. These demands required the development of a numerical model to predict the behavior of the gas inside the envelope at each phase of the preplanned mission profile. The specific characteristics of the vehicle required more than a simple model of the gas expansion, since the effect of the valve that ejects gas to avoid exceeding the nominal overpressure must also be simulated. The first phase of the lift-off is designed to be purely aerostatic, relying on the lift generated by the LTA gas. The close interaction between the gas behavior and the lift force also required a model of the vertical motion. Moreover, this novel vehicle is designed to cruise continuously in the stratosphere for several days, and is thus subjected to extreme variations of outside temperature and pressure. These aspects, coupled with the solar radiation affecting the extensive surface of the envelope, required an accurate modeling of the atmospheric characteristics and of the solar radiation at various altitudes. Finally, the structural integrity of the vehicle during the descent phase is maintained by supplying compressed air, produced by compressors, to the envelope. The modeling of the required compressed-air mass flow and compression ratio was also implemented for every instant of the descent phase. The energy required by the compressors will be supplied by photovoltaic cells installed on top of the envelope. Electrical and thermal models of the photovoltaic cells were also developed and implemented to accurately estimate the available energy. After a general description of the key features of the overall design of the new hybrid inflatable HTA vehicle proposed in this work, a detailed description of all the aforementioned models is given, detailing all the assumptions, the equations and the limits that define them. In the last section, the results for a simulated mission profile are shown and discussed in depth. Finally, future work to validate the developed models is summarized.
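A minimal sketch of the gas-and-valve behavior described above, using the ideal gas law and an assumed linear valve law (all values illustrative, not the vehicle's actual parameters):

```python
# Sketch of the envelope gas and relief-valve model: ideal-gas pressure at the
# current conditions, with mass vented whenever the nominal overpressure would
# be exceeded. Valve coefficient and geometry are illustrative assumptions.
R_HE = 2077.0          # J/(kg K), helium gas constant
DP_MAX = 300.0         # Pa, assumed nominal overpressure
K_VALVE = 5e-4         # kg/(s Pa), assumed linear valve coefficient

def gas_step(m_he, V, T_gas, p_amb, dt):
    """Advance helium mass one time step; vent if overpressure is exceeded."""
    p_gas = m_he * R_HE * T_gas / V          # ideal gas law
    dp = p_gas - p_amb
    if dp > DP_MAX:                          # valve opens above the set point
        m_he -= K_VALVE * (dp - DP_MAX) * dt
    return m_he, p_gas

m, V = 9.5, 800.0                            # kg helium, m^3 envelope (assumed)
for _ in range(600):                         # ten minutes near float, 1 s steps
    # Assumed stratospheric conditions; venting drives dp back toward DP_MAX.
    m, p = gas_step(m, V, T_gas=216.0, p_amb=5000.0, dt=1.0)
print(m, p)  # helium mass settles where overpressure equals the set point
```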
At present, in the planning of supersonic civil aircraft, the 'N+X' generation planning developed by NASA is the most valuable reference. The cruise Mach number of supersonic civil aircraft proposed by NASA is about 1.8. Therefore, it is of great academic significance and engineering application value to consider the structural design and aerodynamic performance analysis of the fuselage and nacelle of supersonic civil aircraft in the cruise state. In this paper, an axisymmetric inlet for supersonic civil aircraft is designed, with a design Mach number of 1.75. Flow-field solution models of the axisymmetric inlet, sharp fuselage, blunt fuselage and nacelle are established. After verifying the accuracy of the solution model, the designed axisymmetric inlet is placed in the nacelle and installed on the sharp fuselage and the blunt fuselage respectively. The effects of the configurations with and without the fuselage on the inlet total pressure recovery coefficient and the outlet total pressure distortion index are studied. The flow field characteristics and formation mechanism of the internal and external flow in the nacelle at 0° and 2° angles of attack are analyzed. The results show that, in the cruise state, the configuration with the fuselage significantly increases the internal flow loss, reduces the inlet total pressure recovery coefficient, and increases the outlet total pressure distortion index. Along the x (incoming flow direction) direction, with increasing x, the low-energy fluid on the fuselage side gradually decreases, and the total pressure distribution becomes uniform ahead of the inlet entrance. In the configuration without the fuselage, the total pressure recovery coefficient of the designed axisymmetric inlet is 94.7% at the design point. In the configuration with the fuselage, the total pressure recovery coefficient of the inlet installed on the sharp fuselage is 90.58% at the design point, and that of the inlet installed on the blunt fuselage is 89.6%. The research content of this paper provides a theoretical basis for the aerodynamic performance analysis and structural design of axisymmetric inlets and nacelles for supersonic civil aircraft.
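For reference, under one common convention (the paper's exact definitions may differ), the two figures of merit are the total pressure recovery coefficient

$$\sigma = \frac{\bar{p}_{t,\mathrm{out}}}{p_{t,\infty}},$$

the ratio of the averaged total pressure at the inlet outlet to the freestream total pressure, and the outlet total pressure distortion index

$$D = \frac{p_{t,\max} - p_{t,\min}}{\bar{p}_{t,\mathrm{out}}},$$

the spread of total pressure across the outlet plane normalized by its average.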
This paper presents a hierarchical framework for vision-based absolute localization in large-scale GNSS-denied environments. Initially, a coarse position is estimated using visual observations within a Monte Carlo Localization framework. Each query image is encoded as a global image descriptor and matched against a database of georeferenced images. These reference images resemble what an aerial vehicle would see with a downward-facing camera at that location. Image similarities between the query and reference images are then used as the likelihood to update the particles' weights in the Monte Carlo Localization scheme. We demonstrate that a coarse position estimate can reduce the position uncertainty from multiple kilometers to below 150 meters. Within this reduced search space, we then match local image features against a satellite image or ortho-image and a corresponding digital surface model to obtain six-degree-of-freedom poses in the global frame. These poses are used to track the absolute position and attitude, which are then fused with inertial measurements using an Extended Kalman Filter. We evaluate our hierarchical localization framework using a challenging real-world aerial dataset that spans over 38 kilometers in distance at an altitude of 300 meters above ground and covers various types of landscapes, such as urban and vegetated environments. Our evaluations show that our framework can localize over 65% of query images within an error threshold of 5 meters, which, combined with pose tracking, allows for continuous global pose estimation over our test scenario.
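A sketch of the measurement update in the Monte Carlo Localization stage, assuming unit-norm global descriptors and a likelihood formed from descriptor similarity (the temperature and nearest-reference lookup are illustrative choices, not the paper's exact formulation):

```python
import numpy as np

# Sketch of the MCL measurement update: each particle's weight is scaled by
# the similarity between the query image's global descriptor and the
# reference descriptor nearest to that particle. The descriptor network and
# reference database are outside this snippet's scope.
def update_weights(particles, weights, query_desc, ref_positions, ref_descs,
                   temperature=0.1):
    new_weights = np.empty_like(weights)
    for i, p in enumerate(particles):
        # Nearest georeferenced image to this particle hypothesis.
        j = np.argmin(np.linalg.norm(ref_positions - p, axis=1))
        sim = query_desc @ ref_descs[j]          # cosine sim (unit descriptors)
        new_weights[i] = weights[i] * np.exp(sim / temperature)  # likelihood
    new_weights /= new_weights.sum()             # normalize
    return new_weights

rng = np.random.default_rng(1)
particles = rng.uniform(0, 1000, size=(500, 2))          # hypotheses in meters
weights = np.full(500, 1 / 500)
ref_positions = rng.uniform(0, 1000, size=(200, 2))
ref_descs = rng.normal(size=(200, 64))
ref_descs /= np.linalg.norm(ref_descs, axis=1, keepdims=True)
query_desc = ref_descs[42]                                # pretend exact match
weights = update_weights(particles, weights, query_desc, ref_positions, ref_descs)
print(particles[np.argmax(weights)])  # mass concentrates near ref_positions[42]
```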
The autonomous operation of an unmanned aerial vehicle (UAV) relies on reliable self-localization, which is typically achieved using global navigation satellite systems (GNSS). However, GNSS data can be unreliable due to the effects of space weather phenomena or interference from GNSS jamming. To ensure accurate localization in such conditions, vision-based approaches for UAV positioning offer a potential alternative, though they often come with trade-offs in positioning accuracy or computational efficiency. In this paper, we present a real-time method for vision-based UAV self-localization that achieves GNSS-like accuracy. This approach involves extracting high-level semantic features from captured images and matching them to geo-referenced OpenStreetMap (OSM) data of the flight area. The global location of the UAV is then determined based on the matching results. We also compare and evaluate different metrics for measuring scene similarity to enhance the system's performance. Moreover, we demonstrate that even when OSM data is partially inaccurate, it can still be used to achieve accurate localization. This holds true even with a non-optimal neural network for segmentation and in environments with limited semantic features. The data set used for evaluation will be made available with publication of this paper.
The added mass problem is a significant challenge in the design and operation of lighter-than-air (LTA) stratospheric vehicles. This phenomenon occurs when a vehicle moving through a fluid experiences additional inertia due to the fluid it displaces. For LTA vehicles, the effect of added mass is particularly pronounced because of their low structural mass and the relatively large volume of air they interact with during flight. One of the primary challenges posed by added mass is the additional stress it places on the vehicle's structure. The interaction with the surrounding air can induce forces that the structure must be able to withstand without compromising the vehicle's lightweight nature. Moreover, the dynamics of LTA aircraft are heavily influenced by added mass, impacting stability and control. Therefore, engineers must carefully consider this factor during the design process in order to ensure accurate predictions of the vehicle's behavior, to maintain stability and efficiency, and to set up control systems that account for the effect of the extra inertia and the altered responsiveness of the vehicle to inputs, thus necessitating precise and sophisticated control mechanisms. Energy efficiency is another area affected by the added mass phenomenon: the increased inertia means that more energy is required to move the vehicle through the air. This is especially critical for vehicles that rely on solar power or other low-energy systems to achieve prolonged flight durations. By thoroughly understanding and addressing the added mass phenomenon, engineers can enable the successful deployment of LTA aircraft in various applications. With this paper, the authors contribute by focusing on the real impact of this issue on the design of the attitude control laws, in order to give an objective quantification of the problem that flight control engineers must face. The paper includes some recalls of the added-mass mathematical modeling, already presented by the authors in previous works; the same control technique is then applied to design the attitude control system, both for the model that considers the added mass and for the one that neglects the phenomenon. The LTA aircraft considered for the analyses presents two pairs of complex poles (longitudinal pitch-incidence and lateral Dutch roll), two other stable real poles (longitudinal surge and lateral yaw-roll), and two unstable poles (longitudinal phugoid and lateral sideslip-yaw); the introduction of added masses tends to slow the poles in the complex plane, except for the latter modes, which are little affected. The control technique selected for the analyses is Linear Quadratic control with state augmentation to achieve zero steady-state error to step commands for the roll and pitch attitude angles. A first analysis compares the performance with reference to the full model, which considers the added mass. A subsequent analysis focuses on the robustness variation through a probabilistic approach, taking into account the stochastic uncertainties on the main aerodynamic and inertial parameters and also considering the added mass data. Detailed results of these analyses will be presented in the full paper.
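A minimal sketch of the LQ design with state augmentation for zero steady-state error to a step attitude command, on a generic second-order model rather than the vehicle's actual dynamics (with or without the added-mass terms):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Sketch of LQ design with state augmentation for zero steady-state error to a
# step attitude command, as the paper describes; the plant here is a generic,
# assumed second-order attitude model, not the LTA vehicle's full dynamics.
A = np.array([[0.0, 1.0],
              [-0.5, -0.4]])        # illustrative attitude dynamics
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])          # regulated output: attitude angle

# Augment with the integral of the tracking error: x_a = [x; int(e)].
A_aug = np.block([[A, np.zeros((2, 1))],
                  [-C, np.zeros((1, 1))]])
B_aug = np.vstack([B, np.zeros((1, 1))])

Q = np.diag([10.0, 1.0, 50.0])      # heavy weight on the integral state
R = np.array([[1.0]])
P = solve_continuous_are(A_aug, B_aug, Q, R)
K = np.linalg.solve(R, B_aug.T @ P)  # u = -K @ [x; int(e)]
print(K)
# Comparing the gains (and closed-loop robustness) designed on the model with
# added mass versus without is exactly the comparison the paper carries out.
```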
Sounding rockets are instrumental platforms that provide cost-effective, rapid access to space and validate new technologies prior to orbital flight. In the face of increasingly demanding mission scenarios focusing on reusability and reconfigurability, adaptive and global control solutions are needed to track different trajectories in large flight envelopes, which may be generated in real-time using online guidance methods. In this work, a pitch-plane trajectory tracking control solution for suborbital launch vehicles is derived relying on adaptive feedback linearization. Initially, the 2D dynamics and kinematics for a single-nozzle, thrust-vector-controlled sounding rocket are obtained for control design purposes. Then, an inner-outer control strategy, which simultaneously tackles attitude and position control, is adopted, with the inner loop comprising the altitude and pitch control and the outer loop addressing the horizontal (downrange) position control. Feedback linearization is used to cancel out the non-linearities in both the inner and outer dynamics, reducing them to two double integrators acting on each of the output tracking variables. Uncertainty is considered when canceling the aerodynamic terms and is estimated in real-time in the inner loop via adaptive backstepping. More precisely, making use of Lyapunov stability theory, an adaptation law, which provides online estimates of the inner-loop aerodynamic uncertainty, is jointly designed with the output tracking controller, ensuring global reference tracking in the region where the feedback linearization is well-defined. The zero dynamics of the inner-stabilized system are then exploited to obtain the outer-loop dynamics and derive a Linear Quadratic Regulator (LQR) with integral action, which can stabilize them as well as reject external disturbances. In the outermost loop, the estimate of the corresponding aerodynamic uncertainty is obtained indirectly by using the inner-loop estimates together with known aerodynamic relations. The resulting inner-outer position control solution is proven to be asymptotically stable in the region of interest in terms of pitch angle, i.e., in the upper region of the unit circle excluding the horizontal orientation. Finally, the control strategy is implemented in a Matlab/Simulink simulation environment composed of the non-linear pitch-plane dynamics and kinematics model and the environmental disturbances to assess its performance. Using a single-stage sounding rocket propelled by a liquid engine as the reference vehicle, different mission scenarios are tested in the simulation environment to verify the adaptability of the proposed control strategy. Preliminary simulation results are satisfactory, given that the controller is able to track the requested trajectories while rejecting external wind disturbances. Furthermore, the need to re-tune the control gains between different mission scenarios is minimal to none.
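As a hedged toy illustration of adaptive feedback linearization on a single scalar channel (the paper's inner loop acts on the full pitch-plane dynamics; the regressor, gains, and simplified adaptation law here are assumptions):

```python
import numpy as np

# Toy adaptive feedback linearization on a scalar double-integrator-like
# channel: the true dynamics contain an uncertain aerodynamic term
# theta*phi(x), canceled with an online estimate adapted by a simplified
# Lyapunov-motivated law. This is a stand-in for the paper's inner loop.
DT = 0.01
THETA_TRUE = 2.0                    # unknown aerodynamic coefficient

def phi(x):                         # known regressor, unknown coefficient
    return x[1] * abs(x[1])         # drag-like term in velocity

x = np.array([0.0, 0.0])            # [position, velocity]
theta_hat, gamma = 0.0, 5.0         # estimate and adaptation gain
kp, kd = 4.0, 4.0
ref = 1.0

for _ in range(3000):
    e = np.array([ref - x[0], -x[1]])           # tracking error (step reference)
    v = kp * e[0] + kd * e[1]                   # stabilizing virtual input
    u = v - theta_hat * phi(x)                  # cancel the estimated term
    # Simplified adaptation: drive the estimate with error times regressor.
    theta_hat += DT * gamma * (e[0] + e[1]) * phi(x)
    acc = THETA_TRUE * phi(x) + u               # true plant dynamics
    x = x + DT * np.array([x[1], acc])

print(x[0], theta_hat)  # position near 1.0; theta_hat need not reach 2.0
```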
Platform 1 of the Clean Sky 2 Large Passenger Aircraft - Integrated Aeronautics Demonstration Platform (LPA-IADP) aims at identifying strategies to reduce the emissions of large passenger aircraft. In this frame, the use of Hybrid Electric Propulsion (HEP) was identified as a possible solution and, specifically, Distributed Electrical Propulsion (DEP) was found to be an important enabler for future HEP development. To foster DEP adoption for large passenger aircraft architectures, a strategic roadmap was implemented, including flight testing of a flying demonstrator called D08 “Radical Configuration Flight Test Demonstrator”. The D08 demonstrator is a Distributed Electrical Propulsion aircraft with six propellers, a weight of 170 kg, and a wing span of 4 m. Within this plan, the Italian Aerospace Research Centre (CIRA) is in charge of developing an aircraft Guidance, Navigation and Control (GNC) system to be integrated in a dedicated testing framework for supporting demonstration with the D08. This work benefits from the GNC system already developed by CIRA in the same project for supporting flight testing of the Dynamically Scaled Vehicle, the D03 demonstrator. The D03 testing framework was developed with the objective of supporting the demonstration campaign. This was achieved through the inclusion of an instruction language that allows complex missions and test maneuvers to be performed autonomously, increasing repeatability in flight testing. Moreover, the modular SW architecture was specifically developed to allow users to integrate their own modules, e.g. control laws, following a sequence of simple steps. These two characteristics, repeatability and modularity, can also be combined thanks to the flexibility of the GNC SW and to the automation instruction language mentioned earlier, which allows selecting custom control modules instead of the available standard ones for performing an automated sequence of experimental maneuvers and testing different control strategies. In addition to a summary of the On-board Guidance, Navigation and Control system and the Ground Remote Pilot Station developed by the authors, this paper presents the in-flight demonstration of the Guidance, Navigation and Control system through flight data recorded during the activities carried out at Grottaglie airport (Italy) in May, June and July 2024. Several flight tests were performed with the On-board Guidance, Navigation and Control system in control of the D08 Scaled Flight Demonstrator, and several maneuvers were automatically executed by the On-board Guidance, Navigation and Control system. These tests supported the parameter identification of the aircraft characteristics and the performance analysis of some Multi-Input Multi-Output control laws designed to take advantage of the intrinsic control redundancy of the vehicle configuration. The performance of the control laws was assessed in nominal and off-nominal conditions (e.g. engine, aileron and rudder failures). During the flight tests, as required, the On-board Guidance, Navigation and Control system allowed the accurate and repeatable execution of a set of pre-defined maneuvers, and allowed the characteristics of the maneuvers to be changed easily and accurately during the flight test campaign.
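Since the actual CIRA instruction language is not spelled out in this abstract, the following is a purely hypothetical sketch of how an automated maneuver sequence that swaps in a custom control module might be expressed and dispatched.

```python
# Purely hypothetical sketch of an automation instruction sequence for
# repeatable test maneuvers; the real CIRA instruction language, module
# names, and parameters are not described in this abstract.
MANEUVER_SEQUENCE = [
    {"op": "climb_to", "alt_m": 300},
    {"op": "hold", "duration_s": 10},
    {"op": "select_ctrl", "module": "custom_mimo_v2"},  # user-supplied law
    {"op": "doublet", "axis": "pitch", "amp_deg": 3, "period_s": 2},
    {"op": "select_ctrl", "module": "standard"},        # back to baseline
    {"op": "return_home"},
]

def run(sequence, handlers):
    """Dispatch each instruction to its handler for autonomous execution."""
    for instr in sequence:
        args = {k: v for k, v in instr.items() if k != "op"}
        handlers[instr["op"]](**args)
```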
This research delves into the intricate interactions of fluid particles around lifting surfaces in supersonic flows. A novel compressibility correction method is presented, utilizing a multiphase solver within the lattice Boltzmann method (LBM) framework. The computational domain is divided into numerous cells, with the lifting surface modelled as an obstacle matrix. This setup allows elements to be identified as either fluid particles or solid walls, facilitating detailed mesoscopic-level simulations of fluid dynamics. The computational solver iteratively computes density and velocity distributions at lattice nodes, propagating these values to neighbouring cells to enforce boundary conditions. A key feature of this study is the integration of the Bhatnagar-Gross-Krook (BGK) model’s collision factor, which is essential for determining density ratios and effective velocity components, ensuring accurate modelling of compressible flows. The research further explores the adaptability of LBM when combined with vortex lattice methods (VLM), enabling the characterization of computational domains as isotropic or anisotropic. This combination effectively captures multiphase phenomena and fluid-solid interactions in high-speed conditions, outperforming traditional methods. By incorporating the compressibility correction within the VLM framework, the study enhances the understanding of multiphase fluid dynamics and boundary interactions at supersonic speeds. The LBM's versatility in handling complex interfaces and managing compressible flows is highlighted, making it an ideal tool for studying shock waves and compressibility effects in supersonic regimes. The research methodology involves discretizing each flow particle into numerous degrees of freedom, adopting various lattice forms, such as D2Q9 for two-dimensional analysis and D3Q19 for three-dimensional analysis. The distribution function f(ξ, x, t) defines non-linear fluid characteristics at the microscopic scale. The discrete velocity Boltzmann equation (DVBE) is employed, with the equilibrium distribution function adjusted to account for thermal and compressible limitations. The Chapman-Enskog expansion is used to derive the macroscopic equations of the hybrid method. The hybrid approach combines LBM with an entropy-inspired energy equation solved using the finite volume method, applicable to both subsonic and supersonic regimes. The LBM's efficiency and ability to model high-speed flows are further enhanced by integrating advanced boundary conditions and collision factors. The study incorporates the use of the double distribution function (DDF) around the supersonic boundary, which provides significant benefits in accurately simulating polyatomic gases at high speeds by extending numerical equilibrium approaches to reproduce multiple moments of the Maxwell-Boltzmann distribution. This combined approach addresses limitations in modelling individual fluid particles, providing a robust solution for simulating complex compressible flow dynamics with faster computation times.
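For concreteness, a standard (incompressible-regime) D2Q9 BGK collision-and-streaming step is sketched below; it illustrates the moment computation, the BGK collision factor, and the propagation to neighbouring cells mentioned above, but it is not the paper's compressibility-corrected multiphase solver.

```python
# Minimal D2Q9 BGK lattice Boltzmann step in textbook form.
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8                      # BGK relaxation time (collision factor 1/tau)

def equilibrium(rho, u):
    """Maxwellian expanded to second order in the lattice velocities."""
    cu = np.einsum('qd,xyd->xyq', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)[..., None]
    return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f):
    rho = f.sum(-1)                                      # density moment
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]  # velocity moment
    f += (equilibrium(rho, u) - f) / tau                 # BGK collision
    for q in range(9):                                   # stream to neighbours
        f[..., q] = np.roll(f[..., q], c[q], axis=(0, 1))
    return f

f = equilibrium(np.ones((64, 64)), np.zeros((64, 64, 2)))
f = step(f)
```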
The most reliable systems within the aerospace domain continually leverage formal methods to prove system correctness against project requirements. Most systems verify value-domain requirements to ensure functional correctness of the application. This approach, however, only examines one of the key domains within which formal methods can be applied. Aerospace applications contain many constrained resources that, when over-utilised, can lead to mission failure. One such constraint is energy, as space missions use minimal energy stores to power and maintain normal operating functions. In this paper, we extend formal methods to verify energy within the process of Hardware/Software Co-Design. In particular, we focus on developing applications in FPGAs while measuring energy usage at runtime. We combine this engineering data with formal methods and our requirements, specified in temporal logics, to formally verify energy constraints. Our approach is runtime verification in the purest sense, as our system measures energy by executing all possible branches of the user’s logic. Our approach is compatible with existing methods in continuous verification and agile development.
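As a toy illustration of runtime verification of an energy constraint, the monitor below checks a property of the shape "cumulative energy never exceeds the budget" over a stream of measurements; the three-valued verdict style is typical of runtime monitors, while the property shape and the numbers are assumptions, not the paper's specification.

```python
# Toy runtime monitor for an energy requirement of the form
# "globally, cumulative energy stays under budget".
from dataclasses import dataclass

@dataclass
class EnergyMonitor:
    budget_joules: float
    used: float = 0.0
    verdict: str = "inconclusive"   # three-valued: true/false/inconclusive

    def observe(self, power_watts: float, dt_s: float) -> str:
        """Feed one runtime measurement sample; return the current verdict."""
        self.used += power_watts * dt_s
        if self.used > self.budget_joules:
            self.verdict = "false"   # safety property irrevocably violated
        return self.verdict

mon = EnergyMonitor(budget_joules=5.0)
for p in [0.8, 1.1, 0.9, 1.3]:       # hypothetical per-branch power draws (W)
    print(mon.observe(p, dt_s=1.0))
```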
A majority of Near-Earth Asteroids (NEAs) with an absolute magnitude of less than 20 have been discovered so far. However, there still remains a large discrepancy between the number of detected asteroids and the estimated numbers for absolute magnitudes larger than 28. The properties that make such fainter NEAs inherently more difficult to detect, besides their small size and proximity to Earth, include low geometric albedos and high velocities across the sky. Synthetic tracking has been shown to be a viable alternative to conventional long-exposure methods for NEA detection; however, the amount of computation required for synthetic tracking is considerable. This paper presents a new synthetic tracking algorithm that leverages transformer-based networks to enhance computational efficiency as well as detection performance. Deep-learning architectures have previously been applied to NEA detection solely through Convolutional Neural Networks (CNNs), which operate on a single image at one instant of time, thus only enabling the detection of slow-moving asteroids appearing in long-exposure images. In this paper, however, instead of using a single long exposure as input to the network, the proposed architecture uses multiple shorter exposures of an NEA and employs a video transformer architecture to exploit temporal dependencies between images. The images employed for training the network are simulated according to the actual population statistics of all known NEAs. First, the absolute magnitude and the date of first observation of all known NEAs are extracted from JPL's Small-Body Database to obtain the apparent motion, the NEA-to-Sun and NEA-to-Earth distances, and the phase angle at the instant of observation. Next, a frequency distribution is fitted to the NEA population for each property, and a sample is drawn from each distribution to generate the images. The set of sampled properties, representing one synthetic NEA, is used to estimate the NEA's Signal-to-Noise Ratio (SNR), along with the read noise, dark current, system zero point, pixel scale, and background magnitude for the Sierra Mountain Observatory as well as the desired number of frames and exposure time of the image stack. The background noise in real images obtained from the same observatory is also estimated using sigma clipping. This estimate and the expected SNR are then used to compute the necessary signal strength of the synthetic NEA, with a point spread function following a modified Gaussian distribution. The orientation of the synthetic NEA track in the image stack is sampled from a uniform distribution of orientation angles. The generated NEA is implanted into every image frame in the stack according to the NEA's apparent motion and the selected exposure time, and the images are cropped to meet the transformer input dimension requirements. The training database consists of roughly one million image stacks representing permutations of the known NEA population. The proposed transformer architecture is based on a pre-trained ViViT backbone that is fine-tuned using the training data, and the resultant model is verified on real synthetic tracking data.
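The frame-by-frame implantation step can be sketched as follows: a Gaussian point spread function is added at positions advanced by the apparent motion across the stack. The PSF width, motion rates, and noise levels below are placeholders rather than the calibrated observatory values used in the paper.

```python
# Sketch of implanting a synthetic NEA into a stack of short exposures
# along its apparent motion.
import numpy as np

def inject_track(stack, x0, y0, vx, vy, exp_s, amplitude, sigma=1.5):
    """Add a Gaussian PSF at the propagated position in each frame."""
    n, h, w = stack.shape
    yy, xx = np.mgrid[0:h, 0:w]
    for i in range(n):
        # Position advanced by apparent motion (pixels/s) per exposure
        cx, cy = x0 + vx * exp_s * i, y0 + vy * exp_s * i
        stack[i] += amplitude * np.exp(-((xx - cx)**2 + (yy - cy)**2)
                                       / (2 * sigma**2))
    return stack

rng = np.random.default_rng(0)
frames = rng.normal(100.0, 5.0, size=(16, 64, 64))   # background + read noise
frames = inject_track(frames, 20, 20, vx=0.8, vy=0.3,
                      exp_s=2.0, amplitude=12.0)
```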
Urban air mobility (UAM) has the potential to revolutionize the transportation of people and cargo via small and often autonomous aircraft that can circumvent increasingly congested ground infrastructure. To realize this potential, advanced nowcasting capabilities are necessary to accurately assess microweather wind conditions in real time and ensure safe, reliable, and weather-tolerant operations. Prediction of microweather, however, presents significant technical challenges: (1) while macroweather forecasts are becoming more reliable, urban areas exhibit substantial local variations in weather due to phenomena like wind tunnel effects, (2) traditional physics-based modeling for microweather is computationally intractable, (3) statistical methods such as Gaussian processes lack the expressivity to model non-Gaussian distributions, and (4) standard data-driven approaches are typically deterministic and fail to capture the inherent aleatoric variability in winds, such as the critical wind gusts affecting UAM reliability. This work explores the use of Generative AI to model microweather winds in a manner that is probabilistic, statistically accurate, and computationally efficient. Generative AI, known for applications in language and image generation, leverages large datasets to produce synthetic but realistic data samples by learning a mapping between random noise and the data distribution. The proposed approach assumes the availability of a dataset from a temporary measurement campaign in an area of interest (e.g., a landing zone), and then builds a probabilistic map from the regional (macro) weather forecast to the microweather wind conditions in that area. Once the generative model is established, local wind conditions can be simulated on demand to inform UAM aircraft operations without the need for further measurements. A proof-of-concept nowcasting system was implemented using data from a recent NASA measurement campaign that collected wind velocity data via Sonic Detection and Ranging (SODAR) wind profilers. Conditioned on corresponding macroweather forecasts from a nearby weather station, a state-of-the-art generative algorithm, the Denoising Diffusion Probabilistic Model (DDPM), was trained to learn distributions of SODAR measurements. Results show that the DDPM can synthesize realistic and novel wind profiles. Furthermore, a demonstration of the DDPM’s ability to capture non-Gaussian characteristics in the data distribution highlights the benefit of using a generative approach over a Gaussian process. This study identifies and implements a promising Generative-AI-based solution to the challenging problem of predicting microweather winds from macroweather conditions. The proof-of-concept nowcasting system introduced in this work provides a platform for the further development of systems capable of scaling to much larger datasets with a higher density of sensor measurements and larger numbers of macroweather conditions. Conditional generative models trained on such datasets can one day be used to enable safe and reliable UAM.
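For readers unfamiliar with DDPMs, the reverse (sampling) loop has the standard form sketched below, here conditioned on a macroweather vector; `eps_model` stands for a trained noise-prediction network, and the schedule and shapes are illustrative assumptions rather than the trained system's configuration.

```python
# Schematic DDPM reverse (sampling) loop for generating a wind profile
# conditioned on a macroweather vector.
import torch

def sample(eps_model, cond, T=1000, shape=(1, 64)):
    beta = torch.linspace(1e-4, 0.02, T)          # linear noise schedule
    alpha = 1.0 - beta
    abar = torch.cumprod(alpha, dim=0)
    x = torch.randn(shape)                        # start from pure noise
    for t in reversed(range(T)):
        z = torch.randn(shape) if t > 0 else torch.zeros(shape)
        eps = eps_model(x, torch.tensor([t]), cond)   # predicted noise
        # Standard DDPM posterior mean plus noise injection:
        x = (x - beta[t] / torch.sqrt(1 - abar[t]) * eps) \
            / torch.sqrt(alpha[t]) + torch.sqrt(beta[t]) * z
    return x    # one synthetic SODAR-like wind profile
```

In the nowcasting setting, `cond` would encode the macroweather forecast, and repeated calls to `sample` yield an ensemble of plausible wind profiles whose spread reflects the learned aleatoric variability.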
Controlling spacecraft near asteroids in deep space comes with many challenges. The communication delays involved necessitate heavy usage of limited onboard computation resources, while fuel efficiency remains a priority to support the long loiter times needed for gathering data. Additionally, the difficulty of state determination due to the lack of traditional reference systems requires a guidance, navigation, and control (GNC) pipeline that ideally is both computationally and fuel-efficient and that incorporates a robust state determination system. In this paper, we propose an end-to-end algorithm utilizing neural networks to generate near-optimal control commands from raw sensor data, as well as a hybrid model predictive control (MPC) guided imitation learning controller delivering improvements in computational efficiency over a traditional MPC controller. The imitation learning controller is trained using expert trajectories from an MPC controller with full state access, in a process of "learning by cheating". Once trained, the controller does not require state information and instead uses only raw lidar data. To train this controller, state data was collected on trajectories flown by an MPC controller in a MATLAB dynamics simulation, which was then used to generate lidar data along the same trajectories in a Gazebo environment mirroring the MATLAB simulation. This data was used in imitation learning, where a network was trained to imitate the MPC controller's output while only having access to raw sensor data from the lidar array and the target end state provided by a human operator. The network architecture chosen used a combination of Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and a Multilayer Perceptron (MLP) to process raw sensor data into near-optimal thrust commands. After training the network and analyzing its expected performance using a test dataset, the network was implemented back into MATLAB and tied in with sensor data from Gazebo to perform forward testing in the simulated environment. Tests of the time taken to generate a control output using the MPC resulted in a time of ~0.473 s when using a CPU with a Thermal Design Power (TDP) of 155 W, while the inference time for generating control outputs of the trained network was measured at ~0.053 s when using a GPU with a TDP of 300 W. Based on these results, running the trained network can be expected to use only ~21.6% of the energy that running the MPC would take, representing a significant improvement in efficiency. Additionally, testing the trained model on the CPU yielded inference times of ~0.138 s, corresponding to only ~29.2% of the energy consumed by the MPC. Testing has shown the trained network to perform at a satisfactory level (defined by whether control outputs generated by the network were within a set error range when compared to the outputs of the MPC) up to 70% of the time. If state information is occasionally available, the developed hybrid MPC-guided imitation learning controller can be used, which utilizes the MPC for runtime assurance, improving performance while also achieving large gains in computational efficiency.
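A minimal PyTorch sketch of the described CNN + LSTM + MLP pipeline is given below, mapping a sequence of lidar frames plus a target end state to thrust commands; all layer sizes and input shapes are assumptions for illustration, not the trained network's actual architecture.

```python
# Illustrative CNN + LSTM + MLP policy: per-frame lidar features,
# temporal memory, then fusion with the target state.
import torch
import torch.nn as nn

class LidarPolicy(nn.Module):
    def __init__(self, target_dim=6, thrust_dim=3, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                # per-frame feature extractor
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())   # -> 32*4*4 = 512
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.mlp = nn.Sequential(                # fuse memory + target state
            nn.Linear(hidden + target_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, thrust_dim))

    def forward(self, lidar_seq, target):
        b, t = lidar_seq.shape[:2]
        feats = self.cnn(lidar_seq.flatten(0, 1)).view(b, t, -1)
        mem, _ = self.lstm(feats)
        return self.mlp(torch.cat([mem[:, -1], target], dim=-1))

policy = LidarPolicy()
cmd = policy(torch.randn(2, 8, 1, 64, 64), torch.randn(2, 6))  # thrust commands
```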
As the complexity and deployment of autonomous systems continue to expand across sectors, there is an increasing need for robust and adaptive frameworks that ensure both mission success and system reliability. Current mission planning and health management approaches often fall short in handling real-time anomalies and uncertainties, which can lead to mission failures and significant resource losses. Traditional methods are typically reactive, addressing issues only after they arise, and many contemporary machine learning models function as opaque "black boxes," providing predictions without the interpretability crucial for high-stakes decision-making environments. This paper introduces an Intelligent Health and Mission Management (IHMM) framework designed to address these shortcomings by using interpretable physics-based models to enhance the robustness and adaptability of mission planning and health management for autonomous systems. The framework combines the clear, explainable nature of physics-based models, such as Newtonian mechanics and orbital dynamics, with the predictive power and adaptability of machine learning models trained on historical mission data. This hybrid approach ensures that predictions are not only accurate but also interpretable, making them more reliable and trustworthy in mission-critical scenarios. The IHMM framework employs optimization algorithms like Model Predictive Control (MPC) and robust control techniques to manage missions effectively in real time. These algorithms benefit from the precision and transparency of physics-based models, which provide a solid foundation for decision-making, while the machine learning components adapt to dynamic, unforeseen events by continuously learning from new data. The fusion of these methods allows the system to dynamically adjust mission objectives and operational parameters in response to real-time health assessments, significantly reducing the risk of mission failure. To validate the proposed framework, extensive simulations using realistic mission scenarios are conducted. These simulations test the framework's ability to adapt to both expected and unexpected conditions, ensuring that the models can handle a wide range of operational challenges. The expected outcomes include improved predictive accuracy, with the added benefit of clear explanations of model behavior, leading to more reliable mission planning and greater adaptability to unexpected events. This adaptability is crucial for autonomous systems operating in unpredictable environments, such as space exploration or autonomous transportation networks. The research presented in this paper advances the field of autonomous systems by providing a comprehensive framework for mission planning and health management that prioritizes both accuracy and interpretability. By addressing the limitations of current methods and offering a solution that balances the deterministic clarity of physics-based models with the flexibility of machine learning, this framework sets a new standard for the design and operation of autonomous systems. The enhanced reliability, transparency, and efficiency of these systems are expected to enable more ambitious and successful missions across various sectors, paving the way for future developments in intelligent, autonomous technologies. This research not only contributes to the technical advancement of autonomous systems but also fosters greater trust in their deployment in critical and complex missions.
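One common way to realize such a hybrid is a physics baseline with a learned residual correction, sketched below under toy assumptions (two-body orbital dynamics as the physics model and a placeholder residual function); the paper's actual models and fusion scheme may differ.

```python
# Conceptual sketch of the hybrid idea: an interpretable physics
# propagator provides the baseline prediction, and a data-driven
# residual model corrects it for unmodelled effects.
import numpy as np

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def physics_step(r, v, dt):
    """Two-body (Keplerian) propagation by one Euler step: the
    transparent, explainable part of the prediction."""
    a = -MU * r / np.linalg.norm(r)**3
    return r + v * dt, v + a * dt

def hybrid_step(r, v, dt, residual_model):
    """Physics baseline plus a learned correction (drag, actuator
    degradation, ...) trained offline on historical mission data."""
    r_pred, v_pred = physics_step(r, v, dt)
    dv = residual_model(np.concatenate([r, v]))  # ML residual term
    return r_pred, v_pred + dv

zero_residual = lambda x: np.zeros(3)  # placeholder for a trained model
r, v = np.array([7.0e6, 0, 0]), np.array([0, 7.5e3, 0])
r, v = hybrid_step(r, v, 1.0, zero_residual)
```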
This paper introduces an agent-based Proactive Fault Tolerance and Management (PFTM) architecture specifically designed for small satellite missions, aiming to significantly improve the reliability and autonomy of satellite operations. The architecture focuses on two essential functionalities: Predictive Diagnostics and Prognostics (PDP), which involves the analysis and classification of faults, and Proactive Fault Management (PFM), which ensures system integrity by isolating faulty components and executing appropriate recovery procedures. A key innovation of the proposed architecture is its emphasis on fault/anomaly prediction and proactive mitigation, enabling the system to address potential failures before they occur, thus offering a substantial improvement over traditional Fault Detection, Isolation, and Recovery (FDIR) approaches. The implementation of PFTM capabilities inevitably introduces increased complexity to the onboard software, making the architecture's design a critical factor. To address this, a comprehensive trade-off analysis is conducted, evaluating five key criteria: communication overhead, power consumption, maintainability, response time, and resiliency. This analysis identifies the most effective architecture for implementing PFTM in a small satellite constellation. The paper provides an in-depth overview and comparison of current agent-based architectures, assessing their strengths and weaknesses in the context of PFTM. Additionally, existing agent-based FDIR approaches are reviewed, with a focus on their applicability and limitations within space systems. The proposed architecture is rigorously tested through the implementation of fault/anomaly detection/prediction, advanced decision-making, and proactive fault recovery methods, with a specific focus on addressing the challenge of unresponsive reaction wheels, the critical components responsible for satellite attitude control. The architecture's effectiveness is demonstrated through numerical simulations conducted using the Basilisk simulator, showcasing its capability to maintain satellite operations despite the failure of a reaction wheel. Regarding PDP, both supervised and unsupervised learning methods are investigated as part of the agent-based architecture. Although unsupervised methods only require nominal data for training, they are weaker at fault isolation. Additionally, the results produced by learning methods can be uncertain, and such methods often function as black boxes, delivering accurate predictions without revealing the underlying processes that led to those predictions. This paper employs explainable AI (XAI) as part of the agent-based architecture to tackle this issue by offering interpretability, allowing both human operators and software agents to comprehend the reasons behind detected anomalies. Regarding PFM, we first demonstrate the impact of an unresponsive reaction wheel on the satellite's performance and attitude through various parameters, including attitude error, reaction wheel torque, and speed. Then, the proactive decision-making module executes an appropriate recovery action and ensures that the spacecraft maintains its desired orientation. The results validate the architecture's potential to enhance mission success rates in small satellite constellations, making it a promising solution for future space missions.
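As an illustration of the unsupervised PDP direction, the sketch below trains an Isolation Forest on nominal torque/speed telemetry and flags a wheel that no longer tracks commanded torque; the feature set, numbers, and threshold are assumptions for illustration, not the agents implemented in the paper.

```python
# Illustrative unsupervised detector for reaction-wheel anomalies
# using an Isolation Forest over simple telemetry features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Nominal telemetry: [commanded torque (mNm), wheel speed (rpm)]
nominal = np.column_stack([rng.normal(0, 5, 2000),
                           rng.normal(3000, 150, 2000)])
model = IsolationForest(contamination=0.01, random_state=0).fit(nominal)

# An unresponsive wheel: torque is commanded but speed is not tracking
sample = np.array([[12.0, 20.0]])
if model.predict(sample)[0] == -1:
    print("anomaly flagged -> hand off to proactive recovery agent")
```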
The space sector is changing from a focus on state-supported space exploration for research towards encouraging commercial space exploration, giving rise to NewSpace. The role of SMEs in this rapid growth is increasing, but experience in other sectors shows that SMEs may not be sufficiently focused on cyber security issues. In this exploratory study, we investigated how new entrants to NewSpace have altered the practice of cybersecurity, according to industry stakeholders. To explore this question, we carried out eight semi-structured interviews with NewSpace organisations directly involved in the design, development or review of space system devices and services, including SMEs, large businesses, governmental organisations and NGOs. According to our participants, the influence of new entrants to the space industry is expected to lie in the areas of technology and infrastructure, cybersecurity maturity (including the culture of vulnerability disclosure), and regulation. These are the areas where changes may occur in the future and where intervention to promote healthy cybersecurity practices can be directed, which can be achieved through collaboration amongst the various parties.
The advancement of robotic systems has revolutionized numerous industries, yet their operation often demands specialized technical knowledge, limiting accessibility for non-expert users. This paper introduces ROSA (Robot Operating System Agent), an AI-powered agent that bridges the gap between the Robot Operating System (ROS) and natural language interfaces. By leveraging state-of-the-art language models and integrating open-source frameworks, ROSA enables operators to interact with robots using natural language, translating commands into actions and interfacing with ROS through well-defined tools. ROSA's design is modular and extensible, offering seamless integration with both ROS1 and ROS2, along with safety mechanisms like parameter validation and constraint enforcement to ensure secure, reliable operations. While ROSA was originally designed for ROS, it can be extended to work with other robotics middlewares to maximize compatibility across missions. ROSA enhances human-robot interaction by democratizing access to complex robotic systems, empowering users of all expertise levels with multi-modal capabilities such as speech integration and visual perception. Ethical considerations are thoroughly addressed, guided by foundational principles like Asimov's Three Laws of Robotics, ensuring that AI integration promotes safety, transparency, privacy, and accountability. By making robotic technology more user-friendly and accessible, ROSA not only improves operational efficiency but also sets a new standard for responsible AI use in robotics and, potentially, future mission operations. This paper introduces ROSA’s architecture and showcases initial mock-up operations in JPL's Mars Yard, a laboratory, and a simulation using three different robots. The core ROSA library is available as open source.
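The tool-wrapping pattern such an agent relies on can be sketched generically: a middleware query exposed as a validated function the language model may call. The function name and validation rules below are hypothetical and do not represent ROSA's actual API; only the `rostopic` CLI invocation is standard ROS1.

```python
# Generic sketch of wrapping a robot-middleware query as a validated
# "tool" for an LLM agent; hypothetical names, not ROSA's API.
import subprocess

ALLOWED_TOPICS = {"/cmd_vel", "/odom", "/scan"}   # constraint enforcement

def ros_topic_echo(topic: str, count: int = 1) -> str:
    """Tool: return recent messages from a whitelisted ROS topic."""
    if topic not in ALLOWED_TOPICS:                # parameter validation
        return f"refused: {topic} is not an allowed topic"
    out = subprocess.run(
        ["rostopic", "echo", "-n", str(count), topic],
        capture_output=True, text=True, timeout=10)
    return out.stdout

# An agent framework would expose this function, with its docstring,
# to the language model as a callable tool.
```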
The Interstellar Mapping and Acceleration Probe (IMAP) mission investigates two of the most important issues in space physics today: the acceleration of energetic particles and the interaction of the solar wind with the interstellar medium. This revolutionary mission includes a suite of ten instruments that together resolve fundamental scientific questions. These instruments are provided by nine institutions globally and hosted on an observatory structure integrated by the Johns Hopkins Applied Physics Lab (JHU/APL). IMAP successfully completed its Systems Integration Review (SIR) in September 2023, and has since completed core bus integration and the integration of a number of instruments, and has begun system testing. During this phase of the project, the Safety and Mission Assurance (SMA) role is critical in ensuring that work is performed and executed such that hardware and personnel are protected, non-conformances are accurately captured, understood, and addressed, and mission requirements are met. This paper provides context on IMAP SMA’s broad approach to achieving these goals during integration and test, and describes in detail what those methods look like in daily execution. Specifically, this paper covers the methods implemented for non-conformance documentation and management, the transition of SMA support from payload teams to spacecraft teams, day-to-day SMA implementation and support through the environmental test campaign, and the completion of verification of safety and mission assurance requirements. We also describe the approach to SMA resource planning used to support these critical activities, using past missions as a baseline while tailoring for the technical complexity and scope of the mission. Lastly, the paper concludes with lessons learned, both those incorporated into the approach for IMAP and those for future missions.
NASA's Artemis program and Moon to Mars objectives target the development of a sustained human presence on the Moon. This drives the need to understand which technologies and procedures are critical for long-duration surface habitation, in both lunar and Martian environments. Earth-based analogs provide the ability to test mission components in comparable environments. However, there are a limited number of analog facilities, each with high-fidelity approximations of different features of the target environment. The limited availability of analog slots and the associated high costs of testing make efficient use of analogs as a research platform critical to providing useful data for space exploration. Here, we propose a framework to evaluate the need for testing a given experiment or technology in different analog facilities. Multiple-criteria decision analysis techniques utilizing Analytic Hierarchy Process (AHP) calculations will be used to evaluate selection against three criteria: the analog facility's level of feature approximation for different mission profiles (e.g., lunar and Martian), the experiment or technology's demonstrated need for testing with analog features, and the experimental value according to NASA's shortfall identification and targeted knowledge gaps. Features such as terrain, isolation, mission control, and available technology will be used to evaluate facility approximation and the analog needs of researchers. A prospective utility assessment using this method is conducted to evaluate the need for analog testing of two representative experiments: a lunar ISRU energy production technology and a psychosocial crew teaming experiment. This selection methodology framework quantifies the merit of conducting a given experiment in various analog environments. It enables analog selection by researchers to target the best possible facility for their work, and enhances the ability of facilitators to select experiments that utilize the unique capabilities of their analog. The efficient use of analog resources via optimized experiment and facility selection will enable improved and rapid advancement of the technologies deemed most critical to test prior to integration into in-situ use.
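The core AHP calculation behind such a framework derives criterion weights from a pairwise-comparison matrix via its principal eigenvector and checks a consistency ratio; the comparison values below are invented for illustration.

```python
# Sketch of the core AHP step: weights from the principal eigenvector
# of a pairwise-comparison matrix, plus a consistency check.
import numpy as np

# Pairwise comparisons among three criteria (Saaty 1-9 scale):
# feature approximation vs. demonstrated need vs. shortfall value
A = np.array([[1.0, 3.0, 0.5],
              [1/3, 1.0, 0.25],
              [2.0, 4.0, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                       # normalized priority weights

n = A.shape[0]
CI = (vals[k].real - n) / (n - 1)  # consistency index
CR = CI / 0.58                     # random index RI = 0.58 for n = 3
print("weights:", w, "consistency ratio:", CR)
```

A consistency ratio below about 0.1 is conventionally taken as acceptable; larger values suggest the pairwise judgments should be revisited.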
A crucial part of the operation of the MMX Rover on the surface of Phobos will be an in-depth understanding of the behavior of the Rover's locomotion system. The French Centre National d’Études Spatiales (CNES) and the German Aerospace Center (DLR) are building the rover, while its locomotion system is developed by the Robotics and Mechatronics Center of DLR (DLR-RMC). The low gravity, roughly 1/2000 of that on Earth, makes laboratory experiments difficult and renders simulation indispensable for understanding and predicting the rover's behavior. This paper focuses on developing and validating the locomotion system model. Building upon the concept shown in a previous publication, the modular approach to modeling is presented in detail using the example of the locomotion system. Models exist for the various components of the locomotion sub-system (LSS). They range from simple static components representing only geometry to complex mechanical and electronic elements. Components like the coupled leg-wheel drivetrain comprise various gears with backlash, elasticity and friction. These components are split into subcomponents available at various levels of detail, which are then combined into configured versions of the locomotion system model for different applications. The modeling techniques vary from physical modeling based on differential equations for some elements, like backlash, to look-up tables based on measurement data for other elements, like the potentiometers. Components like the gyroscopes, where sensor noise is of particular interest for replicating system behavior, are modeled partially on physical models combined with additional signal modeling of the noise to match properties such as its Allan variance. In addition to the electromechanical system, the behavior of the components in the electronic box (EBox) must be represented. In the model of the EBox, the behavior of the analog-to-digital conversion for signals of the gyroscope or temperature sensors and the logic implemented on an FPGA are reproduced in simulation. This model includes an interface to connect the simulation to either a hardware-in-the-loop setup used at CNES for system verification or a software-in-the-loop setup used at DLR for locomotion software development and as a locomotion planning tool. For the simulation's application during operations, the model used must be representative of the actual flight hardware. This matching is achieved by parametrizing the model to match the available measurements of the LSS. Various measurements from component and system levels are used to confirm the model's validity.
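As a small example of the gyroscope noise-modeling approach described above, the sketch below synthesizes rate noise as angle random walk plus a bias random walk, the two components most visible in an Allan variance plot; the coefficients are placeholders, not the flight unit's parameters.

```python
# Illustrative gyro noise synthesis of the kind used to make a sensor
# model's Allan variance match measurements: angle random walk (white
# rate noise) plus bias instability approximated as a bias random walk.
import numpy as np

def gyro_noise(n, dt, arw=1e-4, rrw=1e-7, rng=None):
    """Return n rate-noise samples [rad/s] at sample interval dt [s]."""
    rng = rng or np.random.default_rng()
    white = arw / np.sqrt(dt) * rng.standard_normal(n)            # ARW term
    bias = np.cumsum(rrw * np.sqrt(dt) * rng.standard_normal(n))  # RRW term
    return white + bias

# These samples would be added to the physically modelled true rate,
# then the simulated Allan deviation compared against the measured one.
noise = gyro_noise(100000, dt=0.01)
```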
Space-related equipment is often highly customized for specific missions, making it inflexible and costly to repurpose for other uses. This paper discusses the technical challenges and planned modifications required to adapt the hardware and software of existing ground testing equipment to meet the needs of a new space mission. The focus is on the reutilization of validated hardware, software libraries, and FPGA IP cores. The SimuCam, originally developed for the European Space Agency's (ESA) PLATO mission as a camera simulation system by the Núcleo de Sistemas Eletrônicos Embarcados at Instituto Mauá de Tecnologia (NSEE-IMT), is now being repurposed to simulate the VenSpec spectrographic suite for ESA’s EnVision mission to study Venus. This paper examines and describes the strategic decisions made to adapt and improve the equipment while maintaining and reutilizing an already robust solution. The expected similarities between the PLATO cameras and the VenSpec instruments are explored, with a focus on the functional requirements, the effort needed to modify the existing infrastructure, and the anticipated benefits of these adaptations. New challenges, such as implementing PUS interfaces over SpaceWire and transitioning from a DE4 to a TR4 development board due to hardware discontinuation, are also addressed, including their impact on the overall system. The proposed changes include shifting from a centralized processing architecture to a distributed processing model and migrating from NIOS II to RISC-V processors to enhance system flexibility and performance. The adoption of RISC-V, along with the transition from µC/OS-II to FreeRTOS, is expected to provide a more robust and scalable solution leading up to the mission launch. Ultimately, this study aims to illustrate how existing space-related technology can be effectively reused and adapted for new missions, reducing development time and costs while leveraging proven systems.