
The NASA Psyche Discovery Mission is currently cruising towards a rendezvous with the asteroid (16) Psyche, the largest M-class asteroid in the main asteroid belt [1]. The spacecraft instrument suite consists of a magnetometer, gamma-ray and neutron spectrometers, multispectral imagers, and radio science experiments, all designed to unravel the history of (16) Psyche [2,3]. The resulting data products generated by the mission (e.g., images; spectroscopic, magnetic, and gravity field data; shape models; geologic maps) will be delivered to the Small Bodies Node of the NASA Planetary Data System (PDS) for long-term archiving [4]. The Psyche mission’s Science Data Center (SDC), part of the JPL Psyche Mission System, is the point of contact for all data sharing and archiving activities. Here we describe the design and implementation of the SDC, which was guided by many factors: supporting mission requirements related to data warehousing; performing data dissemination and archiving; adhering to federal, NASA, and ASU cybersecurity guidelines; and following industry best practices. Given the long baseline of the mission (launch in October 2023 and arrival at the asteroid in mid-2029), the system needs to be easily maintainable and upgradable during the mission’s lifetime. The SDC is located on the Tempe campus of ASU, with connections to the mission’s Ground Data System at JPL, and is available to the Psyche team via a web portal utilizing purpose-built tools. The SDC leverages heritage tools, concepts, and lessons learned from previous ASU instrument operations, such as for the Lunar Reconnaissance Orbiter Cameras on LRO, the Mastcam cameras on the Mars Science Laboratory rover, and the Mastcam-Z cameras on the Mars 2020 rover. We describe the heritage, principles, and requirements that guided the design and initial development of the Psyche Science Data Center, including real-world examples of the SDC’s data portal, RESTful interface, in-house scripts, and early data products. The design of the SDC is centered around a relational database, with a schema to model the many files that are ingested and tracked, as well as their relationships to the PDS bundles being aggregated and delivered. The SDC disseminates data products to the Psyche team for science investigations through a web-based portal, which also includes a RESTful interface that allows team members to upload, download, and search data using in-house scripts. The three instrument teams make heavy use of the RESTful interface for uploading their PDS products. The web portal also makes mosaics and individual image products available to the team using Web Mapping Service technology. Most of the tools developed for the SDC are written in Python, using virtual environments to minimize the need to configure and maintain Python at the system level. These adhere to the UNIX philosophy of software development: make each program do one thing well, expect output (when generated) to become input to another tool, and test early and refactor as needed. References: [1] Elkins-Tanton+, Space Sci. Rev., 218, 2022. [2] Dibb+, AGU Advances, 5, doi:10.1029/2023AV001077, 2024. [3] Polanskey+, Space Sci. Rev., submitted, 2025. [4] https://pds-smallbodies.astro.umd.edu
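To illustrate the workflow described above, the following is a minimal sketch of how a team member's in-house Python script might interact with the SDC's RESTful interface. The endpoint paths, query parameters, and token scheme shown here are illustrative assumptions, not the SDC's actual API.

```python
import requests

# Placeholder base URL and token; the real SDC endpoints are not public.
SDC_BASE = "https://sdc.example.asu.edu/api"
HEADERS = {"Authorization": "Bearer <team-member-token>"}

# Search: query products by instrument and time range (hypothetical endpoint).
resp = requests.get(f"{SDC_BASE}/products",
                    params={"instrument": "multispectral-imager",
                            "start": "2029-08-01", "stop": "2029-08-31"},
                    headers=HEADERS, timeout=30)
resp.raise_for_status()

# Download: fetch each matching product to the local working directory.
for product in resp.json()["products"]:
    data = requests.get(f"{SDC_BASE}/products/{product['id']}/download",
                        headers=HEADERS, timeout=300)
    data.raise_for_status()
    with open(product["filename"], "wb") as f:
        f.write(data.content)

# Upload: submit a locally generated PDS label/data pair for ingestion.
with open("img_0001.xml", "rb") as label, open("img_0001.img", "rb") as img:
    requests.post(f"{SDC_BASE}/upload",
                  files={"label": label, "data": img},
                  headers=HEADERS, timeout=300).raise_for_status()
```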
The proliferation of space debris has become a matter of grave concern for new and existing orbiting assets, leading to potential collisions and losses. One approach for mitigating the impacts of space debris involves the use of satellites with robotic manipulation arms mounted on them. Capturing debris with such a system involves entering a rendezvous orbit with a previously identified debris object, using a vision-based grasp planner to find an approach trajectory for the robotic arm, and finally controlling the attitude of the satellite along with the joints of the robotic arm and gripper to execute a successful grasp. Building upon our previous work on grasp planning for satellite-mounted manipulators, we propose a whole-body control strategy that uses reinforcement learning (RL) to control the combined satellite and robotic arm system after the orbit rendezvous stage. In order to execute debris capture in a closed-loop manner, we employ a lightweight vision-based grasp planner that allows for real-time visual servoing, as well as tactile sensing on the robotic arm’s end-effector to prevent the target object from slipping. The reinforcement learning model, once trained, would be able to operate with lower computational cost than classical approaches such as model predictive control (MPC). We demonstrate the efficacy of this closed-loop RL-based controller for debris capture in a high-fidelity physics simulation environment and compare its performance with classical controllers.
A radio was designed to support simultaneous co-channel jamming and communications using an in-band Simultaneous Transmit and Receive (STAR) antenna array followed by two layers of adaptive estimator/subtractors. These cancellers use a reference signal obtained by tapping off a small fraction of the transmitted jamming signal. The estimator/subtractors can suppress other locally transmitted signals if they are tethered to the receiver to provide reference signals. The output of the implemented local interference-cancelling front-end resembles a four-channel array, and array interference-nulling techniques can be applied to the four-channel output to suppress untethered-interference signals. Due to the preceding adaptive cancellers, the untethered-interference nulling algorithm used cannot require array calibration. The received communication signal has a known format and contains known symbol sub-sequences that can be used as training data. The proposed algorithm for untethered interference cancellation with signal training data and without array calibration maximizes the output signal-to-interference-plus-noise ratio, using the training sequence to estimate Minimum Variance Distortionless Response (MVDR) type array weights without knowing the signal Angle-of-Arrival (AoA). Here we call it MVDR for Uncalibrated Arrays (MUA). This paper describes the receiver design and presents simulation results for the MUA algorithm applied to the four-channel output. It is shown that, with ideal self-interference cancelling, untethered-interference nulling performance is good whenever the input Interference-to-Noise Ratio is high enough for the algorithm to implicitly estimate the array interference response, and not so high that the algorithm cannot reliably detect signal training data, provided the interference and signal AoAs are at least 45° apart. Signal-to-Interference Ratio improvements as high as 50 dB have been observed when untethered interference is strong.
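The following numpy sketch illustrates the core idea behind a training-based, calibration-free beamformer of the kind MUA represents: the weights are estimated from the sample covariance and the cross-correlation with known training symbols, so no steering vector or AoA knowledge is required. The scenario parameters (four channels, BPSK-like training, interference and noise levels) are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 512                      # channels, training snapshots (assumed)
d = rng.choice([-1.0, 1.0], N)     # known BPSK-like training sequence

# Unknown (uncalibrated) array responses: random complex gains and phases.
a_sig = rng.standard_normal(M) + 1j * rng.standard_normal(M)
a_int = rng.standard_normal(M) + 1j * rng.standard_normal(M)

interf = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * 3.0
noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) * 0.1
X = np.outer(a_sig, d) + np.outer(a_int, interf) + noise   # 4-channel data

# Sample covariance and signal cross-correlation estimated from training data.
R = X @ X.conj().T / N
p = X @ d.conj() / N

# MVDR/Wiener-type weights: maximize output SINR without array calibration.
w = np.linalg.solve(R, p)
y = w.conj() @ X                   # beamformed output

# Squared normalized correlation of the output with the training sequence.
corr2 = abs(np.vdot(y, d)) ** 2 / (np.vdot(y, y).real * N)
print(f"output correlation with training data after nulling: {corr2:.3f}")
```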
Chascii is currently developing the inter-spacecraft omnidirectional optical communicator (ISOC) to provide fast connectivity and navigation information to small spacecraft forming a swarm or a constellation in LEO and cislunar space. The ISOC operates at 1550 nm and employs a dodecahedron body that holds six optical telescopes and 20 external arrays of detectors for determining the angle of arrival of the incoming beams. Additionally, the ISOC features six fast avalanche photodetector receivers for high-speed data-rate connectivity. The ISOC should be suitable for distances ranging from a few kilometers to a few thousand kilometers. It will provide ubiquitous coverage and gigabit connectivity among smallsats forming a constellation around the Moon. It will also offer continuous positional information among these spacecraft, including bearing, elevation, and range. We also expect the ISOC to provide fast, low-latency connectivity to assets on the lunar surface, such as landers, rovers, instruments, and astronauts. Chascii is currently developing a lunar ISOC, including all its transceivers, optics, and processing units. We are developing the ISOC as a key candidate to enable LunaNet. In this paper, we will present the development status of the ISOC as well as link budget calculations for operations around the Moon using pulse-position modulation. We will present experimental results of angle-of-arrival testing using various experimental apparatus. We will also present connectivity results obtained with two ISOCs, including measured bit-error rates under different conditions. We will discuss our ISOC development roadmap, which includes LEO, GEO, and lunar missions, spanning the 2026-2029 timeframe. We believe the ISOC, once fully developed, will provide commercial, high-data-rate connectivity to future scientific, military, and commercial missions in LEO, cislunar space, and beyond.
Signal integration is used in digital communication systems with data fusion to enhance performance and reduce multipath effects. The two main approaches to signal integration in digital communication systems with data fusion are full and semi-full signal integration. In full signal integration systems, multiple receivers produce a very large number of bits, and the entire signal integration system closely resembles analog multiple-receiver implementations. This approach achieves optimum performance at the expense of high cost and complexity. In semi-full signal integration systems, only a few bits are used after preliminary processing of the signals at each individual receiver. This method reduces system complexity and cost at the expense of some overall performance degradation. This paper provides a performance analysis of the full and semi-full signal integration approaches in digital communication systems for the case of non-coherent frequency shift keying (NCFSK) receivers with Gaussian noise and a Rician fading stochastic model. The performance loss due to semi-full signal integration is analyzed for different numbers of information bits.
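A Monte-Carlo sketch of the comparison described above is given below: per-receiver NCFSK decision statistics over Rician fading are combined either at full precision (full integration) or after coarse quantization (semi-full integration), and the resulting bit error rates are compared. The receiver count, Rician K-factor, SNR, and uniform quantizer are illustrative assumptions, not the paper's analytical model.

```python
import numpy as np

rng = np.random.default_rng(1)
L, N, snr_db, K = 4, 200_000, 3.0, 5.0    # receivers, bits, SNR, Rician K (assumed)
sigma = 10 ** (-snr_db / 20)

def rician(shape):
    # Rician fading: line-of-sight component plus scattered component.
    los = np.sqrt(K / (K + 1))
    scat = np.sqrt(1 / (2 * (K + 1)))
    return los + scat * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

bits = rng.integers(0, 2, N)
h = rician((L, N))                                    # per-receiver fading
n0 = sigma * (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N)))
n1 = sigma * (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N)))
r0 = h * (bits == 0) + n0                             # tone-0 matched filter
r1 = h * (bits == 1) + n1                             # tone-1 matched filter
stat = np.abs(r1) ** 2 - np.abs(r0) ** 2              # per-receiver NCFSK statistic

def ber(decision_stat):
    # Combine across receivers, decide bit 1 if the summed statistic is positive.
    return np.mean((decision_stat.sum(axis=0) > 0) != (bits == 1))

def quantize(x, q_bits):
    # Uniform mid-rise quantizer over the statistic's observed range.
    step = (x.max() - x.min()) / (2 ** q_bits)
    return np.floor((x - x.min()) / step) * step + x.min() + step / 2

print(f"full integration BER:   {ber(stat):.4e}")
for q in (1, 2, 3, 4):
    print(f"semi-full ({q}-bit) BER: {ber(quantize(stat, q)):.4e}")
```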
Satellite constellations and swarms are rapidly becoming more common in the world of space systems. Organizations ranging from governments to private internet providers are starting to utilize multiple satellites working together to accomplish their mission, often with the ability to communicate with one another in orbit. This survey provides a brief overview of the current world of satellite systems and constellations and the current state of their security, using open-source information. Focus is placed on systems currently in development or deployed, real-world cyber attacks against satellites, research being conducted on satellite testbeds for cybersecurity, and research being conducted on theoretical attacks against satellites. Systems covered include Starlink, Kuiper, ViaSat, GNSS, those from the NRO, NASA, and the SDA, and several more. Each system is briefly compared, and its security posture is analyzed from the perspective of an attacker and a defender. Several of these systems offer robust security, but several may contain gaps that attackers can exploit. Additionally, some of the experimental systems may offer new attack vectors, and thought is given to how those could impact the security of the constellations. The goal of this survey is to highlight the significance of cybersecurity in satellites and where attackers might direct their attention in the future. It also briefly provides advice and resources for cybersecurity researchers, with the hope of providing avenues to get started in this very challenging field.
This paper details the design, verification, and validation (V&V) of the Hardware Electronic Real-time Message Exchange System (HERMES), a satellite surrogate communication testbed (SSCT). HERMES leverages commercial off-the-shelf (COTS) hardware to emulate a deployed satellite communication subsystem. The system comprises a small, low-cost ground-station (GS) testbed for evaluating the data-link communication between the satellite and the GS. HERMES leverages the capable open-source ground-station user interface OpenC3 COSMOS for sending packets while offering firmware for emulating the responses from the satellite flight computer. The results show that HERMES offers excellent value to student-led CubeSat programs. One advantage of HERMES is the possibility of testing the communication between the satellite and GS without waiting for a functioning engineering model of the satellite and the onboard computer (OBC) subsystem firmware. This is achieved because the SSCT offers open-source surrogate satellite firmware, deployed to COTS microprocessors, that emulates the satellite's OBC behavior. Another advantage of HERMES lies in its standardization of communications, which uses standard packetization. This standard lays the foundation for the early adoption of mission packet definitions adaptable to various mission objectives and payloads. Therefore, while HERMES can receive and decode uplink commands sent from COSMOS/GS and handle the packets to produce and send downlink telemetry back to COSMOS/GS, it enables the GS development to advance in parallel with the satellite hardware and flight software. The radio-frequency communication is transmitted between two software-defined radios (SDRs), while an embedded microprocessor emulates the flight radio leveraging GNU Radio software. The experimental results demonstrate that all packets sent to and from HERMES are received and correctly interpreted for all configurations. The latency of the packet communication process between COSMOS and the GS hardware was calculated and saved as part of the metadata for each experiment. Furthermore, HERMES was evaluated using an OBC flight unit from Virginia Tech's upcoming CubeSat UTProSat-1, slated to launch in 2025. The overall comparison results demonstrate the potential of HERMES as a surrogate COTS satellite subsystem and ground-station testbed for communication V&V with OpenC3 COSMOS.
Flexible spacecraft structures present significant challenges for physical and control system design due to nonlinear dynamics, mission constraints, environmental variables, and changing operational conditions. This paper presents a data-driven framework for constructing reduced-order surrogate models of flexible spacecraft using the method of Dynamic Mode Decomposition (DMD), followed by optimal sensor/actuator pair placement. High-fidelity simulation data from a nonlinear flexible spacecraft model, including coupled rigid-body and elastic modes, are captured by defining a mesh of nodes over the spacecraft body. The data-driven methods are then used to construct a modal model from the time histories of these node points. Optimal sensor/actuator placement for controllability and observability is performed via a nonlinear programming technique that maximizes the singular values of the Hankel matrix. Finally, the sensor placement and dynamics modeling approach is iterated to account for changes in the dynamic system introduced by sensor/actuator physical mass. The proposed methodology enables initialization of physical modeling without requiring a direct analytical model and provides a practical solution for onboard implementation in model-based control and estimation systems. Results demonstrate the optimal design methodology, achieving substantial model-order reduction while preserving dynamic fidelity, and provide insight into effective sensor/actuator configurations for estimation and control.
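As a minimal illustration of the model-reduction step, the following Python sketch applies exact DMD to synthetic node time histories; the snapshot data and the rank-r truncation are assumptions standing in for the paper's high-fidelity spacecraft simulation.

```python
import numpy as np

def dmd(X, Xp, r):
    """Exact DMD: fit Xp ≈ A X with a rank-r operator; return eigenpairs."""
    U, S, Vh = np.linalg.svd(X, full_matrices=False)
    U, S, Vh = U[:, :r], S[:r], Vh[:r, :]            # rank-r truncation
    A_tilde = U.conj().T @ Xp @ Vh.conj().T / S      # projected low-order operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (Xp @ Vh.conj().T / S) @ W               # exact DMD modes
    return eigvals, modes

# Synthetic "node mesh" time histories: two damped oscillatory modes.
t = np.linspace(0, 10, 500)
data = (np.outer(np.sin(np.arange(50)), np.exp(-0.05 * t) * np.cos(2 * t))
        + np.outer(np.cos(np.arange(50)), np.exp(-0.02 * t) * np.sin(5 * t)))

eigvals, modes = dmd(data[:, :-1], data[:, 1:], r=4)
dt = t[1] - t[0]
# Should recover approximately -0.05 ± 2j and -0.02 ± 5j.
print("continuous-time eigenvalues:", np.log(eigvals) / dt)
```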
Objectives: General Matrix-Matrix Multiplication (GEMM) and General Matrix-Vector Multiplication (GEVM) operations are critical in multifunction space communications applications, ranging from free-space optical integrated sensing and communications (ISAC) to ISAC-enabled satellite systems. The underlying components of such systems include beamforming, channel estimation, interference mitigation, and adaptive filtering, among others. This paper aims to develop an efficient implementation of GEMM and GEVM using a Multi-Instruction Multi-Datapath (MIMD)-based Domain Adaptive Processor (DAP). The goal is to achieve high-throughput, low-latency, and energy-efficient computation while maintaining scalability and reconfigurability.
Methods: The DAP integrates a runtime-reconfigurable systolic array within the DASH SoC platform. Unlike SIMD processors, the MIMD architecture enables fine-grained control, parallel execution, and tailored dataflow. A row-stationary, column-streaming strategy is introduced to reduce memory overhead and support pipelined computation across processing elements (PEs). A case study examines GEMM for radar-communication interference mitigation, focusing on a 4×64 by 64×4 cross-covariance matrix mapped to an 8×8 PE array. Analytical modeling estimates latency, compute time, throughput, power, and energy efficiency across varying array sizes, geometries, and matrix dimensions. Execution time is broken down into instruction loading, data loading, and computation/routing costs.
Results: Scaling PEs improves latency up to 3× (with diminishing returns beyond 16 PEs). Computational cost grows nearly linearly, showing ∼2× improvement at 16 PEs. Throughput saturates at 8 PEs but increases steadily (≈2×) with more columns or PEs. Power consumption remains stable (≈1.08–1.16× growth under higher workloads), highlighting the efficient streaming dataflow. Geometry affects performance: with 8 PEs, latency gains range from 1.25× (4×2) to 4× (1×8); with 16 PEs, gains range from 1.16× (4×4) to 2.11× (2×8). Diagonal mappings introduce higher overhead due to extensive PE usage. Scalability analysis shows consistent 1.9–2× gains in latency and throughput when input dimensions double, while energy scales linearly with workload.
Conclusion: The proposed MIMD-based DAP achieves significant improvements in GEMM/GEVM performance, delivering up to 8048.7 GOPS/W, surpassing contemporary accelerators such as the Google TPU and MIT Eyeriss. The results validate the advantages of domain-specific, reconfigurable architectures for multifunction RF systems, combining high performance, scalability, and energy efficiency.
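The following is a back-of-the-envelope Python sketch of the kind of analytical model described in the Methods, decomposing GEMM execution on a PE array into instruction-loading, data-loading, and computation/routing components under a row-stationary, column-streaming mapping. All per-operation cycle costs are placeholder assumptions, not the DAP's measured parameters.

```python
def gemm_cycles(M, K, N, rows, cols,
                instr_cycles=64, load_per_elem=1, mac_per_elem=1, route=2):
    """Estimate cycles for an MxK by KxN GEMM on a rows x cols PE array.
    All cycle costs are illustrative placeholders."""
    pes = rows * cols
    # Row-stationary: A rows are pinned to PE rows; B columns stream through.
    tiles = -(-M // rows) * -(-N // cols)                # ceil-division tile count
    instruction = instr_cycles * pes                     # one-time program load
    data_load = load_per_elem * (M * K + K * N)          # stream both operands once
    compute = tiles * (K * mac_per_elem + route * cols)  # pipelined MACs + routing
    return instruction + data_load + compute

# Case-study shape from the abstract: 4x64 by 64x4 on an 8x8 PE array.
print("estimated cycles (8x8):", gemm_cycles(4, 64, 4, 8, 8))
# Geometry sweep at a fixed 8-PE budget.
for geom in [(1, 8), (2, 4), (4, 2), (8, 1)]:
    print(geom, gemm_cycles(4, 64, 4, *geom))
```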
In recent years, multi-agent reinforcement learning (MARL) has emerged as a promising approach for multi-unmanned combat aerial vehicle (UCAV) autonomous countermeasure systems. However, conventional end-to-end MARL methods often lack expert knowledge guidance, leading to low training efficiency, which poses challenges for simulation-to-reality (sim2real) transition. In this study, we focus on the scenario of cooperative beyond-visual-range (BVR) aerial engagement involving multiple UCAVs. To address these challenges, we propose the hybrid-constrained multi-agent proximal policy optimization (HC-MAPPO) algorithm. First, we design a rule filter mechanism, where expert rules dictate agent behavior in well-understood states to ensure predictable and interpretable maneuvers, while the neural policy is applied otherwise. Second, we formulate the multi-agent aerial combat problem as a Constrained Markov Decision Process (CMDP) and incorporate a cost-critic network into the actor-critic architecture, which enables explicit estimation of long-term constraint costs and decouples penalties from task rewards. Third, we develop a bilevel optimization framework for constrained policy search, which provides theoretical convergence guarantees and demonstrates improved training stability over traditional Lagrangian-based methods. Empirical results demonstrate that HC-MAPPO achieves a superior success rate, improving the win rate by approximately 20%–30% compared to existing MARL baselines such as MAPPO and HAPPO. Ablation studies further confirm the necessity of both constraints: removing either one leads to performance degradation.
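A conceptual Python sketch of the rule-filter mechanism is shown below: expert rules dictate the action in well-understood states, and the learned policy acts otherwise. The state fields, thresholds, and rules are invented examples, not the paper's actual rule set.

```python
def rule_filtered_action(state, policy):
    """Return an expert-rule action in well-understood states, else the
    neural policy's action. Fields and thresholds are invented examples."""
    # Well-understood state 1: missile inbound and close -> evasive break turn.
    if state["missile_inbound"] and state["threat_range_km"] < 10.0:
        return {"maneuver": "break_turn", "throttle": 1.0}
    # Well-understood state 2: bingo fuel -> disengage and return to base.
    if state["fuel_fraction"] < 0.2:
        return {"maneuver": "rtb", "throttle": 0.6}
    # Otherwise defer to the neural policy trained with HC-MAPPO.
    return policy(state)

# Example call with a stand-in policy.
action = rule_filtered_action(
    {"threat_range_km": 42.0, "missile_inbound": False, "fuel_fraction": 0.8},
    policy=lambda s: {"maneuver": "crank", "throttle": 0.9})
print(action)
```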
Anomaly prediction is a critical method used by aerospace engineers to ensure the overall reliability, safety, and operational efficiency of space missions. The proposed model takes advantage of a deep learning approach that combines autoencoders, attention mechanisms, and recurrent neural networks to detect unusual patterns in time series data from space missions. This allows the model to identify both spatial and temporal anomalies within the data. A key innovation is a new range calibration mechanism inspired by alpha-beta pruning. This approach dynamically detects anomalies based on threshold values and assists in reducing false positives within feature ranges. Additionally, the model employs a series of sequential post-processing techniques to optimize the overall F1-score for anomaly prediction. The proposed approach is evaluated on two real-world NASA datasets: Soil Moisture Active Passive (SMAP) and the Mars Science Laboratory (MSL) rover (Curiosity). Results demonstrate superior performance compared to the other baseline models. The approach can proactively mitigate mission-critical system failures and support efficient resource allocation and mission success in aerospace systems.
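The sketch below illustrates threshold-based anomaly flagging on per-feature reconstruction errors with pruning of isolated flags, loosely in the spirit of the range-calibration and post-processing steps described above; the k-sigma rule and run-length pruning are illustrative stand-ins for the paper's actual mechanism.

```python
import numpy as np

def flag_anomalies(errors, k=4.0, min_run=3):
    """errors: (time, features) reconstruction-error matrix.
    The k-sigma threshold and run-length pruning are assumed stand-ins."""
    mu, sigma = errors.mean(axis=0), errors.std(axis=0)
    over = (errors > mu + k * sigma).any(axis=1)   # per-timestep flag
    # Prune isolated flags: keep only runs of at least `min_run` timesteps,
    # which reduces false positives from single-sample spikes.
    flags = np.zeros_like(over)
    run = 0
    for i, v in enumerate(over):
        run = run + 1 if v else 0
        if run >= min_run:
            flags[i - min_run + 1 : i + 1] = True
    return flags

errs = np.abs(np.random.default_rng(2).standard_normal((1000, 25)))
errs[600:620, 7] += 8.0                            # injected anomaly segment
print("anomalous timesteps:", np.flatnonzero(flag_anomalies(errs)))
```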
Reinforcement learning (RL) has emerged as a popular choice for training artificial intelligence algorithms in competitive scenarios due to recent successes in achieving superhuman performance in board and video game environments. However, there is currently a lack of open-source, competitive environments designed to exhibit the challenges of realism in the domain of spacecraft control to assist in the transfer of RL algorithms to physical systems. We present AstroCraft, a space-based capture-the-flag environment composed of two opposing teams, each containing multiple maneuverable satellites and a space station in geosynchronous equatorial orbit. The primary goal of each team is to maneuver a satellite to capture a flag at the opposing player’s station and return the flag to its own station without being tagged out by opposing satellites. We first perform an experimental study on AstroCraft that elicits challenges from realisms such as long time horizons, complex dynamics, and stringent fuel constraints. Throughout, we conduct experiments on similar environments from the literature, indicating that many of these realisms are not significantly present in them and do not degrade performance. Finally, we design an RL algorithm that first gathers data from a heuristic opponent competing against itself; constructing this dataset enables the application of Conservative Q-learning for offline pretraining before further online finetuning. This algorithm produces a model that is superior to the original heuristic opponent. We believe that the lessons learned from our experiments on AstroCraft provide promising avenues for constructing RL algorithms that overcome challenges of realism in simulation and physical systems alike.
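The data-gathering step that precedes offline pretraining might look like the following sketch, where transitions from a heuristic opponent playing against itself are logged into a dataset suitable for Conservative Q-learning. The environment handle and heuristic interface are assumed placeholders, not AstroCraft's actual API.

```python
def collect_selfplay_dataset(env, heuristic, n_episodes):
    """Log self-play transitions for offline RL. `env` and `heuristic` are
    assumed placeholders: env.reset()/env.step() return per-agent dicts."""
    dataset = {"obs": [], "actions": [], "rewards": [], "next_obs": [], "dones": []}
    for _ in range(n_episodes):
        obs, done = env.reset(), False
        while not done:
            # Both teams are driven by the same scripted heuristic.
            actions = {agent: heuristic(agent_obs)
                       for agent, agent_obs in obs.items()}
            next_obs, rewards, done, _ = env.step(actions)
            for agent in obs:
                dataset["obs"].append(obs[agent])
                dataset["actions"].append(actions[agent])
                dataset["rewards"].append(rewards[agent])
                dataset["next_obs"].append(next_obs[agent])
                dataset["dones"].append(done)
            obs = next_obs
    return dataset
```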
Accurate and robust pose estimation of non-cooperative spacecraft is critical for autonomous rendezvous and on-orbit servicing. While monocular vision-based methods have attracted growing interest owing to their low cost and structural simplicity, achieving high-precision pose estimation under large scale variations in target distance and complex illumination conditions remains a formidable challenge. In this paper, we propose a novel dual-path prediction network reinforced with a geometric consistency constraint to address these issues. Our framework features two distinct yet complementary pathways. The first path employs a feature pyramid network to extract multi-resolution representations, from which stable keypoints are detected and subsequently integrated with a PnP solver, thereby enabling accurate pose estimation across targets with large scale variations. The second path employs an adaptive-weighted feature pyramid network augmented with a spatial self-attention module to effectively fuse multi-scale information and strengthen global contextual reasoning. Its output is processed by two direct regression heads for rotation and translation, hence improving accuracy and robustness under occlusion and degraded geometric conditions. To ensure coherence between the two pathways, we further introduce a geometric consistency loss that enforces alignment of their outputs during training, thereby improving stability and generalization. Experimental results on SPEED and SwissCube datasets demonstrate that our framework achieves substantial improvements over existing methods, particularly under extreme conditions.
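A minimal numpy sketch of a geometric consistency term between the two pathways' pose outputs is given below: a geodesic rotation distance between unit quaternions plus a normalized translation error. The weighting and exact form are assumptions; the paper's loss may differ in detail.

```python
import numpy as np

def geometric_consistency_loss(q1, t1, q2, t2, w_rot=1.0, w_trans=1.0):
    """Penalize disagreement between the two pathways' (quaternion, translation)
    outputs. Weights and exact form are illustrative assumptions."""
    q1, q2 = q1 / np.linalg.norm(q1), q2 / np.linalg.norm(q2)
    # Geodesic angle between the two rotations (sign-invariant for q vs. -q).
    rot_err = 2.0 * np.arccos(np.clip(abs(np.dot(q1, q2)), -1.0, 1.0))
    # Relative translation error, normalized by target distance.
    trans_err = np.linalg.norm(t1 - t2) / max(np.linalg.norm(t2), 1e-8)
    return w_rot * rot_err + w_trans * trans_err

loss = geometric_consistency_loss(
    np.array([0.99, 0.01, 0.0, 0.10]), np.array([0.10, -0.20, 5.0]),
    np.array([0.98, 0.02, 0.0, 0.12]), np.array([0.11, -0.19, 5.1]))
print(f"consistency loss: {loss:.4f}")
```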
In order to maintain the system performance and mission productivity of spacecraft, health monitoring and fault diagnostics are crucial. It is essential to ensure that a spacecraft is operating properly, as anomalies could jeopardize the whole mission. Traditional approaches are challenging to apply, as they often rely on post-mission data or are subject to resource limitations, making them insufficient for detecting subtle or emerging anomalies during flight. This paper introduces a simulation-driven diagnostics framework that uses NASA's SimuPy and a custom telemetry generator to emulate fault conditions and produce multivariate telemetry streams for spacecraft. We apply Random Forest classification models to the synthetic telemetry to detect and categorize anomalies. The proposed framework facilitates both pre-launch validation and in-flight anomaly detection. While previous approaches have primarily focused on retrospective failure analysis, our approach supports proactive diagnostics by simulating system behavior and dynamically injecting both nominal and fault conditions during runtime.
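As an illustration of the classification stage, the following sketch trains a Random Forest on toy synthetic telemetry with an injected bias fault; the telemetry generator here is a stand-in for the SimuPy-based simulation described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(3)
n = 5000
telemetry = rng.standard_normal((n, 6))   # e.g., temperatures, currents, rates
labels = np.zeros(n, dtype=int)           # 0 = nominal

fault = rng.random(n) < 0.1               # inject a bias fault on channel 2
telemetry[fault, 2] += 3.0
labels[fault] = 1

X_train, X_test, y_train, y_test = train_test_split(
    telemetry, labels, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```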
Fatigue crack growth under high-cycle fatigue is one of the most severe problems in the design, maintenance, and safe operation of aircraft structures. During operation, these structures experience millions of loading cycles, which cause the gradual growth of nucleated cracks leading to ultimate failure. Thus, accurate modeling of fatigue crack growth is necessary for ensuring structural integrity, maximizing inspection intervals, and extending the service life of aerospace structures. Conventional methods of modeling crack growth under high-cycle fatigue use linear elastic fracture mechanics to arrive at the cyclic stress intensity factor, which is then used in the Paris law describing steady-state crack growth. The Paris law is highly nonlinear and involves two constants, C and m, under fully reversed loading. These parameters are evaluated using Euler integration of the Paris law and linear regression of scattered crack growth measurements from standard tests. However, a lack of constraints during the calibration can render the parameter estimates inaccurate, particularly when the data is significantly scattered. To address this limitation, Physics-Informed Machine Learning (PIML) architectures are employed to calibrate the parameters of the Paris law. Before utilizing this calibration approach, however, the accuracy of Physics-Informed Neural Networks (PINNs) in integrating the Paris law was tested. To this end, the predictions from Physics-Infused Long Short-Term Memory (PI-LSTM) and Implicit Euler Transfer Learning (IETL) architectures were also compared to Euler integration, and reasonable agreement was obtained. Following this, these methods were applied to obtain the parameters from numerically generated data using assumed C and m values. It was observed from the study that the method was not only able to calibrate the parameters, but also that the network could be used to predict crack growth when the amplitudes were modified. Finally, scattered data was artificially generated by choosing distributions of C and m. Subsequently, IETL was applied to the scattered data to calibrate the parameters and showed satisfactory agreement. In summary, this study exemplifies the merits and demerits of different PIML methods when applied to predict crack growth from the Paris law. Furthermore, the approach allows both crack growth evolution and Paris constants to be predicted from limited experimental data, thereby reducing the need for repeated costly tests across different loading cases. Finally, the reliability of the PIML framework in predicting crack growth for various amplitudes and block loading is demonstrated.
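For reference, Euler integration of the Paris law, as used in the conventional calibration described above, can be sketched in a few lines; the constants C and m, the stress range, and the geometry factor are illustrative values, not the paper's calibrated parameters.

```python
import numpy as np

# Paris law: da/dN = C * (dK)^m, with dK = Y * d_sigma * sqrt(pi * a).
C, m = 1e-11, 3.0            # illustrative Paris constants [mm/cycle, MPa*sqrt(mm)]
d_sigma, Y = 100.0, 1.0      # stress range [MPa], geometry factor (assumed)
a, a_crit = 1.0, 25.0        # initial and critical crack lengths [mm]

history, n = [a], 0
while a < a_crit:
    dK = Y * d_sigma * np.sqrt(np.pi * a)   # cyclic stress intensity factor
    a += C * dK ** m                        # explicit Euler step with dN = 1 cycle
    n += 1
    history.append(a)

print(f"cycles to critical length: {n}, final crack length: {history[-1]:.2f} mm")
```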
Aviation maintenance is a foundational pillar of operational safety, reliability, and mission success. As aircraft systems evolve to incorporate increasingly sophisticated technologies, ranging from advanced avionics to integrated sensor networks, maintenance personnel face mounting complexity in diagnostics, data interpretation, and procedural execution. These challenges are compounded by workforce shortages, Subject Matter Expert (SME) turnover, and the growing demand for predictive, data-driven maintenance solutions. In particular, the departure of experienced personnel often results in the loss of critical experience, including nuanced troubleshooting strategies and platform-specific insights, challenging the ability to sustain operational continuity and support the next generation of maintainers. Traditional documentation and informal tribal knowledge, while historically effective, are increasingly inadequate in the face of complex systems, workforce turnover, and the demand for predictive insights and scalability. In both commercial and military domains, these limitations contribute to reduced readiness, inefficiencies, and elevated lifecycle costs. Recent advancements in Artificial Intelligence, particularly Large Language Models (LLMs) and predictive analytics, present a novel but complex opportunity to address these challenges. LLMs can ingest and synthesize unstructured data from manuals, technician notes, and maintenance logs, enabling intelligent diagnostics, contextual troubleshooting, and dynamic knowledge sharing. This paper will examine traditional maintenance practices and evaluate how different AI models can mitigate systemic inefficiencies while critically addressing the operational, ethical, and technical constraints inherent to their deployment.
Mobile multirobot systems have become increasingly utilized due to benefits such as redundancy, increased coverage, and the ability to create improved data products (like using several scalar field measurements to compute an instantaneous gradient). Multirobot systems include a wide variety of architectures, ranging from swarms, where large numbers of relatively simple robots form loose formations, to more centralized architectures where relatively few robots move with tight formation control. An example of a more centralized architecture is cluster control, which has been developed by the Robotic Systems Laboratory at Santa Clara University to control formations on land, in the water, and in the air. Real-world testing of multirobot systems can be challenging; therefore, there is a need for a robust indoor testbed that is easy to use for experimental verification, reliability testing, and data analysis. To create the testbed, we optimized an OptiTrack motion capture system to track Crazyflie 2.1 microdrones within an enclosed 3D space. To achieve this, twenty-four infrared cameras were positioned around the 6 m by 3 m by 3 m workspace for precise tracking in 3D and thoroughly calibrated. The resulting position data had error margins below 2 cm. This real-time position data was then broadcast over ROS2 for closed-loop control of the microdrones via PID controllers. After a brief description of cluster control, a two-drone cluster definition is presented, and the forward and inverse kinematics as well as the inverse Jacobian are developed. In our paper, we will present OptiTrack position data, plots illustrating the flight performance of single drones, as well as results of two-drone cluster flights. Performance tests include hovering, step responses, and multi-waypoint navigation. Individual drone performance will be compared to cluster performance in order to address the differences the cluster control architecture provides. Additionally, we will include position error results and standard deviations of error, illustrating the successful implementation of PID tuning and cluster control. These results provide further support for the implementation of the cluster control architecture, which can then be used for future formation control experiments.
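A minimal per-axis PID controller of the kind used for closed-loop microdrone control from the motion-capture feed is sketched below; the gains and the update rate are illustrative assumptions, not the testbed's tuned values.

```python
class PID:
    """Single-axis PID controller; gains and timestep are illustrative."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One controller per axis; measurements would come from the OptiTrack/ROS2 feed.
pid_x = PID(kp=2.0, ki=0.1, kd=0.8, dt=0.01)          # assumed 100 Hz update
cmd_vx = pid_x.update(setpoint=1.5, measurement=1.42)  # waypoint x = 1.5 m
print(f"commanded x-velocity: {cmd_vx:.3f} m/s")
```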
Space programs face budget cuts and cancellations as their benefits may not justify their cost. In other words, their value (here: benefits minus cost) is insufficient or has not been identified (e.g., scientific gains, job creation). Defining the potential value of a space program is best addressed during its conception, i.e., the architecting phase. Space program architecting approaches from the literature do not explicitly consider the link between the system architecture and value delivery. We propose to systematically identify ways in which value is delivered by a space program architecture, drawing on proven value delivery mechanisms. Those proven value delivery mechanisms are captured in the form of value creation patterns. Patterns capture problem-solution knowledge for a specific context. They were first introduced in architecture and later popularized in object-oriented software engineering. They were further applied in systems architecting, and recently in space systems architecting. We first develop a conceptual data model of space programs to structure organizational and technical concepts relevant to space programs, and the relationships between them. This is grounded in the ECSS Glossary and the NASA Systems Engineering Handbook. We then build a database of preliminary value creation patterns in space programs. Examples include a “dual use” pattern that was sourced from a review of the Luxembourg space sector, where the context is that the country’s space policy seeks to employ space infrastructures for the benefit of sectors other than space. The problem there is how value can be created for other sectors by using space infrastructures. Factors influencing the solution include development cost and commonality. One solution is to develop systems for dual use in space and on Earth. An example is the Luxembourg company Maana Electric, which develops in-situ resource utilization (ISRU) appliances that can produce solar panels from sand on Earth and regolith on the Moon. Another example is the “diffusion” pattern. The context is a country with a non-space-related industrial base. The problem is how to advance the state of the art in that industrial base while contributing to space system development. As with the “dual use” pattern, a key factor is the architectural similarity between the terrestrial and space systems that are developed. The solution is to utilize the capabilities of that industrial base in the development of a space system. A historical example is the Canadian STEAR program, where the country’s robotics industrial capability was applied in the development of the ISS Mobile Servicing System. To explore many similar patterns, complementing manual search, we use a Large Language Model (LLM). This LLM is used to semantically search the NASA Technical Reports Server and the ESA Data Discovery Portal for patterns matching or resembling the preliminary value creation patterns. This approach precedes a trade space exploration in which space program architectures are designed using patterns, given a definition of value that may vary across different actors’ viewpoints.
Nowadays, successfully completing a strategic project is essential to ensuring an organization’s survival. This holds true for most companies whose goal is to sustain their operations and expand their market. The aerospace and aeronautical manufacturing sectors are undergoing a profound transformation driven by the accelerated integration of digital tools and innovative technologies. These changes are reshaping traditional project management practices, especially in the context of complex engineering projects. IT tools, planning management software, 3D printing, computer simulation, and AI are examples of technologies that reduce the need for human and capital resources while simultaneously increasing design efficiency. This research focuses on the relationship between the use of various technologies in project management (especially R&D, design, and continuous improvement projects), the methodologies employed, and the performance indicators that measure and control the factors defining project success. This research adopts a qualitative approach based on semi-structured interviews with 25 professionals from the aerospace and aeronautical sectors across Canada, the United States, and France. The participants include project managers, engineers, and digital transformation specialists from leading organizations in civil aviation, major aerospace firms, and key systems integrators. The data were analyzed using thematic coding to identify recurring patterns and insights. Core topics of the discussions included the use of digital technologies, the application of project management methodologies (traditional, agile, or hybrid), and the perceived impact of these elements on project performance. These themes served as the analytical backbone of the study and guided the interpretation of results. The findings indicate that the adoption of digital tools positively correlates with project performance - particularly in terms of schedule adherence, cost control, and risk mitigation - when such tools are embedded within a coherent project governance structure. Metrics such as the Schedule Performance Index (SPI) and Cost Performance Index (CPI) were commonly used to monitor progress. However, advanced technologies like virtual reality and digital twins are not yet widely deployed across the sector. In contrast, solutions such as 3D printing, computational simulation, and project management software are more broadly adopted and integrated into daily operations. Artificial intelligence (AI) is an emerging trend, showing strong potential, yet its adoption remains constrained by concerns over data sensitivity and a frequent reliance on in-house development. Moreover, the study reveals that the most effective outcomes are observed when organizations adopt a hybrid project management methodology - blending agile and traditional approaches - combined with simple, well-integrated digital tools tailored to the project context. This study contributes to a better understanding of the interplay between digital transformation and engineering project performance. It underscores the importance of adopting a systemic and strategic perspective when implementing digital solutions. The results offer actionable insights for decision-makers aiming to align technology investments with performance objectives. By shedding light on the enabling and limiting conditions of digital integration, this research helps bridge the gap between technological promise and practical impact in aerospace project environments.