
The proliferation of space debris has become a matter of grave concern for new and existing orbiting assets, as it leads to potential collisions and losses. One approach to mitigating the impact of space debris is to use satellites equipped with robotic manipulation arms. Capturing debris with such a system involves maneuvering to a rendezvous orbit with a previously identified debris object, using a vision-based grasp planner to find an approach trajectory for the robotic arm, and finally controlling the attitude of the satellite along with the joints of the robotic arm and gripper to execute a successful grasp. Building upon our previous work on grasp planning for satellite-mounted manipulators, we propose a whole-body control strategy using reinforcement learning (RL) to control the combined satellite and robotic arm system after the orbit rendezvous stage. To execute debris capture in a closed-loop manner, we employ a lightweight vision-based grasp planner that allows for real-time visual servoing, together with tactile sensing on the robotic arm's end-effector to prevent the target object from slipping. Once trained, the reinforcement learning model can operate at lower computational cost than classical approaches such as model predictive control (MPC). We demonstrate the efficacy of this closed-loop RL-based controller for debris capture in a high-fidelity physics simulation environment and compare its performance with classical controllers.
A radio was designed to support simultaneous co-channel jamming and communications using an in-band Simultaneous Transmit and Receive (STAR) antenna array [1] followed by two layers of adaptive estimator/subtractors. These cancellers use a reference signal obtained by tapping off a small fraction of the transmitted jamming signal. The estimator/subtractors can suppress other locally transmitted signals if they are tethered to the receiver to provide reference signals. The output of the implemented local interference cancelling front-end resembles a four-channel array, and array interference-nulling techniques can be applied to the four-channel output to suppress untethered interference signals. Because of the preceding adaptive cancellers, the untethered interference nulling algorithm must not require array calibration. The received communication signal has a known format and contains known symbol sub-sequences that can be used as training data. The proposed algorithm for untethered interference cancellation with signal training data and without array calibration appeared as an intermediate result in [2]. The algorithm maximizes the output signal-to-interference-plus-noise ratio, using the training sequence to estimate Minimum Variance Distortionless Response (MVDR) type array weights without knowing the signal Angle-of-Arrival (AoA). Here we call it MVDR for Uncalibrated Arrays (MUA). This paper describes the receiver design and presents simulation results for the MUA algorithm applied to the four-channel output. It is shown that, with ideal self-interference cancelling, untethered interference nulling performance is good whenever the input Interference-to-Noise Ratio is high enough for the algorithm to implicitly estimate the array interference response, yet not so high that the algorithm cannot reliably detect the signal training data, provided the interference and signal AoAs are at least 45° apart.
Signal-to-Interference Ratio improvements as high as 50 dB have been observed when the untethered interference is strong. References: [1] K. E. Kolodziej, B. T. Perry, and J. S. Herd, "In-Band Full-Duplex Technology: Techniques and Systems Survey," IEEE Transactions on Microwave Theory and Techniques, vol. 67, no. 7, pp. 3025-3041, July 2019. [2] K. Forsythe, "Utilizing Waveform Features for Adaptive Beamforming and Direction Finding with Narrowband Signals," Lincoln Laboratory Journal, vol. 10, no. 2, pp. 99-125, 1997. DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited. This material is based upon work supported by the Department of the Air Force under Air Force Contract No. FA8702-15-D-0001 or FA8702-25-D-B002. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Department of the Air Force.
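The core of the MUA approach, estimating MVDR-type weights from the training sequence without array calibration or AoA knowledge, can be sketched with the sample-statistics Wiener/MMSE solution, which is proportional to the MVDR weights. The sketch below is illustrative only: the four-element half-wavelength array, the 10° and 60° arrival angles, and the power levels are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 2000, 4                       # training snapshots, array channels

# Hypothetical array responses; the algorithm itself never uses them,
# which is the point of operating without calibration or AoA knowledge.
a_sig = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(10)))
a_int = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(60)))

d = rng.choice([-1.0, 1.0], N)       # known training symbols
i = 10 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))  # strong untethered interference
n = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

x = np.outer(a_sig, d) + np.outer(a_int, i) + n   # four-channel received data

# Wiener/MMSE weights from the training sequence alone; these are
# proportional to the MVDR weights, so the output SINR is maximized.
R = x @ x.conj().T / N               # sample covariance
r = x @ d.conj() / N                 # cross-correlation with training symbols
w = np.linalg.solve(R, r)

y = w.conj() @ x                     # beamformer output
corr = abs(np.vdot(y, d)) / (np.linalg.norm(y) * np.linalg.norm(d))
print(f"correlation of output with training symbols: {corr:.3f}")
```

Because the weights come from the sample covariance and the training cross-correlation only, the array responses appear solely in the data simulation, mirroring the uncalibrated setting described above.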
Signals integration is used in digital communication systems with data fusion to enhance performance and reduce multipath effects. The two main approaches to signals integration in digital communication systems with data fusion are full and semi-full signals integration. In full signals integration systems, multiple receivers produce a very large number of bits, and the entire signals integration system closely resembles analog multiple-receiver implementations. This approach achieves optimum performance at the expense of high cost and complexity. In semi-full signals integration systems, only a few bits are used after preliminary processing of the signals at each individual receiver. This method can reduce system complexity and cost at the expense of some overall performance degradation. This paper provides a performance analysis of the full and semi-full signals integration approaches in digital communication systems for the case of non-coherent frequency shift keying (NCFSK) receivers with Gaussian noise and a Rician fading stochastic model. The performance loss due to semi-full signals integration is analyzed for different numbers of information bits.
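The trade-off between full and semi-full integration can be illustrated with a small Monte-Carlo sketch that fuses soft NCFSK decision statistics from several receivers, either at full precision or after quantizing each branch to a few bits. The fusion rule, the uniform quantizer, the per-branch SNR, and the Rician K-factor below are all assumptions chosen for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
L, N = 4, 200_000            # fusion branches (receivers), simulated bits
K = 5.0                      # hypothetical Rician K-factor
snr = 10 ** (6 / 10)         # assumed 6 dB average SNR per branch

bits = rng.integers(0, 2, N)
# Independent Rician fading per branch: line-of-sight plus scattered part
h = (np.sqrt(K / (K + 1))
     + np.sqrt(1 / (2 * (K + 1))) * (rng.standard_normal((L, N))
                                     + 1j * rng.standard_normal((L, N))))
noise = lambda: (rng.standard_normal((L, N))
                 + 1j * rng.standard_normal((L, N))) / np.sqrt(2)

# Squared envelopes of the two NCFSK tone filters at each receiver
e_mark  = np.abs(np.sqrt(snr) * h * bits       + noise()) ** 2
e_space = np.abs(np.sqrt(snr) * h * (1 - bits) + noise()) ** 2
stat = e_mark - e_space      # per-branch soft decision statistic

def ber(fused):              # fused statistic > 0 -> decide "1"
    return np.mean((fused > 0).astype(int) != bits)

ber_full = ber(stat.sum(axis=0))        # full integration: no quantization
ber_q = {}
for b in (1, 2, 4):                     # semi-full: b bits per branch
    step = snr / 2 ** (b - 1)
    q = np.clip(np.round(stat / step), -2 ** (b - 1), 2 ** (b - 1))
    ber_q[b] = ber(q.sum(axis=0))
    print(f"{b}-bit fusion BER: {ber_q[b]:.4f}  (full: {ber_full:.4f})")
```

Running the sketch shows coarse per-branch quantization degrading the fused bit error rate relative to full-precision fusion, with the gap closing as the number of bits grows.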
In recent years, multi-agent reinforcement learning (MARL) has emerged as a promising approach for multi-unmanned combat aerial vehicle (UCAV) autonomous countermeasure systems. However, conventional end-to-end MARL methods often lack expert knowledge guidance, leading to low training efficiency, which poses challenges for simulation-to-reality (sim2real) transfer. In this study, we focus on the scenario of cooperative beyond-visual-range (BVR) aerial engagement involving multiple UCAVs. To address these challenges, we propose the hybrid-constrained multi-agent proximal policy optimization (HC-MAPPO) algorithm. First, we design a rule filter mechanism, where expert rules dictate agent behavior in well-understood states to ensure predictable and interpretable maneuvers, while the neural policy is applied otherwise. Second, we formulate the multi-agent aerial combat problem as a Constrained Markov Decision Process (CMDP) and incorporate a cost-critic network into the actor-critic architecture, which enables explicit estimation of long-term constraint costs and decouples penalties from task rewards. Third, we develop a bilevel optimization framework for constrained policy search, which provides theoretical convergence guarantees and demonstrates improved training stability over traditional Lagrangian-based methods. Empirical results demonstrate that HC-MAPPO achieves a superior success rate, improving the win rate by approximately 20-30% compared to existing MARL baselines such as MAPPO and HAPPO. Ablation studies further confirm the necessity of both constraints: removing either one leads to performance degradation.
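The rule filter mechanism, where expert rules dictate behavior in well-understood states and the learned policy acts otherwise, can be sketched as follows. The specific rules, state fields, and action names here are invented for illustration; they stand in for the BVR expert knowledge and the trained MAPPO actor.

```python
import random

def expert_rule(state):
    """Hypothetical hand-crafted rules for well-understood BVR states."""
    if state["missile_inbound"]:
        return "evade"                 # defensive maneuver takes priority
    if state["fuel"] < 0.15:
        return "return_to_base"        # low fuel overrides the policy
    return None                        # no rule fires -> defer to the policy

def neural_policy(state):
    # Stand-in for the trained neural actor network
    return random.choice(["pursue", "climb", "fire"])

def rule_filtered_action(state):
    """Rule filter: expert rules dictate behavior where they apply,
    ensuring predictable maneuvers; the learned policy acts otherwise."""
    action = expert_rule(state)
    return action if action is not None else neural_policy(state)

print(rule_filtered_action({"missile_inbound": True, "fuel": 0.8}))  # prints "evade"
```

The filter keeps the interpretable rules outside the network, so the learned policy only has to cover the states the rules do not.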
Accurate and robust pose estimation of non-cooperative spacecraft is critical for autonomous rendezvous and on-orbit servicing. While monocular vision-based methods have attracted growing interest owing to their low cost and structural simplicity, achieving high-precision pose estimation under large scale variations in target distance and complex illumination conditions remains a formidable challenge. In this paper, we propose a novel dual-path prediction network reinforced with a geometric consistency constraint to address these issues. Our framework features two distinct yet complementary pathways. The first path employs a feature pyramid network to extract multi-resolution representations, from which stable keypoints are detected and subsequently integrated with a PnP solver, thereby enabling accurate pose estimation across targets with large scale variations. The second path employs an adaptive-weighted feature pyramid network augmented with a spatial self-attention module to effectively fuse multi-scale information and strengthen global contextual reasoning. Its output is processed by two direct regression heads for rotation and translation, hence improving accuracy and robustness under occlusion and degraded geometric conditions. To ensure coherence between the two pathways, we further introduce a geometric consistency loss that enforces alignment of their outputs during training, thereby improving stability and generalization. Experimental results on the SPEED and SwissCube datasets demonstrate that our framework achieves substantial improvements over existing methods, particularly under extreme conditions.
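One plausible form of such a geometric consistency loss, penalizing disagreement between the keypoint/PnP pose from the first path and the directly regressed pose from the second, is sketched below. The quaternion-plus-translation formulation and the equal weighting of the two terms are assumptions for illustration; the loss actually used in the paper may differ.

```python
import numpy as np

def geometric_consistency_loss(q_pnp, t_pnp, q_reg, t_reg):
    """Penalize disagreement between the PnP pose (path 1) and the
    regressed pose (path 2). Quaternions are unit-normalized, and
    q and -q encode the same rotation, hence the abs() on the dot."""
    q1 = q_pnp / np.linalg.norm(q_pnp)
    q2 = q_reg / np.linalg.norm(q_reg)
    rot_term = 1.0 - abs(float(np.dot(q1, q2)))    # 0 when rotations agree
    trans_term = float(np.linalg.norm(t_pnp - t_reg))
    return rot_term + trans_term

q = np.array([0.0, 0.0, 0.0, 1.0])
t = np.array([0.1, -0.2, 5.0])
print(geometric_consistency_loss(q, t, -q, t))     # same pose -> 0.0
```

During training, a term like this would be added to the keypoint and regression losses so that gradients pull the two pathways toward a common pose estimate.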
Fatigue crack growth under high-cycle fatigue is one of the most severe problems in the design, maintenance, and safe operation of aircraft structures. During operation, these structures experience millions of loading cycles, which cause the gradual growth of nucleated cracks leading to ultimate failure. Thus, accurate modeling of fatigue crack growth is necessary for ensuring structural integrity, maximizing inspection intervals, and extending the service life of aerospace structures. Conventional methods of modeling crack growth under high-cycle fatigue use linear elastic fracture mechanics to arrive at the cyclic stress intensity factor, which is then used in the Paris law describing steady-state crack growth. The Paris law is highly non-linear and contains two constants, C and m, under fully reversed loading. These parameters are evaluated using Euler integration of the Paris law and linear regression of scattered crack growth measurements from standard tests. However, a lack of constraints during the calibration can render the parameter estimates inaccurate, particularly when the data are significantly scattered. To address this limitation, Physics-Informed Machine Learning (PIML) architectures are employed to calibrate the parameters of the Paris law. Before utilizing this calibration approach, however, the accuracy of Physics-Informed Neural Networks (PINNs) in integrating the Paris law was tested. To this end, the predictions from the Physics-Infused Long Short-Term Memory (PI-LSTM) and Implicit Euler Transfer Learning (IETL) architectures were compared to Euler integration, and reasonable agreement was obtained. Following this, these methods were applied to obtain the parameters from numerically generated data using assumed C and m values. It was observed that the method was not only able to calibrate the parameters, but also that the network could be used to predict crack growth when the load amplitudes were modified.
Finally, scattered data were artificially generated by sampling distributions of C and m. Subsequently, IETL was applied to the scattered data to calibrate the parameters, and satisfactory agreement was obtained. In summary, this study exemplifies the merits and demerits of different PIML methods when applied to predict crack growth from the Paris law. Furthermore, the approach allows both the crack growth evolution and the Paris constants to be predicted from limited experimental data, thereby reducing the need for repeated costly tests across different loading cases. Finally, the reliability of the PIML framework in predicting crack growth for various amplitudes and block loading is demonstrated.
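The conventional baseline described above, explicit Euler integration of the Paris law da/dN = C(ΔK)^m with ΔK = YΔσ√(πa), can be sketched in a few lines. The C, m, stress range, geometry factor, and initial crack size below are illustrative values only, not calibrated parameters from the study.

```python
import numpy as np

# Paris law: da/dN = C * (dK)^m, with dK = dS * sqrt(pi * a) * Y.
# Explicit Euler integration of crack length a over loading cycles.
C, m = 1e-11, 3.0          # assumed Paris constants (MPa*sqrt(m) units)
dS, Y = 100.0, 1.0         # cyclic stress range [MPa], geometry factor
a0 = 0.001                 # initial crack length [m]
dN = 1000                  # cycles per Euler step

a = a0
history = [a]
for _ in range(500):
    dK = dS * np.sqrt(np.pi * a) * Y        # cyclic stress intensity factor
    a += C * dK ** m * dN                   # Euler step: da = (da/dN) * dN
    history.append(a)

print(f"crack grew from {a0 * 1e3:.2f} mm to {history[-1] * 1e3:.2f} mm "
      f"over {500 * dN} cycles")
```

A PIML calibration would treat C and m as trainable parameters and fit this same integration to (scattered) crack-length measurements, which is the role the PINN, PI-LSTM, and IETL architectures play in the study.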
Space programs face budget cuts and cancellations when their benefits may not justify their cost. In other words, their value (here: benefits minus cost) is insufficient or has not been identified (e.g., scientific gains, job creation). Defining the potential value of a space program is best addressed during its conception, i.e., the architecting phase. Space program architecting approaches from the literature do not explicitly consider the link between the system architecture and value delivery. We propose to systematically identify ways in which value is delivered by a space program architecture, drawing on proven value delivery mechanisms. These proven value delivery mechanisms are captured in the form of value creation patterns. Patterns capture problem-solution knowledge for a specific context. They were first introduced in architecture and later popularized in object-oriented software engineering. They were further applied in systems architecting, and recently in space systems architecting. We first develop a conceptual data model of space programs to structure organizational and technical concepts relevant to space programs, and the relationships between them. This is grounded in the ECSS Glossary and the NASA Systems Engineering Handbook. We then build a database of preliminary value creation patterns in space programs. Examples include a "dual use" pattern that was sourced from a review of the Luxembourg space sector, where the context is that the country's space policy seeks to employ space infrastructures for the benefit of sectors other than space. The problem there is how value can be created for other sectors by using space infrastructures. Factors influencing the solution include development cost and commonality. One solution is to develop systems for dual use in space and on Earth. An example is the Luxembourg company Maana Electric, which develops in-situ resource utilization (ISRU) appliances that can produce solar panels from sand on Earth and from regolith on the Moon.
Another example is the "diffusion" pattern. The context is a country with a non-space-related industrial base. The problem is how to advance the state of the art in that industrial base while contributing to space system development. Similarly to the "dual use" pattern, a key factor is the architectural similarity between the terrestrial and space systems that are developed. The solution is to utilize the capabilities of that industrial base in the development of a space system. A historical example is the Canadian STEAR program, where the country's robotics industrial capability was applied in the development of the ISS Mobile Servicing System. To explore many similar patterns, complementing manual search, we use a Large Language Model (LLM) to semantically search the NASA Technical Reports Server and the ESA Data Discovery Portal for patterns matching or resembling the preliminary value creation patterns. This approach precedes a trade space exploration in which space program architectures are designed using patterns, given a definition of value that may vary across different actors' viewpoints.
Nowadays, successfully completing a strategic project is essential to ensuring an organization's survival. This holds true for most companies whose goal is to sustain their operations and expand their market. The aerospace and aeronautical manufacturing sectors are undergoing a profound transformation driven by the accelerated integration of digital tools and innovative technologies. These changes are reshaping traditional project management practices, especially in the context of complex engineering projects. IT tools, planning management software, 3D printing, computer simulation, and AI are examples of technologies that reduce the need for human and capital resources while simultaneously increasing design efficiency. This research focuses on the relationship between the use of various technologies in project management (especially R&D, design, and continuous improvement projects), the methodologies employed, and the performance indicators that measure and control the factors defining project success. This research adopts a qualitative approach based on semi-structured interviews with 25 professionals from the aerospace and aeronautical sectors across Canada, the United States, and France. The participants include project managers, engineers, and digital transformation specialists from leading organizations in civil aviation, major aerospace firms, and key systems integrators. The data were analyzed using thematic coding to identify recurring patterns and insights. Core topics of the discussions included the use of digital technologies, the application of project management methodologies (traditional, agile, or hybrid), and the perceived impact of these elements on project performance. These themes served as the analytical backbone of the study and guided the interpretation of results.
The findings indicate that the adoption of digital tools positively correlates with project performance - particularly in terms of schedule adherence, cost control, and risk mitigation - when such tools are embedded within a coherent project governance structure. Metrics such as the Schedule Performance Index (SPI) and Cost Performance Index (CPI) were commonly used to monitor progress. However, advanced technologies like virtual reality and digital twins are not yet widely deployed across the sector. In contrast, solutions such as 3D printing, computational simulation, and project management software are more broadly adopted and integrated into daily operations. Artificial intelligence (AI) is an emerging trend, showing strong potential, yet its adoption remains constrained due to concerns over data sensitivity and a frequent reliance on in-house development. Moreover, the study reveals that the most effective outcomes are observed when organizations adopt a hybrid project management methodology - blending agile and traditional approaches - combined with simple, well-integrated digital tools tailored to the project context. This study contributes to a better understanding of the interplay between digital transformation and engineering project performance. It underscores the importance of adopting a systemic and strategic perspective when implementing digital solutions. The results offer actionable insights for decision-makers aiming to align technology investments with performance objectives. By shedding light on the enabling and limiting conditions of digital integration, this research helps bridge the gap between technological promise and practical impact in aerospace project environments.