AGENIUM SPACE (FR)
As part of the CORTEX project, AGENIUM Space develops a set of solutions that simplify high-performance Deep Neural Networks to make on-board AI analysis possible. AGENIUM Space reduces network complexity and optimizes DNN execution on on-board hardware (SoC FPGA). The company evaluates different solutions to prepare on-board analysis before launch, such as training models with very limited data sets (frugal networks) and monitoring model trustworthiness, in particular for hyperspectral missions.

Multiple applications require high-performance on-board data analysis provided by Deep Neural Networks. Without AGENIUM Space's simplification and porting, however, such AI-based analysis cannot be executed on on-board hardware. Satellite operators will use AGENIUM Space solutions for Deep Learning at the edge to speed up information delivery to their customers, without the latency of on-ground processing. Extracting target information on-board avoids downlinking useless data and reduces the data volume to transfer, decreasing mission costs. Satellite manufacturers need on-board processing to increase platform capacity, as storage memory is then occupied only by data deemed “interesting” after AI analysis. Hardware manufacturers need to provide their space clients with combined hardware and software solutions that include on-board DNN analysis. All of them impose strong requirements not only on power consumption but also on analysis reliability and on tight deadlines for mission preparation. Except for constellations composed of a single repeated platform, customers need a solution for training DNNs with very small datasets.

This project aims to reduce DNN complexity through model distillation and quantization, so that model inference can be executed within the limited resources of the on-board hardware while preserving the original accuracy of the models. Simplified models are ported to and optimized for specific hardware architectures present on-board (Xilinx SoC FPGA).
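To illustrate the distillation half of the approach, the following is a minimal sketch of the classic soft-target distillation loss (Hinton-style temperature-scaled KL divergence), which a compact student network would minimize against a large teacher's outputs. The function names, temperature value, and example logits are illustrative assumptions, not part of the AGENIUM Space workflow itself.

```python
import numpy as np

def softmax(z, T):
    """Temperature-scaled softmax over the last axis (numerically stable)."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, T=4.0):
    """Mean KL(teacher || student) on temperature-softened outputs,
    scaled by T**2 so gradients keep a comparable magnitude across T."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()

# Hypothetical per-class logits for two samples.
teacher = np.array([[8.0, 2.0, -1.0], [0.5, 6.0, 1.0]])  # large on-ground model
student = np.array([[5.0, 1.5, -0.5], [0.2, 4.0, 0.8]])  # compact on-board model
loss = distillation_loss(teacher, student)
print(loss >= 0.0)  # KL divergence is non-negative
```

In practice this term is combined with the ordinary cross-entropy on hard labels; the temperature softens the teacher's distribution so the student also learns from the relative probabilities of wrong classes.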
The AGENIUM Space simplification workflow is generic: it can be applied to the different DNN architectures our clients require. It delivers a high level of model simplification in terms of size and complexity reduction while roughly maintaining the initial model performance (precision, accuracy).
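The quantization half of such a workflow can be sketched with symmetric per-tensor post-training quantization to int8, the kind of reduction that lets weights fit FPGA fixed-point arithmetic. This is a generic textbook scheme under stated assumptions (random mock weights, per-tensor scale), not AGENIUM Space's proprietary method.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization: map float weights to int8
    using a single scale derived from the largest absolute value."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for accuracy checks."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=(64, 64)).astype(np.float32)  # mock layer weights
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Round-to-nearest bounds the per-weight error by half a quantization step.
max_err = float(np.max(np.abs(w - w_hat)))
print(q.dtype, max_err <= scale / 2 + 1e-8)
```

Storing int8 instead of float32 cuts weight memory by 4x before any distillation; the dequantization error bound is what allows the original accuracy to be approximately preserved.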