In aviation, there are high hopes for AI. Potential applications range from runway recognition to the autopilot of the future. Other applications could be the planning of consumption-optimized flight routes or visual quality control during aircraft assembly. However, before AI can be applied in aviation, procedures must be developed to certify AI systems and thus prove their safety.
A key component of certification is the explainability of AI algorithms: the decisions made by the AI must be comprehensible to humans. Explanations addressed to users enable them to check the AI's decisions for plausibility and to better understand the system.
Learning to identify landing sites for drones
In the VeriKAS project (LuFo VI, funded by the German Federal Ministry for Economic Affairs and Climate Action), ZAL GmbH and other partners address the question of how AI systems can be made certifiable and what role explanations can play in this. In the project, two use cases are developed in which AI supports humans in decision-making.
The first use case is the automatic detection of possible landing sites for a drone in an urban area. For this purpose, the aircraft is equipped with a camera, and the captured images are evaluated by a neural network. The drone operator is then shown a map of possible landing sites. In addition, each possible landing site comes with a compact explanation of which criteria were relevant for its evaluation, for example the flatness of the ground or whether the surface is a roof or a green area. Based on this information, the drone operator selects a landing point, and the drone then performs the landing maneuver using a (conventional) autopilot. For training the neural networks, a simulation environment is used that contains a 3D model of roughly half a square kilometer of Hamburg. In the simulation, the drone repeatedly performs landings, receives feedback on how good the selected landing point was, and thus learns to identify good landing sites over time.
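The learn-from-feedback loop described above can be illustrated with a deliberately simplified sketch. The feature names (flatness, green area, roof), the reward model, and the linear scoring function below are illustrative assumptions, not the project's actual setup; a real system would use a neural network on camera images. The sketch also shows how per-feature contributions can serve as a compact explanation of a site's score.

```python
import random

# Illustrative features a landing-site evaluator might use (assumed, not
# taken from the VeriKAS project).
FEATURES = ["flatness", "is_green_area", "is_roof"]

def make_candidate(rng):
    """Generate a random candidate site with simple toy features."""
    return {
        "flatness": rng.random(),                 # 0 = steep, 1 = perfectly flat
        "is_green_area": float(rng.random() < 0.5),
        "is_roof": float(rng.random() < 0.3),
    }

def simulated_reward(site):
    """Toy simulation feedback: flat green areas are good, roofs are penalized."""
    return 2.0 * site["flatness"] + 1.0 * site["is_green_area"] - 1.5 * site["is_roof"]

def score(weights, site):
    """Linear score standing in for the neural network's evaluation."""
    return sum(weights[f] * site[f] for f in FEATURES)

def explain(weights, site):
    """Per-feature contributions: a minimal form of the compact explanation
    shown to the operator."""
    return {f: weights[f] * site[f] for f in FEATURES}

def train(episodes=5000, lr=0.01, seed=0):
    """Repeat simulated landings and nudge the weights toward the feedback."""
    rng = random.Random(seed)
    weights = {f: 0.0 for f in FEATURES}
    for _ in range(episodes):
        site = make_candidate(rng)
        error = simulated_reward(site) - score(weights, site)
        for f in FEATURES:                        # gradient step on squared error
            weights[f] += lr * error * site[f]
    return weights

weights = train()
```

After training, the learned weights reflect the simulated feedback (flatness and green areas score positively, roofs negatively), and `explain(weights, site)` breaks a site's score down into the criteria that produced it.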
Detecting foreign objects
The second use case is located in the aircraft production process. During aircraft assembly, unwanted objects, so-called "foreign object debris" (FOD), are sometimes forgotten in the aircraft and can later lead to damage; tools are one example. For this use case, an AI is being developed that analyzes camera images of the assembly process and provides the production worker with information about where FOD is located. In addition, a comparison with known objects will indicate which type of FOD is involved.
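The "comparison with known objects" step could, in principle, be a nearest-neighbour match of a detected object's feature vector against a catalog of known FOD types. The catalog entries, feature values, and distance threshold below are purely illustrative assumptions; in practice the feature vectors would come from a neural network applied to the camera images.

```python
import math

# Hypothetical catalog of known FOD types with toy 3-dimensional feature
# vectors (assumed for illustration; real features would be learned).
KNOWN_FOD = {
    "screwdriver": [0.9, 0.1, 0.8],
    "rivet":       [0.2, 0.9, 0.1],
    "glove":       [0.1, 0.2, 0.2],
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_fod(features, catalog=KNOWN_FOD, threshold=0.5):
    """Return the closest known FOD type, or 'unknown' if nothing is near
    enough. The threshold keeps novel objects from being mislabeled."""
    best_name, best_dist = None, float("inf")
    for name, ref in catalog.items():
        d = distance(features, ref)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else "unknown"
```

A vector close to a catalog entry is labeled with that entry's name; anything far from all known entries is reported as "unknown", which is exactly the case a production worker would want flagged for inspection.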
Based on the two use cases, a certification process will be developed that demonstrates the safety of AI applications and thus paves the way for the autopilot of the future.
What do you think about AI in aviation? Share your opinion with us!
Dr. Felix Berteloot
Deputy Head of Automation
+49 40 248 595 143