
“Alexa, land the plane!” – On the way to explainable and certifiable AI for aviation

Why does AI decide the way it does?

You probably know the debate about autonomous driving from the automotive industry. Safety issues and borderline cases are widely discussed in public, because the behavior of AI there does not seem predictable. These are exactly the questions that so-called Explainable AI, or XAI, addresses. XAI is particularly relevant when AI has to make safety-critical decisions, for example when it is supposed to land an aircraft autonomously.

Why are AI decisions not understandable?

Deep neural networks are currently achieving one success after another. They are complex mathematical structures with, in some cases, more than a billion parameters. Unlike conventional algorithms, they are not programmed but trained with large amounts of sample data. For problems such as recognizing objects in images or understanding language, they achieve high accuracy. Because of this complexity, however, it is only partly comprehensible to humans how a deep neural network arrived at a particular decision. Such models are therefore often referred to as a “black box”, because in many cases it is not known exactly what happens inside them.
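To give a sense of why such networks are hard to read, here is a minimal, purely illustrative Python/PyTorch sketch (not any specific aviation system): even a deliberately tiny classifier already has hundreds of learned parameters, a modern vision network has millions, and printing those numbers reveals nothing about the rules the network has learned.

```python
# Illustrative only: a deliberately tiny image classifier.
# Real networks used for vision tasks have millions to billions of parameters.
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learned image filters
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),                            # e.g. "suitable" vs. "unsuitable"
)

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params} learned parameters")          # already ~500 for this toy model

# The parameters themselves are just numbers; inspecting them does not explain
# the network's decisions -- this is the "black box" problem XAI addresses.
print(model[0].weight.shape)                     # torch.Size([16, 3, 3, 3])
```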

How does XAI work?

Various methods from the field of XAI can shed some light into this black box. Using a so-called heat map, one can mark the regions in an image that contributed most to the neural network’s decision. In this way, a human can tell which image features the AI paid attention to, or whether, for example, it accidentally based its decision on the image caption. Another option is to use a model that is interpretable from the outset, such as a logistic regression, whose results are directly comprehensible. However, interpretable models are often less accurate, so one has to weigh which is more important for the application at hand.
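As a rough idea of how such a heat map can be computed, the following Python/PyTorch sketch uses a simple gradient-based saliency map on an off-the-shelf pretrained classifier. The image file name is a placeholder, and practical XAI work typically relies on more robust variants of this idea (for example Grad-CAM or layer-wise relevance propagation), but the basic question is the same.

```python
# Sketch of a gradient-based "heat map" (saliency map) for an image classifier.
# Model and image are placeholders for illustration only.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("landing_site.jpg").convert("RGB")   # placeholder file name
x = preprocess(image).unsqueeze(0)                      # shape (1, 3, 224, 224)
x.requires_grad_(True)

scores = model(x)                                       # class scores
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()                         # d(score) / d(pixels)

# Collapse the colour channels: pixels with large absolute gradients had a
# strong influence on the decision -- the "hot" regions of the heat map.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)    # shape (224, 224)
print(saliency.shape)
```

The resulting map can be overlaid on the input image; bright regions indicate pixels that strongly influenced the network’s decision. With an interpretable model such as a logistic regression, by contrast, the learned coefficients can be read off directly, so no such extra step is needed.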

Why is XAI needed in aviation?

If a “pilot AI” is to land aircraft in the future, it must pass a stringent certification process. EASA is currently developing initial guidance documents on how this could work and what to pay attention to; the first document, covering AI in assistance functions, was published in April 2021. EUROCAE is also working on this topic in Working Group 114 and has already published a first document analyzing the gap between AI methods and existing aviation standards. The explainability of AI plays a major role in this certification process and can pave the way for the use of AI in aircraft.

What is ZAL doing in this area?

In June 2021, ZAL GmbH started a research project on XAI in aviation. The goal is to develop XAI methods on concrete aviation use cases that will enable certification.

The ZAL experts will carry out a methodical, so-called structural inspection of neural networks. This means, for example, that the AI team visualizes which image regions are particularly relevant for the AI’s decisions.

One use case the team is working on is a program that finds suitable landing sites for drones, both for regular operation and for the event that the drone has to make an emergency landing. To do this, the experts evaluate images from an onboard camera with the help of AI and then examine what prompted the AI to assess a location as suitable.
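To illustrate the kind of analysis described above, the following Python sketch shows occlusion sensitivity applied to a hypothetical landing-site classifier. This is not ZAL’s actual implementation; the model, patch size, and class index are assumptions. The idea is to cover parts of the camera image and observe how strongly the “suitable” score drops, revealing which regions drove the assessment.

```python
# Hypothetical illustration, not ZAL's actual system: occlusion sensitivity
# for a landing-site classifier. A grey square is slid across the camera
# image; wherever covering the image strongly lowers the "suitable" score,
# that region mattered for the AI's assessment.
import torch

def occlusion_map(model, image, target_class, patch=32, stride=16):
    """image: tensor of shape (1, 3, H, W); returns a coarse relevance map."""
    model.eval()
    with torch.no_grad():
        base_score = model(image)[0, target_class].item()
        _, _, height, width = image.shape
        rows = (height - patch) // stride + 1
        cols = (width - patch) // stride + 1
        relevance = torch.zeros(rows, cols)
        for i in range(rows):
            for j in range(cols):
                occluded = image.clone()
                y, x = i * stride, j * stride
                occluded[:, :, y:y + patch, x:x + patch] = 0.5  # grey patch
                score = model(occluded)[0, target_class].item()
                relevance[i, j] = base_score - score  # drop in confidence
    return relevance  # upsample and overlay on the image for inspection
```

Such a map makes it possible to check whether the network reacts to the properties one would expect, for instance a flat, obstacle-free surface, rather than to irrelevant cues in the image.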

Do you have any questions or feedback on this topic? Then we look forward to hearing from you.