Do you want to equip your new research project with artificial intelligence? Would you like to experiment with AI? Then you have come to the right place. Our AI Aviation Lab invites you to experiment: various tools for developing AI prototypes are just waiting to be tried out. And should questions arise, our AI team will be happy to assist you. This applies especially to our new ARTIST software. The Artificial Intelligence Software Toolbox (ARTIST) is the laboratory’s new operating system. Its modules combine various functions and devices while remaining easy to use. Our ARTIST experts will be happy to show you what the program’s seven modules can do.
The first of these is the Labeling Tool. With it you can spatially delimit objects in images with a frame, the so-called bounding box, and assign them to a class. This significantly reduces the time needed for labeling. Labeling itself is a manual process that can hardly be automated, so we are looking for efficient ways to circumvent it. One possibility is to use 3D modeling software to generate images of synthetic objects. Our Blender Code automates the important steps and generates any number of images of an object with random textures and backgrounds at the push of a button. Because textures and backgrounds vary from image to image, this is called Domain Randomization, and it is very useful for training a robust AI model. Manual labeling is also unnecessary here, because the Blender Code automatically creates the matching bounding box for each object.
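The idea behind the Blender Code can be sketched in plain Python. The example below deliberately does not use the real Blender API: the image itself is left abstract, and all names (classes, textures, backgrounds) are hypothetical. What it shows is the key point: because the generator places the object itself, the bounding-box label comes for free.

```python
import random

def make_synthetic_sample(image_w=640, image_h=480, object_class="valve"):
    """Place one object at a random position and scale on a random
    background, and return the matching bounding-box label for free."""
    # Domain Randomization: vary texture and background per sample.
    texture = random.choice(["metal", "plastic", "carbon", "noise"])
    background = random.choice(["hangar", "sky", "workbench", "gradient"])

    # Random object size and position, kept fully inside the frame.
    w = random.randint(40, image_w // 2)
    h = random.randint(40, image_h // 2)
    x = random.randint(0, image_w - w)
    y = random.randint(0, image_h - h)

    return {"class": object_class,
            "bbox": (x, y, x + w, y + h),   # (x_min, y_min, x_max, y_max)
            "texture": texture,
            "background": background}

# Generate any number of labeled samples "at the push of a button".
dataset = [make_synthetic_sample() for _ in range(100)]
```

In the real lab setup, rendering and texture swapping happen inside Blender; only the labeling logic is the same.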
The labeled data can now be used to train a neural network, which then classifies images and recognizes objects in them on its own. For this purpose we have developed our own Training Pipeline. It contains useful scripts for training neural networks for object recognition, and you can choose between several AI frameworks such as TensorFlow and PyTorch. Much of our experts’ experience and know-how in training neural networks has been incorporated into the Training Pipeline.
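To illustrate what the Training Pipeline scripts automate, here is a framework-free sketch of a training loop. The real pipeline delegates this to TensorFlow or PyTorch; this toy version fits a single weight w so that the prediction w * x matches the label y = 2 * x, but the structure (forward pass, gradient, update, repeated over epochs) is the same.

```python
# Toy labeled samples: inputs x with labels y = 2 * x.
data = [(x, 2.0 * x) for x in range(1, 6)]

w = 0.0     # model parameter, zero-initialized
lr = 0.01   # learning rate, a typical tunable hyperparameter

for epoch in range(200):
    for x, y in data:
        pred = w * x                  # forward pass
        grad = 2 * (pred - y) * x     # gradient of squared error w.r.t. w
        w -= lr * grad                # gradient-descent update

print(round(w, 3))  # prints 2.0
```

A real object-recognition network has millions of parameters instead of one, which is exactly why the pipeline hands the loop to an AI framework.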
With the tools mentioned above you can create an AI model. To use this model, for example to control an intelligent robot with the help of a camera, you need four additional modules. Since we use the meta operating system Robot Operating System (ROS) for data communication between the sensors, the robot and the computing cluster, we call these modules nodes (communication partners in the network). The task of the Camera Node is to send images at regular intervals; various parameters such as frequency, image depth and image size can be set. The Inference Node processes the received images and applies the previously trained neural network to them. The resulting predictions are sent to the Logics Node, which filters the data and prepares it for sending to the robot.
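The flow through these three nodes can be sketched in plain Python. The sketch below does not use the real ROS API; simple queues stand in for ROS topics, and the message fields and the "pick" command are hypothetical. It only shows how data moves from Camera Node to Inference Node to Logics Node.

```python
from queue import Queue

camera_topic = Queue()       # Camera Node -> Inference Node
prediction_topic = Queue()   # Inference Node -> Logics Node

def camera_node(n_frames=3):
    """Publish images at regular intervals (frame id stands in for pixels)."""
    for frame_id in range(n_frames):
        camera_topic.put({"frame": frame_id, "width": 640, "height": 480})

def inference_node():
    """Apply the (here: dummy) trained network to each received image."""
    while not camera_topic.empty():
        img = camera_topic.get()
        # Hypothetical model output: class label plus confidence score.
        prediction_topic.put({"frame": img["frame"],
                              "class": "screw", "score": 0.91})

def logics_node(min_score=0.5):
    """Filter predictions and prepare commands for the robot interface."""
    commands = []
    while not prediction_topic.empty():
        pred = prediction_topic.get()
        if pred["score"] >= min_score:   # drop low-confidence detections
            commands.append(("pick", pred["class"], pred["frame"]))
    return commands

camera_node()
inference_node()
commands = logics_node()
```

In the lab, each of these functions runs as a separate ROS node, and the queues are replaced by ROS topics over the network.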
Finally, the Robot Safety Node comes into play. It forms the direct interface to the robot controller and is therefore particularly important for safety. To prevent the robot from carrying out unauthorized actions (such as ramming a table), incoming instructions are first checked and restricted to an allowed spatial working area. Furthermore, the Robot Safety Node runs in a part of the ROS network that cannot be reached from outside and can only interact with the Logics Node, which increases safety even further.
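The core check of such a safety node can be sketched as follows. This is a hypothetical minimal version, not the lab's actual implementation: the allowed working area is modeled as an axis-aligned box (values in metres are made up), and every commanded target is clamped into it before it may reach the robot controller.

```python
# Hypothetical allowed working volume per axis, in metres.
WORKSPACE = {"x": (-0.5, 0.5), "y": (-0.4, 0.4), "z": (0.05, 0.6)}

def restrict_to_workspace(target):
    """Return the commanded (x, y, z) target clamped into WORKSPACE."""
    safe = {}
    for axis, value in target.items():
        lo, hi = WORKSPACE[axis]
        safe[axis] = min(max(value, lo), hi)
    return safe

# A command that would ram the table (z below the allowed minimum)
# is restricted to the working area instead of being passed through.
cmd = restrict_to_workspace({"x": 0.2, "y": -0.9, "z": -0.1})
```

Clamping (rather than rejecting) is one possible design choice; a stricter safety node might refuse out-of-bounds commands entirely.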
You see, being an ARTIST dramatically facilitates work in the AI Aviation Lab. See for yourself: in addition to concrete research projects, you might want to use our lab for workshops or hackathons. You want to learn more? Please contact us: .