Image recognition system can classify, annotate, and explain how it decides


Explainable AI—when an artificial intelligence produces an explanation of a decision along with the decision itself—is a requirement for certain applications. CEA-List, a CEA Tech institute, recently tackled this major issue in trusted AI by developing a new machine learning module that can classify images and annotate objects. The module has been integrated into the ExpressIF® AI platform.

Published on 3 November 2020

Increasingly, artificial intelligence is used to make decisions that affect our day-to-day lives. In this context, trust is vital, and to trust an AI's decision, you need to know how the AI arrived at it. In PhD research co-supervised by the CentraleSupélec MICS Lab (which studies mathematics and computing for complexity and systems), CEA-List developed a new machine learning module that can classify images, annotate objects, and generate an explanation at the same time. This module has been integrated into ExpressIF®, CEA-List's symbolic AI platform.

Here's how it works. First, a neural network "understands" an image by identifying specific areas that correspond to different objects in the image. The new algorithms integrated into ExpressIF® then take over, identifying the objects according to their relative positions, and then annotating them. What makes the algorithms so powerful is that they can learn to identify objects error-free from just a few (fewer than ten) images. The initial tests—carried out on abdominal MRI images—were encouraging. Not only was ExpressIF® able to automatically annotate the organs pictured, but it was also able to justify its annotations with an explanation produced in natural language. This is a clear step toward helping doctors interpret their patients' scans.
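To make the two-stage idea concrete, here is a minimal Python sketch of a rule-based annotation step running on top of neural segmentation output. It is emphatically not the ExpressIF® API, which is proprietary: the Region class, the RULES table, the annotate function, and every threshold in it are hypothetical, invented purely for illustration, and the real platform learns its rules from a handful of annotated images rather than having them written by hand.

```python
# Illustrative sketch only. All names and thresholds are hypothetical;
# this is not the ExpressIF(R) API.
from dataclasses import dataclass


@dataclass
class Region:
    """A candidate object produced by the upstream segmentation network."""
    id: int
    cx: float    # centroid x, normalized to [0, 1]
    cy: float    # centroid y, normalized to [0, 1]
    area: float  # fraction of the image covered by the region

# Hand-written spatial rules in the spirit of a symbolic annotator.
# Note: by radiological convention, the patient's right side appears
# on the left of the image. Each rule pairs a predicate with a
# human-readable justification, so the explanation comes for free.
RULES = [
    ("liver",  lambda r: r.cx < 0.45 and r.area > 0.10,
     "it is the largest structure on the patient's right side"),
    ("spleen", lambda r: r.cx > 0.65 and r.area < 0.10,
     "it is a small structure on the patient's left side"),
    ("spine",  lambda r: 0.40 <= r.cx <= 0.60 and r.cy > 0.6,
     "it sits on the posterior midline"),
]


def annotate(regions):
    """Label each region and return (id, label, explanation) triples."""
    results = []
    for r in regions:
        for name, predicate, reason in RULES:
            if predicate(r):
                results.append(
                    (r.id, name, f"Region {r.id} is the {name} because {reason}."))
                break
        else:  # no rule fired for this region
            results.append((r.id, "unknown", f"Region {r.id} matched no rule."))
    return results


if __name__ == "__main__":
    # Mock output of the neural segmentation step for one abdominal slice.
    regions = [Region(0, 0.30, 0.45, 0.18),
               Region(1, 0.75, 0.40, 0.05),
               Region(2, 0.50, 0.70, 0.04)]
    for _, label, explanation in annotate(regions):
        print(label.ljust(8), explanation)
```

The point of the sketch is the design principle rather than the toy thresholds: because the rule that fires is itself a human-readable statement about relative position and size, the natural-language justification is simply the rule's condition rendered as a sentence, not a separate post-hoc explanation bolted onto a black box.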

The solution strengthens users' trust in the AI's decisions, of course. But it also anticipates future regulations, which will likely require explainable AI for certain applications. Here, the researchers tested the new feature on medical images; however, it could equally be applied to scene interpretation or the characterization of manufactured parts, for example.

