
World News – AU – Accurate Neural Network Computer Vision Without 'Black Box'

New research provides clues as to what goes on in the minds of machines when they are learning to see. A method developed by Duke's Cynthia Rudin shows how much a neural network calls on different concepts as it works out what it is looking at.


DURHAM, N.C. – The artificial intelligence behind self-driving cars, medical image analysis and other computer vision applications is based on so-called deep neural networks.

These are loosely modeled on the brain and consist of layers of interconnected "neurons" – mathematical functions that send and receive information – which "fire" in response to features of the input data. The first layer processes a raw data input – such as the pixels in an image – and passes that information to the layer above, triggering some of those neurons, which in turn relay a signal to even higher layers until the network finally arrives at a determination of what is in the input image.
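To make that layered picture concrete, here is a minimal sketch of such a network. The framework (PyTorch), the architecture and all names are illustrative assumptions, not details from the study:

```python
import torch
import torch.nn as nn

class TinyImageClassifier(nn.Module):
    """A toy deep network: pixels go in, a class prediction comes out."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Early layers "fire" on low-level features such as edges and colors.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # The final layer turns the accumulated signals into a class decision.
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # assumes 32x32 input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)       # raw pixels -> intermediate neuron activations
        x = torch.flatten(x, 1)
        return self.classifier(x)  # activations -> prediction of what is in the image

model = TinyImageClassifier()
logits = model(torch.randn(1, 3, 32, 32))  # one 32x32 RGB image
print(logits.argmax(dim=1))                # predicted class index
```

Everything between the pixels and that final prediction happens in the hidden layers, which is exactly the part that is hard to inspect.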

But here's the problem, says Duke computer science professor Cynthia Rudin. "We can put in a medical image, for example, and watch what comes out the other end ('this is a picture of a malignant lesion'), but it's hard to know what happened in between."

This is the so-called "black box" problem. What goes on in the head of the machine – the hidden layers of the network – is often unfathomable, even to the people who built it.

"The problem with deep learning models is that they are so complex that we don't know what they are learning," said Zhi Chen, a Ph.D. student in Rudin's lab at Duke. "They can often use information that we don't want them to. Their reasoning processes can be completely wrong."

Rudin and her team have found a way to address this problem. By modifying the reasoning process behind a network's predictions, researchers can better troubleshoot the networks or understand whether they are trustworthy.

Most approaches try to work out, after the fact, what led a computer vision system to the right answer by pointing to the key features or pixels that identified an image: "The growth in this chest X-ray was classified as malignant because, to the model, these areas are critical to the classification of lung cancer." Such approaches don't reveal the network's reasoning, only where it looked.
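For comparison, a typical post-hoc explanation of this kind can be sketched as a plain gradient saliency map, which highlights the pixels the top prediction is most sensitive to. This is a generic illustration of the approaches described above, not the Duke method:

```python
import torch

def saliency_map(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    """image: a (1, C, H, W) tensor. Returns an (H, W) map of per-pixel importance."""
    model.eval()
    image = image.clone().requires_grad_(True)
    logits = model(image)
    top_class = logits.argmax(dim=1).item()
    logits[0, top_class].backward()   # gradient of the top score w.r.t. the pixels
    # Strongest gradient across color channels per pixel = "where the model looked".
    return image.grad.abs().max(dim=1).values[0]
```

A map like this can look convincing while saying nothing about which concepts the hidden layers actually used, which is the gap the Duke team set out to close.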

The Duke team took a different approach. Rather than trying to account for a network's decision-making after the fact, their method trains the network to show its work by expressing its understanding of concepts along the way. The method reveals how strongly the network calls on various concepts to help decipher what it sees. "It disentangles how different concepts are represented within the layers of the network," said Rudin.

Given an image of a library, for example, the approach can be used to determine whether, and how much, the different layers of the neural network rely on their mental representation of "books" to identify the scene.

The researchers found that, with a small adjustment to a neural network, it is possible to identify objects and scenes in images just as accurately as the original network while gaining substantial interpretability in the network's reasoning process. "The technique is very simple to apply," said Rudin.

The method controls the flow of information through the network. It replaces one standard part of a neural network with a new part. The new part constrains a single neuron in the network to fire in response to a single concept that humans understand. The concepts can be categories of everyday objects such as "book" or "bicycle," but they can also be general characteristics such as "metal," "wood," "cold" or "warm." "By having only one neuron control the information about one concept at a time, it is much easier to understand how the network thinks."
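The paper cited below calls this drop-in part "concept whitening." The snippet that follows is a heavily simplified sketch of the idea only; the published module also decorrelates the activations and learns the rotation from images labeled with each concept, and the class name here is illustrative:

```python
import torch
import torch.nn as nn

class ConceptAlignedNorm(nn.Module):
    """Illustrative stand-in for a normalization layer. After normalizing, an
    orthogonal rotation is applied so that, ideally, each output channel responds
    to one human-understandable concept (e.g. channel 0 = "book", channel 1 = "bed")."""

    def __init__(self, num_channels: int):
        super().__init__()
        # Per-channel normalization as a stand-in for full whitening (a simplification).
        self.norm = nn.BatchNorm2d(num_channels, affine=False)
        # Orthogonal rotation aligning channel axes with concepts; in the real
        # method this matrix is learned from concept-labeled example images.
        self.register_buffer("rotation", torch.eye(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.norm(x)
        # Rotate the channel axes: each rotated channel carries one concept.
        return torch.einsum("bchw,cd->bdhw", x, self.rotation)
```

Because each concept is confined to its own channel, reading off a channel's activation indicates how strongly the network is invoking that concept at that depth.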


The researchers tested their approach on a neural network trained on millions of labeled images to recognize different kinds of indoor and outdoor scenes, from classrooms and food courts to playgrounds and patios. Then they turned it loose on images it hadn't seen before. They also looked at which concepts the network's layers relied on most as they processed the data.

Chen pulls up a plot showing what happened when they fed an image of an orange sunset into the network. Their trained neural network says that, in the earlier layers, warm colors in the sunset image, like orange, are more associated with the concept "bed." In short, the network strongly activates the "bed neuron" in the early layers. As the image travels through successive layers, the network gradually relies on a more sophisticated mental representation of each concept, and the concept "airplane" becomes more activated than "bed," perhaps because airplanes are more often associated with skies and clouds.
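A trace like the one Chen describes could, in principle, be read off by probing the concept-aligned channels at several depths of the network. The helper below is a hypothetical sketch of that bookkeeping using PyTorch forward hooks, not the authors' plotting code:

```python
import torch

def concept_trace(model, image, layer_names, concept_names):
    """For each probed layer, return the concept whose channel activates most strongly."""
    activations = {}

    def make_hook(name):
        def hook(module, inputs, output):
            # Average each channel over batch and spatial dimensions;
            # channel index k is assumed to correspond to concept_names[k].
            activations[name] = output.mean(dim=(0, 2, 3))
        return hook

    modules = dict(model.named_modules())
    handles = [modules[n].register_forward_hook(make_hook(n)) for n in layer_names]
    with torch.no_grad():
        model(image)
    for h in handles:
        h.remove()

    return {name: concept_names[int(act[:len(concept_names)].argmax())]
            for name, act in activations.items()}
```

For the sunset example, such a trace would show "bed" dominating in the early layers and "airplane" overtaking it deeper in the network.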

Of course, this captures only a small part of what is happening. Even so, it lets the researchers grasp important aspects of the network's train of thought.

The researchers say their module can be connected to any neural network that recognizes images. In one experiment, they connected it to a neural network that was trained to detect skin cancer in photos.
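As a rough illustration of what "connecting the module" to an existing image network might look like, one could swap a normalization layer inside a standard pretrained classifier for the concept-aligned layer sketched earlier and then fine-tune. The model choice and the layer being replaced are assumptions for illustration, not the published procedure:

```python
import torchvision.models as models

# Load a standard pretrained image classifier (requires a recent torchvision).
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Replace one normalization layer deep in the network with the concept-aligned
# module sketched above, keeping the channel count, then fine-tune the network
# on the task labels together with example images of each concept.
num_channels = resnet.layer4[1].bn2.num_features
resnet.layer4[1].bn2 = ConceptAlignedNorm(num_channels)
```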

Before an AI can learn to recognize melanoma, it must learn what makes melanomas look different from normal moles and other benign spots on the skin by sifting through thousands of training images labeled and tagged by skin cancer experts.

But the network seemed to be developing its own concept of "irregular border," without any help from the training labels. The people annotating the images for use in artificial intelligence applications had not flagged that feature, but the machine picked up on it.

"Our method revealed a flaw in the data set," said Rudin. Had that information been included in the data, it might have been clearer whether the model was reasoning correctly. "This example just goes to show why we shouldn't blindly trust 'black box' models without any clue of what goes on inside them, especially for tricky medical diagnoses," said Rudin.

CITATION: "Concept Whitening for Interpretable Image Recognition," Zhi Chen, Yijie Bei and Cynthia Rudin. Nature Machine Intelligence, Dec. 7, 2020. DOI: 10.1038/s42256-020-00265-z

Computer vision, black box, artificial neural network, neuron, research, artificial intelligence, deep learning


Ref: https://www.miragenews.com
