Developing Artificial Intelligence That “Thinks” Like Humans

Creating human-like AI is about more than just imitating human behavior: the technology must also be able to process information, or “think,” like humans if it is to be fully trusted.

New research, led by the School of Psychology and Neuroscience at the University of Glasgow and published in Patterns, uses 3D modeling to analyze how deep neural networks (a type of machine learning model) process information, and to visualize whether that processing is compatible with the way humans process information.

This new work is expected to pave the way for the creation of more reliable artificial intelligence technology that can process information like humans and make mistakes that we can understand and predict.

One of the challenges AI development still faces is understanding the “thinking” process of machines and whether it matches the way humans process information, so that its accuracy can be ensured. Deep neural networks are generally considered among the best current models of human decision-making behavior, achieving or even exceeding human performance on certain tasks. Yet even seemingly simple visual discrimination tasks can reveal clear inconsistencies and errors in AI models compared with humans.

At present, deep neural network technology is used in applications such as facial recognition. Although it has been very successful in these fields, scientists still do not fully understand how these networks process information or when errors occur.

In this new study, the research team addressed this problem by modeling the visual stimuli given to a deep neural network and transforming them in a variety of ways, in order to test whether humans and the model recognize similarity by processing the same information.

The lead author of the study, Professor Philippe Schyns, director of the Institute of Neuroscience and Psychology at the University of Glasgow, said: “When building an AI model that behaves like a human, for instance to recognize a person’s face whenever it sees it, we must make sure the AI model uses the same information from the face that another human would use to recognize it. If the AI does not do this, we can have the illusion that the system works just like humans do, only to find that it gets things wrong in new or untested circumstances.”

The researchers used a series of modifiable 3D faces and asked humans to rate the similarity of these randomly generated faces to four familiar identities. They then used this information to test whether the deep neural networks rated the faces the same way for the same reasons: not only whether humans and AI made the same decisions, but also whether they relied on the same information. Importantly, their method let the researchers visualize these results as the 3D faces that drive human and network behavior. For example, a network that correctly classified 2,000 identities was driven by a heavily caricatured face, showing that it identified faces by processing very different facial information than humans do.
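The comparison described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration (not the authors’ actual pipeline): each random “face” is a vector of generative parameters, similarity to a target identity is rated by a weighted distance, and the weights stand in for which facial information a human or a network actually uses. Correlating the two rating profiles then asks whether model and human decisions are driven by the same information.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a generative 3D face space:
# each random face is a vector of shape/texture parameters.
n_faces, n_params = 200, 50
faces = rng.normal(size=(n_faces, n_params))

# A "familiar identity" is a target point in the same parameter space.
identity = rng.normal(size=n_params)

def similarity_ratings(stimuli, target, weights):
    """Rate each face's similarity to the target identity.

    `weights` encodes which parameters an observer (human or network)
    actually uses; similarity falls off with weighted distance.
    """
    d = np.sqrt((((stimuli - target) ** 2) * weights).sum(axis=1))
    return -d  # higher value = more similar

# Assumed for illustration: humans use the first 25 parameters,
# while the network relies on a partly overlapping, shifted subset.
human_w = np.zeros(n_params); human_w[:25] = 1.0
model_w = np.zeros(n_params); model_w[10:35] = 1.0

human = similarity_ratings(faces, identity, human_w)
model = similarity_ratings(faces, identity, model_w)

# High correlation means the same stimuli drive both observers,
# not merely that they reach similar final decisions.
r = np.corrcoef(human, model)[0, 1]
print(f"human-model rating correlation: {r:.2f}")
```

A network could in principle match human accuracy while scoring near-zero here, which is exactly the kind of hidden mismatch the study’s method is designed to expose.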

The researchers hope that this work will pave the way for more reliable artificial intelligence technology that behaves more like humans and will reduce unpredictable errors.