
A “New Nobel” – Computer Scientist Wins $1 Million Artificial Intelligence Prize

A Duke University professor has become the second recipient of the AAAI Squirrel AI Award for her pioneering work on socially responsible artificial intelligence.

Whether it’s preventing explosions on the electrical grid, spotting patterns among past crimes, or optimizing resources in the care of critically ill patients, Duke University computer scientist Cynthia Rudin wants artificial intelligence (AI) to show its work, especially when it is making decisions that deeply affect people’s lives.

While many scholars in the developing field of machine learning focus on improving algorithms, Rudin wanted to use AI’s power to help society. She chose to pursue opportunities to apply machine learning techniques to important societal problems, and in the process realized that AI’s potential is best unlocked when humans can see inside it and understand what it is doing.

Cynthia Rudin, professor of electrical and computer engineering and computer science at Duke University. Credit: Les Todd

Now, after 15 years of advocating for and developing “interpretable” machine learning algorithms that allow humans to see inside AI, Rudin’s contributions have earned her the $1 million Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity from the Association for the Advancement of Artificial Intelligence (AAAI). Founded in 1979, AAAI is a prominent international scientific society serving artificial intelligence researchers, practitioners and educators.

Rudin, a professor of computer science and engineering at Duke, is the second recipient of the new annual award, which is funded by the online education company Squirrel AI to recognize achievements in artificial intelligence in a manner comparable to the top prizes in more traditional fields.

She was cited for “pioneering scientific work in the area of interpretable and transparent AI systems in real-world deployments, the advocacy for these features in highly sensitive areas such as social justice and medical diagnosis, and serving as a role model for researchers and practitioners.”

“Only world-renowned recognitions, such as the Nobel Prize and the A.M. Turing Award from the Association for Computing Machinery, carry monetary rewards at the million-dollar level,” said Yolanda Gil, chair of the AAAI awards committee and a past president of the association. “Professor Rudin’s work highlights the importance of transparency for AI systems in high-risk domains. Her courage in tackling controversial issues underscores the importance of research for addressing the critical challenges of responsible and ethical use of AI.”

Rudin’s first applied project was a collaboration with Con Edison, the energy company responsible for powering New York City. Their task was to use machine learning to predict which manholes were at risk of exploding due to degrading and overloaded electrical circuitry. But she soon discovered that no matter how many newly published academic bells and whistles she added to her code, it was difficult to meaningfully improve performance when confronted with the challenges of working with handwritten notes from dispatchers and accounting records dating back to the era of Thomas Edison.

“We were getting more accuracy from simple classical statistics techniques and a better understanding of the data as we continued to work with it,” Rudin said. “If we could understand what information the predictive models were using, we could ask the Con Edison engineers for useful feedback that improved our whole process. It was the interpretability in the process that improved accuracy in our predictions, not any bigger or fancier machine learning model. That’s what I decided to work on, and it is the foundation upon which my lab is built.”

Over the next decade, Rudin developed techniques for interpretable machine learning: predictive models that explain themselves in ways that humans can understand. While the code for designing these formulas is complex and sophisticated, the formulas themselves can be small enough to be written in a few lines on an index card.
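As an illustration of how compact such a model can be, here is a minimal sketch of a points-based risk score in Python. All feature names, point values, and the score-to-risk mapping below are invented for this example; real scorecards of this kind are learned from data, not written by hand.

```python
# Hypothetical points-based "scorecard" in the style of interpretable
# models described above. Every number here is invented for illustration.
SCORECARD = {
    "prior_event": 1,    # +1 point if the patient had a prior event
    "abnormal_eeg": 2,   # +2 points for an abnormal EEG reading
    "age_over_65": 1,    # +1 point if the patient is over 65
}

# Hypothetical mapping from total points to an estimated risk.
RISK_TABLE = {0: 0.05, 1: 0.12, 2: 0.27, 3: 0.50, 4: 0.70}

def risk_score(patient: dict) -> float:
    """Sum the points for the features present, then look up the risk."""
    points = sum(pts for feat, pts in SCORECARD.items() if patient.get(feat))
    return RISK_TABLE[points]

patient = {"prior_event": True, "abnormal_eeg": True, "age_over_65": False}
print(risk_score(patient))  # 3 points -> 0.5
```

The entire model is the two small tables above: a clinician can check every feature, every point value, and the final risk by hand, which is exactly what a black-box model does not allow.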

Rudin has applied her brand of interpretable machine learning to many influential projects. With collaborators Brandon Westover and Aaron Struck at Massachusetts General Hospital, and her former student Berk Ustun, she designed a simple points-based system that can predict which patients are most at risk of having destructive seizures after a stroke or other brain injury. With her former MIT student Tong Wang and the Cambridge Police Department, she developed a model to help reveal commonalities between crimes and determine whether they might be part of a series committed by the same perpetrator. That open-source program eventually became the basis of the New York Police Department’s Patternizr algorithm, a powerful piece of code that determines whether a new crime committed in the city is related to past crimes.

“Cynthia’s commitment to solving important real-world problems, her desire to work closely with domain experts, and her ability to distill and explain complex models are unparalleled,” said Daniel Wagner, deputy commissioner of the Cambridge Police Department. “Her research has made significant contributions to the field of crime analysis and policing. Even more impressively, she is a strong critic of potentially unjust ‘black box’ models in criminal justice and other high-stakes fields, and a vigorous advocate for transparent, interpretable models where accurate, just and bias-free results are essential.”

Black-box models are the opposite of Rudin’s transparent code. The methods applied in these AI algorithms make it impossible for humans to understand what factors a model relies on, which data it focuses on, and how it uses that data. While this may not be a problem for trivial tasks such as distinguishing a dog from a cat, it can be a serious problem for high-stakes decisions that change people’s lives.

“Cynthia is changing the landscape of how AI is used in societal applications by redirecting efforts away from black-box models and toward interpretable models, after showing that the conventional wisdom — that black boxes are usually more accurate — is very often false,” said Jun Yang, chair of the computer science department at Duke. “This makes it harder to justify subjecting individuals, such as defendants, to black-box models in high-stakes situations. The interpretability of Cynthia’s models has been crucial to their adoption in practice, because they empower human decision-makers rather than replace them.”

A striking example involves COMPAS, an AI algorithm used in several U.S. states to inform bail decisions. An investigation by ProPublica accused the algorithm of partially relying on race in its calculations. The allegation is difficult to prove, however, because the details of the algorithm are proprietary, and some important aspects of ProPublica’s analysis are themselves questionable. Rudin’s team has shown that a simple interpretable model, one that reveals exactly which factors it considers, can predict just as well whether a person will be arrested again. That, Rudin says, raises the question of why black-box models need to be used at all for these kinds of high-stakes decisions.
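To make the contrast with a proprietary black box concrete, here is a hedged sketch of the kind of short, transparent rule list such research explores. The rules and thresholds below are invented for illustration and are not the actual model from Rudin’s team; real rule lists are learned from data rather than written by hand.

```python
# Hypothetical rule list for a recidivism-style prediction. Rules are
# checked in order and the first match decides; all thresholds are invented.
def predict_rearrest(age: int, priors: int) -> bool:
    if 18 <= age <= 20 and priors >= 1:
        return True
    if 21 <= age <= 23 and priors >= 3:
        return True
    if priors >= 5:
        return True
    return False

print(predict_rearrest(age=19, priors=2))  # True
print(predict_rearrest(age=40, priors=0))  # False
```

Unlike a proprietary scoring system, a defendant or judge can read a model like this in full and see exactly which factors drove the prediction.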

“We’ve been systematically showing that for high-stakes applications, there’s no loss of accuracy to gain interpretability, as long as we optimize our models carefully,” Rudin said. “We’ve seen this for criminal justice decisions, numerous healthcare decisions including medical imaging, power grid maintenance decisions, financial loan decisions and more. Knowing that this is possible changes the way we think about AI as being incapable of explaining itself.”

Throughout her career, Rudin has not only created these interpretable AI models but also developed and released techniques to help others do the same. This has not always been easy. When she began publishing her work, the terms “data science” and “interpretable machine learning” did not exist, and there were no categories that her research fit neatly into, which meant that editors and reviewers didn’t know what to do with it. Rudin found that a paper was harder to publish, and often still is, if it neither proved a theorem nor claimed its algorithm was more accurate.

As Rudin continued to help people and publish her interpretable designs, and as concerns over black-box code continued to mount, her influence finally began to turn the tide. There are now entire categories in machine learning journals and conferences devoted to interpretable and applied work, and other colleagues in the field and their collaborators are vocal about how important interpretability is to designing trustworthy AI systems.

“I have admired Cynthia since early on for her independence of spirit, her determination, and her relentless pursuit of truly understanding anything new she encountered in her classes and her work,” said Ingrid Daubechies, the James B. Duke Distinguished Professor of Mathematics and Electrical and Computer Engineering at Duke University, one of the world’s preeminent signal processing researchers and one of Rudin’s PhD advisors at Princeton. “Even as a graduate student, she was a community builder who stood up for others in her cohort. She drew me into machine learning, a field in which I had no experience before she nudged me, gently but very persistently, into it. I am so very glad for this wonderful and well-deserved recognition for her!”

“I am extremely excited to see Cynthia’s work honored in this way,” added Robert Schapire, Rudin’s second PhD advisor and a partner researcher at Microsoft Research, whose work on “boosting” helped lay the foundations of modern machine learning. “Above all for her inspiring and insightful research, her independent thinking that has led her in directions very different from the mainstream, and her long-standing attention to problems of practical and societal importance.”

Rudin received undergraduate degrees in mathematical physics and music theory from the University at Buffalo before completing a PhD in computational and applied mathematics at Princeton. She then worked as a National Science Foundation postdoctoral research fellow at New York University and as an associate research scientist at Columbia University, and was an associate professor of statistics at the Massachusetts Institute of Technology before joining Duke’s faculty in 2017, where she holds appointments in computer science, electrical and computer engineering, biostatistics and bioinformatics, and statistical science.

She is a three-time winner of the INFORMS Innovative Applications in Analytics Award, which recognizes creative and unique applications of analytical techniques, and is a fellow of the American Statistical Association and the Institute of Mathematical Statistics.
