How explainable AI could speed up drug discovery

Explainable AI, a set of techniques for making an AI model’s reasoning transparent, could help design new molecules for medications.

It is hard to read an AI’s mind, just as it is hard to read a human’s. Explainable AI (XAI) could help by justifying a model’s decisions.

Now, researchers are using XAI to scrutinise predictive AI models more closely and to peer deeper into the field of chemistry.

A team of researchers will present their results at the autumn meeting of the American Chemical Society (ACS).

Justifying decision-making with explainable AI

AI’s vast number of uses has made it almost ubiquitous in today’s technological landscape.

However, many AI models are black boxes, meaning it’s not clear exactly what steps are taken to produce a result. When that result is something like a potential drug molecule, not understanding the steps might stir up scepticism among scientists and the public.

Rebecca Davis, a chemistry professor at the University of Manitoba, explained: “If we can come up with models that help provide some insight into how AI makes its decisions, it could potentially make scientists more comfortable with these methodologies.”

One way to provide that justification is with explainable AI. These machine learning algorithms can help us see behind the scenes of AI decision-making.

Though XAI can be applied in a variety of contexts, Davis’ research focuses on applying it to AI models for drug discovery, such as those used to predict new antibiotic candidates.

Considering that thousands of candidate molecules can be screened and rejected to approve just one new drug — and antibiotic resistance is a continuous threat to the efficacy of existing drugs — accurate and efficient prediction models are critical.

Seeing things humans might miss

The researchers started by feeding databases of known drug molecules into an AI model trained to predict whether a compound would have a biological effect.
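
The article doesn’t name the team’s model or featurisation, so the following is only a minimal sketch of what this kind of activity-prediction step can look like, assuming RDKit fingerprints and a scikit-learn classifier; the SMILES strings and labels are invented placeholders, not the researchers’ data.

```python
# A minimal, hypothetical sketch of training a model to predict whether a
# compound has biological activity. RDKit and scikit-learn are assumptions;
# the article does not say which tools or featurisation the team used.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

# Placeholder data: molecules as SMILES strings with 1/0 activity labels.
smiles = [
    "CC(=O)Oc1ccccc1C(=O)O",       # aspirin
    "CC(C)Cc1ccc(cc1)C(C)C(=O)O",  # ibuprofen
    "CCO",                         # ethanol
    "c1ccccc1",                    # benzene
]
labels = [1, 1, 0, 0]  # invented labels, for illustration only

def featurise(smi):
    """Encode a molecule as a 2048-bit Morgan fingerprint."""
    mol = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    return np.array(fp)

X = np.array([featurise(s) for s in smiles])
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, labels)

# Score an unseen compound: probability it falls in the "active" class.
print(model.predict_proba([featurise("CC(=O)Nc1ccc(O)cc1")])[0, 1])
```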

Then, they used an explainable AI model developed by collaborator Pascal Friederich at Germany’s Karlsruhe Institute of Technology to examine the specific parts of the drug molecules that led to the model’s prediction.

This helped explain why, according to the model, a particular molecule was active or not, and it gave the researchers a sense of what an AI model deems important and how it forms categories once it has examined many different compounds.
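
The article doesn’t describe how the XAI model assigns credit to parts of a molecule, so here is one generic, hedged illustration of the idea: an occlusion-style attribution that re-scores the molecule with each atom’s fingerprint contribution removed and records how much the predicted activity drops. It continues from the sketch above (reusing its model and imports) and is not the method developed at the Karlsruhe Institute of Technology.

```python
# Continues the sketch above (reuses `model`, `np`, `Chem`, `AllChem`).
# This is a generic occlusion-style attribution, NOT the researchers' actual
# XAI method: drop each atom's fingerprint contribution and measure how the
# model's prediction changes.
def fingerprint(mol, exclude_atom=None):
    """Morgan fingerprint built only from environments rooted at kept atoms."""
    atoms = [a.GetIdx() for a in mol.GetAtoms() if a.GetIdx() != exclude_atom]
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048, fromAtoms=atoms)
    return np.array(fp)

def atom_attributions(smi, model):
    """Score each atom by the drop in predicted activity when it is occluded."""
    mol = Chem.MolFromSmiles(smi)
    base = model.predict_proba([fingerprint(mol)])[0, 1]
    return [
        (atom.GetSymbol(),
         base - model.predict_proba([fingerprint(mol, atom.GetIdx())])[0, 1])
        for atom in mol.GetAtoms()
    ]

# Atoms with large positive scores are the parts of the molecule the model
# leaned on most when calling the compound active.
for symbol, score in atom_attributions("CC(=O)Oc1ccccc1C(=O)O", model):
    print(f"{symbol}: {score:+.3f}")
```

Real chemistry XAI tools are typically more sophisticated, for instance attribution methods built on graph neural networks, but the occlusion idea captures the same core question: which substructures drive the prediction?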

The researchers realised that XAI can see things that humans might have missed; it can consider far more variables and data points at once than a human brain. For example, when screening a set of penicillin molecules, the XAI found something interesting.

“Many chemists think of penicillin’s core as the critical site for antibiotic activity,” said Davis. “But that’s not what the XAI saw.” Instead, it identified the structures attached to that core, rather than the core itself, as the critical factor in its classification.

Improving predictive models

In addition to identifying important molecular structures, the researchers hope to use explainable AI to improve predictive AI models.

Next, the team will partner with a microbiology lab to synthesise and test some of the compounds the improved AI models predict would work as antibiotics.

Ultimately, they hope XAI will help chemists create better, or perhaps entirely different, antibiotic compounds, which could help stem the tide of antibiotic-resistant pathogens.

Davis concluded: “AI causes a lot of distrust and uncertainty in people. But if we can ask AI to explain what it’s doing, there’s a greater likelihood that this technology will be accepted.”
