Michael Bronstein aims to unite the deep learning community
The ARA recipient is pioneering geometric deep learning, an approach that promises not only breakthroughs but also a way to unify the machine learning “zoo”.
It is an era of explosive growth for machine learning (ML). The technology is driving breakthroughs in biochemistry, physics, materials science, medicine, and more. Professor Michael Bronstein is at the center of this growth. Holder of the Chair in Machine Learning and Pattern Recognition in the Department of Computing at Imperial College London, Bronstein is using pioneering machine learning techniques to push the boundaries of drug design, reveal the cancer-fighting properties of food, and even decode whale-speak.
Deep learning, a subset of ML, has emerged in the past decade, revolutionizing both the academic and industrial worlds. Bronstein says the enormous popularity of the field has led to a proliferation of different deep learning architectures for different types of data.
“In the field of deep learning we have this ‘zoo’ of different neural network architectures: convolutional nets are used for images, Transformers for text sequences — the list goes on. It is often hard to see the relationship between different methods because so many things are being reinvented and rebranded.”
So as one of just eight invited speakers at this year’s International Conference on Learning Representations (ICLR), which commenced May 3, Bronstein will encourage the wider ML community to see its field from a broader perspective that is simultaneously old and new, and inspired by geometry.
Bronstein says the various deep-learning architectures can all be derived using the fundamental principles of symmetry and invariance — the cornerstones of modern geometry and physics. “This geometric approach offers a strong set of unifying principles, of which many popular deep learning architectures are particular examples. We can unify this zoo, using this common lens. We show this in a recent text co-authored with Joan Bruna, Taco Cohen, and Petar Veličković.”
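The symmetry principles Bronstein describes can be made concrete with a toy sketch (illustrative only; the names and numbers below are not from the article). A convolution is *equivariant* to translation, the symmetry behind CNNs, while a sum over set elements is *invariant* to permutation, the symmetry behind DeepSets and graph networks:

```python
import numpy as np

def conv1d(x, w):
    """Circular 1-D convolution: the basic building block of a CNN."""
    n = len(x)
    return np.array([sum(w[k] * x[(i + k) % n] for k in range(len(w)))
                     for i in range(n)])

x = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([0.5, -1.0, 0.25])

# Translation equivariance: shifting the input shifts the output identically.
assert np.allclose(conv1d(np.roll(x, 1), w), np.roll(conv1d(x, w), 1))

# Permutation invariance: a sum aggregation ignores element order, the
# symmetry that graph neural networks and DeepSets are built around.
nodes = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
perm = nodes[[2, 0, 1]]
assert np.allclose(nodes.sum(axis=0), perm.sum(axis=0))
```

Each architecture in the “zoo” can be read this way: pick a domain (grid, set, graph), pick its symmetry group, and the layer structure largely follows.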
Much of Bronstein’s recent deep-learning work involves graph neural networks. “A graph is essentially a mathematical representation of a network and it is a powerful abstraction of practically anything, from molecules in your body and how they interact, to social-network users and their relationships to each other,” he says.
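To make the abstraction concrete, here is a minimal sketch (not Bronstein’s actual models; the graph and weights are made up) of how a graph is represented and how one round of message passing, the core operation of a graph neural network, updates each node from its neighbours:

```python
import numpy as np

# Adjacency matrix of a tiny 4-node graph: 1 where two nodes are connected
# (e.g., bonded atoms in a molecule, or friends in a social network).
A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
], dtype=float)

H = np.eye(4)  # one feature vector per node (here: one-hot node "types")
W = np.random.default_rng(0).normal(size=(4, 4))  # learned weights (random here)

# One message-passing layer: each node averages its neighbours' features,
# then applies a learned linear map and a ReLU nonlinearity.
deg = A.sum(axis=1, keepdims=True)
H_next = np.maximum((A / deg) @ H @ W, 0.0)

print(H_next.shape)  # one updated feature row per node
```

Stacking such layers lets information propagate across the network, which is what allows a model to reason about whole molecules or communities rather than isolated nodes.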
Novel protein design
But one of the “coolest and most promising applications” of graph-based deep learning is in biological sciences and drug discovery and development, says Bronstein. A key part of his work, done in collaboration with Bruno Correia’s Laboratory of Protein Design & Immunoengineering at EPFL, involves ML-based research into the design of novel proteins with the potential to act as new drugs.
Traditional drugs bind to pocket-like sites on the surface of target proteins within the body. This binding may then activate a beneficial chemical response to combat an illness or block a detrimental disease process.
Many promising therapy targets involve interactions between proteins, such as “immune checkpoints”. These checkpoints exist to prevent our own immune system from destroying healthy cells in the body, but in some situations the system goes awry, allowing cancer cells to evade detection.
Immune checkpoint inhibitors — a new therapeutic approach against cancer recognized by a Nobel Prize in 2018 — work by blocking checkpoint proteins from binding with their partner proteins, allowing the immune system to kill cancer cells. One of the difficulties in designing such inhibitors is that the interfaces of the respective proteins are flat, lacking the typical pocket-like structures, which makes them “undruggable” by traditional small molecules.
“What is in common between CNNs, GNNs, LSTMs, Transformers, DeepSets, mesh CNNs? In a new post with @joanbruna @TacoCohen @PetarV_93 we show this zoo of neural nets can be seen through the lens of symmetry. #geometricdeeplearning is all you need! https://t.co/BMYfaz7sBK” — Michael Bronstein (@mmbronstein) April 28, 2021
In the last few decades, however, a new breed of drugs called “biologics” has been designed to address this challenge in cancer treatment. Biologics are larger, more complex molecules — in the form of proteins, peptides, and antibodies — and so are more challenging to produce than conventional drugs. However, they have completely transformed the prognosis for some groups of patients with advanced cancer.
Bronstein and his collaborators are using graph neural networks to work out the shape of novel proteins that could then be synthesized to bind to these flat surfaces.
“We use geometric neural networks, modelling proteins as molecular surfaces, that can predict if and where two proteins will bind,” he said. “This allows us to build a new protein that is very likely to bind to the target.”
This is more than theoretical. Bronstein’s collaborators have already synthesized these novel proteins and confirmed that their designs make real-world sense.
“I'm not saying that we’ll have these drugs in the clinic tomorrow, but we now have a computational pipeline that is data-driven, that allows us to both predict properties of proteins, and to build proteins with the desired functionality,” he said.
They call the technological concept Molecular Surface Interaction Fingerprinting (MaSIF), and Bronstein takes pride in having the first paper with “Geometric Deep Learning” in the title to appear on the cover of a major biological journal (the February 2020 issue of Nature Methods).
Computational pipelines like this require significant computing power and extensive experimentation, which is why Bronstein’s new Amazon Research Award funding is foundational to his research.
“Amazon Web Services’ cloud computing is essential because when we run these experiments, we need many virtual instances of GPUs, and it allows us to rapidly scale our experiments,” he says. “For example, when we need to tune the parameters of our neural networks or run multiple deep-learning architectures simultaneously, the number of virtual machines we can create is unlimited. Without this capability, which we are heavily exploiting, my work would be impossible. This is how modern machine learning must be done.”
While Bronstein is primarily using his ARA award for access to high-performance computing, he has also been attracted to other AI/ML tools in the AWS suite of services. “We’ve started to experiment with the open-source Deep Graph Library in Amazon SageMaker for some of our projects on graph deep learning,” he notes.
Dark matter of nutrition
Bronstein has also been applying his techniques, and high-performance computing, in the search for what he calls the “dark matter of nutrition” — the variety of chemical types that occur naturally in foods that may help prevent and fight diseases, but which are poorly understood.
Using a training dataset composed of drug molecules with known anti-cancer effects, Bronstein and his collaborator, Kirill Veselkov of the Imperial College London Faculty of Medicine, trained a graph-based classifier that ultimately predicted the anti-cancer profiles of 250 common food ingredients. A product of the work was a “food map” that rated the anti-cancer potential of each foodstuff by the level of anti-cancer drug-like molecules it contained.
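The overall shape of such a pipeline can be sketched as follows (a purely illustrative toy, with fabricated data and labels, not the actual model or dataset): each molecule is a graph, a permutation-invariant readout pools its node features into one vector, and a simple classifier head is trained on those vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

def graph_readout(node_feats):
    """Permutation-invariant pooling: mean over a molecule's atoms."""
    return node_feats.mean(axis=0)

# Fake "molecules": varying numbers of atoms, 3 features per atom.
molecules = [rng.normal(size=(n, 3)) for n in (5, 8, 6, 9)]
labels = np.array([0, 1, 0, 1])  # made-up binary "drug-like" labels

X = np.stack([graph_readout(m) for m in molecules])

# Tiny logistic-regression head trained by gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - labels) / len(labels)
    b -= 0.5 * (p - labels).mean()

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(preds)
```

In practice the readout would sit on top of message-passing layers and the classifier would be trained end to end on real anti-cancer drug data; the sketch only shows why a graph-level classifier does not depend on atom ordering.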
“We found prominent champions that we call ‘hyperfoods’,” says Bronstein.
The results were good news for tea drinkers and fans of foods including celery, sage, and citrus fruits. Such work also heralds machine learning’s nascent role in the development of AI-powered nutritional programs that could play a part in the prevention and treatment of a variety of health conditions.
But perhaps the most intriguing of Bronstein’s current passions is Project CETI, an ambitious international collaboration and winner of the 2020 Audacious Project prize that will apply ML technology in an effort to decipher sperm whale communication.
The project will use arrays of passive sensors and non-invasive robots to observe and listen in on populations of Caribbean sperm whales. Once an enormous amount of acoustic and behavioural data has been collected, advanced ML will be employed in a bid to understand the majestic creatures’ interactions, vocalizations, and behaviour patterns.
“CETI is a once-in-a-lifetime opportunity,” said Bronstein in a presentation about the project. “And I would say without exaggeration that it is probably the craziest moonshot that I have ever participated in.”