Research

Advancing AI theory with a first-principles understanding of deep neural networks

June 18, 2021

The steam engine powered the Industrial Revolution and changed manufacturing forever — and yet it wasn’t until the laws of thermodynamics and the principles of statistical mechanics were developed over the following century that scientists could fully explain at a theoretical level why and how it worked.

Lacking theoretical understanding didn’t stop people from improving on the steam engine, of course, but discovering the principles of the heat engine led to rapid improvements. And when scientists finally grasped statistical mechanics, the ramifications went far beyond building better and more efficient engines. Statistical mechanics led to an understanding that matter is made of atoms, foreshadowed the development of quantum mechanics, and (if you take a holistic view) even led to the transistor that powers the computer you’re using today.

AI today is at a similar juncture. Deep neural networks (DNNs) are a fixture of modern AI research, but they are more or less treated as a “black box.” While substantial progress has been made by AI practitioners, DNNs are typically thought of as too complicated to understand from first principles. Models are fine-tuned largely by trial and error — and while trial and error can be done intelligently, often informed by years of experience, it is carried out without any unified theoretical language with which to describe DNNs and how they function.

Today we are announcing the publication of The Principles of Deep Learning Theory: An Effective Theory Approach to Understanding Neural Networks, a collaboration between Sho Yaida of Facebook AI Research, Dan Roberts of MIT and Salesforce, and Boris Hanin of Princeton. At a fundamental level, the book provides a theoretical framework for understanding DNNs from first principles. For AI practitioners, this understanding could significantly reduce the amount of trial and error needed to train these DNNs. It could, for example, reveal the optimal hyperparameters for any given model without going through the time- and compute-intensive experimentation required today.

The Principles of Deep Learning Theory will be published by Cambridge University Press in early 2022. The manuscript is now publicly available, and the print version can be ordered here. “The book presents an appealing approach to machine learning based on expansions familiar in theoretical physics,” said Eva Silverstein, a Professor of Physics at Stanford University. “It will be exciting to see how far these methods go in understanding and improving AI.”

This is only the first step toward the much larger project of reimagining a science of AI, one that’s both derived from first principles and at the same time focused on describing how realistic models actually work. If successful, such a general theory of deep learning could potentially enable vastly more powerful AI models and perhaps even guide us toward a framework for studying universal aspects of intelligence.

Interacting neurons

Until now, theorists trying to understand DNNs have typically relied on an idealization of such networks, the so-called infinite-width limit, in which DNNs are modeled with an infinite number of neurons per layer. Like the ideal gas law compared with a real gas, the infinite-width abstraction provides a starting point for theoretical analysis. But it often bears little resemblance to real-world deep learning models — especially neural networks of nontrivial depth, where the abstraction deviates more and more from an accurate description. While occasionally useful, the infinite-width limit is overly simplistic and ignores many of the key features of real DNNs that make them such powerful tools.
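
To see what this idealization is pointing at, the following minimal sketch (not taken from the book; the widths, depth, and the excess-kurtosis diagnostic are illustrative choices) samples the scalar output of a randomly initialized ReLU multilayer perceptron over many weight draws. As the layer width grows, the output distribution looks more and more Gaussian, which is the statement that underlies the infinite-width limit.

```python
# Illustrative sketch: the output of a randomly initialized ReLU MLP becomes
# increasingly Gaussian as the layer width grows, which is the statement
# underlying the infinite-width idealization.
import numpy as np

rng = np.random.default_rng(0)

def output_samples(width, depth=3, n_samples=2000):
    """Sample the scalar output of a random ReLU MLP for one fixed input."""
    x = rng.standard_normal(width)  # a fixed input of matching width, for simplicity
    samples = []
    for _ in range(n_samples):
        h = x
        for _ in range(depth):
            # He-style scaling keeps the signal size roughly constant with depth
            W = rng.standard_normal((width, width)) * np.sqrt(2.0 / width)
            h = np.maximum(W @ h, 0.0)  # ReLU hidden layer
        w_out = rng.standard_normal(width) * np.sqrt(1.0 / width)
        samples.append(w_out @ h)  # scalar linear readout
    return np.array(samples)

for n in (4, 32, 128):
    z = output_samples(n)
    z = (z - z.mean()) / z.std()
    # Excess kurtosis is 0 for an exact Gaussian; it shrinks as the width grows.
    print(f"width {n:4d}: excess kurtosis ≈ {np.mean(z**4) - 3.0:+.3f}")
```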

Approaching the problem from a physicist’s perspective, The Principles of Deep Learning Theory improves on this infinite-width limit by laying out an effective theory of DNNs at finite width. Physicists traditionally aim for the simplest idealized model that still incorporates the minimum complexity necessary to describe the real world. Here, that required backing off the infinite-width limit and systematically incorporating all the corrections needed to account for finite-width effects. In the language of physics, this means modeling the tiny interactions between neurons both within a layer and across layers.
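
Schematically, and in illustrative notation rather than the book's own conventions, such a finite-width effective theory organizes an observable of the network as an expansion around the infinite-width result, with corrections suppressed by powers of the inverse layer width:

```latex
% Schematic finite-width expansion (illustrative notation, not the book's exact conventions):
% the leading term is the noninteracting, infinite-width result, and the 1/n corrections
% encode the interactions among the n neurons in a layer.
\[
  \langle \mathcal{O} \rangle
  \;=\;
  \langle \mathcal{O} \rangle_{\text{infinite width}}
  \;+\; \frac{1}{n}\,\langle \mathcal{O} \rangle^{(1)}
  \;+\; O\!\left(\frac{1}{n^{2}}\right).
\]
```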

These may sound like small changes, but the resulting description is qualitatively different from that of the existing toy models. Imagine two billiard balls heading toward each other. If you used a noninteracting model analogous to the infinite-width limit to calculate what was about to happen, you’d find that the balls pass right through each other and continue in the same direction. But obviously that’s not what happens. The electrons in the balls cannot occupy the same space, so they ricochet off each other.

Those interactions — however small they may be for individual electrons — are what prevent you from falling through your chair, through the floor, and straight toward the center of the earth. Those interactions matter in real life, they matter in physics, and they matter to DNNs as well.


Taking into account similar interactions between neurons, the book’s theory finds that the real power of DNNs — their ability to learn representations of the world from data — is proportional to their aspect ratio, i.e., the depth-to-width ratio. This ratio is zero for infinite-width models, so those toy models fail to capture depth, and their description becomes less and less accurate as the depth of a DNN increases. In contrast, by working with finite-width layers, the effective theory actually factors in depth — which is vital for representation learning and other applications where the depth of the DNN really matters.
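
In the same illustrative notation as above (again, not the book's exact conventions), the controlling parameter is then the aspect ratio rather than the width alone: per-layer corrections of order 1/n compound over the L layers, so the deviation from the infinite-width description is set by their ratio.

```latex
% The depth-to-width aspect ratio as the expansion parameter (schematic notation):
% per-layer 1/n corrections accumulate over L layers, so the size of the deviation
% from the infinite-width description is governed by r = L/n, which vanishes in the
% infinite-width limit.
\[
  r \;\equiv\; \frac{L}{n},
  \qquad
  \langle \mathcal{O} \rangle
  \;=\;
  \langle \mathcal{O} \rangle_{\text{infinite width}}
  \left[\, 1 + O(r) \,\right].
\]
```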

"In physics, effective field theories are a rigorous and systematic way to understand the complex interactions of particles,” said Jesse Thaler, Associate Professor of Physics at MIT and Director of the NSF AI Institute for Artificial Intelligence and Fundamental Interaction. “It is exciting to see that a similarly rigorous and systematic approach applies to understanding the dynamics of deep networks. Inspired by these developments, I look forward to more fruitful dialogue between the physics and AI communities."

Opening the box

While the framework described in the book can extend to the real-world DNNs used by the modern AI community — and provides a blueprint for doing so — the book itself mostly focuses on the simplest deep learning models (deep multilayer perceptrons) for the purposes of instruction.

Applied to this simplest architecture, the equations of the effective theory can be solved systematically. This means that we can have a first-principles understanding of the behavior of a DNN over the entire training trajectory. In particular, we can explicitly write down the function that a fully trained DNN is computing in order to make predictions on novel test examples.

Armed with this new effective theory, we hope theorists will be able to push for a deeper and more complete understanding of neural networks. There is much left to compute, but this work potentially brings the field closer to understanding what particular properties of these models enable them to perform intelligently.

We also hope that the book will help the AI community reduce the cycles of trial and error that sometimes constrain current progress. We want to help practitioners rapidly design better models — more efficient, higher performing, faster to train, or perhaps all of the above. In particular, those designing DNNs will be able to pick optimal hyperparameters without any training, and to choose the algorithms and model architecture best suited to their goals.
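
As one flavor of what a theory-derived hyperparameter choice can look like, consider the familiar criticality argument from the signal-propagation literature, sketched below with illustrative widths and depths rather than as a prescription quoted from the book: the weight-initialization variance C_W is tuned so that signals neither vanish nor explode with depth, which for ReLU networks singles out a variance of 2 divided by the fan-in.

```python
# Illustrative criticality check (not a recipe quoted from the book): with weights
# drawn as W_ij ~ N(0, C_W / fan_in), each ReLU layer rescales the mean squared
# activation by roughly C_W / 2, so C_W = 2 keeps the signal scale stable with
# depth while other choices make it vanish or explode exponentially.
import numpy as np

rng = np.random.default_rng(0)

def mean_sq_activation(c_w, width=256, depth=50):
    """Propagate one random input through a deep ReLU MLP and report the final scale."""
    h = rng.standard_normal(width)
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * np.sqrt(c_w / width)
        h = np.maximum(W @ h, 0.0)
    return float(np.mean(h**2))

for c_w in (1.0, 2.0, 3.0):
    print(f"C_W = {c_w}: mean squared activation after 50 layers ≈ {mean_sq_activation(c_w):.2e}")
```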

These are questions that many in the field have long felt could never be answered or explained. The Principles of Deep Learning Theory demonstrates that AI isn’t an inexplicable art, and that practical AI can be understood through fundamental scientific principles.

Theory informing practice

Hopefully this is just the beginning. We plan to continue our research, extending our theoretical framework to other model architectures and deriving new results. And on a broader level, we hope the book demonstrates that theory can provide an understanding of real models of practical interest.

"In the history of science and technology, the engineering artifact often comes first: the telescope, the steam engine, digital communication. The theory that explains its function and its limitations often appears later: the laws of refraction, thermodynamics, and information theory,” said Facebook VP and Chief AI Scientist Yann LeCun. “With the emergence of deep learning, AI-powered engineering wonders have entered our lives — but our theoretical understanding of the power and limits of deep learning is still partial. This is one of the first books devoted to the theory of deep learning, and lays out the methods and results from recent theoretical approaches in a coherent manner."

While empirical results have propelled AI to new heights in recent years, we firmly believe that practice grounded in theory could help accelerate AI research — and possibly lead to the discovery of new fields we can’t even conceive of yet, just as statistical mechanics led to the Age of Information over a century ago.

This blog post was updated in April 2022 to include the Cambridge University Press link where the print version of the book can be ordered.
