When we prompt a large language model (LLM) to solve a complex polynomial equation, it does not just return an answer but uses its “chain of thought” to work through a solution. In a sense, the LLM behaves like a computer, a machine that computes the solution. But this machine is quite unlike what Alan Turing described as a universal model of computation almost 90 years ago.
In what sense can an LLM be thought of as a computer? Can it be universal, that is, able to solve any computable task, as a Turing machine does? If so, how does it learn this ability from finite data?
Current theories of machine learning are of little help in answering these questions, so we need new tools. In an earlier Amazon Science post, we argued that AI agents and the LLMs that power them are transductive-inference engines, despite being trained inductively in the mold of classical machine learning theory. Induction seeks generalization, or the ability to behave on future data as one did on past data. To achieve generalization, one must avoid memorization, i.e., overfitting the training data.
This works in theory, under the condition that both past and future data are drawn from the same distribution. In practice, however, such a condition cannot be verified, and in general, it doesn’t apply to high-value data in business, finance, climate science, and even language. That leaves us with no handle to explain how an LLM might learn how to verifiably solve a general computable task.
With transduction, by contrast, one seeks to reason through past data to craft solutions to new problems. Transduction is not about applying past solutions in the hope that they generalize; rather, it is about being able to retrieve portions of memory that matter when reasoning through new solutions. In transduction, memorization is not a stigma but a value. Using the test data, along with memory, to craft a solution during transductive inference is not overfitting but adaptive, query-specific computation — i.e., reasoning.
Inductive generalization is the kind of behavior one is forced to adopt when pressed for time. Such automatic, reactive behavior is sometimes referred to as “system-1” in cognitive psychology. Transduction instead requires looking at all data and performing query-specific variable-length inference-time computation — chain-of-thought reasoning in an LLM, whose length depends on the complexity of the query. Such deliberative behavior is often referred to as “system-2” and is what we wish to foster through learning. In this sense, transductive learning is a particular form of meta-learning, or learning to reason.
In 1964, Ray Solomonoff described a universally optimal algorithm for solving any problem through transductive inference, if we assume that memory and time are unbounded: execute all programs through a Turing machine, then average the outcome of those that reproduce the observed data. That will give the universally optimal answer — but it will generally take forever. What if we want not just a universally optimal but a universally fast algorithm?
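Solomonoff’s construction runs every program on a universal Turing machine, which is why it takes forever. As a toy sketch of the weighted-averaging idea, we can shrink the program class to something trivially enumerable — here, bit patterns repeated forever, an assumption made purely for illustration:

```python
from itertools import product

def solomonoff_predict(observed, max_len=8):
    """Toy Solomonoff-style mixture over a deliberately tiny program class:
    each 'program' is a bit pattern repeated forever, given prior weight
    2**-length. Average the next-bit predictions of all programs that
    reproduce the observed prefix; return P(next bit = 1)."""
    num = den = 0.0
    for length in range(1, max_len + 1):
        for bits in product('01', repeat=length):
            pattern = ''.join(bits)
            stream = pattern * (len(observed) // length + 2)
            if stream.startswith(observed):       # program reproduces the data
                weight = 2.0 ** -length           # shorter program, larger prior
                num += weight * int(stream[len(observed)])
                den += weight
    return num / den if den else 0.5
```

On the prefix `0101`, the mixture assigns most of its weight to the shortest consistent program (repeat `01`) and so predicts the next bit is 0 with high probability.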
In 1973 — in the same paper where he introduced the notion of NP-completeness — Leonid Levin derived such an algorithm. Unfortunately, Levin’s so-called universal search is not viable in practice, nor does it help us understand LLMs; for one thing, it involves no learning. Nonetheless, Levin pointed to the critical importance of time when solving computational tasks. Later, in 1986, Solomonoff hinted at how learning can help reduce time.
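Levin’s search interleaves all candidate programs, giving each a time budget that shrinks exponentially with its length, so no single non-halting program can stall the search. A minimal sketch, assuming an invented two-instruction toy language in place of a universal machine:

```python
from itertools import product

def run(prog, steps):
    """Step-limited interpreter for a toy two-instruction language
    (an invented stand-in for a universal machine):
    'I' increments an accumulator, 'D' doubles it."""
    acc = 0
    for i, op in enumerate(prog):
        if i >= steps:
            return None                      # budget exhausted this phase
        acc = acc + 1 if op == 'I' else acc * 2
    return acc

def levin_search(is_solution, alphabet='ID', max_phase=20):
    """In phase t, a program p gets a budget of roughly 2**t * 2**-len(p)
    steps, so total work per phase stays bounded no matter how many
    candidate programs there are."""
    for t in range(max_phase):
        for length in range(1, t + 1):
            budget = 2 ** (t - length)       # ~ 2**t * 2**-len(p)
            for prog in map(''.join, product(alphabet, repeat=length)):
                out = run(prog, budget)
                if out is not None and is_solution(out):
                    return prog
    return None
```

Here `levin_search(lambda x: x == 5)` returns `'IIDI'` (1, 2, 4, 5): a short, fast program, found without spending unbounded time on any one candidate.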
In a new paper, we expand on these ideas and show how reducing inference time induces a trained model to operate transductively — i.e., to reason. In striving to reduce inference time, the model learns not just the statistical structure of the training data but also its algorithmic structure. It can then recombine algorithmic methods it’s learned in an infinite number of ways to address arbitrary new problems.
This insight has implications for how AI models are designed and trained. In particular, they should be designed to predict the marginal value of additional computation at inference time, and their training targets should include complexity costs, to force them to minimize time during inference.
This approach to learning turns classical statistical learning theory on its head. In classical statistical learning theory, the great danger is overfitting, so the goal is to regularize the solution, i.e., to minimize the information that the trained model retains from past data (beyond what matters for reducing the training loss). With transductive inference, on the other hand, the goal is to maximize the information retained, as it may come in handy for solving future problems.
The inversion of scaling laws
LLMs’ performance gains in the past few years have come mostly from scaling: increasing the number of model parameters has improved accuracy on benchmark datasets. This has led many to speculate that further increasing the models’ parameter counts could usher in an age of “superintelligence,” where the cognitive capacities of AI models exceed those of their human creators.
In our paper, we argue the opposite: beyond a certain complexity, AI models enter what we call the savant regime, where learning becomes unnecessary, and better performance on the benchmarks comes with decreased “insight”. At the limit is the algorithm Solomonoff described in 1964, where any task can be solved by brute force.
If scale does not lead to intelligence, what does? We argue that the answer is time.
It’s an answer with some intuitive appeal. The concept of intelligence is fundamentally subjective and environment dependent. But while intelligence is hard to characterize, its absence is less so. Being unable to adapt to the speed of the environment is one among many behaviors that we call traits of non-intelligence (TONIs). TONIs are behaviors whose presence negates intelligence however one wishes to define it.
Many TONIs are timebound. Taking the same amount of (non-minimal) time and energy to solve repeated instances of the same task, to no better outcome, is a TONI. So is the inability to allocate resources commensurate to the goal, thus spending the same effort for a trivial task as for a complex one. Starting a task that is known to take longer than the lifetime of the universe to render any usable answer would be another TONI.
Given this intuition, how do we quantify the relationship between intelligence and time in AI models? The first step is to assess the amount of information contained in the models’ parameters; then we can see how it’s affected by the imposition of time constraints.
Algorithmic information
The standard way to measure information was proposed by Claude Shannon in a landmark 1948 paper that essentially created the field of information theory. Shannon defined the information content of a random variable as the entropy of its distribution. The more uncertainty about its value, the higher the information content.
On this definition, however, a given data sample’s information content is not a property of the sample itself; it’s a property of the distribution it was drawn from. But for any given sample, there are infinitely many distributions from which it could have been drawn. If all you have is a sample — say, a string of ones and zeroes — how do you compute its information content?
In the 1960s, Solomonoff and, independently, Andrey Kolmogorov addressed this problem with an alternative notion of information, algorithmic information, which can be used to characterize the information content of arbitrary binary strings. For a given string, one can write a program that, when run through some computer, outputs that string. In fact, one can write infinitely many such programs and run each through many computers.
The shortest possible program that, run through a universal Turing machine, outputs the specific datum is a property of that datum. That program is the algorithmic minimal sufficient statistic, and its length is the algorithmic information (Kolmogorov-Solomonoff complexity) of that datum.
In his 1948 paper, Shannon also defined a metric called mutual information, which quantifies the information that can be inferred about the value of one variable by observing a correlated variable. This concept, too, can be extended to algorithmic information theory: the algorithmic mutual information between two data strings measures how much shorter the program for generating one string will be if you have access to the other.
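Although algorithmic information is uncomputable, off-the-shelf compressors give computable upper bounds on it, and a crude, compression-based proxy for algorithmic mutual information (in the spirit of the normalized-compression-distance literature) can be sketched in a few lines. The example strings below are assumptions for illustration:

```python
import zlib

def C(data: bytes) -> int:
    """Compressed length: a computable upper bound on algorithmic information."""
    return len(zlib.compress(data, 9))

def mutual_info_bytes(x: bytes, y: bytes) -> int:
    """Rough stand-in for algorithmic mutual information: how many bytes
    does a compressor save by describing x and y together rather than apart?"""
    return C(x) + C(y) - C(x + y)

# Two related strings and one unrelated, noise-like string.
x = b"the quick brown fox jumps over the lazy dog " * 20
y = b"the quick brown fox naps under the lazy tree " * 20
z = bytes((i * 97 + 13) % 256 for i in range(len(x)))
```

Because `x` and `y` share phrases, compressing them jointly saves more than compressing `x` alongside the unrelated `z`, so `mutual_info_bytes(x, y)` exceeds `mutual_info_bytes(x, z)`.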
Time is information
If we don’t know the distribution from which a model’s training data was drawn, and we don’t know whether the model’s future inputs will be drawn from the same distribution, how can we quantify the model’s future performance?
In our paper, we assume that most tasks can be solved by combining and transforming — in infinitely many possible ways — some ultimately finite, but a priori unknown, collection of methods. In that case, we can show that optimizing performance is a matter of maximizing the algorithmic mutual information between the model’s training data and future tasks.
Finding the shortest possible algorithm for generating a particular binary string is, however, an intractable problem (for all but the shortest strings). So computing the algorithmic mutual information between a model’s training data and future tasks is also intractable.
Nonetheless, in our paper, we prove that there is a fundamental relation between the speed with which a model can find a solution to a new task and the algorithmic mutual information between the solution and the training data. Specifically, we show that
log speed-up = I(h : D)
where h is the solution to the new task, D is the dataset the model was trained on, and I(h : D) is the algorithmic mutual information between the data and the solution.
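A toy illustration of this relation: every bit of the solution that is already encoded in memory halves the remaining search space, so the base-2 log of the speed-up equals the number of shared bits. The brute-force setup below is an assumption for illustration:

```python
import math
from itertools import product

def search_cost(target: str, known_prefix: str = '') -> int:
    """Number of candidates tried when brute-forcing a bit string whose
    first len(known_prefix) bits are already 'learned' from past data."""
    tries = 0
    for tail in product('01', repeat=len(target) - len(known_prefix)):
        tries += 1
        if known_prefix + ''.join(tail) == target:
            return tries

target = '1' * 8                     # worst case for lexicographic search
slow = search_cost(target)           # 2**8 candidates, no shared information
fast = search_cost(target, '1' * 4)  # 2**4 candidates, 4 bits shared
speedup_bits = math.log2(slow / fast)  # equals the 4 shared bits
```

Knowing 4 of the 8 bits yields a 2**4 speed-up, matching the equation: the log of the speed-up is the mutual information between the solution and what was already known.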
This means that, during training, minimizing the time the model takes to perform an inference task will maximize the algorithmic information encoded in its weights. Reducing inference time ensures that, even as models’ parameter counts increase, they won’t descend into the savant regime, where they solve problems through brute force, without any insight or learning.
The value of time
You may have noticed that the equation relating inference time to algorithmic information doesn’t specify any units of measure. That’s because even the value of “time” is subjective. A zebra drinking from a pond does not know a priori how long it will take to be spotted by a predator. If it lingers too long, it ends up prey; if it panics and leaves, it ends up dehydrated.
Similarly, for an AI model, there is no single cost of time to train for and correspondingly no unique scale beyond which LLMs enter the savant regime. For some tasks, such as scientific discovery, the time constant is centuries, while for others, such as algorithmic trading, it’s milliseconds. We expect agents to be able to adapt to their environment, in some cases spawning smaller specialized models for specific classes of tasks, and even then, to provide users (who are part of an agent’s environment) with controls to adjust the cost of time depending on the context and domain of application.
The cost of time is already (partially and implicitly) factored into the process of training LLMs. During pretraining, the cost of time is effectively set to a minimum value, as the model is scored on the output of a single forward pass through the training data. Fine-tuning the model for chain-of-thought reasoning requires annotated data, whose high cost imposes a bias toward shorter “ground truth” reasoning traces. Thus, LLMs already reflect the subjective cost of time to the annotators who assemble the training sets.
However, to enable the user to modulate resources at inference time, depending on the cost of the environment, models should be trained to predict the marginal value of one more step of computation relative to the expected final return. Furthermore, they need to be trained to condition on a target complexity, in order to learn how to provide an answer within a customer-specified cost or bound.
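A minimal sketch of such a stopping rule, with hypothetical `value_estimate` and `time_cost` stand-ins for quantities a trained model would actually supply:

```python
def run_with_budget(step, value_estimate, time_cost, max_steps=100):
    """Generic anytime-inference loop: keep computing while the predicted
    marginal value of one more step exceeds its cost. All three callables
    and the cost are placeholders for learned quantities."""
    state = None
    for t in range(max_steps):
        # Predicted gain from taking step t versus stopping now.
        marginal = value_estimate(state, t + 1) - value_estimate(state, t)
        if marginal < time_cost:
            break                      # more computation no longer pays
        state = step(state)
    return state
```

With a diminishing-returns value estimate such as `1 - 0.5 ** t` and a per-step cost of 0.1, the loop performs exactly three steps before stopping; raising the cost shortens the computation, which is the user-facing control described above.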
There are growing efforts to teach models the value of time, so they can adapt to the tasks at hand (with or without human supervision). These efforts are certain to yield a better bang for the buck, but the theory predicts that, at some point, factoring in the cost of time will actually improve absolute performance on new tasks. For verifiable tasks, learning to reason comes from seeking the shortest chain of thought that yields a correct (verified) answer. Ultimately, imposing a cost on time should not impair reasoning performance.
A new paradigm for AI coding
Connecting these ideas to modern AI requires rethinking what computation means. LLMs are stochastic dynamical systems whose computational elements (context, weights, activations, chain of thought) do not resemble the “programs” in classical, minimalistic models of computation, such as universal Turing machines.
Yet LLMs are models of computation — maximalist models. They’re universal, like Turing machines, but in many ways antithetical to them, operating through entirely different mechanisms. It’s possible to “program” such stochastic dynamical systems using a two-level control strategy: high-level, open-loop, global planning and low-level, closed-loop feedback control.
That strategy can be realized with AI Functions, an open-source library released this week as part of Amazon’s Strands Labs, a GitHub repository for building AI agents. An existing programming language can be augmented with functions from the library. These are ordinary functions, in the syntax of the language, but their bodies are written in natural language instead of code, and they’re governed by pre- and post-conditions. These enable high-level, open-loop planning and verification, before a single line of code is written by AI, and they engender an automatic local feedback loop if the AI-generated code fails to clear all conditions. Minimizing time, which translates into cost, is at the core of the design and evaluation of the resulting agents.
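The two-level pattern — open-loop planning via pre- and post-conditions, plus a closed feedback loop when generated code fails them — can be caricatured in a few lines. This is a hypothetical sketch of the pattern, not the actual Strands AI Functions API:

```python
def checked(pre, post, max_retries=3):
    """Hypothetical sketch: verify the pre-condition up front (open-loop
    planning), then retry a generator with failure feedback until the
    post-condition holds (closed-loop control)."""
    def wrap(generate):
        def call(*args):
            assert pre(*args), "pre-condition failed"
            feedback = None
            for _ in range(max_retries):
                result = generate(*args, feedback=feedback)
                if post(*args, result):
                    return result
                feedback = f"post-condition failed for {result!r}"
            raise RuntimeError("no candidate satisfied the post-condition")
        return call
    return wrap

# Toy stand-in for an AI-generated body: the first guess fails the
# post-condition, and the feedback-driven retry succeeds.
@checked(pre=lambda x: x > 0, post=lambda x, r: r > x)
def next_bigger(x, feedback=None):
    return x - 1 if feedback is None else x + 1
```

Here `next_bigger(5)` first proposes 4, fails the post-condition, and returns 6 on the feedback-driven retry, mirroring the automatic local feedback loop the library provides around AI-generated code.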