Intelligence isn’t about parameter count. It’s about time.

As AI models grow larger, they become less insightful, not more. To ensure that they continue to learn, we need to reduce their inference time.

When we prompt a large language model (LLM) to solve a complex polynomial equation, it does not just return an answer but uses its “chain of thought” to work through a solution. In a sense, the LLM behaves like a computer, a machine that computes the solution. But this machine is quite unlike what Alan Turing described as a universal model of computation almost 90 years ago.

Stefano Soatto is a vice president and distinguished scientist in the Amazon Web Services (AWS) Agentic AI organization.
Credit: UCLA Samueli

In what sense can an LLM be thought of as a computer? Can it be universal, that is, able to solve any computable task, as a Turing machine does? If so, how does it learn this ability from finite data?

Current theories of machine learning are of little help in answering these questions, so we need new tools. In an earlier Amazon Science post, we argued that AI agents and the LLMs that power them are transductive-inference engines, despite being trained inductively in the mold of classical machine learning theory. Induction seeks generalization, or the ability to behave on future data as one did on past data. To achieve generalization, one must avoid memorization, i.e., overfitting the training data.

This works in theory, under the condition that both past and future data are drawn from the same distribution. In practice, however, such a condition cannot be verified, and in general, it doesn’t apply to high-value data in business, finance, climate science, and even language. That leaves us with no handle to explain how an LLM might learn how to verifiably solve a general computable task.

With transduction, by contrast, one seeks to reason through past data to craft solutions to new problems. Transduction is not about applying past solutions in the hope that they generalize; rather, it is about being able to retrieve portions of memory that matter when reasoning through new solutions. In transduction, memorization is not a stigma but a value. Using the test data, along with memory, to craft a solution during transductive inference is not overfitting but adaptive, query-specific computation — i.e., reasoning.
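The contrast can be made concrete with a toy example (our illustration, not code from the paper): an inductive learner compresses the training set into a fixed rule and discards the data, while a transductive learner — here, a simple nearest-neighbor predictor — retains the data and performs query-specific computation at inference time.

```python
# Toy contrast between inductive and transductive inference.
# (Illustrative only; not an example from the paper.)

def train_inductive(data):
    """Induction: compress the data into a fixed rule, then discard it."""
    # Here the "rule" is an extreme summary: the mean label.
    mean = sum(y for _, y in data) / len(data)
    return lambda x: mean  # the same answer for every future query

def predict_transductive(data, x, k=3):
    """Transduction: keep the data and do query-specific computation."""
    # Memorization is a feature: retrieve the k training examples
    # nearest to x and reason (here: average) over just those.
    nearest = sorted(data, key=lambda d: abs(d[0] - x))[:k]
    return sum(y for _, y in nearest) / k

data = [(0, 0.0), (1, 1.0), (2, 4.0), (3, 9.0), (4, 16.0)]  # y = x**2
inductive = train_inductive(data)
print(inductive(3.5))                  # one-size-fits-all answer: 6.0
print(predict_transductive(data, 3.5))  # query-specific answer: (9 + 16 + 4) / 3
```

The inductive model answers instantly but identically everywhere; the transductive one spends per-query compute consulting memory and adapts its answer to the query — the system-1/system-2 distinction in miniature.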

Inductive generalization is the kind of behavior one is forced to adopt when pressed for time. Such automatic, reactive behavior is sometimes referred to as “system-1” in cognitive psychology. Transduction instead requires looking at all data and performing query-specific variable-length inference-time computation — chain-of-thought reasoning in an LLM, whose length depends on the complexity of the query. Such deliberative behavior is often referred to as “system-2” and is what we wish to foster through learning. In this sense, transductive learning is a particular form of meta-learning, or learning to reason.

In 1964, Ray Solomonoff described a universally optimal algorithm for solving any problem through transductive inference, if we assume that memory and time are unbounded: execute all programs through a Turing machine, then average the outcome of those that reproduce the observed data. That will give the universally optimal answer — but it will generally take forever. What if we want not just a universally optimal but a universally fast algorithm?

In 1973 — in the same paper where he introduced the notion of NP-completeness — Leonid Levin derived such an algorithm. Unfortunately, Levin’s so-called universal search is not viable in practice, nor does it help us understand LLMs; for one thing, it involves no learning. Nonetheless, Levin pointed to the critical importance of time in solving computational tasks. Later, in 1986, Solomonoff hinted at how learning can reduce that time.

In a new paper, we expand on these ideas and show how reducing inference time induces a trained model to operate transductively — i.e., to reason. In striving to reduce inference time, the model learns not just the statistical structure of the training data but also its algorithmic structure. It can then recombine algorithmic methods it’s learned in an infinite number of ways to address arbitrary new problems.

This insight has implications for how AI models are designed and trained. In particular, models should be designed to predict the marginal value of additional computation at inference time, and their training targets should include complexity costs, forcing them to minimize time during inference.

This approach to learning turns classical statistical learning theory on its head. In classical statistical learning theory, the great danger is overfitting, so the goal is to regularize the solution, i.e., to minimize the information that the trained model retains from past data (beyond what matters for reducing the training loss). With transductive inference, on the other hand, the goal is to maximize the information retained, as it may come in handy for solving future problems.

The inversion of scaling laws

LLMs’ performance gains in the past few years have come mostly from scaling: increasing the number of model parameters has improved accuracy on benchmark datasets. This has led many to speculate that further increasing the models’ parameter counts could usher in an age of “superintelligence”, where the cognitive capacities of AI models exceed those of their human creators.

In our paper, we argue the opposite: beyond a certain complexity, AI models enter what we call the savant regime, where learning becomes unnecessary, and better performance on the benchmarks comes with decreased “insight”. At the limit is the algorithm Solomonoff described in 1964, where any task can be solved by brute force.

If scale does not lead to intelligence, what does? We argue that the answer is time.

It’s an answer with some intuitive appeal. The concept of intelligence is fundamentally subjective and environment dependent. But while intelligence is hard to characterize, its absence is less so. Being unable to adapt to the speed of the environment is one among many behaviors that we call traits of non-intelligence (TONIs). TONIs are behaviors whose presence negates intelligence however one wishes to define it.

Many TONIs are timebound. Taking the same amount of (non-minimal) time and energy to solve repeated instances of the same task, to no better outcome, is a TONI. So is the inability to allocate resources commensurate to the goal, thus spending the same effort for a trivial task as for a complex one. Starting a task that is known to take longer than the lifetime of the universe to render any usable answer would be another TONI.

Given this intuition, how do we quantify the relationship between intelligence and time in AI models? The first step is to assess the amount of information contained in the models’ parameters; then we can see how it’s affected by the imposition of time constraints.

Algorithmic information

The standard way to measure information was proposed by Claude Shannon in a landmark 1948 paper that essentially created the field of information theory. Shannon defined the information content of a random variable as the entropy of its distribution. The more uncertainty about its value, the higher the information content.
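Shannon’s definition is a one-liner: the entropy of a distribution with probabilities p_i is H = −Σ p_i log₂ p_i, measured in bits. A minimal sketch:

```python
# Shannon entropy of a distribution, in bits: H = -sum(p_i * log2(p_i)).
import math

def entropy(probs):
    # Terms with p = 0 contribute nothing (the 0 * log 0 = 0 convention).
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))    # fair coin: 1.0 bit, maximal uncertainty
print(entropy([0.99, 0.01]))  # biased coin: ~0.08 bits, little uncertainty
print(entropy([0.25] * 4))    # uniform over 4 outcomes: 2.0 bits
```

The more evenly spread the distribution, the higher the entropy — and, on Shannon’s definition, the higher the information content of a draw from it.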

On this definition, however, a given data sample’s information content is not a property of the sample itself; it’s a property of the distribution it was drawn from. But for any given sample, there are infinitely many distributions from which it could have been drawn. If all you have is a sample — say, a string of ones and zeroes — how do you compute its information content?

In the 1960s, Solomonoff and, independently, Andrey Kolmogorov addressed this problem with an alternative notion of information — algorithmic information — which can be used to characterize the information content of arbitrary binary strings. For a given string, one can write a program that, when run on some computer, outputs that string. In fact, one can write infinitely many such programs and run each on many different computers.

The shortest possible program that, run through a universal Turing machine, outputs the specific datum is a property of that datum. That program is the algorithmic minimal sufficient statistic, and its length is the algorithmic information (Kolmogorov-Solomonoff complexity) of that datum.
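The shortest program is uncomputable in general, but any lossless compressor gives a computable upper bound: the compressed string plus a fixed decompressor is itself a program that outputs the datum. A sketch using zlib (our illustration; the sample strings are arbitrary):

```python
# Kolmogorov complexity K(x) is uncomputable, but any lossless compressor
# yields a computable UPPER BOUND: compressed(x) + a fixed decompressor
# is a program that outputs x.
import hashlib
import zlib

def K_upper(s: bytes) -> int:
    """Compressed length as an upper bound on algorithmic information."""
    return len(zlib.compress(s, 9))

regular = b"01" * 500  # 1,000 bytes with an obvious short description
# 1,024 pseudorandom-looking bytes (fixed, so the example is reproducible):
random_ish = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(32))

print(K_upper(regular))     # tiny: "repeat '01' 500 times" is a short program
print(K_upper(random_ish))  # ~1,024+: no structure for the compressor to exploit
```

A repetitive string has low algorithmic information — a short program regenerates it — while a string with no exploitable structure needs a program about as long as itself.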

In his 1948 paper, Shannon also defined a metric called mutual information, which quantifies the information that can be inferred about the value of one variable by observing a correlated variable. This concept, too, can be extended to algorithmic information theory: the algorithmic mutual information between two data strings measures how much shorter the program for generating one string will be if you have access to the other.
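The same compression trick gives a rough, computable estimate of algorithmic mutual information, via the identity I(x : y) ≈ K(x) + K(y) − K(x, y) (our illustration, with zlib again standing in for K):

```python
# Compression-based estimate of algorithmic mutual information:
# I(x : y) ~ K(x) + K(y) - K(x, y), with zlib as a crude stand-in for K.
import zlib

def C(s: bytes) -> int:
    return len(zlib.compress(s, 9))

def mutual_info(x: bytes, y: bytes) -> int:
    # How much shorter is a joint description than two separate ones?
    return C(x) + C(y) - C(x + y)

x = b"the quick brown fox jumps over the lazy dog " * 20
y = b"the quick brown fox naps under the lazy tree " * 20
z = b"0123456789abcdef" * 56  # unrelated content of similar length

print(mutual_info(x, y))  # large: knowing x makes y much cheaper to describe
print(mutual_info(x, z))  # near zero: x says little about z
```

Strings that share structure compress much better jointly than separately; unrelated strings gain almost nothing — which is exactly what algorithmic mutual information measures.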

Time is information

If we don’t know the distribution from which a model’s training data was drawn, and we don’t know whether the model’s future inputs will be drawn from the same distribution, how can we quantify the model’s future performance?

In our paper, we assume that most tasks can be solved by combining and transforming — in infinitely many possible ways — some ultimately finite, but a priori unknown, collection of methods. In that case, we can show that optimizing performance is a matter of maximizing the algorithmic mutual information between the model’s training data and future tasks.

Finding the shortest possible algorithm for generating a particular binary string is, however, an intractable problem (for all but the shortest strings). So computing the algorithmic mutual information between a model’s training data and future tasks is also intractable.

Nonetheless, in our paper, we prove that there is a fundamental relation between the speed with which a model can find a solution to a new task and the algorithmic mutual information between the solution and the training data. Specifically, we show that

log speed-up = I(h : D)

where h is the solution to the new task, D is the dataset the model was trained on, and I(h : D) is the algorithmic mutual information between the data and the solution.
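To put hypothetical numbers on this (assuming, as is standard in algorithmic information theory, a base-2 logarithm, so information is measured in bits): every additional bit of mutual information between the training data and the solution doubles the achievable speed-up.

```python
# log2(speed-up) = I(h : D): each bit of algorithmic mutual information
# between the training data D and the solution h doubles inference speed.
# The bit counts below are hypothetical, for illustration only.

def speedup(mutual_info_bits: int) -> int:
    return 2 ** mutual_info_bits

for bits in (0, 1, 10, 20):
    print(f"I(h : D) = {bits:2d} bits -> {speedup(bits):>9,}x faster")
```

At zero shared bits the model is back to brute-force search; at 20 bits it finds the solution about a million times faster.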

This means that, during training, minimizing the time the model takes to perform an inference task will maximize the algorithmic information encoded in its weights. Reducing inference time ensures that, even as models’ parameter counts increase, they won’t descend into the savant regime, where they solve problems through brute force, without any insight or learning.

The value of time

You may have noticed that the equation relating inference time to algorithmic information doesn’t specify any units of measure. That’s because even the value of “time” is subjective. A zebra drinking from a pond does not know a priori how long it will take to be spotted by a predator. If it lingers too long, it ends up prey; if it panics and leaves, it ends up dehydrated.

Intelligence is about time — but the value of “time” is subjective. A zebra drinking from a pond does not know a priori how long it will take to be spotted by a predator. If it lingers too long, it ends up prey; if it panics and leaves, it ends up dehydrated.

Similarly, for an AI model, there is no single cost of time to train for and correspondingly no unique scale beyond which LLMs enter the savant regime. For some tasks, such as scientific discovery, the time constant is centuries, while for others, such as algorithmic trading, it’s milliseconds. We expect agents to be able to adapt to their environment, in some cases spawning smaller specialized models for specific classes of tasks, and even then, to provide users (who are part of an agent’s environment) with controls to adjust the cost of time depending on the context and domain of application.

The cost of time is already (partially and implicitly) factored into the process of training LLMs. During pretraining, the cost of time is effectively set to a minimum value, as the model is scored on the output of a single forward pass through the training data. Fine-tuning the model for chain-of-thought reasoning requires annotated data, whose high cost imposes a bias toward shorter “ground truth” reasoning traces. Thus, LLMs already reflect the subjective cost of time to the annotators who assemble the training sets.

However, to enable the user to modulate resources at inference time, depending on the cost of the environment, models should be trained to predict the marginal value of one more step of computation relative to the expected final return. Furthermore, they need to be trained to condition on a target complexity, in order to learn how to provide an answer within a customer-specified cost or bound.
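One way to picture such a training target (a minimal sketch of the idea in the text — the function, its parameter names, and the cost weights are our hypothetical illustration, not the paper’s actual loss):

```python
# Sketch of a complexity-aware training objective: the task loss plus a
# charge for compute, with a stiffer penalty past a customer-specified
# budget. All names and weights here are hypothetical illustrations.

def complexity_aware_loss(task_loss: float, steps_used: int,
                          target_budget: int, time_cost: float = 0.01) -> float:
    """Total loss = task loss + cost of time spent + overrun penalty.

    `time_cost` encodes the subjective, environment-dependent value of
    time; `target_budget` is the bound the model learns to condition on.
    """
    overrun = max(0, steps_used - target_budget)
    return task_loss + time_cost * steps_used + 10 * time_cost * overrun

# The same answer found quickly scores better than found slowly:
print(complexity_aware_loss(0.1, steps_used=8, target_budget=32))
print(complexity_aware_loss(0.1, steps_used=64, target_budget=32))
```

Trained against such a target, the model is rewarded for short chains of thought that still clear the task, and it learns what one more step of computation is worth relative to the expected return.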

There are growing efforts to teach models the value of time, so they can adapt to the tasks at hand (with or without human supervision). These are certain to yield a better bang-to-buck ratio, but the theory predicts that, at some point, factoring in the cost of time will actually improve absolute performance in new tasks. For verifiable tasks, learning to reason comes from seeking the shortest chain of thought that yields a correct (verified) answer. Ultimately, imposing a cost on time should not impair reasoning performance.

A new paradigm for AI coding

Connecting these ideas to modern AI requires rethinking what computation means. LLMs are stochastic dynamical systems whose computational elements (context, weights, activations, chain of thought) do not resemble the “programs” in classical, minimalistic models of computation, such as universal Turing machines.

Yet LLMs are models of computation — maximalist models. They’re universal, like Turing machines, but in many ways they’re their antithesis, operating through entirely different mechanisms. It’s possible to “program” such stochastic dynamical systems using a two-level control strategy: high-level, open-loop, global planning and low-level, closed-loop feedback control.

That strategy can be realized with AI Functions, an open-source library released this week as part of Amazon’s Strands Labs, a GitHub repository for building AI agents. An existing programming language can be augmented with functions from the library. These are ordinary functions, in the syntax of the language, but their bodies are written in natural language instead of code, and they’re governed by pre- and post-conditions. These enable high-level, open-loop planning and verification, before a single line of code is written by AI, and they engender an automatic local feedback loop if the AI-generated code fails to clear all conditions. Minimizing time, which translates into cost, is at the core of the design and evaluation of the resulting agents.
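The shape of such a function can be sketched as follows. This is a conceptual illustration of the two-level control strategy described above, not the actual Strands AI Functions API; every name in it is hypothetical, and a stub stands in for the LLM.

```python
# Conceptual sketch of an "AI function": a natural-language body guarded
# by pre- and post-conditions, with a local feedback loop on failure.
# NOT the Strands AI Functions API; all names here are hypothetical.

def ai_function(body: str, pre, post, generate, max_retries: int = 3):
    """Build a callable whose implementation is produced by `generate`
    (a stand-in for an LLM) from the natural-language `body`."""
    def wrapper(*args):
        # High-level, open-loop check: reject the plan before any codegen.
        assert pre(*args), "precondition failed"
        feedback = ""
        for _ in range(max_retries):          # low-level closed-loop control
            impl = generate(body + feedback)  # the AI writes the code
            result = impl(*args)
            if post(result, *args):           # verification gate
                return result
            feedback = " (previous attempt failed the postcondition)"
        raise RuntimeError("no implementation cleared the postconditions")
    return wrapper

# Toy demo with a fake "LLM" that happens to return a correct implementation:
sort_desc = ai_function(
    body="sort the list in descending order",
    pre=lambda xs: isinstance(xs, list),
    post=lambda out, xs: out == sorted(xs, reverse=True),
    generate=lambda prompt: (lambda xs: sorted(xs, reverse=True)),
)
print(sort_desc([3, 1, 2]))  # [3, 2, 1]
```

The pre- and post-conditions do the planning and verification; the retry loop is the automatic local feedback that kicks in when AI-generated code fails to clear them — and every retry is a time cost the design seeks to minimize.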

Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. AWS Utility Computing (UC) provides product innovations — from foundational services such as Amazon’s Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2), to consistently released new product innovations that continue to set AWS’s services and features apart in the industry. As a member of the UC organization, you’ll support the development and management of Compute, Database, Storage, Internet of Things (Iot), Platform, and Productivity Apps services in AWS. Within AWS UC, Amazon Dedicated Cloud (ADC) roles engage with AWS customers who require specialized security solutions for their cloud services. Inclusive Team Culture Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Diverse Experiences AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Mentorship & Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance We value work-life harmony. 
Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Export Control Requirement: Due to applicable export control laws and regulations, candidates must be either a U.S. citizen or national, U.S. permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum, or be able to obtain a US export license. If you are unsure if you meet these requirements, please apply and Amazon will review your application for eligibility.
US, MA, N.reading
Amazon Industrial Robotics is seeking exceptional talent to help develop the next generation of advanced robotics systems that will transform automation at Amazon's scale. We're building revolutionary robotic systems that combine cutting-edge AI, sophisticated control systems, and advanced mechanical design to create adaptable automation solutions capable of working safely alongside humans in dynamic environments. This is a unique opportunity to shape the future of robotics and automation at an unprecedented scale, working with world-class teams pushing the boundaries of what's possible in robotic dexterous manipulation, locomotion, and human-robot interaction. We are seeking a talented Applied Scientist to join our advanced robotics team, focusing on developing and applying cutting-edge simulation methodologies for advanced robotics systems. This role centers on research and development of physics-based simulation techniques, sim-to-real transfer methods, and machine learning approaches that enable rapid development, testing, and validation of robotic systems operating in complex, real-world environments. 
Key job responsibilities - Advance physics-based simulation fidelity for contact-rich manipulation and locomotion - Design and build high-performance simulation tools integrated into a production robotics stack - Translate research ideas into robust, scalable software pipelines - Develop methods to quantify and reduce simulation-to-reality gaps across design, safety, and control - Architect scalable simulation solutions for rigid and deformable body dynamics - Build simulation pipelines optimized for large-scale reinforcement and policy learning - Establish frameworks for continuous simulation improvement using real-world deployment data - Collaborate with engineering, science, and safety teams on simulation requirements and validation About the team Our team is building a comprehensive simulation platform for advanced robotics development, combining locomotion and manipulation capabilities. We operate at the cutting edge of physics simulation, reinforcement learning, and sim-to-real transfer, collaborating with world-class robotics engineers, applied scientists, and mechanical designers in a fast-paced, innovation-driven environment. This role uniquely combines fundamental research with real-world deployment. You will pursue core research questions in physics-based simulation while seeing your work translated into production systems, validated on real hardware, and informed by deployment data. Working alongside Simulation Software Engineers, you will help transform research ideas into scalable, production-grade simulation capabilities that directly impact how robots are designed, trained, and deployed.