Amazon Scholar John Preskill on the AWS quantum computing effort

The noted physicist answers three questions about the challenges of quantum computing and why he’s excited to be part of a technology development project.

In June, Amazon Web Services (AWS) announced that John Preskill, the Richard P. Feynman Professor of Theoretical Physics at the California Institute of Technology, an advisor to the National Quantum Initiative, and one of the most respected researchers in the field of quantum information science, would be joining Amazon’s quantum computing research effort as an Amazon Scholar.

Quantum computing is an emerging technology with the potential to deliver large speedups — even exponential speedups — over classical computing on some computational problems.

John Preskill, the Richard P. Feynman Professor of Theoretical Physics at the California Institute of Technology and an Amazon Scholar
Credit: Caltech / Lance Hayashida

Where a bit in an ordinary computer can take on the values 0 or 1, a quantum bit, or qubit, can take on the values 0, 1, or, in a state known as superposition, a combination of the two. Quantum computing depends on preserving both superposition and entanglement, a fragile condition in which the qubits’ quantum states are dependent on each other.
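The contrast between bits and qubits can be made concrete with a little arithmetic. The sketch below (plain Python, no quantum libraries; the function and variable names are our own illustration, not any framework's API) represents a single qubit as a pair of amplitudes and computes the measurement probabilities given by the Born rule.

```python
import math

# A qubit state is a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# On measurement, the qubit yields 0 with probability |a|^2
# and 1 with probability |b|^2.
def measurement_probs(a, b):
    return abs(a) ** 2, abs(b) ** 2

# Classical bit values put all the probability on one outcome.
zero = (1, 0)
one = (0, 1)

# An equal superposition: measurement yields 0 or 1 with probability 1/2 each.
plus = (1 / math.sqrt(2), 1 / math.sqrt(2))

print(measurement_probs(*zero))  # all probability on outcome 0
print(measurement_probs(*plus))  # approximately (0.5, 0.5)
```

A classical bit is always one of the two endpoint states; a qubit can sit anywhere in between, and measuring it forces it into one endpoint or the other, which is the disturbance Preskill describes below.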

The goal of the AWS Center for Quantum Computing, on the Caltech campus, is to develop and build quantum computing technologies and deliver them onto the AWS cloud. At the center, Preskill will be joining his Caltech colleagues Oskar Painter and Fernando Brandao, the heads of AWS’s Quantum Hardware and Quantum Algorithms programs, respectively, and Gil Refael, the Taylor W. Lawrence Professor of Theoretical Physics at Caltech and, like Preskill, an Amazon Scholar.

Other Amazon Scholars contributing to the AWS quantum computing effort are Amir Safavi-Naeini, an assistant professor of applied physics at Stanford University, and Liang Jiang, a professor of molecular engineering at the University of Chicago.

Amazon Science asked Preskill three questions about the challenges of quantum computing and why he’s excited about AWS’s approach to meeting them.

Q: Why is quantum computing so hard?

What makes it so hard is we want our hardware to simultaneously satisfy a set of criteria that are nearly incompatible.

On the one hand, we need to keep the qubits almost perfectly isolated from the outside world. But not really, because we want to control the computation. Eventually, we’ve got to measure the qubits, and we've got to be able to tell them what to do. We're going to have to have some control circuitry that determines what actual algorithm we’re running.

So why is it so important to keep them isolated from the outside world? It's because a very fundamental difference between quantum information and ordinary information expressed in bits is that you can't observe a quantum state without disturbing it. This is a manifestation of the uncertainty principle of quantum mechanics. Whenever you acquire information about a quantum state, there's some unavoidable, uncontrollable disturbance of the state.

So in the computation, we don't want to look at the state until the very end, when we're going to read it out. But even if we're not looking at it ourselves, the environment is looking at it. If the environment is interacting with the quantum system that encodes the information that we're processing, then there's some leakage of information to the outside, and that means some disturbance of the quantum state that we're trying to process.

So really, we need to keep the quantum computer almost perfectly isolated from the outside world, or else it's going to fail. It's going to have errors. And that sounds ridiculously hard, because hardware is never going to be perfect. And that's where the idea of quantum error correction comes to the rescue.

The essence of the idea is that if you want to protect the quantum information, you have to store it in a very nonlocal way by means of what we call entanglement. Which is, of course, the origin of the quantum computer’s magic to begin with. A highly entangled state has the property that when you have the state shared among many parts of a system, you can look at the parts one at a time, and that doesn't reveal any of the information that is carried by the system, because it's really stored in these unusual nonlocal quantum correlations among the parts. And the environment interacts with the parts kind of locally, one at a time.

If we store the information in the form of this highly entangled state, the environment doesn't find out what the state is. And that's why we're able to protect it. And we've also figured out how to process information that's encoded in this very entangled, nonlocal way. That's how the idea of quantum error correction works. What makes it expensive is in order to get very good protection, we have to have the information shared among many qubits.

Q: Today’s error correction schemes can call for sharing the information of just one logical qubit — the one qubit actually involved in the quantum computation — across thousands of additional qubits. That sounds incredibly daunting, if your goal is to perform computations that involve dozens of logical qubits.

Well, that's why, as much as we can, we would like to incorporate the error resistance into the hardware itself rather than the software. The way we usually think about quantum error correction is that we’ve got these noisy qubits. That's not to disparage them or anything: they're the best qubits we've got in a particular platform. But they're not really good enough for scaling up to solving really hard problems. So the solution, which at least theoretically we know should work, is that we use a code. That is, the information that we want to protect is encoded in the collective state of many qubits instead of just the individual qubits.
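The simplest textbook illustration of this encoding idea is the three-bit repetition code. The sketch below is a classical caricature, a real quantum code corrects errors via syndrome measurements on entangled states rather than on bits, but it captures the key point Preskill makes: the parity checks locate an error without revealing the encoded logical value. All names here are our own illustration.

```python
# One logical bit b is encoded redundantly as three physical bits (b, b, b).
def encode(b):
    return [b, b, b]

# The parity checks s1 = c0 XOR c1 and s2 = c1 XOR c2 locate a single
# flipped bit WITHOUT revealing the logical value: the syndrome is the
# same whether the codeword encodes 0 or 1.
def syndrome(c):
    return (c[0] ^ c[1], c[1] ^ c[2])

# Map each syndrome to the position of the single flipped bit, if any.
SYNDROME_TO_FLIP = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(c):
    pos = SYNDROME_TO_FLIP[syndrome(c)]
    if pos is not None:
        c[pos] ^= 1  # undo the flip
    return c

def decode(c):
    return c[0]

codeword = encode(1)                   # [1, 1, 1]
codeword[2] ^= 1                       # the "environment" flips one bit
assert syndrome(codeword) == (0, 1)    # syndrome locates the error...
assert decode(correct(codeword)) == 1  # ...and correction recovers b
```

The quantum versions of such codes need far more machinery (they must also handle phase errors, and syndromes are extracted with ancilla qubits), which is one source of the large qubit overheads mentioned in the question above.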

But the alternative approach is to try to use error correction ideas in the design of the hardware itself. Can we use an encoding that has some kind of intrinsic noise resistance at the physical level?

The original idea for doing this came from one of my Caltech colleagues, Alexei Kitaev, and his idea was that you could just design a material that sort of has its own strong quantum entanglement. Now people call these topological materials; what's important about them is they're highly entangled. And so the information is spread out in this very nonlocal way, which makes it hard to read the information locally.

Making a topological material is something people are trying to do. I think the idea is still brilliant, and maybe in the end it will be a game-changing idea. But so far it's just been too hard to make the materials that have the right properties.

A better bet for now might be to do something in between. We want to have some protection at the hardware level, but not go as far as these topological materials. But if we can just make the error rate of the physical qubits lower, then we won't need so much overhead from the software protection on top.

Q: For a theorist like you, what’s the appeal of working on a project whose goal is to develop new technologies?

My training was in particle physics and cosmology, but in the mid-nineties, I got really excited because I heard about the possibility that if you could build a quantum computer, you could factor large numbers. As physicists, of course, we're interested in what is fundamentally different between classical systems and quantum systems. And I don't know a statement that more dramatically expresses the difference than saying that there are problems that are easy quantumly and hard classically.

The situation is we don't know much about what happens when a quantum system is very profoundly entangled, and the reason we don't know is because we can't simulate it on our computers. Our classical computers just can't do it. And that means that as theorists, we don't really have the tools to explain how those systems behave.

I have done a lot of work on these quantum error correcting codes. It was one of my main focuses for almost 15 years. There were a lot of issues of principle that I thought were important to address. Things like, What do you really need to know about noise for these things to work? This is still an important question, because we had to make some assumptions about the noise and the hardware to make progress.

I said the environment looks at the system locally, sort of one part at a time. That's actually an assumption. It's up to the environment to figure out how it wants to look at it. As physicists, we tend to think physics is kind of local, and things interact with other nearby things. But until we’re actually doing it in the lab, we won't really be sure how good that assumption is.

So this is the new frontier of the physical sciences, exploring these more and more complex systems of many particles interacting quantum mechanically, becoming highly entangled. Sometimes I call it the entanglement frontier. And I'm excited about what we can learn about physics by exploring that. I really think at AWS we are looking ahead to the big challenges. I'm pretty jazzed about this.

#403: Amazon Scholars

On November 2, 2020, John Preskill joined Simone Severini, the director of AWS Quantum Computing, for an interview with Simon Elisha, host of the Official AWS Podcast.
