Amazon’s quantum computing papers at QIP 2023

Research on “super-Grover” optimization, quantum algorithms for topological data analysis, and simulation of physical systems displays the range of Amazon’s interests in quantum computing.

At this year’s Quantum Information Processing Conference (QIP), members of Amazon Web Services' Quantum Technologies group are coauthors on three papers, which indicate the breadth of the group’s research interests.

In “Mind the gap: Achieving a super-Grover quantum speedup by jumping to the end”, Amazon research scientist Alexander Dalzell, Amazon quantum research scientist Nicola Pancotti, Earl Campbell of the University of Sheffield and Riverlane, and I present a quantum algorithm that improves on the efficiency of Grover’s algorithm, one of the few quantum algorithms to offer provable speedups relative to conventional algorithms. Although the improvement on Grover’s algorithm is small, it breaks a performance barrier that hadn’t previously been broken, and it points to a methodology that could enable still greater improvements.

In “A streamlined quantum algorithm for topological data analysis with exponentially fewer qubits”, Amazon research scientist Sam McArdle, Mario Berta of Aachen University, and András Gilyén of the Alfréd Rényi Institute of Mathematics in Budapest consider topological data analysis, a technique for analyzing big data. They present a new quantum algorithm for topological data analysis that, compared to the existing quantum algorithm, enables a quadratic speedup and an exponentially more efficient use of quantum memory.

For “Sparse random Hamiltonians are quantumly easy”, Chi-Fang (Anthony) Chen, a Caltech graduate student who was an Amazon intern when the work was done, won the conference's best-student-paper award. He's joined on the paper by Alex Dalzell and me, Mario Berta, and Caltech's Joel Tropp. The paper investigates the use of quantum computers to simulate physical properties of quantum systems. We prove that a particular model of physical systems — specifically, sparse, random Hamiltonians — can, with high probability, be efficiently simulated on a quantum computer.

Super-Grover quantum speedup

Grover’s algorithm is one of the few quantum algorithms known to provide speedups relative to classical computing. For instance, for the 3-SAT problem, which involves finding values for N variables that satisfy the constraints of an expression in formal logic, the running time of a brute-force classical algorithm is proportional to 2^N; the running time of Grover’s algorithm is proportional to 2^(N/2).
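To make that square-root scaling concrete, here is a minimal state-vector simulation of Grover search on a toy unstructured-search problem (illustrative only; the paper’s algorithm is different and targets optimization problems). For a search space of size 2^n, about (π/4)·2^(n/2) oracle calls suffice to find the marked item with high probability:

```python
import math
import numpy as np

def grover_search(n_qubits: int, marked: int) -> tuple[int, float]:
    """Simulate Grover's algorithm on a state vector of 2**n_qubits amplitudes.

    Returns (iterations used, probability of measuring `marked`).
    """
    N = 2 ** n_qubits
    state = np.full(N, 1 / math.sqrt(N))              # uniform superposition
    iters = math.floor((math.pi / 4) * math.sqrt(N))  # ~sqrt(N) oracle calls
    for _ in range(iters):
        state[marked] *= -1            # oracle: flip the sign of the marked item
        state = 2 * state.mean() - state  # diffusion: inversion about the mean
    return iters, float(state[marked] ** 2)

iters, p = grover_search(10, marked=123)  # search space of 1024 items
```

With 10 qubits, only about 25 iterations are needed, versus roughly 512 guesses on average for brute-force classical search.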

Adiabatic quantum computing is an approach to quantum computing in which a quantum system is prepared so that, in its lowest-energy state (the “ground state”), it encodes the solution to a relatively simple problem. Then, some parameter of the system — say, the strength of a magnetic field — is gradually changed, so that the system encodes a more complex problem. If the system stays in its ground state through those changes, it will end up encoding the solution to the complex problem.

As the parameter is changed, however, the gaps between the system’s ground state and its first excited states vary, sometimes becoming infinitesimally small. If the parameter changes too quickly, the system may leap into one of its excited states, ruining the computation.
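A toy two-level example shows how the gap varies with the parameter. Here we interpolate between a hypothetical “driver” Hamiltonian and a “problem” Hamiltonian via H(b) = (1 − b)·H0 + b·H1 (an illustration, not the model in the paper):

```python
import numpy as np

# Pauli matrices for a single two-level system
sx = np.array([[0.0, 1.0], [1.0, 0.0]])   # "driver" term
sz = np.array([[1.0, 0.0], [0.0, -1.0]])  # "problem" term

def gap(b: float) -> float:
    """Gap between the ground and first excited energies of
    H(b) = -(1 - b) * sx - b * sz."""
    evals = np.linalg.eigvalsh(-(1 - b) * sx - b * sz)
    return float(evals[1] - evals[0])

# The gap shrinks partway through the sweep and reopens at the ends.
gaps = [gap(b) for b in np.linspace(0.0, 1.0, 11)]
```

In this toy model the gap narrows mid-sweep but stays open; in the hard optimization instances the paper considers, the minimum gap can become exponentially small, which is what makes the slow adiabatic sweep expensive.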

In adiabatic quantum computing, as the parameters (b) of a quantum system change, the gap between the system’s ground energy and its first excited state may vary.

In “Mind the gap: Achieving a super-Grover quantum speedup by jumping to the end”, we show that for an important class of optimization problems, it’s possible to compute an initial jump in the parameter setting that runs no risk of kicking the system into a higher energy state. Then, a second jump takes the parameter all the way to its maximum value.

Most of the time this will fail, but every once in a while, it will work: the system will stay in its ground state, solving the problem. The larger the initial jump, the greater the increase in success rate.

An initial, risk-free jump in the quantum system’s parameter setting (b) decreases the chances that jumping to the final setting will kick the system into an excited energy state.

Our paper proves that the algorithm has a small but quantifiable advantage over Grover’s algorithm, and it reports a set of numerical experiments to determine the practicality of the approach. Those experiments suggest that the method, in fact, increases efficiency more than could be mathematically proven, although still too little to yield large practical benefits. The hope is that the method may lead to further improvements that could make a practical difference to quantum computers of the future.

Topological data analysis

Topology is a branch of mathematics that treats geometry at a high level of abstraction: in a topological description, any two objects with the same number of holes in them (say, a coffee cup and a donut) are equivalent.

Mapping big data to a topological object — or manifold — can enable analyses that are difficult at lower levels of abstraction. Because topological descriptions are invariant under continuous deformations of shape, for instance, they are robust against noise in the data.

Topological data analysis often involves the computation of persistent Betti numbers, which characterize the number of holes in the manifold, a property that can carry important implications about the underlying data. In “A streamlined quantum algorithm for topological data analysis with exponentially fewer qubits”, the authors propose a new quantum algorithm for computing persistent Betti numbers. It offers a quadratic speedup relative to classical algorithms and uses quantum memory exponentially more efficiently than existing quantum algorithms.

Connecting points in a data cloud produces closed surfaces (or “simplices”, such as the triangle ABC) that can be mapped to the surface of a topological object, such as a toroid (donut shape).

Data can be represented as points in a multidimensional space, and topological mapping can be thought of as drawing line segments between points in order to produce a surface, much the way animators create mesh outlines of 3-D objects. The maximum length of the lines defines the length scale of the mapping.

At short length scales, the data are mapped to a large number of triangles, tetrahedra, and their higher-dimensional analogues, which are known as simplices. As the length scale increases, simplices link up to form larger complexes, and holes in the resulting manifold gradually disappear. The persistent Betti number is the number of holes that persist across a range of longer length scales.
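The length-scale dependence can be illustrated classically with the simplest topological invariant, the number of connected components (Betti-0). This sketch joins points closer than a given scale, Vietoris–Rips style, using a union-find structure (a classical illustration, not the quantum algorithm in the paper):

```python
import numpy as np

def betti0(points: np.ndarray, scale: float) -> int:
    """Number of connected components (Betti-0) of the complex formed by
    joining every pair of points within distance `scale` (union-find)."""
    n = len(points)
    parent = list(range(n))

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= scale:
                parent[find(i)] = find(j)  # union the two components
    return len({find(i) for i in range(n)})

# Two well-separated pairs of points: the component count drops from
# 4 (isolated points) to 2 (two clusters) to 1 as the scale grows.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [5.1, 0.0]])
```

Features that survive over a wide range of scales — here, the two clusters at intermediate scales — are the “persistent” ones; the quantum algorithm targets the higher-dimensional analogue of this computation, counting persistent holes.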

The researchers’ chief insight is that, although the dimension of the representational space may be high, in most practical cases the dimension of the holes is much lower. The researchers define a set of boundary operators, which find the boundaries (e.g., the surfaces of 3-D shapes) of complexes (combinations of simplices) in the representational space. In turn, the boundary operators (or more precisely, their eigenvectors) provide a new geometric description of the space, in which regions of the space are classified as holes or not-holes.

Since the holes are typically low dimensional, so is the space, which enables the researchers to introduce an exponentially more compact mapping of simplices to qubits, dramatically reducing the spatial resources required for the algorithm.

Sparse random Hamiltonians

The range of problems on which quantum computing might enable useful speedups, compared to classical computing, is still unclear. But one area where quantum computing is likely to offer advantages is in the simulation of quantum systems, such as molecules. Such simulations could yield insights in biochemistry and materials science, among other things.

Often, in quantum simulation, we're interested in quantum systems' low-energy properties. But in general, it’s difficult to prove that a given quantum algorithm can prepare a quantum system in a low-energy state.

The energy of a quantum system is defined by its Hamiltonian, which can be represented as a matrix. In “Sparse random Hamiltonians are quantumly easy”, we show that for almost any Hamiltonian matrix that is sparse — meaning it has few nonzero entries — and random — meaning the locations of the nonzero entries are randomly assigned — it is possible to prepare a low-energy state.

Moreover, we show that the way to prepare such a state is simply to initialize the quantum memory that stores the model to a random state (known as preparing a maximally mixed state).
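A small numerical illustration of why this works: measuring the energy of the maximally mixed state returns each eigenvalue with equal probability, so the chance of landing deep in the low-energy part of the spectrum equals the fraction of eigenvalues there — a constant fraction for these ensembles, not an exponentially small one. (A dense Gaussian matrix stands in here for the paper’s sparse ensemble.)

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 512  # stand-in for a 2**n-dimensional Hilbert space

# Dense Gaussian stand-in for the Hamiltonian (the paper treats sparse
# ensembles); normalized so the spectrum settles on roughly [-2, 2].
A = rng.normal(size=(dim, dim))
H = (A + A.T) / np.sqrt(2 * dim)
evals = np.linalg.eigvalsh(H)

# The maximally mixed state assigns probability 1/dim to every
# eigenvector, so the probability of measuring an energy below -1.5
# is just the fraction of eigenvalues below that threshold.
p_low = float(np.mean(evals < -1.5))
```

Because `p_low` is a constant rather than exponentially small, repeating the prepare-and-measure experiment a modest number of times finds a low-energy state with high probability.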

The semicircular distribution of eigenvalues for a particular quantum system, the Pauli string ensemble.

The key to our proof is to generalize a well-known result for dense matrices — Wigner's semicircle distribution for Gaussian unitary ensembles (GUEs) — to sparse matrices. Computing the energy levels of a quantum system from its Hamiltonian involves calculating the eigenvalues of the Hamiltonian matrix, a standard operation in linear algebra. Wigner showed that the eigenvalues of random dense matrices form a semicircular distribution. That is, the possible eigenvalues of random matrices don’t trail off to infinity in a long tail; instead, the distribution has sharp demarcation points, with no eigenvalues above or below clearly defined thresholds.
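The sharp spectral edge is easy to check numerically in Wigner’s classical dense setting (the paper’s contribution is proving that sparse ensembles behave the same way):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500

# Wigner-style symmetric random matrix: i.i.d. Gaussian entries,
# symmetrized and normalized so the eigenvalue distribution converges
# to the semicircle law supported on [-2, 2].
A = rng.normal(size=(N, N))
H = (A + A.T) / np.sqrt(2 * N)
evals = np.linalg.eigvalsh(H)

# Sharp demarcation points: essentially no eigenvalues stray past the
# edges of the semicircle (small finite-size fluctuations aside).
edge_violations = float(np.mean(np.abs(evals) > 2.1))
```

Unlike a Gaussian histogram with long tails, the eigenvalue histogram here falls to zero abruptly near ±2.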

Dense Hamiltonians, however, are rare in nature. The Hamiltonians describing most of the physical systems that physicists and chemists care about are sparse. By showing that sparse Hamiltonians conform to the same semicircular distribution that dense Hamiltonians do, we prove that the number of experiments required to measure a low-energy state of a quantum simulation will not proliferate exponentially.

In the paper, we also show that any low-energy state must have non-negligible quantum circuit complexity, suggesting that it could not be computed efficiently by a classical computer — an argument for the necessity of using quantum computers to simulate quantum systems.

Research areas

Related content

US, CA, Pasadena
The Amazon Web Services (AWS) Center for Quantum Computing in Pasadena, CA, is looking to hire a Research Scientist specializing the design of microwave components for cryogenic environments. Working alongside other scientists and engineers, you will design and validate hardware performing microwave signal conditioning at cryogenic temperatures for AWS quantum processors. Candidates must have a background in both microwave theory and implementation. Working effectively within a cross-functional team environment is critical. The ideal candidate will have a proven track record of hardware development from requirements development to validation. Key job responsibilities Our scientists and engineers collaborate across diverse teams and projects to offer state of the art, cost effective solutions for the signal conditioning of AWS quantum processor systems at cryogenic temperatures. You’ll bring a passion for innovation, collaboration, and mentoring to: Solve layered technical problems across our cryogenic signal chain. Develop requirements with key system stakeholders, including quantum device, test and measurement, cryogenic hardware, and theory teams. Design, implement, test, deploy, and maintain innovative solutions that meet both performance and cost metrics. Research enabling technologies necessary for AWS to produce commercially viable quantum computers. A day in the life As you design and implement cryogenic microwave signal conditioning solutions, from requirements definition to deployment, you will also: Participate in requirements, design, and test reviews and communicate with internal stakeholders. Work cross-functionally to help drive decisions using your unique technical background and skill set. Refine and define standards and processes for operational excellence. Work in a high-paced, startup-like environment where you are provided the resources to innovate quickly. About the team AWS values diverse experiences. 
Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.
US, WA, Bellevue
The Routing and Planning organization supports all parcel and grocery delivery programs across Amazon Delivery. All these programs have different characteristics and require a large number of decision support systems to operate at scale. As part of Routing and Planning organization, you’ll partner closely with other scientists and engineers in a collegial environment with a clear path to business impact. We have an exciting portfolio of research areas including network optimization, routing, routing inputs, electric vehicles, delivery speed, capacity planning, geospatial planning and dispatch solutions for different last mile programs leveraging the latest OR, ML, and Generative AI methods, at a global scale. We are actively looking to hire senior scientists to lead one or more of these problem spaces. Successful candidates will have a deep knowledge of Operations Research and Machine Learning methods, experience in applying these methods to large-scale business problems, the ability to map models into production-worthy code in Python or Java, the communication skills necessary to explain complex technical approaches to a variety of stakeholders and customers, and the excitement to take iterative approaches to tackle big research challenges. Inclusive Team Culture Here at Amazon, we embrace our differences. We are committed to furthering our culture of inclusion. We have innovative benefit offerings, and host annual and ongoing learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences. Amazon’s culture of inclusion is reinforced within our 14 Leadership Principles, which reminds team members to seek diverse perspectives, learn and be curious, and earn trust. Mentorship & Career Growth We care about your career growth too. Whether your goals are to explore new technologies, take on bigger opportunities, or get to the next level, we'll help you get there. 
Our business is growing fast and our people will grow with it. Key job responsibilities • Invent and design new solutions for scientifically-complex problem areas and identify opportunities for invention in existing or new business initiatives. • Successfully deliver large or critical solutions to complex problems in the support of medium-to-large business goals. • Influence the design of scientifically-complex software solutions or systems, for which you personally write significant parts of the critical scientific novelty. • Apply mathematical optimization techniques and algorithms to design optimal or near optimal solution methodologies to be used by in-house decision support tools and software. • Research, prototype, simulate, and experiment with these models and participate in the production level deployment in Python or Java. • Make insightful contributions to teams's roadmaps, goals, priorities, and approach. • Actively engage with the internal and external scientific communities by publishing scientific articles and participating in research conferences.
US, WA, Seattle
The Creator team’s mission is to make Amazon the Earth’s most desired destination for commerce creators and their content. We own the Associates and Influencer programs and brands across Amazon to ensure a cohesive experience, expand creators’ opportunities to earn through innovation, and launch experiences that reinforce feelings of achievement for creators. Within Creators, our Shoppable Content team focuses on enriching the Amazon shopping experience with inspiring and engaging content like shoppable videos that guides customers’ purchasing decisions, building products and services that enable creators to publish and manage shoppable content posts internal teams to build innovative, content-first experiences for customers. We’re seeking a customer-obsessed, data driven leader to manage our Science and Analytics teams. You will lead a team of Applied Scientists, Economists, Data Scientists, Business Intelligence Engineers, and Data Engineers to develop innovative solutions that help us address creator needs and make creators more successful on Amazon. You will define the strategic vision for how to make science foundational to everything we do, including leading the development of data models and analysis tools to represent the ground truth about creator measurement and test results to facilitate important business decisions, both off and on Amazon. Domains include creator incrementality, compensation, acquisition, recommendations, life cycle, and content quality. You will work with multiple engineering, software, economics and business teams to specify requirements, define data collection, interpretation strategies, data pipelines and implement data analysis and reporting tools. Your focus will be in optimizing the analysis of test results to enable the efficient growth of the Creator channel. You should be able to operate with a high level of autonomy and possess strong communication skills, both written and verbal. 
You have a combination of strong data analytical skills and business acumen, and a demonstrable ability to operate at scale. You can influence up, down, and across and thrive in entrepreneurial environments. You will excel at hiring and retaining a team of top performers, set strategic direction, build bridges with stakeholders, and cultivate a culture of invention and collaboration. Key job responsibilities · Own and prioritize the science and BI roadmaps for the Creator channel, working with our agile developers to size, scope, and weigh the trade-offs across that roadmap and other areas of the business. · Lead a team of data scientists and engineers skilled at using a variety of techniques including classic analytic techniques as well Data Science and Machine Learning to address hard problems. · Deeply understand the creator, their needs, the business landscape, and backend technologies. · Partner with business and engineering stakeholders across multiple teams to gather data/analytics requirements, and provide clear guidance to the team on prioritization and execution. · Run frequent experiments, in partnership with business stakeholders, to inform our innovation plans. · Ensure high availability for data infrastructure and high data quality, partnering with upstream teams as required.
US, WA, Bellevue
The Worldwide Design Engineering (WWDE) organization delivers innovative, effective and efficient engineering solutions that continually improve our customers’ experience. WWDE optimizes designs throughout the entire Amazon value chain providing overall fulfillment solutions from order receipt to last mile delivery. We are seeking a Simulation Scientist to assist in designing and optimizing the fulfillment network concepts and process improvement solutions using discrete event simulations for our World Wide Design Engineering Team. Successful candidates will be visionary technical expert and natural self-starter who have the drive to apply simulation and optimization tools to solve complex flow and buffer challenges during the development of next generation fulfillment solutions. The Simulation Scientist is expected to deep dive into complex problems and drive relentlessly towards innovative solutions working with cross functional teams. Be comfortable interfacing and influencing various functional teams and individuals at all levels of the organization in order to be successful. Lead strategic modelling and simulation projects related to drive process design decisions. 
Responsibilities: - Lead the design, implementation, and delivery of the simulation data science solutions to perform system of systems discrete event simulations for significantly complex operational processes that have a long-term impact on a product, business, or function using FlexSim, Demo 3D, AnyLogic or any other Discrete Event Simulation (DES) software packages - Lead strategic modeling and simulation research projects to drive process design decisions - Be an exemplary practitioner in simulation science discipline to establish best practices and simplify problems to develop discrete event simulations faster with higher standards - Identify and tackle intrinsically hard process flow simulation problems (e.g., highly complex, ambiguous, undefined, with less existing structure, or having significant business risk or potential for significant impact - Deliver artifacts that set the standard in the organization for excellence, from process flow control algorithm design to validation to implementations to technical documents using simulations - Be a pragmatic problem solver by applying judgment and simulation experience to balance cross-organization trade-offs between competing interests and effectively influence, negotiate, and communicate with internal and external business partners, contractors and vendors for multiple simulation projects - Provide simulation data and measurements that influence the business strategy of an organization. Write effective white papers and artifacts while documenting your approach, simulation outcomes, recommendations, and arguments - Lead and actively participate in reviews of simulation research science solutions. 
You bring clarity to complexity, probe assumptions, illuminate pitfalls, and foster shared understanding within simulation data science discipline - Pay a significant role in the career development of others, actively mentoring and educating the larger simulation data science community on trends, technologies, and best practices - Use advanced statistical /simulation tools and develop codes (python or another object oriented language) for data analysis , simulation, and developing modeling algorithms - Lead and coordinate simulation efforts between internal teams and outside vendors to develop optimal solutions for the network, including equipment specification, material flow control logic, process design, and site layout - Deliver results according to project schedules and quality Key job responsibilities • You influence the scientific strategy across multiple teams in your business area. You support go/no-go decisions, build consensus, and assist leaders in making trade-offs. You proactively clarify ambiguous problems, scientific deficiencies, and where your team’s solutions may bottleneck innovation for other teams. A day in the life The dat-to-day activities include challenging and problem solving scenario with fun filled environment working with talented and friendly team members. The internal stakeholders are IDEAS team members, WWDE design vertical and Global robotics team members. The team solve problems related to critical Capital decision making related to Material handling equipment and technology design solutions. About the team World Wide Design EngineeringSimulation Team’s mission is to apply advanced simulation tools and techniques to drive process flow design, optimization, and improvement for the Amazon Fulfillment Network. 
Team develops flow and buffer system simulation, physics simulation, package dynamics simulation and emulation models for various Amazon network facilities, such as Fulfillment Centers (FC), Inbound Cross-Dock (IXD) locations, Sort Centers, Airhubs, Delivery Stations, and Air hubs/Gateways. These intricate simulation models serve as invaluable tools, effectively identifying process flow bottlenecks and optimizing throughput. We are open to hiring candidates to work out of one of the following locations: Bellevue, WA, USA
IN, KA, Bengaluru
Do you want to join an innovative team of scientists who use machine learning and statistical techniques to create state-of-the-art solutions for providing better value to Amazon’s customers? Do you want to build and deploy advanced algorithmic systems that help optimize millions of transactions every day? Are you excited by the prospect of analyzing and modeling terabytes of data to solve real world problems? Do you like to own end-to-end business problems/metrics and directly impact the profitability of the company? Do you like to innovate and simplify? If yes, then you may be a great fit to join the Machine Learning and Data Sciences team for India Consumer Businesses. If you have an entrepreneurial spirit, know how to deliver, love to work with data, are deeply technical, highly innovative and long for the opportunity to build solutions to challenging problems that directly impact the company's bottom-line, we want to talk to you. Major responsibilities - Use machine learning and analytical techniques to create scalable solutions for business problems - Analyze and extract relevant information from large amounts of Amazon’s historical business data to help automate and optimize key processes - Design, development, evaluate and deploy innovative and highly scalable models for predictive learning - Research and implement novel machine learning and statistical approaches - Work closely with software engineering teams to drive real-time model implementations and new feature creations - Work closely with business owners and operations staff to optimize various business operations - Establish scalable, efficient, automated processes for large scale data analyses, model development, model validation and model implementation - Mentor other scientists and engineers in the use of ML techniques ML-India
US, WA, Seattle
Amazon's Global Fixed Marketing Campaign Measurement & Optimization (CMO) team is looking for a senior economic expert in causal inference and applied ML to advance the economic measurement, accuracy validation and optimization methodologies of Amazon's global multi-billion dollar fixed marketing spend. This is a thought leadership position to help set the long-term vision, drive methods innovation, and influence cross-org methods alignment. This role is also an expert in modeling and measuring marketing and customer value with proven capacity to innovate, scale measurement, and mentor talent. This candidate will also work closely with senior Fixed Marketing tech, product, finance and business leadership to devise science roadmaps for innovation and simplification, and adoption of insights to influence important resource allocation, fixed marketing spend and prioritization decisions. Excellent communication skills (verbal and written) are required to ensure success of this collaboration. The candidate must be passionate about advancing science for business and customer impact. Key job responsibilities - Advance measurement, accuracy validation, and optimization methodology within Fixed Marketing. - Motivate and drive data generation to size. - Develop novel, innovative and scalable marketing measurement techniques and methodologies. - Enable product and tech development to scale science solutions and approaches. A day in the life - Propose and refine economic and scientific measurement, accuracy validation, and optimization methodology to improve Fixed Marketing models, outputs and business results - Brief global fixed marketing and retails executives about FM measurement and optimization approaches, providing options to address strategic priorities. - Collaborate with and influence the broader scientific methodology community. 
About the team CMO's vision is to maximizing long-term free cash flow by providing reliable, accurate and useful global fixed marketing measurement and decision support. The team measures and helps optimize the incremental impact of Amazon (Stores, AWS, Devices) fixed marketing investment across TV, Digital, Social, Radio, and many other channels globally. This is a fully self supported team composed of scientists, economists, engineers, and product/program leaders with S-Team visibility. We are open to hiring candidates to work out of one of the following locations: Irvine, CA, USA | San Francisco, CA, USA | Seattle, WA, USA | Sunnyvale, CA, USA
GB, Cambridge
Our team builds generative AI solutions that will produce some of the future’s most influential voices in media and art. We develop cutting-edge technologies with Amazon Studios, the provider of original content for Prime Video, with Amazon Game Studios and Alexa, the ground-breaking service that powers the audio for Echo. Do you want to be part of the team developing the future technology that impacts the customer experience of ground-breaking products? Then come join us and make history. We are looking for a passionate, talented, and inventive Applied Scientist with a background in Machine Learning to help build industry-leading Speech, Language, Audio and Video technology. As an Applied Scientist at Amazon you will work with talented peers to develop novel algorithms and generative AI models to drive the state of the art in audio (and vocal arts) generation. Position Responsibilities: * Participate in the design, development, evaluation, deployment and updating of data-driven models for digital vocal arts applications. * Participate in research activities including the application and evaluation and digital vocal and video arts techniques for novel applications. * Research and implement novel ML and statistical approaches to add value to the business. * Mentor junior engineers and scientists. We are open to hiring candidates to work out of one of the following locations: Cambridge, GBR
US, TX, Austin
The Workforce Solutions Analytics and Tech team is looking for a senior Applied Scientist who is interested in solving challenging optimization problems in the labor scheduling and operations efficiency space. We are actively looking to hire senior scientists to lead one or more of these problem spaces. Successful candidates will have a deep knowledge of Operations Research and Machine Learning methods, experience in applying these methods to large-scale business problems, the ability to map models into production-worthy code in Python or Java, the communication skills necessary to explain complex technical approaches to a variety of stakeholders and customers, and the excitement to take iterative approaches to tackle big research challenges. As a member of our team, you'll work on cutting-edge projects that directly impact over a million Amazon associates. This is a high-impact role with opportunities to designing and improving complex labor planning and cost optimization models. The successful candidate will be a self-starter comfortable with ambiguity, with strong attention to detail and outstanding ability in balancing technical leadership with strong business judgment to make the right decisions about model and method choices. Successful candidates must thrive in fast-paced environments, which encourage collaborative and creative problem solving, be able to measure and estimate risks, constructively critique peer research, and align research focuses with the Amazon's strategic needs. Key job responsibilities • Candidates will be responsible for developing solutions to better manage and optimize flexible labor capacity. The successful candidate should have solid research experience in one or more technical areas of Operations Research or Machine Learning. As a senior scientist, you will also help coach/mentor junior scientists on the team. 
• In this role, you will be a technical leader in applied-science research with significant scope, impact, and high visibility. You will lead science initiatives for strategic optimization and capacity planning. These initiatives require superior logical thinkers who can quickly approach large, ambiguous problems, turn high-level business requirements into mathematical models, identify the right solution approach, and contribute to the software development for production systems.
• Invent and design new solutions for scientifically complex problem areas and identify opportunities for invention in existing or new business initiatives.
• Successfully deliver large or critical solutions to complex problems in support of medium-to-large business goals.
• Apply mathematical optimization techniques and algorithms to design optimal or near-optimal solution methodologies for labor planning.
• Research, prototype, simulate, and experiment with these models and participate in their production-level deployment in Python or Java.

We are open to hiring candidates to work out of one of the following locations: Arlington, VA, USA | Austin, TX, USA | Bellevue, WA, USA | Nashville, TN, USA | Seattle, WA, USA | Tempe, AZ, USA
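As a purely illustrative sketch of the kind of labor-planning optimization described above (the shift structure, demand numbers, and costs here are hypothetical, not Amazon's actual models or data), a minimal shift-coverage linear program in Python might look like:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical example: choose how many associates start on each of 3 shifts
# so that demand in each time block is covered at minimum labor cost.
# cover[b, s] = 1 if shift s covers time block b.
cover = np.array([
    [1, 0, 0],   # block 0: covered by shift 0 only
    [1, 1, 0],   # block 1: covered by shifts 0 and 1
    [0, 1, 1],   # block 2: covered by shifts 1 and 2
    [0, 0, 1],   # block 3: covered by shift 2 only
])
demand = np.array([10, 18, 15, 7])   # associates required per block
cost = np.array([8.0, 8.0, 9.0])     # relative cost per associate on each shift

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so the coverage
# constraint "cover @ x >= demand" is rewritten as "-cover @ x <= -demand".
res = linprog(c=cost, A_ub=-cover, b_ub=-demand, bounds=[(0, None)] * 3)

# Round the LP relaxation up to whole associates (small tolerance guards
# against solver noise just above an integer).
staffing = np.ceil(res.x - 1e-9).astype(int)
print(res.status, staffing)
```

In practice a production labor-planning model would be far richer (integer variables, breaks, skills, fairness constraints) and would typically use a MIP solver rather than a plain LP, but the structure of "minimize cost subject to coverage" is the same.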
CA, BC, Vancouver
Do you want to be part of the team developing the future technology that impacts the customer experience of ground-breaking products? Then come join us and make history. We are looking for a passionate, talented, and inventive Applied Scientist with a background in AI, generative AI, machine learning, and NLP to help build LLM solutions for Amazon core shopping. Our team works on a variety of projects, including state-of-the-art generative AI, LLM fine-tuning, alignment, prompt engineering, and benchmarking solutions.

Key job responsibilities
As an Applied Scientist, you will be expected to work on state-of-the-art technologies that result in published papers; however, you will not only be theorizing about algorithms, you will also have the opportunity to implement them and see how they behave in the field. As a tech lead, you will also be expected to define the research direction and influence multiple teams to build solutions that improve the Amazon and Alexa customer experience. This is an incredible opportunity to validate your research on one of the most exciting Amazon AI products, where assumptions can be tested against real business scenarios and supported by an abundance of data.

We are open to hiring candidates to work out of one of the following locations: Vancouver, BC, CAN
US, WA, Seattle
At Amazon, a large portion of our business is driven by third-party Sellers who set their own prices. The Pricing science team is seeking a Sr. Applied Scientist to use statistical and machine learning techniques to design, evangelize, and implement state-of-the-art solutions for never-before-solved problems, helping Marketplace Sellers offer Customers great prices. This role will be a key member of an Advanced Analytics team supporting Pricing-related business challenges, based in Seattle, WA. The Sr. Applied Scientist will work closely with other research scientists, machine learning experts, and economists to design and run experiments, research new algorithms, and find new ways to improve Seller Pricing to optimize the Customer experience. The Applied Scientist will partner with technology and product leaders to solve business and technology problems, using scientific approaches to build new services that surprise and delight our customers. An Applied Scientist at Amazon applies scientific principles to support significant invention, develops code, and is deeply involved in bringing their algorithms to production. They also work on cross-disciplinary efforts with other scientists within Amazon.

The key strategic objectives for this role include:
- Understanding drivers, impacts, and key influences on Pricing dynamics.
- Optimizing Seller Pricing to improve the Customer experience.
- Driving actions at scale to provide low prices and increased selection for customers, using scientifically based methods and decision making.
- Helping to support production systems that take inputs from multiple models and make decisions in real time.
- Automating feedback loops for algorithms in production.
- Utilizing Amazon systems and tools to effectively work with terabytes of data.

You can also learn more about Amazon science here -