Quantum key distribution and authentication: Separating facts from myths

Key exchange protocols and authentication mechanisms solve distinct problems and must be integrated in a secure communication system.

Quantum key distribution (QKD) is a technology that leverages the laws of quantum physics to securely share secret information between distant communicating parties. With QKD, quantum-mechanical properties ensure that if anyone tries to tamper with the secret-sharing process, the communicating parties will know. Keys established through QKD can then be used in traditional symmetric encryption or with other cryptographic technologies to secure communications.
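As a simple illustration, the sketch below (Python, using the third-party cryptography package) shows how a symmetric key delivered by a QKD system might be used with AES-GCM to protect application data. The qkd_key variable is a random placeholder standing in for real QKD-derived key material.

```python
# A minimal sketch: encrypting application data with a 256-bit key that a
# QKD system (or any other key establishment mechanism) has delivered.
# The key material here is random placeholder bytes, not a real QKD key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

qkd_key = os.urandom(32)            # placeholder for a 256-bit QKD-derived key
nonce = os.urandom(12)              # AES-GCM requires a unique nonce per message
aesgcm = AESGCM(qkd_key)

plaintext = b"sensitive payload"
associated_data = b"channel-id=42"  # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data)
recovered = aesgcm.decrypt(nonce, ciphertext, associated_data)
assert recovered == plaintext
```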

“Record now, decrypt later” (RNDL) is a cybersecurity risk arising from advances in quantum computing. The term refers to the situation in which attackers record encrypted data today, even though they cannot decrypt it immediately. They store this data with the expectation that future quantum computers will be powerful enough to break the cryptographic algorithms currently securing it. Sensitive information such as financial records, healthcare data, or state secrets could be at risk, even years after it was transmitted.

Mitigating RNDL requires adopting quantum-resistant cryptographic methods, such as post-quantum cryptography (PQC) and/or quantum key distribution (QKD), to ensure confidentiality against future quantum advancements. AWS has invested in the migration to post-quantum cryptography to protect the confidentiality, integrity, and authenticity of customer data.

Quantum communication is important enough that in 2022, three of its pioneers won the Nobel Prize in physics. However, misconceptions about QKD’s role persist. One of them is that QKD lacks practical value because it “doesn’t solve the authentication problem”. This view can obscure the broad benefits that QKD brings to secure communications when integrated properly into existing systems.

QKD should be viewed as a complement to — rather than a replacement for — existing cybersecurity frameworks. Functionally, QKD solves the same problem as other key establishment protocols, including the well-known Diffie-Hellman (DH) method and the module-lattice-based key encapsulation mechanism (ML-KEM), recently standardized by NIST as FIPS 203 — but it does so in a fundamentally different way. Like those methods, QKD depends on strong authentication to defend against threats such as man-in-the-middle attacks, where an attacker poses as one of the communicating parties.


In short, key exchange protocols and authentication mechanisms are distinct security primitives that solve different problems and must be integrated in a secure communication system.

The challenge, then, is not to give QKD an authentication mechanism but to understand how it can be integrated with other established mechanisms to strengthen the overall security infrastructure. As quantum technologies continue to evolve, it’s important to shift the conversation from skepticism about authentication to consideration of how QKD can be thoughtfully and practically implemented to address today’s and tomorrow’s cybersecurity needs — such as the need to mitigate “record now, decrypt later” (RNDL) attacks (see sidebar).

Understanding the role of authentication in QKD

When discussing authentication in the context of QKD, we focus on the classical digital channel that the parties use to exchange information about their activities on the quantum channel. This isn’t about user authentication methods, such as logging in with passwords or biometrics, but rather about authenticating the communicating entities and the data exchanged. Entity authentication ensures that the parties are who they claim to be; data authentication guarantees that the information received is the same as what was sent by the claimed source. QKD protocols include a classical-communication component that uses both authentication methods to assure the overall security of the interaction.

Entity authentication

Entity authentication is the process by which one party (the "prover") asserts its identity, and another party (the "verifier") validates that assertion. This typically involves a registration step, in which the verifier obtains reliable identification information about the prover, as a prelude to any further authentication activity. The purpose of this step is to establish a “root of trust” or “trust anchor”, ensuring that the verifier has a trusted baseline for future authentications.


Several entity authentication methods are in common use, each based on a different type of trust anchor:

  • Public-key-infrastructure (PKI) authentication: In this method, a prover’s certificate is issued by a trusted certificate authority (CA). The verifier relies on this CA, or the root CA in a certificate chain, to establish trust. The certificate acts as the trust anchor that links the prover’s identity to its public key.
  • PGP-/GPG-based (web of trust) authentication: Here, trust is decentralized. A prover’s public key is trusted if it has been vouched for by one or more trusted third parties, such as a mutual acquaintance or a public-key directory. These third parties serve as the trust anchors.
  • Pre-shared-key-based (PSK) authentication: In this case, both the prover and the verifier share a secret key that was exchanged via an offline or other secure out-of-band method. The trust anchor is the method of securely sharing this key a priori, such as a secure courier or another trusted channel.

These trust anchors form the technical backbones of all authentication systems. However, all entity authentication methods are based on a fundamental assumption: the prover is either the only party that holds the critical secret data (e.g., the prover’s private key in PKI or PGP) or the only other party that shares the secret with the verifier (PSK). If this assumption is broken — e.g., the prover's private key is stolen or compromised, or the PSK is leaked — the entire authentication process can fail.
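To make the PSK case concrete, here is a deliberately simplified Python sketch of challenge-response entity authentication in which the pre-shared key is the trust anchor. The function names and message flow are illustrative, not drawn from any particular protocol.

```python
# Simplified PSK-based challenge-response entity authentication.
# The pre-shared key is the trust anchor: only a prover holding it can
# produce the expected response to a fresh challenge.
import hmac
import hashlib
import secrets

PSK = secrets.token_bytes(32)  # shared out of band in a real deployment

def prover_respond(psk: bytes, challenge: bytes) -> bytes:
    """Prover computes an HMAC over the verifier's challenge."""
    return hmac.new(psk, challenge, hashlib.sha256).digest()

def verifier_check(psk: bytes, challenge: bytes, response: bytes) -> bool:
    """Verifier recomputes the expected response and compares in constant time."""
    expected = hmac.new(psk, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)          # fresh per run, prevents replay
response = prover_respond(PSK, challenge)
assert verifier_check(PSK, challenge, response)
```

If the PSK leaks, anyone can produce a valid response, which is exactly the failure mode described above.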

Data authentication

Data authentication, also known as message authentication, ensures both the integrity and authenticity of the transmitted data. This means the data received by the verifier is exactly what the sender sent, and it came from a trusted source. As with entity authentication, the foundation of data authentication is the secure management of secret information shared by the communicating parties.


The most common approach to data authentication is symmetric cryptography, where both parties share a secret key. A keyed message authentication code (MAC), such as HMAC or GMAC, is used to compute a unique tag for the transmitted data. This tag allows the receiver to verify that the data hasn’t been altered during transit. The security of this method depends on the collision resistance of the chosen MAC algorithm — that is, the computational infeasibility of finding two or more plaintexts that could yield the same tag — and the confidentiality of the shared key. The authentication tag ensures data integrity, while the secret key guarantees the authenticity of the data origin.
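As a minimal illustration of the symmetric approach, the following Python sketch computes and verifies an HMAC-SHA256 tag (GMAC would play the same role via an authenticated cipher mode). The message contents are invented for the example.

```python
# Data authentication with a keyed MAC (HMAC-SHA256 here): the tag lets the
# receiver detect any modification of the message in transit.
import hmac
import hashlib
import secrets

shared_key = secrets.token_bytes(32)   # established out of band or via QKD

def tag(key: bytes, message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

message = b"sifted-basis indices: 3,7,12,20"
t = tag(shared_key, message)

# Receiver side: recompute the tag and compare in constant time.
assert hmac.compare_digest(t, tag(shared_key, message))

# A single changed character invalidates the tag.
tampered = b"sifted-basis indices: 3,7,12,21"
assert not hmac.compare_digest(t, tag(shared_key, tampered))
```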

An alternative method uses asymmetric cryptography with digital signatures. In this approach, the sender generates a signature using a private key and the data itself. The receiver, or anyone else, can verify the signature’s authenticity using the sender’s public key. This method provides data integrity through the signature algorithm, and it assures data origin authenticity as long as only the sender holds the private key. In this case, the public key serves as a verifiable link to the sender, ensuring that the signature is valid.
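Here is a corresponding sketch of the asymmetric approach, using Ed25519 signatures from the third-party cryptography package. The message is again invented, and in practice the public key would be bound to the sender’s identity through one of the entity authentication methods above.

```python
# Data authentication with a digital signature (Ed25519): anyone holding the
# public key can verify, but only the holder of the private key can sign.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"error-correction syndrome block #17"
signature = private_key.sign(message)

# Verification raises InvalidSignature if the message or signature is altered.
public_key.verify(signature, message)
try:
    public_key.verify(signature, b"tampered message")
except InvalidSignature:
    print("tampering detected")
```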

In both the symmetric and the asymmetric approaches, successful data authentication depends on effective entity authentication. Without knowing and trusting the identity of the sender, the verification of the data’s authenticity is compromised. Therefore, the strength of data authentication is closely tied to the integrity of the underlying entity authentication process.

Authentication in QKD

The first quantum cryptography protocol, known as BB84, was developed by Bennett and Brassard in 1984. It remains foundational to many modern QKD technologies, although notable advancements have been made since then.


QKD protocols are unique because they rely on the fundamental principles of quantum physics, which allow for “information-theoretic security.” This is distinct from the security provided by computational complexity. In the quantum model, any attempt to eavesdrop on the key exchange is detectable, providing a layer of security that classical cryptography cannot offer.
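For intuition, here is a toy, purely classical simulation of BB84’s sifting step in plain Python. It omits the eavesdropper, channel noise, error correction, and privacy amplification; its only purpose is to show how matching measurement bases yield a shared bit string, a sample of which would be compared over the authenticated classical channel to estimate the error rate and detect tampering.

```python
# Toy BB84 sifting (no noise, no eavesdropper): qubits are simulated classically.
# When Bob measures in the wrong basis, he gets a uniformly random bit.
import secrets

N = 64
alice_bits  = [secrets.randbelow(2) for _ in range(N)]
alice_bases = [secrets.randbelow(2) for _ in range(N)]   # 0 = rectilinear, 1 = diagonal
bob_bases   = [secrets.randbelow(2) for _ in range(N)]

bob_results = [
    bit if a_basis == b_basis else secrets.randbelow(2)
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)
]

# Sifting: over the authenticated classical channel, both sides announce their
# bases (not the bits) and keep only the positions where the bases agree.
sifted_alice = [b for b, a, c in zip(alice_bits, alice_bases, bob_bases) if a == c]
sifted_bob   = [b for b, a, c in zip(bob_results, alice_bases, bob_bases) if a == c]

# With no eavesdropper and no noise, the sifted keys match exactly; in practice,
# a sacrificial sample is compared to estimate the error rate and detect Eve.
assert sifted_alice == sifted_bob
print(f"sifted key length: {len(sifted_alice)} of {N}")
```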

QKD relies on an authenticated classical communication channel to ensure the integrity of the data exchanged between parties, but it does not depend on the confidentiality of that classical channel. (This is why RNDL is not an effective attack against QKD.) Authentication guarantees only that the entities establishing keys are legitimate, protecting against man-in-the-middle attacks.

Currently, several commercial QKD products are available, many of which implement the original BB84 protocol and its variants. These solutions offer secure key distribution in real-world applications, and they all pair with strong authentication processes to ensure the communication remains secure from start to finish. By integrating both technologies, organizations can build communication infrastructures capable of withstanding both classical and quantum threats.

Authentication in QKD bootstrap: A manageable issue

During the initial bootstrap phase of a QKD system, the authentic classical channel is established using traditional authentication methods based on PKI or PSK. As discussed earlier, all of these methods ultimately rely on the establishment of a trust anchor.


While confidentiality may need to be maintained for an extended period (sometimes decades), authentication is a real-time process. It verifies identity claims and checks data integrity in the moment. Compromising an authentication mechanism at some future point will not affect past verifications. Once an authentication process is successfully completed, the opportunity for an adversary to tamper with it has passed. That is, even if, in the future, a specific authentication mechanism used in QKD is broken by a new technology, QKD keys generated prior to that point are still safe to use, because no adversary can go back in time to compromise past QKD key generation.

This means that the reliance on traditional, non-QKD authentication methods presents an attack opportunity only during the bootstrap phase, which typically lasts just a few minutes. Given that this phase is so short compared with the overall life cycle of a QKD deployment, the potential risks posed by using traditional authentication mechanisms are relatively minor.

Authentication after QKD bootstrap: Not a new issue

Once the bootstrap phase is complete, the QKD devices will have securely established shared keys. These keys can then be used for PSK-based authentication in future communications. In essence, QKD systems can maintain the authenticated classical communication channel by utilizing a small portion of the very keys they generate, ensuring continued secure communication beyond the initial setup phase.
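Conceptually, the bookkeeping can be as simple as reserving a small slice of each delivered key batch to key the MAC that authenticates the next round of classical communication. The sketch below is our own illustration of that idea, not a description of any specific QKD product.

```python
# Illustration only: reserving part of each QKD key batch to authenticate the
# classical channel for the next round, and handing the rest to key consumers.
import secrets

AUTH_KEY_BYTES = 32  # e.g., one HMAC-SHA256 key per round

def split_batch(qkd_batch: bytes) -> tuple[bytes, bytes]:
    """Return (next_round_auth_key, key_material_for_consumers)."""
    if len(qkd_batch) <= AUTH_KEY_BYTES:
        raise ValueError("batch too small to reserve an authentication key")
    return qkd_batch[:AUTH_KEY_BYTES], qkd_batch[AUTH_KEY_BYTES:]

# Placeholder for a batch of key bits delivered by the QKD link.
batch = secrets.token_bytes(1024)
next_auth_key, consumer_keys = split_batch(batch)
print(len(next_auth_key), len(consumer_keys))   # 32, 992
```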

It is important to note that if one of the QKD devices is locally compromised for any reason, the entire system’s security could be at risk. However, this is not a vulnerability unique to QKD. Any cryptographic system faces similar challenges when the integrity of an endpoint is compromised, and in this respect, QKD is no more susceptible than any other cryptographic system.

Overcoming key challenges to QKD’s role in cybersecurity

So far, we have focused on dispelling the myths about QKD’s authentication requirements. Next, we discuss several other challenges to using QKD in practice.

Bridging the gap between QKD theory and implementation

While QKD protocols are theoretically secure, there remains a significant gap between theory and real-world implementations. Unlike traditional cryptographic methods, which rely on well-understood algorithms that can be thoroughly reviewed and certified, QKD systems depend on specialized hardware. This introduces complexity, as the process of reviewing and certifying QKD hardware is not yet mature.


In conventional cryptography, risks like side-channel attacks — which use runtime clues such as memory access patterns or data retrieval times to deduce secrets — are well understood and mitigated through certification processes. QKD systems are following a similar path. The European Telecommunications Standards Institute (ETSI) has made a significant move by introducing the Common Criteria Protection Profile for QKD, the first international effort to create a standardized certification framework for these systems. ISO/IEC has also published standards on security requirements and test and evaluation methods for QKD. These represent crucial steps in building the same level of trust that traditional cryptography enjoys.

Once the certification process is fully established, confidence in QKD’s hardware implementations will continue to grow, enabling the cybersecurity community to embrace QKD as a reliable, cutting-edge solution for secure communication. Until then, the focus remains on advancing the review and certification processes to ensure that these systems meet the highest security standards.

QKD deployment considerations

One of the key challenges in the practical deployment of QKD is securely transporting the keys generated by QKD devices to their intended users. While it’s accepted that QKD is a robust mechanism for distributing keys to the QKD devices themselves, it does not cover the secure delivery of keys from the QKD device to the end user (or key consumer).

[QKD deployment diagram] A schematic representation of two endpoints — site A and site B — that want to communicate securely. The top line represents the user traffic being protected, and the bottom lines are the channels required to establish secure communication. An important practical consideration is how to transmit a key between a QKD device and an end user within an endpoint.

This issue arises whether the QKD system is deployed within a large intranet or a small local-area network. In both cases, the keys must be transported over a non-QKD system. The standard deployment requirement is that the key delivery from the QKD system to the key consumer occurs “within the same secure site”, and the definition of a “secure site” is up to the system operator.
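For example, a key consumer inside the secure site might fetch key material from the local QKD key manager over a mutually authenticated TLS connection (ETSI GS QKD 014 standardizes one such REST-based key delivery interface). The Python sketch below is hypothetical: the endpoint URL, certificate file names, and response field are placeholders, not a real product API.

```python
# Hypothetical sketch: a key consumer fetching key material from the local
# QKD key manager over mutually authenticated TLS, inside the secure site.
# The URL, port, and response format are placeholders, not a real product API.
import requests  # third-party; pip install requests

KM_URL = "https://qkd-key-manager.local:8443/keys"   # placeholder endpoint

response = requests.get(
    KM_URL,
    cert=("consumer-cert.pem", "consumer-key.pem"),  # client cert for mutual TLS
    verify="secure-site-ca.pem",                     # local CA as trust anchor
    timeout=5,
)
response.raise_for_status()
key_material = bytes.fromhex(response.json()["key_hex"])  # placeholder field name
```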


The best practice is to make the boundary of the secure site as small as is practical. One extreme option is to remove the need for transporting keys over classical networks entirely, by putting the QKD device and the key user’s computing hardware in the same physical unit. This eliminates the need for traditional network protocols for key transport and realizes the full security benefits of QKD without external dependencies. Where that option is impractical, the secure site should cover only the local QKD system and the intended key consumers.

Conclusion

QKD-generated keys will remain secure even when quantum computers emerge, and communications using these keys are not vulnerable to RNDL attacks. For QKD to reach its full potential, however, the community must collaborate closely with the broader cybersecurity ecosystem, particularly in areas like cryptography and governance, risk, and compliance (GRC). By integrating the insights and frameworks established in these fields, QKD can overcome its current challenges in trust and implementation.

This collective effort is essential to ensure that QKD becomes a reliable and integral part of secure communication systems. As these collaborations deepen, QKD will be well-positioned to enhance existing security frameworks, paving the way for its adoption across industries and applications.
